{"text": "In recent years, peptides have received increased interest in pharmaceutical, food, cosmetics and various other fields. The high potency, specificity and good safety profile are the main strengths of bioactive peptides as new and promising therapies that may fill the gap between small molecules and protein drugs. Peptides possess favorable tissue penetration and the capability to engage into specific and high-affinity interactions with endogenous receptors. These positive attributes of peptides have driven research in evaluating peptides as versatile tools for drug discovery and delivery. In addition, among bioactive peptides, those released from food protein sources have acquired importance as active components in functional foods and nutraceuticals because they are known to possess regulatory functions that can lead to health benefits.International Journal of Molecular Sciences represents the second in a series dedicated to peptides. This issue includes thirty-six outstanding papers describing examples of the most recent advances in peptide research and its applicability. This Special Issue of Pseudomonas and Burkholderia strains clinically isolated from cystic fibrosis patients. These findings will open interesting perspectives to apoB cryptides applicability in the treatment of chronic lung infections associated with cystic fibrosis disease. The issue follows with research by Tarallo et al. [Staphylococcus aureus. The bactericidal and keratinocytes cytoprotective mechanisms against invading bacteria are also elucidated. Staphylococcus aureus and Pseudomonas aeruginosa, individually or in co-occurrence, are the two main pathogens implied in multiple bacterial infections. Since their discovery, the antimicrobial peptides (AMPs) of innate defense have been considered as a potential alternative to conventional antibiotics. However, no commercial AMPs are still available. The review of Ron\u010devi\u0107 et al. [Pseudomonas aeruginosa and Staphylococcus aureus, also hampering the proliferation of their single and dual-species biofilms. The study of Prasad et al. [The Special Issue begins with a group of papers exploring aspects of synthetic peptides that are of significance to develop novel drugs for controlling and/or managing chronic diseases. It begins with a study of Gaglione et al. on the io et al. on a newo et al. demonstro et al. investigo et al. investigo et al. study tho et al. demonstro et al. . The stuo et al. aiming to et al. screen a\u0107 et al. is aimed\u0107 et al. to exertd et al. reviews d et al. investigd et al. report td et al. synthesiFollowing, there is a short series of articles dealing with the elucidation of modes of action of known food-derived bioactive peptides. Fern\u00e1ndez-Tom\u00e9 et al. provide Miichthys miiuy) swim bladder against oxidative damage to human umbilical vein endothelial cells. Gomez et al. [Crassostrea angulata) proteins through in silico analyses and in vitro tests. C. angulata proteins were proven to be sources of angiotensin I-converting enzyme and dipeptidyl peptidase IV inhibitory peptides with pharmaceutical and nutraceutical applications. Using different commercial proteases, Ding et al. [Lactobacillus rhamnosus EBD1 by Daliri et al. [Another group of papers explores the potential of new proteins as sources of bioactive peptides. Cai et al. explore z et al. report og et al. produce g et al. describei et al. show anti et al. summarizChlorella sorokiniana proteins as source of bioactive peptides. 
The BIOPEP\u2019s profile shows that these proteins have multiple dipeptydil peptidase IV inhibitors, glucose uptake stimulants, antioxidant, regulating, anti-amnestic and anti-thrombotic peptides. Pepsin, bromelain and papain are the main proteases responsible for the release of bioactive peptides with pharmaceutical and nutraceutical potential. The review of Bozovi\u010dar and Bratkovic [The issue includes some studies on bioinformatic and proteomic tools useful for peptide research. Using molecular docking, Chamata et al. describeratkovic focuses Another group of papers explores the effects of endogenous peptides on body functions and their potential for new drug alternatives. In a glioma mouse model, Kucheryavykh et al. reveal bFinally, a couple of articles describe new developed techniques to investigate the response of immune system. Thus, Kametani et al. describehttps://www.mdpi.com/journal/ijms/special_issues/peptides_2020).We wish to thank the invited authors for their interesting and insightful contributions, and look forward to a new set of advances in the bioactive peptides field to be included in the following Special Issue \u201cPeptides for Health Benefits 2020\u201d ("} {"text": "The epidermal cells of flowers come in different shapes and have different functions, but how they evolved remains largely unknown. Floral micro\u2010texture can provide tactile cues to insects, and increases in surface roughness by means of conical (papillose) epidermal cells may facilitate flower handling by landing insect pollinators. Whether flower microstructure correlates with pollination system remains unknown.Here, we investigate the floral epidermal microstructure in 29 (congeneric) species pairs with contrasting pollination system. We test whether flowers pollinated by bees and/or flies feature more structured, rougher surfaces than flowers pollinated by non\u2010landing moths or birds and flowers that self\u2010pollinate.In contrast with earlier studies, we find no correlation between epidermal microstructure and pollination system. The shape, cell height and roughness of floral epidermal cells varies among species, but is not correlated with pollinators at large. Intriguingly, however, we find that the upper flower surface that surrounds the reproductive organs and often constitutes the floral display is markedly more structured than the lower surface.We thus conclude that conical epidermal cells probably play a role in plant reproduction other than providing grip or tactile cues, such as increasing hydrophobicity or enhancing the visual signal. The shape and size of floral epidermal cells differs between different flower sides, but is not associated with pollinator guild across different genera. Kay et al. et al. et al. et al. et al. 
Ranunculus and Ficaria (Ranunculaceae), contributing to their glossy appearance feature flat epidermal cells that are indeed slippery, causing prey to slide into the pitcher and be digested versus areas that will rarely be touched by pollinators, such as the lower side of the floral display.If conical epidermal cells provide grip to landing insects then, conversely, their absence could be an \u2018anti\u2010bee\u2019 adaptation will also not need to provide grip or tactile cues to landing insects, so surface structure may be one of the multiple (pollinator\u2010attracting) traits that degenerate in flowers of self\u2010pollinating plants than areas that are invisible/inaccessible to pollinators, and whether (ii) surfaces of bee\u2010 and fly\u2010pollinated flowers will be more structured than species that are pollinated by hawkmoths, birds or via self\u2010fertilisation. We found that there is no large\u2010scale correlation of flower surface and pollinator guild or mating system, but that the adaxial surfaces of flowers are markedly more structured than abaxial surfaces, hinting at abiotic and/or visual effects as main drivers for flower surface evolution.In this study, we investigate the evolution of flower surfaces by virtue of comparing closely related species with contrasting pollination or mating systems. We include 13 genera across nine angiosperm families, and in total compare the floral epidermal cell shape for 29 congeneric sister species pairs with vertically presented flowers that differ in pollination or mating system. We test whether (i) flower surfaces that are frequently touched and/or seen by insect pollinators are more structured with a 40\u00d7 objective. The surface structure of the whole floral organ was examined and the most dominant structure was photographed with a Nikon D70 camera. Epidermal cell shape dimensions (below) were measured in Fiji . To avoid observer bias, the flower surfaces were observed and measured by an observer (MK) who had no prior knowledge of the pollinator, i.e. the observer was \u2018blind\u2019 was examined regardless of its colour (pattern) using dental impression material as per van der Kooi et al. . Immediavia a \u2018roughness index\u2019, which is the ratio of the lateral, 3D surface area to the projected, geometric surface area of the cell\u2019s base hexagonal base, we measured the diameter of the cell\u2019s base, and for cells with a rectangular base, we measured cell length and width Fig. a, b. TheFor the exemplary cases shown in Fig. via a parametric bootstrap test, using the pbkrtest package in R . In the linear models, species was nested within genus as a random effect when testing the effect of flower side, and pair was nested within genus as random effect when testing the effect of pollination system. A likelihood ratio test (LRT) was used to see whether a model with the response variable (pollination system or flower side) fits better to the data than the same model without the response variable (1000 simulations). R script and data files are provided as Supporting Information.Statistical significance was tested P\u00a0=\u00a00.18), but the adaxial cell height is on average twice that of the abaxial side . As cell height does not adequately capture overall surface curvature, we also calculated a roughness index, which is the ratio of the cell\u2019s outer surface to that of the flat, projected cell surface area . 
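The mixed-model comparisons described above (nested random effects compared with a parametric bootstrap likelihood-ratio test via the pbkrtest package) could be sketched in R roughly as follows. This is only a minimal illustration with hypothetical column names (roughness, flower_side, pollination_system, genus, species, pair) in a data frame `cells`; it is not the authors' actual script, which is provided with their Supporting Information.

```r
# Minimal sketch of the analysis described above (hypothetical column names).
# Cell-shape metrics (here: roughness) are the response; flower side or
# pollination system is the predictor of interest.
library(lme4)
library(pbkrtest)

# Effect of flower side: species nested within genus as random effect
m_side_full <- lmer(roughness ~ flower_side + (1 | genus/species), data = cells, REML = FALSE)
m_side_null <- lmer(roughness ~ 1           + (1 | genus/species), data = cells, REML = FALSE)
PBmodcomp(m_side_full, m_side_null, nsim = 1000)  # parametric bootstrap LRT, 1000 simulations

# Effect of pollination system: pair nested within genus as random effect
m_poll_full <- lmer(roughness ~ pollination_system + (1 | genus/pair), data = cells, REML = FALSE)
m_poll_null <- lmer(roughness ~ 1                  + (1 | genus/pair), data = cells, REML = FALSE)
PBmodcomp(m_poll_full, m_poll_null, nsim = 1000)
```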
In ten out of the 12 studied species, the adaxial epidermal cells are higher than the abaxial cells; in one species it is approximately the same and in one there is a small decrease in height; the roughness shows a similar effect , cell height nor roughness index differs significantly between sister species. Also, when compared per pollinator guild, none are significantly different in epidermal cell height or roughness diverged sufficiently to develop phenotypic differences or because our method is inadequate to detect a biological signal. Indeed, we found up to three\u2010fold differences in epidermal cell surface, height and roughness index between sister species, albeit in opposite directions Fig. ; hence tet al. et al (et al. (et al. Our study does not invalidate previous experiments on tactile or grip effects of flower surfaces (Kevan & Lane l. et al reported (et al. compared (et al. , and wit. et al. .vice versa, reflecting the asymmetry in the direction of transitions between insect and bird pollination found globally (Thomson & Wilson It remains unknown how striations in the flower\u2019s cuticle contribute to the mechanical interaction with pollinators. In the studied species, the frequent lack of cuticle structure precluded analysis of whether cuticle differences, if any, are linked to specific pollinators. Deeper species sampling would help to elucidate the importance of phylogeny on flower surface evolution. Additionally, in the systems so far considered, bird pollination was derived from bee/fly pollination and not et al. et al. et al. et al. et al. et al. The marked differences between the adaxial and abaxial surfaces Fig. , combine(e.g. flavonoids and anthocyanins; Kay et al. et al. et al. et al. The asymmetry in epidermal cell shape between adaxial and abaxial flower sides may be linked with the pigmentary aspects of the flower\u2019s visual signal. Many species that have flowers with conical epidermal cells have pigments that occur in the epidermal cells only CJvdK designed the study, both authors collected the data, MK made the photographs and performed measurements, both authors performed the analyses and CJvdK wrote the manuscript. Both authors approved the final version of the manuscript.Table S1. P\u2010values for different comparisons. The p\u2010values obtained for the different sublevels were Bonferroni corrected for multiple testing.Data S1. Information on species\u2010pairs, data file and R script.Click here for additional data file."} {"text": "Familial HLH (FHL) is a fatal disorder and determining gene mutations is a good guide for predicting the prognosis and choosing treatment options. This study aimed to illustrate the clinical, laboratory characteristics, including perforin gene mutation screening, treatment and survival outcome of pediatric HLH patients.we conducted this cross-sectional study on pediatric patients who were diagnosed with HLH using the revised HLH-2004 criteria, from January 2014 to February 2019 at Zagazig University Children's Hospital, Egypt. We collected demographic, clinical and laboratory data and screened for the presence of mutations in perforin (PRF1) gene by polymerase chain reaction (PCR) amplification. We treated the patients according to HLH-2004 treatment protocol and documented their survival outcome.the total number of cases were 18; eight males and ten females, the age range was between three months and 12 years. Of the eight HLH-2004 diagnostic criteria, all patients met at least five criteria. 
We detected PRF1 gene mutations in 38.9% (7 patients), with nine previously unreported mutations. Sixteen patients (88.9%) received the HLH-2004 treatment protocol and the remaining two patients died before initiation of treatment. The overall mortality was 72.2% (13 patients). Our results increase the awareness of the clinical and laboratory characteristics of pediatric HLH patients and the prevalence of PRF1 gene mutations among those patients. HLH is an immunological disease characterized by hemophagocytosis of red blood cells (RBCs), platelets and neutrophils by histiocytes (activated macrophages), proliferation of T-cells and histiocytes in the spleen and bone marrow, and infiltration into body organs. The activated macrophages secrete large amounts of cytokines, which cause tissue damage and lead to organ failure. In patients with HLH, the diagnosis is based on the presence of at least five of the following criteria: fever, splenomegaly, cytopenias (affecting at least two of three lineages: hemoglobin <90g/L, platelets <100x10^9/L, absolute neutrophil count <1x10^9/L), hypertriglyceridemia and/or hypofibrinogenemia (fasting serum triglyceride \u22653mmol/L (\u2265265mg/dl), plasma fibrinogen <1.5g/L), serum ferritin >500ng/ml, sCD25 (soluble IL-2 receptor) \u22652400U/ml, NK cell activity decreased (below 5%) or absent, and hemophagocytosis. For the PRF1 gene, more than 120 different mutations have been detected: 101 missense/non-sense mutations and 21 deletion/insertion mutations. The aim of this study was to illustrate the clinical and laboratory characteristics, including perforin (PRF1) gene mutation screening, treatment and survival outcome of pediatric HLH patients. We conducted this cross-sectional study on pediatric patients diagnosed with HLH using the revised HLH-2004 criteria (all met at least five out of the eight diagnostic criteria), from January 2014 to February 2019 at Zagazig University Children's Hospital, Egypt. We screened our patients for the common viral infections triggering HLH, including EBV, CMV and HBV, using PCR techniques. Also, we searched for other causes of secondary HLH such as rheumatological diseases, malignancies, metabolic and autoimmune diseases in the patients\u00b4 records. We screened patients for the presence of mutations in the coding exons and exon-intron boundaries of the PRF1 gene by PCR amplification of genomic DNA followed by direct sequencing of the PCR products. Zagazig University institutional review board approved this study. Data were collected, extracted and analysed using SPSS version 20. Descriptive and analytical statistics were performed for all the studied variables. A number of statistical tests were used during the analysis, including Wilcoxon Mann-Whitney tests for comparisons of continuous data that were not normally distributed and chi-square tests for comparisons of categorical data when all expected counts were >5 (or Fisher\u00b4s exact tests if not). This study included 18 patients who met at least five out of the eight HLH-2004 diagnostic criteria. Age of the studied patients ranged from 3 months to 12 years with a median of 2.25 years. Eight patients were male and ten were female, with a 0.8:1.0 male-to-female ratio. The values of the laboratory characteristics of our patients are summarized in the table. As regards patients' outcome, 13 patients died and five survived; two patients died before starting treatment, six patients died during the induction period of the HLH-2004 treatment protocol (before eight weeks of treatment) and five patients died during the maintenance period of the treatment protocol (after eight weeks of treatment). Regarding patient survival, three patients with FHL2 survived after receiving successful hematopoietic stem cell transplantation (HSCT), and two patients with viral infections (one with EBV and the other with CMV) survived after completing the HLH protocol and supportive treatment, including antiviral therapy for CMV. 
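The authors performed these group comparisons in SPSS; purely as an illustration, equivalent tests might be sketched in R as below, using a hypothetical data frame `hlh` with placeholder columns (ferritin, sex, outcome) rather than the study's actual variables.

```r
# Illustrative R equivalents of the tests described above (the authors used SPSS v20).
# Continuous, non-normally distributed variables: Wilcoxon/Mann-Whitney test.
wilcox.test(ferritin ~ outcome, data = hlh)

# Categorical variables: chi-square test when all expected counts are >5,
# otherwise Fisher's exact test.
tab <- table(hlh$sex, hlh$outcome)
if (all(chisq.test(tab)$expected > 5)) chisq.test(tab) else fisher.test(tab)
```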
As regards outcome in patients with PRF1 gene mutation; four out of the eight patients died during induction period of treatment (three patients did not find HLA matched donors and one patient was on the waiting list for HSCT) and the remaining three patients are alive on the last follow up after receiving HSCT. There was a significant negative correlation between patients\u00b4 age and their outcome . On the other hand, there was non-significant relation between patients\u00b4 outcome and gender, presence of fever, splenomegaly, hepatomegaly or laboratory data as shown in et al. [et al. [et al. [et al. [et al. [et al. Xu et al. and Sasan et al. who stated that male to female ratio of 1.2: 1, 1.27: 1 and 1.83: 1, respectively [Nowadays, there is improved knowledge about the aetiology of HLH as a syndromic disorder with a unique pattern. However, persistent efforts are made to identify the genetic and the immunologic basis of this syndrome . Our stuet al. , Xu et a [et al. , Sasan e [et al. and Zhan [et al. . Accordi [et al. but diffectively -12.et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [According to the diagnostic criteria of HLH, we found that all of our studied patients had fever and hyperferritinemia which were a common finding in many reports ,15,16. F [et al. and Mukd [et al. and is m [et al. . About 8 [et al. , Leow et [et al. who repo [et al. despite [et al. but diff [et al. ,15,22-24 [et al. . Althoug [et al. . Meanwhi [et al. and Leow [et al. whose reConcerning the cause of HLH, we detected seven patients with FHL2 and six patients with positive viral infection either for EBV, CMV or HBV. HLH in these patients could be either EBV-driven HLH, secondary HLH or genetic forms of FHL other than FHL2 triggered by infection as none of patients with PRF1 gene mutation in our study were positive for viral infection. Viral infection is a common triggering factor to primary or secondary HLH . The reset al. [et al. [et al. [et al. [et al. [et al. Bin et al. Alzyoud et al. and Gregory et al. whose percentage of deaths were 31%, 26.7%, 19% and 21%, respectively [In our study, direct sequencing of PCR products spanning the two coding exons (exon 2 and 3) of the PRF1 gene revealed mutations in seven out of 18 patients tested 38.9%), which is similar to Zhang et al. and highet al. . Accordi [et al. , mutatio [et al. ,27,28. T%, which [et al. ,30. As r [et al. and diff [et al. and Kaya [et al. whose peectively ,21,27,33et al. [et al. [et al. [et al. [et al. [et al. [et al. found that severe neutropenia was significantly associated with increased early death in HLH [This results has also been different from those of Bin et al. whose st [et al. whose re [et al. and Kaya [et al. and diff [et al. and Sasa [et al. who stath in HLH . Our stuThe majority of HLH patients had fever, hepato-splenomegaly, cytopenia and hyperferritinemia. Hemophagocytosis could not be detected in bone marrow of all patients and its sensitivity as an early diagnostic criterion should be revised. PRF1 gene mutation is a leading cause for familial HLH and we detected novel mutations related to it. HSCT is life saving in patients with FHL2. 
Our findings increase the awareness of the clinical and laboratory presentation and the prevalence of PRF1 gene mutations among paediatric patients with HLH.The diagnosis of HLH can be done with the presence of at least five of the eight following diagnostic criteria: fever, splenomegaly, cytopenias, hypertriglyceridemia and/or hypofibrinogenemia hyperferritinemia, elevated sCD25, decreased or absent NK cell activity and hemophagocytosis;The PRF1 gene mutation causes FHL2.Hemophagocytosis could not be detected in bone marrow of all patients and its sensitivity as an early diagnostic criterion should be revised;We detected novel mutations related to PRF1 gene."} {"text": "After adjustment for baseline symptoms and psychiatric history the prospective associations between neuroticism and internalizing phenomena were reduced by half (d = 0.10\u20130.40), whereas the association with substance abuse and thought problems were not attenuated. Prospective associations were four times larger over short (<4 year) than long (\u22654 years) follow-up intervals, suggesting a substantial decay of the association with increasing time intervals. Adjusted effects were only slightly larger over short vs. long time intervals, however, which suggests that high neuroticism indexes a risk constellation that exists years prior to the development and onset of all measured mental disorders. Admittedly, such prospective associations do not rule out the spectrum and scar model\u2014see Ormel et al. (Finally, the review by Brandes and Tackett missed a recent meta-analysis of the prospective associations between neuroticism and psychopathology with 59 longitudinal/prospective studies and 444.313 participants . This mel et al. or Tackel et al. for elabl et al. ; GoldsteThe author confirms being the sole contributor of this work and has approved it for publication.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Biochar is the solid residue that is recovered after the thermal cracking of biomasses in an oxygen-free atmosphere. Biochar has been used for many years as a soil amendment and in general soil applications. Nonetheless, biochar is far more than a mere soil amendment. In this review, we report all the non-soil applications of biochar including environmental remediation, energy storage, composites, and catalyst production. We provide a general overview of the recent uses of biochar in material science, thus presenting this cheap and waste-derived material as a high value-added and carbonaceous source. Nowaday000 \u20ac/kg . In cont000 \u20ac/kg . High-te000 \u20ac/kg ,10,11. T000 \u20ac/kg ,13, chem000 \u20ac/kg . The con000 \u20ac/kg , mainly 000 \u20ac/kg ,18,19. C000 \u20ac/kg ,21,22 an000 \u20ac/kg . 
These t000 \u20ac/kg .In this review paper, we report a comprehensive overview of the non-soil applications of biochar to prove its feasibility as a replacement for traditional carbon materials and as a solid competitor with high tech materials.We summarize the recent literature in four main sections dedicated to (i) environmental remediation, (ii) energy storage, (iii) composite production, and (iv) other applications.We hope that this review is an useful tool to navigate the great sea of biochar potential.Biochar is produced through four main thermochemical routes: (i) torrefaction, (ii) pyrolysis, (iii) hydrothermal carbonization and (iv) gasification.Torrefaction is a low temperature thermal treatment that is used to densify biomasses for energy purposes . The opePyrolysis is a high temperature thermal treatment that breaks polymeric macromolecules, thus giving compounds that have a lower molecular weight in an oxygen free atmosphere ,32. PyroHydrothermal carbonization is a thermal depolymerization process that is used to convert wet biomass into crude-like oil, gas, and hydrochar under moderate temperature and high pressure by usingGasification is the conversion of biomass into a gaseous fuel by heating in a gasification medium such as air , oxygen,As shown in Hydrogen/carbon and oxygen/carbon ratio low values are characteristic of less defective carbonaceous structures and could be more appealing for electronic and electric applications, while the other samples could be more useful as additive and for adsorbitive processes ,52.Environmental pollution is a global menace, and its magnitude is increasing day by day due to urbanization, heavy industrialization, and the changing lifestyles of people. In view of this, providing clean air, water and environments for people is a challenging task. In particular, the overall demand of water for human activities and the amount of wastewater that is produced are continuously increasing worldwide year by year . WastewaWater pollution is a global problem that is threatening the entire biosphere and affecting millions of lives . Water pBiochar represents a game-changer material that is able to remove both inorganic and organic pollutants through adsorbitive and degradative processes. Furthermore, biochar could be successfully used for air purification by removing molecules such as carbon dioxide or hydrogen disulfide.Water pollution due to the presence of dissolved metal species has become a serious issue in a lot of underdeveloped ,64,65,6643\u2212, NH4+, As3+, Cd2+, Cr3+, Pb2+, Zn2+, and Cu2+) of wastewater treatment with more efficiency than activated carbon. Furthermore, biochar can be produced from so many feedstock sources that it guarantees a high versatility. Ar\u00e1n et al. [Carbonaceous materials play a relevant role in the detoxification of watery sources, and biochar represents a very affordable solution. Huggins et al. comparedn et al. proved tChromium is a widely diffused element in the earth\u2019s crust that has found a lot of applications. Consequently, chromium pollution has arisen as a serious environmental issue due to its abundant emissions from refractory materials, stainless steel production, and steel alloy production . ChromiuAnother promising approach taken by various researchers is based on redox methodology that converts Cr(VI) into Cr(III) after adsorption onto a carbonaceous structure . 
In one Adsorption onto neat biochar particles was also used to purify water streams from Cd(II), Pb(II), Cu(II), Zn(II), Sm(III) ,90,91,92Biochar surface modification plays a very relevant role on the adsorptive performances of biochar-based materials.3, H2O2 or KMnO4. This functionalization led to a 97.4 wt.% sorption of Pb(II) from the watery solution. Liatsou et al. [3, that anhydrides and carboxylic acids act as main surface groups to bind metal ions.One of the most established procedures to magnify functionalities on a biochar surface is partial oxidation during or after pyrolytic treatment. A low-oxygen pyrolysis atmosphere (1%\u20134%) was used by Zhang et al. with a su et al. showed, 3-decorated biochar. This study clearly showed that the main removal mechanism of bromate was due to the oxidation of hydroxyl groups while an Fe(III)/Fe(II) redox couple served as electron shuttle to facilitate the electron transfer.This assumption remains true for inorganic tailored biochar, as clearly showed by Feng et al. . The autBiochar iron decoration represents a very interesting approach to produce a performing and highly recoverable adsorbent material with complex interactions between iron and carbonaceous phases . Zhang eAnionic species that are dissolved in water represent another great family of watery pollutants; however, in this case, biochars still represent a valuable tool ,108,109.Phosphates are probably one of the principal causes of the eutrophication of surface waters ,112,113.Several biochar modifications have been used to tailor phosphorous uptake, ranging from electrochemical to inorgNitrate represents another anion species that is strongly correlated with eutrophication . DivbandBiochar destiny after adsorption represents another strong point for its use for water purification. As reported by several authors ,125, conThe desalinization process represents a further relevant application of water treatments that use biochar. This procedure has been performed through simple osmotic filtration and throWatery pollution, due to the presence of organic molecules, has risen together with the anthropization. The anthropogenic effect is the main cause of the release of pollutants such as dyes, pharmaceuticals, and polymers residues ,132. DurBiochar represents a cheaper solution compared to other carbonaceous materials with very promising performances ,137 due The great variety of interactions that occur between biochar and organic molecules ranges from very weak to very strong .The simultaneous occurrence of these interactions is the reason for the good performance of biochar as an adsorber for several typologies of compounds ,142,143.Firstly, biochar has been used to remove persistent small organic molecules such as aromatics. Jayawardhana et al. used bioLiquidambar formosana. The authors claimed an oil adsorption close to 99 wt.%.Al Ameri et al. used a p2O4-tailored biochar composite for the simultaneous removal of bisphenol A and sulfamethoxazole.Dyes represent the other great threat to water sanification due to their persistency and toxicity . The adsAnother rising issue in civilian water is the presence of traces of pharmaceuticals compounds due to their consumption ,163 and Adsorption processes are neither the only nor the most used route to eliminate organic pollutants from watery streams. Degradative processes based on the oxidation routes play a major role in this field, mainly through catalytic-mediated peroxide oxidation, as shown in 2O2 systems. Based on this study, He et al. 
[Fenton and Fenton-like processes are the more effective processes based on the activation of peroxides in mild conditions by using cheap metal precursors and moree et al. describeMyriophyllum aquaticum tailored with Fe3O4 were described by Fu et al. [Interestingly, Ho et al. producedu et al. . This cau et al. ,177. Furu et al. and ultru et al. ,180 indu2O4 for the activation of peroxymonosulfate.Gan et al. induced Fenton and Fenton-like routes are not the only available oxidative procedures. Moussavi et al. preparedAlternatively to oxidative degradation processes, reductive processes can also be performed, even if they are less appealing than the others due to their greater complexity. Some authors have described a reductive approach for the removal of nitro alkylated benzene based on the use of Fe(0)-tailored biochars ,184, but3O4-BiOBr was used by Li et al. [Photodegradative procedures have also been considered for watery streams purification . Shirvani et al. for carbi et al. describei et al. . Furtheri et al. .3 h, along with a fast response to shock loads. A very similar approach was described by Braghiroli et al. [2 that was generated from anthropogenic sources. Shao et al. [2 activation in producing a material with a higher adsorption capability and a higher regenerability compared with pristine biochar.Gas mixture purification is one of the most relevant industrial issues . Actualli et al. for the o et al. used act2 atmospheric emissions, which represent one of the greatest threat for climate change [2 for biochar activation is not the only route to mitigate emissions. The other and more appealing approach is represented by the use of biochar for the removal of CO2 from gaseous mixtures [2 adsorbers, reaching a gas uptake of up to 119 mg/g at 35 \u00b0C. Igalavithana et al. [2 uptake and a very good recyclability. Huang et al. [Leucaena wood, ultimately reaching a CO2 uptake of up to 53 mg/g. Chiag et al. [2 adsorption abilities with the surface microstructures and residual functionalities of carbonaceous materials. The effect of nitrogen residual functionalities was used by Zhang et al. [2 at a rate of 59 mg/g. Rice husk was also pyrolyzed under microwave irradiation [2 uptake values. Pyrolysis post-treatments were widely used to increase the CO2 adsorption ability of a biochar by introducing basic sites via ammonia functionalization processes [Furthermore, the use of carbon dioxide as a biochar activation reagent could contribute to the reduction of COe change ,201,202.mixtures . Liu et mixtures describea et al. recovereg et al. used a bg et al. explaineg et al. for the adiation and actiadiation , ultimatrocesses ,212.Energy storage technology represents a great challenge of 21st century due to iA battery is a system formed by at least two electrochemical cells with contacts to supply electrical energy according to electrochemical potential. Specialist literature has been focused on two main solid state battery systems based on lithium and sodi\u22126 up to 343 S/m when carbon content changed from 86.8 to 93.7 wt.%. This phenomenon was attributed to the formation of graphite nanocrystals in the main structure of the biochar during the high temperature treatment.The essential requirement for producing a performing supercapacitor material is an elevated surface area where the double ionic layer can be created. For this purpose, physically- and chemically-activated biochar is a very attractive material for the realization of supercapacitor electrodes . 
Chemica2.Chemical activation is a well-established procedure to create an activated biochar with a good capacitive performance. Luo et al. reported3 at 150 \u00b0C. Activated biochar showed a very high specific area of up to 2000 m2/g with a specific capacitance of up to 260 F/g.Jin et al. describe2/g, a capacitance of 314 F/g, and a remarkable stability after 105 cycles in a symmetrical cell. Fast pyrolysis and alkaline chemical activation was used by Chen et al. [A low ash content feedstock was used by Qu et al. for a din et al. for the n et al. and seven et al. ,233 haveSurface morphology plays a relevant role in biochar-based supercapacitors ,235,236.Activated biochar properties could be also modulated by using a plasma treatment, as described by Gupta et al. . The aut2/g and a capacitance of 229 F/g.Furthermore, non-lignocellulosic biomasses could be converted into usefully carbonaceous materials for capacitive uses. As an example, keratin-mixed algae was pyro2/g. This biochar was employed as an anode for an Li-ion battery and showed an impressive discharge capacity of up to 1169 mAh/g. Low porousity biochars have shown far lower performances, as reported by Luna-Lama et al. [2O4 to produce a flexible performing anode material. Similarly, Li et al. [3O4 nanoparticles, reaching a capacity of up to 635 mAh/g. Salimi et al. [3O4 nanoparticle-tailoring process with the pyrolysis of algae to produce an electrode material with a higher initial specific discharge capacity of up to 740 mAh/g and a good cyclic stability [Several authors have explored the use of biochars as anodic materials for the realization of performing batteries. Many authors have focused on the realization of lithium ion batteries due the great demand of highly technological devices based on them. Dai et al. produceda et al. , who usea et al. , who repa et al. reporteda et al. showed ta et al. as a cata et al. \u2014the autha et al. decoratei et al. tailoredi et al. combinedtability .Different ion-based batteries have also been developed, though in minor quantities; the only solid works about is from Saavedra Rios et al. , who use2 at 800 \u00b0C. A detailed study of direct carbon fuel cells was reported by Kacprzak et al. [2.Several authors ,254,255 k et al. ,259,260.\u22122. Elleuch et al. [2.In the same field, Ali et al. used tith et al. \u2014in simil2, with power cost of power output cost 17 $/W. This was 90% cheaper than graphene-based fuel cell electrodes, which have a cost of up to 402 $/W. Further improvements were achieved by using a manganese oxide-doped biochar, thus improving the power output by up to 606 mW/m2 [Another appealing use of biochar is the realization of electrodes for microbial fuel cells, as reported by Huggins et al. . The aut06 mW/m2 . Khudzar06 mW/m2 develope2. Similarly, Yuan et al. [2.Biochar was also used for the production of performing cathode electrodes. Li et al. producedn et al. used a s\u22129 m2/s. Comparing proton conductivity and power harvested per unit, the biochar-based membrane outperformed those based on materials such as Nafion.Apart from the electrodes, Chakraborty et al. developeNowadays, composite materials represent one of the largest global markets, with an expected future development of up to 131 billion dollars in 2024, as shown in Carbon-based composites represent one of the most relevant parts of global markets, with an annual production of about 150 kton/y in 2018 . 
As showCement production is a one of the largest productions in the world, with more than 3 Gton/y produced in 2018 . Along hCosentino et al. evaluateGupta et al. also proCement and concrete are not the only inorganic matrixes that have been used to host biochar. Mu et al. deeply dDahal et al. used bioCarbonaceous-reinforced thermoset plastics are widely diffuse materials that incorporate a plethora of different matrixes ,298,299.Pyrolytic temperature plays a crucial role in the interactions between epoxy resins and biochar particles. Giorcelli et al. used a mHigh temperature-treated biochar could be a solid choice for the production of conductive epoxy composites. Giorcelli et al. reportedFurthermore, biochar from pyrolyzed, wasted cotton fibers could be recovered in a carbon fiber shape that showed the property enhancement of an epoxy resin host matrix ,307.Regarding thermoplastic-reinforced plastics, polyolefins-based biochars are the most produced. Among them, biochar-containing polyethylene was studied by Arrigo et al. by usingPoly(propylene) is the other widely studied polyolefin for the realization of biochar-based composites. Das et al. proved tMiscanthus at 500 and 900 \u00b0C, and they showed the beneficial effect of the high temperature-produced biochar and the detrimental effect of the other. Sheng et al. [Furthermore, polyesters were impregnated with biochar in a study by Ogunsona et al. . The autg et al. modifiedRecently, biochar has been used for the production of biopolymer (i.e., starch and glutBiochar has found plenty of applications in all the field that are traditionally occupied by carbonaceous materials such as solid fuel ,327. The2 into glycerol carbonate.Vidal et al. developeAreeprasert et al. introduc2.Furthermore, biochar could be efficiently used in redox-mediated reactions . Cao et 2/NiOOH for the realization of non-enzymatic glucose electrode. Alternatively, Martins et al. [Biochar could also be used for the production of electrochemical measurement devices ,340. Zies et al. developeBiochar has also been used in biological procedures. Huang et al. pyrolyzeIn this review, we have presented an updated overview of non-soil applications of biochar with a focus on more useful and unusual ones. We reported many studies on the adsorbitive capacity of ions and organic molecules, together with their biochar electrochemical properties. These properties are particularly relevant in the future perspective of clean energy production and storage. We also described, in detail, the possibility of using biochars as sound replacements for traditional fillers in both inorganic and organic composites materials. This evidence has shown the feasibility of the biochars used in a lot of sectors as solid alternatives to traditional and next-generation materials. The polyhedral nature of biochar represents a very strong advantage for spread the biochar use across material science field.We hope that this summary of recent literature can lead to the foundation of new research which will exploit the great potential of biochar and biochar based materials."} {"text": "Dear Editor,et al.We were interested to read your recently published paper describing the need for storytelling and poetry for clinicians in the time of coronavirus (Barrett In the original paper, it was highlighted that clinicians and patients can use poetry and storytelling to \u2018support identification\u2019, \u2018to process experiences\u2019 and \u2018to promote coping\u2019. 
This is especially important during the intense and emotional pandemic period, when this narrative approach to practicing medicine can aid in building resilience for clinicians. However, we feel when applied to the context of medical student education, these qualitative mediums enable us to develop self-knowledge and to deepen the insight required as clinicians to have effective and compassionate patient communication.All the world\u2019s a stage,And all the men and women are merely players;\u2013 Shakespeare, As You Like It, Act II Scene 7For medical students, the COVID-19 pandemic has introduced new challenges for existing curricula and in gaining adequate clinical experiences. In a learning environment now dominated by virtual lectures, self-directed learning and a lack of patient contact, a narrative approach for learning medicine is more important than ever. We believe integrating narratives robustly into the curriculum is a priority; fostering these important skills early will help our future doctors become more reflective, conscientious and resilient in their practice.Narrative is fundamental to understanding human experience. In medicine, \u2018players\u2019 are regularly put in situations that challenge emotions and conceptions of self; often requiring difficult decisions. Clinicians\u2019 actions should balance the rational mechanisms of procedural medicine and the requirement to understand the meaning of our roles and interactions. Thus, as clinicians, it is fundamental to develop narrative competence Charon . Literatet al., explain that narrative competence can be sought and practiced using several approaches, including poetry, storytelling and literature in addition to Schwartz Rounds and Balint groups. We will analyse and discuss these techniques for medical students \u2013 for whom the utility and benefits likely differ from those of a clinician.Barrett et al.et al.et al.Literature and poetry are a valuable tool in developing medical students into self-realised, emotionally conscious clinicians. Existing programmes have improved communication skills and clinician-patient relationships Brown . Used inet al. et al.et al. As undergraduates, we were fortunate to experience Balint groups. These were a valuable forum to discuss and reflect upon emotionally challenging clinical situations, develop interpersonal skills and to view the patient beyond a clinical problem-solving exercise. Patient narratives at medical school can sadly exist merely as simplified \u2018cases\u2019 and patient presentations are only discussed at surface level. To develop meaningful doctor\u2013patient relationships, exploring patient narratives should be prioritised alongside learning communication and history taking skills. Recent research has highlighted the value of Balint groups for undergraduates, improving personal and professional development, stress management and reducing burnout (Yazdankhahfard et al.et al. et al.et al.et al.Schwartz Rounds foster a compassionate culture; through which clinicians feel patient-centred care is prioritised Goodrich . Incorpoet al. Balint groups and Schwartz Rounds can therefore be seen as effective for students to develop self-awareness, in addition to promoting personal growth and well-being. Whilst these techniques promote self-exploration in a safe setting, literature and poetry allow the first-hand experience of multiple and diverse voices, fostering creativity and acceptance of ambiguity. 
Medical schools should promote these values to prepare our future doctors to face the uncertainties and pressures of the clinical workplace (Novack et al., express the need for \u2018new frameworks of systemic support\u2019 in response to increasing clinician burnout. In response, we would like to highlight increasing burnout amongst medical students: a recent study found 29% of respondents were given a mental health diagnosis during medical school, and 85% could be classified as \u2018disengaged\u2019 and 85% \u2018exhausted\u2019 (Farrell et al.et al.et al.et al.Barrett et al., that learning narrative techniques at medical school will help students develop not only into effective, compassionate clinicians, but also into self-realised human beings.Thus, establishing narrative learning is paramount for medical students, this should be done not only through Balint groups and Schwartz Rounds, but more importantly through the cultivation of a culture of self-knowledge and lifelong learning. It is our view, motivated by Barrett"} {"text": "Early diagnosis and treatment of depression are associated with better prognosis. We used baseline data of the Canadian Longitudinal Study on Aging to examine differences in prevalence and predictors of undiagnosed depression (UD) between immigrants and non-immigrants at baseline and persistent and/or emerging depressive symptoms (DS) 18 months later. At this second time point, we also examined if a mental health care professional (MHCP) had been consulted.We excluded individuals with any prior mood disorder and/or current anti-depressive medication use at baseline. UD was defined as the Center for Epidemiological Studies Depression 10 score \u2a7e10. DS at 18 months were defined as Kessler 10 score \u2a7e19. The associations of interest were examined in multivariate logistic regression models.v. 65 (10.7) years in non-immigrants and 52.1% v. 57.1% were male. Among immigrants, 12.2% had UD at baseline of whom 34.2% had persistent DS 18 months later v. 10.6% and 31.4%, respectively, among non-immigrants. Female immigrants were more likely to have UD than female non-immigrants but no difference observed for men. The risk of persistent DS and consulting an MHCP at 18 months did not differ between immigrants and non-immigrants.Our study included 4382 immigrants and 18 620 non-immigrants. The mean age (standard deviation) in immigrants was 63 (10.3) years Female immigrants may particularly benefit from depression screening. Seeking mental health care in the context of DS should be encouraged. We also evaluated the association between immigration status and the presence of depressive symptoms (DS) at 18 months in those with and those without UD at baseline. In addition, we examined the association between immigration status and consulting a mental health care professional (MHCP) at 18 months among those with and those without DS at this time point and accounting for UD at baseline.et al., n\u00a0=\u00a030\u00a0097; face-to-face interviews at baseline and computer-assisted phone interview at 18 months), excluding those with any mood disorder in the last year, current anti-depressant use, and/or missing information on the outcomes and main exposure of interest as defined below .Between 2012 and 2015, for the baseline data of its Comprehensive cohort, the CLSA recruited and collected information from community-dwelling males and females ages 45\u201385 years. Details about the CLSA's sampling and design have been published elsewhere score \u2a7e10. 
The short form of CES-D, CES-D 10 was used in this study. This is a ten-item questionnaire with four possible choices for each question: all of the time, occasionally, some of the time, and rarely or never and personal health habits as suggested by Andersen's behavioural model for continuous variables and counts with percentages for categorical variables were computed by immigration status. Multivariate logistic regression models were used (1) to assess the associations between immigrant status and UD; (2) to examine the association between immigrant status and DS at 18 months in those depressed and those not depressed at baseline; and (3) to examine the association between immigrant status and consulting an MHCP at 18 months among those with and without DS at this time point. Immigration status, sex, age and province were included in all models, and all models adjusted for predisposing, enabling, needs-related and health-choice factors. In the model assessing the association between immigration status and UD, we examined the interaction effect between immigration status and other predisposing, enabling and needs factors. In the model assessing the association between immigration status and DS at 18 months, we examined the interaction effect between immigration status and UD at baseline and between UD at baseline and other predisposing, enabling and needs factors. Finally, in the model assessing the association between immigration status and MHCP at 18 months, we examined the interaction effect between immigration status and DS and between DS and UD at baseline. A significance level of 0.05 and the Bayesian information criterion were used to select the final models. To make the estimates generalisable to the Canadian population and address the complexity of the CLSA survey design, we used sample weights and geographic strata information provided by the CLSA in the descriptive analyses and regression analyses and over 75% had a household income above Can$ 50\u00a0000. Roughly, 85% had a post-secondary degree, over half were retired (55.9%) and 40.6% were employed. Most (65.7%) reported very good/excellent health. Hypertension (36.0%), diabetes (16.3%) and cancer (15.5%) were their most prevalent chronic diseases. One-third (32.8%) lived with pain and 7.8% had bowel disorders. Almost half consumed alcohol more than twice a week, 7.5% were current smokers, 68.6% were obese or overweight and almost half participated in a social activity involving sports or a physical exercise with others at least once a week (48.1%) . These were mostly from urban settings 87.7%; , White ( (48.1%) .Table 1v. non-immigrants) were more likely male, older, with post-secondary degree/diploma, to speak English most often at home (v. French), unemployed (v. employed), with lower incomes, residing in Quebec (v. other). Immigrants were less likely single, smokers, living in rural/suburban areas, with bowel disorders or cancer, and less likely overweight or obese (Nearly one-fifth (19.1%) of our study individuals had immigrated to Canada, the majority >20 years ago (87.5%) and only 1.3% had lived in Canada for <5 years. In multivariate logistic regression models, immigrants (or obese .v. employed) or had prior anxiety disorders were at higher risk of UD, while those who exercised at least once a week were at lower risk. Immigrants (but not non-immigrants) who consumed alcohol once a month (v. never) and those who were current smokers were at higher risk of UD. 
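The design-based logistic regression described in this statistical analysis might be sketched in R roughly as follows. The data frame `clsa` and the variable names (ud, immigrant, sex, age, province, weight, geostrata) are placeholders for illustration only, not the actual CLSA variable names.

```r
# Minimal sketch of a survey-weighted logistic regression for undiagnosed
# depression (UD); all names are placeholders, not CLSA variables.
library(survey)

des <- svydesign(ids = ~1, strata = ~geostrata, weights = ~weight, data = clsa)

# Immigrant-by-sex interaction, adjusted for age and province
fit <- svyglm(ud ~ immigrant * sex + age + province,
              design = des, family = quasibinomial())
summary(fit)
exp(coef(fit))   # odds ratios
```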
Immigrants who arrived in Canada at age >40 years were twice as likely as non-immigrants to have UD . As well, those who resided in Canada for <20 or >40 years were more likely than non-immigrants to have UD (online Supplementary Table B).Among immigrants, 12.2% had UD at baseline compared to 10.6% of non-immigrants . Risk fav. immigrant males and non-immigrant females v. non-immigrant males ] , but among females, immigrant status was associated with a 50% increased odd of UD . Female immigrant and female non-immigrant were more likely to be depressed than their male counterparts [immigrant females v. females without UD: OR 5.10, 95% CI 4.29\u20136.06) and for males , and the risk of UD was higher in females without UD v. males without UD, but similar in females with UD v. males with UD whether or not they had DS. Examining the interaction effect of DS at 18 months and UD at baseline revealed that the likelihood of consulting an MHCP among those with DS did not differ between those with and those without UD at baseline was over 20 years ago. Female immigrants were more likely to have UD than female non-immigrants, but no difference was observed in men. The risk of UD was higher in immigrants who arrived in Canada at age \u2a7e40 years and among those who resided in Canada for <20 or >40 years. Persistent DS at 18 months and seeking MHCP for these symptoms did not differ between immigrants and non-immigrants. Of note, only 17% of immigrants and 15% of non-immigrants with persistent DS (DS at 18 months and baseline UD) had consulted an MHCP in the previous month.et al., et al., et al., et al., et al., et al., As expected, immigrants in our study differed from non-immigrants on all mental health-predisposing, enabling, needs-related and personal health choices considered except for perceived health and alcohol consumption. Similar to other studies, immigrants were more likely to have post-secondary education and lower income (Dunn and Dyck, et al., v. non-immigrants is in line with the results of other studies that looked at the risk of depression in these groups (Wong and Tsang, et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., The risk of UD has not been previously assessed in Canadian immigrants. In a US study, UD was associated with psychosocial stressors including unemployment and relationship problems, but immigration status was not specifically examined (Williams et al., et al., et al., et al., In our study, immigrants who resided in Canada for <20 years and those who resided for >40 years were at increased risk of UD than the host population. Our findings support a \u2018U\u2019 shape association between UD and length of stay in the host country Beiser, . Immigraet al., et al., et al., et al., et al., et al., et al., Our results also showed an increased risk of UD in those who migrated at ages \u2a7e40 years. Contrary to our results, one US study reported a lower risk of psychiatric disorders onset in US Latino groups with older ages at arrival (Alegria et al., et al., et al., et al., In our study, immigrants were as likely as non-immigrants to have persistent DS at 18 months and to have consulted an MHCP for these symptoms in the past month. 
These results differ from those reported by other Canadian studies, which found immigrants to be less likely than their Canadian-born counterparts to seek out or be referred to mental health services when they experience comparable levels of distress (Fenta et al., among others). Strengths of our study include the use of the carefully designed, population-based CLSA database and the high quality of its data. Our study also has some limitations. Although we used the survey weights in our analyses, participation bias cannot be ruled out (Haine et al., among others). Future studies should further investigate the personal, cultural and social factors involved (Dixon-Woods). To the best of our knowledge, this is the first Canadian study to comprehensively assess associations between UD and immigration status. Screening for depression may particularly benefit female immigrants and those who migrated at 40 years of age or older. Systematic inquiry into patients' migration trajectory and subsequent follow-up on culturally appropriate indicators of health will allow clinicians to recognise problems in adaptation and undertake mental health promotion, disease prevention or treatment interventions in a timely way. Follow-up screening should query persistence of DS and encourage seeking mental health care regardless of immigration status."} {"text": "Trachemys scripta) that possibly contributes to their tremendous diving capacity. 
The purpose of the present immunohistochemical study was firstly to screen major groups of vertebrates for the presence of cardiac smooth muscle. Secondly, we investigated the phylogenetic distribution of cardiac smooth muscle within the turtle order (Testudines), including terrestrial and aquatic species. Atrial smooth muscle was not detected in a range of vertebrates, including Xenopus laevis, Alligator mississippiensis, and Caiman crocodilus, all of which have pronounced diving capacities. However, we confirmed earlier reports that traces of smooth muscle are found in human atrial tissue. Only within the turtles (eight species) was there substantial amounts of nonvascular smooth muscle in the heart. This amount was greatest in the atria, while the amount in proportion to cardiac muscle was greater in the sinus venosus than in other chambers. T. scripta had more smooth muscle in the sinus venosus and atria than the other turtles. In some specimens, there was some smooth muscle in the ventricle and the pulmonary vein. Our study demonstrates that cardiac smooth muscle likely appeared early in turtle evolution and has become extensive within the Emydidae family, possibly in association with diving. Across other tetrapod clades, cardiac smooth muscle might not associate with diving. Anat Rec, 303:1327\u20131336, 2020. \u00a9 2019 The Authors. The Anatomical Record published by Wiley Periodicals, Inc. on behalf of American Association for Anatomy.A prominent layer of smooth muscle lining the luminal side of the atria of freshwater turtles (Emydidae) was described more than a century ago. We recently demonstrated that this smooth muscle provides a previously unrecognized mechanism to change cardiac output in the emydid red\u2010eared slider ( Emys orbicularis) were observed in isolated atrial preparations from European pond turtles (s) Fano, . The tons) Fano, , but dess) Fano, , the sciE. orbicularis (formerly Emys europaea) ; very little detail is given by Pereira did notet al., et al., et al., Isl1 and Longnose gar (Lepisosteus osseus [n = 1])were obtained from commercial sources or donated from private collections and maintained at the Aarhus University . C. serpentina and Alligator mississippiensis hearts were obtained from animals maintained at the University of North Texas . Mouse (Mus musculus [n = 1]), and a bird, the lesser redpoll (Acanthis cabaret [N = 1]), sections were obtained from archived samples (body mass unknown) at the Amsterdam University Medical Center (UMC) . One caecilian (Idiocranium sp.) section was taken from unpublished data associated with an earlier study cardiac samples were provided from the Department of Pathology, Amsterdam UMC, AMC .The majority of the turtle species (\u22121) before the brain was destroyed. All experiments were performed in accordance with local animal care regulations.For the hearts used for immunohistochemistry, the animals were euthanized with an overdose of pentobarbital and stored in 70% ethanol. The hearts were then embedded in paraffin and cut into 10\u2010\u03bcm transverse or coronal sections. A standard immunohistochemistry protocol was followed as described elsewhere (Jensen P. subrufa (n = 2), C. mccordi (n = 2), P. sinensis (n = 1), C. senegalensis (n = 2), C. serpentina (n = 2), T. hermanii (n = 2), C. carbonaria (n = 3) and T. scripta (n = 6). For each heart, the % area of smooth muscle was averaged from three or four equidistant sections from across the atria, although in some C. mccordi (n = 1), P. sinensis (n = 1), and C. 
carbonaria (n = 2) only two representative sections could be used. Due to the low animal sample sizes available for most species, no statistical comparisons were made between species. A linear regression was performed to investigate the relationship between body mass and atrial smooth muscle coverage in T. scripta. To investigate whether there were chamber differences in the proportion of the detected SMA relative to all detected SMA and cTnI, we used the Plugin function \u201cRGB Measure\u201d of ImageJ after having delineated the sinus venosus, atria, or ventricle with the Freehand selections tool . The output value (mean) is a composite measure of the number of pixels that contain each color and the color intensity. We only used images that contained both sinus venosus and atria (N = 73). In 25 of the 73 images, there was also ventricular tissue. Differences between the sinus venosus and atria were tested with paired T\u2010test. We used the Pearson correlation test for significant relation between the proportion of SMA in the sinus venosus as compared to the atria and ventricle. Statistical analyses were performed in SPSS (IBM SPSS Statistics version 24) or GraphPad Prism (Version 8.0). Data are presented as means\u2009\u00b1\u2009SD.The number of pixels containing myocardium (red) and smooth muscle (green) in composite images were determined by splitting the red and green colors using the \u201cColor Threshold\u201d function of ImageJ , and then measuring the area on the split colors allowing us to calculate relative smooth muscle area as a percentage of the total muscle area (smooth and cardiac muscle). To maintain standardization, only transverse images of the atria were used for this quantification, thus the final sample sizes in this analysis were as follows: Alligators mississipinesis) Fig. A, spectaus) Fig. A, pink\u2010tii) Fig. B, longnous) Fig. D, mouse us) Fig. E, or leset) Fig. F. Only iis) Fig. C and canus) Fig. B did we p.) Fig. C. We conP = 0.271). In T. scripta there was a great amount of smooth muscle in the sinus venosus and atria Fig. A. In T. lae Fig. , althougT. scripta we established that there was no relationship between body mass and atrial smooth muscle area . However, the proportional amount of SMA was greater in T. scripta compared to the other turtles in the sinus venosus and in the atria but not in the ventricle .Given the body mass range we encountered within and between species, in 44) Fig. . Also, iT. scripta is dramatically reduced when the smooth muscle of the atria is induced to contract .The cardiac output of et al., et al., et al., et al., et al., et al., et al., Our results suggest that (nonvessel) cardiac smooth muscle appeared early in turtle evolution, possibly in the sinus venosus before other chambers, as it was observed, at least in small amounts, in representatives across the turtle phylogeny, including side\u2010necked turtles (Pleurodira) that diverged from other turtles over 150 million years ago in which it was first described and detailed , but these findings received little subsequent attention and were not independently verified or acetylcholine provideet al., et al., et al., et al., et al., per se, may not be unique to the turtle lineage.We did, nevertheless, verify that human atria contain traces of smooth muscle (Nagayo, et al., et al., et al., T. scripta (Fig. 
Extensions of left atrial myocardium that partially envelop the pulmonary veins, known as \u201cmyocardial sleeves,\u201d have been well described in mammals and birds (Nathan and Gloobe, pta Fig. B, it appIn summary, our comparative study indicates that atrial smooth muscle evolved early in the order of Testudines. The atrial smooth muscle is particularly scarce in terrestrial tortoises, but well developed in some aquatic species, which lends tentative support to our hypothesis that it may be involved in the regulation of cardiac output of turtles during diving. All of the turtles investigated exhibited both smooth muscle and cardiac muscle in the sinus venosus, which may also be able to contribute to the regulation of cardiac output. A mixture of smooth and cardiac muscle in the sinus venosus was also evident in the anuran amphibians and has previously been reported in fish (Yamauchi,"} {"text": "Borderline ovarian tumours (BOTs) are ovarian neoplasms characterised by epithelial proliferation, variable nuclear atypia and no evidence of destructive stromal invasion. BOTs account for approximately 15% of all epithelial ovarian cancers. Due to the fact that the majority of BOTs occur in women under 40 years of age, their surgical management often has to consider fertility-sparing approaches. The aim of this mini-review is to discuss the state of the art of fertility-sparing surgery for BOTs with a specific focus on the extent of surgery, post-operative management and fertility. Borderline ovarian tumours (BOTs) are a group of ovarian neoplasms described as \u2018semimalignant disease\u2019 for the first time by Taylor in 1929.BOTs account for approximately 15% of all epithelial ovarian cancers, with an annual prevalence of 1.8\u20134.8/100,000 \u20138. Additet al [et al [et al [p = 0.003), need for mini laparotomy for retrieving the specimens (p = 0.006) and global operative time (p < 0.001). Nevertheless, laparoscopy obtained less intraoperative blood loss (p = 0.007) and shorter operative time (p < 0.001).Surgery is the primary treatment for BOTs. Similarly to the management of malignant ovarian cancers, the extent of surgery depends on stage: for stage I disease, the therapeutic approach should include a surgical staging with total hysterectomy, bilateral salpingo-oophorectomy, omentectomy, peritoneal washing and multiple biopsies; appendix should be removed in case of mucinous histology , 13. In et al demonstrl [et al reportedl [et al analysedet al [et al [The increasing use of robotic surgery in gynaecology was focalised also on the management of early-stage ovarian cancer and BOTs. In fact, the robotic-assisted approach is useful and safe in patients with presumed early stage disease . In a reet al showed nl [et al in a retet al [et al [et al [Overall, the choice of surgical approach for BOTs should consider size of ovarian masses, presence and localisation of peritoneal implants, presence of bulky nodes, surgeon\u2019s skills and patient\u2019s individual characteristics , 22. Anoet al , no diffet al , 25. Howet al , 25. Foret al , 26. Thel [et al reportedl [et al . In a rel [et al analysedl [et al . Globalll [et al . In casel [et al .Because many patients are still fertile at the time of diagnosis, not having completed the desire of childbearing, fertility-sparing surgery is considered as a relevant option in the management of BOTs. In many series published, the overall survival for patients undergoing a fertility-sparing surgery is close to 100% \u201334.et al [et al [p < 0.0001). 
Overall, in cases of bilateral BOTs, USO+CC did not obtain an advantage compared to BC in terms of recurrence (26.1% versus 25.6); therefore, less destructive approaches in this setting may be considered in order to preserve patients\u2019 fertility. Although no differences were found between conservative and radical surgery in terms of overall survival, the authors concluded that the low mortality rate precludes pooling estimation for death in relation to the different types of fertility sparing surgery; moreover, the short-term follow-up times tend to limit the interpretation of survival analysis. In this meta-analysis, the only prospective randomised controlled trial was an Italian paper published by Palomba et al [p < 0.01); however, performing a regression analysis, the difference did not reach a statistical significance (p = 0.14); additionally, disease recurrence was not different between these groups.In woman with unilateral/bilateral ovarian involvement, fertility-sparing options include surgical procedures, such as unilateral cystectomy (US), unilateral salpingo-oophorectomy (USO), bilateral cystectomy (BC) or unilateral salpingo-oophorectomy plus contra-lateral cystectomy (USO+CC). In a French multi-centre study, including 313 patients with stage I BOTs, the recurrence rates after cystectomy, USO or BSO were 30.3%, 11% and 1.7%, respectively . In a reet al analysedet al . The autl [et al publisheba et al , which cObtaining a biopsy from a normal appearing contra-lateral ovary is not recommended in patients undergoing surgical management for BOTs because the risk of under-diagnosis of an occult malignancy tends to be very low . FurtherThe management of relapse of BOTs mostly depends on tumour location and histotype. In case of a relapse in remnant ovarian tissue, a second conservative surgery could represent a suitable treatment option in patients desiring to preserve fertility ; otherwiet al [et al [et al [p > 0.05); moreover, no differences in antral follicle count (AFC) were shown [It is difficult to exactly assess the impact of fertility-sparing treatments for BOTs on ovarian function and fertility. It has been reported that approximately 81% of women retain normal menstrual cycles after conservative surgery for BOTs . As in bet al the serul [et al also foul [et al among pare shown .et al [Pregnancy rates in women attempting to conceive after fertility-sparing surgery are very heterogeneous . In the et al did not et al .in vitro fertilisation may be required after fertility-sparing surgery in order to enhance the chance of conceiving. Potential associations between ovulation-inductor drugs and BOTs have been proposed by several authors: in 1994, Rossing et al [et al [Another controversial topic is the need to use assisted reproductive techniques for improving fertility outcomes. In selected patients, induction of ovulation and ng et al highlighng et al \u201356. A mel [et al showed al [et al .As general rule, fertility counselling should be mandatory in the management of BOTs among women aiming to spare fertility. Patients with diagnosis of BOTs should be referred to an oncofertility centre before surgery in order to assess their reproductive status and to plan subsequent operative management , 59.et al [et al [et al [Currently, there is no universally accepted standard-of-care regarding follow-up after surgery for BOTs. In a cohort of 39 women, Uzan et al reportedet al . Same evl [et al in a cohl [et al . 
As in tl [et al analysedThe main clinical factors associated with disease relapse are advanced age at diagnosis, preoperative elevation of serum levels of CA125, presence of invasive implants and micropapillary histology .et al. [p = 0.01); among the women with initial diagnosis of serous BOT, only 3 (11.5%) developed an invasive carcinoma; in contrast, among those with initial diagnosis of mucinous BOT, invasive carcinoma occurred in 9 women (52.9%).Rates of relapse described after fertility sparing surgery for BOTs greatly differ in the current literature. The evidence shows that serous BOTs recurred more frequently than mucinous BOTs, despite progressing to an invasive carcinoma only in a smaller percentage of cases. In a retrospective analysis by Uzan et al. , 191 of Recurrence of serous BOT in residual ovary almost always has a non-invasive histology; for this reason, it should be considered as a new primary BOT and could be potentially treated by a second fertility-sparing surgery; conversely, the vast majority of invasive recurrences are characterised by extra-ovarian involvement ,65. For et al [In a prospective observational study, Franchi et al used traet al .Recurrence rates in fertility-sparing surgery for BOTs are higher than after radical surgery; however, after the completion of desire of conception the surgical second look with removal of uterus and contra-lateral ovary remains debated. In fact, the published data does not report differences in terms of survival rates after completion of surgery for BOTs. For serous BOTs many authors suggest expectant management and performance of radical surgery only in cases of disease recurrence , 69. ForFertility sparing surgery for BOTs is feasible and does not seem to negatively influence patients\u2019 long-term survival, although higher disease recurrence rates have been reported. The extent of surgery should be individualised based on patient characteristics, tumour stage and histology. Women with a diagnosis of BOT should be referred to an oncofertility centre prior to performing fertility-sparing surgery in order to assess reproductive status and to plan future post treatment pregnancies. In these patients, a routine follow-up evaluation should be done, including clinical examination, ultrasound and dosage of serum tumour markers. Surgical management of relapse depends on disease localisation and histology.The authors have not conflict of interest to declare.MMAliterature review, manuscript writingFBmanuscript revisionMVMliterature reviewSSdata analysisMMOdata analysisSFmanuscript revisionSCsupervisionThis paper has not been funded."} {"text": "The glyconanoparticle (GlycoNP) has multiple effects and has important applications in drug delivery and bioimaging. It not only has the advantages of nano drug delivery system but also utilizes the characteristics of multivalent interaction of sugar, which greatly improves the targeting of drug delivery. Herein, the application of GlycoNP in drug delivery was analyzed and discussed, the solution to its problem was proposed, and its prospects were forecasted. In addiNanomaterials as the carriers of carbohydrates have been gradually developed since the first synthesis of carbohydrate-functionalized gold nanoparticles in 2001 that wrapped carbohydrate molecules was reported, they could specifically bind to cell surface receptors in certain tissues and organs. Certain liver cells had asialoglycoprotein receptor on their surfaces. 
It could specifically bind to QDs with galactose residues lactose-QDs conjugate with more flexible sugar ligands.Moreover, Yang et\u00a0al. preparedin situ evaluation of cell surface sialic acid (SA) groups by combining the multiplex sandwich binding of the 3-aminophenylboronic acid functionalized QD (APBA-QD) probes to SA groups on living cells, glyconanoparticles, and the sensitive fluorescence detection of metal-responsive dye . GNP-LLO91\u201399 exhibited dual antitumour activities. It was proposed this adjuvant nanotherapy for preventing the progression of the first stages of melanoma.Dendritic cell-based (DC-based) vaccines are promising immunotherapies for cancer. However, the lack of efficient targeted delivery and the sources and types of DCs has limited the efficacy of DCs and their clinical potential. Calderon-Gonzalez et\u00a0al. proposedIn vivo pulmonary delivered siRNA GNPs were capable of targeting c-Myc gene expression via in vivo RNAi in tumor tissue, which led to an \u223c80% reduction in tumor size without associated inflammation are full of promise in areas like biomedicine, biotechnology and materials science because of their amazing physical, chemical and biological properties. The human galectin-3 is well-known to be overexpressed in several human tumors and can act as a biorecognizable target. Ayka\u00e7 et\u00a0al. preparedQian et\u00a0al. reportedy antigens, T-cell helper peptides, and glucose in well-defined average proportions and with different density were synthesized in a one-step procedure and studied their uptake by DC-SIGN expressing Burkitt lymphoma cells (Raji DC-SIGN cell line) and monocyte-derived immature dendritic cells.Arn\u00e1iz et\u00a0al. preparedNagahori et\u00a0al. communicThe Thomsen\u2013Friedenreich antigen-containing glycopeptide thiols based on a mucin peptide repeating unit were prepared. These novel multivalent tools should prove extremely useful in exploring the binding properties and immune response to this important carbohydrate antigen (Sundgren & Barchi, Wang et\u00a0al. reportedMicrociona prolifera as potential tools to explore carbohydrate-mediated cell recognition.Carvalho et\u00a0al. studied 2.3.Activation of the endothelium is a pivotal first step for leukocyte migration into the diseased brain. So, imaging this activation process is highly desirable. It indicated that the targeted carbohydrate-functionalized magnetic nanoparticles accumulated in the brain vasculature following acute administration into a clinically relevant animal model of stroke (Farr et\u00a0al., Dual-modal fluorescent magnetic glyconanoparticles are powerful in probing lectins displayed on pathogenic and mammalian cell surfaces. It indicated that glyconanoparticles were useful tools to enrich lectin expressing cells because of their magnetic properties. The dual-modal glyconanoparticles were biocompatible and that they could be employed in lectin-associated biological studies and biomedical applications (Park et\u00a0al., Magnetic glyconanoparticles were synthesized by the co-precipitation method, and they were formed in a simple and direct process (Kekkonen et\u00a0al., El-Boubbou et\u00a0al. demonstrde la Fuente et\u00a0al. reportedSundhoro et\u00a0al. reportedin vitro simple method to evaluate the efficiency of the magnetic probes to label specifically cell populations in the whole blood by magnetic resonance imaging and fluorescence techniques. In addition, Garc\u00eda et\u00a0al. (in vivo imaging of brain disease.Gallo et\u00a0al. 
offered a et\u00a0al. also esta et\u00a0al. designedZhou et\u00a0al. developeGlyconanoparticles that exhibit multivalent binding to lectins are desirable for molecular recognition and therapeutic applications. Besford et\u00a0al. exploredWu et\u00a0al. exploredE. coli K88 adhesin and potentially could be used as a transporter for an antibacterial drug.Gallegos-Tabanico et\u00a0al. modifiedWon et\u00a0al. introduc3.The targeted drug delivery has been investigated as one of the main methods in medicine to ensure successful treatments of diseases. Pharmaceutical sciences are using micro or nano carriers to obtain a controlled delivery of drugs, able to selectively interact with pathogens, cells or tissues.Herein, the construction, preparation, and applications of several common water-soluble and stable glyconanoparticles were summarized and discussed. It can be clearly seen that these new multivalent systems have opened up new avenues for the study of carbohydrate-involved biological interactions. These glyconanoparticles are easy to prepare and have unique physical, chemical, and biological properties, which make them have a wide range of applications in drug delivery, biomedical imaging, diagnosis, and treatment. It is believed that with the in-depth study of carbohydrate nanobiology, as well as the crossing of chemistry, physics, and pharmacy, the research of carbohydrate QDs, gold/silver glyconanoparticles, and magnetic glyconanoparticles will make great progress, and find a wider application field.Major effort should be focused toward the design and synthesis of more complex and biologically relevant carbohydrate mimics in order to have a better understanding of the carbohydrate\u2013carbohydrate and carbohydrate\u2013protein interactions. The full therapeutic potential of these carbohydrate-based nanoparticles systems can be achieved when the functions of carbohydrates in biological systems are clarified."} {"text": "Goats are reared for their meat, mohair and other socio-cultural needs in Lesotho. Helminth infections are some of the major setbacks in the goat production industry due to their negative impact on animals\u2019 health, resulting in significant losses on meat and mohair production and death. A cross-sectional study was conducted to determine the prevalence, fecal egg infestation, and morphological identification of gastrointestinal parasites in goats.Fecal samples were collected from 765 goats and subjected to McMaster egg counting techniques using the flotation method. Statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS v.26.0).Haemonchus contortus was identified as the prevalent gastrointestinal nematode species found in goats. The prevalence and fecal egg count of gastrointestinal parasites were significantly higher (p<0.05) in goats located in the highlands and Senqu River Valley, while goats in the lowlands demonstrated a significantly (p<0.05) higher prevalence of H. contortus. Immature goats and kids were more significantly (p<0.05) prone to gastrointestinal parasites.The overall prevalence of gastrointestinal parasites was 94.7%, and the identified gastrointestinal parasites were nematodes (64.7%), coccidia (25.8%), and cestodes (4.2%). The nematodes and coccidia infestations were prevalent in goats located in the highlands and foothills, respectively, whereas nematode and coccidia fecal egg loads were higher in goats located in the foothills and Senqu River Valley, respectively. 
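As a side note on the figures above, the short sketch below (not taken from the paper) shows how such an overall prevalence is computed and adds an approximate 95% Wilson confidence interval, which the abstract does not report; the exact number of positive animals is not given, so a count consistent with the quoted 94.7% is assumed.

```python
# A small worked example (not from the paper): overall prevalence and an
# approximate 95% Wilson confidence interval from the sampling figures above.
# The exact number of positive animals is not reported, so a count consistent
# with the quoted 94.7% of 765 goats (~724 animals) is assumed here.
from math import sqrt

def wilson_ci(positives, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

n_sampled = 765
n_positive = 724                      # assumed count consistent with ~94.7%
prevalence = 100 * n_positive / n_sampled
low, high = wilson_ci(n_positive, n_sampled)
print(f"prevalence = {prevalence:.1f}% (95% CI {100 * low:.1f}-{100 * high:.1f}%)")
```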
Angora goat production serves as a cushion in the event of crop failure due to climatic vagaries, especially in arid and semi-arid environments ,2. Angoret al. stated t [et al. indicate [et al. .et al. [et al. [et al. [Furthermore, Ademola et al. reported [et al. also sta [et al. also repA cross-sectional study was conducted to determine the prevalence, fecal egg infestation, and morphological identification of gastrointestinal parasites in goats.The ethical approval was granted by the Department of Animal Science of the National University of Lesotho based on the recommended principles for the use of animals in conducting research. The animals were used with the consent of the farmers.The fecal sample collection was conducted from December 2018 to October 2019 in four agro-ecological zones of Lesotho which are Senqu River Valley, Highlands (Mountains), Foothills and lowlands represented by Qacha\u2019s Nek, Thaba-Tseka, Leribe and Mafeteng districts, respectively . Senqu RA total of 765 experimental animals kept under a semi-intensive management system were randomly selected. Animals <1 year (0-12 months) were considered kids, while those >1 year up to 2 years were immature (12-24 months), and those above 2 years were considered adults (>24 months).The simple random sampling technique was employed in the four agro-ecological zones of Lesotho. Four villages in each agro-ecological zone were selected with the assistance of shearing shed farmer associations. Three goat farmers were randomly selected in each agro-ecological zone, whereby each farmer provided 15 animals as experimental units composed of five kids, five immature, and five adult goats.Fecal samples were collected directly from each animal\u2019s rectum using sterile disposable plastic gloves and were placed in the labeled screw-capped plastic bottles and then kept in a cooler box with ice packs. The samples were then transported to the National University of Lesotho and refrigerated at 4\u00b0C, and the laboratory analyses were performed within 48 h.Two grams of crushed fecal samples were mixed with 58 ml of sodium chloride (flotation solution) and stirred. A homogenous solution was strained into the beaker, and few drops of amyl alcohol (3-5 drops) were added to treat bubbles in the solution. A McMaster quantitative technique was used to determine the number of eggs present per gram of feces, and each number obtained was multiplied by a factor of 100 to give an approximate number of eggs/gram of feces. A sub-sample was drawn from each sample using disposable pipettes and filled both chambers of the McMaster slides, which were observed under a microscope at 100\u00d7 following standard procedures outlined by several researchers -16.3 to migrate up the inner walls of culture jars and then harvested repeatedly by holding the jars at a slant position with mouth pointing downwards and spraying the inner walls with water to allow the larvae to drain into a suitable container (beaker) as recommended by van Wyk and Mayhew [Copro-cultures comprising pooled samples from each agro-ecological zone were prepared using goat fecal pellets. The pellets were thoroughly crumbled before being mixed with sufficient vermiculite chips to yield a crumbly mixture, which was lightly compacted and moistened sufficiently. The crumbly mixture was ensured not to be water-logged and then incubated in jars in the dark (laboratory cupboards) at room temperature for 7 days. 
The inside of jars was sprayed lightly with water before being placed in bright light that stimulates the Ld Mayhew .The larvae were preserved unchanged by adding formalin to the larval suspension to a final concentration of about 1-2% and then heated to 57\u00b0C in a water bath for about 1 min to kill larvae and straighten or uncurl it ,18.et al. [Larvae were prepared for examination by adding a drop of diluted Lugol\u2019s iodine solution to a drop of larval suspension on a glass microscope slide and then mounted a coverslip as outlined by van Wyk et al. .3 of most parasitic nematodes was based principally on examining the caudal (tail) and cranial (head) extremities. Conventional characteristics for identification of infective larvae GIN species were microscopically examined at 10\u00d7 with ocular number ten and measured using a calibrated stage micrometer that was imputed in an ocular lens [Morphological identification of Llar lens .The data collected were manually inputted in Microsoft Excel spreadsheet and transferred into SPSS v.26.0 for analyses. General linear model was employed to determine the effect of agro-ecological zone and age on the prevalence and fecal egg load of gastrointestinal parasites in goats. Generalized estimating equation (GEE) was used to analyze the FEC, where negative binomial regression was involved in the analysis. Odds ratio was used to measure the association between exposure and outcome. Descriptive statistics were implemented to determine the prevalence of identified species between the animals\u2019 agro-ecological zones and age groups. Pearson Chi-square test was also adopted to assess the degree of association between each risk factor and GIN. In the analyses, the confidence level was held at 95%.The goats in the highlands and foothills were more susceptible to nematode infestation than those in the lowlands and Senqu River Valley . The preAs illustrated in Goats in the lowlands were significantly (p<0.05) different from those in the highlands in the prevalence of cestodes. As depicted in The prevalence of nematodes was high (p<0.05) in immature goats, followed by adults, and the lowest prevalence was observed in kids . The oddIn the case of coccidian , the infIn terms of cestodes, a high prevalence was observed in kids, followed by immature goats, and was lowest in adult goats. There was also a significant difference (p<0.05) discovered between the different age groups. The likelihood of goats exhibiting cestodes infestation from adults to immature and kids increased significantly (p<0.05). The low prevalence of cestode infestation observed in adults might be due to body resistance as they might have developed immunity due to repeated natural infestations.Haemonchus contortus was the only species of GI nematodes that were discovered in this study. Of 172 samples examined, 130 were positive for H. contortus, and the overall prevalence in goats in this study was 75.6%. The cranial extremity of the infective larvae was observed to have a bullet-shaped head and lack of shrubs for goats to browse; hence, forcing the goats to graze closer to the ground, thereby, contacting the infective larvae. A higher (p<0.05) H. contortus infestation was observed in kids than in adult and immature goats.The higher (p<0.05) prevalence of lowlands comparedA higher (p<0.05) intensity of nematode FEC mean was observed in goats located in the foothills compared with other agro-ecological zones . 
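To make the quantification and modelling steps described in the methods more concrete, the sketch below uses entirely fabricated numbers (not the study data) to show how McMaster chamber counts are converted to eggs per gram and how such counts could be analysed with a negative-binomial generalized estimating equation in Python's statsmodels; the flock-level clustering used here is an assumption, since the paper does not state the grouping variable for the GEE.

```python
# A minimal sketch (fabricated data, not the study's records) of the egg-count
# workflow described in the methods above: McMaster chamber counts are converted
# to eggs per gram (EPG) by multiplying by 100, and EPG is modelled with a
# negative-binomial GEE using agro-ecological zone and age group as factors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
zones = ["lowlands", "foothills", "highlands", "senqu_river_valley"]
ages = ["kid", "immature", "adult"]

n = 120  # hypothetical animals for illustration only (the study sampled 765)
df = pd.DataFrame({
    "flock": rng.integers(0, 12, size=n),                     # assumed clustering unit
    "zone": rng.choice(zones, size=n),
    "age_group": rng.choice(ages, size=n),
    "mcmaster_count": rng.negative_binomial(1, 0.05, size=n), # raw chamber counts
})
df["epg"] = df["mcmaster_count"] * 100  # eggs per gram of feces, as in the McMaster method

model = sm.GEE.from_formula(
    "epg ~ C(zone) + C(age_group)",
    groups="flock",
    data=df,
    family=sm.families.NegativeBinomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
print(np.exp(result.params))  # exponentiated coefficients, read as rate ratios
```

The exponentiated coefficients are rate ratios between zones or age groups; the odds ratios reported in the paper presumably come from the separate positive/negative prevalence analysis rather than from a count model of this kind.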
The likThe nematode egg load was higher in immature goats, followed by adult goats, and lowest in kids . The adoThis study showed that adult goats were significantly lower (p<0.05) than other age groups in terms of coccidia fecal egg load. The study further demonstrated that coccidia egg count was significantly (p<0.05) higher in kids, followed by immature goats, and lowest in adult goats.The kids had significantly (p<0.05) higher cestode egg load than immature and adult goats. The likelihood of infestation increased significantly (p<0.05) from adult to immature goats, and from adults to kids, the chances of infestation increased significantly (p<0.05).et al. [Strongyloides (nematodes) in goats in the highlands. This high prevalence of nematodes, coccidian, and cestodes observed in goats in the highlands, foothills, and Senqu River Valley, respectively, suggests the existence of favorable environmental conditions for survival and development of parasitic larvae to the infective stage. Nabi et al. [et al. [This study\u2019s results agree with the findings obtained by Koinari et al. , who repi et al. illustra [et al. , who illet al. [et al. [et al. [et al. [Moniezia spp. (cestodes) might be due to less dissemination of eggs in the feces from the gravid segment.However, in a study by Moiloa et al. , nematodet al. further [et al. emphasiz [et al. , there w [et al. emphasiz [et al. also outet al. [et al. [et al. [et al., [et al., [et al. [et al. [et al. [The results of this study agree with those reported by Sunandhadevi et al. , who sho [et al. Mpofu et [et al. also add[et al., Nabi et [et al., noted th [et al. also arg [et al. emphasiz [et al. stated t [et al. argued t [et al. viewed tet al. [Eimeria spp. (coccidia) (86.34%). This may be attributed to the fact that kids are underdeveloped and have lower immunity resistance toward coccidia infestation than immature and adult goats. Similarly, Jittapalapong et al. [et al. [Eimeria (coccidia) infestation was the most prevalent parasitic infestation, followed by Strogyles (nematodes). Nevertheless, in a study conducted by Yusof and Isa [Eimeria spp. oocysts in feces, which contributed as a source of infestation for younger goats. Moreover, Mpofu et al. [According to the present findings, Gupta et al. reportedg et al. also rep [et al. added th and Isa , adult gu et al. added thThese results also agree with the study conducted by Yusof and Isa , who notStrongyloides (nematodes) in the highlands (Tambul) were significantly higher than those in the lowlands (Labu). Koinari et al. [Eimeria was higher in the highlands than in the lowlands. However, Regassa et al. [The results of this study agree with the findings of a previous study in Papua New Guinea , who disi et al. also adda et al. argued tet al. [Sharma et al. also obset al. [Following the results of this study, Kahan and Greiner also addet al. reportedThe findings in this study agree with the report of Yusof and Isa , who outH. contortus was identified as the prevalent GIN species found in goats. Nematodes and coccidia highly infested immature and kids.It can be concluded that nematode and coccidia infestation was higher in the highlands and foothills, while nematodes and coccidia fecal egg loads were higher in foothills and Senqu River Valley. LGM conceived and designed the research under the guidance of SM. LGM, MoP, and MaP conducted the sample collection. MoP and LGM carried out the morphological identification analysis. SM, MaP, and LGM carried out the data analyses. 
LGM wrote the manuscript. SM reviewed the manuscript. All authors read and approved the final manuscript."} {"text": "Recent years witnessed a stagnation in yield enhancement in major staple crops, which leads plant biologists and breeders to focus on an urgent challenge to dramatically increase crop yield to meet the growing food demand. Systems models have started to show their capacity in guiding crops improvement for greater biomass and grain yield production. Here we argue that systems models, phenomics and genomics combined are three pillars for the future breeding for high-yielding photosynthetically efficient crops (HYPEC). Briefly, systems models can be used to guide identification of breeding targets for a particular cultivar and define optimal physiological and architectural parameters for a particular crop to achieve high yield under defined environments. Phenomics can support collection of architectural, physiological, biochemical and molecular parameters in a high-throughput manner, which can be used to support both model validation and model parameterization. Genomic techniques can be used to accelerate crop breeding by enabling more efficient mapping between genotypic and phenotypic variation, and guide genome engineering or editing for model-designed traits. In this paper, we elaborate on these roles and how they can work synergistically to support future HYPEC breeding. Previous studies suggest that production of major crops needs to double to meet the projected demand by the year 2050, which requires an increased speed of yield improvement compared to the historical trends assimilation is needed. The 3D architecture-based canopy and root models, using organ physical and physiological properties, micro-environment around plants and inside organs as input, can predict canopy photosynthesis and root absorption activities possible, as demonstrated by the high-throughput transcriptome sequencing and the emerging robotized enzymes activity measurement platforms data for plants grown at different temporal and spatial scales and environmental conditions . Many mo.et al. develope (et al. develope (et al. proposed. et al.. High-th13C/12C ratio of the leaf (Condon et al.Drysdale, a bread wheat cultivar with improved transpiration efficiency and grain yield (Condon et al.2 uptake rate (Song et al.et al.It is worth emphasizing here that systems models may also drive the development and expansion of phenotyping tools. Firstly, systems models can be used to identify critical morphological and physiological parameters to be screened using phenomic approaches. As an example, a model of carbon isotope discrimination and stomatal conductance suggested that leaf water use efficiency can be \u2018recorded\u2019 in the Genomics accelerates the development of crop systems models and realization of crops designed with these models. As mentioned above, a key feature of future advanced crop systems models is to predict phenotype from genotype. How can one incorporate genomic information into systems models? The first approach is to construct mapping functions between molecular markers and macroscopic model parameters. For example, heading time of individual lines in a recombination inbred population was successfully predicted under different environments based on the parameterization of four model parameters from corresponding QTL markers (Yin et al.et al.et al.et al. (Arabidopsis. 
Furthermore, with the genome information of Mycoplasma genitalium, a whole-cell computational model was constructed to predict a wide range of phenotypes from its genotype (Karr et al..et al. accurateet al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.Genomics can accelerate identification of alleles or QTLs controlling traits identified by crops systems model as targets to produce HYPEC. Such QTLs or alleles have already been used in molecular marker-assisted breeding (Collard in silico simulation, crop systems models can be used to identify key parameters or parameter combinations to increase yield (item 5.1 in Here we summarize the above contents and describe the framework of systems model-guided HYPEC breeding, which will be built on the synergism of systems models, phenomics and genomics . FirstlyImproving photosynthetic efficiency over a whole growing season is now recognized as a major option to drastically improve crop yield, for which all the sink and flow related processes also need to be simultaneously considered to achieve the expected gain in crop yield potential. In this aspect, many breeding organizations, such as CIMMYT, have recognized this and created physiology-based breeding programmes to simultaneously improve source-, sink- and flow-related traits (Reynolds and Langridge"} {"text": "Dear Editorrepair index\u201d.A publication by Fahradi et al., (2017) describes the results of the study concerning micronucleus (MN) assay in buccal cells of smokers, a problem which was investigated by many scientists recently . But this problem is not solved, and MN induction in smokers is still questionable since both positive and negative results were reported in these studies . The authors studied along with MN also nuclear anomalies and presented the results of calculation of so-called \u201cThe main shortcoming of this publication (very serious one!) is that the authors scored only 500 cells per subject which were stained with Papanicolau stain (which is not DNA specific). I wonder why vast majority of scientists from some Asian counties (namely India) and Latin America (namely Brazil) ignore the validated and standardized buccal MN assay protocol . In this protocol is clearly stated that 2,000 buccal cells stained with DNA-specific stain should be scored to get reliable results . It is suggested in several publications that for the monitoring of genotoxic effects of carcinogens in exfoliated humans cells 3,000 - 10,000 cells per subject should be evaluated due to the lower baseline MN frequency . Recently, Ceppi et al., (2010) also calculated the minimum number of cells which should be evaluated to obtain reliable result in buccal MN assay and stated that it should be equal to 4,000.Crucial for buccal MN assay is staining because all epithelial cells have different types of keratohyalins. Cell injury (cytotoxicity) which can take place due to smoking (because of cell exposure to cytotoxic/genotoxic substances in tobacco smoke) can increase production of these proteins in cells (which appear in cells as bodies which do not contain DNA). When DNA non-specific stains are used, they visualize these bodies which can mimic MN. This phenomenon was for the first time showed by Casartelli et al., (1997) and Casartelli et al., (2000). Further, this phenomenon was confirmed in our study with different staining techniques in the buccal cells of smokers . 
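To put the scoring-depth argument in quantitative terms, the short calculation below (an illustration added here, not part of the original letter) assumes a baseline MN frequency of roughly 1 per 1000 cells, within the ranges cited above, and shows how unstable a 500-cell score is expected to be.

```python
# A rough illustration (not part of the original correspondence) of why the number
# of cells scored matters. Assuming a baseline MN frequency of about 1 per 1000
# cells, the expected number of MN per subject and the Poisson probability of
# observing none at all are:
from math import exp

baseline_freq = 0.001                    # assumed ~1.0 MN per 1000 cells
for n_cells in (500, 2000, 4000):
    expected_mn = baseline_freq * n_cells
    p_zero = exp(-expected_mn)           # Poisson probability of observing zero MN
    print(f"{n_cells} cells scored: expected MN = {expected_mn:.1f}, "
          f"P(no MN observed) = {p_zero:.2f}")
# With only 500 cells the expectation is 0.5 MN and roughly 60% of healthy subjects
# would show none at all, so per-subject frequencies are dominated by sampling noise;
# at the 2,000-4,000 cells recommended in the validated protocol the expected count
# rises to 2-4 and the estimates become far more stable.
```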
In the buccal cell MN assay protocol and further guidelines, it is indicated that the presence of MN should be confirmed under fluorescent light because, after this type of staining, all bodies containing DNA fluoresce. Hence, the study by Fahradi et al., (2017) contains a triple limitation and, therefore, the results obtained in the study are not reliable. Another serious problem of this publication is the enormously high level of MN. Indeed, it is indicated in Tables 1 and 2 that MN levels are 27.3‰ (2.73%) in non-smokers and 37.0‰, 47.4‰ and 29.0‰ in smokers. These numbers are extremely, unusually high! Indeed, MN in epithelial buccal cells are very rare events. In 1999, Fenech et al., (1999) stated that the average MN frequencies in exfoliated buccal cells of healthy subjects are between 1.0 and 3.0‰. Later, it was reported by Bonassi et al., (2011) and Ceppi et al., (2010) (based on the data of 63 studies) that MN levels in buccal cells of healthy unexposed persons are 0.74‰ (between 0.3 and 1.7‰) and 1.10‰ (between 0.70 and 1.72‰), respectively. In other words, in the study of Fahradi et al., (2017) the level of MN in non-exposed healthy subjects is much higher than in the publications of other investigators. This level is between 24.9- and 36.9-fold higher compared with the other data, which is not possible, of course. The authors claim that “differences were significant in smokers vs. nonsmokers for MN”. But careful examination of the data presented in Table 2 shows that the difference between smokers with a history of ≤10 years and non-smokers (from Table 1) is not significant. The authors wrote that “statistical analysis was performed using the t-test”. If so, the statistical significance of the difference between the mentioned data is t = (29.0 − 27.3)/√(8² + 10.9²) = 1.7/13.5 = 0.12, which is far from the critical value of 1.96 required for significance at p < 0.05. It means that the difference between smokers with a history of ≤10 years and non-smokers is not significant. In this case the authors should write that the MN level increases significantly after 10 years of smoking, and that smoking for less than 10 years does not induce MN in buccal cells. The application of Student’s t-test for statistical analysis of MN in buccal cells is not correct. It would be better to use non-parametric tests. At the least, the authors ought to normalize the data by means of a log or square-root transformation and then apply the t-test. The authors stated in the abstract and also in the text of the article that “karyorrhexis is a form of nuclear change in which nuclei are pyknotic or partially pyknotic”. This is a completely wrong statement because they mixed up two types of nuclear anomalies, i.e. karyorrhexis and pyknosis. In all legends to Figures 1–3, in which different types of nuclear anomalies are presented, Fahradi et al., (2017) state that the anomalies are “Marked by an Arrow”. But no arrow is indicated. Also, the photos are of poor quality and it is not possible to see any anomaly in them. Instead of one cell there are several, and it is absolutely not clear which cell the authors meant. The authors stated that there are several reports concerning MN induction in smokers and mentioned the following papers: Kamboj et al., (2007); Stich et al., (1982); Majer et al., (2001) and Rosin et al., (1987). All these papers are not relevant in this regard. Indeed, the paper by Majer et al.
is comprehensive review, research papers by Kamboj et al., (2007) describes MN score in patients with squamous cell carcinoma and leukoplakia and Stich et al., (1982) concerns betel quid chewers. I could not find the paper by Rosin et al., (1987) but in the abstract presented in PubMed is written that the paper concerns tobacco and betel quid users in the Philippines and snuff users in the Northwest Territories. Instead they could cite the paper which they mentioned in another regard, namely Stich and Rosin (1983) concerning MN assay in heavy smokers. repair index\u201d, I am not sure about its usefulness. Indeed, this parameter is not indicated in the validated protocol. Instead of this index I suggest to the authors include into analysis scoring of basal cells. This parameter will show changes in the proliferation of buccal cells . I hesitate if it should be expressed as %. Indeed, RI of non-smokers (Table 1) is 1.51 + 1.29/ 2.73 + 0.9 = 0.77. So it is mistake to state that RI is expressed in %.As for \u201cIn summary, Fahradi et al., (2017) carried out research work with 60 subjects but made some mistakes which are unfortunately quite common in case of ignoring the validated and standardized protocol for buccal MN cytome assay . Consideration of all parameters suggested in the protocol will increase the reliability of the study and will give possibility to compare the results obtained in different laboratories."} {"text": "The humoral immune response is one of the central tenets of mammalian immunity. Delivered through the production of antibodies of multiple classes and sub-classes their activities are achieved through their inherent ability to bind with exquisite specificity to a given target antigen and then engage various immune effector functions to elicit the appropriate response. Chief amongst these are cellular immune effectors such as macrophages, NK cells, and neutrophils which are engaged through their expression of Fc receptors (FcR), binding the Fc portion of the immunoglobulins. Accordingly, different classes and isotypes of antibody engage a selection of different FcR. For example in the murine system there are receptors that are specific for IgG, IgM, IgE as well as receptors that are dually-specific for IgM and IgA with paralogues in human cells. A bewildering array of immune and non-immune cells express these various receptors in different combinations, leading to a highly complex system for regulating and evoking antibody responses. Various FcR evoke cellular activation (Fc\u03b3RIIa and Fc\u03b3RIIa), whereas others are inhibitory (Fc\u03b3RIIb), with still others being capable of evoking intracellular transport and recycling of IgG (FcRn) to establish long serum half-lives. Clearly, careful regulation of expression, signaling and modulation is required for a healthy, well-functioning and balanced immune system. In this Research Topic, a series of articles are provided to reveal comprehensive insights on the role of these various FcR in health and disease, taking into account the wide spectrum of receptors and cells expressing them. Most importantly the insights presented in these articles pave the way for powerful immunotherapies and emerging principles about how FcR can be exploited for therapeutic purposes for various diseases, including infectious diseases, autoimmune diseases, and cancer.Kerntke et al. revisited the question of the number and expression pattern of Fc\u03b3R on myeloid cells, Nagelkerke et al. 
dissected the genetic variation within the family, including duplications and deletions within the low affinity Fc\u03b3R-locus. How the GPI-linked Fc\u03b3RIIIb affects tumor cell killing by PMN through therapeutic monoclonal antibodies is furthermore tackled by Treffers et al. while Kang et al. describes a new re-engineered IgG molecule that selectively engages Fc\u03b3RIIIa-V158 for enhanced therapeutic benefit through a single Fc\u03b3R. Brandsma et al. also investigated the differential capacity of tumor killing through FcR that engage different antibody isotypes, specifically addressing the role of Fc\u03b1R vs. Fc\u03b3R. Parameters affecting the function of FcRn were also tackled. Finally, Kendrick et al. mathematically modeled FcRn kinetics and suggest a novel reduced-order model based on a new expression for the fractional catabolic rate that can be used to predict plasma IgG responses.In total, 6 original research articles were contributed on the various topics, spanning the genetics and function of the disparate Fc\u03b3R. While Pyzik et al. and Nagelkerke et al. also contributes a comprehensive review of Fc\u03b3RII-Fc\u03b3RIII genetics. Anania et al. systematically discuss the structure-function relationship of Fc\u03b3RII receptors, while the contribution of Fc\u03b3RIIb in the development of autoimmune diseases in mouse models gets a comprehensive assessment by Verbeek et al.. Breedveld and van Egmond review pathologies and new opportunities resulting from targeting Fc\u03b1R. In addition, Foss et al. extend the scope of this topic to the cytosolic FcR, TRIM21, while Liu et al. and Kubagawa et al. discuss the role of the IgM binding, Fc\u03bcR in immunity. The role of FcR in infectious diseases and vaccine development is covered by Boudreau and Alter, discussing FcR and their role in the protection against influenza infection and future prospects to leverage FcR immune activity for the development of vaccines with Jenks et al. focusing on the subversion of immune responses by FcR encoded by Herpes simplex virus. The involvement of FcR in various inflammatory diseases such as rheumatoid arthritis, systemic lupus erythematosus, and immune thrombocytopenia with a focus on antibody-mediated autoimmunity is covered by Mkaddem et al.. This includes the mechanism of FcR-receptor-mediated inflammation and how to potentially exploit this knowledge therapeutically. Katsinelos et al. focuses on the role of antibodies and receptors involved in neurodegeneration during Alzheimer's and Parkinson's disease, while Castro-Dopico and Clatworthy discuss the role of FcR in inflammatory diseases of the gut, namely inflammatory bowel diseases. Patel et al. discusses the multiple variables that are at play in the interface between target and effector cells through IgG-Fc\u03b3R engagement, with a focus on the largely undescribed role for Fc\u03b3R-glycosylation in mediating the underlying recognition events. FcR signaling is also specifically covered by Gomez et al. for Fc\u03b5RI in Allergic disease, including seasonal rhinitis, atopic dermatitis, urticaria, anaphylaxis, and asthma, while Koenderman et al. reviews how the activation status of FcR can be affected by inside-out signaling. Finally, the importance of FcR in cancer and cancer therapies, in particular, the role of checkpoint inhibitors therein, is given comprehensive review by Chen et al. 
and special focus on Fc\u03b3RIIb mediated antitumor immunity by Teige et al.This Research Topic also features 18 Review Articles spanning these disparate areas. FcRn is tackled by Overall, it is clear that the knowledge acquired from the articles contained within this special issue highlights the complexity of the FcR family and their importance in multiple aspects of health and disease. However, equally clear is the fact that this family of receptors, despite being investigated for over 4 decades, still harbors many secrets, reinforcing that we still lack a complete understanding of their complex regulation, interaction and impacts. As central to humoral immunity, modulating disease pathogenesis and acting as a key determinant of antibody therapeutics, it is also similarly evident that further research in this area is still warranted. We look forward to seeing what the intensive study of these receptors shows in the coming decade.All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "The objective of this study was to determine the prevalence of DRML and factors related to the lesions among denture wearers seen in a Nigerian teaching hospital.interviewer\u2019s administered questionnaire was used to obtain information from consecutive patients that had used removable denture for at least six months and consented to participate. Data related to gender, age, types of denture and presence of denture induced oral lesions were obtained, entered into a computer and analyzed using IBM SPSS Version 20. Descriptive statistics were expressed as frequency and percentages. Fisher\u2019s exact test was performed for discrete variables. A P-value less than 0.05 was regarded as statistically significant.a total of 104 respondents participated in the study and 14 had DRML giving a prevalence of 13.5%. The majority, 11 out of the 14 (78.57%) presented with mucosa ulceration, while 8 (57.14%) out of the 14 cases of DRML were caused by over extension of the denture flanges. There was no statistically significant relationship between daily removal of denture fore going to bed to sleep at night and DRML (p=0.776).the prevalence of denture related mucosal lesion was 13.3% and the major cause was over extension of denture flange. There is need to emphasize adherence to review appointments for early detection and correction of denture instability and over extension of denture flange to prevent DRML. Worldwide, the prevalence of edentulism is high and varies considerably between countries while a et al. [et al. [et al. [et al. [et al. [et al. [et al. [The prevalence of denture induced oral lesions varies across different countries and ranges from 10.8% to 62% . A preva [et al. reported [et al. reported [et al. revealed [et al. -13; Sadi [et al. reported [et al. 48.2% an [et al. , epulisf [et al. and papi [et al. . This ca [et al. . Coelho [et al. reportedThis was a cross-sectional study among removable denture wearers. A total of one hundred and four removable denture wearers were interviewed over a period of twelve months. Only patients that were willing to participate and had used the removable denture for at least six months were included in the study. 
After informed consent was obtained, data related to gender, age, type of denture, length of denture use, hygiene care, nocturnal denture wear, and presence of denture induced oral lesion were obtained using an interviewer\u00b4s administered questionnaire. The questionnaire was administered by one of the investigators. Confidentiality was ensured by not writing the name of the participants in the data form. Examination of each patient\u00b4s mouth was done on a dental chair by using a mouth mirror with gloved hand. Diagnosis of denture induced oral mucosa lesion was made based on the clinical appearance of the oral mucosa; denture stomatitis based on inflamed palatal mucosa, and ulceration was diagnosed based on presence of laceration or tearing on the oral mucosa in relation to denture. Patients with nodular mucosal swelling were referred to oral surgery clinic for excisional biopsy and the biopsies were sent to oral pathology laboratory for histological confirmation of mucosal hyperplasia. Also, the patients\u00b4 dentures were examined, and according to the quantity of plaque on the denture base, patients were divided into two groups by using Budtz-Jorgensen\u00b4s index of the respondents were females and majority 48 (46.2%) were above 60 years . Table 2There was no statistically significant relationship between daily removal of denture before going to bed to sleep at night and DRML (p=0.776), types of denture and DRML (p=0.269) and presence of plaque on the denture and DRML (p=0.729) . Table 4et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [In this study 58.6% of denture wearers that participated were females and this is in agreement with a previous study by Arigbede et al. . The femet al. . The pre [et al. but lowe [et al. 45.6% by [et al. , 20.5% p [et al. . The dif [et al. ,20,21 re [et al. could beet al. [et al. [Candida albicans being reported as a major cause by previous studies [et al. [In this study, traumatic ulcer was the most common form of denture induce lesion 1178.6%), followed by mucosa hyperplasia 2 (14.3%). This is in contrast to previous studies ,21 that et al. reported [et al. reported [et al. . Our fin%, follow studies . The mos studies as the m [et al. stated tet al. [et al. [There is no significant relationship between not wearing the denture to bed at night and the development of DRML, although the group that sometimes remove their denture before bedtime had the highest (35.7%) incidence of DRMLs in this study. This is in contrast to the previous study that shows higher prevalence among those that wear denture to bed . The higet al. reported [et al. was thatThe prevalence of denture related mucosal lesion is 13.5% in this study and the major cause was over extension of denture flange. There is need to emphasize proper denture fabrication to prevent DRMLs and adherent to review appointments for early detection of denture instability and over extension of denture flange.Denture related mucosa lesion (DRML) has been studied and found to show varying prevalence across different countries;DRML was more common among complete denture wearer and in elderly;Denture stomatitis was found to be the most common form of DRML.The causes and factors related to the incidence of denture related mucosa lesion (DRML) was studied for the first time in Nigeria and we found that ulceration was the major cause of DRML especially among RPD wearers;There was a statistically significant relationship between forms of lesion and types of denture. 
Mucosa ulceration and epulisfissuratum occurred more commonly among removable partial denture and complete denture wearers respectively;There was no statistically significant relationship between the duration of use of denture and prevalence of DRML."} {"text": "The choice of the type of stabilization device in the osteosynthesis of dorso-lumbar spine fractures remains a subject of controversy. The present study aims to evaluate the efficiency of short segment in patients suffering post-traumatic thoracolumbar fractures. This study was conducted in the Department of Orthopedic Surgery and Traumatology of the Habib Bourguiba University Hospital, Sfax, Tunisia. All our patients had a spinal osteosynthesis via the posterior approach with a short segment pedicle screw fixation. We established a record of the pre and post-operative data, the functional results in the post-operative stage during the follow-up period and in retrospect according to the Denis Pain Scale, as well as the Oswestry score. The correction was evaluated by determining the relative gain and loss at the last period of retrospect: vertebral kyphosis, regional kyphosis, Gardner Segment Kyphotic Deformity (GSKD), and computed tomography (CT) scan in retrospect to check the quality of the arthrodesis. The average Oswestry score was 14%. Twenty-nine patients had an Oswestry score \u226440%. The relative gain obtained postoperatively was 57.3% for vertebral kyphosis, 67.2% for regional kyphosis and 71.3% for Gardner kyphosis deformity; while the loss of correction at the last follow-up was 0.6\u00b0 for vertebral kyphosis, 1.5\u00b0 for regional kyphosis and 0.9\u00b0 for GSKD. No cases of non-union were reported. The short segment fixation makes it possible to limit operating time, the abundance of bleeding and the aggression of the soft tissues. Over the last two decades, numerous technical advances in instrumentation and in the knowledge of spinal biomechanics have modified the surgical strategies for the synthesis of spinal-lumbar spine fractures . The choWe conducted a retrospective, descriptive study over an 8-year period involving 30 patients operated on for a thoracolumbar spine fracture at the Orthopedic Surgery and Trauma Department of the Habib Bourguiba University Hospital in Sfax, between July 2011 and July 2019. All patients were operated on by the same surgeon and with the same technique. All of our patients underwent a spinal osteosynthesis via the posterior short segment instrumentation, and a posterior and postero-lateral graft. In our study, we included patients with: age \u226515, complete preoperative, postoperative and retrospective radiological assessment with a minimum retrospect of 1 year. We established a record determining the clinical and radiological data of each patient. On the clinical level, we specified the general information about the patient and the circumstances of the accident. The record also included the characteristics of the fracture and its consequences on the spinal statics, the neurological status, the pre and post-operative data , and the postoperative. Retrospective clinical and anatomical results according to the Denis Pain Scale, the Oswestry score (ODI: Oswestry Disability Index), the Sch\u00f6ber Index and return to work were considered in the report. On the radiological level, we estimated the correction by determining the relative gain and losses at the last follow-up vertebral kyphosis (VK), regional kyphosis (RK) and Gardner Segment Kyphotic Deformity (GSKD). 
We also performed the study of sagittal balance through the sagittal heel in T9 and the sagittal vertical axis (SVA) measured on an x-ray of the entire spine standing front and in the profile made at the last retrospect. Scannographic analysis was performed to check the quality of the arthrodesis.Our population included 29 men and only 1 woman. Average age was 32.6 years with a standard deviation of 10.7. The trauma was due to falls from a high place in 50% of the cases and to road accidents in 46.7% of the cases . AssociaThe mean operating time was 99.3 minutes with a standard deviation of 16.7 minutes. Pre- or postoperative transfusion was necessary in only one patient in our population. A laminectomy was performed in 8 cases (26.7%). The length of the hospital stay was 12.2 days on average and the delay in getting up postoperatively was of 2.9 days in average in non-neurological patients, with extremes ranging from 2 to 5 days. Four patients (13.3%) did not adhere to our rehabilitation protocol. The average decline in our series is 51.1 months with a standard deviation at 24 months. According to the Denis Pain Scale (DPS), 27 patients (90%) had a grade <3 and 3 patients (10%) had a grade \u22653 . The aveTwenty-six patients (86.7%) returned to work within 7.9 months and 4 patients (13.3%) stopped working permanently. The average Sch\u00f6ber index was 12.9/10 cm with a standard deviation of 0.6 cm. All patients initially presenting a partial neurological deficit improved by at least one grade according to the Frankel classification, 50% of which showed total neurological recovery. We also deplored a case of sepsis on early material. However, no case of non-union or implant failure was noted. Regarding the radiological results, the relative gain obtained postoperatively was 57.3% for VK, 67.2% for RK and 71.3% for GSKD and the et al. [et al. [et al. [et al. [In congruity with the literature , the stuet al. , the abs [et al. found th [et al. found 36 [et al. and that [et al. . Others [et al. . The Loa [et al. .et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [According to the Magerl\u00b4s classification , type A et al. , this cl [et al. question [et al. , the pre [et al. thinks t [et al. argues t [et al. and Defi [et al. used mon [et al. (100 min [et al. (101 minet al. [et al. [et al. [et al. [et al. [The comparison of bleeding abundance between the two types of segmental stabilization has been reported in several publications , 20, shoet al. (12.4 da [et al. (17 days [et al. and Shin [et al. noted go [et al. with an [et al. and lack [et al. .et al. [et al. [et al. [et al. [El-Shehaby A et al. reported [et al. noted th [et al. . Compare [et al. and less [et al. . 
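As a purely illustrative aside, the correction metrics quoted above (relative gain and loss of correction for vertebral kyphosis, regional kyphosis and GSKD) can be computed once the angle at each time point is known. The study does not spell out its exact formula, so the sketch below assumes the common convention that relative gain = (preoperative angle − postoperative angle) / preoperative angle and loss of correction = angle at last follow-up − postoperative angle; the angle values are invented for illustration and are not patient data from this series.

```python
def relative_gain(preop_deg: float, postop_deg: float) -> float:
    """Assumed definition: fraction of the preoperative deformity corrected by surgery."""
    return (preop_deg - postop_deg) / preop_deg

def loss_of_correction(postop_deg: float, last_followup_deg: float) -> float:
    """Assumed definition: degrees of correction lost between surgery and last follow-up."""
    return last_followup_deg - postop_deg

# Invented example angles (degrees) for one patient, not data from the study
preop, postop, follow_up = 18.0, 7.5, 8.1
print(f"Relative gain: {100 * relative_gain(preop, postop):.1f}%")             # 58.3%
print(f"Loss of correction: {loss_of_correction(postop, follow_up):.1f} deg")  # 0.6 deg
```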
Short segment fixation makes it possible to minimize the extent of the skin incision and to limit the operating time, the abundance of the bleeding and the damage to soft tissues. Thoracolumbar fractures are common; thoracolumbar fractures require osteosynthesis with a posterior and posterolateral graft; the choice of the type of stabilization remains a subject of controversy. The short segment minimizes the extent of the skin incision; the short segment limits the damage of soft tissues; prescribing the short segment fixation makes it possible to achieve consolidation and arthrodesis of thoracolumbar fractures without incurring a significant loss of correction or a high rate of instrumentation failure. The authors declare no competing interests."} {"text": "We read with interest the article by Gomes et al. entitled: "Exercise program combined with electrophysical modalities in subjects with knee osteoarthritis: A randomised, placebo-controlled clinical trial". Gomes et al. concluded that the low-level laser therapy (LLLT) did not reduce knee osteoarthritis pain when applied as an adjunct to exercise therapy. We argue that Gomes et al. neglected relevant laser treatment recommendations in the conduct and reporting of the trial. Gomes et al. did not state the Joules per treatment spot applied. We calculated the Joules applied from other laser information in the report and found that it is too low of a dose according to the World Association for Laser Therapy (WALT) guidelines. Furthermore, we have published a meta-analysis of 22 placebo-controlled trials demonstrating a significant difference in pain-relieving effect between doses in adherence and non-adherence to the WALT guidelines. However, neither the WALT guidelines nor our meta-analysis was mentioned by Gomes et al. Moreover, Gomes et al. did not state whether the output power of the laser device was measured, and this is concerning because in the city of São Paulo, where the trial was conducted, most laser devices have been found to deliver less of a dose than specified by the manufacturers. In summary, we found that the best available evidence regarding effective and ineffective LLLT dosing from systematic reviews was neglected in the conduct and reporting of the trial, and that the laser device may not have been calibrated. We read with interest the article by Gomes et al. entitled: "Exercise program combined with electrophysical modalities in subjects with knee osteoarthritis: A randomised, placebo-controlled clinical trial". Gomes et al. concluded that the addition of low-level laser therapy (LLLT) "… did not increase the clinical benefit after 8 weeks of treatment (primary and secondary variables) when combined with an exercise protocol for knee osteoarthritis." We argue that the results of the trial were not interpreted in the light of what was already known in terms of LLLT dosing. We are surprised that Gomes et al. did not state the Joules per treatment spot applied, since they reported that the laser beam area of 0.1309 cm2 was utilized in skin contact mode and that the energy density was 6 J/cm2; this corresponds to roughly 0.78 J per treatment spot (6 J/cm2 × 0.1309 cm2 = 0.78 J). In the World Association for Laser Therapy (WALT) dose guidelines, irradiating the osteoarthritic knee with at least 1 J of 904 nm wavelength laser per treatment spot is recommended .
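The dose arithmetic at the heart of this critique is simple enough to check directly: dose per treatment spot (J) = energy density (J/cm²) × irradiated spot area (cm²). The snippet below reproduces the ~0.78 J figure from the values quoted in the letter and compares it with the 1 J-per-spot WALT recommendation for 904 nm laser at the knee; it is only a worked restatement of that calculation, not code from either publication.

```python
def dose_per_spot_joules(energy_density_j_per_cm2: float, spot_area_cm2: float) -> float:
    """Dose delivered to one treatment spot: energy density multiplied by the beam/spot area."""
    return energy_density_j_per_cm2 * spot_area_cm2

energy_density = 6.0   # J/cm2, as reported in the trial
spot_area = 0.1309     # cm2, laser beam area reported in the trial
walt_minimum = 1.0     # J per spot, WALT recommendation for 904 nm at the knee

dose = dose_per_spot_joules(energy_density, spot_area)
print(f"Applied dose: {dose:.3f} J per spot")   # 0.785 J, rounded to 0.78 J in the letter
print(f"Meets WALT minimum of {walt_minimum} J per spot: {dose >= walt_minimum}")  # False
```

The same relation explains the letter's later point: a quoted energy density in J/cm² equals the dose in Joules per spot only when the beam happens to cover exactly 1 cm².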
Our res2 [2 is equivalent to J per treatment spot only in instances where the laser beam covers exactly 1\u2009cm2, which it rarely does.The dose applied by Gomes et al. does not satisfy the WALT recommendations and our LLLT dose-response meta-analysis can explain the negative findings. However, neither the WALT recommendations , nor our2 . It is iGomes et al. did not state whether the output power of the laser device was measured. It is a major concern that in the greater S\u00e3o Paulo area of Brazil, where the study by Gomes et al. was conducted, 59 of 60 laser devices tested delivered less of a dose than specified by the manufacturers . We concIn summary, we found that the best available evidence regarding effective and ineffective LLLT dosing from other trials was neglected in the conclusion by Gomes et al. and that their laser device probably was not tested and most likely delivered an ineffective dose."} {"text": "The development of DNA sequencing technology has provided an effective method for studying foodborne and phytopathogenic microorganisms on fruits and vegetables (F & V). DNA sequencing has successfully proceeded through three generations, including the tens of operating platforms. These advances have significantly promoted microbial whole\u2010genome sequencing (WGS) and DNA polymorphism research. Based on genomic and regional polymorphisms, genetic markers have been widely obtained. These molecular markers are used as targets for PCR or chip analyses to detect microbes at the genetic level. Furthermore, metagenomic analyses conducted by sequencing the hypervariable regions of ribosomal DNA (rDNA) have revealed comprehensive microbial communities in various studies on F & V. This review highlights the basic principles of three generations of DNA sequencing, and summarizes the WGS studies of and available DNA markers for major bacterial foodborne pathogens and phytopathogenic fungi found on F & V. In addition, rDNA sequencing\u2010based bacterial and fungal metagenomics are summarized under three topics. These findings deepen the understanding of DNA sequencing and its application in studies of foodborne and phytopathogenic microbes and shed light on strategies for the monitoring of F & V microbes and quality control. The principles of three generations DNA sequencing were depicted. Whole genome sequencing and DNA markers of foodborne and plant fungal pathogens were summarized. Bacterial and fungal metagenomics studies on fruits and vegetables were concluded. The amplicons of the DNA fragments are then sequenced in a system with four independent PCR arrays. In the four independent PCR assays, four chain\u2010terminating inhibitors are added. In the PCR assays, the majority of DNA polymerase reactions are conducted by adding dNTPs. However, in several reactions, ddNTPs are added for polymerization, thus terminating chain extension. As a result, DNA amplicons with different lengths are obtained. An electrophoresis method is used to separate the terminated DNA amplicons. The continuous DNA sequence is obtained by docking the ends of the electrophoresis\u2010separated products.The initial DNA sequencing methodology was developed in the mid\u20101970s. Sanger et al. established a method for determining DNA sequences via primed synthesis with DNA polymerase was announced by Applied Biosystems Co. in 1987 , the single\u2010molecule real\u2010time (SMRT) approach and the Oxford Nanopore MinION sequencer . The representative E. 
albertii strain KF1 was the first to be sequenced and reported serve as markers for specific VT\u2010producing E. coli gene , salamae (II), arizonae (IIIa), diarizonae (IIIb), houtenae (IV) and indica and flagellar (H) antigens . The genome of the representative S. enterica strain Typhi str. CT18 was the first to be reported . The sseL, spvC . Specifically, S. aureus is an important pathogen associated with toxin\u2010related food poisoning. As of recently, the genomes of approximately 10\u00a0630 S. aureus strains have been registered genes of sea, seb, sec and see have been used to monitor toxic S. aureus in food , 113 strains of S. boydii (genome ID 496), 1338 strains of S. sonnei have been registered that encodes 2790 proteins. In PCR\u2010based detection assays, specific gene regions of ipaH, virA, ial and 16S rRNA have been used as targets for Shigella genus identification . The hip, 16S rRNA, rrs, cdaF, porA, Hyp, cjaA, ceuE, hipO, mapA, ceuA, askD, glyA, lpxA, ccoN, ORF\u2010C sequence, rpoB, oxidoreductase gene, cdtA and pepT genes are widely used for the PCR identification of C. jejuni gene is considered to be a useful marker for distinguishing patulin\u2010producing and nonproducing Penicillium species , which differ between plant species , alternariol (AOH), alternariol monomethyl ether (AME), tentoxin (Ten) and altenuene (ALT). However, the genes responsible for Alternaria toxin biosynthesis have not yet been confirmed. To date, six strains of A. alternata have been registered carrying HST genes. The genome characteristics of A. arborescens, A. brassicicola, A. solani and A. tenuissima are summarized in Table\u00a0Alternaria species, such as high\u2010resolution melting (HRM) analyses, AFLP and SSR , Alt a1, AaSdhB, AaSdhC, AaSdhD, ITS and \u03b2\u2010tubulin have been used for Alternaria species identification . Genomic comparison revealed approximately 98% similarity between the six A. flavus species and 81% similarity between A. flavus and A. parasiticus species . As illustrated in previous reviews, the genes responsible for aflatoxin biosynthesis are integrated as a cluster that contains approximately 25 genes with a total length of 80\u00a0kb regions in the Fusarium genus . A total of 15 strains of this pathogen have been sequenced (NCBI genome ID 13188). The representative F. fujikuroi strain IMI 58289 has been reported . The representative strain F. proliferatum ET1 has been reported gene rDNA (18S) gene marker region (Diguta et al., The et al., et al., et al., et al., et al., Metagenomic strategies are technological approaches that are increasingly being used to study the overall microbial community in complex biological samples (Cao Many studies have elucidated the bacterial and fungal communities on F & V by using 16S rDNA and ITS sequencing. For this review, we searched references mainly from the Web of Science, NCBI, ScienceDirect and CNKI platforms. A total of 64 original studies describing the microbiomes of various F & V were identified Table\u00a0. Illuminet al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., Plant microbiomes are related to species/genotype specificity Fig.\u00a0. Recentlet al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., The microbiomes of F & V are affected by differences in regional environments, farming practices and disease occurrences Fig.\u00a0. 
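Because much of the review turns on using short marker sequences (toxin genes, housekeeping genes, 16S rDNA or ITS regions) as targets for identifying specific pathogens, a toy sketch may help make the screening idea concrete. The motifs and the test sequence below are invented placeholders, not real primers or gene fragments from the cited organisms; the point is only the logic of matching known marker motifs (and their reverse complements) against an assembled genome or amplicon sequence.

```python
# Toy illustration of marker-based screening: look for known marker motifs
# (e.g. fragments of toxin or housekeeping genes) in a DNA sequence.
MARKERS = {
    "marker_A": "ATGGCTCAAG",  # invented motif standing in for a toxin-gene fragment
    "marker_B": "GGTACCTTGA",  # invented motif standing in for a housekeeping-gene fragment
}

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def screen(sequence: str, markers: dict) -> dict:
    """Return, for each marker, whether it occurs on either strand of the sequence."""
    sequence = sequence.upper()
    return {
        name: motif in sequence or reverse_complement(motif) in sequence
        for name, motif in markers.items()
    }

if __name__ == "__main__":
    genome_fragment = "TTCAAGATGGCTCAAGGTTTCCAGGAA"  # invented test sequence
    print(screen(genome_fragment, MARKERS))  # {'marker_A': True, 'marker_B': False}
```

Real PCR- or chip-based assays add primer design, amplification and mismatch tolerance on top of this, but the underlying decision is still presence or absence of a genus- or species-specific target region.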
Many enet al., et al., et al., Aspergillus, Botrytis, Mucor and Penicillium. The spinach phyllosphere was observed to present a decrease in bacterial diversity after 15\u00a0days of storage at 4\u00b0C or 10\u00b0C, which might be related to the inhibition of bacterial activity by the lower temperature (Lopez\u2010Velasco et al., et al., et al., et al., et al., The treatments applied to F & V during postharvest processing and storage affect microbial communities Fig.\u00a0. RefrigeThe development of DNA sequencing technology has provided an effective method for microbial WGS and genetic analysis. In recent decades, DNA sequencing technologies have been successfully developed including approximately ten operational platforms. By performing DNA sequencing, microbial WGS is largely promoted. Genetic studies provide DNA markers for microbial identification at the genetic level. These markers are extensively used in PCR or chip\u2010based detection. On the basis of 16S rDNA and ITS sequencing, metagenomic approaches are now emerging technologies for analysing the entire microbial community in a complex F & V matrix. The microbiomes of F & V show huge differences between plant species/genotypes. In addition, the microbiomes of F & V are related to factors such as regional/environmental factors, farming practices and postharvest treatments. These studies shed light on ways to improve F & V cultivation, disease prevention and quality control.The authors declare no conflict of interest."} {"text": "Tracking the spread of pandemics and the evolution of the underlying pathogens are effective tools for managing deadly outbreaks. Forster et al. use a meMJNs are not an appropriate representation of viral evolution. Although Forster et al. state thAdditionally, MJNs are constructed using distance-based criteria, which is inappropriate for modeling the mutational process in viruses . In facthttps://www.fluxus-engineering.com/) merely links the \u201coutgroup\u201d sequence to the most similar sequence of the already-produced \u201cingroup\u201d network. Therefore, it neither roots the ingroup topology nor polarizes character transformations (The outgroup comparison by Forster et al. is partirmations , 3.The authors\u2019 misinterpretation of MJNs fosters misconceptions, inaccuracies, and misrepresentations of fundamental phylogenetic principles. Thus, unfortunately, Forster et al.\u2019s study misleads"} {"text": "Even if current guidelines suggest an early referral of young breast cancer (BC) patients to fertility preservation counselling, physicians still lack knowledge about the different available strategies. Hormonal stimulation to harvest mature oocytes is considered unsafe by many oncologists and experts in reproductive medicine, particularly in the setting of oestrogen receptor-positive BC. The aim of this mini-review is to provide an overview on the available data about this topic in order to clarify potential misunderstandings and to highlight the new trends in the oncofertility field with their pros and limitations. The European Society of Medical Oncology , the AmeControlled ovarian stimulation (COS) and subsequent oocyte/embryo freezing are the gold standard of fertility preservation (FP) in this setting \u20135. NonetAre these issues really clinically relevant?Several studies have showed no detrimental effects on BC recurrence or survival if CHT was delayed until 12 weeks after surgery, particularly in ER-positive, early stage BC patients . 
Nowadayet al [et al [et al [Up to now, there is no evidence that COS promotes BC growth . A systeet al did not l [et al \u201322 devell [et al . Most ofl [et al . Subsequl [et al confirmel [et al .et al [in vivo tumour response to adapt the subsequent adjuvant treatment [et al [p = 0.47). The relapse-free survival rate was not statistically significantly different between pre- and post-surgery groups (p = 0.44) [et al [p = 0.75), and the median time was 41.5 days versus 35.5 days (p = 0.50) in the 34 patients who underwent COS versus the 48 control patients, respectively. Thus, patients who underwent COS before NACT had a delay of approximately 1 week compared to control patients, which could hardly impact prognosis [More recently, Letourneau et al comparedet al . Authorsreatment . Kim et t [et al reported = 0.44) . Chien e) [et al conducterognosis . Nonetherognosis .et al [A prospective multicentre study conducted by Marklund et al comparedet al \u201327, 29.in vitro maturation\u2014IVM) or using gonadotrophin-releasing hormone analogues (GnRHa) as medical gonadoprotection.Novel strategies have been developed to avoid hormonal stimulation in BC patients: harvesting ovarian tissue with subsequent cryopreservation, harvesting immature oocytes without COS (Over 130 live births have been reported after reimplantation of ovarian tissue , 30\u201341, et al [in vitro growth of human oocytes have been made, the pregnancy outcomes after IVM are still suboptimal, with lower implantation rates as compared with embryos obtained from mature oocytes [Grynberg et al reported oocytes , 46.According to current guidelines, the ovarian suppression with GnRHa during CHT does not represent an option of FP but a strategy to reduce the detrimental impact of cytotoxic drugs on ovarian reserve as data BRCA1/2-mutation carriers [To date, evidence on long-term reproductive outcomes after BC treatment is still scarce. A recent Swedish cohort study indicates that a successful pregnancy after BC is possible both in women who underwent FP and who did not . Noteworcarriers , 56.In conclusion, a timely COS for FP remains the first choice for young BC patients interested in FP, also if they have ER-positive tumours. Modified COS protocols with letrozole combined with gonadotrophins could increase safety and it should be recommended , 57, 58.The authors declare that they have no conflicts of interest.No funding was received for this specific research.All authors listed have contributed sufficiently to the writing and/or critical revision of the paper; they have approved the submitted final version."} {"text": "To determine the functional outcomes, complications and revision rates following total knee arthroplasty (TKA) in patients with pigmented villonodular synovitis (PVNS).We conducted a systematic review of the literature. Five studies with a total of 552 TKAs were included for analysis. The methodological quality of the articles was evaluated using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) scale. Functional outcomes, complications and revision rates were assessed. The mean age was 61\u2009years (range 33\u201394\u2009years) and the mean follow-up period was 61.1\u2009months (range 0.2\u201335\u2009years).n\u2009=\u200932) of patients in our review. 
Symptomatic recurrence of PVNS, component loosening, tibial-component fracture, instability and periprosthetic infection were the main factors leading to the need for revision TKA.All the studies reported improvement in knee function following TKA. Post-operative stiffness was the most frequently reported complication, affecting 32.7% (The findings of this review support the use of TKA to alleviate the functional limitations and pain due to knee degeneration in patients with PVNS. The operating surgeon should be aware of the increased risk of post-operative stiffness, as well as a potentially higher risk of infection. Implant survival should also be considered inferior to the one expected for the general population undergoing TKA. Pigmented villonodular synovitis (PVNS) is a benign proliferative condition which affects the synovial tissue and is a subtype of tenosynovial giant-cell tumor . This raThe etiology of PVNS is still unclear, with some believing that the disease stems from chronic inflammation , 5, wherTreatment for PVNS aims at removing the pathological lesion(s) either through open or arthroscopic synovectomy . A meta-With the increasing number of TKAs performed each year, arthroplasty surgeons are likely to encounter cases of PVNS with established degenerative changes. Understanding the challenges associated with this group of patients is, hence, very important. Our initial intent was to perform a meta-analysis on TKA as a treatment modality for PVNS; however, due to both the disease\u2019s rarity and the scarcity of available literature, we have, therefore, decided to perform a systematic review of the literature instead to determine the functional outcomes and complications of TKA in patients with PVNS of the knee.The literature search was conducted on Medline and EMBASE on 16 April 2020 by research librarians at two independent hospitals. Keywords used for the searches were \u201cPigmented Villonodular Synovitis\u201d OR \u201cPigmented Villonodular Tenosynovitis\u201d OR \u201cGiant Cell Tumour of Tendon Sheath\u201d AND \u201cArthroplasty, Replacement, Knee.\u201d All relevant studies between 1946 and 2020 were identified in accordance with the Preferred Reporting Items for Systematic review and Meta-Analysis (PRISMA) guidelines. All three authors then identified relevant studies by assessing the bibliographies of the included papers. The selected articles adhered to the PICO criteria for systematic reviews Population, Intervention, Comparison and Outcomes (PICO) criteria for systematic reviews.Our inclusion criteria for the study were as follows: (1) the studies selected should be about TKA as a treatment modality in PVNS, (2) the articles should be written in English, (3) the articles would report the functional outcomes post TKA and (4) the articles would report on complications post TKA.Our exclusion criteria were: (1) articles not reporting on the functional outcomes, (2) articles which included arthroscopic treatment of PVNS instead of TKA and (3) case reports which did not include the functional outcomes post TKA.After removal of duplicate articles, a full-text review of the selected studies was undertaken by two authors (YCT and KT).One reviewer (YCT) extracted data from the selected papers using a standardized data collection form. 
Information relating to the number of patients, their demographics, follow-up period, complications, revision rates, implant survival rates, recurrence and pre-operative and post-operative clinical and functional outcomes were compiled into a spreadsheet which was later checked by the second reviewer (JYT). There were no inconsistencies in the results.The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) scale was used to evaluate the articles . It was n\u2009=\u200910\u201348 patients, 11\u201348 TKAs) describing the outcome of TKA in patients with PVNS. Our largest study was by Casp et al. (n\u2009=\u2009453 TKAs) [The four studies included were small to medium retrospective case series (53 TKAs) and we d53 TKAs) , Casp et53 TKAs) , Lei et 53 TKAs) and Su e53 TKAs) scored a53 TKAs) scored 9The results were summarized using descriptive statistics for continuous variables, frequencies and percentages for categorical variables. Microsoft Excel, 2016 version was used for data analysis.The initial literature search identified 111 articles, of which 36 were duplicated articles. After screening the remaining 75 articles, a total of six studies satisfied the eligibility criteria. Two articles, Houdek et al. and HamlThe studies included 552 TKAs; however, only 98 patients (99 TKAs) had reported functional outcomes. Among the 552 TKAs, 83 patients (84 TKAs) had D-PVNS and 15 had L-PVNS, while the remaining 453 patients from the study by Casp et al. were not characterized by the disease subtype . The meaThree studies reported a pre-operative and post-operative comparison of the KSS and demonstrated a mean improvement of 26.79 and 39.11 points post-operatively, in functional and clinical KSS respectively. Casp et al. and Versp\u2009=\u20090.72). The study by Casp et al. [Both studies by Lei et al. and Su ep et al. and Versp et al. did not n\u2009=\u200931 patients) compared to 4.69% (n\u2009=\u200985 patients) in the control group with osteoarthritis (odds ratio (OR) 1.48, p\u00a0value =\u20090.023).Stiffness or decreased knee range was defined by authors Houdek et al. as a fleIn total, there were a total of 16 infections (2.9%) in our patient population. Authors Casp et al. utilized\u201cSymptoms of disease recurrence\u201d was defined by authors Houdek et al. as recurp value\u2009=\u20090.92) in increased risk of revision at 2\u2009years compared to the osteoarthritis (OA) group in the 2\u2009years post-operative period. Authors Verspoor et al. [Revision was described as the removal of implants with or without replacement of the materials. Symptomatic recurrence of PVNS, component loosening, tibial-component fracture, instability and periprosthetic infection were the main factors leading to revision TKA . There wr et al. reportedr et al. reportedr et al. , 15.TKA has been shown to be a generally successful procedure in patients with PVNS and established degenerative changes and this is reflected in the improvement of post-operative functional and clinical KSS in the four studies included in this systematic review. The main complication of PVNS in our patient cohort appears to be stiffness and infection.A systematic review and meta-analysis of case series and national registry reports with more than 15\u2009years of follow-up by Evans et al. 
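Several comparisons in this review are expressed as odds ratios, for example the higher rate of post-operative stiffness in PVNS knees than in osteoarthritis controls (OR 1.48, p = 0.023). The sketch below shows the standard way an unadjusted odds ratio and its Wald 95% confidence interval are obtained from a 2x2 table of counts; the counts used are hypothetical placeholders, since the full denominators are not reproduced in this text and the published estimate came from the source study's own (possibly adjusted) analysis, so the output will not match it.

```python
import math

def odds_ratio_with_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """
    Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
        exposed group:   a events, b non-events
        unexposed group: c events, d non-events
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts: stiffness events among PVNS knees vs. osteoarthritis controls
or_, lower, upper = odds_ratio_with_ci(a=14, b=86, c=85, d=1730)
print(f"OR = {or_:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```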
estimateAn interesting observation from the studies was that 33 patients (34 TKAs) were diagnosed with PVNS post-operatively, through histopathological samples taken during the arthroplasty procedure due to suspicious synovial changes. This was seen in 23 out of 28 patients (82.1%) in the Su et al. study and 10 oo, range 50\u2013115o) which co-related with their poor post-operative ROM; however, there appears to be no clear link between multiple interventions and increased stiffness post TKA in our patient population. Casp et al. [p value\u2009=\u20090.011).Stiffness was among the main complications post TKA in patient with PVNS. According to the UK National Joint Registry 16th Annual Report, the peak incidence for stiffness occurs between 1 and 3\u2009years with 0.56 revisions performed per 1000 primary knee replacements (95% CI 0.53\u20130.60) with the trend for the development of stiffness decreasing after the 3-year mark . There wp et al. evaluaten\u2009=\u200914) in the former versus 1.55% in the latter . Houdek et al. [There was a total of 16 infected joints in our patient population. Casp et al. also comk et al. reportedk et al. administA total of seven patients were noted to have a recurrence of PVNS at a mean of 6\u2009years (range 2\u201312\u2009years) following the TKA. Both Houdek et al. and VersPost-operative recurrence of PVNS can be identified with MRI, using metal artefact reduction sequence that optimizes the visualization of periprosthetic soft tissues. MRI can play an important role during post-operative surveillance as it is highly sensitive and non-invasive. Relevant findings are a mass or diffuse synovitis of low signal intensity, both of which warrant close follow-up .This systematic review has limitations. First, we have reported the pertinent complications, but recognize that there is a time-dependent bias, which, due to the relatively short follow-up, may lead to underrepresentation of their true incidence. Second, the available studies were characterized by having a low level of evidence as well as a lack of a complete uniformity in reporting outcomes. Additionally, the Casp et al. study diThe findings of this review support the use of TKA to alleviate the functional limitations and pain due to knee degeneration in patients with PVNS. The operating surgeon should be aware of the increased risk of post-operative stiffness, as well as a potentially higher risk of infection. Implant survival should also be considered inferior to the one expected for the general population undergoing TKA."} {"text": "A Guest Editorial in Archives of Toxicology by Schreiver and Luch has poinFrom a consumer\u2019s aspect, there is a notable increase in the incidence of \u201citchy tattoos\u201d Kluger . From a Considerable progress has been made in tattoo pigment analysis. For identification of 36 specific organic pigments in tattoo inks, Schreiver et al. describeA pivotal point of the toxicological discussion is the potential carcinogenicity of tattoo inks. At present, skin cancers on tattoos are considered so far as coincidental, except for keratoacanthomas on red tattoos Bauer et al. report o(ii)TatS. This new and promising model emulates healed tattooed human skin. It underlines the advantages of 3D over traditional 2D cell culture systems. The methodological approach might be important for further research on the toxicology of tattooing, including used pigments and their destruction for tattoo removal.Hering et al. 
establisThis issue of Archives of Toxicology presents two contributions to this topic.In the February issue of this journal Giulbudagian et al. reviewedNevertheless, the assessment of risks remains a challenge, owing to knowledge gaps on biokinetics of highly complex inks and their degradation products. Giulbudagian et al. point to"} {"text": "Matrix metalloproteinases (MMPs) are members of an enzyme family and, under normal physiological conditions, are critical for maintaining tissue allostasis. MMPs can catalyze the normal turnover of the extracellular matrix (ECM) and its activity is also regulated by a group of endogenous proteins called tissue inhibitors of metalloproteinases (TIMPs) or other proteins, such as Neutrophil Gelatinase-Associated Lipocalin (NGAL). An imbalance in the expression or activity of the aforementioned proteins can also have important consequences in several diseases, such as cancer, cardiovascular disease, peripheral vascular disease, inflammatory disease, and others. In recent years, MMPs have been found to have an important role in the field of precision medicine as they may serve as biomarkers that may predict an individual\u2019s disease predisposition, state, or progression. MMPs are also thought to be a sensible target for molecular therapy ,2,3,4.This Special Issue includes ten papers: seven original articles and three review articles dealing with a broad range of diseases related to MMPs.The article by Santiago Ruiz et al showed tThe study by Cione E et al. evaluateThe paper by Rautava J et al. deals wiThe article by Rodr\u00edguez-S\u00e1nchez E. et al. exploresThe study by Heinzmann D et al. exploredThe paper by Ferrigno A et al. studied The article by O\u2019Sullivan S. et al. studied The study by Provenzano M. et al. aimed toThe paper by Laronha H. extensivThe article by Liu Z. et al. reviewedThis Special Issue describes important findings related to MMPs function, and dysregulation in several areas, such as vascular, kidney, and respiratory systems and also highlights the most recent progress on the knowledge and the clinical and pharmacological applications related to the most relevant areas of healthcare."} {"text": "Liver fibrosis and cirrhosis occur in the course of chronic HBV infection, with the increasing risk to develop hepatocellular carcinoma. In about 95% of adults\u2019 acute hepatitis B virus infection is self-limited, whereas in 90% of young children HBV infection leads to chronic progression. It is likely that a fully developed immune system contributes to HBV clearance has primed the determination of stealth or invisible virus. During the early HBV infection, transcriptome analysis failed to show a drastic change of hepatic gene expression, especially when compared to hepatitis C virus infection, which is associated with a strong IFN response (Wieland in vivo have revolutionized the field (Bissig et al.et al.et al.et al.et al.et al.Liver-chimeric mice, that have been developed to analyse HBV infection"} {"text": "We read with great interest the recent publication by Bankov et al. where they described the characteristics and functional activity of fibroblasts in Nodular Sclerosis (NS) Classical Hodgkin Lymphoma (cHL) . The autWe appreciate that the study of Bankov et al. confirmeInt. J. Cancer \u201d, but in our manuscript published in . Cancer in 2009 . Cancer . Moreove. Cancer we repor\u201d, but in. 
Cancer .In conclusion, we consider that there was a misinterpretation of our data since we have never claimed that IL-7R is up regulated in cHL fibroblasts or demonstrated that IL-7 secreted by HRS cells strongly induced the proliferation of cHL fibroblasts ."} {"text": "This unique diversity of photoinduced applications has spurred major research efforts on the rational design and development of photocatalytic materials with tailored structural, morphological, and optoelectronic properties in order to promote solar light harvesting and alleviate photogenerated electron-hole recombination and the concomitant low quantum efficiency. This book presents a collection of original research articles on advanced photocatalytic materials synthesized by novel fabrication approaches and/or appropriate modifications that improve their performance for target photocatalytic applications such as water and air (NO Photocaatalysts .Research efforts have been accordingly focused on the design and fabrication of advanced photocatalytic materials relying on competent modification approaches such as coupling with plasmonic nanoparticles, surface engineering, and heterostructuring with other semiconducting and/or graphene-based nanomaterials, as well as tailoring the materials\u2019 structure and morphology in order to boost light harvesting and photon capture, charge separation, and mass transfer that play a pivotal role in photocatalytic environmental remediation and solar to chemical energy conversion applications ,5.2 photocatalysts, prepared by a one-pot hydrothermal method. The complete removal of MC-LR from aqueous solutions was achieved under visible light irradiation, related to the unique combination of visible light photogenerated electrons from anion-induced impurity states and interfacial charge transfer between the brookite and anatase phases.This Special Issue consists of 10 original full-length articles on advanced photocatalytic materials fabricated by innovative synthetic routes and judicious compositional modifications, with diverse applications ranging from the degradation of hazardous water and air pollutants to hydrogen evolution and photoelectrocatalytic hydrogen peroxide production. T. M. Khedr at al. reported2 was utilized by M. Janus et al. [x decomposition without compromising, and even improving, cement\u2019s mechanical properties. Laser pyrolysis was applied by K. Wang et al. [2 photonic crystals by graphene oxide (GO) nanocolloids and subsequent thermal reduction was reported by Diamantopoulou et al. [N-modified TiOs et al. as an adg et al. to synthu et al. as a pro3 and Ag2S quantum dots with tunable photoelectrochemical response by the light incidence angle. Control over the shape and facet growth of TiO2 nanocrystals was demonstrated by Y. Du et al. [2 nanowires, grown on TiO2 nanotube arrays by electrochemical anodization, in combination with Au plasmonic nanoparticles were successfully applied by T.C.M.V. Do et al. [Fluorine-doped tin oxide (FTO) inverse opals crystal were also exploited by X. Ke et al. as macrou et al. by tunino et al. for the 4 with annealing temperature, used as an alternative metal oxide photocatalyst beyond TiO2, leading to enhanced decomposition of methylene blue as a model dye pollutant under LED visible light.Plasmonic Ag nanomaterials were also incorporated in silver-copper oxide heterostructures by H. Suarez et al. to promo2/FTO photoanodes, enabling broad visible light harvesting. 
These photoelectrodes combined with a Pt-free counter electrode, made of carbon cloth with deposited nanoparticulate carbon, were used for the assembly of a photoelectrochemical cell operating as a Photo Fuel Cell, i.e., without any external bias. These devices were demonstrated to photoelectrocatalytically produce substantial quantities of hydrogen peroxide with the Faradaic efficiency exceeding 100% in the presence of NaHCO3 carbonate electrolyte.The sensitization of mesoporous Titania films by nanoparticulate CdS and CdSe, in combination with ZnS passivation was used by T. S. Andrade et al. for the All the published articles were thoroughly refereed through the standard single-blind peer-review process of Materials journal. As Guest Editor, I would like to acknowledge all of the authors for their excellent contributions and the reviewers for their valuable and prompt comments that greatly improved the quality of the papers. Finally, I would like to thank the staff members of Materials, in particular Ms. Floria Liu, Section Managing Editor, for her devoted efforts and kind assistance."} {"text": "Dear Sir,On an eerily quiet, urban Dublin street lies an apartment block, it\u2019s balconies chequered with identical Irish flags carrying a defiant yet simple, national slogan; \u2018You\u2019ll Never Beat the Irish\u2019. It is the brilliance not only of the green\u2013white\u2013orange against the brutalist background that catches the eye but also the boldness, the sheer determination of the silent protest. It is a taunt to the insidious international public enemy that is COVID-19. It first barrelled through Ireland\u2019s door in late February 2020. On March 12, 2020, our Taoiseach Leo Varadkar addressed the nation from Washington, DC, issuing guidance on a nationwide lockdown (Radio Teilif\u00eds \u00c9ireann, et al., et al. et al. et al. et al. During times of crisis, we as humans have a natural tendency to seek solace in collectivism and reassurance in joint protection from a threat or crisis (Baumeister & Leary, et al. et al. et al. While COVID-19 alone may not be the sole impetus for suicidality, the associated social disconnection, physical isolation and routine disruption may be a pernicious cocktail of risk factors (Ammerman In the words of Thomas Fuller, \u2018if it were not for hope, the heart would break\u2019 (Grayling,"} {"text": "In this issue of MJR there are interesting case reports, research protocols and reviews.1 reviewed the role of microRNAs (miRNAs) in the pathogenesis of osteoarthritis (OA) and their potential role as therapeutic targets in this disease. miRNAs are small, single-stranded non-coding RNAs that regulate gene expression at the post-transcriptional level.Panagopoulos et al.2 use a case of a woman with Adamantiades-Behcet disease (ABD), who subsequently developed monoclonal gammopathy of unknown significance (MGUS) to speculate on the mechanisms of development of MGUS and ABD. ABD shares features of auto-immune disease and autoinflammatory disease and the authors elaborate that an early event in B cells, such as IgH translocation, may make them sensitive to growth factors, such as interleukin(IL)-6 which is raised in ABD. Also, the CD56 marker, increased in ABD T cells, is also increased in MGUS plasma cells.Chikanza et al.7 will utilize stress perfusion cardiac magnetic resonance in asymptomatic patients with APS to detect myocardial ischemia.Patients with antiphopsholipid syndrome (APS) may develop angina and myocardial infarction. 
In a research protocol, Tektonidou et al.8 in a 5-year prospective protocol will study disease course, comorbidities, treatment efficacy and outcome in giant cell arteritis in Greece.Tsalapaki et al.9 in a research protocol will examine longitudinal relationships between sedentary behavior while in a sitting, reclining or lying position, or light intensity physical activity(1.6-<3.0 METS) with health outcomes in rheumatoid arthritis.O\u2019Brien et al.6 pointed out, this finding as well SSc may well be a consequence of silica exposure.Calcified chest lymph nodes in a patient with systemic sclerosis (SSc) is a rare finding in SSc. Yet, as Angelopoulou et al.5 reported on two patients with Sjogren\u2019s syndrome (SjS) who developed polymyositis and inclusion body myositis and identified another 24 cases of SjS with inflammatory myopathies in the literature.Migkos et al.3 reported on a patient with ankylosing spondylitis who developed systemic sclerosis (SSc) and scleroderma renal crisis. The co-existence of ankylosing spondylitis and SSc is rare.Venetsanopoulou et al.4 described a patient with RA treated with steroids, methotrexate and adalimumab developed Orf disease, also known as ecthyma contagious; a rare self-limited disease caused by a DNA virus of the parapoxvirus group and transmitted to humans from goats and sheep.Patients on immunosuppressants are susceptible to various infections. In this issue, Kostopoulos et al"} {"text": "Ambio articles highlighted in this 50th Anniversary Issue have influenced the cultural narrative on environmental change, highlighting concepts such as \u201cresilience,\u201d \u201ccoupled human and natural systems\u201d, and the \u201cAnthropocene.\u201d In this peer response, I argue that global change research is still paying insufficient attention to how to deliberately transform systems and cultures to avoid the risks that science itself has warned us about. In particular, global change research has failed to adequately integrate the subjective realm of meaning making into both understanding and action. Although this has been an implicit subtext in global change research, it is time to fully integrate research from the social sciences and environmental humanities.Research on global environmental change has transformed the way that we think about human-environment relationships and Earth system processes. The four The Economist portrayed the Earth as a technical structure covered by riveted steel plates in dull blue colors. The accompanying story informed readers that \u201cHumans have changed the way the world works. Now they have to change the way they think about it, too\u201d (The Economist \u201cWelcome to the Anthropocene.\u201d The cover photo of the May 28, 2011 edition of The Economist cover story saw an impressive amount of research on global environmental change that has transformed the way that we think about human-environment relationships and Earth system processes. The concept of the Anthropocene has contributed to a new way of describing the significant role of humans in shaping the Earth\u2019s geology and ecology. Yet climate change, biodiversity loss, poverty, inequality, and other global problems are even more serious concerns today, and the timeframe for taking actions to meet international commitments is shrinking, increasing the risk of reaching \u201ctipping points\u201d and experiencing catastrophic losses (IPCC how conscious transformations to sustainability can be realized.Yes and no. 
How we think about the way that the world works has changed dramatically over the past decades. The years leading up to ses IPCC . BusinesAmbio, it is clear that concepts such as \u201cresilience,\u201d \u201ccoupled human and natural systems,\u201d and \u201cthe Anthropocene\u201d represent important advances that have influenced both scientific and cultural narratives on environmental change. However, I would also argue that global change research is still paying insufficient attention to how to deliberately transform systems and cultures to avoid the risk of what Steffen et al. (In this peer reflection, I start by acknowledging the contributions of global change research to a dynamic, interconnected view of the world. With specific reference to four key articles published in n et al. , p. 14 dn et al. , p. 16 rThe importance of meaning making has long been an implicit subtext within global change research. In fact, the realm of human thought and ideas, also referred to as the \u201cnoosphere,\u201d has historically had a close relationship with understandings of ecology and geology (see Samson and Pitt Ambio articles reviewed here include subtle yet significant references to the noosphere. For example, in \u201cResilience and Sustainable Development: Building Adaptive Capacity in a World of Transformations,\u201d Folke et al. (The four e et al. , p. 437 e et al. , p. 438.In \u201cCoupled Human and Natural Systems,\u201d Liu et al. emphasizAmbio, the article by Steffen et al. (terra incognita. The authors drew attention to some of the worst-case scenarios, noting that prior to the Anthropocene, humans \u201cdid not have the numbers, social and economic organization, or technologies needed to equal or dominate the great forces of Nature in magnitude or rate\u201d (Steffen et al. th century, humans transformed the environment at a global scale, as evidenced by dramatic rises in atmospheric concentrations of greenhouse gas emissions. In describing The Great Acceleration that started in 1945, Steffen et al. (In the same issue of n et al. on \u201cThe n et al. noted thn et al. , p. 619.n et al. .Finally, the article by Steffen et al. highlighAmbio articles identified the need for transformative change, and thus can be considered foundations for today\u2019s rapidly-growing literature on transformations to sustainability. For example, Folke et al. (These four e et al. , p. 437 e et al. concludee et al. , p. 644 e et al. , p. 753 e et al. ranged fe et al. .The articles have certainly helped to steer global change research in a more integrated and action-oriented direction. For example, Folke et al.\u2019s focus onhow deliberate transformations to sustainability come about, particularly how transformations in perceptions, meaning making, and relationships with nature actually can and do shift, and how such changes play out in the political sphere. Importantly, in recent years there has been a dramatic increase in the number of research programs, projects, and articles on transformations to sustainability. Most of these are located within the social sciences and environmental humanities, and draw attention to the importance of integrating more complex understandings of social systems and more nuanced interpretations of human relationships with the natural world. In Urgency in the Anthropocene, Lynch and Veland (What these articles did not address was d Veland , p. 1 cod Veland . 
Indeed,d Veland .There are also many critical and emancipatory approaches in the social science that acknowledge the ways that social structures and institutions can limit or expand the potential for humans and non-human species to flourish in the Anthropocene Wright . For exaThe Economist, one cannot help but notice that the \u201cworld of transformations\u201d described by Folke et al. (Returning to the \u201cWelcome to the Anthropocene\u201d article in e et al. translate et al. , or more"} {"text": "Anti-cancer treatments have never been so numerous and so efficient. As a result, the number of cancer survivors has been steadily increasing. At the same time, we are facing the challenge of a decade with increasing cancer incidence, which is expected to increase by 80% in middle and low-income countries and 40% in high-income countries from 2008 to 2030 . In 2008Immunotherapy, and especially different ways to boost patient immune systems to detect and fight cancer, has become an efficient way to treat cancer, showing great promise and potential against often otherwise non-curable cancers. In the review by Lichtenstern et al. , the autIt is known that solid tumors can often be difficult to treat with targeted therapeutic intervention strategies such as antibody\u2013drug conjugates and immunotherapy. This is especially true for tumors with a low mutation burden, which are often not antigenic enough to be targeted by immunotherapy. In their review article, Khazamipour et al. point ouCancer cell metabolism differs from that of normal cells. Tumors commonly activate metabolic pathways that upregulate nutrient synthesis and intake. Magaway et al. discuss Extracellular signal-regulated kinases (Erks) encompass another kinase family that is often activated in cancers. Erks possess unique features that make them differ from other eukaryotic protein kinases. Unlike others, Erks do not autoactivate and they manifest no basal activity. They are activated as unique targets of the receptor tyrosine kinases (RTKs)\u2013Ras\u2013Raf\u2013MEK signaling cascade, which controls numerous physiological processes and is mutated in most cancers. Smorodinsky-Atias et al. discuss Multidrug resistance is a serious problem in cancer and targeting multidrug resistance by re-sensitizing resistant cancer cells is one of the big challenges in cancer biology. Among the key multidrug resistance mediating proteins are the ATP-binding cassette (ABC) transporters and the breast cancer resistance protein (BCRP). ABC transporters are plasma membrane-bound proteins that transport nutrients into cells and unwanted toxic metabolites out of cells. Cancer cells can utilize this function for transporting cancer drugs out of the cells. Several attempts to target ABC transporters to gain control in cancer have been reported, but thus far none of the inhibitors developed have been clinically approved. Ambjorner et al. describe+ATPase (V-ATPase) that is located at lysosomal membranes, but can also be found at the plasma membranes of cancer cells that exhibit increased metabolic acid production, which makes them dependent on increased net acid extrusion. An acidic microenvironment favors cancer cell proliferation and survival and promotes their invasion. V-ATPase consists of at least 13 subunits, of which Flinck et al. [Another membrane bound protein family with important function in cancer is the vacuolar Hk et al. has idenIn their review, Peulen et al. 
discuss The vast majority of cancer deaths are caused by the primary tumor metastasizing into other organs. Invasion is a prerequisite for metastasis formation, and for this reason, the inhibition of invasion could efficiently prevent metastasis formation. For this, targeting the molecules regulating invasion could be useful. One of these is an oncogenic transcription factor, myeloid zinc finger 1 (MZF1), as reviewed by Brix et al. . P21 actNuclear protein localization protein 4 (NPL4) functions as an essential chaperone that regulates microtubule structures when a cell re-enters interphase. Majera et al. provide high/CD24low) as well as aldehyde dehydrogenase (ALDH), among others, suggesting that additional factors can be targeted by sulconazole. NF-\u03baB/IL-8 signaling is important for CSC formation and may be an important therapeutic target for treatment targeting breast cancer stem cells.Cancer stem cells (CSCs) are often responsible for therapeutic resistance. The study by Choi et al. presentsThe formation of three-dimensional (3D) multicellular spheroids (MCS) in microgravity, mimicking tissue culture conditions, was used by Melnik et al. as a metA study by Gruber et al. on the o"} {"text": "Cryptosporidium parvum infection in calves. Included studies used multivariate analysis within cohort, cross-sectional or case-control designs, of risk factors among young calves, assessing C. parvum specifically. We tabulated data on characteristics and study quality and present narrative synthesis. Fourteen eligible studies were found; three of which were higher quality. The most consistent evidence suggested that risk of C. parvum infection increased when calves had more contact with other calves, were in larger herds or in organic production. Hard flooring reduced risk of infection and calves tended to have more cryptosporidiosis during warm and wet weather. While many other factors were not found to be associated with C. parvum infection, analyses were usually badly underpowered, due to clustering of management factors. Trials are needed to assess effects of manipulating calf contact, herd size, organic methods, hard flooring and temperature. Other factors need to be assessed in larger observational studies with improved disaggregation of potential risk factors.Cryptosporidiosis is common in young calves, causing diarrhoea, delayed growth, poor condition and excess mortality. No vaccine or cure exists, although symptomatic onset may be delayed with some chemoprophylactics. Other response and management strategies have focused on nutritional status, cleanliness and biosecurity. We undertook a systematic review of observational studies to identify risk or protective factors that could prevent The online version of this article (10.1007/s00436-020-06890-2) contains supplementary material, which is available to authorized users. Cryptosporidium parvum is a common protozoan parasite in cattle. It causes chronic diarrhoea (scour) leading to stunted growth, loss of yield and potentially death are important contributors to total human deaths from diarrhoeal illness under 4\u00a0months old. 
The vast majority of calves suffering from cryptosporidiosis are under 1\u00a0month old (Erbe Bos Taurus mixed with others) were considered individually, in case they provided sufficient cattle-specific information to be informative.Eligible studies had to address infection in bovine calves molecular methods , (B) immunofluorescence microscopy or (C) contrast microscopy that detected oocysts that was concurrent with a large percentage of symptomatic animals (\u2265 90% with diarrhoea).The outcome was Any concurrent observational design was eligible. Studies were excluded if not available in a language known to the authors or if the article could not be easily translated into English using Google Translate. Articles without abstracts or available full text were excluded.We searched these databases from inception to May/June 2019: Scopus, CAB International abstracts, MEDLINE (PubMed) and Embase. A limited grey literature search was undertaken of three government databases via websites in summer 2019: The UK Dept for Food and Rural Affairs, The US Dept. of Agriculture library (at Cornell University) and The European Commission, Agricultural and Rural Development section. Conference proceedings were not searched. Literature databases were chosen following recommendations about the most comprehensive bibliographic sources for veterinary science research results. Grey literature search terms were Cryptosporidum, C. parvum, cryptosporidiosis).At least one of .After de-duplication, titles and abstracts were independently screened by two investigators (JB and CCH) against the inclusion criteria. Items were chosen for full text review or excluded. Selection disagreements were resolved by discussion or on the verdict of a third reviewer (PRH). Full texts were obtained where possible. Decisions about final inclusion or exclusion were made after full text review by one or more authors. Full-text review and data extraction were undertaken by LH, SM or JB and checked by each other.p\u2009\u2264\u20090.05 level of confidence were extracted and included in the results. After all such significant factors were identified and extracted, we checked back within the included studies to find instances where each factor had been assessed but found not to be a significant risk factor (to assess the consistency of importance of each factor and to reduce the risk of being influenced by random findings).Any risk or protective factors reported to be statistically significant at a Quality assessment was undertaken by LH or SM and checked by JB. Modified questions from the CASP checklist for cohort studies . Fourteen studies were eligible for inclusion and were data extracted and quality assessed. Characteristics of the included studies are found in Table Our search found 2522 possible relevant studies, see Fig. C. parvum was 6\u201378% of individual calves within studies, and studies assessed 1\u2013119 herds and 63 to 2249 individual animals. As management interventions tend to differ by herd, rather than by animal, the studies were all limited in their ability to identify important risk factors.The calves were overwhelmingly part of dairy production . Studies were carried out in Europe (6 studies), North America (6 studies), New Zealand and Egypt (one study each). Prevalence of We found that three studies were the strongest methodologically: Trotz-Williams et al. 
carried n\u2009=\u2009the number of studies that considered this potential risk factor): sex of animal (n\u2009=\u20091), cleanliness of actual animal (n\u2009=\u20091), breeding system (n\u2009=\u20091), dairy or beef farming (n\u2009=\u20091) and access to stream as water supply or not when dams were in the maternity pen >3\u00a0weeks prior to birth, compared with when dams were only in the maternity pen for \u22642\u00a0days before birth. There is some higher quality but limited evidence that longer segregation of dams from the rest of the herd prior to birth is protective.Two studies assessed use of maternity pens, isolating dams from the herd in a period prior to giving birth. A moderate quality study , this increased risk of disease in Matoock et al. OR 5.2, OR 0.96. No highC. parvum infection OR to 0.11 (95%CI 0.02\u20130.52) in Silverl\u00e5s et al. . We further suspect some studies reported that calves had \u2018colostrum\u2019 when in reality calves had artificial colostrum. Some studies report universal or near-universal exposure status , so colostrum feeding could not be assessed as a risk factor. Colostrum that has been sterilized or stored loses antibody effectiveness , but the only higher quality study Silverl\u00e5s et al. did not find that intake of unsterilised colostrum was protective.Silverl\u00e5s et al. , Weber eC. parvum infection, but colostrum intake as a risk or protective factor has not been tested effectively.There is weak evidence that having colostrum could be protective against There is lack of consistent evidence that any specific feeding delivery system for colostrum (or milk) is riskier or protective than others.Two studies found that use of milk replacer led to higher risk of oocyst shedding. D\u00edaz et al. reportedC. parvum with larger herd size. The odds ratios for larger herd sizes in the adjusted models ranged from 1.55 to 292 . All three of the highest quality studies considered herd size as a risk factor. Trotz-Williams et al. on farms was considered by three studies as a risk factor for fresh C. parvum infection. Two studies found no support for increased risk, while Starkey et al. ; both studies found the risk to be higher in organic systems: Maddox-Hyttel et al. found ORThis section deals with aspects of management not described elsewhere in this summary.C. parvum . A higher quality study study . Weber et al. . There is some consistent evidence, including from a higher quality study , while Maddox-Hyttel et al. was addressed by two studies .With regard to bedding, most studies considered hay under calf quarters although Silverl\u00e5s et al. assessedAmong the studies included in this review, Szonyi et al. found thC. parvum shedding. The models in Brook et al. found that disease risk was much lower when bedding was deeper , compared with shallower bedding (0\u20135\u00a0cm). Brook et al. tested other depths. Six to 10\u00a0cm depth was also protective compared with 0\u20135\u00a0cm depth , while >15\u00a0cm depth was not protective compared with 0\u20135\u00a0cm depth . So the relationship between depth and disease risk was not linear and not consistent and only tested by two lower quality studies. Evidence about optimal bedding depth is limited and inconclusive.Brook et al. and MaddSeparate from flooring management decisions, seven studies considered at least one aspect of how calf housing areas were cleaned.D\u00edaz et al. found thThe process of bedding removal involves walking and using equipment between animal groups and pens. 
In this process, personnel and equipment become fomites for spreading infection. A previous study found a similar effect . Trotz-Williams et al. (C. parvum infection more likely.Two studies considered whether co-infection with other pathogens (known to cause bovine diarrhoea) could be linked to y et al. concludes et al. is the os et al. is a higSome of the risk factor studies noted whether any animals were exposed to treatment that was meant to reduce illness, as reported below.Two studies mentioned that some calves were given halofuginone lactate (HfL). D\u00edaz et al. found thE. coli vaccine as a risk factor. D\u00edaz et al. , while the higher quality study, Trotz-Williams et al. . Trotz-Williams et al. (E. coli). First Defence was also found to increase incidence of cryptosporidiosis . The increased risk may be correlative; calves receiving the vaccine may have tended to be in herds that have more history of cryptosporidiosis. Evidence in favour of E. coli or similar vaccines was mixed and therefore inconclusive.Three studies assessed dams receiving an z et al. found E.s et al. , found ts et al. also assTrotz-Williams et al. found thTwo higher quality studies . Trotz-Williams et al. . One higher quality study . Infection rates peaked in October in Urie et al. . Deep straw bedding is thought to increase cleanliness of the animals and keep them away from faeces. Additionally, conditions should be kept as dry as possible. Disinfection (buckets or pans) should be available to staff at entrances to calf sheds. Livestock management strategies related to welfare encompass keeping animals warm and hydrated with electrolytes if necessary. Nutrition measures address whether colostrum or colostrum substitutes better bolster immune systems and overall condition was rarely reported. Even more difficult, it is likely that unobserved and/or unreported herd-specific factors affected the risks of a calf being ill or shedding oocysts. This missing information is a very important reason that future research needs to cluster observations by herd .A systematic review methodology for summarizing evidence is inherently conservative; this study design emphasizes only using demonstrable benefits or harms to inform policy and practice. This conservatism is meant to help prevent investment in futile measures but it cannot identify useful practices that have not been tested from types of evidence outside of the inclusion criteria. The strength of this systematic review is in highlighting how the body of evidence in risk factor studies on real animals needs to improve to make firm conclusions for better practices in herd management.The evidence base is generally insufficient to support any specific practice for controlling cryptosporidiosis in bovine calves. This is problematic because livestock managers cannot be sure which activities they should be doing to prevent this disease. Evidence-based practices are as important in veterinary science as in other biomedical sciences. Better quality and very specific evidence is needed about which modifiable risk factors should be prioritized in preventing cryptosporidiosis in calves.C. parvum infection increased when calves had more contact with other calves, were in larger herds or in organic production. Hard flooring reduced risk of infection, while calves tended to have more cryptosporidiosis during warm and wet weather. Co-infection with other pathogens was linked to being more likely to have a C. 
parvum-positive test in both studies that addressed this as a risk factor. All such factors should be formally tested in high-quality randomized controlled trials or case\u2013control studies.No overwhelming evidence on risk or protective factors was found. The most consistent evidence was that risk of C. parvum infection. Funding for such large studies that carefully assess and report the full range of potential risk factors are needed to enable the science to move forward and properly inform animal husbandry. Such studies need to use validated tools for assessment of risk factors, using pre-specified definitions, and high-quality methods of C. parvum detection in young calves.Many other risk factors were analysed but did not have consistent or conclusive effects. Being in individual or shared pens, being indoors or outdoors, whether the herd had history of cryptosporidiosis, breed, colostrum, time spent with dam after birth, type of flooring or bedding, whether calves were tied or free and use of nutritional supplements were not shown to consistently protect or increase risk of disease. However, most of these findings arose from relatively few studies: i.e. just two studies assessed each of organic production, nutritional supplements or being indoors/outdoors. Large high-quality studies across a large number of herds are needed that aim specifically to assess associations between rather than this range of management practices and ESM 1(DOCX 14\u00a0kb)"} {"text": "Rheumatoid arthritis (RA) is a chronic inflammatory disease that leads to joint destruction. Various therapeutic agents have been showed to halt disease progression in clinical studies. In this special issue, we cover subjects from the periodontal condition of RA patients ,2 to theEriksson et al. describeK\u00f6hler et al. reviewedTaylor et al. wrote thHirter et al. reviewedIn two post-hoc analyses of the RA BEAM Study Fautrel et al. and TaylIn summary, in this special issue the disease of RA and its therapy is described from different angles to provide a broad and profound insight into the disease."} {"text": "The aim of this study is to compare the use of flutter valve drainage bag system as an alternative to conventional underwater seal drainage bottle in the management of non-massive malignant/paramalignant pleural effusion.Forty-one patients with non-massive malignant and paramalignant pleural effusions were randomized into two groups. Group A (21patients) had their chest tubes connected to an underwater seal drainage bottle, while group B (20 patients) had their chest tubes connected to a flutter bag drainage device. Data obtained was analyzed with SPSS statistical package (version 16.0).Breast cancer was the malignancy present at diagnosis in 24(58%) patients. Complication rates were similar, 9.5% in the underwater seal group and 10 % in the flutter bag drainage group. The mean duration to full mobilization was 35.0\u00b120.0 hours in the flutter bag group and 52.7\u00b118.5 hours in the underwater seal group, p-value 0.007. The mean length of hospital was 7.9\u00b12.2 days in the flutter bag group and 9.8\u00b12.7 days in the underwater seal group. This was statistically significant, p-value of 0.019. 
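The study reports only that SPSS (version 16.0) was used and that Fisher's exact test defined significance for the categorical comparisons; the test applied to continuous outcomes such as length of hospital stay is not stated. As an assumed reconstruction from the published summary statistics (7.9 ± 2.2 days, n = 20, versus 9.8 ± 2.7 days, n = 21), a Welch two-sample t-test yields a p-value close to the reported 0.019. The snippet below is that illustrative check only, not the authors' actual analysis.

```python
# Illustrative reconstruction only: the study reports means +/- SD and p-values but
# does not specify which test was applied to continuous outcomes such as length of stay.
# A Welch two-sample t-test from the published summary statistics is one plausible check.
from scipy import stats

# Reported summary statistics for length of hospital stay (days)
flutter_mean, flutter_sd, flutter_n = 7.9, 2.2, 20            # flutter valve drainage bag group
underwater_mean, underwater_sd, underwater_n = 9.8, 2.7, 21   # underwater seal bottle group

t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=flutter_mean, std1=flutter_sd, nobs1=flutter_n,
    mean2=underwater_mean, std2=underwater_sd, nobs2=underwater_n,
    equal_var=False,  # Welch's t-test: does not assume equal variances
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p comes out near the reported 0.019
```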
There was no difference in the effectiveness of drainage between both groups, complete lung re-expansion was observed in 16(80%) of the flutter bag group and 18(85.7%) of the underwater seal drainage group, p-value 0.70.The flutter valve drainage bag is an effective and safe alternative to the standard underwater seal drainage bottle in the management of non-massive malignant and paramalignant pleural effusion. The oneet al. . Vuorisa [et al. showed t [et al. . Ogunley [et al. , also fo [et al. in Lagoset al. [Malignant pleural effusion is confirmed by the presence of malignant cells in pleural fluid or tissue . In patiet al. labels tet al. , 12. Malet al. , 9. Masset al. . Thus plth intercostal space mid axillary line and anchored to the skin with appropriate size non-absorbable suture.Patients with non-massive malignant or paramalignant pleural effusions admitted into the Lagos University Teaching Hospital (LUTH) from January 2014 to December 2014 were prospectively enrolled in the study. Approval was obtained from the Hospital's Health Research and Ethics Committee. Forty-one patients presenting with non-massive malignant and paramalignant pleural effusions were consecutively divided into two groups. Group A (21 patients) had their chest tubes connected to an underwater seal drainage bottle, while group B (20 patients) had their chest tubes connected to a flutter bag drainage device. All consenting adult patients (\u2265 18years) with non-massive malignant and paramalignant pleural effusions were considered eligible for inclusion in the study. Individuals with effusions not due to or related to a malignancy, massive pleural effusion and complicated effusions were excluded from the study. After obtaining informed consent, patients in both groups had a size 28F (French size) chest tube inserted aseptically under local anesthesia, via the 6Patients in Group A had their chest tubes connected to the standard underwater seal drainage system while Group B was connected to the flutter bag drainage system (Portex). All patients had administration of prophylactic antibiotics and adequate analgesia. A post insertion chest radiograph was obtained in all patients to confirm proper placement of chest tubes and daily drainage via chest tube was also noted. Drainage was discontinued and chest tubes were removed once daily output became \u2264 100ml/24hours for 2 consecutive days (in the presence of a patent tube) and chest radiograph showed re-expansion of the underlying lungs. Post chest tube removal radiographs were obtained and the patients were discharged home for follow up in clinic. Comparison was done between the two methods of chest drainage based on the selected parameters. Statistical analysis was conducted by using the SPSS 16.0 for Windows program . A P-value of < 0.05 using the Fishers exact test was considered significant.Of the 41 patients recruited for the study, there were 37 females (90.2%) and 4 males (9.8%) in the study population with a mean age of 50.3\u00b114.3 years . Breast There was no significant difference in the mean duration of drainage between both groups, 6.5\u00b12.6 days in the flutter bag group and 7.9\u00b12.0 days in the underwater seal group, p-value 0.059 . Howeveret al. [et al. [et al. [et al. [The underwater seal drainage bottle has been the conventional reservoir for drainage of pleural effusion in most patients, however various studies including that of Graham et al. and Vour [et al. have dem [et al. . In this [et al. -9. There [et al. and furt [et al. . 
The lac [et al. Althoughet al. [et al. [The patients in the flutter valve drainage bag group had a shorter period of hospital stay when compared to the underwater seal group and this was statistically significant with a P value of 0.019. This contrasts with the works of Graham and Vourisalo, but it is in keeping with the findings of Kadkhodaei et al. . The dif [et al. , in theiet al. [et al. [et al. study, where the incidence of complication was 11(17%) in the flutter bag group vs 7(12%) for underwater seal. This similarity should be interpreted with caution, due to the larger sample size in the Graham et al. study compared to this study.Vega et al. in their [et al. is that Malignant and paramalignant pleural effusions occur more in females with breast cancer and majority of these patients are in the fourth and fifth decades of life. The flutter valve drainage bag is an effective and safe alternative to the standard underwater seal drainage bottle in the management of non-massive malignant and paramalignant pleural effusion. The flutter bag drainage system encourages earlier mobilization of patients when compared to the underwater seal drainage bottle and shortens the length of hospital stay. However, a multicentre study is needed to further validate the findings of this study.The flutter valve drainage bag can be used to manage patients with pneumothorax and persistent air leak;The Flutter valve drainage bag can be used as a postoperative chest drain.The use of the flutter valve bag can be extended to patients with non-massive malignant or paramalignant pleural effusion;This study also shows that the flutter valve drainage bag shortens in-hospital stay.The authors declare no competing interests."} {"text": "Dear Editor,et al. [We read with interest the recent paper by Omar et al. regardinThe results concluded that: (a) there was a difference between mean and postoperative IIEF scores in the anastomotic group at both the 3- and 6-month follow-up, but (b) no difference was noted in the substitution urethroplasty group; furthermore, when the mean change in IIEF score was compared in both groups over the study period no difference was noticed; and overall (c) the study demonstrated that any type of bulbar urethroplasty had no statistically significant impact on EF .The study again highlights one of the challenges facing urethral surgeons: measuring sexual function following urethroplasty . The IIEet al. [The authors reference a paper by Barbagli et al. (using aet al. [The authors are to be commended for performing this study examining the impact of urethroplasty on EF specifically, but should acknowledge that the sample size is small and only a single questionnaire was used to assess EF. Urkmez et al. in 2019 Ideally, successful outcome assessment following urethral surgery should objectively evaluate voiding and all sexual outcomes, in addition to quality-of-life improvements with a validated, reproducible questionnaire.et al. [Breyer et al. have dev"} {"text": "Paravertebral compartment syndrome occurring without trauma is quite rare. We report a case of compartment syndrome that occurred after spinal exercises.A 23-year-old Japanese rower developed severe back pain and was unable to move 1 day after performing exercises for the spinal muscles. Initial evaluation at a nearby hospital revealed hematuria and elevated creatine phosphokinase levels. He was transferred to our hospital, where magnetic resonance imaging revealed no hematoma but confirmed edema in the paravertebral muscles. 
The compartment pressure measurements were elevated bilaterally. Despite his pain being severe, his creatine phosphokinase levels were expected to peak and decline; his urine output was normal; and surgery was undesirable. Therefore, we opted for conservative management. The next day, the patient\u2019s compartment pressure diminished, and his pain levels decreased to 2/10. After 5\u2009days, he was able to walk without medication.We present a rare case of compartment syndrome of the paravertebral muscles with good resolution following conservative management. We hope our case findings will help avoid unnecessary surgery in cases of paravertebral compartment syndrome. Compartment syndrome (CS) is characterized by increased compartmental pressure after trauma or surgery. CS is almost always reported in a limb; occurrence in other parts of the body, such as the paravertebral muscles, is rare. Lumbar paravertebral CS was first described in a 1985 case report of a young man with postexertional back pain , 3. The The patient was a 23-year-old Japanese man who was a medical student and belonged to a boating club. Two days before admission, he developed back pain after performing several repetitions of lifting 30-kg weight in a semicrouched position during a training session with the club. The pain was initially mild. On the day before admission, although he had mild back pain, he participated in both morning and evening training sessions. After the evening session, the pain gradually intensified. He waited to see whether the pain would resolve, but he was transported to a nearby hospital by ambulance at midnight because the back pain had worsened, and he found it difficult to move.Because of the combined symptoms of back pain at rest, hematuria, and a calculus detected in the ureter by computed tomography (CT), he was diagnosed with a urinary calculus and treated with fluids and rest. However, his pain persisted without relief. The following morning, because blood tests revealed elevated creatine kinase (CK) and lactate dehydrogenase (LDH) (1815\u2009U/L) levels, CS was suspected. He was then transferred to our hospital.Upon admission to our hospital, his condition was as follows: blood pressure, 131/81\u2009mmHg; pulse, 88 beats/minute; oxygen saturation, 100% in room air; respiratory rate, 22 breaths/minute; and consciousness level on the Glasgow Coma Scale, E4V5M6. His abdomen was flat and soft, and no tenderness was observed. The erector spinae muscles were tense, with especially severe spontaneous pain and tenderness in the lumbar region. The skin in the lumbar region was free from blistering or other superficial lesions. No neurological abnormalities, such as muscle weakness or impaired sensation, were observed in the lower limbs. There were no other findings suggestive of CS in the limbs.Blood tests Table\u00a0 revealedThe patient\u2019s compartment pressure gradually decreased, and his CPK levels peaked on the first day of admission to the hospital Fig. . The infCS is frequently observed after trauma to the limbs. However, it rarely occurs in the erector spinae muscles, and the diagnosis is commonly delayed . Our patet al. [et al. [et al. [et al. [et al. [Paravertebral CS is rare and difficult to diagnose. It has been characterized by the following symptoms in previous studies: severe tenderness, swelling, and rigidity in the lumbar region along with elevated CPK levels, according to Ferreira et al. ; abnorma [et al. ; dysesth [et al. and Nath [et al. ; and inc [et al. . Among tet al. 
[et al. [et al. [et al. also reported that muscle necrosis could be diagnosed on the basis of lack of gadolinium uptake [Regarding imaging findings, there are a few articles that mention CT. Plain CT is frequently performed for detailed examination of urinary calculi and the spinal canal in patients with sudden-onset back pain. In our patient, the diagnosis of paravertebral CS was delayed because a urinary calculus was detected at the previous hospital. Moreover, because there are few findings specifically suggestive of paravertebral CS, it appears to be difficult to diagnose the condition without performing contrast-enhanced CT. Regarding MRI findings, Ferreira et al. , Nathan [et al. , and Mat [et al. observedm uptake . In our et al. reported that the normal pressure in weightlifting players is 3.11\u2009mmHg in the supine position and 10.8\u2009mmHg in the sitting position [et al. reported that although muscle mass increases of 20% and compartment pressure increases from 8.5 to 14\u2009mmHg are observed during exertional exercise, these values return to normal within 6\u2009minutes after termination of exercise [et al., compartment pressure varied from 14 to 150\u2009mmHg [et al., only one of their nine cases meets the criterion [Although measurement of compartment pressure facilitates the diagnosis of paravertebral CS, normal pressure in the erector spinae muscles is not well established. Peck position . Moreoveet al. [et al. [et al. [et al. indicated that follow-up MRI may reveal scarlike disuse changes in muscles affected by CS [Of the therapeutic strategies employed for CS of the limbs, fasciotomy is indicated when the difference between diastolic blood pressure, which reflects the perfusion pressure in a muscle compartment, and intramuscular compartment pressure is within 20\u201330\u2009mmHg. However, because this value is associated with a high false-positive rate and low specificity, continuous measurement of intramuscular compartment pressure for 30\u201360\u2009minutes is reportedly preferable , 12. Theet al. ; increas [et al. ; and unc [et al. . Our pated by CS , we assuIn conclusion, we present a rare case of paravertebral CS. Despite high compartment pressures, conservative management resulted in an uneventful recovery with pain control and maintenance of urine output."} {"text": "Zeitschrift f\u00fcr die gesamte Neurologie und Psychiatrie. Today, he is rarely remembered, except mostly in the context of his research on the blood\u2013brain barrier.In the first two decades of the twentieth century, there was probably no European neurologist who was not familiar with the name of Max Lewandowsky, German-Jewish neurologist, author of numerous works, including a handbook of neurology, and the editor of the neurological journal Friedrichsgymnasium in Berlin, graduating in 1893. Then he studied medicine in Marburg, Berlin and Halle. Among his teachers were Theodor Engelmann (1843\u20131909) and Hermann Munk (1839\u20131912). He defended his Ph.D. thesis in 1898 and immediately continued his scientific career, first in the physiology laboratory in Berlin. He attended courses of psychiatry led by Karl Bonhoeffer (1868\u20131948) and Franz Nissl (1860\u20131919) in Heidelberg and Theodor Ziehen in Berlin Charit\u00e9 clinic. In Paris he studied under Pierre Marie (1853\u20131940) at Bic\u00eatre Hospital. 
In 1902 he became Privatdozent in Berlin, and in 1908 he was appointed extraordinary professor [Max Heinrich Lewandowsky was born on June 28, 1876 in Berlin, the son of Hermann Lewandowsky and Rose n\u00e9e Heymann. He attended Zeitschrift f\u00fcr die gesamte Neurologie und Psychiatrie. It was accompanied \u2018by a publishing series \u201cMonographs from the joint field of neurology and psychiatry\u201d (Monographien aus dem Gesamtgebiete der Neurologie und Psychiatrie). In the same year, he began work on a multi-authored handbook of neurology, inviting dozens of renowned specialists from Germany and abroad [Lewandowsky was an efficient organizer. In 1910, together with Alois Alzheimer (1864\u20131915), he founded d abroad . Until 1d abroad . LewandoBluthirnschranke) [As a researcher, Lewandowsky was particularly interested in experimental work. His Ph.D. dissertation dealt with vagal control of lung function . Under tchranke) . His subchranke) . In 1907chranke) . He alsoMax Lewandowsky was married to a mezzosoprano singer Margarete (Gretchen) Gille in 1909; the marriage produced no children. After World War I broke out, Lewandowsky served as an army physician. He investigated neurological symptoms in head trauma and opposed inhuman methods of treating war neuroses . In the"} {"text": "Hemodynamic monitoring is essential to provide optimal hemodynamic management to patients in perioperative and intensive care medicine. The Journal of Clinical Monitoring and Computing (JCMC) welcomes research articles investigating hemodynamic monitoring technologies, cardiovascular pathophysiology, and hemodynamic treatment strategies that help advance this research field and eventually improve patient care. In this review, we highlight and summarize selected papers on hemodynamic monitoring and management published in the JCMC in 2019.In a prospective study, Nicklas et al. comparedAn observational cohort study in cardiac surgery patients was performed by Henriques et al. to invesThere is a risk of measurements artifacts in large data sets as measurements are saved electronically without controlling. A growing number of hospitals use electronic records in perioperative and intensive care medicine. Therefore, Du et al. developeIn a retrospective analysis, Harrison et al. used theIn 2019, five papers on cardiac output monitoring were published in the JCMC.Vetrugno et al. comparedROC) for the CFI to predict a left ventricular ejection fraction\u2009\u2265\u200940% (AUCROC: 0.926),\u2009\u2265\u200950% (AUCROC: 0.924), and\u2009\u2265\u200960% (AUCROC: 0.875). Similar results were found for the predictive ability of the GEF\u00a0(left ventricular ejection fraction\u2009\u2265\u200940% (AUCROC: 0.934),\u2009\u2265\u200950% (AUCROC: 0.938), and\u2009\u2265\u200960% (AUCROC: 0.887). Further studies were recommended to confirm these findings.Also using the Volume View/EV 1000/Hemosphere system, Nakwan et al. investigMaeda et al. comparedOne of the more recently marketed technologies to measure cardiac output is the BSM-9101 bedside monitor . It provides estimated continuous cardiac output (esCCO) based on pulse wave transit time technology measured using the electrocardiogram (ECG) and peripheral pulse oximeter. Suzuki et al. ) and at individually adjusted contacting force . Since \u0394POP seems to be a reliable indicator to predict fluid responsiveness at a certain contacting force and is measured non-invasively and relatively easy, this study is an important contribution for this methodology.Park et al. 
investig2 and CS\u2009>\u200931\u00a0mL/cmH2O. Since the prone position is obligatory for many surgical procedures and a treatment option in patients suffering from acute respiratory distress syndrome, this study provides clinically important insights\u00a0for these indications.Another interesting and clinically relevant study in 88 patients with spinal surgery investigated the ability of PPV to predict fluid responsiveness in prone compared to supine\u00a0patient position and studied the effects of body mass index, intra-abdominal pressure, and respiratory system compliance (CS) on PPV . The autVistisen et al. investigIn a pilot study, Pybus investigSun et al. performeThe number of published studies applying machine learning methods to clinical data is exploding these years, and for JCMC, 2019 was also mirroring this trend. Last year, we saw five original papers applying various types of machine learning techniques/algorithms to clinical data. All studies had a retrospective design and authors predominantly tried to predict hemodynamic events/derangements such as tachycardia , hypotenROC of 0.81 with their developed algorithm. The editorial discussed how to identify the data sets for cases and non-cases, a selection that is fundamentally acausal if the existence of an event is used to define the data set, i.e. if analyzing only selected temporal windows preceding events and non-events, which Yoon et al. [In line with the aspects of the review, the editorial by Vistisen et al. accompann et al. did. In n et al. . Also, tn et al. chose a ROC was only in the range of 0.7, which is not as good as that reported for existing predictive monitoring [The study by Donald et al. presentenitoring . An imponitoring were wornitoring . Regardinitoring , it appeROC of around 0.75 10\u00a0min in advance of events. The peripheral oxygen saturation signal seemed to provide similar classification (AUCROC of 0.77), so the more advanced combination of vital signs in this group of patients for this predictive purpose may not be very beneficial. This highlights the need for reporting comparative classification of simple/existing monitoring [Matam et al. describenitoring . After tJCMC also published a validation study that reported clinical outcome data before and after the implementation of the Continuous Monitoring of Event Trajectories (CoMET) system in a surgical ICU . The CoMIn the study by Maheshwari et al. , the autIn summary, machine learning techniques gain widespread interest and utilization. JCMC not only publishes but also receives an increasing number of papers applying machine learning to hemodynamic and clinical data. The field is still new but exciting for authors, editors, and reviewers and we look very much forward to seeing which impact all these efforts provide in future, preferably prospective, studies. Studies need sufficient reporting as well In an in-silico study, Rinehart et al. evaluateA nationwide survey was conducted by Scholten et al. to inves"} {"text": "A community-based, age-specific survey of skin disorders is usually necessary to characterize the true burden of skin disease among a given population and help to tailor health care personnel training and delivery towards the prevalent disorders in resource poor settings.This was a descriptive cross-sectional study among adolescents attending secondary schools in Ilorin, Kwara State, Nigeria. 
A thousand and three hundred students were recruited from public and private secondary schools through a multi-staged stratified random sampling method. Information was obtained via a semi-structured questionnaire and all students underwent a physical examination. Data was analysed using SPSS version 20. Information generated was presented with tables and figures.The prevalence of skin disease in the study was 66.5%. More females, mid-adolescents, students in senior class and those attending public schools had skin disorders. The most prevalent skin disease were: acne vulgaris, pityriasis versicolor, tinea capitis, pityriasis capitis and traction alopecia.Skin conditions are highly prevalent among the adolescent population. Infective and inflammatory skin conditions appear to be more prevalent than other classes. Most times, only a few skin disorders account for the bulk of dermatoses affecting this age group. Adolescent skin healthcare should be subsidized because of the high prevalence of skin disorders in this age group. Skin disorders have been estimated to affect between 30% and 70% of individuals worldwide, with even higher rates in at-risk subpopulations . In 2010 [et al. . About 1 [et al. .Adolescents make up approximately 38 million of the population of Nigeria, a number that exceeds the total population of some countries , 7. PubeThis was a descriptive cross-sectional study that was conducted in 16 secondary schools in Ilorin, Kwara State, Nigeria. Research was approved by the University of Ilorin Teaching Hospital Ethical Review Board with Ethical approval number ERC PAN 2017/03/1655. Additional approval was obtained from the Kwara State Ministry of Education. The study was carried out by 2 paediatric dermatologists with the assistance of paediatric dermatology trainees and research assistants. Sixteen schools out of the 145 secondary schools in the Ilorin metropolis were selected through a multi-staged stratified random sampling method.Study participants were recruited over a 5-month period from November 2017 to March 2018. The research team paid a first visit to the selected schools during which the school management was informed about the study and their consent sought. A second visit was made to the selected schools a week before the research commenced and students selected by simple random sampling were addressed and given detailed information about the research. They were given assent forms to sign and consent forms to take home to their parents. Only those who returned with a duly filled and signed assent and consent form were included in the study.All study participants were given semi-structured questionnaires which was administered by the researchers. Information related to socio-demographics, presence and history of skin disorder, were obtained. The social class of the subjects was determined using the Oyedeji's classification system . ExaminaSocio-demographic characteristics of study population: a total of 1300 students were recruited from public and private secondary schools in Ilorin. The mean age of the study population was 13.8\u00b12.1 years with an age range of 10-19 years. Male: Female ratio was 1:1.3.Prevalence of skin disorders in the study population: eight hundred and sixty-five students had at least one skin disorder following a physical examination, giving a prevalence of 66.5%. 
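The reported prevalence is the simple proportion 865/1300, or about 66.5%. The study does not report a confidence interval for this figure; purely as an illustration of the sampling uncertainty attached to a proportion of this size, the sketch below computes a Wilson score interval. The interval is an addition for illustration and is not part of the original SPSS analysis.

```python
# Illustration only: the study reports the point prevalence (865/1300 = 66.5%) without a
# confidence interval. A Wilson score interval is one standard way to express the
# sampling uncertainty around such a proportion.
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """Wilson score 95% confidence interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

affected, sampled = 865, 1300
low, high = wilson_ci(affected, sampled)
print(f"Prevalence = {affected / sampled:.1%}, 95% CI {low:.1%} to {high:.1%}")
# Prevalence = 66.5%, 95% CI roughly 63.9% to 69.1%
```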
There was a higher prevalence of self-reported skin disorders with 951 (73.1%) students reporting a skin complaint compared to 865 (66.5%) who were determined to have dermatologic diseases following history and physical examination. Most students (77.9%) had one skin disease while 20.2% had 2 skin disorders and 1.8% had three or more skin disorders as depicted in Distribution of skin disorders across socio-demographic indices: skin disorders were more common in females, students attending public schools, students in the senior secondary classes and mid-adolescents (p<0.05) as shown in Spectrum of skin disorders among adolescents: a total of 48 diagnoses was made. Skin disorders affecting the skin appendages 428 (49.5%) were the most frequent followed by infective skin disorders 330 (38.2%). The top five diagnoses were acne vulgaris 347(40.1%), pityriasis versicolor 204 (23.6%), tinea capitis 80 (9.1%) pityriasis capitis 77 (8.9%) and traction alopecia 64 (7.4%) (4 (7.4%) .Occurrence of skin disorders across the different age groups: pityriasis versicolor was the most prevalent disorder among early adolescents (21.8%), while acne vulgaris was most prevalent in mid and late adolescence (33.4% and 44% respectively). Traction alopecia, tinea capitis and pityriasis capitis were the other common disorders seen. Tinea pedis was more common in late adolescence than tinea capitis had gone to the hospital on account of the skin complaint, 69.1% had never seen a health worker for skin complaint and 8.3% were not sure if they had. The bar chart in Statement of principal findings: this study showed a high prevalence of skin disorders among adolescents attending secondary schools in Nigeria with well over half of the study population having at least one skin disorder. This high prevalence is similar to what Ogunbiyi et al. [et al. [et al. [et al. [i et al. and Yase [et al. found am [et al. . This is [et al. -18. Onin [et al. also fou [et al. , 21. How [et al. found a A study in South Africa reported similar findings although another study described traction alopecia among Sikh males and this was attributed to cultural practices , 24. TheStrength and weakness of the study: one of the strengths of this study is the large sample size and also the recruitment of the entire range of the adolescent population as defined by the World Health Organization [nization . This alnization . Our stuStrength and weakness in relation to other studies: this study had a large sample size, which is similar to the methodology other dermato-epidemiologic skin surveys done among adolescents in Nigeria employed. A large sample size ensured that the results of this study can be reliably generalized over the adolescent population. Ogunbiyi et al. [et al. [et al. [i et al. and Hens [et al. also rec [et al. had a sm [et al. . The pre [et al. in Roman [et al. . These set al. [et al. [et al. [Our study was community based and therefore the prominent disorders reported are different from what is mostly seen in hospital-based surveys. In Lagos Nigeria, Ayanlowo et al. found inet al. , 36. Wheet al. . Classifet al. . Eshan'set al. . We did [et al. focused [et al. also looMeaning of the study: this study has shown a high burden of skin disease in school going adolescents in a typical Nigerian community. It has also been able to demonstrate that the occurrence of skin disease can be influenced by age and gender even among adolescents. 
It also evaluated the adolescent behavior with regards to skin and health.Unanswered questions and future research: a study on the determinants of skin disease in adolescence and the effect on quality of life will provide further insight into the burden of these disorders.In conclusion, this study has corroborated the fact that the burden of skin disease is heavy in Nigerian adolescents going by similar prevalence rates in studies from some parts of the country. Another important finding is that almost 90% of skin disorders can be accounted for by a handful and as such targeted efforts can be made to train community health workers, subsidize dermatologic care in this age group and create more awareness about skin disorders.The prevalence of skin disease is high in adolescents;There is a wide spectrum of skin disease that can affect adolescents.The prevalence of traction alopecia is high in female adolescents;A few dermatoses account for the bulk of skin disorders seen in adolescents;Skin health-seeking behavior is poor among adolescents and is largely influenced by their parents.The authors declare no competing interests."} {"text": "Adolescents are at risk of obesity and caries due to various factors such as diet and poor health habits; these factors may affect their body mass index (BMI) and salivary components. Therefore, it is necessary to assess these factors and their relationship in this age group. This study aimed to evaluate the association between decayed missing filled teeth index (DMFT), salivary alpha amylase (sAA) level and age-specific BMI in adolescent girls.p< 0.05. A cross-sectional study was conducted on 81 females aged 13-15 years in 3 groups of BMI percentiles; \u201cnormal\u201d, \u201cat risk for overweight\u201d and \u201coverweight\u201d(n=27). DMFT was calculated and unstimulated saliva samples were collected. The sAA level was measured with a spectrophotometer. Data were analyzedusing Kolmogorov-Smir-nov test, Kruskal- Wallis and Spearman correlation tests using SPSS (version 23) at p= 0.014). The concentration of sAA and mean DMFT were estimated 1326.56\u00b14.73 U/L and 2.77\u00b12.36, respectively. There was no significant differencein sAA level and mean DMFT among BMI groups. A positive and significant correlation was found between sAA and DMFT in overweight group (r 0.46, Within the limitation of this study, higher levels of sAA may be considered as an indicator for dental caries in overweight adolescent girls. Adolescence is defined as a critical developmental period of life experienced from the ages of 10 to 24 years . Payinget al. equation and pointed on the age-sex specific BMI percentile chart. Based on the age-sex specific BMI percentile [ p< 0.05 was consideredstatistically significant. The study variables are listed in This cross-sectional study was carried out on female students in the age range of 13 to 15 years. All cases were randomly selectedamong the grade 1 of high schools of Babol (north of Iran). Exclusion criteria were adopted as suffering from systemic diseases,consumption of medication over the past month, poor oral hygiene, unwillingness to participate in the study and having orthodonticappliances. The sample size was calculated with consideration of the effect of 4% at the level of x= 0.05 with the power of 80%.Using the software G power, 81 students was calculated (in 3 groups of 27 each one). After approving the study protocol by the Ethic Committeeof Babol University of Medical Sciences (MUBABOL. 
REC.1396.12) and obtaining permission from the Babol Education Department, five high schoolswere selected randomly for sampling. The height and the weight of students were measured without shoes by a scale and a meter.BMI of students were calculated based on [weight/ (height)centile , the secentile . The tep< 0.05).According to Kruskal Wallis test, no significant difference was found among sAA level and DMFT of different BMI percentiles .Based on Spearman correlation coefficient test, there was a significant positive correlationbetween DMFT and sAA only in BMI percentile> 95.Additionally, the result of Fisher exact test revealed no significant difference in caries activity of different BMI percentiles (et al. [ et al. [ Considering the results of the current study, no significant difference was found between sAA levels in overweight adolescents compared to those who were in \u201cnormal\u201d and \u201cat risk for overweight\u201d groups. Although the effect of sAA gene on the risk of obesity has been confirmed, its mechanism remains unclear and there are conflicting results on the association between sAA and obesity - 21. Ret al. revealeet al. [ et al. [ et al. [ et al. [ et al. [ et al. [ et al. [ et al. [ et al. [ et al. [ et al. [ Additionally, no considerable difference was found between mean DMFT in the 3 groups of BMIs. This finding is in conformity with the studies of Shakya et al. and Sad et al. . Additi et al. , Alves et al. , Mojara et al. , Sadegh et al. , and Pi et al. . Moreov et al. , did no et al. and Red et al. showed et al. showed et al. found a et al. investiIn the present study, a positive correlation was found between sAA level and DMFT only in overweight group. This finding may be related to the level of starch intake by the overweight subjects. As mentioned, excessive consumption of starch by people with higher levels of sAA puts them at risk of obesity . On theet al. [ et al. [ et al. [ et al. [ et al. [ et al. [ et al. [ The available data concerning the association between dental caries and sAA are limited and inconsistent - 36. Met al. in a ma et al. and Sit et al. reveale et al. investi et al. conduct et al. .The current study was carried out with some limitations such as disregarding the dietary habits and stress level of the participants which might have an impact on the level of sAA. In addition, the study was conducted on adolescent girls who were in the pubertal period and puberty may have an impact on body biochemical and lifestyle, which may subsequently influence the study variables. Moreover, because the age of puberty for girls and boys is different, the authors had to study just on girls. Further investigations on all target age groups with a larger sample size in both genders are recommended. Additionally, matching the confounding factors such as oral hygiene, food habits, salivary immunoglobulins, stress level and so on, are suggested for future studies.The concentration of sAA and DMFT were not different among overweight and non-overweight adolescent girls. There was a positive correlation between sAA level and DMFT in overweight adolescent girls."} {"text": "Data collected was analysed using Statistical Package for Social Sciences . Chi square was used to test the statistical differences at a significance of p> 0.05.a total of 410 patients participated in the study. About 73% had tertiary education while 36.3% were within the modified ISCO-08 Group 2. 
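The dental-appearance data were analysed in SPSS with chi-square tests at the p = 0.05 level. As a rough sketch of that kind of comparison, the snippet below runs a chi-square test of independence on a two-by-two table of satisfaction with dental appearance by clinic type. The paper reports only percentages (73.1% of dental versus 59.2% of medical patients satisfied, p = 0.003) and the overall sample of 410, so the cell counts used here are hypothetical placeholders chosen to be consistent with those percentages, not figures taken from the study.

```python
# Sketch of the kind of chi-square comparison described in the article (the original
# analysis was run in SPSS). The cell counts below are hypothetical placeholders; the
# paper reports percentages (73.1% vs 59.2% satisfied) but not the full 2x2 table.
from scipy.stats import chi2_contingency

#                     satisfied  not satisfied
hypothetical_table = [[131,       48],    # dental patients (placeholder counts)
                      [137,       94]]    # medical patients (placeholder counts)

chi2, p_value, dof, expected = chi2_contingency(hypothetical_table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```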
The respondents that were satisfied with the general dental appearance and tooth shade were 66.3% and 63.5% respectively. More males (65.1%) than females (62.7%) were satisfied with tooth colour while more females (69.1%) were satisfied with dental appearance. The older age group were more satisfied with dental appearance and tooth colour. Awareness of tooth whitening (Over 80%) and the desire to undergo tooth whitening was more among the post-secondary individuals. More of dental patients (73.1%) than medical (59.2%) were satisfied with teeth appearance (p=0.003).patients are increasingly aware of their dental appearance/tooth colour and the need to improve it with tooth bleaching and/or orthodontic treatment. Female were more dissatisfied with their tooth colour but more satisfied with their dental appearance than the male. Older people were more satisfied with their dental appearance and tooth colour compared to younger age group. Aesthetics is an important aspect of modern society because it defines one\u00b4s personality. Individuals with positive attitude towards their teeth (colour and shape), and smile may show confidence and be extroverts while individuals with discoloured, missing or fractured teeth may on the other hand be withdrawn because of their teeth appearance. Recently in dental treatment, increasing emphasis is being laid on aesthetics with theThe arrangement of teeth, shape and form, untreated dental caries and non-aesthetic or discoloured anterior teeth restorations as well as missing anterior teeth lead to dissatisfaction with dental appearance. ,9 Thoughet al. [This was a questionnaire-based cross-sectional study, which made use of the modified version of the questionnaires used by Tin OO et al. and Poonet al. . The selet al. was usedEthical approval was obtained from the University of Ibadan/University College Hospital Institutional Review Board (UI/UCH IRB) with approval number: UIUCH/EC0155. The data collected was analysed using Statistical Package for Social Sciences . Descriptive statistics was employed and chi square used to test the statistical differences of some responses at a significance of p= 0.05.Four hundred and ten patients participated in the study with a mean age of 36.8\u00b114.03 years. Although there were more females than males generally, the M: F among the dental patients was 1: 2.5 compared to 1: 1.3 among the medical patients. The highest proportion (73%) had tertiary education while most of the participants (36.3%) were within the modified ISCO-08 Group 2 which comprised of Technicians, Clerks, Secretaries, Skilled Agricultural workers . The finIt seems the older the age, the greater is the satisfaction with either the dental appearance or tooth colour. The feeling of having protruded anterior teeth was seen more among the youngest age group studied (< 20) compared to other age groups, this was found to be statistically significant (p=0.005). The desire to undergo orthodontic treatment was equally high among the very young and the very old. However, the knowledge of tooth whitening was highest among the youngest age group . The asset al. [et al. [et al. [et al. [et al. [et al. [et al.[Dental aesthetics has increasingly become a concern among patients and clinicians. This is because physical appearance plays a key role in social interaction and the smile and teeth are important features in determining facial attractiveness . Evaluatet al. 62.7%) [et al. 47.2%), [et al. 57.3%) [et al. 58.1%).2.7% [et [et al. in Flori. [et al. in the U. [et al.. 
The dis7.2%, [et. [et al.. The mai%.2.7% [% [et al. [et al. was toot. [et al.,17. Thou. [et al.,20. Howe. [et al..et al. [et al. [It could be deduced that the higher dissatisfaction with dental appearance among males could be due to tooth arrangement problems as more males than females, felt that their front teeth were crowded, poorly set or bulging consequently, they were more willing to undergo orthodontic, restorative or any aesthetic treatment. This is not unexpected since malocclusion could also determine dental appearance. Teeth arrangement is a factor correlated to a harmonious smile and attractiveness and diveet al. found thet al. . This maet al. . The greet al. ,9 where [et al. did not This study also evaluated the correlation between occupation and the parameters studied which may not have been previously recorded. It was found that 59.2% of the executives and high-profile professionals and business tycoons were satisfied with their teeth appearance as compared to 72.7% in Group 3. One reason for this may be the caliber of people or the company of peoples those in Group I are likely to be relating with. Though the occupational group 4 had many features of unacceptable occlusion the desire to undergo orthodontic treatment was very low (38.5%) among them and this may not be unconnected to their possible low economic status. Findings in this study also show that there were significant difference in the satisfaction of dental and medical patients as more of dental patients than medical patients were satisfied with teeth appearance. More of medical patients felt they had poorly set and protruding front teeth. These observations pointed to the fact that the dental health of the dental patients appeared to be better than that of those patients that were attending the hospital for medical reasons. This is probably because the dental patients are more aware and possibly have started receiving aesthetic dental treatments that have improved their dental appearance to certain extent.Within the limitations of this study, it could be concluded that: patients are increasingly aware of their dental appearance/tooth colour and the need to improve it with tooth bleaching and/or orthodontic treatment; female are more dissatisfied with their tooth colour but more satisfied with their dental appearance than the male; older people are more satisfied with their dental appearance and tooth colour compared to the younger age group.Studies have shown that males are more satisfied with their tooth colour and appearance whereas females are more conscious and difficult to satisfy;Dental treatments that improve anterior teeth aesthetics have been found to improve the quality of life and psychosocial well-being of people.Satisfaction with dental appearance is better in patients that are aware and seek dental treatment than those that are not;In this study, medical patient showed a greater desire to have dental treatment that will improve aesthetic such as restoration and orthodontic treatment. Thus, this shows the need for more dental education and information in our environment."} {"text": "In recent years, increased resistance to antibiotics and disinfectants from foodborne bacterial pathogens has become a relevant consumer health issue and a growing concern for food safety authorities. 
In this situation, and with an apparent stagnation in the development of broad-spectrum antibiotics, research into new antibacterial agents and strategies for the control of foodborne pathogens that have good acceptability, low toxicity levels, and high sustainability is greatly demanded at present.Special Issue on \u201cNatural Alternatives against Bacterial Foodborne Pathogens\u201d aims to contribute to the visibility of some of these new antibacterial agents and contains eight research articles and one review, presenting different strategies potentially applicable in the control of various foodborne pathogens.This Escherichia coli, Listeria monocytogenes, Pseudomonas putida, and Staphylococcus aureus is described by Kerekes et al. [Bifidobacterium longum subsp. infantis and Lactobacillus reuteri) with the capacity to adhere on different surfaces forming a biofilm able to control the growth of pathogenic and food spoilage bacteria. This could be useful as a new biocontrol solution for different industrial applications. The probiotic functionality of a Bacillus subtilis strain protecting probiotic lactic acid bacteria during their exposure to unfavorable environmental conditions, such as desiccation and acid stresses, is described by Kimelman and Shemesh [B. subtilis strains have demonstrated a potent antimicrobial activity against pathogenic S. aureus. Luis et al. [The antibacterial properties of extra virgin olive oil against different foodborne pathogens and their relationship with phenolic composition of the extract are described by Nazzaro et al. . This sts et al. . These ss et al. proposes Shemesh . In addis et al. report ts et al. present s et al. report ts et al. present"} {"text": "The dairy sector is facing a decisive challenge in developed countries, which could deeply influence its future and its historical status of being a pillar for human nutrition. The most challenging issue is to give suitable answers to the demand for nutritionally balanced and environmentally sustainable products, the two main aspects of the new \u201cfood paradigm\u201d that increasingly sees foods as drugs and imposes a stringent eco-friendly approach in their production (\u201cgreen foods\u201d) ,2. In thProduct Innovation. Abdel-Hamid et al. [Rubus suavissimus S. Lee leaves). The phenolic compounds included as a consequence of extract addition improved biological activity in terms of antioxidant and antihypertensive activity and inhibition of the Caco-2 carcinoma cell line; on the other hand, the viability of the yogurt starter cultures during refrigerated storage was not significantly affected. Finally, the sensory analysis demonstrated a high acceptability of the product and allowed for establishing the most suitable level of fortification. In the second paper, two different plant-based ingredients were used as yogurt-fortifying agents: fenugreek (Trigonella foenum-graecum) seed flour and Moringa oleifera seed flour. Moringa oleifera samples had higher values of phenolic compounds and antioxidant activity as compared to fenugreek, and exerted higher antibacterial activity against several undesired species. On the contrary, the viability of Streptococcus thermophilus and Lactobacillus bulgaricus was improved. Incorporation of the flours caused a modification of the concentration of mineral compounds, with connected increase of some valuable elements, and of the sensory characteristics. Saleh et al. [d et al. and Dhawd et al. developeh et al. investigh et al. focused h et al. and Facch et al. 
dealt wih et al. investigh et al. made useThe influence of rearing conditions on cheese quality. Formaggioni et al. [i et al. comparedi et al. dealt wii et al. deepenedTwo further papers complete the Special Issue: an Original Article and a Review Article. The first one is a verIn summary, the Special Issue \u201cChemical and Technological Characterization of Dairy Products\u201d offers readers a series of innovative information that can be useful both for developing new research ideas and for developing new types of dairy products."} {"text": "Leprosy has long-term consequences related to impairment and stigma. This includes a major impact on mental health. This study aims to consolidate current evidence regarding the mental health impact of leprosy on affected persons and their family members. In addition, determinants influencing mental health outcomes among leprosy-affected persons and effective interventions are examined. A keyword-based search was conducted in PubMed, Web of Science, Scopus, PsycINFO, Infolep and InfoNTD; additional literature was also considered. Articles presenting primary data involving leprosy-affected persons or their family members experiencing mental conditions were included. Independent extraction of articles was executed using predefined data fields. Articles were sorted according to relevance. In total, 65 studies were included in this systematic review. Multiple psychiatric morbidities have been identified among leprosy-affected persons, including depression, anxiety disorders and suicide (attempts). Additional factors were found that may impact mental health. Moreover, studies found that demographic factors, lifestyle and disease-specific factors and stigma and discrimination impact mental health. Depressive symptoms and low self-esteem were identified among children of leprosy-affected persons. In addition, interventions were identified that could improve the mental wellbeing of leprosy patients. Depressive disorders and anxiety disorders were found to be very common among persons affected by leprosy. Feelings such as fear, shame and low self-esteem are also experienced by those affected, and their children. Further research is necessary to ensure that mental health impact is included when determining the burden of disease for leprosy, and to relieve this burden. Duplicates were automatically removed. The next step consisted of title and abstract screening. Irrelevant articles were excluded. Next, screening of full text articles was done. Both the first and second screening were performed by one author (PS) and supervised by two other authors . Uncertainties regarding inclusion or relevance of data were resolved together with the second authors. The title, abstract and full-text screening were executed in Covidence, a web-based software platform that streamlines the production of systematic reviews. Inclusion and exclusion criteria were used to guide the screening process see .Table 2Data extraction was done in Excel. Relevant information including study characteristics, patient characteristics, morbidity specifics and outcomes was noted. The articles were labelled based on the relevance. The relevance of the articles was determined based on several characteristics . Specifin\u00a0=\u00a086), because they were otherwise irrelevant (n\u00a0=\u00a0118), represented perspectives of others than leprosy-affected or their family members (n\u00a0=\u00a020) or articles in another language (n\u00a0=\u00a01). In addition, 20 duplicates were removed. 
A total of 167 studies were included for full text screening. Based on the above exclusion criteria, a further 102 articles were discarded at this stage: other subject or outcome than mental health condition and leprosy (n\u00a0=\u00a060), other language (n\u00a0=\u00a035), perspective of others (n\u00a0=\u00a05), review (n\u00a0=\u00a02). In the end, we included an analysis of 65 relevant articles regarding leprosy and mental health. Of these, 11 of 65 articles were studies describing interventions. Out of 65 studies, 62 concerned leprosy-affected persons, while three studies concerned the family members of the affected persons. The studies included and their important characteristics are presented in In total, 797 articles were found see . In addiet al., Out of the 65 articles, three articles described psychiatric morbidity in general among leprosy-affected persons. Psychiatric morbidity, or mental ill health, was measured with the general health questionnaire (GHQ). The GHQ is a tool for screening and identifying minor psychiatric conditions among adults among this group. According to Bharath et al. , 33.3% o and phantasmal fear (22.3%) for persons affected by leprosy as compared with healthy controls. When leprosy patients from Mato Grosso, Brazil, were asked to rate their QoL in a study by Garbin et al. , 37% scoet al. found th et al., . Brouwer, et al. found thet al., Three studies described the mental health impact of leprosy on family members of leprosy-affected persons. All three studied children and adolescents whose parents were coping with leprosy. In a study by Parashar and Kumar , 100 IndThere were several determinants presented in the articles that can have an influence on mental wellbeing. In the following paragraphs these determinants will be discussed.et al., et al., et al., et al., et al., et al., et al., et al., et al. .A study by Su et al. investiget al. , suicide (attempts) (around 33%) and anxiety disorders (10 to 20%). In addition, this study found that children of leprosy-affected persons also experience poor mental health.et al., et al. impairments, but also demographic factors and other disease-specific factors. Further research is necessary to identify the burden of disease of leprosy, taking the mental health consequences into account, and the impact of leprosy on the mental wellbeing of family members. In order to prevent and mitigate the mental health impact of leprosy, interventions are needed to strengthen coping mechanisms of those affected, to treat mental health conditions such as depression in this population, and to change negative community attitudes towards those affected by the disease, as stigma is a key contributor to mental ill health."} {"text": "Immune cell infiltration has been identified as a prognostic biomarker in several cancers. However, no immune based biomarker has yet been validated for use in pancreatic ductal adenocarcinoma (PDAC). We undertook a systematic review and meta\u2010analysis of immune cell infiltration, measured by immunohistochemistry (IHC), as a prognostic biomarker in PDAC. All other IHC prognostic biomarkers in PDAC were also summarised. MEDLINE, EMBASE and Web of Science were searched between 1998 and 2018. Studies investigating IHC biomarkers and prognosis in PDAC were included. REMARK score and Newcastle\u2013Ottawa scale were used for qualitative analysis. Random\u2010effects meta\u2010analyses were used to pool results, where possible. 
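The pooling step named here, random-effects meta-analysis of published hazard ratios, can be illustrated with a minimal sketch. The authors performed this step in RevMan 5.3; the code below is only an illustrative DerSimonian–Laird implementation in Python, and the study-level hazard ratios it is run on are hypothetical values, not results extracted from the included papers.

```python
import numpy as np

def pool_hazard_ratios(hrs, ci_lows, ci_highs):
    """DerSimonian-Laird random-effects pooling of hazard ratios.

    Each study contributes a published HR and 95% CI; the CI width on the
    log scale gives the standard error of log(HR).
    """
    log_hr = np.log(hrs)
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)   # SE of log(HR) from 95% CI
    w_fixed = 1.0 / se**2                                    # inverse-variance weights

    # Cochran's Q, between-study variance tau^2 and the I^2 statistic
    fixed_mean = np.sum(w_fixed * log_hr) / np.sum(w_fixed)
    q = np.sum(w_fixed * (log_hr - fixed_mean) ** 2)
    df = len(hrs) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Random-effects weights and pooled estimate on the log scale
    w_re = 1.0 / (se**2 + tau2)
    pooled = np.sum(w_re * log_hr) / np.sum(w_re)
    pooled_se = np.sqrt(1.0 / np.sum(w_re))
    ci = np.exp([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])
    return np.exp(pooled), ci, i2

# Hypothetical per-study HRs (high vs low infiltration) and their 95% CIs
hr, ci, i2 = pool_hazard_ratios(np.array([0.70, 0.55, 0.92, 0.61]),
                                np.array([0.50, 0.35, 0.70, 0.40]),
                                np.array([0.98, 0.86, 1.21, 0.93]))
print(f"pooled HR = {hr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I^2 = {i2:.0f}%")
```

The quantities returned (pooled HR with 95% CI and the I² heterogeneity statistic) correspond to those quoted for each marker in the results that follow.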
Twenty\u2010six articles studied immune cell infiltration IHC biomarkers and PDAC prognosis. Meta\u2010analysis found high infiltration with CD4 and CD8 T\u2010lymphocytes associated with better disease\u2010free survival. Reduced overall survival was associated with high CD163 . Infiltration of CD3, CD20, FoxP3 and CD68 cells, and PD\u2010L1 expression was not prognostic. In total, 708 prognostic biomarkers were identified in 1101 studies. In summary, high CD4 and CD8 infiltration are associated with better disease\u2010free survival in PDAC. Increased CD163 is adversely prognostic. Despite the publication of 708 IHC prognostic biomarkers in PDAC, none has been validated for clinical use. Further research should focus on reproducibility of prognostic biomarkers in PDAC in order to achieve this. Pancreatic cancer remains a challenging disease, with only small improvements in overall survival rates observed in recent years . CurrentBRAF mutant colon cancer , etc.) from the articles relating to immune infiltration was performed by AJM and this was checked by a second reviewer to ensure accuracy. The Newcastle\u2013Ottawa scale (NOS) [AJM reviewed the abstracts for all eligible studies to record the prognostic biomarker investigated. Abstracts describing immune response or specific immune cell markers was usedle (NOS) .P value using the method described by Altman [The association between OS, DSS or DFS in PDAC patients and the level of tumour infiltration by immune cells, identified by IHC, was determined. HRs and corresponding 95% confidence intervals (CIs) were extracted from each publication. If these results were not explicitly stated, Parmar's method was usedy Altman in any sI2 statistic. I2 estimates the proportion of the variance in studies included in the meta\u2010analysis that is due to heterogeneity between them [Random\u2010effects meta\u2010analysis was undertaken using RevMan 5.3 software to calcueen them . Resultseen them .For comparability, we evaluated \u2018high\u2019 versus \u2018low\u2019 immune cell infiltration/expression across studies. The HR and 95% CI from any study reporting associations for \u2018low\u2019 versus \u2018high\u2019 expression were, therefore, inverted prior to inclusion in meta\u2010analyses or summary tables. Results reported separately for survival associated with immune cell infiltration of specific tumour areas (e.g. stroma and tumour core) were combined by meta\u2010analysis prior to inclusion in the main meta\u2010analysis for specific variables. Publication bias was determined by assessing the symmetry of a funnel plot. Subgroup analysis based on the site of tumour was planned but a lack of relevant studies precluded this.This systematic review summarises previously published data and does not include new human data or tissue that require ethical approval and consent. The authors assume that the studies reviewed were conducted after ethical approval and consent, and in accordance with the Declaration of Helsinki.This manuscript does not contain any individual person's data. All data reported are found in the literature as cited in the text.The PRISMA flowchart of studyTwenty\u2010six studies , 35, 36 et al [et al [et al [et al [The majority of studies constructed a TMA using tissue cores from the PDAC tumour core or invasive front; however, the two studies by Diana et al , 22 usedl [et al only incl [et al . All othl [et al used patl [et al , 51, 52 l [et al was the l [et al . 
Zhang el [et al used a cet al [Inclusion of patients based on disease stage was diverse. Three studies included stage I\u2013II , 28, 29,et al and Sugiet al only incet al [et al [et al [All participants in the Nejati et al study anl [et al and Sugil [et al were trel [et al , 34, 36 l [et al , 20, 30 et al [et al [The REMARK criteria were useet al was the l [et al and Balal [et al addresseThe NOS was used to determine risk of bias and the results are summarised in supplementary material, p =\u20090.99) in their PDAC tumour specimens. Observed heterogeneity was high .Four studies , 29, 34 p =\u20090.28). There was moderate heterogeneity between studies . Four studies [p\u2009\u2264\u20090.001), with no heterogeneity between the four studies .Six studies , 25, 32 studies , 25, 28 p =\u20090.15) in PDAC . Six studies [p\u2009\u2264\u20090.001) in PDAC in comparison with low infiltration, with little evidence of heterogeneity Figure .et al [et al [p =\u20090.45). Liu et al [p\u2009\u2264\u20090.001). There was no association between high versus low intra\u2010epithelial CD8 infiltration and DSS in the Liu study [p =\u20090.67).Castino and colleagues and Liu et al measuredl [et al found noiu et al found loiu study in univaet al [et al [p =\u20090.15). Lohneis and colleagues [p =\u20090.01) and DFS with high compared to low CD8 infiltration in the stromal area of PDAC tumours. The magnitude of effect estimates reported are broadly similar to that found in the meta\u2010analysis for DFS.Tewari et al and Lohnet al investigl [et al found nop =\u20090.07). There was moderate heterogeneity between studies . Only two studies [et al [p =\u20090.70). The Liu et al study [p =\u20090.01) but no association between high FoxP3 in the stroma and DFS .Four studies , 24, 26 studies , 26 inves [et al found nop =\u20090.93) or stromal PDAC tumour compartments and DSS when compared to low FoxP3.Liu and colleagues also invp =\u20090.46) with low heterogeneity between studies .Six studies , 35, 36 p =\u20090.17: islet HR 1.38, 95% CI 0.8\u20132.38; p =\u20090.25). Diana et al [et al [p =\u20090.91 and HR 0.83, 95% CI 0.64\u20131.08; p =\u20090.16 respectively).In univariate analysis, Hu and colleagues found nona et al and Mahal [et al found noet al [P value from univariate analysis but found no difference in DSS or DFS between high and low CD68 levels at either the tumour core or periphery.Yoshikawa et al stained p =\u20090.04). Heterogeneity between studies was moderate .Five studies , 30, 36 p =\u20090.02) but there was no difference in the levels identified in tumour islets on univariate analysis [Hu compared high and low CD163 expression in the tumour stroma or islets with OS . Higher =\u20090.13) .et al [p\u2009\u2264\u20090.001) and DFS with high compared to low levels of CD204 at the periphery of PDAC tumours. Likewise, poorer OS and DFS was associated with high CD204 infiltration of the neural plexus in PDAC tumours when compared to low levels. Yoshikawa and colleagues [p\u2009\u2264\u20090.001) and DFS in high versus low levels of CD204 infiltration at the periphery of PDAC tumours. There was, however, no difference in DSS when high and low levels of CD204 at the tumour core were compared .Two studies , 35 inveet al found pop =\u20090.004). However, Wang et al [p\u2009\u2264\u20090.001).Two studies , 28 inveet al [p =\u20090.003).Kurahara et al was the p =\u20090.24). Overall heterogeneity between studies was low .Five studies , 28, 34 et al [p =\u20090.63). 
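As described in the methods above, estimates published as 'low' versus 'high' infiltration were inverted so that every study entered the meta-analysis as 'high' versus 'low'. That harmonisation is simply a reciprocal transformation of the hazard ratio and its confidence limits; the brief sketch below uses a hypothetical example value and is included only to make the direction convention explicit.

```python
def invert_hr(hr, ci_low, ci_high):
    """Convert a 'low vs high' hazard ratio into 'high vs low'.

    Taking the reciprocal of the HR also swaps and inverts the confidence
    limits; the p value of the comparison is unchanged.
    """
    return 1.0 / hr, 1.0 / ci_high, 1.0 / ci_low

# Hypothetical study reporting low-vs-high HR 1.60 (95% CI 1.10-2.33)
print(invert_hr(1.60, 1.10, 2.33))   # ~ (0.63, 0.43, 0.91) for high vs low
```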
When CD20 infiltration was measured in areas of aggregated immune cells (described as \u2018tertiary lymphoid tissue\u2019), improved DSS was noted in the highest quartile of CD20 infiltration when compared to the lowest . Wang et al [p =\u20090.18).Castino et al investigng et al found noet al [p =\u20090.87). Using CD15 as a marker, Wang et al [p =\u20090.01).Two studies , 28 inveet al used chlet al [p =\u20090.004).Wang et al determinp =\u20090.19). There was low heterogeneity between studies .Four studies , 24, 26 et al [p =\u20090.03).Tessier\u2010Cloutier et al was the et al [p =\u20090.05) with high expression of programmed cell death 1 (PD\u20101) when compared to low in PDAC. There was no association between PD\u20101 expression and DFS .Diana et al reportedp\u2009\u2264\u20090.001) and DSS when the highest and lowest density of immune cell infiltration were compared. Wartenberg et al [p\u2009\u2264\u20090.001) and DFS .Tahkola and colleagues sought trg et al describeet al [p =\u20090.005: HR 3.06, 95% CI 1.26\u20137.44; p =\u20090.014 respectively) with a high compared to low FoxP3:CD8 ratio. Fukunaga et al [p =\u20090.03).Hwang et al assessedga et al reportedIn this systematic review we sought to summarise all of the existing studies of prognostic IHC biomarkers in PDAC, and to perform a meta\u2010analysis in relation to immune cell infiltration and prognosis. A total of 1101 articles were identified relating to the description of tissue based prognostic biomarkers in PDAC, and these investigated over 700 individual biomarkers. The vast majority were only assessed in a single paper and this is in line with previous criticisms of studies investigating prognostic biomarkers; no systematic approach to the discovery of novel markers and little external validation , 54. MetOur findings in relation to CD4 and CD8 are in keeping with the prognostic benefit of these immune cell types observed in other tumours , 55. TraPooled analysis of studies investigating the association between CD163 and prognosis found that patients with increased tumour infiltration by CD163 macrophages had shorter OS in PDAC. This is in keeping with studies which found that increased infiltration of CD163 was associated with poorer OS in hepatocellular carcinoma, triple negative breast cancer and follicular lymphoma , 59, 60.Meta\u2010analysis of the studies investigating FoxP3 and CD20 cell infiltration and prognosis in PDAC did not find any association with OS and this conflicts with findings in other cancers. In ovarian tumours, high FoxP3 T\u2010cells have been associated with a reduction in Th1 response and worse prognosis . Studiespost hoc changes of protocol were documented to ensure transparency.This review has several strengths. First, it is the most comprehensive review undertaken to date regarding tissue based prognostic biomarkers in PDAC in terms of breadth of search strategy and number of articles included. The quality and risk of bias in the studies was determined using validated tools, the REMARK guidelines and NOS Our review was restricted to articles published between 1998 and 2018. The benefit of gemcitabine in advanced PDAC was determined in 1997 and the et al [The main weakness of this review is the variation in the patient populations and tumour analysis methods in the included studies. Results from the two studies by Diana et al , 22, usiet al . Studieset al , 71. It et al . 
This vaet al [Three studies included patients treated with neoadjuvant chemotherapy prior to resection , 20, 31.The majority of studies included patients and tissue from a single centre only, although two , 15 usedThe REMARK guidelines recommend the reporting of univariable and key multivariable analysis in terms of HR and 95% CIs and we sWhilst no prognostic biomarker has yet been validated for clinical application in PDAC patients, our review has identified over 700 potential markers already investigated and the methodological variation preventing widespread applicability. This review should therefore prove a catalyst for the wider validation of the immune markers already published, to determine their utility as independent prognostic markers in PDAC and enable translation to routine clinical practice. The difficulty of reproducing results in different institutions has been highlighted , furtherWhilst this review focussed on the association between immune infiltration and prognosis, there is potential for the markers identified to be used in the selection of checkpoint inhibitors and other immunotherapies. However, to date no immunotherapies or companion diagnostics have been approved for the treatment of PDAC. Although PD\u2010L1 expression was not associated with prognosis in our pooled analysis, given the success of PD\u2010L1/PD\u20101 inhibitors in other cancers we believe further study and validation of this biomarker should be a priority. The recent FDA approval of the PD\u2010L1 inhibitor pembrolizumab in advanced cancers with mismatch repair deficiency, regardless of site of origin, highlights the benefit of treatment stratification based on the molecular characteristics of tumours. It is, therefore, imperative that a robust, validated biomarker is developed to ensure PDAC patientsGiven the widespread use of IHC, the identification of a validated prognostic marker using this technique is an attractive prospect which could be readily applied to current tissue analysis. The heterogeneity of PDAC tumour tissue is a challenge and any sampling protocol would have to be reproducible and enable the collection of representative tissue. \u2018Cut points\u2019 to determine \u2018high\u2019 versus \u2018low\u2019 infiltration would also need to be standardised for the marker to be used in routine clinical practice.In keeping with other tumour types, high CD4 and CD8 infiltration is associated with improved DFS in PDAC and increased expression of CD163 macrophages is adversely prognostic. Despite the publication of over 700 IHC prognostic biomarkers in PDAC, none has been sufficiently externally validated to enable use in clinical practice. Further high\u2010quality research is required to focus on reproducibility of prognostic IHC biomarkers, particularly CD4, CD8 and CD163, in PDAC. In order to achieve comparable results and validate these markers, we suggest a standardised, international consensus for the investigation and validation of prognostic biomarkers in PDAC is developed. In line with other cancers, optimal sampling protocols for TMA analysis of PDAC are required to mitigate the effects of morphomolecular heterogeneity , 84.AJM acquired, analysed, and interpreted the data, and drafted the manuscript. HGC designed the review, acquired and interpreted the data, and revised the manuscript. RSM and DIJ acquired the data and revised the manuscript. PJK and MAT analysed the data and revised the manuscript. 
RCT designed the review, acquired and interpreted the data, and reviewed the manuscript. All of the authors approved the final version for publication and agree to be accountable for all aspects of the work.Search termsFigure S1. Funnel plot of included studiesTable S1. List and frequency of prognostic biomarkers investigated in included studiesTable S2. Characteristics of included studiesTable S3. REMARK scoreTable S4. Newcastle\u2013Ottawa scoreClick here for additional data file."} {"text": "The wild giant panda in their habitat in Liziping National Nature Reserve, Shimian. The copywight belongs to Liziping National Nature Reserve, Shimian. Clostridium and vancomycin resistance genes) when compared to the other wild and captive populations studies, which was supported by previous giant panda whole\u2010genome sequencing analysis. In this study, we provide an example of a potential consensus pattern regarding host population genetics, symbiotic gut microbiome and ARGs. We revealed that habitat isolation impacts the ARG structure in the gut microbiome of mammals. Therefore, the difference in ARG composition between giant panda populations will provide some basic information for their conservation and management, especially for captive populations.The rise in infections by antibiotic\u2010resistant bacteria poses a serious public health problem worldwide. The gut microbiome of animals is a reservoir for antibiotic resistance genes (ARGs). However, the correlation between the gut microbiome of wild animals and ARGs remains controversial. Here, based on the metagenomes of giant pandas , we investigated the potential correlation between the constitution of the gut microbiome and the composition of ARGs across the different geographic locations and living environments. We found that the types of ARGs were correlated with gut microbiome composition. The NMDS cluster analysis using Jaccard distance of the ARGs composition of the gut microbiome of wild giant pandas displayed a difference based on geographic location. Captivity also had an effect on the differences in ARGs composition. Furthermore, we found that the Qinling population exhibited profound dissimilarities of both gut microbiome composition and ARGs (the highest proportion of For example, at the phylum level was highest in captive giant panda populations . In wild giant pandas, the mean abundance of Clostridium (Firmicutes) was highest in the QIN population (0.42\u00a0\u00b1\u00a00.35), and the mean abundance of Pseudomonas (Proteobacteria) was highest in non\u2010QIN populations . Additionally, the mean abundance of Pseudomonas (Proteobacteria) was also highest in the XXL population of wild red pandas (0.61\u00a0\u00b1\u00a00.38). However, the new finding here was the dissimilarity in the ARG composition between the captive and wild giant panda gut microbiome.Based on the meta\u2010analysis of the giant panda metagenome data, we confirmed the previous findings (mostly using the 16S rRNA data) on the difference in the gut microbiome composition (Wei P\u00a0=\u00a00.0001). This difference in ARG composition between the groups was larger than that in the gut microbiome community, with two clear clusters for ARGs . As such, our results revealed a clear divergence between QIN and non\u2010QIN populations (Qionglai and XXL populations) Fig.\u00a0C,F. Notans) Fig.\u00a0. In regans) Fig.\u00a0, the meaium Fig.\u00a0. The meaium Fig.\u00a0. And thenia Fig.\u00a0. 
We deduet al., et al., Clostridium strains , with the QIN and non\u2010QIN populations diverging approximately 0.3 million years ago and vancomycin resistance (e.g. in QIN mountain population).Our study revealed that captivity might lead to a special combination of ARGs as the treatment of captive individuals during normal health management affects the composition of ARGs. Thus, the difference in the ARG structure among different mammal groups in this study would provide some basic information for their management and conservation, especially for captive populations. The normal treatment of the mammals in the zoos (e.g. CA) should consider their potential tetracycline and macrolides resistance genes of the gut microbiome. Furthermore, maintaining the health of the captive giant panda population is important for the translocation of some small and isolated wild populations. The management of the captive giant panda population should think of the antibiotics resistance by et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., Considering the phylogeny position of the giant panda (belonging to Carnivora order), we collected the published metagenomes (raw data) of giant pandas and other carnivorans Table\u00a0. The 49 et al., et al., et al., et al., et al., et al., Raw reads were filtered using Trimmomatic of gut microbiome communities and ARGs per metagenome was transformed to relative abundance using STAMP regarding the ARG types and subtypes based on the relative abundance of bacteria genus for the annotated ARGs and the relative abundance of ARG types and subtypes in all annotated ARGs.et al., The Jaccard distance for gut microbiome genus and ARGs (types and subtypes) relative abundance was used to generate non\u2010metric multidimensional scaling (NDMS) in PAST3 of main ARG subtypes in the gut microbiome of giant pandas.Table S4. The mean abundance (%) of the dominant putative Clostridium species in the gut microbiome of giant pandas.Fig. S1. The distributions of dominant ARG subtypes and their abundances in the total annotated ARGs subtypes in the metagenomes.Fig. S2. The distributions of ARG subtypes and their abundances in the total annotated ARGs subtypes in giant panda and red panda metagenomes.Click here for additional data file."} {"text": "Stewart et al. were amo [et al. publishe [et al. . As ther [et al. . From a [et al. .et al. [The recent publication by Guan et al. from theSurgery is generally very slow to scrutinize the rapid progression of new surgical innovations until Level-1 evidence such as randomized control trials (RCTs) have shown them to be effective. However, it is challenging to evaluate a new surgical procedure in an RCT due to many potential practical problems: recruiting patients may be difficult, as they may refuse to be randomized; measuring appropriate outcomes may require years of follow-up; there may be differences in the surgical skills of the techniques being analysed; therefore, analysis should take account of how experienced each surgeon is in performing the new operation. Ideally, randomization should begin as soon as it is feasible, as this would enable the researchers to monitor the learning curve. The latter was emphasized in the recent ROLARR randomized clinical trial, which compared the effect of robotic vs laparoscopic surgery for rectal cancer. 
The median laparoscopic cases performed in the laparoscopic arm was 91 vs 50 in the robotic arm and, despite a recruitment of 471 patients, there was no difference in the primary endpoints of conversion to open laparotomy and positive rate of circumferential resection margin. A subsequent publication exploring and adjusting for potential learning effects showed that the initial ROLARR analysis was confounded by the learning effect and that the estimated odds ratio of conversion in the robotic arm was significantly lower after \u223c70 cases were performed when compared with the laparoscopic arm with a median of 91 cases [et al. [et al. [P\u2009=\u20090.005). There were no other statistical differences in the other standard parameters measured. Ma et al. [To date, there have been only three Level-1 publications on NOSES for colorectal procedures\u2014two RCTs , 7 and oet al. performea et al. meta-anaP\u2009=\u20090.639) [So, from a critical point of view, the concept of NOSES makes sense in the avoidance of incision-related morbidity as a goal of modern minimally invasive colorectal surgery. The approach also appears to offer significant short-term benefits with no immediate adverse complications. There is still the concern of the long-term oncological implications of NOSES, although, reassuringly, a large prospectively collected study of 844 patients (163 NOSE and 681 CL) who underwent curative surgery for rectal cancers showed a combined 5-year DFS rate for all stages of 89.3% in the NOSE group and 87.3% in the CL group (et al. [Surgeons who typically perform laparoscopic-assisted colectomies would be faced with a steeper learning curve when adopting NOSES. Intracorporeal anastomosis is a prerequisite skill for those adopting NOSES. Furthermore, specimen extraction via the vagina requires a posterior colpotomy, which will be a new skill for most colorectal surgeons. These technical challenges are amplified by a lack of standardization of the technique. Guan et al. , in theiet al. standardNone declared."} {"text": "Moreover, in addition to the two studies mentioned in our paper [The references in the Correction notice do not provide direct support to the points made in the text. The authors have supplied updated references. An increasing number of studies have supported the xenomiR hypothesis in recent years. Li et al. reportedin vitro . Hou et in vitro showed tur paper , one morur paper .Arabidopsis cells transported small RNAs into the fungal pathogen B. cinerea, which suppressed its pathogenicity by silencing fungal virulence genes [Not limited to mammals, plant-derived xenomiRs also exist and function in insects and bacteria. Zhu et al. reportedce genes .O. basilicum miRNAs from expressed sequenced tags using computational approaches, and predicted their targets and potential functions on human. Using bioinformatics tools and databases, Erg\u00fcn [H. perforatum flower dietetically absorbed to define potential biomarkers for prostate cancer. Yu et al. [A number of studies have investigated plant derived xenomiRs using computational approaches. Patel et al. identifis, Erg\u00fcn investigu et al. developeu et al. analyzedThese studies not only supported the xenomiR hypothesis from different approaches and species, but also started to explore the function, mechanism and medicinal value of plant derived xenomiRs. More related researches were well-reviewed in two recent papers , 16.Arabidopsis samples (S8 Table). 
The authors state that they used \u201cindependent samples Student\u2019s t-test\u201d when comparing abundances between the plant miRNA in human samples and human miRNAs in Arabidopsis samples. Here, the denary logarithm of plant miRNA abundances of human samples and human miRNAs of Arabidopsis samples were assumed to follow Gaussian distributions.The authors would also clarify the type of t-test used in the comparison of abundance values of plant miRNAs in human samples (S5 Table) and those of human miRNAs in When the abundances of plant derived miRNAs in human samples were analyzed, only the top 24 most abundant plant miRNAs (S3 Table) with abundance more than 0.05 were used, which include most of the plant miRNAs reported by other studies. In many samples, the abundance values of these 24 types of miRNA were 0, and none of these were not excluded. For computational convenience, a very small pseudo-abundance was added to all abundance values . The au"} {"text": "Exergy analysis, which is based on the second law of thermodynamics, is a potential tool to identify the sources, magnitude and location of the irreversibility in energy systems. Exergy analysis can assist researchers, engineers and students for system design, analysis, assessment, optimization, and performance evaluation of various energy systems. In this Special Issue, we have tried to attract researchers to submit their interesting and novel research works related to work availability, exergy and exergo-economic analyses. 18 high quality papers after several rounds of reviews were accepted and published in this Special Issue. A brief overview and summary of the individual contributions are given in the following:The first contribution was by Hooshmand et al. who inveP\u00e9rez-Garc\u00eda et al. studied 2 Removal Assembly in Manned Spacecraft was carried out by Pang et al. [Sciubba and Zullo publisheg et al. . The purg et al. studied 2O3 nanofluids as heat transfer fluids. A 1D numerical model has been extensively used to quantify the exergy performance of the system. Fontalvo et al. [An Improved System for utilizing low-temperature waste heat of flue gas from coal-fired power plants was carried out by Huang et al. . In thiso et al. studied Energy and exergy assessment of a pilot parabolic solar dish-Stirling system was carried out by Gholamalizadeh and Chung . The locDorosz et al. studied possible liquefied natural gas (LNG) exergy recovery systems for transportation since in most LNG-fueled vehicles, exergy of LNG was destroyed during the regasification process . Their aHuang et al. compared the integration of three retrofit concepts for waste heat recovery via ORC, in-depth boiler\u2013turbine integration, and coupling of both for maximizing the system-level heat . Their cSpanghero et al. applied the first and second laws of thermodynamics to the human body to estimate the quality of the energy conversion during muscle activity . AuthorsJing et al. investigated building cooling based on the exergo-economics through solar refrigeration, i.e., solar/natural gas-driven absorption chiller (SNGDAC), solar vapor compression\u2013absorption integrated refrigeration system with parallel configuration (SVCAIRSPC), and solar absorption-subcooled compression hybrid cooling system (SASCHCS) . AuthorsSince ORC is known as a promising technique to exploit waste heat from Internal Combustion Engines (ICEs), Liu et al. investigated the waste heat recovery systems that could be designed based on engine rated working conditions . 
AuthorsThe investigation on the impact of muscle and fat percentages on the exergy behavior of the human body in different environmental conditions was conducted by Garcia et al. . The focCastro et al. investig"} {"text": "The journal retracts the article \u201cPlatypnea-Orthodeoxia Syndrome: Multiple Pathophysiological Interpretations of a Clinical Picture Primarily Consisting of Orthostatic Dyspnea\u201d by De Vecchis, R., et al. .Following publication, concerns were brought to the attention of the publisher regarding an alleged redundant publication with \u201cPlatypnea-orthodeoxia Syndrome: Orthostatic Dyspnea and Possible Pathophysiological Substrates\u201d by De Vecchis, R., et al. . Adhering to our complaints procedure, an investigation was conducted that confirmed the existence of a redundant publication and the article is therefore retracted.This retraction was approved by the Editor-in-Chief of Journal of Clinical Medicine.The authors agreed to this retraction."} {"text": "Smoking affects not only smokers themselves, but also the people around them. 700 million children are exposed tosecond hand tobacco worldwide. One of the adverse effects of being a passive smoker is oral pigmentation. This study was conducted to evaluate the association between smoking of a parent at home and oral pigmentation in children, and the characteristic factors affecting that. In this retrospective cohort study, 140 healthy children aged 4 to 10 (mean age= 6.68\u00b11.60), 70 with smoker parentand 70 without smoker parents, were examined for oral pigmentation. Environmental factors were evaluated by asking theparents to fill a questionnaire. Data were analyzed using Chi-square test, Fisher's exact test, Logistic regression, and Spearman scale.p= 0.0001). Spearman\u2019s correlationshowed parents' duration of cigarette smoking and the number of cigarettes per day could meaningfully affect the severity of oralpigmentation (R=0.329). The study did not find a statistical relationship between oral pigmentation in passive smoking and gender or house area. There was a meaningful relationship between having a smoker parent and oral pigmentation ( Children exposed to secondhand tobacco are at more risk for oral pigmentation. Its severity depends on duration of cigarette smoking and the number of cigarettes per day. Smoking affects not only the smokers themselves but also the people around them. According to world health organization (WHO), 700 million children (40% of children) are exposed to second hand tobacco worldwide . StudieThe purpose of this study was to survey the association between smoking of parents at home and oral pigmentation in children, concerning the affecting characteristic factors.et al. study (P1-P2)2A total of 70 children (4.5 to 10 years old) referred to the Pediatric Department of Yazd Dental School (from October to December 2017) whom at least one of their parents or caregivers were smoker at home were selected as the case group. In our study, a person who had consumed at least ten cigarettes during the last month in the presence of their child was considered as smoker. This data was acquired by asking the parents. Seventy children attending pediatric department of Yazd Dental School were selected as the control group from those with non-smoker parents. The control group had the same age range as the case group. The inclusion and exclusion criteria were applied to both groups as follows. 
All children were psychologically and physically healthy (based on questionnaires) and did not take any medications, such as Bismuth and so on, which would have effects on the pigmentation of periodontium.In this study, the inner side of arm was chosen to determine the skin color . MY creA form was designed containing information on the study and asking about the parents' consent on participating. After the consent was given, a questionnaire including questions on demographic information (child\u2019s age and gender), parents\u2019 education and child\u2019s medical history was given to parents or caregivers. The surface area of their house was asked with the two options, below and over 100 square meters. This variable was analyzed only in the case group to see how the house area affects the children who have a smoker parent. During the sampling process, the control group was selected to be similar to cases on the distribution of age, gender, and skin color.The size of pigmentations was measured by a graded periodontal probe and was documented along with their locations in oral cavity . A classification was used to determine severity of the lesions defined as (1) mild (0.5-1cm), (2) moderate (1-2cm), and (3) severe (more than 2cm or presence of multiple sites of pigmentation).Data were collected, coded, and entered to the computer. They were analyzed using SPSS23 software. Qualitative variableswere analyzed via Chi-square test. Pigmentation severity was considered an ordinal variable; therefore, Spearman correlation was used for its analysis. A sample of 140 children with an average age of 6.68\u00b1 1.60 years was randomly selected. Out of the 140 samples examined, only two had mild allergies and did not use any medications. These two samples were excluded and replaced so that the effect of systemic diseases variable was excluded from the study. After reviewing the data, the following results were obtained:Oral mucosal pigmentation, regardless of the effect of passive smoking on children, was observed in 50.7% of the subjects. 20.7% of them had mild pigmentation, 20.7% had moderate pigmentation and 3.9% had severe pigmentation. Therefore, the least frequent type of pigmentation was severe .p= 0.280).The sample was consisted of 69 girls and 71 boys. There was no significant relationship between gender and the presence of pigmentation (p= 0.0001). Relative risk of passive smo--king for the occurrence of oral pigmentation in passive smoker children was 2.10 (C.I. = 95%: 1.45 \u2013 3.05).The case group consisted of 70 children with a smoker parent, all of whom their fathers were the smoker. Of these 70 patients, 31.4% had no pigmentation, 34.3% had mild pigmentation, 20% had moderate pigmentation, and 14.3% had severe pigmentation. In total, 48 had oral pigmentation. The control group consisted of 70 children without a smoker parent or caregiver. A total of 67.1% had no pigmentation, 7.2% had mild pigmentation, 21.4% had moderate pigmentation, and 4.3% had severe pigmentation. In total, 23 children with oral pigmentation were detected in the control group. There was a significant statistical difference between the presence of oral pigmentation and having a smoker father . Data are presented in There was no statistically significant difference, and no meaningful relationship between house size and the occurrence of children oral pigmentation ; p= 0.2.p= 0.001). Spearman's correlation coefficient for the age variable was 0.282. 
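The relative risk reported above can be reproduced, to within rounding, from the group counts given (48 of 70 children with a smoker parent versus 23 of 70 controls showing pigmentation). The sketch below uses the usual log-scale normal approximation for the confidence interval; the published figures of 2.10 (95% CI 1.45–3.05) may differ marginally from this approximation depending on the exact method implemented in SPSS.

```python
import math

def relative_risk(a, n1, b, n2):
    """Relative risk with a 95% CI (log-scale normal approximation).

    a / n1: children with pigmentation / total in the exposed group
    b / n2: children with pigmentation / total in the unexposed group
    """
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# 48/70 exposed vs 23/70 unexposed children had oral pigmentation
rr, lo, hi = relative_risk(48, 70, 23, 70)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # ~ 2.09 (1.44-3.02)
```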
The relationship between age and oral pigmentation was statistically significant . Spearman correlation coefficient of duration of smoking (the number of years the parent has been smoking) was 0.371. Therefore, duration of consumption was directly related with the pigmentation. This variable had a significant relationship with children\u2019s oral pigmentation (p= 0.0001).Spearman's correlation coefficient in relation to this variable was 0.358. Therefore, it also had a direct relationship with the two other variables. There was a statistically significant difference and this variable was related to pigmentation (et al. [ et al. [ et al. [ et al. [ et al. [ This study aimed to examine the relationship between smoking of a parent at home and oral pigmentation in children and the characteristics affecting that. Studies on effects of passive smoking on oral pigmentation have been conducted on different target groups. Moravej-Salehi et al. have in et al. , Haniok et al. , Yasear et al. , Sridha et al. , similaet al. [ et al. [ Hanioka et al. and Yas et al. also maet al. [ et al. [ As mentioned before, by using Spearman\u2019s correlation analysis, a positive relation between oral pigmentation and the children\u2019s exposure to second hand tobacco was found. Some studies on effects of active smoking have shown the same results , 13 buet al. and Han et al. have noet al. [ et al. [ et al. [ et al. [ et al. [ Using MY cream, skin complexions were matched and subjects with darker skins were not included in the study. Haji-Fattahi et al. , Morave et al. and Sri et al. employe et al. and Yas et al. have noet al. [ et al. [ In this study, lesions were categorized by size. Periodontal probe was used to measure the size of the lesions. The examinations were conducted by only one clinician to ensure the objectivity of study. Madani et al. classif et al. also me et al. - 12. N et al. , 11.et al. [ Despite this study, Moravej-Salehi et al. were abet al. . It must be mentioned that smoking has different adverse effects on the consumer\u2019s and their families' health. This study has investigated just one of the many harmful effects of children's passive smoking.Children exposed to secondhand-tobacco regardless of their gender are at more risk to develop oral pigmentation. Investigating factors affecting the severity of pigmentations, this study found that parents' duration of cigarette smoking and the number of cigarettes per day have a meaningful relationship with their child's oral pigmentation. As for the extent of house area, this study could not find a relationship. In conclusion, prohibition of smoking in the presence of children might prevent oral pigmentation as well as other adverse effects of being a passive smoker."} {"text": "The 2020 International Conference on Intelligent Biology and Medicine (ICIBM 2020) provided a multidisciplinary forum for computational scientists and experimental biologists to share recent advances on all aspects of intelligent computing, informatics and data science in biology and medicine. ICIBM 2020 was held as a virtual conference on August 9\u201310, 2020, including four live sessions with forty-one oral presentations over video conferencing. In this special issue, ten high-quality manuscripts were selected after peer-review from seventy-five submissions to represent the medical informatics and decision making aspect of the conference. In this editorial, we briefly summarize these ten selected manuscripts. 
These ten articles went through the second round of peer-review and revision in the BMC Supplement submission system before their final acceptance to this ICIBM 2020 supplement issue. Of note, eight articles [The 2020 International Conference on Intelligent Biology and Medicine (ICIBM 2020) provided a multidisciplinary forum for computational scientists and experimental biologists to share recent advances on all aspects of intelligent computing, informatics and data science in biology and medicine. It was organized and hosted by the International Association for Intelligent Biology and Medicine (IAIBM), the University of Pennsylvania, and the Temple University on August 9\u201310, 2020. The conference was originally scheduled to be located in Philadelphia and was eventually transformed into a virtual conference held online due to the COVID-19 pandemic. The conference received seventy-five full-length original manuscript submissions. Each manuscript went through a rigorous review process and was peer-reviewed by at least three technical program committee members. Forty-one submissions were accepted and presented in four live sessions over Zoom, and the conference attracted ~\u2009300 attendees. In this special issue, ten high-quality manuscripts were selected to represent the articles are inclPredicting mortality in critically ill patients with diabetes using machine learning and clinical notes\u201d, Ye et al. [In \u201ce et al. investigNatural Language Processing (NLP) tools in extracting biomedical concepts from research articles: a case study on autism spectrum disorder\u201d, Peng et al. [In \u201cg et al. performeStress detection using deep neural networks\u201d, Li et al. [In \u201ci et al. applied Annotation and extraction of age and temporally-related events from clinical histories\u201d, Hong et al. [In \u201cg et al. reportedSURF: identifying and allocating resources during out-of-hospital cardiac arrest\u201d, Rao et al. [In \u201co et al. studied In \u201cUtilizing deep learning and graph mining to identify drug use on twitter data\u201d, Tassone et al. proposedAn interpretable risk prediction model for healthcare with pattern attention\u201d, Kamel et al. [In \u201cl et al. developeComparing different wavelet transform on removing electrocardiogram baseline wanders and special trends\u201d, Chen et al. [In \u201cn et al. addresseUnsupervised phenotyping of sepsis using non-negative matrix factorization on temporal trends from a multivariate panel of physiological measurements\u201d, Ding et al. [In \u201cg et al. performeIdentifying risk factors of preterm birth and perinatal mortality using statistical and machine learning approaches\u201d, Kothiya et al. [In \u201ca et al. studied Most of the studies included here were facilitated by and conducted with the valuable data resources either available in the open science domain or accessible to the authors. In five of these articles , 9, 10, In these studies, the authors employed informatics and machine learning methods to address various health topics, including diabetes , autism In summary, we envision a growing interest of medical informatics and machine learning approaches to address the pressing problems in health applications. 
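Many of these contributions rest on standard machine-learning building blocks. As one concrete illustration, the sepsis phenotyping study summarized above applies non-negative matrix factorization (NMF) to temporal trends from a panel of physiological measurements; a generic scikit-learn sketch of that idea is given below. This is not the authors' code, and the matrix dimensions, feature scaling and number of components are arbitrary placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Hypothetical non-negative matrix: 200 patients x 30 temporal-trend features
# (e.g. slopes and summary statistics of vital signs and labs, scaled to [0, 1])
X = rng.random((200, 30))

model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)    # patient loadings on 4 latent phenotypes
H = model.components_         # how each phenotype weights the input features

phenotype = W.argmax(axis=1)  # assign each patient to the dominant phenotype
print(np.bincount(phenotype))
```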
We anticipate that future ICIBM events will continue serving as a forum for researchers to exchange ideas, data, and software, and speed up the development of intelligent computing methods for data-driven discovery in biology and medicine."} {"text": "In addition, transcriptional rewiring through various epigenetic signals has been observed in plants upon multi\u2010generational exposure to abiotic stresses that, in turn, activates multiple dehydration stress\u2010responsive genes light\u2010, salinity\u2010 and hypoxia\u2010resistance when independently adapting to the specific intertidal environments. However, such stressful conditions can activate transposable elements . This is, in fact, a more general pattern: epigenetic alterations following biotic stress are frequently observed around genomic regions containing defence\u2010related genes and their transcriptional activation upon stress is often mediated via neighbouring TEs , can lead to resistance to Gibberella stalk rot in maize. Additional relevant case studies of plant\u2013animal, plant\u2013microbe and plant\u2013plant interactions are critically discussed in Alonso et al. suggest that a better understanding of the epigenetic responses to environmental (i.e. both abiotic and biotic) stress is key to understanding rapid plant adaptation, plant immunity and for developing sustainable strategies for crops\u2019 improvement in the face of global warming. Indeed, the spontaneous and fluctuating nature of epimutations was suggested to enhance the adaptation potential to varying abiotic stimuli , are prevalent in plants and, as genomic stressors, can trigger systemic alterations across the epigenetic landscape. The epigenetic remodelling triggered helps to re\u2010establish the functional and structural balance of the affected genomes (e.g. Song & Chen, , et al. report aet al., et al., et al., et al., cis\u2013trans interactions and homoeologue expression are discussed in a recent paper by Hu & Wendel (The remodelling of the epigenetic landscape can be seen as a consequence of a phenomenon long depicted as \u2018genomic shock\u2019 (McClintock, & Wendel . In addi& Wendel .et al., Epigenetic mechanisms are also major players during plant development, and have a role in shaping phenotypic plasticity (Gallusci et al., Arabidopsis (Benoit et al., et al. (et al., et al. (Histone modifications also appear involved in fine\u2010tuning different phases of plant development. They can, for example, influence flowering time (Crevillen et al. uncovers et al., . Startin, et al. show tha\u2018Our understanding of the control and function of structural modifications to DNA has, in recent years, been complemented by developmental and ecological perspectives of epigenetics.\u2019Finally, epigenetic mechanisms involving small RNAs, in particular siRNAs, are relevant during plant reproduction, for example during a phase of global reactivation of TEs in gametes. During this stage an intercellular movement of siRNAs between companion cells and male gametic cells has been observed, and recent studies have elucidated the function of sperm\u2010delivered siRNAs during early seed development (reviewed by Wu & Zheng, et al., et al., et al., et al., et al., et al., Our understanding of the control and function of structural modifications to DNA (e.g. 
Law & Jacobsen,"} {"text": "Homo sapiens across the planet complete genomes submitted to the international Global Initiative on Sharing Avian Influenza Data (GISAID) database by early March 2020, to produce a snapshot of the beginning of the SARS-CoV-2 epidemic differ by more than 1,000 mutations. In their view, such a distant outgroup is unlikely to provide a reliable root for the network. We argue, on the contrary, that the bat virus is surprisingly conclusive, as shown by its stable rooting in cluster A despite incrementally increasing the epsilon \u201cfuzziness\u201d setting in the median-joining network algorithm as described in PNAS have gonMavian et al. are puzzMavian et al. refer toMavian et al. then misMavian et al. reproachMavian et al. state thFinally, Mavian et al. caution"} {"text": "This has not been adequately emphasized by Luyssaert et al. contains supplementary material, which is available to authorized users. In the original version of the paper, a key premise was that \u201cabout 75% of this reduction is expected to come from emission reductions and the remaining 25% from land use, land-use change and forestry\u201d, citing Grassi et al. [should not rely on forest management to mitigate climate change\u201d.A recent article by Luyssaert et al. analysesi et al. concludeThe original premise of Luyssaert et al. on the eIn this commentary, we discuss further several of the arguments by Luyssaert et al. , showing2/year \u2014offsetting about 7% of total EU GHG emissions, with rather stable values in the last 25\u00a0years [Considering that the current carbon sink in the EU LULUCF sector is about 300 Mt CO25\u00a0years , 7 , it corresponds to about 1% of the EU 2030 emission reduction target.\u00a0Therefore, contrary to the assumption of Luyssaert et al., almost all\u00a0of the EU mitigation effort in 2030 is expected to come\u00a0from emission reductions from non-LULUCF sectors and only a very small part directly from LULUCF.Furthermore, the EU climate legislation has capp2e/year [2e/year in 1990 , the indirect contribution of EU forest-based bioenergy to the EU 2030 emission reduction target would realistically add another 3% ((150\u00a0\u2212\u00a090)/2250).Forests may contribute to mitigation also indirectly, especially through the utilization of wood as an energy source in place of fossil fuels. When the harvesting of forest biomass for energy purposes is increased, a decrease in carbon stock is reported in the LULUCF sector whilst GHG emission savings appear in the energy sector. For the EU, these savings are currently estimated to be about 130 MtCO2e/year , relativ2 impact. Rather than emphasizing these crucial caveats, Luyssaert et al. [We fully share with Luyssaert et al. the viewt et al. used theIf the aim is to encourage countries to start considering biophysical effects in their policies, more emphasis should be put on seasonal and local impact of biophysical effects of forest cover change, including synergies and trade-offs with a carbon-oriented management, rather than on the net annual biophysical climate impact at EU level. These seasonal and local impacts are less uncertain and more relevant in the context of changes in diurnal temperature excursions and heat2 sink from forest management are counterbalanced by negative biophysical climate effects\u2014resulting in a \u201czero-sum\u201d climate outcome, could be interpreted as forest management not being important to fight climate change. We think that would be a wrong conclusion. 
In fact, the recent inclusion of forests into the EU 2030 economy-wide climate targets [2 sink, and no disincentive for a possible over-use of forest resources , which could drastically reduce the current CO2 sink.Irrespective of the high uncertainty of biophysical effects on climate, the argument by Luyssaert et al. , that ef targets represen2 impact. Therefore, in our view, the conclusion of Luyssaert et al. [2 sink from forest management at EU level are counterbalanced by negative biophysical climate effects is uncertain and premature. Furthermore, we show that the GHG mitigation contribution by forests towards EU 2030 climate objectives is expected to be small, but yet strategically important. Although the original mistake by Luyssaert et al. [In conclusion we argue that, while biophysical effects are clearly important on the local and seasonal climate, the net annual biophysical climate impact of forest management in Europe remains more uncertain than the net COt et al. that thet et al. on the et et al. . They alt et al. \u2014are serit et al. , and encAdditional file 1. The contribution of LULUCF to the countries' climate pledges made in Paris and, more specifically, the expected contribution of forests to meet EU 2030 the climate targets, including an analysis of forest-based bioenergy."} {"text": "The study on entropy generation and exergy analysis in Nanofluid flows started in 2010 ,4,5,6. FBecause of high interest toward this area, in 2016 and 2017 we decided to serve as guest editors for two special issues on nanofluids. The total number of articles that have been published in the two special issues are 21 articles with more than 200 citations. The high number of citations shows the significance of this topic for researchers. 2O3\u2013water nanofluid as the working fluid. It was found that total entropy generation increases with the increasing volume fraction of nanoparticles. In References [Here, we give a summary of papers published in these two issues. In the first paper, Kolsi et al. studied ferences ,10,11,12ferences modeled 2O3\u2013water nanofluids in a microchannel with flow control devices including cylinder, rectangle, protrusion, and v-groove. They concluded that protrusion devices are the best option to achieve minimum entropy generation. In another work, Xie et al. [Li et al. studied e et al. studied 2O3\u2013water nanofluid. Sheremet et al. [3O4\u2013water nanofluid in a microchannel heat sink with offset fan-shaped reentrant cavities. Rashidi et al. studied t et al. evaluatet et al. studied Qasim et al. 
simulateIn most of the studies mentioned above, it was concluded that adding nanoparticles leads to heat transfer enhancement, and on the other hand, entropy generation reduces with an increase in the volume concentration of nanoparticles.Investigate the effects of different thermophysical models on entropy generation rate;Use both two-phase mixture model and single-phase models and compare the results;Investigate new configurations;Investigate entropy generation in new application of nanofluids, as most of the present studies are limited to classic problems such as flow on sheets or inside cavities and ducts;Consider the prediction of entropy generation using soft computing approaches like neural network;Conduct a comparison between entropy generation rates of different nanoparticles and base fluids to recognize the optimum nanofluid from the second law of thermodynamics viewpoint.It is suggested for future studies that authors:"} {"text": "Highly correlated yet independently-derived reductions in productivity from sun-induced fluorescence, vegetative near-infrared reflectance, and GPP simulated by the Simple Biosphere model version 4 (SiB4) suggest a 130\u2013340 TgC GPP reduction in July\u2013August\u2013September (JAS) of 2018. This occurs over an area of 1.6 \u00d7 106 km2 with anomalously low precipitation in northwestern and central Europe. In this drought-affected area, reduced GPP, TER, NEE and soil moisture at ICOS ecosystem sites are reproduced satisfactorily by the SiB4 model. We found that, in contrast to the preceding 5 years, low soil moisture is the main stress factor across the affected area. SiB4\u2019s NEE reduction by 57 TgC for JAS coincides with anomalously high atmospheric CO2 observations in 2018, and this is closely matched by the NEE anomaly derived by CarbonTracker Europe (52 to 83 TgC). Increased NEE during the spring (May\u2013June) of 2018 largely offset this loss, as ecosystems took advantage of favourable growth conditions.We analysed gross primary productivity (GPP), total ecosystem respiration (TER) and the resulting net ecosystem exchange (NEE) of carbon dioxide (COThis article is part of the theme issue \u2018Impacts of the 2018 severe drought and heatwave in Europe: from site to continental scale\u2019. Decreased precipitation in combination with record-high temperatures led to strong reductions in soil moisture availability and decreased atmospheric humidity ,4. On thactivity ,6. The net al. [2 uptake from the atmosphere between 0.135\u20130.205 PgC yr\u22121 based on both land-based and atmosphere-based estimates. Luyssaert et al. [\u22121 for 2001\u20132005 following from a combination of atmospheric inverse, inventory and flux measurement-based approaches. Peters et al. [\u22121 on average in the period 2001\u20132007 using the atmospheric inverse system CarbonTracker Europe. Monteil et al. [\u22121 during the 2006\u20132015 period.The terrestrial ecosystems of Europe have been estimated to act as a net sink of carbon. The annual budget comprises a large seasonal cycle of summer uptake and winter loss that is highly sensitive to large-scale weather patterns, and therefore interannual variability is high. Janssens et al. estimatet et al. derive as et al. find a nl et al. use a coet al. [\u22121 source of carbon. Peters et al. [\u22121 over the year 2003, largely neutralizing the carbon sink. Vetter et al. [\u22121. Recently, Buras et al. [This small net European sink can reduce and even turn into a carbon source during droughts and heatwaves. 
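As a point of orientation on the sign conventions used in this excerpt, the reported fluxes relate through the standard budget identity below, in which negative NEE denotes net uptake by the land. The implied respiration anomaly of roughly -73 TgC is our own arithmetic from the GPP and NEE figures quoted above (treating the reported "NEE reduction by 57 TgC" as a +57 TgC anomaly, i.e. reduced uptake); it is not a number reported in the text:

\[
\mathrm{NEE} = \mathrm{TER} - \mathrm{GPP}, \qquad
\Delta\mathrm{TER} = \Delta\mathrm{NEE} + \Delta\mathrm{GPP} \approx (+57) + (-130) = -73\ \mathrm{TgC\ (JAS)}.
\]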
Ciais et al. estimates et al. suggest r et al. estimates et al. have sugSevere droughts and heatwaves are expected to occur more frequently in Europe under likely climate change scenarios . It is, 2 fluxes and atmospheric CO2 mole fractions at the stations of the spatially dense Integrated Carbon Observation System (ICOS) [et al. [2 fluxes as a starting point in the data assimilation system CarbonTracker Europe (CTE) [2 mole fraction observations to derive an atmospheric view of the reduction in the carbon uptake over Europe during the 2018 drought.We use observations of eddy-covariance COm (ICOS) togetherm (ICOS) and vegem (ICOS) , which hm (ICOS) , for exam (ICOS) . Besidesm (ICOS) ) to undem (ICOS) ,22. We w [et al. , in whic [et al. . Finallype (CTE) and we u2.\u03c3 anomaly in precipitation over the period May\u2013September of 2018 relative to the 2000\u20132017 period. This covers an area of 1.6 \u00d7 106 km2 over land and corresponds to the \u2018extreme drought\u2019 threshold that we are primarily interested in under the system proposed by Quiring [et al. [In this section, we describe the observations and methodologies used for this study, throughout which we consider the drought to have had the highest impact during the months July\u2013September of 2018 [ Quiring . Using t [et al. , the sam(a)2 mole fractions are measured at the atmospheric sites (see http://www.icos-ri.eu). The current ICOS network consists of 81 ecosystem stations (22 of which have completed the ICOS labelling process) and 36 atmospheric stations (23 fully ICOS-labelled) in 12 countries. In our study, we use the local carbon flux measurements and partitioning thereof into NEE, GPP and TER and compare these with the simulated values from the biosphere model SiB4 , and tall tower sites. [2 mole fractions to infer carbon fluxes at the surface using the data assimilation system CTE SIF is the small percentage of incident radiation that is re-emitted by chloroplasts during photosynthesis at higher wavelengths, and this signal correlates well with GPP . The retet al. [Recently, Badgley et al. introducet al. . We use et al. . NIRv isin situ observations of GPP from the ICOS network We use the Simple Biosphere model v.4 (SiB4) to simulate biosphere fluxes of carbon during the drought period and preceding years. The SiB model was developed in the \u203280s and \u203290s with the aim of simulating the land\u2013atmosphere exchange of energy, water and carbon ,36). TheSiB4 simulates the leaf photosynthesis rate as a minimum of three limiting assimilation rates: that is, (1) limited by the capacity of the RuBisCo enzyme, (2) limited by light, and (3) limited by storage and export in the photosynthesis process ,40,41. T2 fertilization to the carbon pools following the observed increase in atmospheric CO2 mole fractions [Meteorological data that are used as forcing for the SiB4 model are taken from the Modern-Era Retrospective Analysis for Research and Applications 2 (MERRA-2) and are available from 1980 onwards . For iniractions ,43. The ractions and soil(d)et al. [et al. [et al. [et al. [et al. [Previous work has shown that the carbon cycle drought response to soil moisture stress is often not well captured in biosphere models ,23,45. Wet al. ), we see [et al. ). Furthe [et al. . Secondl [et al. . We have [et al. .(e)2, described in full in van der Laan-Luijkx et al. 
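The "minimum of three limiting assimilation rates" structure and the soil-moisture down-regulation described above can be sketched very schematically as follows. The functional forms, parameter values and names (including the linear stress ramp between a wilting point and a critical soil-moisture value) are illustrative assumptions of ours and are not taken from the SiB4 code:

```python
import numpy as np

def soil_moisture_stress(theta, theta_wilt=0.1, theta_crit=0.3):
    """Toy stress scalar in [0, 1]: a linear ramp between wilting and critical soil moisture."""
    return float(np.clip((theta - theta_wilt) / (theta_crit - theta_wilt), 0.0, 1.0))

def assimilation_rate(par, ci, vmax, stress):
    """Schematic leaf assimilation: the minimum of three limiting rates,
    scaled by a soil-moisture stress factor. Illustrative forms only,
    not SiB4's actual parameterization."""
    a_rubisco = vmax * ci / (ci + 30.0)   # enzyme (RuBisCo)-limited, toy Michaelis-Menten form
    a_light   = 0.08 * par                # light-limited, toy quantum-yield form
    a_export  = 0.5 * vmax                # export/storage-limited
    return min(a_rubisco, a_light, a_export) * stress

# Example: the same canopy under moist versus drought soil conditions
for theta in (0.35, 0.15):
    a = assimilation_rate(par=1200.0, ci=250.0, vmax=60.0,
                          stress=soil_moisture_stress(theta))
    print(f"theta={theta:.2f}  A={a:.1f} (arbitrary units)")
```

The point of the example is only that, once soil moisture drops toward the wilting point, the stress scalar (and hence GPP) collapses regardless of which of the three rates is limiting, which is the behaviour described for the 2018 summer.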
[2 concentrations from the GLOBALVIEWplus v.5.0 ObsPack product [CTE is a global atmospheric inverse system that estimates the biosphere and oceans\u2019 fluxes of COx et al. . Here, w product and the product ,30, made product on a 3\u00b0 product .et al. [Two inversions are carried out following this setup: one with prior fluxes for Europe\u2019s terrestrial biosphere taken from SiB4 \u00a7 and one et al. . Scalinget al. ).Fossil fuel and biomass burning emissions are taken from the EDGAR 4.0 and SiBC3.(a)a) SIF, (b) NIRv and (c) SiB4 GPP across Europe relative to the 2013\u20132017 baseline period, with similar patterns of reductions. Strong reductions in SIF and NIRv in northwestern Europe correspond to the area of reduced precipitation (green contour) and high temperatures, contrasting with increases in productivity in eastern Europe and the Iberian peninsula that are also noted in Longdoz et al. [\u03c3 reductions in precipitation over land, we estimate the drought-affected area at 1.6 \u00d7 106 km2, and based on 1\u03c3 reductions in SIF as well as NIRv, we derive an affected area of 1.9 \u00d7 106 km2, each with a slightly different spatial distribution. The integrated anomalies for SIF and NIRv are just above 1\u03c3 in May, become negative starting in June, and drop well below the \u22121\u03c3 range in the summer months July\u2013September (JAS).The carbon cycle impact of the 2018 summer drought was recorded in productivity independently across the network of ICOS eddy-covariance observations as well as in remotely sensed SIF and NIRv. z et al. . Based oN=10 out of 16) and for each of the PFTs, concomitant reductions in TER and (of higher magnitude) GPP lead to overall reduced NEE. Although there is an important role for vegetation stress due to high temperatures, high vapour pressure deficit, and low soil moisture, we find mixed signals of changes in water-use efficiency across sites (not shown) and only evergreen needleleaf forests (ENF) exhibit the clear increase that was demonstrated in Ciais et al. [et al. [Within the drought-affected area, 14 out of 16 selected eddy-covariance sites recorded reductions of monthly GPP and/or TER, as derived from the measured NEE in JAS. There was no clear separation of magnitudes across PFTs, nor a strong sign of GPP- or TER-dominated responses. Table 1 shows the changes in the monthly mean carbon balance averaged over PFTs. At most sites N = 7, grouped together because of their highly similar temporal anomalies) in Figure 2d shows this depletion to occur substantially earlier at the ICOS sites than SiB4 simulations suggest. The strongest GPP response consistently occurs once soil moisture goes outside its 1\u03c3 variability in July.The summer drought impacts on the carbon cycle were most severe in the July\u2013August\u2013September (JAS) period, and were preceded by favourable growth conditions in spring that increased productivity and net carbon uptake. This is partly visible in the anomalies in 2 (Ci/Ca), stomatal conductance (gs), and GPP, as well as increased intrinsic water-use efficiency. The same stress factor also scales down heterotrophic respiration (and maintenance respiration), representing reduced microbial activity in dry soils. This makes soil moisture levels, and ensuing stress, a key driver of the biospheric response to severe droughts as studied here. 
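A minimal sketch of the baseline-and-sigma bookkeeping described here: monthly anomalies are taken relative to a multi-year baseline, and "drought-affected" cells are flagged where the anomaly falls below -1 sigma. The data are synthetic and the function and variable names are ours; the snippet illustrates only the type of masking described, not the actual SIF, NIRv or precipitation processing chain:

```python
import numpy as np

def monthly_anomaly(field, baseline_years, target_year, month):
    """Standardized anomaly of one month in `target_year` relative to the mean
    and standard deviation of the same month over `baseline_years`.
    `field` is a dict {(year, month): 2-D array}; names are illustrative."""
    baseline = np.stack([field[(y, month)] for y in baseline_years])
    mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)
    return (field[(target_year, month)] - mu) / sigma

# Synthetic example: flag cells where the July 2018 precipitation anomaly
# drops below -1 sigma of the 2000-2017 baseline, as done in the text.
rng = np.random.default_rng(0)
precip = {(y, 7): rng.gamma(2.0, 30.0, size=(20, 20)) for y in range(2000, 2019)}
z = monthly_anomaly(precip, baseline_years=range(2000, 2018), target_year=2018, month=7)
drought_mask = z < -1.0
print(f"{drought_mask.mean():.0%} of cells below -1 sigma in this synthetic example")
```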
Simulations with an alternative soil moisture distribution derived from the PCR-GLOBWB hydrological model, which performed very well when we applied it to SiBCASA over the Amazon [e Amazon , now mai(c)R = 0.54\u20130.89 for SIF, and R = 0.70\u20130.97 for monthly NIRv versus eddy-covariance GPP; see electronic supplementary material, \u00a7SC) allows PFT-dependent upscaling of NIRv (and SIF) from figure 1c, which does not use any remotely sensed products to prescribe changes in vegetation phenology. SiB4\u2019s total GPP anomaly integrates to \u2212130 TgC over the same area and time period, as documented in Regression of the site-specific productivity and 57 TgC, respectively, when averaged over the JAS period and the 1.6 \u00d7 102 mole fractions across the ICOS network. Through CTE, we derive a summertime reduction of NEE of 52\u201383 TgC during JAS, shown alongside the other numbers in et al. [The integrated NEE anomalies at local to regional scales derived with SiB4 agree well with large-scale constraints derived from atmospheric COn et al. , this an4.et al. [2 observations from an extensive tower network. As warm conditions persisted, and possibly fed by a positive land\u2013atmosphere feedback discussed in various publications [et al. and Kowalska et al. [et al. [The contrasting carbon cycle anomalies between the spring and summer periods of 2018 strongly resemble the 2012 drought in North America described extensively in Wolf et al. . Similarications \u201363, soilications ,64. The a et al. ,65, as w [et al. , who hav6 km2, a factor of 2 larger). Also, the 2003 event was preceded by a weaker spring anomaly than in 2018. Sites impacted during both events include Loobos , Tharandt , Hainich and Sor\u00f8 , with needleaf forests showing comparable GPP reductions (approx. 10\u201315%), and the deciduous forests suggesting much larger reductions (approx. 20\u201350%) in 2018 than in 2003. The sample size of N=2 for both sets precludes strong conclusions on the 2003\u20132018 difference in drought response of each PFT though, which is more extensively evaluated in Fu et al. [increase in net carbon uptake. This contrasts strongly with the 2003 European summer anomaly, which led to a decreased European carbon uptake of 147 TgC [2 mole fractions presented in Thompson et al. [Compared with the 2003 drought in Europe, the 2018 impacts on GPP and NEE are large at the site level but integrate to smaller totals, partly because the 2003 event covered a much larger area of any single PFT in the model while their NEE response is overestimated by SiB4. In agreement with earlier [figure 2a is best reproduced by the model, although it also results from a too late and too small decline in both GPP and TER, and appears to occur at a too small increase of water-use efficiency compared with the (quite noisy) eddy-covariance-derived inherent water-use efficiency. Underestimation of changes in water-use efficiency during droughts is common amongst biosphere models [et al. [Our SiB4 model-based analysis suggests that the largest contribution to the total drought anomaly (JAS) in western Europe came from grasslands (NEE: 28 TgC), which had a small measured NEE response per unit area across the ICOS sites in earlier and 2018 earlier ,76, grase models , for vare models . SiB4-dee models ,79. Like [et al. are unde [et al. is large2 mole fraction observations. 
This was confirmed by our simulation using the climatological SiB4 prior, as well as by two alternative inverse simulations with CTE, in which we (a) started from the SiBCASA biosphere model that simulated no 2018 drought anomaly, and (b) additionally removed the extra ICOS CO2 observations to fall back on the standard set of sites over Europe. The posterior result of inversion (a) was similar in the timing of the anomaly compared with our CTE simulations shown in The SiB4 simulated biosphere flux has thereby already made a reasonable prior NEE estimate for the inversions we performed with CTE, but we stress that its outcomes are most strongly driven by the ICOS CO5.6 km2. This strong local response was offset on the European level owing to relatively high amounts of precipitation in southern and eastern Europe, which led to an increase in NEE in these regions. This resulted in a mean annual European net carbon uptake of \u221251 to \u2212108 TgC, illustrating that there the event was concentrated on a smaller area than that of 2003.We estimate that the drought of the summer of 2018 caused a 52\u201383 TgC drop in the net amount of carbon absorbed by the most strongly affected region in northwestern Europe compared with the climatological mean for July to September. This was partly mitigated by above-average uptake in late spring\u2013early summer and exacerbated by large releases of carbon during January\u2013March, resulting in an annual mean reduction in carbon uptake of 20\u201349 TgC integrated across the most strongly affected region of 1.6 \u00d7 10Using the biosphere model SiB4, we have shown that this is primarily due to soil moisture stress, which limited productivity across even typically resilient ecosystems, like evergreen needleleaf forests. Improvements to the hydrological component of this model, as well as corroboration in our findings across remote sensing products, the hydrological model PCR-GLOBWB, and eddy-covariance measurements give us confidence in the NEE anomaly estimates of SiB4. Our subsequent use of it as a high-quality prior estimate in the inverse model CTE, along with constraints from a dense network of observations across the region of interest, confirm the anomaly from the largest perspective."} {"text": "This study aims to review the evidence for the empirical validation of Frank et al.\u2019s proposed concept definitions and to discuss evidence-based modifications.For the past quarter of a century, Frank et al.\u2019s request for definition validation. Publications with data relevant for validation were included and checked for referencing other studies providing such data.A literature search of Web of Science and PubMed from 1/1/1991 to 08/30/2017 identified all publications which referenced Frank A total of 56 studies involving 39 315 subjects were included, mainly presenting data to validate the severity and duration thresholds for defining remission and recovery. Most studies indicated that the severity threshold for defining remission should decrease. Additionally, specific duration thresholds to separate remission from recovery did not add any predictive value to the notion that increased remission duration alleviates the risk of reoccurrence of depressive symptoms. Only limited data were available to validate the severity and duration criteria for defining a depressive episode.Remission can best be defined as a less symptomatic state than previously assumed \u2a7d4 instead of \u2a7d7), without applying a duration criterion. 
Duration thresholds to separate remission from recovery are not meaningful. The minimal duration of depressive symptoms to define a depressive episode should be longer than 2 weeks, although further studies are required to recommend an exact duration threshold. These results are relevant for researchers and clinicians aiming to use evidence-based depression outcomes. Major depressive disorder (MDD) is a common, often chronic and recurrent condition, marked by persistent suffering and poor overall health and with deleterious effects on psychosocial, academic, vocational and family functioning. MDD is one of the most prevalent mental disorders and the leading cause of disability worldwide should be chosen in such a way that they have optimal prognostic significance.In particular, it should be possible to distinguish remission from recovery (and therefore relapse from recurrence), which are different only in their duration, by a difference in prognosis. The hypothesis is that those in remission have not (yet) fully recovered from the latently present episode and therefore have a relatively high relapse rate compared with those who recovered. Those who recovered have a low recurrence rate that is no longer dependent on the time since their last episode and equal to the incidence rate of a risk factor-comparable population who never experienced an episode. Similarly, in cancer research, \u2018full remission\u2019 is defined as the period during which any sign of the disease is lacking, but during which a patient is particularly vulnerable for a relapse of the tumour since latent disease might still be present. When the remission is of sufficiently long duration, the patient can be (retrospectively) considered to be recovered or \u2018cured\u2019 as the passing of even more time does not provide additional protection to disease recurrence, the risk of which is similar to the incidence risk of a comparable healthy population.Some of the clinical status concepts that are the subject of this review are also defined in the Diagnostic and Statistical Manual of mental disorders validate at least one of Frank's definitions. Because severity related criteria were necessarily instrument-specific we focused on articles determining cut-offs on the HAMD-17 and the Montgomery-\u00c5sberg Depression Rating Scale (MADRS), which are the most widely used instruments extracted data independently and resolved discrepancies through discussion and consensus. References of included articles were searched for additional relevant studies. The literature search was last updated on August 30, 2017.The 1570 identified papers included 214 duplicates and 26 non-obtainable papers. The study selection criteria (as outlined above) reduced the number to 117 papers and yielded 49 additional records via reference checks. From these 166 papers, 110 were excluded based on the full-text assessment. Thus 56 studies covering 39\u00a0315 subjects were included, and summarised in et al. population, in which the average HAMD-17 score is about 3.2 should be visible as an upward discontinuity in the survival curve slope.Survival curves for asymptomatic individuals until relapse/recurrence or equivalent data were obtained from 31 studies see . 
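Because much of the evidence reviewed here takes the form of survival curves for time to relapse or recurrence, a minimal product-limit (Kaplan-Meier) estimator is sketched below on synthetic data. It is meant only to illustrate how such curves are constructed and then inspected for changes in slope; it does not re-analyse any of the included studies, and the durations and event indicators are invented:

```python
import numpy as np

def kaplan_meier(time_to_event, event_observed):
    """Product-limit estimate of the probability of remaining relapse/recurrence-free.
    Subjects with event_observed = 0 are treated as censored at their last follow-up."""
    time_to_event = np.asarray(time_to_event, dtype=float)
    event_observed = np.asarray(event_observed, dtype=bool)
    event_times = np.sort(np.unique(time_to_event[event_observed]))
    curve, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(time_to_event >= t)              # still being followed at time t
        events = np.sum((time_to_event == t) & event_observed)
        s *= 1.0 - events / at_risk                        # product-limit step
        curve.append((t, s))
    return curve

# Synthetic example: weeks from remission to relapse (event=0 means censored, no relapse seen)
weeks = [4, 8, 8, 12, 20, 26, 30, 52, 52, 60]
relapse = [1, 1, 0, 1, 1, 0, 1, 0, 0, 1]
for t, s in kaplan_meier(weeks, relapse):
    print(f"week {t:>4.0f}: P(still in remission) = {s:.2f}")
```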
There iet al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.Several studies show some indication of a sudden drop in relapse/recurrence rate a certain time after remission/recovery was obtained remission of depressive symptoms is relatively high when the onset of these symptoms is recent, especially during the first 12 weeks, but diminishes quickly thereafter. This provides some justification for the suggestion by Frank et al. of requiet al. does notet al.A substantial body of literature studying depressive relapse/recurrence risk over time has been obtained see , but comet al. . Moreover, a substantial part of the data had to be extracted from survival curves that only rarely showed confidence intervals and often did not possess a clearly labelled time axis, making it difficult to assess exactly when the measurement began.et al. (More than a quarter-century after the landmark paper in which Frank et al. provided"} {"text": "Low temperature solder has great advantages in aerospace and through-hole technology assemblies in IBM mainframe due to its unique low temperature characteristics. The review evaluates the effects of alloying elements, rare earth elements and nanoparticles on the wettability, microstructure, mechanical properties and oxidation resistance of the low-temperature solders. HoweverAt present, the new type of lead-free solder mainly includes Sn-Cu, Sn-Ag, Sn-Bi binary solder and Sn-Ag-Cu ternary alloy solder. The melting point of Sn-Ag solder is high (221 \u00b0C), and the price of Sn-Ag lead-free solder is relatively high due to the presence of Ag, which limits the use of the solder . The cosX, Sn-Bi, Sn-Zn), and points out the potential problems and future research directions of low temperature lead-free solder.The review evaluates the effects of alloy elements , nanoparticles and rare earth elements on the wettability, mechanical properties, microstructure and oxidation resistance of low-temperature lead-free solder are called \u2018vitamins\u2019 of metal elements, A small amount of rare earth elements can change the properties of solder . As a la2.3.2O3 oxide film, which hinders the flow of liquid solder. It may be due to the different addition proportion of Al element, resulting in two different results. Du et al. [Sn-Zn lead-free solder has a wide range of raw materials and a low melting point, which is close to the traditional Sn-Pb solder. It is one of the options to replace Sn-Pb solder at present. However, the use of Sn-Zn solder is limited due to its coarse structure, poor wettability and oxidation resistance . The effu et al. added P u et al. studied u et al. , the schWhere Zhang et al. studied 3.3.1.11In9) when the soldering temperature is 85\u00b0C-200\u00b0C. The first form is a continuous layer formed on the Cu substrate, which contains 6\u00a0wt.% Bi elements. The second form is scallop like compound layer. The compound layer is penetrated by molten liquid solder. It is the main channel of atom diffusion. The Bi-rich Cu11In9 phase becomes CuIn2 phase with the increase of temperature. The diffusion channel is blocked due to the growth of IMC. The growth rate of intermetallics decreases, and the interface microstructure becomes uniform. Tian et al. [2 layers with tetragonal crystal structure, Cu2 sublayers with coarse grains and Cu2 sublayers with fine grains. 
The shape of Cu 2 grain is the block with the largest grain, the shape of Cu2 grain is rod with medium grain size, and the shape of Cu2 grain with the smallest grain size is shown. Cu 2 layers grow and consume Cu2 layers continuously because the diffusion rate of In and Sn atoms is higher than that of Cu atoms after aging at 40 \u00b0C. Jin et al. [3In5 was only detected in In-50Bi alloy solder, BiIn2 was detected in In-40Bi, In-33.7Bi and In-30Bi, which confirmed that In-Bi binary system contained stable mesophase: Bi3In5, BiIn2 and \u025b.Joanna et al. investign et al. studied n et al. studied 3.2.The molten solder reacts with the base metal to forms the solder joint after cooling in the process of soldering. Whether the internal structure of the solder joint is uniform or there are too many intermetallic compound layers will directly affect the service life of the solder joint. The research on the internal microstructure of the solder joint is the basis of the research on the reliability of the solder joint . It is e6Sn5 compound in the liquid solder decreased due to the increase of Cu element content, and the thickness of IMC thickened. Miao et al. [Wang et al. added pho et al. obtainedo et al. studied o et al. that addo et al. . The reso et al. found tho et al. investigo et al. . The teso et al. shows tho et al. found tho et al. . More heo et al. studied o et al. also belxCe were refined. The rare earth elements belong to the surface-active elements. According to the adsorption theory of the surface-active elements [The microstructure can also be refined by adding rare earth elements. Shiue et al. studied elements , the equ6Sn5 compounds in the microstructure of Sn-Bi-xCe solder will disappear, which will refine the microstructure. Xia et al. [xRE composite solder decreased from 20.13\u00a0\u03bcm to 8.13\u00a0\u03bcm after RE were added, so as to refine the microstructure.Where a et al. also rep3Ag particles will be formed after the addition of nano Ag particles. They make the grains more difficult to grow, so as to refine the Bi-rich and Sn-rich phases. Liu et al. [6Sn5 nanoparticles can refine the microstructure of Sn-Bi solder. The melting point of Cu6Sn5 nanoparticles is high. It acts as a nucleation particle in the Sn-Bi solder after Cu6Sn5 nanoparticles was added in Sn-Bi solder. The nanoparticles can inhibit the coarsening of the Bi-rich phase. It has been pointed out in the literature [Sun et al. studied u et al. pointed terature that aftterature also shoterature reported3.3.6Sn5, CuZn5 and Cu5Zn8 are formed after Cu element was added in it. The coarse needle-like Zn-rich phase gradually becomes fine needle-like or even disappears, and the Sn-Zn eutectic structure is gradually refined. Lin et al. [5Zn8 and Cn6Sn5 according to the comparison of Cu-Zn [3Sn4 intermetallic compound phase produced by Ni element. Temperature is one of the important factors that affect the metallurgical effect of solder in the process of soldering. Praphu et al. [4, Cu5Zn8) formed at the interface between Sn-Zn solder and Cu substrate. Na element and Zn react to form NaZn13 compound according to the following reaction equation. The compounds hinder the diffusion of Zn atom, and the activation energy of Cu5Zn8 phase is higher than that of eutectic Sn-Zn phase. Too much brittle IMC will be produced if excessive Na element is added, which will affect the reliability of solder joints.The effect of Ag on the microstructure of Sn-Zn solder was studied by Wu et al. . They fon et al. investign et al. 
added Crn et al. found thof Cu-Zn , Cu-Zn cof Cu-Zn also addof Cu-Zn studied of Cu-Zn investigof Cu-Zn . The resu et al. investigu et al. studied xCe/La composite solder is composed of rod-like Zn-rich phase and eutectic. These results can be explained by the equation of nucleation rate [Lin et al. reportedion rate :(4)P(t)2 particles after repeated reflow when ZrO2 nanoparticles were added into Sn-Zn solder. It will reduce the possibility of Zn atom aggregation and roughen IMC layer, and achieve the purpose of refining IMC layer [2O3 nanoparticles on the microstructure of Sn-Zn solder. The results showed that the microstructure of Sn-Zn eutectic was more fine and uniform after adding micro nanoparticles into \u03b2-Sn matrix. Al2O3 nanoparticles can inhibit the growth of coarse dendrite Sn-Zn eutectic structure and refine the grains of composite solder. The literature [Where u et al. pointed MC layer . Xing etMC layer investigterature shows th4.4.1.In electronic components, solder joints not only play the role of electrical connection, but also play the role of mechanical support . TherefoK represents a constant that depends on the material, d represents the size of grain. The grain size became smaller after trace amount of Ag was added to the In solder, which enhanced the strength of solder joints. Solid-state interface reactions between 50In-50Pb solder and a Cu-Fe alloy attached to an electroplated Au layer were investigated by Vianco et al. [2 compound. The compound is implanted at the interface and grows into intermetallic compound. It has good ductility and plays a role of dispersion strengthening in the solder joint. The mechanical properties of the solder joint got improved. With the increase of annealing time, the intermetallic compound layer began to thicken and other brittle compounds were formed, which reduced the mechanical properties of the solder joints. The mechanical properties of the solder joint of alloy In and Au substrate after low temperature bonding at 200 \u00b0C were studied by Won et al. [5Zn8 and Cu11In9 composite layer. On the In-Bi-Zn interface of Sn/Cu substrate, Cu5Zn8 phase was better than Cu6Sn5 phase. Jin et al. [Where o et al. . The annn et al. . The resn et al. studied n et al. investig4.2.3Sn compound in Sn-Bi/Cu solder joints, which lead to the brittle failure of the samples during the tensile process. It was found that the mechanical properties of Sn-Bi solder joints increased first and then decreased with the increase of Ag content [3Ag compound is formed in the filler metal when the content of Ag is less. The compound is granular or acicular. It is evenly distributed in the matrix, which can refine the structure, improve the tensile strength and play the role of fine-grain strengthening. However, Sn3Ag begins to segregate and grow into plate or block when the amount of Ag is more than 1\u00a0wt.%. The large size of Ag3Sn compound will decrease the tensile strength of the solder joint according to the hall match formula. Wang et al. [2 intermetallic compound produced as the second phase particles dispersed in the solder matrix. It impeded the grain boundary sliding and dislocation movement and improved the shear strength of the solder joint [6Sn5 phase with peculiar shape is wrapped in the Bi-rich phase, which can promote the refinement of Bi crystal branch and improve the tensile strength of the solder joint. The mechanical properties of Sn-Bi solder joints were studied by Roh et al. [Zhang et al. found th content . It was g et al. believedg et al. 
studied g et al. added Crer joint . Chen eter joint reporteder joint added hier joint reportedh et al. . The resh et al. studied It has been reported that rare earth elements can improve the mechanical properties of solder . It is fThe influence of nano particles on the mechanical properties of Sn-Bi solder joints is reported in the literatures ,113. The3Sn4 compound strengthen the composite material together, which makes the tensile strength of solder joint enhanced. The tensile strength and elongation decrease with the further increase of the content of Ni-CNTs due to the existence of CNTs clusters and the increase of brittleness. Lee et al. [The literature shows the et al. investige et al. studied 4.3.5Zn2 compound layer, which inhibits the diffusion of Zn atom to the substrate. It can inhibit the growth of compound layer. Also, it will make IMC layer uniform and improve the mechanical properties of solder joints. Sharif et al. [3 intermetallic compound was formed during aging. The IMC layer gradually peels off with the increase of aging time, and a layer of Ni5Zn21 IMC layer is produced. The compound layer grows up gradually, which makes the shear strength of the solder joint decrease gradually. In order to prevent the decline of the mechanical properties of the solder joint, the peeling off of AuZn3 compound layer should be avoided as far as possible to ensure the reliability of Sn-9Zn/Au/Ni/Cu solder joint of the solder joint.The mechanical properties of eutectic Sn-9Zn solder and the properties of hypoeutectic Sn-4Zn and hypereutectic Sn-12Zn solder have been studied by Garcia et al. . The resf et al. thought f et al. . It was 3 compound phase formed is a stable phase. The fine particles gathered at the grain boundary and played an obvious strengthening role. Hu et al. [5Zn8) layer more uniform, improve the air tightness of the joint surface and improve the mechanical properties.The oxidation activity of rare earth elements is very high. The addition of trace RE elements can refine the structure of \u03b2 \u2013 Sn phase rod-like Zn-rich phase. It is the main reason for rare earth elements to improve the mechanical properties of solder . Accordiu et al. studied u et al. . The res2 nanoparticles was added. ZrO2 nanoparticles can be embedded in the solder matrix and play a role in pinning the grain boundary due to the small size of nanoparticles. It can prevent the occurrence of dislocation to a certain extent so as to improve the shear strength of the solder joint when the solder joint is subject to external force. The influence of Al2O3 nanoparticles on the mechanical properties of Sn-Zn solder joints was reported in the literature [2O3 nanoparticles is 1\u00a0-wt. %. It was pointed out that the tensile strength of the solder joint increased by about 10.9% compared with that of the common Sn-Zn solder joint when Al2O3 nanoparticles were added. The researchers attributed it to the use of Al2O3 nanoparticles, which made the pore size smaller and distributed uniformly. Therefore, the mechanical properties of solder joints got improved. It was found that the shear strength of Sn-9Zn solder joint increased after Ni nanoparticles were added. Nano Ni particles can refine the microstructure of solder and reduce the internal crack source of solder joint. The fracture mode of solder joint is mainly ductile fracture [Shen et al. studied the effect of nanomaterial addition on the mechanical properties of Sn-Zn solder joints . The resterature . 
The resfracture .6Sn5 compounds were produced at the interface after aging or reflowing, thus the reliability of solder joints got worse.5.5.1.In is an active element, but it does not react with oxygen under normal conditions. The literature shows that there is little research on the oxidation resistance of In solder.5.2.3+ accumulated on the surface of liquid solder and forms oxide film after Al were added, which prevent the further oxidation of SBZ. Teng et al. [Sn-Bi solder will not react with oxygen at room temperature, but oxidation may occur during the soldering. Little is known about the oxidation resistance of Sn-Bi solder. The effects of high-temperature aging on a novel hybrid bonding layer consisted of Cu nanoparticles and a eutectic Bi-Sn solder powder was studied by Usui et al. . The resg et al. found th5.3.2, which makes the oxidation resistance of solder containing Zn very poor (Zn element easily reacts with Oery poor . It is a2O3) is formed by the rapid reaction of Nd element with oxygen. The oxide film is wrapped on the surface of the molten composite solder, which blocks the entry of oxygen. It reduced the contact between Zn and oxygen, and improved the oxidation resistance of the solder.Where Y represents weight gain ratio, X represents aging time. It can be seen from the above equation that with the increase of aging time, the growth rate of oxide film thickness is lower than that of Sn-9Zn solder when appropriate amount of Ag and In elements are added. The effect of Bi on the oxidation resistance of Sn-Zn solder was reported by Jiang et al. . They po6.The research status of low temperature lead-free solder is comprehensively evaluated. The alloy elements , rare earth elements , nanoparticle are summarized. It is found that some alloy elements, rare earth elements and nano materials can refine the internal structure of solder. It can reduce the size of intermetallic compound particles and reduce the thickness of interface to some extent. The addition of reinforcement elements can improve one or the whole performance of low temperature solder, but the opposite view has been put forward, which is mainly related to the way and content of reinforcement phase. At present, there are very few reports on low-temperature In-based solders, especially on the solders containing nanomaterials and rare earth elements."} {"text": "Figure 1 as published as Villanacci et al. (2011) was inadvertently not cited.In the original article, there was a mistake in the legend for FIGURE 1 | Histological features of microscopic colitis. Hematoxylin and eosin (A) and anti-CD3 (B) staining in lymphocytic colitis; black arrows indicate intraepithelial lymphocyte infiltration. Hematoxylin and eosin (C) and Masson's Trichrome staining (D) in collagenous colitis; arrows point to the subepithelial collagen band. All images also show surface epithelial injury and lamina propria increased cellularity . This figure was obtained from Villanacci et al. (i et al. with perThe authors apologize for this error and state that this does not change the scientific conclusions of the article in any way. The original article has been updated."} {"text": "Its actions are manifested on organs like the brain, heart and kidneys. High serum uric acid (SUA) escalates cardiovascular vulnerability in patients with systemic hypertension.a cross-sectional study was performed in 271 patients with systemic hypertension. Two hundred and seventy one healthy age and sex matched non-hypertensive persons obliged as controls. 
Left ventricular hypertrophy (LVH) was estimated by echocardiography. Blood samples were collected for measuring uric acid levels.mean SUA was significantly higher among the hypertensive patients (371\u00b1125\u03bcmol/L) than in the controls , and the prevalence of hyperuricemia was 46.9% among the hypertensives and 11.1% among the controls (P < 0.001). Independent predictors of SUA were class of systemic hypertension, left ventricular mass index (LVMI), body mass index (BMI) and age. However, class of hypertension was the best independent predictor of SUA levels in the multivariate regression model (\u03b2 = 0.597). Linear regression revealed SUA levels \u2265 430\u03bcmols/l as a predictor of stage 2 hypertension . Among the hypertensive patients, LVH was present in 39.3% of those with hyperuricemia and in 28.0% of those with normal SUA levels (p = 0.003).results indicate serum uric acid is positively correlated with hypertension and a reliable indicator of LVH in study population. It et al. [et al. [et al. found that hypertensive patients with LV hypertrophy had higher uric acid levels and a greater prevalence of hyperuricemia than patients with a normal left ventricular mass [In the last decade, several well-grounded pieces of evidence showed that the elevation of uric acid often occurs prior to the development of hypertension or metabolic syndrome, thus suggesting a direct association between elevated SUA and these conditions . Moreoveet al. , indicatet al. , 34 This [et al. , 36 wherlar mass .et al. [et al. [et al. found 17% of the patients studied had concentric LV hypertrophy which is a geometric pattern associated with increased cardiovascular morbidity [et al. [In our study, there is a positive linear association between SUA ranges and LVMI as shown in et al. found th [et al. assessin [et al. where heorbidity Comparab [et al. also fouet al. [et al. [5H4N4O3) similar chemical structure to caffeine (C8H10N4O2) that stimulates organs and both have antioxidant properties that are neuroprotective [UA is the ultimate breakdown product of dietary or endogenous purines and is generated by xanthine oxidase (XO). A net release of urate in coronary heart disease and the et al. in a stu [et al. that indotective this preet al. elucidated that high-dose allopurinol regresses LVH, reduces LV end-systolic volume, and improves endothelial function in patients with ischemic heart disease (IHD) and LVH [et al. mentioned that SUA is positively associated with body mass index (BMI), blood glucose, blood pressure (BP), markers of inflammation, and altered lipid profile but concluded that in treated hypertensive patients, high levels of SUA normalized for major biological determinants and do not independently predict CV outcome [et al. reported a reduction of LV mass after weight reduction in young, obese hypertensive subjects [Losartan can cause uricosuria. The Losartan Intervention for Endpoint Reduction (LIFE) study outlined a 29% reduction in composite cardiovascular outcome in the losartan (ARB) arm of the study suggesting that a decrease in serum uric acid levels can lead to reduction in adverse cardiovascular outcomes Rekhraj and LVH . This st outcome . The finsubjects . Previousubjects . Our stuThis study reveals that hyperuricemia is widespread in our study population with systemic hypertension and both are positively correlated. Hyperuricemia was associated with LVH. 
Thus, the study recommends a repetitive evaluation of serum UA in all hypertensive patients.Elevated serum uric acid (SUA) is a risk factor for Chronic Kidney Disease (CKD);Normal levels of SUA is also associated with systemic hypertension;Elevated SUA is associated with a high body mass index (BMI).Linear regression revealed SUA levels \u2265 430\u03bcmols/l is a predictor of stage 2 hypertension ;Class of hypertension was the best independent predictor of SUA levels in the multivariate regression model (\u03b2 = 0.597);Among our hypertensive patients, LVH was present in 39.3% of those with hyperuricemia and in 28.0% of those with normal SUA levels (p = 0.003)."} {"text": "Fig. 1). Lichens were show-cases to introduce the concept of symbiosis ), which causes \u2018vitrification', the transition of their cytoplasm to a \u2018glassy' state and cease of metabolism. To find out what reactions may occur at different levels of desiccation in lichens, Candotto Carniel et al. [2O g\u20131 DW and enzymes were active in a \u2018rubbery' state (0.17 g H2O g\u20131 DW) but not in a glassy state (0.03 g H2O g\u20131 DW). Therefore, desiccated tissues may appear to be \u2018dry' in the conventional sense, but subtle differences in water content will have substantial consequences on the types of (bio)chemical reactions that can occur, with downstream effects on longevity in the desiccated state.Lichens have an outstanding ability to revitalize from dry stages. Lichens can endure extreme desiccation to water contents , as well as expanded fungal protein families and expanded algal protein families (carbohydrate active enzymes and a specific subclass of cytoplasmic carbonic anhydrases). Horizontal gene transfer from prokaryotes played a likely role for acquisition of novel archaeal ATPases and Desiccation-Related Proteins by the algae. According to these results lichens evolved by accretion of many scattered regulatory and structural changes, which agrees with an independent origin of lichenized fungal lineages in the fungal kingdom. Kono et al. [Usnea hakonensis for transcriptomic analyses. By comparing resynthesized and natural thalli (symbiotic states) with that of isolated cultures (non-symbiotic state), they found evidence for various processes involved in symbiotic establishment, including cell wall remodeling, production of hydrophobins and symbiosis-specific nutrient flow (including polyol transporters). Future transcriptomic approaches need to consider a side by side of life and death in lichens. Behind the growing thallus edges, even neighbouring cells can vary in vitality and fungal remnants are often part of protective surface layers or provide a scaffold for a fraction of living cells. In some lichens, these layers contain inflated algal cell walls as well, while in others, the remnant algal walls stay put and supposedly support the architectural integrity of thalli. New results, such as the detection of caspase-like activity as a marker of programmed cell death in lichens [Several attempts have recently been undertaken to achieve a better understanding of the genomic \u201chardwiring\u201d for the lichen symbiosis. For example, Armaleo et al. conducteo et al. succeede lichens , opens mFig. 2), which is considered in a recently emended definition of the lichen symbiosis [et al. [Over the past decade it has also become increasingly clear that the living-together of fungi and algae is more complex than previously thought. 
Algal partners are not uniform and may switch with climatic parameters, while the lichen phenotypes also host associated microbiota (ymbiosis . While nymbiosis . With re [et al. publishe [et al. . Despite [et al. . In addi [et al. .Nephromopsis pallescens revealed one of nine found type I PKS, Nppks7, as potentially involved in usnic acid biosynthesis [The varied colors of lichens are the result of accumulations of crystallized metabolites, which deposited on the outside of the fungal cells as light filters or herbivore deterrents. Thousands of compounds are known but only few are better characterized for antibiotic effects and other bioactive potentials . Researcynthesis . Nppks7 ynthesis . The dozynthesis , as wellynthesis .et al. [Aspergillus nidulans shields the alga Chlamydomonas reinhardtii from toxic azalomycin F produced by the bacterium Streptomyces iranensis. Whether similar effects could be part of the ecological success of lichen phenotypes in their natural environments still remains to be studied. Considering the high potential of compound production in lichens, science would be ready for further surprising discoveries. While the pocket-sized microbial ecosystems of lichens are still a challenge for research, new technological approaches provide bridgeheads for culture-independent functional studies.Cultivation of genuine lichens has remained difficult, but enhanced productivity of fungal-algal associations has been demonstrated also for co-cultures of faster-growing partners in the laboratory. Such studies also direct to hitherto unknown beneficial effects in the lichen symbiosis. As Krespach et al. discover"} {"text": "IUCrJ, Roth et al. analysis of scattering data from powder samples to investigate departures from perfectly crystalline order (Billinge & Kanatzidis, 2004 al. 2020 demonstret al., 2018et al., 2015et al., 2011Recent developments in instrumentation and computing have made the measurement of single-crystal diffuse scattering much easier than it was even ten years ago (Ye et al. (2020x1\u2009\u2212\u2009CoSb. These have proven challenging to understand via crystallographic techniques; while crystallographic solutions can confirm the concentration of vacancies on the Nb sites, the distribution of these vacancies is more difficult to discern. Electron diffraction data collected by Xia et al. (2018Roth al. 2020 have pro al. 2018, 2019 \u25b8 et al. (2020SCATTY (Paddison, 2019Roth al. 2020 approachx = 1/4 and x = 1/6, with intermediate compositions showing short-range combinations of the two. For concentrations with x < 1/6, broader wave-like scattering is seen. This entire range of compositions can be modeled and explained with self-avoiding vacancies.For a single ratio of energies, these models impressively reproduce published electron diffraction data over a range of vacancy concentrations (see Fig. 1et al. (2020These results provide a significant advance in the knowledge of the structure of these materials as well as a great example of how short-range ordering can be understood, modeled and described. Roth al. 2020 have pro"} {"text": "Breast cancer (BC) is the most frequent cancer among women in the world and it remains a leading cause of cancer death in women globally. Among BCs, triple negative breast cancer (TNBC) is the most aggressive, and for its histochemical and molecular characteristics is also the one whose therapeutic opportunities are most limited. 
The REpurposing Drugs in Oncology (ReDO) project investigates the potential use of off patent non-cancer drugs as sources of new cancer therapies. Repurposing of old non-cancer drugs, clinically approved, off patent and with known targets into oncological indications, offers potentially cheaper effective and safe drugs. In line with this project, this article describes a comprehensive overview of preclinical or clinical evidence of drugs included in the ReDO database and/or PubMed for repurposing as anticancer drugs into TNBC therapeutic treatments. Breast cancer (BC) is the most frequent cancer among women in the world. Triple negative breast cancer (TNBC) is a type of BC that does not express oestrogen receptors, progesterone receptors and epidermal growth factor receptors-2/Neu (HER2) and accounts for the 16% of BCs approximatively , 4. TNBCet al [Drug repurposing is the application of an old drug to a new disease indication: this holds the promise of rapid clinical impact at a lower cost than de novo drug development . In oncoet al , point tet al .In line with this project, we searched in PubMed for published preclinical or clinical evidence of anticancer activity for all drugs included in the ReDO_DB for TNBC. Specifically, starting from each drug present in ReDO_DB, we searched in PubMed for published preclinical and clinical evidence of anticancer activity for TNBC. The strings were composed by the name of the drugs and specific keywords related to TNBC.An additional search string was used to investigate potential clinical evidence about drugs not included in ReDO_DB or references not retrieved in the first search. The string was composed by three blocks concerning keywords related to TNBC, repurposing and study type, respectively. Both strings are provided in the supplementary file . ObservaMoreover, clinicaltrials.gov was searThe aim of this paper is to give to clinicians and scientists a comprehensive overview about preclinical and clinical studies, including clinical trials, present in literature on the repurposing of old-licensed drugs for TNBC.We found 188 preclinical studies references , 18 clinical references \u201326 and 1Using the PubMed database, we found preclinical evidence on TNBC models (cell lines and xenograft models of TNBC) for 84 out of 268 old drugs (31.3%) present in the ReDO_DB. For 42 of the 84 drugs, only one reference was retrieved . Thirteeet al analyses two different retrospective studies on beta blockers efficacy and safety on TNBC [et al [et al [ on TNBC , and theC [et al and Ishil [et al analysedl [et al , 25, 26,l [et al , 21, 23 l [et al and hazard ratio of overall survival was 0.35 [et al [p = 0.02) and hazard ratio of metastasis and BC death were significant . The study of Spera et al [p = 0.002) but not in overall survival . The second study presented by Spera et al [p = 0.269; 0.73; 95% CI, 0.35\u20131.48; p = 0.38, respectively).BBs were evaluated on postmenopausal women with operated early primary TNBC, on women with invasive TNBC (receiving neoadjuvant chemotherapy), and on women with advanced or nodal positive TNBC. Study populations ranged from 35 patients to 1,417 patients. In the study of Melhem-Bertrandt et al , using m) [et al using Brra et al , using dra et al using alet al [p = 0.23). Overall survival was 67% in the metformin group, 69% in the non-metformin group and 66% in the non-diabetic group (p = 0.58). Recurrence free survival was 65% in the metformin group, 64% in the non-metformin group and 54% in the non-diabetic group (0.38). 
Also, after adjustments, no significant survival outcomes were obtained.The retrospective study of Bayraktar et al using meet al [The primary endpoint of phase II open label single arm study of Chan et al was to aet al [p = 0.112) when comparing neoadjuvant chemotherapy plus zoledronic acid (6/17 (35.3%) CI: 12.6\u201358.0) with chemotherapy alone (2/17 (11.8%) CI: 0.0\u201327.1). Also for the 3 years disease free survival, neoadjuvant chemotherapy plus zoledronic acid showed no significant benefit compared to the neoadjuvant treatment alone (p = 0.077) despite the fact that the percentage of patients in treatment with zoledronic acid was higher compared to the other arm (94.1% versus 70.6%).The articles of Hasegawa et al and Ishiet al referredet al [et al [Celecoxib was analysed in two studies: the first, a phase II randomised study of Pierga et al performel [et al , analyseet al [p = 0.04; anti-platelet 8.8%, no anti-platelet 31.9%, HR: 0.310 (0.132\u20130.729); p = 0.007, respectively). Five years overall survival hazard ratio was not significant between the two arms (HR: 0.652 (0.343\u20131.239); p = 0.192). The second retrospective study of Williams et al [Aspirin was analysed in two retrospective studies. The first retrospective study of Shiao et al that colms et al performeet al [Finally, Retsky et al showed tet al , in whicet al [The article of Tsubamoto et al reportedet al [p = 0.011).The phase II, open label, randomised study of Wang et al analysedIn the Phase I, randomised study of Nanda and colleagues performed in USA, four women with metastatic or locally advanced TNBC were analysed . Unfortunately, no information about patients allocation, nor any outcome information could be retrieved from this article .et al [p = 0.026, 95% CI: 0.01\u20130.76).The retrospective study of Shaitelman et al used medet al [The retrospective study of Lacerda et al using BrSearching the web site of clinicaltrials.gov , we found only 17 drugs out of 286 presented in the ReDo_DB with ongoing or completed clinical trials for TNBC. in silico modelling or other computational pharmacological approaches that, despite the interest for the research [This review presents an overview of all the evidences about the repurposing of old, licensed, non-cancer-drugs in the treatment of TNBC, starting from preclinical evidence and going through current clinical trials. ReDO is an ambitious project aiming to investigate the repurposing of non-cancer-drugs in oncology, and ReDO_DB is a powerful tool that need to be dynamically implemented with recent findings, by adding to the database new drugs for which there are preclinical evidence, and by giving visitors a specific PubMed search string for each tumour and tumour subtypes. The ReDO approach is based on published literature and does not aim to identify new active compounds against cancer. Thus, the database does not include potential repurposing candidates identified through research \u201331, unleBeta Blockers (BBs) seem to be the more promising drugs in the repurposing for the treatment of TNBC. Three articles showed significant benefits of these drugs in women with advanced TNBC and in early primary TNBC patients treated with the combination of chemotherapy plus BBs \u201313. Unfoet al [While BBs demonstrated to be beneficial in the treatment of TNBC, metformin, a promising molecule in preclinical studies, did not show any efficacy in the treatment of women with TNBC. 
Bayraktar et al showed tet al [et al [et al [The articles of Shiao et al and Willl [et al showed cl [et al did not et al [et al [Despite many studies trying to evaluate the use of statins in breast cancer treatment \u201336, in tet al reportedl [et al did not In vitro, proton pump inhibitors inhibit FASN activity and induce apoptosis in breast cancer cell lines. In this study, omeprazole in combination with anthracycline-taxane (AC-T) was administered to 42 patients until surgery, and pathologic complete response (pCR) was investigated. FASN positivity significantly decreased with omeprazole from 0.53 (SD = 0.25) at baseline to 0.38 , and the drug was well tolerated with no known grade 3 or 4 toxicities. Furthermore, the pCR rate was 71.4% (95% CI: 51.3\u201386.8) in FASN+ patients and 71.8 % (95% CI: 55.1\u201385.0) in all enrolled patients, demonstrating that the omeprazole in addition to neoadjuvant AC-T yields a promising pCR rate without adding toxicity.Other authors showed significant results on the survival of TNBC patients treated with esomeprazole. Recently, one phase II study on activity of omeprazole on patients with operable TNBC independent of baseline Fatty acid synthase (FASN) expression was presented at the ASCO meeting. In vitroFor those drugs collected in ReDO_DB with favourable preclinical evidence or whose retrospective clinical trials were not so large to provide strong evidence, large retrospective cohort studies are needed to evaluate effectiveness. Further, as for BBs that have proven by retrospective studies to be effective in the treatment of TNBC patients, randomised clinical trials might be important to confirm the evidence of the repurposing.Drug repurposing is a highly interesting novel strategy for the oncology community and ReDO_DB is a powerful tool that can give authors the opportunity to investigate weather non-anticancer drugs might be effective in cancer treatment. Some precision medicine studies, based on omics data, have included repurposed drugs and have reported interesting case reports of responses from patients , 39, howFrom the literature retrieved, BBs seemed to be the more promising drugs for the repurposing, while evidence about other drugs as NSAIDs still need to be assessed or proven for the treatment of TNBC.The authors declare that they have no conflict of interestMZ and SC conceived the study. AS extracted the data. SD supervised the data extraction. MZ, SD, SC, AS, and PP contributed to the interpretation and discussion of study results. AS and SD drafted the manuscripts. All authors revised and approved the final version of the paper.This study was supported by Fondazione Decima Regio \u2018Olga e Raimondo Curri\u2019, Via Cimarra 44-B, Roma."} {"text": "Visible light communications (VLC) (including LiFi) represent a subset of the broader field of optical wireless communications. Where narrow beams, typical of free space optical communications are largely free from interference. VLC encompasses use cases involving combined illumination and data access and supporting a wireless access point (AP) model. The use of many units provides scaling of spatial coverage for both lighting and data access. However, AP replication in close proximity creates many interference challenges that motivate the investigation embodied in this paper. 
In particular, we frame the interference challenge in the context of existing strategies for driving improvements in link performance and consider the impacts of multiple users, multiple sources and multiple cells. Lastly, we review the state of existing research in this area and recommend areas for further study.This article is part of the theme issue \u2018Optical wireless communication\u2019. These are typically served by narrow beams that are more easily isolated to single target receivers. The introduction of divergent sources permits the possibility to serve multiple receivers with a single signal but also encourages interference in the presence of multiple sources. The emergence of light-emitting diode (LED) lighting as a basis for OWC for multiple users exacerbates the need to understand the impact of each of these dimensions: source divergence and the existence of neighbours. Research in this area spans visible light communications (VLC) ,2 and liEarly research in VLC mainly focuses on the physical layer and single-link implementations to ensure successful point-to-point communication, isolating the transmitter and receiver link into a fixed system. This research improves link performance through signal processing and the introduction of novel modulation techniques. More recent works delve into the system level by deploying cells of multi-point-to-point or multi-point-to-multi-point communication. These multi-AP systems introduce neighbouring cells which naturally induce interference when signals from multiple APs are simultaneously received. Interference can also occur from multiple user transmissions; but little work has analysed interference in the context of OWC uplinks and the impact of in-band uplink on overhead (i.e. AP) signal distribution. This is primarily because many systems incorporate asymmetry by using an alternative medium for uplink. For this reason, our survey focuses on downlink interference, leaving the uplink interference analysis as an open research opportunity. Wireless capacity can be added to indoor spaces by increasing AP density; but doing so can increase interference. Fortunately, light, with its directional property, offers performance gains with respect to interference. In this paper, we explore the relationship of VLC with radio frequency (RF) models but mainly focus on the unique characteristics of VLC interference mitigation. We discuss recent techniques proposed to improve system performance and describe possible research opportunities.Note that we will henceforth use the term OWC rather than VLC or LiFi which imply the use of the visible spectrum. Other optical spectra such as ultraviolet (UV) or infrared (IR) are applicable as well as covered by the more general OWC label.The remainder of the paper is organized as follows: in \u00a72.Interference is a critical parameter when deploying systems where multiple links are simultaneously active and within range of each other. In this section, we identify unique characteristics of the optical medium and OWC network deployments that distinguish OWC interference analysis from the traditional interference analysis techniques used for RF communications.(a)\u2014Directionality of the medium. The transmitter\u2019s illumination pattern/beam width, as well as the receiver\u2019s FOV, establish the directionality property . \u2014Optical power constraints. 
When the optical carrier is also expected to provide illumination, the optical signal must conform to lighting conventions and the nature of human perception including intensity, glare, flicker and colour quality. Also, intensity modulation with direct detection (IM/DD) signals have a different relationship between the constraint and the signal current or voltage. For an electrical signal X(t), average optical power in IM/DD relates to a constraint on E[X], whereas average electrical power sets a constraint on the variance of X, or E[X2] assuming X(t) is a 0 mean signal ) in most RF systems; however, the relationship between the variance and the average optical power constraint (i.e. E[X]) depends on the modulation used.Interference is widely explored in the RF domain; however, there are differences exhibited due to the properties of the media at different operating points in the electromagnetic spectrum. RF models for interference are insufficient for characterizing light-based models ,13,14. I(c)A wireless network with multiple simultaneous transmissions can be analysed as a single-coordinated AP or as multiple-coordinated APs. For OWC, we expect that the scope of interfering devices is more constrained due to the directionality of light and the nearness of adjacent APs. This is in contrast to RF-based APs which have much larger spatial scope. Each of these models suffers from different types of interference. These include inter-cell interference: UDs share the same resources but are in different cells; intra-cell interference: UDs consume different resources but are within the same cell; cross-cell interference: UDs are in different cells and have different resources .Interference and its impact can be characterized by direct and indirect metrics. Of this list, we identify a subset as best for establishing and benchmarking the performance of multiple AP systems. These are (1) system throughput, (2) sustained link speed of a single user, and (3) system complexity. However, in practice, evaluating published work on multiple access OWC systems is challenging due to disparate assumptions and operating conditions. To help with this challenge, we break down the techniques into categories to help permit fair comparison. These are described next.3.Up to this point, we have discussed the challenges associated with OWC APs including the nature of the optical physical channel models. In this section, we survey and classify existing techniques intended to manage interference in the context of multiple access in OWC systems. We begin with a summary of nomenclature and classification and then relate key results from the literature onto this classification.rejection is the strictest term used to describe completely removing interference from a system. It is normally employed in physically isolated techniques that can effectively separate resources or channels. Coordination is done in systems that attempt to either use interfering signals to their advantage or arrange transmissions in a way to cause the least possible interference. Coordination is normally done in networked or synchronized systems. Interference alignment is a scheme developed for RF [Avoidance, mitigation, cancellation and suppression are ways to minimize interference. Although cancellation sounds as strong as rejection, most of the literature uses it to describe minimizing interference where some interference effect remains. These techniques are adopted in systems with fixed resources that deploy resource allocation or load balancing. 
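Before continuing with the interference-management nomenclature, the directionality and FOV properties described in §2 can be made concrete with a small numerical sketch. It evaluates the commonly used Lambertian line-of-sight (LOS) channel-gain model for one desired and one neighbouring LED AP and forms the resulting electrical SINR at an IM/DD photodiode; all geometry and device parameters below are illustrative assumptions rather than values taken from any study cited in this survey.

```python
from math import cos, log, pi, radians

def los_gain(half_angle_deg, area_m2, dist_m, tx_angle_deg, rx_angle_deg, fov_deg):
    """Lambertian LOS channel gain between an LED and a photodiode.
    Returns 0 outside the receiver field of view (FOV); optical filter and
    concentrator gains are omitted for brevity."""
    if rx_angle_deg > fov_deg:
        return 0.0
    m = -log(2) / log(cos(radians(half_angle_deg)))      # Lambertian mode number
    return ((m + 1) * area_m2 / (2 * pi * dist_m ** 2)
            * cos(radians(tx_angle_deg)) ** m
            * cos(radians(rx_angle_deg)))

p_tx = 1.0          # transmitted optical power per AP (W), assumed equal for both APs
resp = 0.5          # photodiode responsivity (A/W), assumed
noise_var = 1e-13   # total noise variance (A^2), assumed
fov = 60.0          # receiver FOV (degrees)

h_sig = los_gain(60.0, 1e-4, 2.0, 0.0, 0.0, fov)    # desired AP directly overhead
h_int = los_gain(60.0, 1e-4, 3.0, 40.0, 40.0, fov)  # neighbouring AP off to one side

# IM/DD: photocurrent is responsivity * received optical power; electrical power goes as its square.
sinr = (resp * p_tx * h_sig) ** 2 / ((resp * p_tx * h_int) ** 2 + noise_var)
print(f"H_signal = {h_sig:.2e}, H_interferer = {h_int:.2e}, SINR = {sinr:.1f}")

# Shrinking the FOV below the interferer's angle of incidence removes its contribution entirely,
# one instance of the physical-isolation behaviour revisited later in the survey.
print("interferer gain with a 35 degree FOV:", los_gain(60.0, 1e-4, 3.0, 40.0, 40.0, 35.0))
```

Narrowing the receiver FOV or the source beam trades coverage for interference rejection, which is the design tension running through the techniques classified below.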
Management is a term that is usually used broadly to describe each of the previous systems that aim to combat interference effects in any manner. Using the term \u2018interference management\u2019 to cover the range of nomenclature, we classify techniques in OWC into two main categories (\u2014Multiple access (MA). This class consists of the techniques that allow an AP or set of APs to distribute defined resources across a set of users. It is divided into two further categories:(i)MA-signal processing (MA-SP). This class of techniques applies pre-processing at a transmitter to enable distinction of signals at the receiver, such as employing orthogonal multiple access. The received signal is the superposition of the intended signal and some number of interfering signals.(ii)MA-physical isolation (MA-PHY). This class of techniques isolates signals by using the physical characteristics of the channel such as using orthogonality of physical resources (e.g. wavelength division multiplexing) or by leveraging the properties that are distinct to OWC, such as directionality or receiver FOV.\u2014Spatial diversity. This class encompasses both of the PHY and SP cases and is concerned with receiving data by many spatially unique channels or multiple PDs then performing signal processing to have them de-correlated.A review of the state-of-the-art related to OWC interference studies yields a range of terms similarly applied. These terms include the following: d for RF that inctegories . These aNext we consider the state of the art for interference management in each of these classes and corresponding to the different system models . For eac(a)In the first instance Multiple access refers to how a set of resources can be distributed to serve multiple UDs. The characteristics of the medium used are reflected in the techniques applied towards sharing the channel. In the basic case, resource distribution provides fairness from a single AP to multiple users. The other case involves a receiver detecting multiple sources. Here the resolution can include more complex coordination among APs or more advanced coding techniques.et al. [Marsh et al. provide et al. [Kim et al. study fret al. ,20 focuset al. [For the visible spectrum, Jung et al. analyse (ii)cooperative transmission or joint transmission (JT) which allows transmission from multiple transmitters to a single VLC receiver, similar to MP2P systems as shown in The techniques mentioned here employ et al. [An approach using synchronized time of arrival (TOA) from multiple sources was introduced by Prince et al. . This meet al. [Chen et al. adapt JTet al. [Park et al. study anet al. [et al. study robust MMSE linear precoding for MU-MISO VLC broadcast systems assuming imperfect CSI to create a more robust precoder in [et al. [Yu et al. tackle Mcoder in . This sy [et al. explore et al. [Li et al. design tMU-MISO precoding can be very beneficial for users to help increase their data rates as well as the system throughput. However, there is much complexity on the transmit side that needs to be clarified. The transmitters are considered connected, synchronized and/or have channel information regarding the receivers in most works mentioned above. Some works assume transmit precoding as well. The backhaul network must be able to handle such overhead, and as more devices enter the system, the technique does not scale gracefully. 
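As a generic illustration of why transmitter-side channel state information matters in these MU-MISO schemes, the sketch below applies textbook zero-forcing through the pseudo-inverse of an assumed channel matrix, so that with perfect CSI each user's photodiode sees only its own stream. This is not the specific precoder of any work cited above, the channel values are invented, and a real IM/DD system would additionally require a DC bias to keep the LED drive signal non-negative, which is omitted here.

```python
import numpy as np

# Hypothetical 3 LED APs jointly serving 2 users (MU-MISO broadcast).
# H[i, j] is the (normalized, invented) optical channel gain from AP j to user i.
H = np.array([[0.9, 0.4, 0.1],
              [0.2, 0.5, 1.0]])

W = np.linalg.pinv(H)           # zero-forcing precoder, assumes perfect CSI at the APs
data = np.array([1.0, -1.0])    # one bipolar symbol per user (before DC biasing)

x = W @ data                    # drive signals for the three LEDs
y = H @ x                       # noise-free photodiode outputs

print("H @ W (close to the identity, i.e. no inter-user interference):")
print(H @ W)
print("received symbols:", y)   # each user recovers only its own symbol
```

Any mismatch between the assumed and the true channel, for example stale CSI under receiver mobility, reintroduces inter-user leakage, which is one reason the scaling and overhead concerns noted here arise.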
While these systems appear promising, there needs to be more investigation to establish scaling limits with respect to number engaged transmitters UDs.(iii)NOMA permits the full use of the channel bandwidth by any transmitter, relying on superposition coding at the source and successive interference cancellation at each receiver. Many works have been published on the topic of NOMA in 5G networks ,30.et al. [Marshoud et al. propose et al. , as wellet al. [et al. [et al. [et al. [Kizilirmak et al. compare [et al. study NO [et al. derive c [et al. they ext [et al. analyse et al. [et al. [et al. [There are works that opt for solutions other than SIC such as Guan et al. , who stu [et al. use join [et al. study SIet al. [et al. [A low complexity power control allocation that aims at fairness under optical intensity constraints by maximizing the sum log user rate is studied by Yang et al. . Chen et [et al. study efWhile most works mention that NOMA is attractive for OWC systems, especially indoors, because the channel at a fixed location/orientation is nearly deterministic and the SNR is relatively high (in the absence of blockage), there is still a need for channel estimation as performance is impacted by shadowing, receiver mobility and orientation. NOMA has the advantage of allowing full resource usage and therefore best spectral efficiency. The main drawbacks of this approach are complexity, error propagation, and latency added due to the use of SIC. There is room for new solutions that find a trade-off between SIC and joint detection in terms of complexity while keeping a tolerable delay.(b)Here, we consider the implications of the physical layer characteristics on the nature of interference. This class includes sub-cases of (i) AP positioning, (ii) beam control and (iii) orthogonal multiplexing . Each is(i)The conventional model for an RF AP is a single omni-directional unit spanning multiple rooms in the service to a locale (e.g. an apartment). Coverage of a larger facility (building or campus) is realized by replication of APs. OWC APs, being primarily LOS-based, can service much smaller zones such as individual rooms, or can be replicated in larger rooms (e.g. open office seating). Clearly, there are important design considerations for the height, spacing and beam width of the OWC APs especially if they are also intended to provide lighting . Currentet al. [et al. [et al. [et al. [Wang et al. investig [et al. focus on [et al. propose [et al. study opAP placement is an interesting topic with more to investigate, and in particular, the impacts of mobility and device density. The introduction of steerable APs also adds a new dimension to improving system performance.(ii)The aforementioned cases consider static lighting and AP scenarios in which the source intensity parameters are provisioned for a particular operating point. With the use of beam control including beam width (transmitter FOV) and beam steering , new optimizations result as well as options to support adaptive performance to varying device location and data traffic.et al. [Rahaim et al. study SIet al. [Valagiannopoulos et al. suggest Beam steering can also be used at the transmit side, either as a narrow or diffuse beam \u201350. By uet al. [Another MA-PHY technique is proposed by Chang et al. involvinet al. [In addition to controlling the directionality of a source, the FOV and pointing angle of a receiver can be controlled to improve interference rejection. Abdalla et al. 
propose (iii)This technique facilitates co-location of multiple non-interfering signals that can be decoded selectively by independent receivers. .)et al. [et al. [\u22121 and 6.25\u2009Mb\u2009s\u22121. Butala et al. [Liu et al. demonstr [et al. considera et al. propose Exploiting WDM in OWC systems is an attractive means to gain capacity. However, it needs careful design to meet the relevant illumination requirements. These include (1) light distribution and intensity levels including minimizing glare, each dependent on the lighting use case; (2) providing satisfactory spectral power distributions for colour rendering, essential for humans; and (3) avoiding visible temporal discontinuities in colour or spatial distribution of light. Each of these is surmountable, but requires consideration when designing the modulation approach involving WDM or combinations of WDM with time division multiplexing (TDM).(c)Diversity techniques exploit the ability to simultaneously source multiple signals that can be combined at a receiver, the ability to receive sources from multiple detectors at a receiver, or some combination of the two techniques. The diversity class of interference management includes sub-cases of (i) space time block coding, (ii) multiuser-MIMO, (iii) angular diversity, (iv) interference alignment, (v) differential detection, and (vi) spatial light modulators .Figure (i)et al. [et al. [Space\u2013time block coding (STBC), first introduced for RF communications, has been established as a method to allow diversity in a system to achieve very low BER while saving power . Ntogariet al. analyse [et al. propose [et al. ) and shoet al. [\u22121 adding that the free space transmission range can be extended to 5\u2009m. BER is reported at less than 10\u22125. However, their system does not account for any mobility.Meanwhile Shi et al. construcWhile STBC can be beneficial in achieving low BER, there is a relatively high complexity involved with this approach. This includes the complexity associated with achieving syncronization of distributed transmitters.(ii)et al. [\u22121 at a BER of 10\u22126 while 70\u00b0 and 50\u00b0 are considered for receiver FOV.Hong et al. investiget al. [et al. [Pham et al. also emp [et al. investiget al. [Lian et al. study anMIMO schemes support increases in system capacity and related performance but at a the cost of complexity. Some of the systems mentioned above employ precoding which helps in relieving interference but assumes transmitter connectivity and the availability of CSI which is not always the case nor is it necessarily accurate. In support of this scheme, indoor VLC channels tend to be more deterministic but only in a static study. When device mobility is introduced, CSI changes and any static precoding matrices quickly become obsolete.(iii)a,b) but other designs using masks suspended over array receivers can be used [Angle diversity receivers (ADRs) rely on the ability to discern the angle of arrival of an incident transmitted signal . They arfigure 9 be used .Figure et al. [Most works on this topic focus on using ADR receivers in MIMO settings and for optical channel decorrelation. ADRs can also be used to mitigate inter-cell interference (ICI). Haas et al. explore et al. [a). Their results show that increasing the number of PDs (increasing diversity) improves the ability to reduce interference caused by reflections. 
And that the increased number of PDs degrades performance when reflections are not present due to the limited FOV of the PDs, all while assuming a fixed effective area.Chen et al. propose et al. [\u03d5 (b). Results show promise in the ability to minimize SINR fluctuation through optimizing parameters such as number of detectors, inclination angle and the combining scheme for the signals received by the PDs. However from their description of their ADR, they do not optimize \u03f5, which they define as the gap between the top detector and the side one although that area where \u03f5 presides is very useful and important for communication quality, they also do not study NLOS reflections. Most importantly, they do not study receiver orientation effects and so the proposed optimal inclination angle they reach through simulation is only optimized for a horizontally fixed receiver.Chen et al. study SIet al. \u00a73bi). bi. et alt al. [\u03d5 b. ResultEach of the works indicate that improvements in SINR can be achieved at a receiver by employing more PDs. However, they do not account for the variations in receiver orientation, which is critical to link performance ,71,72.(iv)et al. [Less stringent in its requirements, blind interference alignment (BIA) is a varet al. analyse (v)et al. [\u22123.Ryoo et al. propose Limitations of this approach exist when scaling to many UDs or in the presence of occlusions or variations in device orientation which may mask only a single channel. The use of a polarizer inherently limits incident signal strength as well.(vi)et al. [c). This method has been successfully demonstrated. It shows promise in being able to isolate the channels at the receiver, support diversity combining, and is relatively integratable into a working system.Spatial light modulators (SLMs) can be used reflectively or transmissively to manipulate or modulate optical signals. One approach leverages an array of mirrors to focus and enhance signal strength directed towards one or more PDs. The array of mirrors can be individually steered to reflect light from the target. All or a subset of the mirrors. By pointing the mirrors the signal strength of the target is increased (maximized) and the noise from other sources is minimized. Bare PDs do not have this feature. Chau et al. propose 4.The techniques discussed up to this point represent the building blocks for the construction of high-performance OWC systems. The introduction of device mobility, orientation, density and traffic use priorities each introduce new complexities for maintaining consistent performance. These factors motivate addressing the development of adaptive approaches that can react to changes in system state. The main topics considered here are (a) resource allocation and load balancing, (b) coordinated transmission, (c) combined beamforming, and (d) beam and FOV control.(a)Many works in the literature aim to optimize resource allocation and load balancing with different end goals in mind. We focus on the works that target interference mitigation.et al. [Mondal et al. propose et al. [et al. [Li et al. investig [et al. build a et al. [To improve load balancing results, Soltani et al. consideret al. [et al. [et al. [a priori knowledge of the users\u2019 wireless traffic distribution and then form an optimization problem that maximizes the sum rate for the duration of several future time slots weighted by the evolving queue backlog of each user over many future time slots. 
They compare their anticipatory association (AA) with responsive association (RA), which maximizes the sum rate at a current time slot weighted by the current queue backlog of the user. They report that their system outperforms RA achieving better trade-off between the average system queue backlog and the average per user throughput. They also note that their study indicates that the overall system average delay can be reduced when AA is employed. However, their choice of mobility model was the random waypoint model which is not practical and the assumption of perfect receiver location is not quite robust. Designing a predictive system is a step in the right direction but much more analysis and study is needed to delve deeper in the variations of this dynamic system and its reliability. There is definitely room for innovation in this area.Wang et al. study lo [et al. perform [et al. . However [et al. propose (b)Coordinated transmission involves multiple transmitters working together to produce signals decoded by one or more receivers. This includes (i) coordinated and non-cooperative cases and (ii) coordinated and cooperative cases.(i)et al. [Self-organizing interference coordination in optical wireless networks is the topic of the work of Ghimire et al. . The autet al. [b). Owing to this partitioning they provide a power control factor \u03b2 to help edge users get higher power than centre ones. This method improves the cell-edge user and overall system throughput.Chen et al. study fret al. [a). While accommodating illumination requirements, they propose a static scheduler that adjusts power control to give centre cell users the minimum required communication performance to mitigate interference for cell edge users, as well as a dynamic scheduler for dense scenarios when the static scheduler is not sufficient to combat interference for the cell edge users. The dynamic scheduler limits colour usage in some attocells according to interference intensity and under SINR demands of all the UDs. It is intended to function in one of two modes: (1) a distributed stage followed by a centralized stage jointly to adjust colour usage, or (2) a greedy approach based on dividing the hexagon cell into rings which would then get colours according to their location. They compare their results with CD which is the one colour per cell plan and NoICIC which is the plan where a cell is allowed to use all colours. They show results based on inter-LED distances but the system has a very high overhead, requires two scheduling phases which can provide delays for highly mobile users, and also depends on reliable feedback channels which may not be practical.Zhou et al. analyse et al. [et al. [Of the works that study interference coordination employing NOMA (\u00a73a(iii)), Kashef et al. study an [et al. , who exp(ii)et al. [Bai et al. propose These areas are very interesting; there is a need to identify ways to realize maximum performance obtainable from the diverse set of physical layer techniques under dynamic conditions. We would hope that future work would define upper bounds on number of supported users on repeatable benchmark traffic and mobility configurations. We note that some of the aforementioned techniques will be challenging to scale to many APs (e.g. the use of synchronized transmitters).(c)Combined beamforming employs et al. [Mostafa et al. suggest (d)d) is more active in adapting the FOV and orientation to allow the user the best SNR quality and is able to change them dynamically with system changes. 
By contrast, work that proposes dynamic beam and steerability adapts the transmit side to change along with the user density, location and overall motion within the room to allow for better coverage and communication quality. Both areas are very interesting and have room for more analysis at the system scale. Beam and FOV control can also be used to address dynamic user behaviour. Examples include the aforementioned work and references [...] (figure 9). (e) Here, we discuss systems that generally combine some of the building blocks discussed in the preceding sections; [...] et al. propose [...], and Adnan-Qidan et al. study us[...]. 5. OWC promises to provide a huge boost to the capacity of indoor AP networks including those coupled to the lighting function as VLC. However, the properties of the optical spectrum and the anticipated increase in density of mobile-UDs require a revisit to interference management for this media type. In this paper, we explore the state of the art with respect to multiple user optical access and develop a classification of current technical approaches for exploiting this technology and managing interference. There are a few key takeaways from the survey which we summarize here: (i) VLC provides links that have unique properties of light leading to methods and performance unique to OWC. (ii) When we adopt and replicate VLC for APs, we need to address interference among UDs and among the multiple APs. (iii) Many interesting techniques exist, each with different considerations for interference. (iv) Techniques can be combined and optimized for static scenarios, but ultimately we need to study how they interact and can adapt to follow the dynamics introduced by device mobility, orientation and data traffic models. (v) Future work will address system dynamics including optimization and management in the context of overall network performance adaptation. Finally, we emphasize the importance of being able to compare the strengths of future innovations by ensuring that a common set of operating conditions is adopted. With such a foundation we would expect the most critical performance metrics will be (1) system throughput, (2) sustained single-user link speed, and (3) system complexity. We hope to evaluate future work based on these primary metrics."} {"text": "After 40 years of intense study on HIV/AIDS, scientists have identified, among other things, at-risk populations, stages of disease progression and treatment strategies. What has received less attention is the possibility that infection might elicit an increase in sexual behavior in humans. In 2000, Starks and colleagues speculated that HIV infection could alter host behavior in a manner that facilitated the spread of the virus. Retrospective and self-report data from five studies now support this hypothesis. Individuals with acute- versus nonacute-stage infections report more sexual partners and more frequent risky sex. Additionally, male sexual behavior increases nonlinearly with HIV viral load, and data suggest a potential threshold viral level above which individuals are more likely to engage in risky sexual behavior. Taken together, these data suggest that HIV infection influences male sexual behavior in a manner beneficial to the virus. Here, we present these findings, highlight their limitations and discuss alternative perspectives.
We argue for increased testing of this hypothesis and advocate for increased public health measures to mitigate the putative impact on male sexual behavior.Lay Summary In 2000, Starks and colleagues speculated that HIV infection could alter host behavior in a manner that facilitated the spread of the virus. Retrospective and self-report data from five studies now support this hypothesis. We argue for increased testing of this hypothesis and advocate for increased public health measures to mitigate the putative impact on male sexual behavior. As in the acute stage, viral load is often high during AIDS [The progression of HIV infection is divided into distinct stages marked by differences in serology, viral load, and CD4+ cell counts. Acute infections are active during the period \u223c2\u20135\u2009weeks after transmission, with the production of HIV-specific antibodies commencing around 3\u20134\u2009weeks after transmission . Like thing AIDS . Our hypA 2016 review notes thDirectly testing our hypothesis is challenging. Assessment of altered sexual behavior caused by HIV infection requires observation before and after infection. A randomized controlled trial would require an at-risk group to be monitored for an extended period. Given the potentially confounding impacts of both knowledge of infection and treatment with antiretrovirals, individuals would need to remain unaware of their infection and remain untreated. For obvious ethical reasons, such an experimental design cannot be implemented. It is possible, however, to examine sexual behavior of HIV+ individuals as related to stage of infection or viral loads. In particular, stage of infection studies can offer similar experimental benefits to the aforementioned morally problematic design. Here, we use retrospective and self-report data from five studies to examine our hypothesis.et al. [Joseph Davey et al. found thet al. . Had sexet al. [et al. [et al. [n = 169) compared with nonacute (n = 5015) HIV infection in homosexual men was associated with 5-fold higher odds for future risky sexual behavior, which they defined as condomless sex with an occasional partner .Joseph Davey et al. also fou [et al. , Braun e [et al. report tet al. [2 to 104 copies/ml and then rises parabolically to around 80% at between 106 and 107 copies/ml . When subdivided, there is strong trend toward increased unprotected sex with casual partners with a recent increase in serum viral load above 105 copies per ml. Continued elevated viral load was not significantly related to riskier sexual practices with casual partners, but the rate of unprotected sex was still elevated (OR=1.5).\u00a0Consistent with the findings presented by Joseph Davey et al. [Huerga et al. found thy et al. , this suy et al. . This acy et al. . This suet al. [5 viral copies/ml is a threshold level above which the number of insertive acts rapidly increases. Stratifying semen viral loads by less than or greater than 105 copies/ml shows a significantly greater frequency of insertive sexual acts in the group with higher viral loads . In sum, although supporting our hypothesis, all five referenced articles suffer from some limitations.Data presented by Huerga et al. in Fig.\u00a0 [et al. was smalet al.\u2019s data [It can be argued that Joseph Davey .\u2019s data listed iet al. [et al. [et al. [Dukers et al. , Huerga [et al. and Kali [et al. did not [et al. , 30. 
Pera priori and when assessed in light of the findings on sexual partner diversity and risk taking in uninfected, acute stage and nonacute stage individuals reported by Joseph Davey et al. [et al. [These are but two of several potential explanations for the observations raised by these studies. However, the suggested alternate explanations seem less plausible than our hypothesis both y et al. and Brau [et al. .We believe that future research on HIV and its potential impact on sexual behavior should examine semen viral loads in addition to blood viral loads since these only weakly correlate . Blood vAdditionally, there are at least four broad groups of HIV-1 which are further distinguished into subtypes that can have different biological effects . We suggFinally, HIV-1 is a relatively new human pathogen that originates from African primates; it first infected humans about 100 years ago , 33. AccThe fact that pathogens can influence the behavior of hosts is well understood. It stands to reason that sexual behavior is the most likely trait for a sexually transmitted virus to manipulate, and data show that risky sexual behavior increases following infection during the acute stage and with increasing viral loads. The data upon which we have based our analysis are limited. Consequently, we hope to spur further research that would more definitively support or reject our hypothesis. Seeing as knowledge of infection seems able to curb risky sexual behavior, our hypothesis argues strongly in favor of frequent screening in order to catch infections early on when they are most infectious and manipulative . In addi"} {"text": "These results suggest that supplementary protein (plant-based) may improve strength in the absence of exercise, a finding that was independent of lean mass [p < 0.001), which may further indicate protein inadequacy in this population. We appreciate the critique of our paper by Genoean mass . In thisGenoni et al. stated tSecondly, Genoni et al. are critical of our use of the DIAAS in athletes stating that DIAAS analyses do not consider the metabolic demand for protein in the context of exercise. Citing Burd et al. , Genoni p = 0.074); furthermore, we reported that this statistic was further attenuated when gender was controlled. Had we controlled for gender, age, and energy intake, the resulting p value was similar to the unadjusted value (p = 0.078). We did not control for lean mass in this comparison between diet groups as our premise was that lean mass was a reflection of diet type. Indeed, peak torque per kilogram lean mass was virtually identical between diet groups. Tong et al. also reported a slight but significant reduction in strength for vegetarians versus omnivores from a large UK cohort and concluded that the differences in height, lean mass, and physical activity likely contributed to the difference in strength in vegetarians versus omnivores [Finally, Genoni et al. suggest that our statistical analyses were \u201cflawed\u201d for comparing strength differences between diet groups since we did not control for age, gender, energy intake, and lean mass. The stated purpose of our study was to \u201canalyze dietary protein availability using the DIAAS method and to relate available protein to muscle mass and strength.\u201d We demonstrated significant relationships between available protein and muscle mass (r = 0.541) and available protein and peak torque (r = 0.315) in our participants. We also reported that peak torque between diet groups was not significant (mnivores . 
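To illustrate the covariate adjustment discussed above (comparing strength between diet groups while controlling for gender, age and energy intake), the following sketch runs an ANCOVA-style regression on simulated data. The numbers are invented for illustration only and the code is not a re-analysis of either study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40
df = pd.DataFrame({
    "diet": np.repeat(["omnivore", "vegan"], n // 2),   # hypothetical groups
    "gender": rng.choice(["F", "M"], size=n),
    "age": rng.normal(30, 6, size=n),
    "energy_kcal": rng.normal(2400, 300, size=n),
})
# Simulated peak torque depends on diet, gender and energy intake plus noise.
df["peak_torque"] = (
    120
    + 5 * (df["diet"] == "omnivore")
    + 30 * (df["gender"] == "M")
    + 0.02 * df["energy_kcal"]
    + rng.normal(0, 12, size=n)
)

unadjusted = smf.ols("peak_torque ~ diet", data=df).fit()
adjusted = smf.ols("peak_torque ~ diet + gender + age + energy_kcal", data=df).fit()
print("unadjusted diet effect:", round(unadjusted.params["diet[T.vegan]"], 1),
      "p =", round(unadjusted.pvalues["diet[T.vegan]"], 3))
print("adjusted diet effect:", round(adjusted.params["diet[T.vegan]"], 1),
      "p =", round(adjusted.pvalues["diet[T.vegan]"], 3))
```

Whether such adjustment attenuates or sharpens a group difference depends on how the covariates are distributed across the groups, which is exactly the point at issue in the exchange above.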
We wereWe hope that these comments adequately address the concerns of Genoni et al. We value the opportunity to discuss our results in further detail."} {"text": "Gene editing and/or allele introgression with absolute precision and control appear to be the ultimate goals of genetic engineering. Precision genome editing in plants has been developed through various approaches, including oligonucleotide\u2010directed mutagenesis (ODM), base editing, prime editing and especially homologous recombination (HR)\u2010based gene targeting. With the advent of CRISPR/Cas for the targeted generation of DNA breaks (single\u2010stranded breaks (SSBs) or double\u2010stranded breaks (DSBs)), a substantial advancement in HR\u2010mediated precise editing frequencies has been achieved. Nonetheless, further research needs to be performed for commercially viable applications of precise genome editing; hence, an alternative innovative method for genome editing may be required. Within this scope, we summarize recent progress regarding precision genome editing mediated by microhomology\u2010mediated end joining (MMEJ) and discuss their potential applications in crop improvement. The broken genome should be repaired to maintain its stability; otherwise, it may lead to cellular malfunctioning. Even single double\u2010stranded break (DSB) damage in chromosomes could lead to cell death if it is not properly repaired , including rapidly emerging clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR\u2010associated (Cas) protein complexes \u2010dependent targeted integration (MITI) or Precise Integration into Target Chromosome (PITCh), an application of MH\u2010mediated end joining (MMEJ) repair of DNA DSBs , and HR and frogs (Xenopus laevis) were much higher than they were in the HR pathway. Due to the dimeric active forms of FokI, a pair of TALENs must be designed for each of the dsDNA cut sites on the donors and targeted genomic sites. Moreover, to avoid recurrent cutting by the same TALENs after ligation, the junction sites have to be shortened from their original forms, thereby limiting the ability of TALEN\u2010based PITCh to perform a targeted insertion with unaltered sequences at the junctions. However, the CRISPR/Cas\u2010based PITCh does not have the latter limitation, as it could be designed to precisely edit genomic loci without resulting in any undesirable sequence modification that used TALENs and CRISPR/Cas9 for targeted genome modifications via the MMEJ pathway (8\u2010nt MH) in various animals was successfully engineered selection of CRISPR/Cas complexes for site\u2010specific DSB formation, (ii) delivery method for introducing the editing tools into plants, that is via Agrobacterium or particle bombardment\u2010mediated transformation, or PEG or electroporation\u2010mediated protoplast transformation, (iii) the type of CRISPR/Cas editing tools to the nucleus and the targeting sites , (iv) synchronization or spatial and temporal controls of the expression of Cas proteins, gRNAs and donors, (v) the DNA donor length and (vi) possible methods for quantification MMEJ frequency .et al., et al., It is worth understanding that the MMEJ\u2010mediated repair mechanism in plants may be different from that of animal systems, as some MMEJ repair components, such as DNA ligase III and SIRT6 proteins, have yet to be identified in plants. 
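To make the donor-design considerations summarized above (MH length, cut-site placement, donor structure) more tangible, here is a minimal, purely illustrative sketch of extracting short microhomology arms around a CRISPR/Cas9 cut site for a PITCh/MITI-style knock-in donor. The genomic sequence, spacer and cargo are invented, the sketch assumes the common convention that SpCas9 produces a blunt DSB 3 bp upstream of the NGG PAM, and it is not the protocol of any study cited here; practical designs also guard against re-cutting of the repaired junctions.

```python
# Hypothetical example only: derive 8-nt microhomology (MH) arms flanking a
# SpCas9 cut site, then assemble a PITCh/MITI-style donor as MH + cargo + MH.
target = "ATGGCTAGCTTACCGGATCGATCGGAGTCTGGTCAGTTTGCCAATGG"   # invented genomic context
spacer = "TTACCGGATCGATCGGAGTC"                              # invented 20-nt protospacer
mh_len = 8                                                   # 8-nt MH, as in many PITCh designs

start = target.find(spacer)
pam = target[start + len(spacer): start + len(spacer) + 3]
assert start >= 0 and pam.endswith("GG"), "protospacer must sit 5' of an NGG PAM"

cut = start + len(spacer) - 3              # blunt cut assumed 3 bp upstream of the PAM
left_mh = target[cut - mh_len: cut]        # arm matching the sequence left of the DSB
right_mh = target[cut: cut + mh_len]       # arm matching the sequence right of the DSB

cargo = "GGTACCGAGCTC"                     # invented insert to be knocked in
donor = left_mh + cargo + right_mh         # MMEJ anneals the MH arms to the resected genomic ends
print("left MH :", left_mh)
print("right MH:", right_mh)
print("donor   :", donor)
```

The same arithmetic scales trivially to longer arms, but as discussed above the achievable frequency in planta will depend on delivery, donor availability at the break and the balance between MMEJ and competing repair pathways as much as on arm design itself.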
Another possibility following from MMEJ is the generation of chromosomal translocation or arrangement, probably due to kinetics of its repair process being slower than the SSA pathway , with Cas9 protein in et al., Drosophila and mammals was shown to preferentially adopt certain MH patterns in mouse culture cells (Hogan and Faust, et al., et al., et al., et al., et al., et al., et al., et al., et al., Mapping of the MHs among DSB repair outcomes by deep sequencing techniques has been very powerful for understanding their distributions and selections for annealing during MMEJ. Analysis of CRISPR/Cas9\u2010based DSB repair outcomes by NGS revealed and validated the major involvement of MMEJ in the repair process and generated corresponding maps of MH usage in human cells (Bae et al., S.\u00a0cerevisiae POL3/CDC2 (Galli et al., et al., et al., et al., Other MMEJ improvement approaches might be connected to early or late steps of the repair process. K48\u2010linked polyubiquitylation\u2010mediated removal of KU80 from DSB ends may promote alternative repair pathways (Postow et al., et al., et al., et al., et al., MMEJ\u2010based gene editing offers alternative tools for precision engineering of organisms of interest at a potentially higher frequency than what is achieved with HR\u2010based approaches. Substantial understanding of MMEJ activation and its mechanism in DSB repair Figure\u00a0 has beenAgrobacterium\u2010mediated transformation or particle bombardment or protoplast transfection; Agrobacterium\u2010mediated transformation is the most widely used thanks to its ease and low cost. However, Agrobacterium\u2010mediated transformation usually delivers low copy numbers of the editing tools in its T\u2010DNA system, thereby limiting the competitiveness of exogenous donors to prevent re\u2010ligation of a broken end by cNHEJ. Recent advances in HR works have used DNA replicons as efficient donor DNA cargo for plant genome editing (Baltes et al., et al., et al., Since the advent of CRISPR/Cas technology, DSB formation has become much more precise, flexible and customizable. In general, both blunt\u2010end and cohesive\u2010end DSBs can be used in MMEJ\u2010mediated editing, but the former type might be easier to design and might produce more predictable repair outcomes. Moreover, due to the need for end resection to generate 3\u2032 overhangs, Cas complexes that produce 5\u2032 overhangs, such as Cas12a, may not be energetically preferred. The DSB inducers and donor DNAs are conventionally introduced into plants by et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., In this review, we summarize the MMEJ\u2010mediated DSB repair mechanism at each step that requires the involvement of a cascade of DNA damage repair proteins/enzymes Figure\u00a0. MMEJ\u2010meNo conflicts of interest are declared.Conceptualization, T.V. V and J.Y.K.; Methodology, T.V. V and J.Y.K.; Writing\u2014Original Draft, T.V. V., D.T.H. D., J.K., Y.W.S., M.T.T., Y.J.S. and S.D.; Writing\u2014Review and Editing, T.V. V., D.T.H. D., J.K., S.D. and J.Y.K.; Funding Acquisition, J.Y.K.; and Supervision, T.V. V and J.Y.K."} {"text": "Determining the tempo and mode of non-avian dinosaur extinction is one of the most contentious issues in palaeobiology. Extensive disagreements remain over whether their extinction was catastrophic and geologically instantaneous or the culmination of long-term evolutionary trends. 
These conflicts have arisen due to numerous hierarchical sampling biases in the fossil record and differences in analytical methodology, with some studies identifying long-term declines in dinosaur richness prior to the Cretaceous\u2013Palaeogene (K-Pg) boundary and others proposing continued diversification. Here, we use Bayesian phylogenetic generalized linear mixed models to assess the fit of 12 dinosaur phylogenies to three speciation models . We do not find strong support for the downturn model in our analyses, which suggests that dinosaur speciation rates were not in terminal decline prior to the K-Pg boundary and that the clade was still capable of generating new taxa. Nevertheless, we advocate caution in interpreting the results of such models, as they may not accurately reflect the complexities of the underlying data. Indeed, current phylogenetic methods may not provide the best test for hypotheses of dinosaur extinction; the collection of more dinosaur occurrence data will be essential to test these ideas further. The effet al. = 614) and Lloyd et al. [n = 420), including two subtly different versions of the Benson et al. [et al. [et al. [et al. [Sakamoto et al. used the [et al. , ceratopsians [n = 27 and 30, respectively), hadrosauriforms [n = 62), sauropods [n = 87 and 76, respectively) and theropods [et al. [. Maximum and minimum possible ages came from the Paleobiology Database . Zero-length branches were lengthened by imposing a minimum branch duration of 1 Myr; [et al. [We expanded this comparison by using an additional set of nine recently published non-avian dinosaur phylogenies that include representation of all the major clades present during the late Mesozoic, including thyreophorans \u201339 (n = topsians (n = 27 uriforms (n = 62)auropods ,42 ), (ii) the slowdown to asymptote model, with node count modelled as a function of the square root of time elapsed from root to tip ), and (iii) the downturn model, with node count modelled as a function of time elapsed from root to tip and its quadratic term ). Defining the theoretical value for the node count when no time has elapsed is not straightforward. The trees have no tips at t0, meaning that node count will be zero; however, because the tree has a root node, there is technically a node count of 1. We therefore fitted models where the intercept is estimated , with t5 iterations sampling at every 1000 iterations and discarding the first 5 \u00d7 104 iterations as burn-in. We used the default priors for MCMCglmm . All models had a mean effective sample size of greaet al. [et al. [2 or \u221atime elapsed) to choose between models where the difference in DIC was less than 4 units. We feel that our procedure is a potentially fairer test between models, as quadratic terms are almost always significant and would thus lead to the downturn model (c) being preferentially selected over either the null (a) or slowdown to asymptote models (b) even in cases where model fit was similar. For models with estimated intercepts, we also extracted the posterior means of the intercepts to examine how these varied.We fitted each of the three models to all of our 909 trees , with inet al. ), or wit [et al. , which urn model c being pthe null a or slowe models b even inhttps://github.com/nhcooper123/dino-trees/ [https://dx.doi.org/10.5519/0034257) [All analyses used R v. 3.6 and repro-trees/ ). The da0034257) .3.et al. [et al. 
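For readability, the three speciation models compared in this analysis can be written out explicitly. With t the time elapsed from root to tip, N the node count along that root-to-tip path, and eta the linear predictor for N (the error structure and phylogenetic random effects of the MCMCglmm fits are not restated here), the models are:

```latex
\begin{align}
  \text{(a) null:}                  &\quad \eta = \beta_0 + \beta_1\, t \\
  \text{(b) slowdown to asymptote:} &\quad \eta = \beta_0 + \beta_1\, \sqrt{t} \\
  \text{(c) downturn:}              &\quad \eta = \beta_0 + \beta_1\, t + \beta_2\, t^{2}
\end{align}
```

with the intercept beta_0 either estimated freely, fixed at zero or fixed at one, as described above. Under this parameterization, a negative quadratic coefficient beta_2 in model (c) is what the inference of a pre-K-Pg downturn in speciation rests on.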
[The best model varied between trees, between differently dated versions of the same tree, and on the basis of whether intercepts were estimated, set to zero, or set to 1.0 figures\u00a0. The Lloet al. supertre [et al. supertreet al. [et al. [et al. [Our analyses where intercepts were estimated produced qualitatively identical best model results for the Benson et al. and Lloy [et al. trees , indicaet al. [et al. [et al. [et al. [For models where intercepts were set to zero , 207 (23%) of our 900 new trees unambiguously favoured the downturn model, 186 (21%) unambiguously favoured the slowdown to asymptote model and 472 (52%) favoured either the downturn or the slowdown to asymptote model. No trees favoured the null model and 35 (4%) favoured no model at all . All three Lloyd et al. supertre [et al. supertre [et al. supertre [et al. supertreet al. [et al. [et al. [et al. [For models where intercepts were set to 1.0 , 183 (20.3%) of our 900 new trees unambiguously favoured the downturn model, 209 (23.2%) unambiguously favoured the slowdown to asymptote model and 469 (52.1%) favoured either the downturn or the slowdown to asymptote model. No trees favoured the null model and 39 4.33%) favoured no model at all . All three Lloyd et al. supertre% favoure4.et al. [et al. [2 or \u221atime elapsed). Because these quadratic terms were generally significant this led to their favouring the downturn model over the other two models in these ambiguous cases, potentially leading them to overstate the success of the downturn model. While this is a valid methodological choice, and differences in opinion about model selection procedures are common, we argue that as their selection of the downturn model as the \u2018best\u2019 model was methodologically equivocal it is unfair to say there is \u2018overwhelming support\u2019 for a downturn in dinosaur speciation rates prior to the Late Cretaceous [In general, our results agree with those of Sakamoto et al. but we d [et al. selectedetaceous , p. 5036c) that would be indicative of a continual downturn in dinosaur speciation rates before the K-Pg boundary. In particular, the sauropod tree of Carballido et al. [contra [Using the various combinations of trees, dating protocols and intercept assumptions listed above, we fitted a total of 2727 speciation models. Of these, only 518 (approx. 19%) unambiguously favoured the downturn model c that woo et al. consiste [contra ), but suet al. [et al. [et al. [et al. [A recent study by Chiarenza et al. provideset al. . Their cet al. . In addiet al. ,55. Thes [et al. and Dean [et al. rejected [et al. found a [et al. recovereFinally, we posit that although phylogenies can be very useful in resolving long-running evolutionary debates ,57, they"} {"text": "Humans evolved from an ape ancestor that was highly intelligent, moderately social and moderately dependent on cultural adaptations for subsistence technology (tools). By the late Pleistocene, humans had become highly dependent on culture for subsistence and for rules to organize a complex social life. 
Adaptation by cultural traditions transformed our life history, leading to an extended juvenile period to learn subsistence and social skills, post-reproductive survival to help conserve and transmit skills, a dependence on social support for mothers of large-brained, very dependent and nutrient-demanding offspring, males devoting substantial effort to provisioning rather than mating, and the cultivation of large social networks to tap pools in information unavailable to less social species. One measure of the success of the exploitation of culture is that the minimum inter-birth interval of humans is nearly half that of our ape relatives. Another measure is the wide geographical distribution of humans compared with other apes, based on subsistence systems adapted to fine-scale spatial environmental variation. An important macro-evolutionary question is why our big-brained, culture-intensive life-history strategy evolved so recently and in only our lineage. We suggest that increasing spatial and temporal variation in the Pleistocene favoured cultural adaptations.This article is part of the theme issue \u2018Life history and learning: how childhood, caregiving and old age shape cognition and culture in humans and other animals'. Severalet al.'s classic et al.'s . Thus, t [et al. The othe [et al. . Kaplan [et al. provide [et al. \u20138. Walke [et al. note tha [et al. have recet al.'s work on [et al. argue thThe reason that our large brain features so prominently in discussions of the evolution of the human life history is that it generates strong life-history trade-offs. Aiello & Wheeler noted thet al. [Isler & Van Schaik argue thet al. suggest et al. . The onlet al. [et al. believed hunter\u2013gatherers could not solve. Hawkes [There is some controversy over who were the most important providers of alloparental care over the course of human evolution. Kristen Hawkes et al. noted th. Hawkes proposedet al. [et al.'s [et al. [the human life history in the face of likely large changes over the Pleistocene and considerable ecological and cultural variation around the current central tendency. Another extreme human life-history variant is modern \u2018demographic transition' societies that have far fewer children than are economically feasible [Hawkes et al. went on et al.'s data are [et al. label thfeasible . Modernifeasible .et al.'s [contra [One constraint on the small game hypothesis is that human physiology precludes getting a large share of calories from lean protein. Humans cannot process enough nitrogenous waste from using protein as a source of calories above about 35% of total caloric needs, and small animals are generally very low in fat . Kaplan et al.'s data sug [contra but cons [contra ). Large [contra might coWood & Hill tested t2.Given that our large brain coevolved with our slow life history and our cooperative breeding adaptation, it is of interest what the brain is adapted to do. Some of the controversy involves whether big brains are for managing our social life ,29 or foet al. [A different controversy involves the roles of culture and individual intelligence in the evolution of our large brain. This is a subtle question. The subtlety is twofold. First, empirically, the comparative biology of brain size suggests that it is correlated with both individual and social learning across a wide range of behaviours and species \u201334. Readet al. argue thet al. ,37. 
It iSecond, Boyd & Richerson showed tSome prominent evolutionary psychologists deny that culture, in the sense of socially transmitted traditions, plays any significant part in the evolution of humans or other animals (e.g. ). Tooby More recently, Cosmides & Tooby have procultural variation operating via social selection acting shments, .Cultural evolutionists argue cultural traditions like technology and social organization typically evolve cumulatively over prolonged periods, often resulting in quite sophisticated adaptations ,55,58\u201360et al. [et al. [et al. [et al.'s [Homo naledi fossils suggests that Lower Pleistocene brains and their cultures were viable long after more sophisticated cultures and larger brains evolved in other human lineages [In essence, the cultural niche hypothesis defended by Boyd et al. holds th [et al. point ou [et al. to argue [et al. , and by [et al. . By Ache [et al. . Hill etet al.'s hypotheslineages . At the lineages . The builineages . Social Thus, in principle, our large and costly brain might be explained by its ability to support more innate modules, better individual intelligence or cumulative culture. That so much of our subsistence is a product of sophisticated technology and social organization that is transmitted culturally suggests that cumulative culture is a major part of the answer.3.et al. [The four life-history characteristics enumerated by Kaplan et al. could ha(a)et al. [There is a fifth major life-history difference between humans and chimpanzees in addition to the four enumerated by Kaplan et al. . Humans et al. . When thet al. , as is tet al. . High paet al. , would aet al. . Young cThe size of social networks people can use to access cultural variation is important because people can actively bias their acquisition of culture (and their teaching) . Learneret al. [Long-term quantitative studies of hunting and gathering groups demonstrate that they tend to live in fluid bands of 30\u201350 individuals that regularly exchange members, such that the whole ethnolinguistic tribe of a few hundred to a few thousand people participates in a common social network, a form of social structure that seems to be uniquely human . Hadza aet al. studied et al. looked cet al. give theSheer population size limits the scale of social networks and the level of social complexity that a societies can sustain ,80. DiamIncreasing network sizes engenders trade-offs. Societies living at low population densities require costly investments in travel to maintain large social networks. In the Ju/hoansi (!Kung), a system of gift exchange links people in distant camps . In the Increasing social network size also carries the risk of acquiring maladaptive ideas. Acquiring culture by vertical transmission is relatively safe in that parents and offspring are closely related genetically and tend live in the same ecological circumstances. Even so, parent\u2013offspring conflict is a well-studied problem . HoweverHuman social networks are poly-functional. A person's information network, network of relatives, network of economic partners and network of acquaintances have different costs and payoffs yet they tend to heavily overlap. For example, Thornhill & Fincher defend t(b)Do humans show any signs of being adapted to social as opposed to individual learning? At least at the margin of time these activities trade-off against one another even if both capacities extensively share cognitive resources. 
A now voluminous literature documents that children are adept at social learning even as compared with other apes ,99. This4.The evidence we have reviewed suggests that the long, slow human life history coevolved with our large brain. Brains are substantially organs of phenotypic flexibility, and brain size increased in many mammalian lineages in the Cenozoic, with humans holding down the upper tail of the distribution of brain size relative to body weight. This evolution seems to have been driven by increasingly variable environments. The question is how do humans, and by extension other large-brained creatures, pay the high overhead costs of large brains? The general answer seems to be learning and other forms of individual creativity plus social learning. The extraordinarily large modern human brain depends upon high skilled food acquisition strategies that make nutrient-dense foods available by exploiting a great variety of locally available food resources. Cooperative breeding, especially the heavy involvement of men in helping provisioning of mothers with dependent offspring, requires institutions of marriage and kinship. Culturally transmitted subsistence skills and techniques and culturally transmitted social institutions make possible a life history that is simultaneously slower but capable of higher completed family size than in other apes."} {"text": "NPNL. Our simulation of foetal energy requirements demonstrated that this metabolic threshold of 2.1\u2005\u00d7\u2005BMRNPNL cannot realistically be crossed by the foetus around the time of birth. These findings imply that metabolic constraints are not the main limiting factor dictating gestation length.A hallmark of modern humans is that our newborns are neurologically immature compared to other primates. It is disputed whether this so-called secondary altriciality evolved due to remodelling of the pelvis associated with bipedal locomotion, as suggested by the obstetrical dilemma hypothesis, or from maternal energetic limitations during pregnancy. Specifically, the \u2018Energetics of Gestation and Growth\u2019 (EGG) hypothesis posits that birth is initiated when foetal energy requirements exceed the maximum sustained maternal metabolic rate during pregnancy at around 2.1\u2005\u00d7\u2005basal metabolic rate (BMR) of the non-pregnant, non-lactating condition (NPNL). However, the metabolic threshold argued under the EGG framework is derived from one study with a small sample size of only 12 women from the UK. Accordingly, we performed a meta-analysis of all published studies on metabolic scopes during pregnancy to better account for variability. After excluding 3 studies with methodological issues, a total of 12 studies with 303 women from 5 high- and 3 low-income countries were analysed. On average, pregnancy was found to be less metabolically challenging than previously suggested. The studies revealed substantial variation in metabolic scope during pregnancy, which was not reflected by variation in birth timing. Further, in a third of the studies, the metabolic rates exceeded 2.1\u2005\u00d7\u2005BMR Newborn modern humans weigh approximately twice as much as those of great apes, both absolutely and relative to adult body mass, yet neonatal brain size is only 28% of the mother\u2019s brain size, whereas it averages 43% in non-human primates . Human nThe obstetrical dilemma hypothesis has recently been criticized from various perspectives. 
Most of these critiques focus only on single aspects like the energetic consequences of pelvic width or factoThe most comprehensive critique of the obstetrical dilemma to date is offered by the \u2018Energetics of Gestation and Growth (EGG) hypothesis\u2019. It not only questions that difficult birth was caused by pelvic adaptations to bipedal locomotion but also provides an alternative explanation for secondary altriciality, thus countering two of the central pillars of the obstetrical dilemma hypothesis , 19. In The EGG hypothesis is technically an extension of Ellison\u2019s \u2018metabolin vitro and in vivo studies of the placental transport of 14C-labelled fatty acids showed that this process occurs rapidly [However, since the formulation of the metabolic cross-over hypothesis in 2001, rapidly , 28. It, rapidly . Support rapidly . Moreove rapidly metaboli rapidly . Neverth rapidly , so that rapidly metaboliNPNL), which only slightly exceeds the metabolic scope during lactation [et al. [NPNL for a 280-day-long event like pregnancy [NPNL threshold assumed by the EGG hypothesis. However, the energetic requirements of the foetus would need to increase enormously to cross those of the expectant mother at the time of birth if this threshold would be significantly higher than 2.0 to 2.1\u2005\u00d7\u2005BMRNPNL. It is, therefore, unclear whether metabolic limitations are the sole determinant for gestation length in humans as suggested by the EGG hypothesis.The maternal energetic threshold of the EGG hypothesis, the so-called \u2018metabolic ceiling\u2019, was proposed to lie at approximately 2.0\u20132.1 times the basal metabolic rate of the non-pregnant, non-lactating condition , th [et al. with onlregnancy . Therefoet al. [Recently, Prado-Nov\u00f3a et al. also demet al. .et al. [Given these issues surrounding the EGG hypothesis, the present study aims to expand the data on which the hypothesis of Dunsworth et al. rests anet al. , the preet al. [et al. [et al. [t-tests, we found the mean 24 h-energy expenditure observed by Poppitt et al. [et al. [et al. [We identified 15 studies reviewed by Butte and King and Savaet al. that meaet al. was omit [et al. was excl [et al. , 38. Fin [et al. was excl kcal/d) was sign\u2005<\u20050.05) and Hein\u2005<\u20050.05) , althouget al. [et al. [et al. [et al. [This left 12 studies that were included in the present meta-analysis . Some ofet al. studied [et al. measured [et al. who comp [et al. , perform [et al. , perform [et al. , 48, 49; [et al. and the [et al. . Specifimax) is defined as the ratio of TEE that can be maintained without depleting energy reserves, thus preserving constant body mass, and basal metabolic rate [et al. [EP) and fat mass (EF) during pregnancy was taken from Table 8 of Butte and King [Energy expenditure generally shows physiological limits during prolonged physical activities , 53, 54. kcal/d) , 53, 54. [et al. in addinand King . To obtaet al. [For all studies, the same offspring energy requirements were used as by Dunsworth et al. and no later measurements were provided. The authors indicated, however, that the non-pregnant, non-lactating women and the pregnant women were not the same individuals. In Butte et al. [n\u2005=\u200563), not only the weighted mean metabolic rate of the three subsamples (2.19\u2005\u00d7\u2005BMRNPNL in pregnancy week 36) exceeded the threshold of 2.1\u2005\u00d7\u2005BMR but also each subsample itself . Finally, a high mean metabolic rate was also reported by Dufour et al. 
[NPNL in pregnancy week 35, but no data have been recorded for week 36 or later.In 4 of the 12 studies , 48, 49,h et al. that equr et al. for a stPregnant women potentially use different strategies to lower their own energy expenditure. Such strategies could include energy-sparing mechanisms, like the reduction of the maternal BMR , or loweDifferences in the socioeconomic background of the study populations could explain part of the variation as the mean metabolic scope, particularly before pregnancy, was generally higher in socioeconomically weaker countries. Thus, socioeconomic background has been linked to physically more demanding workloads . Additioet al. [et al. [2 consumption and CO2 production and is considered the most accurate method to measure energy expenditure in a clinical setting [It is possible that the measurement methods contributed to the high variation of the metabolic rates . Thus, tet al. observed [et al. found th setting . It is t setting , 49. In setting , 46, the setting . We consIn the free-living context, the doubly labelled water method (DLW) is usually considered the gold standard, as it imposes a minimal burden to the participants and is sufficiently accurate, having a precision of 2\u20138% compared against respiratory gas exchange . The DLWet al. [et al. [NPNL with indirect calorimetry. Panter-Brick [NPNL was estimated by using an age-specific predictive equation based on individual height and month-specific mass [et al. [Another method to measure TEE during pregnancy is the flex heart rate method, which was used by Dufour et al. in Columet al. , 63. On [et al. assesseder-Brick relied ofic mass . The stu [et al. and Pant [et al. showed tet al. [r\u2005=\u20050.87) [Finally, Abeysekera et al. based th\u2005=\u20050.87) . Neverthet al. [et al. [et al. [a priori yield different results compared to DLW.An examination of the additional energy required during pregnancy revealed on average a steady increase during the entire course of pregnancy rather than a plateau during the third trimester as suggested by Dunsworth et al. , which w [et al. study et al. [e [et al. to estim [et al. . The equ [et al. and Dufo [et al. . Neverth [et al. . Therefo [et al. and Dufo [et al. are plau [et al. in the N [et al. , 46 showThe Energetics of Gestation and Growth (EGG) hypothesis posits that maternal energy requirements steeply increase during pregnancy and approach a plateau in the third trimester close to 2.1\u2005\u00d7\u2005BMR of the non-pregnant, non-lactating condition, and that labour starts when the exponentially growing energetic requirements of the foetus cross the maximum sustained maternal metabolic scope .NPNL. (v) Because the EGG hypothesis posits that labour is only triggered when the energetic requirements of the foetus surpass the maximum sustained metabolic capacity of the pregnant woman, a metabolic ceiling during pregnancy exceeding 2.1\u2005\u00d7\u2005BMRNPNL would imply an unrealistically high mean birth weight of\u2005>\u20054.9 kg. Conversely, (vi) the remarkably high variation in energy expenditure of the pregnant women strongly contrasts with the relatively low variation in gestation length. 
If birth timing were in fact dependent on energy expenditure during pregnancy, gestation length would likely be equally variable, which questions the conceptual basis of a metabolic ceiling in determining the onset of labour.The present meta-analysis of the 12 studies that measured TEE during pregnancy demonstrated, however, (i) considerable variation in the observed sustained maternal metabolic scope. (ii) On average, pregnancy was found to be less energetically costly than suggested previously, and (iii) in the great majority of the studies maternal metabolic scope did not plateau in the third trimester, implying a pattern that is not suggestive of a metabolic ceiling being approached by the maternal energy requirements during pregnancy. Moreover, (iv) a large percentage of the studies significantly exceeded the presumed metabolic ceiling of the EGG hypothesis of about 2.1\u2005\u00d7\u2005BMRA metabolic trigger of human birth has also been challenged by a study of obstetric selection pressures in early hominid fossils using FEA birth simulations, demonstrating that cephalopelvic constraints were similarly high in australopithecines as in modern humans . This imIndependent of the question of birth timing and the EGG hypothesis, it remains unclear from an evolutionary point of view why maternal metabolic capacity would be limited given the potential negative consequences. Further, the significantly higher costs of lactation make it more plausible that this activity might be pushing humans closer to the limit of their energetic capacity, wherever such a threshold might lie. Yet, it is well known that even human lactation can well be supported through increased dietary intake rather than substantial mobilization of adipose tissue , 54, 75."} {"text": "A best-evidence topic was written according to a structured protocol. The question addressed was the following: in patient undergoing lung transplantation, are lungs from donors of age >60\u2009years old (yo) associated with equivalent outcomes\u2014including primary graft dysfunction, respiratory function and survival\u2014than lungs from donors \u226460yo? Altogether, >200 papers were found using the reported search, of which 12 represented the best evidence to answer the clinical question. The authors, journals, dates, country of publication, patients group studied, study type, relevant outcomes, and results of these papers were tabulated. Amongst the 12 papers reviewed, survival results were different depending on whether donor age was analysed raw or adjusted for recipients\u2019 age and initial diagnosis. Indeed, recipients with interstitial lung disease (ILD), pulmonary hypertension or cystic fibrosis (CF) had significantly inferior overall survival when receiving grafts from older donors. When older grafts are allocated to younger donors, a significant decrease in survival has been noticed in the case of single lung transplantation. In addition, 3 papers showed worse results regarding peak forced expiratory volume in 1\u2009second (FEV1) in patients receiving older organs, and 4 showed comparable primary graft dysfunction incidence rates. We conclude that when carefully assessed and allocated to the recipient who could benefit most from the transplant , who would not require a prolonged cardiopulmonary bypass (CPB)), lung grafts from donors of >60yo offer comparable results to younger donors. A best-evidence topic was constructed according to a structured protocol. 
A best-evidence topic was constructed according to a structured protocol. This is fully described in the ICVTS .In [patients undergoing lung transplantation], are lungs from [donors > 60\u2009years old] associated with than lungs from donors \u226460\u2009years old?As a thoracic surgeon on call for lung procurement, you receive a call from the agency of health care. A graft from a 76-year-old (yo) male patient is offered. The cause of death is a sudden brain Haemorrhage. The clinical record shows no medical and no smoking history. The initial study shows a thoracic diameter similar to your patient\u2019s one with a Pa02/Fi02 ratio evaluated at 420\u2009mmHg. The CMV serology is negative. The CT-scan imaging shows no emphysema or suspicious nodules. The bronchoscopy shows perfect integrity of the airway. Then, you find a potential match for a 64yo man with COPD, but you hesitate to go through the lung transplantation (LTx) based on the donor\u2019s age.Medline search using PubMed interface was performed with the following query: [lung transplantation OR lung transplant] AND [old donors OR age] AND [outcomes], with results only in English and including all types of publications. Afterwards, references from the retrieved studies were looked up manually.209 papers were found using the reported search. From these papers, 12 were selected, providing the best evidence to answer the question. These are presented in Table\u00a0et al. [De Perrot et al. analysedet al. [Baldwin et al. evaluateet al. [Shigemura et al. publisheet al. [Sommer et al. studied et al. [P\u2009<\u20090.05). Regarding long-term outcomes, recipients of young, old and very old lungs showed similar 1- and 3-year survival rates.In 2017, Hecker et al. investiget al. [Holley et al. publisheet al. [Schultz et al. publisheet al. [Whited et al. aimed toet al. [In 2019, Hall et al. retrospeet al. [Aur\u00e5en et al. publisheet al. [Renard et al. studied et al. [Vanluyten et al. publisheWe are here chronologically presenting data from cohorts studied between 1986 and 2019. The patients with the oldest data are included in large cohorts and the authors of the studies captured sufficient data for a robust statistical analysis. Considering the level of evidence, all the studies are retrospective non-randomized studies. Nevertheless, we looked for large cohort studies that analysed donor age as a binary variable. The data presented here are showing that when analysed as a binary variable and without adjusting for the presence of other extended donor criteria, old donor age can be associated with worse OS. However, when the recipient's age and underlying diagnosis were considered, these results almost consistently lost their statistical significance. Some authors have sub-stratified patients by single or bilateral LTx. In this way, in younger recipients, the use of older donors was associated with a similar survival when performing double LTx and a significant decrease in survival performing single LTx. Nevertheless, it has been noticed that double lung transplant was the dominant procedure in the younger recipients\u2019 cohorts and the single transplant was the dominant procedure in the older recipients\u2019 cohorts. Regarding respiratory functions, results are showing better short- and long-term postoperative peak FEV1 from younger lungs. The donor age was not associated with PGD risk in the papers presented. Considering CLAD, there are data available. 
However, some authors described a significant difference in CLAD-free survival when using older donors.Hence, during the allocation process, lungs from donors \u226560yo should not be systematically rejected based on this parameter alone and the recipients\u2019 characteristics should also be considered. From the data collected here, the proper recipients for lungs from old donors appear to be emphysema patients with no pulmonary hypertension and >60\u2009years old. Thus, accepting lung grafts from older donors can be a safe and feasible strategy but donor age should be perceived as a risk factor that needs to be balanced against the urgency for transplantation."} {"text": "Mast cells and basophils share some phenotypic characteristics, such as the expression of a high-affinity membrane receptor for IgE, Fc\u03b5RI, and the granule storage of inflammatory mediators including histamine. Mast cells are found to be resident in nearly all vascularized tissues, whereas basophils migrate into the inflamed tissues from the circulation.Sauer et\u00a0al. verified the involvement of STAT3 in type IIb autoimmune CSU. The roles of histamine in chronic inducible urticaria were reviewed by Kulthanan et\u00a0al., Takimoto-Ito et\u00a0al. reported a case in which activated basophils remained in circulation during treatment with omalizumab. Miyake et\u00a0al. summarized recent findings of human and murine basophils, including their roles in immune tolerance. The regulatory roles of mast cells were also discussed by Zhang et\u00a0al. and Honda and Honda Keith. Poto et\u00a0al. and Numata et\u00a0al. investigated, respectively, the actions of autoantibodies against IgE and sweat antigen-induced chronic activation of basophils in patients with atopic dermatitis. Kamei et\u00a0al. demonstrated an IgE-dependent murine model of oral allergy syndrome. El Ansari et\u00a0al. found that allergen-specific IgA could suppress the IgE-mediated activation of mast cells.Accumulating evidence suggests that cutaneous mast cells play critical roles in chronic spontaneous urticaria (CSU) and that there are several promising therapeutic approaches . Sauer eMcNeil). In this topic, signal transduction and regulation of intracellular localization of MRGPRX2 were investigated . West and Bulfone-Paus discussed the heterogeneity of tissue mast cells, with special attention given to the expression of MRGPRX2. Numata et\u00a0al. summarized the roles of mast cells in cutaneous diseases. In indolent systemic mastocytosis, an increased number of mast cells expressing MRGPRX2 was found in the skin but was not linked to symptom severity .MRGPRX2 has received attention as a novel therapeutic target of chronic urticaria and drug-induced anaphylaxis, because its agonists have been found to have a structural diversity . Novel murine experimental models for the depletion of mast cells and basophils were also introduced here .We hope that this topic encourages and accelerates the research of mast cells and basophils.All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication."} {"text": "Hypoparathyroidism requires management with both calcium supplementation and active vitamin D to avert a state of hypocalcemia. During late gestation and the postpartum period , there is an under-recognized, yet intriguing occurrence of apparent \u2018pseudohyperparathyroidism\u2019, whereby supplementation dosages may need to either be reduced or discontinued, to prevent hypercalcemia. 
The explanation for this apparent phenomenon of improved parathyroid status (\u2018remission\u2019 or \u2018resolution\u2019) is incompletely understood; the purpose of this review is to analyze the case reports of this enigma within the medical (and grey) literature, providing an overall pathophysiological explanation and recommendation for the management of such patients. A literature search was conducted through PubMed/Medline, CINAHL, Cochrane Library Database, Scopus, UpToDate, Google Scholar, and the grey literature without a time-restricted period, analyzing all available articles within the literature describing an apparent improvement in parathyroid status in late-gestation and postpartum (lactating) females. Non-hypoparathyroid case reports were also included to further analyze and synthesize an overall likely pathophysiological explanation. Through the literature search, 24 papers were identified covering such a phenomenon in patients with hypoparathyroidism, alongside multiple additional reports of a similar occurrence in patients without underlying hypoparathyroidism. The pathophysiology is believed to occur due to the placental production of parathyroid hormone-related peptide (PTHrP) during gestation, with further production from the lactating mammary glands during the postpartum period. A typical pattern is observed, with increased PTHrP and suppressed PTH throughout both gestation and lactation . The concept of PTHrP-induced hypercalcemia is further demonstrated in patients without hypoparathyroidism, including subjects with placental hypersecretion and mammary gland enlargement. It is evident that patients with hypoparathyroidism may require a dosage reduction during late gestation and lactation, due to the risk for hypercalcemia. In addition to patients with hypoparathyroidism, this pathophysiological phenomenon occurs in unsuspecting patients, demonstrating the need for all clinicians in contact with pregnant females to be aware of this uncommon - yet perilous - occurrence. Initially described in the great Indian rhinoceros by Sir Richard Owen in 1852, the parathyroid glands were an addition to the knowledgebase in the ever-evolving field of endocrinology ..3].Hypoparathyroidism has a prevalence of up to 6.6% in the general population, encompassing all age groups, but lacks a specific gender preponderance ,5. The pDuring the postpartum period, an apparent phenomenon is noted, whereby lactating females (with hypoparathyroidism) may demonstrate an apparent state of calcitriol toxicity requiring a temporary reduction (or discontinuation) of their medications. This is characterized by subversion into an apparent temporary remission (or improvement) of hypoparathyroidism and has also been documented during late gestation. As this occurrence is infrequently recorded, the focus of this article is to reinforce this pivotal concept, as well as to provide an overall recommendation for the management of such patients.Outside the field of endocrinology, the quandary for calcitriol toxicity (temporary hypoparathyroidism resolution) during lactation is unlikely to be common knowledge. Parous women with a diagnosis of hypoparathyroidism are likely to attend appointments with specialties other than endocrinology - examples include obstetrics, midwifery, and general practitioners. This demonstrates multiple occasions for potential proactive maternal care. 
Due to the limited understanding of this phenomenon, coupled with the potential for preventable maternal harm, the purpose of this review is to broaden the awareness of the potential for hypercalcemia in postpartum, lactating females, with an established diagnosis of hypoparathyroidism.
The primary objective of this review is, therefore, to:
1. Provide a scoping review of the medical literature pertaining to the phenomenon of an ‘improvement’ or ‘remission’ of hypoparathyroidism during late gestation and lactation, with the intent of enhancing awareness of this potential (harmful) outcome across further medical specialties (and afflicted patients).
The secondary objectives of this review are to:
1. Provide an overview of the proposed pathophysiological hypotheses pertaining to the apparent improvement in hypoparathyroidism.
2. Review the recommendations for the management of hypoparathyroidism in late gestation and the postpartum period.
Whilst the temporary resolution of hypoparathyroidism is described in the medical literature, this is mainly limited to case reports and case series. Due to the paucity of data, this review is justified, with the intent to provide awareness to the greater medical community of the potential for calcitriol toxicity (requiring dosage revision) during both late gestation and postpartum (lactation). Ensuring that a broader group of healthcare professionals (and motivated patients) are aware of this potential outcome will safeguard proactive patient care and result in a reduction in preventable feto-maternal harm.
Materials and methods
This review is delivered as a scoping review, encompassing the broad medical literature, with a subset focus on relevant case reports/series from the ‘grey’ literature, which further substantiates the apparent (under-studied) phenomenon of hypoparathyroidism improvement during late gestation and in postpartum, lactating females. Providing a scoping review allows for a multi-faceted approach in the deciphering of this apparent endocrine anomaly, covering multiple case documentation, reviewing (and synthesizing) the proposed pathophysiology, providing rationales for the recommended management of this unique subset of patients, and highlighting areas of uncertainty.
Search Strategy
Whilst, in research, the highest-regarded bodies of evidence are randomized controlled trials, meta-analyses, and systematic reviews, infrequent medical occurrences are often limited to less-rigorous bodies of evidence. For this reason, this review encompasses all forms of articles published. In addition to the variable article types available, to prevent further exclusion of informative data, the electronic search conducted did not have a time-restricted period. The search strategy entailed Cochrane Library Database, Scopus, PubMed/MEDLINE, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), UpToDate, and Google Scholar (in addition to the grey literature).
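The exact keyword combinations used are listed in the passage that follows. Purely as an illustration of how one such Boolean combination could be run programmatically against PubMed/MEDLINE, the sketch below queries the NCBI E-utilities esearch endpoint; the query string, the retmax value and the helper function are example choices of ours and are not part of the published search protocol.

```python
# Illustrative sketch: running one Boolean keyword combination against PubMed via
# the NCBI E-utilities "esearch" endpoint. The query string is an example built
# from terms mentioned in the text; it is not the review's full search protocol.
import json
import urllib.parse
import urllib.request

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search(term: str, retmax: int = 100) -> dict:
    """Return the total hit count and the first PubMed IDs for a Boolean query."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "retmode": "json",
        "retmax": retmax,
    })
    with urllib.request.urlopen(f"{ESEARCH_URL}?{params}") as response:
        result = json.load(response)["esearchresult"]
    return {"count": int(result["count"]), "ids": result["idlist"]}

if __name__ == "__main__":
    # One example combination; the review combined several such queries.
    query = '("hypoparathyroidism" AND "lactation") OR ("hypoparathyroid" AND "remission" AND "postpartum")'
    hits = pubmed_search(query)
    print(f"{hits['count']} records retrieved; first IDs: {hits['ids'][:5]}")
```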
The following keywords were used in the retrieval of available literature pertaining to the principal topic of this professional project: “hypoparathyroid” AND “lactation” OR “hypoparathyroidism” AND “lactation” OR “hypoparathyroidism” AND “breastfeed” OR “hypoparathyroidism” AND “remission” AND “lactation” OR “hypoparathyroid” AND “remission” AND “postpartum” OR “hypoparathyroidism” AND “resolution” AND “lactation” OR “calcitriol treatment decreased” AND “hypoparathyroid” AND “lactation”.
Publications in languages other than English were included . The search included reports of this occurrence without hypoparathyroidism to further support the pathophysiological hypotheses. Male subjects were excluded from the search strategy; animal studies were included in the search strategy, to be incorporated under the pathophysiological section of this professional project . The focus of inclusion is postpartum (lactating) females. However, to further reiterate the pathophysiological hypotheses, non-lactating, postpartum females (and those during gestation) were also included. Further, articles were included from the reference list of an initial article identified (when relevant to the aforementioned inclusion criteria). Both full-text articles and abstracts (whereby the case description was apparent and data could be deciphered) were included . A further n = 49 citations were identified from primary reference lists and used to further the pathophysiological explanation of this professional project (Figure ).
Quality of Evidence
As the purpose of this scoping review is to synthesize the cases relating to the apparent improvement in hypoparathyroidism and to analyze areas of uncertainty in current knowledge, a quality appraisal was not performed for this review.
Risk of Bias Assessment
Scoping reviews are prone to bias, especially when considering a rare documented medical occurrence whereby only patients experiencing the phenomenon are likely to be depicted in the literature (an uneventful postpartum period is unlikely to be reported in the literature). There is no data to suggest the percentage of pregnant and lactating postpartum females with hypoparathyroidism who do not experience the enigma of temporary resolution or remission. As this is a scoping review, a risk of bias assessment was not performed.
Results
Wright et al. posed anTwo years later, the same patient delivered her second child; again, during her pregnancy, serum calcium was unremarkable, and she was adherent to 100,000 IU of calciferol and 76 mEq of calcium daily. The authors record that, for the second time, she did not breastfeed and note that, within 24 hours postpartum, calcium rose to 6.0 mEq/L, at which point all medications were discontinued. Serum calcium subsequently decreased to 5.1 mEq/L over the following six days postpartum. Calciferol and 38-57 mEq/L of calcium were restarted; however, symptomatic hypercalcemia reappeared (6.0 mEq/L), requiring discontinuation on postpartum day 20. Roughly one week following the discontinuation of her medications, the serum calcium decreased to 4.8 mEq/L, at which point half her usual dosage was reintroduced for the succeeding 12 days.
A second patient was described by Wright et al. , who wasIn contrast to Wright et al. , SadeghiMarkestad et al. presenteRude et al. presenteCundy et al. describeCaplan et al. 
presenteH\u00f6per et al. presenteCath\u00e9bras et al. further Shomali et al. presenteAnai et al. presenteMather et al. presenteYasumatsu et al. presenteSweeney et al. presenteKrysiak et al. presenteSegal et al. depictedAl-Nozha et al. reportedHatswell et al. analyzedShah et al. demonstrDixon et al. posed a DiscussionA pattern is noted in the above case series, whereby calcium requirements are rather labile during pregnancy, but tend to decrease postpartum. Marcucci et al. includedIn a case series performed by Hartogsohn et al. , 17 pregWang et al. presenteProposed PathophysiologyThe apparent improvement in hypoparathyroidism during late gestation and breastfeeding is an incompletely understood, yet potentially harmful occurrence to the untrained clinician. As noted by Winter et al. , this apDuring pregnancy, PTHrP is predominantly released by the placenta . However, there may be concurrent release from enlarged mammary glands during this time period, with a \u2018transition\u2019 to release by the mammary glands during the postpartum (lactating) period ; these aPTHrPPTHrP is a hormone responsible for humoral hypercalcemia of malignancy and paraneoplastic syndromes (notably squamous cell carcinoma of the lung). It is important to note that, during lactation, PTHrP is released into the breast milk at 1,000-100,000 times the serum concentration that would be present in humoral hypercalcemia of malignancy . During As noted by Kalkwarf et al. , a lactaGrill et al. demonstrSimilar findings emerged from the study conducted by Kalkwarf et al. , noting In a longitudinal study performed by Ardawi et al. , levels In a study of non-hypoparathyroid, lactating women by Dobnig et al. , at two-While Grill et al. noted elAt two to three days postpartum, Dobnig et al. demonstrAs demonstrated in the study by Segal et al. , there iSowers et al. demonstrIn the first six months of the study performed by Kalkwarf et al. , the autHypercalcemiaIn the cases reported under the \u2018results\u2019 section of the review, it is important to note that not all authors were able to measure serum PTHrP. In the report posed by Shomali et al. , howeverAnai et al. demonstrIn the first patient described by Yasumatsu et al. , it is nThe phenomenon of apparent PTH-independent hypercalcemia in pregnant and lactating females (as noted in the above series) has furthermore been replicated across numerous case reports consisting of patients without a diagnosis of hypoparathyroidism. Marx et al. presenteVan Heerden et al. parallelSiskind et al. presenteSato describeLepre et al. describeIn a similar manner to the case report by Segal et al. , KebapciBreslau et al. describeProlactinProlactin is a hormone involved in lactation and is released from the lactotrophs in the anterior pituitary in response to a myriad of stimuli (predominantly nipple suckling). As demonstrated by Sowers et al. , there iIt should be noted that elevated PTHrP has been documented in other cases of hyperprolactinemia, such as that with prolactinomas . In rat Kovacs et al. analyzedOf further peculiarity, serum prolactin appears to not only be intricately associated with serum PTHrP but also 1,25-hydroxyvitamin D3 (active vitamin D). Following the work by Cundy et al. , the appVitamin DCundy et al. attempteThe correlation between the prolongation of active vitamin D and prolactin is ill-defined, albeit three strong hypotheses are currently understood to be partly correct: delayed clearance, PTHrP-mediated, and prolactin-induced. 
The most straightforward explanation for a prolongation of the half-life of active vitamin D and prolactin would be that there is a reduction in the clearance of the former (as opposed to enhanced endogenous production). This hypothesis, while simple, is not accepted by many, as it is evident that serum 1,25-hydroxyvitamin D3 levels may continue to rise following discontinuation of calcitriol supplementation .PTHrP demonstrates similarity in structure to PTH (the first 34 amino acids of PTHrP are similar in structure to PTH), acting upon the same cell receptors . AnecdotProlactin has similarly been suggested to be able to stimulate the enzyme 1-\u03b1-hydroxylase (CYP27B1) within the maternal kidney. In both chick and rat studies, this theory has been demonstrated on multiple occasions -69. As rMarkestad et al. presenteHypocalcemiaOne should be aware that the converse has been described during both gestation and the postpartum periods, whereby there is an apparent \u2018worsening\u2019 of serum calcium . The case report posed by Sweeney et al. differs Al-Nozha et al. further Eller-Vainicher et al. presenteDurst et al. describeBernstein et al. depictedHarrad et al. presenteCallies et al. presenteJabbar et al. discusseKurzel et al. presenteRecommendationsBoth the hypoparathyroid mother and her infant should be carefully monitored to detect abnormal calcium levels ,13. The Strengths and limitationsA major strength of this article includes the enhanced awareness of a niche endocrine phenomenon, for which access will be available to all healthcare professionals who may encounter a female patient with hypoparathyroidism. Moreover, this article provides an up-to-date review of the available literature depicted by various authors globally. As with all articles, however, limitations must be highlighted; in this case, it is important to note that a great proportion of females with hypoparathyroidism may not require dosage adjustments during either gestation or lactation, with these apparently \u2018normal\u2019 cases unlikely to be reported upon . Further risks include selection bias and misinterpretation of data from articles published in German, Japanese, and French . Moreover, as there are multiple underlying causes of hypoparathyroidism . For this reason, an individualized approach with close monitoring is essential when treating hypoparathyroid females during gestation and the postpartum period."} {"text": "Molecularly imprinted polymers (MIPs), due to their unique recognition properties, have found various applications, mainly in extraction and separation techniques; however, their implementation in other research areas, such as sensor construction and drug delivery, has also been substantial. These advances could not be achieved without developing new polymers and monomers that can be successfully applied in MIP synthesis and improve the range of their applications. Although much progress has been made in MIP development, more investigation must be conducted to obtain materials that can fully deserve the name of artificial antibodies.The \u201cAdvance in Molecularly Imprinted Polymers\u201d Special Issue connects original research papers and reviews presenting recent advances in the design, synthesis, and broad applications of molecularly imprinted polymers. 
The Special Issue content covers various topics related to MIP chemistry, which include their molecular modeling, application of new monomers and synthesis techniques for MIP preparation, synthesis, and application of ion-imprinted materials, selective extraction of organic molecules, sensor preparation, and MIP applications in medicine. The presented collection of scientific papers shows that research in MIP chemistry is diversified, ultimately improving the desired properties and creating new potential applications for these materials.\u22121. The findings of this study indicate the potential of the obtained ion-imprinted polymers for the selective extraction of heavy metal ions from polluted waters.Bivi\u00e1n-Castro et al. obtainedIon-imprinted polymers were studied by Kondaurov et al. . They obZaharia et al. obtainedMegahed et al. obtainedN-(2-arylethyl)-2-methylprop-2-enamides with various substituents in the benzene ring were obtained by Sobiech et al. [Six aromatic h et al. using 2-Piletsky et al. presenteThe synthesis of 5-fluorouracil-imprinted microparticles and their application in prolonged drug delivery was reported by Ceg\u0142owski et al. . The autJumadilov et al. studied Chien et al. developeMIPs possessing dual functional monomers (methacrylic acid and 2-vinylpyridine) were synthesized by Thach et al. to yieldIn their review article, Lusina and Ceg\u0142owski exploredThe other review article, prepared by Liu et al. , describPolymers. In addition, I would like to express my gratitude to the Editorial Team who helped prepare the \u201cAdvance in Molecularly Imprinted Polymers\u201d Special Issue.This Special Issue has brought together experts that have studied and explored various aspects of MIPs. I want to thank all researchers who have contributed to the production of this Special Issue of"} {"text": "Green et al. apply ne2 injection (e.g. ref. 3 reservoir meant for perturbations on timescales of LIP volcanism surface ocean carbonate saturation (2 release at the onset of Deccan emplacement drove the Late Maastrichtian Warming Event, but carbonate dissolution buffered surface ocean Earth system scientists have long recognized that the rise of pelagic marine calcifiers fundamentally changed the way Earth responds to CO ocean \u03a9 and mari ocean \u03a9 and 6.2 release was superimposed relative to earlier Phanerozoic LIPs, Green et al. (2 (Having overlooked the fundamentally different biogeochemical boundary conditions upon which Deccan COn et al. cite putn et al. . Furthert al. (2 and globt al. (2 returnint al. (2 and is it al. (2 and palet al. (2 and 6 evIn sum, we implore the community to move beyond implicating causality from temporal coincidence, and instead, to focus on how, mechanistically, individual LIPs interacted with their specific boundary conditions to drive specific environmental and macroevolutionary responses. While meaningful research questions about the rate and scale of perturbation required to prompt a mass extinction remain, ultimately, a causal link between any geological phenomenon and mass extinction cannot be made by comparing unconnected events hundreds of millions of years apart. 
Instead, feasible, model-supported mechanisms for how a given event could precipitate ecological collapse are required\u2014mechanisms that for the Chicxulub impact exist, in the form of ocean acidification and impa"} {"text": "Urothelial carcinoma (UC) of the bladder is the tenth most diagnosed cancer worldwide and represents a significant cause of morbidity and mortality . UC is ain situ that are thus particularly challenging. For such cases urine cytology is still the most used non-invasive test detecting bladder cancer during follow-up. Despite its high specificity of approximately 86%, the limitation lies in low sensitivity of only 50% .There is a great need for further investigations of molecular markers for bladder cancer in the context of risk stratification and for developing combined targeted therapy options to prevent progression and cancer-related death. According to newer publications, FGFR3 mutation analysis study on bladder cancer patients should be performed for all stages and grades . There aIt is mandatory perceiving that marker systems are playing an important role in all fields of bladder cancer: as an alternative or additional tool for cystoscopy during follow-up, as predictor and prognostic instrument during decisions for systemic therapies or as screening tool for detecting bladder cancer in high-risk groups.This Research Topic has the main aim of offering possibilities to publish new research results in the basic and translational research field of bladder cancer. During the editorial process of this Research Topic we have appreciated that significant progress has been recently done in bladder cancer research and that efforts should be pursued by fostering extensive cooperation between the scientific and medical communities to translate evidence-based research into clinical practice. The identification and validation of bladder cancer markers for predicting recurrence and progression will contribute to establish better treatments for the individual patient based on its predicted response and their specific genetic and molecular characteristics, and molecular staging will allow selection of tumors that will require systemic treatment.The studies in this Research Topic cover the major areas of developing interest in bladder cancer research.Wang et al. studied ferroptosis regulators and identified GCLM as a tumor promotor and immunological biomarker in bladder cancer. Feng et al. evaluated the prognostic significance basement membrane-associated lncRNA in Bladder Cancer.There were several novel markers evaluated in this Research Topic. Lee et al. evaluating alpha-2-macroglobulin in urinary extracellular vesicles and a study by Bian et al. evaluating urinary exosomal long non-coding RNAs as noninvasive biomarkers for diagnosis of bladder cancer. Su et al. analyzed exosome-derived long non-coding RNAs as non-invasive biomarkers of bladder cancer.There were several articles evaluating new diagnostic markers for bladder cancer including a study by Hao et al. study near-infrared targeted imaging using ICG-anti-CD47.To improve imaging of disease, Wang et al. review the impact of cuproptosis-related genes on bladder cancer prognosis, tumor microenvironment invasion, and drug sensitivity. Castaneda et\u00a0al. identify Novel Biomarkers associated with Bladder Cancer Treatment Outcomes. Wang et al. use proteomics analyses to identify CLIC1 as a predictive biomarker for bladder cancer staging and prognosis. Li et\u00a0al. 
study endothelial-related molecular subtypes for bladder cancer patients, and Xiong et al. study inflammation-related lncRNAs in bladder cancer. You et\u00a0al. study novel pyroptosis-related gene signatures and Xiao et al. evaluate impact of fatty acid metabolism, inflammation and hypoxia on bladder cancer.There were several studies evaluating bladder cancer prognosis. Chen et\u00a0al. study the tumor microenvironment to develop a nomogram to predict lymph node metastases. Bieri et\u00a0al. use a modified Immunoscore to improve prediction of progression-free survival in patients with non-muscle-invasive bladder cancer. Luo et al. develop a novel prognostic model based on cellular senescence-related gene signatures for bladder cancerSeveral studies developed prediction models for bladder cancer. Song et\u00a0al. evaluated aliphatic acid metabolism in bladder cancer with the goal of guiding therapeutic treatment. A study by Wang et\u00a0al. studied the mechanism by which RAC3 inhibition induces autophagy to impair metastasis in bladder cancer cells via the PI3K/AKT/mTOR pathway.A study by All editors of this Research Topic thank all submitting authors for their work. The lead editors would like to thank all editors and reviewers for the time spent in assigning reviews, commenting on submitted manuscripts as well as reviewing. As editorial team, we hope that this special issue will prove useful in planning bladder cancer research in next future.All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication."} {"text": "Circadian rhythms play a fundamental role in our daily lives, orchestrating a wide range of physiological processes, including energy metabolism, immune function, cellular rejuvenation, and the intricate interplay with the gut microbiota. This Research Topic delves into the profound influence of circadian rhythms on various aspects of metabolism and endocrinology and explores the intricate connections between circadian physiology and metabolic disorders. By elucidating these relationships, this Research Topic aims to provide insights into the development of novel therapeutic strategies for the management of metabolic and endocrine diseases.Li et\u00a0al. conducted a study revealing an independent association between rest-activity rhythm and obesity phenotypes. Their findings highlight the impact of aberrant rest-activity rhythm on anthropometric and imaging measures of general and abdominal obesity.At the population level, Huang et\u00a0al. explore the effects of mistimed feeding on circadian reprogramming and the unfolded protein response. Their study demonstrates the cumulative impact of mistimed food intake over multiple generations, resulting in altered diurnal rhythmic transcriptomes and impaired endoplasmic reticulum stress response.Examining the intergenerational impact of circadian disruptions, Sun et\u00a0al. conducted a bidirectional Mendelian randomization study providing genetic evidence that supports a bidirectional causal relationship between sleep traits and nonalcoholic fatty liver disease (NAFLD). Their findings emphasize the importance of considering sleep disturbances in managing NAFLD.Cort\u00e9s-R\u00edos et\u00a0al. used mathematical modeling to investigate the dosing-time-dependent antihypertensive effect of valsartan and aspirin. 
Their study sheds light on the optimal dosing time for these medications, underscoring the importance of chronopharmacology in hypertension management.
Expanding the scope to sleep patterns, Wild et al. conducted a case-control study evaluating sleep patterns in patients treated for non-secreting intra- and parasellar tumors. Their findings revealed altered sleep patterns in these patients compared to healthy controls, highlighting the need for clinicians to address sleep disturbances in this population.
Moving to a more specific endocrine system, Bao et al. performed a prospective clinical trial investigating the effect of melatonin on the quality of repeated-poor and frozen-thawed embryos in humans. Their study demonstrates that melatonin supplementation in the culture medium improves the rate of high-quality embryos, offering a potential rescue strategy for in vitro fertilization (IVF) failures.
Exploring the potential therapeutic application of melatonin, Yin et al. reviewed the use of melatonin for premenstrual syndrome (PMS). They discuss the role of melatonin in modulating sleep disturbance, mood changes, and cognitive impairment associated with PMS, suggesting its potential as a safe and effective treatment option.
Lastly, Watanabe et al. explored the insulin-independent action of nocturnal melatonin in increasing glucose uptake in the goldfish brain. Their study highlights the role of melatonin in regulating glucose homeostasis and its potential as a circadian regulator of glucose dynamics.
The articles included in this Research Topic provide valuable insights into the intricate relationship between circadian rhythm, metabolism, and endocrinology. From the impact of mistimed feeding on circadian reprogramming to the bidirectional relationship between sleep traits and NAFLD, the findings from Li et al., Huang et al., Sun et al., Cortés-Ríos et al., Wild et al., Bao et al., Yin et al., and Watanabe et al. emphasize the importance of circadian regulation and sleep patterns in maintaining metabolic homeostasis and offer potential avenues for therapeutic interventions in metabolic and endocrine disorders. The comprehensive understanding of the ties between circadian rhythms and metabolism presented in this Research Topic will undoubtedly pave the way for innovative approaches in the management of these conditions.
MR-F: Writing – original draft, Writing – review & editing"} {"text": "Age‐associated reduction of nuclear shape dynamics in excitatory neurons of the visual cortex by Tanita Frey et al., https://doi.org/10.1111/acel.13925
Cover legend: The cover image is based on the Research Article"} {"text": "Cryptovaranoides microlanius from the Late Triassic of the United Kingdom was described as the oldest crown squamate. If true, this result would push back the origin of all major lizard clades by 30–65 Myr and suggest that divergence times for reptile clades estimated using genomic and morphological data are grossly inaccurate. Here, we use computed tomography scans and expanded phylogenetic datasets to re-evaluate the phylogenetic affinities of †Cryptovaranoides and other putative early squamates. We robustly reject the crown squamate affinities of †Cryptovaranoides, and instead resolve †Cryptovaranoides as a potential member of the bird and crocodylian total clade, Archosauromorpha. 
Bayesian total evidence dating supports a Jurassic origin of crown squamates, not Triassic as recently suggested. We highlight how features traditionally linked to lepidosaurs are in fact widespread across Triassic reptiles. Our study reaffirms the importance of critically choosing and constructing morphological datasets and appropriate taxon sampling to test the phylogenetic affinities of problematic fossils and calibrate the Tree of Life.Most living reptile diversity is concentrated in Squamata , which have poorly known origins in space and time. Recently, \u2020 Of these, the more than 11 000 living species of squamates represent the most speciose modern tetrapod clade . Living Recent additions and revisions to the early squamate and rhynchocephalian fossil records made possible by the use of new imaging and analytical tools have substantially improved our understanding of squamate origins and the rise of squamates to ecological dominance as rhynchocephalian diversity declined \u201310. Recoet al. [Cryptovaranoides microlanius scan data available online [C. microlanius, as well as a wealth of comparative CT data and personal observations. We re-examined the original diagnosis relative to the referred specimens and holotype and re-assessed the primary homology concepts applied to create character scorings from the original study. Our detailed re-assessment of \u2020C. microlanius reveals that the concept of the taxon was based on multiple fossil specimens that were not discovered in association with each other, save for the elements in the holotype block, and that there is little to no anatomical justification for the referral of these morphologically disparate skeletal remains to the same species . FurtheC. microlanius and divergence times for the major groups of squamates using three radically different phylogenetic matrices under various optimality criteria. We find no support for the placement of \u2020C. microlanius within Anguimorpha or even crown Squamata, with a broader reptile dataset suggesting that it is in fact an archosauromorph reptile. Divergence time estimates support previous estimates for a Mid-Late Jurassic origin of anguimorphs, highlighting how misinterpretations of the fossil record can highly impact our understanding of the origin of major branches of the Tree of Life.We tested the phylogenetic affinities of \u2020. 2. 2.1et al. [C. microlanius are not ideal, as nearly all the non-lepidosaur species included in the original versions of those matrices were excluded. This approach potentially compromises the ability of the matrix to provide a strong test of whether \u2020C. microlanius falls outside crown Squamata. Therefore, we included \u2020C. microlanius in the largest available dataset to infer relationships among the major groups of reptiles [C. microlanius, we further expanded the taxonomic sampling of dataset 1 by adding three taxa with relatively unstable relationships, but which have historically been linked to Lepidosauria: \u2020Fraxinisaura rozynekae from the Middle Triassic of Germany [Palaeagama vielhaueri [Paliguana whitei [Pali. whitei [The data matrices used by Whiteside et al. to assesreptiles , as it ereptiles for a de Germany , and \u2020Paelhaueri and \u2020Pala whitei from the. whitei . [et al. ,33)) is [et al. for theiat study . To thisat study \u201338 into e et al. in datas [et al. suggeste [et al. proposedreserved .Table 1et al. [et al. [C. 
microlanius being deeply nested within squamates, regardless of the results presented by dataset 1.Finally, dataset 3 is a recent update of Gauthier et al. by Brown [et al. , that sa [et al. ), taxono. 2.2. 2.2.1All maximum-parsimony (MP) analyses were conducted in T.N.T. v. 1.5 which al. 2.2.2rBayes v. 3.2.7a [Bayesian analysis of the morphological dataset was performed in M. 3.2.7a using thWe used the Mkv + gamma substitution model .Stationarity was assessed using standard measures, such as average standard deviation of split frequencies (ASDSF < 0.05) and potential scale reduction factors . Effective sample size values were assessed using Ter v.1.6 , reachin. 2.3rBayes v. 3.2.7a [Divergence times were calculated using relaxed clock Bayesian inference analyses of the morphological dataset for dataset 1 and the combined molecular and morphological data for dataset 2. We implemented total-evidence-dating (TED) using the fossilized birth-death tree model, under relaxed clock models in M. 3.2.7a .. 2.3.1\u03c1 = 1). The age of the root also follows [The tree model and its calibration priors follow the previous analyses of this dataset in . Namely, follows , sampled follows ) and a s follows ). The ra follows . Specifi follows , which i follows .. 2.3.2\u03c1 = 1) where all fossils are considered to be tips only.Sampling strategy among extant taxa was set to \u2018diversity\u2019, which is more appropriate when sampling maximizes extant diversity (as performed herein) and fossils are assumed to be sampled randomly ,46. AccoYoungina and crown reptiles) was sampled from an offset exponential distribution with a hard bound for the minimum age and a soft maximum age, with the mean of the exponential distribution based on a recent TED analysis [The age of the root [Diphydontosaurus [The vast majority of our calibrations were based on tip-dating, which accounts for the uncertainty in the placement of extinct taxa and avoids the issue of constraining priors on taxon relationships when implementing bound estimates for node-based age calibrations ,49. The ssic\u2014UK) \u2192 168.3\u2013ssic\u2014UK) ; SphenodGermany) \u2192 241.5\u20132Germany) .We provided an informative prior to the base of the clock rate based on the previous non-clock analysis: the median value for tree height in substitutions per character from posterior trees divided by the age of the tree based on the median of the distribution for the root prior: 13.1/280 = 0.0464, in natural log scale = \u22123.07, and a wide standard deviation (1.0). We employed the uncorrelated independent gamma rate clock model as in pr. 3. 3.1Reptilia Laurenti, 1768Cryptovaranoides Whiteside et al. [\u2020e et al. .Cryptovaranoides microlanius Whiteside et al. [\u2020e et al. .. 3.1.1Clevosaurus sp.; Whiteside et al. [Cryptovaranoides type block. This observation motivated us to critically re-evaluate the referral of additional material described by Whiteside et al. [Cryptovaranoides .NHMUK PV R36822, a partially articulated skeleton of a single, small reptile preserved in matrix. The presence of a large, isolated interclavicle in the block including the holotype demonstrates that additional reptiles are repe et al. to \u2020Cryp. 
3.1.2a); coronoid bone that is 40% of the anteroposterior length of the dentary and forms a low, gently rising coronoid process (b); surangular as long as dentary ; absence of incipient or developed rugosities and osteoderms on cranial bones (d); seven cervical vertebrae (c).Pterygoid anterior process considerably longer than posterior process a; corono process b; surang dentary b,c; abseal bones d; seven ertebrae c.. 3.1.3C. microlanius provided by Whiteside et al. [et al. [The diagnosis of \u2020e et al. cited see et al. ,6,7. Fur [et al. .. 3.2C. microlanius and note issues with the description presented in Whiteside et al. [C. microlanius were found in isolation and lack diagnostic features that would make it referable to the same species of the holotype. Besides the anatomical reinterpretations below, we also provide a list of all specimens referred to \u2020C. microlanius by Whiteside et al. [C. microlanius in the data matrices used to infer its phylogenetic position.In this section, we review the anatomy of \u2020e et al. based one et al. , noting . 3.2.1Fusion of the premaxillae. Whiteside et al. [C. microlanius as fused, with a median tooth placed centrally. The presence of fused premaxillae is historically considered to be a squamate synapomorphy [C. microlanius would strongly support a squamate identity for this species. However, our re-evaluation of the CT scan data available for the holotype shows that the premaxillae are clearly unfused, and no median tooth is identifiable (c). The isolated premaxilla (NHMUK PV R37378) referred to \u2020C. microlanius by Whiteside et al. [b,c) cannot be directly compared to the holotype specimen because all the teeth of the referred premaxilla are broken. Furthermore, although this referred specimen does appear to show some degree of fusion of the premaxillae near the tooth row margin, the premaxillae are separated throughout most of their extension, and the apparent fusion could be an artefact of preservation (e.g. suture infilling by surrounding sedimentary matrix).e et al. describepomorphy ,6 and mopomorphy ,9. As sutifiable c. The ise et al. .anius by , who argagmented a,b.Absence of a jugal posterior process. This widely cited character is related to the partial or complete loss of the lower temporal bar ancestrally in squamates, although it also appears in some stem lepidosaurs and rhynchocephalians [et al. [C. microlanius. However, as noted, this condition is also found elsewhere in lepidosaurs and in many other neodiapsids, such as kuehneosaurids, sauropterygians and ichthyosaurs [C. microlanius with Squamata over other clades of neodiapsids. Furthermore, the posterior region of the jugal is broken on the holotype of \u2020C. microlanius , and so it is possible that a posteroventral process was present but not preserved.phalians ,7,29,33. [et al. suggestehyosaurs ,53\u201355. Arolanius a,b, and Peg-in-notch articulation with rod-shaped squamosal. This feature was cited as a potential squamate synapomorphy of \u2020C. microlanius [a\u2013c). The long, thin bone identified as the squamosal by Whiteside et al. [et al. [C. microlanius.rolanius . Howevere et al. is orien [et al. , but it Quadratojugal not present as separate element. This feature was also listed as a squamate synapomorphy of \u2020C. microlanius by Whiteside et al. [e et al. . We disaFrontal underlaps parietal laterally on frontoparietal suture. This articulation was described as an anguimorph synapomorphy that is present in \u2020C. 
microlanius based on the inferred articulation of the frontals with the parietals [a,b), and the referred frontals were found in isolation and cannot be anatomically connected to any of the other preserved elements in the skull without ambiguity foramen and recessus scala tympani. Whiteside et al. [C. microlanius. Whiteside et al. [C. microlanius\u2014see also figure 1d. Despite this, Whiteside et al. [e et al. describee et al. acknowlee et al. still ine et al. ,7,9) or Enclosed vidian canal exiting anteriorly at base of each basipterygoid process. This braincase feature was described as a squamate synapomorphy of \u2020C. microlanius [et al. [rolanius . However [et al. noted, cFusion of exoccipitals and opisthotics forming an otoccipital. This feature was referred to as a squamate synapomorphy of \u2020C. microlanius [d), we note that braincase fusion is quite variable within Squamata [rolanius . AlthougSquamata and othe. 3.2.3Septomaxilla probably contacts dorsal surface of palatal shelf of maxilla (septomaxillary facet on maxilla). This feature was described by Whiteside et al. [C. microlanius. However, these are completely disarticulated and the septomaxilla is not preserved (a\u2013c).e et al. as a squreserved a\u2013c.Long ventral longitudinal ridges converging toward midline of vomer. This feature was described by Whiteside et al. [C. microlanius. The vomers of \u2020C. microlanius are large, flattened, and subrectangular, and are more similar to the vomers of non-squamate lepidosaurs than toridensis ,9. The auimorphs , such ass apodus and Elgaria spp. ,59, wherProminent choanal fossa on palatine. This feature was described as an unambiguous synapomorphy of Squamata present in \u2020C. microlanius [C. microlanius is deep and anteroposteriorly restricted, matching the condition in gekkotans [Megachirella watchleri [Bellairsia gracilis [Oculudentavis spp. [rolanius . Howeverrolanius ,6,7,9. Tekkotans ,61 and tgracilis and \u2020Ocuvis spp. .Gephyrosaurus bridensis [Marmoretta oxoniensis [Taytalura alcoberi [Sphenodon punctatus [Navajosphenodon sani [C. microlanius is mediolaterally restricted so that it barely fills half of the mediolateral length of the anterior margin of the palatine [Tanystropheus hydroides [Macrocnemus bassanii [Stem and crown group members of the other major squamate clades display deeper and more posteriorly extensive choanal fossae ,9,63, whridensis , \u2020Marmoroniensis , \u2020Taytalalcoberi , and sphdon sani . Intrigupalatine . This copalatine is preseydroides and \u2020Macbassanii .Short overlap in quadrate-pterygoid contact and the absence of the pterygoid process on the quadrate. This feature was described as a synapomorphy of Squamata found in \u2020C. microlanius by Whiteside et al. [e et al. . However. 3.2.4Angular does not extend posteriorly to reach articular condyle. Although the angular was suggested to terminate before the mandible articular condyle in \u2020C. microlanius as in the squamate total clade [a,b).al clade , the posArticulars and prearticulars medial process present. This feature was scored as present in \u2020C. microlanius by Whiteside et al. [a,c). The absence of a medial process of the articular and prearticular can also be noticed in [figure 6g,h.e et al. . Howeverticed in : figure . 3.2.5Atlas pleurocentrum fused to axis pleurocentrum. The fusion of these elements cannot be assessed because their pleurocentra are not preserved in the holotype. 
Only the neural arches and neural spine of the atlas and axis are preserved, and their intercentra are missing from the holotype .holotype a,b.Cervical ribs double-headed. Whiteside et al. [C. microlanius, acknowledging this state was unusual for a squamate. However, inspection of the CT data indicates that the cervical ribs of \u2020C. microlanius are in fact single-headed and possess an expanded endpoint for articulation with the vertebral centra . This differs from the condition observed in all known squamates and resembles the rib morphology observed in other reptile clades, including protorosaurs such as \u2020Protorosaurus speneri . What was originally interpreted as the second rib head in \u2020C. microlanius by Whiteside et al. [figure 2b,c\u2014Ant.Pr.) commonly observed on the cervical ribs of several archosauromorphs, including \u2020Protorosaurus speneri (b), \u2020Prolacerta broomi (c), \u2020Mesosuchus browni [Azendhosaurus madagascariensis [Euparkeria capensis [e et al. is reint speneri b, \u2020Prolaa broomi c, \u2020Mesoss browni , \u2020Azendhariensis and seveariensis and \u2020Eupcapensis Cervical and dorsal vertebral intercentra present. Based on the number of preserved, articulated vertebrae, the presence of intercentra on the trunk vertebrae was described by Whiteside et al. [C. microlanius. However, the presence of intercentra was based on the presence of a single isolated bone fragment, suggested as a displaced intercentrum. Upon inspection of the cervical region using CT scan data (c), we observed that cervical centra are in close articulation without any evidence for intercentra or articulatory facets for them, which should be clearly visible in this particularly well-preserved specimen, as they are in extant squamates . TherefCervical vertebrae midventral crest absent. A midventral crest or keel on each caudal centrum was scored as absent in \u2020C. microlanius by Whiteside et al. [b).e et al. . HoweverAnterior dorsal vertebrae, diapophysis fuses to parapophysis. Whiteside et al. [C. microlanius. However, the few preserved dorsals have unfused neural arches and pleurocentra (c), thus logically having their diapophyses and parapophyses (located on the pleurocentrum) also unfused. Given the juvenile condition of the holotype specimen , it is possible that later during ontogeny those elements could have been fused together, forming a synapophysis. However, there is no evidence to support this given the material available. Secondly, even if synapophyses occur later in the ontogeny of \u2020C. microlanius, these are observed across several groups of reptiles, including all other non-squamate lepidosaurs [e et al. suggesterocentra c, thus lidosaurs ,68,71,72Zygosphene-zygantra in dorsal vertebrae. Incipient zygosphene-zygantra articulations were mentioned to be present in a set of vertebrae present in the block containing the holotype but separate from the holotype specimen of \u2020C. microlanius [C. microlanius is quite different from the coracoid emarginations (or \u2018foramina\u2019 as labelled by [a). We reinterpret them as an instance of incomplete mineralization of the central region of the coracoid, which is common among juvenile reptiles \u2014e.g. a similar mode of preservation occurs in the coracoid of the Early Cretaceous South American lizard \u2020Tijubina pontei (b). We note the holotype specimen appears to be a juvenile based on unfused neural arches and centra .anius by . We note figures a and 5c elled by ) observea pontei b. 
We notEntepicondylar and ectepicondylar foramen of humerus present. The ectepicondylar foramen is nearly universally present in Lepidosauria, whereas the entepicondylar foramen is lost in squamates but retained in sphenodontians [et al. [C. microlanius; however, we were unable to observe any foramina in their only illustration . This feature was not discussed by Whiteside et al. [Cryptovaranoides microlanius was scored as having the expanded radial condyle present in the phylogenetic dataset used by the authors, but our inspection of the CT scan data on both humeri of the holotype from different angles clearly shows this is absent in this taxon . This feature was not discussed by Whiteside et al. [C. microlanius suggests the ulnar patella is absent in this taxon the goois taxon .. 3.3Cryptovaranoides within reptiles, as discussed above. In all of our results using dataset 1, \u2020Paliguana whitei acted as a rogue taxon that contributed to poor resolution across the generated consensus topologies . Removing that species substantially improved the resolution . Under the most robust phylogenetic hypothesis , \u2020C. microlanius is inferred to be the sister to Allokotosauria within Archosauromorpha, albeit with weak support .Dataset 1 was the matrix most suited for testing the broader affinities of \u2020ccuracy, ; figure C. microlanius to lepidosauromorphs are challenged by important reinterpretations of the postcranial skeleton of the holotype, including: absence of ectepicondylar and entepicondylar foramina , thus m (contra ). The op [et al. is reintrolanius .C. microlanius is supported by the following characters: strong anterior emargination of the maxillary nasal process, which is rarely observed in squamates but is a hallmark feature of archosauromorphs, where it contributes to the formation of the antorbital fenestra . Using relaxed clock Bayesian inference, \u2020C. microlanius is inferred to occupy a similar position but with lower support, less than 50% PP . These results suggest that, when \u2020C. microlanius is included in a lepidosaur-specific dataset, the several character states it shares with some early lepidosauromorphs and archosauromorphs place it close to the root of the tree. We note that only four archosauromorphs are used as an outgroup in this dataset, and so it does not provide an adequate test of the non-lepidosaurian affinities of \u2020C. microlanius . The analyses of datasets 2 and 3 provide strong evidence that the anatomy of \u2020squamate , well ouet al. dataset in [B. gracilis, \u2020Oculudentavis khaungraae, and \u2020Oculudentavis naja. Undated parsimony and tip-dated Bayesian analyses failed to recover \u2020C. microlanius within the squamate crown, again placing this species as a stem squamate, here one node stemward of the clade formed by \u2020B. gracilis, \u2020Huehuecuetzpalli mixteca, \u2020O. khaungraae, and \u2020O. naja .Dataset 3 was analysed with and without the three species of stem-squamates added to the Gauthier taset in : \u2020B. gra\u2020O. naja a,b.Figu. 3.4C. microlanius and its inferred placement within archosauromorphs (C. microlanius (as an archosauromorph) is largely compatible with the specimen age are estimated to occur during the Middle and Late Jurassic (C. microlanius is again more consistent with its age.Divergence times among the major groups of reptiles are largely unaffected by the inclusion of \u2020pothesis ). Furthe. 4C. microlanius was originally interpreted as nested within Anguimorpha [C. 
microlanius is accurate, it would radically alter all previous hypotheses on the timing of squamate diversification, and potentially suggest widespread bias towards younger age estimates for squamates and other reptiles in timetrees produced using a wide variety of methods and both morphological and molecular data [C. microlanius posited by Whiteside et al. [The Late Triassic reptile \u2020uimorpha , which iuimorpha ,25,76\u201378lar data ,24,25,78C. microlanius. First, we find no evidence for referring most of the other specimens noted by Whiteside et al. [C. microlanius to squamates and anguimorphs are in fact not observable (e.g. not preserved), poorly preserved and of ambiguous interpretation, or incorrectly described . We provide a thorough redescription of the holotype and provide, for the first time to our knowledge, detailed images of key anatomical traits that highlight several traits seen in \u2020C. microlanius that are incompatible with a lepidosaur hypothesis, and instead support its affinity to archosauromorphs. Finally, phylogenetic analyses of three separate datasets with radically different taxonomic composition and criteria of character construction, under multiple optimality criteria, consistently reject the hypothesis that \u2020C. microlanius is a crown squamate. Instead, our analyses find that \u2020C. microlanius is a neodiapsid of unclear placement with potential affinities to early archosauromorphs.Reinterpretation of the original data and analyses strongly reject a crown squamate identity for \u2020e et al. to this Crown reptiles underwent extensive radiation and diversification during the Early to Middle Triassic but have roots dating back into the Permian . While nC. microlanius shares with crown squamates an elongated (rod-like) squamosal and a large coronoid bone with a prominent dorsal process. Protorosaurian archosauromorphs [C. microlanius). Even when considering highly derived members of the archosauromorph tree, we can find features of interest that have been historically linked to lepidosaurs in systematic studies. For instance, hatchlings of Alligator mississippiensis have their first tooth generation not attached in a socket made of alveolar bone, but rather to the labial face of the medial wall of the tooth bearing element; that is, hatchling A. mississippiensis are pleurodont [C. microlanius.For example, \u2020romorphs ,28,48,82romorphs \u201388. Squaromorphs ,86,87 . As such, \u2020C. microlanius and \u2020F.corami are not likely to be conspecific; however, presence of two broadly similar taxa highlights the need for caution when referring isolated elements.Recently, Triassic formations in England have produced several diapsids that may be early diverging lepidosaurs, as well as plentiful examples of early rhynchocephalians ,24,91,92C. microlanius has larger implications for the interpretation of lepidosaur-like skeletons from the Triassic reptile assemblages of the UK and other Triassic faunas\u2014see further discussion on the difficult interpretation on other Triassic fissure fill deposits previously linked to lepidosaurians in that haC. microlanius remain relatively unclear, this species shares several features with archosauromorphs, the total clade of birds and crocodilians [Cryptovaranoides microlanius is placed as the sister of Allokotosauria, a diverse Triassic archosauromorph clade outside the crown group, with moderate support in analyses of dataset 1 that exclude \u2020Paliguana. However, several anatomical features of \u2020C. 
microlanius, especially the morphology of its dentition, markedly differ from the conditions observed in allokotosaurs. It appears that \u2020C. microlanius is part of a poorly known radiation of early small-bodied archosauromorphs, but the phylogenetic affinities of this species will only be resolved by future fossil discoveries. \u2020Cryptovaranoides highlights a potential new branch in the exceptional Triassic radiation of crown reptiles and demonstrates the probability that key small-bodied clades might still await discovery .This work did not require ethical approval from a human subject or animal welfare committee."} {"text": "These advances include novel strategies for reducing CO2 per metric ton of clinker. This approach has the potential to significantly reduce the carbon footprint of cement production, a major source of greenhouse gas emissions. Sim et al. [2 sink to lower CO2 emissions from the cement industry. Their research demonstrated that complete carbonation can be achieved within 10 min at specific CSW ratios (5\u201325%), with the ability to reduce CO2 emissions from the cement industry.Wojtacha-Rychter et al. exploredAmin et al. ,4 investAlternative materials and innovative mix designs are being explored to improve concrete performance. Kumar et al. assessedNiu et al. examinedThe findings presented in this SI are useful in the ongoing efforts to enhance the sustainability, durability, and performance of cement and concrete products. Moreover, such advances may help address the challenges of evaluating the sustainability of concrete materials and climate change factors that could influence the design of such materials."} {"text": "Within the nonequilibrium environment of living cells, the transport behaviours are far from the traditional motion in liquid but are more complex and active. With the advantage of high spatial and temporal resolution, the single-particle tracking (SPT) method is widely utilized and has achieved great advances in revealing intracellular transport dynamics. This review describes intracellular transport from a physical perspective and classifies it into two modes: diffusive motion and directed motion. The biological functions and physical mechanisms for these two transport modes are introduced. Next, we review the principle of SPT and its advances in two aspects of intracellular transport. Finally, we discuss the prospect of near infrared SPT in exploring the Microtubules, the structural backbone of the cytoskeleton, are long hollow tubes with a diameter of 22\u201325 nm composed of 13 parallel protofilaments. Microtubules are highly dynamic and polarized, alternating between phases of growth and shrinkage . In FRAP experiments, fluorescent molecules in a small given area are first photobleached by a focused laser beam with high intensity. Then, surrounding unbleached fluorescent molecules diffuse into the photobleached area resulting in fluorescence recovery , the laser penetrates the cell vertically. Since all probes across the cell are excited, the overall fluorescence within the cell imposes a strong background on the single probes of interest, making the signal-to-noise ratio relatively low. et al. et al. et al. et al. et al. et al. et al.To improve the signal-to-noise ratio, total internal reflection fluorescence (TIRF) microscopy was invented to selectively excite fluorescent molecules close to the cover glass (<200 nm) Axelrod . TIRF is et al..Due to the observation depth, TIRF is limited to studying the cell membrane. 
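The sub-200 nm observation depth of TIRF mentioned above follows from the exponential decay of the evanescent field at the glass-sample interface. The sketch below evaluates the standard penetration-depth expression; the wavelength, refractive indices and incidence angle are illustrative assumptions rather than values from any cited study.

```python
# Minimal sketch of the standard evanescent-wave penetration-depth formula
# behind the <200 nm observation depth of TIRF discussed above.
# All numeric values (wavelength, indices, angle) are illustrative assumptions.
import math

def tirf_penetration_depth(wavelength_nm, n_glass, n_sample, angle_deg):
    """Return the 1/e penetration depth (nm) of the evanescent field."""
    sin_theta = math.sin(math.radians(angle_deg))
    if n_glass * sin_theta <= n_sample:
        raise ValueError("angle below the critical angle: no total internal reflection")
    return wavelength_nm / (4 * math.pi * math.sqrt((n_glass * sin_theta) ** 2 - n_sample ** 2))

d = tirf_penetration_depth(wavelength_nm=488, n_glass=1.518, n_sample=1.37, angle_deg=70)
print(f"penetration depth ~ {d:.0f} nm")   # on the order of 100 nm, well below 200 nm
```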
To observe fluorescent molecules in cells, researchers further invented highly inclined and laminated optical sheet microscopy (HILO) due to the diffraction limit. We obtain an accurate position of the probe through Gaussian fitting. By linking the same particle\u2019s different positions in consecutive images, particle trajectories are constructed . Both tiTo analyse the motion, the MSD of the trajectories is generally calculated:r(t) is the displacement, \u0394t is the time interval, and < \u00b7\u00b7\u00b7 > is the average. Time-averaged MSD is:where M is the total time length of the trajectory. The ensemble-averaged MSD is:where i is the ID of each particle, and N is the total number of all particles. The two MSDs help us understand the motion of target particles, but their results are not always consistent. With increasing \u0394t, the MSD tends to show an upward trend, which can be described by a remarkable power-law curve MSD ~\u0394t\u03b1. The value of \u03b1 depends on the motion type of the particle: \u03b1 = 1 corresponds to Brownian motion, \u03b1 < 1 refers to subdiffusion, and \u03b1 > 1 refers to superdiffusion on cell membranes on cell membranes and artificial lipid bilayers using SPT , 3B technology is an effective method for observing deep tissue (Li et al. et al. et al. et al. et al. et al. et al. et al. et al. et al. et al. et al. et al.in vivo confocal 3D imaging of tumour vasculatures in mice at a depth of 1.2 mm (Zhang et al.in vivo, with promising applications in biophysical studies and biomedical diagnosis.SPT has promoted the study of intracellular transport dynamics, but there are still some challenges. To date, most SPT studies on intracellular dynamics have been carried out at the cellular level Ming-Li Zhang, Hui-Ying Ti, Peng-Ye Wang and Hui Li declare that they have no conflict of interest."} {"text": "At one text position, the level of significance stated as \u201cA correction has been made to Paragraph 5. This sentence previously stated:vaccinated) and 14 (placebo) dead during eleven weeks, then evidently giving numbers for complete trial groups . If our German-based estimate is assumed to be the expected value of a binomial probability distribution then the corresponding standard deviation is quite exactly 5. A corresponding binomial test reveals that the 14 dead in the placebo group are significantly different from prognosticated 25 deaths, at a p-value of 0.0012. Hence, counting just 14 dead in Thomas et al. (7) is already utterly unlikely to be explainable by chance; and the 4 dead reported in Polack et al. (1) are an entirely impossible count, which can only reflect some preliminary data analysis. In stark disaccord, the data reported in Polack et al. (1) should without doubt stand on their own and not rely on additional publications, as this was a public dissemination of both safety and efficacy probed by a pivotal vaccine trial. Publishing another, later (6-month) safety data set as in Thomas et al. (7), or even secondary reports like, e.g., by the USA's \u201cFood and Drug Administration,\u201d should not be required when disseminating a primary endpoint assessment of safety .\u201d\u201cIn continuation of Polack et al. (1), interestingly, the authors of (7) finally counted 15 (The corrected sentence appears below:vaccinated) and 14 (placebo) dead during 11 weeks, then evidently giving numbers for complete trial groups . 
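Returning to the single-particle-tracking analysis described above, a minimal sketch of the time-averaged MSD calculation and of the power-law fit MSD ~ Δt^α is given below. The trajectory is simulated two-dimensional Brownian motion, so the fitted exponent should come out close to 1; all numerical values are illustrative assumptions and not data from the reviewed studies.

```python
# Minimal sketch of the MSD analysis described in the single-particle-tracking
# passage above: time-averaged MSD of one trajectory and a log-log fit of the
# power law MSD ~ dt**alpha. The trajectory is simulated Brownian motion.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.05                                   # frame interval (s), illustrative
steps = rng.normal(0.0, 0.1, size=(2000, 2))
traj = np.cumsum(steps, axis=0)             # simulated 2-D Brownian trajectory

def time_averaged_msd(positions, max_lag):
    """MSD(k*dt) averaged over all start times of a single trajectory."""
    lags = np.arange(1, max_lag + 1)
    msd = np.empty(len(lags))
    for i, k in enumerate(lags):
        disp = positions[k:] - positions[:-k]
        msd[i] = np.mean(np.sum(disp ** 2, axis=1))
    return lags * dt, msd

lag_times, msd = time_averaged_msd(traj, max_lag=100)
alpha, _ = np.polyfit(np.log(lag_times), np.log(msd), 1)
print(f"estimated alpha ~ {alpha:.2f}")     # ~1 Brownian; <1 subdiffusion; >1 superdiffusion
```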
If our German-based estimate is assumed to be the expected value of a binomial probability distribution then the corresponding standard deviation is quite exactly 5. A corresponding binomial test reveals that the 14 dead in the placebo group are significantly different from the prognosticated 25 deaths, at a p-value of 0.012. Hence, counting just 14 dead in Thomas et al. (7) is already utterly unlikely to be explainable by chance; and the four dead reported in Polack et al. (1) are an entirely impossible count, which can only reflect some preliminary data analysis. In stark disaccord, the data reported in Polack et al. (1) should without doubt stand on their own and not rely on additional publications, as this was a public dissemination of both safety and efficacy probed by a pivotal vaccine trial. Publishing another, later (6-month) safety data set as in Thomas et al. (7), or even secondary reports like, e.g., by the USA's “Food and Drug Administration,” should not be required when disseminating a primary endpoint assessment of safety.” The authors apologize for this error and state that this does not change the scientific conclusions of the article in any way. The original article has been updated."} {"text": "Environmental Health. In this Comment, I point out that, also warranting being called out, are the arguments/assertions of Cléro et al. who, in their recent response to an article by Tsuda et al., reiterated the conclusions and recommendations derived from their European project, which were published in Environment International in 2021. The need to call out and expose authors for their persistence in improperly using epidemiology has been previously noted. Tsuda et al. have done well to expose Schüz et al.'s arguments/assertions in their recent publication. Tsuda et al. had critiqued the Cléro et al. 2021 publication in their 2022 review article. However, in their response to it, Cléro et al. deflected by not addressing any of the key points that Tsuda et al. had made in their review regarding the aftermath of the Chernobyl and Fukushima nuclear accidents. In this Comment, I critique Cléro et al.'s inadequate response. Publication of this Comment will help in rooting out the improper use of epidemiology in the formulation of public health policy and thereby reduce the influence of misinformation on both science and public policy. My critique of Cléro et al. is not dissimilar from Tsuda et al.'s critique of Schüz et al.: inasmuch as Schüz et al. should withdraw their work, so should Cléro et al.'s article be retracted. The response by Cléro et al. consists of four paragraphs. First was their assertion that the purpose of the SHAMISEN project was to make recommendations based on scientific evidence and that it was not a systematic review of all related articles. I point out that the Cléro et al. recommendations were not based on objective scientific evidence, but on biased studies. In the second paragraph, Cléro et al. reaffirmed the SHAMISEN Consortium report, which claimed that the overdiagnosis observed in non-exposed adults was applicable to children because children are mirrors of adults. However, the authors of that report withheld statements about secondary examinations in Fukushima that provided evidence against overdiagnosis. In the third paragraph, Cléro et al.
provided an explanation regarding their disclosure of conflicting interests, which was contrary to professional norms for transparency and thus was unacceptable.Finally, their insistence that the Tsuda et al. study was an ecological study susceptible to \u201cthe ecological fallacy\u201d indicated their lack of epidemiological knowledge about ecological studies. Ironically, many of the papers cited by Cl\u00e9ro et al. regarding overdiagnosis were, in fact, ecological studies.Cl\u00e9ro et al. and the SHAMISEN Consortium should withdraw their recommendation \u201cnot to launch a mass thyroid cancer screening after a nuclear accident, but rather to make it available (with appropriate information counselling) to those who request it.\u201d Their recommendation is based on biased evidence and would cause confusion regarding public health measures following a nuclear accident. Those authors should, in my assessment, acquaint themselves with modern epidemiology and evidence-based public health. Like Tsuda et al. recommended of Sch\u00fcz et al., Cl\u00e9ro et al. ought also to retract their article. From the SHAMISEN international experts\u2019 consortium, Cl\u00e9ro et al. published a review article entitled, \u201cLessons learned from Chernobyl and Fukushima on thyroid cancer screening and recommendations in the case of a future nuclear accident\u201d . Their rTheir review, however, misrepresented the Fukushima empirical findings and the Chernobyl experience by including citations of biased literature and by emphasizing overdiagnosis in thyroid screening using ultrasound echo, but with no specific verification/validation. Tsuda et al. communicated the current situation in Fukushima and Japan, pointed out errors in Cl\u00e9ro et al.'s claims, and evaluated those claims using the \u201cToolkit for detecting misused epidemiologic methods\u201d , 3. Cl\u00e9rCl\u00e9ro et al. responded that the SHAMISEN paper was not a systematic review of all papers published on the Chernobyl and Fukushima accidents , 4. HoweCl\u00e9ro et al. stressed that the evidence suggests that overdiagnosis is similar in children and adults . HoweverCl\u00e9ro et al. emphasized \"small thyroid nodules\" in cases of overdiagnosis , but, beEnvironment International [Environment International, and the public at large, will carefully assess whether the claims of Cl\u00e9ro et al. are valid. Furthermore, conflicting interests should be disclosed that involve IARC Expert Group chairs and the Japanese government, whose evident bias is to obscure the large excess incidence of thyroid cancer in Fukushima.Cl\u00e9ro et al. described the way that Dr. Dominique Laurier declared her conflict-of-interest , 3, 4. Tnational , 3, 4. CFirst, if the epidemiological study on historical air pollution in six cities in the United States by Dockery et al. were considered to have a semi-ecological rather than a cohort study design , then coSecond, if the design of Snow's foundational epidemiological investigation regarding the outbreak of cholera were deeIn the final paragraph before their conclusion, Cl\u00e9ro et al. , using JThe main role of epidemiologists is not to classify study designs; nor is the role of epidemiologists to continue to insist on things for which there is no evidence. The role of epidemiologists is to collect and analyze data, draw conclusions based on the results of analyses that take the study design employed into account, and make valid causal inferences based on the body of evidence. 
Then, the role of epidemiologists in public health is to propose and implement necessary and feasible intervention measures in a timely manner, based on these results and conclusions. Cl\u00e9ro et al. , 4, the In their conclusion, Cl\u00e9ro et al. claimed, without evidence, that the large number of excess thyroid cancers in Fukushima was caused by overdiagnosis , 4. AlthWithout sufficient discussion and evidence on the already disproven claim regarding overdiagnosis and the effects of screening, the above authors and the SHAMISEN Consortium have caused distress and confusion and have delayed scientifically justifiable interventions in Japan , 6, 9. TFinally, members of the SHAMISEN Consortium appear to me to have a deep understanding of the Japanese language in which the phrase \"playing shamisen\" is Japanese slang; it means \"to mislead\" or \"to lie\u201d . Again,"} {"text": "The rapid development of smart devices and electronic products puts forward higher requirements for power supply components. As a promising solution, hybrid energy harvesters that are based on a triboelectric nanogenerator (HEHTNG) show advantages of both high energy harvesting efficiency and multifunctionality. Aiming to systematically elaborate the latest research progress of a HEHTNG, this review starts by introducing its working principle with a focus on the combination of triboelectric nanogenerators with various other energy harvesters, such as piezoelectric nanogenerators, thermoelectric/pyroelectric nanogenerators, solar cells, and electromagnetic nanogenerators. While the performance improvement and integration strategies of HEHTNG toward environmental energy harvesting are emphasized, the latest applications of HEHTNGs as multifunctional sensors in human health detection are also illustrated. Finally, we discuss the main challenges and prospects of HEHTNGs, hoping that this work can provide a clear direction for the future development of intelligent energy harvesting systems for the Internet of Things. With the rapid development of the Internet of Things and artificial intelligence, various portable and wireless electronic products and their derived intelligent sensor systems are widely present in our life ,2,3,4,5.Nowadays, various energy harvesting technologies have been developed, such as triboelectric nanogenerators (TENGs) ,15,16, pHowever, to meet the increasing power consumption requirement of more powerful electronics, it is anticipated that TENGs will be integrated with other energy harvesting devices to build hybrid energy harvesters so that multiple energy sources can be simultaneously collected efficiently from the surrounding environment ,25,26,27In this article, the latest research progress of HEHTNGs and the working principles of different devices are systematically reviewed . Key strTENGs can effectively convert mechanical energy into electrical energy based on Maxwell\u2019s displacement current theory . NowadayThe basic principle of PENGs is to use the positive piezoelectric effect of the piezoelectric material a. The po3 are often used to make pyroelectric generators. By harvesting waste heat energy, pyroelectric nanogenerators can be used in applications such as temperature monitoring and healthcare [In general, the harvesting of thermal energy relies mainly on the Seebeck effect b, but ifalthcare ,54.2Te3 materials with high conductivity and low thermal conductivity after doping are widely used. 
In addition, some materials such as PbTe, CoSb3, and Mg2Si have also been reported for the production of TEGs [Waste thermal energy is one of the most easily overlooked energy sources. With the deepening of research, thermal energy harvesting can be applied to various aspects, among which not only can PyENGs collect thermal energy but TEGs can also convert thermal energy into electrical energy under very small temperature differences. The working principle of TEGs is the thermoelectric effect c, which of TEGs ,58,59,60Solar energy is the most important renewable energy source, and is now applied to solar heating, photosynthesis, the PV effect, and other aspects. Solar cells are a device that uses chemical and physical phenomena to convert light energy into electricity ,62. The The basic working principle of EMGs is based on Faraday\u2019s law of electromagnetic induction e, consisFrom TENGs to hybrid energy harvesters, the high efficiency and versatility of devices are closely related to structural design, material selection, and surface modification . A reasoThis section focuses on the flexible structure of the hybrid energy harvesting device, and the reasonable structural design enhances the device\u2019s consistency and greatly improves the operating stability and output performance of the device. A hybrid nanogenerator with triangular cylindrical origami was prepared based on ancient origami a\u2013c. The Increasing the contact area is a commonly used method in structural design, in addition to forming fast charge transfer channels to achieve changes in the internal structure of the thin film. Zhu et al. drilled a hole in the friction layer to serve as a conductive channel d\u2013f, alloLee et al. proposed a TENG-EMG hybrid generator (S-TEHG) with elasticity g\u2013i, wherChanging the operating mode of devices through external components can also improve device performance. Wang et al. used lateral connectors to connect a TENG, EMG, and SC to collect wind and solar energy j\u2013l, convDue to the flexible structure design of the device, the output performance and stability are improved, but it is easier to neglect other characteristics such as flexibility and biocompatibility .3 nanorods to form hybrid films (2) groups in chitosan readily release electrons, thus generating positive zeta potential and enhancing matrix cation capture. The tested power output is more than four times higher than the power enhancement of pristine chitosan TENGs, achieving an ultra-high effective charge density [It is well known that the selection of a suitable functional layer material is one of the important factors for improving the high-performance output of hybrid energy harvesting . The funid films a\u2013c. The density .3Bi2Br9 increases the electron capture ability and polar crystalline phase. Forming a good energy level matching with PVDF-HFP improves the electron transfer efficiency and reduces charge loss. The experiment proved that the device can operate for a long time, and it set a record regarding the the output voltage of a halide perovskite-based nanogenerator [Meanwhile, perovskite materials have excellent electrical properties, making them a hot research topic nowadays. Jiang et al. prepared a multifunctional nanofiber composite material (LPPS NFC) by electrospinning lead-free perovskite/polyvinylidene fluoride co hexafluoropropylene (PVDF HFP) d\u2013f. As aenerator .In recent years, fabrics have been used as carriers to provide high friction properties through various technologies. 
Kim et al. reported a Fab-EH-based energy collector (Fab-EH) g\u2013i. The The functional layer prepared by Nawaz et al. consists of cubic zinc ferrite nanoparticles (CZF NPs) and a polymer matrix, realizing a multifunctional hybrid multimodal nanogenerator (HNG) j\u2013l. The In the past, it was common to choose environmentally friendly or toxic high-performance materials. With the development of flexible and biocompatible wearables, it is a trend to gradually choose natural cellulose or plant extracts for device preparation .In addition to material selection, the surface modification of hybrid energy harvesting devices is very important. Surface modification technology can change the physical morphology and electrical properties of the functional layer surface, enabling the energy harvesting device to have high-performance output. Yu et al. reported that the use of liquid-nitrogen-induced phase transition and in situ doping methods resulted in a piezoelectric polymer containing 71% \u03b2 after quenching a\u2013c. Dopi2) nanoparticles and titanium dioxide composed of an EMG and TENG, g\u2013i. Usinl device .Some inorganic materials are expensive and contain substances that are toxic to the human body. It is a trend to look for low-cost natural modifiers. Mariello et al. prepared a metal-free hybrid piezoelectric nanogenerator (HPENG) based on soft biocompatible materials j\u2013l. CardSurface modification technology plays an important role in improving device output. By changing the degree of interfacial friction and dielectric properties of thin films through physical or chemical methods, the device output can be further improved.The integration strategy of hybrid energy harvesting systems is crucial. Through reasonable integration, energy harvesting systems can have the characteristics of efficient harvesting and a wide application range in different environments.The hybrid energy harvesting system, due to its characteristic of collecting multiple energy sources, can operate alternately to compensate for the limitations of single energy harvesting, thereby increasing the harvesting efficiency of the system. For example, Liu et al. used polysilicon solar cells and an interdigitated electrode structure triboelectric nanogenerator (IDE-TENG) to form an umbrella-shaped energy harvesting system a\u2013c. The Ye et al. designed an efficient R-TENG array d\u2013f. The Due to the applicability of EMGs and TENGs in high-frequency and low-frequency applications, Dan et al. adopted a segmented alternating working strategy to expand the energy harvesting range of hybrid devices g\u2013i. The Liu et al. integrated a TEG and TENG using a converter j\u2013l. ThisThe advantage of a hybrid energy harvesting system is that it can adjust the working mode according to needs. The strategy of running the system simultaneously can maximize system output, but it will limit its application scenarios. Yuan et al. prepared a multifunctional DC-TENG that can harvest both mechanical energy and solar energy by constructing a dynamic Al/CsPbBr3 Schottky junction a\u2013c. The The simultaneous operation strategy adopts a multi-part simultaneous triggering mode, and each part can not only be assembled horizontally but also designed vertically. Lee et al. vertically stacked TENG components on PENG components to prepare a mixed foot mechanical energy harvesting device d\u2013f. WhenParanjape et al. made a multi-stage continuously connected hybrid nanogenerator (HNG) using composite films g\u2013i and oWen et al. 
proposed a flexible hybrid photothermal generator (PTEG) j\u2013l. The The overall output of the energy harvesting system not only relies on the output performance of hybrid devices but also requires external circuit assistance. Since the output form of a TENG is AC, a rectification circuit is required to convert it into DC to supply power for electronic components. Moreover, with the help of an external circuit, the charging speed can be accelerated and the charge density of the TENG can be improved. For example, the hybrid nanogenerator prepared by Pongampai et al. is combined with the designed self-charge pump (SCP) module to improve the charge density of the friction layer a\u2013c. The The external circuit design generates DC pulses that are conducive to energy transmission. Shi et al. set up specialized mechanical switches to connect the TENG with photovoltaic cells in series d\u2013f. By gBy changing connections and adding external components to optimize the charging speed, Kim et al. developed an energy collection system based on a TENG and TEG g\u2013i and uWang et al. developed a high-performance ocean energy harvesting device to ensure the continuous operation of wireless sensor nodes. A power management system (PMS) was designed to improve the charging efficiency of the TENG j\u2013l. The Hybrid energy harvesting systems are widely combined with various sensors to form intelligent systems due to their efficient energy harvesting and sensing performance. Feng et al. integrated a hybrid energy harvesting device, power management circuit, sensor, microcontroller, and wireless communication module to create an intelligent ocean buoy a\u2013c. UndeBy customizing and developing smartphone applications, better control will be achieved based on hybrid energy harvesting systems. Rana et al. developed an energy harvesting device driven by biomechanics as a continuous power source d\u2013f. BaseIn addition, in order to increase the energy conversion efficiency of TENGs, Zhao et al. reported a self-powered wireless marine environment monitoring system g\u2013i, desiXue et al. developed a self-powered wireless temperature and vibration monitoring system based on LTC3331 as the core of an energy harvesting controller j\u2013l. The HEHTNGs are widely used as multifunctional sensors due to their multi-energy harvesting characteristics, good adaptability to different environments, and efficient energy conversion efficiency.In wind power generation, traditional turbine design and installation have drawbacks such as a high cost and complex equipment, which are not suitable for large-scale applications. However, TENGs can overcome these drawbacks. Many researchers shifted their research direction to TENG-based energy harvesting systems. Wang et al. designed a hybrid nanogenerator a\u2013c. BecaIn another work, wind energy harvesting devices are combined with other sensors to form an intelligent agriculture self-powered distributed meteorological sensing system. Zhang et al. proposed this system d\u2013f, whic\u22121 to 15 ms\u22121) and can effectively collect various levels of wind energy through an optimized structural design. Experiments have shown that the FTEHG has proven to be a wind speed sensor that can successfully charge a 47 \u03bcF capacitor to 1.5 V in 4 s, and is thus a system with great potential for wind energy harvesting and sensing [EMGs and TENGs have good harvesting efficiency in high and low frequencies, respectively. Ye et al. 
reported a triboelectric\u2013electromagnetic hybrid nanogenerator (FTEHG) g\u2013i. The sensing .\u22123) at wind speeds of 2\u201316 ms\u22121. In addition, a dual-channel power management topology (DcPMT) has been established to control the outputs of two modules in the TEHG. This system has the advantages of broadband and efficient wind energy harvesting and can provide 3.3 V voltage to power electronic products, playing an important role in improving the environmental adaptability of the Internet of Things [Hybrid wind energy harvesting devices of EMGs and TENGs are being developed. Yong et al. designed a dual-rotor triboelectric\u2013electromagnetic hybrid nanogenerator (TEHG) through the synergy between TENGs and EMGs j\u2013l, resuf Things . A summaMany regions have rainy conditions, and the harvesting of raindrop energy is easily overlooked. However, with the development of TENGs, raindrop energy can compensate for SCs\u2019 shortcomings on rainy days, forming a complementary solution. Ye et al. prepared a hybrid nanogenerator array a\u2013c. By sThe main research on raindrop energy harvesting focuses on two types of energy: raindrop impact energy and electrostatic transfer generated by solid\u2013liquid contact. Mariello et al. designed a multifunctional, flexible, and conformal hybrid nanogenerator (HNG) to collect energy from different water transfer sources d\u2013f, grea2 [Chen et al. prepared a fully encapsulated piezoelectric\u2013triboelectric hybrid nanogenerator (PTHG) g\u2013i. On t2 .Inspired by solid\u2013liquid contact, Liu et al. designed an interdigital electrode structure to integrate solar panels with a TENG (IDE-TENG) j\u2013l. TheySound energy can widely exist in the surrounding environment but, due to the low power density of sound waves and the lack of effective sound harvesting technology, it is not systematically researched and developed like other energy sources. The working principle of converting sound energy into electrical energy is the triboelectric effect and electrostatic induction effect of acoustic vibration, so sound energy harvesting devices based on TENGs have been widely studied. Zhao et al. prepared a novel triboelectric nanogenerator (HR-TENG) based on a dual-tube Helmholtz resonator a\u2013c. The In addition, some researchers have efficiently utilized wind and sound energy. Wang et al. have implemented a hybrid triboelectric nanogenerator (TENG) inspired by windmills to simultaneously capture wind and sound energy d\u2013f. One In order to improve the efficiency of harvesting sound energy, a porous three-dimensional structure is designed to increase the triboelectric effect. Yu et al. used nanoporous PVDF hollow fibers and PDMS valves to simulate the tympanic membrane (PHVAH) g\u2013i. The On the basis of previous studies, Yu et al. designed a unique beam-like structure from a structural design perspective, promoting the diffraction and scattering of sound waves inside the hole wall, enhancing its vibration and friction j\u2013l. In aOcean energy is one of the most abundant sources of energy, and the development of ocean energy is challenging due to the corrosion of electronic devices caused by seawater and the susceptibility to adverse weather conditions on the sea surface. Wang et al. made a hybrid nanogenerator system with an internal topology a\u2013c. UsinCurrently, the research direction of TENG-based ocean energy harvesting mainly includes system design, structural design, and external incentives. Hao et al. 
designed a box-type energy harvesting device d\u2013f. The Encapsulated devices have better corrosion resistance. Pang et al. used silicon shells to prepare soft spherical devices (TEHG) g\u2013i and s\u22123. This work provides conditions for the application of high-power self-powered systems [However, inspired by the pendulum, Zhang et al. coupled the dual-pendulum hybrid nanogenerator (BCHNG) module j\u2013l, incl systems . A summaWith the rapid demand for sports and fitness, wearable sensors have an important role in the process of the real-time monitoring of human movement status in different ways due to their flexibility, high sensitivity in detecting tiny signals, and so on . Zhu et In addition, wearable sensors can not only monitor human motion status but also serve as self-powered devices to power other intelligent sensors, achieving an intelligent integration of IoT devices. Jiang et al. reported a wearable non-contact free-rotating hybrid nanogenerator d\u2013f that With the deepening of research on wearable devices, electronic skins that focus on comfort and practicality have become a research focus. Although devices integrated into clothing can detect human motion status, they still have certain errors and discomfort, and electronic skins can solve such problems. Gogurla et al. have prepared a biocompatible electronic skin g\u2013i. ThisMariello et al. reported a hybrid nanogenerator with high sensitivity and super flexibility j\u2013l. The Nowadays, some diseases such as the heart and brain require implantable devices to provide treatment . ImplantAs devices implanted in the human body require better safety, Gogurla et al. used proteins to create an electronic skin d\u2013f. By dThe recovery process of nerve injury repair is slow due to its complex and individual differences. Jin et al. developed an implantable system with a physiological adaptation function g\u2013i. The Li et al. prepared a TENG-based hybrid energy harvesting system j\u2013l, whicThe TENG-based hybrid energy harvesting system, which couples multiple working mechanisms, can harvest energy in the environment to achieve higher efficiency, multifunctionality and self-sustainability. Combining these characteristics, the integration of TENGs with various energy harvesters in current self-sustaining systems is increasingly attractive, especially in this era of the Internet of Things. It can be seen that HEHTNGs are undergoing fast progress, and current research is mainly focused on the technological improvement process of HEHTNGs from the aspects of device operating principles, performance improvement, and integration concepts. Secondly, HEHTNGs can operate alternately or simultaneously under one or more energy conditions, efficiently collecting energy from different environments, such as wind energy, raindrop energy, ocean energy, and wave energy. And, it can even be used as multifunctional sensors in human health and other environmental stimuli detection while fulfilling the energy needs of electronic devices and self-powered systems. A high power density and sustainability are always the ultimate development trends; in fact, they are also the most active direction of HEHTNGs.Nowadays, most devices are simply stacked and connected, which limits their range of use. 
Optimized structural designs should be developed to achieve a high power density and sustainability while maintaining device integration and miniaturization to meet various application needs.The durability and stability of devices are easily overlooked, and materials can cause damage in long-term use. Therefore, it is urgent to study materials with self-healing functions.In terms of enhancing performance output, doping and surface modification can increase the output performance of devices, but they can cause material preparation complexity and high costs. Therefore, external excitation methods are sought to promote the generation of functional layer charges to meet the requirements.With the rapid development of the Internet of Things and artificial intelligence, while pursuing a high power density, exploring multifunctionality such as synergistic sensing by integrating devices with each other may achieve novel intelligent features for wearable devices, medical devices, and other environmental monitoring systems.However, there are still some key issues and challenges that need to be addressed, which not only limit the output performance but also limit their potential future applications:"} {"text": "Myofunctional orofacial examination (MOE) is an important tool for the assessment of the stomatognathic system and orofacial functions, and the early diagnosis of orofacial myofunctional disorders. Therefore, the purpose of the study is to scan the literature and determine the most preferred test for myofunctional orofacial examination.A literature review was conducted to collect information. Pubmed and ScienceDirect database was explored by using keywords gained by MeSH .Fifty-six studies were retrived from the search and all of the studies were screened and evaluated regarding the subject, aim, conclusions and the orofacial myofunctional examination test they used. It has been observed that traditional evaluation and inspection methods have been replaced by newer and methodological approaches in recent years.Although the few tests used differ, 'Orofacial Examination Test With Scores\u2019 (OMES) was found to be the most preferred myofunctional orofacial evaluation method from ENT to cardiology. Myofunctional orofacial anomalies refer to abnormal orofacial functions that can lead to changes in the function, structure and formation of the stomatognathic system. These disorders can cause malocclusions of the jaws, temporomandibular joint disorders, and other problems involving the orofacial region. Factors triggrering malocclusions may include myofunctional anomalies characterized by disturbances in chewing, swallowing and breathing patterns. Infantile swallowing, mouth breathing and tongue thrust, which are clinically common, and can be caused by genetic and/or environmental factors could be counted as examples \u20133. The pOrthodontic and dentofacial orthopedic treatment is often used to correct malocclusions and other orofacial abnormalities \u20137. But tThe diagnosis of myofunctional orofacial disorders requires a comprehensive myofunctional orofacial examination (MOE) of the orofacial musculature and function. This examination is commonly performed by speech-language pathologists, dentists, and orthodontists. The MOE has been described in several studies as a comprehensive tool for assessing the orofacial region by various examination protocols. However, there is no global consensus on a standardized orofacial myofucntional examination form. 
Various examination forms and methods have been presented for the evaluation of orofacial anomalies and functions, which include the examination of the stomatognathic region: orofacial muscles, lips, tongue, breathing, swallowing patterns, cranio-oro-facial posture and speech. The purpose of the myofunctional orofacial examination is to evaluate the function and structure of the orofacial complex statically and dynamically and to detect any dysfunction that may be present. The information obtained from the examination is intended to be used for developing an individualized treatment plan that addresses any orofacial dysfunction detected. Myofunctional examination may include the following stages: reviewing the patient's current health and medical/dental history; statically and dynamically examining, photographing and video recording the facial/mouth structures, oral functions, and general face and body posture; evaluating, sequentially and individually, respiration and the respiratory tract, oral habits, craniofacial/orofacial appearance and the function of the involved muscles and TMJ, the tongue, the hard and soft tissues of the mouth, occlusion, speaking function, appearance and resting position, chewing, and liquid and solid swallowing; reviewing the examination findings; consulting if necessary; and establishing a treatment strategy and plan. In addition to the examination, studies have also been carried out on the treatment of these anomalies, for example the rehabilitation of the tongue. Nevertheless, the need for a standard and valid myofunctional orofacial assessment test has become more evident for dentists, all specialist dentists and especially orthodontists. That is why the research question of this study is: 'Which is the most preferred and presented myofunctional orofacial examination test in the literature?' Thus, the aim of the study is to scan the literature from this point of view and to identify the most valid and most widely used test. The PubMed and ScienceDirect databases were searched by one researcher of the present study on April 2, 2023 using the keywords “myofunctional[Title] AND orofacial[Title] AND examination[Title] AND orofacial[Title] AND myofunctional[Title] AND examination[Title]”, “myofunctional[Title] AND orofacial[Title] AND assessment[Title] AND orofacial[Title] AND myofunctional[Title] AND assessment[Title]”, “ OR ” for PubMed and "myofunctional orofacial assessment", "myofunctional orofacial evaluation", "myofunctional orofacial examination", "orofacial myofunctional examination", "orofacial myofunctional assessment", "orofacial myofunctional evaluation" for ScienceDirect. Medical Subject Headings (MeSH) terms were used in creating the search keywords. Fifty-six studies were retrieved by screening both databases. Five studies were duplicates, two were review articles, three were conference abstracts, one was an erratum, and one was a case report. After excluding those 12 studies, 44 studies remained for evaluation, and all 44 were included in the screening. The following data were recorded: database, name of publication, study type, and name of the examination form used in the study. In addition, all studies were screened regarding subject, aim and conclusions in the discussion of the present study (Table 1). Among all studies, OMES and OMES-E tests were the most preferred myofunctional anamnesis protocols, with MGBR following them.
A few studies were based on conventional conventional OMD examination rules. It was observed that some studies examining myofunctional anomalies and temporomandibular disorders together used DC/TMD or RDC/TMD and, EMG tests. Some studies were specifically aimed at measuring the functions and working force of the tongue. Two researchers have published publications promoting their personal measurement protocols. One study used a protocol to evaluate facial muscular mimic-specific measurements. A nutrition scale was used in one study. Nasal obstruction was separately evaluated in one study.In some studies, specialized tests were used to evaluate specific functions, such as the ability to produce speech sounds. It was observed that studies examining OMD and TMD together also used RDC/TMD and EMG tests. Some studies were specifically aimed at measuring the functions and functional strength of the tongue. Some researchers have published publications promoting their own personal measurement protocols. One study used a protocol that made mimic-specific measurements. A nutrition scale was used with MOE in one study. Effects of nasal obstruction was evaluated by using a MOE in one study.The MOE examination forms used in the screened studies included observation of the patient's orofacial posture at rest, assessment of swallowing patterns, assessment of speech production, and a comprehensive oral-motor examination. Those included assessments of various aspects of orofacial function, such as lip and tongue resting posture, swallowing patterns, and speech production. In OMES protocol each assessment is given a score to define the severity of orofacial dysfunctions which are observed and detected. It is observed that orofacial myofunctional examinations for the studies screened were performed in a clinical setting such as language-speech pathology clinic or dental clinic or ENT clinics or pediatric clinics, which shows that the subject is definitely multi and interdisciplinary.This literature review has shown us that the MOE protocol is used in many different disciplines of medicine and dentistry and in the diagnosis and treatment of many different orofacial anomalies. The aims and results of all the publications scanned in the study are given below in the form of a short review. In addition, which MOE test was used in each study is indicated in Table De Ara\u00fajo SRS et al. investigBueno et al. conducteIn their study, Magnani et al. used OMEGraziani et al. , 32 studPaskay LC . in her In his study, Hanson ML . drew a Medeiros et al. studied Rohrbach et al. examinedMacedo et al. conducteLima et al. evaluatePimenta et al. presenteBarbosa et al. investigSantos et al. studied In their study, Arias Gullien et al. assessedDe Castro Correa et al. investigIn their case report, Carvalho Lima et al. studied In their systematic review study, Palomares-Aguilera M et al. investigIn their study, in which they compared orofacial myofunctional evaluation with scores protocol with traditional orofacial myofunctional evaluation, Felicio and Ferreira proved tIn an other study, de Felicio CM et al. , 46 expaFolha et al. in theirFolha et al. in theirDe Felicio CM et al. investigIbrahim AF et al. examinedDe Felicio et al. conducteFerreira et al. assessedMarim et al. in theirPedroni-Pereira A et al. evaluateScudine KGO et al. examinedBueno Dde A et al. evaluateSantos REA et al. analyzedCorrea CC et al. examinedMu\u00f1oz-Vigueras N et al. examinedHansen D et al. mentioneMagnani DM et al. aimed toDa silva AP et al. 
The review also covers the studies of Magnani DM et al., Braga et al., Trawitzki LV et al. (two studies), Fasscollo CE et al., Renom-Guiteras M et al., Mapelli A et al. (two studies), Finger LS et al., Lodetti et al., Tartaglia GM et al., De Felicio CM et al., Gubani MB et al. and Milanesi JM et al. Another problem that can be caused by myofunctional orofacial anomalies is orofacial pain. In the modern world, myofascial pain is one of the most common problems related to the orofacial region, arising from head and body postural disorders, stress and myofunctional orofacial anomalies; the study of Orzeszek et al. addresses this topic. Since this study was conducted to determine which OMES test is the most known and preferred in the literature, the quality of the methodologies of the studies was not examined one by one. This was a limitation of the study. Studies in the field of orofacial disorders, which is a highly multidisciplinary medical problem, are relatively insufficient. Myofunctional orofacial dysfunction is a complex condition that requires careful examination and appropriate multidisciplinary treatment to achieve optimal outcomes. Anomalies that occur in the region that we call the stomatognathic system, where vital functions such as speech, swallowing, feeding, breathing, chewing, facial appearance and posture are performed, cause many other medical problems and even anomalies of other systems. Future studies could further develop the evaluation and rehabilitation processes. The MOE is a valuable tool in many disciplines of dentistry and medicine for assessing the orofacial region and identifying orofacial myofunctional disorders both statically and dynamically. In general, across all disciplines, OMES was found to be the most preferred test. In addition, it is established in the literature that the OMES test provides consistent and reliable measurements in many different conditions."} {"text": "I argue that the electrophysiological phenomenon, the visual mismatch negativity (vMMN), considered to be the signature of automatic visual change detection, is post-perceptual. When stimulus sequences with frequent physically or categorically identical visual stimuli (standards) and rare stimuli (deviants) that violate the regularity of the standards are presented, an ERP component, the vMMN, emerges, even if these stimuli are unrelated to the ongoing task. vMMN is considered the counterpart of the auditory mismatch negativity (MMN), and a widely adopted framework for both MMN and vMMN is the predictive coding theory. vMMN is usually measured on difference potentials, in which the ERPs to the standard are subtracted from the ERPs to the deviant. In the later time range, deviant minus standard ERP differences appear after 250 ms, e.g., for dot motion; this is later than the time needed to identify images depicting single objects, objects within contexts, and scenes. The most direct method to measure the time needed to identify (plus decide and respond to) visual events is the "go/no-go categorization task," first introduced by Thorpe et al.
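To make the difference-potential measure concrete, the following is a small illustrative sketch: synthetic single-trial data for one electrode are averaged per condition, the standard ERP is subtracted from the deviant ERP, and the most negative deflection is located in a post-stimulus window. The sampling rate, trial counts and search window are assumptions for the example, not values from the studies discussed.

# Illustrative deviant-minus-standard difference wave on synthetic data.
import numpy as np

fs = 500                              # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.6, 1.0 / fs)    # epoch from -100 ms to 600 ms

rng = np.random.default_rng(0)
# trials x samples arrays standing in for single-trial ERPs at one electrode
standard_trials = rng.normal(0.0, 1.0, (200, t.size))
deviant_trials = rng.normal(0.0, 1.0, (40, t.size))
# add a small negative deflection around 300 ms to the deviant trials
deviant_trials -= 1.5 * np.exp(-((t - 0.30) ** 2) / (2 * 0.03 ** 2))

erp_standard = standard_trials.mean(axis=0)
erp_deviant = deviant_trials.mean(axis=0)
difference = erp_deviant - erp_standard          # deviant minus standard

# most negative deflection between 250 and 450 ms post-stimulus (assumed window)
window = (t >= 0.25) & (t <= 0.45)
peak = np.argmin(difference[window])
print(f"negative peak at ~{t[window][peak] * 1000:.0f} ms, "
      f"amplitude {difference[window][peak]:.2f} (arbitrary units)")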
Potter and her colleagues introduced a series of studies presenting pictures using the rapid serial visual presentation (RSVP) method. In her now-classic study, Potter presented pictures in rapid succession. In a passive paradigm (participants reacted to the color change of the fixation dot), Stothart et al. presented stimulus sequences of this kind. On this account, vMMN is elicited by already identified events. These events have potential importance, even in a passive oddball paradigm. This suggestion is not new; it fits the early interpretation of (auditory) MMN by Risto Näätänen and colleagues. The component at issue, the visual mismatch negativity (vMMN), emerges in the 120–350 ms latency range. Data from visual discrimination, identification, and steady-state evoked potential studies show that its latency is longer than the time needed to identify even complex visual events. Therefore, it is improbable that vMMN is a signature of the processes of the early phases of stimulus identification. In the paradigms investigating vMMN, events appear in the context of other ones. The context is the set of standard stimuli. Due to the regular appearance of standards, the system is capable of developing predictions about forthcoming events. I agree that the processes underlying vMMN are error signals, but they are signals of mismatch between expected and unexpected identified events.IC: Conceptualization, Writing—original draft."} {"text": "Multiple studies have connected parenting styles to children's internalising and externalising mental health symptoms (MHS). However, it is not clear how different parenting styles jointly influence the development of children's MHS over the course of childhood. Hence, the differential effects of parenting style on population heterogeneity in the joint developmental trajectories of children's internalising and externalising MHS were examined. A community sample of 7507 young children from the Growing Up in Ireland cohort study was derived for further analyses. Parallel-process linear growth curve and latent growth mixture modelling were deployed. The results indicated that the linear growth model was a good approximation of children's MHS development. The growth mixture modelling revealed three classes of joint internalising and externalising MHS trajectories (p < 0.01; LMR = 682.19, p < 0.01; E = 0.86). The majority of the children (83.49%) belonged to a low-risk class best described by a decreasing trajectory of externalising symptoms and a flat low trajectory of internalising MHS. In total, 10.07% of the children belonged to a high-risk class described by high internalising and externalising MHS trajectories, whereas 6.43% of the children were probable members of a mild-risk class with slightly improving yet still elevated trajectories of MHS. Adjusting for socio-demographics and child and parental health, multinomial logistic regressions indicated that hostile parenting was a risk factor for membership in the high-risk and mild-risk classes, whereas consistent parenting style was a protective factor only against membership in the mild-risk class. Hostile parenting style is thus a substantial risk factor for increments in child MHS, whereas consistent parenting can serve as a protective factor in cases of mild risk. Evidence-based parent training/management programmes may be needed to reduce the risk of developing MHS.
As can be seen in As a reference only, the sample was distributed into the age-appropriate cut-off centiles based on the new fourfold classification of the scores on the SDQ scales in the UK (\u03c72(9)\u00a0=\u00a0926.16, p\u00a0<\u00a00.001. The linear parallel-process LGM (intercept-slope) exhibited great fit to the data, i.e. scaled \u03c72(4)\u00a0=\u00a030.22, p\u00a0<\u00a00.001, CFI\u00a0=\u00a00.996, TLI\u00a0=\u00a00.98, RMSEA\u00a0=\u00a00.03, SRMR\u00a0=\u00a00.013. Thus, we could be confident that a linear model of change could be a good approximation for the MHS trajectories. The LGM parameters are presented in the Supplementary materials. Next, we estimated the GMM to identify possible heterogeneity in the trajectories of MHS. The model with three classes of MHS trajectories exhibited significant improvement compared to the two-class model. Moreover, the likelihood ratio tests revealed that the addition of a fourth class did not result in improved model-data fit (p\u00a0>\u00a00.05). The \u2018elbow\u2019 plot of the BIC values showed that adding more than three classes did not improve fit substantially . Additionally, we estimated further models where the variances of the intercept and slope parameters were freely varying between classes. However, the variance\u2013covariance matrices were not positive definite, and thus, the variances were held equal across classes. Keeping in mind that this was a general population sample, we did not expect extreme variation in mental health symptomatology, and thus, the three-class model was retained. Fit indices for the models are presented in An intercept-only model was found to be degraded compared to the intercept-slope model, Satorra\u2013Bentler N\u00a0=\u00a06268) of the children belonged to class 1 (low risk) that was characterised by a flat low trajectory of internalising MHS, and a declining trajectory of externalising MHS. In contrast, 10.07% (N\u00a0=\u00a0756) of the children were members of a high-risk class 2 that was characterised by an accelerating high trajectory of internalising and externalising MHS. Finally, 6.43% (N\u00a0=\u00a0483) of the children belonged to class 3 with MHS trajectories that were slightly improving but still elevated (mild-risk). The estimated trajectories are shown in The model revealed that 83.49% were inspected and found to be less than 6 (mean VIF\u00a0=\u00a01.84). Thus, no extreme collinearity was diagnosed for the model. The results of the multinomial logistic regressions are presented in Hostile parenting style was a substantial predictor of membership in the high-risk class and the mild-risk class . Warm parenting style did not predict membership in the high-risk or the mild-risk classes. Although consistent parenting style did not predict membership in the high-risk class , it was a protective factor against membership in the mild-risk class . The results for the covariates are discussed in the Supplementary materials.et al., et al., et al., et al., et al., et al., The person-centred approach revealed some extent of heterogeneity in children's MHS trajectories. Contrary to most preceding evidence (Patalay et al., et al., et al., et al., As mentioned, most of the preceding evidence did not examine (Miner and Clarke-Stewart, et al., et al., et al., et al., et al., et al., et al., et al., Last but not least, the present study sought to evaluate the impact of parenting styles on the heterogeneity in children's MHS developmental trajectories. 
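A highly simplified stand-in for the modelling pipeline described above (per-child growth parameters, a three-class mixture, and multinomial regression of class membership on parenting styles) is sketched below with scikit-learn on synthetic data; the variable names, the shared-covariance choice and the use of an ordinary Gaussian mixture in place of a full latent growth mixture model are assumptions made only for illustration.

# Simplified stand-in for the analysis described above, on synthetic data.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_children = 1000

# columns: internalising intercept/slope, externalising intercept/slope
growth_params = rng.normal(size=(n_children, 4))

# three trajectory classes; variances held equal ("tied") across classes,
# echoing the equality constraint described in the text
gmm = GaussianMixture(n_components=3, covariance_type="tied", random_state=1)
classes = gmm.fit_predict(growth_params)

# synthetic parenting-style scores: hostile, consistent, warm
parenting = rng.normal(size=(n_children, 3))

# multinomial logistic regression of class membership on parenting styles
mlr = LogisticRegression(multi_class="multinomial", max_iter=1000)
mlr.fit(parenting, classes)

# exponentiated coefficients give relative-risk-style ratios per class
print(np.round(np.exp(mlr.coef_), 2))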
Using a developmental psychopathology and multisystem resilience approach (Masten et al.), the present study is, overall, distinct from preceding works (Fernandez Castelao and Kröner-Herwig). Nevertheless, the present work also had some limitations. Specifically, it should be underscored that the SDQ is a broad screening tool of children's and adolescents' MHS and does not necessarily target specific symptoms of the most common mental disorders. Moreover, evidence suggests that the SDQ may not have high criterion validity in predicting emotional and behavioural disorders (Aydin). The present findings may have significant implications for policy and practice. For instance, the correlated nature of the developmental trajectories of internalising and externalising MHS indicates that mental health professionals should always screen for both types of MHS. Furthermore, the person-centred modelling indicated that only a small proportion of the general population of young children may be at risk for increased MHS. However, mental health practitioners should be vigilant to identify those children. Thus, more robust extensive screening may be needed. Most importantly, the present study's results underscored the importance of parenting styles for the identification of children at risk for developing high/mild MHS. Therefore, we suggest that routine background checks of parenting behaviours should be applied while screening children for possible MHS. Given that hostile parenting style was associated with children's membership in the high- and mild-risk classes with high MHS trajectories, we recommend evidence-based parent training/management programmes to improve positive interactions between parent and child."} {"text": "Hsp90 is known for its role in the activation of an eclectic set of regulatory and signal transduction proteins. As such, Hsp90 plays a central role in oncogenic processes. The Special Issue includes the 'Editor's Choice' review by Bjorklund and Prodromou, the articles by Wengert et al. and Jay et al., the reviews of Joshi et al. and Backe et al. (another 'Editor's Choice' contribution), and the article by Ziaka and van der Spuy. The Hsp90 of Plasmodium falciparum (PfHsp90), and its associated co-chaperone complexes, display key structural and functional differences compared to the human Hsp90 system. While the core co-chaperones regulating client entry, ATPase activity and Hsp90 conformation are broadly conserved in P. falciparum, there are some major differences, such as the expression of two p23 isoforms, and the apparent absence of a canonical Cdc37. Overall, these differences in the co-chaperone network of PfHsp90 suggest that the elucidation of Hsp90–co-chaperone interactions can greatly extend our understanding of proteostasis and lead to the identification of novel inhibitors and drug candidates. The review by Dutta et al. highlights these points, and the issue also includes a contribution by Piper et al. In addition to assisting protein homeostasis, Hsp90 has been shown to be involved in the promotion and maintenance of proper protein complex assembly, either alone or in association with other chaperones, such as the R2TP complex; the review by Lynham and Walid looks at this topic. The issue further comprises the review by Mankovich and Freeman, the 'Editor's Choice' review by Maiti and Picard, the review by van Oosten-Hawle, the 'Editor's Choice' article by Haufeng, the study by Omkar et al., and a second contribution by Bjorklund et al.
In conclusion, the 'Special Issue' brings together a series of review and research papers that concentrate on past and recent advances on Hsp90. It highlights its diverse roles from the cytoplasm to the extracellular environment and even at the organismal level, and concentrates on the structure, mechanism and pivotal position of Hsp90 in disease. The articles bring together a diverse array of research findings that will help drive research effectively towards making further progress in this fascinating research field that is Hsp90."} {"text": "Dear Editor: We read with interest the article "Anatomical and visual outcomes of fovea‑sparing internal limiting membrane (ILM) peeling with or without inverted flap technique for myopic foveoschisis" by Zheng et al. 1. From the OCT images presented, the case in the inverted ILM flap group had MTM with lamellar macular hole (LMH); after surgery, the OCT images show a flap closure configuration, instead of a true MH. This pattern is precisely what we would like to achieve with the addition of the inverted ILM flap, and Bonińska et al. have reported on this point."} {"text": "The purpose of this review was to identify the effectiveness of environmental control (EC) non-pharmaceutical interventions (NPIs) in reducing transmission of SARS-CoV-2 through conducting a systematic review. The EC NPIs considered in this review are room ventilation, air filtration/cleaning, room occupancy, surface disinfection, barrier devices and related measures. This article is part of the theme issue 'The effectiveness of non-pharmaceutical interventions on the COVID-19 pandemic: the evidence'. The rapid spread of SARS-CoV-2 worldwide presented a unique challenge. Many countries implemented isolation of imported cases of COVID-19 and their contacts, but by late February 2020 cases of community transmission with no links to travel were identified in the UK and many other countries. Although highly effective in reducing transmission, lockdown carried adverse social and economic consequences. This review of the impact of environmental controls (ECs) covers ventilation, occupancy, disinfection and air filtration. ECs are defined as measures which were intended to alter the potential contamination level of surfaces, impose barriers to person-to-person contact, and modify the air within buildings. By focusing on transmission rather than surrogate markers such as virus detection in the environment, the review seeks to identify the effectiveness of measures in terms of reduction of transmission in real-life situations. While modelling and experimental studies help inform our understanding of the role of various ECs, direct extrapolation of their findings to humans in real-life situations is limited. 2. (a) Systematic searches of databases from Web of Science, Medline, EMBASE, and the preprint servers MedRxiv and BioRxiv were conducted in order to identify studies reported between 1 January 2020 and 1 December 2022.
All articles reporting on the effectiveness of ventilation, air filtration/cleaning, room occupancy, surface disinfection, barrier devices and related measures were screened. (b) Studies were excluded if they 1. did not consider the transmission of SARS-CoV-2 between humans or animals, 2. did not include a comparison between groups that implemented the NPI and groups that did not, 3. were modelling studies with no original data, 4. were experimental studies that used model aerosols with no SARS-CoV-2 virus, 5. were studies on environmental sampling alone, 6. did not include original research (review papers etc.), or 7. were not published in English. Studies were included in the review if they (i) reported on the transmission of SARS-CoV-2 in humans or animals and (ii) reported how transmission is impacted by the implementation of the following EC NPIs: ventilation, air cleaning devices, surface disinfection, room occupancy modification and barrier devices. (c) Three authors screened the retrieved articles based on the title and abstract of the references. The obtained references were screened a second time by one reviewer (A.M.) to further exclude references based on criteria 3–6 listed above in §2b. A second reviewer (C.I.) reviewed 5% of the exclusions. Four reviewers then performed a full text review in order to select the final papers based on the inclusion/exclusion criteria in §2b. Each paper selected for full text review was reviewed first by one reviewer (A.M.), and inclusion/exclusion decisions that were not straightforward were reviewed by a second reviewer. Disagreements were then subsequently resolved by a third reviewer. The following variables were noted in the final papers: country, setting, environmental NPI implemented, sample size, SARS-CoV-2 transmission results and other factors associated with transmission. Where available, the following data were also summarized: measurements related to the environmental NPI considered, such as air changes per hour (ACH). (d) Three authors made an assessment of methodological study quality using a Cochrane 'risk of bias' tool for non-randomized studies (ROBINS-I), applied to each included study. 3. In total, 13 971 unique articles were identified in the systematic search. A total of 3217 of these were retrieved based on initial title and abstract screening. Further screening of the titles and abstracts based on the inclusion/exclusion criteria in §2b reduced the number of articles for full-text review to 1328. Only 19 of these studies met the inclusion criteria for the review; those which had been initially identified through pre-print servers were subsequently published following peer review, and it was the peer reviewed version which was included. A PRISMA flowchart describes the number of references at each stage. Of the included studies, 12 considered ventilation, four considered air cleaning devices, five considered surface disinfection, six considered room occupancy and one considered barrier devices; the settings studied included healthcare settings, schools (n = 2), an office (n = 1) and a bus (n = 1), and two of the included studies were laboratory animal studies. All of the included studies were found to have critical or serious risk of bias in at least one domain, see §4.
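For reference, the ventilation measurements summarized from these studies, such as air changes per hour, follow a simple definition; the short sketch below shows the standard calculation, with the flow rate, room dimensions and occupancy being made-up illustrative numbers rather than values from any included study.

# Standard air-changes-per-hour (ACH) calculation with illustrative numbers.
def air_changes_per_hour(supply_flow_m3_per_h: float, room_volume_m3: float) -> float:
    """ACH = volumetric supply airflow divided by room volume."""
    return supply_flow_m3_per_h / room_volume_m3

def per_person_flow_l_per_s(supply_flow_m3_per_h: float, occupants: int) -> float:
    """Convert m^3/h of outdoor air into litres per second per person."""
    return supply_flow_m3_per_h * 1000.0 / 3600.0 / occupants

room_volume = 7.0 * 5.0 * 2.7   # m^3, hypothetical classroom
supply_flow = 540.0             # m^3/h of outdoor air, hypothetical

print(f"ACH = {air_changes_per_hour(supply_flow, room_volume):.1f}")
print(f"Per-person supply = {per_person_flow_l_per_s(supply_flow, 25):.1f} L/s for 25 occupants")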
(a)Among the 12 studies that considered the effectiveness of ventilation on the transmission of SARS-CoV-2: (i) six studies provided evidence suggesting improved ventilation decreases SARS-CoV-2 transmission, (ii) three studies found no association between improved ventilation and transmission and (iii) three studies considered the impact of a combination of NPIs that included ventilation, therefore making it difficult to determine the effect on transmission of ventilation alone.Positive association: Six (of 12) studies provided evidence suggesting improved ventilation decreases SARS-CoV-2 transmission. Only two of these studies directly measured ventilation rates, either experimentally . These : Six of studies [et al. . Walshe : Six of studies u et al. measured: Six of studies et al. [Additionally, the viral load present in the boning hall (where the outbreak originated) may have been different from that in the abattoir, which could have impacted the differences in secondary infection rates. In the study of the two buses , the exet al. [et al. [et al. [et al. [In two of the other studies which showed decreased transmission with increased ventilation , ventil [et al. consider [et al. reportedet al. [et al. [et al. [et al. [The presence/absence of a ventilation system was assessed through questionnaires in the studies by Gettings et al. and Poko [et al. . Getting [et al. studied a et al. analysedNo association: Three (of 12) studies showed no significant association between the presence of a ventilation system and SARS-CoV-2 transmission. In all three studies, data were collected through questionnaires or interviews. Wang et al. [et al. [et al. [et al. [et al. [g et al. studied [et al. consider [et al. consider [et al. , there w [et al. , there whree of 1 studies Unclear association: Three (of 12) studies considered ventilation in combination with a range of other NPIs and do [et al. found thet al. [et al. [et al. [Feathers et al. and Danc [et al. studied [et al. changed (i)et al. [All of the four studies that considered the effectiveness of air cleaning devices reported evidence for reduced viral transmission ,26\u201328. HPositive association: The only controlled laboratory experiments identified in this review were undertaken by Fischer et al. [et al. [et al. [et al. [r et al. and Zhan [et al. , and the [et al. separate [et al. showed t [et al. . Howeveret al. [Gettings et al. Among the six studies that considered the impact of room occupancy on SARS-CoV-2 transmission, (i) four studies provided evidence suggesting that decreasing room occupancy leads to a reduction in SARS-CoV-2 transmission, (ii) one study concluded that there is no association between room occupancy and transmission and (iii) one study showed unclear association.Positive association: Walshe et al. [et al. [et al. [et al. [et al. [e et al. was one of a number of different factors considered and using statistical analysis they concluded that this factor did not have a significant impact on transmission.g et al. ) reporteUnclear association: Oginawati et al. [per capita) was associated with increased transmission in a residential setting. However, the authors attributed this result to the likelihood (based on trends in the area studied) that larger houses generally contained more family members (i.e. more people in the same house), and therefore a larger susceptible population for each index case. 
The significance for the association between lower occupancy rates with higher transmission in this study was therefore confounded by the measure of occupancy used.i et al. conclude (c)Five studies were identified that considered the effectiveness of surface disinfection on transmission of SARS-CoV-2: (i) three showed a positive association, i.e. enhanced disinfection was associated with reduced transmission and (ii) two studies showed no association between disinfection and transmission. These studies relied on data collected through questionnaires, interviews or site visits.Positive association: Atnafie et al. [et al. [et al. [et al. [et al. [e et al. and Kera [et al. consider [et al. used dati et al. compared [et al. consideret al. [et al. [et al. [Both of the studies on healthcare settings were ba [et al. , it was No association: Guedes et al. [et al. [s et al. studied [et al. studied (d)et al. [Only one of the identified studies considered the impact of barrier devices on transmission. Gettings et al. No evidence was found for the effectiveness (or ineffectiveness) of the EC NPIs of . 4The risk of bias assessment for each domain across all included studies is given in . 5For all but one of the NPIs considered there were studies that reported a positive association (i.e. that the NPI reduced transmission), no association (i.e. that the NPI was ineffective for reducing transmission) and unclear association. Most of the studies were based on retrospective analyses of real-world settings with many factors either left uncontrolled or not measured including; the viral load of the infectors, the number of infectors, the size of the susceptible population, infection risk of the host outside the investigated setting, and the influence of other NPIs etc. . These factivity \u201336.Due to such confounding factors, many of the studies reported in \u00a73 have a high risk of bias.Additionally, in most real-world settings, a combination of NPIs was generally employed together to reduce transmission, thereby making it difficult to determine the effect of a single NPI from observations of these settings. Many of the studies reported in \u00a73 used questionnaires, interviews or surveys to collect data, thereby introducing potential bias in the data. For the few studies that measured the impact of the different NPIs in settings where there were recorded transmission events, the exact conditions that were present during the transmission event were not fully documented, thereby weakening the confidence in the conclusions. Two laboratory animal studies were identified in this review. However, transmission between segregated infected and naive hamsters in controlled environments is not directly comparable to socially mixing humans. Furthermore, the sample size of these animal studies (of infectors and infectees) was very small, and there was a lack of measurement of factors such as the amount of virus present in the environment. The combination of these factors reduces the confidence of findings from these studies in terms of how they may relate to human transmission of SARS-CoV-2. (a)Studies that were published outside of the date range for the review or did not directly provide evidence for the effectiveness of the NPIs, but instead just suggested an association between the NPI and transmission, were excluded from the results section of this review. However, some of these studies do provide insights into the NPIs and are therefore briefly discussed here. 
The choice of which \u2018excluded\u2019 studies to discuss is not exhaustive, but chosen to represent the breadth of the material discovered in the literature.Ventilation and air filtration: There are studies that report on outbreaks where, in the absence of adequate ventilation, the authors attributed long-range airborne transmission as the dominant transmission route [et al. [et al. [on route \u201350. They [et al. and Vern [et al. ) showed [et al. \u201357. From [et al. . The autSome studies found a correlation between the probability of getting infected and the location of people relative to the air handling units in the setting ,59. Theset al. [et al. [The analysis of air samples for SARS-CoV-2 genomic material (SARS-CoV-2 RNA by RT-PCR) is a method widely used for assessing the impact of ventilation and air cleaning devices. A number of studies have shown some reduction in SARS-CoV-2 genomic material in the air samples collected from hospitals with COVID-19 patients and enhanced ventilation ,62. HEPAet al. , it was [et al. showed t [et al. ,66. Howeet al. [Buonanno et al. consideron rates ,69. FurtSurface disinfection: Contamination of surfaces in houses or hospital wards housing individuals with COVID-19 suggests that fomite transmission may be possible [et al. studied secondary transmission to HCWs that performed gastrointestinal endoscopy in 11 COVID-19 patients at a hospital in China, where enhanced disinfection strategies were in place both during and after the procedures. No SARS-CoV-2 transmission to HCWs was reported in the study [possible \u201372. Lin he study . Howeverin vitro disinfection of non-porous surfaces inoculated with inactivated SARS-CoV-2 demonstrated a reduction in detectable genomic material after disinfection [A number of studies involved the swabbing of surfaces in hospitals housing COVID-19 patients and were able to demonstrate reduction in the detection of SARS-CoV-2 genomic material after disinfection \u201379. Howenfection .Barrier devices: A type of barrier device was used during aerosol generating procedures on COVID-19 positive patients in healthcare settings. There are studies that report zero or reduced transmission to HCWs during procedures where such barrier devices were employed [e.g. yed e.g. , 82. Howyed e.g. . This suyed e.g. . They us. 6Evidence from the literature suggests that EC NPIs of ventilation, air cleaning devices and limiting room-occupancy may have a role in reducing transmission in specified settings. However, it is important to recognize that this conclusion is based on evidence which was usually of low or very low quality and hence the level of confidence ascribed to it is low. There were two significant challenges that limited the confidence in evidence for the effectiveness of many NPIs examined: (a) the low number of studies and (b) the low-quality assessment of the identified studies. What does this mean for the future? It is recommended that future studies on NPIs should be prioritized where there is a current lack of evidence on the effectiveness on transmission and where they have significant implementation cost including: (i) enhanced surface disinfection, (ii) use of barrier-devices, (iii) Many of the studies identified herein had a critical risk of bias mainly due to confounding factors. It is suggested that international level checklists/guidelines/protocols for both field and laboratory studies on pathogen transmission are established in order to ensure optimum utilization of available research resources. 
A more standardized approach focused on reducing confounding factors would equip future researchers with the tools that would enable a higher degree of confidence to be associated with their conclusions.Only 19 studies from the initial dataset of 13\u2009971 references addressed the issue of the effectiveness of EC NPIs on the transmission of SARS-CoV-2 in humans or animals . The paucity and low quality of evidence makes it challenging to draw firm conclusions regarding the effectiveness of implementing these NPIs in the future to control the spread of SARS-CoV-2. This is exacerbated by apparently contradictory findings for almost all NPIs investigated (in that at least one study reported the opposite conclusion to the others). The review did not involve simply counting the number of studies for and against; the robustness of each study and findings were assessed and the extent to which confounding factors played a role was also considered. A majority of the studies were found to provide only low-quality evidence mainly due to the presence of many confounding factors in the study design.The evidence identified for surface disinfection and barrier devices (screens) does not permit conclusions to be drawn regarding their effectiveness against SARS-CoV-2 transmission. While this does not mean that they are ineffective, their effect on SARS-CoV-2 transmission is not yet known. No studies were found that discussed the effect of Evidence, although of low quality, was found which showed increased ventilation and use of air cleaning devices reduced the transmission of SARS-CoV-2 in some situations. There was evidence, also of low quality, that decreasing the occupancy levels within some settings was found to be effective in reducing the transmission of SARS-CoV-2. An important caveat is that the evidence for these measures is limited to the settings that were studied and cannot necessarily be extrapolated beyond these. There is no evidence on the implementation of EC NPIs that provides any information on their effectiveness in altering transmission of SARS-CoV-2 at a community or population level.In summary, this review has highlighted that there are significant knowledge gaps regarding the effectiveness of ECs in limiting transmission of SARS-CoV-2. It is extremely challenging to conduct controlled studies in the midst of a pandemic. However, it is important that lessons can be learned in these circumstances and protocols should be established to study the effectiveness of ECs using observational approaches during a pandemic. It is equally important that rigorous controlled studies are undertaken to study the effectiveness of ECs in experimental studies before another pandemic strikes."} {"text": "Elective surgery can be overwhelming for children, leading to pre-operative anxiety, which is associated with adverse clinical and behavioural outcomes. Evidence shows that paediatric preparation digital health interventions (DHIs) can contribute to reduced pre-operative anxiety and negative behavioural changes. However, this evidence does not consider their design and development in the context of behavioural science. This systematic review used the Theoretical Domains Framework (TDF) to evaluate the design and development of DHIs used to support children up to 14 years of age and their parents, prepare for hospital procedures, and determine any correlation to health outcomes. 
It also considered whether any behavioural frameworks and co-production were utilised in their design.A search of the MEDLINE, EMBASE, PsycINFO, and HMIC databases was carried out, looking for original, empirical research using digital paediatric preparation technologies to reduce pre-operative anxiety and behavioural changes. Limitations for the period (2000\u20132022), English language, and age applied.Seventeen studies were included, sixteen randomised control trials and one before and after evaluation study. The results suggest that paediatric preparation DHIs that score highly against the TDF are (1) associated with improved health outcomes, (2) incorporate the use of co-production and behavioural science in their design, (3) are interactive, and (4) are used at home in advance of the planned procedure.Paediatric preparation DHIs that are co-produced and designed in the context of behavioural science are associated with reduced pre-operative anxiety and improved health outcomes and may be more cost-effective than other interventions.https://www.crd.york.ac.uk/prospero/, identifier: CRD42022274182. AnaesthHeightened pre-operative anxiety can lead to poor anaesthesia induction, an increased risk of emergence delirium, pain, inconsolable crying, irritation, incoherency, and uncooperativeness . These fVarious pharmacological and non-pharmacological interventions have been used to reduce pre-operative anxiety in children and improve post-operative psychological and physiological outcomes. Pharmacological interventions include anti-anxiety and sedative drugs, but these commonly cause adverse effects such as drowsiness and can interfere with anaesthesia medication . Non-phaThe use of pre-operative preparation interventions indicates that well-prepared children have reduced pre-operative anxiety and negative responses to surgery or medical procedures \u201316. Pre-Behavioural science is interested in aspects such as behavioural change, in this case, the design and development of paediatric preparation DHIs and their impact on children's emotional, behavioural, and clinical outcomes. Due to the lack of understanding between the preparation DHIs and behavioural change, this systematic review builds upon this research. It looks specifically at the design and development of paediatric preparation DHIs through the application of the Theoretical Domains Framework (TDF). It applies the 14 domains of the TDF to assess the components of DHIs and examines whether there is a correlation to improved outcomes. The TDF was developed from the synthesis of 33 behaviour change theories into a framework comprising 14 domains and 84 behaviour constructs, founded on the Behaviour Change Wheel . The Beh1.1.Children undergoing medical procedures, anaesthesia, and surgery experience significant psychological and physiological reactions. The unfamiliar environment, the equipment and routines, fear of separation, needles, and the medical procedure are well documented as sources of these negative reactions \u201329. Thes1.1.1.Pharmacological interventions include anti-anxiety and sedation medications, such as Midazolam, Fentanyl, Ketamine, and Clonidine. These are used as effective pre-operative anxiolytic and sedation medications in children, which reduce pre-operative nausea and vomiting, enable satisfactory separation from parents and anaesthesia induction, and reduced the need for post-operative analgesics \u201335. 
HoweDue to these adverse side-effects, non-pharmacological interventions have increasingly been used to manage pre-operative anxiety. Research on the use of parental presence is mixed. Some papers suggest it has been used to provide reassurance and comfort, eliminate separation anxiety and reduce the need for medications, while other papers suggest it can increase anxiety if parents themselves are anxious \u201339. DistWithin the last 20 years, there has been increased research into the use of digital technologies such as DHIs to manage pre-operative anxiety either through distraction , 48, 49 1.1.2.The TDF provides a validated framework, developed to provide a more accessible and usable tool to support improving the implementation of evidence-based practice. By bringing together a range of behaviour theories and key theoretical constructs, a simple and integrated framework is provided to inform and assess intervention design and implementation . The TDFThis study aimed to evaluate the design and development of paediatric preparation DHIs used to support children up to 14 years of age, and their parents, to prepare for hospital procedures, and to understand their impact on their health outcomes. The primary objective was to evaluate the design and development of paediatric preparation DHIs against the TDF and ascertain whether any behavioural frameworks and co-production were used. A secondary objective, and in the context of the previous systematic reviews , 20, 53,2.The study protocol is publicly available under registration number CRD42022274182 on the International Prospective Register of Systematic Reviews (PROSPERO). The inclusion and exclusion criteria were bui2.1.The OVID databases that were selected were MEDLINE, EMBASE, PsycINFO, and HMIC. A mix of keywords and Medical Subject Headings (MESHs) was used to search for themes. The search was carried out in February 2022 using the complete syntax with truncation for each database as outlined in 2.2.The preliminary search returned 931 records; 363 duplicate records were identified, and 176 records were removed. A total of 730 records remained, and these progressed to the stage of title and abstract screening . Two revData relevant for extraction were considered against the aims and objectives of the review . SupplemThe synthesis and analysis were first assessed, on the basis of the degree of homogeneity , 81, in 1.input from one or more healthcare professionals, children, and parents, and2.use of any behavioural frameworks.During pilot testing of the modified TDF against a few studies, it was decided that the TDF\u2019s \u201csocial/ professional role and identity\u201d domain was not applicable. This was attributed to its focus on the behaviours and displayed personal qualities in a social or work setting, whereas the TDF domains were being used to assess the design of digital intervention in respect of use by children and their parents. It was subsequently removed and the scoring for the evaluation of the DHIs was adjusted to be out of 15 domains.The digital health interventions in the studies are aimed at changing behaviour to reduce pre-operative anxiety through education, information, and coping strategies. 
The TDF was chosen to evaluate the design and development of the digital health interventions within the context of behavioural science, as it is a validated tool for assessing implementation issues, supporting intervention design, and analysing interventions , 26, 82.For each domain in the modified TDF, the domain descriptions were used to develop a criteria checklist to guide the evaluation of the DHIs. The criteria checklist considered what information, activities, techniques, or actions the DHIs should incorporate to meet the domain descriptions. This was tested against a sample of the DHIs to refine the criteria checklist. Each DHI was then assessed against each domain criteria checklist and a score applied depending on whether the DHI fully met, partially met, or did not meet the requirements in the criteria checklist. To determine any correlation between the evaluation of the development of the DHIs and the reported outcomes, quantitative data was converted into a summary statistic. Specifically, this examined what outcomes were measured and how, whether there was a noticeable measure of effect, and how it correlated to the scoring from the DHI evaluation. To ensure that the data analysis met the requirement of systematic review transparency, established reporting guidelines were followed .d, Glass's delta, and Hedges\u2019 g or the state anxiety between groups. State anxiety on arrival at the hospital was significantly lower in the DHIG with a negative medium effect compared with that in the control group. Similarly, Wantanakorn et al. (p\u2009=\u20090.82) to after its application (p\u2009=\u20090.012) within the DHIG, with a negative medium effect (d\u2009=\u20090.6). This suggests that the DHIs positively impacted levels of anxiety in these two studies. It is noted that both DHIs included interactive elements and scored fully in the domains for behavioural regulation, beliefs about capabilities, and reinforcement. However, Wantanakorn et al. (d\u2009=\u20090 and d\u2009=\u20090.03). Parental self-reported state anxiety (STAI-S) increased pre-procedure and decreased post-procedure but with a notable increase in anxiety in the DHIG compared with that in the two control groups. A medium positive effect occurred between the DHIG and control group 1 (d\u2009=\u20090.58) and a small positive effect between the DHIG and control group 2 (d\u2009=\u20090.48) pre-procedure, changing to a small positive effect compared with control group 1 (d\u2009=\u20090.43) and no effect compared with control group 2 (d\u2009=\u20090.15) post-procedure. Fernandes et al. (p\u2009<\u20090.001). This translated into a large negative effect between the DHIG and each of the controls. In addition, the video game control group had lower levels of worries compared with the no intervention control group. SAM results showed no significant differences in valence or arousal (happiness) between the groups before and after the interventions. Despite this, a small effect (d\u2009=\u20090.25) was calculated between the DHIG and control group 1 for valence post-intervention. For arousal in the DHIG compared with the control groups, a small effect occurred pre-intervention (d\u2009=\u20090.20 and d\u2009=\u20090.4) and a medium effect post-intervention . Parental anxiety in the DHIG was significantly lower with a negative medium effect compared with that in control group 1 but comparable with no effect compared with that in control group 2 (d\u2009=\u20090.08). 
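The summary-statistic step described above, namely standardised mean differences such as Cohen's d with the Hedges small-sample correction and, where only medians and interquartile ranges were reported, an approximate conversion to means and standard deviations, can be sketched as follows; the conversion formulas shown (mean approximated by (q1 + median + q3)/3 and SD by IQR/1.35) assume roughly normal data and are one common choice rather than necessarily the exact method used in the review.

# Sketch of the effect-size calculations described above. The median/IQR
# approximation assumes roughly normal data; it is one common choice and not
# necessarily the exact formula used in the review.
import math

def approx_mean_sd(q1, median, q3):
    """Approximate a mean and SD from a median and interquartile range."""
    mean = (q1 + median + q3) / 3.0
    sd = (q3 - q1) / 1.35
    return mean, sd

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference using the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

def hedges_g(d, n1, n2):
    """Small-sample correction of Cohen's d."""
    return d * (1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0))

# made-up anxiety scores: DHI group reported as median (IQR), control as mean (SD)
m_dhi, sd_dhi = approx_mean_sd(28.0, 33.0, 39.0)
d = cohens_d(m_dhi, sd_dhi, 40, 41.0, 9.5, 38)
print(f"d = {d:.2f}, g = {hedges_g(d, 40, 38):.2f}")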
This DHI was developed using a behavioural framework and co-production with children and healthcare professionals. It also met the modified TDF domains for emotion and reinforcement, scoring fully across seven domains. Fortier et al. (p\u2009=\u2009004) in the DHIG than in the control group pre-procedure and post-intervention, with a medium negative effect (d\u2009=\u20090.65). Anxiety remained lower in the DHIG at separation but was not statistically significant and had a small negative effect (d\u2009=\u20090.25). Wakimizu et al. (d\u2009=\u20090.45) and 1 month after the procedure (d\u2009=\u20090.27), and a partial small effect occurred at 1 week after the procedure (d\u2009=\u20090.2). Wakimizu et al. (d\u2009=\u20090.60) effect post-operatively and a negative small effect (d\u2009=\u20090.23) at 1 week after the procedure, and all other time points showed no effect. Campbell et al. (p\u2009=\u20090.790) before the intervention across all three groups . However, during induction and recovery, the observer-rated child VAS to determine anxiety levels shows a decrease in anxiety across all groups over time. A significant change was noted between the DHIG and control group 1 at induction (p\u2009=\u20090.014) and between the DHIG and control group 2 at recovery (0.016). The effect could not be calculated because of a non-normal distribution of data. While the results of these two studies suggest that the DHI had some impact, albeit a small effect for Wakimizu et al. (p\u2009=\u20090.009) in the DHIG post-intervention and with a negative medium effect (0.67).Bray et al. revealedn et al. revealedn et al. only parn et al. scored fn et al. showed ts et al. assessedr et al. parentalu et al. showed cu et al. also foul et al. found seu et al. , it is n3.5.1.2.p\u2009=\u20090.407) and after . Likewise, self-reported STAI and observed VAS parental anxiety were comparable between the control group and the DHIG immediately after child induction with no effect observed in the STAI results (d\u2009=\u20090.01). Campbell et al. between the I-PPP and the usual care groups in the holding area and to a medium negative effect (d\u2009=\u20090.53) and small negative effect (d\u2009=\u20090.34) between the DHIG and the usual care and I-PPP\u2009+\u2009parent groups, respectively. The lower anxiety levels in both the I-PPP and the I-PPP\u2009+\u2009parent groups to the usual care group suggest that the DHI positively impacted anxiety levels. When considered against the higher parental anxiety STAI-S scores in the control groups, it was possible that parental anxiety may have impacted child anxiety. Fortier et al. and again considerably during induction . Parental STAI anxiety scores followed a similar trend to that of the children. The DHI used by Hatipoglu et al. (p\u2009<\u20090.001). A large negative effect was calculated between the DHIG and the control groups, respectively . The DHIs used by Wantanakorn et al. in observer-rated m-YPAS child anxiety after the use of the DHI in the DHIG compared with the control group. A medium negative effect (d\u2009=\u20090.6) and large negative effect (d\u2009=\u20090.9) were calculated. Ryu et al. between the DHIG and the control group after the use of the DHI 1\u2005h before the procedure, with negative effects of small (d\u2009=\u20090.47) and large (d\u2009=\u20090.80) in the first two. The effect could not be calculated for Ryu et al. (p\u2009=\u20090.01). Ryu et al. . Fortier et al. (p\u2009=\u20090.04), with a small negative effect (d\u2009=\u20090.45). 
Post-operative behaviour was measured by Hatipoglu et al. (p\u2009<\u20090.001) between control group 1 and both control group 2 (voice recording) and the DHIG. The effect size between the DHIG was large to control group 1 (d\u2009=\u20092.049) and small to control group 2 (d\u2009=\u20090.31). In addition, they showed that anxious children had a 1.03 times greater risk of adopting negative post-operative behaviours.The DHIs used by Wright et al. and Fortr et al. found a u et al. was a vin et al. and Ligun et al. were useu et al. , 61, 70 u et al. reportedu et al. measuredr et al. measuredu et al. using th3.5.2.2.d\u2009=\u20090.02 at admission before intervention and d\u2009=\u20090.01 in the holding area after the intervention. Although the intervention was used a week before the procedure, Huntington et al. (d\u2009=\u20090.21) was calculated between the DHIG and control group 2 (handwashing game) both pre- and at induction. For induction behaviour and compliance, Ryu et al. (p\u2009=\u20090.92). Huntington et al. . Park et al. (d\u2009=\u20090.07). Ryu et al. . For behaviour, Ryu et al. among children in the two groups. Eijlers et al. (p\u2009=\u20090.251).Eijlers et al. found non et al. found nou et al. found PBn et al. found nok et al. ICC resuu et al. and Eijlu et al. also meau et al. used thes et al. used the3.5.3.d\u2009=\u20090.45, d\u2009=\u20090.36) increasing to a medium negative effect post-intervention in the DHIG compared with the control groups.The study by Fernandes et al. was the 3.5.4.p\u2009=\u20090.410, p\u2009=\u20090.454, and p\u2009=\u20090.30, respectively. For patient flow, Huntington et al. (d\u2009=\u20090.31) and spent less time on the ward compared with control group 1 with a small negative effect (d\u2009=\u20090.28). Fortier et al. (p\u2009=\u20090.708) or recovery (p\u2009=\u20090.26) time. Medication usage for analgesic consumption was recorded by Fortier et al. , and overall, a small effect (d\u2009=\u20090.22) was calculated between the need for rescue analgesia in the DHIG compared with the control group. Stunden et al. (p\u2009=\u20090.07) nor among the three groups (p\u2009=\u20090.27). The chi-square p-value effect calculated a small effect (d\u2009=\u20090.43) in average successful MRI and a small negative effect (d\u2009=\u20090.26) between the groups, with the DHIG (VR-MRI) being on average less successful at 30% compared with control group 1 (SPM) at 47% and control group 2 (CLP) at 50%. Preparation time and assessment time were measured in minutes. Preparation time between the groups was significantly different (p\u2009<\u20090.001) and had a medium effect size (\u03b72\u2009=\u20090.57), with the DHIG preparing the longest on average at 22.05\u2005min. However, assessment time was comparable across the groups with no significant difference (p\u2009=\u20090.13).Clinical status was assessed in five studies , 68, 71 n et al. measuredr et al. , with aln et al. used hea3.5.5.p\u2009<\u20090.001) and parents or caregivers (p\u2009=\u20090.01) in the DHIG compared with the control group. The calculated effect was positively large for children (d\u2009=\u20091.11) and positively medium (d\u2009=\u20090.59) for parents and caregivers. Procedural satisfaction in children and parents was not statistically significant (p\u2009=\u20090.10 and p\u2009=\u20090.72) but was higher in the DHIG than in the control group, with a small positive effect in children (d\u2009=\u20090.37). Stunden et al. and whether it improved their child's ability to cope . 
Ryu et al. . Park et al. (p\u2009=\u20090.008). Wright et al. (d\u2009=\u20090.20) was calculated against control group 1 and a medium negative effect (d\u2009=\u20090.50) was calculated against control group 2. Stunden et al. (p\u2009=\u20090.03), and of the 20 children who completed the form, they liked the different components. Wakimizu et al. were satisfied.DHI usability, satisfaction, and/or knowledge and understanding were assessed using seven different measures across eight studies , 70, 73.um d\u2009=\u20090. for paren et al. measuredn et al. evaluateu et al. , 70 usedk et al. did findt et al. measuredum d\u2009=\u20090. for paren et al. used a 53.6.The risk of bias across the 16 randomised control trials was generally concerning, with 68.8% having an overall result of some concern \u201370, 73 aAll studies used a random group allocation sequence, with this being computerised in eight studies , 70\u201372. Bias due to deviations from intended interventions (D2) was low across 50% of studies. Of the five studies considered to have some concern of bias in this domain, three , 62, 63 Bias due to missing outcome data (D3) was low across 88% of the studies and considered high for two studies. Ryu et al. excludedBias for measurement of outcome (D4) was deemed low in 50% of studies , 70, 71 Most studies 68.8%) had some concern for bias in D5 \u201cselection of the reported results\u201d. This was due to an inability to confirm whether the outcome data were analysed following a finalised pre-specified analysis plan before unblinded outcome data were made available for analysis. This according to Cochrane RoB2 guidelines means th.8% had sMedical trials entail a comprehensive understanding of clinical ethics, with those involving children complicated by stricter standards than those involving adults . In addi4.1.health outcomes observed,2.co-production and use of behaviour frameworks,3.type of DHIs, and4.timing and location of the DHIs used.DHIs are increasingly being used to prepare children and their parents for hospital procedures, aiming to reduce pre-operative anxiety and improve health outcomes. It is evidenced that well-prepared children are associated with reduced pre-operative anxiety and that DHIs can be an effective preparation method \u201316. This4.1.All studies included in this review assessed child anxiety either as an emotion or as a feeling or behavioural response. Compared with children in the control group(s), 14 studies (82%) showed that children using the DHIs were associated with lower anxiety levels and the DHI had a positive impact, with this corresponding to the result of the effect size calculations where they could be calculated. This differed for three studies (17%), which showed anxiety levels were similar and the DHIs had no or little impact and effect. Given that higher pre-operative anxiety is a predictor of negative behavioural changes, the results for measures such as emergence delirium, induction behaviour, and induction compliance were mixed, although they were considered only in a small number of the included studies. For the three studies , 68, 71 The first finding is that paediatric preparation DHIs scoring higher against the modified TDF are more likely to be associated with reduced anxiety and reduced negative behavioural changes, as they will provide detailed information on the planned procedure and encompass information on coping with emotions, feelings, and anxiety , 13. 
Bra4.2.The second finding is that preparation DHIs scoring higher against the modified TDF are more likely to have used co-production and a behavioural framework in their design and development. Aufegger et al. stated tThe three higher scoring studies , 62, 71 4.3.The third finding is that the type of preparation DHI plays an important role in achieving a higher score against the modified TDF, with this being intrinsically linked to interactivity and rewards or achievements. In a previous qualitative study , childre4.4.The fourth finding is that the timing and location of the preparation DHI lends itself to a higher score against the modified TDF. Three of the highest scoring DHIs, by Bray et al. , Wright 4.5.This study's strength is that it evaluates the design and development of DHIs used in preparing children for hospital procedures, correlating this against effectiveness in improving outcomes. Previous systematic reviews , 98\u2013100 There are limitations to this study. The first and second limitations relate to the search strategy and data extraction. While the search strategy was considered comprehensive, it was limited to papers in English published within the last 22 years, with the period being to ensure the relevance of the studies. When snowballing references of included papers and previous systematic reviews, a few papers published before the year 2000 may have been relevant for inclusion.The third was the inability to conduct a meta-analysis because of the presence of heterogeneity across the included studies. Consequently, effect sizes were calculated, but not all studies reported the mean and standard deviation. It was, therefore, necessary to convert the median and interquartile ranges into a mean and standard deviation to then calculate the effect size. However, due to insufficient information to determine proximity to a normal distribution, the results may potentially be skewed. Some data reported in the studies were not amendable to calculating the effect size, and for these studies, the results were only narratively synthesised.Finally, the level of information contained within some of the study papers to describe the DHIs was minimal, with supporting resources not found. This was a factor in the inability to draw meaningful conclusions against many of the modified TDF domains.4.6.The quality of the studies was predominately moderate, with five studies having an overall high risk of bias. However, when considering the individual risk of bias in each of the five domains, it generally ranged from low to some concern, with most of the concerns linked either to an uncertainty of, or to a confirmed lack of, blinding of participants or those assessing the data, or to a lack of information in the papers to make a judgement. This was within the domains for \u201crandomisation process\u201d and \u201cselection of reported results\u201d, with the latter predominately linked to uncertainty on whether the analysis plan was finalised before results were assessed and the trial protocol not being readily available to verify, rather than the results being biased.4.7.It is considered that this study is the first to use an adapted version of the TDF to assess the design and development of DHIs used to prepare children for hospital procedures. The four key findings from this study suggest that the TDF can be used to analyse the effects of preparation DHIs, and by using theory-driven behavioural science, their design can be redressed accordingly to improve health outcomes. 
While these findings contribute to this field of study, further research is required to validate them. Furthermore, research is required to understand the developmental costs of these preparation DHIs and whether they are cost-effective compared with the traditional form of pre-operative preparation.
5. The Theoretical Domains Framework is a validated tool designed to enable the evaluation of behaviour change and can be used to assess implementation issues, support intervention design, and analyse interventions. This study applied an adapted version of the Theoretical Domains Framework to assess the design and development of DHIs used to prepare children for hospital procedures. The main findings from this assessment are that DHIs scoring highly against the modified TDF are:
1. associated with positive health outcomes,
2. influenced by the use of co-production and behavioural science in their design and development,
3. interactive,
4. used a few days to a week in advance of the planned procedure within the comfort of the child's own home.
These four findings together are associated with reduced anxiety and reduced negative behavioural changes in the DHIs that scored the highest against the modified TDF. Furthermore, well-designed and well-developed DHIs that can be used in the child's own home and in advance of the planned procedure may be more cost-effective, reflecting the reduced staff time for on-the-day preparation and the potential longer-term reduction in healthcare utilisation. Paediatric preparation DHIs that are designed in the context of behavioural science and with co-development from healthcare professionals, children, and their parents are more likely to be associated with reduced pre-operative anxiety and have the potential for improving health outcomes. Furthermore, the use of paediatric preparation DHIs well in advance of planned invasive and non-invasive procedures may be more cost-effective than traditional preparation programmes such as Child Life Specialists or hospital tours, which require staff time, resourcing, and planning around the child's procedure. By enabling pre-operative information to be provided digitally in the child's own home, these costs could be reduced. However, further research is required into the cost–benefit of this approach weighed against the developmental costs associated with the DHIs, particularly those that have been shown to be more effective in reducing pre-operative anxiety."} {"text": "The rapid development of diagnostic technologies in healthcare is leading to higher requirements for physicians to handle and integrate the heterogeneous, yet complementary data that are produced during routine practice. For instance, the personalized diagnosis and treatment planning for a single cancer patient relies on various images and non-image data . However, such decision-making procedures can be subjective, qualitative, and have large inter-subject variabilities. With the recent advances in multimodal deep learning technologies, an increasingly large number of efforts have been devoted to a key question: how do we extract and aggregate multimodal information to ultimately provide more objective, quantitative computer-aided clinical decision making? This paper reviews the recent studies on dealing with such a question.
Briefly, this review will include the (a) overview of current multimodal learning workflows, (b) summarization of multimodal fusion methods, (c) discussion of the performance, (d) applications in disease diagnosis and prognosis, and (e) challenges and future directions. The heterogeneous data would provide different views of the same patient to better support various clinical decisions (e.g. disease diagnosis and prognosis -3). HoweMany works have achieved great success in using a single modality to make a diagnosis or prognosis with deep learning methods -10. Howeet al [et al [et al [et al [et al [Several surveys have been published for medical multimodal fusion -15. Boehet al reviewedl [et al and Stahl [et al and Lu el [et al divided In this survey, we collected and reviewed 34 related works published within the last 5 years. All of them used deep learning methods to fuse image and non-image medical data for prognosis, diagnosis, or treatment prediction. This survey is organized in the following structure: 2.2.1.This survey only includes published studies (with peer-review) that fuse both image and non-image data to make a disease diagnosis or prognosis in the past five years. All of them used feature-level deep learning-based fusion methods for multimodal data. A total of 34 studies that satisfied these criteria are reviewed in this survey.2.2.A generalized workflow of collected studies is shown in Based on the stage of multimodal fusion, the fusion strategies can be divided into feature-level fusion and decision-level fusion , 15. Fea2.3.In this survey, multimodal fusion is applied to the disease diagnosis and prognosis. The disease diagnosis tasks include classifications such as disease severity, benign or malignant tumors, and regression of clinical scores. Prognosis tasks include survival prediction and treatment response prediction.After obtaining the multimodal representations, multi-layer perceptrons (MLP) were used by most of the reviewed studies to generate the prognosis or diagnosis results. The specific tasks of diagnosis and prognosis can be categorized into regression or classification tasks based on discrete or continuous outputs. To supervise the modal training, the cross-entropy loss is usually used for classification tasks, while the mean square error (MSE) is a popular choice for regression tasks. To evaluate the results, the area under the curve (AUC) of receiver operating characteristics (ROC), mean average precision (mAP), accuracy, F1-score, sensitivity, and specificity metrics are commonly used for classification, while the MSE is typically used for regression. However, although the survival prediction is treated as a time regression task or a classification task of long-term/short-term survival, the Cox proportional hazards loss function is popul3.Due to multimodal heterogeneity, separate preprocessing and feature extraction methods/networks are required for different modalities to prepare unimodal features for fusion. As shown in 3.1.3.1.1.Pathology images analyze cells and tissues at a microscopic level, which is recognized as the \u2018gold standard\u2019 for cancer diagnosis . The 2D 3.1.2.Radiology imaging supports medical decisions by providing visible image contrasts inside the human body with radiant energy, including MRI, CT, positron emission tomography (PET), fMRI and x-ray, etc. 
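Section 3 above describes how each modality is preprocessed and passed through its own feature extractor to produce a fixed-length feature vector before fusion. The PyTorch sketch below illustrates that idea with a deliberately small convolutional encoder for a 2D radiology slice and an MLP encoder for standardized clinical variables; the layer sizes are illustrative assumptions and do not correspond to any specific network used in the reviewed studies.

```python
# Minimal sketch (PyTorch) of modality-specific encoders that map each input
# to a fixed-length feature vector ready for fusion. Layer sizes are
# illustrative assumptions, not the architectures used in the reviewed papers.
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Embeds a preprocessed 2D radiology slice into a feature vector."""
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):                     # x: (batch, 1, H, W)
        return self.fc(self.conv(x).flatten(1))

class TabularEncoder(nn.Module):
    """Embeds standardized / one-hot encoded clinical variables."""
    def __init__(self, in_dim: int, out_dim: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )

    def forward(self, x):                     # x: (batch, in_dim)
        return self.mlp(x)

# Toy usage with random tensors standing in for real, preprocessed data.
img_feat = ImageEncoder()(torch.randn(4, 1, 64, 64))      # (4, 128)
tab_feat = TabularEncoder(in_dim=10)(torch.randn(4, 10))  # (4, 32)
print(img_feat.shape, tab_feat.shape)
```

In the reviewed workflow such unimodal feature vectors are what the downstream fusion module and the prediction head (trained with cross-entropy or MSE losses, as Section 2.3 notes) operate on.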
To embed the intensity standardized 2D or 3D radiology images into feature representations with learning-based encoders -36, 87 o3.1.3.In addition to pathology and radiology images, some other kinds of medical images captured by optical color cameras are categorized as camera images; examples of these camera images include dermoscopic and clinical images for skin lesions , 45, end3.2.The non-image modalities contain lab test results and clinical features. Laboratory tests check a sample of blood, urine, or body tissues, access the cognition and psychological status of patients, and analyze genomic sequences, etc. Clinical features include demographic information and clinical reports. These modalities are also essential to diagnosis and prognosis in clinical practice. The non-image data on the reviewed works can be briefly divided into structured data and free-text data for different preprocessing and feature extraction methods.3.2.1.et al [Most of the clinical data and lab test results in the reviewed works are structured data and can be converted to feature vectors easily. In preprocessing, categorical clinical features were usually converted through one-hot encoding, while the numerical features were standardized , 34, 35.et al used sofet al , 47, 67.et al , 24, andet al , 68.3.2.2.et al [Clinical reports capture clinicians\u2019 impressions of diseases in the form of unstructured text. In order to deal with the free text data and extract informative features from the free-text, natural language processing techniques are implemented. For example, Chauhan et al preparedet al . Then, tet al model inet al was usedet al was trieet al for textet al [et al [et al [et al [et al [et al [et al [After using the above modal-specific preprocessing and feature extraction methods, the unimodal representations could be converted to feature maps or feature vectors. For feature vectors, in order to learn more expressive features with expected dimensions, EI-Sappagh et al used pril [et al exploredl [et al used thel [et al replicatl [et al deconvoll [et al used bidl [et al applied l [et al , 68, vecl [et al , 34, 35,l [et al .Unimodal feature extraction can be either unsupervised or supervised. Note that if the unimodal features are trained prior to fusion, it is possible to train the unimodal model with the maximum number of available samples of each modality for better unimodal model performance and hopefully better unimodal features to benfit the multimodal performance , 67, 84.4.et al [et al [Fusing the heterogeneous information from multimodal data to effectively boost prediction performance is a key pursuit and challenge in multimodal learning . Based oet al implemenl [et al showed tl [et al , 61 also4.1.et al [p-value > 0.05), while the element-wise summation and multiplication methods required less trainable parameters in the following FCN. After comparing the learned non-image features by FCN and the original non-image features, the former ones achieved superior performance. Meanwhile, the concatenation of the feature vectors outperformed the concatenation of logits from the unimodal data. Yan et al [et al [To combine different feature vectors, the common practice is to perform simple operations of concatenation, element-wise summation, and element-wise multiplication. These practices are parameter-free and flexible to use, but the element-wise summation and multiplication methods always require the feature vectors of different modalities to be converted into the same shape. 
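The operation-based fusion described above reduces to a handful of tensor operations once the unimodal feature vectors are available. A minimal PyTorch sketch follows; the feature dimensions and the small prediction head are illustrative assumptions rather than a re-implementation of any specific reviewed model.

```python
# Minimal sketch of the three "operation-based" fusion steps described above.
# feat_img and feat_tab stand in for unimodal feature vectors
# (e.g. from modality-specific encoders); dimensions are illustrative.
import torch
import torch.nn as nn

feat_img = torch.randn(4, 128)   # image features      (batch, 128)
feat_tab = torch.randn(4, 32)    # non-image features  (batch, 32)

# 1) Concatenation: works for vectors of different lengths.
fused_cat = torch.cat([feat_img, feat_tab], dim=1)            # (4, 160)

# 2) Element-wise summation / multiplication: both vectors must first be
#    projected to the same shape, as noted in the text.
proj = nn.Linear(32, 128)
tab_128 = proj(feat_tab)
fused_sum = feat_img + tab_128                                 # (4, 128)
fused_mul = feat_img * tab_128                                 # (4, 128)

# A small classification head on top of the fused representation.
head = nn.Sequential(nn.Linear(fused_cat.shape[1], 64), nn.ReLU(), nn.Linear(64, 2))
logits = head(fused_cat)                                       # (4, 2)
```

Concatenation, being parameter-free and shape-agnostic, is also the baseline that newer fusion methods are typically benchmarked against in the reviewed studies.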
Many early works used one of the simple operations to show that multimodal learning models outperforms unimodal models , 45, 87.et al comparedan et al investigl [et al proposed4.2.et al [et al [et al [The subspace methods aim to learn an informative common subspace of multimodality. A popular strategy is to enhance the correlation or similarity of features from different modalities. Yao et al proposedet al , they prl [et al designedl [et al used thel [et al fused thl [et al , they foet al [et al [Another strategy in the subspace-based fusion method is to learn a complete representation subspace with the encoder-decoder structures. Ghosal et al decoded l [et al also use4.3.et al [et al [et al [et al [et al [et al [et al [et al [et al [et al [et al [Attention-based methods computed and incorporated the importance scores (attention scores) of multimodality features when performing aggregation. This progress simulated routine clinical practice. For example, the information from clinical reports of a patient may inform the clinicians to pay more attention to a certain region in an MRI image. Duanmu et al built anl [et al concatenl [et al calculatl [et al proposedl [et al proposedl [et al aggregatl [et al and a bal [et al . Guan etl [et al applied l [et al in theirl [et al . In addil [et al built a l [et al , and thel [et al exploitel [et al and finel [et al used theet al [The above attention-based fusion methods rescaled features through complementary information from another modality, while P\u00f6lsterl et al proposedet al .4.4.et al [et al [et al [The tensor-based fusion methods conducted outer products across multimodality feature vectors to form a higher order co-occurrence matrix. The high-order interactions tend to provide more predictive information beyond what those features can provide individually. For example, blood pressure rising is common when a person is doing high-pressure work, but it is dangerous if there are also symptoms of myocardial infarction and hyperlipidemia . Chen etet al proposedet al to combiet al was addel [et al not onlyl [et al . More rel [et al followedl [et al and exte4.5.et al [et al [A graph is a non-grid structure to catch the interactions between individual elements represented as nodes. For disease diagnosis and prognosis, nodes can represent the patients, while the graph edges contain the associations between these patients. Different from CNN-based representation, the constructed population graph updates the features for each patient by aggregating the features from the neighboring patients with similar features. To utilize complementary information in the non-imaging features, Parisot et al proposedl [et al built gr5.In the above sections, we reviewed recent studies using deep learning-based methods to fuse image and non-image modalities for disease prognosis and diagnosis. The feature-level fusion methods were categorized into operation-based, subspace-based, attention-based, tensor-based, and graph-based methods. The operation-based methods are intuitive and effective, but they might yield inferior performance when learning from complicated interactions of different modalities\u2019 features. However, such approaches (e.g. concatenation) are still used to benchmark new fusion methods. Tensor-based methods represent a more explicit manner of fusing multimodal features, yet with an increased risk of overfitting. Attention-based methods not only fuse the multimodal features but compute the importance of inter- and intra-modal features. 
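Two of the fusion families summarized above lend themselves to compact illustrations: attention-based rescaling, where one modality produces importance scores for another (echoing the example of a clinical report directing attention within an MRI), and tensor-based fusion via an outer product that captures cross-modal interactions. The PyTorch sketch below shows both in a toy setting; the gating layer and dimension choices are illustrative assumptions, not any particular published architecture.

```python
# Hedged sketches of two fusion families discussed in Sections 4.3 and 4.4.
import torch
import torch.nn as nn

feat_img = torch.randn(4, 128)   # image features
feat_tab = torch.randn(4, 32)    # clinical / non-image features

# --- Attention-style gating: the non-image features produce per-channel
#     importance scores that rescale the image features.
gate = nn.Sequential(nn.Linear(32, 128), nn.Sigmoid())
attn_scores = gate(feat_tab)                       # (4, 128), values in (0, 1)
img_attended = feat_img * attn_scores              # rescaled image features
fused_attn = torch.cat([img_attended, feat_tab], dim=1)

# --- Tensor (outer-product) fusion: appending a constant 1 keeps the
#     unimodal terms alongside the bimodal interaction terms.
ones = torch.ones(feat_img.size(0), 1)
img_aug = torch.cat([feat_img, ones], dim=1)       # (4, 129)
tab_aug = torch.cat([feat_tab, ones], dim=1)       # (4, 33)
fused_tensor = torch.einsum('bi,bj->bij', img_aug, tab_aug).flatten(1)  # (4, 129*33)
```

The flattened outer product grows quickly with feature dimensionality, which is one reason the text flags an increased overfitting risk for tensor-based methods.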
Subspace-based methods tend to learn a common space for different modalities. The current graph-based methods employ graph representation to aggregate the features by incorporating prior knowledge in building the graph structure. Note that these fusion methods are not exclusive to each other, since some studies combined multiple kinds of fusion methods to optimize the prediction results. Compared with decision-level fusion for the decision-level fusion, feature-level fusion may gain benefits from the interaction between multimodal features, while the decision-level fusion is more flexible for the combination of multimodalities and thus robust to modality missing problems.Although different fusion methods have different characteristics, how to select the optimal fusion strategy is still an open question in practice. There is no clue that a fusion method always performs the best. Currently, it is difficult to compare the performance of different fusion methods directly, since different studies were typically done on different datasets with different settings. Moreover, most of the prior studies did not use multiple datasets or external testing sets for evaluation. Therefore, more fair and comparative studies and benchmark datasets should be encouraged for multimodal learning in the medical field. Furthermore, the optimal fusion method might be task/data dependent. For example, the decision-level fusion might be more suitable for multimodality with less correlation. However, the theoretical analysis and evaluation metrics are not extensively researched. Some studies show that the fusion at different layers or levels can significantly influence the results , 84, 87.et al [et al [et al [et al [The reviewed studies showed that the performance of multimodal models typically surpassed the unimodal counterparts in the downstream tasks such as disease diagnosis or prognosis. On the other hand, some studies also mentioned that the models fused more modalities may not always perform better than those with fewer modalities. In other words, the fusion of some modalities may have no influence or negative influence on multimodal models , 40, 41.et al used thel [et al used outet al [et al [A concern in this field is data availability. Although deep learning is powerful in extracting a pattern from complex data, it requires a large amount of training data to fit a reasonable model. However, data scarcity is always a challenge in the healthcare area, the situation of the multimodal data is only worse. Over 50% of the reviewed studies used multimodal datasets containing less than 1,000 patients. To improve the model performance and robustness with limited data, the pre-trained networks were used to visualize the activated region of images that were most relevant to the models\u2019 outputs , 53, 67.et al displayel [et al used a pl [et al used leal [et al and Chenl [et al , Cheerlal [et al and Braml [et al displayel [et al visualizl [et al mentione6.This paper has surveyed the recent works of deep multimodal fusion methods using the image and non-image data in medical diagnosis, prognosis, and treatment prediction. The multimodal framework, multimodal medical data, and corresponding feature extraction were introduced, and the deep fusion methods were categorized and reviewed. From the prior works, multimodal data typically yielded superior performance as compared with the unimodal data. Integrating multimodal data with appropriate fusion methods could further improve the performance. 
On the other hand, there are still open questions to achieve a more generalizable and explainable model with limited and incomplete multimodal medical data. In the future, multimodal learning is expected to play an increasingly important role in precision medicine as a fully quantitative and trustworthy clinical decision support methodology."} {"text": "Preterm birth is a worldwide health problem. After unsuccessful transvaginal cerclage, the transabdominal isthmo-cervical cerclage can be indicated. A laparoscopic approach has been described. A search was performed including the combination of: “((cerclage) AND (laparoscopy)) AND (pregnancy)”. A systematic review was performed to compare indications, outcomes, techniques, and safety. 42 articles were found through database search. 30 articles were included for review. By reviewing the literature, the transabdominal cervico-isthmic laparoscopic cerclage is highly effective in selected patients with a history of refractory cervical insufficiency. This technique has a high neonatal survival rate when placed in preconceptional or post conceptional patients. Moreover, laparoscopic cervical cerclage is a safe procedure when laparoscopic expertise is present. A transvaginal cerclage is a procedure that places a stitch or tape within the cervix to mechanically close the cervix to prevent it from opening too early during pregnancy. However, if transvaginal cerclage fails, doctors may recommend a transabdominal isthmo-cervical cerclage using a laparoscopic approach. A systematic review analyzed 30 articles to compare the reasons for surgery, outcomes, techniques, and safety. The review suggests that this procedure is highly effective for certain patients with a history of refractory cervical insufficiency. It appears to be a safe technique with a high rate of survival for newborns and rare complications. Across 184 countries, the rate of preterm babies ranges from 5% to 18% of live births . Indicatet al. and ScibThe research was performed using the PubMed database. Filters applied: in the last 5 years. A search was performed including the combination of the following words: “((cerclage) AND (laparoscopy)) AND (pregnancy)”. The search and selection criteria were restricted to the English language. The selection procedure followed the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) principles and is presented using a PRISMA flow chart . Recent The collection of data was done. Data were grouped depending on the type of clinical studies. Obstetrical outcomes and the differences in the surgical techniques were all reviewed. Inclusion criteria were hospitalized high risk women who underwent a laparoscopic cerclage preconceptionally or post conceptionally.
The operative technique and characteristics were as follows:
- Gestational age at the moment of the cerclage.
- Type of the tape.
- Uterine manipulation.
- Surgical technique.
- Surgical complications.
The pregnancy outcomes observed were as follows:
- Mode of delivery.
- Gestational age at the delivery.
Thirty studies were included in our review. The PRISMA flow chart is presented in et al. [et al. [Lesser et al. and Scib [et al. were theet al. had an important experience in laparoscopic cervical cerclage. They shared the outcomes of 121 pregnancies after a pre-pregnancy laparoscopic cervical cerclage in women at high risk for pre-term from 2007 to 2017 [et al.
published a paper concerning their experience in laparoscopic transabdominal cerclage in 19 pregnant women at 6\u201311\u00a0weeks of gestation [et al. reported the obstetric outcomes of subsequent pregnancies in women who had a laparoscopic transabdominal cerclage left in situ [second pregnancy and 86% of women delivered after 34\u00a0weeks of gestation. On the other hand, the neonatal survival rate was 100% (3/3) in the third pregnancies, and 100% (3/3) of women delivered after 34\u00a0weeks of gestation.Ades to 2017 . The peret al. who underwent a retrospective observational cohort study of patients operated on modified laparoscopic transabdominal cervical cerclage from 2003 to 2018 with a sample size of 299 pregnant women with a mean gestational age of 12.5\u00a0weeks [et al. in 2015, and it differs from the conventional laparoscopic cervical cerclage by the origin of the insertion of the tape \u2013 laterally to uterine vessels and above the uterosacral ligament. The different surgical techniques are detailed in The largest case series was reported by Chung .5\u00a0weeks . There wet al. in 2019: 24 women delivered by cesarean section, with 16/24 after 34\u00a0weeks and 21/24 women producing live births [However, there are few articles introducing the efficiency of laparoscopic cervical cerclage in twin pregnancy. The largest sample of twin pregnancy undergoing laparoscopic cervical cerclage was studied by Huang e births .et al. in 2021 [However, the outcomes of emergency laparoscopic cervical cerclage were introduced in one case series of 5 patients by Kavallaris in 2021 . Patientet al. thoroughly studied the safety and beneficence of laparoscopic transabdominal cervical cerclage in their systematic review with meta-analysis [Moawad analysis , where tet al. [et al. [Laparoscopic cervical cerclage is a surgical approach suggested to prevent premature labor in high-risk women with a history of failed transvaginal cerclage, trachelectomy, or absent vaginal cervix. Hulshoff et al. compared [et al. showed tet al. compared in their meta-analysis 25 studies (1116 patients) on transabdominal cerclage placed by laparotomy to 15 studies (728 patients) on transabdominal cerclage performed by laparoscopy and they reported a higher neonatal survival rate in the laparoscopic group without difference in peroperative complications [A select cohort of women, with a history of second-trimester abortions or early preterm labor due to cervical insufficiency, should benefit from periconceptional counseling for laparoscopic cervical cerclage . These wet al. [et al. [et al. case series. On the other hand, the disadvantage of preconception cerclage is that pregnancy may either never occur or result in an early loss unrelated to cervical incompetence.The laparoscopic cervical cerclage can be provided before conception or after conception, most commonly during the first trimester of pregnancy. There is no cohort study dealing with the difference between the preconceptional and postconceptional laparoscopic cervical cerclage. A systematic review undergone by Tulandi et al. , evaluat [et al. studied et al. experience [In the case of gravid women, using a uterine manipulator during laparoscopic cervical cerclage is not possible. In the modified LCC, four trocars were used with atraumatic forceps for uterine manipulation . Sponge perience . In precperience .et al. [et al. [et al. used Prolene n 1 in their laparoscopic cerclage (5\u20137). Shin et al. [et al. [et al. 
[Two types of tapes are described in the reviewed studies: The conventional Mersilene Tape and Prolene suture. Whittle et al. were the [et al. in choosn et al. used Mer [et al. where Pr [et al. used Proet al. [et al. [Traditional laparoscopic cervical cerclage is based on a three-port laparoscopic approach with a fourth suprapubic assistant port. Four ports are preferable in gravid women to facilitate uterine manipulation. An incision is performed at the level of the utero-vesical fold in the visceral peritoneum and extended laterally to the broad ligaments. It is not necessary to carry out the bladder reflection systematically, while it is preferable in the case of previous cesarean sections. Most surgeons dissected the uterovesical and paravesical spaces and made a broad ligament window after identification of the ureters and uterine arteries. Although the suture may be inserted in either direction, there was no evidence that placing the suture from anterior to posterior has advantages more than the opposite direction. Some authors preferred the anterior to posterior direction for better visualization, and reduced risk of bowel injury and bladder erosions. Straight or straightened needles can be used during this procedure because they presented a more accurate direction of the suture. Furthermore, the suture is then passed at the level of the uterine isthmus medial to the uterine vessels. However, the tape is inserted laterally to the uterine vessels and above the ureters at the level of the uterine isthmus, above the uterosacral ligament in the modified laparoscopic cervical cerclage . This teet al. describe [et al. also deset al. [insitu and concluded a high neonatal survival rate after 34\u00a0weeks of gestation even in third pregnancies.Three different locations of the knot were described: anterior, posterior, and intravaginal knot. Anterior knots have the advantage of avoiding the risk of adhesions in the Douglas pouch, and can be easily removed in laparoscopy, but may increase the risk of erosion into the bladder. On the other hand, posterior knots can be removed via posterior colpotomy in case of pregnancy failure in the second trimester and this allows vaginal delivery. To avoid the unindicated cesarean section at term for removing the intracorporal cerclage knots, some authors described the intravaginal knot method to simplify knot removal . To sum et al. conducteet al. [et al. case series of 121 interval cerclage [et al. [et al. [et al. [Multiple complications are described by reviewing the literature. Some complications were related to the laparoscopic surgery and others to the transabdominal cervical cerclage procedure. Although Moawad et al. reportedcerclage . There w [et al. was the [et al. did not [et al. publisheet al. [et al. case series [et al. study, 176 (85.9%) delivered successfully live births. 60 of 205 patients (29.2%) delivered before 37\u00a0weeks [et al. study [Moawad et al. reportede series , where t37\u00a0weeks . FurtherTo our knowledge, this is a large and new systematic review exploring the surgical technique of the laparoscopic approach of abdominal cerclage with obstetrical outcomes. On the other hand, the small sample size in some of the included studies, their retrospective design, and the lack of standardized criteria for the technique, and timing of the operation represent the major limitations of this systematic review, thus making it difficult to conclude any convincing evidence on the management strategies. 
We also included case reports and case series, thus facing a higher risk of publication bias and decreasing the level of the evidence of our findings. Moreover, indications for delivery are not discussed in any study: patients with an indicated preterm labor should be excluded.Future randomized cohort studies with a larger sample size are required to evaluate the best technique , the most efficient type of tape (Mersilene or Prolene), and the best timing for this procedure (preconception or first trimester) to have better obstetrical outcomes. However, these studies need significant statistical power. Furthermore, the beneficial outcomes of laparoscopic cervical cerclage may be a reason for medical laboratories and manufacturers to invest more in the production of resorbable tape based on native tissue. Moreover, further studies are needed to evaluate the efficiency of laparoscopic cervical cerclage in multiple pregnancies.Although the United Kingdom National Institute for Health and Care Excellence (NICE) classified laparoscopic cerclage as a procedure with limited evidence for success, multiple studies showed that transabdominal cervico-isthmic laparoscopic cerclage is highly effective in selected patients with a history of refractory cervical insufficiency. This technique has a high neonatal survival rate when placed in preconceptional or post conceptional patients. Moreover, laparoscopic cervical cerclage seems to be a safe procedure when the correct skill and laparoscopic expertise are present.After unsuccessful transvaginal cerclage, the transabdominal isthmo-cervical cerclage can be indicated in a selected high-risk patient. A laparoscopic approach has been described.The transabdominal cervico-isthmic laparoscopic cerclage is highly effective in selected patients with a history of refractory cervical insufficiency.This technique has a high neonatal survival rate when placed in preconceptional or post conceptional patients.Laparoscopic cervical cerclage is a safe procedure when laparoscopic expertise is present."} {"text": "BMC Bioinformatics presented guidelines on long-read sequencing settings for structural variation (SV) calling, and benchmarked the performance of various SV calling tools, including NanoVar. In their simulation-based benchmarking, NanoVar was shown to perform poorly compared to other tools, mostly due to low SV recall rates. To investigate the causes for NanoVar's poor performance, we regenerated the simulation datasets (3\u00d7 to 20\u00d7) as specified by Jiang et al. and performed benchmarking for NanoVar and Sniffles. Our results did not reflect the findings described by Jiang et al. In our analysis, NanoVar displayed more than three times the F1 scores and recall rates as reported in Jiang et al. across all sequencing coverages, indicating a previous underestimation of its performance. We also observed that NanoVar outperformed Sniffles in calling SVs with genotype concordance by more than 0.13 in F1 scores, which is contrary to the trend reported by Jiang et al. Besides, we identified multiple detrimental errors encountered during the analysis which were not addressed by Jiang et al. We hope that this commentary clarifies NanoVar's validity as a long-read SV caller and provides assurance to its users and the scientific community.A recent paper by Jiang et al. in The benchmarking of structural variation (SV) calling tools provides end-users with vital information for comparing and selecting the optimal tool and settings for SV detection in their research. 
Hence, it is important to ensure that benchmark analyses are accurate and fair to faithfully reflect the benefits and drawbacks of each tool. In a recent paper by Jiang et al. , a benchhttps://github.com/SQLiu-youyou/The-commands-of-the-evaluation). For comparison, we have also included Sniffles [To investigate the poor performance of NanoVar in Jiang et al., we regenerated the long-read simulation datasets and benchmarked NanoVar in accordance to the methods stated in Jiang et al. (Sniffles (describSniffles v3.0.1)https://gWe also observed different performance results for Sniffles. While our F1 scores of Sniffles for SV calling by presence are broadly in agreement with Jiang et al., our F1 scores for SV calling by genotype concordance were substantially lower . The problem was resolved after we corrected the start coordinates of the file. The second error occurred when NanoVar was running on the simulated long-read BAM file produced by VISOR. The error happened because the read names of the simulated reads contained the comma symbol, which resulted in a parsing error and prevented NanoVar from completing successfully. After removing the commas in the read names, NanoVar completed its run with no errors. As this was a necessary correction to obtain results from NanoVar, it is unclear how Jiang et al. had handled it and whether this influenced the results. The third error happened due to VCF file incompatibilities with Truvari for NanoVar and Sniffles. For NanoVar, an error was raised due to the presence of \u201c>\u201d or \u201c.\u201d symbols in the \u201cSVLEN\u201d field of some entries in the VCF file. These symbols were added by NanoVar to refine information on SV length, or nullify it for SVs with no lengths, respectively. When these symbols were omitted from the VCF file, Truvari ran successfully. For Sniffles, the error was caused by the \u201cSTRANDBIAS\u201d string in the \u201cFILTER\u201d column of a few entries (<\u200950), and eliminating these entries resolved the problem. With the presentation of these VCF incompatibilities, it is plausible that there might be more nuances in the VCFs of NanoVar and Sniffles that impede an accurate assessment by Truvari. Taken together, we are uncertain how these fundamental errors were addressed by Jiang et al. and if they may have affected the results.During our analysis, we made some changes to certain output files within the protocol described by Jiang et al. However, these changes were made in order to rectify the errors that we encountered. As these errors were not mentioned by Jiang et al., they were unanticipated while following their protocol, and we are uncertain how Jiang et al. had resolved them to allow successful completion of their analysis. The first error we encountered happened during the long-read simulation step using VISOR (v1.1) wIn conclusion, based on the Jiang et al. published materials, we were not able to entirely reproduce the results described by the authors\u2019 benchmark. Indeed, our analysis performed on the same simulated datasets suggests an underestimation of NanoVar\u2019s performance and an overestimation of Sniffles\u2019 SV genotyping performance. We also encountered multiple errors while trying to replicate their analysis which might explain the discrepancy in results. We hope that the discussions provided here, as well as other studies \u20135, have"} {"text": "Obesity \u2013 a pandemic of the twenty-first century \u2013 is one of the greatest public health problems worldwide. 
Overweight and obesity affect nearly one in five children in the world and one in three in Europe . Recent No single cause is responsible for increased incidence of childhood obesity. It cannot be blamed on genetics factors or environment factors alone. In this Research Topic our contributors explore the mechanisms behind, linking intrauterine, postnatal, and early childhood metabolic environment to obesity and its complications , 5, 6.Nakhleh et\u00a0al. revealed that class 1 obesity in children\u2019s and adolescents (BMI \u2265 110% of the 95th percentile) was associated with higher prevalence and clustering of cardiometabolic risk factors.Rajamoorthi et\u00a0al. highlighted the role of the environmental factors, including the globalization of the western diet and unhealthy lifestyle choices. In an elegant review they argued that starting from conception type and timing such exposures come into play impacting on the overall risk of obesity and future adverse health outcomes.Seget et\u00a0al. as they documented that the prevalence of obesity is increasing among in children with diabetes mellitus type 1 (T1DM) and may influence the glycemic control.An important new observation was reported by Pixner et\u00a0al. investigated LACA and its mediators (amino acids and glucagon), focusing on the relationship between glucose and the LACA in adult and pediatric subjects.On the other hand, K\u0105cka et\u00a0al. introduced novel markers of metabolic complications in obese T1DM and non-diabetic subjects.Sobek and D\u0105browski in article \u201cLifestyle intervention changes are crucial in the prevention and treatment of childhood obesity\u201d.Analysis of the taste preferences and sensitivity of mothers and their children in the relation to excessive body weight of children is presented by Str\u0105czek et\u00a0al. found that one-year dietary education resulted in significant improvements in body weight, waist, and hip circumference, WHtR and selected measured carbohydrate and lipid metabolism parameters with the exception of total cholesterol. The one-year dietary intervention did not have the same effect on the change in dietary habits in children and in their mothers.de Lamas et\u00a0al. concluded that controlling obesity and cardiometabolic risk factors, especially insulin resistance and blood pressure in children during the prepubertal stage appears to be effective in prevention of pubertal metabolic syndrome.The assessment of childhood obesity comorbidities and risk of its complication is challenging and difficult. Artemniak-Wojtowicz et\u00a0al. experimentally proved that weight reduction leads to significant decrease of circulating Th17 cells and improvement of lipid parameters. This significant reduction of proinflammatory Th17 cells is a promising finding suggesting that obesity-induced inflammation in children could be reversible.Maruszczak et\u00a0al. described determinants of hyperglucagonemia in Pediatric Non-Alcoholic Fatty Liver Disease. Brunnert et\u00a0al. revealed usefulness of the liver stiffness measurement in the evaluation of liver involvement in obese adolescents. Furdela et\u00a0al. revealed that triglyceride glucose index, pediatric NAFLD fibrosis index, and triglyceride to high-density lipoprotein cholesterol ratio are a valuable combination of predictive markers of metabolically unhealthy phenotype in Ukrainian overweight/obese boys.One of the key problems in the development of obesity complications is the liver involvement. 
Liver abnormalities - collectively known as metabolic associated fatty liver disease is becoming a more prevalent clinical problem, in obese children and adolescents. Erazmus et\u00a0al. in the article \u201cDecreased level of soluble Receptor Activator of Nuclear Factor-\u03ba\u03b2 Ligand (sRANKL) in overweight and obese children\u201d.Obesity can also associate with complications of calcium-phosphorus and bone metabolism regulation . That waKrajewska et\u00a0al. confirmed that vitamin D has positive effect on metabolic profile in overweight and obese children, but the relationship between vitamin D and chemerin is not clear.Zembura and Matusik found that sarcopenic obesity is highly prevalent in children and adolescents and is associated with various adverse health outcomes including significant association with cardiometabolic outcomes, severity of non-alcoholic fatty liver disease (NAFLD), inflammation, and mental health. Findings of this review highlight the need for the development of a consensus regarding definition, standardized evaluation methods, and age and gender thresholds for sarcopenic obesity for different ethnicities in the pediatric population.Many factors influencing the development of obesity and its complications are still unknown. Future studies are needed to elucidate many questions and concerns raised by our contributors. Nevertheless, we do hope that readers will find our Research Topic informative and inspiring.AM, MW - writing draft of manuscript. AG, GT, EV - review. DM - final correction and approval."} {"text": "We thank Dr. Elvira Alvarez Stehle for her Although deviations from the original protocol were found in the PREDIMED trial, they do not affect the validation of the MEDAS questionnaire. Schr\u00f6der et al. did, in More recently, Garc\u00eda-Conesa et al. performeWe thus believe that the MEDAS questionnaire remains a useful and reliable tool for rapidly assessing adherence to the Mediterranean diet."} {"text": "Cancer represents one of the most important general health problems of our day. The estimated incidence of new cases in 2020 was 19.3 million [Even when diagnosed at an early stage, cancer patients can experience a metastatic relapse. Gallicchio et al. estimated that, during a lifetime, the percentage of metastasisation can range from 30% for lung cancer to 72% for bladder cancer , even thAll of these improvements could not be reached without the translation of fundamental research into practical uses, ranging from the initial cellular level to the molecular level, and nowadays, we have a genetic understanding of cancerogenesis. Since the times of ancient Greece, multiple theories have been made about oncogenesis, and through \u2018step by step\u2019 discoveries, we have managed to learn how things work in the complex area of human biology, and new potentially valuable targets have emerged.One of the most important dreams of healthcare personnel\u2014from research, clinical, or laboratory specialities\u2014is to solve a little or big part of the complicated oncology puzzle, which could contribute to saving lives. 
Every small discovery could represent a big step in curing more people, such as by making more effective treatments available, diagnosing cancer earlier, or reducing the risk for cancer.This Special Issue, \u201cAdvances in Cancer Therapy from Research to Clinical Practice\u2014Surgical, Molecular or Systemic Management of Cancer\u201d, was initially designed to allow the potential authors to share their work in the very complicated world of oncology using a large amount of big data, which can be practice changing.The published articles covered a broad range of cancers, with the most important primary tumours being treated from a fundamental research or clinical practice point of view.For breast cancer, for example, Lisencu et al. tried toMoreover, Mkrantonakis et al. tried toThe molecular types of cancer are not a fundamental discovery without any clinical relevance. As shown by Szep et al. , multipaFor lung cancer patients, Ahn et al. searchedIn melanoma cancer, rechallenging with BRAF inhibitors becomes an attractive option for multiple-relapse patients. The polyclonal theory of cancer remains an important element to be taken into consideration when we are in a difficult situation after standard therapy failure. Ksomidis et al. showed sInflammation in cancer could be a target for immunomodulatory treatment, but could also represent an unfavourable factor for patients, being responsible for cancer progression. The combined peripheral neutrophil\u2013platelet index seems to be an unfavourable predictor factor for overall survival in resectable oesophageal squamous cell carcinoma (ESCC) patients, as shown by Peng et al. . An absolute iron deficiency in colorectal cancer can impair not only the cardiac and respiratory functions, among others, but also the immune system\u2019s defence. Deficient patients could be affected more and could present with more advanced disease\u2013lymphatic invasion, as shown by Fagarasan et al. .For hepatocarcinoma, Chen et al. showed tA new chemotherapy association or a new modality of drug delivery? Chioreanu et al. tested aBody mass index can be a simple prognostic factor for prostate cancer, as shown in Popovici et al.\u2019s article , due to Al-Gharaibeh et al. analysedFor gynaecologic cancers, Obradovic et al. investigMedicina by Simescu et al. [Multiple neoplasia represents a difficult pathology for an oncologist, as mentioned and developed in an article published in u et al. . Parathyu et al. . A rare u et al. .Cotorogea-Simion et al. developeWe hope that the readers will find answers to their questions or read about a finding related to their scientific interest in our Special Issue."} {"text": "Tomography is an open access journal dedicated to all aspects of imaging science from basic research to clinical applications and imaging trials. As Editor-in-Chief of Tomography it is my great pleasure to provide a summary of some of the most cited and viewed publications in Tomography in 2021\u20132022 to summarize the last year\u2019s most relevant discoveries in clinical imaging.Tomography regarding AI and, in particular, in AI oncologic imaging applications. Takahashi et al. [Presently, artificial intelligence (AI) and patient radiation exposure likely represent the most relevant general research fields in clinical imaging. Several papers were published last year in i et al. showed hi et al. . Park eti et al. showed ti et al. . AI couli et al. . The usei et al. , which cTomography, Inaba et al. 
[Regarding occupational radiation protection, specific attention has been recently dedicated to the eye lens. The International Commission on Radiological Protection (ICRP) adopted the new recommendation of reducing the occupational eye lens dose limit from 150 mSv/year down to 20 mSv/year averaged over 5 years since cataracts can occur at lower radiation doses than those examined in previous epidemiological research. In a recent paper published in a et al. showed hCOVID-19-related pneumonia still represents a main research topic in the radiological literature. In particular, imaging findings in long-COVID still represent a relevant field. Besutti et al. showed tTomography still represents an important venue for radiology research especially in the most relevant topics of imaging science.In conclusion,"} {"text": "Staphylococcus aureus (LA-MRSA) and methicillin-resistant Staphylococcus pseudintermedius (MRSP) are of growing concern. Staphylococcus schleiferi and coagulase-negative staphylococci occur as commensals of the skin and mucous membranes of animals. However, they have been implicated in a variety of infections and are frequently resistant to one or more antimicrobial classes. This Special Issue assembles a collection of original articles that shed further light on these fascinating bacteria and their potential impacts on both veterinary medicine and public health.Staphylococci figure prominently among those bacteria demonstrating antimicrobial resistance (AMR) and are thus responsible for significant problems concerning the treatment of the animals and humans that they infect. In particular, livestock-associated methicillin-resistant Staphylococcus pseudintermedius, the most common pathogen isolated from skin disease samples (particularly pyoderma) from dogs [S. pseudintermedius constitutes a growing concern. A range of novel potential treatments including vaccination and phage therapy are discussed.The review by Lynch and Helbig providesrom dogs . The higmec (SCCmec) type and the antimicrobial susceptibility of staphylococci isolated from superficial pyoderma infections affecting dogs in Thailand. SCCmec type V was found in S. aureus, the S. intermedius group, S. lentus, S. xylosus, and S. arlettae, and although the authors do not state whether the coagulase-negative species were considered the primary bacterial pathogens in the cases from which they were isolated, the need to reduce environmental contamination and educate veterinary personnel and clients about the potential for the transmission to and from dogs of all resistant staphylococci is emphasized. The need for hygiene is supported by the results of another recent longitudinal study of MRSP-infected dogs conducted by Frosini et al. [S. pseudintermedius isolates that showed MICs < 2 \u00b5g/ml, an interesting finding in light of the publication by Wegener et al. [Chanayat et al. investigi et al. , in whici et al. , includer et al. suggestiS. pseudintermedius isolates causing superficial pyoderma in Taiwanese dogs, Lai et al. [S. pseudintermedius from their pets. Although no significant association was found, high odds ratios were obtained for \u201ckeeping three or more dogs\u201d and \u201cdogs can lick the owner\u2019s face\u201d, suggesting support for recent publications describing the potential for S. pseudintermedius infections in human hosts [As well as characterizing i et al. also exaan hosts ,8.Certain foods and food production systems may present a pathway for the transmission of MRSA to humans. Benrabia et al. 
detectedStaphylococcus aureus is still the leading cause of bovine mastitis in many countries. Rusenova et al. [S. aureus in Bulgaria. The discrepancies detected for some isolates are concerning and, as recommended by the authors, highlight the need for isolates to be thoroughly characterized.a et al. comparedS. aureus [Biofilm production is considered an important virulence factor for . aureus and coag. aureus showed tThe diverse articles contained in this Special Issue constitute a valuable contribution to our understanding of staphylococcal infection in both farm and companion animals."} {"text": "Plants (ISSN 2223-7747) presents a comprehensive update of the current progress in the field. It includes a total of 38 articles, 29 original research papers, 5 reviews, 2 conference reports and 2 communications, encompassing almost all areas of research and applications related to the aquatic monocotyledonous plants duckweeds. The content of this Special Issue reflects the diversity of the duckweed community well in terms of the focal areas of research using the method of tubulin-gene-based polymorphism . Whereas several distinct clones were identified within the populations of each pond in the case of L. minor, S. polyrhiza clones only showed genetic differences between the ponds [atpF-atpH and psbK-psbI), Chen et al. [Lemna aequinoctialis did not form a uniform taxon, which might be a hint for the existence of hybrids. The same plastidic markers for barcoding were used by Yosef et al. [L. gibba G3 was shown to be tetraploid [The family Lemnaceae was circumscribed by Ivan Martinov 1771\u20131833) as early as 1820. Therefore, the valid name of the family is Lemnaceae Martinov . With th71\u20131833 acf. also ). This pcf. also used fivariation . This alhe ponds . Using pn et al. identifif et al. , identiff et al. summariztraploid . This muWater pollution and meeting the ever-increasing clean water demands are interconnected problems that our modern society must tackle. Duckweeds have long been considered as potent candidates for wastewater management. As reviewed by Zhou et al. , these fSix case studies examined the performance and pitfalls of duckweed-based wastewater remediation systems. Using multi-tiered indoor bioreactors, Coughlan et al. studied 3\u2212 or NH4+ as inorganic nitrogen sources was general or rather species-specific amongst Lemnaceae. In addition, they provided insights to the complex regulation of nitrogen assimilation in these plants by reporting the molecular structure and differential expression of several key enzymes in response to different inorganic nitrogen sources. Nitrogen availability is not only crucial in the plants\u2019 nutritional status, but may also modulate responses to other stressors, such as the presence of heavy metals. In their study, Kishchenko et al. [4+ could mitigate manganese toxicity in S. polyrhiza, including the interactions between Mn availability and the transcription of ammonium transporters. Besides remediation, metal accumulation may also gain significance when future duckweed-based feed and food production is considered. Accordingly, Pakdee et al. [A series of studies addressed basic physiological processes that make duckweeds efficient in water remediation and waste valorisation. Zhou et al. , by studo et al. focused e et al. identifie et al. comparedL. minor and Wolffiella hyalina in an indoor experiment under the influence of different nitrate-to-ammonium ratios. 
The best results were obtained in a 50% diluted N medium with a nitrate-N to ammonium-N ratio of 3:1. In an additional set of experiments, the influence of light intensity and light source spectrum was investigated in a \u201csmall-scale, re-circulating indoor vertical farm\u201d [Wolffia species in space applications. As the authors pointed out, the world\u2019s smallest plants are promising candidates to be incorporated into bioregenerative life support systems in long-term space missions, and they are able to recycle water and oxygen for astronauts while also providing them fresh vegetable biomass.Application of duckweed on an industrial or commercial scale requires the production of large amounts of biomass. This can be attained either by using very large water surface areas or by using a large number of smaller facilities in a modular arrangement. Wastewater cleaning with typically very high volumes of liquid waste might be preferentially carried out outdoors in large ponds or waterways \u2014althoughal farm\u201d providinal farm\u201d ). The saal farm\u201d . Romano al farm\u201d discusseLemna) have very high growth rates combined with unusually high levels of zeaxanthin, which is important for human nutrition. Moreover, Lemna plants can respond to elevated CO2 concentrations with increasing growth rates. It has been known for a long time that under stress conditions, the protein content of duckweeds decreases as strongly as the starch content increases. The accumulation of starch in a large number of duckweed species under nutrient-limited cultivation conditions has been shown [Lemna turionifera under sulphur limitation in the study of Wang et al. [A group from Italy substituted different amounts of the standard feed of rainbow trout on the protein basis and reported that 20% substitution did not have any negative effects on the fish, but 28% substitution did show effects such as reduced fish body weight and a few other parameters . Demmig-en shown . Starch g et al. .Rhodobacter, changes after transfer from conditions in nature to nutrient-deficient conditions. Some herbivores feed on duckweeds, e.g., on S. polyrhiza. Schaefer and Xu [S. polyrhiza tested did not differ in their resistance toward the herbivore. However, after outdoor inoculation with microbiota associated with the same plant species, altered herbivory tolerance was observed in a genotype-specific manner.The interaction of duckweeds with microorganisms, especially plant-growth-promoting bacteria, can result in enhanced plant growth . The ider and Xu quantifiWolffia arrhiza treated with brassinolide and brassinazole, a synthetic brassinosteroid inhibitor. Kozlova and Levin [L. minor to a fish steroid, 17\u03b2-Estradiol, that has been released at a large scale by intensive fish farms, and they found growth-promoting effects.Duckweeds are ideal for studying hormonal effects in plants, as they can be cultured axenically and take up substances directly into the shoot. As a vivid example of this traditional role, Chmur and Bajguz analysednd Levin , on the Besides plant physiology research, bioindication of pollutants is another classical field of duckweed applications. Microplastics are emerging contaminants and, as such, their potential risks need to be explored urgently. 
Two studies aimed to fill this knowledge gap: Rozman and Kal\u010d\u00edkov\u00e1 tested iThe present Special Issue gives an update on the state of the art of duckweed research and applications and, together with the two previous Special Issues on this topic ,53, demo"} {"text": "Dura est manus cirurgi, sed sanans.The hand of the surgeon is hard, but healing.Walter Map, (1135\u20131210), British writerSince minimally invasive surgery (MIS) such as video-assisted thoracoscopic surgery (VATS) was introduced over 2 decades ago, it has revolutionized the thoracic surgery field. MIS can offer many advantages such as reduced anaesthesia and hospitalization time, less tissue damage and pain, decreased intraoperative blood loss, lower risk of postoperative infection and complications, better cosmetic outcome and faster recovery compared to traditional open technique . When coHowever, VATS is definitely not a panacea and it actually has a number of technical drawbacks , 2, 4. OAnother significant limitation of MIS is the lack of tactile sense during tool\u2014tissue interaction that severely impairs the surgeon\u2019s ability to control the applied forces\u2014and thus can cause unintentional damage or additional trauma to healthy tissues . TactileRecent advances and more widespread application of computational tomography (CT) including the use of low-dose high-resolution CT protocols have allowed for the early detection of small pulmonary nodules (PNs) and improved early-stage lung cancer screening , 6. VATSet al. [In this article, Messina et al. explore et al. . In thiset al. . The autet al. .et al. [et al. [The use of IUS for detecting lung nodules is not novel! Almost 2 decades ago, Santambrogio et al. utilized [et al. utilized [et al. .et al.\u2019s study can overcome this disadvantage and have been shown to be very accurate locating small PNs of 20\u2009mm or less at depths of up to 15\u2009mm [However, in contrast to lung nodules, GGOs are small areas of hazy increased attenuation on CT that do not obscure underlying bronchial structures or vascular markings and often do not present with a solid, firm component. And herewith is their main difference compared to PNs, for their detection even with open palpation is extremely challenging! Nevertheless, high-frequency ultrasound probes such as the one used in Messina to 15\u2009mm , 9.et al. can be considered a glimpse into the future because the authors, by utilizing high frequency IUS, were now able to not only quite effectively locate the GGOs but also to accurately define and characterize/describe them. Therefore, one may safely expect that as ultrasound resolution and technique improve further its applicability in thoracic surgery will probably become more widespread and thus the \u2018manus cirurgi\u2019 will end up replaced by ultrasound to effectuate \u2018s\u0101nantem\u2019!Consequently, this report by Messina"} {"text": "Agnaou et al., RSC Adv., 2023, 13, 8015\u20138024, https://doi.org/10.1039/d3ra00485f.Correction for \u2018New silicon substituted BiMeVO The authors regret that the name of one of the authors (A. Agnaou) was shown incorrectly in the original article. The corrected author list is as shown above.The Royal Society of Chemistry apologises for these errors and any consequent inconvenience to authors and readers."} {"text": "Nutrients [We have read the article entitled \u201cThe effect of goat-milk-based infant formulas on growth and safety parameters: a systematic review and meta-analysis\u201d by Jankiewicz et al. 
published in utrients .sn-2 palmitic acid), while the other formula tested by Xu et al. [We acknowledge the relevance of this systematic review due to the increasing number of goat milk formulas (GMFs) available worldwide. Only two GMFs have been clinically evaluated to help caregivers and healthcare professionals make an informed choice when selecting a nutritional product. However, the article did not present key compositional differences between the two GMFs, which may affect nutritional and health outcomes beyond infant growth. One formula evaluated by Grant et al. and Zhouu et al. and He eu et al. is made s2-casein as the protein linked to less protein aggregation in goat milk when it should refer to a lower level of \u03b1s1-casein. The risk of the bias assessment process for D1 (randomization process) is not clearly explained, and, based on information provided in all publications, it is unclear why the publication of Zhou et al. [In addition, a few issues appear unclear and should be corrected. The introduction refers to a higher level of \u03b1u et al. was asseu et al. was used. And the reference to two ongoing trials should also indicate which formula is used, as the nutritional and health outcomes in these two studies are only valid for the formula under investigation. For example, it is well known that for allergy outcomes, clinical data and efficacy in allergy prevention or treatment of hydrolysed formulas cannot be generalised to any hydrolysed formulas. Therefore, the same would apply to the allergy outcome results from the GIraFFE study (NCT04599946).Importantly, a statement about the conflicts of interest from the GMF company authors is missing."} {"text": "Periodontitis is an inflammatory disease that affects the supporting tissues of teeth, the effects of excess of nitric oxide, may contribute to the symptoms of periodontitis. To determine the serum nitric oxide concentration in generalized chronic and aggressive periodontitis patients and to compare it with a healthy subject group from the Mexican population.Cl\u00ednica de Posgrado de Periodoncia of the Centro Universitario de Ciencias de la Salud, Universidad de Guadalajara, M\u00e9xico. Patients with clinical features of generalized chronic periodontitis , generalized aggressive periodontitis , and a group of healthy subjects were included in the study. Informed consent was obtained from each subject, and serum nitric oxide concentration was measured by an enzyme-linked immunosorbent assay. A case and control study was performed. Sixty-nine individuals were recruited from the Nitric oxide concentration in the study groups was greater in the GCP group (462.57 \u00b1 16.57 \u00b5mol/L) than in the GAP group (433.84 \u00b1 18.61 \u00b5mol/L) and the HS group (422.46 \u00b1 12.07 \u00b5mol/L). A comparison using Student\u2019s t-test (one-tailed) between healthy subjects and generalized chronic periodontitis showed borderline significance (p<0.04), whereas no significant differences were observed in HS and GAP groups, with a p-value of 0.64, and the GAP vs. GCP p-value was 0.33. The serum nitric oxide concentration observed in the present study suggests that nitric oxide plays a major role in the inflammatory process, which cannot necessarily be linked to the severity of the disease and periodontal tissue destruction. It shows an accelerated destruction of periodontal tissues with alveolar bone loss in otherwise clinically healthy subjects Nitric oxide is now accepted as a prevalent biological mediator in many organisms. 
In mammals, nitric oxide is involved in several intercellular and intracellular activities, such as blood vessel dilation, neuronal intermediary, cytotoxicity, regulation of the cardiac rhythm and cellular respiration activities Nitric oxide originates from a cluster of isoenzymes denominated nitric oxide synthases (NOS), which exist as three specific isoforms: endothelial NOS (eNOS), neural NOS (bNOS), and inducible NOS (iNOS). eNOS and bNOS deliver a limited amount of nitric oxide for a brief time following receptor stimulation. In contrast, iNOS is expressed due to proinflammatory mechanisms and generates a high volume of nitric oxide for longer periods.The progression of chronic harmful inflammation of the periodontium may be a disrupted process. The effects of spare nitric oxide in the gingival mucosal tissue could contribute to the progression of the most frequent clinical signs of periodontitis in humans. The vasodilatory action of nitric oxide could be related to gingival redness, and gingival swelling may be provoked by the increase in the permeability of blood vessels induced by nitric oxide. The enlarged propensity of gingival tissue to bleed on probing may demonstrate the inhibitory mechanism of nitric oxide on platelet aggregation and the inhibitory activity of nitric oxide on adhesion A high concentration of nitric oxide produced locally is crucial in nonspecific host defense because of its cytotoxic activity against several organisms, as well as tumor cells. Nitric oxide released by eNOS plays a role in maintaining periodontal vascular perfusion et al., were the first to show that the salivary nitric oxide concentration is associated with the severity of periodontitis, allowing to differentiate between moderate and advanced chronic periodontitis. They proposed that NOS inhibitors could be valuable in the treatment of periodontal disease Reher, This study aimed to measure the serum nitric oxide concentration in patients with generalized chronic and aggressive periodontitis and compare it with that of healthy subjects.Cl\u00ednica de Posgrado de Periodoncia, Centro Universitario de Ciencias de la Salud, Universidad de Guadalajara. Patients with clinical features of generalized chronic and aggressive periodontitis and healthy subjects were included in the study Sixty-nine individuals were recruited from the Medical and dental records were taken from all participants. None had a history of current smoking or systemic disease; or had received antibiotic, immunomodulatory, or anti-inflammatory drugs; or had received periodontal treatment within the previous six months. Pregnant and breastfeeding women were excluded from the study. The purpose of the study was explained fully to each participant before they were accepted into the study.Institutional ethics review committee approval for the study was obtained (CI-01715), and informed consent was obtained from each participant in accordance with the 2013 Helsinki Declaration.\u00ae ) by a single researcher.Clinical parameters were measured at six sites per tooth using a periodontal probe . The amount of destruction was consistent with local factors, including subjects with \u226530% of sites with pockets, including a CAL of \u22655 mm and PD of \u22656 mm with radiographic evidence of alveolar bone loss This study group comprised 11 patients ; they had a family history of \u22651 family member with severe periodontal damage. 
They showed radiographic evidence of severe alveolar bone loss and clinical attachment loss minimum of \u22655 mm in eight or more teeth, at least three of which were not central incisors or first molars, and PD \u22656 mm (8).This study group included 39 participants without history of systemic diseases or tobacco smoking and health status Venous blood from patients and healthy subjects was collected, and serum samples were separated by centrifugation at 900g (10 minutes). The obtained samples were stored and frozen immediately at -70 \u00b0C until analysis. Serum nitric oxide levels were measured by enzyme-linked immunosorbent assay, ELISA . Optical density was measured in a microplate reader set to 450 nm with wavelength correction at 540 nm. The concentrations were calculated with the standard curve included in each assay kit and were expressed as \u00b5mol/L.Clinical and demographic parameters in the study groups were determined by Student\u2019s t-test and Fisher\u2019s exact test, respectively. The nitric oxide concentration was compared among the three groups using Student\u2019s t-test. Significance was considered at a p value <0.05.In terms of age, a difference was observed: the HS group was younger than the GCP group (p<0.0001); also, the comparison of the GCP vs. GAP groups showed a significant difference (p<0.0001). In terms of sex, women predominated in all the study groups and did not exhibit differences.Among the periodontal clinical parameters, PD demonstrated statistical significance between the HS vs. GCP and HS vs. GAP groups (p<0.0001) in both cases when using Student\u2019s t test (table 1).The serum nitric oxide concentrations are presented as the means \u00b1 standard deviations. The nitric oxide concentration did not differ significantly between the GAP group (433.84 \u00b1 18.61 \u00b5mol/L) and the GCP group (462.57 \u00b1 16.57 \u00b5mol/L) or between the GAP group and the HS group (422.46 \u00b1 12.07 \u00b5mol/L). The comparative analysis was only significant between GCP and HS, but it showed borderline significance (p=0.04) .et al. et al. et al. The participation of nitric oxide in physiological actions depends on its origin, duration, and concentration et al. (17), and Wang, et al. et al. et al., reported higher levels of salivary nitric oxide in periodontally healthy subjects than in chronic periodontitis patients Contrary to Schmidt, et al. et al. ,In contrast to Reher, et al. et al. The differences in nitric oxide concentration in GCP vs. HS could be explained by the fact that nitric oxide participates as a biomessenger of bone loss induced by inflammation et al. et al. (26), noted that high nitric oxide production by endothelial cells in bone plays a physiological role by regulating the activity of osteoclasts, which probably limited the alveolar bone damage observed in the GCP group in the present study.With respect to the origin of nitric oxide, McCarty, et al. et al. et al. et al. et al. in vitro and stimulates osteoblast apoptosis On the other hand, Daghigh, et al. et al. et al. et al. et al. The involvement of nitric oxide and NOS in apposition and bone resorption is controversial. Fukada, et al. ,Additionally, MacIntyre, Our results allow us to infer that iNOS and eNOS, along with other mechanisms or related metabolic pathways, probably account for the severity of alveolar bone loss in periodontitis. 
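Before turning to the interpretation, it is worth noting that the group comparisons reported above can be checked directly from the summary statistics. The snippet below is a minimal sketch of a one-tailed Welch two-sample t-test reconstructed from means and standard deviations; the per-group sample sizes used here are hypothetical placeholders (they are not restated in this excerpt), so the output is illustrative and will not exactly reproduce the reported p-values.

```python
from math import sqrt
from scipy import stats

def welch_t_from_summary(mean1, sd1, n1, mean2, sd2, n2):
    """One-tailed Welch t-test (H1: mean1 > mean2) from summary statistics."""
    se = sqrt(sd1**2 / n1 + sd2**2 / n2)
    t = (mean1 - mean2) / se
    # Welch-Satterthwaite degrees of freedom
    df = (sd1**2 / n1 + sd2**2 / n2) ** 2 / (
        (sd1**2 / n1) ** 2 / (n1 - 1) + (sd2**2 / n2) ** 2 / (n2 - 1)
    )
    p_one_tailed = stats.t.sf(t, df)
    return t, df, p_one_tailed

# Serum nitric oxide (umol/L), mean and dispersion as reported in the text
# (taken at face value as standard deviations). Group sizes are
# hypothetical placeholders, NOT the study's actual n values.
gcp = (462.57, 16.57, 20)   # generalized chronic periodontitis
hs  = (422.46, 12.07, 20)   # healthy subjects

t, df, p = welch_t_from_summary(*gcp, *hs)
print(f"t = {t:.2f}, df = {df:.1f}, one-tailed p = {p:.4f}")
```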
It is likely that the pathways that activate NOS, which produces nitric oxide, differ between generalized aggressive and chronic periodontitis, and in healthy subjects; the difference in pathways is probably related to the damage observed in these patients. Additionally, time-dependent effects and concentrations could participate in the phenotypic consequences of periodontitis.Supplementary studies are required to address the role of nitric oxide in periodontitis. The higher serum nitric oxide concentration in patients in the GCP group suggests that nitric oxide plays a major and selective role in the inflammatory process and that the nitric oxide concentration and origin are probably not related to the severity of the disease."} {"text": "Philosophical Transactions published G. I. Taylor\u2019s seminal paper on the stability of what we now call Taylor\u2013Couette flow. In the century since the paper was published, Taylor\u2019s ground-breaking linear stability analysis of fluid flow between two rotating cylinders has had an enormous impact on the field of fluid mechanics. The paper\u2019s influence has extended to general rotating flows, geophysical flows and astrophysical flows, not to mention its significance in firmly establishing several foundational concepts in fluid mechanics that are now broadly accepted. This two-part issue includes review articles and research articles spanning a broad range of contemporary research areas, all rooted in Taylor\u2019s landmark paper.In 1923, the Philosophical Transactions paper (part 2)\u2019.This article is part of the theme issue \u2018Taylor\u2013Couette and related flows on the centennial of Taylor\u2019s seminal Philosophical Transactions A a century ago, G. I. Taylor connected theory and experiment in his ground-breaking investigation of flow between differentially rotating concentric cylinders [In a remarkable paper published in the ylinders . The papylinders , which aTaylor\u2019s 1923 paper has inspired several generations of researchers in fields ranging from nonlinear dynamics to astrophysics. Not only does Taylor\u2013Couette flow display remarkable vortical patterns that are easily generated and visualized, it is a test bed for studies probing fundamental aspects of fluid flow as well as practical engineering applications.Philosophical Transactions.Part 1 of this two-part theme issue explored contemporary topics related to Taylor\u2013Couette flow including turbulent, convective and two-phase flows as well as extensions to magnetohydrodynamic, ferrofluidic and viscoelastic flows and flow geometries that are closely related to the Taylor\u2013Couette problem \u201318. Partet\u00a0al. [et al. [et al. [This issue starts with several papers that consider much the same problem as Taylor did, but with an emphasis on turbulence rather than the linear onset of instability. Feldmann et\u00a0al. review t [et al. present [et al. conduct et al. [et al. [et al. [As in Part 1, we have several papers on the interaction of Taylor\u2013Couette flow with convection, but with one crucial difference. The papers in Part 1 imposed a temperature gradient in the cylindrically radial direction, which corresponds to a radial force field playing a role like that of gravity. By contrast, the papers here impose gradients in the axial direction, matching gravity in a typical laboratory setting. Lopez et al. review T [et al. , who pre [et al. impose uJi & Goodman present et al. [et al. [et al. 
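Throughout these contributions, the common starting point remains the laminar circular Couette solution that Taylor analysed. As a small illustrative sketch using standard textbook relations (not code from any paper in this issue), the snippet below evaluates that base profile between cylinders of radii r1 < r2 rotating at angular velocities omega1 and omega2, and Rayleigh's inviscid criterion, which flags centrifugal instability wherever the squared angular momentum (r^2 * Omega)^2 decreases outward.

```python
import numpy as np

def couette_profile(r1, r2, omega1, omega2):
    """Return A, B for the laminar profile Omega(r) = A + B / r**2."""
    A = (omega2 * r2**2 - omega1 * r1**2) / (r2**2 - r1**2)
    B = (omega1 - omega2) * r1**2 * r2**2 / (r2**2 - r1**2)
    return A, B

def rayleigh_discriminant(r, A, B):
    """Phi(r) = (1/r^3) d/dr[(r^2 Omega)^2]; negative values indicate
    inviscid (centrifugal) instability by Rayleigh's criterion."""
    omega = A + B / r**2
    d_r2omega_dr = 2.0 * A * r          # d(r^2 * Omega)/dr = 2*A*r
    return 2.0 * (r**2 * omega) * d_r2omega_dr / r**3

# Example: inner cylinder rotating, outer cylinder at rest (Taylor's classic case).
r1, r2 = 1.0, 2.0
omega1, omega2 = 1.0, 0.0
A, B = couette_profile(r1, r2, omega1, omega2)
r = np.linspace(r1, r2, 5)
print(rayleigh_discriminant(r, A, B))   # negative -> potentially unstable
```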
[As was the case in Part 1, the Taylor\u2013Couette problem can be extended by considering fluids that are more complex than Newtonian fluids. Bai et al. and Moaz [et al. both pre [et al. consider [et al. conduct [et al. present et al. [et al. [Finally, and again as in Part 1, there are a variety of systems and geometries that are not strictly Taylor\u2013Couette flows as such, but are nevertheless closely related. Nagata presentset al. consider [et al. numericaPhilosophical Transactions. It is clear that the study of Taylor\u2013Couette flow will continue to provide a basis for a broad range of important and fundamental research topics for many decades to come.This two-part theme issue is a fitting tribute to celebrate the centennial of Taylor\u2019s foundational paper in the"} {"text": "Da increased while the slip parameters The modeling and analysis of hybrid nanofluid has much importance in industrial sector where entropy optimization is the key factor in different processes. This mechanism is also used in medical industry, where it can be used for separation of blood cells by centrifuge process, treating cancers, and drug transport. In light of this importance, current study is focused on mathematical modeling and analysis of blood based hybrid nanofluid between rotating disks with various shapes of nanoparticles. The shape factors are taken into account with Hamilton\u2013Crosser model as spherical, brick, cylinder and platelet in the current scenario, with special reference to entropy optimization. In order to solve modeled nonlinear and non-dimensional system, optimal homotopy analysis approach is utilized through Wolfram MATHEMATICA 11.3 software. Error estimation and convergence analysis confirms that obtained semi-analytical solutions are valid and reliable. Velocity, temperature and concentration profiles are analyzed against important fluid parameters. Fluid velocity decreased in all directions when unsteady parameter Subray et al.2 comparatively analyzed the flow of a nano and hybrid nanofluid for brick, blade and laminar shaped nanoparticles. Li and You3 simulated the flow of a water-based hybrid nanofluid over a stretching sheet with various shapes of copper and alumina nanoparticles. Akbar et al.4 studied the Maxwell nanofluid flow over a linearly stretched surface. Analysis on cross flow of hybrid nanofluid with numerous nanoparticle shapes is performed by Ramesh5. Study on a rate type nanofluid over a magnetized stretching sheet is performed by Liu et al.6. Dinarvand and Rostami7 studied squeezing of hybrid nanofluid with variable shapes. Ghobadi and Hassankolaei8 numerically simulated the hybrid nanofluid on cylinder with different shape factors. Chung et al.9 analyzed three dimensional hybrid nanofluid flow with heat source/sink. Gholinia et al.10 explored nanofluid with varying shapes of titanium oxide and alumina nanoparticles. Nasir et al.11 studied hybrid nanofluid flow over a Darcy-Forchheimer porous surface. Waqas et al.12 explored different shapes of gold nanoparticles in Sisko fluid. Li et al.13 studied slip effects on a nanofluid flow over stretching sheet.Hybrid nanofluids are colloidal mixtures containing two type of nanoparticles mixed in a single base fluid. These fluids are useful in applications where heat and mass transfer enhancement is required to obtain more efficient and effective systems. 
Hybrid nanofluids especially with different shapes of nanoparticles can further improve the heat transfer effects due to which their study has gained interest of many researchers. Kashi\u2019e et al.14 investigated Williamson nanofluid with swimming gyrotactic microorganisms. Khalaf et al.15 improved the heat transfer effects in a nanofluid with porous media. Chu et al.16 studied heat transfer of a hybrid nanofluid in a microchannel. Ahmad et al.17 analyzed the bio-convective flow of a gyrotactic microbes based nanofluid flow over a non-linearly stretched sheet and passing through a porous medium. Muhammad et al.18 investigated the Darcy-Forchheimer porous medium flow of a carbon nanotubes based nanofluid. Li et al.19 enhanced the heat transfer properties of the time-dependent viscous fluid flow. Gul et al.20 analyzed heat transfer in a hybrid nanofluid flow in a porous chamber. Panigrahi et al.21 numerically simulated the effects of porous media on MHD flow of a Casson nanofluid using Runge-Kutta method with shooting technique. Babu et al.22 simulated the heat and mass transfer effects in a nanofluid flow over a wedge. Nasir et al.23 enhanced the heat transport properties in stagnation point flow of a hybrid nanofluid. Esfe et al.24 studied the impact of porous medium on three different types of convective transfer through heat. Recently, Prasannakumara25 investigated the influence of porous media on methanol and NaAlg based nanofluid flow through Tiwari-Das model. Ragupathi et al. in26 explored radiative Casson nanofluid over a radially stretching and rotating disk.Increase in energy generation gathered much attention in last decade. Bhatti et al.27 sought out to improve the thermal transport of two types of nanofluids (mono and hybrid) passing over an inclined sheet with heat source/sink effects. Nasir et al.28analyzed entropy generation in ethylene glycol and water based nanofluid. Yaseen et al.29 investigated the flow of a water based hybrid nanofluid past a moving convective heated surface with heat source/sink, velocity slip and non-linear thermal radiation. Sajid et al. in30 investigated a Cross non-Newtonian tetra hybrid nanofluid flow in a stenosed artery. Sulochana and Kumar31 enhanced the rate of of heat transfer with heat source and sink in a mono and hybrid nanofluid over a stretching surface. Chu et al.32 analyzed Jeffrey nanofluid with chemical reaction between two disks. Chamkha et al.33 numerically analyzed the copper-alumina hybrid nanofluid flow with water as base fluid inside a partially heated square cavity under heat generation and absorption effects. Gorla et al.34 investigated heat source and sink effects on a hybrid nanofluid flow in a porous cavity. In a recent study, Yasir et al.35 applied a non-uniform heat source/sink in an ethylene glycol based hybrid nanofluid with Hamilton-Crosser model. Saleh et al.36 simulated effects of heat generation and absorption on a Maxwell hybrid nanofluid with MHD effects. Dinarvand et al.37 performed a numerical investigation on squeezing flow of a water based hybrid nanofluid between two collateral sheets influenced by heat generation and absorption.Many applications of rotating disks involve heat generation and absorption phenomena in order to perform the task optimally. It can either require higher temperatures or extremely lower temperatures depending on the phenomena under consideration. 
For instance, in order to separate platelets and other components from blood rotation, an ambient temperature must be maintained in order to achieve the desired results. Different studies in literature have taken heat source/sink into account. Ali et al.38 attempted to minimize the entropy generation in membrane reactor of methanol synthesis with various geometries by using optimal control theory and linear programming. Khan et al.39 investigated the entropy minimization in a non-linear thermal radiative flow of hybrid nanofluid with water as a base fluid. Obalalu et al.40 minimized the entropy generation in a Casson nanofluid flow over a stretching Riga plate and non-Darcy porous medium. Nasir et al.41 optimized entropy generation in a Maxwell nanofluid flow. Li et al.42 simulated entropy generation in stagnation point flow of Carreau nanofluid. Munawar et al. in43 investigated the entropy minimization of a hybrid nanofluid flow inside a corrugated triangular annulus with magnetic effects and free convection. Khan et al.44 simulated entropy generation in a viscous nanofluid with second order velocity slip. Ibrahim et al.45 analyzed entropy generation in a nanofluid flow with twisted porous objects. Mabood et al.46 minimized the entropy generation in a Jeffery nanofluid boundary layer flow over a permeable stretching sheet with non-linear thermal radiation and activation energy. Acharya et al.47 investigated the entropy generation in a ferrous oxide and graphene oxide hybrid nanofluid over an unsteady spinning disk with slip effects.Entropy generation is the useful energy dissipated in the environment and it results in reduced efficiency of engineering systems and biological processes. Many studies in recent years are focused on entropy minimization to provide best possible conditions and obtain maximum output as a result. Li et al.The focus of current study is entropy analysis, and modeling of heat and mass transfer in a blood based unsteady hybrid nanofluid with radium and alumina nanoparticles having various shapes including spherical, brick, platelet and cylindrical through Hamilton-Crosser model. The nanoparticles of current study are important in enhancing heat and mass transfer properties of blood which is useful in many applications of medical industry including drug transport, cancer treatment and centrifuging blood to obtain its components . The flow is simulated with slip boundary conditions and fluid rotation between double rotating disks. The flow is also influenced by magnetic field, porous medium and heat sink/source. Using appropriate transforms modeled equations are converted to system of nonlinear ODEs. The solution method adopted is a semi-analytical approach namely, homotopy analysis method (HAM). The series form solution obtained with this method are validated through mean square errors and convergence table. Moreover, solutions obtained through HAM are also compared with Runge-Kutta 4th order solutions to provide further validation of results. The nanofluid flow in radial axial and tangential directions is graphically analyzed. Fluid temperature and concentration is simulated against pertinent fluid parameters. Entropy generation is presented numerically and graphically. In rest of the article, mathematical formulation is given in Section \u201cRd and alumina 48The flow geometry consists of double rotating disks with cylindrical coordinated mentclass2pt{minimu,\u00a0v,\u00a0w) are the velocities of fluid in r, z directions, respectively. 
T is the temperature and C is the concentration of the fluid, L is the slip length, t is the time, expressed in seconds (s), and b is a dimensional constant of the formulation. The thermo-physical quantities used in the model are given below. The behavior of a nanofluid varies depending on the base fluid and the nanoparticles taken into account, and many models in the literature characterize the various properties of nanofluids. The hybrid nanofluid model considered in this study is the Hamilton–Crosser model [49,50], which also accounts for the shape factor of the nanoparticles involved. The subscript 'hnf' denotes quantities of the hybrid nanofluid, 'f' denotes the base fluid, whereas 'Rd' and 'Al' denote the radium and alumina contributions, respectively. The thermal conductivity, thermal diffusivity, electrical conductivity, kinematic viscosity, density and heat capacity of the hybrid nanofluid follow from the corresponding mixture relations of this model [50,53]; a brief numerical sketch of these property rules is given further below. Da is the Darcy number, Pr is the Prandtl number and M is the magnetic interaction parameter. The system of partial differential equations given in Eqs. (1)–(4) is non-dimensionalized with the chosen similarity variables [54], and the transformed boundary conditions are applied at the disk walls [58]. The convergence of the series solution is assessed through the convergence analysis and error estimation of the optimal homotopy analysis method.

The hybrid nanofluid flow between the two rotating disks is simulated for various fluid parameters, and physical interpretations are drawn in this section for the velocity, temperature and concentration profiles. Parameters of physical interest, such as entropy generation, Bejan number, skin friction, Nusselt number and Sherwood number, are also discussed in detail. A larger Darcy number Da decreases the radial and axial velocity: a larger Da corresponds to increased viscous forces among the fluid layers, which resist the flow in the radial and axial directions. An increase in the stretching parameters strengthens the flow in the u-direction and reduces the rotation of the disk. The radial, axial and tangential velocities are presented against the pertinent fluid parameters in the corresponding figures. A higher Prandtl number Pr increases the fluid temperature due to elevated thermal diffusivity, and the effect of increasing the heat source on the fluid temperature is likewise shown in the figures. The concentration of the blood nanofluid is examined against the unsteady parameter and the magnetic parameter M; as M increases, the resistance to the fluid flow grows due to the Lorentz forces and the disorder of the system increases.
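As referenced above, the shape dependence of the transport properties enters through the Hamilton–Crosser mixture rule. The sketch below evaluates the effective thermal conductivity of a two-step (hybrid) mixture for the four particle shapes considered; the shape-factor values, the sequential mixing of the two particle phases, and the illustrative property values for blood, radium and alumina follow common practice in the hybrid-nanofluid literature and are assumptions here, not values taken from the paper.

```python
def hamilton_crosser_k(k_f, k_p, phi, n):
    """Hamilton-Crosser effective conductivity of particles (conductivity k_p,
    volume fraction phi, shape factor n) suspended in a fluid of conductivity k_f."""
    num = k_p + (n - 1) * k_f - (n - 1) * phi * (k_f - k_p)
    den = k_p + (n - 1) * k_f + phi * (k_f - k_p)
    return k_f * num / den

def hybrid_k(k_f, k_p1, phi1, k_p2, phi2, n):
    """One common two-step treatment (assumed): disperse particle 1 in the
    base fluid, then particle 2 in the resulting mono nanofluid."""
    k_nf = hamilton_crosser_k(k_f, k_p1, phi1, n)
    return hamilton_crosser_k(k_nf, k_p2, phi2, n)

# Typical literature shape factors (assumed, not taken from the paper).
shapes = {"sphere": 3.0, "brick": 3.7, "cylinder": 4.9, "platelet": 5.7}

# Illustrative property values (W/m.K) and volume fractions -- assumptions.
k_blood, k_radium, k_alumina = 0.52, 18.6, 40.0
phi1 = phi2 = 0.02

for name, n in shapes.items():
    k = hybrid_k(k_blood, k_radium, phi1, k_alumina, phi2, n)
    print(f"{name:9s}: k_hnf/k_f = {k / k_blood:.3f}")
```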
Increase in volume fraction of radium and alumina increases entropy while Bejan number behaves oppositely with increase in unsteady parameter Both stretching parameters Increase in slip parameter Pr and heat source Temperature increases with higher values of Prandtl number Overall temperature of the fluid is highest in case of brick shaped nanoparticles whereas platelet shape nanoparticles result in lowest fluid temperature.Concentration of blood hybrid nanofluid increases with higher values of unsteady parameter M, volume fractions Br and Reynolds number Re whereas Bejan number behaves in contrast.Entropy generation increases with increase in magnetic parameter Spherical shaped nanoparticles result in highest entropy while the platelet shaped nanoparticles offer lowest entropy.The most optimal value of entropy is obtained to be zero when Re, volume fraction Skin friction of nanofluid with wall elevates with higher values of Reynolds number Re.Mass transfer decreases with increase in both volume fractions Heat transfer in the fluid is highest in case of platelet shaped nanoparticles and lowest in case of spherical shaped nanoparticles.This study can be further carried out in future by fractional modeling for various nanofluid models in both Buongiorno and two phase cases.The objective of current manuscript is modeling and computation of entropy generation and optimization in hybrid nanofluid with different nanoparticle shape factors. The base fluid in current study is blood while the two nanoparticles are radium"} {"text": "Advances and applications of fluids biomarkers in diagnosis and therapeutic targets of Alzheimer's disease by Yanan Xu et al., https://doi.org/10.1111/cns.14238.The cover image is based on the Review Article"} {"text": "The most common approach in transcriptomics (RNA-seq and microarrays) is differential gene expression analysis (DGEA). Differentially expressed genes (DEGs) may be responsible for phenotypic differences between various biological conditions. An alternative approach is gene coexpression analysis, which detects groups of genes with similar expression patterns across unrelated sets of transcriptomic data from the same organism. Coexpressed genes tend to be involved in similar biological processes. This Special Issue includes 12 research articles and one review on the topic of differential gene expression and coexpression. This review is an introduction to the basic methods of coexpression analysis, while the research articles describe both software and tools that assist in the execution of differential gene expression and coexpression analysis, as well as computational workflows that reveal new biological knowledge.dcap-1 knockout and wild type roundworms, Borbolis et al. [dcap-1) in the silencing of spermatogenic genes during late oogenesis and in the suppression of aberrant immune gene rise during aging in Caenorhabditis elegans. Yoon et al. [Through a microarray-based comparison of n et al. revealedn et al. identifin et al. used a bn et al. identifin et al. were abln et al. showed tn et al. assessedn et al. offered n et al. , a web an et al. is a Linn et al. is a webn et al. is an op\u03a4he guest editors are grateful to all authors for their contributions to this Special Issue. 
Their works enlighten various aspects of differential gene expression and coexpression analyses, starting from transcriptomics data production and automated data acquisition from public repositories, followed by comprehensive analysis and meta-analysis applications and downstream analysis tools. Some publications presented bioinformatics pipelines used for the discovery of cancer biomarkers. We are looking forward to the presentation of novel computational pipelines and biological discoveries in the Special Issue \u201cDifferential Gene Expression and Coexpression 2.0\u201d."} {"text": "Biomedical sensors are the key units of medical and healthcare systems. The development focus of this topic is to use new technology and advanced functional biocompatible materials to design miniature, intelligent, reliable, multifunctional, low-cost, and efficient sensors. The last two decades have seen unprecedented growth in the employment of advanced sensors, which enable the detection of critical biomarkers for the early diagnosis of human diseases and the monitoring of human physiological signals for assessments in healthcare and biomedical applications. This rapid progress in both sensor technology development and its applications is mainly due to the quickly advancing development of micro/nanofabrication, manufacturing techniques, and advanced materials, as well as the increasing demand for the development of fast, simple, and sensitive measurement techniques that are capable of accurately and reliably monitoring biological samples in real time. The development of biomedical sensors is driven by the requirements of the medical field. The screening and continuous monitoring of patients with sensors has become increasingly important. A huge growth in the demand for home care will certainly promote the development of disposable sensors or telemedicine. This also puts forward requirements for future medical sensors.This Special Issue aims to provide an overview of recent advancements in the area of sensing technologies, including of sensors and platforms with a focus on functional materials, novel sensing mechanisms, design principles, fabrication and characterization techniques, performance optimization methods, multifunctional and multiplex sensing platforms, and system integration strategies, which play crucial roles in many applications.G\u00f6khan G\u00fcney et al. used MedAthanasios Tsanas et al. proposedNegin Foroughimehr et al. demonstrDavid Burns et al. presente2 = 0.48 and a standard deviation STD = 0.10, with a total system average delay of 192 ms. Compared with the previous study, this newly proposed system presented a higher accuracy and was more suitable for real-time leg muscle activity estimation during walking.In the study by Zixi Gu et al. , a real-Chisaki Miura et al. aimed toRobert Karpi\u0144ski et al. proposedAar\u00f3n Cuevas-L\u00f3pez et al. presenteSheng-Wei Pan et al. proposedLalita Chopra et al. preparedWenfeng Zheng et al. proposedRobert Karpi\u0144ski et al. aimed toDan Wang et al. exploredPablo Campo-Prieto et al. exploredWe would like to express our profound appreciation to the authors and reviewers who made this Special Issue possible. In the time it took to complete this Special Issue, our reviewers and authors contributed tremendous efforts to improve the paper\u2019s quality and thus guarantee the high standard of this Special Issue."} {"text": "Copper (Co) and Titanium Alloy (Ti6Al4V) nanoparticles are found in this fluid. 
The HT level of such a fluid (Ti6Al4V-Co/H2O) has steadily increased in comparison to ordinary Co-H2O NFs, which is a significant discovery from this work. The inclusion of nanoparticles aids in the stabilization of a nanofluid flowing and maintains the symmetry of the flow form. The thermal conductivity is highest in the boundary-lamina-shaped layer and lowest in sphere-shaped nanoparticles. A system's entropy increases by three characteristics: their ratio by fractional size, their radiated qualities, and their heat conductivity modifications. The primary applications of this examination are the biological and medical implementations\u00a0like\u00a0dental and orthopedic implantable devices, as well as other devices such as screws and plates because they possess a favorable set of characteristics such as good biomaterials, corrosion resistance and wear, and great mechanical characteristics.To get a better heat transmission capacity of ordinary fluids, new hybrid nanofluids (HNFs) with a considerably greater exponent heat than nanofluids (NFs) are being used. HNFs, which have a greater heat exponent than NFs, are being applied to increase the HT capacities of regular fluids. Two-element nanoparticles mixed in a base fluid make up HNFs. This research investigates the flow and HT features of HNF across a slick surface. As a result, the geometric model is explained by employing symmetry. The technique includes nanoparticles shape factor, Magnetohydrodynamics (MHD), porous media, Cattaneo\u2013Christov, and thermal radiative heat flux effects. The governing equations are numerically solved by consuming a method known as the Galerkin finite element method (FEM). In this study, H Waste heat recovery, which tries to recover energy losses as heat, work, or power, was researched by Olabi et al.2. They claim that NFs are recently developed high-performance heat transfer fluids. Three crucial factors identified by Wang et al.3 have an impact on the use of mono and hybrid NFs in heat pipes. Consistency, thermal conductivity, and viscosity. The application of heat transfer growth or inhibition, as well as the usage of NFs in a variety of heat pipe categories, is described. Machine learning is explored in the context of NFs and NF-charged heat pipes. Current developments in NF thermal characteristics and applications in a variety of engineering fields, ranging from NF-medicine to renewable energy, were examined by Eid4. The latter has seen some major advancements in flexibility and momentum, which have an impact on military and shield technologies. As a result, specialised NF applications in space research, solar energy, NF-medicine, temperature exchangers, heat pipes, and electronics freezing have been researched and made available. Gupta et al.5 examined the current advancements in NF in solar collectors and how it is employed nowadays. They discovered that using a premium heat transfer fluid with outstanding thermal physical properties, such as high thermal conductivity, is the most efficient way to increase the performance of a solar energy system, and NF is the best option for doing so. According to Salilih et al.6, the use of NF resulted in decreased heat of liquid leaving the condenser, increasing the solar scheme's efficacy.Nanofluids (NFs) have been considered as a potential different fluid solution for enhancing the competence and efficacy of current systems in manufacturing, commercial, and residential contexts. 
Numerous benefits of increased thermal system efficiency include decreased environmental impact, decreased energy use, and lower prices. The appropriateness of NFs for use in present systems has recently been assessed in terms of cost and environmental impact by utilising sustainability approaches. Thermal studies are one of its most important applications. The energy consumption of thermal systems is essential in the global environment. Several readings have been shown to increase the performance of thermal systems based on these elements, including the employment of various resources, produced liquids, process proposals, and the integration of newfangled information for clean energy building, resulting in an optimal explanation. Increasing the heat surface area of thermal convert to recover their current performance is one of the most investigated solutions; however, this modification results in the material buildup and an increase in production cost. In order to ensure long-term technical development, Bretado et al.7 largely addressed hybrid nanofluid (HNF), a modern class of NF created by suspending separate multiple NFs in the base NFs. Unexpectedly, the thermal characteristics can be increased by the creation of a small portion of metal nanotubes or nanoparticles within the NFs of an oxide or metal that are already present in a base liquid. Improved thermal conductivity, stability, corrected HT, positive impacts of each suspension, and combined nanomaterial influence are only a few of the benefits of HNFs. With higher operational efficiencies than NFs, HNFs are used in almost all HT applications, including welding, defense, temperature pipe, biomedical, boats, and space planes. Other applications include generator freezing, coolant in machining, thermal capacity, electronic cooling, reheating and cooling in homes, vehicle thermal management or motor freezing, modernizer freezing, atomic structure freezing, refrigeration, medication saving, and vehicle thermal management or motor freezing. These good properties drew researchers' attention to the HNF in the context of HT difficulties in daily living. Khan et al.8 presented a proportional investigation of HT and friction drag in the flow of numerous HNFs achieved by the associated magnetic field and nonlinear radiation. Xiong et al.9 reviewed the application of HNFs in solar energy collectors. While Yaseen et al.10 reviewed the role of HNFs in HT. Sathyamurthy et al.11 documented an experimental investigation on freezing the photovoltaic board utilizing HNFs. Bakhtiari et al.12 presented stable HNFs and advanced a novel association for HT. Xuan et al.13 studied thermo-economic presentation and compassion examination of ternary HNFs. Said et al.14 gathered HT, entropy generation, and economic and ecological examinations of linear Fresnel indicators utilizing HNFs. Jamshed et al.15 introduced a computational setting effort of the Cattaneo\u2013Christov heat flux model (CCHFM) based on HNFs. Ma et al.16 considered the effect of surfactants on the rheological performance of HNF and HT ownership. Chu et al.17 modeled a study of magnetohydrodynamics utilizing HNFs flow between two endless corresponding platters with atom form possessions. \u015eirin18 investigated the presentation of cermet apparatuses in the rotating of HNFs wounding settings. Jamei et al.19 estimated the thickness of HNFs for current dynamism application. 
Bilal et al.20 used the degenerate electro-osmotic EMHD HNFs over the micro-passage.Jana et al.21 used PMM in solar aircraft joining tangent HNFs as a solar heat application. Shahzad et al.22 formulated a comparative mathematical study of HT using the PMM in HNFs. Parvin et al.23 presented the numerical conduct of 2D-Magneto double-diffusive convection flow of HNF over PMM. Faisal, et al.24 indicated the raising of heat effectiveness of solar water\u2010pump utilizing HNFs over PMM. Banerjee and Paul25 reviewed the most recent studies and development with the applications of PM combustion. Zou et al.26 modeled an explicit system of stone heat in the PM model for pebble-bed devices. Lee et al.27 proposed PMM substantiation with stress drip dimensions. Talbi et al.28 analyzed a solution for longitudinal quivering of a fluctuating pile based on PMM on a convective flowing model.A porous media model (PMM), often recognized as a porous material, is one that contains pores (vacuums). The \"matrix\" or \"frame\" refers to the thin part of the fabric. A fluid is generally injected into the pores (fluid or fume). Although the skeleton fabric is typically solid, systems together with foams may enjoy the perception of a porous media model (PMM). Jamshed et al.29 took into consideration a device studying technique for the calculation of transference and thermodynamic methods in metaphysics structures HT in HNFs flow in PMM. Rashed et al.30 recommended a non-homogenous HNF for three-D convective flow in enclosures full of heterogeneous PMM. The investigation of the magnetic appearances and behavior of electrically conducting liquids is known as magnetohydrodynamics (MHD). Plasmas, melted metals, salty water, and electrolytes are illustrations of MHD. Recently, many investigations are appeared using this setting practically in HNFs. Alghamdi et al.31 utilized MHD HNFs flow encompassing the medicine over a blood artery. Zainal et al.32 analyzed MHD HNFs flow over an extending/dwindling pane with quadratic velocity. Abbas et al.33 modeled improper investigation of motivated MHD of HNFs flow over a nonlinear extending cylinder. Waqas et al.34 impacted of MHD radiated flow of HNF over a revolving disk. Shoaib et al.35 provided a numerical examination of three-D MHD HNFs over a revolving disk in the incidence of heat electricity with Joule reheating and viscous degeneracy possessions using the Lobatto method. Tian et al.36 investigated 2D and 3-d shapes of fins and their possessions on the heat sink performance of MHD HNF with slide and non-slip float. Gul et al.37 studied a couple of slides impacted withinside the MHD HNF float with Cattaneo\u2013Christov heat flux and autocatalytic biochemical response. Ashwinkumar et al.38 considered HT in MHD HNFs flow over two diverse geometries. Abderrahmane et al.39 formulated MHD HNFs over HT and entropy generation in a 3D revolving tube. Salmi et al.40 studied a numerical case of non-Fourier heat and mass transfer in incompletely ionized MHD HNFs.Alizadeh et al.41 considered CCHFM on sloping MHD over nonlinear overextended flow. Haneef et al.42 utilized CCHFM and HT in HNFs rheological liquid in the attendance of mass transfer. Yahya et al.43 employed CCHFM on Williamson Sutterby NF transportation, which is produced by an extending superficial with a convective boundary. Eswaramoorthi et al.44 engaged CCHFM in 3D plow of a plate with nonlinear heat energy. Tahir et al.45 enhanced the current appearances of viscous NF flow with the induction of CCHFM. 
Ali et al.46 proposed CCHFM for assorted convection flow owing to the revolving disk with slide possessions. Ullah et al.47 suggested a numerical attitude to read melting and initiation energy occurrence on the influenced fleeting HNF with the application of CCHFM. Zuhra et al.48 gave a numerical analysis of CCHFM HNFs by Lavenberg\u2013Marquard back propagated neural networks. Sadiq et al.49 modeled the HT because of CCHFM. Vinodkumar et al.50 joined the CCHFM HNFs that affected MHD flow via an extending slip in a PMM.The heat transfer in viscoelastic float resulting from an exponentially stretched sheet is defined through the Cattaneo\u2013Christov warmth flux model (CCHFM). The major factors of this study may be summarized as follows: When related to a viscous fluid, the hydrodynamic boundary layer in the viscoelastic fluid is thinner. Venkata et al.51 is one in which the slide velocity is compared to the clip stress. Alzahrani et al.52 studied the effect of heat contamination on HT in-plane walls themed to SBC. P\u00e9rez-Salas et al.53 presented an approximate analytical outcome for the fluid flow of a Phan-Thien-Tanner with SBC. Wang et al.54 solved the problem of SBC by boundary-lattice Boltzmann scheme. Arif et al.55 analyzed SBC of Non-Newtonian rheology of lubricant. Dhifaoui56 illustrated a weak solution for the outside static Stokes equations with SBC. Zeb et al.57 proposed the SBC on Non-Newtonian Ferrofluid over an extending slip. There are many studies60 probed the problem of slippage velocity in the flow model. It had a prominent effect in clarifying this effect on the movement of the fluid and its temperature.The no-slip condition is the acknowledged boundary condition for a fluid over a solid surface. The slip boundary condition (SBC) proposed by Navier6Al4V) are the two types of HNFs used in this study. Entropy generation data for HNFs used in this study was analyzed to identify the impact on the process. The HNF's governing equations will be translated into ODEs using an appropriate similarity conversion. ODEs will be created, and the Galerkin finite element method (FEM) will be utilized to numerically resolve them using appropriate governing parameter values. The numbers are going to be represented graphically, with additional discussion. The impacts of particle shapes, thermal radiated flow, slippery velocity, and convective slip boundary limitations are investigated during this research.This looks at objectives to fill a familiarity hole withinside the flow and warmth transfer of a radiated Casson HNF with a variable thermal conductivity because the temperature rises, primarily based totally on the literature. The Tiwari and Das NF versions can be used to mathematically version the NF flow. 
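Before the geometry and governing equations are laid out, it may help to see, in miniature, the kind of two-point boundary-value problem that the similarity reduction and the Navier slip condition produce. The sketch below solves a stripped-down model momentum equation with a combined magnetic/porous drag term and a velocity-slip wall condition using a generic collocation solver rather than the Galerkin finite-element procedure adopted in the study; the equation, parameter values and names are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import solve_bvp

M_eff = 0.5     # combined magnetic/porous drag parameter (assumed)
lam   = 0.3     # dimensionless velocity-slip length (assumed)

def rhs(eta, y):
    # y = [f, f', f''];  model equation  f''' + f f'' - f'^2 - M_eff f' = 0
    f, fp, fpp = y
    return np.vstack([fp, fpp, -f * fpp + fp**2 + M_eff * fp])

def bc(y0, yinf):
    # f(0) = 0, slip condition f'(0) = 1 + lam f''(0), far-field f'(inf) -> 0
    return np.array([y0[0],
                     y0[1] - 1.0 - lam * y0[2],
                     yinf[1]])

eta = np.linspace(0.0, 10.0, 200)
y_init = np.zeros((3, eta.size))
y_init[0] = 1.0 - np.exp(-eta)    # rough initial guesses
y_init[1] = np.exp(-eta)
y_init[2] = -np.exp(-eta)
sol = solve_bvp(rhs, bc, eta, y_init)

print("converged:", sol.status == 0)
print("wall shear f''(0) =", round(float(sol.sol(0.0)[2]), 4))
```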
Copper (Cu) and Titanium Alloy curving prototype is drawn in Fig.\u00a0The ensuing standards, together with the requirements, be relevant to the stream framework: 2-D laminar steady flow, phase flow model, HNF, permeable medium, MHD, viscous dissipation, Thermal radiative heat flux, Cattaneo\u2013Christov heat flux, joule heating, porousness elongated surface.61 in consideration of the suggested assumptions.The governing equations and associated boundary conditions for hybrid nanofluid flowing are given in21 gave the related boundary constraints:Jamshed et al.64.The equations in Table Where, nano-sized particle fractional volume of analysis, substantial features of the primary fluid of the water are described.In Table 68 is applied in formula (The equation for radiative flux given by Rosseland formula .5\\documeExpressions \u20134) are are 4) aThe specified similarity quantities areentclass1pt{minimanto Eqs.\u00a0. We get8Equation\u00a0 is accuree Table .Table 3DWhere The non-dimensional skin friction 69 is covered in this section. The finite element method's flowchart is shown in Fig.\u00a0The corresponding boundary constraints\u00a0of the present system were computationally\u00a0simulated using FEM. FEM is\u00a0based on the partitioning of the desired region into components (finite). FEMStage I:Weak form is derived from strong form (stated ODEs), and residuals are computed.Stage II:To achieve a weak form, shape functions are taken linearly, and FEM\u00a0is used.Stage III:The assembly method is used to build stiffness components, and a global stiffness matrix is created.Stage IV:Using the Picard linearizing\u00a0technique, an algebraic framework (nonlinear equations) is produced.Stage V:Algebraic equations are simulated utilizing appropriate halting criterion through 10(-5) (supercomputing tolerances).Further, The Galerkin finite element technique's flow chart is depicted in Fig.\u00a070. Table Heat transfer coefficients from existing methods were compared to findings that had been supported by earlier research to assess the validity of the computational methodf\u2032(\u03bb)) and entropy generation 6Al4V are composed of water. The solid and dashed lines are respectively plotted for Co-H2O and Ti6Al4V-Co/ H2O.This section delves into the influence of a few key physical parameters, such as the velocity slip parameter NG vs. (NG) grows but the value of (P b) declines as the distance from the surface increases. A major temperature differential at the surface causes entropy to increase. Consequently, a high value of the permeability of the porous medium may present a technique for modifying the spin coating flow parameters in industrial applications. It is also believed that improved permeability and larger pore spaces promote better nanoparticle precipitation, which reduces friction at the sheet surface. Figure\u00a06Al4V-Co/H2O nanoparticles, Co-H2O nanoparticles control heat transport in the examined base fluid. Figure\u00a0Figure\u00a0NG is shown as a deviation from the variety of entropy produced. (NG and (6Al4V-Co/ H2O hybrid nanofluid and Co-H2O nanofluid are shown in Fig.\u00a06Al4V-Co/ H2O nanoparticles is predicted to increase (NG behaves when the Biot number (NG is very sensitive to surface and small changes. 
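Since the entropy-generation number NG and the Bejan number Be drive most of this discussion, a minimal sketch of how such quantities are typically evaluated from the similarity profiles is given here. The grouping of a heat-transfer irreversibility and a combined friction/magnetic irreversibility below is a common convention in this literature and is assumed for illustration; the paper's exact expression may differ, and the sample profile values are invented.

```python
import numpy as np

def entropy_numbers(theta_p, f_p, f_pp, Re, Br, alpha, M):
    """Pointwise entropy-generation number NG and Bejan number Be.

    theta_p  : dimensionless temperature gradient d(theta)/d(eta)
    f_p, f_pp: dimensionless velocity f'(eta) and shear f''(eta)
    alpha    : dimensionless temperature ratio (T_w - T_inf)/T_inf
    """
    n_heat = Re * theta_p**2                                # thermal part
    n_fric = Re * (Br / alpha) * (f_pp**2 + M * f_p**2)     # friction + magnetic part
    ng = n_heat + n_fric
    be = n_heat / ng        # Bejan number: thermal share of total entropy
    return ng, be

# Illustrative similarity-profile samples at a few eta stations (assumed).
theta_p = np.array([-0.60, -0.35, -0.15])
f_p     = np.array([ 0.80,  0.35,  0.08])
f_pp    = np.array([-1.10, -0.40, -0.10])

ng, be = entropy_numbers(theta_p, f_p, f_pp, Re=5.0, Br=0.5, alpha=0.2, M=0.5)
print("NG:", np.round(ng, 3), "  Be:", np.round(be, 3))
```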
For both kinds of nanofluids, entropy generation profiles as a function of the Reynolds number (NG and 6Al4V-Co/H2O nanoparticles are substantially higher than those of Co-H2O nanoparticles.Even if the wall velocity parameter has significant slip velocity values, it restricts collisions with molecular diffusion. When more nanoparticles are added to various mediums, the simultaneous effects of thermal convection, diffusion, and kinematic viscosity are involved. In Fig.\u00a0Table 2O and Ti6Al4V-Co/H2O nanoparticles. Graphic analysis and extensive discussion of the physical behavior of the non-dimensional boundary layer distributes show how the unique factors affect them. Thus, from the present analysis, the under-listed concluding remarks are obtained:Along the far stream, the velocity field is reduced for the upsurging porosity The temperature distribution is affected by most of the physical quantities, which denotes that nanofluids have a high heat exchange rate. This property helps control the temperature during spin coating processes.The entropy profile against the porosity term 2O nanofluid and Ti6Al4V-Co/H2O hybrid nanofluids can be seen, compared to the Nusselt number coefficient for the porosity and volume fraction.Remarkable change in frictional force factor for Co-HEntropy creation, irreversibility propagation, fluid flow, and heat transfer in an electrically conducting Newtonian hybrid nanofluid across a stretching sheet exposed to slip and convective boundary conditions have all been quantitatively described in the current research. The solid volume fraction has been explored using a modified version of Tiwari and Das's nanofluid model of the Co-H76.The FEM could be applied to a variety of physical and technical challenges in the future"} {"text": "This issue addresses the multifaceted problems of understanding biodiversity change to meet emerging international development and conservation goals, national economic accounting and diverse community needs. Recent international agreements highlight the need to establish monitoring and assessment programmes at national and regional levels. We identify an opportunity for the research community to develop the methods for robust detection and attribution of biodiversity change that will contribute to national assessments and guide conservation action. The 16 contributions of this issue address six major aspects of biodiversity assessment: connecting policy to science, establishing observation, improving statistical estimation, detecting change, attributing causes and projecting the future. These studies are led by experts in Indigenous studies, economics, ecology, conservation, statistics, and computer science, with representations from Asia, Africa, South America, North America and Europe. The results place biodiversity science in the context of policy needs and provide an updated roadmap for how to observe biodiversity change in a way that supports conservation action via robust detection and attribution science.This article is part of the theme issue \u2018Detecting and attributing the causes of biodiversity change: needs, gaps and solutions\u2019 EcologiEfforts to properly account for the value of nature within our economies are Developments linking biodiversity monitoring and assessments are occurring rapidly in response to the need for actionable knowledge. 
The impetus for this knowledge comes from the need to track and guide progress towards biodiversity goals, notably the United Nations Sustainable Development Goals and the The challenges of effective biodiversity change assessments go beyond economic incentives and investment in monitoring networks. Designing sampling protocols, identifying metrics, correcting estimation bias, quantifying uncertainties, attributing causes, projecting future pathways and designing policies are components that contain major and often underappreciated knowledge gaps. These gaps were contributing factors to the failure to achieve earlier international targets to halt biodiversity loss by 2020 . In addi. 2This issue is organized through a conceptual framework linking social and policy needs to the scientific components that underpin robust assessments of biodiversity change figures and 2.FWe identify observation, estimation, detection, attribution and projection as major scientific components of biodiversity assessment . ObservaWe also identify the components of policy, community and research that need better integration. Policy on stewarding and exploiting biological resources is driven by a combination of consumer demand , traditiThe natural and social sciences covering these biodiversity issues span almost all disciplines, so it would be impossible for a single journal issue to cover all important aspects. As well, the complexity of socioecological dynamics prevents a complete synthesis from being feasible at the moment et al. [et al. [et al. [Three perspectives in this issue, Dasgupta & Levin , Salomonet al. , and Ach [et al. remind u [et al. provide Dasgupta & Levin argue thet al. [Salomon et al. , a groupet al. [Continental variations in biodiversity and assessment capacity are illustrated by Achieng et al. . The autet al. [et al. [In a synthesis paper, Gonzalez et al. show tha [et al. argue th (b)et al. [et al. [Biodiversity observations are vital for assessing how policies and environmental and anthropogenic pressures impact biodiversity. Yet, most biodiversity data are biased geographically, taxonomically or thematically. Oliver et al. explore [et al. clearly et al. [et al. [Mori et al. advocate [et al. suggest et al. [Biodiversity assessments hinge on identifying metrics most indicative of nature's states and changes under climate and anthropogenic stressors. While essential biodiversity metrics have been identified, it is still an open question as to what are the most important metrics and how they impact ecosystems and societies. Dornelas et al. provide (c)et al. [The lack of evidence for changes in some biodiversity metrics may mean they really are not associated with changes in ecosystem functions, but observational and statistical deficiencies may also lead to a lack of power to detect biodiversity changes even if they are occurring. In line with open empirical questions about biodiversity indicators, Roswell et al. explore et al. [Our issue identifies two essential biodiversity metrics with strong theoretical roles that are sensitive to observational biases, with current correction methods remaining suboptimal to accurately assess biodiversity states and changes using data from most monitoring programmes today. These metrics are species richness and ecolet al. present et al. [et al. [Identifying actual species interactions is impossible without having observed all extant species in a community, but Chiu et al. show it [et al. represen (d)et al. [et al. 
[Gadus morhua) using a spatially replicated dataset of temporal genomic data combined with model simulations. They find evidence for harvesting-induced evolution via polygenic adaptation and trait selection sustained over several decades. However, attributing harvesting with high confidence for the patterns of polygenic trait variation is not easy and will require a combination of spatially replicated population genomic time series in contrasting selective environments combined with models to provide expectations for patterns of genetic covariance in allele frequency over long time periods. This is an exciting challenge for future detection and attribution research on genetic diversity, and its links to population persistence, that extends far beyond harvesting in marine populations.Reid et al. begin wi [et al. assess wet al. [Gregory et al. report aet al. [Malchow et al. analyse et al. [Thompson et al. use fishet al. [et al. [Navarette et al. illustra [et al. .et al. [The interactions between humans, nature, and environmental change are complex and capable of producing unexpected dynamics. Therefore, process-based models of how to manage biodiversity change may be theoretically intractable and empirically unidentifiable in many social\u2013ecological systems. For example, the socioecological coupling of fish population dynamics, economics and management produces the alternative stable states of conservation or overexploitation, which are indistinguishable if we only measure derived ecosystem services (e.g. revenue from consumption) without independent stock assessments, which may not be available. Chapman et al. suggest . 4Although our issue has covered some major themes in quantifying and understanding biodiversity change, some emerging issues were not addressed. These missing links include incorporating genetic diversity , spatialCausal analysis in biodiversity science is still in its infancy, partly because statistical and mechanistic modelling tools are underdeveloped given the complexity of ecological systems \u201368, or rWe hope this issue provides guidelines for linking biodiversity observations, monitoring and inferences about the rates and reasons for biodiversity change. A robust detection and attribution framework will inform the implementation of policies designed to protect, manage and sustain biodiversity and ecosystem benefits at the heart of the Kunming-Montreal Global Biodiversity Framework and the"} {"text": "The engineering of scaffolds and surfaces with enhanced properties for biomedical applications represents an ever-expanding field of research that is continuously gaining momentum. As technology and society evolve, the golden standard of autografts has been contested due to their lack of availability, and tremendous efforts have been dedicated to developing nature-inspired materials that are able to either undertake the functions of damaged tissues or significantly contribute to their repair. To this end, multidisciplinary research that aims to design novel or upgraded materials has been conducted with the purpose of locating suitable candidates that replicate the characteristics of natural tissues with regard to their function, mechanical behavior, microarchitectural features, etc. 
The present Special Issue entitled \u2018Scaffolds and Surfaces with Biomedical Applications\u2019 published 13 papers (10 research and 3 review papers) that describe the synthesis of new materials with biomedical applications and their thorough characterization using conventional and emerging techniques. Istratov et al. synthesiSoria et al. evaluated the cytotoxic capacity of a new instillation technology via a biodegradable ureteral stent/scaffold coated with a silk fibroin matrix for application in the controlled release of mitomycin C as an anti-cancer drug . They co\u03d2-Fe2O3) nanoparticles mixed with ultra-hard and tough bio-resin, was reported by Fallahiarezoudar et al. [2O3 in the structure enhanced the proliferation rate of HSF1148 due to the ability of numerous magnetic nanoparticles to integrate with the cellular matrix.A 3D scaffold structure, comprising thermoplastic polyurethane and maghemite (S. aureus. By developing a biodegradable scaffold that has the potential to simultaneously promote bone tissue regeneration and prevent bacterial biofilm formation, we come a step closer to overcoming the current problems encountered in bone tissue engineering.Filipov et al. presenteOlaret et al. proposedMorales-Guadarrama et al. reportedBombyx mori 3D silk fibroin nonwoven scaffolds in vitro.Hu et al. evaluateIn addition, Neto et al. reportedMartinez-Garcia et al. presenteLu et al. presenteThis Special Issue also contains three review papers. The first, contributed by Arifin et al. , one aim"} {"text": "Intelligent sensing systems have been fueled to make sense of visual sensory data to handle complex and difficult real-world sense-making challenges due to the rapid growth of computer vision and machine learning technologies. We can now interpret visual sensory data more effectively thanks to recent developments in machine learning algorithms. This means that in related research, significant attention is now being paid to problems in this field, such as visual surveillance, smart cities, etc.The Special Issue offers a selection of high-quality research articles that tackle the major difficulties in computer vision and machine learning for intelligent sensing systems from both a theoretical and practical standpoint. It includes intelligent sensing techniques ,2,3,4,5,Intelligent sensing techniquesKokhanovskiy et al. demonstrShiba et al. proposedChen et al. presenteNiu et al. proposedHashmani et al. presenteIntelligent sense-making techniquesLe and Scherer performeTran et al. proposedZaferani et al. proposedHu et al. developeOh et al. suggesteApplications of intelligent sensing systemsSong and Lee studied Moreno-Armend\u00e1riz et al. describeIn conclusion, through the wide range of research presented in this Special Issue, we would like to boost fundamental and practical research on applying computer vision and machine learning for intelligent sensing systems."} {"text": "Human-induced pluripotent stem cells (hiPSCs) serve as a sustainable resource for studying the molecular foundation of disease development, including initiation and deterioration. Although the process of reprogramming adult cells is accompanied by the obliteration of part of the epigenetic signature, usually upon differentiation of hiPSCs into specific cells, such as brain, heart, or muscle cells, these cells are adequate models for studying disease pathology and for drug discovery, as described in this Special Issue. 
In considering iPSCs as models for mono/complex diseases and as a potential future replacement for animal studies, Costa et al. suggest PCCB genes . They suggest that the upregulation of cardiac-enriched miRNAs can explain these changes and that they can serve as new therapeutic targets for intervention strategies for this cardiomyopathy-associated disorder. Alvarez et al. observedInterestingly, in a study of an hiPSC line generated from a patient with mutations in a factor related to autophagy ), Ben-Zvi et al. present Walker et al. establisThe review by Gabriele Bonaventura et al. reports 2+ channel subunits, and ionotropic receptor subunits and the density of GABAergic synapses. The results indicated elevated basal intracellular Ca2+ levels and lower frequency of spontaneous Ca2+ signals, elevated Ca2+ amplitudes upon glycine and acetylcholine application, and larger miniature postsynaptic current (mPSC) amplitudes in SGCE MSNs as compared to healthy MSNs. The contribution of this in vitro model of DYT-SGCE myoclonus\u2013dystonia to the understanding of the functional phenotype and pathophysiology of the disease may help to advance the development of new therapeutic strategies. Kutschenko et al. investigThe study by Tate et al. sheds liKuriyama et al. describeIn their review, Heider et al. describeAbati et al. reviews Methods: this Special Issue presents several articles describing modified protocols for the efficient generation of specific iPSC-derived cellular models. 2+-dependent glutamate release properties as a hallmark of neuronal maturation. Baldassari et al. aimed toLanfer et al. describeJohnson Chacko et al. describeHelmi et al. show thaThe review by Matsumoto et al. describeThe review by Bigarreau et al. discusseThe review by Antonov et al. describePasqua et al. describe"} {"text": "The definition of the term biopolymer is often controversial, and there is no clear distinction between \u201cbiopolymers\u201d, \u201cbioplastics\u201d, and \u201cbio-based polymers\u201d. Biopolymers (or bioplastics) are considered by some authors to only be polymers that are biodegradable. In practice, they bring together biosourced polymers that are produced from renewable resources, as well as biodegradable polymers and even sometimes biocompatible polymers. Thus, they can be classified according to two distinct criteria: the origin of the resource from which they are produced and their end-of-life management (biodegradability).The current biopolymers can be classified into three main groups, depending on the two aforementioned criteria: (i) biodegradable polymers from renewable resources ; (ii) biodegradable polymers from fossil resources obtained via industrial synthesis processes; and (iii) non-biodegradable polymers from renewable resources.Biopolymers have a diverse chemical structure, and their physicochemical properties make them suitable for clinical, biomedical, and pharmaceutical applications due to their versatile characteristics, such as biocompatibility, biodegradability, and low immunogenicity, which are key features in the new approach to the design of novel advanced materials.In this Special Issue, entitled \u201cBiopolymers for Enhanced Health Benefits\u201d, a total contribution of eight papers\u2014seven original articles and one review\u2014were published, focusing on the synthesis and characterization of different types of biopolymers for biomedical applications.Rodriquez-Cendal et al. reviewedPonjavic et al. also obtOzturk et al. investigGan et al. investigDzhuzha et al. 
studied Danio rerio larvae model).Nurzynska et al. investigUlagesan et al. preparedMasetto et al. obtainedThe papers published in this Special Issues clearly prove that the field of biomaterials is important for high-value-added applications in the medical field. However, a great deal of effort is necessary in order to translate the results obtained in this academic research to an industrial scale as well as to clinical randomized trials."} {"text": "There are a few hypotheses for the origin of palatally impacted canines (PIC). Nevertheless, the results of different studies are controversial.Considering the evidence available in the literature to determine the skeletal and dentoalveolar dimensions in patients with PIC using cone beam computed tomography (CBCT).This systematic review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Statement. The literature search with no publication date restriction in five databases and hand searching was performed until April 2023.Data assessing the skeletal and dentoalveolar characteristics of subjects with PIC evaluated with CBCT was extracted, and the studies\u2019 quality was evaluated with the Newcastle-Ottawa Scale (NOS). Skeletal and dentoalveolar characteristics of subjects with PIC were compared with non-impacted subjects or non-impacted sides. MedCalc software was used to perform the meta-analysis. Statistical heterogeneity was assessed using the chi-square and I-square tests.The initial database search identified a total of 1153 studies. After applying the selection criteria, nine articles were included in the systematic review and meta-analysis. According to the NOS, all included articles were graded as \u201cGood\u201d quality. The meta-analysis showed a non-significant difference in measuring dentoalveolar height, alveolar first molar width, and basal lateral width. Controversial results were observed when evaluating both basal and alveolar first premolar widths. A significant difference was found when assessing anterior alveolar crest height and basal maxillary width.Studies demonstrated the reduction of both dentoalveolar and skeletal maxillary parameters of the patients with PIC. The meta-analysis indicated that PIC correlates to both vertical and transverse skeletal dimensions of the maxilla. However, the results remain controversial. The findings should be interpreted with caution due to different study designs and unbalanced groups in the included studies; therefore, further research is needed for more reliable conclusions.This systematic review and meta-analysis were registered in the International Prospective Register of Systematic Reviews (PROSPERO CRD42022362124) With a frequency ranging from 1% to 2.5%, the maxillary canine is the second most frequently impacted tooth after the third molars , 4. NeveTooth eruption is a physiological process that affects the alveolar bone\u2019s normal development, whereas tooth impaction may prevent the alveolar bone\u2019s regional growth . Althouget al. [et al. [Several researchers have attempted to determine whether there is a connection between PIC incidence and the skeletal and dental dimensions of the maxilla. McConnell et al. linked i [et al. , risk fa [et al. reported [et al. , patientCone-beam computed tomography (CBCT) has made it possible to gather precise information on the bone dimensions by displaying three-dimensional pictures of teeth and bone in high resolution. 
For subjects with PIC, CBCT can be used for localization, evaluation of root resorption , alveolaDue to these considerations, this systematic review aimed to gather current information and evaluate the skeletal and dentoalveolar dimensions using CBCT in individuals with PIC.These systematic review and meta-analysis were conducted and reported following the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) and was registered in the International Prospective Register of Systematic Reviews (PROSPERO CRD42022362124).According to the Participants Intervention Comparison Outcome Study design scheme, the study planned to include randomized, prospective, and retrospective controlled trials (S) on human patients of any age, ethnicity, or sex with palatally impacted maxillary permanent canines (P). The intervention (I) was defined as the CBCT of subjects with PIC, and the comparison (C) was made between patients with impacted and normally erupted canines or between impacted and non-impacted sides. The primary outcome (O) evaluated was maxillary skeletal morphological characteristics , height of alveolar crest (AACH), basal first premolar, and basal lateral widths (BLWs)). The secondary outcome was dentoalveolar variables ). The developed focus question was: What are the maxillary structure variations in subjects with palatally impacted maxillary canines?Five electronic databases were searched systematically . This warandomized, prospective, and retrospective studies published in English,patients diagnosed with palatal impaction of the maxillary permanent canine, andCBCT images with radiological evaluation measurements before treatment.literature reviews, case reports, and series;panoramic or dental radiographs and dental casts used for evaluation; andpatients with genetic syndromes , severe facial malformations or systemic diseases, previous orthodontic treatment, dento-maxillary traumas, or agenesis.Before beginning the search in the selected databases, the three investigators discussed the search strategy. Two researchers then performed the study selection independently. Selection and filtration were done by assessing the titles of the articles and their abstracts; duplicates were removed. If the article met the inclusion criteria, the entire article was read to make the final decision. In addition, the reference/citation lists of the included trials were manually searched for any additional studies. Disagreements were resolved by consensus between the two reviewers and a third author was consulted when necessary. The last search was conducted on 22 April 2023.Two authors independently extracted the study characteristics, including design, sample size, patient age and sex, and maxillary morphological characteristics measurements .The quality of the included study protocols was assessed by investigating full-text articles. The Newcastle-Ottawa Scale (NOS) risk of bias assessment tool was used to evaluate the methodological quality of non-randomized clinical studies . Three dP values \u22640.05 were considered statistically significant. Characteristics included in the meta-analysis were the AACH, basal first premolar width (BPMW), basal lateral width (BLW), ADH, alveolar first molar width (AMW), and alveolar first premolar width (APMW). Studies with methods and outcomes that could not be quantitatively analyzed were described qualitatively.A meta-analysis was conducted on the quantitative data using MedCalc v14.8 . 
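To make the statistical workflow concrete: the pooling was done in MedCalc, but the quantities involved (a pooled mean difference under a random-effects model, the chi-square heterogeneity statistic, known as Cochran's Q, and the I-square statistic discussed in the next paragraph) can be sketched in a few lines of code. The snippet below is purely illustrative; the per-study means, standard deviations and sample sizes are invented, and the formulas are textbook DerSimonian–Laird versions rather than necessarily MedCalc's exact implementation.

# Illustrative random-effects meta-analysis of a mean difference (e.g. a width in mm).
# Hypothetical data only; the review itself used MedCalc v14.8 on the included studies.
import math

# (mean, sd, n) for the PIC group and the control group in each invented study
studies = [
    ((6.9, 1.1, 30), (8.7, 1.1, 30)),
    ((7.2, 1.3, 25), (8.1, 1.4, 25)),
    ((7.0, 1.0, 40), (7.9, 1.2, 40)),
]

effects, variances = [], []
for (m1, s1, n1), (m2, s2, n2) in studies:
    effects.append(m1 - m2)                      # raw mean difference
    variances.append(s1**2 / n1 + s2**2 / n2)    # variance of that difference

# Fixed-effect weights, pooled estimate and Cochran's Q
w = [1.0 / v for v in variances]
fixed = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
Q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, effects))
df = len(studies) - 1

# I-square: share of the variability attributable to between-study heterogeneity
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird between-study variance tau^2 and the random-effects pooled estimate
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C) if C > 0 else 0.0
w_re = [1.0 / (v + tau2) for v in variances]
pooled = sum(wi * d for wi, d in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))
print(f"I-square = {I2:.1f}%, pooled difference = {pooled:.2f} mm "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")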
Considering the high clinical heterogeneity among the included studies, a random effect model was used for analysis. Statistical heterogeneity was assessed using the chi-square and I-square tests. I2 values of 25%, 50%, and 75% indicated low, moderate, and high heterogeneity, respectively. A total of 1153 articles were identified in the online search engine. Following the removal of duplicates and the review of article titles and abstracts, reports were sought for retrieval and three of them were not retrieved . Eight aAll selected studies were retrospective with a split-mouth design or a conFundamental data extracted from individual studies are presented in All the studies , 15\u201322 rAll articles were rated as \u201cGood\u201d quality with a low risk of bias . PrimariThe outcomes of all individual studies for the primary and secondary outcomes are summarized in Alveolar bone measurements were reported in four articles . All of P\u2005<\u20050.0001) between the BP width of the impacted side (6.87\u2005\u00b1\u20051.08 mm) and the non-impacted side (8.70\u2005\u00b1\u20051.13 mm). In contrast to the non-impacted side (8.90\u2005\u00b1\u20051.68 mm), there was no statistical difference in BP width at the height of 6 mm on the impacted side (8.55\u2005\u00b1\u20052.23 mm) as well as at 10 mm (9.51\u2005\u00b1\u20052.26 vs. 10.26\u2005\u00b1\u20052.31).Only one article evaluateThe BLW was studied in two split-mouth design studies , 17. In There were five articles published on evaluating the BMW , 22. In Five studies , 19\u201321 rOne study evaluatiTwo studies reported on ADH , 17. ThiAMW was estimated in five studies , 19\u201322. In the other three studies , 21, 22 et al. [et al. [Five studies , 19\u201322 eet al. and Hong [et al. where unArch length (AL) was assessed in two studies , 18. OneMaxillary-impacted canines have frequently been a subject of study, along with the associated dentoalveolar and maxillofacial structures. Some authors have suggested a connection between transverse maxillary width and impaction , 10, 12,The preferred diagnostic technique for identifying impacted teeth is CBCT because it eliminates many of the frequent issues with panoramic radiography, such as superimpositions, overlapping structures, and image blurring , 15, 24.It is reasonable to presume that the canine tooth\u2019s impaction results in altered alveolar dimensions on the impacted side because the alveolar process develops in response to tooth eruption , 26. Wheet al. [et al. [et al. [The alveolar transverse difference at the premolar level (BPMW) was measured in five studies. All except one found naet al. , and Sar [et al. with D\u00b4O [et al. . Controv [et al. , 17, whi [et al. , 21. It et al. [et al.\u2019s [The reason for the aforementioned disagreement could be the different gender distribution in studies: Arboleda-Ariza et al. sampled et al. , 17, 19,et al.\u2019s investiget al.\u2019s , 17 verset al.\u2019s ) could iet al. [When analyzing dental arch dimensions, some researchers included in this review reported reductions in maxillary dentoalveolar width at two levels (AMW and APMW) in the PIC patient groups when compared to controls , 19, 21.et al. found wiConcerning height measurements, a meta-analysis revealed a statistically significant difference between the impacted and non-impacted groups when evaluating skeletal AACH. Thus, it can be assumed that PIC relates to a vertical deficit of the maxilla.et al. [Tadinada et al. discoveret al. [et al. 
[The results of the dentoalveolar parameter evaluation used in this review can be compared to the results of studies with similar methodologies but performed on dental casts. Vitria et al. revealed [et al. did not [et al. did not According to Verma and Saravana Dinesh , impactiet al.’s [et al.’s [There is a lack of evidence concerning transverse deficit in the molar area, both skeletal and dentoalveolar. In the premolars area, a meta-analysis showed constriction of both skeletal and dentoalveolar areas comparing PIC patients and controls. Researchers evaluating PIC patients via cephalometry or dental casts also confirmed that PIC is often diagnosed in patients with an absence of noticeable malocclusion. Amini et al.’s study shet al.’s findingset al.’s . Numerouet al.’s . The freet al.’s . This systematic review provides clinically significant information regarding the morphological characteristics of the maxilla in patients with PIC. The findings imply that transverse dimension should be corrected with more focus, especially at the level of the first premolar. The review included studies that used different study designs, such as split-mouth or two groups of subjects , and they included growing and non-growing patients. However, none of the articles calculated a reliable sample size, the gender distribution varied among the studies, and ethnicity-specific differences were likely. Due to the high heterogeneity of the studies, all but three meta-analyses were conducted on the results of two studies. Even though it is a sufficient number to perform a meta-analysis , additioThe included studies demonstrated the reduction of both dentoalveolar and skeletal maxillary parameters in patients with PIC. The performed meta-analysis indicated that vertical and transverse skeletal dimensions of the maxilla were associated with PIC. Regarding dentoalveolar characteristics, only the difference in first premolar width between groups was statistically significant in the meta-analysis. However, the results remain controversial, and further research is needed for more reliable conclusions. Supplementary material: cjad050_suppl_Supplementary_Figure_S1, cjad050_suppl_Supplementary_Figure_S2, cjad050_suppl_Supplementary_Table_S1."} {"text": "Future research and planned studies testing recruitment and retention strategies are needed to identify optimal, modern communication procedures to increase AYA participation and adherence. More education should be provided to AYAs to increase their knowledge of research studies and strengthen the connection between AYA cancer survivors and their health providers. We conducted a literature review to identify commonly used recruitment and retention strategies in research among adolescent and young adult (AYA) cancer survivors 15-39 years of age and examine the effectiveness of these strategies based on the reported recruitment and retention rates. We identified 18 publications published after 2010, including 14 articles describing recruitment strategies and four articles discussing retention strategies and addressing reasons for AYA cancer patients dropping out of the studies. In terms of recruitment, Internet and social networking strategies were used most frequently and resulted in higher participation rates of AYA cancer survivors compared to other conventional methods, such as hospital-based outreach, mailings, and phone calls.
In terms of retention, investigators used monetary incentives in all four studies and regular emails in two studies. There was no association between the number of strategies employed and the overall recruitment ( Adolescent and young adult (AYA) patients aged 15\u201339 years are recognized as a unique population within the oncology community. Worldwide, more than 1.2 million AYAs are diagnosed with cancer annually, and nearly 90,000 AYAs were diagnosed in 2020 in the United States that mayThe Internet has become the mainstream platform for acquiring and disseminating information. Digital tools, such as social media and email, play an important role in recruiting and retaining participants. We hypothesized that an increasing number of strategies used to recruit/retain AYA cancer survivors would be associated with higher recruitment and retention rates. Therefore, we conducted a literature review to identify commonly used recruitment and retention strategies in research among AYA cancer survivors and examined the effectiveness of these strategies based on reported recruitment and retention rates.We used PubMed and Google Scholar to identify existing studies and reviews on AYA recruitment and retention methods for longitudinal research and clinical trials in oncology. Considering the rapid development of the Internet in the past ten years, results were restricted to publications no earlier than 2010 to review more current research. We included only studies published in English.To narrow the publications in cancer-specific research, keywords of \u201ccancer,\u201d \u201cAYA,\u201d \u201cadolescent,\u201d \u201cyoung adult,\u201d \u201crecruitment,\u201d \u201cretention,\u201d \u201cparticipation,\u201d \u201crate,\u201d and \u201cstrategy\u201d were used. These keywords were combined multiple times as \u201cadolescents cancer recruitment rate,\u201d \u201cyoung adult cancer recruitment rate,\u201d \u201cadolescent cancer retention rate,\u201d \u201cyoung adult cancer retention rate,\u201d \u201cAYA recruitment and retention strategy,\u201d \u201cadolescent and young adult cancer participation,\u201d and \u201cAYA cancer recruitment and retention\u201d to get a comprehensive search of relevant studies. Additionally, citations of the selected articles, especially systematic reviews, were evaluated and filtered with the same inclusion and exclusion criteria so that studies missed in the keyword searching stage could be included. Studies that were not cancer-specific, did not target AYAs, or did not specify a population age range were excluded. A total of 10 articles were excluded, including 5 articles without a description of recruitment and retention strategies.p-value of < 0.05 was considered statistically significant.A Spearman correlation test assessed the association between several strategies used in each study and overall recruitment and retention rates. A The final search yielded 18 publications (Table\u00a0n = 10) and hospital-based (n = 6) strategies were the primary approaches used to recruit participants. Of the 14 studies that report recruitment methods, 64.3% (n = 9) reported using financial incentives, ranging from $20 to $50 per person.A total of 12 methods were used to recruit potential participants of participants. These approaches included emails sent by the directors of cancer survivorship organizations, posting on cancer survivorship organizations\u2019 websites, Facebook paid advertisements, and Facebook posts on cancer survivorship sites. Similarly, Benedict et al. 
.Including paper questionnaires and sending reminders increases the recruitment rate of AYA cancer survivors . In a cret al. divided et al. [et al. [et al. [p = 0.333).Among the identified studies, only four discussed retention strategies and three provided a retention rate Table\u00a0. Cantrelet al. , Le et a [et al. , and Tay [et al. reportedet al. [et al. [In the study conducted by Rosenberg et al. , a group [et al. , particiet al. [et al. [In the third study, Le et al. conducteet al. . Lastly, [et al. reportedet al. [Three articles discussed the reasons for participants dropping out from studies, including two clinical trials and a survey-based study Fig.\u00a0 4,18,19,1918,19.Internet-based outreach to AYA cancer survivors became a common strategy after 2010 based on 14 published studies that included information on recruitment methods. Despite that most studies did not provide the recruitment rates of each strategy, studies generally reported a higher participation rate resulting from Internet and social networking recruitment compared to recruitment at oncology clinics and cancer centers, supporting our hypothesis that the use of Internet-based outreach would increase recruitment rates of AYA cancer survivors. In terms of retention strategies, much of the existing research addresses the attrition rate among AYA cancer survivor studies, with little published literature on methods to improve retention rates. The most utilized retention method was monetary incentives of cash and gift cards, which was mentioned in all four studies, followed by regular emails to participants used in two studies. No studies used Internet-based strategies to increase retention rates, identifying an important area to consider in future studies. To advance the field of research in AYA cancer survivors, investigators should report their recruitment and retention rates and strategies in all publications reporting their study methods.et al. [et al. [et al. [n = 184) was achieved from automatic hospital referral to the cardiac rehabilitation clinical trial, although a wide range of recruitment strategies other than referrals were employed, including mailings, media advertisements, and community outreach.Studies conducted across broader age groups could offer recruitment strategies for AYA cancer survivors. After evaluating 68 studies across all ages on strategies to improve recruitment in randomized trials from different countries, Treweek et al. found th [et al. found th [et al. reportedet al. [P = 0.027). In contrast, Teague et al. [In our review, we did not observe a clear association between the number of strategies employed and the overall recruitment and retention rates. Our assessment was limited by the availability of data, as among the 18 studies identified for both recruitment and retention strategies, only five of them provided information on their recruitment rate, one of them indicated the number of enrollees and potential participants for us to calculate the recruitment rate, and three articles reported their retention rate. However, prior studies not restricted to AYA cancer survivors have found conflicting results. After conducting a systematic review of 88 studies on 985 retention strategies, Robinson et al. found a e et al. claimed e et al. . Some eme et al. ,24. Theset al. [et al. [According to the three studies that described why AYA cancer survivors withdrew from studies, the most common reasons were concerns about the time commitment, side effects, and relocation. In addition, Roick et al. 
found th [et al. also fou [et al. . However [et al. \u201329. Ther [et al. , highliget al. [Furthermore, Buchanan et al. also diset al. .There are also system-level barriers that hinder the recruitment and retention of AYA cancer survivors into studies. Compared to cancer survivors < 15 years of age who receive care in pediatric oncology facilities, adolescent cancer survivors have lower participation rates in clinical trials . EnrollmA major limitation of this review is the lack of studies assessing recruitment and retention strategies. The literature discussing recruitment and retention strategies for AYA cancer patients is less than that for patients of other ages, and not all studies reported their recruitment and retention rates. In two articles ,9, the aInternet-based recruitment strategies are becoming increasingly utilized, followed by hospital outreach and other conventional methods, such as mailing, flyers, and phone calls. Providing monetary incentives is an effective recruitment and retention method in AYA cancer studies. Other retention strategies include frequent email reminders and stable contacts with participants. In future research, evolving communication strategies, such as advertisements on social media and video platforms , can be implemented to improve AYA cancer patient recruitment rates. Investigators should consider cancer survivors\u2019 psychological and social barriers and facilitators to enroll and remain in the studies. There is also an opportunity for future research to address the underlying factor for the low participation rate of AYA cancer patients in cancer clinical trials. More strategies need to be implemented to overcome the retention barriers, such as unwillingness to the time commitment and medical mistrust. It is necessary for investigators to be educated on recruitment and retention barriers faced by AYAs as well as the need to increase education regarding cancer research and treatments for AYAs to improve their knowledge of cancer research and the relationships between patients, healthcare providers, and researchers. Engaging AYA cancer survivors with the research studies they are participating in also may result in higher retention rates."} {"text": "Metal\u2013organic frameworks (MOFs) are a class of porous two- or three-dimensional infinite structure materials consisting of metal ions or clusters and organic linkers, which are connected via coordination bonds . Owing t2C/C nanospheres for microwave absorption. The dual-shell structure could optimize impedance matching by prompting the intrinsic impedance to be as close as possible to that of the outside air. It was proven that the dual-shell structure of the Mo2C nanoparticles had a positive effect on EM energy attenuation. This finding could facilitate the design and preparation of highly efficient carbon-based microwave-absorbable materials.Wang et al. prepared3In2S6/g-C3N4 photocatalyst by using a low-temperature solvothermal method, showing excellent degradation performance regarding tetracycline (TC) under visible light irradiation. The degradation mechanism of photocatalysts on TC was analyzed, demonstrating excellent performance at low temperatures. This study could provide a new strategy for the preparation of photocatalysts. Li et al. synthesiXia et al. develope2 hydrogenation catalysts and compared their synthetic strategies, unique features, and enhancement mechanisms with traditionally supported catalysts. 
The challenges and opportunities pertaining to the precise design, synthesis, and applications of MOF-based CO2 hydrogenation catalysts were also summarized. Great emphasis was placed on the various confinement effects involved in CO2 hydrogenation. The mechanistic insights into the confinement effects associated with MOFs or MOF-derived catalysts designed for CO2 hydrogenation provided by this review could facilitate the development of clear structure\u2013activity relationships, aiding rational catalyst design.Lin et al. recentlyWang et al. reviewedXu et al. summarizJu et al. describeZhao et al. reviewed"} {"text": "Weevils represent one of the most prolific radiations of beetles and the most diverse group of herbivores on land. The phylogeny of weevils (Curculionoidea) has received extensive attention, and a largely satisfactory framework for their interfamilial relationships has been established. However, a recent phylogenomic study of Curculionoidea based on anchored hybrid enrichment (AHE) data yielded an abnormal placement for the family Belidae (strongly supported as sister to Nemonychidae + Anthribidae). Here we reanalyse the genome-scale AHE data for Curculionoidea using various models of molecular evolution and data filtering methods to mitigate anticipated systematic errors and reduce compositional heterogeneity. When analysed with the infinite mixture model CAT-GTR or using appropriately filtered datasets, Belidae are always recovered as sister to the clade )), which is congruent with studies based on morphology and other sources of molecular data. Although the relationships of the \u2018higher Curculionidae\u2019 remain challenging to resolve, we provide a consistent and robust backbone phylogeny of weevils. Our extensive analyses emphasize the significance of data curation and modelling across-site compositional heterogeneity in phylogenomic studies. In tet al. [In the present study, we reanalyse the data by Shin et al. using va. 2 (a)et al. [et al. [Aethina (Nitiduloidea: Nitidulidae) was removed in datasets 2 and 3.Our analyses were conducted on four datasets prepared based on the concatenated AA sequences by Shin et al. . We did et al. . More imet al. ,41). Dat [et al. . Dataset [et al. using BL [et al. under a [et al. . The dis (b)To assess the compositional heterogeneity across taxa, the normalized relative composition frequency variability (nRCFV) was calculated using RCFV Reader v.1 . To asse (c)The maximum likelihood (ML) phylogenetic trees were inferred with IQ-TREE 1.6.2 or 2.1.3 under both the best fitting site-homogeneous model and the mixture models LG4X, LG + C20 + F, and LG + C40 + F (for dataset 2 only), as well as the posterior mean site frequency (PMSF) approximation of LG + C20 + F, LG + C40 + F and LG + C60 + F . The PMSThe Bayesian phylogenies were inferred under the infinite mixture model CAT-GTR + G4 in Phyloet al. [The coalescent-based phylogenies were also estimated. Since the sequences of individual genes were not directly supplied by Shin et al. , we extret al. and R paet al. and seqiet al. . The genet al. , first wet al. , with thet al. .. 3 (a)The normalized RCFV value measures the compositional heterogeneity across taxa. The original dataset displayed a relatively high nRCFV value, indicating a pronounced compositional heterogeneity . In the (b)The Bayesian analyses under the CAT-GTR model did not fully converge for datasets 1 and 4 (maxdiff = 1). 
However, the failure to converge was due only to discrepancies between chains about the nodes within the family Curculionidae, and all the nodes related to the interfamilial relationships of Curculionoidea had reached a posterior probability support of 1.00 in these two analyses . According to , these ret al. [The relationships among most families were stable across our analyses with different datasets and models, except for the positions of Cimberididae and Belidae figures and 2. Aet al. . In most (c)Belidae were recovered as sister to Nemonychidae + Anthribidae in most of the results from datasets 1 and 2, except when analysed with the infinite mixture model CAT-GTR . However. 4et al. [et al. [et al. [In the partitioned maximum-likelihood analysis of AA data by Shin et al. , Belidaeet al. ,20 , where [et al. suggeste [et al. shows th [et al. ,22,23. I [et al. ) as our [et al. \u201368).et al. [et al. [In the original study by Shin et al. , the sitet al. ). They ret al. for each [et al. , in our et al. [Aethina), included some incorrectly aligned loci, and skipped the data filtering step before tree reconstruction. The impact of alignment trimming on downstream phylogenetic analysis is often not very clear [The genomic data used by Shin et al. were obtet al. , which met al. . As showry clear , as it iry clear ,72. NeveThe systematic placement of Belidae based on our reanalyses is congruent with morphology-based studies \u201315 are now confidently resolved, the relationships within the \u2018higher Curculionidae\u2019 , especia. 5et al. [Weevils represent a classical case study on adaptive radiation associated with the diversification of flowering plants ,11. A vaet al. . This to"} {"text": "Human activity recognition (HAR) is an important research problem in computer vision. This problem is widely applied to building applications in human\u2013machine interactions, monitoring, etc. Especially, HAR based on the human skeleton creates intuitive applications. Therefore, determining the current results of these studies is very important in selecting solutions and developing commercial products. In this paper, we perform a full survey on using deep learning to recognize human activity based on three-dimensional (3D) human skeleton data as input. Our research is based on four types of deep learning networks for activity recognition based on extracted feature vectors: Recurrent Neural Network (RNN) using extracted activity sequence features; Convolutional Neural Network (CNN) uses feature vectors extracted based on the projection of the skeleton into the image space; Graph Convolution Network (GCN) uses features extracted from the skeleton graph and the temporal\u2013spatial function of the skeleton; Hybrid Deep Neural Network (Hybrid\u2013DNN) uses many other types of features in combination. Our survey research is fully implemented from models, databases, metrics, and results from 2019 to March 2023, and they are presented in ascending order of time. In particular, we also carried out a comparative study on HAR based on a 3D human skeleton on the KLHA3D 102 and KLYOGA3D datasets. At the same time, we performed analysis and discussed the obtained results when applying CNN-based, GCN-based, and Hybrid\u2013DNN-based deep learning networks. HAR is an important research problem in computer vision. 
It is applied in many fields, such as human\u2013machine interaction , video sDespite much research interest and impressive results, HAR still contains many real challenges in the implementation process. In Islam et al. \u2019s study,In the studies of Xing et al. , Ren et In this paper, we conduct a survey on deep learning-based methods, datasets, and HAR results based on 3D human poses as input data. From there, we propose and analyze the challenges in recognizing human activities based on the 3D human pose.Previous studies on HAR often applied to datasets with a small number of 3D joints, namely from 20 to 31 points as the HDM05 dataset . It maps a 3D representation of the 3D human skeletons into a 2D array to learn the spatial-temporal skeleton features. The CNN-based approach is illustrated in X, Y, and Z axes. The joints of each frame ith are represented by j is the joint number. Li et al. [Tasnim et al. proposedi et al. proposedi et al. . This moi et al. propose i et al. \u2019s researi et al. \u2019s researi et al. to trainGCN-based deep learning uses the natural representation of the 3D human skeleton as a graph, with each joint as a vertex and each segment connecting the human body parts as an edge. This approach often extracts the spatial and temporal features of the skeleton graph series, as illustrated in With the advantages of features that can be extracted from the skeleton graph, this approach has received much research attention in the past four years. In 2019, Shi et al. proposedA novel end-to-end network AR-GCN is proposed by Ding et al. . AR-GCN Kao et al. proposedIn 2020, Song et al. proposedThe Shift-GCN is proposed by Cheng et al. . Other GDing et al. proposedYu et al. proposedThe PR-GCN is proposed by Li et al. . To reduIn 2021: Chen et al. proposedIn 2022: Lee et al. proposedThe DG-STGCN model is proposed by Duan et al. . DGSTGCNThe STGAT is proposed by Hu et al. to captuDuan et al. proposedThe InfoGCN framework is proposed by Chi et al. and presThe TCA-GCN method is proposed by Wang et al. . The TCAHybrid-DNN approaches use deep learning networks together to extract features and train recognition models. Here we examine a series of studies from 2019 to 2023 for skeletal data-based activity recognition. Si et al. proposedThe end-to-end SGN network is proposed based onTrived et al. proposedZhou et al. built a The action capsule network (CapsNet) for skeleton-based action recognition is proposed by Bavil et al. . The temTo evaluate deep learning models for HAR based on 3D human skeleton data, usually, some benchmark datasets have to be used to evaluate the performance. Here we introduce some databases containing 3D human skeleton data.UTKinect-Action3D Dataset [ Dataset includesSBU-Kinect dataset [ dataset is captuFlorence 3D Actions dataset [x, y, and z coordinates) captured with MS Kinect. dataset is captuJ-HMDB dataset [ dataset is a sub dataset with 21 Northwestern UCLA Multiview Action 3D (N-UCLA) [(N-UCLA) is captuSYSU 3D Human-Object Interaction Dataset [ Dataset includesNTU RGB+D dataset [ dataset has beenKinetics-Skeleton dataset [ dataset is named dataset . Each huNTU RGB+D 120 dataset [NTU RGB+D dataset [NTU RGB+D dataset [ dataset . Most de dataset , only thKLHA3D-102 dataset [ dataset is captuKLYOGA3D dataset [KLHA3D-102 dataset [ dataset that the dataset . The onlAcc):Accuracy with the support of CUDA 11.2/cuDNN 8.1.0 libraries. 
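The evaluation metric abbreviated 'Acc' just above is plain top-1 classification accuracy, that is, the fraction of test skeleton sequences whose predicted activity class equals the ground-truth class; its formula appears to have been dropped during extraction. A minimal, self-contained illustration follows; the label arrays are invented and are not outputs of DDNet or PA-ResGCN.

# Top-1 accuracy (Acc): share of sequences whose predicted class matches the ground truth.
def accuracy(y_true, y_pred):
    assert len(y_true) == len(y_pred) and y_true, "label lists must be non-empty and aligned"
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

ground_truth = [3, 17, 42, 42, 101, 7, 7, 55]   # hypothetical class indices (0..101)
predictions  = [3, 17, 40, 42, 101, 7, 9, 55]   # two of the eight predictions are wrong
print(f"Acc = {accuracy(ground_truth, predictions):.2%}")   # Acc = 75.00%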
In addition, there are a number of other libraries such as OpenCV, Numpy, Scipy, Pillow, Cython, Matplotlib, Scikit-image, Tensorflow \u2265 1.3.0, etc.In this paper, we pre-trained the models of DDNet and PA-RThe results of HAR based on the skeleton of the KLHA3D-102, KLYOGA3D datasets are shown in This can be explained by several reasons. The model is trained to recognize using only the data of the 3D human skeleton and has not been combined with other data, such as data about the activity context. Human skeleton data are collected from many cameras in different viewing directions without being normalized. The number of joints in the skeleton data of these two datasets are 39 joints, which is a large number of joints in 3D space. They make the feature vector size large, and the number of action classes in the KLHA3D-102 dataset is 102 classes; there are many similar actions, such as \u201cdrinking tea\u201d, \u201cdrinking water\u201d, and \u201ceating\u201d. As illustrated by the human skeleton data in r et al. chooses r et al. . In thisr et al. , the midr et al. , and ther et al. .In In this paper, we have carried out a full survey of the methods of using deep learning to recognize human activities based on 3D human skeleton input data. Our survey produced about 250 results on about more than 70 different studies on HAR based on deep learning under four types of networks: RNN-based, CNN-based, GCN/GNN-based, and Hybrid-DNN-based. The results of HAR are shown in terms of methods and processing time. We also discuss the challenges of HAR in terms of data dimensions and the insufficient information to distinguish actions with a limited number of reference points. At the same time, we have carried out comparative, analytical, and discussion studies based on fine-tuning two methods of DNNs for HAR on the KLHA3D-102 and KLYOGA3D datasets. Although the training set rate is up to 85% and the test set rate is 15%, the recognition results are still very low (the results on KLHA3D-102_Conf. 5 is 1.96% of DDnet and 8.56% of PA-ResGCN). It also shows that choosing a method for the HAR problem is very important; for datasets with a large number of joints in the 3D human skeleton, the method based on projecting a 3D human skeleton to the image space and extraction features on the image space should be chosen.Shortly, we will combine many types of features extracted from the 3D human skeleton into a deep learning model or construct new 2D feature sets to improve higher HAR results. We will propose a unified model from end-to-end for detecting, segmenting, estimating 3D human pose, and recognizing human activities for training and learning exercises in the gym or yoga for training and protecting health. As illustrated in"} {"text": "Mutation accumulation in tumour evolution is one major cause of intra-tumour heterogeneity (ITH), which often leads to drug resistance during treatment. Previous studies with multi-region sequencing have shown that mutation divergence among samples within the patient is common, and the importance of spatial sampling to obtain a complete picture in tumour measurements. However, quantitative comparisons of the relationship between mutation heterogeneity and tumour expansion modes, sampling distances as well as the sampling methods are still few. Here, we investigate how mutations diverge over space by varying the sampling distance and tumour expansion modes using individual-based simulations. 
We measure ITH by the Jaccard index between samples and quantify how ITH increases with sampling distance, the pattern of which holds in various sampling methods and sizes. We also compare the inferred mutation rates based on the distributions of variant allele frequencies under different tumour expansion modes and sampling sizes. In exponentially fast expanding tumours, a mutation rate can always be inferred for any sampling size. However, the accuracy compared with the true value decreases when the sampling size decreases, where small sampling sizes result in a high estimate of the mutation rate. In addition, such an inference becomes unreliable when the tumour expansion is slow, such as in surface growth. Durinet al. [The patterns of ITH are driven by both spatial and temporal dynamics ,18. Whilet al. introducet al. ,22, whicet al. . In larget al. as well et al. , which iet al. . Spatialet al. ,18,27\u201332et al. ,33\u201337.et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [Using a three-dimensional lattice, Waclaw et al. modelled [et al. investig [et al. . Their r [et al. modelled [et al. and Fu e [et al. also vis [et al. quantita [et al. showed t [et al. compared [et al. applied et al. [et al. [Beyond competition and constraints on pure physical space, the micro- and macro-environment play a significant role in tumour development especially on the non-genetic factors . Andersoet al. introducet al. extendedet al. , hierarcet al. and metaet al. . More re [et al. used a hp \u2264 1) similar to Sottoriva et al. [et al. [et al. [p = 0) and exponential growth (p = 1) as two boundary examples.While different growth modes of solid tumours are related to the strength of spatial constraints and likely to be tissue-specific , we simpa et al. , Ryser e [et al. , Chkhaid [et al. in an aget al. [et al. [et al. [et al. [R) and marginal regions ((1/3)R) of simulated tumours with radius R. We record the accumulation of point mutations, compare mutation frequencies above a detection limit and infer mutation rates in each sample. To compare the ITH between samples, we apply the Jaccard index to quantify the mutation composition difference, which has been used to quantify mutation diversity in much genetic data [More and more experimental and theoretical studies have demonstrated the importance of sampling itself on the interpretation of measured ITH. Opasic et al. investig [et al. evaluate [et al. and Zhao [et al. indicatetic data \u201359. The Our results show that under a given tumour expansion mode (fixed push rate), the ITH increases with the sampling distance, which is consistent with observations in experimental data ,60. This. 2. 2.1Lp), and the other daughter cell is located in a randomly selected empty space among its direct neighbours. If there is no free space, with a probability p (0 \u2264 p \u2264 1), a new space can be created by pushing a randomly selected neighbour cell (at location Ld) and all the rest of the cells along the direction of Lp and Ld outwards for one position until an empty space was reached [p = 0 refers to the surface growth where only cells in the outskirt of the tumour would divide and p = 1 refers to an exponential growth where all cells divide in each time step per cell division. Thus, the probability to have k new mutations in one daughter cell is \u03bbk e\u03bb\u2212/k!. 
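The expression closing the previous sentence is a garbled rendering of the Poisson probability mass function, P(k) = λ^k e^(−λ)/k!. A minimal sketch of this mutation step in isolation is given below: each daughter cell inherits its parent's mutations plus an independent Poisson(λ) number of new, uniquely labelled ones. It uses NumPy's Poisson sampler with λ = 10 (the value adopted in the next sentence); the lattice, pushing and sampling machinery of the full individual-based model is deliberately omitted.

# Mutation step only: each daughter gains k new point mutations, k ~ Poisson(lam),
# i.e. P(k) = lam**k * exp(-lam) / k!. Bookkeeping is simplified; lam = 10 as in the text.
import numpy as np

rng = np.random.default_rng(seed=1)
LAM = 10
next_id = 0          # global counter so that every new mutation gets a unique label

def divide(parent_mutations):
    """Return the mutation sets of the two daughter cells of one dividing cell."""
    global next_id
    daughters = []
    for _ in range(2):
        k = int(rng.poisson(LAM))
        new = set(range(next_id, next_id + k))
        next_id += k
        daughters.append(parent_mutations | new)
    return daughters

d1, d2 = divide(set())           # the founder cell divides
print(len(d1), len(d2))          # roughly 10 new mutations in each daughter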
If not specified, we use \u03bb = 10, which is conventionally considered as the average number of point mutations per cell division in tumours [In each cell division, random point mutations can happen in both daughter cells compared with the parent cell. The number of new mutations in each daughter cell follows a Poisson distribution ,26,33,64 tumours ,66.. 2.2To understand how mutations spread over space, we test two different spatial sampling methods, i.e. random sampling and centre-margin sampling. For the random sampling method, we sample 500 locations randomly in each simulated tumour and collect cells in a rectangular area around these locations . To investigate the impact of sampling sizes on our diversity measures, we vary sample size from 100 to 3600 cells (around 0.6\u201321.6% of the whole tumour). Alternatively, we divide simulated tumours into the central region (a circle of two-thirds of the radius from the tumour centre) and the marginal region (the rest one-third ring structure). We randomly sample 500 rectangular areas with 100 cells in each sample (around 0.6% of the whole tumour) in the margin and 500 samples with 100 cells in the centre region .. 2.3A and B are the set of mutations in two samples, the similarity of mutations between the two samples is given byWe use a statistical measurement, the Jaccard index, to compare the diversity between samples. Supposing . 2.4f [M(f) = (\u03bc/\u03b2)((1/f) \u2212 (1/fmax)), and the slope could determine the effective mutation rate \u03bc/\u03b2 . We constructed the cumulative VAFs for all samples and compared how the push rates, sampling methods and sizes would impact on this measurement. We quantify how much the observed cumulative agrees with a linear regression by Kolmogorov\u2013Smirnov (KS) test [Another classical measurement for patterns of mutation accumulation in population genetics is the frequency distribution of mutations among tumour cells in each sample, which is called variant allele frequency (VAF) distribution in cancer research. We are interested in how different growth dynamics, such as push rates as well as spatial sampling, impact on the measured VAF distributions. For simplicity, we simulate only one driver event where genetic changes lead to the initiation of a single tumour cell, which seeds for the tumour growth and the accumulation of random neutral mutations during the tumour expansion. In this scenario, the accumulated mutations follow a theoretical expectation of power-law decay, where the number of mutations of a given frequency decrease along with the mutation frequency f . The cumKS) test , which m. 3. 3.1p = 0), cells grow slowly and mainly on the surface. From spatial patterns constructed by the cell ID (a), where the cell born later is assigned a larger ID number, we observed clear circular boundaries among early and later born cells. However, when the push rate increases, these spatial boundaries become loose. Instead, spatial mixing among early and later born cells appears. Similar effects are observed in the spatial pattern of mutations. In figure 2b, we demonstrate the spatial pattern of four different mutations randomly picked up from four cells born in the second generation. When p = 0, clear boundaries among cells carrying those mutations exist. With the increase in push rate, cells are carrying different mutations mix in space.When the push rate is low . This pattern is consistent under all push rates. 
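As a concrete illustration of the estimation machinery defined in section 2.4 above: sorting the detected variant allele frequencies in descending order and plotting the cumulative mutation count against 1/f should, under the neutral expectation M(f) = (μ/β)((1/f) − (1/fmax)), give a straight line whose slope estimates the effective mutation rate μ/β. The sketch below applies that recipe to synthetic frequencies drawn from the corresponding 1/f^2 density; it illustrates the procedure only and is not the analysis code used for the simulations.

# Estimate the effective mutation rate mu/beta from a cumulative VAF distribution:
# M(f) = (mu/beta) * (1/f - 1/f_max), so the slope of M against 1/f estimates mu/beta.
import numpy as np

rng = np.random.default_rng(3)
f_min, f_max = 0.01, 0.5                        # detection limit and the clonal frequency
u = rng.random(2000)                            # 2000 uniform draws
# inverse-CDF sampling of frequencies with density proportional to 1/f^2 on [f_min, f_max]
vafs = 1.0 / (1.0 / f_min - u * (1.0 / f_min - 1.0 / f_max))
vafs = vafs[vafs >= f_min]                      # discard mutations below the detection limit

f_sorted = np.sort(vafs)[::-1]                  # descending frequency
M = np.arange(1, len(f_sorted) + 1)             # cumulative number of mutations with VAF >= f
x = 1.0 / f_sorted

slope, intercept = np.polyfit(x, M, 1)          # slope is the estimate of mu/beta
print(f"estimated effective mutation rate mu/beta ~ {slope:.1f}")

In the study itself the same construction is applied per sample, and the Kolmogorov–Smirnov distance between the empirical cumulative curve and a fitted line is used to judge whether such a linear fit, and therefore the inferred rate, can be trusted.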
Arising at the same tumour generation, the mean value of the frequency these mutations can reach in the final tumour is independent of the push rates. However, the variance increases monotonically when the push rate decreases.The earlier a mutation arises during the tumour growth, the higher frequency it reaches in the final tumour c. This p. 3.2a,b shows examples of single simulations under two boundaries of push rates, p = 0 and p = 1. Without mimicking sequencing noise in our simulation, when push rate p = 1, the VAF distribution is discrete with mutations at frequencies 0.5, 0.25, 0.125, 0.0625 and so on , which is the frequency of most clonal mutations in diploid populations. The cumulative VAF distribution under surface growth strongly deviates from a linear relation, which is quantified by the KS distance. The larger the KS distance, the further away the cumulative VAF distribution deviates from a linear regression.Next, we construct the VAF of all mutations accumulated through tumour growth. Mutations with the frequency of less than 0.01 in the final tumour are discarded, as in reality, it is hard to detect such lower frequency mutations in a standard sequencing depth. Figure 3nd so on inset. Tding 0.5 b inset, a,b). We found that the push rates have a strong impact on this measurement, where the KS distance keeps a relatively high level when push rates are small . This means that mutation rate inferences based on a linear regression are not reliable under small p. The KS distance decreases when p becomes larger , and a linear regression is reasonable across all sampling sizes . Thus, we can infer the mutation rates based on the slope of the linear regression under large p.We measure the KS distance for different push rates and sampling sizes under random sampling. Note, to eliminate the extremely low-frequency mutations (less than 0.01), we discard the last few dots in the cumulative VAF distributions for the linear regression , the inferred mutation rate is often an overestimate compared with the true value . In addition, the smaller the sampling size is, the large the variance of the mutation rate is.Figure 3. 3.3p = 0 (surface growth), the Jaccard index drops down to 0 very fast (a), where the non-overlapping samples have fewer and fewer shared mutations when the sampling distance increases. When p increases, given the same sampling size and distance, the Jaccard index increases. This agrees with the observation of spatial mixing of cells carrying different mutations. When p = 1, cell spatial mixing reaches the highest level, and we seldom observe any Jaccard index as 0 even under the smallest sampling size and largest sampling distance, and there are always shared mutations among those samples. While the results in In each simulation, we first randomly sample 500 areas with various sampling sizes. We measure the ITH between spatially non-overlapping samples and quantify how this heterogeneity changes with spatial distances. We compare the samples pairwise to calculate the Jaccard index, which is inversely proportional to intra-tumour heterogeneity. Meanwhile, the spatial distances between samples are defined by the Euclidean distance between the central points of each sample. For various sampling sizes and push rates, the Jaccard index between two random samples decreases, thus the ITH increases, monotonically with the spatial distances and margin region (1/3 R ring width), where R is the tumour radius . We randomly sampled 500 areas in the margin and centre, respectively . 
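For reference, the Jaccard index introduced in section 2.3, whose formula seems to have been lost in extraction, is J(A, B) = |A ∩ B| / |A ∪ B|, computed on the sets of mutations detected in the two samples being compared; J = 1 means identical mutation compositions and J = 0 means no shared mutations. A minimal sketch with invented mutation identifiers:

# Jaccard index between the mutation sets of two samples:
# J(A, B) = |A & B| / |A | B|; higher J means lower intra-tumour heterogeneity.
def jaccard(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# hypothetical mutation identifiers detected in two spatially separated samples
sample_near_centre = {1, 2, 3, 7, 11, 15}
sample_at_margin   = {1, 2, 3, 21, 34}
print(f"J = {jaccard(sample_near_centre, sample_at_margin):.2f}")   # J = 0.38

Each pairwise comparison in the centre-margin analysis described next is exactly this computation, paired with the Euclidean distance between the centres of the two sampled areas.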
Then, we compared the relationships of the Jaccard index and spatial distance between samples in the margin and the centre region under the different push rates. The patterns are very similar to those observed in completely random sampling. The Jaccard index decreases with the increase of the sampling distance, and the push rates lead to a higher Jaccard index under the same sampling size. In addition, we see the Jaccard index is slightly higher between samples in the central region compared with the margin region. This is more obvious when push rates are small, where the spatial constraint is stronger and thus less mutations are shared between spatially non-overlapping samples. In summary, the two sampling methods do not alter the pattern of how ITH increases with the sampling distance qualitatively with increasing p . However, there is a small quantitative difference if we sample in the margin or centre of tumours.To understand the impact of sampling methods, we divided the simulated tumours into the central region . For a similar sampling size under two- and three-dimensional simulations, the Jaccard index is in general higher in two-dimensional compared with three-dimensional simulations even under the same sampling distance (d\u2013f). This might be due to the fact that it takes more cell divisions to reach the same physical distance in three-dimensional compared with two-dimensional simulations, thus higher intra-tumour heterogeneity between samples.The majority of our simulations are based on two-dimensional lattice to reduce the computational cost of simulating tumours with large sizes and sampling distances, which results in a limitation as solid tumours are often three-dimensional. To explore this, we extend our agent-based model from two to three dimensions in some parameter sets . We model the tumour growth up to 10rd index a\u2013c. For distance d\u2013f. This. 3.5x-axis) between samples increases with physical distance (their y-axis), which is consistent with our results. In Ling et al. [We have demonstrated that the relationship between Jaccard index and sampling distance is qualitatively stable across two- and three-dimensional models as well as different sampling methods. We see a similar pattern in a study of colorectal tumours , where tg et al. , 300 bio. 4We developed a computational model that tracked the dynamic movement of each cell and variation divergence, which revealed the relationship of spatial heterogeneity distribution with sampling size and tumour expansion modes. We used push rates to model slow and fast growth modes, where small push rates refer to surface growth and large push rates to exponential expansion without spatial constraints. We implemented two alternative pushing algorithms, where pushing happens in a random direction or towards the nearest empty spot. Furthermore, we recorded the mutation accumulation during all growth modes and applied different sampling methods, i.e. completely random sampling and margin-centre sampling, with various sampling sizes.et al. [et al. [et al. [Under the surface growth , the accumulation of mutations is concentrated in a continuous space, and mutations arising in different original cells can form clear boundaries in space. When the push rate increases, the mutations become more spatially dispersed, which agrees with the conclusion of Chkhaidze et al. in simul [et al. . However [et al. , where tet al. [et al. [While mutation heterogeneity can reveal a tumour\u2019s life history and the et al. sequence [et al. 
Our model provides a quantitative analysis of how growth modes, sampling distance and sample size impact the measurement of intra-tumour heterogeneity. These results confirm the importance of obtaining spatial information for understanding tumour evolution, as well as the possible deviation of estimated evolutionary properties, such as mutation rates, introduced by sampling details."} {"text": "Gastrointestinal (GI) cancers are a group of cancers associated with the gastrointestinal tract, and the most affected areas are the esophagus, stomach, colon, liver, and pancreas. GI cancers are responsible for 25% of cancer incidence and 33% of cancer-related death globally. Even though recent advances in diagnosis and therapies have made an overall good impact, the challenges in controlling and managing GI cancers remain. This Research Topic brings together contributions addressing these challenges. The world is now witnessing the emergence of artificial intelligence (AI) in every field. Luo et al. assessed that AI is highly accurate in early-stage upper GI cancer detection using endoscopic images. In a systematic review, Jia et al. showed by meta-analysis that AI deep learning models have higher predictive accuracy than radiomics models in patients with rectal cancer. Arrichiello et al. describe the emerging pathological features used to predict the prognosis of patients with colorectal cancer (CRC). In a systematic review, Guan et al. performed a meta-analysis to quantify the relevance of preoperative factors for peritoneal carcinomatosis in gastric cancer when using staging laparoscopy (SL). He et al., Schlosser et al., and Sung et al. have reviewed the emerging biomarkers and their potential for the clinical diagnosis of hepatocellular carcinoma (HCC). In a systematic review and meta-analysis, Yang et al. assessed the prognostic value of the pan-immune-inflammation value in patients with CRC. In another systematic review, Li et al. assessed the diagnostic value of lncRNAs for gastric cancer. Zhang et al. described a less common condition of pancreatic neuroendocrine tumors and liver perivascular tumors with the involvement of multiple organs and space-occupying lesions. Yan et al. summarized that interventional therapies, such as ablation of the ruptured tissue and TAE/TACE, can reach an ideal prognosis in ruptured HCC (rHCC) cases for patients who cannot tolerate emergency surgery. Xue et al. reviewed the prognostic importance of tumor budding, which is a single cell or a cluster of up to four cells at the cancer invasion margin, in gastric cancer. Bae et al. showed an increasing trend in the utilization of radiotherapy, the adoption of advanced techniques, and overall survival improvements in patients with HCC from a Korean tertiary hospital registry. A review by Wang Q et al. has described the pathogenesis, diagnosis, and management of an extremely rare pathological condition, primary hepatopancreatobiliary lymphoma, and offers a diagnosis and management schedule for clinicians. Chen et al. have reported a meta-analysis on neoadjuvant chemoradiotherapy for resectable gastric cancer. Li et al. have discussed the current status and future perspectives of cardia-preserving radical gastrectomy, which is a promising approach with various advantages. Du et al. 
have reported a rare case of an ectopic enterogenous cyst in the anterior sacral and soft tissue of the buttocks and its carcinomatous transformation, an event that has never been reported before in the literature.In a case report, Ma et\u00a0al.. Wu et\u00a0al. updated the advancement in the research of HCC progression after radiofrequency ablation. In a systematic review, Zhong et\u00a0al. assessed the efficacy and safety of ICIs combined with antiangiogenic drugs in HCC. Hu et\u00a0al. described the current advances in research on the secondary resistance to imatinib against gastrointestinal stromal tumors (GISTs). A meta-analysis by Zheng et\u00a0al. showed the effect of phosphoglucomutase (PGM), a key enzyme involved in the synthesis and breakdown of glycogen, on the survival prognosis of tumor patients. In this Research Topic, Jiang et\u00a0al. have discussed the possibilities of targeting neutrophils against the development and progression of pancreatic cancer.Comparative efficacy and toxicity of immune checkpoint inhibitors (ICIs) combined or not with chemotherapy have been analyzed in the systematic review by Qiu et\u00a0al. reviewed the role of Intercellular Adhesion Molecule-1 (ICAM-1), a cell surface glycoprotein, focusing on expression, functions, prognosis, tumorigenesis, polymorphism, and therapeutic implications in CRCs. Gong et\u00a0al. have reviewed the role of melatonin, a natural indolamine in inhibiting GI carcinogenesis and the mechanisms behind it. Xie et\u00a0al. have reviewed the application of single-cell sequencing in gastrointestinal cancers. Qi et\u00a0al. have described the prognostic roles of Competitive Endogenous RNAs (ceRNA) Netowork-Based Signatures in GI cancers. Wang S. et\u00a0al. have reviewed the mechanisms and prospects of circular RNAs, a group of single-stranded RNAs that form a covalently closed continuous loop, and their role in GI cancer signaling networks.Yu et\u00a0al. reviewed the endocrine organ-like tumor hypothesis that explains the factors involved in cachexia development. Ferroptosis is an iron-dependent form of programmed cell death and Liang et\u00a0al. have reviewed the therapeutic applications of ferroptosis in CRCs. In their review, Melia et\u00a0al. have explained the pro-tumorigenic role of type-2 diabetes-induced cellular senescence in CRCs and its molecular mechanism. In a systematic review, Pang et\u00a0al. investigated the clinical significance of lung cancer inflammation index in patients with GI cancer in order to evaluate the postoperative complications before surgery and survival outcomes.Up to 80% of patients with pancreatic adenocarcinoma (PDAC) can experience PDAC-derived cachexia, a systemic disease that involves a complex interplay between the tumor and multiple organs. Taken together, the articles published in this Research Topic have discussed a broad area of research in GI cancers, from diagnosis to therapeutic resistance. Current advances in genomic techniques and AI have certainly created new and tremendous possibilities in GI diagnosis and therapy, which will hopefully improve GI cancer management in the future.KM has written, FB and EG have reviewed the editorial. All authors contributed to the article and approved the submitted version."} {"text": "Since then, women have made numerous contributions to all fields of science. Yet, despite this undisputable progress, female scientists still remain a minority. Hence, it is particularly important to highlight and promote their work. 
This edition of the Research Topic is devoted to women involved in childhood cancer research.Women in Pediatric Oncology\u201d contains 15 articles spanning a broad range of topics, starting from basic science through clinical research and survivorship care. Sorteberg et\u00a0al. identify the activation of cyclin dependent kinase p21Cip/Waf1 as a potential mechanism of chemoresistance in high-risk neuroblastoma and present preclinical data demonstrating efficacy of its inhibitor in combination with routine chemotherapeutics. Cervi et\u00a0al. report a complete response to Trk inhibitor treatment in a patient with angiosarcoma carrying KHDRBS1-NTRK3 fusion gene, the first such case and a perfect example of precision-based medicine. A review paper by Cruz-Galvez et\u00a0al. provides a comprehensive overview of retinoblastoma \u2013 the known facts and novel findings pertaining to this classic, genetically-driven pediatric malignancy.While pediatric cancer research has led to advances in our understanding of the biology of childhood cancers, and thereby contributed to advances in diagnosis, treatment and survivor psychosocial care, there remains an unmet need for improving patient outcomes and quality of life of the childhood cancer survivors. The second volume of \u201cPetrilli et\u00a0al. and Miller et\u00a0al. describe the use of state-of-the-art molecular profiling methods to characterize heterogeneity of rhabdoid tumors and pediatric-type diffuse high-grade gliomas, respectively. Other articles in this collection focus on clinical practice and outcomes research. Samborska et\u00a0al. describe treatment outcomes in patients with myeloid sarcoma, while Puglisi et\u00a0al. focus on the clinical characteristics of patients with combined neuroblastic tumors and neurofibromatosis type 1. Ariagno et\u00a0al. present a timely study on the impact of prior COVID-19 infection on the risk of endothelial dysfunction in pediatric and adolescent patients undergoing hematopoietic cell transplants.Women are also pioneering novel technologies and treatment approaches. Two papers by Wang et\u00a0al., who evaluate a diagnostic performance of imaging techniques in children with ovarian masses. On the other hand, Reschke et\u00a0al. describe the development of clinical protocols aiming at improving multidisciplinary care of children with high-risk malignancies.The clinical practice in pediatric oncology has to be tailored to children and often does not follow the same protocols as the care for adult patients. This problem is emphasized by Burgers et\u00a0al.; McLoone et\u00a0al.; Otth et\u00a0al.; Otth and Scheinemann; Rockwell et\u00a0al.).The last group of the manuscripts included in this Research Topic focuses on psychosocial issues associated with care for pediatric and adolescent cancer patients. These studies range from challenges in communication between health providers and patients and/or their families, as well as everyday difficulties facing the patients, their caregivers and educators (Altogether, this collection of outstanding 15 articles is a perfect example of the scope and variety of research performed by women scientists focusing on pediatric oncology and hematology. After reading the collection of these articles, the reader will appreciate the multi-faceted approach of current childhood cancer research, which remains a work in progress.JK and Y-MK reviewed and summarized the manuscripts published in the Research Topic. 
All authors contributed to the article and approved the submitted version."} {"text": "This commentary discusses loose versus tight control of biomineralization products and how this evolved flexibility. Concomitant improved functionality may be more widespread than commonly thought. There are a surprisingly large number of biominerals, that is, minerals produced by living organisms, and these utilize a myriad of cations and anions, see Lowenstam & Weiner 1989. Researca) can grow and be replaced over time, (b) can form and be added to over time [recording structures can be periodically shed and totally replaced. Considering the calcium phosphate and calcium carbonate mineralized tissues, examples of (a) include bone and its remodeling fish otoliths deer antlers is precisely controlled by the action of specific cells and macromolecules. Data strongly support this view in many cases, including the most heavily studied biomineralized tissues, bone or bone-analogs and tooth. The paper by Christensen al. 2023 reports Odontodactylus scylliarus are an example of food-gathering weapons: the animal uses its pair of clubs \u2018to destroy its prey with bullet-like acceleration revealed the 3D distribution of minerals, the cross-sectional distribution of Ca and the different crystallographic phases and their crystallographic texture . These investigators established most clubs\u2019 sides crystallize to calcite and not to bioapatite, the dominant crystal type in the impact zone. In other clubs, substantial amorphous mineral remained. Further, crystallization can occur while the club is still functional, and the distribution of calcite crystallites can vary drastically from club to club.Stomatopod dactyl clubs are shed periodically, and Christensen et al. suggest that the variability in structure of the sides of clubs provides \u2018design\u2019 flexibility and provides a \u2018good enough\u2019 structure. The results also suggest further directions of research. For example, one wonders whether the banded structure in Fig. 6 reflects periodic changes in growth processes, like that in cementum (Naji et al., 2022et al., 2018e.g. Stock et al. (2017et al. (2020Christensen al. 2017 and Ryan al. 2020.et al. that their results suggest tightly controlled mineralization in the impact zone but loosely controlled mineralization in the sides of the clubs. This is an extremely important demonstration that mineralization within a single organ is spatially modulated with tightly controlled mineralization in one place and loosely controlled mineralization elsewhere. Similarly, the author suspects that intertubular and peritubular dentin form by tight and loose control, respectively (Stock et al., 2014aet al., 2003et al., 2014bet al. (2014bThe author agrees with Christensen l. 2014b appear tet al. at the end of their discussion, namely that the observed side-wall structure is \u2018good enough\u2019 for its purpose.One sometimes runs across the notion that the biomineralized structures we observe today evolved to be optimized structures, which is not a helpful viewpoint. Specific, highly functional features are likely to persist for long periods if they are good enough for their purpose, if other evolutionary changes do not incidentally alter them, or if the gap to superior structures is too wide for evolution to \u2018jump\u2019. 
In fact, this is the opinion offered by Christensen"} {"text": "The purpose of radiation therapy (RT) is to cover tumor tissue homogeneously with a planned dose while minimizing the dose to the surrounding healthy tissue . At pres2+) in ferrous sulphate solutions are dispersed throughout the gel matrix, while in the second one, monomers (such as acrylamide), are dispersed in the matrix. The radio-inducted variations can be read out by magnetic resonance imaging (MRI), x-ray computed tomography (CT), optical scanning, and ultrasonography. Despite extensive research in recent decades, gel dosimeters have yet to achieve widespread clinical acceptance, mainly because of three major practical concerns: the toxicity of active materials, oxygen sensitivity of the dose response, and the spatial instability of dose information.In this scenario, \u201cGel dosimetry\u201d is the most promising tools for the evaluation of 3D high-spatial-resolution dose distributions, and the studies regarding these materials represent the starting point for developing performance and innovative systems . \u201cGel doA number of gel dosimetry systems have been developed over the years with differing mechanisms of operation and varying degrees of success in alleviating the practical issues, inhibiting a clinical application.This current Special Issue is a thorough collection of articles dealing with the synthesis and characterization of hydrogels, of which the authors show the mechanisms of action and prospective applications for 3D dosimetry. The overview presented in this Special Issue would not be complete without mentioning novel approaches for the characterization and modeling of hydrogels for medical applications.Gels, belonging to the \u201cGel Analysis and Characterization\u201d Section and titled \u201cGel Dosimetry\u201d, was to collect original research manuscripts that describe cutting-edge developments in hydrogel-based materials for dosimetry and their translational applications, as well as reviews providing updates on the latest advancements in this field.With this in mind, the goal of this Special Issue of A series of manuscripts have been submitted to the Special Issue, and 11 of them have been accepted for publication. The final collection includes eight original research manuscripts and three reviews by authors from ten different countries. The contents of the published manuscripts are briefly summarized below.Zirone et al. [In the article of e et al. , entitleMerkis et al. [In the article of s et al. , a uniqude Silveira et al. [The manuscript entitled \u201cThree-Dimensional Dosimetry by Optical-CT and Radiochromic Gel Dosimeter of a Multiple Isocenter Craniospinal Radiation Therapy Procedure\u201d by a et al. proposedSoliman et al. [n et al. , using UScotti et al. [The manuscript of i et al. aims to i et al. and gluti et al. dosimeteToyohara et al. [In the manuscript of a et al. , radioacRabaeh et al. [2) on the performance of N-(hydroxymethyl)acrylamide (NHMA) polymer gel dosimeter. The dosimeter was exposed to doses of up to 10 Gy with a radiation beam-energy of 10 MV, and the relaxation rate (R2) parameter was utilized to explore the performance of irradiated gels.The manuscript of h et al. , entitleMizukami et al. [1 relaxation time also remains an important goal in clinical examinations. Low-noise, high-resolution 3D mapping of T1 relaxation times could not be achieved in a clinically acceptable time frame (<30 min). 
The authors also demonstrated that the whole three-dimensional dose distribution could be roughly evaluated within the conventional imaging time (20 min) and the quality of one cross-section.In the study of i et al. , the autOur Special Issue also covers some high-quality review articles that complement the recent literature regarding 3D gel dosimetry ,18,19.In the review article entitled \u201cRecent Advances in Hydrogel-Based Sensors Responding to Ionizing Radiation\u201d , the autThe review article entitled \u201cRadiation Dosimetry by Use of Radiosensitive Hydrogels and Polymers: Mechanisms, State-of-the-Art and Perspective from 3D to 4D\u201d by De Deene providesMacchione et al. [The review article entitled \u201cChemical Overview of Gel Dosimetry Systems: A Comprehensive Review\u201d by e et al. aims to In conclusion, we were very pleased to guest edit this Special Issue, as it collects relevant contributions that reflect the increasingly widespread interest in hydrogels and related applications in the field of \u201c3D Dosimetry\u201d and \u201cRadiation Therapy\u201d. We hope that this Special Issue can reach the widest possible audience in the scientific community and contribute to further boosting scientific and technological advances in the intriguing world of hydrogels as well as their multidisciplinary applications. Finally, we wish for this Special Issue to help its readers to conceive both new and improved ideas about \u201cGel Dosimetry\u201d in their respective fields."} {"text": "The coronavirus disease (COVID-19) pandemic has highlighted the close relationship between infection and kidney injury. Infections induce kidney parenchymal injury directly or indirectly through the various mechanisms of several renal diseases. This Special Issue, entitled \u201cInfection and the Kidney,\u201d focuses on this important and expanding topic of research, providing research and updated reviews on kidney injury observed in association with ongoing or past infection.Masset et al. describePorphyromonas gingivalis, in IgAN by showing a higher presence of these bacteria within the tonsils of patients with IgAN than in those of patients with simple tonsillitis, together with experimental data from a mouse model. Nagasawa et al. [IgA nephropathy (IgAN) is a rare disease for which a link between focal infection and kidney inflammation has been demonstrated in humans, as evidenced by the widespread use and efficacy of tonsillectomy in Japan. Its efficacy has been so great that most centers in Japan have begun using tonsillectomy plus steroid pulse therapy as the standard treatment for IgAN. The developers of this treatment, Hotta et al. , contriba et al. also conStaphylococcus aureus (MRSA)-infection-associated glomerulonephritis. Yoshizawa et al. [Streptococcus. Therefore, NAPlr could be used as a general biomarker of bacterial IRGN (Regarding infection-related glomerulonephritis (IRGN), two elegant reviews were included: one written by Yoshizawa et al. , the disa et al. describeial IRGN . FurtherFinally, Uchida et al. summarizA detailed understanding of the mechanisms underlying the relationship between infection and the kidneys is important because it may also lead to the elucidation of the pathogenic mechanism of idiopathic renal diseases. This is an area where knowledge is expanding, and further development is expected. Therefore, a second edition of the Special Issue, \u201cInfection and the Kidney 2.0,\u201d is currently being compiled ."} {"text": "RVF poset al. showed tet al. 
Regardless of the timing of t-RVAD placement, there is undeniably an especially challenging subset of patients in whom weaning from t-RVAD support is not tolerated and who will require ongoing, durable right ventricular support. The total artificial heart (TAH) option has many limitations based on its large size, its large pneumatic drive system limiting mobility, and its geographic availability. Alternatives have been reported by several groups, including Milano et al. and Urganci et al., who describe approaches based on an in situ t-RVAD graft. In addition, the team eloquently shows a technique of ring augmentation to make up the discrepancy between inflow cannula length and right atrial size. Herein, the Berlin group describes a patient who presents with biventricular failure due to ischaemic cardiomyopathy and is ultimately taken for HM3 LVAD implantation. The authors make use of excellent visuals to contextualize the intraoperative challenges in this complex clinical scenario and provide simple techniques used to mitigate them. Clearly not for the 'faint-hearted', this approach is becoming an important addition to the armamentarium of every practicing mechanical circulatory support surgeon."} {"text": "Careful observation of the QT interval is important to monitor patients with long QT syndrome and during treatment with potentially QT-prolonging medication. It is also crucial in the development of novel drugs, in particular in the case of a potential side effect of QT prolongation and in patients with an increased risk of QT prolongation. The 12-lead electrocardiogram (ECG) is the gold standard to evaluate cardiac conduction and repolarization times. Smartwatches and smart devices offer possibilities for ambulatory ECG recording and therefore for measuring and monitoring the QT interval. We performed a systematic review of studies on smartwatches and smart devices for QTc analysis. We reviewed PubMed for smartwatches and smart devices that can measure and monitor the QT interval. A total of 31 studies were included. The most frequent devices were (1) the KardiaMobile 6L, a Food and Drug Administration-approved device for QTc analyses that provides a 6-lead ECG, (2) the Apple Watch, a smartwatch with an integrated ECG tool that allows recording of a single-lead ECG, and (3) the Withings Move ECG ScanWatch, an analog watch with a built-in single-lead ECG. The KardiaMobile 6L device and the Apple Watch provide accurate measurements of the QT interval; the Apple Watch was studied in standard and non-standard positions, and the accuracy of QT measurements increased when the smartwatch was moved to alternative positions. Most studies were performed on patients, and limited results were available from healthy volunteers. In 1957, Jervell and Lange-Nielsen described a family in which QT prolongation was found in multiple children who subsequently died in infancy without any evidence of cardiac pathology at autopsy. We reviewed PubMed (https://pubmed.ncbi.nlm.nih.gov) for studies published on the use of smart devices for QTc analysis until September 30, 2022. For reporting and methodology, the updated 2020 Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines were used, and search terms relating to smartwatches, smart devices and the QT interval were combined. The KardiaMobile 6L was the most frequently studied device (N = 16), five studies examined the Apple Watch, and another smart watch (SW), the Withings Move ECG ScanWatch, was examined in three studies; a graphic representation of the three most studied devices is shown in the corresponding figure. The initial search identified 1,071 studies. 
After screening titles, 43 articles were considered for further review. After reviewing the 43 articles, 12 articles were further excluded. The search strategy is shown in the accompanying flow diagram. The KardiaMobile 6L is a wireless mobile ECG (mECG) device that can directly record a 6-lead ECG, which consists of leads I, II, and III as well as augmented Vector Left (aVL), augmented Vector Foot (aVF), and the augmented unipolar right arm lead (aVR). It is a small (9.0 cm x 3.0 cm x 0.72 cm) device that consists of three electrodes each on the top and bottom surfaces. Electrodes on the top surface make contact with both thumbs, and electrodes on the bottom surface make contact with either the left knee or the left ankle. The KardiaMobile 6L can subsequently be connected to the corresponding application through Bluetooth on mobile devices such as tablets and smartphones to record a 30-s 6-lead mECG. It then provides an automated assessment of heart rate and heart rhythm. In one validation study (n = 203), QTc values measured with the mECG correlated strongly with the 12-lead ECG (r = 0.856; p < 0.001) in lead II. The absolute difference between QTc values was <10 ms in 55% of the subjects. A mean QTc >=480 ms in lead II on the 12-lead ECG was found in six subjects. The sensitivity and specificity for mECG QTc prolongation in lead II were 80% and 99%, respectively. The authors concluded that using a 6-lead mECG enables measuring the QT interval with good accuracy compared with the standard 12-lead ECG. Frisch et al. found excellent agreement and no statistically significant differences in the QTc interval measurement (p = 0.15), demonstrating that the KM-1L device has adequate precision and agreement compared to the standard 12-lead ECG. Minquito-Carazo et al. reported QTc values that were significantly (p < 0.001) shorter in the KardiaMobile 6-lead ECG than in the 12-lead ECG. The remaining KardiaMobile studies, including that of Beers et al., similarly investigated, evaluated, compared or validated QT-interval measurement with these devices against the 12-lead ECG. For the Apple Watch, one study of 100 patients (N = 100) obtained lead I with the watch on the left wrist and lead II with the watch on the left ankle; furthermore, a simulated lead V6 was recorded with the watch on the left lateral chest. Adequate QT measurements were observed in 85% of the patients when the SW was worn on the left wrist, and this number of adequate measurements increased to 94% when the SW was moved to alternative positions. The Withings Move ECG ScanWatch is an analog watch with an in-built single-lead ECG. It offers, without manual measurement of the SW-ECG or the need for any other software, an automated analysis of the corrected QT interval; a reported agreement with reference measurements reached R2 = 0.89. Chinitz et al. and Carter et al. also examined smart devices with possibilities for ECG and QTc measurements. When an SW is worn on the wrist, which is common practice, the device can only provide a lead I recording, which has significant limitations. Historically, measurement of conduction intervals is preferably performed in lead II. Smartwatches and smart devices offer possibilities for monitoring the QT interval and could be of great additional value. Compared to a 12-channel ECG, patients can record an ECG themselves, which is also possible at home. Results differ from device to device, but some devices can provide comparable results with the gold-standard 12-lead ECG and allow adequate QT measurements. 
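The devices discussed above report the heart-rate-corrected QT interval (QTc). The review does not state which correction formula each device applies, so the sketch below uses Bazett's formula purely as the most common illustration; it is not taken from any of the cited devices or studies.

```python
# Illustration only: heart-rate correction of the QT interval.
# Bazett's correction is shown because it is the most widely used formula;
# the devices discussed in the review may apply other corrections.
def rr_interval_ms(heart_rate_bpm: float) -> float:
    """RR interval in milliseconds for a given heart rate in beats per minute."""
    return 60_000.0 / heart_rate_bpm

def qtc_bazett_ms(qt_ms: float, rr_ms: float) -> float:
    """Bazett-corrected QT in ms: QTc = QT / sqrt(RR), with RR taken in seconds."""
    return qt_ms / ((rr_ms / 1000.0) ** 0.5)

# Example: a measured QT of 400 ms at 75 bpm (RR = 800 ms) gives QTc of about 447 ms,
# below the 480 ms prolongation threshold mentioned above.
print(round(qtc_bazett_ms(400.0, rr_interval_ms(75.0))))
```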
Given that smartwatches are already owned by many people and offer additional functionalities, these are promising devices. However, it is recommended to not only measure the QT interval from standard lead I but also at least from lead II and preferably one of the precordial leads. Further studies are needed to evaluate and validate QTc monitoring in healthy subjects and patients. While much research has been done into detecting atrial fibrillation with an SW, this review proves that reliable measurement of the QT interval is also possible. This can have an important impact on drug safety monitoring and monitoring of patients at risk for QT prolongation and offers opportunities in drug research. These devices have the potential to lead to future clinical applications in the evaluation of any drug-induced arrhythmogenicity related to prolongation of the QT interval, needing close monitoring of QT intervals. Before they can be used in daily clinical practice for antiarrhythmic drug initiation, alerts for QT prolongation or arrhythmic events need to be prospectively studied."} {"text": "Inflammation and hypertrophy of the ankle joint's synovial lining can occur due to various causes. Chronic pain and degenerative changes may be due to synovitis causing clinical manifestations through traction on the joint capsule. The failure of conservative treatment for at least six months indicates arthroscopic debridement, which can provide significant pain relief without the morbidity of extensive surgical exposures. This study was therefore conducted to establish the functional results of arthroscopic debridement of the ankle joint in synovitis. Fifteen patients with chronic ankle pain who had not responded to conservative treatment for approximately six months were included in the study. Arthroscopic debridement was performed using a shaver blade, followed by a postoperative ankle physiotherapy regimen. Patients were assessed preoperatively and postoperatively using the AOFAS, FADI, and VAS scores, with a mean follow-up period of 26 months. There was a significant improvement in the final clinical outcomes of the patients. The post-operative VAS score improved to 2.20\u00b10.56 (2-4) , the AOFAS score was 86\u00b18.25 (65-98) , and the FADI Score was 86.93\u00b17.35(70-96) . Thirteen patients (86.67%) achieved outstanding or good results, while two had fair results, according to Meislin's criterion. One patient reported a superficial wound infection, which subsided with antibiotic therapy. The study findings indicate that arthroscopic ankle debridement is an efficient method to treat persistent ankle discomfort induced by synovitis, and it has a low postsurgical complications rate, quicker recovery, and less joint stiffness. Inflammation and hypertrophy of the ankle joint's synovial lining can occur as a consequence of inflammatory arthritis, infection, and degenerative or neuropathic diseases. When chronic pain and degenerative changes are evident, it is important to keep in mind that synovitis may be present, which can cause clinical manifestations either directly or indirectly through traction on the capsule. Trauma and joint overuse can cause pain and swelling due to generalized inflammation of the joint synovium . In mostSome patients experience continuous ankle discomfort and swelling without evidence of ankle instability. An arthroscopic examination occasionally reveals localized synovitis . 
A cliniThere is controversy regarding how to treat a patient who sustains an ankle injury and experiences prolonged symptoms despite a stable ankle joint. In recent years, due to advancements in small joint arthroscopy and the introduction of suitable instruments and scopes for smaller and tighter joints, arthroscopic debridement has gained popularity due to its minimally invasive nature, low morbidity, less joint stiffness, and faster recovery. Currently, ankle arthroscopy has been successfully used to treat various disorders, such as loose bodies, talar dome defects, degenerative disorders, and posttraumatic conditions -8. HowevThis prospective study was conducted at the Department of Orthopedic Surgery from November 2019 to December 2022. It involved fifteen patients, including 10 males and 5 females, with a mean age of 38.80\u00b115.68 years (ranging from 20 to 65 years). These patients experienced chronic ankle pain that did not respond to conservative treatments such as Nonsteroidal Anti-Inflammatory Drugs (NSAIDs) and repeated courses of physiotherapy for approximately 6 months. The exclusion criteria comprised patients with localized soft-tissue infection, tenuous vascular status, and ankle instability.A comprehensive personal and clinical history was obtained from each patient, followed by a thorough examination that included the Ankle Anterior Drawer test and Talar tilt test to assess ankle instability. To evaluate foot disability, we calculated the American Orthopaedic Foot and Ankle Society (AOFAS) Score and the Foot and Ankle Disability Index (FADI) score. In addition, the Visual Analogue Scale (VAS) score was calculated to document the severity of pain. All patients had persistent ankle pain with intermittent localized swelling. All ankles were stable on stability tests. An X-ray examination was done to assess the patient preoperatively. A senior surgeon performed all the operations, using a tourniquet in each case. The joint surfaces were carefully examined A-B. Theet al. [The study of Duan et al. was usedThe average age of the patients was 38.80\u00b115.68, ranging from 20 to 65 years. 10 male and 5 female patients were included in our study . These iMeislin\u2019s criteria and the AOFAS, FADI, and VAS scores were calculated for each patient preoperatively. In the pre-operative group, 6 patients (40%) were in the \u201cFair\u201d category and 9(60 %) were in the \u201cPoor\u201d category of AOFAS grading . Eight pPostoperatively, we did a final evaluation of the patients. There has been significant improvement in scoring and pain assessment. Six (40%) patients were in the \u201cExcellent\u201d category, 7(46.67%) patients were in the \u201cGood\u201d category, 1(6.67%) patient was in the \u201cFair\u201d category, and 1(6.67 %) was in the \u201cPoor\u201d category of AOFAS grading . FourteeAnkle arthroscopy has gained popularity as a diagnostic and therapeutic procedure during the last decade. Compared to other major joints like the knee and shoulder, ankle arthroscopy is still in its early stages. All intra-articular structures of the ankle can be directly seen with ankle arthroscopy without requiring an arthrotomy or malleolar osteotomy. The capacity to perform ankle diagnostic and surgical arthroscopy has improved due to technological advancements and a complete understanding of anatomy. It is a more desirable procedure than open arthrotomy due to the lower morbidity and quicker recovery time. 
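The pre- and postoperative AOFAS, FADI and VAS scores reported above are paired measurements on the same fifteen patients. As a hedged illustration of how such a paired comparison can be run (this is not the authors' actual analysis, and the scores below are invented placeholders rather than the study's data), a Wilcoxon signed-rank test or a paired t-test could be applied:

```python
# Hedged illustration, not the study's analysis: paired comparison of
# pre- vs post-operative functional scores. The values are invented placeholders.
from scipy import stats

pre_aofas  = [52, 48, 60, 55, 49, 58, 62, 50, 47, 53, 56, 59, 51, 54, 57]
post_aofas = [86, 80, 92, 88, 78, 90, 95, 84, 75, 85, 89, 93, 82, 87, 91]

# Wilcoxon signed-rank test: a common non-parametric choice for 15 paired
# observations; the paired t-test is the parametric alternative.
w_stat, w_p = stats.wilcoxon(pre_aofas, post_aofas)
t_stat, t_p = stats.ttest_rel(pre_aofas, post_aofas)
print(f"Wilcoxon p = {w_p:.4f}; paired t-test p = {t_p:.2e}")
```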
The ankle joint of the patients we evaluated showed superfluous and often inflamed synovial tissue. The cause of synovitis in these patients was unknown and might be multifactorial. Most of them previously suffered an ankle sprain. It may have arisen as a result of an acute event or repeated ankle sprains irritating the joint. Hemarthrosis resulting in an acute inflammatory response could proceed to chronic synovitis with repeated joint irritation if previous injuries were present. They did not show any signs of ligamentous instability. In 86.67% of our patients, arthroscopic debridement resulted in symptom alleviation.et al. [et al. [et al. [et al. [Ankle arthroscopy was suggested by Ogilvie-Harris, Gilbart, and Chorney for patiet al. found th [et al. reported [et al. suggeste [et al. discoveret al. [et al. [Ahn et al. performe [et al. did an aet al. [et al. [et al. [In their study, Woo Jin Choi et al. conducteet al. . They ob [et al. conducte [et al. , 15 patiIn our study, the mean pre-operative FADI score was 48.53\u00b118.82 (13-65), and the mean post-operative score was 86.93\u00b17.35 (70-96).et al. [et al. [In a study by Woo Jin Choi et al. , where a [et al. investiget al. [et al. [et al. [et al. [et al. [In a study conducted by Hassan et al. involvin [et al. found 90 [et al. reported [et al. reported [et al. also rep [et al. and comp [et al. . Fair reCommon complications reported during ankle arthroscopy in other studies are superficial peroneal nerve injury during anterolateral portal placement and suraFinally, we believe that arthroscopic ankle debridement can successfully treat persistent ankle discomfort induced by synovitis, and it has a low postsurgical complications rate, quicker recovery, and less joint stiffness."} {"text": "Machine Learning and AI for Sensors\u201d Issue of the Sensors journal. The primary aim of this Special Issue is to demonstrate the recent advances related to machine learning and AI methods on sensors as well as investigate the impact of their application in a variety of hard real-world problems. In total, twelve (12) research manuscripts were accepted for publication after going through a careful peer-review process based on contribution and quality criteria. All accepted manuscripts possess significant elements of novelty and enclose several application domains, which provide the readers with a glimpse of the state-of-the-art research in the machine learning area.This article summarizes the works published under the \u201cIn recent decades, new advances in machine learning (ML) and artificial intelligence (AI) have covered areas from zero- and single-shot algorithms ,2,3 to dHowever, the increasing challenging necessities of the industrial sector as well as considerable needs of this data-driven era led to the development of new, efficient and robust methodologies and approaches. Along this line, novel AI and ML algorithms are needed, such us new data quality techniques, new clustering, classification and reinforcement learning methods; in addition, distributed AI algorithms are required and different strategies are needed to embed these algorithms in sensors.A novel approach to image recoloring for color vision deficiency\u201d and it is authored by Tsekouras et al. [The first paper is entitled \u201cs et al. . In thisXcycles backprojection acoustic super-resolution\u201d and it is authored by Almasri et al. [The second paper is entitled \u201ci et al. . 
The autDecision confidence assessment in multi-class classification\u201d and it is authored by Bukowski et al. [The third paper is entitled \u201ci et al. . In thisBoosting intelligent data analysis in smart sensors by integrating knowledge and machine learning\u201d and it is authored by \u0141uczak et al. [L-neurons, while the latter is a fully-connected three-layer feedforward neural network. The authors provided a comprehensive experimental analysis, which showed that the proposed hybrid structure successfully combines learning and knowledge and provides high recognition performance even for a small number of training instances. Finally, the authors stated that since the proposed L-neurons are able to learn through classical backpropagation processes, the proposed architecture is capable of updating and repairing its knowledge.The fourth paper is entitled \u201ck et al. . In thisHyperspectral image classification using deep genome graph-based approach\u201d and is authored by Tinega et al. [The fifth paper is entitled \u201ca et al. . The autA heterogeneous RISC-V processor for efficient DNN application in smart sensing system\u201d and is authored by Zhang et al. [The sixth paper is entitled \u201cg et al. . In thisA convolutional autoencoder topology for classification in high-dimensional noisy image datasets\u201d and is authored by Pintelas et al. [The seventh paper is entitled \u201cs et al. . In thisMulticlass image classification using gans and cnn based on holes drilled in laminated chipboard\u201d and it is authored by Wieczorek et al. [The eighth paper is entitled \u201ck et al. . The autA comprehensive survey on nanophotonic neural networks: architectures, training methods, optimization and activations Functions\u201d and it is authored by Demertzis et al. [The ninth paper is entitled \u201cs et al. . In thiss et al. for everMulti-agent reinforcement learning via adaptive Kalman temporal difference and successor representation\u201d and is authored by Salimibeni et al. [The tenth paper is entitled \u201ci et al. . The autAn IoT-enabled platform for the assessment of physical and mental activities utilizing augmented reality exergaming\u201d and it is authored by Koulouris et al. [The eleventh paper is entitled \u201cs et al. . In thisA robust artificial intelligence approach with explainability for measurement and verification of energy efficient infrastructure for net zero carbon emissions\u201d and is authored by Moraliyage et al. [The twelfth paper is entitled \u201ce et al. . In thise et al. . The proConclusively, we point out that the rationale and motivation behind this Special Issue was to provide a minor contribution to the existing literature about machine learning and AI methods for sensors. The examination of a variety of interesting proposed methodologies led to the presentation of a diverse range of novel strategies. Our great expectation is that the presented techniques and approaches, which were demonstrated in this Special Issue, will be found to be constructive and deeply appreciated by the scientific and industrial communities. 
Finally, the guest editors express their sincere gratitude to all authors for their high-quality contributions as well as the publisher and members of staff for their invaluable advice and support, which contributed in a decisive manner to enriching the quality of this editorial paper."} {"text": "Social cognition and mental health among children and youth\u201d aims to provide a forum to improve research in this field and its contribution to health psychology, the understanding of risk and protective factors, and the exploration of innovative psychosocial interventions to benefit children and youth's health and wellbeing. People's feelings and social experiences are very influenced by their social, cultural, educational and autobiographical contexts.This editorial comment on \u201cLong-term effects of stress, such as COVID-19, have the potential to seriously endanger developing children's and youth's social cognition and mental health , the 18 articles on this Research Topic highlight the individual and contextual factors that can affect the mental health and psychological wellbeing of these age-groups. In this section, we refer briefly to the themes and novel contributions of these 18 articles.Zeng et al. characterized post-traumatic growth and academic burnout and \u201cthe moderating role of core belief challenge among adolescents in an ethnic minority area in China during the COVID-19 pandemic\u201d (p. 1). Fang et al. explored an audience's emotional experience and sharing of audio-visual artistic works during the COVID-19 pandemic. In a sample of college students, Chen S. et al. \u201cexplored the mediation and moderation effects on the relationship between different social media usage patterns, emotional responses, and consumer impulse buying during the COVID-19 pandemic\u201d (p. 1). Lin et al. surveyed a large sample of Chinese adolescents and, based on health-related correlates, revealed the important role of negative problem orientation. Feng and Zhang explored \u201cthe effect of perceived teacher support and peer relationships on the mental health of Chinese university students, examining the mediating effects of reality and Internet altruistic behaviors on these relationships\u201d (p. 1). Yue et al. explored the influence of peer actual appraisals on moral self-representations through peers' reflected appraisals among Chinese adolescents aged 12\u201314. Li M. et al. examined the relationship between empathy and altruistic behavior and their underlying mechanisms in Chinese undergraduate and graduate students. Yan et al. offered a theoretical model of how perceived control and sense of power affect adolescents' acceptance intention of intelligent online services through their perceived usefulness. Zhu et al. conducted behavioral and event-related potential experiments in China to illustrate \u201chow young females with facial dissatisfaction process different levels of facial attractiveness\u201d (p. 1). Chen X. et al. \u201crevealed the effect of family cohesion on adolescents' engagement in school bullying and its mechanism of action, providing a theoretical basis for preventing and reducing the occurrence of school bullying incidents\u201d (p. 1). Li A. et al. examined how a father's presence affects an adolescent's social responsibility, and their quality of interpersonal relationships. Li T. et al. 
\u201cconstructed a moderated chain mediation model to investigate the influence of childhood psychological abuse on relational aggression among Chinese adolescents\u201d (p. 1). Zhang et al. demonstrated that photographic intervention could effectively improve positive affect and mitigate negative affect of college students during the COVID-19 pandemic. Wang et al. \u201cshowed that subjective wellbeing was positively correlated with social trust, trust in people, self-compassion, and social empathy\u201d (p. 1), in a sample of first-generation Chinese college students. Hua and Zhou examined the relationship between personality assessment and mental health, via sequential mediation path involving the Barnum effect and ego identity. Liu et al. used a multi-methods approach \u201cto analyze the mediating effects of social comparison and body image on social media selfie behavior and social anxiety in Chinese youth group\u201d (p. 1). Fan et al. explored how relationship-maintenance strategies affect burnout in adolescent athletes, \u201cincluding the potential mediating effects of the coach\u2013athlete relationship and basic psychological needs satisfaction\u201d (p. 1). Lastly, van Grieken et al. evaluated the longitudinal association between life events occurring before the second year and the risk of psychosocial problems at 3 years of age in Netherlands.The papers on this Research Topic show how understanding the fundamental cognitive/social processes and the applied/clinical situations may help unify and expand our knowledge of Social Cognition and Mental Health throughout childhood and adolescence. However, the research herein is not exhaustive, and results must be weighed against the many conceptual and methodological constraints indicated in the publications. There are also geographic and cultural limits since most of the investigations were done in China. However, we believe that by offering an overview of the topic and emphasizing the most recent achievements, this Research Topic/eBook will be useful to both beginners and specialists toward improving the mental health of disadvantaged children and youth around the globe.All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication."} {"text": "Irritable bowel syndrome (IBS) is a common gastrointestinal disorder that affects a patient for their entire life. Effective treatments for IBS are scarce, leading to an increased interest in alternative treatments\u00a0such as osteopathic manipulative treatment (OMT). OMT uses hands-on treatment to reduce pain through various methods. By focusing on visceral techniques, OMT can restore autonomic homeostasis and increase lymphatic flow. This literature review aims to investigate the efficacy of visceral OMT in reducing the severity of IBS symptoms. Five primary research studies were evaluated in this analysis. The concluding results show that visceral OMT effectively reduces the symptoms of IBS and improves patients' quality of life. Therefore, OMT should be considered an alternative therapy for treating IBS. Irritable bowel syndrome (IBS) is a common gastrointestinal (GI) condition that affects approximately 5-20% of the population and has an annual incidence of 196-260 per 100,000 [IBS can be classified as\u00a0a \u201cgut-brain axis\u201d disorder because the processing of visceral stimuli is altered and sub-optimal in function, leading to the majority of symptoms patients experience . 
TherefoThis review aims to answer the effectiveness and safety of the use of osteopathic treatment in reducing the severity and symptoms of IBS in patients.This article was previously presented as a poster at the 2023 Florida Osteopathic Medical Association Research Poster Competition on February 3rd, 2023, the LECOM Bradenton Interprofessional Research Day on April 27, 2023, and at the 2023 FMA Poster Symposium on July 29, 2023 (Appendices).MethodsPrimary research studies and meta-analyses examining the effect of OMT on the symptom severity of\u00a0IBS were identified using academic search engines, including Google Scholar and Osteopathic Research Web by E.G. in August\u00a02022. Keywords \u201costeopathic visceral techniques\u201d and \u201cIrritable Bowel Syndrome\u201d were utilized in each search engine. Nine primary research studies and one meta-analysis were found using Google Scholar, and three primary research studies were found using the Osteopathic Research Web. These were all reviewed by C.L., J.B., A.J., M.G., E.G., and J.T. The articles most relevant to the research question were determined based on described inclusion and exclusion criteria. Our study included only those research papers that met two criteria: First, they utilized the Rome III Criteria, and second, they had a minimum of five participants. Case studies and articles published in languages other than English were excluded from our study. IBS studies that did not relate to OMT were also excluded from our study. Overall, two primary research studies and one meta-analysis were included from Google Scholar, and one primary research study was included from the Osteopathic Research Web. We came across a fifth study, Hundscheid et al., which was referenced in the research study by Attali et al. Based on the inclusion and exclusion criteria discussed, we included Hundscheid et al. in our analysis.\u00a0Results were analyzed by comparing the statistical significance of improvement in IBS symptoms following OMT. A flow diagram of included studies is detailed in Figure ResultsFour out\u00a0of the five studies were randomized controlled studies, and the fifth study was an intervention study. Each study used different methods to analyze outcomes.\u00a0Attali et\u00a0al. and Florance et\u00a0al. analyzed IBS symptom severity, Steiner used the Irritable Bowel Syndrome Quality of Life Instrument (IBS-QOL) and IBS symptom severity (IBS-SS) questionnaires, and Hundscheid et al. and Stiedl et\u00a0al. used score sheets such as the Likert/visual analog (VAS) scale -7. AdditAttali et\u00a0al., Florance et\u00a0al., Hundscheid et\u00a0al., and Stiedl et al. demonstrated statistical significance with P values less than 0.01, as seen in Table DiscussionA combined analysis of all research studies indicates that OMT has the ability to ease IBS symptoms. Attali et\u00a0al. discovered that visceral OMT alleviated constipation, diarrhea, abdominal distension, rectal hypersensitivity, and abdominal pain . SteinerThere are several theories on how visceral manipulation can improve the symptoms of those with IBS. After somatic manipulation treatment, some physiologic properties improve. These include increased fluid dynamics and nutrition to supply tissue, relaxing smooth muscle in fascia and ligaments, increased blood flow, and improved lymphatic drainage . AdditioAttali et. al, Florance et al., and Hundscheid et\u00a0al. reported no adverse side effects for groups treated with OMT -6. 
SteinLimitationsOverall, osteopathic treatment use for IBS symptoms has shown a great response, with no known adverse effects. Even so, there are some limitations in this review on osteopathy for the treatment of IBS. The studies reviewed used different visceral and soft tissue techniques as opposed to standardizing a single technique. The techniques that were conducted varied in length of time and frequency, which may have contributed to the greater degree of symptom resolution in some sample groups. Researchers utilized varying numbers of osteopathic physicians for treatment. The studies employed various severity scores and did not compare different types of severity scores in the treatment of IBS. Lastly, our sample size was small. A larger sample size would have increased the power of this study and decreased the margin of error.This review demonstrates a correlation between the benefits of visceral osteopathy and a reduction in the severity of IBS symptoms. The OMT performed did not appear to cause any side effects. Therefore, OMT may be considered a safe alternative or adjunct in the treatment of patients with IBS. However, more research is needed due to a limited number of randomized controlled studies relating to the effectiveness of\u00a0OMT in alleviating IBS symptoms. Future studies will help demonstrate a stronger causal positive relationship between the benefits of visceral OMT and the reduction of IBS symptoms."} {"text": "This Special Issue, focused on a collection of papers on \u201cattacking cancer progression and metastasis\u201d, is devoted to communicating current knowledge about the cellular and molecular mechanisms involved in cancer progression and metastasis, as well as suggesting new targets for possible future therapeutic interventions. It aims to provide ample scope for new ideas on how to block or weaken processes of cancer progression to its final stage. Nine interesting scientific papers from basic research covering a broad spectrum of cancer types and seven reviews offer an extensive view on various aspects of possible targetable mechanisms leading to possible future interventions.In the research article \u201cAnti-Stem Cell Property of Pterostilbene in Gastrointestinal Cancer Cells\u201d, Shiori Mori et al. showed tMonika Barathova et al. describeAmy Scholze et al. proved tThe up-regulation of the receptor for advanced glycation end products (RAGE), in the absence of stimulation by external ligands, has been shown by Priyanka Swami et al. to modulMargarite Knyazeva et al. used couTargeting collagen prolyl 4-hydroxylase 1 (C-P4H1) is considered a potential therapeutic strategy for collagen-related cancer progression and metastasis. Shike Wang et al. developeThe role of methionine aminopeptidase 2 (MetAp2), an intracellular enzyme known to modulate angiogenesis, in lymphangiogenesis has been described by Rawnaque Esa et al. . The genAmong the reviews published in this Special Issue, Olamide T. Olaoba et al. offered Eleonora A. Braga et al. analyzedSylwia Tabor et al. discusseBoris Mravec et al. providedThe review \u201cNew Insights into Therapy-Induced Progression of Cancer\u201d written by Polina V. Shnaider et al. summarizMarisol Miranda-Galvis and Yong Teng aimed toFinally, Sona Ciernikova et al. summariz"} {"text": "Collectively, urological malignancies account for a considerable proportion of cancer cases worldwide. 
Among them, prostate cancer (PCA) is the most frequently diagnosed cancer in men, while bladder cancer (BCA) and renal cancer (RCC) rank among the top 10 most prevalent cancers globally. The high incidence rates pose a significant public health problem. The treatment of urological malignancies often involves complex approaches such as surgery, radiation therapy, chemotherapy, and targeted therapies. However, a better understanding is needed to further enhance the therapeutic management and to improve outcomes for patients. In this editorial, we examine and analyze the key findings from the original articles published in the Special Issue \u201cInsights into Urologic Cancer\u201d. These studies contribute to advancing our understanding of urological malignancies and hold significant implications for patient care and outcomes.Jir\u00e1sko et al. exploredHistopathological discrimination of chromophobe RCC and oncocytoma may be challenging due to a similar appearance. Bin Satter et al. developeMetastasis is a major challenge in RCC, often associated with poorer outcomes. Sanders et al. investigTyrosine kinase inhibitors have revolutionized RCC treatment, but resistance remains a challenge. Ding et al. highlighDJ-1 is involved in various cellular processes and has been implicated in cancer development and progression. Hirano et al. studied The study by Gutierrez et al. sheds liThe introduction of the antibody\u2013drug conjugates enfortumab vedotin targeting Nectin-4 revolutionized the treatment of metastatic BCA. However, BCA exhibits diverse histological subtypes with varying prognoses, and the prevalence of Nectin-4 expression remained unclear. Rodler et al. focused CD155 is mainly expressed in various cancer cells. Mori et al. studied Gemcitabine is a commonly used chemotherapy drug, but resistance often limits its effectiveness. Wang et al. investigRNA-binding proteins play an essential role in post-transcriptional gene regulation, and their dysregulation has been implicated in cancer. Gu et al. developeFerroptosis, a form of regulated cell death, has emerged as a potential therapeutic target in various cancers. Zhang et al. exploredImmunotherapy has shown promise in the treatment of BCA, but response rates can vary. Shimizu et al. investigRadical prostatectomy is a common treatment option for localized prostate cancer, and it is crucial to consider not only the surgical procedure itself but also the supportive measures provided to patients. Wolf et al. aimed toIn conclusion, the collection of articles in the Special Issue \u201cInsights into Urologic Cancer\u201d has made substantial contributions to our comprehension of urological malignancies. These studies have shed light on various aspects of urological cancers. The studies have also shed light on emerging therapeutic approaches, such as targeting specific molecular pathways or exploring the role of circular RNAs in overcoming drug resistance.Looking ahead, the insights gained from these studies open up new avenues for future functional, translational, and clinical research. Further investigations can build upon the knowledge obtained in this Special Issue to refine diagnostic methods, optimize treatment protocols, and develop novel therapeutic interventions. 
By addressing the gaps in our understanding of urological malignancies, future research endeavors hold the potential to improve patient outcomes and to ultimately contribute to the global fight against urologic cancers."} {"text": "Depending on the state of its raw materials, final products, and processes, materials manufacturing can be classified into either top-down manufacturing and bottom-up manufacturing, or subtractive manufacturing (SM) and additive manufacturing (AM). Some important top-down manufacturing methods include casting , welding2O3, using a laser cladding process. Yin et al. [\u22121 and dynamic tensile deformation behaviors with a wide strain rate range, varying from 33 to 600 s\u22121. Zhang et al. [Hedhibi et al. studied n et al. studied g et al. investigg et al. investigChen et al. studied The variety and quality of all these papers are addressed to both academic and industrial researchers who are looking for new information that can contribute to the advancement of future research in these highly challenging fields. It is our hope, as guest editors, that you find this volume interesting. We would like to express our sincere gratitude to the authors for their contributions and cooperation during the editorial process. We are indebted to the reviewers for their constructive suggestions and comments. We thank the editorial team for their strong support throughout the entire process."} {"text": "Gastrodiscoides hominis. Both parasitic infections are important intestinal food-borne diseases. Humans become infected after ingestion of raw or insufficiently cooked molluscs, fish, crustaceans, amphibians or aquatic vegetables. Thus, eating habits are essential to determine the distribution of these parasitic diseases and, traditionally, they have been considered as minor diseases confined to low-income areas, mainly in Asia. However, this scenario is changing and the population at risk are currently expanding in relation to factors such as new eating habits in developed countries, growing international markets, improved transportation systems and demographic changes. These aspects determine the necessity of a better understanding of these parasitic diseases. Herein, we review the main features of human echinostomiasis and gastrodiscoidiasis in relation to their biology, epidemiology, immunology, clinical aspects, diagnosis and treatment.In the present paper, we review two of the most neglected intestinal food-borne trematodiases: echinostomiasis, caused by members of the family Echinostomatidae, and gastrodiscoidiasis produced by the amphistome Over 100 species of digenetic trematodes have been reported infecting humans, many of them transmitted through food. Food-borne trematodiases constitute one of the most neglected tropical diseases group and includes liver flukes, lung flukes and intestinal flukes. Commonly, these parasitic infections have been ignored both in terms of research funding and presence in the public media. More than 40 million people are currently infected and about 10% of the world's population live at risk of infection by cystoscopy. The patient complained of haematuria and dysuria but he did not show intestinal symptoms. The authors suggested that the metacercariae could penetrate through the intestinal wall into the urinary bladder, where the parasite attained maturity.Strikingly, Miao et al. recentlyet al., et al. . In fact, most recent reviews differ in the number of species causing this parasitic disease. 
Toledo and Esteban harbouring the infective metacercariae (50\u00a0000 people infected), Echinochasmus japonicus (about 5000 cases) and Echinostoma cinetorchis and Acanthoparyphium tyosenense (with about 1000 cases each). This evidences the need for systematic epidemiological surveys, especially in areas where consumption of raw or undercooked intermediate hosts of echinostomes is a common habit.The current incidence of human echinostomiasis is not known. Most of the available information is based on sporadic reports with scarce information and, in many cases, lacking specific identification as mentioned above. Moreover, microscopists may misinterpret echinostome eggs, particularly considering that eggs of different echinostome species markedly resemble making difficult specific diagnosis coexist with the human practice of eating undercooked intermediate hosts from lakes. An average prevalence of 43%, but attaining 96%, of Echinostoma lindoense was reported from 1937 to 1956 in residents in the lake Lindu Valley (Sulawesi) . Infections were related to the local habit of eating raw bivalves has been described on the basis of adult flukes collected from Riparian people residing along the Mekong river. Parasites were detected in six Riparian people from the localities of Kratie and Takeo and/or IL-13 , 297 white blood cells per \u03bcL and a positive result of 2+ for occult blood in the urine. Moreover, haematuria was accompanied by urgency and dysuria (Miao et al., In ectopic locations such as the urinary bladder, infection was characterized by haematuria with 305 red blood cells per \u03bcm\u00a0\u00d7\u00a043\u201390\u00a0\u03bcm (Esteban et al., Opisthorchis viverrini (Crellen et al., et al., et al., Laboratory diagnosis of echinostomiasis is based on the finding of eggs in feces. The eggs are yellow-brown and thin-shelled with an operculum, and a slight thickening of the shell at the abopercular end. They are unembryonated when passed in feces. The size of human-infecting echinostome eggs ranges 66\u2013145\u00a0\u22121 of praziquantel is recommended for treatment of intestinal fluke infections. Echinostome infections can be treated successfully with slightly lower \u2013 single, oral 10\u201320\u00a0mg\u00a0kg\u22121 praziquantel (Chai et al., Praziquantel is the drug of choice for intestinal fluke infections although it is not included in the US product labelling for these infections. A single dose of 25\u00a0mg\u00a0kgG. hominis. Adult worms are pyramidal in shape and measure 8\u201314\u00a0\u00d7\u00a05.5\u20137.5\u00a0mm2 (Mas-Coma et al., G. hominis include a subterminal pharynx, tandem, lobed testes, a post-testicular ovary, an ascending uterus and a ventral genital pore (Mas-Coma et al., Gastrodiscoidiasis is a plant-borne parasitic disease caused by the intestinal amphistome G. hominis is not well known (et al., et al., Helicorbis coenosus is known to act as the first intermediate host (Mas-Coma et al., et al., et al., et al., et al., Life cycle of ll known . Adult fll known . Pigs apll known , but it et al., et al., et al., G. hominis is not a common parasite of humans, high prevalences have been found in some areas, especially in India. Buckley (First cases of human gastrodiscoidiasis were reported in India. Thereafter, human cases have been found in Burma, Nepal, Pakistan, Myanmar, Vietnam, the Philippines, Thailand, China, Kazakhstan, Indian immigrants in Guyana, Zambia, Nigeria and the Volga Delta in Russia (Yu and Mott, Buckley reported Buckley .et al., et al. (G. hominis. 
Moreover, infection with metacercariae encysted in animal products, such as raw or undercooked crustaceans, molluscs or amphibians also may occur (Fried et al., Most of the cases appear to be related to the consumption of tainted vegetables directly collected from ponds and rivers, which is a common practice in some Asian countries (Sah , et al. analysedet al., et al., et al., et al., Human gastrodiscoidiasis is commonly asymptomatic. In heavy infections, epigastric pain, abdominal discomfort, diarrhoea and headache may occur (Toledo 2 (et al., et al., et al., et al., Et al., Diagnosis of human gastrodiscoidiasis is performed by detection of eggs in feces. The egg is operculated, non-embryonated and measuring about 150\u00a0\u00d7\u00a070\u00a0mm2 (Mas-ComHuman echinostomiasis and gastrodiscoidiasis are among the most neglected intestinal trematode infections. Both parasite infections have been considered as minor diseases mainly confined to some areas of Asia. However, the real prevalence of these infections is not known, which makes necessary further efforts and systematic helminthological surveys to determine the real impact on human health. This is of particular importance considering that several factors such as growing international markets, new eating habits in developed countries or demographic changes may be expanding their risk of infection and prevalence in areas where these diseases were not known. Thus, further studies are required to elaborate new maps of risk and a more detailed follow-up may be useful to gain a better understanding of the current incidence of these diseases."} {"text": "Robust Perfect Adaptation (RPA), or robust homeostasis, is a ubiquitous property of biological systems, and ensures that key properties of the system can reset themselves in response to environmental disturbances, and maintain a fixed value (the system\u2019s \u2018setpoint\u2019) at steady-state. RPA is a structural property of biological networks, and is independent of special parameter choices\u2013hence, \u2018robust\u2019. Simple network configurations that can support RPA have been known for several decades \u20133, but u7, encompassing single-input/single output-networks subjected to constant-in-time disturbances, under the assumption of stability.Over the past several years, we have provided definitive answers to this question \u20136\u2013both aPLOS Computational Biology recently published an article by Bhattacharya et al. [a et al. which cla et al. in the ca et al. . We alsoPLOS Computational Biology that all RPA-capable networks\u2013regardless of size or complexity\u2013are necessarily modular in nature. Large and highly complex RPA-capable networks may be decomposable into many such \u2018basis modules\u2019, which can admit rich and complex realizations, but which can only be drawn from two distinct and well-characterized classes: Opposer modules and Balancer modules. Particularly simple RPA-capable networks, such as the three-node networks discovered by Ma et al. [First and foremost, we acknowledge that the Bhattacharya et al. article developsa et al. by compua et al. misreprea et al. article a et al. we by noa et al. , 9. Beloopposer nodes; all other nodes in a network are characterized by non-zero kinetic multipliers. In our paper [opposing sets (Theorem 3 in [without an opposer node) embedded into the route segments of the module . We alsur paper ). We devrem 3 in ), accounContrary to the claims of the Bhattacharya et al. paper, tmust be a negative feedback loop. 
This more precise statement encompasses more complex versions of Opposer modules that include opposing sets; it also allows for the possibility that other feedback loops -th order polynomial of up to (n\u22121)! terms for an n-node network) as points in a topological space, together with collections of \u2018independently-adapting subsets\u2019. This collection of subsets constitutes a topology on the terms of the RPA equation. We proved that a \u2018minimal\u2019 such subset could belong to one of two distinct forms: S-sets and M-sets. Only through a comprehensive analysis of the permissible algebraic structures of these two classes of subsets, and by considering the relationship of the contents of these subsets to structural properties of the associated network, could a topological basis for any RPA equation be definitively established. And only by relating these topological basis sets to their corresponding network architectures (modules), could we arrive at an exhaustive description of all possible RPA-capable network designs. In doing so, we identified novel network features of RPA-promoting modules, such as opposing sets (see 4\u20137 for further details), and also rigorously established the intermodular connectivity rules that allow arbitrarily large networks to achieve RPA through multi-modular network designs.The overarching technical difficulty with the Bhattacharya et al. study isopposing sets), while point (2) represents a simplified description of an M-set. In this connection, we clarify for the readers of PLOS Computational Biology that the network designs considered by Bhattacharya et al. [Interestingly, Bhattacharya et al. claim ina et al. , which ta et al. , and ignmay contain embedded feedback loops comprised entirely of balancer nodes (not opposer nodes); these feedback loops are compatible with RPA, but not required. Their Theorem 3 [Many results and special cases have been \u2018rediscovered\u2019 and presented as \u2018new\u2019 by Bhattacharya et al. , despiteheorem 3 also faiheorem 3 follows heorem 3 follows While we recognize that Bhattacharya et al. identifyPLOS Computational Biology to consider the fundamental network principles governing more complex phenotypes\u2013Turing patterning, for instance, or multistability/switching responses. These grand challenges remain open.As we note in a more recent study , the que"} {"text": "Dear Editor,The recent article by Radaelli et al. presentsRadaelli et al. then posThe simple answer is that loss of power does not yet have a name.Sarcopenia is generally considered a loss of muscle mass. A quick literature search revealed the first four mentions of sarcopenia in 1993 are famous for responding to identified diseases\u2026 A new, well publicized disorder\u2026 might stimulate a\u2026 public response\" [In 1993, Dr. Butler, the editor-in-chief of the journal esponse\" .potentiapenia.As long as 30 years ago, in 1993, Rogers and Evans mentione"} {"text": "An age-at-death estimation method using the first rib may be particularly advantageous as this rib is relatively easy to identify, not easily damaged postmortem, and associated with less mechanical stresses compared to other age indicators. Previously, mixed results have been achieved using the first rib to estimate age-at-death. This study aimed to develop and test an age-at-death estimation method using the first rib. An identified modern black South African sample of 260 skeletons were used to collect age-related data from the first rib. 
Multiple linear regression analysis equations were created from this data for male, female, and combined samples. When tested on a hold-out sample, equations generated mean inaccuracies of 7\u201313\u00a0years for point estimates. The 95% confidence intervals contained the true age in 11\u201333% of individuals depending on the equation used, but wider intervals generated using 95% prediction intervals contained true ages for 100% of individuals. Point estimate inaccuracies are comparable to other age-at-death estimation methods and may be useful if single indicator estimation is unavoidable in the case of missing or damaged bones. However, combined methods that use indicators from many areas of the skeleton are preferable and may reduce interval widths. Adult age-at-death estimation from the skeleton is difficult because of inherent human variation as people age. Different statistical approaches and observer subjectivity add to the complexity of obtaining a reliable age estimate from the skeleton \u20135. For tKunos et al. were theThese authors concluded that the overall age-at-death distribution pattern between observers (intra- and interobserver agreement) and also between the estimated and known age distributions were not statistically significantly different. As both these distribution patterns are calculated using the mean differences between observations, these reported averages could mask large individual differences. Kunos et al. note this by stating that individual specimen\u2019s ages could markedly differ between observations and also between estimated and known ages . Overalln\u2009=\u200929) of the Kunos et al. [n\u2009=\u200939) found that ages of only 55% of skeletons were correctly identified, and in contrast found that the over 60-year-old age group was misclassified most often, through underestimation [A small independent validation of presumptively or positively identified males from single internment graves in Kosovo. They modified the Kunos et al. [As advanced mathematical techniques have become increasingly popular to analyze known biological age-at-death indicators, the relationship between age and the first rib was also developed into a Bayesian analysis method . DiGangis et al. method ss et al. found thn\u2009=\u2009470) sample of convenience that contained only males. Thus, Merrit [n\u2009=\u200920) of male individuals from European ancestry to compare the performance of, among other methods, the Kunos et al. [Merrit noticed , Merrit used a ss et al. and DiGas et al. methods s et al. . Unfortus et al. point ess et al. point ess et al. not onlys et al. method ts et al. , but thes et al. , Lovejoys et al. , Todd [1s et al. , and Bros et al. ). In cons et al. method ws et al. , 17, Pass et al. , Buckbers et al. , and Rous et al. method ws et al. method , CF2 , and CF3 were statistically significantly higher for males compared to females. This significant difference might be indicative of unique patterns of aging due to lifestyle differences and suggested that functions of prediction should be adapted to consider sex.A Mann\u2013Whitney Initial results for multiple linear regression containing all features showed a 69% correlation with age using the combined sample and 64% and 75% for the female and male subgroups, respectively. However, all five features did not contribute statistically significantly to the multifactorial regression equation. 
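The contrast reported above between 95% confidence intervals (true age captured in only 11-33% of individuals) and 95% prediction intervals (true age captured in 100% of individuals) can be illustrated with a minimal sketch in Python. The feature names, the simulated ages and the train/hold-out split below are hypothetical stand-ins, not the study's data; statsmodels is simply one convenient way to obtain both interval types from an ordinary least squares fit.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 260
# Hypothetical ordinal feature scores standing in for first-rib traits
X = pd.DataFrame({
    "costal_face_score": rng.integers(1, 6, n),
    "tubercle_facet_score": rng.integers(1, 5, n),
})
age = 15 + 6 * X["costal_face_score"] + 4 * X["tubercle_facet_score"] + rng.normal(0, 10, n)

X_train, y_train = sm.add_constant(X.iloc[:200]), age.iloc[:200]
X_test, y_test = sm.add_constant(X.iloc[200:]), age.iloc[200:]

model = sm.OLS(y_train, X_train).fit()
pred = model.get_prediction(X_test).summary_frame(alpha=0.05)

y = y_test.to_numpy()
ci_cover = np.mean((y >= pred["mean_ci_lower"].to_numpy()) & (y <= pred["mean_ci_upper"].to_numpy()))
pi_cover = np.mean((y >= pred["obs_ci_lower"].to_numpy()) & (y <= pred["obs_ci_upper"].to_numpy()))
print(f"mean inaccuracy: {np.abs(y - pred['mean'].to_numpy()).mean():.1f} years")
print(f"coverage of 95% CI: {ci_cover:.0%}; coverage of 95% PI: {pi_cover:.0%}")

Confidence intervals describe uncertainty in the mean age for a given score profile and are therefore far too narrow to capture individual ages; prediction intervals also include the residual variation between individuals, which is why they behave like the wide but reliable intervals reported above.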
Thus, the features with a low impact on the equation were manually excluded to produce the final equations in Table r\u2009=\u200968%), but using the combined sex equation produced higher correlations than the female equation (r\u2009=\u200966% compared to r\u2009=\u200963%). Only between 39 and 47% (r2) of age variability is accounted for by the independent variables . Unfortunately, the equation for the female subgroup performs particularly poorly (low r2 and accuracy). One explanation for the poor accuracy of the female group could be the underlying relationships between skeletal aging and female sex hormones or pregnancy. Any such female-specific confounders could be poorly accounted for due to the original all-male sample used for category description development. Thus, revisiting category descriptions for female samples may be advisable.One factor that clearly influences the relationship between the structural changes of the first rib and age, in the current study, is sex. This could be an unintentional effect introduced by the category descriptions of DiGangi et al. , which wAlthough some of the out-of-sample validation results were disappointing, many of these results are comparable to previous studies. Not only are the mean inaccuracies for the point estimates of the current study (13, 7, and 9\u00a0years for female, male, and pooled, Table In conclusion, the first rib contains some age-related information that can be used to make predictions related to age-at-death but should ideally be used in combination with other features from the skeleton. It performed better in males than in females. This study once again demonstrates the difficulties with adult age estimation, with narrow, accurate estimates using macroscopic features still remaining elusive."} {"text": "A best evidence topic in cardiac surgery was written according to a structured protocol. The question addressed was \u2018in patients undergoing mitral valve surgery, does atrial incision affect early postoperative rates of atrial arrhythmia\u2019. Two hundred and four papers were found. Nine represented the best evidence to answer the clinical question. The authors, journal, date and country of publication, patient group studied, study type, relevant outcomes and results of these papers are tabulated. Data suggest that a transeptal incision is associated with increased rates of postoperative atrial arrhythmia compared with direct left atriotomy. A best evidence topic was constructed according to a structured protocol. A best evidence topic was constructed according to a structured protocol. This is fully described in the ICVTS .In [patients undergoing mitral valve surgery] does [atrial incision] affect [postoperative rates of atrial arrhythmia].A 56-year-old male presents for mitral valve (MV) surgery with severe mitral regurgitation from \u2018posterior leaflet\u2019 prolapse. His left atrium is mildly dilated on transthoracic echocardiogram. He is in sinus rhythm (SR) with no history of atrial fibrillation (AF). Will the atrial incision affect the risk of early postoperative atrial arrhythmia (AA)?Medline 1950 to 2020 using Ovid interface:[cardiac surgical procedures/OR heart atria/su [surgery] OR mitral valve insufficiency/su [surgery]]AND OR electrocardiography/OR postoperative complications/OR tachycardia, ectopic atrial/ep, et ]AND [incision.mp OR trans$eptal.mp OR sondergaard$.mp OR interatrial groove.mp OR waterston$.mp OR superior trans$eptal.mp]Two hundred and four papers were found using the reported search. 
From these 9 papers were identified that provided the best evidence to answer the question. These are presented in Table\u00a0et al. [Utley et al. prospectet al. [n\u2009=\u200925) and superior transeptal (n\u2009=\u200965) approaches. Three patients in the left atriotomy group developed a junctional rhythm, all which resolved on discharge. Twenty-five patients in the superior transeptal group developed a junctional rhythm, 3 remained in a junctional rhythm at 6\u2009weeks. Again, the authors purported the advantages in view and visualization via the superior transeptal approach, however, as seen with in Utely et al., rates of repair were actually lower in the superior transeptal group when compared to the atriotomy group. There was no mention of underlying aetiology of valve dysfunction to further comment.Kumar et al. studied et al. [Bernstein et al. studied et al. [A retrospective study performed by Masuda et al. in 1996 et al. [n\u2009=\u200954) and left atriotomy (n\u2009=\u200922). They reported follow-up to 2 years. Nineteen patients in the superior transeptal group were in SR preoperatively. Of this group, 2 patients developed persistent longstanding postoperative AF. Seven patients in the left atriotomy group were in SR preoperatively with one patient developing a junctional rhythm requiring PPM. No patient remained in AF at 2\u2009years. Takeshita et al. also performed preoperative and postoperative coronary angiograms on 9 patients undergoing superior transeptal approach during this study. In all 9 cases the SA nodal artery had been divided by the superior transeptal approach; however, nodal function \u2018was not severely impaired\u2019.Takeshita et al. studied et al. [et al. reported no difference in rates of AA, including AF and junctional rhythms. They suggested preoperative factors such as left atrial (LA) dilatation, increased LA pressures and chronicity of mitral disease are more contributive to rates of postoperative rhythm disturbances than incision used. They did not differentiate between AFl and AF.Gaudino et al. set out et al. [et al. also assessed sinus nodal artery courses preoperatively in the superior transeptal group and found no clear correlation between course and postoperative sinus node dysfunction. They did demonstrate much higher rates of repair in the superior septal group than the other 2 incisions.Tenpaku et al. comparedNienaber and Glower retrospeet al. [Lukac et al. performe. Interestingly, although the vast majority of papers purport the advantage of the transeptal and superior transeptal approaches in regard to superior view of the MV, rates of repair via this superior incision were not reflected in all studies. Overall, the risk of re-entrant AAs needs to be weighed against the advantages of access to the MV achieved via transeptal approaches to the left atrium.There remains a paucity of data looking at rates of AAs in relation to different access incisions in MV surgery. The majority of the studies are of low volume and difficult to directly compare due to differing study designs and outcome measures. The available data suggest that there is a higher risk of AAs when a transeptal approach (either superior or limited) is taken compared with a direct LA approach. 
It is felt that right atrial tachyarrhythmias are usually a result of the incision resulting in a proarrhythmogenic nidus whilst left AAs are due to the underlying pathologyConflict of interest: none declared."} {"text": "Eye tracking has the potential to characterize autism at a unique intermediate level, with links \u2018down\u2019 to underlying neurocognitive networks, as well as \u2018up\u2019 to everyday function and dysfunction. Because it is non-invasive and does not require advanced motor responses or language, eye tracking is particularly important for the study of young children and infants. In this article, we review eye tracking studies of young children with autism spectrum disorder (ASD) and children at risk for ASD. Reduced looking time at people and faces, as well as problems with disengagement of attention, appear to be among the earliest signs of ASD, emerging during the first year of life. In toddlers with ASD, altered looking patterns across facial parts such as the eyes and mouth have been found, together with limited orienting to biological motion. We provide a detailed discussion of these and other key findings and highlight methodological opportunities and challenges for eye tracking research of young children with ASD. We conclude that eye tracking can reveal important features of the complex picture of autism. Autism spectrum disorder (ASD) is a neurodevelopmental condition defined by impairments across the areas of reciprocal social interaction and verbal and non-verbal communication, alongside repetitive and stereotyped behaviors . InterveUnderstanding how infants and children use their eyes in various contexts is important to understanding their opportunities for learning and development -16. An eCorneal reflection eye tracking is the most common method used to study gaze performance in infants and young children ,21. ThisIn this review, we critically assess the use of eye tracking in research focused on autism early in life. Eye tracking studies were identified through searches (through August 2013) in PubMed, Web of Science, and Google Scholar using \u2018autism\u2019\u2009, \u2018child\u2019\u2009, and \u2018eye tracking\u2019 as keywords \u2019 and that \u2018the combination of a preference for geometry combined with saccade quantity might be a particularly strong early identifier of autism\u2019 (pp. 107\u2013108). However, this suggestion remains speculative given that no data were presented to support the view that the looking time measure and the saccade frequency measure (fixation rate) reflected independent processes. Alternatively, interest in a particular type of object increases the fixation length on that object (and thus decreases the fixation rate), leading to more aggregated looking time at that object as well.The Pierce l. study may serv [et al. reportedet al. [In the context of this review, it is worth noting that the paired visual preference paradigm does not require eye tracking technology. For example, a non\u2013eye tracking study by Tek et al. used theEvent-related designs typically focus on properties of gaze shifts , and the paradigms included in this section have a more experimental flavor than the (semi-) naturalistic approaches.et al. [Using an event-related paradigm called the gap overlap task, Elison et al. investigWhile Elison studied saccadic reaction times, Falck-Ytter used eyeet al. [et al. [et al. [et al. [et al. [et al. 
[Yet another event-related paradigm of particular relevance for autism research is the gaze following task. Bedford et al. used a s [et al. and Youn [et al. to map g [et al. and show [et al. , but ind [et al. , who rep [et al. found th [et al. suggeste [et al. .The studies reviewed in this section illustrate the value of event-related eye tracking measures for understanding aspects of oculomotor performance, visual orienting, action prediction and gaze following.et al. point to the possibility that basic attentional functions are already impaired in ASD during the first year of life. This conclusion is also supported by Elsabbagh et al. , altet al. [et al. [et al., who used static cues, Elsabbagh et al. used a dynamic central cue . Another important difference was that Elsabbagh et al. [et al. [et al. used corneal reflection eye tracking, Elsabbagh et al. extracted latencies from video recordings of eye movements. Finally, Elsabbagh et al. used non-social stimuli, while Elison used mixed social and non-social stimuli. Interestingly, in a study of toddlers, Chawarska et al. [The studies by Elison et al. and Elsa [et al. ,92 had sh et al. ,92 inclu [et al. based th [et al. . Furthera et al. found noet al. [In light of these findings and because studies of ASD are orienting toward younger populations, it may be useful to briefly review some basic findings related to the typically developing oculomotor system . At biret al. . These ret al. . As infaet al. ,98. Arouet al. . This tret al. . The halet al. . Evidencet al. ,103, andet al. . Saccadeet al. . Thus, eet al. also illustrates how eye tracking and brain-based measures can be linked, which clearly is a priority for future research. Although plausible neural mechanisms for some eye tracking measures have been established [et al. [The study by Elison ablished , thablished . Key andablished reported [et al. suggeste [et al. . The \u2018fa [et al. . Process [et al. ,110,111, [et al. . Finally [et al. , but the [et al. ,113.This review has covered eye tracking studies of early autism, ranging from research that involved viewing of naturalistic scenes to highly experimental designs. We have argued that future research can benefit from taking more advantage of the unique options provided by eye tracking ,53,57. SWhat are the substantial findings from this body of research? Several of the reviewed studies have found that reduced looking time to people and faces is characteristic of young infants and toddlers with ASD ,32,34,35This review also covers some controversies. One concerns how young children with ASD look at faces, in particular their looking time to other people\u2019s eyes and mouths. The reviewed studies indicate that looking time to eyes and mouth probably depends on a number of contextual and participant factors (diagnostic status being only one of many) that are currently relatively poorly understood. However, it now seems fair to conclude that looking time to the mouth is related to language function at specific early periods in typical development ,58. AnotFor researchers not familiar with eye tracking, it can be difficult to realize the diversity of the questions the method can address. In fact, a full overview of the possibilities associated with it is outside the scope of this article . The spaIn sum, although eye tracking has some drawbacks (primarily high cost and expertise requirements), there is a great potential to exploit and develop this method further in the field of early autism. 
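Several of the measures discussed in this review (looking time, fixation count and fixation rate within an area of interest) are derived from the same underlying fixation stream, which is one reason they cannot be assumed to reflect independent processes, as noted for the Pierce et al. study. A minimal sketch with hypothetical fixation data (the AOI labels and durations are invented for illustration, not taken from any cited study) makes the dependency explicit:

from collections import defaultdict

# Each fixation: (area_of_interest, duration_in_seconds)
fixations = [
    ("geometric", 0.8), ("geometric", 1.1), ("social", 0.3),
    ("social", 0.25), ("geometric", 0.9), ("social", 0.2),
]

totals = defaultdict(float)   # summed looking time per AOI
counts = defaultdict(int)     # number of fixations per AOI
for aoi, dur in fixations:
    totals[aoi] += dur
    counts[aoi] += 1

for aoi in totals:
    looking_time = totals[aoi]
    fixation_rate = counts[aoi] / looking_time      # fixations per second of looking
    mean_fix_dur = looking_time / counts[aoi]
    print(f"{aoi}: looking time {looking_time:.2f} s, {counts[aoi]} fixations, "
          f"rate {fixation_rate:.2f}/s, mean duration {mean_fix_dur:.2f} s")

Because fixation rate is simply the fixation count divided by looking time, longer fixations on a preferred stimulus increase looking time and decrease fixation rate at the same time.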
Eye tracking data can be conceptualized as describing autism at a unique, intermediate level, with links \u2018down\u2019 to underlying neurocognitive networks, as well as \u2018up\u2019 to everyday function and dysfunction. By describing these links in detail, eye tracking will reveal important features of the complex picture of autism.Written informed consent was obtained from the children's guardian/parent/next of kin for the publication of this report and any accompanying images.ADOS: Autism diagnostic observation schedule; AOI: Area of interest; ASD: Aautism spectrum disorder; ERP: Event-related potential; SLL: Specific language disorder.The authors declare that they have no competing interests.TFY wrote the paper with contributions from SB and GG. All authors read and approved the final manuscript.List of identified eye tracking studies.Click here for file"} {"text": "Understanding the structure of interphase chromosomes is essential to elucidate regulatory mechanisms of gene expression. During recent years, high-throughput DNA sequencing expanded the power of chromosome conformation capture (3C) methods that provide information about reciprocal spatial proximity of chromosomal loci. Since 2012, it is known that entire chromatin in interphase chromosomes is organized into regions with strongly increased frequency of internal contacts. These regions, with the average size of \u223c1 Mb, were named topological domains. More recent studies demonstrated presence of unconstrained supercoiling in interphase chromosomes. Using Brownian dynamics simulations, we show here that by including supercoiling into models of topological domains one can reproduce and thus provide possible explanations of several experimentally observed characteristics of interphase chromosomes, such as their complex contact maps. Such linkers do not force the two ends of a given topological domains to stay together but rather let them to fluctuate around positions dictated by supercoiling of modelled topological domains. A similar behaviour would be expected for supercoiled topological domains where boundary elements are attached to different nuclear granules. Importantly, the linker chains serve only an accessory role and are not entered into the statistics of contacts. Using the model described earlier in the text, we checked whether supercoiling can cause formation of topological domains, i.e. regions with 2- to 3-fold increased frequency of contacts as compared with loci with similar genomic distance but located in different topological domains . The larger loops thus correspond to \u223c800 000 bp and are close to the average size of topological domains . This later result agrees with the notion that supercoiling in topological domains is dynamic and may change with cell activity .To evaluate the effect of supercoiling in a more quantitative way, we compared how the average contact probability decreases with the separating genomic distance in simulations and in experiments. As experimental data, we took first intra- and inter-domain contact probabilities involving topological domains E, F and H in the X-chromosome inactivation centre of mouse embryonic stem cells that were presented in et al. . As showet al. . It is i . It is important to mention here that the scale of our large chromosome fragment model is effectively set by the average number of beads used to model individual topological domains, which in reality have the average size of \u223c1 Mb. et al. (et al. (et al. 
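The quantity compared with the 3C data above, the average contact probability as a function of genomic separation, can be computed from simulated bead coordinates in a few lines. The sketch below assumes that a trajectory array of shape (frames, beads, 3) is available from a Brownian-dynamics-like run and that each bead represents a fixed number of base pairs; the contact cutoff, the bead-to-base-pair factor and the random-walk stand-in trajectory are illustrative assumptions, not values from the study.

import numpy as np

def contact_probability(frames, cutoff, bead_bp=3000):
    """Average contact probability versus genomic separation.

    frames : array of shape (n_frames, n_beads, 3) with bead coordinates
             from a polymer trajectory (hypothetical input).
    cutoff : distance below which two beads are scored as being in contact.
    bead_bp: assumed number of base pairs represented by one bead.
    """
    n_frames, n_beads, _ = frames.shape
    seps = np.arange(1, n_beads)                  # genomic separation in beads
    prob = np.zeros(n_beads - 1)
    for s in seps:
        d = np.linalg.norm(frames[:, s:, :] - frames[:, :-s, :], axis=-1)
        prob[s - 1] = np.mean(d < cutoff)         # averaged over frames and bead pairs
    return seps * bead_bp, prob

# Example with a random-walk stand-in for a real trajectory (nm-scale steps):
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(0, 30.0, size=(50, 200, 3)), axis=1)
genomic_sep, p = contact_probability(traj, cutoff=100.0)
for s, pc in list(zip(genomic_sep, p))[:5]:
    print(f"separation {s:>6d} bp: contact probability {pc:.3f}")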
(Simulation results presented up to now were obtained using a model where the diameter of beads corresponds to the diameter of 30-nm chromatin fibres. Taking the linear density of 30-nm chromatin fibres , one can. et al. . To have. et al. . It is v. et al. A. A simi (et al. (see Fig (et al. A. Howeve (et al. A. It is (et al. also, re (et al. ,30. Howe (et al. . If that (et al. ,29. With (et al. ,29.Figuet al. involved specific binders that only aggregate together chromatin portions belonging to the same topological domain. That model is able to imitate general characteristics of topological domains, but it requires that each topological domain should have domain-specific binders recognizing it. Because there are several thousands of topological domains in genomes of higher eukaryotes, one would also need to have several thousand species of these specific binders. In addition, each topological domain would need to have highly specific markers enabling it to attract its specific binders. Barbieri et al. did not propose, what could be these specific markers and specific binders for each topological domain.We have presented a relatively simple model of organization of topological domains that agrees with the available 3C data. It is important though to discuss how our model compares with other models in the literature. Because the discovery of topological domains in eukaryotic chromosomes is relatively recent, only few papers discussed their possible structure ,31, and Several earlier papers, preceding the discovery of topological domains, proposed models where chromosomes are organized into sequentially arranged closed loops ,33. We set al. (When this article was under review, a new study was published that combined high resolution 3C data and polymer modelling to elucidate the structure of bacterial chromosomes . It was et al. suggesteSupplementary Data are available at NAR Online.Swiss National Science Foundation grant [31003A_138267 to A.S.]. Funding for open access charge: Waived by Oxford University Press. Conflict of interest statement. None declared."} {"text": "We review the current status of the role and function of the mitochondrial DNA (mtDNA) in the etiology of autism spectrum disorders (ASD) and the interaction of nuclear and mitochondrial genes. High lactate levels reported in about one in five children with ASD may indicate involvement of the mitochondria in energy metabolism and brain development. Mitochondrial disturbances include depletion, decreased quantity or mutations of mtDNA producing defects in biochemical reactions within the mitochondria. A subset of individuals with ASD manifests copy number variation or small DNA deletions/duplications, but fewer than 20 percent are diagnosed with a single gene condition such as fragile X syndrome. The remaining individuals with ASD have chromosomal abnormalities , other genetic or multigenic causes or epigenetic defects. Next generation DNA sequencing techniques will enable better characterization of genetic and molecular anomalies in ASD, including defects in the mitochondrial genome particularly in younger children. Leo Kanner described autism in 1943 in 11 children manifesting withdrawal from human contact as early as age 1 year postulating origins in prenatal life . SeveralSymptoms of ASD usually begin in early childhood with evidence of delayed development before age 3 years. 
The American Academy of Pediatrics recommends autism screening for early identification and intervention by at least age 12 months and again at 24 months. Rating scales helpful in establishing the diagnosis are Autism Diagnostic Interview- Revised (ADI-R) and the Autism Diagnostic Observation Schedule (ADOS) in combination with clinical presentation , 6. The etiology of ASD is complex and encompasses the roles of genes, the environment (epigenetics) and the mitochondria. Mitochondria are cellular organelles that function to control energy production necessary for brain development and activity. Researchers are increasingly identifying mitochondrial abnormalities in young children with ASD since the most severe cases present early with features of ASD. Better awareness and more accurate and detailed genetic and biochemical testing are now available for the younger patient presenting with developmental delay or behavioral problems. et al. [Epidemiologic and family studies suggest that genetic risk factors are present. Monogenic causes are identifiable in less than 20 percent of subjects with ASD. The remaining subjects have other genetic or multigenic causes and/or epigenetic influences which are environmental factors altering gene expression without changing the DNA sequence -10. Epiget al. . The recde novo changes in genes are unlikely to occur so quickly. Specific genetic and cytogenetic conditions associated with ASD are summarized in a recent review [Fragile X syndrome and tuberous sclerosis are the most common single gene conditions associated with ASD.\u00a0The commonest chromosomal abnormality in non-syndromal autism is duplication of the 15q11-q13 region, accounting for 5% of patients with autism. Large microdeletions in chromosome 16p11.2 and 22q regions account for another 1% of cases . The rapet al. [MECP2 gene defects in females), 3% with PTEN gene mutations in those with a head circumference > 2.5 SD, approximately 10% with other genetic syndromes and 10% with small deletions or duplications identified using chromosomal microarrays.The role and importance of genetic testing for individuals with ASD is well recognized with varet al. later foet al. used a tHigh resolution chromosome analysis detects 3 to 5 megabase-sized abnormalities; however, new technology using DNA or chromosomal microarrays can identify abnormalities 100 times smaller. Therefore, microdeletions and duplications may now be identified with microarrays in individuals with ASD who previously had normal cytogenetic testing. Children with ASD show a higher prevalence of microdeletions and duplications, particularly involving chromosome regions 1q24.2, 3p26.2, 4q34.2, 6q24.3 and 7q35 including those with non-syndromal ASD . TherefoShen and coworkers studied et al. [CDH10) and cadherin 9 (CDH9) genes. The latter two genes encode neuronal cell-adhesion molecules. These findings were replicated in two independent cohorts and demonstrate an association with susceptibility to ASD. Wang et al. reportedet al. [Sebat et al. studied STXBP5) and neuronal leucine rich repeat 1 (NLRR1) genes. Syntaxin 5 regulates synaptic transmission at the presynaptic cleft and is known to inhibit synapse formation. Syntaxin 1 protein is increased in those with high functioning autism. 
The role of NLRR1 at the synaptic level is unknown but is thought to be related to neuronal growth [As a result of research into nucleotide sequences, microdeletions and duplications in children with ASD can now be identified including syntaxin binding protein 5 , two important gene networks expressed in the central nervous system.An autism genome-wide copy number variation study reported by Glessner et al. in a larNext generation DNA sequencing is currently underway, allowing for rapid and efficient detection of mutations at the nuclear and mitochondrial DNA (mtDNA) level in human investigations and becoming part of clinical workup. Heteroplasmy or the existence of multiple mtDNA types within cells of an individual, is detectable using standard molecular genetic techniques which focus on hypervariable regions of the mitochondrial genome. With high-throughput next generation sequencing of the complete human mtDNA, which is faster and more powerful, accurate detection of heteroplasmy can be made throughout the mitochondrial genome not just in the hypervariable regions located in the cytoplasm of the cell enabling the study of correlation with diseases. et al. [Recently, Li et al. sequenceWith a significant percentage of children with autism presenting with metabolic abnormalities and other biochemical disturbances, identification of mitochondrial disorders is critical for early treatment. Long standing mitochondrial dysfunction can lead to major health complications and damage. If identified early, mitochondrial disorders can be managed with improved longevity and quality of life. Medical intervention and therapies are now available to specifically target the biochemical defect in the mitochondria and to improve function and bioenergy utilization and diminish the neurological insults that would occur if left untreated. Inborn errors of metabolism may contribute to at least 5% of cases with ASD . Deficie1).Mitochondria are intracellular organelles in the cytoplasm known as the power houses of the cell. They play a crucial role in adenosine 5\u2019-triphosphate (ATP) production through oxidative phosphorylation (OXPHOS). The latter process is carried out by the electron transport chain made up of\u00a0Complex I (NADH: Ubiquinone oxidoreductase or dehydrogenase), Complex II (Succinate:Ubiquinone oxidoreductase), Complex III (Coenzyme Q:Cytochrome c reductase or cytochrome bc1 complex), and Complex IV (Cytochrome c oxidase) required to convert food sources to cellular energy. The electron transport system is situated in the inner membrane of the mitochondria and contains proteins encoded by both nuclear and mitochondrial DNA . About 1Human mitochondrial DNA (mtDNA) is a circular double stranded DNA molecule contained within the mitochondrion and inherited solely from the mother. The evolutionary antecedents are bacterial plasmids. Each mitochondrion contains 2-10 mtDNA copies organized into nucleoids . This or2). When a mutation arises in a cellular mtDNA, it creates a mixed intracellular population of mutant and normal molecules known as heteroplasmy. When a cell divides, it is a matter of chance whether the mutant mtDNAs will be partitioned into one daughter cell or another. 
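A minimal stochastic sketch of this chance partitioning, using an arbitrary illustrative copy number and starting mutation load rather than measured values, shows how repeated divisions drive individual cell lineages toward one extreme or the other:

import random

def segregate(n_copies=100, start_fraction=0.5, generations=500):
    """Follow one cell lineage: at each division the daughter re-draws its
    n_copies mtDNA molecules with probability equal to the parent's current
    mutant fraction (a simple binomial model of random partitioning)."""
    mutant = round(n_copies * start_fraction)
    for _ in range(generations):
        p = mutant / n_copies
        mutant = sum(random.random() < p for _ in range(n_copies))
        if mutant in (0, n_copies):        # lineage has become homoplasmic
            break
    return mutant / n_copies

lineages = 500
outcomes = [segregate() for _ in range(lineages)]
pure_mutant = sum(f == 1.0 for f in outcomes)
pure_normal = sum(f == 0.0 for f in outcomes)
print(f"pure mutant: {pure_mutant}, pure normal: {pure_normal}, "
      f"still heteroplasmic: {lineages - pure_mutant - pure_normal}")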
Therefore, over time, the percentage of mutant mtDNAs in different cell lineages can drift toward either pure mutant or normal (homoplasmy), a process known as replicative segregation, which then impacts on cellular energy and human diseases [Mitochondria are dynamic organelles that fuse and produce fission by changing shapes and size throughout the life of the cell. Thus far, the mitochondria cannot yet be accurately quantified but mtDNA can be analyzed. In mammals, each double-stranded circular mtDNA molecule consists of 15,000-17,000 base pairs containing a total of 37 genes is caused by T8993G heteroplasmic mtDNA mutations. When heteroplasmy is present in less than 10 percent of copies, NARP findings are usually absent. At 10-70 percent, adult onset NARP is present, and at 70-100 percent childhood Leigh syndrome occurs. Other examples of known mitochondrial diseases include Myoclonic Epilepsy with Ragged Red Fibers (MERRF) and the disorder known as Mitochondria Encephalomyopathy, Lactic Acidosis, and Stroke-like episodes (MELAS) . In the mitochondria ATP production, free oxygen radicals and reactive oxygen species (ROS) are produced and then normally removed from the cells by anti-oxidant enzymes. When the production of ROS and free radicals exceeds the limit, oxidative stress occurs, that is, ROS combine with lipids, nucleic acids and proteins in the cells leading to cell death by apoptosis or necrosis . Since bet al. [et al. [Coleman and Blass in were theet al. reportedet al. then pro [et al. studied [et al. in two u1 for a summary of known mitochondrial and nuclear gene defects in ASD). Recently, Shoffner and colleagues studied Early cases reported by Graf and colleagues provide et al. [dGK and TK2. The sister had no depletion of the mtDNA or mutations in the two nuclear genes. However, they did have mtDNA depletion in the skeletal muscle at a 72% level. These findings highlight the importance of the A3243G mtDNA mutation in ASD and the investigators proposed that mutations or depletion of mtDNA should be investigated in other children presenting with ASD.In addition, Pons et al. followedet al. [et al. [et al. [Oliveira et al. reported [et al. , but no [et al. reportedet al. [et al. [An interesting study by Fillano et al. further [et al. found thet al. [SLC23A12. This most important gene located on chromosome 21q31 encodes a calcium-binding carrier protein used by the mitochondria. It is involved in the exchange of the amino acid aspartate for glutamate in the inner membrane of the mitochondria for use in the electron transport chain. Ramoz et al. [SLC25A12 gene associated with ASD in their study of 411 families. This association was confirmed by Seguarado et al. [et al. [Studies reviewed by Smith et al. linked nz et al. identifio et al. and by S [et al. .et al. [et al. [SLC23A12 gene in their families with autism. Also, Chien et al. [SLC23A12 gene.A recent study by Ezugha et al. of a 12- [et al. also stun et al. studied (POLG1) which codes for mitochondrial DNA polymerase [Mitochondrial DNA replication and maintenance is regulated by enzymes which are proteins encoded by nuclear DNA genes . One suclymerase . This gedGUOK gene mutations and hepatic failure with reductions in activity of mitochondrial complexes I, II and IV. However, the enzyme activity for complex II, which is encoded by nuclear DNA, was normal in these patients. Furthermore, quantitative mtDNA analysis showed a reduced mtDNA/nuclear DNA ratio of 8-39% in liver specimens from seven of the children. 
Studies have examined the role of nuclear genes in causing depletion of the mtDNA genome and tissue specific conditions , 62. MandGK and TK2, encoded by the nuclear genome [et al. [dGK was mutated in patients with reduced mtDNA/nuclear DNA ratios. A similar study by Saada et al. [TK2 nuclear gene. These two studies further supported that nuclear genes are linked to mtDNA but more research is needed to determine their role in autism and for identification of mtDNA genetic defects, mutations or for the mtDNA depletion status associated with related nuclear genome defects.The main supply for dNTPs required for energy comes from the deoxynucleoside salvage pathway regulated by two mitochondrial enzymes r genome , 64. Hen [et al. examineda et al. in two oPOLG1) is an important polymerase enzyme responsible for mtDNA replication and base excision repair [POLG1 enzyme is located on chromosome band 15q25 [POLG1 gene. Studies have shown that mutations or disturbances of the POLG1 gene cause deletions of mtDNA genes. Additionally, autism is linked to defects of human chromosome 15q11-q13 region including inverted duplications found in about 5% of cases with autism [et al. [Polymerase gamma , juvenile spino-cerebellar ataxia-epilepsy syndrome (SCAE) and Alpers-Huttenlocher hepatopathic poliodystrophy. These encephalomyopathies are caused by abnormal nuclear mitochondrial intergenomic cross-talk linked to POLG1 gene mutations and to mtDNA deletions. This strong link between nuclear genes and the mtDNA genome is further supported by a prevalence of 2% of individuals in the general population carrying this nuclear gene defect [e defect . TK2, DGK, PLOG, TWINKLE, ANT1 and TP can cause deletions and depletions of mtDNA, their involvement and frequency in ASD requires further investigation.Another nuclear gene that can cause multiple mtDNA abnormalities including duplications, deletions and point mutations is thymidine phosphorylase (TP) , 73. TheFinally, oxidative stress and gene methylation (epigenetics) are both considered to play a role in producing ASD. Oxidative stress in brain cells due to genetic or environmental causes, leads to decreased methionine synthase activity . Methionet al. [Deth et al. further et al. . MeCP2 pet al. [Oxidative stress is also known to cause deletions of the mtDNA genes in mice models . For exaet al. indirectet al. [To further evaluate for mitochondrial dysfunction and mtDNA abnormalities, Giulivi et al. recentlyIn summary, the role of the mitochondria and mitochondrial defects are discussed in relationship to ASD. Several studies have linked autism to defects in oxidative phosphorylation encoded by mtDNA along with interaction of nuclear genes. Individuals with the ASD phenotype clearly show genetic-based primary mitochondrial disease. Further studies with the latest genetic technology such as next generation sequencing, microarrays, bioinformatics and biochemical assays will be required to determine the prevalence and type of mitochondrial defects in ASD. Elucidation of molecular abnormalities resulting from mitochondrial or genetic defects in individuals with ASD may lead to better treatment options and outcomes."} {"text": "High quality abstracts were selected and discussed as oral or poster presentations. 
The aim of this review is to distribute the scientific highlights of this workshop outside the group as analyzed and represented by experts in retrovirology, immunology and clinical research.

The December 2011 workshop, the 5th in the series, took place in St. Maarten from December 6-9 and featured presentations from 210 scientists representing approximately 10 countries. Since its inception, the goal of the workshop has been to provide a forum for research aimed at understanding the mechanism by which HIV-1 persists in the face of antiretroviral therapy (ART) and to develop strategies with which to curtail viral persistence and accelerate the objective of viral eradication. While ART has fundamentally impacted the health of individuals living with HIV infection and effects durable suppression of plasma viral RNA to undetectable levels, current treatment regimens are unable to eradicate the virus.

Planelles et al. described a primary-cell latency model based on in vitro differentiated central memory cells. The system uses peripheral blood mononuclear cells from healthy human donors. Following isolation of naïve CD4+ cells by negative selection, cells are activated with CD3 and CD28 antibodies in medium containing TGF-beta, anti-interleukin (IL)-12 and anti-IL-4 antibodies, followed by culture in IL-2. Following HIV-1 infection, cells return to a quiescent state in which the majority of infected cells are latent. Planelles et al. have previously demonstrated that these latently infected cells can be re-activated by incubation with CD3/CD28 antibodies. The investigators went on to use this system to screen for anti-latency agents and demonstrated that incubation with IL-2 + IL-7 can induce homeostatic proliferation and reactivation in about 10% of the latently infected cells. However, full reactivation of all latently infected cells in the cultures required antigenic stimulation. While this study describes a system that can be very useful for the screening of anti-latency agents, it highlights the concern that agents inducing homeostatic proliferation may have the potential to expand integrated proviruses through gene duplication at mitosis. Nonetheless, this would not affect the majority of latent genomes that fail to become reactivated during homeostatic proliferation, and such an approach would therefore not achieve the goal of eradicating latent provirus. An important point to consider is whether homeostatic proliferation favors the duplication of archival and defective provirus rather than latent provirus. It is unclear whether latent provirus duplication by mitosis during homeostatic proliferation leads to expression of the provirus and subsequent clearance by cytopathic effects or by immune clearance. Defective provirus would be less likely to be cleared by these mechanisms. Therefore, it would be important to determine to what extent homeostatic proliferation allows duplication of latent provirus without subsequent clearance.

The development of strategies to eliminate HIV-1 reservoirs that persist in the face of ART will require a complete understanding of the nature of viral persistence and latency and how these processes are regulated. While latency is considered the single biggest obstacle to viral eradication, how latency is established and regulated is still not well understood. Most of the studies conducted to date on viral latency have focused on models that employ established cell lines. They have demonstrated various forms of latency regulation and the role of host cell cycle, epigenetic effects and other host cell factors that can regulate viral latency.
However, it is unclear as to the extent to which these cell line models of viral latency reflect the true nature of latency as it exists in memory CD4 T cells . For this et al. describeGarcia et al. [ex vivo to produce virus. The frequency of latently infected, resting cells was determined to be in the reaching of 9.9 per million CD4+ T cells, which is in the range observed for patients on suppressive therapy. The availability of an in vivo model of viral latency extends the tools available for the analysis of anti-latency drugs.Continuing on with the theme of primary models of viral latency, a et al. describeWhile the majority of research has focused on the role of lymphoid reservoirs in viral persistence and latency, several presentations focused on the role of myeloid cells in viral persistence. A large body of research has demonstrated the important association of animal lentiviruses with myeloid-lineage cells and, in particular, macrophages. However, very little attention has focused on the role of myeloid-lineage cells in the biology of primate lentiviruses. Early studies demonstrated infection of macrophage in the tissues such as the lung and central nervous system (CNS) but, beyond that, there is no clear understanding of whether macrophage support viral persistence in patients on antiretroviral therapy or whether latency is also manifest in macrophage or other myeloid-lineage cells. Arguably, the strongest piece of experimental evidence supporting an essential role for macrophage in primate lentivirus biology is the demonstration that myeloid-lineage cells harbor an antiviral restriction that is not exhibited by T cells and further, that the virus has evolved a strategy to circumvent this restriction.Clements et al. [Russell et al. [s et al. discussel et al. who examLewin et al. [+ T-cells incubated with CCL19 and CCL21 receptor chemokines, CXCL 10 and CCR 6 rendered them permissive to HIV-1 infection and provirus establishment. Incubation with CCL 19 prior to infection activated RhoA signaling and alteration of the actin network that was sufficient for establishment of the integrated provirus. Furthermore, integration of HIV-1 in resting cells was also dependent upon PI3K pathway activation and inhibitors of the PI3K pathway did not affect nuclear localization of viral cDNA but prevented cDNA integration. Lewin et al. went on further to describe how dendritic cells may play a role in the infection of resting memory CD4+ cells. Monocyte-derived dendritic cells (myeloid DCs) promoted latent infection of resting memory CD4+ cells when in co-culture but less efficiently when cells were incubated separately. This suggests the presentence of soluble factors that may play a role in the promotion of CD4 latency by myeloid DCs.In addition to the viral mechanisms of HIV-1 persistence, it is well established that the immune system plays a critical role in the establishment and persistence of a viral reservoir during therapy . Interesn et al. presenteLieberman et al. [+ T cells, monocytes and macrophages, she demonstrated that the knockdown of CCR5, HIV Vif and gag or Trex1 inhibit HIV transmission in tissue explants and humanized mice.n et al. outlinedLaguette et al. [e et al. discussee et al. ,12. SAMHManel et al. [l et al. gave furO'Doherty et al. [in vitro assay. Her data suggested that latently infected cells continuously express low levels of viral proteins and constitute targets for HIV-specific T cell responses. 
In an in vitro killing assay, EC displayed more effective removal of latently infected CD4+ T cells than chronic progressors.y et al. quantifiy et al. ,16. She Lichterfeld et al. [+ T cells from EC to HIV-specific CD8+ T cell mediated killing. In a cytotoxicity assay, CD4+ T cells from EC showed an increased susceptibility to CD8 mediated killing. Interestingly, a reduced susceptibility of na\u00efve CD4+ T cells to CD8+ T cells mediated killing was associated with a lower viral reservoir in EC.d et al. also preDeeks et al. [Deeks et al. summarized the recent results from two Raltegravir intensification trials indicating that this intervention did not modify the levels of T cells activation in blood and rectum but may reduce them in the terminal ileum [s et al. focused s et al. . Inflammal ileum ,21.Chomont et al. [+ T cell nadir significantly predicted levels of HIV proviral DNA. The evolution in the TCR repertoire of virally suppressed subjects was correlated with the genetic evolution of the viral reservoir, supporting a model in which the dynamic of the memory CD4+ T cell compartment drives the dynamic of the latent reservoir. In the second part of his talk, Chomont presented a novel assay aimed at monitoring HIV persistence during ART. The assay, which uses authentic CD4+ T cells from virally suppressed subjects, could also be used to identify novel compounds aimed at reactivating HIV production in latently infected cells.t et al. listed bVandergeeten et al. [+ T cells and hypothesized that they may also contribute to the persistence of latently infected cells. Both cytokines induced proliferation, activation and survival of CD4+ T cells in vitro. Similarly, both IL-7 and IL-15 enhanced viral production in productively infected CD4+ T cells isolated from HIV infected subjects. Strikingly, the two cytokines differed in their ability to induce HIV production in latently infected cells, with IL-15 being much more efficient than IL-7. These results suggested that IL-15 should be considered as a possible candidate to force viral expression of the latent reservoir to achieve HIV eradication.n et al. outlinedChirullo et al. [o et al. started o et al. showing o et al. .Kuritzkes et al. [s et al. mentioneIt will undeniably be concerted decisions to make between patients scientists, ethic committees and funding agencies.The immunological and virological benefits of ART initiation during the early steps of HIV infection have been reported for more than 10 years ,29. StarMarkowitz et al. [z et al. presenteAt 48 weeks, from 26 patients randomized in the 5-drug arm, 23 patients remained on study; and from 14 patients randomized to the 3-drug arm, 11 patients remained on study.At 96 weeks, 18 patients remained on study in the 5-drug arm, and 10 patients in the 3-drug arm. As expected the 5-drug regimen achieved < 50 copies/ml of plasma viremia faster, but at week 48 there were 3 virological failures in the 5 drug arm, and none in the 3-drug arm. No differences were found in terms of proviral DNA levels or infectious virus titers between the 2 arms. Overall, these results are quite disappointing but concern a population of patients treated after a mean duration of symptoms of acute infection of about 50 days.Ananworanich et al. [On the contrary, h et al. treated +CCR5+ T cells in sigmoid colonincreased from Fiebig I to III stages (p = 0.0014). 
After treatment initiation, a rapid and equivalent plasma HIV RNA decline was observed with both regimens.Before ART initiation, total HIV DNA in PBMC was significantly lower in Fiebig I versus Fiebig III patients (p = 0.007) and the loss of CD4+CCR5+ T cells in sigmoid colon significantly increased after antiretroviral treatment in Fiebig III and IV subjects.The percentage of CD4Total and integrated HIV DNA in PBMCs declined significantly after Mega-ART and ART. Total HIV DNA by week 24 was undetectable in 40% of patients. Integrated HIV DNA by week 24 was undetectable in 80% of patients.Total and integrated HIV DNA in sigmoid colon declined significantly after Mega-ART. Total HIV DNA by week 24 was undetectable in 35% of patients. Integrated HIV DNA by week 24 was undetectable in 78% of patients.The frequency of PBMCs harboring HIV DNA during acute HIV predicted HIV reservoir size after 24 weeks of treatment (p < 0.0001).Consequently, these studies tend to show that a smaller reservoir size is obtained when ART is initiated very early at acute infection, but no clear benefit is found by adding more than 3 drugs in the combination.Zack et al. [in vitro on reactivation of latent HIV.k et al. proposedk et al. , they prMargolis et al. [+ cells showed increased viral RNA expression in vitro following the presence of Vorinostat. From oncologic studies, it is known that the peak of Vorinostat in plasma occurs between 4 and 8 hours after a single administration. A mean increase in vivo of 4.4 fold of resting CD4+ T cell associated full-length RNA was observed after the administration of 400 mg of Vorinostat and sampling patients at around 6 hours post administration.s et al. presenteHernandez et al. [Provocative data were presented by z et al. suggestiChen et al. [in vitro in ACH-1 and U1 cells. In these models, Gnidimacrin was at least 2,000-fold more potent than Prostratin.n et al. ,38 showein vitro demonstration that ART is quite ineffective at preventing direct HIV cell-to-cell transfer [Whether or not ongoing HIV replication or propagation persists during effective ART is not a new topic, but it has recently been fuelled by the transfer .Schacker et al. [6 cells. Although plasma viremia became undetectable within 2 months in all patients, the authors demonstrated that HIV continued to infect new cells in lymphoid tissues and that the drugs rarely reached effective concentrations in these tissues [Bringing the problem at the clinical level, r et al. analyzed tissues .It will therefore be important to develop new ways of drug delivery in lymphoid tissues, in particular for strategies purging HIV reservoirs, in order to protect uninfected cells in every compartment.Deeks et al. [+ cells expressing PD-1 contain more proviral DNA than those who do not. Immune therapies based on the blockade of PD-1 interaction with its ligands are supposed to increase HIV production and enhance anti-HIV specific immune responses. The first clinical trial using anti PD-1 antibodies (ACTG 5301) is under approval. It is a single-arm pilot study to evaluate the safety, pharmacokinetic profile, and effects of a single dose of anti-PD-1 antibody in 40 chronically HIV-infected patients receiving effective ART for more than 36 months.s et al. argued tVandergeeten et al. [in vitro for reactivating the reservoir while inducing much less cell proliferation. 
Consequently, IL-15 is a possible candidate to achieve HIV eradication, although IL-7 increases the number of CD4+ T cells harboring HIV integrated DNA.As mentioned previously, n et al. presenteJune et al. [3 and undetectable viremia received a single infusion of 1 \u00d7 1010 modified cells. Four weeks after the infusion, a structured treatment interruption (STI) was planned for a maximum of 12 weeks. Although plasma viremia rebounded in all cases within 4 weeks following the STI, it began to decrease in 5 cases before ART was resumed. In particular, \"patient 205\" who was heterozygote for the CCR5 delta 32 mutation before ZFN therapy, reached undetectable viral load by day 112. Consequently, ZFN modified cells distributed and trafficked normally compared to endogenous CD4+ cells and showed antiviral activity during STI.Trying to mimic the \"Berlin patient\", e et al. updated Elimination of CCR5 expression is not expected to directly impact the size of the latent reservoir. If the stability of the latent reservoir is extended by reservoir replenishment, limiting target cell availability would, in turn, reduce potential sources of the virus that drives the replenishment and, in this scenario, an accelerated decay of the latent reservoir would be expected. However if the intrinsic stability of the latent reservoir is indeed measured in decades and there is no replenishment, then introduction of CCR5-negative cells would not alter the decay of the latent reservoir. This should still produce a functional cure since any virus released from reactivated latent cells would be unable to establish new infections. Either scenario would be a significant step forward since patients transduced with CCR5-negative stem cell would be predicted to control the virus in the absence of ART.Jerome et al. [e et al. have beeth International Workshop on HIV Persistence led to the presentation and discussion of exciting new data from research groups working towards an HIV cure. Over the years, the control of HIV reservoirs is progressively moving from bench to bedside. The next edition of the workshop will be held in Miami, Fl, 3-6 December, 2013.As anticipated, the 5The authors declare that they have no competing interests. The findings and conclusions in this report are those of the authors and do not necessarily represent the views of their institutions.All the authors contributed equally to the manuscre therapeutic session. All authors read and approved the final manuscript."} {"text": "In the special issue \u201cSignaling Molecules and Signal Transduction in Cells\u201d authors were invited to submit papers regarding important and novel aspects of extra- and intracellular signaling which have implications on physiological and pathophysiological processes. These aspects included compounds which are involved in these processes, elucidation of signaling pathways, as well as novel techniques for the analysis of signaling pathways. In response, various novel and important topics are elucidated in this special issue. Several of the manuscripts presented discuss compounds which might be involved in cellular apoptosis and thereby influence cancer or embryonic development.Ginsenoside Rh2 (G-Rh2), derived from the plant Ginseng, acts anti-proliferative and pro-apoptotic. Its intracellular effects through apoptotic pathways were analyzed by Guo et al. Rapamycin is an inhibitor of mTOR and thereby acts antiproliferative on some tumors [et al. 
studied the anti-tumor effect of rapamycin inducing apoptosis and autophagy on pancreatic cancer cells [et al. proffer that JRS-15, a derivative of xylocydine which was a novel cyclin-dependent kinase inhibitor, induced mitochondrial apoptosis in several cancer cell lines [et al. reviewed the effect of various bioactive compounds from marine organisms including sponges, actinomycetes and soft corals on the diverse apoptotic pathways of cancer cells [The compound e tumors . Dai et ll lines . Kalimuter cells .ochratoxin A (OTA) is nephrotoxic, hepatotoxic and immunotoxic. The cytotoxic effects of OTA on mouse embryonic development inducing reactive oxygen species and mitochondrial apoptosis were studied by Hsuuw et al. [et al. reviewed the role of reggie/flotillin proteins for Wnt secretion and gradient formation and its effect on development [et al. reported the osteogenic potential of diverse signaling pathways including Wnt, BMP, FGF and TGF\u03b2 [et al. analyzed the toxic effect of cesium in plants. High concentrations of cesium inhibited plant growth inducing the jasmonate pathway and thereby probably modified potassium uptake machineries [unfolded protein response (UPR) and the steroid response element (SREBP) was studied by Bedoya-Perez et al. in the mosquito Aedes aegypti using the Cry11Aa toxin [UPR signaling pathways in mammalian and their implications were reviewed by Carrara et al. [The common mycotoxin w et al. . Wnt morelopment . Furtherand TGF\u03b2 . Adams ehineries . The difAa toxin . Furthera et al. .et al. described prostaglandin E2 GPCR signaling in dendritic cells in respect to the cellular life cycle [Resolvin (resolution-phase interaction products) is a member of a novel family of aspirin-triggered short-lived autacoids synthesized during inflammation. Keinan et al. presented resolvin signaling pathways which could be used in oral health treatment [G-Protein coupled receptors (GPCR) represent the most abundant class of mammalian membrane-bound receptors and are valuable pharmacological targets. The review by De Kejzer fe cycle . Resolvireatment .EGFR to the intracellular dynein IC2 was described by Pullikuth et al. [spatial regulation of EGFR signaling including endocytosis were elucidated by Ceresa et al. [et al. [EGFR Y845 phosphorylation probably interacted with Mucin-1 and cleaved Galectin-3 which could serve as a diagnostic tool for differentiation of benign and malign tumors. Regulation of endocytosis and cell signaling is an emerging role of intersectins which were summarized by Hunter et al. [Growth factors are important mediators of developmental processes. Mutations in the tyrosine kinase growth factor receptors are known to induce severe diseases, including the susceptibility to cancer. Therefore, the regulation of growth factor receptor signaling is essential for the understanding of physiology and pathophysiology of these proteins. In this regard, the link of h et al. . Furthera et al. . The mec [et al. . They for et al. . There wet al. explained the role of fibroblast growth factor (FGF) and the FGF receptors Heartless(Htl) and Breathless (Btl) for development and differentiation in Drosophila [et al. described that interference of peptide apatamers with growth factors e.g., TGF\u03b2 or EGFR could be suitable for the analysis of their signaling pathways in high throughput screening studies [Formyl peptide receptor 2 agonists, their distinct signaling pathways and their involvement in immunological responses and cancer were reviewed by Cattaneo et al. 
[Muha osophila . Conidi studies . Formyl o et al. .et al. discussed the effect of EPO, its derivatives and the serine/threonine kinase receptor EPO-R in endothelial cells, regarding desensitization/resensitization/expression using an in vitro model [et al. discussed the role of cytokines in healthy and inflammatory skin diseases [Erythropoietin (EPO) induces erythropoiesis and is used as a pharmacological drug, e.g., as biosimilars for long-term treatment of anemia. However, EPO also acts on other types of cells, e.g., endothelial mediating proliferative and angiogenic effects and might be important for the therapeutic outcome. Trincavelli ro model . H\u00e4nel ediseases .retinoid nuclear orphan receptor ROR\u03b1 were reviewed by Du et al. implicating its role as tumor suppressor [Functions of the ppressor .the secretory pathway calcium (Ca2+)-ATPase pump (SPCA1). Micaroni et al. hypothesized that the gene ASTE1 influences ATP2C1 gene expression. ASTE1 dysregulation might induce cell death and tumor transformation [Mutations in the gene ATP2C cause the Hailey-Hailey skin disease in humans. ATP2C1 encodes ormation .Nitric oxide is an important signaling molecule which exerts pleiotropic functions. Its regulatory function in skeletal muscle during exercise was summarized by Suhr et al. [in vivo is an emerging field which was presented in comparison to cAMP by Sprenger et al. in a featured review paper [r et al. . Solubleew paper .MAP kinase scaffold was reviewed by Meister et al. [et al. presented a study regarding the gene En-MAPK1 which was activated during pheromone signaling of the polar ciliate Euplotes nobili [et al. described the regulation of T-Cell activation and function by diacylglycerol kinases [Kinase cascades are essential for the intracellular signal transduction. In this regard, the r et al. coordinas nobili . Further kinases .et al. set its focus on the atypical phosphatases eyes absent (EYA) which acted as dual Thr/Tyr-phosphatase and members of the phosphoglycerate mutase (PGAM) family which exerted His-based Tyr-phosphatase activity [Phosphorylation is controlled by protein phosphatases. Recently, atypical protein phosphatases were discovered which were structurally different from the known families of Ser/Thr- and Tyr-phosphatases. Sadatomi activity .ubiquitination of Notch and its signaling intracellular function was reviewed by Moretti et al. Small GTP-binding proteins are important regulators of intracellular signaling [RhoA in the intestinal epithelial barrier was summarized by Tong et al. [SUMOylation, e.g., on ATF3. Its role inhibiting prostate cancer cells was presented by Wang et al. [The ubiquitination of proteins is a proteasomal degradation motif. However, ubiquitination is also used as an intracellular receptor signaling motif. In this regard, the ignaling . As an eg et al. . A furthg et al. .exosomes for (patho)physiological processes is a topic which was reviewed in detail by Corrado et al. [et al. [The controlled release of compounds from cells is important for intercellular signaling and communication. Beyond various exocytosis mechanisms, the analysis and implication of o et al. . The com [et al. .In summary, several important and novel aspects of intracellular and intercellular signaling in health and disease were highlighted in this special issue. However, signal transduction in and from cells is a huge field which can only partly be touched on in one special issue. 
Therefore, a further special issue covering more aspects of this engrossing field will follow in 2014.

Computerized cognitive bias modification for social anxiety disorder has, in several well-conducted trials, shown great promise, with as many as 72% no longer fulfilling diagnostic criteria after a 4-week training program. To test if the same program can be transferred from a clinical setting to an internet-delivered, home-based treatment, the authors conducted a randomized, double-blind placebo-controlled trial. After a diagnostic interview, 79 participants were randomized to one of two attention training programs using a probe detection task. In the active condition the participant was trained to direct attention away from threat, whereas in the placebo condition the probe appeared with equal frequency in the position of the threatening and neutral faces. Results were analyzed on an intention-to-treat basis, including all randomized participants. Immediate and 4-month follow-up results revealed a significant time effect on all measured dimensions. However, there were no time × group interactions. The lack of differences between the two groups was also mirrored by the infinitesimal between-group effect size both at post-test and at 4-month follow-up. We conclude that computerized attention bias modification may need to be altered before dissemination for the Internet. Trial registration: ISRCTN01715124.

A recent development in the treatment of anxiety disorders is attention bias modification , which dMoreover, there is evidence that successful treatment for social phobia may lead to a normalization of attention bias for threat . This fiIn a meta-analysis on the effects of attention bias modification by Hakamata and coworkers , 10 rande.g., [via the internet that could open the door for millions of people in need. Another study that also found positive outcomes was carried out by Schmidt and coworkers [Attention bias modification is a potentially effective computerized treatment that most often has been delivered in a laboratory setting with subclinical samples e.g., . The trae.g., . The tree.g., . Up untie.g., . Both cooworkers . At termoworkers have invvia computer we decided to investigate if this novel treatment could be delivered via the Internet with no physical contact with the study participants. This has been done once in a recent trial by Boettcher and co-workers [via the Internet was superior to a placebo condition (random attention training) in a group of participants with diagnosed social anxiety disorder recruited from the community. We hypothesized that the treatment would be better than the control condition and that it would be possible to present attention bias modification via the Internet. Another recent development in the treatment of social anxiety disorder is the possibility to deliver CBT over the Internet . A large-workers . In facte.g. psychosis, substance misuse) that could be expected to influence the outcome of the study; j) having a primary diagnosis of social anxiety disorder according to the Structured Clinical Interview for DSM–IV Axis I Disorders . The laAs evident from the CONSORT flowchart Figure , of the via the Internet [The following social anxiety scales constituted the outcome measures in the study: the Liebowitz Social Anxiety Scale self-report version , theInternet ,41).et al.[via the Internet. Participants were either assigned to the real attention modification program or to a placebo version.
Everything was identical in both conditions except for the location of the probe. Hence, in both conditions a trial began with a fixation cross (“+”) presented in the center of the screen for 500 ms. Immediately following termination of the fixation cue, the web-based Flash program, running in full-screen mode, presented two faces of the same person, one face on the top and one on the bottom, with each pair displaying one of two combinations of emotions: either neutral-disgust or neutral-neutral. After presentation of the faces for 500 ms, a probe appeared in the location of one of the two faces. Participants were instructed to indicate whether the probe was the letter E or F by pressing the corresponding arrow on the keyboard using their dominant hand. The probe remained on the screen until a response was given, after which the next trial began. During each session, 160 trials were presented with various combinations of probe type (E/F), probe position (top/bottom), face type and person. There were a total of 8 persons showing 2 different facial expressions: 4 male and 4 female, each showing disgust or neutral. In the real condition the probe was always presented at the location of the neutral face if there was also a disgust face present. In contrast, in the placebo condition the location of the probe could not be predicted, since the probe appeared with equal frequency in the position of the disgust face and the neutral face. The remaining 32 trials were neutral-neutral, with the probe randomly presented at the top/bottom. For a more detailed description of the two conditions, see Amir et al.

Participants were encouraged to do the training on Tuesdays and Thursdays. They received an email and an SMS reminding them to do the training on the training days. If a session was missed, a reminder was sent the following day. The participants could only do the training between 5 AM and 11 PM, and there always had to be at least one day between sessions. The participants were divided into two groups (treatment or control) by an online true random-number service independent of the investigators. The study protocol was approved by the regional ethics committee, and written informed consent was obtained from all participants by surface mail.

All self-report scales were administered before the start of the treatment. During the treatment, the LSAS-SR was administered once a week (Sundays). Following the four weeks of training, the LSAS-SR, SPS, SIAS, SPSQ, MADRS-S and QOLI were readministered. Immediately following the training phase, a clinical global impression of improvement (CGI-I) was mapped on a 7-point scale (CGI; ). In accordance with the intention-to-treat principle, all participants were asked to complete post-treatment and follow-up assessments, regardless of how many training sessions they had completed. Independent t-tests and chi-square tests were used to check if randomization had resulted in a balanced distribution across both conditions. As evident from Figure d = 0.15; placebo d = 0.19).

Most participants (74 of 79) completed all 8 training sessions, for a mean of 7.8 sessions. As shown in the Tables, the CGI-I ratings for the treatment vs. placebo group were: very much improved (5.0% vs. 2.6%), much improved (2.5% vs. 20.5%), small improvement (32.5% vs. 35.9%), unchanged (57.5% vs. 38.5%) and small deterioration (2.5% vs. 2.6%) (χ²(4) = 7.5; p = 0.112).
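For concreteness, the probe-detection training procedure described above, and the reaction-time bias indices conventionally derived from such tasks, can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the study itself ran as a web-based Flash program, and the exact bias-scoring procedure is truncated in the text, so the indices below follow standard dot-probe conventions rather than the authors' own code.

```python
import random
from statistics import mean

def build_session(condition, n_trials=160, n_neutral_neutral=32):
    """Generate one 160-trial training session.

    condition: "active"  -> probe always replaces the neutral face of a
                            neutral-disgust pair (trains attention away from threat)
               "placebo" -> probe replaces either face with equal frequency
    """
    trials = []
    for i in range(n_trials):
        both_neutral = i < n_neutral_neutral            # 32 neutral-neutral trials
        probe_letter = random.choice(["E", "F"])
        if both_neutral:
            disgust_pos = None
            probe_pos = random.choice(["top", "bottom"])
        else:
            disgust_pos = random.choice(["top", "bottom"])
            neutral_pos = "bottom" if disgust_pos == "top" else "top"
            probe_pos = (neutral_pos if condition == "active"
                         else random.choice(["top", "bottom"]))
        trials.append({"fixation_ms": 500, "faces_ms": 500,
                       "disgust_pos": disgust_pos,
                       "probe_pos": probe_pos,
                       "probe_letter": probe_letter})
    random.shuffle(trials)
    return trials

def bias_indices(completed):
    """Conventional dot-probe indices from correct trials (fields: rt_ms,
    disgust_pos, probe_pos). Intended for assessment blocks in which the
    probe can replace either face; illustrative only."""
    cong = [t["rt_ms"] for t in completed
            if t["disgust_pos"] and t["probe_pos"] == t["disgust_pos"]]
    incong = [t["rt_ms"] for t in completed
              if t["disgust_pos"] and t["probe_pos"] != t["disgust_pos"]]
    neutral = [t["rt_ms"] for t in completed if t["disgust_pos"] is None]
    return {"vigilance": mean(incong) - mean(cong),        # > 0: attention drawn to threat
            "disengagement": mean(incong) - mean(neutral),  # slowed disengagement from threat
            "orienting": mean(neutral) - mean(cong)}        # faster orienting towards threat
```

Applied to assessment trials, a positive vigilance score indicates attention drawn toward the disgust face, which is what the active training is intended to reduce.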
In addition to the self-report measures and the blind CGI-I rating, a SCID interview was conducted, with the result that three (7.5%) in the treatment group and two (5.1%) in the placebo group no longer met DSM-IV criteria for social phobia. The CGI-I rating at post-treatment (n = 79) therefore showed non-significant results for the treatment group. Of the 79 participants who commenced treatment, 74 (93.7%) completed all 8 training sessions as scheduled. However, five (6.3%) dropped out after only finishing M = 4.8 (SD = 1.0) sessions. An analysis of the proportion of correct trials revealed no difference between the groups. The overall average was M = 98.7% correct indications of the letters presented. In addition, there was no difference in the average response time per trial, M = 733 ms (SD = 290 ms). To get a measure of attention bias we followed the suggested procedure from the previous Amir study . That in

To assess whether participants remained blind to their respective experimental condition, we asked the participants at post-treatment whether they thought they had received the active or the placebo intervention. Of the participants who provided responses (n = 76), 18.9% (7/37) in the treatment group and 28.2% (11/39) in the placebo condition thought they had been randomized to the treatment program ( = 0.9, p = .42).

et al.[et al.[et al.[et al.[i.e., internet based flash program) vs. personal computer delivery was a difference between the current study and previous research. This was a double-blind randomized controlled trial with the aim to transfer the promising results of the Amir et al. and Schml.[et al. studies l.[et al.. Specifil.[et al., while Bl.[et al.. The prol.[et al.,18). Whel.[et al. is explal.[et al. simply eAn explanation of the null findings could be that this study also included participants with non-generalized social phobia. However, the mean scores on LSAS-SR were almost identical . In addition, when running the analysis with the NGSP and GSP groups separated, no differences in the treatment effect emerged. et al.[et al.[cf.)[cf.)[i.e., trial with two neutral faces. Using this new measure of bias, Koster et al. found that individuals with anxiety have difficulty in disengaging their attention from highly threatening pictures. This measure of bias has been used by other investigators to assess the specific components of attentional bias in anxiety [et al.[et al.[When the participants who received the real treatment were asked to predict their condition, the absolute majority (81%) thought they had been randomized to the placebo group. One would think that such a low level of positive expectation would influence the trial. However, that cannot explain the lack of effect, since the Amir et al. and Schml.[et al. papers r al.[cf.). Howevercf.)[cf.). Althougcf.)[cf.),51. The cf.)[cf.) proposed anxiety -55. The % thoughty [et al. or the Sl.[et al. studies.l.[et al.. It shoul.[et al.. Perhaps

We conclude that attention bias modification may need to be further investigated before dissemination for the Internet. Seven of the eight authors declare that there is no conflict of interest. However, Dr Amir has founded a company that markets online anxiety relief products. All authors contributed to the design of this study. PC drafted the manuscript. All authors contributed to the further writing of the manuscript.
All authors read and approved the final manuscript."} {"text": "Laser-induced breakdown spectroscopy (LIBS) is typically performed at ambient Earth atmospheric conditions. However, interest in LIBS in other atmospheric conditions has increased in recent years, especially for use in space exploration or to improve resolution for isotopic signatures. This review focuses on what has been reported about the performance of LIBS in reduced pressure environments as well as in various gases other than air. However, interest in LIBS under other atmospheric conditions has been a growing area of study both for fundamental knowledge and challenging applications.Laser-induced breakdown spectroscopy (LIBS) is a popular technique because of its speed, simplicity, and usually inexpensive hardware. Additionally, LIBS requires little or no sample preparation and can provide simultaneous multi-element analysis. Thus, it is not surprising that LIBS has been used for a wide variety of applications, such as material analysis , environ2, has been explored primarily for geological characterization [LIBS in pressures and atmospheric compositions other than Earth ambient has gained interest as LIBS has been promoted for space exploration applications \u201316. Chemrization \u201326. Currrization ,28, whicrization . Determirization \u201333. Besirization .2, Ar, and CO2, which may also report results incorporating pressure dependent experiments. Studies examining the effect of pressure conditions on LIBS of ambient gas have been investigated [While there have been several studies of LIBS under non-Earth ambient conditions, none of the studies currently available are comprehensive. Therefore, this review focuses on compiling an understanding of LIBS phenomena that have been gained through the various pressure dependence and atmospheric composition studies. The pressure studies have been divided into two regimes: >760 Torr and <760 Torr. The gas composition studies include comparisons of air, He, Nstigated , however2.2.1.Performing LIBS on a surface at reduced pressures (pressures below atmosphere) can result in enhanced spectra and improved ablation. Specifically, these enhancements are an increase in spectral intensity, spectra signal-to-noise (S/N), spectra resolution, increased ablation, and more uniformed ablation craters. These enhancements are generally seen when using both femtosecond and nanosecond lasers; however, the explanation for these enhancements vary slightly for the two laser\u2019s pulsewidths.\u22125 Torr. Though the intensity of the LIBS spectrum taken at vacuum is less intense than the LIBS spectrum taken at atmospheric condition, it is clear that the LIBS spectrum at vacuum is of higher resolution [et al. [In demonstrating the efficacy of a fiber optic feed-through design for integration of LIBS in a vacuum environment, Cowpe and Pilkington producedsolution ,38. This [et al. .et al. performed LIBS at varying reduced pressures on a hematite sample [et al. comparing LIBS at lunar simulated condition of 5 \u00d7 10\u22125 Pa (\u223c10\u22127 Torr) and atmospheric conditions [et al. also observe a significant decrease in spectral intensity of ionic species between 7 and 5 Torr, suggesting a rapid decrease in electron density [et al. analyzed the LIBS spectra using a non-gated spectrometer. It is well known that a gated spectrometer yields higher quality LIBS spectra if the time is optimized; however, Dreyer et al. 
chose a non-gated spectrometer to remove some bias introduced in gated schemes when timing is optimized for one condition and is then carried through for all other conditions, which is a conundrum that other researchers have observed and dealt with in various ways.Using a Nd:YLF laser with a 10 ns pulse duration, Dreyer e sample . Figure nditions . Dreyer density . Dreyer et al. [\u22122, Fe showed a 2.2-fold increase in ablation rate in 0.75 Torr Ar compared to 750 Torr Ar atmosphere. A similar result was seen on ablation of Zn. With a laser fluence of approximately 3 J/cm\u22122, the Zn ablation rate was 3.7-fold greater in 0.75 Torr Ar compared to 750 Torr Ar atmosphere. It was also noted by Vadillo et al. that the crater rims were free of deposited material after ablation at 0.75 Torr, while ablation at 750 Torr left craters with a visible ring of deposited material [et al., the 498.17 nm emission line from Ti(I) was monitored during laser ablation studies [et al. [et al. noted reduced LIBS intensity after 10 to 20 shots at the same location [In addition to having an effect on the emission intensity and resolution, reduced pressures have also been shown to effect ablation from LIBS significantly. For example, in work performed by Vadillo et al. , the ablmaterial . In othe studies . It was [et al. made the [et al. \u201344. Dreylocation .et al. reported an S/N enhancement at 4 Torr compared to atmospheric condition of 240-, 840-, and 629-fold for 0, 85, and 200 ns delays, respectively. It was also observed that a Mg(I) line at 383.8 nm, which was nearly unresolved at atmospheric condition, is easily resolved at 4 Torr. As pressure is reduced further, much of the enhancement seen at 4 Torr is lost, which can be explained by et al. examined 2D plasma images of a Cu plasma at 760, 1.79, 0.85, and 0.167 Torr. et al. explains that this expansion causes a decrease in collision excitation, resulting in a dimmer plasma. The enhancement seen at 4 Torr compared to atmosphere is likely a result of reduced cooling of the plasma at low pressures. Yalcin and co-workers showed that the lifetime of a LIBS plasma created at 4 Torr is much greater than that of a LIBS plasma created at atmospheric condition. Because of this increased lifetime, the LIBS plasma emission is stronger for a longer period of time, which results in enhancement. There was no significant difference in ablation at 4 Torr and atmospheric pressure, which may be unique to LIBS plasma generated with femtosecond lasers [Yalcin and co-workers investigd lasers .Whether a femtosecond or nanoscecond laser is employed in surface LIBS experiments, there are some advantages in lowering the surrounding pressure. These advantages are higher spectral resolution, greater S/N, and increased spectral intensity. For ablation, it appears that only lasers with nanosecond pulse lengths can take advantage of lower pressures. The majority of photons from femtosecond lasers reach the surface before the laser plasma develops, while with the nanosecond pulse lasers, a significant portion of the photons are able to interact with the expanding plume. At reduced pressures, a plasma generated with a nanosecond pulse is expanding in a less dense atmosphere, which results in a less dense shock wave. The reduced density in the shock wave results in reduced plasma shielding; thus, allowing more photons to reach the sample. Increasing the number of photon interacting with surface results in increased sample ablation, which can also lead to a more intense spectrum. 
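As a brief aside on the signal-to-noise figures quoted above (for example, the 240- to 840-fold enhancements reported at 4 Torr): S/N conventions vary between LIBS papers, and the definition below, the background-corrected peak height divided by the standard deviation of a nearby line-free spectral window, is only one common choice, assumed here for illustration rather than taken from the cited studies.

```python
import numpy as np

def line_snr(wl_nm, counts, line_nm, half_width_nm=0.2, bg_offset_nm=(1.0, 3.0)):
    """Rough S/N for one emission line: (peak - background mean) / background sigma."""
    wl = np.asarray(wl_nm, dtype=float)
    y = np.asarray(counts, dtype=float)
    on_line = np.abs(wl - line_nm) <= half_width_nm
    off_line = ((np.abs(wl - line_nm) >= bg_offset_nm[0]) &
                (np.abs(wl - line_nm) <= bg_offset_nm[1]))
    peak = y[on_line].max()
    return (peak - y[off_line].mean()) / y[off_line].std()

# The "x-fold enhancement" quoted in pressure comparisons is then simply the
# ratio of this quantity for the reduced-pressure spectrum to that for 1 atm.
```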
Because there is little plasma shielding during LIBS using a femtosecond laser, another explanation must be considered for the spectral enhancement observed at reduced pressures. During LIBS plasma expansion, energy is lost to the surrounding atmosphere. This loss of energy reduces the lifetime of the laser plasma. Therefore, reducing the pressure increases the lifetime of the plasma, allowing for more light from the laser plasma to be collected. If pressures are too low (<\u223c7 Torr), there is a steep loss in LIBS spectral intensity. This loss in intensity is likely due to a disordered plasma, which results from the lack of sufficient atmosphere to provide adequate confinement.2.2.et al. investigated the effect of background high-pressures on LIBS of Basalt rock to examine the potential use of LIBS for future Venus missions [5 Torr, respectively). et al. [LIBS at high pressure has been primarily investigated for applications. For example, Arp missions . Figure . et al. noted thet al. studied the influence of increasing atmospheric pressure (1 to 80 atm) of N2 and He on LIBS spectra of carbon [2 atmosphere at 1 atm. Vors et al. found that LIBS spectral intensity decreases with increasing pressures when either He or N2 are used. Interestingly, LIBS spectral intensity decreased more rapidly with increasing He atmospheric pressure than with increasing N2 pressure, which can be seen when comparing et al. argues that this more rapid decrease in LIBS spectral intensity is likely due to the greater thermal conductivity of He compared to N2. The higher the thermal conductivity, the more rapidly a LIBS plasma will cool, which leads to a shorter plasma lifetime.Vors f carbon . The higure 1 to 0 atm of et al. [In a study examining Al(II) at 281.6 nm and Al(I) at 308.2 nm lines from LIBS, Owens and Majidi observedet al. where th3.versus atmospheric composition can be seen in et al. are likely a result of a combination of lower plasma temperatures and lower electron densities. Iida also studied laser vaporization rates in He and Ar. Sensitive sample mass measurements, made during LIBS under atmospheres ranging from just under 10 to 1,000 Torr, showed a significant decrease in vaporization rates for Ar with increasing pressure. Helium experiments showed only a modest decrease in vaporization rates from 10 to 1,000 Torr. The above observations, explained by Iida, are primarily related to plasma shielding and how different gases effect plasma shielding. Laser plasmas developing rapidly with a sufficient amount of electrons can absorb a significant portion of the laser pulse through inverse Bremsstahlung. Ar is more easily ionized than He and the breakdown threshold in gas for He at 1 atm is \u223c3 times greater than Ar and \u223c5 times greater at 100 Torr [Iida conducte100 Torr . Iida noet al. compared the effect that 1 atmosphere of Ar, He, and air had on plasma temperature and electron densities during LIBS of steel [et al. [et al. [Aguilera of steel . It was [et al. and Lee [et al. , the reaet al. [et al. argue that plasma shielding is likely the dominant mechanism for the improvement seen in ablation using an atmosphere of He compared to Ar. The ionization potential for He and Ar are 24.4 and 15.8 eV, respectively. 
Under an Ar environment, it is likely that plasma shielding could be a factor because the ionization potential is lower and ionization cross section greater than the ionization potential and ionization cross section for He.The composition of the background pressure has been shown to greatly influence the rate of ablation. In a study by Mao et al. involvinet al. . Ablatioet al. [versus measurement locations above the sample surface. In et al. argues that increased electron density results in an increase in line broadening observed in Ar. et al. also made similar measurements at 100 Torr where Ar also exhibited self-absorption. Lee et al. explains the results at 100 Torr by suggesting that the atmosphere acts to dampen free expansion of propelled atoms from the sample surface. Lee et al. also notes that, due to the greater thermal conductivity, He will cool a plasma more quickly through collisions than Ar. Though not mentioned by Lee et al., it may be possible that LIBS surface plasmas generated in He atmosphere or in low pressure Ar atmosphere are not as optically thin as plasmas generated at 1 atm. Compared to Ar and atmospheric conditions, an expanding LIBS plasma is less likely to breakdown the surrounding low pressure or He atmosphere. This reduced breakdown may result in a less confined plasma, and thus a larger plasma than a plasma generated in 1 atm Ar.Lee et al. studied 2 environment [et al. [2. From 2 atmosphere, LIBS spectra exhibit improved resolution and an increase S/N for the stronger lines in a 7 Torr CO2 atmosphere. For example, when using a gated delay spectral lines from Al (396.15 nm) and Ca (393.37 nm) both have an improvement of around 10-fold in S/N and Si (390.56 nm) has an improvement of nearly 40-fold in S/N in a 7 Torr CO2 atmosphere. The intensity is weaker in 7 Torr CO2 and some of the weaker lines seen in the atmospheric conditions are undetectable in a 7 Torr CO2 atmosphere. Some of these undetectable lines are Ti (398.97 nm) and Mn (403.31 nm). Also, the S/N for Fe (404.59 nm) at 7 Torr CO2 was about half of the S/N at atmospheric conditions. Salle et al. measured the full width at half maximum of several lines between 390 to 410 nm and the majority of the lines in this region either had no or only a modest improvement in resolutions. The scope of this study was to compare calibration curves from soil and clays at atmospheric conditions (585 Torr), Mars conditions (7 Torr CO2) and Moon/asteroid conditions (50 mTorr air). It was found that the best repeatability and linear regression was achieved at 7 Torr CO2.The potential use of LIBS on Mars has commenced many studies involving LIBS in primarily a COironment ,18\u201324. I [et al. compareset al. [2 resulted in an improvement when compared with atmospheric conditions. Brennetot et al. also studied the effect of LIBS at varying CO2 pressures; at very low pressures (<3 Torr) the plasma plume was highly expanded and difficult to fully image on to fiber optic. At pressures between 3 and 15 Torr, the lifetime of Cu (515.324 nm) decreased gradually with increasing pressures. This reduction in lifetime with increasing pressure is likely do to the increase in collisions at higher pressures, resulting in more rapid cooling of the plasma. 
There was no mention of whether the increase in collisions at the higher pressures resulted in line broadening; however, spectra at atmospheric conditions did exhibit significant self-adsorption compared to spectra in 7 Torr CO2.In another study involving Mars atmosphere simulation, Brennetot et al. found thet al. [2 environment. It was found that the optimum delays for LIBS at atmospheric conditions and 6 Torr CO2 were slightly different and that the intensity was always higher at atmospheric conditions. Although the maximum intensity is twice as intense at atmospheric conditions compared to 6 Torr CO2 at the respective optimum gate widths and delays, the S/N is not significantly different. This result differs from Brennetot et al. [2 compared to atmospheric conditions . This difference highlights the difficultly in comparing different LIBS experiments. Both authors note that changes in LIBS plume geometry could affect the collection of emission.Colao et al. studied t et al. , where eet al. [238U and 235U, which are only 0.025 nm apart, in an enriched U sample. Pietsch et al. noted that to achieve lines narrow enough to separate U isotopes, LIBS must be conducted at low pressures to reduce the effects Stark broadening.LIBS requires little to no sample preparation, consumes minute quantities of material, and can be used in standoff applications. These attributes make it well suited for measurements of radioactive material. Measurements of radioactive isotope shifts require high-resolution spectroscopy; thus, LIBS is not typically used. However; Pietsch et al. took advet al. [239Pu/240Pu isotope ratio of 49/51%, respectively. Here the expected isotope shift of 239Pu and 240Pu is 0.355 cm\u22121 (0.0125 nm), and is fairly well resolved in the LIBS spectrum. Smith et al. used a much higher atmospheric pressure (\u223c100 Torr) compared to the conditions Pietsch et al. used (\u223c0.02 Torr) and explained that the higher pressure should cool the plasma more quickly leading to reduced Doppler and Stark broadening. Due to the high ionization potential of He, a pressure of 100 Torr would likely not have the same adverse effects associated with other gases, such as N2 or Ar, which would breakdown more easily from the expanding plasma shock wave and shield the sample.In another study examining LIBS isotope measurements, Smith et al. used a 14.et al. [et al. [et al. [et al. [Not all of the reviewed studies corroborate well with each other, in fact. the cited work by Vadillo et al. seems toet al. ,39,45,49 [et al. noted th [et al. and Cola [et al. . Also, t [et al. \u201344.Although there is no definitive study that fully accounts for all of the phenomena that occur in LIBS as the pressure and atmospheric compositions are changed, the work of Iida does proBased on this review, it is clear that reduced pressures (<760 Torr) tend to improve LIBS spectra by increasing the S/N and improving resolution. The observed improvement is primarily due to the reduced plasma shielding, resulting in more ablation, and less Stark broadening. However, if the pressures are reduced too much (<10 Torr), then LIBS spectra tend to degrade, primarily because of lack of plasma confinement. Due to its high ionization potential, He may be useful in improving LIBS spectra when pressures cannot be reduced. 
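Two small numerical asides help make the figures in this section concrete. The helium argument rests on its combination of high ionization energy and high thermal conductivity relative to argon and nitrogen, and the quoted Pu isotope shift of 0.355 cm-1 can be converted to a wavelength shift by multiplying the square of the emission wavelength by the wavenumber shift. The gas properties below are rounded room-temperature handbook values, and the line near 594 nm assumed in the conversion is not stated in the excerpt; both are illustrative assumptions rather than values from the studies reviewed.

```python
# Approximate room-temperature properties (rounded handbook values; assumption).
GASES = {
    #       first ionization energy (eV), thermal conductivity (W m^-1 K^-1)
    "He": {"ie_eV": 24.6, "k": 0.15},
    "Ar": {"ie_eV": 15.8, "k": 0.018},
    "N2": {"ie_eV": 15.6, "k": 0.026},
}

def isotope_shift_nm(delta_wavenumber_cm1, wavelength_nm):
    """Convert a shift given in cm^-1 to nm using |d(lambda)| ~ lambda^2 * d(nu)."""
    lam_cm = wavelength_nm * 1e-7          # nm -> cm
    return lam_cm ** 2 * delta_wavenumber_cm1 * 1e7   # cm -> nm

# Assuming a Pu emission line near 594 nm reproduces the quoted figure:
print(round(isotope_shift_nm(0.355, 594.0), 4))   # -> 0.0125 nm
```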
The more the nuances of what occurs in the LIBS plasma are understood, the easier it will be to optimize LIBS for extreme environments and applications requiring high resolution and S/N."} {"text": "Papio hamadryas), geladas (Theropithecus gelada), snub-nosed monkeys (Rhinopithecus spp.), and proboscis monkeys , the different functional units of the society are clearly segregated in multilevel species (Swedell and Plummer . et al. attempt (et al. rely on (et al. rely on n et al. describeet al. Given the tremendous variety of habitats in which multilevel sociality has evolved, pinning it down to a universal environmental driver is utopian. Environmental features can constrain the evolution of modularity (Grueter and van Schaik cf. McGrew et al. (Studying the nuances of multilevel sociality in primates also has potentially important implications for hominin behavioral evolution. Chimpanzees may well be an apt referential model that helps us to characterize the social system of the last common ancestor (w et al. attempt et al. Mandrillus leucophaeus: Astaras et al. Pygathrix spp.: Rawson Trachypithecus geei: Mukherjee and Saha Although hamadryas baboons, which are famous for their multilevel system, were among the first primate species to be studied in the wild Kummer , 1995, tet al. et al. Band formation in some multilevel species, e.g., colobines, has been argued to represent an adaptation to enhanced threat from conspecifics (Grueter and van Schaik et al. et al. et al. It is obvious that our knowledge of these intriguing societies is still limited. Unfortunately, most species showing multilevel sociality are endangered, and pristine conditions allowing the complexities of multilevel systems to unfold unhampered are becoming rare. Bands of snub-nosed monkeys often occur in fragmented habitats with limited or absent connectivity with neighboring bands (Grueter"} {"text": "Compulsive drug use and a persistent vulnerability to relapse are key features of addiction. Imaging studies have suggested that these features may result from deficits in prefrontal cortical structure and function, and thereby impaired top-down inhibitory control over limbic\u2013striatal mechanisms of drug-seeking behaviour. We tested the hypothesis that selective damage to distinct subregions of the prefrontal cortex, or to the amygdala, after a short history of cocaine taking would: (i) result in compulsive cocaine seeking at a time when it would not usually be displayed; or (ii) facilitate relapse to drug seeking after abstinence. Rats with selective, bilateral excitotoxic lesions of the basolateral amygdala or anterior cingulate, prelimbic, infralimbic, orbitofrontal or anterior insular cortices were trained to self-administer cocaine under a seeking\u2013taking chained schedule. Intermittent mild footshock punishment of the cocaine-seeking response was then introduced. No prefrontal cortical lesion affected the ability of rats to withhold their seeking responses. However, rats with lesions to the basolateral amygdala increased their cocaine-seeking responses under punishment and were impaired in their acquisition of conditioned fear. Following a 7-day abstinence period, rats were re-exposed to the drug-seeking environment for assessment of relapse in the absence of punishment or cocaine. Rats with prelimbic cortex lesions showed decreased seeking responses during relapse, whereas those with anterior insular cortex lesions showed an increase. 
Combined, these results show that acute impairment of prefrontal cortical function does not result in compulsive cocaine seeking after a short history of self-administering cocaine, but further implicates subregions of the prefrontal cortex in relapse. Following surgery, rats were housed individually, and fed 20\u2003g of food each day within 1\u2003h of testing. Water was always freely available in the home cage. The experimental procedures were conducted in accordance with the United Kingdom 1986 Animals (Scientific Procedures) Act (project licence PPL 80/2234).m quinolinic acid in 0.1\u2003m phosphate buffer (pH 7.2\u20137.4) via a 30-gauge stainless steel cannula connected by fine-bore polythene tubing to a Hamilton precision microsyringe mounted in an infusion pump . The stereotaxic coordinates for each targeted area are shown in Table S1. All dorsal\u2013ventral coordinates were taken from dura, and anterior\u2013posterior coordinates were measured from bregma. Each infusion of 0.4\u20130.6\u2003\u03bcL, depending on the structure targeted (Table S1), was made over 3\u2003min, and the cannulae were left in place for a further 3\u2003min. Rats receiving sham surgery were injected identically with phosphate buffer vehicle.The rats were anaesthetized with Avertin . All rats were implanted with a catheter in the right jugular vein targeting the left vena cava. The mesh end of the catheter was sutured subcutaneously on the dorsum between the scapulae. Rats were then placed in a stereotaxic frame with the incisor bar set at 3.3\u2003mm below the interaural line to prevent infection equipped with two 4-cm-wide retractable levers that were mounted in one sidewall 12\u2003cm apart and 8\u2003cm from the grid floor. Above each lever there was a cue light , and a house light was located on the opposite wall. A dipper delivered 0.04\u2003mL of a 20% (w/v) sucrose solution to a recessed magazine (3.8\u2003cm high and 3.8 cm wide and 5.5\u2003cm from the grid floor) situated between the levers. Entry into this magazine was detected by the interruption of an infrared photo-beam. The floor of the chamber was covered with a metal grid with bars separated by 1\u2003cm and connected to a shock generator and scrambler (Med Associates), which delivered 0.5-mA footshocks. The grid was located 8\u2003cm above an empty tray. The testing chamber was placed within a sound-attenuating and light-attenuating enclosure equipped with a ventilation fan that also screened external noise. Silastic tubing shielded with a metal spring extended from each rat's intravenous catheter to a liquid swivel mounted on an arm fixed outside the operant conditioning chamber. Tygon tubing extended from the swivel to a Razel infusion pump located outside the outer enclosure. The operant conditioning chambers were controlled by software written in C++ with the Whisker control system . Responding was reinforced under a fixed ratio (FR) 1 schedule. Each lever press produced a 0.25-mg cocaine infusion (0.1\u2003mL/5\u2003s) accompanied by the withdrawal of the taking lever, the extinction of the house light, and the illumination of the stimulus light above the lever for 20\u2003s. The sessions terminated after 30 cocaine infusions or 2\u2003h, depending on which criterion was met first. 
Training of the taking response continued for five to seven sessions.Following acquisition of the taking schedule, the seeking\u2013taking chain schedule was introduced, each cycle beginning with insertion of the drug-seeking lever with the taking lever retracted; the first press on the seeking lever initiated a random interval (RI) schedule on the seeking lever. The RI parameter was progressively increased from 2 to 120\u2003s. The first lever press after the RI had elapsed terminated the first link of the chain, resulting in the retraction of the seeking lever and insertion of the taking lever to initiate the second link. One press on the taking lever was followed by the drug infusion accompanied by the same stimulus events as during the training of the taking response. There was then a time-out (TO) period, which was progressively increased across sessions from 20\u2003s to 10\u2003min after each cocaine infusion. The seeking lever was then reinserted to start the next \u2018cycle\u2019 of the schedule. Consequently, at the end of five training sessions, the rats were responding on a heterogeneous chained FR 1 TO schedule, allowing a maximum of 11 cocaine infusions.To investigate whether the incentive value of cocaine was affected by lesions of the different PFC subregions or the BLA, we assessed responding on the taking lever under a progressive ratio schedule. The ratio requirement of the taking response was increased after each reinforcer according to the following progression: 1, 3, 6, 9, 12, 17, 24, 32, 42, 56, 73, 95, 124, 161, 208, 268, 346, 445, 573, and 737 delivered under a VI schedule, which progressively increased (from 2 and 15\u2003s) to 60\u2003s on the third day. This last day was considered to be the first day of baseline. Responding for sucrose allowed for the assessment of general suppression of appetitive behaviour by, and thus the specificity of the effects of, the subsequent introduction of intermittent punishment of the cocaine-seeking response.et\u2003al., Three further sessions of training under the seeking\u2013taking chain with concurrent sucrose reinforcement established a baseline in which performance during the seeking link of the chain is monotonically related to the cocaine dose under the TO conditions of the session , a clicker was presented for 1\u2003min, which ended with the delivery of a single footshock . One minute later, the rat was returned to its home cage. After 24\u2003h, the rat was returned to the conditioning chamber for a 20-min test session, during which the clicker was alternately switched on (1\u2003min) and off (1\u2003min). Freezing was scored every 5\u2003s during the test session.m phosphate-buffered saline followed by 4% (w/v) paraformaldehyde in phosphate-buffered saline. Their brains were removed and postfixed in paraformaldehyde before being dehydrated in 20% (w/v) sucrose; they were then sectioned coronally at a thickness of 60\u2003\u03bcm on a freezing microtome, and every third section was mounted and allowed to dry. Sections were passed through a series of ethanol solutions of descending concentration , and stained for 5\u2003min with Cresyl Violet . After staining, sections were rinsed in water and 70% ethanol before being differentiated in 95% ethanol. Finally, they were dehydrated and delipidified in 100% ethanol and Histoclear before being cover-slipped and allowed to dry. 
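Returning to the behavioural procedure: the seeking-taking chain described above amounts to a small state machine, one cycle of which is sketched below. The study drove the chambers from software written in C++ with the Whisker control system, so this Python rendering, the callback names and the uniform draw used to realise the random interval are illustrative assumptions rather than the authors' implementation.

```python
import random
import time

def run_cycle(wait_for_press, deliver_infusion, ri_mean_s=120, timeout_s=600):
    """One cycle of the seeking-taking chain at the final training parameters
    (RI 120 s on the seeking lever, FR1 on the taking lever, 10-min time-out).

    wait_for_press(lever): blocks until the named lever is pressed.
    deliver_infusion():    delivers one infusion with its paired cue events.
    Both are placeholders for whatever I/O layer drives the chamber.
    """
    # Seeking link: the first press starts the random-interval clock.
    wait_for_press("seeking")
    interval = random.uniform(0, 2 * ri_mean_s)   # one way to realise an RI (assumption)
    deadline = time.monotonic() + interval
    while True:
        wait_for_press("seeking")                 # presses before the RI elapses have no effect
        if time.monotonic() >= deadline:
            break                                 # first press after the RI completes the link
    # Taking link: a single press (FR1) earns the infusion, then a time-out.
    wait_for_press("taking")
    deliver_infusion()
    time.sleep(timeout_s)

def run_session(wait_for_press, deliver_infusion, max_infusions=11):
    """A session allows a maximum of 11 cycles, as in the final training schedule."""
    for _ in range(max_infusions):
        run_cycle(wait_for_press, deliver_infusion)
```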
The sections were used to verify lesion placement and assess the extent of quinolinate-induced neuronal loss and gliosis.At the end of the experiment, rats were deeply anaesthetized with intraperitoneal Euthatal and perfused transcardially with 0.01\u2003anovas were conducted for the progressive ratio, extinction probe test, and baseline seeking\u2013taking motivational measures. Seeking under punishment was analysed with a mixed two-way anova with group as the between-subjects measure and session as the within-subject measure. Seeking at reinstatement was assessed with a between-groups one-way anova. Significance was set at \u03b1\u2003=\u20030.05. Significant interactions were analysed further with protected Fisher's LSD post hoc analyses.Between-subjects F5,133\u2003<\u20032.1).After training on the seeking\u2013taking task, and before introduction of the punishment contingency, the effects of the PFC and BLA lesions on the motivation to respond for cocaine were measured. The multiple analysis of variance performed on the break points under the progressive ratio schedule , on seeking performance during the non-reinforced seeking probe test or during baseline responding and actually increased their cocaine-seeking responses during the punishment sessions, thereby showing evidence of compulsive behaviour (P\u2003<\u20030.001) , whereas BLA-lesioned rats showed increased responding (P\u2003<\u20030.05) after both cocaine and shock as compared with that seen during the first cycle, and thus no sign of suppression across the session .All rats with ACC, prelimbic cortex (PLC), infralimbic cortex (ILC), OFC or anterior IC (AIC) prefrontal lesions suppressed their seeking responses to the same extent as controls, i.e. abstained from responding. In contrast, rats with lesions to the BLA both maintained for 2\u20137\u2003days (P\u2003<\u20030.05), but this soon returned to baseline. However, BLA-lesioned and ILC-lesioned rats differed from their respective sham controls by maintaining stable responding for sucrose across baseline and the early phase of the intermittent punishment contingency .The majority of rats showed an initial decrease in concomitant responding for sucrose following introduction of the punishment contingency , and there was a similar trend for ACC-lesioned rats (P\u2003=\u20030.16). In contrast, PLC-lesioned rats decreased their seeking responses (P\u2003=\u20030.03), similar to the trend observed in rats with OFC or ILC lesions (P\u2003>\u20030.23).Prefrontal cortical lesions differentially affected the reinstatement of drug-seeking responses after the return of rats to the test environment .Given the marked increase in cocaine seeking under punishment in the BLA-lesioned rats, they and their respective sham controls were subjected to fear conditioning to assess the extent to which reduction in fear evoked by shock was a factor in the persistent seeking of cocaine. BLA-lesioned rats showed a reduction in conditioned fear, as expressed by a significant decrease in freezing in response to the conditioned stimulus previously associated with footshock as compared with sham controls (Student's Histological assessment was performed after completion of the experiment, and was conducted blind to the behavioural results. 
The sites and extent of lesions are illustrated in Fig.\u2003et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., In this study, we tested the hypothesis that prefrontal cortical or limbic cortical loss of function plays a causal role in the development of compulsive cocaine seeking, as suggested by correlations between reduced grey matter volume or reduced metabolic activity and addiction to cocaine in human populations (Volkow The results showed that pre-cocaine exposure lesions of the ACC, PLC, ILC, OFC or AIC did not result in compulsive cocaine seeking, as measured by its resistance to punishment. Lesions of the BLA, however, resulted in both resistance to punishment-induced suppression and an increase in cocaine seeking, but the characteristics of this apparent compulsive cocaine seeking were somewhat different from that seen in the subpopulation of intact rats in which this behaviour has emerged over a long cocaine self-administration history. By contrast, there was evidence of opposing influences of the AIC and PLC on the reinstatement of cocaine seeking after abstinence; rats with AIC lesions showed an enhanced propensity to relapse, whereas those with PLC lesions showed a reduced propensity to relapse. The results are consistent with the notion of top-down inhibitory control by the PFC of maladaptive drug-seeking behaviour during relapse.et\u2003al., et\u2003al., et\u2003al., et\u2003al., Lesions of the IC are known to reduce aversive (Bermudez-Rattoni & McGaugh, et\u2003al., et\u2003al., Lesions of the PLC, in contrast, decreased the propensity to relapse when abstinent rats were returned to the drug-seeking environment. These results therefore indicate opposing influences of the PLC and AIC on relapse following punishment-induced abstinence. Opposing effects of ILC and ACC lesions on relapse following extinction have also been demonstrated, with functional inactivation of the ILC favouring, and inactivation of the ACC preventing, reinstatement (Peters et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al.,The lack of effect of PFC lesions on the tendency to maintain or even increase cocaine seeking under intermittent punishment conditions is perhaps surprising, particularly in the case of OFC lesions. In rats, pre-training lesions of the PFC similar to those used here result in perseverative responding, a form of compulsive behaviour, in a reversal learning task (Schoenbaum et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., The lack of effect of the various prefrontal lesions on compulsive cocaine seeking clearly demonstrates that acute loss of function of the PFC is not, in itself, sufficient to result in compulsive cocaine seeking. This could reflect the fact that acute, pre-cocaine self-administration lesions do not capture the subtle nature of either pre-existing reductions in grey matter volume reflecting a vulnerable endophenotype (Ersche et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., In addition, decreased dopaminergic and serotoninergic activity in the PFC has been reported in drug-addicted individuals Wilson, and in met\u2003al., et\u2003al., Using identical behavioural procedures to those used here, we have previously shown that the noradrenaline reuptake inhibitor atomoxetine also differentially affects responding under intermittent, unpredictable punishment and responding during relapse. 
Whereas acute systemic atomoxetine administration prevented relapse after voluntary abstinence (Economidou et\u2003al., et\u2003al., et\u2003al., et\u2003al. (et\u2003al., et\u2003al., et\u2003al., Rats with lesions of the BLA not only continued to respond despite the punishment contingency, but actually showed a marked increase in cocaine seeking above the pre-punishment baseline. This increase is unlikely to be attributable to baseline drift, as performance stabilized across the four baseline sessions, and previous data have clearly shown that non-punished animals do not increase their performance over time (Pelloux , et\u2003al. and empl et\u2003al., . Howeveret\u2003al., et\u2003al., et\u2003al., et\u2003al., However, although these data suggest that the persistence of cocaine seeking under punishment may reflect amygdala dysfunction, a general loss of fear does not provide an explanation for the compulsive cocaine seeking that emerges over a long period of cocaine self-administration, as we have shown that this can occur without any deficit in fear conditioning (Vanderschuren & Everitt, In conclusion, the results of these experiments show that lesions of the BLA, but not of prefrontal cortical areas, can markedly increase the propensity to seek cocaine under punishment after a short cocaine-taking history, but that this effect does not capture the mechanisms underlying the emergence of compulsive cocaine seeking after a prolonged or escalated cocaine-taking history. Prefrontal cortical lesions that were without effect on compulsive cocaine seeking nevertheless influenced relapse after abstinence, with PLC lesions preventing and AIC lesions increasing relapse. These findings further indicate inhibitory control mechanisms that are dissociable from those underlying relapse in extinction\u2013reinstatement models of relapse behaviour."} {"text": "Dear Editor,I read with great interest the contribution \u201cEffect of Gabapentin on Morphine Consumption and Pain after Surgical Debridement of Burn Wounds: A Double-Blind Randomized Clinical Trial Study\u201d by Raziet al. . Carryinrandomized controlled trials (RCT), the design and description should adhere to the Consolidated Standards of Reporting Clinical Trials\u2019 (CONSORT) statement. Lastly, we would also like to draw the authors\u2019 attention to a similar study addressing the efficacy of Gabapentin in post-surgical patients conducted by Dierking et al. (Firstly, the authors have not mentioned how they randomized their samples. Moreover, the process of blinding was not explained in detail other than mentioning the envelope method. In this regard, it would be advised that for g et al. publishe"} {"text": "Giardia duodenalis is a flagellate parasite which has been considered the most common protozoa infecting human worldwide. Molecular characterization of G. duodenalis isolates have revealed the existence of eight groups (Assemblage A to H) which differ in their host distribution. Assemblages A and B are found in humans and in many other mammals.G. duodenalis among Orang Asli in Malaysia. Stool samples were collected from 611 individuals aged between 2 and 74\u00a0years old of whom 266 were males and 345 were females. Socioeconomic data were collected through a pre-tested questionnaire. All stool samples were processed with formalin-ether sedimentation and Wheatley\u2019s trichrome staining techniques for the primary identification of G. duodenalis. 
Molecular identification was carried out by the amplification of a triosephosphate isomerase gene using nested-PCR assay.This cross-sectional study was conducted to identify assemblage\u2019s related risk factors of i.e. dogs and cats was found to be significant predictor for assemblage A. On the other hand, there were three significant risk factors caused by assemblage B: (i) children \u226415\u00a0years old , (ii) consuming raw vegetables and (iii) the presence of other family members infected with giardiasis .Sixty-two samples (10.2%) were identified as assemblage A and 36 (5.9%) were assemblage B. Risk analysis based on the detected assemblages using univariate and logistic regression analyses identified subjects who have close contact with household pets G. duodenalis infection among Orang Asli was caused by both assemblages with significant high prevalence of assemblage A. Therefore, taking precaution after having contact with household pets and their stool, screening and treating infected individuals, awareness on the importance of good health practices and washing vegetables are the practical intervention ways in preventing giardiasis in Orang Asli community.The present study highlighted that Giardia is a genus of intestinal flagellates that infects a wide range of vertebrate hosts. The genus currently comprises six species, namely Giardia agilis in amphibians, Giardia ardeae and Giardia psittaci in birds, Giardia microti and Giardia muris in rodents and Giardia duodenalis in mammals. These species are distinguished on the basis of the morphology and ultrastructure of their trophozoite . Subjects who participated in this study comprised 266 (43.5%) males and 345 (56.5%) females.More than half (68%) of the parents have low level of education i.e., less than 6\u00a0years of formal education. The majority of the parents did odd jobs such as selling forest products without any stable income. Some were daily wage earners working in rubber or palm oil plantations, unskilled labourers in factories or construction sites. Therefore, 51.6% of the households belonged to people who earned less than RM500 per month (\u2264US$156.02), the poverty income threshold in Malaysia which is inadequate to maintain a good living standard. Although 61.9% if the houses have provision of basic infrastructure such as treated water supply and 71.4% have pour flush toilet, at least 38.1% are still using untreated water originating from a nearby river for their domestic needs and 28.6% still defecate indiscriminately in the river or bush. More than half of the households (55.8%) kept dogs and cats as their pets. Most of these pets are left to roam freely. The villagers have very close contact with the dogs and cats. Occasionally, these pets also slept, defecated indoors and accompanied the villagers into the forest to harvest forest products.G. duodenalis assemblage A and assemblage B, respectively. The prevalence of G. duodenalis assemblages A and B infections were not significantly associated with gender. However, the prevalence of G. duodenalis assemblage B infection was significantly higher in the younger age group of less than 15\u00a0years (P\u2009=\u20090.021).Table\u00a0G. dudoenalis assemblages A and B infections and sociodemographic characteristics are shown in Table\u00a0P\u2009=\u20090.042) and close contact with household pets were significantly associated with G. duodenalis assemblage A infection. On the other hand, G. 
duodenalis assemblage B infection was associated with five factors which include children less than 15\u00a0years old , consuming raw vegetables , eating fresh fruits , non working mother and the presence of other family members infected with giardiasis .The association of P\u2009=\u20090.002) more likely to be infected with G. duodenalis assemblage A as compared to those who do not keep dogs and cats as their pets. In addition, children less than 15\u00a0years old , those being a consumer of raw vegetables and presence of other family members infected with giardiasis were more likely to be infected with G. duodenalis assemblage B , AF069557 (assemblage A), L02120 (assemblage A) and U57897 (assemblage A)] in one cluster with high bootstrap support. Phylogenetic analysis confirmed the monophyletic group of assemblage B (bootstrap\u2009=\u2009100%) were successfully amplified based on analysis targeting Giardia duodenalis from human and various animals are morphologically similar, distinct host-adapted genotypes have been demonstrated within G. duodenalis[G. duodenalis known as assemblages A and B. Both assemblages are found associated with human infection globally and have also been detected in various animals. In this study, the results showed that all G. duodenalis infections in Orang Asli are due to assemblage A and assemblage B. This confirmed the results of a several local studies performed elsewhere [Molecular tools have been recently used to characterize the epidemiology of human giardiasis. Although isolates of uodenalis,30. Humalsewhere ,19.G. duodenalis assemblages or in Giardia species). In the course of this study, triosephosphate isomerase (tpi) gene was specially chosen because of the high genetic heterogeneity displayed by Giardia species at this locus, as depicted by Thompson and Monis [et al.[tpi gene achieved the highest percentage of amplicons produced (70%), followed by glutamate dehydrogenase (gdh) (45%) and beta-giardin (bg) (33%). Similar occurrences were also reported in previous studies [et al.[tpi gene is a good phylogenetic marker for analysis of the molecular evolutionary and taxanomic relationship of G. duodenalis.At present, various molecular methods are available to distinguish these assemblages, mainly by nested-PCR followed by DNA sequencing or restriction fragment length polymorphism (RFLP), or by real-time PCR . The majnd Monis . A recens [et al. reported studies ,33. Furts [et al., the tpiG. duodenalis assemblages varied in different geographical areas. In the present study, sequences analysis of the 98 samples recovered from Orang Asli revealed 10.2% (62/611) G. duodenalis assemblage A and 5.9% (36/611) assemblage B, which were differs from previous local studies carried out by Mohammed Mahdy et al.[et al.[et al.[et al.[et al.[The distribution of dy et al. and Hueyl.[et al.. In theil.[et al. which oband 5.9% /611 assel.[et al.. Similarl.[et al.. Souza el.[et al. indicatel.[et al. observedet al.[et al.[et al.[et al.[et al.[G. duodenalis assemblage B infecting humans in the city of Rio de Janeiro, Brazil. Results from each of these studies are not strictly comparable since amplifications were done on different G. duodenalis genes. Differences of G. duodenalis assemblage among the studied populations could be due to different modes of transmission in each area, comprising human-to-human, foodborne, waterborne or zoonotic transmissions.In contrast, a high prevalence rate of assemblage B was reported by Hatam-Nahavandi et al.. Likewisl.[et al. 
in Thaill.[et al. observedl.[et al. indicatel.[et al. did not G. duodenalis assemblages A and B infections between genders. Similar findings were observed by Gelanew et al.[et al.[et al.[Results of this study indicate a non-significant difference in the prevalence of ew et al. and Anthl.[et al. which fol.[et al. demonstret al.[et al.[G. duodenalis assemblages show age-specific pattern. Compared to previous studies carried out in Ethiopia [Interestingly, this study showed that children age less than 15\u00a0years were at higher risk of being infected with assemblage B. This finding was in agreement with Mohammed Mahdy et al. and Sadel.[et al. that addl.[et al.,34,50. TEthiopia , PhilippEthiopia , it seemG. duodenalis assemblage B in the present study. The most common vegetables consumed in Orang Asli were tapioca shoots, wild fern shoots and locally planted leaves. It was believed that this association was due to eating these vegetables with contaminated hands or hands that were insufficiently washed. Contaminated hands have been implicated to play a major role in the faecal-oral transmission of the communicable faecal-oral transmitted diseases in developing countries [Giardia cysts [Foodborne transmission of giardiasis was suggested in 1920s ,53 and aountries ,56. Otheountries ,58. Vegeia cysts ,60, althia cysts .G. duodenalis assemblage B was predominantly human and to a much lesser extent dog and wildlife. The present finding suggests that humans are the major source of assemblage B and it indicates the possibility of infected family members as the source of infection and direct transmission occurring within the household. It has also been demonstrated in Bangkok where human-to-human transmission of assemblage B was found among humans in temple communities [et al.[G. duodenalis assemblage B was high in children younger than 15\u00a0years old. Furthermore, the prevalence of G. duodenalis assemblage B which indicates an anthroponotic transmission cycle has been seen in other countries in Asia [The host distribution of munities . Likewiss [et al. suggeste in Asia and the in Asia .G. duodenalis is still under debate and despite increasing knowledge of the molecular identification of Giardia from different host species; the zoonotic potential of G. duodenalis is not clear [G. duodenalis assemblage A. We considered household pets as dogs and cats that were kept contained in residence area (house and/or yard) for at least 12\u00a0h a day and allowed in the streets part of the day, either alone or accompanied by their owners. The role of dogs and cats as a definitive G. duodenalis host has been widely studied and recognized as being a public health problem, especially in developing countries and communities that were socioeconomically disadvantages as the one used in this study. In these communities, poor levels of hygiene and overcrowding, together with a lack of veterinary attention and zoonotic awareness, exacerbates the risk of giardiasis transmission [Giardia in dogs and humans living within the same household, although it would appear that the risk of dog-human transmission was low.Zoonotic transmission of ot clear ,67. Howesmission . FurtherGiardia positive samples collected from dogs in urban areas, 60% were infected with zoonotic Giardia from assemblage A, 12% with dog specific assemblages C and D and the remaining 28% harboured mixed infections [et al.[Giardia, six infected with assemblage A and 11 with assemblage F (the cat genotype). 
Based on these findings, we believe that there are two transmission cycles in dogs; (i) the normal cycles between dogs which involves transmission of G. duodenalis cysts that belong to assemblages C or D and (ii) the other cycler includes cross transmission of Giardia from humans belonging to assemblage A [G. duodenalis reservoir and transmit cysts in at least two ways which are from dog-to-dog and from humans-to-dogs and perhaps from dogs-to-humans.In a recent report from Germany, it was found that of 60 fections . Few stus [et al. examinedmblage A that groG. duodenalis assemblages A and B were not detected in the current study. Similarly, Bertrand et al.[Giardia cysts of different genetic profiles or subsequent infection of an infected host by genetically different Giardia cysts. This was especially common in areas where giardiasis was endemic [It is interesting to note that mixed infections with nd et al. also didnd et al., Italy [nd et al., India [nd et al. and Unitnd et al.. The per endemic ,76,77.i.e. lipids, haemoglobin, bile salts and polysaccharides from mucus, bacteria and food degradation product) which can affect the result of amplification. For this reason, some extraction and amplification methods have been improved to develop more sensitive assays to identify gene. In some studies, specific DNA was detected at all target concentrations, demonstrating that QIAamp DNA kit extraction method could effectively remove PCR inhibitory substances [G. duodenalis up to sub-assemblages [G. duodenalis assemblage A isolates have been further grouped into sub-assemblages I and II, whereby the assemblage B isolates have been divided into sub-assemblages III and IV [Giardia isolate since different methods can group isolates into different assemblages and the resolution of sub-assemblages is dependent on the selected method.The present study however has several limitations. Firstly, direct amplification of cysts DNA from stool samples help to sole important questions such as presence of mixed infections, association between assemblages and host (pathogenicity) and selection for irrelevant genotypes during cultivation ,78,79. Bbstances ,81. Secoemblages . The G. I and IV ,84. ThisG. duodenalis assemblage is a useful way to understand the dynamic transmission of Giardia infection in Orang Asli. In the base of our findings, an anthroponotic origin of the infection route is suggested and underscored the fact that human are the main source of infection for assemblage B while close contact with domestic animals played a major role in the transmission for assemblage A. Because of the possibility of zoonotic transmission and the potential of household pets for hosting the parasite suggested by some researchers, further studies with a variety species of animal stool samples are recommended. Further studies using additional, more highly variable loci will provide more definitive evidence of both anthroponotic and zoonotic transmission in this community.In conclusion, determination of the The authors hereby declare that they have no competing interests.TSA was involved in all phases of the study, including study design, data collection, data analysis and write up of the manuscript; NM supervised the study, and revised the manuscript; NM was involved in the statistical analysis of data; FMS and SNA were involved in the collection and laboratory examination of samples. All authors read and approved the final manuscript. 
TSA and NM are the guarantors of the paper.The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1471-2334/14/78/prepub"} {"text": "Synthetic Aperture Radar has shown its large potential for retrieving soil moisture maps at regional scales. However, since the backscattered signal is determined by several surface characteristics, the retrieval of soil moisture is an ill-posed problem when using single configuration imagery. Unless accurate surface roughness parameter values are available, retrieving soil moisture from radar backscatter usually provides inaccurate estimates. The characterization of soil roughness is not fully understood, and a large range of roughness parameter values can be obtained for the same surface when different measurement methodologies are used. In this paper, a literature review is made that summarizes the problems encountered when parameterizing soil roughness as well as the reported impact of the errors made on the retrieved soil moisture. A number of suggestions were made for resolving issues in roughness parameterization and studying the impact of these roughness problems on the soil moisture retrieval accuracy and scale. When the algorithm was applied to the entire test site, only a small percentage of pixels resulted in a normal solution. A similar problem was reported by Ji et al. [et al. [et al. [a priori knowledge of roughness parameters. They used a SIR-C/X-SAR scene and calibrated the model for two agricultural fields where soil moisture data were available. Results were compared to moisture estimates obtained from a hydrological model, yielding better results for L-band data than for C-band. The results obtained were strongly influenced by the vegetated cover of the fields. Fung and Chen [Applying Oh's model to SIR-C measurements over the Little Washita River watershed, Wang et al. found thi et al. , and wasi et al. obtained [et al. found th [et al. found a [et al. reported [et al. carried and Chen reportedet al. [ks < 2.5 and incidence angles larger than 30\u00b0.The Dubois model relates the HH or VV polarization to the soil's dielectric constant, surface roughness, incidence angle and radar frequency. Basically, for a given radar configuration and soil roughness, this model linearly relates the dielectric constant of a soil to the backscattering coefficient, expressed in dB. Dubois et al. restrictet al. [et al. [s < 0.6 cm), an overestimation for surfaces with an RMS height larger than 1.6 cm, and a correct simulation for surfaces with an intermediate roughness. For soil moisture contents smaller than 30%, Baghdadi and Zribi [et al. [Ji et al. applied [et al. applied [et al. generallnd Zribi observed [et al. applied et al. [et al. [et al. [Leconte et al. applied [et al. , many st [et al. develope2.3.A number of theoretically rigorous and approximate solutions for electromagnetic scattering from rough surfaces, described as stochastic random processes, have been developed over the past decades . SeThe most popular approximate scattering models are the Small Perturbation Model , Kirchho\u03c30 of a bare soil, given the radar properties , surface characteristics (dielectric constant and surface roughness) and local incidence angle. The theoretical derivation of the IEM starts from the Stratton-Chu integral which describes the scattered electric field sE observed at the sensor in terms of the tangential electric and magnetic fields at the soil surface. 
Because the Stratton-Chu integral is complex some approximations as described in Fung [et al. [The Integral Equation Model encompasses the Kirchhoff and small perturbation models in the high and low frequency regions respectively ,28,40,97 in Fung have to [et al. , the val [et al. .In order to invert the IEM to dielectric constants , sevks < 3 [For bare soil studies, the IEM has become the most widely used scattering model . The valks < 3 , howeverks < 3 found thMv > 30%) and large incidence angles (\u03b8> 44\u00b0).The IEM has been validated successfully at fine scales in a laboratory setting ,111-113.et al. [et al. [i.e. an underestimation of the HH backscattering coefficient) by Baghdadi and Zribi [et al. [et al. [Rakotoarivony et al. and Zrib [et al. observednd Zribi . Mattia [et al. showed t [et al. found tha posteriori verified [s and l, but introduced a new set of parameters that were related to multi-scale surface properties. One result obtained with this novel roughness description was that the backscattering from very smooth agricultural soils could be predicted better [According to Zribi and Dechambre the diffverified and secoverified reformuld better .et al. [et al. [et al. [et al. [Other improvements of the IEM have been reported: Boisvert et al. and Weimet al. adapted et al. included [et al. introduc [et al. introduc [et al. ; and Che [et al. improved [et al. , which i [et al. .3.et al. [et al. [Soil roughness can be considered as a stochastic varying height of the soil surface towards a reference surface . This reet al. show a b [et al. reporteds and l, where the latter is usually regarded as a fitting parameter [s, l and ACF that characterize single-scale roughness.Generally, the characterization of surface roughness is obtained from the analysis of height variations observed along transects , from wharameter ,128. As 3.1.N points with surface height iz, the RMS height, s, is calculated as [For discrete one-dimensional surface roughness profiles consisting of lated as :(1)s=1Net al. [In order to obtain a consistent RMS height measurement, Bryant et al. concludeet al. ,103,129.et al. [et al. [The relationship between RMS height and environmental variables such as tillage and soil texture has been extensively studied in the past ,130,131.et al. derived [et al. investig3.2.l, describes the horizontal distance over which the surface profile is autocorrelated with a value larger than 1/e (\u2245 0.368) [The correlation length, \u2245 0.368) ,133. Alt\u2245 0.368) ,65,134. \u2245 0.368) and depe\u2245 0.368) ,103,129,Various values for an optimum profile length have been suggested for measuring correlation length, varying from a couple of meters to more et al. [et al. [et al. [Baghdadi et al. mentione [et al. measured [et al. , who fou [et al. .3.3.\u03be = j\u0394x, and \u0394x the spatial resolution of the profile, is given by:The normalized autocorrelation function, for lags density ,21,51.In order to fully characterize the ACF of a surface, a discretization interval, used to sample the profile, should be at least as small as one tenth of the correlation length ,135, as In backscatter models, often two types of ACFs are used , i.e. thand the Gaussian function is defined as:l the correlation length. Depending on which ACF is chosen, IEM produces strongly different results as demonstrated by s and l.with et al. [Compared to the Gaussian ACF, the exponential one is characterized by smaller correlations at small lags. 
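In the single-scale description used throughout this section, the RMS height s measures the vertical scale of the height deviations, the normalized ACF describes their horizontal structure, and the correlation length l is the lag at which the ACF falls to 1/e. The following is a minimal sketch of how these quantities, and the exponential and Gaussian model ACFs, can be computed from a detrended discrete profile; note that some authors normalize by N - 1 instead of N, and interpolate the 1/e crossing rather than taking the first lag below it:

import numpy as np

def roughness_parameters(z, dx):
    # z: detrended 1-D surface heights sampled every dx
    z = np.asarray(z, dtype=float) - np.mean(z)
    n = z.size
    s = np.sqrt(np.sum(z ** 2) / n)                                   # RMS height
    acf = np.array([np.sum(z[:n - j] * z[j:]) / np.sum(z ** 2) for j in range(n)])
    lags = np.arange(n) * dx                                          # lags xi = j * dx
    below = np.where(acf < 1.0 / np.e)[0]                             # where the ACF drops below 1/e
    l = lags[below[0]] if below.size else np.nan                      # correlation length
    return s, lags, acf, l

# Single-scale model ACFs commonly inserted in backscatter models such as the IEM:
rho_exponential = lambda xi, l: np.exp(-np.abs(xi) / l)
rho_gaussian    = lambda xi, l: np.exp(-(np.asarray(xi) / l) ** 2)

For the same correlation length, the exponential form decays more steeply at small lags than the Gaussian form.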
This causes that exponential ACFs to better describe the micro-roughness in the profile than Gaussian ACFs . Howeveret al. introducet al. ,135,137;et al. ,138. In et al. . For a set al. .et al. [et al. [Following Zhixiong et al. , the aut [et al. in a sen4.et al. [Although the characterization of soil roughness seems a fairly straightforward methodology, the parameterization faces many problems. One major problem is that roughness parameters often show little or no spatial dependency. In other words, surface height measurements and derived roughness parameters taken at one position often do not, or only poorly, represent their surrounding area which makes them physically meaningless. For example, this was observed by Lehrsch at al. who carret al. recognizet al. . This unet al. .In the following sections, a review on the different sources of errors will be given, and when available from literature, the influence of these errors on soil moisture retrieval will be discussed.4.1.Several methods have been suggested for estimating the roughness parameters. These can be subdivided in two groups : contactThe meshboard technique involves inserting a gridded board in the soil and making a picture after which it is digitized. The main advantages of a meshboard are its low cost and the fact that it is easy to make. A major disadvantage of the instrument is that it is quite difficult to insert the meshboard sufficiently deep into a rough soil without disturbing the roughness, especially, when the soil is compacted. Meshboard measurements are typically affected by parallax errors which are caused by the fact that the picture of the intersection of the soil and the meshboard cannot be taken at ground level, but generally is taken at a height of about 90 cm . FurtherThe pin profiler is constructed out of a number of vertically movable pins which are lowered onto the ground surface . The posNoncontact instruments include laser techniques e.g. ,153,154),154153,1A laser profiler makes use of a laser beam measuring the distance between a horizontally positioned rail, on which the carriage with the laser beam moves, and the soil surface. The main advantage of this instrument is that it allows for an accurate measurement of the roughness profile having a sufficient horizontal resolution. Yet, these instruments are also characterized by different disadvantages including the interference of light from other sources ,154, theet al. [et al. [2) of 0.6. For meshboards, a significant error was found in the roughness measurements (compared to laser profiler measurements), which, according to Mattia et al. [Unfortunately, a thorough investigation comparing measurement techniques and accuracy assessment has not been performed yet . In a stet al. found thet al. . This fi [et al. , where aa et al. , is mostet al. [In order to fully comprehend the effect of roughness characterization from different measurement techniques on soil moisture retrieval from SAR, an in-depth study is required that compares the different techniques over the same sites for different roughness situations. Some preliminary results, based on a study that only focused on the RMS height measurements from pin profiler and laser scanning, have been presented by Bryant et al. . They co4.2.et al. [The soil roughness parameters describe the statistical variation of the stochastic varying surface height towards a reference surface. Generally, only the first order component from the measured profile is removed. 
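The first-order correction mentioned in the preceding sentence amounts to fitting and subtracting a straight line, the trace of a possibly tilted reference plane along the profile. A minimal sketch with hypothetical variable names; the higher-order detrending options discussed below only change the polynomial degree:

import numpy as np

def detrend_linear(x, z):
    # remove the first-order (linear) trend, i.e. the trace of a tilted planar
    # reference surface, from a measured height profile z(x)
    coeffs = np.polyfit(x, z, deg=1)
    return z - np.polyval(coeffs, x)

# e.g. third-order detrending to remove gentle field topography from long profiles:
# z_detrended = z - np.polyval(np.polyfit(x, z, deg=3), x)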
This assumes that the reference surface is a plane, which is not necessarily horizontal, and accounts for the fact that the measurement device may have been slightly tilted with respect to the reference surface. This assumption is only valid when short profiles are measured , but foret al. , the topet al. .et al. [et al. [In order to parameterize these single-scale roughness deviations, one should filter out the curved reference surface. Different methodologies can be applied such as detrending using piecewise linear regressions, applying a highpass filter, applying a moving average filter, and detrending using a higher order polynomial . Generalet al. found th [et al. did not et al. [Bryant et al. demonstr4.3.Bryant et al stated t4.3.1.et al. [The horizontal resolution is defined by the instrument that is used. For laser profilers, the horizontal distance between two measurements generally ranges between 1 mm and 5 mmet al. state thet al. .et al. [In order to prevent major errors in the estimations of roughness parameters, Oh and Hong suggesteet al. advised 4.3.2.Although the vertical resolution is an instrument property introducing errors in the measured profile, little attention has been given to the effect of this accuracy on the determination of roughness parameters or the impact on the soil moisture retrieval. Values with respect to the vertical accuracy of different instruments are rarely published. Jester and Klik mentione4.3.3.Both the non-electronic pin profiler and the meshboard technique require a digitization. With pin profilers, the position of the upper part of the pins is photographed and digitized, whereas, the soil surface is immediately digitized when using a meshboard. Archer and Wadge have shoet al. [s,l) of . If 12 different people digitized the same profile, similar average roughness values were obtained, i.e. = , and a coefficient of variation of 4.52% and 4.51% for respectively RMS height and correlation length was found. Although these examples are statistically not representative, one could conclude that errors introduced in the digitization process are very small, and therefore, although not assessed, the impact on soil moisture retrieval is expected to be small.Since the digitization of pins is much less subjective then making a difference between soil and plate when using meshboard pictures, D'Haese et al. digitize4.4.et al. [et al. [It is well known that the values of the roughness parameters depend on the profile length used ,129,163.et al. found th [et al. obtainedet al. [Smooth profiles required a minimum length of 10 m to get an estimation of the RMS height that was comparable to what was obtained when 25-m profiles were analyzed, whereas for rough profiles, 5-m profiles seem sufficient. Zhixiong et al. reportedet al. deduced et al. [In order to get accurate estimations of the correlation length, much longer profiles are necessary. Furthermore, the increase rate for the correlation length with increasing profile lengths is larger than for the RMS height . For shoet al. found siet al. , profiles2) is proportional to the profile length. Baghdadi et al. [\u03b7 and \u03b3 two calibration constants and \u03b3 corresponds to the asymptotical RMS-value (obtained for profile lengths L \u2192 \u221e). Callens et al. [s and L following:a and b are parameters obtained through a least squares fitting. They furthermore suggested a similar relationship for describing the dependency of the correlation length on L. 
They used both relationships to estimate the asymptotical value of both roughness parameters from 4-m profiles, such that infeasible long profile lengths (extending 25 m in some cases) are no longer required.Sayles and Thomas suggestei et al. discovers et al. further Despite the different studies devoted to this scaling behavior of the roughness parameters, an assessment of the errors made on the soil moisture retrieval when applying roughness values obtained from different profile lengths has not been reported yet. Such a study would be extremely useful in order to determine the profile lengths needed such that roughness parameters can be estimated at a scale that is relevant for the scattering process.4.5.et al. [et al. [Baghdadi et al. mentione [et al. reportedOgilvy demonstret al. [The effect of averaging soil roughness parameters from a different number of profiles on the soil moisture retrieval was assessed by Bryant et al. . They di4.6.et al. [\u00c1lvarez-Mozos et al. studied et al. .4.7.et al. [Different studies e.g. ,166,167),167166,1et al. mentioneet al. [et al. [During three months, Callens et al. measured [et al. found noet al. [s decreased whereas an increase in correlation length was observed with time. Even if these temporal changes in roughness were not strongly significant if the in-field spatial variability was taken into account, their influence on the backscattering coefficient can be important because both effects (s reductions and 1 increments) contribute to a more specular-like behavior of soils. Therefore, backscattering values increase at low incidence angles and decrease at large incidence angles as the surface smoothens. When studying the impact on soil moisture retrieval, \u00c1lvarez-Mozos et al. [\u00c1lvarez-Mozos et al. also invs et al. demonstr5.5.1./f power spectrum (i.e S(f) \u221d 1\u03bd/f where \u03bd = (7 \u2212 2D) and D is the surface fractal dimension). For these random processes, traditional roughness parameters, namely the profile height rms (s) and profile correlation length (l) are not intrinsic properties of the surface, but depend on the measured profile length [1/f surfaces always possess more important high frequency components than single-scale Gaussian correlated surfaces. On the contrary, they may possess larger or smaller high frequency components than single-scale exponentially correlated surfaces, depending on \u03bd scales rather than at a single fundamental scale. For instance, over large scales land surface roughness depends on climatic conditions . Wherease length . This pre length ,178,179)found in ,181).Based on the aforementioned description, different researchers have tried to describe soil surfaces using band-limited fractals e.g. ,182-184)-184182-15.2.et al. [Since the correlation lengths assessed from field measurements are generally inaccurate, Baghdadi et al. proposeda and \u03b2 are calibration constants depending on incidence angle and polarization. This relation was parameterized by fitting the IEM results to radar measurements until a good agreement was obtained. Their results revealed that the calibrated correlation length was strongly related to the RMS height, but that these relations were dependent on the radar configuration. Baghdadi et al. [where i et al. found thFor surfaces characterized by a Gaussian autocorrelation function, they found that a linear relationship better fitted the IEM results:et al. 
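Such calibration approaches treat the correlation length as a fitting parameter tied to the measured RMS height. As an illustration of the final least-squares step only, with invented data, and with the power-law form assumed here rather than taken from the cited papers:

import numpy as np

# Invented example pairs: measured RMS height s_i and the correlation length l_i that,
# when used in the IEM, best reproduces the backscatter observed over field i
s_obs = np.array([0.5, 0.8, 1.2, 1.8, 2.5])   # cm
l_cal = np.array([2.1, 3.0, 4.2, 5.9, 7.8])   # cm

# assumed power-law form l = alpha * s**beta
beta, log_alpha = np.polyfit(np.log(s_obs), np.log(l_cal), 1)
alpha = np.exp(log_alpha)

# linear form l = a * s + b (the form reported to fit better for Gaussian ACFs)
a, b = np.polyfit(s_obs, l_cal, 1)

print("power law: l ~ %.2f * s^%.2f" % (alpha, beta))
print("linear:    l ~ %.2f * s + %.2f" % (a, b))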
[\u03b1 and \u03b2 were dependent on the polarization and incidence angle of the SAR observations used, and \u03b1 and \u03b2 values for C-band data were recommended.In a later study, Baghdadi et al. found thet al. [in situ measurements of surface soil moisture and accounted for the very complex relation between s and l found in heterogeneous landscapes.Although generally better retrieval results are obtained when using these calibration approaches, this technique does not allow to extrapolate the obtained models, unless the sites on which it is applied are similar to the one used for its development. Rahman et al. used the5.3.z can be obtained by stereo-image matching or by using laser scanners. Jester and Klik [et al. [In the past, soil surface height measurements were almost exclusively taken along one-dimensional profiles. Thus it was not possible to carry out an analysis of the two-dimensional soil surface height field, which may possibly lead to different roughness characterizations then compared to the one-dimensional case. Two-dimensional height maps and Klik found th [et al. .et al. [The terrestrial measurements may be complemented by airborne laser scanner measurements, which have become the main data source for high quality digital terrain models . Even thet al. ,189 showet al. , which i5.4.et al. [Another approach to overcome roughness parameterization effects in the soil moisture retrieval consists of combining two or more SAR images of different incidence angle with the IEM to separate the effects of soil moisture and roughness for several tillage types ,191-193.et al. reported\u03c30) generated by the IEM model with two different incidence angles, keeping all other parameters constant, was proportional to roughness only, expressed as a ratio of s2/l termed the Z-index. Rahman et al. [s and l separately from the Z-index using the IEM with a SAR image acquired with dry soil conditions. The resulting maps of distributed roughness can be used to parameterize IEM for producing surface soil moisture maps without the need for ancillary data [Zribi and Dechambre found thn et al. showed tary data . The app5.5.et al. [et al. [An alternative way to address the roughness problem is to make use of polarimetric parameters such as, for example, the entropy, the \u03b1 angle and the anisotropy . This shet al. found th [et al. discover5.6.et al. [Satalino et al. suggest et al. . This pret al. ,130.et al. [et al. [et al. [Knowledge on the tillage state of a field, however, does not allow for accurate roughness parameterization. On the contrary, there exists a range of roughness values possible for the specific tillage state, and this vague information should be used in the retrieval process in order to determine a range of possible soil moisture values and/or a most likely soil moisture value. Satalino et al. trained [et al. applied [et al. -198 to p [et al. developeet al. [et al. [The results of Satalino et al. and Verh [et al. show tha6.From an extensive literature study, it is clear that roughness parameterization is an important yet problematic issue in SAR-based soil moisture retrieval. Basically, the way roughness needs to be described and measured for the modeling of backscattering is not fully understood, and generally, the problem is simplified through assumptions of single-scale, isotropic roughness.In most backscatter models, two roughness parameters are required, i.e. 
the RMS height and the correlation length, and in theoretical models such as the Integral Equation Model (IEM), the shape of the autocorrelation function (ACF) also needs to be known. For natural surfaces, an exponentially decaying function appears to be a reasonable approximation of the ACF, but given the high sensitivity of theoretical models to the selected ACF, even small deviations can cause differences in the calculated backscatter on the order of several decibels. Yet, the largest problems for the parameterization of the roughness are encountered with respect to the correlation length. This parameter is characterized by a very high variability, causing average values of generally a small number of roughness measurements to be characterized by a high uncertainty. Since the RMS height values are less variable between roughness profiles, their parameterization is less problematic. The profile length used in the field for characterizing roughness strongly determines the roughness parameters. Longer profiles result in higher values and a smaller variability for both RMS height and correlation length. Since different parameters may thus result for the same roughness, the profile length used will have an important impact on the retrieval of soil moisture, given the high sensitivity of the backscattered signal to roughness. Therefore, several authors argue that long profiles should be used for an accurate estimation of the roughness parameters. Also, it may be important to collect height measurements in two dimensions. However, it has never been shown at what scale or dimension surface roughness needs to be measured in order to be relevant for an accurate description of the scattering phenomenon. Further research on this issue is thus very important such that adequate roughness parameters can be determined in the field in order to support backscatter modeling or soil moisture retrieval. If not, techniques which try to circumvent the in-field measurement of roughness will have to be developed in order to make further progress in SAR soil moisture retrieval, like those discussed in Section 5. If SAR is to be used on an operational scale for soil moisture mapping, then soil roughness parameterization through in situ measurements is not an option. New techniques for moisture retrieval need to be further explored that circumvent roughness characterization or that allow for roughness characterization based on remote sensing. Alternatively, ranges of roughness values can be assigned to different tillage practices, which can then be used as a priori information as input to soil moisture retrieval algorithms. The advent of new high-resolution sensors observing in X- and L-band and C-band sensors yielding polarimetric data (RADARSAT-2) should allow for a better characterization of surface parameters. It is expected that a combined use of data from these different platforms can lead to retrieving soil roughness information at a spatial scale relevant to the observed scattering. These roughness values can then be used for operational soil moisture retrieval from SAR imagery. Several studies have been devoted to improving the roughness characterization, to assessing errors and to estimating the scaling behavior of the roughness parameters. However, there is still no comprehensive assessment of the impact of these roughness problems on the soil moisture retrieval.
Nevertheless, an improved insight in the roughness parameterization and its impact on soil moisture retrieval is a prerequisite for making further advances in electromagnetic backscatter modeling and soil moisture retrieval. It is important that the obtained soil moisture products are accompanied by an accuracy measure, as such information is of major importance in order to properly assimilate these spatial soil maps into land surface models . Howeve"} {"text": "Background. Paranasal and nasal cavity malignancies are rare tumors that frequently present at advanced stages. Tumor extension and anatomic complexity pose a challenge for their treatment. Due to their peculiar physical and biological properties particle radiation therapy, i.e. protons and ions can have a role in their management. We performed a systematic literature review to gather clinical evidence about their use to treat sinonasal malignancies. Materials and Methods. We searched the browsers PubMed and Medline as well as specific journals and conference proceedings. Inclusion criteria were: at least 10 patients, English language, reporting outcome and/or toxicity data. Results. We found six studies with data on clinical outcome. Carbon and helium ions were each used in one study, protons in four. Toxicity was specifically described in five studies. One reported acute toxicity of carbon ions, one dealt with brain toxicity from both carbon ions and protons. Three papers reported on visual toxicity: one from carbon ions, one from protons and one from both. Specific data were extracted and compared with the most pertinent literature. Conclusion. Particle radiation therapy is in its early phase of development. Promising results achieved so far must be confirmed in further studies. Paranasal sinus and nasal cavity malignancies are rare with an incidence rate estimated to range from 0.3 to 3.5 per 100.000 per annum . They acSinonasal malignancies present frequently at advanced stages due to late symptom onset. This, combined with the complex regional anatomy and the presence of several organs at risk (OARs) such as brain and optic structures, poses a challenge for their best management. The mainstay of treatment is surgery : traditi Radiation therapy has been employed either as adjuvant treatment for high-risk cases or as definitive therapy for unresectable disease. No clear evidence exists for a routine use of chemotherapy, which is administered generally in a case-by-case scenario , 18. DesParticle radiation therapy, that is, protons and heavy ions, is a relatively new type of radiation therapy that could enhance the therapeutic ratio for sinonasal malignancies. Protons and heavy ions share the same characteristic dose distribution, the so-called Bragg Peak, that is the release of almost all their energy in a few millimeters at the end of their path see . This pend(1 + d/\u03b1/\u03b2), [n), the single dose (d), and the \u03b1/\u03b2 ratio that is a radiobiological parameter that characterizes the response of every tissue to radiations.An altered fractionation regimen is any radiotherapy schedule that differs from the standard delivery of 1.8\u20132.0\u2009Gy, 5 days a week for an overall treatment time of about 6-7 weeks. They can be classified as hyperfractionated, accelerated, or hypofractionated with possible combinations as well. They all try to increase the therapeutic index that is, the ratio of the probability of tumor control to the probability of normal tissue toxicity. 
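The expression nd(1 + d/α/β) quoted above is the biologically effective dose (BED) of the linear-quadratic model, with n the number of fractions, d the dose per fraction and α/β the radiobiological parameter mentioned in the text; dividing the BED by (1 + 2/(α/β)) gives the equivalent dose in 2-Gy fractions and allows altered and standard fractionation schedules to be compared. A minimal worked example (the schedules below are illustrative and not taken from the reviewed studies):

def bed(n_fractions, dose_per_fraction, alpha_beta):
    # biologically effective dose: BED = n * d * (1 + d / (alpha/beta))
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

def eqd2(n_fractions, dose_per_fraction, alpha_beta):
    # equivalent dose in 2-Gy fractions: EQD2 = BED / (1 + 2 / (alpha/beta))
    return bed(n_fractions, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

# conventional 35 x 2 Gy versus a hypofractionated 16 x 3.6 Gy course,
# for a late-responding tissue with alpha/beta = 3 Gy
print(bed(35, 2.0, 3.0))    # ~116.7 Gy
print(bed(16, 3.6, 3.0))    # ~126.7 Gy
print(eqd2(16, 3.6, 3.0))   # ~76.0 Gy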
Published papers in the head and neck field have proved their effectiveness in comparison to standard fractionation , 34. Par d/\u03b1/\u03b2), , allows Since particle radiation therapy is not very widespread and basically still under development, data of its use to treat sinonasal malignancies are scant.In this paper we performed a systematic review of the literature to gather all the clinical experience so far accumulated on this issue focusing on outcome and side effects.The Population-Interventions-Comparators-Outcomes (PICO) frameworEligibility criteria were as follows: studies published in the English language, reporting outcome and/or toxicity data on definitive treatment with particle radiation treatment of nasal and paranasal sinus malignancies. Studies were accepted if photon radiation therapy was combined with particle radiation and if radiation therapy was used either in definitive, adjuvant, or neoadjuvant setting. Any use of chemotherapy was allowed. Studies had to include at least 10 patients and to report data on tumor control and/or on toxicity. Studies including multiple head and neck sites were considered and included if it was possible to extract specific information on sinonasal malignancies. In case of publications with overlapping data, the study with the largest number and wider data was chosen. Review articles, case reports, and planning studies were not considered.Data extracted from publications meeting the eligibility criteria were first author, year of publication, institution, number of patients, gender, median age, pathology, stage, type of surgery received if any, type of chemotherapy received if any, and followup. Regarding radiation treatment, the following data were extracted: possible combination with photon, type of particle used, RBE employed, total dose, dose per fraction, number of fractions and number of fractions a week of the particle used, total dose, dose per fraction, number of fraction and number of fractions a week of photons; combined total dose was reported as well. For studies reporting results on tumor control the following data were recorded: local, regional and distant control, overall survival, type, scale and grade of toxicity, time to toxicity, and risk factors individuated. For studies dealing particularly with toxicity apart from the available data as above, the diagnostic criteria used were reported as well as the maximum dose to affected organs and side effect risk factors. Any toxicity scale used was also reported.The initial literature research identified 2012 studies including duplicates see . 1985 weAll the studies reporting data on outcome have been published in peer-reviewed journals apart from the one from Malayapa that hasThe first publication included in the paper, by Castro et al. , dates bMizoe et al. publishe\u03b1/\u03b2=3)( > 130\u2009Gy (RBE) was associated with late toxicity as well as with high fraction size.Tokuuye et al. from theData from Massachusetts General Hospital, Harvard Cyclotron Laboratory, Francis Burr Proton Center, USA, are available in the paper of Resto et al. From 199Zenda et al. from theThe University Of Florida Proton Therapy (UFPT) Institute's experience is reported in the paper of Malyapa et al. since JaA total of five studies dealing specifically with toxicity after particle radiation therapy were found. Jensen et al. from themax\u2061 > 110\u2009Gy (RBE) (\u03b1/\u03b2 = 3) were observed as possible risk factors. Diabetes was statistically significant at multivariate analysis too. 
Demizu et al. , from thmax of the optic nerve >57\u2009Gy (RBE), and D10-50. At multivariate analysis, D20 > 60\u2009Gy (RBE) was still significant. In the paper of Hasegawa et al. , from thWeber et al. describe\u03b1/\u03b2 value of 3.6. Radiation-induced brain changes (RIBCs) were evaluated by MRI findings on T2-weighted or postcontrast images and graded according to the LENT-SOMA scale. Related symptoms were scored according to CTCAE versus 3.0 scale. After a median interval of 31 (range: 6\u201349) and 27.5 (range: 19\u201336) months, respectively, three patients, treated by protons, and two by carbon ions developed radiation-induced brain changes. Radiologic toxicity grades for the three protons-treated patients were G1, G2, and G3 and for the two carbon ions-treated patients were G3 and G2. For protons-treated patients, clinical grading was G1 for one and G2 for two, meanwhile it was G3 and G1 for the two carbon ions-treated patients. G3 clinical toxicity consisted of epilepsy requiring steroids and anticonvulsants. From the analysis of the whole group, carbon ions were statistically more frequently associated with RIBCs than protons (P = 0.02). Minimal median dose to RIBCs sites for the three proton patients was 117.1\u2009Gy (RBE)3 (range: 59.4\u2013117.1\u2009Gy (RBE)) and 27.5\u2009Gy (RBE)3 (range: 102.4\u2013110.6\u2009Gy (RBE)3) for the two receiving carbon ions. Most of the RIBCs were induced by doses \u226580\u2009Gy (RBE)3, occurring within two years from radiation in comparison to those induced by lower doses that developed after two years. Lobe volumes receiving more than 83, 90, and 100\u2009Gy (RBE)3 were significantly associated with RIBCs. The experience of The Hyogo Ion Beam Medical Center about radiation-induced brain injury after proton or carbon ions is reported in the paper of Miyawaki et al. . Twenty-We were able to find 11 studies reporting outcome and/or toxicity results relative to the use of particle radiation therapy for the treatment of paranasal sinus and nasal cavity malignancies. There are some common limitations for these studies that must be emphasized. All but one study were reHeavy particle radiotherapy is in its early phase of development, and it is mainly employed for unresectable or high-risk cases. This type of radiation therapy combined with photons or not is feasible and well tolerated by patients either as definitive or as adjuvant treatment. The delivery of the prescribed dose was possible in all studies, and all patients could complete their treatment. Altered fractionation schemes were safely employed to increase the therapeutic ratio , 44, 48,P = 0.32). Castro et al. [P = 0.03, P = 0.02). Local control rates compare positively with those from Institutions that used photon radiation therapy . Local co et al. reportedo et al. had one-o et al. at the mo et al. , 9 out oo et al. must be o et al. local coIn general, for particle radiation therapy studies, rates of distant metastasis free-survival and overall survival are not substantially different from other studies. Obviously, this means that an efficient systemic therapy capable of dealing with metastasis is needed for particle therapy too. Regional control is reported in the papers of Resto et al. and ZendAcute toxicity seems to be mild, well tolerated, and does not interfere with treatment delivery as pointed out in the specific paper by Jensen et al. . Late toAmong the possible risk factors, all the studies of particle radiation therapy found important the maximum dose delivered. 
This confirms both the concept that many optic structures are serial organs and the results from previous studies. Parsons et al. observedA volume factor for the irradiated OAR has been individuated as risk factor in the study of Hasegawa et al. . The dosDry-eye syndrome is a serious side effect associated with head and neck radiotherapy . In thisOther risk factors found by the specific studies were diabetes mellitus that has already been associated with toxicity by other authors , 61 and 3 and volumes of cerebral lobe receiving more than 83, 90, and 100\u2009Gy (RBE)3. Maximum dose and volume receiving a given dose are recognized risk factors for brain toxicity. From the review of published data, Lawrence et al. [\u03b1/\u03b2=\u2009\u20093). Schlampp et al. [3 of the temporal lobe. Carbon ions were found by Miyawaki et al. [Brain late side effects have always been among the most dreaded consequences of radiation therapy. Significant neurologic alterations have been described in association with photon and particle radiotherapy for paranasal and nasal cavity malignancies , 62. Thee et al. could cos et al. about 36p et al. observedi et al. to be sii et al. about 10i et al. in theirParticle radiation therapy is a promising tool for the treatment of paranasal and nasal cavity malignancies. Considering that it is in its early phases of development, it has shown to be feasible and well tolerated. Results regarding local control are encouraging and late toxicity is acceptable. Further research is needed to establish its exact role and its best combination with surgery and chemotherapy."} {"text": "Vasculogenic mimicry (VM), a new pattern of tumor microcirculation, is important for the growth and progression of tumors. Epithelial-mesenchymal transition (EMT) is pivotal in malignant tumor progression and VM formation. With increasing knowledge of cancer stem cell (CSC) phenotypes and functions, increasing evidence suggests that CSCs are involved in VM formation. Recent studies have indicated that EMT is relevant to the acquisition and maintenance of stem cell-like characteristics. Thus, in this review we discuss the correlation between CSCs, EMT and VM formation. This process is characterized by the loss of epithelial traits and the acquisition of mesenchymal phenotypes \u201323. ActiNormal tissues and tumors contain a small subset of cells, known as stem cells, with the capacity for self-renewal and the multipotency to differentiate into diverse committed lineages ,27. Tumo+/CD38\u2212 subpopulation of leukemic cells . From a retrospective study of 109 patients with IBC, the patient prognosis and metastasis trends showed a significant correlation with aldehyde dehydrogenase 1 (ALDH1) expression, a specific marker of CSCs. Both in vitro and xenograft assays showed that invasion and metastasis in IBC are mediated by a cellular component that exhibits ALDH activity \u2212 cadherin+ melanoma cells, which have the ability to form VM , loss of cell polarity and intercellular adhesion molecules (for instance E-cadherin and occludin), which is concomitant with upregulation of mesenchymal markers and acquisition of fibroblast-like morphology with cytoskeleton reorganization ,22,71. Tet al demonstret al\u201376, Sluget al, Twist (et al, SOX4 (7et al and ZEB et al, and sevet al\u201383. Snaiet al,79,84. Tet al. Twist1 et al. In a sp al(et al demonstr al(et al identifi al(et al,89. Furt al(et al.et al(et al(in vitro, while the well-differentiated cell line HepG2 did not form VM. 
These findings indicated that EMT is involved in VM formation.Recently, evidence has shown that EMT is involved in the process of VM formation. In VM-positive colorectal carcinoma samples, Liu et al found thet al. Furtheret al. Lirdpra al(et al have repet al(Researchers have been engaged in discovering the origin of CSCs for a number of years. It is widely accepted that tumor formation is due to the multistep mutation of genomes. Considering the longer lifespan of stem cells, normal stem cells suffer from the accumulation of mutations over time. Thus, it is hypothesized that CSCs derive from normal stem cells with genetic mutations, and this has been demonstrated by independent investigators ,94. In aet al classifiet al(et al(et al(+ T cells acquired certain characteristics of breast CSCs, including potent tumorigenicity, resistance to conventional treatment, and the ability to form spheroids. Fang et al(+/CD24\u2212 cells. Breast cancer cells exposed to TGF-\u03b2 and TNF-\u03b1 lead to the generation of breast cancer cells with stem-like characteristics by induction of EMT (et al(et al(in vivo tumorigenicity in nude mice and stem cell marker expression. Chen et al(+ cells exhibit a high level of expression of Snail, and knockdown of Snail significantly decreased the expression of ALDH1. These data suggest that epithelial cells within tumors are able to convert into CSCs via EMT (et al(An increasing body of evidence shows that EMT is associated with the acquisition of CSC properties. In 2008, Mani et al reportedet al,97. More al(et al also ind al(et al found thang et al demonstrn of EMT . In addiMT (et al identifiMT (et al. In HNSC al(et al found thhen et al revealed via EMT . MoreoveMT (et al demonstret al(et al demonstrated that Snail promotes the induction of Flk1+ endothelial cells in an early subset of differentiating mouse embryonic stem cells, depending on fibroblast growth factor signaling as well as the repression of the miR-200 family (et al(VM allows tumor cells to express the endothelial phenotype and play a similar functional role to endothelial cells in forming blood vessel-like structures. In fact, both epithelial and mesenchymal markers have been observed in tumor cells engaged in VM formation ,106,107.et al found th0 family . Hypoxia0 family ,110\u2013112.ly (et al found thly (et al,115. Thu3 depending on the diffusion of oxygen and nutrients (et al(It is clear that tumors are able to grow to a size of ~1\u20132 mmutrients . In ordeutrients . Keunen ts (et al found thet al(In reality, the coexistence of angiogenesis and VM is common within aggressive tumors. Angiogenesis inhibitors have little or even no effect on VM ,19 and Vet al reportedFor quite some time, the survival rate of patients with aggressive tumors has remained at a low level, despite the administration of surgery, chemotherapy and radiotherapy. The existence of CSCs was thought to be an underlying cause. Although CSCs comprise only a small proportion of tumor cell populations, CSCs have high resistance to multiple chemotherapeutics and ionizing radiation. Remaining CSCs are able to induce recurrence following treatment with chemotherapy and radiotherapy. Furthermore, it has been demonstrated that CSCs are implicated in VM formation. In this context, CSCs have been considered as a promising treatment target in cancer patients with VM. It has been observed that tumors undergoing the process of EMT acquire resistance to chemotherapy . 
EMT is"} {"text": "We have applied the mathematical model of compartmentalized energy transfer for analysis of experimental data on the dependence of oxygen consumption rate on heart workload in isolated working heart reported by Williamson et al. The analysis of these data show that even at the maximal workloads and respiration rates, equal to 174 \u03bcmol O2 per min per g dry weight, phosphocreatine flux, and not ATP, carries about 80\u201385% percent of energy needed out of mitochondria into the cytosol. We analyze also the reasons of failures of several computer models published in the literature to correctly describe the experimental data.In this review we analyze the recent important and remarkable advancements in studies of compartmentation of adenine nucleotides in muscle cells due to their binding to macromolecular complexes and cellular structures, which results in non-equilibrium steady state of the creatine kinase reaction. We discuss the problems of measuring the energy fluxes between different cellular compartments and their simulation by using different computer models. Energy flux determinations by This value matches the maximum total activity of CK in the direction of MgATP synthesis, 62.4 \u00b1 4.5 mM/s, measured in perfused rat hearts . The conet al. [et al. and showModel also reproduces the observed PCr levels at different workloads .et al. described above [ed above \u20138.et al. for 2- dimensional analysis of metabolites\u2019 diffusion within ICEUs [The A-S model was upgraded further by Vendelin in ICEUs . The newin ICEUs . Both moin ICEUs , explainin ICEUs ,103. Nowin ICEUs proportim of 31.7 s\u22121, in contrast to that value in [\u22121. What are the declared reasons for these simplifications? An early observations of Erickson-Viitanen et al. 1982 [in vivo? Using isolated mitochondria, it was shown that by simple addition of creatine to a mitochondrial suspension, in the presence of respiratory substrates without added nucleotide, these mitochondria produce and release PCr into the supernatant that is formed by intra-mitochondrially cycling ATP and ADP via creatine-stimulated respiration . The results of probability model calculations are essentially in line with recent direct experimental measurements by Timokhina et al. [in vitro and in vivo.Further, experimental studies with application of Metabolic Control Analysis showed very high control strength within the MI structure at high level of ATP in the permeabilized cardiomyocytes ,77. Thisa et al. . Basing a et al. on nullimito. With taken general model parameters, the experimental tmito values of about 4 seconds were fitted by membrane permeability parameter PSm = 13.3 s\u22121, while with PSm = 0.1 s\u22121 from [mito was about 15 s models [Very high permeability of MOM to ADP was inferred by van Beek ,103 froms\u22121 from the tmit31.7 s\u22121 . In the ) models ) should \u22121*kg wm\u22121) to medium (0.678 mmol ATP*s\u22121*kg wm\u22121) workload. Steady-state parameters for these workloads are indicated in Table 3 in [mito can be approximated even at imposed severe diffusion restrictions for ADP on MOM. The parent relation of our models [et al. [et al. in order to understand how is it possible to use similar models to attain principally different results. Data of such modeling, performed with our model [mito value at very high, about 90%, fraction of PCr diffusion out the rat heart mitochondria. Myoplasmic PCr/Cr ratios during the diastole are high, 2.6\u20131.8. 
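As a rough consistency check on the flux partition quoted in the abstract above, and assuming the PCr/O2 stoichiometry of about 6 that is discussed later in this review, the maximal fluxes scale approximately as

\[ J_{\sim P} \approx 6 \times V_{O_2} = 6 \times 174 \approx 1040\ \mu\mathrm{mol\ min^{-1}\ (g\ dry\ wt)^{-1}}, \qquad J_{\mathrm{PCr}} \approx 0.8\text{ to }0.85\ J_{\sim P} \approx 830\text{ to }890\ \mu\mathrm{mol\ min^{-1}\ (g\ dry\ wt)^{-1}}, \]

with the remaining 15 to 20% of high-energy phosphate export carried by direct ATP/ADP exchange and by the adenylate kinase and glycolytic phosphotransfer pathways mentioned at the end of this section. This is only an order-of-magnitude illustration; the compartmentalized models discussed below compute these fluxes explicitly.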
In the System A on the second line in ble 3 in for glucr models ,89 to a [et al. ,103 allomito value is high, 8.7 s, out the experimental estimations. In similar conditions van Beek estimated tmito as high as 15 s and adenine nucleotides. Last line in mito and PCr/Cr ratios, not affecting the low fraction of energy export by PCr. The question arises, how justified are the changes in model parameters made in van Beek\u2019s group? Authors ascribe these changes to peculiarities of rabbit heart muscle [High tbout 4 s . This rebout 4 s \u201415 \u00b1 8%.t muscle . Availab3 contents are slightly lower in rabbit heart mitochondria than in protein ), but tectively ). Takingectively ), we havectively . As relaectively ,103, the [et al. : in rabb [et al. . This es [et al. . Based o [et al. ,103 by tet al., the stability of total CK unidirectional flux is lost at extremely high energy demand levels leading to a drop of total CK unidirectional flux and to a bypass of CK shuttle by direct ATP transfer from the mitochondria to the myofibrils [A recent manuscript published in the Journal of Biological Chemistry by Vendelin, Hoerter, Mateo, Soboll, Gillet and Mazet entitled: \u201cModulation of energy transfer pathways between mitochondria and myofibrils by changes in performance of perfused heart\u201d is also ofibrils . For treofibrils .in vivo kinetic studies and metabolic flux determination is the isotope tracer method, as described above. Another technique of labeling the phosphoryl groups in ATP and PCr in heart cells is 31P NMR saturation or inversion transfer, already applied in many laboratories [et al. [et al. [These results and their interpretation, made in the work referred to above, are not consistent with a large body of existing data, including Vendelin\u2019s own results published before in many articles ,102,104.ratories ,47\u201350. Tratories and Kupr [et al. . In sevein vivo, tubulin binding to voltage-dependent anion channel in mitochondrial outer membrane specifically decreases its permeability for ATP, but not for creatine and phosphocreatine and coupled reactions in Mitochondrial Interactosome, consisting of tubulin, VDAC, MtCK and ATP Synthasome result in effective phosphocreatine (PCr) synthesis with PCr/O2 ratio close to 6 [18O2 measurements described above. There is, however, also a certain proportion of MM-CK in the cytoplasm that is in a quasi equilibrium state, which does not depend on the workload, if PCr and ATP contents do not change, as it is seen in heart in the state of metabolic stability [31P-NMR saturation or inversion transfer techniques would still show the total CK activity that is present. If the cells are intact, this activity would not change, and CK flux measured by 31P NMR therefore should not change either and never decrease. Vendelin et al. [The CK pathway is now described in great detail \u201328. In hose to 6 . This letability . Even ifn et al. now are n et al. ? For the2+ concentration together with isoprenaline, a method which is known to induce severe damage in cardiac cells [2+ concentration induced by catecholamines is known to induce mitochondrial permeability transition pore (PTP) opening, mitochondrial swelling, MtCK detachment, sarcolemmal rupture and CK release. 
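For reference, the reaction catalysed by the creatine kinase isoenzymes discussed throughout this section can be written in its conventional form as

\[ \mathrm{MgADP^{-} + PCr^{2-} + H^{+} \;\rightleftharpoons\; MgATP^{2-} + Cr}, \]

with mitochondrial MtCK, functionally coupled to the adenine nucleotide translocase, operating net in the direction of PCr synthesis, and cytosolic/myofibrillar MM-CK operating net in the direction of ATP regeneration. The non-equilibrium steady state of this reaction in the cell, rather than the near-equilibrium behaviour assumed in some of the models criticized here, is a central point of the analysis in this review.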
It is also known that uncoupling of MtCK from the mitochondrial adenosine nucleotide translocase (ANT), which results in a loss of creatine-stimulated respiration, ultimately leads to significantly increased production of highly reactive ROS or RNS species [et al.: total content of ATP decreased from 7.78 to 4.7 mM in the presence of high Ca and isoprenaline. The dramatic 40% decrease of ATP and 200% increase of Pi .et al. is equal to 75 \u03bcmol O2 per min per g dry weight, less than half of the maximal respiration rate 168 \u03bcmol O2 per min per g dry weight obtained in experiments with working heart model when workload change is induced by changing ventricular filling on the basis of Frank-Starling mechanism [Note that with this drastic means of catecholamine-induced increase in workload in the presence of 4.0 mM Ca leading to degradation of 40% of ATP, manifold increase of Pi content and loss of CK flux, respiration rate achieved in the work by Vendelin echanism ,99.Most surprisingly, the authors themselves were aware of the importance of the stability of the metabolites\u2019 levels in the studies of energy fluxes: in their previous work they wriThe other main question is: how adequate are the \u201cmathematical models\u201d used. et al. [In conclusion, both correct modeling by taking into account all existing experimental data, as well as qualified experiments avoiding artifacts such as induced in the work by Vendelin et al. , are nee31P saturation transfer spectroscopy to study the kinetics of the creatine kinase in muscle cells by saturating \u03b3-ATP phosphate and recording the transfer of magnetization to PCr, Nabuurs et al. discovered the binding of ATP and ADP to macromolecular complexes in the cells, explaining the mechanisms of ATP compartmentation and non-equilibrium state of the creatine kinase reaction in the cells [18O transfer method that in the heart, the phosphocreatine fluxes from mitochondria into cytoplasm transport about 80% of energy needed for contraction and ion transport, and about 20% of energy is transported into cytoplasm via adenylate kinase and glycolytic phosphotransfer pathways [There is an excellent agreement between recently published data from many laboratories, which give now the possibility of quantitative description of the energy fluxes between mitochondria and cytoplasm in muscle cells. By using the pathways \u20139. Very pathways ,89. The pathways and by cpathways ."} {"text": "Thus, in addition to decreasing the frequency of occurrence and growth of tumours, fucoxanthin has a cytotoxic effect on cancer cells. Some studies show that this effect is selective, i.e., fucoxanthin has the capability to target cancer cells only, leaving normal physiological cells unaffected/less affected. Hence, fucoxanthin and its metabolites show great promise as chemotherapeutic agents in cancer. Fucoxanthin is a marine carotenoid exhibiting several health benefits. The anti-cancer effect of fucoxanthin and its deacetylated metabolite, fucoxanthinol, is well documented. In view of its potent anti-carcinogenic activity, the need to understand the underlying mechanisms has gained prominence. Towards achieving this goal, several researchers have carried out studies in various cell lines and With the establishment of the anti-carcinogenic property of fucoxanthin, it was important to understand the mechanism by which it exerted its effect in cells. 
With this goal in mind, several researchers have been trying to elucidate the molecules and pathways that can be modulated and regulated by fucoxanthin. Mechanistic studies by various researchers have shown that fucoxanthin can affect many cellular processes, and so far have failed to establish a single primary mechanism of action. The objective of this review is to summarize the effect of fucoxanthin in cancer and the underlying mechanisms that have been elucidated in reported studies. The various mechanisms discussed further in this review are shown in Fucoxanthin is a marine carotenoid found in numerous classes of microalgae and brown macroalgae (phaeophytes) ,2. The cet al. have revN-ethyl-N\u2032-nitro-N-nitrosoguanidine (ENNG)-induced mouse duodenal carcinogenesis. In addition to the decrease in the percentage of tumor bearing mice, the mean number of tumors per mouse was also significantly lower in fucoxanthin fed mice. A decrease in the percentage of tumor-bearing mice as well as a decrease in the number of tumors induced per mouse by ENNG in the fucoxanthin group was also observed by Okuzumi and co-workers [et al. [et al. [in vivo study by Ishikawa et al. [Nishino has repo-workers ,11. Admi-workers . Das et [et al. and Kim [et al. have obsa et al. revealeda et al. .et al. [2O2 resistant cell lines, they have concluded that reactive oxygen species (ROS) is not the mainstream pathway for apoptosis caused. Contradictory to this, Kim et al. [2O2 and O2\u2212 as a result of treatment with fucoxanthin along with accumulation of cells containing sub G1 DNA content (indicating cell cycle arrest in G1 stage). On co-treatment with a commercial antioxidant (NAC), the number of apoptotic bodies and DNA fragmentation of cells was decreased, attributing the apoptotic effect of fucoxanthin to ROS generated. Thus, they have concluded that cytotoxic effect of fucoxanthin is by ROS generation that triggers apoptosis in HL-60 cells, which is contrary to the results of Kotake-Nara et al. [et al. [Several reports on the potent antioxidant property of fucoxanthin and its metabolites are available ,18,19,20et al. have hypet al. , they ham et al. have obsa et al. . Shimoda [et al. have rep [et al. have repin vivo has been explored by many researchers [et al. [et al. [The effect of fucoxanthin on cell viability in cancer cells such as GOTO, HL-60, Caco-2, HepG-2, Neuro2a, DU145, PEL, PC-3, HeLa, H1299, HT-29, DLD-1 cells, and earchers ,32,33,34 [et al. comparedet al. [et al. [et al. [et al. [et al. [Yamamoto et al. have con [et al. have rep [et al. have rep [et al. on 11 ca [et al. have repUndaria pinnatifida (Wakame), for one week [et al. [Thus, fucoxanthin was found to have a significant effect on cell viability and anti-proliferative effect and in several studies the potency was different for different cell types/lines. In addition, in several studies, the normal cells were unaffected/less affected than cancer cells indicating differential effect of fucoxanthin and focused targeting of cancer cells ,36,37,38one week . In anot [et al. have rep0/G1 stage by fucoxanthin has been observed in many studies involving different cell lines [et al. [2/M phase. Muthuirullappan and Francis [et al. 
[0-G1 phase with a significant decrease in cells in the S phase, indicating a block in the progression of the cells to S phase from the G0-G1 phase, resulting in inhibition of proliferation of the cells has been reported in GOTO cell line [1 stage and this was accompanied by alteration in the expression of more than 50 genes in HEPG2 cells [et al. [et al. [1 peak along with concentration of cells in G0/G1 phase and their decrease in the S and G2/M phases. In addition, pRb, cyclin D1, cyclin D2, and CDK4 levels were decreased along with increased p15INK4B, p27KIP1 levels. Murakami et al. [in vitro.The arrest of the cell cycle in the Gll lines ,44,45,46 [et al. have obs Francis have att [et al. have repell line . FucoxanG2 cells . In addi [et al. have fou [et al. have obsi et al. have repet al. [0/G1 phase at lower concentrations of fucoxanthin (25 \u00b5M), followed by apoptosis at high concentrations (>50 \u00b5M) with increased cells in sub G1 phase (index of apoptotic DNA fragmentation) and fragmentation of nuclei. Low and high concentrations of fucoxanthin up regulated the protein and mRNA levels of p21WAF1/Cip1 followed by increased levels of pRb (retinoblastoma protein), while high levels up regulated p27Kip1 as well (cdk inhibitory proteins) leading to the conclusion that fucoxanthin-induced G0/G1 cell arrest is mediated by the up regulation of p21WAF1/Cip1. They have speculated that the apoptosis observed at higher concentration may be due to partial conversion of fucoxanthin to its metabolites such as fucoxanthinol. From their results they have concluded that p21WAF1/Cip1 is important for the cell cycle arrest while p27Kip1 regulation may be a means for pro-apoptotic effect of fucoxanthin. Reduction in the phosphorylation of pRb protein, which is a regulator of cell cycle progression and down-regulation of other cell cycle regulatory proteins like cyclin D2, CDK4, CDK6, and c-Myc was reported by Yamomoto et al. [et al. [Kip1 p21Waf/Cip1, p57, and p16 were unchanged over this period indicating that inhibition of cyclin D/cdk4 activity by fucoxanthin was brought about by suppressing the levels of proteolysis and transcription of cyclin D. An increase in proteosomal activity was observed after 12 h of fucoxanthin treatment. Thus, they have suggested that fucoxanthin induced cell cycle arrest by suppression of cyclin D by proteosomal degradation and transcriptional repression. They have speculated that the decreased cyclin D expression may be due to change in the GADD45A expression.Das et al. have obso et al. . A decre [et al. . Fucoxanet al. [Apoptosis of cancer cells is a promising method to control and treat cancer. In this regard, the apoptotic effect of fucoxanthin is of interest and has been studied by several researchers. Hosokawa et al. have repet al. . DNA fraet al. . Howeveret al. [et al. [The results obtained by Konishi et al. show the [et al. have repExpression of v-FLIP and v-cyclin was inhibited and may be responsible for growth inhibition and apoptosis observed with fucoxanthin treatment . The autet al. [Metastasis is the stage of cancer at which tumor cells acquire the advantageous characteristics that allows them to escape from the primary tumor and migrate to surrounding and distant organs and tissues. Metastasis involves the interaction of the tumor cells with numerous factors and cell components including matrix metalloproteinases (MMPs). 
MMPs are thought to assist tumor cells in metastasis and their enhanced levels have been associated with extra-cellular matrix degradation and cancer cell invasion . Chung eet al. have stuL, A1, Bcl-w, and Boo, while the pro-apoptotic members include Bax and Bak, Bok, Bcl-xs, Bim, Bad, Bid, Bik, Bmf, Puma, Noxa, and Hrk [et al. [et al. [L and XIAP by fucoxanthin was observed by Yamamoto et al. [et al. [et al. [et al. [L, Kotake-Nara et al. [L was unaltered, unlike other apoptosis-inducing agents that modulate the ratios of the pro- and anti-apoptotic proteins, fucoxanthin may operate through a different pathway.The family of Bcl-2 proteins has anti-apoptotic and pro-apoptotic members. Anti-apoptotic Bcl-2 family proteins include Bcl-2, Bcl-x and Hrk ,55. Seve and Hrk ,27,50. N [et al. observed [et al. studied o et al. and Kim [et al. , while I [et al. reported [et al. observeda et al. on the oet al. [et al. [et al. [The caspases are cysteine proteases that control apoptosis. The extrinsic pathway involves the tumor necrosis factor and activates the caspases 8 and 10 while the intrinsic pathway involves the mitochondria and release of cytochrome c from damaged mitochondria, activating Caspase-9, which is an initiator and in turn can cleave and activate the effector Caspases such as Caspases-3, -6, and -7. These two Caspase pathways, intrinsic and extrinsic, can result in apoptosis. The intrinsic pathway involving the mitochondria and caspases-3, -6, -7, and -9 are controlled by the Bcl-2 protein family . The actet al. , Ganesan [et al. , Zhang e [et al. have obs [et al. . Fucoxan [et al. . Express [et al. . Fucoxan [et al. . The res [et al. .2 terminal kinases, also called as stress-activated protein kinases (SAPK); (3) p38 enzymes including p38\u03b1, p38\u03b2, p38\u03b3, p38\u03b4, and the recently identified; (4) ERK5. ERK1 and ERK2 regulate cell processes like mitosis, meiosis, post mitotic functions, and are responsible for proliferation, cell division, differentiation, development and survival. STAT proteins such as Stat3 are substrates that are phosphorylated by ERK and are activators of transcription. The JNK/SAPKs are activated by conditions such as oxidative stress and result in programmed cell death or apoptosis, growth and cell cycle arrest as well as inflammation and tumorigenesis and cell survival under certain conditions. C-Jun is a component of the AP-1 complex that is an important regulator of gene function and is activated by environmental stress, radiation and growth factors. The p38 MAPKs control the expression of several cytokines and are involved in the immune response mechanism in addition to cell motility, apoptosis, chromatin remodeling, and osmoregulation [The MAPK family or the mitogen activated protein kinase includes four well characterized sub-groups: (1) ERK1 and ERK2 that are extracellular signal kinases; (2) JNK1, JNK2, and JNK3 with the c-Jun NHgulation ,59. Gaddgulation ,61. et al. [et al. [Fucoxanthin was found to attenuate cisplatin induced phosphorylation of ERK, p38, and P13K/AKT (phosphatidylinositol 3 kinase family) in the studies carried out by Liu et al. . Their e [et al. have obsN-terminal kinases (SAPK/JNK)) and its association with the GADD45 activation for cell growth arrest in LNCap cells (prostate cancer). GADD45A may be implicated in the G1 arrest observed in the study. While the GADD45A gene was enhanced, GADD45B gene expression was unaffected after fucoxanthin treatment. 
With respect to the MAPK family, SAPK/JNK was increased, phosphorylation of ERK 1/2 was reduced and phosphorylation of p38 was unaffected. While it is suggested that MAPKs including SAPK/JNK induce GADD45A in a p53 dependent or independent manner, the author has suggested a p53 independent mechanism in the present study. In addition, inhibition of the SAPK/JNK pathway reduced GADD45A induction while inhibition of ERK 1/2 and p38 pathways stimulated GADD45A induction. The author has suggested that each MAPK plays a different role in GADD45A induction and G1 arrest by fucoxanthin based on the negative regulation of p38 MAPK resulting in increased GADD45A expression along with other observations in prostate cancer cells. Contrasting results obtained for ERK 1/2 MAPK in LNCap cells and in DU145 cells earlier indicate that GADD45A may not be the only factor responsible for the G1 cell arrest observed in that study. Thus the growth inhibitory effect exhibited by fucoxanthin may in part be due to a GADD45A-dependent pathway and the enhanced GADD45A expression and G1 arrest are positive regulated by SAPK/JNK in prostate cancer cells.In a separate study, Satomi and Nishino found thet al. [et al. [Yu et al. have obs [et al. reportedThe nuclear factor kappa B (NF-\u03baB) is a family of closely related transcription factors that are held in the cytoplasm in the inactive form by their interaction with the inhibitor of \u03baB (I\u03baB). I\u03baBs include I\u03baB\u03b1, I\u03baB\u03b2, I\u03baB\u03b5, and BCL-3. The phosphorylation of I\u03baB results in activation of NF\u03baB and its translocation to the nucleus, followed by induction of target genes and the resulting effects. NF-\u03baB may be activated by many cytokines, growth factors and their receptors, tyrosine kinases, tumor necrosis factor receptor families, other signaling pathways, such as Ras/MAPK and PI3K/Akt. NF-\u03baB promotes resistance to apoptosis and may also exhibit pro-apoptotic properties. NF-\u03baB inhibits p53-induced apoptosis by up-regulating anti-apoptotic genes, and decreasing p53 levels. As NF-kB is associated with several tumor/cancer related processes, such as its activation by pro-inflammatory cytokines and its ability to induce cell proliferation and anti-apoptotic gene expression, as well as induction of angiogenesis, it is often considered as a hallmark of cancer ,63. et al. [Cisplatin has the potential to bind to the DNA molecules, forming platinum-DNA adducts which interfere with transcription and replication of the DNA, resulting in cell death ,65. Seveet al. have alset al. . Differeet al. .et al. [Due to their wide range of substrate selectivity CYP3A enzymes play a major role in metabolism and are of special relevance in the metabolism of clinically used drugs . The cytet al. have exaet al. have obset al. [Gap junctional intracellular communication (GJIC) is a mechanism for intercellular cell communication and operates at sites of cell adhesion where plasma membranes of cells can be connected by buried paired channels. Thus, GJIC regulates the communication between cells of tissues of an organ, allowing for direct communication between the cytoplasm of cells without transit through the extracellular space, making it possible for the cells to achieve a common and integrated target/metabolic activity . Gap junet al. have attet al. have alset al. [et al. [et al. [et al. 
[N-Myc oncogene, known to be over expressed in neuroblastoma, was reduced by fucoxanthin treatment in GOTO neuroblastoma cell line and this effect was found to be reversible when fucoxanthin was removed from the media . Surviviet al. studied [et al. . Yamamot [et al. and Ishi [et al. also obset al. [Sugawara et al. have repFucoxanthin influences a multitude of molecular and cellular processes. It exerts strong effects on cancer cells and shows synergistic activity in combination with established cytotoxic drugs. This raises the possibility that it could become an interesting anti-cancer compound in various types of cancer."} {"text": "A commentary onMen and women show distinct brain activations during imagery of sexual and emotional infidelityby Takahashi, H., Matsuura, M., Yahata, N., Koeda, M., Suhara, T., and Okubo, Y. (2006). Neuroimage 32, 1299\u20131307.Among complex social emotions, there is a very important emotion called \u201cromantic jealousy.\u201d \u201cRomantic jealousy is defined as a complex of thoughts, feelings, and actions which follow threats to self-esteem and/or threats to the existence or quality of the relationship, when those threats are generated by the perception of a real or potential attraction between one's partner and a (perhaps imaginary) rival\u201d White, , p. 24. attention\u201d or \u201cattentional networks\u201d; and \u201cattention and evaluative bias\u201d to explain their results, which is the main focus of this paper.Takahashi et al. discusseTakahashi et al. used funThe full brain volumes were imaged using 40 transaxial contiguous with a slice thickness of 3 mm. To assess the specific condition effect, they used the contrasts of sexual infidelity minus neutral (SI-N) and emotional infidelity minus neutral (EI-N). Correlation coefficients between the degree of activation and rating of jealousy for emotional infidelity were also calculated.They did not find gender differences in ratings, but they did find neural differences. Common brain areas such as the frontal regions and the cingulate cortex were activated in both groups for the emotional infidelity condition. However, men demonstrated greater activation in the amygdala and hypothalamus than women. In contrast, women demonstrated greater activation in the posterior superior temporal sulcus (pSTS) (angular gyrus). A positive linear correlation was also observed between self-rating of jealousy for emotional infidelity and the degree of activation in insula for men and in pSTS for women. Takahashi et al. explaineattention and evaluative bias\u201d toward the processing of negative information, which might contribute gender differences in the processing of emotional information. It has been reported that females showed remarkable attention and evaluative bias even for the processing of moderately negative information, whereas males showed neither biases for these information and the cingulate cortex, both of which areas are involved in attentional networks. Pessoa et al. (Apart from gender differences, Takahashi et al. did not a et al. suggestea et al. suggestea et al. . This nea et al. , despiteattention and evaluative biases,\u201d and \u201cattentional network\u201d mechanisms underlying emotional processing in males and females. These interpretations have implications to understanding the complete picture of gender differences observed in neural processing of emotion information. 
Future work should be done toward investigating the functional differences between males and females in the processing of different emotions. Together with explanations reported in the Takahashi et al. study, i"} {"text": "This review was carried out through a careful search of articles published in the Indian Journal of Psychiatry using the search terms manic depressive psychosis and bipolar mood disorder. Articles in the following areas are included: 1) etiology: genetic studies; 2) etiology: neuropsychological impairment; 3) adult bipolar disorder; 4) epidemiological studies; 5) clinical picture and phenomenology; 6) course of bipolar mood disorder; 7) juvenile-onset bipolar affective disorder; 8) secondary mania; 9) clinical variables and mood disorders; 10) disability; 11) comorbidity; 12) treatment: biological; 13) recent evidence; and 14) pharmacological evidence in special populations. Although the contribution appears substantial, there are still many areas that need careful attention. The findings of the various studies are reviewed from the Indian point of view. In 1896, Kraepelin described 'manic-depressive psychosis' as a circumscribed disease entity. Ever since, manic depressive psychosis, now termed nosologically 'bipolar' mood disorder, has been studied in the Indian perspective. Although there seems to be no orderliness in the research pursuit of understanding this disorder in the Indian context, one gets the impression that all aspects, including nosology, clinical syndromes, course, pharmacology and special populations, have been examined from the Indian context. This review is an attempt to survey research in the Indian context. A review of the studies on bipolar mood disorders (manic depressive psychosis) published in the Indian Journal of Psychiatry over the last several decades conveys the impression that, although many aspects have been studied, there is no consistency in reports across the country. The published work ranges from case reports to studies and reviews. Representative studies have come from across the country and in a way reflect the diversity of various centres; the work does not come from one centre alone. We attempted to collect all the studies published in this area by going through as many old issues of the Indian Journal of Psychiatry as were available. vis a vis normal groups in various measures. The significant finding was that patients with manic depressive psychosis and a positive family history differed as a group from those with a negative family history, leading the authors to conclude that genetic factors do play a role in manic depressive psychosis. Dermatoglyphics as a diagnostic tool was studied in 100 normal subjects, 60 manic depressive psychosis subjects, 30 unipolar subjects and 30 bipolar subjects at Chandigarh.[Among 25 patients maintained for 1.4 years at an optimum serum level of 0.6 mEq/L, the presence or absence of a positive history of psychiatric illness in first-degree relatives of family members was noted. The patients were divided into two groups, responders and non-responders, on the basis of chart reviews. Responders significantly more often had a positive family history of affective illness. Although this study had only a small sample, it was the first in the Indian literature to comment on the genetic aspects of pharmacological response.et al. 2000 at Central Institute of Psychiatry (CIP), Ranchi;[Genomic imprinting in bipolar affective disorder was studied by R. Kumar, Ranchi; in the fM Taj and Padmavathi R have asset al.[Chopra HD et al. 
have attChatterjee and Kulhara have repIn this study, the behavior affect, speech, delusions and hallucinations was done using PSE Items and the periodic assessments helped to show the resolution time of each symptoms preponderance of male subjects was noted. Mean duration of current episode was shorter in males compared to females.Symptomatologically, Indian patients differed significantly as having distractibility as symptoms and more of embarrassing behavior. Hostile irritability is the dominant affect, 62.5% had one or more delusions. In this sample, which is more than many studies reported by other international studies notably Taylor and Abrams (1973) Carlson and Goodwin (1973) in terms of recovery by four weeks, delusions and hallucinations disappear.Authors comment that where some symptoms resolving quickly, where some resolve quickly, others much slowly. Only 15% remained hospitalized for 90 days, and majority got discharged early; authors conclude that in India Mania patients resolve early with treatment. This is one study which systematically studied the clinical symptomatology in the Indian context, at PGIMER Chandigarh.et al.[et al. 1998).R Kumar et al. have carThere were three factors which had significant variance-factor number one had motor activity, pressured speech, racing thoughts, increased sexuality, increased contact as the clinical symptoms and, in essence, picks up psychomotor acceleration as the main factor and has largest variance in the patient sample. The second factor picked up thought disorder and psychosis with grandiosity, lack of insight and paranoid factors. The third factor, which has about 13.8% variance, represented mood with large percent having irritability (82%), euphoria (51%), aggression (70%), anxiety (59%) as phenomenology, the study has a good sample size, good methodology representing Indian population.et al.[et al. and using survival plot of resolution. The authors attempted to see gender differences in resolution of manic symptoms, in sample of 40 . The attempt included to see the differences both in severity and symptomatology across the genders. There were significant difference at Index rating on certain items in females-viz. increased sexuality and aggression.R Kumar et al. have stuet al. In contret al. the methet al.[Srinivasan et al. have attet al. studied et al. This data is from the Primary Health Centre, Sakalwara, adapted at NIMHANS, 27 patients of 34 patients evaluated had not received any treatment at all, though there were many episodes 15% of patient has had rapid cycling, episodes of manic accounted for 72% of episodes. None of the variables examined could predict the total number of episodes. However, patients receiving psycho pharmacological agents are likely to develop rapid cycling. A mania predominant course was observed in this study cohort.A naturalistic course of bipolar disorder in rural India was repoThere are very few studies in this area, published in Indian journal of Psychiatry, except for case reports.et al.[Narasimha Rao IV L et al. have foret al. has repoTricyclic anti depressant-induced mania was presented in a case report of four mono polar depressed patients who developed mania after tricyclic anti depressant therapy. This stuet al.[There are many case reports of mania occurring due to other medical conditions or due to drugs were reported-Chronic mania due to polio encephalomyelitis, Tertiaryet al. have triYadav R and Pinto C report aet al.[et al.[Mania in HIV infection was reported by Venugopal D et al. 
where thl.[et al. discussel.[et al. seen foll.[et al. Turners l.[et al.Mania starting during hypno therapy into manet al.[Gurmeet Singh et al. have attet al.[Singhal A K et al. have invet al.[Tapas K A et al. have attet al.[H Taroor et al. attempteet al.[et al.[A few case reports of comorbid, other psychiatric illnesses in bipolar mood disorders were reported in Indian literature. Dysmorphophobia as a comorbid disorder occurring in both depressive episodes and manic episodes in a case of bipolar mood disorder was described by Sengupta et al. The diffl.[et al. have disThere has been a large volume of research reported in the treatment aspects. There have been models to explain the treatment approaches and various clinical trials to show early evidence for various drugs both from experience and experimental.Interestingly, the first report on treatment of manic depressive psychosis treated with long term electro convulsive therapy was by Bhaskaran, who descet al.[Lithium kinetics was studied by Pradhan N et al. They havet al.[et al.[N Desai et al. Gangadhal.[et al. have, inl.[et al. in a firI. P. Khalkho and Khess C. R. J have attThe role of quetiapine monotherapy was presented in a case report by Khazaal Y arguing et al.[Solanki R K et al. have, inet al.[M Trivedi et al. have, inPradeep R J has highIn an interesting case report, V Agarwal and Tripathi report oR Balon has in het al.[Khandelwal S K et al. have repMohandas E and Rajmohan V have in There has been a lot of research on bipolar mood disorders. Significantly, not many studies have been reported on biological, neuro imaging and genetic studies and long term course of bipolar disorders. There is also less replication of significant aspects of the bipolar mood disorder. Most studies have a small number as sample, the later studies methodologically improved. Multi centric studies done across the study with sound methodology and in various above areas of different areas will have to be done to generate the Indian data. Likewise, pharmacologically, the often repeated subjective to experience of optional dose of various medications will also have to be studied.Transcultural differences have to be highlighted by attempting to do research in these areas. More naturalistic studies done across rural and urban background will help us to understand the course of bipolar in Indian context. The specific factors of psycho social and compliance issues in drug and therapy in the Indian context need to be studied."} {"text": "Sir,et al.[et al. concluded, \u201cCareful and thorough sputum examination in cases of tuberculous pleural effusion may help as a diagnostic tool and it has therapeutic and epidemiological implications\u201d.[et al. proposed that a combination of pleural fluid with sputum sample and N-PCR improved the diagnosis of pleural tuberculosis.[I read the recent publication sputum on AFB in tuberculous pleural effusion by Chaudhuri et al. with grecations\u201d. I acceptcations\u201d. The use rculosis."} {"text": "Mood stabilizers have revolutionized the treatment of bipolar affective disorders. We review data originating from India in the form of efficacy, effectiveness, usefulness, safety and tolerability of mood stabilizers. Data is mainly available for the usefulness and side-effects of lithium. A few studies in recent times have evaluated the usefulness of carbamazepine, valproate, atypical antipsychotics and verapamil. Occasional studies have compared two mood stabilizers. 
Data for long term efficacy and safety is conspicuously lacking. Psychopharmacology has revolutionized the understanding and treatment of major mental disorders. With the help of psychopharmacological agents, not only is the neurobiology of various psychiatric disorders being understood, but effective treatments have improved relapse rates, symptom free period, significantly improved the quality of life of patients and have reduced the burden experienced by patients and their families.Prior to lithium, typical antipsychotics and electroconvulsive therapy was used for management of bipolar disorders. However, over the years many drugs have been evaluated as mood stabilizers and have been shown to be efficacious, although the definition of a mood stabilizer is not yet settled.Psychopharmacological research in India regarding mood stabilizers has lagged behind the data from the West. However, there has been a shift of research from mere case series to attempts at multicentric double blind controlled trials. In this review, we would review data on mood stabilizers originating from India on various mood stabilizers. The review shall focus on the research published in Indian Journal of Psychiatry and studies reported in PubMed indexed journals.Amongst various mood stabilizers now available in India, lithium has been the most researched of all. There are a few studies on other mood stabilizers such as carbamazepine and sodium valproate.After its introduction in India in the late 1960s, lithium aroused a lot of research interest in the 1970s and 80s, with most of the research revolving around open trials to see its usefulness in various disorders (mainly mood disorders) and its side-effect.et al.[564et al.[The mood stabilizing property of lithium has led Indian researchers to see its effects in affective disorders. In their earliest work on role of lithium on mood disorders, Dube et al. in an unet al.\u20138 Most oet al.568 Studiet al.56 and goodet al.56 and preset al.565 Patientet al.5649 Studieet al.564 whereas et al.564 In an ef564et al. found th564et al.With regard to early onset bipolar disorder a retrospective chart review found that lithium was the most commonly used mood stabilizer, followed by valproate. However, during the follow-up period of three to 56 months it was seen that 28% of subjects relapsed despite being on apparently adequate doses of lithium.et al.[With regard to organic mania, a recent case report by Loganathan et al. reportedet al. (1980)[+, K+, Li+ to extra cellular concentration of these ions in a group of 22 patients of bipolar affective disorder who were divided into responders and non-responders. No significant difference between two groups was found as regards to these values.Sampath l. (1980) determinl. (1980) examinedIn a double blind placebo controlled cross over trial between lithium and chlorpromazine lasting for four weeks, Dube and Sethi reportedStudies have also showed that lithium is useful in management of aggression in patients of various diagnostic categories who were non-responsive to antipsychotics, electroconvulsive therapy, antiepileptics and sedatives. Authors recommended use of lithium in dose of 500- 750 mg/day (Serum lithium 0.75-1.2 mEq/L) in patients with unspecified aggression. Lithium et al.[et al.[et al.[et al.[et al.[Attempts have been made to estimate saliva levels and see their correlation with serum lithium levels with conflicting results. Prakash et al. investiget al. lithium l.[et al. comparedl.[et al. observedl.[et al. 
who basel.[et al. found thl.[et al. In a casl.[et al. reportedResearch on side-effects of lithium has been focused on general side-effects and specific side-effects on various systems.et al.[et al.[Venkoba Rao et al. examinedl.[et al. reportedet al.[One study compared the cognitive function of 34 bipolar patients on lithium (serum lithium between 0.6-1.2 mEq/L) with 30 matched controls. There was no difference on cognitive testing as measured by Bhatia Battery test, PGI memory scale and Bender Gestalt test. However, on self assessment of cognitive functions, patients experienced feeling of subjective cognitive impairment.[et al. performeet al.[et al.[Residual neurological side-effects after lithium toxicity in the form of dysarthria, nystagmus and generalized cerebellar damage have been reported.\u201340 Andraet al. also repl.[et al. reportedl.[et al. reported+ excretion, normal glomerular filtration rate (GFR) and renal tubular acidification. It has also been shown that serum lithium levels correlate positively with daily urine volume and negatively with urine specific gravity. Studies have also shown that there is no relationship between total amount of lithium consumed or duration of treatment with renal impairments.[44Five controlled studies have examined renal side-effects of lithium in humans.44\u201347 In airments.4446 A rerments.44 Another rments.44 In one srments.44 Despite et al.[4, free thyroid index or TSH. Srivastava et al.[3 levels and decreased T4 levels in 13% of subjects with bipolar disorder. The authors noted that high dose and high serum lithium levels increase the possibility of reduction of thyroid function status. In a case report Jayesh[Despite well-documented adverse effects of lithium on thyroid functions and good quality research from West, there has been little enthusiasm to study thyroid functions in Indian subjects. In an uncontrolled study which included 40 patients with bipolar disorder on prophylactic lithium with serum lithium of 0.83 (SD 0.20 mEq/L), Kuruvilla et al. assessedva et al. comparedrt Jayesh reportedet al.[P < 0.05) with lithium treatment ranging from 61 to 240 months as compared to the corresponding values in the control group.In a study from PGIMER, Chandigarh, Deodhar et al. comparedet al.[et al.[Venkoba Rao and Hariharsubramaniam examinedet al. reportedl.[et al. reportedet al.[+2, Ca+2 levels in 17 patients of affective disorder, on lithium and followed them prospectively and observed increase of total serum Mg+2, which was more so in patients who relapsed and the authors pointed at deficiency of Mg+2 at cellular level as a probable cause of episodes.Srinivasan et al. assessedParvathi Devi and Rao labeled et al.[In a case report Grover and Gupta reportedet al. with theIn a case report, abrupt withdrawal of lithium in a patient with recurrent depressive disorder led to development of hypomania.et al.[et al.[In contrast to lithium there has been little research on use of carbamazepine in psychiatric disorders from India. In a controlled trial, Sethi et al. recruitel.[et al. reportedet al.[et al.[Based on the kindling hypothesis, Das et al. divided et al. reportedl.[et al. reportedA case series suggested that carbamazepine (600 mg/day) can lead to worsening of psychotic symptoms in toxic range.Among the three mood stabilizers, valproate has been used most recently for treatment of mood disorders. Some of the recent studies have compared the efficacy/effectiveness of valproate with other mood stabilizers.\u201374 A few75et al.[Chadda et al. 
evaluateet al.In a recent case series, Pradeep presenteet al.[P < 0.01). The mean YMRS score of 37.2 at baseline reduced to 14.5 at the end of the trial in the risperidone group and this was significantly better than the placebo group. Extrapyramidal side-effects were reported by 35% of subjects in the risperidone group (compared to 6% in the placebo group) and these were the most frequently reported adverse events in the risperidone group. Another side-effect which was reported by more than 10% of subjects was insomnia; however there was no difference between the two groups on the incidence of the same.In a three-week randomized, double-blind trial, Khanna et al. includedFew studies have compared the efficacy/effectiveness of various mood stabilizers in affective disorders. In a randomized controlled trial of four weeks duration, Prakash and Bharath comparedet al.[P = 0.123).In the primary efficacy analysis, valproate group experienced significantly greater mean improvement in Young Mania Rating Scale total score than the carbamazepine group. In the CBZ group, significantly more patients required rescue medication during the week 2 and the requirement was quantitatively more as compared to the valproic acid group. In a recet al. comparedP <; 0.01). In a 1 week parallel group randomized comparative prospective study trial, Solanki et al.[A significantly greater number of patients in divalproex group experienced one or more adverse drug events as compared to patients in the oxcarbazepine group , Khess et al. comparedet al.[et al.[In an open label six-month prospective randomized controlled trial, Gangadhar et al. comparedl.[et al. are alsol.[et al.Various case reports have suggested that use of multiple medications in subjects with affective disorders can lead to side-effects like Steven Johnson syndrome.\u201385 Case Most of the literature on mood stabilizers pertains to lithium. Studies have found it to be useful in Indian patients for management of bipolar disorders. In recent times, studies have also compared valproate with lithium and have reported it to be as effective as lithium and more effective than risperidone in management of acute mania. Valproate has also been reported to be better than carbamazepine but equally efficacious to oxcarbazepine in management of acute mania. One double blind randomized controlled trial has also demonstrated risperidone to be better than placebo in the management of acute mania. Studies have also shown that addition of carbamazepine to lithium may be useful in management of acute mania.However, major limitations of the research have been small sample size and lack of multicentric studies. There are no studies which have evaluated the efficacy or effectiveness of various mood stabilizers in the management of bipolar depression. There is dearth of data with regard to usefulness of various medications for prophylaxis. There is an urgent need to conduct multicentric studies."} {"text": "Cancers, papers highlighting cellular mechanisms of metastasis formation, genetic and epigenetic aspects associated with organ and tumor specific metastasis formation, as well as papers outlining experimental and clinical therapeutic concepts for anti-metastatic treatment are included.The development of secondary distant organ and lymph node metastasis has an extraordinary impact on the prognosis of patients with solid cancer. 
In most cases the advent of metastatic growth represents the turning point from a local, potentially curable, disease to a systemic non-curable situation. As a highly regulated process, metastasis formation follows a distinct, non-random pattern characteristic for each tumor entity. Metastasis formation and strategies to prevent this lethal event in the progression of cancer is of fundamental interest for cancer science and patient care. In this special issue of et al. [Damsky et al. suggest et al. and anatet al. for deteet al. provideset al. [et al. [Tumor microenvironment is of immense importance not only for primary cancers but also for secondary sites . In theiet al. provide et al. focus on [et al. in theirUnderstanding the mechanisms of distant metastasis formation has begun to become translated into clinically relevant therapeutic consequences. Chi and Komaki comprehe"} {"text": "Osteoporosis and its main health outcome, fragility fractures, are large and escalating health problems. Skeletal damage may be the critical result of low-level prolonged exposure to several xenobiotics in the general population, but the mechanisms of their adverse effects are not clearly understood. The current study was aimed to investigate the possible ability of simultaneous subchronic peroral administration of selenium (Se) and diazinon (DZN) to induce changes in bone of adult male rats.2SeO3/L and 40\u00a0mg of DZN/L in drinking water, for 90\u00a0days. Ten 1-month-old males without Se and DZN intoxication served as a control group. At the end of the experiment, macroscopic and microscopic structures of the femurs were analysed using analytical scales, sliding instrument, and polarized light microscopy.In our study, twenty 1-month-old male Wistar rats were randomly divided into two experimental groups. In the first group, young males were exposed to 5\u00a0mg NaP\u2009<\u20090.05). These rats also displayed different microstructure in the middle part of the compact bone where vascular canals expanded into central area of substantia compacta. The canals occurred only near endosteal surfaces in rats from the control group. Additionally, a smaller number of primary and secondary osteons, as well as a few resorption lacunae were observed near endosteal surfaces in rats simultaneously administered to Se and DZN. The resorption lacunae as typical structures of bone resorption manifestation are connected with an early stage of osteoporosis. Histomorphometric analysis revealed that area, perimeter, maximum and minimum diameters of primary osteons\u2019 vascular canals were significantly increased (P\u2009<\u20090.05) in the Se-DZN-exposed rats. On the other hand, all measured variables of Haversian canals and secondary osteons were considerable reduced (P\u2009<\u20090.05) in these rats.The body weight, femoral length and cortical bone thickness were significantly decreased in rats simultaneously exposed to Se and DZN (Simultaneous subchronic peroral exposure to Se and DZN induces changes in macroscopic and microscopic structures of the femurs in adult male rats, and also it can be considered as possible risk factor for osteoporosis. The current study contributes to the knowledge on damaging impact of several xenobiotics on the bone. Bone is a dynamic mineralized connective tissue constantly being remodelled. Bone growth, mineralization and remodeling are regulated by a complex array of feedback mechanisms depending on age, genetic, nutritional and environmental factors -3. 
ToxicSelenium (Se) is an essential trace element which occurs in various concentrations in the soil, water leading to variable Se contents in food . Industret al. [et al. [Organophosphorus (OP) compounds are one of the most common types of organic pollutants found in the environment . Residuaet al. . Finally [et al. revealedHuman and animal exposures to several xenobiotics in the environment do not occur in isolation, and also pharmacological agents, other toxins, and diet can induce or supress their toxicity.Protective effects of Se against DZN-induced histopathological changes in various organs have been noted in many studies -28, usinBased on known effects of DZN and Se on the bone and other organs already mentioned above we focused on detailed structural analysis of exposed bones in animal model. Therefore, the aim of our study was to determine in detail the effect of simultaneous subchronic peroral administration of Se and DZN on macroscopic and microscopic structure of femoral bone in adult male rats.Twenty 1-month-old male Wistar rats were obtained from the accredited experimental laboratory (number SK PC 50004) of the Slovak University of Agriculture in Nitra (Slovakia). These clinically healthy rats were randomly divided into two experimental groups of 10 individuals. Male rats were used, as they are less susceptible than females to xenobiotics\u2019 toxicity -31.ad libitum. The first group (n\u2009=\u200910 rats) was daily exposed to 5\u00a0mg Na2SeO3/L and 40\u00a0mg of DZN/L in their drinking water for a total of 90\u00a0days. The doses of Se and DZN were chosen on the basis of studied literature [The rats were housed individually in plastic cages in an environment maintained at 20\u201324\u00b0C, 55 \u00b1 10% humidity. They had access to water and food terature -34 and oterature ,36 with terature . The secet al. [et al. [At the end of 90\u00a0days, all the rats were euthanized, weighed and their femurs were used for macroscopic and microscopic analyses. The right femurs were weighed on analytical scales with an accuracy of 0.01\u00a0g and the femoral length was measured with a sliding instrument. For histomorphometric analysis, the right femurs were sectioned at the midshaft of the diaphysis and the segments were fixed in HistoChoice fixative . The segments were then dehydrated in increasing grades 40 to 100%) of ethanol and embedded in Biodur epoxy resin according to the method described by Martiniakov\u00e1 et al. . Transveet al. . The qua to 100% [et al. , who claP\u2009<\u20090.05) between both experimental groups.Statistical analysis was performed using SPSS 8.0 software. All data were expressed as mean\u2009\u00b1\u2009standard deviation (SD). The unpaired Student\u2019s t-test was used for establishing statistical significance (P\u2009<\u20090.05) in comparison with the control group. Also, cortical bone thickness was significantly lower (P\u2009<\u20090.05) in these rats. On the contrary, femoral weight did not differ between the two groups were also identified in anterior, posterior and lateral views. We found some primary and secondary osteons near the endosteal surfaces. In the middle part of the compact bone, primary and secondary osteons were observed. 
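The group comparisons described in the methods above were unpaired Student's t-tests at P < 0.05 (n = 10 animals per group), run in SPSS 8.0 and reported as mean ± SD. A minimal sketch of the same kind of comparison in Python, using made-up placeholder values rather than the study's measurements, might look like this:

```python
# Illustrative sketch only: placeholder values, not the study's measurements.
import numpy as np
from scipy import stats

# Hypothetical cortical bone thickness (mm), n = 10 rats per group
control = np.array([0.86, 0.84, 0.88, 0.85, 0.87, 0.83, 0.86, 0.85, 0.88, 0.84])
se_dzn = np.array([0.78, 0.80, 0.76, 0.79, 0.77, 0.81, 0.75, 0.78, 0.80, 0.77])

# Report mean +/- sample SD, as in the paper
for name, group in (("control", control), ("Se-DZN", se_dzn)):
    print(f"{name}: {group.mean():.3f} +/- {group.std(ddof=1):.3f} mm")

# Unpaired (two-sample) Student's t-test with equal variances assumed
t_stat, p_value = stats.ttest_ind(control, se_dzn, equal_var=True)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant at P < 0.05: {p_value < 0.05}")
```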
The periosteal border was again composed of non-vascular bone tissue, mainly in the anterior and posterior views Figure\u00a0.The rats simultaneously exposed to Se and DZN displayed a similar microarchitecture to that of the control rats, except for the middle part of the compact bone in the medial and lateral views. In these views, vascular canals were shown to have expanded into the central area of the bone. The expansion in some cases was so enormous that the canals also occurred near periosteal surfaces. Therefore, a smaller number of primary and secondary osteons was identified in these rats. Moreover, a few resorption lacunae were found near endosteal surfaces in rats co-administered by Se and DZN which indicate the early stage of osteoporosis Figure\u00a0.P\u2009<\u20090.05). However, these rats displayed significantly decreased levels of all variables of Haversian canals and secondary osteons (P\u2009<\u20090.05).For the quantitative histological analysis, 424 vascular canals of primary osteons, 410 Haversian canals and 410 secondary osteons were measured in total. The results are summarized in Table\u00a02SeO3/L and 40\u00a0mg of DZN/L in drinking water for 90\u00a0days resulted in a significant decrease in body weight and femoral length in adult male rats. Thorlacius-Ussing et al. [2SeO3 in their drinking water which is associated with reduced production of growth hormone (GH) and insulin-like growth factor I (IGF-I). The results by Gronbaek et al. [2SeO3/L in drinking water for 35\u00a0days related to Se-induced significant reduction in circulating IGF-I. In our previous study [2SeO3/L in their drinking water for 90\u00a0days was also observed. DZN is known to show its toxic effects by inhibiting cholinesterase activity. According to Kalender et al. [et al. [et al. [via gavage was identified. DZN-induced inhibition in growth of some skeletal elements, such as femur, tibia, metatarsi and digits of the leg in chick embryos was also demonstrated [Simultaneous subchronic peroral exposure to 5\u00a0mg Nag et al. observedk et al. also docus study , the decr et al. and Raza [et al. the redu [et al. , decreasnstrated .et al. [et al. [2SeO3/L in their drinking water for 90\u00a0days [et al. [The thickness of cortical bone is generally accepted as an important parameter in the evaluation of cortical bone quality and strength. The values of cortical bone thickness in rats from the control group differed from those reported by Comelekoglu et al. and Chov [et al. , who ana 90\u00a0days . Moreove [et al. , demonst [et al. .et al. [According to Szarek et al. , Se is aet al. -53. Thuset al. . Accordiet al. -58, Se cet al. . Similaret al. .The results of the qualitative histological analysis of the control rats corresponded to those of previous works -63. We iet al. [2SeO3/kg of diet for a period of 12\u00a0weeks). Also, it is known that Se at high doses induces apoptosis in mature osteoclasts [et al. [et al. [Simultaneous subchronic exposure to Se and DZN induced changes in the middle part of compact bone where primary vascular radial bone tissue was observed. We proposed that the formation of this type of bone tissue could be explained as an adaptive response to Se and DZN toxicity to protect bone tissue against cell death and subsequent necrosis. The study by Turan et al. demonstreoclasts , osteobleoclasts , and osteoclasts . In resp [et al. reported [et al. observed [et al. . In our et al. 
[testes in rats co-administered by Se and DZN (at the same levels as were used in our study) were damaged and significantly dilated. Additionally, Ruseva et al. [Data obtained from the histomorphometric analysis showed a significant increase in area, perimeter, maximum and minimum diameters of the primary osteons\u2019 vascular canals and on the other hand, a significant decrease of the Haversian canals\u2019 variables in the Se-DZN-exposed rats. In general, the vascular system is a critical target for toxic substances and their effects on the vascular system may play an important role in mediating the pathophysiological effects of these substances in specific target organs . Blood vet al. demonstra et al. showed ta et al. that theet al. [et al. [2SeO3/kg for 4\u00a0weeks. The incorporation of carbonate ions into the crystal structure of hydroxyapatite (HA) results in changes in the physical and chemical properties of HA [We found significantly lower values of all variables of secondary osteons in rats simultaneously exposed to Se and DZN. According to Jowsey , the valet al. who foun [et al. , decreas [et al. showed tes of HA . HA cryses of HA creatinges of HA through Our study demonstrates that simultaneous subchronic exposure to Se and DZN had a significant impact on bone structure and causes early stage of osteoporosis in rats. The obtained results can support the better understanding of osteoporosis mechanisms induced by environmental pollutants. However, possible extrapolation of the results to humans may be an interesting topic for discussion. The laboratory rat is preferred animal for most researchers. Its skeleton has been studied extensively, and although there are several limitations to its similarity to the human condition, these can be overcome through detailed knowledge of its specific traits or with certain techniques. The similarities in pathophysiologic responses between the human and rat skeleton, combined with the husbandry and financial advantages, have made the rat a valuable model in osteoporosis research.2SeO3/L and 40\u00a0mg DZN/L in drinking water for 90\u00a0days affects the body weight, femoral length, cortical bone thickness, and both the qualitative and quantitative histological characteristics of femoral bone tissue in adult male rats. In addition, it induces early stage of osteoporosis. The results can be applied in experimental studies focusing on the effects of various xenobiotics on bone structure, especially when they are considered as possible risk factor for osteoporosis.This study demonstrates that simultaneous subchronic peroral administration of 5\u00a0mg NaThe authors declare that they have no competing interests.MM was responsible for qualitative histological analysis of bones and writing an article. IB was responsible for quantitative histological analysis of bones. RO was responsible for the statistical analysis. BG was responsible for preparation of histological sections. HC was responsible for macroscopic analysis of bones. JS was responsible for photodocumentation of histological sections. RT was responsible for animal care and sampling of femora. All authors read and approved the final manuscript."} {"text": "BMC Genomics explore the genetic variation of chemosensory receptor gene repertoires in humans and mice and provide unparalleled insight into the causes and consequences of this variability.Chemosensory receptor genes encode G protein-coupled receptors with which animals sense their chemical environment. 
The large number of chemosensory receptor genes in the genome and their extreme genetic variability pose unusual challenges for understanding their evolution and function. Two articles in http://www.biomedcentral.com/1471-2164/13/414 and http://www.biomedcentral.com/1471-2164/13/415See research articles BMC Genomics present comprehensive characterizations of the entire human OR and mouse VR repertoire.Humans sense odors with olfactory sensory neurons in the olfactory epithelium. Each olfactory sensory neuron expresses one G protein-coupled receptor from the large odorant receptor (OR) gene family. Different ORs have different ligand specificity and each odor therefore activates a different combination of sensory neurons. Many other mammals, including mice, have an additional olfactory organ, the vomeronasal organ. Instead of ORs, the sensory neurons in the vomeronasal organ express one gene from a repertoire of vomeronasal receptor (VR) genes. The VR gene repertoire consists of three families of G protein-coupled receptors: V1Rs, V2Rs, and formyl-peptide receptors (FPRs). The vomeronasal organ and the VRs play an important role in the detection of pheromones. Two articles in et al. [Mus musculus domesticus. The other four were wild-derived strains of Mus musculus subspecies and a different mouse species, Mus spretus. The authors' analysis led to the identification of over 6,000 non-synonymous single nucleotide polymorphisms (SNPs) in 366 VR genes. Mouse VR genes are 2.3 times as variable as other mouse genes. Olender et al. [Wynn et al. interrogThere are likely to be several evolutionary processes driving the high variability of chemosensory receptor genes, including a substantial contribution from neutral genomic drift, the process of random gene duplication, deletion, or inactivation . Receptoet al. [et al. [et al. [It has been a challenge to identify directed evolutionary processes that shaped the chemosensory receptor gene repertoires in the background of the variability resulting from undirected effects of genomic drift . The datasets analyzed by Olender et al. and Wynn [et al. are larg [et al. demonstret al. [et al. [Similar results were obtained by Wynn et al. for the [et al. provide et al. [Genomic drift with purifying selection that acts only on a subset of the genes and a combination of balancing and perhaps also positive selection result in genetically highly diverse chemosensory receptors. What are the consequences of this genetic variability? In humans, genetic diversity will result in perceptual diversity. Each individual perceives olfactory stimuli with their personal set of ORs. Olender et al. identifiet al. .et al. [et al. [Similar to the findings in human ORs, Wynn et al. found thet al. , this va [et al. and it iet al. [Although for most VRs the ligands are unknown, groups of V2Rs that respond preferentially to conspecific odor cues, odors from other rodent species and subspecies, or predator odors have recently been identified . Intereset al. found moet al. . Rather et al. [et al. [We are only just beginning to understand the causes and consequences of the unusual genetic and functional variability of large chemosensory receptor gene repertoires in different species. The comprehensive data on genetic variability in the human OR and mouse VR repertoire presented in the articles by Olender et al. and Wynn [et al. will be"} {"text": "Linked Comment: Jackson. Int J Clin Pract 2013; 67: 385.Linked Comment: Walach et al. 
Int J Clin Pract 2013; 67: 385–6. Linked Comment: Posadzki and Ernst. Int J Clin Pract 2013; 67: 386–7. Linked Comment: Grimes. Int J Clin Pract 2013; 67: 387. Linked Comment: Tournier et al. Int J Clin Pract 2013; 67: 388–9. We would like to respond to the comments by Tournier et al. as follows. The issues regarding the case report of Geukens were addressed in our previous response. Regarding the articles by Barquero-Romero (2004), Lim (2011), Luder (2000) and Prasad (2006), we see no good reason for the notion that the four deaths following ingestion of homeopathic remedies could have been prevented if a 'competent qualified homeopath' had acted more 'responsibly'. In fact, we very broadly covered the issues of some homeopaths' professional irresponsibility and its consequences in the discussion section. Tournier et al. seem to confuse the report by Zuzak et al. (2010) with the one by Von Mach et al. (2006). The former included nine cases of intoxication following the ingestion of homeopathic remedies, whereas the latter reported the figure of 1070 cases. In the case series by Zuzak, the 2143 cases were omitted because they referred not to therapeutic but to accidental intake of homeopathic remedies. Perhaps we should have included those cases too. Then the total number of patients who experienced AEs of homeopathy would have amounted to 3293! We provided clear definitions of homeopathy in the introduction to our review. Sasseville (1995) provided full details of the composition of the ointments along with the level of dilution, e.g. Rhus tox (2CH). Tincture of aconite presented in the case by Guha et al. (1999) is, technically speaking, a homeopathic remedy. Tournier et al. know of course that the method of preparation of Aconitum napellus varies in different pharmacopoeias and therefore safety issues arise when these differences are neglected. In our view, the data extraction of the CR by Bernez et al. (2008) and its interpretations were correct. We regret, however, that the translation of the Danish text has led to confusion. In view of these arguments, we reject the accusation of Tournier et al. that our results are unreliable."} {"text": "Bioactive ceramics have received great attention in the past decades owing to their success in stimulating cell proliferation, differentiation and bone tissue regeneration. They can react and form chemical bonds with cells and tissues in the human body. This paper provides a comprehensive review of the application of bioactive ceramics for bone repair and regeneration. The review systematically summarizes the types and characteristics of bioactive ceramics, the fabrication methods for nanostructures and hierarchically porous structures, and typical toughening methods for ceramic scaffolds with the corresponding mechanisms, such as fiber toughening, whisker toughening and particle toughening. Moreover, greater insights into the mechanisms of interaction between ceramics and cells are provided, as well as the development of ceramic-based composite materials. The development and challenges of bioactive ceramics are also discussed from the perspective of bone repair and regeneration. Bone tissue engineering (BTE) has been emerging as a valid approach to the current therapies for bone regeneration . In contet al. at the University of Florida. It can promote gene expression and production of osteocalcin [Among various kinds of biomaterials, bioactive ceramics are considered as the most promising material for BTE.
Some bioactive ceramics such as hydroxyapatite (HAP), tricalcium phosphate (TCP), bioactive glass (BG) and calcium silicate (CS) have been profusely investigated as biomaterials owing to their capability to form direct bonds with living bone after implantation in bone defects . HAP is eocalcin . CS has eocalcin . TherefoThis article summarizes the research progress and status of bioactive ceramics scaffolds for BTE at home and abroad. The materials, structure and fabrication technology of ceramic scaffold used in recent years are discussed in detail. The interaction mechanisms between bioactive ceramics and cells are presented. The developments of bioactive nanoceramics and ceramic-based composite materials are also reviewed. Moreover, the problems existed and developing trends of ceramic scaffold for BTE are described.2.A scaffold must have appropriate mechanical properties to match and form firm connections with the newly formed tissue due to the complicated stress environment of human skeleton system . High elBioactive ceramics are known to enhance osteoblast differentiation as well as osteoblast growth. However, their clinical applications have been limited because of their brittleness, difficulty of shaping, and an extremely slow degradation rate in the case of HAP. Also, they have poor fidelity and reliability and new bone formed in a porous ceramic scaffold cannot sustain the mechanical loading needed for weight-bearing bone \u201323.et al. [1/2 have been fabricated via conventional sintering. Fielding et al. [1/2; compressive strength: 130\u2013180 MPa) in the human body [Wang et al. reportedg et al. fabricatman body ,27. Ther3.The brittleness of ceramic is attributed to the crack prior to fracture. The control of crack growth prior to fracture can improve the toughness and reduce the stress concentration caused by the acute angle effect of the crack . Therefo3.1.et al. [co-glycolide (PLGA) fiber as second-phase addition. The results showed that fiber-reinforced calcium phosphate bone cement exhibited more superior structural integrity and material strength than nonreinforced calcium phosphate bone cement. Zhang et al. [et al. [Losquadro et al. used polg et al. combined [et al. investig3.2.et al. [et al. [et al. [Whisker is a filament of material that is structured as a single and defect-free crystal. Typical whisker materials are known for having very high tensile strength (on the order of 10\u201320 GPa) due to the ordered atomic arrangement and periodic structure of lattice . M\u00fcller et al. reported [et al. used HAP [et al. investig3.3.et al. [2) nanocrystals within HAP matrix. The results revealed that nanocomposites possessed significantly higher mechanical strength compared with pure HAP and conventional HAP-based composites. Wei et al. [l-lactic acid) (nano-HAP/PLLA) composite using thermally induced phase separation techniques. The compressive modulus reached 8.3 MPa when the weight ratio of nano-HAP to PLLA was 50:50. Gentile et al. [w/w/w)) for bone repair. The tensile modulus increased from 4.72 \u00b1 0.23 MPa for G1 to 6.46 \u00b1 0.05 MPa for G4. Recently, they fabricated porous scaffolds made of chitosan/gelatin (POL) blends containing different amounts of CEL2 by freeze-drying [w/w). Zhu et al. [in situ synthesized ZrO2 nanoparticles on the surface of carbon nanotubes as second-phase addition. The results showed that the mechanical properties of alumina ceramics mixed with these composites were much better than that of ceramics alone.Ahn et al. reportedi et al. 
preparede et al. preparede-drying . The comu et al. used in The above methods could improve the strength and toughness of ceramics to some extent. However, some negative effects, including toxicity of whisker and depressed biological properties, are generated by the second-phase additions into matrix ,42. In p4.4.1.Nanotechnology is predominately directing the research of biomedical materials for attaining both sufficient mechanical properties and excellent biological performances . It is wBesides, the scaffold fabricated with nanoceramics also has superior properties in other respects. For example, scaffold degradation could match the growth rate of new bone by controlling the size of crystalline grain, which is important to improve bone graft healing rate and reduce complications. Researchers also found that nano-HAP can significantly suppress the growth of cancer cell line while have no effect on normal cells . There a4.2.It is known that nano-materials have tremendously specific surface area, high surface energy and that their atoms lack coordination . It is dIn order to restrain grain growth during fabrication of scaffold, many methods have been adopted to accelerate the sintering process. Lots of researchers tried to fabricate nanoceramics by hot pressing sintering or vacuum sintering \u201355. Howeet al. [et al. [Selective laser sintering (SLS) method is one of widely used rapid prototype technology . It is aet al. fabricat [et al. fabricat5.Besides the proper mechanical properties, it is a key point to obtain porous structure to create a microenvironment for cell adhesion and proliferation. Nature bone has multi-level three-dimensional pore structure which ranges from several nanometers to several hundreds of micrometers . It can 5.1.etc. [et al. [et al. [et al. [Conventional methods for the fabrication of porous structure mainly include pore-forming method, sintered microsphere method and chemical foaming method, etc. . The por [et al. fabricat [et al. fabricat [et al. fabricat5.2.et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [Several methods have been applied successfully for the fabrication of scaffold with controlled pore structure and shape, such as fused deposition modeling (FDM), stereo lithography appearance (SLA), SLS and 3D printing technology. Espalin et al. investig [et al. fabricat [et al. fabricat [et al. investig [et al. prepared [et al. designed [et al. designed [et al. successf6.et al. [et al. [In vitro cell tests using the MC3T3 cell line demonstrated excellent biocompatibility and high level of bioactivity. In vivo animal experiments using Sprague-Dawley albino rats revealed good bone regeneration capability of PCL/BGNF composite when implanted in a calvarial bone defect. Xu et al. [in vivo bone-regenerative capacity and resorption of porous \u03b2-calcium silicate (\u03b2-CS) bioactive ceramics. The ceramics were implanted in rabbit calvarial defects and harvested after 4, 8 and 16 weeks. Results suggested a cell-mediated process involved in the degradation of \u03b2-CS in vivo. Silva et al. [et al. [et al. [Bone repair and regeneration are a complex interaction processes among scaffolds, cells and microenvironment . The suret al. investig [et al. fabricatu et al. investiga et al. evaluate [et al. seeded h [et al. cultured [et al. .7.Natural bone is a connective tissue largely composed of organic protein, collagen, inorganic mineral and cells . Obvious7.1.et al. 
[in situ co-precipitation and investigated the porosity, morphology, microstructure and mechanical properties of the scaffolds. The results showed that the introduction of HAP improved the compressive strength of scaffold and the pore size gradually decreased with the increase of HAP content. Cheng-zhen et al. [In vivo implantation in rats showed no inflammatory response after several weeks, and nano-HAP/collagen composite scaffold was found to promote new blood vessels formation and bone regeneration. Jingushi et al. [Natural biomaterials such as collagen, chitin, coral and chitosan demonstrate good biocompatibility and cellular affinity . Not onlet al. preparedn et al. producedi et al. inoculat7.2.et al. [et al. [in vitro proliferation of the osteoblast-like cells was significantly enhanced on the nano-hydroxyapatite coated titanium-niobium alloy compared to the titanium-niobium alloy without coating.Metallic materials , Co-Cr-Mo alloy and titanium-nickel alloy) have become the material of choice for load-bearing implant applications due to high strength, good fatigue resistance and well machining properties. Among these materials, Ti is bioinert, resistant to corrosion and biocompatible with bone tissue. However, some metallic materials may produce adverse effects such as the release of significant amounts of metal ions into the tissues, which may result in complications such as inflammatory and immune reactions . Thus, tet al. prepared [et al. found th7.3.et al. [et al. [Biodegradable polymers are widely used as biomaterials for the fabrication of cartilage tissue engineering scaffolds. Besides, polymers have great design flexibility because their composition and structure can be tailored to specific needs . Biodegret al. fabricat [et al. fabricat7.4.et al. [w/w) composite as a bone graft extenders. The results showed the composite increased new bone formation in the calvarial defect model both quantitatively and qualitatively. Lin et al. [et al. [Bioactive ceramics differ from each other in mechanical properties and biological properties. A combination of two or more bioactive ceramics will provide a wide space for ceramic material design. Moreover, multiphase ceramic is expected to remedy the drawback of some materials and reach a better comprehensive performance . A singlet al. fabricatn et al. fabricat [et al. develope8.An ideal scaffold should be biocompatible, bioactive and biodegradable for osteogenesis, osteoinduction and osteoconduction. In addition, the scaffold should possess sufficient porosity to accommodate cell proliferation and differentiation. It is also desirable for scaffolds to have adequate mechanical property and appearance in accordance with the defect parts. Our research team developed a homemade selective laser sintering system in order to prepare nano-ceramic artificial bone with outstanding mechanical properties. The minimum spot diameter of laser can reach 50 \u03bcm with the focusing system. The SLS system could realize arbitrary complex movements based on the non-uniform rational B-Spline (NURBS) theory ,107. We in vitro experiment in SBF indicates the gradual degradation of scaffold with the increase of immersing time (An ing time . A layering time . Human bing time . The hig9.in vivo experiments are necessary to master the evolution mechanism of mechanical properties and degradation of bioactive nanoceramics. 
We believe bioactive nanoceramics will become an ideal bone substitute material that is widely used in bone repair and regeneration in the future.The development of BTE provides an effective approach for bone repair and regeneration. The selection of scaffold material and structure optimization is quite important in order to fully mimic the 3D network structure of bone ECM. Among various kinds of biomaterials, bioactive ceramics are drawing more and more attention due to excellent biocompatibility, degradability and osteogenesis. Furthermore, bioactive ceramics possess the ability to induce differentiation of various stem cells. This paper summarizes the latest research progress of bioactive ceramics as bone scaffold materials, including main types of bioactive ceramics, mechanical properties, strengthening and toughening methods, bone-like apatite mineralization ability and cellular biocompatibility. The wide range of chemical composition of bioactive ceramics provides superior material foundation for control and optimization of the physicochemical and biological properties. Nanotechnology and composite materials provide new ways to improve the strength and toughness of bioactive ceramics, which further sophisticates the physicochemical and mechanical properties for BTE applications. Moreover, bioactive nanoceramics are exhibiting great potential for bone repair than conventional ceramics due to better mechanical and biological properties. Future studies should be focused on the responding mechanism between bioactive nanoceramics and cells on molecular and genetic level. On the other hand,"} {"text": "Ingvar , p. 21 tKlein et al. , p. 240 Notably, Klein et al. includedNonetheless, some studies have failed to demonstrate a mnemonic advantage for encoding conditions that foster planning as compared to survival processing . Yet in the everyday situations discussed by Ingvar , it may So far, studies concerning \u201cmemory of the future\u201d have focused mainly on cognitive processes. Although little is known about the neural processes that support encoding of simulated future events, a recent study by Martin et al. indicateOver the last four decades, the concept of episodic memory has evolved into a multifaceted construct that is of great interest to researchers in various areas of psychology and neuroscience (for recent overviews, see Szpunar and McDermott,"} {"text": "The Comprehensive Geriatric Assessment (CGA) is an analytical tool increasingly implemented in clinical practice. Breast cancer is primarily a disease of older people; however, most evidence-based research is aimed at younger patients.A systematic review of literature was carried out to assess the use of CGA in older breast cancer patients for clinical decision making. The PubMed, Embase and Cochrane databases were searched.A total of nine useful full text article results were found. Only five of these were exclusively concerned with early breast cancer; thus, studies involving a variety of cancer types, stages and treatments were accepted, as long as they included early breast cancer.The results comprised a series of low sources of evidence. However, all results shared a common theme: the CGA has a use in determining patient suitability for different types of cancer treatment and subsequently maximizing the patient\u2019s quality of life.There is not yet sufficient high level evidence to instate CGA guidelines as a mandatory practice in the management of breast cancer, due to the heterogeneity of available studies. 
More studies need to be conducted to cement current work on the benefits of the CGA. An area of particular interest is with regard to treatment options, especially surgery and chemotherapy, and identifying patients who may be suitable for these treatments. The Comprehensive Geriatric Assessment (CGA) is a multidisciplinary management tool aimed at determining an older person\u2019s medical, psychological and functional capability .Current evidence regarding breast cancer is mainly appropriate to younger patients (\u226465\u2009years) as older patients are often excluded from clinical trials ,3. ThereDisadvantages of CGA include additional time of implementation and limited consensus regarding methodology, evaluation and utilization . ComprehCurrently, CGA is not used routinely in breast cancer patients worldwide; however, three main areas where CGA could potentially be implemented include the following.Since age alone may not be an accurate predictor of treatment outcome , CGA couGreater comorbidity increases risk of death from causes other than breast cancer ,6. ConseThere is the need for identification of patients with confounding health problems, social needs or other issues that may have otherwise remained undetected , which cAssessment in these areas allows establishment of targeted treatment plan specific to the individual patient, leading to potential benefits, such as optimization of medical treatment; improved diagnostic accuracy and prognosis; maintained function; and improved quality of life (QOL) -11.The aim of this systematic literature review was to analyze current evidence regarding CGA in early breast cancer and highlight possible areas for further research.Three online databases were searched for relevant literature, including full-text articles and abstracts. These were PubMed, Embase and the Cochrane Library, which cover most clinical studies with high level evidence. The following key words were used: comprehensive geriatric assessment, breast cancer, primary, operable. Studies published in English in the past 10\u2009years were included. Studies were excluded if: a form of geriatric assessment was not used in the methodology; there was no relation to cancer; or no early breast cancer patients were included ; number of participants (N); lower age cut-off of participants; type of cancer the participants had; stage of cancer; uniqueness of study; overall findings.Level of evidence was assessed using the system proposed by Harbour and Miller , where eNine full-text articles met our search criteria and all graded Level 3 for quality of evidence.Despite the aim of this review to evaluate CGA in early operable breast cancer, we found a minimal number of studies being this specific, so all studies which contained any number of early breast cancer patients are discussed.et al.[1. A pilot study by Extermann et al., recruitAfter CGA, cancer treatment was adjusted in four participants (36%); adjuvant endocrine therapy was selected in two patients and adjuvant chemotherapy in one patient. No information is available on the fourth patient.In addition, CGA addressed problems indirectly impacting on treatment, in a further six patients (55%), for example, patient cognition, social support and contra-indicating medications. These problems were effectively resolved.et al.[This study is unique in using the Functional Assessment of Cancer Treatment-Breast (FACT-B) instrument to measure QOL, validated by Extermann et al.. 
MeasureDue to the small number of patients (N\u2009=\u200915), findings from this study may not be comparable to all older primary breast cancer patients. Thus, more patients need to be recruited from several centers, to verify findings.et al.[2. A pilot study by Hurria et al., consistAverage completion time was 27 minutes (range 8 to 45 minutes) and 78% of the patients were able to complete with no assistance. Approximately 90% of participants were happy with the questionnaire length and 83% agreed it was easy to understand.This study used a cancer-specific geriatric assessment and has proven this is feasible. Of the 40 participants, 25% had breast cancer. The percentage of patients with primary operable cancer is unknown; findings specific to these patients cannot be determined.The potential of applying this cancer-specific geriatric assessment tool could be evaluated in a multicenter study.et al.[3. A prospective study by Pope et al. recruiteet al.[Pope et al. used an As age of the patient increased, functional status decreased. There was less comorbidity among breast patients, when compared to those with gastro-intestinal tumors (GIT) and genito-urinary tumors (GUT). This could be due to the large number of patients with early breast cancer included in this study, compared with GIT and GUT, which consisted of patients with more evenly distributed cancer stages; patients in later stages may experience more severe symptoms. Alternatively, this could be explained by gender; greater comorbidity may exist in the male rather than female population; breast cancer patients are mainly female and GIT and GUT patients largely male.This is an excellent international study using a large number of patients. All patients were receiving surgery, however only 47% for breast cancer, 87.3% being primary cases. Results appropriate to GIT and GIT cannot be differentiated from breast tumors in this study.4. Albrand and Terret conducteet al.[This study suggests CGA components related to function, mentality, nutrition and comorbidity assist in determining fitness for oncological treatments. Comorbidity was measured using the Cumulative Index Rating Scale-Geriatric (CIRS-G), which is only used by this study and the study by Extermann et al. and, theet al..Similar studies need to be conducted on a larger scale in multiple centers. Comparison of participants to matched patients not receiving CGA would be useful to determine if patient factors are acknowledged due to CGA or by increasing awareness of the patient\u2019s own disease status.et al.[5. A prospective, transversal study by Giron\u00e9s et al. was condet al.[The study showed these older breast cancer survivors were able to maintain function, but had high comorbidity; consequently, long-term follow-ups are recommended for cancer survivors. Giron\u00e9s et al. suggest Similarly, it would be useful if patients in this study were matched to patients not receiving CGA.6. A cross-sectional observational study by Molina-Garrido and Guillen-Ponce was concThis CGA showed correlation to the briefer measures of BQ and VES-13; patients who had a score indicative of frailty on CGA were more likely to score a high level of frailty on BQ and VES-13. Therefore, there is potential to develop a screening tool for administration of CGA. 
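Purely as an illustration of the screening idea just mentioned (brief instruments such as VES-13 or the Barber Questionnaire used to decide which patients should receive a full CGA), such a triage rule could be operationalized as below. The cut-off values, their direction, and the field names are assumptions made for this sketch; they depend on each instrument's scoring convention and on the validation study being followed, and are not the validated thresholds of the study discussed above.

```python
# Hypothetical two-step triage: brief screening instruments first, full CGA only
# for patients the brief screen flags. Cut-offs below are placeholders, not
# validated clinical thresholds.
from dataclasses import dataclass

@dataclass
class BriefScreen:
    ves13: int   # Vulnerable Elders Survey-13 score (assumed convention)
    barber: int  # Barber Questionnaire score (assumed convention)

def flag_for_full_cga(screen: BriefScreen,
                      ves13_cutoff: int = 3,
                      barber_cutoff: int = 1) -> bool:
    """Return True if either brief instrument meets its (assumed) referral cut-off."""
    return screen.ves13 >= ves13_cutoff or screen.barber >= barber_cutoff

patients = [BriefScreen(ves13=1, barber=0),
            BriefScreen(ves13=4, barber=0),
            BriefScreen(ves13=2, barber=2)]
for i, p in enumerate(patients, start=1):
    print(f"patient {i}: refer for full CGA -> {flag_for_full_cga(p)}")
```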
This study suggests CGA should be implemented when VES-13 score is <3 .Of 41 patients, 56.1% had no daily medications and no one had more than three daily medications, indicating a possible level of good health in this cohort; it is expected that older people will have greater comorbidity and thus more daily medications. Furthermore, 78% of participants were married, which is a larger proportion relative to others studies and, henet al.[7. A retrospective study in a single center conducted by Barth\u00e9l\u00e9my et al. attemptePrognostic factors, for example, estrogen and progesterone receptor status, and tumor stage were recorded. The CGA included elements focusing on the domains of comorbidity, mood, medication, social support, environmental assessment, nutritional status and motor function.In this sample, 118 out of the 192 patients had at least one or more risk factors which would ordinarily justify the use of adjuvant chemotherapy. However, only 70 of these patients (59%) were actually recommended adjuvant chemotherapy after discussion with the multidisciplinary team. The patients who did not receive chemotherapy despite showing good indications, received adjuvant endocrine therapy as an alternative.et al.[Barth\u00e9l\u00e9my et al. concludeIt is, however, worth noting that this study is largely regarding administration of chemotherapy, which is often not necessary for primary operable breast cancer cases, rather than assessing these patients at diagnosis aiming to formulate a management plan for primary therapy.et al.[8. Lazarovici et al. carried The CGA was conducted by a single geriatrician with oncological training. Where CGA was carried out before a treatment decision was made, this was done on the patient\u2019s first visit to the geriatrician. The CGA assessed functional status, cognition, mood, nutritional status and comorbidity.P\u2009=\u20090.031). These patients were subsequently later taking fewer medications (P\u2009=\u20090.036) and more likely to received adjusted cancer treatment (P\u2009=\u20090.051).Recent weight loss of >10% was more frequent among the group of patients who had geriatric assessment before cancer treatment decision had been made had metastatic disease. Therefore, there might be some selection bias in this study, as patients with primary operable breast cancer alone would have been less likely to be referred to the geriatrician under the criteria used in this study, and thus would not have undergone CGA.et al.[9. A further multicenter study by Hurria et al. aimed toet al.[Geriatric assessment was carried out before chemotherapy began. This study uses the same form of cancer-specific CGA as previously mentioned in the earlier study by Hurria et al..Regarding geriatric assessment, functional status, level of social activity, poor hearing and assistance required to take own medications, were important factors when considering chemotherapy toxicity.This study was conducted on a large scale with the aim to identify any general factors relevant to all cancer types and stages with regards to toxicity from chemotherapy. The study did not look at whether there were any additional or different factors based on particular cancer types or stages.Due to the heterogeneity of our sample papers, it is difficult to draw comparisons relating to our original aim of evaluating CGA use in early breast cancer.et al.[et al.[et al.[et al.[et al.[The studies by Extermann et al., Albrandet al., Giron\u00e9sl.[et al. Molina- l.[et al. and Bartl.[et al. were soll.[et al. 
(N\u2009=\u2009460l.[et al. conducteet al.[et al.[et al.[et al. [et al.[et al.[et al.[All studies used CGA to recognize comorbidities and significant factors present in their patients, which could potentially impact on treatment recommendation. The studies by Extermann et al., Hurria l.[et al. and Girol.[et al. looked a.[et al. and Moli.[et al. was to a. [et al., Hurria l.[et al. and Hurrl.[et al. were priAll studies were Level 3 grade of evidence; conclusions made may not be strong enough to require immediate change to clinical practice ,22.et al.[Most of the studies used a similarly designed CGA, excluding the studies using cancer-specific CGA, by Hurria et al.,20. The A number of studies ,13,15 imIt appears that of CGA may be difficult to complete due to impediments present in the older population in general ,27. ThisDue to the focused nature of this review, it appears that a number of important articles may have been excluded by our criteria, which are now discussed here.et al.[Girre et al. recruiteet al.[et al.[et al.[et al.[Also, an update of the study by Pope et al. was writl.[et al.. In addil.[et al. and Audil.[et al. underlinFrom the literature, there is not yet enough evidence to recommend CGA in early breast cancer patients. Currently, literature suggests that CGA may be useful in regard to treatment decision making in older cancer patients. This is consistent with the clinical and pilot research experience of the authors .This literature review is hampered by lack of evidence currently available concerning use of CGA in early breast cancer patients. Analysis of some studies was inhibited by the extent of information available, resulting in difficultly in drawing comparisons between studies. Evidence so far suggests that CGA is an important factor in determining treatment and management of early breast cancer by identifying confounding health and personal issues of the patient and their suitability for different treatments, where this is possible. Case\u2013control and cohort studies need to be completed to compare outcomes of patients who receive CGA to those who do not.BQ = Barber Questionnaire; CGA = Comprehensive Geriatric Assessment; CIRS-G = Cumulative Index Rating Scale-Geriatric; FACT-B = Functional Assessment of Cancer Treatment-Breast; GIT = Gastro-intestinal tumours; GUT = Genito-urinary tumours; MGA = Multidimensional geriatric assessment; PACE = Pre-operative Assessment of Cancer in the Elderly; QOL = Quality of life; VES-13 = Vulnerable Elderly Survey.The authors declare that they have no competing interests.RMP carried out the literature research, data acquisition, data analysis and prepared the manuscript. KLC is the guarantor of the integrity of the study and defined the intellectual content. KLC, DALM and KC created the study concept and design. All authors edited the manuscript and read and approved the final manuscript."} {"text": "There was an error in the Funding statement.http://www.arc-cancer.net), The Ligue Nationale Contre le Cancer (http://www.ligue-cancer.net). Muriel Busson was supported by FRM (http://www.frm.org). DV was a recipient of the Ligue Nationale contre le Cancer. CB was supported by a HFSP fellowship (http://www.hfsp.org/). The K. B. 
lab is supported by the Agence Nationale de la Recherche , the Universit\u00e9 Paris-Sud, the Institut National de la Sant\u00e9 et de la Recherche M\u00e9dicale (INSERM) and the Ligue Nationale Contre le Cancer , and is a member of the Laboratory of Excellence LERMIT (Laboratory of Excellence in Research on Medication and Innovative Therapeutics) supported by a grant from ANR (Investissements d\u00b4Avenir). Dr Madly Brigitte was supported by the Fondation ARC and Dr Carine Jacquard by INCA . The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The correct statement is: This work was supported by grants from ARC ("} {"text": "Forty mice were randomly divided into four groups on the basis of the diet to be fed as follows: 5% (low) fat diet (T1: LF); 20% (high) fat diet (T2: HF); 20% fat containing 1% conjugated linoleic acid (CLA) (T3: HFC); and 20% fat containing 1% CLA with 0.5% biopolymers (T4: HFCB). The high-fat with CLA diet groups (HFC and HFCB) and the low-fat diet group (LF) tended to have lower body weights and total adipose tissue weights than those of the high-fat diet group (HF). Serum leptin and triglyceride were significantly lower in the high fat with CLA-fed groups (HFC and HFCB) and the low-fat diet group (LF) than those in the high-fat diet group (HF). It is noteworthy that the high-fat with CLA and biopolymers group (HFCB) showed the lowest serum triglyceride and cholesterol concentrations. In the high-fat-fed group (HF), voluntary travel distance as a measure of physical activity decreased after three weeks of feeding. However, the CLA-fed groups showed increased physical activity. The groups fed high-fat diets supplemented with CLA alone and with CLA and biopolymers had higher viscosity of small intestinal contents than that in the low- and high-fat dietary groups. It has long been proposed that increased energy expenditure due to physical activity may be the most important factor in body weight control. Pescatello and VanHeest reportedet al. reportedl.[et al. also repl.[et al.. Thus, il.[et al. first rel.[et al.. Despitel.[et al.\u201310, how l.[et al. reportedl.[et al.. HoweverThere were no significant differences in food intake among the dietary groups and 2. ATotal adipose tissue weight was calculated as the sum of perimetrial, mesenteric and retroperitoneal adipose tissue weights . The groet al.[et al.[et al.[et al.[Increased food intake and decreased energy expenditure may increase body weight. Westerterp-Plantenga reportedet al. reportedet al. reportedl.[et al. reportedl.[et al. reportedl.[et al. reportedl.[et al.. Anotheret al.[et al.[Leptin is a protein hormone with important effects on the regulation of body weight and lipid metabolism. It is secreted by adipocytes in proportion to the amount of lipid stored and may function as a signal of body energy stores to the brain . Severalet al. suggesteet al.. Sneddonl.[et al. reportedl.[et al.. In our l.[et al., we founl.[et al.. Thus, oet al.[trans-10, cis-12 isomer of CLA decreases triacylglyceride concentrations. Baumgard et al.[de novo fatty acid synthesis, desaturation of fatty acids and triglyceride synthesis. Our findings in the present study agree with these results. In this study, serum glucose concentrations were not significantly different among the groups fed a high-fat diet . 
Conversely, Roche et al.[trans-10, cis-12 supplementation in mice was associated with increased serum glucose and insulin concentrations, whereas the cis-9, trans-11 supplementation group showed no weight loss, but had lower concentrations of triglyceride and free fatty acids. Lee et al.[Brown et al. reportedrd et al. reportedhe et al. reportedee et al. reportedet al.[tran-10, cis-12 isomer was administered, but not when a mixture of the trans-10, cis-12 and the cis-9, trans-11 isomer was administered; in contrast, Terpstra [et al. [et al.[In this study, serum triglyceride and cholesterol concentrations decreased in mice fed a high-fat diet supplemented by CLA and biopolymers. In previous studies, Riserus et al. found thTerpstra reported. [et al. reportedet al.[et al. [et al.[Earlier studies have suggested that energy sources have a large influence on physical activity and weight gain. Wells et al. reported.[et al. reported.[et al. . Mizunoy. [et al. also sugIn the present study, the viscosity of small intestinal contents may be associated with lipid digestion and absorption in the small intestine; in addition, differences in the viscosity of small intestinal contents would cause impairment in the hydrolysis and solubilization of lipids . Meyer atrans-10, cis-12 CLA, 26% cis-9, trans-11 CLA and 7% other. CLA isomers (>96% pure) were purchased from Matreya . Chitosan, pectin, Tween 20, acetic acid, sodium acetate, sodium chloride, monobasic sodium phosphate, dibasic sodium phosphate and Nile red were purchased from Sigma-Aldrich Chemical Company . Serum triglyceride, cholesterol and glucose assay kits were purchased from Thermo Electron, Inc. .CLA mixture used in this study contained 63% ad libitum throughout the experiment. Fresh food was given on days 0, 2 and 5. After 2 weeks of adaptation to the environment and voluntary travel distance testing, 40 animals were randomly divided into 4 groups and fed the following diets: 5% fat (T1: LF); 20% fat (T2: HF); 20% fat containing 1% CLA (T3: HFC); and 20% fat containing 1% CLA with 0.5% biopolymer per kg (T4: HFCB). All groups were fed their respective diets for 6 weeks. The formulation of the experimental diets and treatment groups are listed in Forty female ICR mice and a semi-purified powdered diet were purchased from Harlan Teklad . Isocaloric diet formulations were designed by Harlan Teklad . Animals were housed in individual, wire-bottomed cages in a windowless room with a 12-h light/dark cycle, under a protocol approved by the Animal Care Committee of Konkuk University. Food and water were provided Ten mass percentage of chitosan was dissolved in acetate buffer solutions . Ten mass percentage of pectin was dissolved in phosphate buffer solutions . These solutions were stirred for 12 h and then mixed for 3 h using a magnetic stirrer. During mixing, 1 mL of Tween 20 was added dropwise to reduce surface tension and enhance formation of encapsulation. Biopolymer encapsulation was prepared by mixing a final volume of 10 wt% biopolymer solution and CLA together for 1 h using a bio-homogenizer. The mixture was continuously stirred for 15 min using power ultrasound at a frequency of 10 MHz, and this was then mixed with the experimental diets . This process was aimed at developing a coating layer around the lipophilic CLA. 
Encapsulation of biopolymers and CLA solution was confirmed using confocal microscopy .2 asphyxiation sacrifice.Food intake and body weight gain were measured weekly, and adipose tissues were weighed after COg for 15 min at 4 \u00b0C. Serum leptin and adiponectin concentrations were measured using ELISA kit techniques, as specified by the manufacturers. Serum triglyceride, cholesterol and glucose concentrations were measured using the enzymatic assay kits , as per the manufacturers\u2019 instructions.Blood was collected by cardiac vessel bleeding from anesthetized mice. Serum was obtained by centrifugation at 3,000 \u00d7 Voluntary travel distance as a measure of physical activity was determined using wheel running systems. Mice were placed in an individual case (35 \u00d7 20 \u00d7 15 cm) containing a running wheel with a diameter of 10 cm under a 12-h light/dark cycle. After a period of familiarization (1 week), mice were allowed to run freely on the wheels. Data were collected daily for a period of 6 weeks. Voluntary travel distances were calculated through the use of the National Instruments Compact-DAQ, an NI 9411 module and a custom-designed LabVIEW program, which converted wheel revolution data to daily voluntary travel distance.et al.[Small intestinal contents were collected by gentle finger stripping. The small intestinal contents were pooled per tube. The viscosity of small intestinal contents was determined by the modified method of Turabi et al.. The visSmall intestinal contents were collected by gently finger stripping. The small intestinal contents and biopolymer encapsulation were analyzed through confocal microscopy. A confocal scanning fluorescence microscope with a 20\u00d7 objective lens was used to capture confocal images. Nile red (a lipid fluorescent dye) was excited with a 488-nm argon laser line. The fluorescence emitted from the sample was monitored using a fluorescence detector (543 nm) with a pinhole size of 150 \u03bcm. The resulting images consisted of 512 \u00d7 512 pixels, with a pixel size of 414 nm and a pixel dwell time of 5 s.The influences of dietary CLA and biopolymer encapsulation on lipid metabolism in mice fed high- and low-fat-fed diets were analyzed using SAS software by the generalized linear model procedure. The Student-Newman-Keuls multiple range test was used to compare differences between means.From the results of this study, we conclude that the increase in the voluntary travel distance as a measure of physical activity resulting from dietary supplementation with CLA may be one of the main reasons for lipid depletion. Groups of mice fed a high-fat diet supplemented with CLA and biopolymers showed higher viscosity of small intestinal contents than the controls. This increase in viscosity may be one of the main reasons for triglyceride and cholesterol depletion. Thus, we assume that the lipid depletion effect of CLA may be accelerated by the addition of biopolymers. However, the nature of the relationship between increased physical activity and dietary CLA remains to be elucidated. The effect of CLA on lipid metabolism could be influenced to a large degree by the mixture of CLA isomers, CLA contents (or purity), dietary period or subjects. Therefore, future studies are needed to investigate how dietary CLA influences physical activity in animal models and how dietary supplementation with biopolymers is related to changes in lipid metabolism. 
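As a minimal illustration of the wheel-running measure described in the methods above, the conversion from daily revolution counts to voluntary travel distance (10 cm wheel) reduces to circumference times revolutions. The authors used a National Instruments Compact-DAQ with a custom LabVIEW program, so the Python sketch below only reproduces the arithmetic; the revolution counts are made-up placeholders.

```python
# Sketch of the revolution-to-distance conversion for the wheel-running data.
# Wheel diameter (10 cm) is taken from the text; revolution counts are invented.
import math

WHEEL_DIAMETER_M = 0.10                       # 10 cm running wheel
CIRCUMFERENCE_M = math.pi * WHEEL_DIAMETER_M  # distance travelled per revolution

def daily_distance_km(revolutions_per_day):
    """Convert daily wheel-revolution counts to kilometres travelled."""
    return [n * CIRCUMFERENCE_M / 1000.0 for n in revolutions_per_day]

# hypothetical counts for one mouse over a week
revs = [18200, 17450, 21010, 19830, 16990, 20540, 18760]
for day, km in enumerate(daily_distance_km(revs), start=1):
    print(f"day {day}: {km:.2f} km")
```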
The nature of the interaction between CLA and biopolymers also requires further investigation."} {"text": "Single-port, single-incision laparoscopy is part of the natural development of minimally invasive surgery. Refinement and modification of laparoscopic instrumentation has resulted in a substantial increase in the use of laparoendoscopic single-site surgery (LESS) in urology over the past 2 years. Since the initial report of single-port nephrectomy in 2007, the majority of laparoscopic procedures in urology have been described with a single-site approach. This includes surgery on the adrenal, ureter, bladder, prostate, and testis, for both benign and malignant conditions. In this review, we describe the current clinical applications and results of LESS in Urological Surgery. To date this evidence comes from small case series in centres of excellence, with good results. Further well-designed prospective trials are awaited to validate these findings. Natural orifice translumenal endoscopic surgery (NOTES) and laparoendoscopic single-site surgery (LESS) represent the latest evolution of minimally invasive surgery. They have been developed with the aim of preventing port-site complications, decreasing discomfort, and improving cosmetic outcomes associated with standard laparoscopic/robotic surgery.et al.[NOTES involves diagnostic or therapeutic interventions performed via existing orifices of the human body . In 2002,et al. reportedet al.The first urological use of LESS was reported in 2007 with theet al in 2005.[et al reported mean operative times of 2 h, with estimated blood loss of 10\u2013150 mL, and mean length of hospital stay 2.2 days.[\u00ae, Covidien) used to enter the retroperitoneum. A 30\u00b0 endoscope was subsequently used to bluntly develop the space with high CO2 pressures (20\u201330 mmHg). The 10 mm port was then replaced with two 5-mm ports through which a 5-mm endoscope and a 5-mm bipolar scissor were placed to complete the surgery. SARA was possible in 41 cases with similar complication rates between the two groups. While the mean operative times were significantly higher for SARA (56 vs 40 min), postoperative analgesic requirements and hospital length of stay were significantly less.Attempts to perform urological surgery through a single-incision began with a report by Hirano in 2005. The inve in 2005.8 Desai e2.2 days. A singleSince the first laparoscopic nephrectomy was performed in 1991, laparoscet al.[The first single-port, nontransumbilical, simple nephrectomy was presented at the World Congress of Endourology in 2007. An R-poret al. Three huet al. A simpleet al[\u00ae. This was located in a 7 cm paramedian incision and operating time was 96 min. There were no complications and the specimen was retrieved intact.LESS radical nephrectomy has been described in several studies. Ponsky et al describeet al[\u00ae device was used through a single transumbilical incision. All nephrectomies were performed successfully without the need for conversion to conventional laparoscopic surgery, and without intra- or postoperative complications. Their median operative times were 141 min, and mean blood loss was 103 mL. They subsequently published their technique in detail, documenting that while this technique is indeed possible, it is challenging requiring high levels of laparoscopic skill.[Stolzenburg et al have pubet al[\u00ae. An additional 5-mm grasper was used in one case and the mean tumour size was 3 cm. 
Operating times were long (median 270 min), and average warm ischemia time was 20 min (11\u201329 min). All the margins were negative and there were no intraoperative complications. One patient had postoperative bleeding and pulmonary embolism. Kaouk et al[\u00ae provided intraabdominal access. One patient required conversion to conventional laparoscopy due to intraoperative bleeding. There was one positive margin but no postoperative complications.Nephron sparing surgery (NSS) has been shown consistently to have similar oncological outcomes for small renal tumours (<7 cm) as radical nephrectomy.16 NSS maet al publisheouk et al recentlyet al[et al[The two largest urological LESS series to date reported on 200 procedures.20 Included al[et al performeet al[Cryoablation of renal masses offers an alternative to open or laparoscopic nephron sparing surgery. Standard laparoscopic technique utilizes three ports either trans- or retroperitoneally. Goel et al successfet al[et al[\u00ae device. This allowed open access to the distal ureter but bariatric instruments were required for length.Nine cases of LESS nephroureterectomy were included in the two large LESS series.21 Cystoset al used a h al[et al performeet al,[\u00ae for access. The procedure was successful in all four patients with a median operating time of 3.3 h and minimal blood loss. The median warm ischaemia times were only 6.2 min, and the length of the harvested renal artery, vein, and ureter was excellent. There were no complications and all the allografts functioned immediately on transplantation. A subsequent matched-pair analysis comparing LESS donor nephrectomy (LESS-DN) with standard laparoscopic donor nephrectomy (LLDN) showed equivalence in operating time, blood loss, and hospital stay.[P < 0.0001), however, allograft function was comparable between groups. The patient convalescence after discharge was significantly faster in the LESS-DN group. The two largest series to report on urological LESS surgery include 17[LESS transumbilical live donor nephrectomy was first reported by Gill et al, using a tal stay. The meannclude 17 and 19[2nclude 17 LESS donet al[et al[LESS pyeloplasty for pelvi-ureteric junction obstruction has been reported on with seemingly good results. Desai et al performe al[et al performeOther LESS renal procedures reported are renal cyst excision, renal biet al[Despite its high median stone-free rate, open ureterolithotomy is no longer recommended as a first-line treatment for ureteric stones, because of longer hospitalization and greater postoperative morbidity. Ureterolithotomy does remain a primary or salvage option in difficult situations with large, impacted or multiple ureteric stones. Laparoscopic ureterolithotomy is preferable because of its decreased invasiveness, and resultant reduced morbidity, and has been well described in this setting. One caseet al performeet al[et al[Desai et al performe al[et al performeet al[Autorino et al reportedet al[Radical cystectomy with bilateral pelvic lymph node dissection is the standard of care for patients with muscle invasive urothelial carcinoma of the bladder. It is also an option for patients in whom conservative treatment has failed in high-risk nonmuscle invasive bladder cancer. There is, however, significant morbidity associated with it, and this has led to the development of laparoscopic and robotic approaches. 
Although long-term data are lacking, in the short term at least laparoscopic (including robotic-assisted) radical cystectomy appears to offer equivalent oncological control, with reduced blood loss and analgesic requirements as well quicker return to bowel function. Kaouk etet al have recet al.[2 were operated on through the umbilicus using a single three-channel port. The urethrovesical anastomosis was performed with interrupted sutures and extracorporeal knot tying. The mean operative time was 285 min and the mean blood loss 288 mL. The average length of stay was 2.5 days and there were no intraoperative complications. One patient, however, developed a recto-urethral fistula. Short-term oncological and functional results were equivalent to other laparoscopic series. The same series was extended to six patients subsequently, with three having positive margins but remaining biochemically free of disease.[Minimally invasive radical prostatectomy is fast becoming the standard of care for prostate cancer. Robotic-assisted laparoscopic prostatectomy using the Da Vinci surgical system is widespread, and although randomized prospective studies do not exist, clinical outcomes seem to be excellent in high volume centres. LESS radical prostatectomy was first described in 2008 by Kaouk et al. Four pat disease.et al. reported, in 2009, the first successful series of single-port robotic procedures in humans, including radical prostatectomy.[Kaouk atectomy. A robotiThe authors noted an improved facility for intracorporeal dissecting and suturing due to robotic instrument articulation and stability.et al. also reported their experience with a hybrid LESS robotic-assisted radical prostatectomy in a single patient.[Together with their preliminary experience in a cadaver model, Barret patient. They pla patient. They utiet al published the initial series of single-port transvesical simple prostatectomy (STEP).[\u00ae or Quadport\u00ae) was inserted percutaneously into the bladder through a 2\u20133 cm incision in the suprapubic skin crease. The prostate adenoma was enucleated transvesically using standard laparoscopic instruments, and extracted in pieces through the port. Digital assistance was required in 55% of cases. One patient died in this series of uncontrolled haemorrhage, a Jehovah\u2019s Witness who refused blood products. In addition, there was one bowel injury and two open conversions were necessary. Digital enucleation required extension of the incision by 1\u20132 cm. Average length of stay was 3 days, and at follow-up there was significant improvement of obstructive parameters with no patient catheter dependent or incontinent. One case in this series utilized the Da Vinci surgical platform. The authors commented that the Endowrist\u2122 technology allowed better maneuverability in the transvesical environment. Specifically, the robotic grasper provided excellent retraction of the adenoma as it was progressively enucleated.Desai y (STEP). This sery (STEP). Mean proSingle-site laparoscopic surgery has been reported in small numbers for a variety of other urological conditions. Paediatric varicocoeles have been successfully operated open through a single transumbilical port. A mesh sLESS in urology is in its relative infancy but to date has proven to be safe and feasible in the hands of experienced laparoscopic surgeons, using specially designed ports and instruments in selected patients. 
Further refinements in instrumentation and operative techniques will be required before this method of surgical access can be widely accepted. LESS in urology requires prospective randomized studies to define the benefits of this technique for patients, as well as to elucidate the cost-effectiveness of the approach. The introduction of robotics into LESS may make these techniques finally realize their potential. Improvements in existing robotic platforms may pave the way."} {"text": "Mesenchymal stem cells (MSCs) can be derived from adult bone marrow, fat and several foetal tissues. In vitro, MSCs have the capacity to differentiate into multiple mesodermal and non-mesodermal cell lineages. In addition, MSCs possess immunosuppressive effects by modulating the immune function of the major cell populations involved in alloantigen recognition and elimination. The intriguing biology of MSCs makes them strong candidates for cell-based therapy against various human diseases. Type 1 diabetes is caused by a cell-mediated autoimmune destruction of pancreatic \u03b2-cells. While insulin replacement remains the cornerstone treatment for type 1 diabetes, the transplantation of pancreatic islets of Langerhans provides a cure for this disorder. And yet, islet transplantation is limited by the shortage of donor pancreata. Generation of insulin-producing cells (IPCs) from MSCs represents an attractive alternative. On the one hand, MSCs from pancreas, bone marrow, adipose tissue, umbilical cord blood and cord tissue have the potential to differentiate into IPCs by genetic modification and/or defined culture conditions in vitro. On the other hand, MSCs are able to serve as a cellular vehicle for the expression of the human insulin gene. Moreover, protein transduction technology could offer a novel approach for generating IPCs from stem cells, including MSCs. In this review, we first summarize the current knowledge on the biological characterization of MSCs. Next, we consider MSCs as a surrogate \u03b2-cell source for islet transplantation, and present some basic requirements for these replacement cells. Finally, MSCs-mediated therapeutic neovascularization in type 1 diabetes is discussed.
Introduction
Biological characterization of mesenchymal stem cells
- Isolation and culture of human MSCs
- Phenotypic properties of MSCs
- Multi-potent differentiation of MSCs
- Immunomodulatory effects of MSCs
Aetiology and current treatment of type 1 diabetes
Mesenchymal stem cells in type 1 diabetes therapy
- MSCs with potential to differentiate into insulin-producing cells
- MSCs as cellular vehicle for insulin gene therapy
- Induction of IPCs from stem cells by protein transduction technology
- Minimum requirements for replacement \u03b2-cells
- MSCs for therapeutic neovascularization in type 1 diabetes
Concluding remarks
Mesenchymal stem cells (MSCs) were first identified by Friedenstein and his colleagues, who described fibroblast-like, colony-forming cells in the bone marrow. Given their capacity to differentiate into multiple lineages in vitro, MSCs provide an alternative \u03b2-cell source for islet transplantation. Diabetes mellitus is a devastating metabolic disease, which falls into two categories. Type 1 diabetes results from autoimmune-mediated destruction of \u03b2 cells in the islets of Langerhans of the pancreas, while type 2 diabetes is due to systemic insulin resistance and reduced insulin secretion by islet \u03b2 cells. In comparison with conventional or intensive insulin treatment, islet transplantation is the only therapy for type 1 diabetes that achieves an insulin-independent, constant normoglycemic state and avoids hypoglycemic episodes.
However, the application of this treatment is restricted by the limited availability of primary human islets from heart-beating donors. Some recent studies indicate that MSCs can differentiate into insulin-producing cells by genetic and/or microenvironmental manipulation In this review, we will summarize the major biological features of MSCs, and their possible applications in the treatment of type 1 diabetes.in vitro culture conditions MSCs obtained from young donors can grow to 24\u201340 population doublings and the proliferative potential of the cells obtained from older donors is more compromised [in vitro[in vitro led to genetic instability and resulted in MSCs transformation [Standard conditions for generation of bone marrow derived mesenchymal stromal cultures have been reported , 9. Howepromised . Afterwapromised . Replica[in vitro, 16 due [in vitro, 18. Som[in vitro, 19, 20.ormation . It seemConsiderable progress has been made towards characterizing the cell surface antigenic profile of human bone marrow-derived MSC populations using fluorescence activated cell sorting (FACS) and magnetic bead-sorting techniques. Nevertheless, to date there is no specific marker or combination of markers that specifically identifies MSCs. Therefore, MSCs have been defined by using a combination of phenotypic markers and functional properties. It is generally agreed that adult human MSCs express Stro-1 , 22\u201323, In vitro and in vivo, including bone [e. g. neurons) [A large number of studies demonstrate that bone marrow-derived MSCs from human, canine, rabbit, rat and mouse have the capacity to differentiate into mesenchymal tissues both ing bone , 32, caring bone , fat [34ing bone , tendon ing bone , 37, musing bone , 39 and ing bone . In addineurons) and endoneurons) .et al.[In vitro differentiation strategy, Song et al.[Individual colonies derived from single MSC precursors have been reported to be heterogeneous in terms of their multi-lineage differentiation potential , 42. Theet al. propose et al., 45, whiet al.\u201348. Usinng et al. showed tIn vitro and in vivo in a non-MHC restricted manner [In vitro, MSCs are able to suppress T lymphocyte proliferation induced by alloantigens [+ regulatory T cells that were responsible for inhibition of allogeneic lymphocyte proliferation [+ CD25+ regulatory T cells has been demonstrated in mitogen-stimulated peripheral blood mononuclear cell (PBMCs) cultures in the presence of MSCs [+ CD25+ regulatory T cells had no effect on the suppression of T cell proliferation by MSCs [MSCs have been shown to suppress immune reactions both d manner . These sd manner , 51, 52.antigens , 54, mitantigens , 55\u201358, antigens , 60. Supantigens , 61. Ano by MSCs . Apart f by MSCs , MSCs ca by MSCs , natural by MSCs , 64 and by MSCs , 66. Alt by MSCs , 67, hep by MSCs , 63, pro by MSCs \u201370. Addi by MSCs , 69.in vivo. First, intravenous administration of MSCs derived from BM of baboons prolonged the survival of allogeneic skin grafts [et al.[et al.[The immunomodulatory capacity of MSCs has also been evaluated n grafts . Subsequn grafts . In phass [et al., 73 estis [et al.. In conts [et al.. Grinneml.[et al. observedIn the year 2000, 150 million people worldwide were found to be affected by diabetes mellitus, and this number is considered to double in 2025 . 
Type 1 Since 1920s, insulin therapy has changed diabetes from a rapidly fatal disease to a chronic disease associated with significant secondary complications, such as renal failure, cardiovascular disease, retinopathy and neuropathy. It is now well-established that the risk of diabetic complications is dependent on the degree of glycaemic control in diabetic patients. Long-term studies strongly suggest that tight control of blood glucose achieved by conventional or intensive insulin treatment, self blood glucose monitoring, and patient education can significantly prevent the development and retard the progression of chronic complications of this disease \u201382. Whilex vivo that include their ability to adopt a pancreatic endocrine phenotype. It has been demonstrated that MSCs residing in various tissues and organs are able to differentiate into functional insulin-producing cells, such as MSCs from pancreas, bone marrow, adipose tissue, cord blood and cord tissue. This will help to meet the demand of \u03b2 cells for islet transplantation, and the goal of a permanent cure for type 1 diabetes will be realized.Among adult stem cells, MSCs appear to have a particular developmental plasticity ex vivo[et al.[In vitro[et al.[et al.[In vitro. Huang et al. [The mature pancreas has two functional compartments: the exocrine portion 99%), including acinar and duct cells, and the endocrine portion (1%), including the islets of Langerhans. Islets are composed of four cell types that synthesize and secrete distinct peptidic hormones: \u03b2-cells (insulin), \u03b1-cells (glucagon), \u03b4-cells (somatostatin) and PP-cells (pancreatic polypeptide). It has been described that adult rat and human islets of Langerhans contain nestin-positive progenitor cells, which can be differentiated into insulin-expressing cells ex vivo. In anotvo[et al. displaye[In vitro. Then, Bro[et al. considerro[et al.. Recentll.[et al. showed tg et al. further g et al. . In agreg et al. successfg et al. reportedg et al. , 96, 97,g et al. . However%, includg et al. . In concet al.[et al.[via enhanced endothelial proliferation by donor cells. In a similar study, Lee et al.[et al.[et al.[et al.[In vitro, bone marrow derived-cells obtained from mice [et al.[et al.[In vitro. In another study, Moriscot et al. [et al.[In vitro. Their results provide the direct evidence for the feasibility of using patient's own BM-MSCs as a source of IPCs for beta-cell replacement therapy.Bone marrow is an important source of easily accessible adult stem cells, and bone marrow transplantation (BMT) is considered to be effective for the treatment of autoimmune type 1 diabetes. However, there is a great debate on the issue of the fate of transplanted bone marrow stem cells. Ianus et al. showed tl.[et al. reportedee et al. demonstrl.[et al., Lechnerl.[et al. and Tanel.[et al. showed ll.[et al.. On the rom mice and ratsrom mice could bee [et al. proposede [et al. and Wu el.[et al. isolatedl.[et al. proved tt et al. indicatet et al. , 116 hav. [et al. demonstret al.[MSCs from human bone marrow and adipose tissue represent very similar cell populations with comparable phenotypes , 118\u2013119et al. isolatedin vivo studies give support to this point. In one study [in vivo capacity of HUCB-derived cells to generate insulin-producing cells. Following transplantation of HUCB cells into NOD/SCID/\u03b22mnull mice, IPCs of human origin were found in recipient pancreatic islets. 
Double FISH analysis using species-specific probes further indicated that HUCB cells can give rise to insulin-producing cells by fusion-dependent and -independent mechanisms. The number of HUCB cells that transdifferentiated and the rate of such an event are critical aspects. The proportion of HUCB-derived insulin-producing cells per total number of islet cells [et al.[In vitro and in vivo. Moreover, they indicated that UC-MSCs seem to be the preferential source of stem cells to convert into IPCs, because of the large potential donor pool, its rapid availability, no risk of discomfort for the donor, and low risk of rejection.Human umbilical cord blood (HUCB) is another source of stem cells with the potential to develop into insulin-producing cells. A few ne study , transplne study , transplne study has focuet cells was lesset cells . Howeveret cells and HUCBet cells , it shous [et al. successfIn vitro. The proteins included coagulation factors VIII [in vivo to treat acquired and inherited disorders. As far as type 1 diabetes is concerned, insulin gene therapy using MSCs is an alternative treatment.MSCs are a promising target population for cell-based gene therapy against a variety of different diseases . The appors VIII , IX [129ors VIII , human gors VIII , human eors VIII and so oet al.[i) the large capacity to accommodate a construct; (ii) the ability of the virus to infect primary and second cell lines In vitro; (iii) although the virus enters the nucleus it does not integrate with the host DNA and is therefore not likely to unmask oncogenes, it functions separate to the host DNA as an episome; (iv) most patients have already had contacts with the herpes I virus, which normally resides in a quiescent state in neurological tissue; (v). immune reaction against the virus is relatively mild; (vi) established antiviral treatment against the herpes virus is available. In consequence, the modified herpes I virus could serve as a new vector for human insulin gene delivery into MSCs. , HSV type 1 protein VP22 and HIV-1 transactivator TAT protein. The mechanism of PTD-mediated protein transduction is mainly via endocytosis followed by passage from the vesicle into the cytoplasm [New technology, known as protein transduction technology, has been recently developed. A variety of peptides, known as protein transduction domains (PTDs) or cell-penetrating peptides (CPPs), have been characterized for their ability to translocate into live cells. Proteins and peptides can be directly internalized into cells when synthesized as recombinant fusion proteins or covalently cross-linked to PTDs. There are numerous examples of biologically active full-length proteins and peptides that have been delivered to cells both ytoplasm .et al. demonstrated that PDX-1 [et al.[In vitro. In another research, Gr\u00e4slund's group [It has been suggested that protein transduction technology is useful for the treatment of diabetes, because this technology facilitates the differentiation of stem cells into insulin-producing cells. First, PDX-1 protein and BETA2/NeuroD protein, two pancreatic endocrine transcription factors, both have a PTD sequence in their structure. Noguchi at PDX-1 or BETA2at PDX-1 protein 1 [et al. showed t's group reportedAs mentioned above, insulin-producing cells generated either by transdifferentiation of MSCs or by delivery of insulin gene into MSCs are able to act as replacement \u03b2-cells for the transplantation therapy of type 1 diabetes. 
These MSCs-derived IPCs may solve the donor shortage issue for islet cell transplantation and provide a cure for this disease. Nevertheless, any substitute for primary islets of Langerhans will require some minimum essential properties. The basic requirements for surrogate \u03b2-cells are described as follows. First, to make any significant therapeutic impact, vast numbers of replacement \u03b2-cells will be required: current transplantation protocols use up to 1 \u00d7 10^6 primary human islets per recipient, equivalent to approximately 2\u20134 \u00d7 10^9 \u03b2-cells. As a result, the ability of MSCs to replicate and to differentiate toward a pancreatic endocrine phenotype makes them attractive candidates for producing replacement \u03b2-cells. Secondly, the replacement cells must have the ability to synthesize, store and release insulin in response to changes in the ambient glycaemia. Understanding \u03b2-cell function at the molecular level will likely facilitate the manufacture of physiologically competent insulin-producing cells from MSCs. Thirdly, the proliferative capacity of the replacement cells must be tightly controlled to avoid the development of hyperinsulinemic hypoglycaemia as the \u03b2-cell mass expands in vivo. Excluding proliferative cells from the transplant material will help to overcome this problem. In the case of insulin gene-transferred MSCs, the possibility of tumour formation has to be considered. Finally, the transplanted cells must avoid destruction by the recipient's immune system. Two major mechanisms are involved in the immune attack against replacement \u03b2-cells: one is transplant rejection and the other is recurrence of autoimmunity. In addition to appropriate immunosuppressive treatment, autologous transplantation of MSCs-derived IPCs will circumvent the immune rejection dilemma. On the other hand, Burt et al. indicated that the properties of cells obtained in vitro may differ significantly from those in vivo, and in vitro differentiation protocols do not generate \u03b2-cells but rather cells that have some phenotypic and functional similarity to authentic \u03b2-cells. Since IPCs generated from MSCs are developmentally and immunologically distinct from primary \u03b2-cells, they may escape the recipient's autoimmune assault.
It has been demonstrated that endothelial progenitor cells (EPCs) are responsible for postnatal vasculogenesis in physiological and pathological neovascularization. MSCs have been shown to promote angiogenesis both in vivo and in vitro.
In the past few years, there has been dramatic progress in our understanding of the biology of MSCs. Data in the literature concerning cell expansion and phenotypic characterization of MSCs, as well as their multi-potency and immunomodulatory properties, are vast and sometimes contradictory. Although the precise identity of MSCs remains a challenge, this has not hampered the beginning of considerable investigation aiming at their potential clinical applications. It is generally accepted that type 1 diabetes is now curable by islet transplantation therapy, and MSCs offer a starting material for generating the large numbers of surrogate \u03b2-cells required. The most difficult and yet unsolved issue is how to manufacture physiologically functional insulin-producing cells from MSCs.
Moreover, the angiogenic effect of MSCs could also be utilized for diabetes treatment. In conclusion, the prospect of MSCs in treating type 1 diabetes seems to be very promising. However, we should realize that much work needs to be done before pushing the MSC-based therapy from bench to bedside."} {"text": "The output power produced by high-concentration solar thermal and photovoltaic systems is directly related to the amount of solar energy acquired by the system, and it is therefore necessary to track the sun's position with a high degree of accuracy. Many systems have been proposed to facilitate this task over the past 20 years. Accordingly, this paper commences by providing a high level overview of the sun tracking system field and then describes some of the more significant proposals for closed-loop and open-loop types of sun tracking systems. Such systems are based on a solar collector, designed to collect the sun's energy and to convert it into either electrical power or thermal energy. The literature contains many studies regarding the use of solar collectors to implement such applications as light fixtures, window covering systems, cookers, and so forth -6. In ge\u00b0 \u2013 1\u00b0. Several years later, Semma and Imamru used a simple microprocessor to adaptively adjust the positions of the solar collectors in a photovoltaic concentrator such that they pointed toward the sun at all times [In 1975, one of the first automatic solar tracking systems -13 was pll times ,15.With rapid advances in the computer technology and systems control fields in recent decades, the literature now contains many sophisticated sun tracking systems designed to maximize the efficiency of solar thermal and photovoltaic systems. Broadly speaking, these systems can be classified as either closed-loop or open-loop types, depending on their mode of signal operation . The rem2.et al. [\u00b0 to be achieved. In 1992, Agarwal [\u00b0 (0.2 mrad). Kalogirou [Closed-loop types of sun tracking systems are based on feedback control principles. In these systems, a number of inputs are transferred to a controller from sensors which detect relevant parameters induced by the sun, manipulated in the controller and then yield outputs (i.e. sensor-based). In 1986, Akhmedyarov et al. first inet al. develope Agarwal presente Agarwal developealogirou presentealogirou presentealogirou comparedet al. [et al. [\u00b0. Urbano et al. [et al. [In 1998, Khalifa and Al-Mutawalli developeet al. proposed [et al. presenteo et al. presenteo et al. construc [et al. presenteet al. [et al. [In 2004, Roth et al. , 32 desi [et al. develope\u00b0 to the north. Al-Mohamad [Abdallah investig-Mohamad used a pet al. [et al. [Aiuchi et al. presente [et al. designed [et al. presente3.An open-loop type of controller computes its input into a system using only the current state and the algorithm of the system and without using feedback to determine if its input has achieved the desired goal . The system is simpler and cheaper than the closed-loop type of sun tracking systems. It does not observe the output of the processes that it is controlling. Consequently, an open-loop system can not correct any errors so that it could make and may not compensate for disturbances in the system. Open-loop control algorithms of sun tracking systems utilize some form of solar irradiation geometry model .In 1983, Al-Naima and Yaghobian developeet al. [et al. [Blanco-Muriel et al. argued tet al. . Overallet al. . In 2003 [et al. 
presente\u00b0 towards the south.In 2004, Abdallah and Nijmeh develope\u00b0 (In the same year, Reda and Andreas presente\u00b0 .et al. [1, S2, S3 and S4 (L is the average length of the square aperture, h is the distance between the mask and the detector plane, and a is the width of the slit. However, in M and N vary nonlinearly with the inputs \u03b1 and \u03b2, respectively, i.e. the sensitivity of the sensor depends upon the incident angle of the sunlight. To resolve this problem, the authors replaced the conventional aperture with that shown in \u00b0 over the entire field of view of \u00b1 62\u00b0 for both axes.In 2007, Chen et al. ,48 prese3 and S4 , is dire\u00b0 [\u00b0 (\u00b0, but was sufficient for most solar engineering applications and could be obtained at a fraction of the computational cost.In a recent study, Grena presente\u00b0 . It was \u00b0 [\u00b0 , was higet al. [et al. [Recently, Chen et al. -51 and C [et al. -53 prese4.Advances in the algorithms of sun tracking systems have enabled the development of many solar thermal and photovoltaic systems for a diverse variety of applications in recent years. Compared to their traditional fixed-position counterparts, solar systems which track the changes in the sun's trajectory over the course of the day collect a far greater amount of solar energy, and therefore generate a significantly higher output power. This paper has presented a review of the major algorithms for sun tracking systems developed over the past 20 years. It has been shown that these sun tracking algorithms can be broadly classified as either closed-loop or open-loop types, depending on their mode of control. The control / computational principles of each method have been reviewed and their performance and relative advantages / disadvantages systematically discussed. Overall, the results presented in this review confirm the applicability of sun tracking system for a diverse range of high-performance solar-based applications."} {"text": "Perioperative stress and anesthesia are risk factors for exacerbation of Multiple Sclerosis (MS) attacks. Infection, emotional labiality and hyperpyrexia are also known to increase the risk of postoperative MS attacks. Appropriate preoperative evaluation, administration of a good premedication, control of fever, selection of the anesthetic agents and effective postoperative pain control can prevent problems after prolonged major surgery in patients with MS diagnosed. This report presents the anesthetic technique in a patient who was a known case of MS for past nine years and presented with renal tumor to undergo laparoscopic nephrectomy under general anesthesia. Multiple sclerosis (MS) is the most common immune demyelinating disorder of the central nervous system which predominantly affects females and play a major role which is involved in genetic factors . Periope2-air-sevoflurane (2%). After intubation, a remifentanil infusion of 0.25 mcg/kg/min was administered. Body temperature ranged between 36.5 \u00baC and 35.7 \u00baC during the surgery. A total of 80 mg atracurium besylate was used as a neuromuscular blocking agent and the surgery lasted 7.5 hours. The dose of non-depolarizing muscle relaxant was chosen initially 0.5 mg/kg atracurium and repeated according to clinical signs and capnography. Two units of erythrocyte suspension were transfused during the surgery. After uneventful perioperative period. 
The patient was given meperidine via patient controlled analgesia (PCA) and intravenous paracetamol after every six hours during the postoperative 24 hours. The patient was discharged from the hospital 10 days later and had no MS attacks during the three-month follow up period.A 57 years old female patient (weight 59 kg) who was a known case of MS for nine years presented with renal tumor to undergo laparoscopic nephrectomy. The patient\u2019s history revealed that she initially had visual impairment and weakness of the legs which resolved with treatment. She has been using Copaxone (glatiramer acetate) therapy for six years and she had no attacks of MS during the last six years. Preanesthetic evaluation showed that she had no other medical diagnosis except MS. She had normal physical examination, PA chest radiograph, ECG and blood tests. She was premedicated with diazepam 5 mg orally the night before the operation. After routine monitoring and intravenous (iv) access were established, anesthetic induction was achieved with the intravenous administration of 60 mg of lidocaine, 180 mg of propofol, 100 \u03bcgr of remifentanil and 30 mg of atracuruim besylate. Anesthesia was maintained with O2O and O2. The authors concluded that, given that no spike was observed by pEEG monitoring during surgery, sevoflurane anesthesia was safe and can be used in the exacerbation stage in MS. In the case presented here, sevoflurane was used in the anesthetic procedure. In a study by Kono Y et al. (2 and air in a patient with MS undergoing emergency laparotomy and no postoperative exacerbation of the symptoms was observed. In a study by Inoue et al. (2O and fentanyl, and vecuronium was used as neuromuscular blockers, and postoperative pain was managed with IV infusion of fentanyl instead of neural block with local anesthetics and no MS attacks were noted in a patient with a two year history of MS for gynecologic surgery. Lee KH et al. (All anesthetic techniques used in patients with MS may lead to the exacerbation of MS symptoms. In a study by Barbosa et al. , subaracY et al. fentanyle et al. , anestheH et al. reportedH et al. . HoweverH et al. . We usedH et al. , 11. On H et al. perioper"} {"text": "Fish oil and conjugated linoleic acid (CLA) belong to a popular class of food supplements known as \u201cfat supplements\u201d, which are claimed to reduce muscle glycogen breakdown, reduce body mass, as well as reduce muscle damage and inflammatory responses. Sport athletes consume fish oil and CLA mainly to increase lean body mass and reduce body fat. Recent evidence indicates that this kind of supplementation may have other side-effects and a new role has been identified in steroidogenensis. Preliminary findings demonstrate that fish oil and CLA may induce a physiological increase in testosterone synthesis. The aim of this review is to describe the effects of fish oil and CLA on physical performance (endurance and resistance exercise), and highlight the new results on the effects on testosterone biosynthesis. In view of these new data, we can hypothesize that fat supplements may improve the anabolic effect of exercise. Many food supplements claim to induce weight loss by increasing lean body mass or reducing body fat mass, although only a few of these ergogenic aids have been investigated . This reThe class of commercially available fat supplements includes conjugated linoleic acid (CLA), fish oil, long- and medium-chain triacylglycerols. 
These ergogenic aids are claimed to be associated with a reduction in muscle glycogen breakdown, improved endurance capacity, reduced body mass and a reduction in muscle damage and inflammatory responses . Only twFish oil contains both the omega-3 fatty acids docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA). They are polyunsaturated fatty acids (PUFA) with a double carbon bond starting after the third carbon atom from the end of the carbon chain ,5, reducThe CLA supplement is a mixture of positional and geometrical conjugated dienoic isomers of linoleic acid which present two double bonds separated by a single bond . These din vitro studies have been cited whenever the information is not available in humans. Recent results suggest a new role of this class of supplementation in testosterone biosynthetic pathways. This review describes the various fat supplements pointing out both the known effects produced by these supplements when associated with exercise, and the new data underlying the molecular mechanisms regulating testosterone biosynthesis. Finally, it is briefly described how these fat supplements may influence physical performance. This review focuses mainly on human studies, although animal and Potential studies were identified by searching electronic databases: PubMed, Cochrane, and Scopus. The search terms used included both single words and combinations of words: CLA, fish oil, testosterone, exercise. Bibliographies were checked and experts were consulted for any additional studies. Studies available as full papers were deemed eligible if they conformed to the predetermined inclusion and exclusion criteria .Elite and recreational athletes, who participate in various types of physical activity and sports, consume fish oils and CLA supplements to improve their performance, increase training effects, reduce body fat, increase lean body mass, and reduce muscle damage and inflammatory responses. The following paragraph summarizes the main results obtained from trained individuals after integration with fish oil or CLA , compariOnly a few studies have examined whether fish oil supplementation during training enhances endurance adaptations. These studies conducted in humans show controversial results, but in our opinion this is due to the difference in the level of training of the study participants. In elite or well-trained athletes, the margin of improvement is so inappreciable that it would not be a surprise if small differences in performance parameters were observed; while in sedentary subjects, starting a training program, the improvement in performance is considerable, making a possible small enhancement in performance induced by food supplementation undetectable. Therefore the most reliable results have been observed in trained subjects, who already show adaptations for that specific type of exercise, although still improving their performance.2max), thus improving endurance performance [One of the effects claimed by fish oil is the ability to modify the viscosity of the plasma membrane of red blood cells (RBC), improving their deformability when they pass through the capillary bed ,31,32. Aformance .et al. [2max or exercise performance of well-trained cyclists [2max compared to only training effect, although the supplemented exercised subjects and the supplemented non-exercised subjects showed an increase in the ventilatory aerobic threshold compared to the control [However, literary data are controversial. Oostenbrug et al. studied cyclists . 
Others cyclists , well-trcyclists ). Fish ocyclists . Sedenta control . 2max increased significantly and O2 desaturation rate decreased as an effect of fish oil supplementation [The most significant documented results were observed in fit male subjects supplemented with fish oil (6 g/day) for six weeks . Fish oientation .et al. [et al. [n-3 PUFA somministration did not alter inflammatory proteins and plasma cytokines. The results on secondary muscle damage induced by an acute inflammatory response are extremely interesting, and more research is needed before conclusions can be drawn on fish oil supplementation in trained individuals. Omega-3 supplementation may provide benefits by minimizing the recovery time between exercise sessions reducing the inflammatory response localized in muscle tissue as well as the associated delayed-onset of muscle soreness.The extensive literature on the effect of omega-3 supplementation also provides evidence that fish oil is effective in the prevention and treatment of inflammatory conditions . It was et al. have shoet al. ,23. Niem [et al. also shoet al. [et al. [et al. [et al. [et al. [CLA supplementation may induce a reduction in body weight, this statement is based on results obtained in humans and in animals, even if the effect on humans is less clear than in animals. In many studies CLA supplementation was not associated with any regular and supervised physical activity. Only six studies have been conducted to evaluate the effect of CLA supplementation associated with exercise. Kreider et al. investig [et al. after CL [et al. . On the [et al. where me [et al. showed t [et al. who studWe can conclude that CLA supplementation, associated with resistance training, results in an increase in lean body mass and a decrease in body fat mass only when the subjects are involved in standardized and supervised exercise sessions during the supplementation period.de novo synthesized cholesterol, either intracellular cholesterol esters or extracellular supplies from circulating low-density lipoproteins. Cholesterol is converted to pregnenolone by P450-linked side-chain cleaving enzyme (P450ssc), an inner-membrane protein of mitochondria, that catalyzes the cleavage reaction. Pregnenolone may be converted to progesterone by 3\u03b2-hydroxysteroid dehydrogenase (3\u03b2-HSD), located in both mitochondria and smooth endoplasmic reticulum, or to 17\u03b1-hydroxy pregnenolone by 17\u03b1-hydroxylase/17,20-lyase (P450c17). Progesterone may be converted to 17\u03b1-hydroxy progesterone, androstenedione and finally to testosterone. Pregnenolone may be converted to 17\u03b1-hydroxy pregnenolone, dehydroepiandrosterone, androstenediol and testosterone, or it may be converted to progesterone derivates entering a different pathway.In males testosterone is mainly (>95%) synthesized in Leydig cells. Testosterone biosynthesis follows an enzymatic sequence of steps from et al. [et al. .in vitro studies. In fact, it has been shown that dietary fat improves reproductive performance although the molecular mechanism has not yet been elucidated. Among the different theories, there is one that hypothesizes that dietary fat may directly increase steroidogenesis [A new role of fat supplements is starting to be delineated in the scientific community, although the results have been mainly obtained in animals or ogenesis or direcogenesis ,39. et al. [in testis affecting the testicular concentration of testosterone. 
Similar results have been obtained in vitro using the H295R human adrenocortical carcinoma cell line treated with cod liver oil, a model to identify chemicals that may alter steroidogenesis. This nutritional supplement derived from the liver of cod fish, like many fish oils, contains high levels of omega-3 fatty acids, EPA and DHA. Castellano et al. , studyinet al. [Montano et al. showed tin vitro, although a small increase was observed. The limitation of this study was the doses used. In fact, for the in vivo experiments only one dose of CLA supplementation was admet al. [Similar results were obtained in the ovarian tissue. It has been suggested that one of the mechanisms by which CLA may alter steroidogenesis may be by up- and down-regulating specific genes encoding for enzymes and transport proteins involved in the synthesis of prostaglandin and progesterone. In ovarian tissue, May et al. showed tAn important mechanism by which testosterone can increase the cross sectional area of the skeletal muscle fiber is the increase in the contractile protein synthesis while unaffecting protein breakdown . The incet al. [The effects of testosterone on human performance have been the objective of studies since the early 1980s , but onlet al. observedet al. . Moreoveet al. ,59. It het al. , while iet al. . The difet al. . No chanet al. .Testosterone is a steroid hormone with anabolic and anticatabolic effect on muscle tissues, playing a critical function for muscle gain and muscle performance of athletes ,63. It het al. [Differently from fat supplements, the fat ingested with the daily diet may have the potential to alter the regulation and metabolism of testosterone in athletic men . Volek eet al. showed tet al. . Taking On the other hand, fat supplement side-effects have never been demonstrated and documented. If fat supplements induce an increase in blood testosterone, this may have an effect on several other tissues, among which include stem or progenitor cells . TestostAdditional research on the effect of fish oil and CLA supplementation on enzymes leading to testosterone synthesis are important to clarify the molecular mechanisms by which fat supplements may contribute to increase the anabolic effect of exercise, and the side-effects of this kind of supplementation."} {"text": "Acupuncture, an age-old healing art, has been accepted to effectively treat various diseases, particularly chronic pain and neurodegenerative diseases. Since its public acceptance and good efficacy, increasing attention has been now paid to exploring the physiological and biochemical mechanisms underlying acupuncture, particularly the brain mechanisms. Basic and clinical acupuncture studies on neurobiological mechanisms of acupuncture are crucial for the development of acupuncture. This issue compiles 32 exciting papers, most of which are very novel and excellent investigations in this field.The neural mechanism underlying acupuncture analgesia was addressed in four articles. J. Wang et al. established a postincisional pain model of rats and investigated electroacupuncture effect on the brain oscillations involving postoperative pain. B.-S. Lim et al., using interesting bee venom acupuncture, explored whether it can relieve oxaliplatin-induced cold allodynia and which endogenous analgesic system is implicated. The paper by Yumi Maeda et al. aimed to evaluate this linkage between brain response to acupuncture and subsequent analgesia in chronic pain patients with carpal tunnel syndrome. 
Since many articles aimed to assess the effect of electroacupuncture-induced analgesia, W. Kim et al. systematically reviewed its efficacy and its mechanism in neuropathic pain. In addition, as a great challenge in acupuncture analgesia and treatment evaluation, D. Zhu et al. outlined the advantages and disadvantages of the various kinds of acupuncture controls and highlighted how the differences among placebo devices can be used to isolate distinct components of acupuncture treatment.
Five papers deal with the potential neural mechanisms underlying acupuncture treatment of stroke, hypertension, and mild cognitive impairment. The paper by L. Liu and R.T.F. Cheung aimed to investigate whether the combination of melatonin and electroacupuncture therapies could be beneficial against transient focal cerebral ischemia in a rat model of transient middle cerebral artery occlusion. Alongside L. Liu and R.T.F. Cheung's paper, W. Qin et al. explored the importance of anti-inflammatory acupuncture treatment for focal cerebral ischemia/reperfusion through inhibition of the NF-\u03baB signaling pathway. Another two papers mainly focused on acupuncture modulation of the neural mechanisms of hypertension regulation with long-term as well as short-term treatments. The paper by G.-H. Tian et al. addressed the effects of long-term electroacupuncture on cerebral microvessels and neurons in the CA1 region of the hippocampus in spontaneously hypertensive rats. H. Chen et al. aimed to explore the hypothalamus-anchored resting brain network in primary hypertension patients after short-term acupuncture treatment. Moreover, for mild cognitive impairment, S. Chen et al. have pointed out that acupuncture at KI3 in different cognitive states and with varying needling depths may induce distinct reorganizations of the effective connectivity of brain networks.
The neuroendocrine system involving acupuncture has been addressed by the following three papers. Z. Yu et al. have suggested that the TRPV1 receptor is partially involved in the electroacupuncture-mediated modulation of gastric motility. The paper by C.-C. Kuo et al. explored the mechanism of electroacupuncture (EAc)-induced antinociception, which involves opioid receptors and the serotonergic system. Q.-Q. Li et al. conducted a systematic review of the central mechanisms by which acupuncture modulates various autonomic responses. Moreover, other papers focused on acupoint specificity from a range of aspects. One of the studies, by L. Li et al., suggested that both the size and function of acupoints comply with the functionality of the internal organs; thus the sensitivity of acupoints changes according to malfunction of the internal organs. C.-Y. Chen et al. implied that a somatoparasympathetic neuronal connection and a somatosympathetic neuronal connection could be the prerequisites for the neuronal basis of the somatovisceral reflex and also for the neuronal mechanism of acupuncture. Z. Wang et al. advanced that the modulatory effects of different needling sensations induced by different acupoints contribute to acupuncture modulation of the limbic-paralimbic-neocortical network.
Although the acupoints were spatially adjacent, there was also relative functional specificity reflected by brain networks. By gathering these papers, we hope to enrich our readers and researchers with respect to the underlying neurological mechanisms of acupuncture, and we look forward to an increasing number of both clinical trials and experimental studies that further advance our understanding of the neurological mechanisms of acupuncture.
Lijun Bai, Richard E. Harris, Jian Kong, Lixing Lao, Vitaly Napadow, Baixiao Zhao"} {"text": "Induced pluripotent stem (iPS) cells are a type of pluripotent stem cell derived from adult somatic cells. They have been reprogrammed through inducing genes and factors to be pluripotent. iPS cells are similar to embryonic stem (ES) cells in many aspects. This review summarizes the recent progress in iPS cell reprogramming and iPS cell-based therapy, and describes patient-specific iPS cells as a disease model at length in light of the literature. This review also analyzes and discusses the problems and considerations of iPS cell therapy from a clinical perspective for the treatment of disease. Induced pluripotent stem (iPS) cells are a type of pluripotent stem cell derived from adult somatic cells that have been genetically reprogrammed to an embryonic stem (ES) cell-like state through the forced expression of genes and factors important for maintaining the defining properties of ES cells. Mouse iPS cells from mouse fibroblasts were first reported in 2006 by the Yamanaka lab at Kyoto University, and human iPS cells were first reported in 2007. The breakthrough discovery of iPS cells allows researchers to obtain pluripotent stem cells without the controversial use of embryos, providing a novel and powerful method to \"de-differentiate\" cells whose developmental fates had traditionally been assumed to be determined. Furthermore, tissues derived from iPS cells will be a nearly identical match to the cell donor, which is an important factor in disease modeling and drug screening research. It is expected that iPS cells will help researchers learn how to reprogram cells to repair damaged tissues in the human body. The purpose of this paper is to summarize the recent progress in iPS cell development and iPS cell-based therapy, to describe patient-specific iPS cells as a disease model, and to analyze the problems and considerations of iPS therapy in the clinical treatment of disease.
It was first demonstrated that genomic integration and high expression of four factors, Oct4/Sox2/Klf4/c-Myc or Oct4/Sox2/Nanog/LIN28, delivered by virus can reprogram fibroblast cells into iPS cells. The methods of reprogramming somatic cells into iPS cells are summarized in Table 1. Various growth factors and chemical compounds have recently been found to improve the induction efficiency of iPS cells; Shi et al. demonstrated this approach, and related findings have been reported by other groups. Kim et al. showed that mouse neural stem cells, expressing high endogenous levels of Sox2, can be reprogrammed into iPS cells by transduction of Oct4 together with either Klf4 or c-Myc, suggesting that endogenous expression of reprogramming factors can reduce the number of exogenous factors required. Retroviruses are being extensively used to reprogram somatic cells into iPS cells.
They are effective for integrating exogenous genes into the genome of somatic cells to produce both mouse and human iPS cells. However, retroviral vectors may have significant risks that could limit their use in patients. Permanent genetic alterations, due to multiple retroviral insertions, may cause retrovirus-mediated gene therapy as seen in treatment of severe combined immunodeficiency . Second,et al. reportedet al. [et al. [et al. [et al., [et al. [et al., [via transfection of human adipocyte stromal cells with a nonviral minicircle DNA by repeated transfection. This produced hiPS cells colonies from an adipose tissue sample in about 4 weeks. Stadtfeld et al. used an [et al. used Sen [et al. repeated[et al., reprogra [et al. demonstr[et al., derived et al. [When iPS cells generated from either plasmid transfection or episomes were carefully analyzed to identify random vector integration, it was possible to have vector fragments integrated somewhere. Thus, reprogramming strategies entirely free of DNA-based vectors are being sought. In April 2009, it was shown that iPS cells could be generated using recombinant cell-penetrating reprogramming proteins . Zhou etet al. purifiedet al. over-expressed reprogramming factor proteins in HEK293 cells. Whole cell proteins of the transduced HEK293 were extracted and used to culture fibroblast six times within the first week. After eight weeks, five cell lines had been established at a yield of 0.001%, which is one-tenth of viral reprogramming efficiency. Strikingly, Warren et al., [A similar approach has also been demonstrated to be able to generate human iPS cells from neonatal fibroblasts . Kim et et al., demonstrStrenuous efforts are being made to improve the reprogramming efficiency and to establish iPS cells with either substantially fewer or no genetic alterations. Besides reprogramming vectors and factors, the reprogramming efficiency is also affected by the origin of iPS cells.2). Besides mouse and human somatic cells, iPS cells from other species have been successfully generated (Table 3).A number of somatic cells have been successfully reprogrammed into iPS cells . The properties and safety of these iPS cells should be carefully examined before they can be used for treatment. The cell source of iPS cells is important for patients as well. It is important to carefully evaluate clinically available sources. Human iPS cells have been successfully generated from adipocyte derived stem cells , amniocyet al. [via retroviral transduction of Oct4, Sox2, Klf4, and L-Myc. Miyoshi et al., [via the retroviral gene transfer of Oct4, Sox2, c-Myc, and Klf4. Reprogrammed cells showed ES-like morphology and expressed undifferentiated markers. Yan et al., [et al. [et al. [Shimada et al. demonstr et al., generate et al., demonstr et al., . Anchan [et al. describe [et al. derived in vitro and in teratomas. 
The ability to reprogram cells from human somatic cells or blood will allow investigating the mechanisms of the specific human diseases.All of the human iPS cells described here are indistinguishable from human ES cells with respect to morphology, expression of cell surface antigens and pluripotency-associated transcription factors, DNA methylation status at pluripotent cell-specific genes and the capacity to differentiate in vitro, and cell therapy effects of implanted iPS cells have been demonstrated in several animal models of disease.The iPS cell technology provides an opportunity to generate cells with characteristics of ES cells, including pluripotency and potentially unlimited self-renewal. Studies have reported a directed differentiation of iPS cells into a variety of functional cell types in vitro and in vivo. Mauritz [in vitro through embryonic body formation. Rufaihah [et al. derived endothelial cells from human iPS cells, and showed that transplantation of these endothelial cells resulted in increased capillary density in a mouse model of peripheral arterial disease. Nelson et al. [et al. [in vitro. The beating cells expressed early and late cardiac-specific markers. In vivo studies showed extensive survival of iPS and iPS-derived cardiomyocytes in mouse hearts after transplantation in a mouse experimental model of acute myocardial infarction. The iPs derived cardiomyocyte transplantation attenuated infarct size and improved cardiac function without tumorgenesis, while tumors were observed in the direct iPS cell transplantation animals. A few studies have demonstrated the regenerative potential of iPS cells for three cardiac cells: cardiomyocytes, endothelial cells, and smooth muscle cells Mauritz and Zhan Mauritz independRufaihah , et al. n et al. demonstr [et al. demonstret al. [et al. [Strategies to enhance the purity of iPS derived cardiomyocytes and to exclude the presence of undifferentiated iPS are required. Implantation of pre-differentiation or guided differentiation of iPS would be a safer and more effective approach for transplantation. Selection of cardiomyocytes from iPS cells, based on signal-regulatory protein alpha (SIRPA) or combined with vascular cell adhesion protein-1 (VCAM-1), has been reported. Dubois et al. first de [et al. adopted Regeneration of functional \u03b2 cells from human stem cells represents the most promising approach for treatment of type 1 diabetes mellitus (T1DM). This may also benefit the patients with type 2 diabetes mellitus (T2DM) who need exogenous insulin. At present, technology for reprogramming human somatic cell into iPS cells brings a remarkable breakthrough in the generation of insulin-producing \u03b2 cells.Human ES cells can be directed to become fully developed \u03b2 cells and it is expected that iPS cells could also be similarly differentiated. Stem cell based approaches could also be used for modulation of the immune system in T1DM, or to address the problems of obesity and insulin resistance in T2DM. et al., [et al., [in vivo. Tateishi et al., demonstr[et al., reportedet al. [Alipo et al. used mouin vitro iPS cell differentiation into functional insulin-producing \u03b2-like cells is low. Thus, it is highly essential to develop a safe, efficient, and easily scalable differentiation protocol before its clinical application. 
In addition, it is also important that insulin-producing b-like cells generated from the differentiation of iPS cells have an identical phenotype resembling that of adult human pancreatic \u03b2 cells in vivo. Although significant progress has been made in differentiating pluripotent stem cells to \u03b2-cells, several hurdles remain to be overcome. It is noted in several studies that the general efficiency of et al. [et al. [et al. [Currently, the methodology of neural differentiation has been well established in human ES cells and shown that these methods can also be applied to iPS cells. Chambers et al. demonstr [et al. used a c [et al. providedet al., [Wernig et al., showed tet al., [Tsuji et al., used preet al., [in vitro drug screen for treatment of PD.Hargus et al., demonstrReprogramming technology is being applied to derive patient specific iPS cell lines, which carry the identical genetic information as their patient donor cells. This is particularly interesting to understand the underlying disease mechanism and provide a cellular and molecular platform for developing novel treatment strategy.in vitro model of pathogenesis (Table 4). This provides an innovative way to explore the molecular mechanisms of diseases. Human iPS cells derived from somatic cells, containing the genotype responsible for the human disease, hold promise to develop novel patient-specific cell therapies and research models for inherited and acquired diseases. The differentiated cells from reprogrammed patient specific human iPS cells retain disease-related phenotypes to be an Recent studies have reported the derivation and differentiation of disease-specific human iPS cells, including autosomal recessive disease , cardiacet al., [Human iPS cells have been reprogrammed from spinal muscular atrophy, an autosomal recessive disease. Ebert et al., generateet al., [et al., [et al., [Similarly, three other groups reported their findings on the use of iPS cells derived cardiomyocytes (iPSCMs) as disease models for LQTS type-2 (LQTS2). Itzhaki et al., obtained[et al., demonstr[et al., also sucet al. [LQTS3 has been recapitulated in mouse iPS cells . Malan eet al. generateet al. [Human iPS cells have been used to recapitulate diseases of blood disorder. Ye et al. demonstret al., [Raya et al., reportedet al., [in vitro, which mimic the pathological phenotype of T1DM. This will lead to better understanding of the mechanism of T1DM and developing effective cell replacement therapeutic strategy. Maehr et al., demonstret al., [IKBKAP in vitro, while neural crest precursors showed low levels of normal IKBKAP transcript. Transcriptome analysis and cell-based assays revealed marked defects in neurogenic differentiation and migration behavior. All these recaptured familial Dysautonomia pathogenesis, suggesting disease specificity of the with familial Dysautonomia human iPS cells. Furthermore, they validated candidate drugs in reversing and ameliorating neuronal differentiation and migration. This study illustrates the promise of disease specific iPS cells for gaining new insights into human disease pathogenesis and treatment.Lee et al., reportedet al., [et al., [Human iPS cells derived reprogrammed from patients with inherited neurodegenerative diseases, amyotrophic lateral sclerosis and Huntet al., showed t[et al., derived Disease specific somatic cells derived from patient-specific human iPS cells will generate a wealth of information and data that can be used for genetically analyzing the disease. 
The genetic information from disease specific-iPS cells will allow early and more accurate prediction and diagnosis of disease and disease progression. Further, disease specific iPS cells can be used for drug screening, which in turn correct the genetic defects of disease specific iPS cells. in vitro. However, much remains to be done to use these cells for clinical therapy. A better understanding of epigenetic alterations and transcriptional activity associated with the induction of pluripotency and following differentiation is required for efficient generation of therapeutic cells. Long-term safety data must be obtained to use human iPS cell based cell therapy for treatment of disease. iPS cells appear to have the greatest promise without ethical and immunologic concerns incurred by the use of human ES cells. They are pluripotent and have high replicative capability. Furthermore, human iPS cells have the potential to generate all tissues of the human body and provide researchers with patient and disease specific cells, which can recapitulate the disease"} {"text": "Dear Editor,I read with interest a nice paper by Zidan et al. about viral hepatitis as etiology of hepatocellular carcinoma in Iran and the world . They ha"} {"text": "My comments on Haig\u2019s updating of Blurton Jones and da Costa\u2019s hypotheset al. [et al. [Haig\u2019s hypothesis predicts that \u2018Maximal night-waking \u2026 will \u2026 overlap with the greatest benefits of contraceptive suckling. Consistent with this expectation, infant sleep becomes more fragmented after six months and then gradually consolidates.\u2019 Night wakings and sleep consolidation, however, are known to follow significantly different courses during the first year of life dependent on attachment classifications of the infants. For example, McNamara et al. reported [et al. recentlyIt may be that infants with insecure avoidant orientations adopt a strategy of NOT attempting to prolong IBI so that the Blurton-Jones/daCosta/Haig hypothesis applies only to infants with insecure resistant orientations. Optimal IBIs may differ for mother\u2013infant dyads depending on both attachment status and ecologic context. In environments with high mortality rates parents should pursue an opportunistic reproductive strategy aimed at greater numbers of offspring appearing at shorter IBIs and receiving lower levels of investment (e.g. reduced nursing) resulting in fewer night wakings and greater numbers of avoidant attachment orientations.r REM [et al. [Most night wakings emerge from active sleep or REM . If nigh [et al. , 12 made [et al. . Infant [et al. . The suc [et al. , 22. Wheet al. [Assuming that AS/REM is differentially influenced by genes of paternal origin then both REM properties and REM-associated awakenings can be better explained by mechanisms of genomic conflict than by traditional claims that REM functions as an anti-predator \u2018sentinel\u2019 for the sleeping organism. Capellini et al. , using pIf properties and functions of AS/REM are better explained as due to the influence of genes of paternal origin , then AS/REM sleep in the infant should, according to genomic conflict theory, function to extract resources from the mother consistent with getting paternal line genes into the next generation. As we have seen REM is associated with night wakings and suckling behaviors in the infant. Although REM percent of total sleep declines with age it nevertheless persists into the adult stage. What about the functions of REM sleep in the adult? 
REM indices appears to vary with attachment (reproductive) strategies in the adult but littConflict of interest: None declared."} {"text": "The progression of chronic kidney disease (CKD) remains one of the main challenges in clinical nephrology. Therefore, identifying the pathophysiological mechanisms and the independent preventable risk factors helps in decreasing the number of patients suffering end stage renal disease and slowing its progression.Smoking data was analyzed in patients with CKD throughout 2005-2009. One hundred and ninety-eight patients who had recently been diagnosed with stage three CKD or higher according to the National Kidney Foundation (NKF) 2002 Classification were studied. The control group was randomly selected and then matched with the case subjects using a computerized randomization technique. The relative risk was estimated by computing odds ratio (OR) by using multinomial logistic regression in SPSS \u00ae for Windows between the two groups.p = 0.009, 95% CI = 1.12-2.29). When compared to nonsmokers, current smokers have an increased risk of having CKD , while former smokers did not have a statistically significant difference. The risk increased with high cumulative quantity . Smoking increased the risk of CKD the most for those classified as hypertensive nephropathy and diabetic nephropathy . No statistically significant difference in risk was found for glomerulonephritis patients or any other causes.Smoking significantly increases the risk of CKD (OR = 1.6, This study suggests that heavy cigarette smoking increases the risk of CKD overall and particularly for CKD classified as hypertensive nephropathy and diabetic nephropathy. Smoking, a well known risk factor for many diseases, was recently proven to play an important role in renal diseases. Studies showed that cigarette smoking is a risk factor for the development and progression of chronic kidney disease (CKD) in community ,2. In thSince urinary albumin is a sensitive marker of glomerular injury , it is cThis study aims to investigate the relationship between cigarette smoking and chronic kidney disease, and its effects on each type of renal failure.2 2002 classification) [2) and a urine protein creatinine ratio less than 0.15 [A cross-sectional study of 198 patients with CKD and 371 healthy control subjects were matched and studied. Cases were patients admitted or referred to one of the three tertiary hospitals affiliated with Aleppo university during 2005-2009 and newly diagnosed with CKD with estimated glomerular filtration rate (eGFR) of less than 60 mg/minute/1.73 mication) . Patienthan 0.15 . A regishttp://www.who.int. To reduce the probability that symptoms of early CKD are influenced by tobacco use, the classification of former versus current tobacco use was based on smoking status 5 years before the interview. Regular cigarette smoking was defined as smoking at least one cigarette per day for six months or more. Regular smokers were classified into either current regular smokers if they smoked during the last five years or former regular smokers if they quit more than five years ago. Tobacco consumption was measured with a pack per year formula.The U.S. National Library of Medicine, Medical Subject Headings (MeSH) define smoking as inhaling and exhaling the smoke of tobacco or something similar to tobacco available at \u00ae for Microsoft Windows\u00ae, using multinomial logistic regression. 
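The exposure and risk measures described above (pack-years of smoking and odds ratios from multinomial logistic regression in SPSS) rest on simple arithmetic. The sketch below is a minimal, hedged illustration of that arithmetic only: it uses the usual pack-year definition (packs per day multiplied by years smoked) and a crude 2×2 odds ratio with a Woolf 95% confidence interval. The counts in the example are hypothetical, and the published estimates were obtained from covariate-adjusted regression, not from this crude calculation.

```python
import math

def pack_years(cigs_per_day: float, years_smoked: float) -> float:
    """Pack-years = (cigarettes per day / 20) * years smoked (standard definition)."""
    return (cigs_per_day / 20.0) * years_smoked

def odds_ratio_2x2(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Crude odds ratio with a Woolf (log-normal) 95% confidence interval."""
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    se_log_or = math.sqrt(1 / exposed_cases + 1 / exposed_controls +
                          1 / unexposed_cases + 1 / unexposed_controls)
    lower = math.exp(math.log(or_) - 1.96 * se_log_or)
    upper = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lower, upper)

# Hypothetical counts: smokers vs. never-smokers among cases and controls.
print(pack_years(cigs_per_day=20, years_smoked=30))      # 30 pack-years
print(odds_ratio_2x2(110, 160, 88, 211))                  # crude OR with 95% CI
```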
We studied smoking as a relative risk for CKD, and then we studied former versus current smoking for the same purpose. The next step was to determine the relative risk of smoking for each type of renal disease. We studied analgesic (aspirin or paracetamol) use, body mass index (BMI), angiotensin-converting enzyme inhibitors (ACEi) or angiotensin II receptor blockers (ARBs) use, and hypertension (HTN) as confounding factors; however these factors were excluded from the final model because their inclusion did not affect the risk estimates.As an estimate of the relative risk for CKD among tobacco users compared with non-smokers, we computed odds ratios (OR) in terms of SPSSOne hundred and ninety-eight patients and 371 healthy participants as the control group were included. The most used tobacco form was as a cigarette, so we ignored pipe and snuff usage.Fifty one percent of case subjects and 49% of control subjects were men Table . The mea2. Forty five and a half percent of cases had glomerulonephritis as a cause for their CKD, 28.3% diabetic nephropathy, 13.1% hypertensive nephropathy, and 13.1% had an unknown or other causes for CKD.A majority of the patients were in The National Kidney Foundation Kidney Disease Outcomes Quality Initiative (NKF KDOQI \u2122 stage 3 and 4 . Seventyp = 0.009) and , respectively. The risk of CKD did not reach a significant level for former regular smokers . This risk was obviously increased with the tobacco consumption, with OR = 2.6, CI 95% 1.53-4.41, p = 0.00 for more than 30 pack/year, and OR = 2.04, CI 95% 1.08-3.88, p = 0.02 for 16-30 pack/year compared to those who had never regularly smoked and diabetic nephropathy . This association did not reach a significant level for risk of CKD caused by glomerulonephritis and the other causes of CKD . The control subjects were gathered from the same socioeconomic background. We did not find any influence by body mass index, hypertension and analgesic use on the final results.Table We found an important statistically significant risk of CKD caused by smoking in hypertensive nephropathy and diabetic nephropathy patients and a weak, statistically insignificant association between smoking and CKD caused by glomerulonephritis.et al. [Many studies indicate that the deleterious effect of smoking on renal function is not merely restricted to essential hypertension and diabetic nephropathy. Some of these studies found that smoking is an independent predictor of microalbuminuria in healthy patients with primary hypertension. It is well known that urinary albumin is a sensitive marker of glomerular injury , and theet al. studied et al. ,19. Multet al. ,21. Suchet al. [et al. [et al. [Smoking causes kidney deterioration in diabetic patients with adverse effects on four different aspects of albumin excretion: It increases the risk of microalbuminuria, shortenet al. reported [et al. calculat [et al. which coet al. [To determine the risk factors for noninsulin dependent diabetes in a cohort representative of middle aged British men, Perry et al. conducteet al. .et al. [et al. [Another question would be the effect of smoking cessation on kidney function. In patients with IDDM, Chase et al. found th [et al. studied et al. [One of the other diseases that exacerbate with smoking is autosomal dominant polycystic kidney disease (ADPKD), Chapman et al. found thet al. . The medet al. .et al. [et al. [Appel et al. studied [et al. stated tet al. [et al. [A previous, hospital-based case-control study found a et al. reported [et al. 
did not [et al. .et al. [The increment of GFR induced by smoking may play a role in the genesis of hyperfiltration as a potential mediator of accelerated progression of chronic renal disease ,35. Pawlet al. studied et al. ,38. On tet al. -41.Smoking also alters the proximal tubular function leading to increased excretion of N-acetyl-f3-glucosaminidase (NAG) and impairment of organic cation transport which coIn conclusion, our study found that smoking, particularly heavy smoking (> 30 pack/year), is an important risk factor to the development of CKD. The association was strongest for CKD classified as hypertension and diabetic nephropathy. These results raise the importance of smoking cessation to decrease the incidence of CKD and other preventable diseases as COPD, coronary artery diseases, and cancers.The authors declare that they have no competing interests.RY is the principle investigator, participated is study design, data analysis, data collection and manuscript preparation. HH participated on study design, data collection and manuscript preparation. AL participated in study design and manuscript preparation. RAA participated in data analysis and manuscript preparation. LV participated in data analysis and manuscript preparation. GA participated in data collection and manuscript preparation. NKA participated in data collection and manuscript preparation. SA participated in data collection. SAA participated in data collection. SAB is the supervisor, participated in study design and manuscript preparation.All authors read and approved the final manuscript.The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1471-2458/10/731/prepub"} {"text": "Electronic noses (E-noses) use various types of electronic gas sensors that have partial specificity. This review focuses on commercial and experimental E-noses that use metal oxide semi-conductors. The review covers quality control applications to food and beverages, including determination of freshness and identification of contaminants or adulteration. Applications of E-noses to a wide range of foods and beverages are considered, including: meat, fish, grains, alcoholic drinks, non-alcoholic drinks, fruits, milk and dairy products, olive oils, nuts, fresh vegetables and eggs. A third category is based on artificial neural networks (ANN). A neural network consists of a set of interconnected processing algorithms functioning in parallel. On a very simplified and abstract level, ANN is based on the cognitive process of the human brain [The two main components of an electronic nose (E-nose) are the sensing system and the automated pattern recognition system. The sensing system can be an array of several different sensing elements or a single device or a combination of both. Volatile organic compounds (VOCs) presented to the sensor array produces a signature or pattern which is characteristic of the vapor. By presenting many different chemicals to the sensor array, a database of signatures can be build up. Data analysis and pattern recognition (PARC) in particular, are also fundamental parts of any sensor array system. There are a variety of PARC methods available which can be categorized in three classes. The choice of the method depends on available data and the type of result that is required. Graphical analysis with bar charts, profiles polar and offset polar plots are simple forms of data treatment that may be used with an electronic nose. 
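The simplest of the data-treatment options just listed is a direct graphical display of a sensor-array "signature", for instance as a polar (radar) plot with one spoke per sensor. The following matplotlib sketch uses made-up, normalized responses for a hypothetical eight-sensor MOS array; it is only meant to show what such a signature plot looks like, not to reproduce any figure from the studies reviewed here.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical normalized responses of an 8-sensor MOS array to one headspace sample.
responses = np.array([0.42, 0.75, 0.31, 0.88, 0.54, 0.22, 0.67, 0.49])
labels = [f"S{i + 1}" for i in range(len(responses))]

# Spoke angles; repeat the first point so the polygon closes.
angles = np.linspace(0, 2 * np.pi, len(responses), endpoint=False)
angles = np.concatenate([angles, angles[:1]])
values = np.concatenate([responses, responses[:1]])

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, marker="o")      # outline of the odor signature
ax.fill(angles, values, alpha=0.25)      # shaded signature area
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_title("Polar-plot signature of one headspace sample")
plt.show()
```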
A second way of analysing E-nose signals is by means of multivariate analysis. Multivariate analysis generally involves data reduction. It reduces high dimensionality in a multivariate problem where variables are partly correlated, allowing the information to be displayed in a smaller dimension. There are many multivariate techniques to choose from: principal component analysis (PCA), cluster analysis (CLA), linear discriminant analysis (LDA) and partial least squares (PLS), among others.
Gas sensors based on the chemical sensitivity of metal oxide semi-conductors (MOS) are readily available commercially. They have been more widely used to make arrays for odor measurement than any other class of gas sensors. Most are based on tin dioxide (SnO2) doped with a small amount of a catalytic metal such as palladium or platinum. By changing the choice of catalyst and operating conditions, tin dioxide resistive sensors have been developed for a range of applications. Materials with improved performance with respect to relative humidity variations have been found by empirical experimentation; other metal oxides, such as titanium dioxide (TiO2) and tungsten oxide (WO3), have also been used.
In addition to variations in the composition of MOS sensor materials, the metal oxide film deposition is an important variable governing sensor performance and design. The sensor usually comprises a ceramic support tube containing a platinum heater coil; sintered SnO2, together with any catalytic additives, is coated onto the outside of the tube. Gas samples are sensed by the change in the electrical resistance of the metal oxide semi-conductor, with resistance changes due to combustion reactions occurring within the lattice oxygen species on the surface of the metal oxide particles. Metal oxides used for gas sensing are classified as either p or n type. The n and p type designations indicate which charge carrier acts as the material's majority carrier. N-type materials are semi-conductors doped with atoms capable of providing extra conduction electrons to the host material, creating an excess of negative (n-type) electron charge carriers. A p-type semi-conductor (p for positive) is obtained by doping, that is, by adding a certain type of atoms to the semi-conductor in order to increase the number of free positive charge carriers. For p-type oxides, an increase in resistance is found in the presence of reducing gases, while the resistance decreases in response to oxidizing gases; n-type oxides show the opposite behavior. Examples of n-type oxides are SnO2 and WO3; a p-type oxide is CTO.
An electronic nose can detect and estimate odors quickly though it has little or no resemblance to animal noses. Sensors used in E-noses are less independent and are more narrowly tuned to certain VOCs, compared with olfactory receptors from invertebrate animals like fruit flies.
This review article focuses on the use of MOS-based electronic noses for food applications, the technical limitations for some applications and the different approaches undertaken to overcome them. Problems that have been addressed with MOS-based electronic noses are those related to quality control, process monitoring, aging, geographical origin, adulteration, contamination and spoilage.
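As a concrete illustration of the data-reduction step described above, the sketch below projects a small matrix of sensor-array responses (rows = samples, columns = sensors) onto its first two principal components using plain NumPy; real E-nose studies typically apply a projection of this kind before classification or regression. The data are simulated and do not correspond to any study cited in this review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated responses: 20 samples x 8 MOS sensors, forming two loose clusters
# standing in for, e.g., "fresh" and "spoiled" headspace measurements.
fresh   = rng.normal(loc=0.3, scale=0.05, size=(10, 8))
spoiled = rng.normal(loc=0.6, scale=0.05, size=(10, 8))
X = np.vstack([fresh, spoiled])

# PCA via singular value decomposition of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                        # sample coordinates on PC1 and PC2
explained = (S**2 / np.sum(S**2))[:2]         # fraction of variance explained

print("Variance explained by PC1, PC2:", np.round(explained, 3))
print("First two sample scores:\n", np.round(scores[:2], 3))
```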
Compared to MOS sensors, MOSFETs rely on a change of electrostatic potential and they are based on the modulation of charge concentration by a MOS capacitance between a body electrode and a gate electrode located above the body and insulated from all other device regions by a gate dielectric layer which in the case of a MOSFET is an oxide, such as silicon dioxide [et al. [Salmonella typhimurium using a metal oxide based electronic nose. The E-nose housed seven thick film SnO2 MOS sensors . Microbiological measures as a standard method to determine spoilage level included Salmonella counts and total aerobic counts. The average prediction accuracy of the E-nose responses was 69.4% using step-wise linear regression principal components (PC) as input. The accuracy of predicting was improved to 83% when using independent components (IC). PCA is a second-order linear transformation method of data representation which assumes that data follows a Gaussian distribution and uses the variance within the data set to estimate the transform. IC analysis is a relatively new multivariate data analysis that assumes data is non-Gaussian and uses data density information to estimate the transform [Meat is an ideal growth medium for several groups of pathogenic bacteria. Estimation of meat safety and quality is usually based on microbial cultures. Bacterial strain identification requires a number of different growth conditions and biochemical tests with overnight or large incubation periods and skilled personnel, which means that testing may not be frequently performed. Other methods of determining meat safety involve quantifying volatile compounds associated with the growth of microorganisms on meat but these are also time consuming \u201318. Winqet al. evaluate [et al. evaluateransform .et al. [Vernat-Rossi et al. studied i.e., non-castrated males, is discouraged because of the unpleasant cooking odor known as \u201cboar taint\u201d. 5a-Androst-16-en-3-one (5a-An) has been identified as the main compound responsible for the urine like-odor associated with boar taint [et al. [Although most applications of metal oxide semi-conductors based sensors in meat have been focused on rapid methods for detecting spoilage by bacterial contamination, some work has also been conducted to determine the presence of off-flavors in meat. In many countries pork production from intact, ar taint , with 3-ar taint . Bourrou [et al. used 14 2.2.et al. [Freshness is the most important factor for fish quality. Traditionally, fish quality evaluation has been based on organoleptic tests. This type of testing is subjective even when performed by experienced and well-trained personnel. Gas chromatography has revealed that many volatile compounds are released from degrading fish, some of which can be used as indicators of spoilage. Electronic nose are suitable instruments for measuring fish freshness since a large number of volatile compounds are related to \u201coffness\u201d . Olafsdoet al. employedThe same E-nose was later used to control processing of smoked salmon in a production plant . It was et al. [Sardines have also been the subject of research with the metal oxide semi-conductor sensors. El Barbri et al. developeet al. later in2.3.It is in the area of milk and other dairy products that there has been extensive research in evaluating electronic noses for monitoring the quality of these products. Areas of research have ranged from detecting adulteration/contamination of milk to determining the geographical origins of cheese. 
A list of reported applications is given below.2.3.1.etc. [et al. [Liquid milk is an essential nutritional food for infants. Adulteration of milk with water is a matter of serious concern because of the lower nutritional value provided to consumers. The dairy industry employs various quality checks which include the determination of fat and total solids by chemical or physical analyses; estimation of sediment; determination of bacterial count, determination of freezing point, protein, etc. . However [et al. monitoreet al. [Another area related to milk quality and safety is the detection of contaminants, including aflatoxins, in milk. Benedetti et al. studied et al. . Factorset al. . Great eet al. . QMB are2.3.2.2 thin films, prepared using sol-gel technology, was used to measure the development of rancidity in UHT and pasteurized milk during 8 and 3 days, respectively. The sensors could distinguish between both types of milk as well as determine the degree of rancidity of milks . E-nosesof milks . Labrech , mass spectrometer system (MS), MOSFET and a set of MOS . On thei2.3.4.Lactococcus lactis strains isolated from Nature for the production of aroma in cheeses and other dairy products [Lactococcus lactis strains as cheese starter cultures [Lactococcus lactis were isolated from dairy sources such as boutique raw-milk cheeses, non-dairy sources, and commercial starter cultures . PCA based on E-nose data revealed four distinctive groups based on aroma profiles that were not correlated with their origin, which also agreed with the results of the sensory analysis.There is a strong interest in exploring the potential of novel products . A MOS-bcultures . Twenty-2.3.5.Emmental cheese can develop a \u201crind taste\u201d off-flavor that can be identified by tasting the cheese at the hoop side (curved side). The components responsible for this off-flavor are often not eliminated during manufacturing, and so may therefore be present in the final product. Cheese loaves must therefore be treated carefully during ripening in order to avoid this problem. The cut pieces are either used fresh or stored in a freezer room which slows down the oxidation process. Attempts to identify the volatile basis of \u201crind taste\u201d using GC-MS have been unsuccessful . Tainted2.3.6.A study on the discrimination of caseinates from different origins was conducted with an E-nose . PCA and2.4.et al. [One of the main concerns of the egg industry is the systematic determination of egg freshness, because some consumers perceive variability in freshness as lack of quality . The modet al. used fouet al. [et al. [i.e., Haugh unit and yolk factor (ratio of yolk height and width), using E-nose responses indicated a good prediction performance are highly toxic and carcinogenic secondary metabolites produced by fungi, mainly the genus cotoxins . Because [et al. employed [et al. demonstret al. [2 semi-conductors. Prediction of the degree of \u2018mold/musty\u2019 odor was also included in the study. The E-nose correctly classified 75% of samples according to the four descriptors. Ninety percent of the samples could be correctly assigned using a two-class system \u2018good\u2019 and \u2018bad\u2019. These values exceed the levels of agreement between two human grain inspectors classifying the same samples. Jonsson at al. [\u22121, respectively.Other applications of E-nose to grains include measuring off-odors indicative of past or ongoing microbial deterioration ,53. Borjet al. tried to2.6.et al. 
[835 E-nose with six thin film MOS sensors was tested for its ability to determine microbial contamination in canned peeled tomatoes [Saccharomyces cerevisiae and Escherichia coli) as well as classify spoiled tomato samples with high fidelity.The aroma of fruits and vegetables are either formed during ripening or upon tissue disruption, which occurs after maceration, blending or homogenization. Many volatile compounds are naturally formed by enzymes found in the intact tissue of fruits and vegetables. They originate from secondary metabolites with various biosynthetic pathways. The characteristic aroma of fruits is an important factor in their overall acceptance by the consumer. For many years human senses have been the primary \u201cinstrument\u201d that has been used to determine fruit quality. More recently, techniques such as gas chromatography-mass spectrometry (GC-MS) have been used to characterize the volatile profiles of fruits and vegetables. However, it is neither feasible nor practical to use techniques such as GC-MS or sensory panel to assess cultivars or product found at storage stations. Consequently, E-noses have the potential to fill this gap since they are a rapid, transportable and an objective measurement tool for aroma analysis. Gomez et al. evaluatetomatoes . Tomatoeet al. [et al. [Other fruits including blueberries, melons, snake fruit and mandarins have been evaluated by MOS-based electronic noses either to predict the optimal harvest day (OHD) or to monitor shelf life. Blueberry is a highly perishable fruit that must be processed properly and with care otherwise it can develop damage such as cracks, leaks, soft spoilage, which will be apparent to the consumer. Simon et al. were ablet al. . For manet al. . In contet al. . In a diet al. ,61. OHD [et al. suggeste [et al. . Likewis [et al. ,64.2.7.et al. [Virgin olive oils (VOO), and in particular, extra-virgin olive oils (EVOO), are produced using cold pressing techniques. They are sought-after olive oil product because of their aroma, taste, antioxidant, and nutritional properties. The cultivation of olive trees, harvesting of the fruit, and extraction of olive oil are labor intensive and time consuming tasks which add considerably to the overall cost of the oil. Attempts to adulterate VOO with less expensive vegetable oils or lower quality olive oils are thus by no means rare. Not only does this practice defraud consumers, but also constitutes a threat to the reputation and economic development of Mediterranean agricultural communities . Aroma iet al. used an et al. [2 = 0.97 and 0.98 for Spanish and Italian VOO, respectively). The same E-nose was evaluated for its capacity to detect other attributes in olive oils like \u201crancid\u201d and \u201cfusty\u201d [Other common problems in olive oils are defects caused by chemical taints, mainly volatiles. These may be formed from a number of sources. These include oxidation of unsaturated fatty acids, overripe fruit, molds or bacterial contamination. Trained panels are commonly used to evaluate these defects. Indeed, regulators in the European Union and the International Olive Oil Council (IOOC) have adoet al. studied \u201cfusty\u201d . In this2.8.2 and foam formation when the samples are heated for headspace sampling. 
Different approaches have been tested to liberate CO2 such as NaCl addition at low temperature [While metal oxide semi-conductors are probably the most widely used of the E-nose sensors, they have still some significant limitations when applied to alcoholic drinks. MOS responses depend logarithmically on the concentration of analyte gases. In the presence of very high concentrations of analyte, such as ethanol in alcoholic beverages, sensors become saturated and mask the responses to other volatile compounds. Consequently, samples tend to be differentiated on the basis of variations in ethanol content rather than the other volatile compounds which are responsible for the aroma . Severalperature , nitrogeperature or simplperature . Notwith2.8.1.et al. [3) semi-conductor thin film sensors. Static headspace sampling (SHS) was used to sample the volatile compounds above the wine sample. Although SHS, as recognized by the authors [et al. [i.e., chemical analysis, E-nose, Etongue and color measurements) correctly classified 100% of wines into their region. The error rate using only E-nose responses was not disclosed. Berna et al. [Discrimination of wines is not an easy task due to the complexity and heterogeneity of the headspace. However, classification of wines is an economically important application because of high value of wines from specific geographical regions and also the need to prevent illegal substitution or adulteration. Current methods to evaluate wines include sensory analysis but, because of its high cost, E-noses have been evaluated as an alternative. Penza et al. tested t authors , is very authors employed [et al. employeda et al. comparedet al. [2.Di Natale et al. , 88 empl2.8.2.et al. [et al. [et al. [Aging of wines and beers allows desirable flavors and aromas to develop and off-flavor notes to diminish. Volatile chemicals are an important component of wine and beer flavor and it is desirable to monitor them through aging. Garcia et al. analyzed [et al. used 20 [et al. attempte2.8.3.et al. [et al. [et al. [2 based MOS sensors . Without the removal of the ethanol content (similar in each sample), it was impossible to discriminate beer brands and 4-ethylguaiacol (4EG). Typically these taints are described as \u201cbarnyard\u201d, \u201csweaty saddle\u201d and \u201cband-aid\u201d when present in red wine at concentrations of several hundred \u03bcg L\u22121 or more. Using nylon membranes to remove ethanol, Berna et al. [\u22121 for 4EP and 91 \u03bcg L\u22121 for 4EG , housing six thin-film semi-conductor MOS, to monitor the ripening or seasoning process of commercial coffee blends, made from 12 different types of monocultivar Arabica coffee. Although the authors focused on the investigation of sampling conditions and feature selection for improving classification performance, the results showed that EOS835 was suitable to monitor coffee blends during the seasoning process. Only two sensors performed adequately for this application.Falasconi et al. used a n2.10.et al. [Cichorium intybus) and carrots (Daucus carota). PCA of the data demonstrated that E-nose responses correlated well with classical evaluation of vegetable spoilage, i.e., microbial population and color index. E-nose is therefore a suitable tool for monitoring the storage of these products.Electronic noses have also been evaluated for their ability to measure shelf life of products like nuts and fresh cut vegetables ,105. Becet al. used an et al. [Staphylococcus and one of Micrococcus. 
These microbes are of interest since they have been reported to be present in fermented sausages [Discrimination of bacterial species is also of interest to the food industry although E-noses have not been extensively investigated in this application. Rossi et al. looked asausages . Factori3.i.e., these laboratory-based assays are moved into the industry, a number of challenges still need to be met; these are to properly assess various characteristics of electronic nose performance, including drift, humidity influence, redundancy of sensors, selectivity and signal to noise ratio. Although new sensor materials and designs, and correction algorithms that can be applied for each sensor, are being reported, the major limitation of currently available MOS sensors remains their independence and selectivity. Sensors with poor selectivity affect adversely the discriminating power of the array. Additionally, with the technology developed so far, it is unrealistic to envisage a universal electronic nose that is able to cope with every odour type as specific data processing and, sometimes instrumentation, must be designed for each application.Potential applications in odour assessment by electronic noses in the food area are numerous; they have been used for quality control, monitoring process, aging, determination of geographical origin, adulteration, contamination and spoilage. In most cases classification of samples was above 85%, but, before these specific applications can become a reality,"} {"text": "Mitochondria are intracellular organelles that play a crucial role in energy metabolism. Most cell energy is obtained through mitochondrial metabolic pathways, especially the Krebs cycle and electron transport chain which is the main site for production of reactive oxygen species such as superoxide, hydrogen peroxide, and hydroxyl radicals. Brain tissue is highly sensitive to oxidative stress due to its high oxygen consumption, iron and lipid contents, and low activity of antioxidant defenses. Thus, energy metabolism impairment and oxidative stress are important events that have been related to the pathogenesis of diseases affecting the central nervous system.In the present issue, the pathogenesis of common Alzheimer's (AD) and Parkinson's (PD) diseases is addressed in five papers. Mondrag\u00f3n-Rodr\u00edguez et al. proposed that phosphorylated tau protein could play the role of potential connector and that a combined therapy involving antioxidants and check points for synaptic plasticity during early stages of the disease could become a viable therapeutic option for AD treatment. This paper is accompanied by a study by T. Rohn that explores the potential role that the triggering receptor expressed on myeloid cells 2 (TREM2) normally plays and how loss of function may contribute to AD pathogenesis by enhancing oxidative stress and inflammation within the central nervous system. Additionally, O. Myhre et al. review the possible impact of environmental exposures in metal dyshomeostasis and inflammation in AD and PD. Furthermore, T. Omura et al. explore recent studies on the mechanism of endoplasmic reticulum stress-induced neuronal death related to PD, focusing on the involvement of human ubiquitin ligase HRD1 in the prevention of neuronal death as well as a potential therapeutic approach for PD based on the upregulation of HRD1. Lastly, S. Matsuda et al. 
showed a concise overview on the cellular functions of the mitochondrial kinase PINK1 and the relationship between Parkinsonism and mitochondrial dynamics, with particular emphasis on a mitochondrial damage response pathway and mitochondrial quality control.
Three of the papers deal with aspects of oxidative stress implicated in the pathogenesis of neurodegenerative diseases. W. Liu et al. reviewed the current literature on the effects of oxidative stress due to exhaustive training on uncoupling protein 2 (UCP2) and Bcl-2/Bax in rat skeletal muscles. A.-M. Enciu et al. explore the possibility of oxidative-induced molecular mechanisms of blood-brain barrier disruption and tight junction protein expression alteration, in relation to aging and neurodegeneration. Moreover, M. Tajes et al. suggest that peroxynitrite induces cell death and is a very harmful agent in brain ischemia.
The paper by P. F. Schuck et al. showed that trans-glutaconic acid is toxic to brain cells in vitro, causing alterations in cell ion balance and probably neurotransmission, as well as oxidative stress in rat cerebral cortex. To complete the issue, M. J. Rodríguez et al. discuss the mitochondrial KATP channel as a new target to control microglia activity, avoid its toxic phenotype, and facilitate a positive disease outcome. Fortes et al. showed that 5TIO1 can protect the brain against neuronal damage regularly observed during neuropathologies. These papers are accompanied by a review by Z. Yu et al. on how neuroglobin's neuroprotection is related to mitochondrial function and regulation. A. Hosseini and M. Abdollahi review the pathogenesis of diabetic neuropathy with a focus on oxidative stress and introduce therapies dependent on or independent of oxidative stress. A. Sekigawa et al. review the currently available evidence that neither mitochondria nor leucine-rich repeat kinase 2 (LRRK2) was present in the swellings of mice expressing P123H β-synuclein, suggesting that α- and β-synucleins might play differential roles in the mitochondrial pathology of α-synucleinopathies.
By compiling these papers, we hope to enrich our readers and researchers with respect to mitochondrial dysfunction, energy metabolism impairment, and oxidative stress in the pathophysiology of neurodegenerative diseases.
Emilio L. Streck, Grzegorz A. Czapski, Cleide Gonçalves da Silva"} {"text": "The present contribution focuses on the clinical relationship with foreign subjects who present post-traumatic conditions. Starting from a discussion of the concepts of culture and context between the modern and post-modern paradigms, a general model of clinical intervention is proposed, involving a form of negotiated setting founded on extraneousness and contingency. Context and culture do not correspond to universal, spatio-temporally circumscribed categories; they are the way in which the subject gives meaning to the world. This semiotic-clinical disposition represents a criterion for knowing the person's configurations, but also an instrument for bringing post-traumatic forms to the surface. Thus, the clinician comes to know both the way in which the other gives meaning to their own experience and the form that the trauma takes on their basic functioning. The negotiating perspective distances itself from an idea of the other as a bearer of suffering or as a victim, and characterizes the subject as a builder of experience, hence endowed with agency.
Therefore, it is possible to draw on personal resources to orient a contingency-based intervention that promotes a process of co-construction of meaning, a product of the relationship between the actors. This generative aspect acts upon the traumatic forms of the acting subject, opening a perspective of development and change. The present psychological approach treats the other's crisis not through phenomenal, psychiatric and classificatory categories, but according to the meaning the other gives to their own experience, even when it is dysfunctional to their own existence. In this way, the object of the relationship is the relational process itself and the subject in their complexity, thus not only in their capacity as a traumatized person and/or a foreigner, which are partial, state-bound conditions rather than existential ones.
This paper focuses on the clinical relationship with foreign subjects who report aspects of post-traumatic stress. Starting from the discussion of the concepts of culture and context between modern and post-modern paradigms, a general model of clinical intervention involving a form of negotiated setting based on extraneousness and contingency is proposed. Context and culture do not respond to universal categories and limited space-time frames, but they are the way in which the subject gives meaning to the world. This semiotic-clinical disposition represents a criterion of knowledge of the individual configurations and also an instrument for bringing post-traumatic forms to the surface. The clinician knows the way in which the individual gives meaning to their experience and the form that the trauma takes on its basic operation. The negotiating perspective moves away from the idea of the person as a bearer of suffering or a victim and connotes the person as a builder of experience, hence endowed with agency. It is possible to intercept the personal resources to orient a contingency-based intervention that promotes a process of co-construction of meaning, a product of the relationship between the actors. This generative aspect acts on the traumatic forms of the agent, opening a prospect of development and change. This psychological approach treats the crisis not through phenomenal, psychiatric and classificatory categories, but according to the meaning that the subject gives to the experience, even when it is dysfunctional for their existence. The object of the relation is the relational process itself and the person in their complexity, not only in the sense of a traumatized person and/or a stranger, which are partial, state-bound conditions rather than existential ones."} {"text": "We found an error in our paper recently published in Nutrients. In our paper we cited the dose of branched-chain amino acids (BCAA) administered in the Karlsson et al. 2004 paper published in the American Journal of Physiology. This error impacts on the conclusions we draw, as our paper refers to this dose as 'large', when in actual fact the dose of BCAA administered in the Karlsson et al. study was 7.44 g.
We apologise for any inconvenience caused to the readers."} {"text": "Dietary fiber and whole grains contain a unique blend of bioactive components including resistant starches, vitamins, minerals, phytochemicals and antioxidants. As a result, research regarding their potential health benefits has received considerable attention in the last several decades. Epidemiological and clinical studies demonstrate that intake of dietary fiber and whole grain is inversely related to obesity, type two diabetes, cancer and cardiovascular disease (CVD). Defining dietary fiber is a divergent process and is dependent on both nutrition and analytical concepts. The most common and accepted definition is based on nutritional physiology. Generally speaking, dietary fiber is the edible parts of plants, or similar carbohydrates, that are resistant to digestion and absorption in the small intestine. Dietary fiber can be separated into many different fractions. Recent research has begun to isolate these components and determine if increasing their levels in a diet is beneficial to human health. These fractions include arabinoxylan, inulin, pectin, bran, cellulose, \u03b2-glucan and resistant starch. The study of these components may give us a better understanding of how and why dietary fiber may decrease the risk for certain diseases. The mechanisms behind the reported effects of dietary fiber on metabolic health are not well established. It is speculated to be a result of changes in intestinal viscosity, nutrient absorption, rate of passage, production of short chain fatty acids and production of gut hormones. Given the inconsistencies reported between studies this review will examine the most up to date data concerning dietary fiber and its effects on metabolic health. Dietary fiber and whole grains contain a unique blend of bioactive components including resistant starches, vitamins, minerals, phytochemicals and antioxidants. As a result, research regarding their potential health benefits has received considerable attention in the last several decades. Epidemiological and clinical studies demonstrate that consumption of dietary fiber and whole grain intake is inversely related to obesity , type twThe Food and Drug Administration (FDA) has approved two health claims for dietary fiber. The first claim states that, along with a decreased consumption of fats , an increased consumption of dietary fiber from fruits, vegetables and whole grains may reduce some types of cancer . \u201cIncreaRecent studies support this inverse relationship between dietary fiber and the development of several types of cancers including colorectal, small intestine, oral, larynx and breast ,6,7. AltThe second FDA claim supporting health benefits of DF states that diets low in saturated fat and cholesterol and high in fruits, vegetables and whole grain, have a decreased risk of leading to coronary heart disease (CHD) [Obviously, many studies support the inverse relationship of dietary fiber and the risk for CHD. However, more recent studies found interesting data illustrating that for every 10 g of additional fiber added to a diet the mortality risk of CHD decreased by 17\u201335% ,4. Risk Although only two claims have been adopted and supported by the FDA, multiple other \u201cpotential claims\u201d have been researched and well documented. Those conditions of particular importance, due to their increasing prevalence among the general population, include obesity and type two diabetes ,17. 
The i.e., starch, simple sugars, and fructans) is easily hydrolyzed by enzymatic reactions and absorbed in the small intestine. These compounds can be referred to as non-structural carbohydrates, non-fibrous polysaccharides (NFC) or simple carbohydrates. The second group are resistant to digestion in the small intestine and require bacterial fermentation located in the large intestine. These compounds can be referred to as complex carbohydrates, non-starch polysaccharide (NSP) or structural carbohydrates and are reflective in Neutral Detergent Fiber (NDF) and Acid Detergent Fiber (ADF) analysis. NDF consists of cellulose, hemicelluloses and lignin while ADF consists of cellulose and lignin. However, NDF and ADF analysis are used primarily for animal nutrition and the analysis of roughages. In the simplest form, carbohydrates can be separated into two basic groups based upon their digestibility in the GI tract. The first group and non-structural carbohydrates provide the basis for beginning to define and understand dietary fiber. This task has been a divergent process and has depended on both nutrition and analytical concepts. The most common and accepted definition is based on nutritional physiology. However, chemists and regulatory boards have leaned toward analytical procedures to factually define dietary fiber. The physiological definition is easier for the general public to understand and adopt for practical use. A recent description, as suggested by the American Association of Cereal Chemists , terms dThis definition describes in more detail the components of dietary fiber as well as its genetic makeup. Furthermore, the changes set forth in its description require few changes for its analytical evaluation .The World Health Organization (WHO) and Food and Agriculture Organization (FAO) agree with the American Association of Cereal Chemists (AACC) definition but with a slight variation. They state that dietary fiber is a polysaccharide with ten or more monomeric units which is not hydrolyzed by endogenous hormones in the small intestine .NSP can be further subdivided into the two general groups of soluble and insoluble. This grouping is based on chemical, physical, and functional properties . SolubleDietary fiber and whole grains are an abundant source of nutrients including vitamins, minerals, and a slowly digestible energy. In addition, they contain phytochemicals such as phenolics, carotenoids, lignans, beta-glucan and inulin. These chemicals, secreted by plants, are not currently classified as essential nutrients but may be important factors in human health . The synApproximately 66% of U.S. adults are overweight or obese resultinSubstantial research has been conducted to evaluate the effect of dietary fiber and body weight, most all of which show an inverse relationship between dietary fiber intake and change in body weight. Tucker and Thomas supporteet al. [Koh-Banerjee et al. concur wet al. [Dietary fiber\u2019s ability to decrease body weight or attenuate weight gain could be contributed to several factors. First, soluble fiber, when fermented in the large intestine, produces glucagon-like peptide GLP-1) and peptide YY (PYY) . These t and peptet al. observedet al. [It should also be noted that the inverse relationship between dietary fiber and ME was independent of dietary fat. Therefore, ME decreased as dietary fiber increased in both high and low fat diets. However, when dietary fiber was split into soluble and insoluble fiber, the results were much more inconclusive. 
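The metabolizable-energy (ME) dilution discussed above can be illustrated with a back-of-the-envelope calculation. The sketch below uses the standard Atwater factors (4, 9 and 4 kcal/g for protein, fat and available carbohydrate) together with the commonly used labelling approximation of about 2 kcal/g for fermentable fiber; the energy actually credited to fiber varies with its fermentability, so the numbers are indicative only and the meal compositions are hypothetical.

```python
def metabolizable_energy_kcal(protein_g, fat_g, available_carb_g, fiber_g,
                              fiber_kcal_per_g=2.0):
    """Approximate ME with Atwater factors plus a nominal energy value for fiber.

    The 2 kcal/g default for fiber is a labelling convention, not a measured
    value for any specific fiber source.
    """
    return 4 * protein_g + 9 * fat_g + 4 * available_carb_g + fiber_kcal_per_g * fiber_g

# Same meal, with 10 g of available carbohydrate swapped for dietary fiber.
base     = metabolizable_energy_kcal(protein_g=25, fat_g=20, available_carb_g=60, fiber_g=5)
high_fib = metabolizable_energy_kcal(protein_g=25, fat_g=20, available_carb_g=50, fiber_g=15)
print(base, high_fib, base - high_fib)   # the fiber-rich meal supplies about 20 kcal less
```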
Soluble fiber decreased ME when added to a low fat diet but increased ME when added to a high fat diet . It is net al. showed set al. . This coet al. . Subsequet al. . More reInsoluble fiber seems to have the opposite effect to that of soluble. When insoluble fiber intake was increased in mice consuming a high fat diet, body weight decreased . ResearcAccording to the data presented above, both soluble and insoluble fiber may lead to weight loss. However, there seems to be a relationship between the type of diet (high or low fat) and the type of fiber consumed. Insoluble fiber may play a more important role for weight loss during consumption of a high fat diet. Since resistant starch is a constituent of dietary fiber and undergoes the same digestion as insoluble fiber, comparing resistant starch and insoluble fiber may give us a better understanding of how dietary fiber can be used to treat and prevent obesity. Adding resistant starch to a diet dilutes its ME, but not to the degree of insoluble fiber .Numerous studies ,40 have Type two diabetes has increased exponentially over the past several years. Since 1990, self reported diabetes increased 61% . Althouget al. [Meyer et al. observedet al. [et al. [i.e., high glycemic load) tissues such as skeletal muscle, liver and adipose become resistant to insulin. While Hu et al. found th [et al. found a et al. [P = 0.005) inverse relationship between dietary fiber intake and diabetes when adjusted for age and BMI. Women consuming an average of 26 g/d of dietary fiber had a 22% lower risk of developing diabetes when compared to women only consuming 13 g/d. Schulze et al. [P < 0.001) with the consumption of an additional 12 g of dietary fiber per day. According to these findings, it may be more significant to focus on an increased consumption of dietary fiber to prevent diabetes than glycemic index/load. Although a majority of studies show a positive correlation between high glycemic foods and type two diabetes, several studies disagree with these findings. Meyer et al. found the et al. agreed wet al. [et al. [et al. [It is important to note that the inverse relationship between dietary fiber and diabetes observed by Meyer et al. and Schu [et al. was inde [et al. supporteversus insoluble fraction of fiber may give some insight on the efficacy of dietary fiber on diabetes and its mechanisms. Early research regarding soluble fiber demonstrated delayed gastric emptying and decreased absorption of macronutrients, resulting in lower postprandial blood glucose and insulin levels [According to recent research, the soluble n levels . This isn levels ,43,45.et al. [P = 0.0012) inverse relationship between insoluble fiber and the risk of type two diabetes while soluble fiber had no effect. Montonen et al. [Although some studies have been contradictory, showing no differentiation between soluble and insoluble fiber on diabetes , a majoret al. using hen et al. also foun et al. . Daily iet al. [Insoluble fiber only has a small effect on macronutrient absorption . Therefoet al. found thet al. . GIP is et al. . This maet al. ,50. Earlet al. and oralet al. . Accordiet al. , increasThe inverse relationship between cereal grains and diabetes may also be attributed to an increased consumption of magnesium. Increased intake of magnesium has been shown to decrease the incidence of type two diabetes ,54. Hypoet al. [According to recent research, increasing levels of dietary fiber may contribute a non\u2013pharmacological way to improve carbohydrate metabolism. 
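Glycemic load, referred to above, combines a food's glycemic index with the amount of available carbohydrate actually eaten: GL = GI × available carbohydrate (g) / 100. The short sketch below applies that standard formula to illustrative servings; the GI values and carbohydrate amounts are examples, not measurements from the studies discussed here.

```python
def glycemic_load(glycemic_index: float, available_carb_g: float) -> float:
    """Glycemic load = glycemic index x available carbohydrate (g) / 100."""
    return glycemic_index * available_carb_g / 100.0

# Illustrative comparison of a lower-GI and a higher-GI serving with equal carbohydrate.
print(glycemic_load(glycemic_index=55, available_carb_g=30))  # 16.5 -> medium glycemic load
print(glycemic_load(glycemic_index=75, available_carb_g=30))  # 22.5 -> high glycemic load
```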
However, some inconsistency does exist and may be contributed to the classification of dietary fiber and whole grains. Subjects with diagnosed type two diabetes may also add to the irregularity among the data. In a study of men and women with established diabetes, Jenkins et al. observedDietary fiber can be separated into many different factions . Recent Arabinoxylan (AX), a constituent of hemicelluloses, is comprised of a xylose backbone with arabinose side chains. AX is a major component of dietary fiber in whole grains having considerable inclusions in both the endosperm and bran. In wheat, AX account for around 64\u201369% of the NSP in the bran and around 88% in the endosperm . During et al. [Lu et al. , observeet al. . FastingThe lower glycemic index of AX may also play a role. Breads made with a flour rich in AX have a relatively low glycemic index of around 59. Whole wheat flour, although high in fiber, has a glycemic index of around 99 . ArabinoInulin is a polymer of fructose monomers and is present in such foods as onions, garlic, wheat, artichokes and bananas and is used to improve taste and mouthfeel in certain applications. It is also used as a functional food ingredient due to its nutritional properties. Because of this, inulin products can be used as a replacement for fat or soluble carbohydrates without affecting the taste and texture and still contribute to a foods nutritional value. Enzymatic hydrolyses in the small intestine is minimal (<10%) since inulin consists of beta bonds. Therefore, it enters the large intestine and is almost completely metabolized by the microflora. When fermented, they tend to favor propionate production which, in turn, decreases the acetate to propionate ratio leading to decreased total serum cholesterol and LDL , which abifidobacteria while restricting the growth of potential pathogenic bacteria such as E. coli, Salmonella, and Listeria. This could prove to be beneficial in such disorders as ulcerative colitis and C. difficile infections. Rafter et al. [Inulin has also demonstrated the ability to contribute to the health of the human large intestine as a prebiotic . They der et al. agreed wet al. [Increased mineral absorption may also contribute to the functionality of inulin. Increased calcium absorption, by approximately 20%, was reported in adolescent girls supplemented with inulin . Resultset al. , supportet al. [Inulin may also provide a way to prevent and treat obesity. Cani et al. demonstr\u03b2-glucan is a linear polysaccharide of glucose monomers with \u03b2(1\u21924) and \u03b2(1\u21923) linkages and found in the endosperm of cereal grains, primarily barley and oats. \u03b2-glucan concentrations in North American oat cultivars range from 3.9% to 6.8% . \u03b2-glucaet al. [et al. [et al. [The physiological benefits due to \u03b2-glucan seem to stem from their effect on lipid metabolism and postprandial glucose metabolism. Many studies agree an inverse relationship exists between consumption levels of \u03b2-glucan and cholesterol levels. Several recent studies, in both hypercholesterolemic and healet al. found th [et al. found th [et al. reportedet al. [Most authors agree that \u03b2-glucan\u2019s viscosity in the GI tract is the most probable mechanism in which it decreases serum cholesterol levels as well as improves post prandial glucose metabolism. This gellation property may decrease bile acid absorption by increasing intestinal viscosity and increase bile acid excretion. 
This subsequently results in a higher hepatic cholesterol synthesis because of the higher need for bile acid synthesis . The samet al. observedThe production of short chain fatty acids from \u03b2-glucan may also be a probable mechanism behind its observed metabolic effects. Fermentation of oat \u03b2-glucan has been shown to yield larger amounts of propionate ,74. Propet al. [et al. [Not all research however, agrees that \u03b2-glucan can affect lipid and glucose absorption/metabolism. Keogh et al. observed [et al. not onlyet al. [et al. [et al. [et al. [The inconsistency between studies is thought to be due to the molecular weight (MW) and solubility of the \u03b2-glucan. The MW can be changed by several factors including food processing and the source of the \u03b2-glucan. Suortti et al. states tet al. and Naum [et al. both use [et al. however, [et al. supportset al. [et al. [et al. [Different sources of \u03b2-glucan may also differ in their molecular weight and viscosity. Oats were the \u03b2-glucan source for the Theuwissen and Meinsk and Naumet al. studies [et al. utilized [et al. found si [et al. it may bet al. [Oat and barley \u03b2-glucan also seem to differ in their solubility which would have a direct effect on intestinal viscosity. Gaidosova et al. found thet al. [et al. [et al. [et al. [in vitro. It should be noted however, that the \u03b2-glucan in this study was not in its natural form. Extracted \u03b2-glucan was treated with lichenase to yield the different molecular weights. Oat and barley varieties may also play a role on the MW of \u03b2-glucan. Yao et al. observed [et al. using a [et al. , also us [et al. . Kim et [et al. disagreePectin is a linear polymer of galacturonic acid connected with \u03b1 (1\u21924) bonds. Regions of this backbone are substituted with \u03b1 (1\u21922) rhamnopyranose units from which side chains of neutral sugars such as galactose, mannose, glucose and xylose occur. Pectin is a water soluble polysaccharide that bypasses enzymatic digestion of the small intestine but is easily degraded by the microflora of the colon. Citrus fruit contains anywhere from 0.5% to 3.5% pectin with a large concentration located in the peel. Commercially extracted pectins are also available and are typically used in food applications which require a gelling or a thickening agent. Inside the GI tract, pectin maintains this ability to form a gel or thicken a solution. This is thought to be the likely mechanism behind its many beneficial effects on health including dumping syndrome , improveet al. [Shigella, Salmonella, Klebsiella, Enterobacter, Proteus and Citrobacter. This is supported by Olano\u2013Martin et al. [Bifidobacteria and Lactobacillus in vitro. These bacteria are considered to be directly related to the health of the large intestine and their concentrations depict a healthy microflora population. Several recent clinical studies, Rabbani et al. and Tripet al. , demonstn et al. who obseThe quality of fibrin is thought to be an important risk factor for atherosclerosis, stroke and coronary heart disease. Pectin has been shown to increase fibrin permeability and decrease fibrin tensile strength in hyperlipidaemic men . Althouget al. [Pectin may also have a potential role in the complicated area of cancer prevention. Nangia\u2013Makker et al. 
found ththe food which is produced by grinding clean oat groats or rolled oats and separating the resulting oat flour by sieving bolting, and/or other suitable means into fractions such that the oat bran fraction is not more than 50% of the original starting material and has a total betaglucan content of at least 5.5% (dry-weight basis) and a total dietary fiber content of at least 16.0% (dry-weight basis), and such that at least one-third of the total dietary fiber is soluble fiber.\u201d [Bran is the outer most layer of a cereal grain and consists of the nucellar epidermis, seed coat, pericarp and aleurone. The aleurone consists of heavy walled, cube shaped cells which are composed primarily of cellulose. It is low in starch and high in minerals, protein, and fat. However, due to its thick cellulosic walls, these nutrients are virtually unavailable for digestion in monogastric species. The AACC defines oat bran as \u201c fiber.\u201d .Bran from a wide array of cereal grains have been shown to have an effect on postprandial glucose levels, serum cholesterol, colon cancer, and body mass. Although the efficacy of bran may change due to its source, the purpose of this section will just evaluate bran\u2019s general effect on the parameters listed above. et al. [et al. [et al. [In a recent study of healthy adults, 31 g of rye bran decreased peak postprandial glucose levels by 35% when compared to the control . This efet al. found th [et al. observedet al. [et al. [et al. [In addition to a possible effect on carbohydrate absorption and metabolism, bran also seems to have the same effect on lipids. In a long term clinical study, Jensen et al. reported [et al. who foun [et al. found th [et al. .Cellulose is a linear chain of \u03b2(1\u21924) linked glucose monomers and is the structural component of cell walls in green plants and vegetables. It is water insoluble and inert to digestive enzymes in the small intestine. However, it can go through microbial fermentation to a certain degree in the large intestine in turn producing SCFA. Natural cellulose can be divided into two groups: Crystalline and amorphous. The crystalline component, which is made up of intra and intermolecular non covalent hydrogen bonds, make cellulose insoluble in water. However, many modified celluloses such as powdered cellulose, microcrystalline cellulose and hydroxypropylmethyl cellulose have been developed and are used as food ingredients. The difference between natural and modified celluloses is the extent of crystallization and hydrogen bonding. When these hydrogen bonds are disrupted and the crystallinity is lost, the cellulose derivative becomes water soluble .Little research has been conducted evaluating the effects of cellulose in humans. Therefore, studies in other models such as the rat will be discussed. The translation to human relevance is poorly understood and debatable. Cellulose pills have been made available for human consumption with the theory that cellulose may decrease a person\u2019s caloric intake. Although no human studies could be found to support this, several animal studies using cats , dogs 1 and ratset al. [Many studies have evaluated the effect of cellulose on blood glucose and insulin levels in many different models. However, the data is extremely contradictory and may depend on the subject, type of cellulose and other unknown factors. Using the rat , dog 10 and cat et al. reportedet al. [Modified cellulose has also been reported to effect lipid metabolism. Maki et al. 
,112 bothAccording to this, modified celluloses may be more beneficial than natural cellulose. These modified celluloses, as described above, act like soluble fiber thus adding to the viscosity of the GI tract. Therefore, it is assumed that increased intestinal viscosity delays nutrient absorption and increases bile acid excretion. Resistant starches (RS) are defined as any starch not digested in the small intestine . RS behaet al. [RS has been classified into four basic \u201ctypes\u201d. Type 1 (RS1) is made up of starch granules surrounded by an indigestible plant matrix. Type 2 (RS2) occurs in its natural form such as in an uncooked potato and high amylose maize. Type 3 (RS3) are crystallized starches made by unique cooking and cooling processes. Type 4 (RS4) is a starch chemically modified by esterification, crosslinking, or transglycosylation and is not found in nature. Few studies have compared types, but one recent study by Haub et al. reportedet al. [et al. [et al. [A majority of human studies involving RS have shown a decrease in postprandial blood glucose and insulin levels. However, it is difficult to completely understand these effects due to differences in study design and the type of RS used. Behall et al. found th [et al. reported [et al. , howeveret al. [et al. [et al. [Several studies report that longer term consumption of a RS may decrease fasting cholesterol and triglyceride levels. In a five week study, Behall et al. found th [et al. reported [et al. suggest [et al. , report et al. [et al. [et al. [Research has also been conducted which evaluates the effect of RS on fat oxidation and storage. However, data between studies are contradictory with no clear conclusions. Tagliabue et al. reported [et al. may sugg [et al. reportedIn a simplified definition, dietary fiber is a carbohydrate that resists digestion and absorption and may or may not undergo microbial fermentation in the large intestine. This definition is essentially the basis to its correlation between consumption levels and possible health benefits. Dietary fiber consists of many different constituents, however; some are of particular interest and include arabinoxylan, inulin, \u03b2-glucan, pectin, bran and resistant starches. These individual components of dietary fiber have been shown to significantly play an important role in improving human health. Current research is paying particular attention to these elements; although further research is needed to better understand particular health claims and the mechanisms involved. A large amount of research has reported an inverse relationship between fiber consumption and the risk for coronary heart disease and several types of cancer. For that reason, the FDA has adopted and published the claim that increased consumption of dietary fiber can reduce the prevalence of coronary heart diseases and cancer. The mechanisms behind these findings are still unclear. However, it is thought to be attributed to several factors including increasing bile acid excretion, decreased caloric intake, increased short chain fatty acid production, carcinogen binding effects, increased antioxidants, and increased vitamins and minerals.Although not as yet adopted by the FDA, dietary fiber is suggested to play a role in other conditions such as obesity and diabetes. Although some data are contradictory, a majority of studies regarding dietary fiber report a decrease of these two conditions with increased consumption of fiber. 
The digestive and viscosity characteristics of dietary fiber are the likely modes of action which affect diabetes and obesity risk. These mechanisms appear to decrease nutrient absorption, therefore, decreasing metabolizable energy. Dietary fiber may also be able to decrease the gross energy of a food due to its lower energy density. Further studies are needed in certain areas of dietary fiber research. Those of particular interest are the components of fiber such as \u03b2-glucan, arabinoxylan, resistant starches, etc. These sub-fractions may give a better understanding of the health benefits of dietary fiber as well as the mechanisms behind them."} {"text": "Venous abnormalities contribute to the pathophysiology of several neurological conditions. This paper reviews the literature regarding venous abnormalities in multiple sclerosis (MS), leukoaraiosis, and normal-pressure hydrocephalus (NPH). The review is supplemented with hydrodynamic analysis to assess the effects on cerebrospinal fluid (CSF) dynamics and cerebral blood flow (CBF) of venous hypertension in general, and chronic cerebrospinal venous insufficiency (CCSVI) in particular.CCSVI-like venous anomalies seem unlikely to account for reduced CBF in patients with MS, thus other mechanisms must be at work, which increase the hydraulic resistance of the cerebral vascular bed in MS. Similarly, hydrodynamic changes appear to be responsible for reduced CBF in leukoaraiosis. The hydrodynamic properties of the periventricular veins make these vessels particularly vulnerable to ischemia and plaque formation.Venous hypertension in the dural sinuses can alter intracranial compliance. Consequently, venous hypertension may change the CSF dynamics, affecting the intracranial windkessel mechanism. MS and NPH appear to share some similar characteristics, with both conditions exhibiting increased CSF pulsatility in the aqueduct of Sylvius.CCSVI appears to be a real phenomenon associated with MS, which causes venous hypertension in the dural sinuses. However, the role of CCSVI in the pathophysiology of MS remains unclear. The cerebral venous system is often viewed simply as a series of collecting vessels channeling blood back to the heart, yet it also plays an important role in the intracranial hemodynamic/cerebrospinal fluid (CSF) regulatory system (hereafter simply referred to as the hydrodynamic regulatory system), a role that is often overlooked and that appears to influence both perfusion of the brain parenchyma,2 and thDespite having very different pathologies, MS, leukoaraiosis, and NPH all share some common characteristics. In all three conditions, cerebral blood flow (CBF) is reduced-21. Bothet al.[et al.[5, whose course begins in the WM[Since the earliest years of research into MS, there has been suspicion that the venous system might be involved in its etiology, with Dawson, Putnamet al. finding [et al. showed tn the WM, and then the WM-57, leucn the WM, and subn the WM lesions n the WM,59.et al.[P\u2009<\u20090.0001). This finding appears to corroborate the work of Ge et al.[et al., who attributed the reduction in VVV to hypometabolic status in the brain parenchyma of patients with MS, Zivadinov et al. performed a pre-contrast and post-contrast SWI venography experiment, which indicated the reduction in VVV to be due to morphological changes in the cerebral veins of patients with MS. 
Indeed, such was the clear-cut nature of these venous changes that Beggs et al.[Recently, there has been renewed interest in studying vascular changes associated with MS-62. Thiset al. reportede et al.. Howevers et al. were ablet al.[et al.[et al.[et al.[et al.[et al.[et al.[These findings reinforce a large body of evidence connecting MS with alterations in the cerebral vascular bed. Using tomography, a number of early investigators-71, founet al., identif[et al. studied [et al. found re[et al. reported[et al.. Collect[et al.. Several[et al. found hy[et al. found a [et al., who fouet al.[et al.[et al. did not specifically consider CCSVI, their findings are consistent with those of Zamboni et al., and suggest that venous hypertension may be a feature of MS. Abnormal CSF hydrodynamics have also been implicated in the formation of cortical lesions in MS. Sub-pial lesions, which appear not to be perivenous, cover extensive areas of the cortex, and extend from the surface into the brain[et al.[Venous hypertension in the dural sinuses is known to inhibit absorption of CSF through the arachnoid villi (AV),76. Zambet al. reported[et al. also fouhe brain. They aphe brain,78. Kutz[et al. found suet al.[Leukoaraiosis is a radiological finding, characterized by WM hyperintensities in the periventricular region on T2-weighted MRI scans, which iet al. found a Mirroring the cerebral hemodynamics of MS, several researchers have reported leukoaraiosis to be associated with reduced CBF,83,90,91et al.[Further evidence linking leukoaraiosis with altered venous hemodynamics comes from a series of studies by Chung and co-workers,16,101, et al..In a series of studies, Bateman and co-workers investigated altered venous hemodynamics in a variety of neurological conditions,102,103.et al.[NPH occurs when there is an abnormal accumulation of CSF in the ventricles, causing them to become enlarged, but witet al. independet al.,116. Appet al.. In patiet al.; howeveret al.,102. NPHet al.. It is tet al.[et al.[et al.[CBF has been found to be lower in patients with NPH than in normal controls-123. Thiet al. found th[et al., who rep[et al. attribut[et al., who fou[et al..et al.[A number of researchers have reported marked alterations in CSF dynamics in NPH, with CSF pulsatility in the AoS found to be markedly greater in patients with NPH compared with controls,125-129.et al., who fouet al. or actuaet al.. Althouget al.,130, whiet al.,132. If et al., or alteet al.. Batemanet al. suggesteet al.,134, witet al.. Althouget al..Although there are clear differences in the pathologies of MS, leukoaraiosis and NPH, there are also striking similarities. All three are characterized by: 1) WM changes in the periventricular region; and 2) reduced CBF. The lesions associated with both MS and leukoaraiosis tend to be perivenous in nature, and the changes in CSF dynamics associated with NPH and MS also reveal similarities. This raises intriguing questions as to why these similarities exist. Are there some underlying physical mechanisms that are common to all these conditions?The proximity of immune-cell aggregations to the vasculature is a hallmark of MS. Whereaset al.[Another universal principle found in nature is that of mass transfer. In simple terms, in order for matter to move from one place to another, it must be transported by some mechanism. 
In biology, the transport of cells and chemicals generally occurs either by: diffusion, by active transport (in the case of ion transport across the cell membrane), or through transport in a bulk fluid such as blood. If diffusion or active transport are the mechanisms at work, then there is a tendency towards higher concentrations of the transported substance near its source and lower concentrations further away. If this simple logic is applied to the formation of perivenous MS lesions, it would suggest that the plaque formation emanates from the blood vessels, rather than the other way round. Indeed, the current thinking appears to support this, suggesting that in MS, plaque formation is precipitated by breaching of the blood\u2013brain barrier (BBB),138,139.et al. that in The mass transport associated with bulk fluids also appears to offer insights into the spatial arrangement of ischemic WM changes, such as those found in leukoaraiosis. Considering oxygen transport in the blood through the cerebral vascular bed, the law of mass transport dictates that as oxygen is supplied to the brain parenchyma, so the oxygen levels in the blood will decrease. Consequently, the oxygen tension in the cerebral arteries will be higher than that in the cerebral veins. Under normal circumstances, this should not cause any problems, but when CBF is greatly impaired, as in both leukoaraiosis,83,90,91et al.[et al.[et al.[et al.[et al.[et al.[There is increasing evidence that hypoxia-like metabolic injury may be a pathogenic component in the formation of MS lesions,86. Wakeet al. found moet al. reported[et al. identifi[et al., who mea[et al. found th[et al. conclude[et al., using a[et al.,147. The[et al., followe[et al.,148,149.et al.[et al.[Several researchers have found similarities between leukoaraiosis and MS,61. Leuket al., investi[et al. to be as[et al., and it [et al.,13,22. T[et al.. Any inc[et al., an actiSo why should some regions of the brain be more vulnerable than others to damage? Perhaps the architecture of the cerebral-venous system provides some clues? While the distal venous regions may be prone to hypoxic stress, the spatial arrangement of the veins may also contribute to their vulnerability. Evidence in support this opinion comes from Schlesinger, who foret al.[The finding that the junction between the medullary and sub-ependymal veins has a high resistance to fluid flow is no surprise. The sub-ependymal veins are collecting vessels, which receive venous blood from a large number of the smaller medullary veins that enter the sub-ependymal veins at approximately 90 degrees. From a fluid-mechanics point of view, this is not a very streamlined configuration, and will result in relatively large pressure drop across this junction. Any stenosis at this junction would therefore greatly increase its resistance, possibly leading to distension of the upstream medullary veins, as Putnam and Adler reported. Consequet al. pointed et al., similaret al.,22, withet al.. Stenosiet al., who meaet al.[Unlike the deep venous system, the superficial system has thin-walled cortical bridging veins that traverse the SAS. Blood flow through these compliant vessels is controlled by sphincters, which regulate discharge into the SSS,159. Thiet al. found GMet al..MS, leukoaraiosis, and NPH all appear, to a greater or lesser extent, to be associated with marked changes in the dynamics of the intracranial CSF system. 
This suggests that these diseases might be associated with alterations in the intracranial hydrodynamic regulatory system, which controls the volume and pulsatility of the blood in the cerebral vascular bed,166,167.The various pulses associated with the intracranial hydrodynamic system are illustrated in Figure\u00a0et al.[Close inspection of Figure\u00a0et al. reportedet al., or alteet al..et al.[et al.[et al.[Although the precise behavior of the intracranial hydrodynamic system under conditions of venous hypertension is unknown, there is evidence that occlusion of the venous-drainage pathways causes blood to accumulate within the cranium. In an experiment involving healthy subjects, Kitano et al. showed t[et al. also per[et al. conclude[et al., who fouet al. published a paper[In 2009, Zamboni a paper linking a paper,64 due t a paper. CCSVI h a paper,174, wit a paper,175-179.The results obtained by researchers for CCSVI have been very mixed. For example, some researchers found CCSVI-like venous anomalies to be strongly associated with MS,180-186,et al.[P\u2009<\u20090.001)[et al.[P\u2009<\u20090.0001) with MS.One possible explanation for the discrepancies between studies is the echo color Doppler sonography (ECDS) frequently used to diagnose CCSVI. The floppiness of the vessels involved and the variability of the venous vasculature can lead to erroneous results if ECDS is not undertaken correctly-195. In et al. develope<\u20090.001). This co[et al., who fouPrevious work,64,196 sFigure\u00a0Q is the fluid flow rate (ml/min), R is the hydraulic resistance (mmHg.min/ml), and \u0394P represents the pressure drop between the two ends of the vessel. By applying equation\u00a01 to the intracranial system in Figure\u00a0where et al.[One common feature of CCSVI is stenosis of one or both of the IJVs,197, whiet al..3/beat (assuming 70 beats/min)[20)[3/beat (assuming 70 beats/min), which is close to the mean value of 3.4\u00a0mm3/beat reported by Magnano et al.[3/beat reported by Zamboni et al.[From Figure\u00a0ats/min), is veryats/min). In normats/min). These oats/min), allowinin)[20). Consequo et al. for reduet al.[It is possible to gain an insight into the nature of the hemodynamic changes associated with MS, by undertaking simple hydrodynamic analysis of composite data published by Varga et al.. These dThe data in Table\u00a0From the data it can be seen that in patients with MS, there is a general reduction in the volume of the vascular bed, which, if approximated to a series of parallel round tubes, equates to a mean reduction in cross-sectional area of the vessels of about 8.4% in patients with MS. According to Poiseuille\u2019s Law:et al.. According to equation\u00a01, hypertension in the dural sinuses would tend to reduce the pressure gradient pushing the blood through the cerebral veins, which in turn would tend to inhibit blood flow. However, when we consider that the CPP is normally in the region of 70 to 90\u00a0mmHg, it is unlikely that venous hypertension of less than 5\u00a0mmHg, such as that associated with CCSVI, could account for the large reduction in WM CBF reported in patients with MS[where R is the hydraulic resistance of the vessel (mmHg.min/ml) and r is the radius of the vessel (mm), it can be calculated that the 8.4% reduction in average cross-sectional area equates to an approximately 19.3% increase in hydraulic resistance. 
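The arithmetic in this passage can be checked directly. Equation 1 referred to above is the standard hydraulic relation Q = \u0394P/R, and Poiseuille\u2019s law makes the hydraulic resistance of a round tube proportional to 1/r^4, so a change in mean vessel cross-sectional area translates straightforwardly into a change in resistance and flow. The short Python sketch below is illustrative only (the variable names are ours, and the only inputs are the 8.4% area reduction and the 15.6% CBF reduction quoted from the composite Varga et al. data); it reproduces the approximately 19.3% resistance increase stated above and the corresponding predicted fall in flow used in the next sentence.

```python
# Minimal check of the Poiseuille arithmetic quoted in the text.
# Inputs are the figures given in the passage; the physics is the standard
# hydraulic relation Q = dP / R with R proportional to 1/r^4 for a round tube.

area_reduction = 0.084          # ~8.4% mean reduction in vessel cross-sectional area (MS vs. control)
cbf_reduction_reported = 0.156  # ~15.6% reduction in white-matter CBF reported for patients with MS

area_ratio = 1.0 - area_reduction      # A_MS / A_control
radius_ratio = area_ratio ** 0.5       # A is proportional to r^2
resistance_ratio = radius_ratio ** -4  # Poiseuille: R is proportional to 1/r^4
flow_ratio = 1.0 / resistance_ratio    # Q = dP / R at a fixed perfusion pressure

print(f"Increase in hydraulic resistance: {(resistance_ratio - 1) * 100:.1f}%")  # ~19.2%
print(f"Predicted reduction in flow:      {(1 - flow_ratio) * 100:.1f}%")        # ~16.1%
print(f"Reported reduction in CBF:        {cbf_reduction_reported * 100:.1f}%")  # 15.6%
```

On these figures the volumetric change alone predicts a fall in flow slightly larger than the reported CBF deficit, which is the point taken up in the following sentence.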
Given that the blood-flow rate is inversely proportional to the hydraulic resistance, this means that the reduction in CBV seen in patients with MS is more than enough to account for the 15.6% reduction in CBF reported by Varga et al. in patients with MS-21. Hencet al.[et al.[et al.[Although the above analysis is somewhat simplistic, it does illustrate that cerebral vascular volumetric changes alone appear capable of accounting for the reduction in CBF in the periventricular NAWM in patients with MS. In addition, this finding mirrors those of researchers investigating: 1) reduced CBF,83,90,91et al. of a 10%[et al., who repet al.[et al.[et al.[Further evidence suggesting that occlusion of the cerebral-venous drainage pathways might not be responsible for reduced CBF in patients with MS comes from Moyer et al., who com[et al., who per[et al., who inv[et al.-20, sugg[et al. reportedAlthough much research work has been undertaken into the contribution of venous abnormalities to various neurological conditions, there has generally been a lack of any hydrodynamic analysis to interpret the data collected. Without such analysis, it is possible to misinterpret results and come to potentially erroneous conclusions. In the Venous hypertension in the dural sinuses seems to be associated with marked changes in intracranial compliance. There is sound theoretical reason to believe that this will alter the dynamics of the intracranial CSF system, which in turn may affect the finely tuned intracranial windkessel mechanism. With respect to this, MS and NPH appear to share some similar characteristics. In particular, both conditions seem to be characterized by increased CSF pulsatility in the AoS.Despite conflicting studies, there is increasing evidence that CCSVI is a real physiological phenomenon, and that it is in some way associated with MS. The evidence from CSF-related studies in patients with MS, and the hydrodynamic analysis presented here, suggests that CCSVI causes venous hypertension in the dural sinuses. 
However, the role that CCSVI might play in the pathophysiology of MS remains unclear, and more work is urgently needed to understand the clinical relevance of this condition.ADC: Apparent diffusion coefficient; AoS: Aqueduct of Sylvius; AV: Arachnoid villi; AVD: Arteriovenous delay; BBB: Blood\u2013brain barrier; CBF: Cerebral blood flow; CBV: Cerebral blood volume; CCSVI: Chronic cerebrospinal venous insufficiency; CNS: Central nervous system; CPP: Cerebral perfusion pressure; CSF: Cerebrospinal fluid; DVA: Developmental venous anomaly; ECDS: Echo color doppler sonography; GM: Grey matter; HIF: Hypoxia-inducible factor; ICP: Intracranial pressure; IJV: Internal jugular veins; JVR: Jugular venous reflux; MRI: Magnetic resonance imaging; MS: Multiple sclerosis; MTT: Mean transit time; NAWM: Normal-appearing white matter; NPH: Normal-pressure hydrocephalus; PVC: Periventricular venous collagenosis; RR: Relapsing\u2013remitting; SAS: Sub-arachnoid space; SSS: Superior sagittal sinus; SWI: Susceptibility-weighted imaging; VVV: Venous vasculature visibility; WM: White matter.The author declares no conflicts of interest.The study was conceived and undertaken by CBB, who also wrote the manuscript.The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1741-7015/11/142/prepub"} {"text": "Culture-independent microbiological technologies as well advances in plant genetics and biochemistry provide methodological preconditions for exploring the interactions between plants and their microbiome in the phyllosphere. Improving and combining cultivation and culture-independent techniques can contribute to a better understanding of the phyllosphere ecology. This is essential, for example, to avoid human\u2013pathogenic bacteria in plant food.Most microorganisms of the phyllosphere are nonculturable in commonly used media and culture conditions, as are those in other natural environments. This review queries the reasons for their \u2018noncultivability\u2019 and assesses developments in phyllospere microbiology that have been achieved cultivation independently over the last 4\u2003years. Analyses of total microbial communities have revealed a comprehensive microbial diversity. 16S rRNA gene amplicon sequencing and metagenomic sequencing were applied to investigate plant species, location and season as variables affecting the composition of these communities. In continuation to culture-based enzymatic and metabolic studies with individual isolates, metaproteogenomic approaches reveal a great potential to study the physiology of microbial communities This is a reversible dormancy phase in bacteria when they are unable to undergo a sustained cellular division on or in standard laboratory media Oliver, . This stEscherichia coli O157:H7 on lettuce leaves still produced verotoxins and thus retained their virulent potential.The phenomenon of the VBNC state does indeed occur in the phyllosphere and developing phyllosphere microbial populations has been addressed in studies by Hunter et\u2003al. and Bali Cry1Ac protein in transgenic cotton. DGGE profile data and sequences of 16 and 18S rRNA gene fragments suggested that fungi may be more susceptible to Cry1Ac protein than bacteria used in Gram-negative Proteobacteria as quorum sensing (QS) molecules to regulate density-dependent mechanisms in bacterial communities in the tobacco phyllosphere and determined changes in the composition of this community by DGGE and phospholipid fatty acid analyses. 
It was suggested that pseudomonads and other AHL-producing Gammaproteobacteria utilize QS-dependent mechanisms to ensure their survival over other epiphytic residents in the nutrient-poor phyllosphere. Hence, AHL QS signals occurring naturally in the phyllosphere could play a role in the interactions between plant-associated bacteria. It is possible that AHL QS signals could be used to suppress pathogens in the phyllosphere of crops.Knowledge of the mechanisms, genes and compounds involved in interactions between microorganisms of the plant microbiome is essential for practical use in biological plant protection. Biological control agents are inherently cultivable microorganisms, and their cultivation is a prerequisite to apply biocontrol strains. Therefore, and for reasons already described in the section \u2018Functional structures and metabolic diversity in microbial phyllosphere communities\u2019 above, studies on microbial agents are usually based on cultivation in suitable media. However, a few other studies, which aimed at a more comprehensive assessment of the microbial phyllosphere communities influenced by microorganism\u2013microorganism interactions, were also based on culture-independent approaches Table\u2003b. Lv et\u2003et\u2003al. identifiE.\u2003coli O157:H7, supported the hypothesis that competition for nutrients is the primary mechanism of interactions between phyllospheric microorganisms (Carter et\u2003al., E.\u2003coli O157:H7 competed for carbon mainly with Actinobacteria, Proteobacteria, Basidiomycota and uncultured fungi and for nitrogen with Proteobacteria, Actinobacteria and uncultured bacteria.Detailed analyses of functional genes in a microbial community of a biofilm established in spinach leaf lysate, which was impaired by co-inoculation with et\u2003al. (Xanthomonas campestris pv. vitians, the causative agent of the bacterial leaf spot, and the presence or absence of other phyllosphere bacteria. It is possible that strains of the genus Alkanindiges act as facilitators, and those of Bacillus, Erwinia and Pantoea operate as antagonists of the pathogen.When studying the composition of bacterial communities colonizing the leaves of preharvest field-grown lettuce by pyrosequencing of 16S rRNA gene amplicons, Rastogi et\u2003al. found coThe interaction between microorganisms in the phyllosphere is surely one of the most important issues raising questions that have to be answered before practical applications can be established. However, the research is far from complete. At present, we do not know whether the abundance of plant pathogens is a function of interactions between phyllosphere microorganisms or of the plant genotype. This needs to be clarified through further in-depth studies.et\u2003al. (Enterobacteriaceae including many culturable coliforms in the summer samples of field-grown lettuce, reflecting that this is a natural part of the lettuce microbiota instead of being accounted for by faecal contamination.In their large culture-independent survey of leaf surface microbiology, Rastogi et\u2003al. found anE.\u2003coli and nontyphoidal Salmonella are enteric human pathogens and do not naturally occur in plants. 
However, they have been associated with multiple outbreaks of foodborne illness caused by the consumption of fresh-cut leafy vegetables (Teplitski et\u2003al., Enterohemorrhagic E.\u2003coli and nontyphoidal Salmonella are intrinsically culturable, but they can enter the VBNC state (Dinu & Bach, et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al., Enterohemorrhagic et\u2003al., et\u2003al., et\u2003al., et\u2003al., et\u2003al. (E. coli O157:H7 in utilizing plant nutrients, which is significant to its persistence on plants.Table\u2003, et\u2003al. revealedet\u2003al., On the other hand, where quantitative evaluation of the persistence of a potential human pathogen following inoculation onto leafy vegetables is the aim of the study, qPCR targeting species-specific genes would be the recommended procedure (Arthurson et\u2003al. (et\u2003al. (E.\u2003coli on diverse fresh-cut leafy greens under preharvest through to postharvest conditions, respectively. Lopez-Velasco et\u2003al. (E.\u2003coli O157:H7 transformed for GFP expression and kanamycin resistance. The total population of epiphytic bacteria was enumerated on culture plates. Changes in the bacterial community structure during storage were detected by sequencing DGGE bands of 16S rRNA gene amplicons. qPCR was applied for assessing the virulence and stress response genes of E.\u2003coli O157:H7 (Table\u2003E.\u2003coli O157:H7 during storage. The fate of generic E.\u2003coli and E.\u2003coli O157:H7 during the production, harvest, processing and storage of leafy vegetables was reported in the study by Tom\u00e1s-Callejas et\u2003al. (E.\u2003coli isolates and avirulent strains of E.\u2003coli O157:H7 were inoculated onto leafy greens in a greenhouse. Their survival was checked by genotyping the generic E.\u2003coli strains by REP-PCR and qPCR for the detection of E.\u2003coli O157:H7 (Table\u2003E.\u2003coli as well as E.\u2003coli O157:H7 were observed, but individual cells of both populations survived throughout the production and postharvest operations.Cultivation and culture-independent methods were combined in studies by Lopez-Velasco et\u2003al. and Tom\u00e1 (et\u2003al. to inveso et\u2003al. inoculats et\u2003al. . CocktaiFundamental questions regarding the ecology of enteric pathogens, their sources, persistence, distribution and routes of contamination of leafy greens have already been answered using culture and/or culture-independent methods. However, eliminating the risk of plant food contamination and subsequent human disease outbreaks requires clarifying the mechanisms by which enteric pathogens colonize plants and understanding how they can be inhibited or inactivated (Critzer & Doyle, et\u2003al. (in situ. Leaf structure and chemistry have been shown to play important roles in differentiating bacterial populations in the phyllosphere and could be prospective breeding targets for managing the phyllosphere microbiota to reduce the growth of phyto- and human pathogens on vegetables and crops. However, the level of research activity into plant\u2013microorganism interactions is merely moderate at present. More attention should be paid to the effects of phyllosphere microorganisms on plants in addition to those of phytopathogens and to the understanding of the crosstalk between microorganisms. Progress in phyllosphere microbiology is not necessarily connected with the application of culture-independent methods as shown in food microbiology to observe enteric human pathogens on fruit and vegetables. 
However, unlike plate counting, culture-independent techniques have the advantage of also assessing cells at the VBNC stage, which can retain their virulent potential. Consequently, improving and combining cultivation and culture-independent techniques present a challenge to our better understanding mechanisms of the microbial ecology in the phyllosphere.In general, the phyllosphere microbiology has benefited from culture-independent techniques, such as qPCR, microarray assays, 16S rRNA gene amplicon sequencing and whole metagenome shotgun analyses, over the last 4\u2003years. Using metaproteogenomic approaches, Knief et\u2003al. demonstr"} {"text": "According to the literature, the prevalence of supernumerary teeth is 1% to 4% of permanent dentitions; and among these, the presence of fifth mandibular incisor \u2014 a supernumerary eumorphic tooth \u2014 has rarely been described in literature, and its association with localized aggressive periodontitis is an even more rare entity. This paper reports a very rare case of unusual association of supernumerary eumorphic fifth mandibular incisor with aggressive periodontitis in a Muslim individual, so that these findings generate curiosity and inspire others to carry out further studies and investigations. The condition of supernumerary teeth, or hyperdontia [Online Mendelian Inheritance in Man-187100], is defined as an excess number of teeth compared to the normal dental formula, or existing of teeth additional to the normal series in the dental arches. Their classification is dependent on their position and form. Hyperdontia may occur as a single tooth or multiple teeth, unilateral or bilateral, or in one or both jaws. This classification morphologically can be subcategorized into eumorphic and dysmorphic (rudimentary) elements. Supernumerary eumorphic teeth have the same morphology as that of the normal teeth, whereas dysmorphic ones are small and conical, tuberculate or odontome in shape.Although there is no consensus on the etiology of supernumerary teeth, one etiologic theory suggests that the supernumerary tooth is created as a result of a dichotomy of the tooth bud. Another 413et al.[143et al.;[et al.[In a survey of 2,000 school-going children, Brook found that supernumerary teeth were present in 2.1% of permanent dentitions. The prevet al. The prevet al. Prevalenet al.13 and itet al.1415 Veryet al.14317 The pret al.143 0.32%, b43et al.; and 0.76.;[et al.The possible association between supernumerary teeth and aggressive periodontitis has been reported in literature.1821 But 18A 25-year-old Indian Muslim man reported to the Department of Periodontics, Teerthanker Mahaveer Dental College and Research Centre, Moradabad, with a chief complaint of bleeding gums and foul smell since 2 to 3 years. He was found to have localized aggressive periodontitis. On routine clinical and radiographic examination, supernumerary eumorphic fifth mandibular incisor was found. We could not differentiate the eumorphic fifth mandibular incisor from the remaining mandibular incisors, clinically or radiographically, neither was there any fusion between crowns and roots in lower incisors. The five mandibular incisors were separate and same in morphology Figures \u20133. ApartLocalized aggressive periodontitis was diagnosed by \u22655 mm pocket probing depth around all the four first molars with moderate vertical bone loss radiographically. 
In lower incisors, there was \u22655 mm attachment loss but due to recession, and pocket depth was \u22653 mm.In lower jaw, third molar of right side was impacted, and impaction was mesioangular, whereas the third molar of left side was well erupted and in normal occlusion. Hematological examination consisting of total leukocyte count (TLC), differential leukocyte count (DLC), hemoglobin (Hb), erythrocyte sedimentation rate (ESR), clotting time (CT), bleeding time (BT) revealed no significant findings.Prevalence of supernumerary teeth in mandibular incisor region is 2% of total supernumerary prevalence,13 and it12143In the present case, supernumerary eumorphic mandibular incisor was normal, well individualized with no fusion in roots and crowns. Differentiation of this supplementary tooth from other mandibular incisors was difficult. This fifth incisor mimicked other mandibular incisors in morphology, radiographically and clinically. Hence this type of supernumerary tooth is overlooked most of the time, unless diagnosed by chance by a dentist during clinical and radiographic examination.et al. reported the presence of a supernumerary eumorphic fifth mandibular incisor in a Lebanese consanguineous family where 4 individuals displayed 5 mandibular incisors with the same shape and size, and they hypothesized the possibility of an autosomal recessive inheritance for this nonsyndromic trait.[et al. (2008) reported the case of homozygosity-mapping to identify a homozygous region with different alleles at chromosome 16q 12.2, located at the marker D16S415, which likely harbors the gene underlying this anomaly.[Supernumerary teeth in mandibular incisor region may be seen in some hereditary syndromes .14 They wic trait. Sami et anomaly. However,et al.[et al.[A possible association between supernumerary teeth and localized aggressive periodontitis has been described in a small number of reported cases.182122 Lo1821et al.; and 0.7l.[et al. So, bothet al. described two identical Black twins with localized juvenile periodontitis, multiple supernumerary teeth and no dental caries. The authors hypothesized that all these three entities were due to genetic influence.[et al. (2004), the association between aggressive periodontitis and supernumerary teeth was suggested to be a random rather than a biologic one.[The first study recognizing the possible connection between supernumerary teeth and periodontitis was a case report by Eley in 1974. In 1981,nfluence. As Odellnfluence. However,ogic one.To conclude, one may think in terms of the correlation between aggressive periodontitis and supernumerary eumorphic mandibular incisor as in this study. This does not mean that both entities have biological connection. However, association between these two entities is definitely a rare one. To prove such biological connection, further studies and genetic investigations are required to be carried out."} {"text": "This special issue on \u201cNew Concepts in Brain Networks\u201d contains articles that review or propose new approaches for investigating brain connectivity. This topic is highly relevant and has been at the forefront of neuroscientific research in recent years. While univariate techniques dominated fMRI data analysis at the onset of fMRI brain mapping, it is now generally accepted that the human brain is a highly multivariate, and dynamic complex system, and more powerful techniques are needed to describe its function adequately. 
Clearly, such techniques should reflect the complexity and dynamics of brain function.This special topic contains seven articles ranging from reviews of the state of the art to novel clinical applications, and discussions of theoretical background.The paper by V\u00e9rtes et al. belongs Hagmann et al. provide Kn\u00f6sche and Tittgemeyer review tThe two following papers combine novel algorithmic approaches with clinical applications. Allen et al. examine A new algorithm for \u201cresting state\u201d fMRI (R-fMRI) is proposed in the paper by Lohmann et al. . The keyA highly sophisticated algorithmic approach is presented by Smith et al. . While tFinally, Kannurpatti et al. present Overall, this special issue provides an overview of current research on brain networks. This topic is still in its infancy and many new ideas and novel methodologies will be needed in the future."} {"text": "Most microRNAs have a stronger inhibitory effect in estrogen receptor-negative than in estrogen receptor-positive breast cancers. Copy number variants (CNVs) account for a large proportion of genetic variation in the genome. The initial discoveries of long (> 100 kb) CNVs in normal healthy individuals were made on BAC arrays and low resolution oligonucleotide arrays. Subsequent studies that used higher resolution microarrays and SNP genotyping arrays detected the presence of large numbers of CNVs that are < 100 kb, with median lengths of approximately 10 kb. More recently, whole genome sequencing of individuals has revealed an abundance of shorter CNVs with lengths < 1 kb.We used custom high density oligonucleotide arrays in whole-genome scans at approximately 200-bp resolution, and followed up with a localized CNV typing array at resolutions as close as 10 bp, to confirm regions from the initial genome scans, and to detect the occurrence of sample-level events at shorter CNV regions identified in recent whole-genome sequencing studies. We surveyed 90 Yoruba Nigerians from the HapMap Project, and uncovered approximately 2,700 potentially novel CNVs not previously reported in the literature having a median length of approximately 3 kb. We generated sample-level event calls in the 90 Yoruba at nearly 9,000 regions, including approximately 2,500 regions having a median length of just approximately 200 bp that represent the union of CNVs independently discovered through whole-genome sequencing of two individuals of Western European descent. Event frequencies were noticeably higher at shorter regions < 1 kb compared to longer CNVs (> 1 kb).As new shorter CNVs are discovered through whole-genome sequencing, high resolution microarrays offer a cost-effective means to detect the occurrence of events at these regions in large numbers of individuals in order to gain biological insights beyond the initial discovery. Genetic differences between individuals occur at many levels, starting with single nucleotide polymorphisms (SNPs) , short iet al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [NspI and StyI restriction fragments) at approximately 2-kb resolution, and reported > 1,300 CNVs having a median length of approximately 7.4 kb.Progressively higher resolution microarrays, starting with earlier low resolution bacterial artificial chromosome (BAC) arrays followed by commercially available array comparative genome hybridization (CGH) and SNP genotyping arrays, have steadily driven the discovery of new CNVs and have refined the boundaries of earlier reported CNVs. 
Specifically, the earliest CNVs described by Sebat et al. and Iafr [et al. , using B [et al. used bot [et al. study, w [et al. study, w [et al. study, w [et al. study ex [et al. study an [et al. study exHere in this study, we set out to demonstrate the benefits, as well as limitations, of Affymetrix oligonucleotide arrays with higher resolution than previously available arrays, first in unbiased whole-genome scans to discover CNV regions, and subsequently in localized regions to determine sample-level CNV calls. Our custom arrays were manufactured using standard Affymetrix processes , but witet al. [et al. [et al. [et al. [et al. [et al. [et al. [A fourth custom oligonucleotide array was designed to confirm putative CNV regions identified from the initial genome scans, as well as subsets of CNVs reported in the DGV November 2008), including those reported by Perry et al. , Wang et [et al. , and McC [et al. , and to [et al. and Whee [et al. studies,08, inclu [et al. studies Our arrays are essentially tiling designs with probe sequences picked from the reference genome (build 36), and are more similar to early BAC and Agilent CGH arrays than to recent genotyping arrays, such as the Affymetrix SNP 6.0 or the Illumina BeadChips, which generate allele-specific signals (with the exception of subsets of non-genotyping copy number probes). To observe copy number events on our arrays, we processed our probe signals with circular binary segmentation (CBS) , a CNV dDNA samples from each of the 90 Yoruba individuals was whole-genome amplified, randomly fragmented, end-labeled with biotin, and then hybridized to the three genome-scan arrays . Probe signals were quantile normalized across tet al. [et al. [Of the 3,850 putative CNVs having events observed in at least two individuals (defined as high confidence), approximately 67% overlapped at least one record in the DGV (March 2009), while only approximately 44% of the remaining regions having an event in only one individual (singletons) overlapped a DGV record Table . Overlapet al. , and mor [et al. . The putTo experimentally validate a sampling of the putative CNVs, we randomly selected observed events between 400 bp and 10 kb for PCR or quantitative PCR (qPCR). PCR primers were designed to amplify across putative breakpoints, while primers for qPCR were designed within gain regions. Figure These PCR results provided some assurance that the genome scans had relatively low false discovery rates for CNV regions; however, because of the stringent requirements applied to call an event, a noticeable false-negative observation rate was also demonstrated. PCR tests were performed on Yoruba DNAs selected in pairs, whereby an event was observed in one DNA but not the other on the genome-scan arrays. However, the patterns of bands in the PCR gels showed cases of actual losses or gains in 'non-event' DNAs Figure , the delBecause the primary objective of the genome-scans was CNV region discovery, we set stringent requirements for event detection that prioritized low false discovery of regions at the expense of sensitivity to observe sample level calls at those regions. Once CNV regions had been identified in the genome scans, we focused on designing a new array more suited to generating sensitive and reliable sample-level calls, where space on the genome-scan array originally occupied by additional array probes residing outside of CNV regions can now be better used. 
To optimize array design parameters that would increase sample-level call sensitivity, we designed a small test array with variable probe lengths from 39 to 69 nucleotides, variable probe feature sizes, and 5 replicates of each unique probe, at 150 arbitrarily chosen regions of which 105 were putative CNVs from the genome scan and the remainder were records from the DGV. Filters were not applied to the choice of probe sequences for the test array, which included probes that overlapped any known repetitive regions, including Alu elements. Results from a subset of 12 Yoruba individuals on the small test array suggested the use of 60-nucleotide long probes at 5 micron pitch, with 3 replicates per probe, and the inclusion of probes in repetitive regions, with the exception of Alu elements (data not shown). Probes on the test array corresponding to nearly all Alu elements were not responsive to copy number differences, while probes at other repetitive regions had variable responses that ranged from no change (similar to Alus), reduced response, or full response (similar to non-repetitive regions), with no clear correlation to the class of repeat elements (data not shown). Based on the test array findings, the CNV-typing array was designed to have higher sensitivity for event detection, and includes probes corresponding to repetitive regions (other than Alu elements). Using data from the CNV-typing array, a thorough study of the possible relationships between repeat elements and CNVs is also possible, but is beyond the scope of the current work.et al. [et al. [et al. [There were approximately 98,000 events observed at the putative CNVs across the 90 Yoruba on the CNV-typing array. Nearly 97% of the putative CNV regions discovered in the genome scans were confirmed to have at least one observed event on the CNV-typing array Table . The higet al. study, wet al. , includi [et al. study, p [et al. study, aet al. [The median length of the confirmed CNVs was 4.4 kb, which was slightly shorter than the median length of the putative CNVs Table . The lenet al. study, wet al. -21,24-39et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [In order to further compare our results with DGV records at the individual sample level, we selected six recent studies, including the McCarroll et al. study, w [et al. and Kidd [et al. studies [et al. study, o [et al. study ex [et al. study, 3 [et al. study, w [et al. study, wde novo occurrences [de novo events. The approximately 98,000 event calls at 6,368 confirmed CNVs across the 90 Yoruba were grouped by the 30 family trios. Of the total observed events, approximately 10,500 (10.8%) were observed in only the children of trios. The same 30 trios were also part of the McCarroll et al. [et al. [et al. [et al. [et al. referred to as CNVs inferred in offspring but not detected in parents (CNV-NDPs).Since the 90 Yoruba are each members of 30 family trios, we examined the inheritance of events from parents to children. The majority of copy number polymorphisms are inherited , rather urrences . The obsl et al. study, i [et al. study ar [et al. study be [et al. study beet al. [et al. [et al. [et al. [In order to delineate the observations of false positives in children and false negatives in parents in our work, the trio event calls from the McCarroll et al. and Wang [et al. studies [et al. and McCa [et al. studies,et al. [et al. [et al. [et al. [et al. [et al. [et al. 
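The trio comparison described above (events observed in a child but in neither parent) reduces to simple set arithmetic over the per-sample calls. The sketch below is a minimal illustration of that bookkeeping and is not the authors' analysis code; the call-matrix layout, sample identifiers and function name are assumed for the example.

```python
# Illustrative tally of "child-only" events from per-sample CNV calls grouped by trio.
# calls: dict mapping sample_id -> set of region ids at which an event (loss or gain) was called.
# trios: list of (father_id, mother_id, child_id) tuples, one per family trio.

def child_only_events(calls, trios):
    """Return, per child, the regions called in the child but in neither parent."""
    result = {}
    for father, mother, child in trios:
        seen_in_parents = calls.get(father, set()) | calls.get(mother, set())
        result[child] = calls.get(child, set()) - seen_in_parents
    return result

# Toy example: one trio in which the child carries one event not observed in either parent.
calls = {
    "father1": {"cnv_12", "cnv_88"},
    "mother1": {"cnv_12"},
    "child1": {"cnv_12", "cnv_88", "cnv_301"},
}
trios = [("father1", "mother1", "child1")]
print(child_only_events(calls, trios))  # {'child1': {'cnv_301'}}
```

As the text notes, events in this category mix genuine de novo changes with false-positive calls in the child and false-negative calls in a parent, which is why the observed rate is compared against the trio calls reported in the other studies.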
[The CNV-typing array has probes corresponding to shorter (< 1 kb) CNVs discovered by sequencing individual genomes ,19, enabet al. and Whee [et al. and Whee [et al. and Whee [et al. study.et al. [et al. [et al. [et al. [et al. [et al. [et al. [A large majority > 77%) of the shorter CNVs that were discovered by sequencing individuals of Western European descent had at least one observed event in the Yoruba Table . Based o7% of theet al. , Korbel [et al. , Wang et [et al. , Perry e [et al. , and Kid [et al. studies, [et al. and Whee [et al. studies et al. [et al. [et al. [et al. [That our high resolution genome scans of the 90 Yoruba uncovered as many as 2,690 potentially new CNVs with a median length of approximately 3.0 kb suggests that there are many more CNVs yet to be discovered on the shorter end of the size range. Because of the high resolution of our genome-scan arrays, we were able to delineate neighboring multiple smaller CNVs at regions earlier reported as single larger CNVs, as illustrated in Figure S2 in Additional data file 1. Perry et al. observed [et al. study, w [et al. study, wet al. [et al. [et al. [Even at a resolution of approximately 200 bp, our genome scan detected only a fraction of the CNVs reported in whole-genome sequencing studies examined in our current work are likely just a fraction of what will eventually be discovered through sequencing more individuals. In the near term, high resolution microarrays offer a cost-effective means to confirm these shorter CNVs, and type large numbers of individuals in order to gain biological insights beyond the initial discovery.Arrays were synthesized following standard Affymetrix GeneChip manufacturing methods utilizing contact lithography and phosphoramidite nucleoside monomers bearing photolabile 5'-protecting groups. Fused-silica wafer substrates were prepared by standard methods with trialkoxy aminosilane as previously described . An imprCandidate 49-mer probe sequences for the three genome-scan array designs were chosen from the non-repetitive regions of the genome, and filtered for extraneous matches to the genome in the central 16 nucleotides, resulting in a total of 32 million unique probes. Rather than placing probes sequentially across the three arrays, probes were dispersed such that every second and third probe against the genome was placed on separate arrays . Because of the inter-digitating of probes across the three designs, the inter-probe interval in any one design between the center positions of neighboring probes is generally 147 bp (the combined length of three probes). However, because probes were filtered out at repetitive regions throughout the genome the overall median interval between neighboring probes on the genome-scan arrays is 196 bp.The CNV-typing array design consists of approximately 800,000 unique probes, with each in triplicate for a total of approximately 2.4 million 60-mer probes. The replicate probes are placed in separated locations on the array to mitigate any regional variations in signals. The approximately 800,000 unique probes are organized into approximately 16,000 partitions, each containing up to 50 unique probes. The probe partitions correspond to putative or reported CNV boundaries. The probes within a partition are evenly spaced along chromosomes, with the exception of regions corresponding to Alu elements and occurrences of high allele frequency SNPs. 
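As a rough illustration of the partition layout just described, the sketch below spaces up to 50 probe start positions evenly across a CNV region while skipping masked intervals such as Alu elements or common SNP positions (the topic the next sentence returns to). It is a simplification written for this summary, not the actual design pipeline; the function name, interval format and example coordinates are invented for the illustration.

```python
# Illustrative (not the authors') probe-placement sketch for one CNV partition:
# choose up to 50 evenly spaced 60-mer start positions, avoiding masked intervals.

def place_probes(region_start, region_end, masked, probe_len=60, max_probes=50):
    """masked: list of (start, end) intervals (e.g. Alu elements, common SNPs) to avoid."""
    def overlaps_mask(start):
        return any(start < m_end and start + probe_len > m_start for m_start, m_end in masked)

    span = region_end - region_start - probe_len
    if span <= 0:
        return []
    n = min(max_probes, span)    # no more distinct start positions than the span allows
    step = span / max(n - 1, 1)  # even spacing of candidate starts
    positions = []
    for i in range(n):
        start = int(region_start + i * step)
        if not overlaps_mask(start):
            positions.append(start)
    return positions

# Example: a 5 kb region with one masked Alu-like interval; 3 of 50 candidates are dropped.
probes = place_probes(1_000_000, 1_005_000, masked=[(1_002_000, 1_002_300)])
print(len(probes))  # 47
```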
In order to mitigate any potential compounding effects on signals, probes with a common SNP in the HapMap repository within tThe 90 Yoruba individuals are from the HapMap Project ; genomicWhole-genome amplification of genomic DNA samples was performed using the REPLI-g Midi kit following manufacturer-supplied instructions, starting with 200 ng of input DNA in a 60 \u03bcl reaction. Amplified DNA was randomly fragmented by controlled partial digestion with DNase I. The optimal DNA target length for hybridization to the arrays was found to be in the range of 50 to 300 bp, with the majority of fragments at 100 to 200 bp. DNaseI at 2.5 U/\u03bcl was freshly diluted in 10 mM Tris pH 8 to a concentration of 0.3 U/\u03bcl; 3 \u03bcl of the diluted DNaseI was added to 60 \u03bcl of amplified DNA and 7 \u03bcl Fragmentation buffer (Affymetrix) at 37\u00b0C. To achieve the optimal size range, test fragmentation time courses were first performed using a small amount of the amplified DNA samples, where the incubation varied from 4 to 26 minutes. Following fragmentation, the amplified DNA was ethanol precipitated and resuspended in 33.5 \u03bcl water; 1 \u03bcl was removed to measure concentration, which was typically approximately 1.5 \u03bcg/\u03bcl. The fragmented DNA was then end-labeled with biotin using 2.5 \u03bcl of 30 mM DNA labeling reagent (Affymetrix) and 5 \u03bcl of Terminal Transferase (Affymetrix) in a 50 \u03bcl reaction, which included 10 \u03bcl of 5\u00d7 TdT buffer (Affymetrix). Labeling reactions were incubated for 2 hours at 37\u00b0C until heat inactivation at 95\u00b0C for 10 minutes.The labeled DNAs were hybridized to each array in 200-\u03bcl volumes. In addition to 15 \u03bcl of approximately 1 \u03bcg/\u03bcl labeled DNAs, the hybridization solution contained 100 \u03bcg denatured Herring sperm DNA , 100 \u03bcg Yeast RNA , 20 \u03bcg freshly denatured COT-1 DNA , 12% formamide, 0.25 pM gridding oligo (Affymetrix), and 140 \u03bcl hybridization buffer, which consists of 4.8 M TMACl, 15 mM Tris pH 8, and 0.015% Triton X-100. Hybridizations were carried out in Affymetrix ovens for 40 hours at 50\u00b0C with rotation set at 30 rpm. Following hybridization, arrays were rinsed twice, and then incubated with 0.2\u00d7 SSPE containing 0.005% Trition X-100 for 30 minutes at 42\u00b0C with rotation set at 15 rpm. The arrays were rinsed and filled with Wash buffer A (Affymetrix). Staining with streptavidin, R-phycoerythrin conjugate (Invitrogen) and scanning with the GCS3000 instrument (Affymetrix) were performed as described in the Affymetrix GeneChip SNP 6.0 manual .A sampling of putative CNVs in pairs of Yoruba samples was selected where an event was observed in one DNA but not the other . For standard PCR, putative CNVs having an event segment within a sample in the range 400 bp to 2.5 kb were tested; for quantitative PCR, CNVs having gain segments between 500 bp and 10 kb were tested. Primer sequences for standard PCR were designed from 300-bp candidate regions upstream or downstream of the longest event segments within a sample, and for qPCR, from within the shortest gain segment. Candidate regions having less than 50% RepeatMask (UCSC) were processed in either Primer3 or PrimeSignal intensities were quantile normalized in sets CBS was implFor both non-smoothed and smoothed segmentation analyses, gain and loss event thresholds were set to segment mean log2 ratios of > 0.25 and <-0.25, respectively. 
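Applied per sample, the thresholding just described amounts to classifying each CBS segment by its mean log2 ratio. The sketch below restates that rule with the genome-scan cutoffs given above as defaults; it is an illustration rather than the authors' code (CBS itself is assumed to have been run already), and the finer two-level cutoffs used for the CNV-typing array are described further on in the methods.

```python
# Classify one CBS segment by its mean log2 ratio (illustrative restatement of the
# calling rule, not the authors' code). Defaults are the genome-scan cutoffs above;
# the CNV-typing analysis described later uses tighter, two-level cutoffs.

def call_segment(mean_log2_ratio, loss_threshold=-0.25, gain_threshold=0.25):
    if mean_log2_ratio < loss_threshold:
        return "loss"
    if mean_log2_ratio > gain_threshold:
        return "gain"
    return "no event"

for value in (-0.6, -0.1, 0.0, 0.3):
    print(value, "->", call_segment(value))
# -0.6 -> loss, -0.1 -> no event, 0.0 -> no event, 0.3 -> gain
```

A per-sample event on the genome-scan arrays additionally required, as the next sentence describes, that overlapping segments from at least two of the three chip designs meet these thresholds.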
For each sample, overlapping segments from at least two of three chip designs was required to meet the thresholds in order to call a gain or loss. The boundaries of an individual event were defined by the longest overlap between any two event segments meeting the threshold. A putative CNV was defined as regions having events observed in at least one individual; and the boundaries of a CNV were defined by the longest event among individuals. There were 401 regions where putative CNVs from the non-smoothed segmentation intersected putative regions from the smoothed segmentation. In regions where multiple putative CNVs from the non-smoothed segmentation corresponded to one putative region from the smoothed segmentation, the non-smoothed CNVs were chosen. In regions of one-to-one correspondence, the generally longer putative CNVs from the smoothed segmentation were chosen.et al. [et al. [et al. [et al. [et al. [et al. [et al. [The sample-level event calling thresholds used in the segmentation analysis of the CNV-typing array data were determined by comparing against reference event calls taken primarily from the McCarroll et al. study an [et al. study re [et al. study. T [et al. ,20,30,31 [et al. study, 2 [et al. study, t [et al. study, s [et al. study haTo assess specificity and sensitivity of event detection in the CNV-typing data, segmentation thresholds were titrated at the 732 McCarroll reference CNVs, and at 6,578 putative CNVs from the genome scan. Any false positives or false negatives in the McCarroll reference event calls will artificially lower the estimates of sensitivity or specificity, respectively, of the CNV-typing array. Figure S6 in Additional data file 1 summarizes the results at seven threshold values that ranged from 0.35 (-0.35) to 0.10 (-10), and shows the trade-off between higher specificity and lower sensitivity. Event thresholds of -0.175 and 0.175 for loss and gain calls, respectively, were chosen; based on further titrations, second-level thresholds of -0.70 and 0.45 were chosen for homozygous deletions and multi-copy gain events, respectively. For each individual Yoruba sample, sets of probes for each CNV were analyzed separately by CBS, and segments with log2 ratios above or below the thresholds were called as events. Probes in the CNV-typing design were grouped into partitions corresponding to known or putative CNVs, where a given CNV may be represented by more than one partition . Although the CNVs vary in the number and density (probes per base-pair) of corresponding probes, the degree of discrimination of log2 ratios above or below the event thresholds were comparable across a range of event lengths and numbers of probes, with only slight loss of discrimination at longer lengths and fewer probes . Microarray raw intensities and chip library files are available at ArrayExpress under acBAC: bacterial artificial chromosome; CBS: circular binary segmentation; CGH: comparative genome hybridization; CNV: copy number variant; DGV: Database of Genomic Variants; qPCR: quantitative PCR; ROC: receiver operating characteristic; SNP: single-nucleotide polymorphism.All authors are current or former employees of Affymetrix.RR and GF conceived the experiments and designed the genome-scan arrays. HM and GF designed the typing array. PW and GF prepared samples and hybridized the arrays. PW, GF, and HM ran PCRs. HM and JH performed data analysis. HM and GF wrote the manuscript.et al. [et al. [et al. 
[The following additional data are available with the online version of this paper: Figures S1 to S7 and Tables S3, S5, S6 and S7 , reported CNVs from whole-genome sequencing studies . CNVs with locus_id numbers starting at 100,000 were from the smoothed segmentation analysis. Chromosome locations are on genome build 36. Confirmed CNVs had at least one Yoruba with an event on the CNV-typing array. For confirmed CNVs that overlapped at least one DGV record (March 2009), the closest matching record (variation_id) is listed along with its build 36 coordinates, length, cited reference, and discovery method. Regions were flagged as 'Complex' if both a loss and gain event were observed in the same individual.Click here for fileGel images correspond to 4% agarose (E-gel), gradient polyacrlyamide (PA gel), and 1% agarose (1% gel) electrophoresis gels. DNAs were run in pairs with one having an observed event (Event DNA_ID), and the other without an observed event (non-event DNA_ID). Confirmation calls were made based on amplicon length differences in each DNA pair, and marked the status of each pair: confirm, maybe (ambiguous), no (no evidence of event), or fail (PCR did not yield expected amplicons). At a subset of regions, amplicon bands were excised and sequenced (seq). The lengths of the putative CNVs are also listed.Click here for file(A) Event calls at confirmed CNVs are compared against consensus references from the Wang et al. [et al. [(B) Calls reported in the McCarroll et al. [et al. [(C) Calls reported in the Wang et al. [et al. [(D) Yoruba trios were arbitrarily assigned trio_ids. The DNA_ids of the 90 Yoruba are listed with the corresponding trio_ids.g et al. and McCa [et al. studies.l et al. study ar [et al. study. or gain, the calls were listed as 1 or 3, respectively. For papers with genome positions in build 35, the liftOver utility at UCSC was usedet al. study (lClick here for file"} {"text": "Primary melanoma of the adrenal gland is exceptionally rare as demonstrated by the few cases reported in the medical literature, and it has a high fatality rate. We present the case of a patient with two relapses and survival to date.We describe the case of a 58-year-old Caucasian woman who consulted her doctor with symptoms of asthenia, anorexia and weight loss. A mass was palpated in her abdomen at the height of the left hypochondrium. A computed tomographic scan revealed a retroperitoneal mass measuring 10 cm \u00d7 15 cm originating in the left adrenal gland. A left nephroadrenalectomy and splenectomy were performed. Histopathologically, the retroperitoneal mass corresponded to a melanoma, and no primary melanoma was found in any other location. The patient was treated with interferon-\u03b1-2b. Three years after her diagnosis the patient presented with a retroperitoneal relapse of the mass measuring 7.2 cm, which was removed. Five years after the first relapse a new retroperitoneal relapse mass was diagnosed, which was also removed. Since then the patient has been healthy and free from illness.et al. and Carstens et al., allowed us to diagnose primary melanoma of the adrenal gland.Histological and immunohistochemical studies, together with the criteria described by Ainsworth Primary melanoma of the adrenal gland is an exceptionally rare occurrence, as demonstrated by the few cases described in the medical literature -11. 
BothPrimary melanoma of the adrenal gland is usually a voluminous, non-functional tumor showing heterogeneous contrast enhancement on the computed tomographic (CT) scan. The diagnosis is made on the basis of histological and immunohistochemical studies.et al. [et al. [et al. criteria, with the exception of the last one, since our patient is still alive.The adrenal glands can be sites for the metastatic dispersal of cutaneous or visceral melanomas in up to 50% of cases , and hiset al. establis [et al. establisPrimary involvement of the adrenal gland is extremely rare. Only 23 patients have been described in the English-language literature, and documentation was complete in only 11 of them [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [In 1946, Kniseley and Baggentoss [et al. publishe [et al. publishe [et al. , Sasidha [et al. and Cars [et al. reported [et al. in 1986, [et al. in a 42- [et al. in a 33-et al. [et al. [The mean age of the patients in all of these 12 cases is 50.4 \u00b1 17.2 years, with a median age of 49 years and an age range between 24 and 78 years. Fifty percent of these patients were men, and the initial mean size of the tumors was 10.9 cm \u00b1 3.8 cm with a median size of 10 cm and a range between 6 cm and 17 cm. The most frequent symptom in these cases was pain and epithelial membrane antigen and the neuroendocrine markers chromogranin A and synaptophysin were negative. A diagnosis of non-pigmented melanoma was established administered intravenously. Analytical and radiological reviews were carried out. Thirty-two abdominal CT scans were obtained throughout this period, making it possible to diagnose and treat relapses in a relatively timely manner.Two years after the patient's diagnosis and surgery, during a routine review involving an abdominal CT scan, a solid retroperitoneal mass with areas of necrosis measuring 7 cm \u00d7 2 cm \u00d7 4.5 cm was found in the left renal fossa, invading the abdominal wall and in direct contact with the intestinal loops. Also, a para-aortic adenopathy with a diameter of 1.8 cm was observed Figure .The patient once again underwent surgery while under general anesthesia. We performed a mid-line suprainfraumbilical incision and located a tumor in the previous surgical area. Its diameter was approximately 10 cm, it seemed to infiltrate the abdominal wall and it was adhered to the splenic angle of the colon, although without infiltration. After the adhesions were removed, we completely removed the tumor, together with a para-aortic adenopathy with a diameter of approximately 3 cm. The post-operative period progressed without any incidents. The pathology of the retroperitoneal lesion and the lymph node showed a tumor entirely similar to the one previously excised.Four years after the initial diagnosis, during a radiological review (chest X-ray and CT scan), pulmonary nodules were detected that were suggestive of pulmonary metastasis. Chemotherapy with dacarbazine and IFN\u03b12b was begun by the oncology service, and one year later the CT scan revealed that the nodules had disappeared.In the CT scan obtained eight years after the initial diagnosis, we observed a nodule measuring 4.5 cm \u00d7 2.3 cm in the left renal fossa and a nodule measuring 2.5 cm \u00d7 2.5 cm in the mesenteric fat of the descending colon . 
Melanomas have also been found in the mouth cavity, larynx and bronchi, the esophagus, the rectum, the genitourinary system, the meninges, the ovaries, the uterine cervix, the vagina and the adrenal glands. One of the major obstacles for clinicians is identifying whether the tumor is really a primary tumor or whether it is metastatic .The possibility exists that a tumor lesion of melanocytic cells in the skin or eyes may have gone unnoticed or that a skin lesion removed months or years ago may have been the primary origin of undiagnosed melanoma. Also, some authors have mentioned that primary skin melanoma occasionally reappears spontaneously, and the only clinical symptoms in some of these patients are due to the presence of metastasis in the regional lymphatic ganglia or the internal organs -19. SomeThe adrenal medullary blasts and melanoblasts have a common embryological origin at the neuroectodermal level , similarIn primary melanomas, the four stages of melanogenesis are usually observed . With reBecause at the macroscopic level the melanoma may have a brown or black color, it is necessary to also establish the differential diagnosis with other pigmented adrenal lesions such as pigmented adrenal adenomas . These aAt the macroscopic level, adrenal hematomas may also give rise to confusion; however, under the microscope, hemosiderin-laden macrophages may be seen, and the histological characteristics are completely different. Adrenocortical carcinomas are tumors that require the establishment of a differential diagnosis. It is a larger-sized tumor than the adenoma, and its cells do not stain positively for S-100 or HMB45 .et al. [et al. [The clinical case we present herein meets the criteria of Ainsworth et al. and Cars [et al. . The tumWritten informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.The authors declare that they have no competing interests.LGS participated surgical treatment, design and drafted manuscript. SPF participated in the design, literature review and drafted manuscript. MJLP participated in histopathology immunohistochemical study. FAM participated in histopathology and immunohistochemical study. JMS participated in surgical treatment. JARG participated in surgical treatment. All authors read and approved the final manuscript."} {"text": "The exploitation of DNA for the production of nanoscale architectures presents a young yet paradigm breaking approach, which addresses many of the barriers to the self-assembly of small molecules into highly-ordered nanostructures via construct addressability. There are two major methods to construct DNA nanostructures, and in the current review we will discuss the principles and some examples of applications of both the tile-based and DNA origami methods. The tile-based approach is an older method that provides a good tool to construct small and simple structures, usually with multiply repeated domains. In contrast, the origami method, at this time, would appear to be more appropriate for the construction of bigger, more sophisticated and exactly defined structures. DNA (deoxyribonucleic acid) is the molecule of choice for designed nanostructure fabrication since it has excellent features such as molecular recognition, self-assembly, programmability, predictable nanoscale structure and is easily synthesized . 
DNA selFabrication of DNA nanostructures was first proposed by Nadrian C. Seeman in the 1980s . Inspireet al. [et al. [et al. [et al. reported the formation of 2D DNA crystals by using only two DNA strands [et al. reported construction of DNA nano-arrays by using three oligos and implementing sequence symmetry [et al. demonstrated formation of three dimensional (3D) DNA crystals using tensegrity triangles as the building blocks [By applying the tile-based motifs and optimizing them, Winfree et al. , LaBean [et al. and Feng [et al. introduc [et al. . Constru [et al. \u201313. Form [et al. , tensegr [et al. and six- [et al. have bee strands , and He symmetry . A reporg blocks .et al. [et al. introduced a way to incorporate some features of both the origami method and the tile based self-assembly to produce weave tile structures [Novel motifs are continuously being introduced, and recently, a half-crossover based structure was reported by Yin et al. , which aructures . They reet al. reported formation of an octahedron by self-folding of a 1.7 kb single-stranded DNA (heavy strand) and a few smaller strands (light strands) [et al. reported construction of a tetrahedron DNA structure. They have also reported conformational changes in the structure due to restriction enzyme digestions [et al. reported the formation of tetrahedron, dodecahedron, buckyball structures [The DNA tile based system has been used to make 3D structures. The first tile based 3D DNA structure was introduced by Seeman . Howeverstrands) . This stgestions or by stgestions . Formatigestions . Anotherructures and a DNructures .The addition of dynamic properties to DNA nanostructures is another practice which has been applied over the last decade ,24,27\u201332et al. [vice versa by multi step hybridization and strand displacement. Later, scientists have programmed the movement of DNA molecules based on toehold strand displacement. Shih and Pierce demonstrated walking of a DNA molecule on top of a DNA tube driven by fuel strands [et al. introduced a new strategy for stepwise walking of a bipedal walker, in which they triggered the forward step by introducing Hg2+ and H+ ions and the backward step by introducing OH\u2212 ions and cysteine [Toehold based strand displacement in DNA nanotechnology was initially proposed by Yurke et al. . They de strands , which h strands \u201336. In a strands . Recentlcysteine .Tile-based DNA nanostructure architecture is a promising method, but there are major drawbacks for the tile-based assembly strategy. First, the design of complex structures using the tile method is a challenge since one needs to design and check the new sequence for each step, which is a time consuming and problematic step. Secondly it is very hard to control production of complex high order nanostructures, and even though some of the structures have finite size and shape, many other structures such as arrays or grids grow as long as sticky strands are available, and therefore there is no control over size. Finally, in order to obtain the predicted structure the strands need to be highly quantitatively controlled.et al. reported formation of the nano arrays by using a long scaffold and some shorter strands [et al. who reported formation of an octahedron by self-folding of a long single-stranded DNA and a few smaller strands [The \u201cDNA origami\u201d method was first proposed and implemented by Paul W. K. Rothemund in 2006 , in whic strands . However strands . 
Moreove strands .The term origami refers to the Japanese folk art of folding paper into a special shape. The method is called DNA origami since one long strand of DNA is folded to produce the desired structure by the help of smaller staple strands. The origami folding process is described in The origami method is applied to construct 2D and recently 3D DNA nanostructures. In the revolutionary and earliest work , Rothemuet al. reported a tile assembly of the origamies to make higher order self-assembled DNA nanostructures [et al. reported formation of multi-domain DNA origami by using origami four-way junctions [et al. reported construction of the multi-domain DNA origami [Formation of DNA origami from a double stranded DNA (dsDNA) scaffold has been reported which opructures . Very reunctions , and in origami . The DNAet al. showed patterning of enhanced green fluorescent protein on top of the origami [et al. demonstrated patterning of the coat of arms of Ukraine by putting streptavidin molecules on top of a rectangular origami [et al. [DNA origamies are interesting since they can be used as platforms for the study of other systems. In the earliest work, patterns (hairpin dumbbells) on top of the origamies were imaged by AFM . DNA ori origami . Kuzyk e origami . Protein [et al. . Additio [et al. .et al. reported gold metallization of branched DNA origami [DNA origami has been used to investigate binding of thrombin molecules to their aptamers . Moreove origami .et al. [et al. positioned carbon nanotubes on top of the DNA origami by aid of interactions between streptavidin molecules and biotinylated DNA strands precisely positioned on the origami and wrapped around the carbon nanotube [et al. reported specific positioning of gold nanoparticles on top of lithographically confined DNA origamies [et al. attached gold nanoparticles to a tube shaped origami and showed formation of plasmonic nanostructures. The exact positioning capability intrinsic to the DNA origami method enabled them to incorporate nanoparticles at very precise positions and thus they could study and prove predicted optical properties of the nanostructures [et al. demonstrated label free detection of RNA molecules by an AFM study of hybridization of the target on top of a rectangular origami [et al. reported a new secondary DNA binding site in the enzyme topoisomerase I [In a very interesting study, Manue et al. showed pnanotube . In anotnanotube . Furtherrigamies . Positiorigamies . Kuzyk eructures . Ke et a origami . In a ve origami . In very origami . DNA ori origami ,67. DNA origami and by umerase I .et al. programmed directed, uniform and continuous translation of a molecular motor along a 100 nm track on flat DNA origami [In a very interesting development, scientists from Hao Yan\u2019s and Erik Winfree\u2019s groups demonstrated a molecular robot which moves along a predefined path on top of a rectangular origami . The mov origami .A number of studies reported the production of 3D DNA origami structures in 2009. William Shih is a pioneer in this field and one might consider the production of an octahedron as the first successful attempt to make 3D DNA origamies . HoweverThere are several strategies for assembling 3D origamies. One method is based on folding flat surfaces against one another through stacking of the helices in separate domains of a flat multi domain origami. The links between the helices could be coaxial, noncoaxial, orthogonal or at any angle. 
This strategy has also been used to structure complex 2D origami, for example to produce a triangle and threAnother strategy is to make 3D structures out of multiple layers of DNA origami . Shih anet al. who showed reconfiguration of twisted DNA nanostructures [et al. reported production of a 3D tetrahedral DNA structure which formed from one DNA strand [in vivo, which is a masterful achievement in developing the origami method, since it raises the hope of large scale production of high quality origamies at relatively low cost.Shih\u2019s group reported design and formation of twisted structures, resulting from the insertion or deletion of bases in between the crossovers . A year ructures . They crA strand . This miDNA is a unique molecule for generating nanoscale molecular architectures, with a large number of useful features that facilitate molecular engineering. The tile based method is a promising method when dealing with small and non-complex structures or structures with repeating building blocks. Although tile based structures provided us with a large number of useful and interesting learning steps, the drawbacks of the method necessitated the investigation of another DNA structural method, called DNA origami. The origami method provides a free hand method to design and to construct more sophisticated and highly addressable structures. Currently, using the DNA origami method, construction of large, highly-ordered and complex structures consisting of several hundred thousand atoms is not only possible, but relatively easily accomplished. The DNA origami method is a very useful tool to generate precisely defined molecular systems for characterization. Some of the demonstrated applications of DNA origami include molecular pattering of surfaces, directed positioning of particles/materials, construction of nanorobots and carriers, drug and material delivery, molecular recognition and sensor applications.et al. [et al. [An example of a high impact application of DNA origami, an enclosed, \u201csmart\u201d molecular nano-carrier, was recently reported by Douglas et al. . They pret al. , to open [et al. , in whicHowever the tile-based and origami methods still suffer from major drawbacks. DNA origamies are not stable in various conditions and require special care. For example, pH has a drastic effect on the structure of DNA nanostructures. In low pH solutions the DNA may be de-purinated and in high pH the hydrogen bonding between the DNA strands will be disrupted. Heating, many chemicals and some organic solvents denature double-stranded DNA. DNase enzymes destroy DNA strands by catalyzing the cleavage of the DNA backbone. Thus handling and storing of samples consisting of DNA structures must be performed carefully. The ions present in solution have a strong impact on DNA structures; at low ionic strength the DNA structures will decompose, and high salt concentrations lead to aggregation of the structures. Molecular tensions and mechanical forces may have negative effects on structures, particularly on extended structures, too.et al. proposed production of origamies from double-stranded DNA, a method that may aid in the use of various scaffold sources [Today, production of DNA origami is based on the use of a very limited number of different scaffolds. To make more sophisticated structures, developing new scaffold production methods will be necessary. A few years ago, H\u00f6gberg sources . Moreove sources ,42,82,83 sources ,83,84. E"} {"text": "In their letter, Haighton et al. 
recommend that we review Bradford Hill guidelines for establishing causality. As noted in our article , we did in vivo (Haighton et al. state that glucuronidated BPA does not appear to be biologically active in mammalian systems; however, glucuronidated BPA can be deconjugated by the placenta or transferred across the placenta, where it can be deconjugated by other fetal tissues . Therefoin vivo .The authors were unclear about how \u201cnormal limits\u201d for neurobehavioral assessment are defined. In our study , we admiFinally, Haighton et al. state that other etiologies could be responsible for the abnormal exam at 1 month of age. We stated this exact same point very clearly in our case report . Althoug"} {"text": "The authors wish to acknowledge an error in and and correct an error in their use of a reference. The fourth sentence of the seventh paragraph of the Introduction should read:\"A biosensor system, observed by Checa et al. (2011), was capable of detecting Au, quantification of Au was not reported here: However, quantification is crucially important for geochemical exploration [26].\"In addition, the authors wish to acknowledge the relevance of an article 'Cerminati S, Soncini FC, Checa SK. Selective detection of gold using genetically engineered bacterial reporters. Biotechnol Bioeng. 2011 Nov; 108(11):2553-60. doi: 10.1002/bit.23213', which relates to the results described in the present article, as follows:A Au biosensor system based on the golTSB regulon and a fluorescent reporter protein was previously developed by Cerminati et al. (2011) and dose-response curves were constructed using KAu(CN)2. While Cerminati et al. (2011) suggested that biosensor systems may be useful for quantifying Au in real soil samples this was not tested in their study. Cerminati et al. (2011) developed a Au biosensor system which under \"clean\" laboratory conditions was capable of detecting environmentally relevant levels of Au. Our study takes further steps towards the development of a field-ready biosensor system with the development and testing of a selective extraction technique for Au, thermodynamic modelling of Au complexes under experimental conditions and the electrochemical testing of the sensor on complex environmental samples.Similar to the biosensor system presented in Zammit et al. (2013), the Au biosensor developed by Cerminati et al. (2011) was able to quantify Au. However, the gold biosensor developed by Cerminati et al. (2011) itself and the experimental conditions in the Checa et al. Bacterial sensing of and resistance to gold salts. Mol. Mic. 63: 1307-1318.) and Cerminati et al. (2011) studies varied in a number of important aspects from the ones in Zammit et al. (2013). The Cerminati et al. (2011) biosensor was developed using the golTSB genes from S. typhimurium fused to the gene of a green fluorescent protein (GFP) and expressed in E. coli. In contrast, the biosensor developed for this study used the golTSB genes from S. typhimurium transcriptionally fused to a promoterless lacZ reporter cassette, whose activity was measured electrochemically aimed at future in-field-application. These differences may have contributed to the differences in sensitivity of Au detection reported in both studies. The biosensor developed by Cerminati et al. (2011) was able to quantify Au down to 33.23 nM Au(I). The biosensor developed by Zammit et al. (2013) was able to quantify Au down to a detection limit of 10 nM. 
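For orientation, these molar detection limits can be converted to mass concentrations with simple arithmetic; the short sketch below uses only the quoted limits and the molar mass of gold and is not taken from either study.

```python
# Illustrative arithmetic: convert molar Au detection limits to mass concentration.
AU_MOLAR_MASS = 196.97  # g/mol for gold

for limit_nm in (33.23, 10.0):                    # nM values quoted above
    mol_per_l = limit_nm * 1e-9                   # nM -> mol/L
    ug_per_l = mol_per_l * AU_MOLAR_MASS * 1e6    # g/L -> µg/L
    print(f"{limit_nm} nM Au ≈ {ug_per_l:.1f} µg/L (ppb)")
# 33.23 nM ≈ 6.5 µg/L; 10 nM ≈ 2.0 µg/L
```

Both limits are therefore in the low parts-per-billion range.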
However, differences may also be a result of the different Au(I/III)-complexes used in both studies to construct standard curves. While KAu(CN)2 was used by Cerminati et al. (2011), Au(I)-thiosulfate and AuHCl4.3H2O were added to the medium by Zammit et al. (2013). The concentrations of metals tested ranged from 10 *M of Hg(II) to 1 mM of Cu(II) (50 *M of Au was used). These concentrations were not normalized in subsequent analyses. Importantly, in Zammit et al. (2013), cross reactivity of Au(I/III) was tested by mixing the complexes mixed with the other metal ions to show that Au can still be detected by the sensor whereas the Checa et al. (2007) and Cerminati et al. (2011) studies only investigated one metal at a time."} {"text": "Sir,et al.[Helicobacter pylori infection in patients using NSAIDs\u201d in the previous volume of Saudi Journal of Gastroenterology. They have investigated the accuracy of rapid urease test (RUT) in Helicobacter pylori strains during the gastroscopy process. In response to their article, we present our contradictory evidences about the efficacy of RUT test as the confirmation of H. pylori isolates in biopsy specimen. Detection of H. pylori in clinical samples is an important subject for clinicians for further management of this rogue microorganism. Routinely, diagnosis tests of H. pylori are divided into two major groups:[et al.[H. pylori should be confirmed by using at least any two of thee following methods: RUT, histology, UBT (Urea Breath Test), specific PCR assay, and bacterial culture. Nevertheless, they reported H. pylori-positive strains in their study, while no culture or specific PCR assay were performed in their examination.[H. pylori, while their RUT was primarily reported as being negative. Foroutan et al.[H. pylori strains after culture and specific gene PCR assay.[et al.[H. pylori infection.Foroutan et al. publisher groups:3 invasivs:[et al. It was rs:[et al.3 that thmination. Our datamination. demonstran et al. concludean et al.5 we obseCR assay. In otherCR assay.5 This imy.[et al. study, s"} {"text": "The objective of this review is to evaluate the literature on medications associated with delirium after cardiac surgery and potential prophylactic agents for preventing it.Cumulative Index to Nursing and Allied Health, and EMBASE with the MeSH headings: delirium, cardiac surgical procedures, and risk factors, and the keywords: delirium, cardiac surgery, risk factors, and drugs. Principle inclusion criteria include having patient samples receiving cardiac procedures on cardiopulmonary bypass, and using DSM-IV-TR criteria or a standardized tool for the diagnosis of delirium.Articles were searched in MEDLINE, Fifteen studies were reviewed. Two single drugs (intraoperative fentanyl and ketamine), and two classes of drugs (preoperative antipsychotics and postoperative inotropes) were identified in the literature as being independently associated with delirium after cardiac surgery. Another seven classes of drugs and three single drugs have mixed findings. One drug (risperidone) has been shown to prevent delirium when taken immediately upon awakening from cardiac surgery. None of these findings was replicated in the studies reviewed.These studies have shown that drugs taken perioperatively by cardiac surgery patients need to be considered in delirium risk management strategies. While medications with direct neurological actions are clearly important, this review has shown that specific cardiovascular drugs may also require attention. 
Future studies that are methodologically consistent are required to further validate these findings and improve their utility. Diagnostic and Statistical Manual fourth, text-revised (DSM-IV TR) edition ))emergencance use , 7. Neveance use .et al. [et al. [et al. [et al. [et al. [et al. [Similarly, it is also difficult to establish the role of postoperative drugs on delirium with such observational designs. Even though delirium is defined by the DSM-IV-TR as having multifactorial etiology the role of postoperative factors on delirium is frequently downplayed in studies because they are not commonly considered \u2018predictors\u2019 of delirium, despite the fact that they are potentially modifiable. For instance, Afonso et al. , Katznel [et al. and Rede [et al. did not [et al. , Shehabi [et al. , and Tul [et al. focused Other studies that have looked at the influence of drugs on delirium are randomized controlled trials (RCTs). These studies, which are more robust, are typically designed to investigate the prophylactic effectiveness of certain medications like dexmedetomidine or rivastigmine to prevent delirium in vulnerable individuals. Unlike observational studies, well-controlled RCTs can suggest that any observed differences in the rates of delirium are due to differences in drug administration.In this review, delirium after cardiac surgery is considered distinct from other types of postoperative deliria for a number of reasons. For one, different surgical populations often have different medication profiles and require different anesthesia techniques. Thus, the pharmacological triggers of delirium will vary depending on surgery. Secondly, the use of cardiopulmonary bypass (CPB) in cardiac surgeries requires special consideration since its use is associated with postoperative effects on neurological function and an increase in delirium . Lastly,The purpose of this article hence is to synthesize the evidence in the literature for drugs that have been shown to be associated with either a higher or a lower rate of delirium after cardiac surgery. We also discuss studies that have attempted to use certain drugs for strategic prevention of postoperative delirium.1. The DSM-IV-TR, which is the current edition of the manual as of this publication, was published in 2000; therefore, only studies that were published in 2000 onward were included in this review. Studies were excluded if they took place in a community setting, if delirium was not a specific outcome, if the diagnosis of delirium was based solely on a clinical diagnosis without the use of DSM criteria or a standardized tool, or if they were reviews or case reports. Studies were searched in MEDLINE (January 1948 through January 2011), Cumulative Index to Nursing and Allied Health , and EMBASE (1980-2011). MeSH headings that were searched included the following: delirium, cardiac surgical procedures, and risk factors. Keyword searches included: delirium, cardiac surgery, risk factors, and drugs. Reference lists of the articles that were retrieved were also searched for relevant citations. 
Articles were included only if they were published between January 2000 and December 2010, used DSM-IV-TR criteria or a standardized assessment tool for the screening or diagnosis of delirium, involved more than one assessment of delirium in the postoperative period, included only adults, were written in English, were prospective, retrospective, or interventional in their designs, and included cardiac procedures requiring CPB, i.e., coronary artery bypass grafting (CABG), valve replacements, combination CABG-valve replacements, or heart transplantset al. [et al. [n = 284, 158; cardiac patients n = 9,272), and this gives the study considerable relevance for cardiac surgery despite the small proportion of the total sample size that were actually cardiac surgery patients. More importantly, when the rate of delirium was analyzed with respect to their primary factor of interest , they found that the type of surgery received did not affect the relationship that they found between preoperative statin use and postoperative delirium [et al. [It should be mentioned that the study by Redelmeier et al. possesse [et al. did incldelirium .For the et al., and Hude [et al. also quaet al. [From the studies that were evaluated, several drugs that patients were taking as outpatients prior to surgery, during surgery, and in postoperative intensive care emerged as being either significantly predictive or protective of postoperative delirium. In most studies, drugs that were identified to be significantly associated with delirium by univariate analysis were further analyzed by multivariate logistic regression to identify drugs that were independently associated with delirium. One study that took a rather different approach was by Katznelson et al. , who per1.Eight prospective observational studies, 5 randomized controlled trials, 1 retrospective observational study and 1 post-hoc analysis were selected for review, and these are summarized in Table 2 for a complete summary of the drugs that have been studied in relation to delirium after cardiac surgery.See Table IPreoperative use of psychiatric medications is frequently cited to be a risk factor for postoperative delirium . Howeveret al. [p < 0.001). This result was obtained by controlling for age, sex, social status, prior admissions, prior use of medications, all neuropsychiatric, cardiac, vascular and miscellaneous medications, and duration and type of surgery in their multivariable logistic regression model [Only one single study included preoperative antipsychotic use as a factor in their analysis. Redelmeier et al. found thon model . No otheet al. [p = 0.01) [p \u2264 0.01), but also was reported to be highly predictive: patients who were taking SSRIs or tricyclic antidepressants and/or drugs with known anticholinergic effects were 5.12 times more likely to become delirious than patients who were not taking these drugs [et al. [et al. [et al. [n = 287, 353), and they were able to acquire a much larger number of patients who were taking antidepressants in the preoperative period . Redelmeier et al. [p < 0.001). The work on antidepressants and delirium is more extensive, but likewise unconvincing due to small sample sizes and conflicting findings. Tully et al. looked a = 0.01) . Despite = 0.01) . This co = 0.01) . A study [et al. also inc [et al. also inv [et al. study far et al. found thet al. [et al.\u2019s [p < 0.001). 
Thus, the findings for the influence of preoperative benzodiazepine use on postoperative delirium are inconclusive and further work is required to reveal any relationship that may exist.Afonso et al. includedet al.\u2019s multivarNo studies were found that included preoperative use of mood stabilizers, stimulants, anxiolytics or any other psychiatric drugs. et al. [p = 1.00). However, given that only four individuals in the entire cohort of 112 patients were taking opioids before surgery [et al. [et al. [Preoperative opioids have not been thoroughly investigated for their relationship to delirium. Only one study meeting inclusion criteria included this as a factor . Koster et al. showed t surgery , this fi [et al. and Tull [et al. . et al. [et al. [et al. [et al. [et al. [et al. [p = 0.10; p = 0.31 respectively). Though Tan et al. [n = 20 out of a total n = 53), the broad definition that they attributed to the variable \u201c other anticholinergic meds\u201d means that additional studies are required for evidence on the effects of anticholinergic medications on postoperative delirium.Two studies evaluated the relationship between preoperative anticholinergic drug use and postoperative delirium, and neither found an association between anti-cholinergics and delirium. These studies by Tan et al. and Tull [et al. were bot [et al. used the [et al. included [et al. -31, it w [et al. nor Tull [et al. found a n et al. did haveet al. [in vitro to have moderate anticholinergic activity at therapeutic doses [Interestingly, as noted above, when Tully et al. grouped ic doses , it may ic doses ) to alloet al. [et al. [Studies that have looked at the influence of preoperative statin use on the outcome of delirium after cardiac surgery have produced contrasting findings: of the four studies that focused specifically on preoperative statin use, one study showed that it was protective against delirium , anotheret al. was a pr [et al. performeet al. [p < 0.001). To control for the broad range of surgeries that were included in this study, surgery type was included as a covariate in the analysis. This produced an OR = 1.12, with a 95% confidence interval of 0.99 \u2013 1.27 (p = 0.07), meaning that surgery type had no effect on the relationship between statin use and delirium [et al. [et al. [Redelmeier et al. found thdelirium . Redelme [et al. then pro [et al. . It is p [et al. . Had the [et al. also fou [et al. . et al. [p < 0.01) [On the other hand, the prospective study by Katznelson et al. came to < 0.01) . This re < 0.01) as well < 0.01) . et al. [et al. [et al. [et al. [Besides advanced age, one confounding factor that was not appropriately controlled for in neither the Katznelson et al. nor the [et al. study wa [et al. , 36. The [et al. , for exa [et al. , a selec [et al. . n = 58 in Hudetz et al., [n = 113 in Burkhart et al., [Other studies that included statins or lipid-lowering agents as variables in their databases failed to find a significant association with postoperative delirium use in either direction , 11. Thi et al., ; total nBased on evidence from a systematic review, Kulik and Ruel recommenAntihypertensive drugs taken in the preoperative period, other than drugs with direct cholinergic receptor interactions, have not been linked to delirium after cardiac surgery in any study that was reviewed. 
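As a brief aside on the adjusted odds ratios and 95% confidence intervals quoted above, such intervals are typically obtained by exponentiating a logistic-regression coefficient and its Wald limits. The sketch below shows only that arithmetic; the coefficient and standard error are invented values chosen to roughly reproduce an odds ratio of 1.12 with a 0.99–1.27 interval and are not taken from any of the cited studies.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its Wald
    95% interval to obtain an odds ratio with confidence limits."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# illustrative values only (not from the studies discussed above)
or_, lo, hi = odds_ratio_ci(beta=0.113, se=0.064)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")   # OR = 1.12, 95% CI 0.99-1.27
```

Returning to the individual antihypertensive classes: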
Diuretics, calcium channel-blockers (CCBs), angiotensin-converting enzyme inhibitors (ACE-Is), \u03b2-blockers, angiotensin receptor blockers (ARBs) and nitrates have all failed to show a relationship with delirium in the studies published thus far , 10, 24.IIet al. [p < 0.05). However, when Santos et al. [p-values \u2264 0.1 from univariate analysis, the use of pre-anesthetic diazepam was not identified to be an independently associated risk factor for delirium [One class of drugs commonly used during surgery that has been implicated in the development of delirium after cardiac surgery is the benzodiazepines and their derivatives. Specifically, Santos et al. showed tp = 0.006) [p = 0.003) [et al. [et al. [Fentanyl is frequently used to provide sedation and analgesia during cardiac surgical procedures because of its fast onset, short duration of action, and minimal cardiovascular depressive effects. A post-hoc analysis performe= 0.006) . When th= 0.003) . Importa= 0.003) . The ori [et al. had anal [et al. also didin vitro studies have been unable to replicate earlier findings demonstrating that fentanyl displays serum anticholinergic activity [et al. [et al. [The risk that fentanyl imparts is likely not due to its purported minor anticholinergic effects because recent activity , 31. It [et al. is media [et al. reportedet al. [p = 0.01) [p < 0.05) [et al. [The anesthetic agent ketamine is commonly regarded as a \u201cdissociative\u201d drug and has been frequently linked with emergence delirium . Howeveret al. to decre = 0.01) . There w < 0.05) . This le [et al. to postuet al. [p = 0.36 for the day of surgery, and p = 0.46 for postoperative day 1), nor did the proportion of patients who were given morphine in the postoperative period (p = 0.70). Mean total doses of fentanyl received during surgery were also compared, but did not statistically differ between the groups [et al.\u2019s [et al. [et al. [Another possible reason for why ketamine may be associated with a decreased rate of postoperative delirium may be because using ketamine reduces opioid consumption in the postoperative period during the first 48-hours after cardiac surgery, as it is able to serve as an adequate adjunct to analgesic management for pain from sternotomies . In Hudeet al. , the amoe groups . Howeveret al.\u2019s post-hoc [et al. to calcu [et al. were onlIIIet al. [In a review of clinical risk factors for delirium by Sockalingam et al. , four stet al. -44. Howeet al. [Within the cardiac surgery population, similar to preoperative use of opioids, postoperative use of opioids has not been shown to increase or decrease the rate of postoperative delirium. Specifically, two prospective studies found no statistically significant relationship between either average morphine-equivalent dose of opioids consumed over 3 postoperative days or opioid dose per kilogram of body weight and the incidence of delirium , 25. Hudet al. were alsChoice of sedatives in the postoperative period may be critical for preventing postoperative delirium. The selective, centrally-acting \u03b1-2-adrenergic agonist, dexmedetomidine, has recently garnered interest due to its ability to provide adequate postoperative sedation and analgesia without producing excessive hypotension or the need for vasopressors, while reducing the risk of delirium after cardiac surgery . Unlike et al. [p = 0.088) [p = 0.040) [A randomized controlled trial by Shehabi et al. compared= 0.088) . Rather = 0.040) . Thus byet al. [et al. [p < 0.001 for dexmedetomidine vs. 
propofol, and vs. midazolam) [et al. [et al. [In another randomized study by Maldonado et al. , postope [et al. , they fodazolam) . One pro [et al. and Mald [et al. is that et al. [et al. [et al. [et al. [et al. [et al. [One likely reason for the discrepancy between the rates of delirium associated with dexmedetomidine use reported by Shehabi et al. and Mald [et al. is due t [et al. used a s [et al. used for [et al. , the rat [et al. may be u [et al. . et al. [p = 0.002). This was found to be one of the strongest independent predictors of delirium in this sample after using multivariate regression analysis. No other study in the context of cardiac surgery has yet reproduced this finding. While inotropes are commonly used in the postoperative period to maintain hemodynamic stability in cardiac surgery patients, only one study that met the inclusion criteria looked at their effects on delirium. Norkiene et al. reportedet al. [p = 0.08, p = 0.05, p = 0.07 respectively). However, delirious and non-delirious individuals did have significantly different ejection fractions prior to surgery, with delirious individuals having significantly lower mean ejection fractions in the preoperative period (p = 0.005) [It is especially important, when considering the influence of inotropes on delirium, to consider if this association may be confounded by indication. The reason for this is because problems with cardiac contractility can affect cerebral perfusion, and perfusion abnormalities have been observed by Single Photon Emission Computed Tomography (SPECT) in the brains of older hospitalized delirious patients . With reet al. study, t= 0.005) . Thus, iIVet al. [2 and serotonin 5-HT2 receptors, is the only agent that has been studied in the context of delirium after cardiac surgery for its role as a potential prophylactic agent. The reason this drug was selected for investigation is presumably because of its relatively long half-life, which can be up to 20-hours long in poor metabolizers, and this is due to the similar pharmacology of its active metabolite, 9- hydroxyrisperidone [As previously mentioned, preoperative use of antipsychotics was shown by Redelmeier et al. to increet al. . The mosperidone . In its peridone . In the peridone . Unlike Despite these findings, the future use of risperidone for preventing delirium in cardiac surgery patients remains uncertain. Adverse effects of risperidone, such as hypotension, symptomatic orthostasis, and cardiac arrest, have been reported to be associated in particular with cardiovascular disease and its treatment , althouget al. [et al. [While it may seem contradictory for antipsychotics to be both beneficial and harmful for delirium , one ex [et al. was like [et al. , 14, 22.et al. [Along with studies showing the potentially beneficial effects of taking preoperative statins, intraoperative ketamine, and postoperative dexmedetomidine, the cholinesterase inhibitor rivastigmine, has also been tested for its efficacy to prevent delirium after cardiac surgery. The rationale for this study was based on the \u201ccholinergic deficiency\u201d theory of delirium. This theory, which postulates that delirium is caused by insufficient levels of acetylcholine in the brain, is derived from the observations by Tune et al. that shoIn a randomized, double-blind, prospective interventional study, short-term oral rivastigmine was administered to CPB patients the evening before their procedures, and was continued every 6 hours following surgery for 6 days . Intereset al. [et al. 
[et al. [et al. [et al. [et al. [et al. [et al. [Another, albeit less-detailed investigation of cholinesterase inhibitors was performed by Redelmeier et al. , who inc [et al. found th [et al. and Gamb [et al. had rati [et al. study we [et al. failed t [et al. , 14, 22. [et al. study wa [et al. , it may Two single drugs (intraoperative fentanyl and ketamine), and two classes of drugs (preoperative antipsychotics and postoperative inotropes) have thus far been identified to be independently associated with delirium after cardiac surgery. Specifically, preoperative antipsychotics, intraoperative fentanyl and postoperative inotropes are associated with higher rates of delirium, while intraoperative ketamine is associated with a lower rate of delirium. Additionally, the administration of one drug (risperidone) has been shown to be effective for preventing delirium after cardiac surgery. Another seven classes of drugs and three single drugs have mixed findings in relation to delirium after cardiac surgery.From the results of this review, it is clear that drugs taken perioperatively by cardiac surgery patients have either direct or indirect influence on the outcome of delirium. However, given the present state of delirium research, relatively little is known about how the large number of drugs administered to cardiac surgery patients before, during and after surgery contributes to postoperative delirium. Most of the studies that have recorded perioperative drug use are very broad in their approach and do not include analysis on specific drugs. In addition, the literature on delirium after cardiac surgery is heterogeneous and differs quite substantially in terms of designs, methodologies, definitions and diagnostic instruments, making it difficult to compare the results. For instance, investigators\u2019 selection of covariates for their regression model could significantly change the outcomes and the conclusions that could be made. As for whether or not specific covariates should be consistently accounted for in future studies, investigators should be attentive to control for the factors of age, sex, and any clear confounders of indication; however, it is always important to select covariates based on results of current research. None of the findings on the drugs that have been reported to be associated with delirium, including the potentially prophylactic drugs, have been reproduced in the cardiac surgical population. For this reason, any general conclusions about the relationship of these drugs to postoperative delirium should be considered with these limitations in mind. Future studies on the relationship between perioperative drug use and postoperative delirium clearly are necessary; in particular, these should consider statistical solutions for taking into account factors that influence drug action within the individual, such as drug dose and efficiency of drug metabolism, as some studies that were reviewed obtained significant results only when drug doses were normalized to body weight. Although the evidence for the role of drugs in the etiology of postoperative delirium may not yet be completely conclusive, the data available to date indicate that this should be considered an important aspect of the management of cardiac surgery patients at risk for delirium throughout the course of care."} {"text": "Schwannoma is a common, histologically distinctive, benign, usually encapsulated, peripheral nerve tumor of Schwann cell origin. 
Schwannomas can appear anywhere in the body, but are more frequently reported in the head and neck with an incidence of 25\u201348% in maxillofacial region. Resorption of bones due to schwannoma is rarely noticed in maxillofacial region. We hereby present a case report of schwannoma in a 35\u2013year-old female, causing resorption of zygomatic arch along with review of literature. Schwannoma is a slow-growing, benign neoplasm derived from Schwann cells, which are sheath cells that cover myelinated nerve fibers. Schwannomas may be encapsulated and can appear anywhere in the body, but are more frequently located in the head and neck.\u20135 Most cA 35-year-old female presented at our department complaining of swelling in right facial region since 2 years. There was no history of pain, paresthesia or limitation in mouth opening. During the examination, a firm mass, measuring 5 \u00d7 3 \u00d7 2 cm in size, was observed in right zygomatic region. The mass was bound to a part of zygoma .in toto [A submentovertex view of skull showed resorption of zygomatic arch . Under gin toto . Beneathin toto . During An oval, sharply demarcated, encapsulated, firm nodule measuring 3 cm in diameter was submitted for histopathologic examination. The cut surface was yellowish white and smooth. Microscopic analysis revealed that the tumor mass was composed of interlacing fascicles of compact spindle cells with twisted nuclei . The nucOral and maxillofacial peripheral nerve tumors include schwannoma, neurofibroma, nerve sheath myxoma, palisaded encapsulated neurinoma, mucosal neurinoma associated with multiple endocrine neoplasia III, traumatic neuroma and granular cell tumor.There are three mechanisms by which schwannomas may involve a bone: (1) a tumor may arise centrally within a bone, (2) a tumor may arise within a nutrient canal and produce canal enlargement, or (3) a soft tissue or periosteal tumor may cause secondary erosion and penetration into a bone.\u201315 This Schwannomas most often occur in the fourth and fifth decade of life with a 1.6:1 female predilection. The duration of symptoms varies from a few months to a few years. A majority of these tumors have a long duration because of their lack of symptoms and slow growth.et al.[et al.[A review of the English literature revealed three cases similar to ours with regard to clinical and radiological features. In 1954, Bruce describeet al. and Sciuet al. reportedl.[et al. reported1-weighted images, enhancement of the solid component of the tumor after administration of contrast medium, heterogeneity on T2-weighted images, and multiple target signs due to a central collagen area (some patients).[et al.[et al.[et al., based on the pre-operative CT and MRI findings, a malignant tumor derived from the sublingual gland was suspected. Intraoperatively, adhesion of the mass to circumferential regions was not observed, but nerves penetrated into the mass at several places. Based on operative findings, the mass was thought to be a tumor derived from the lingual nerve. Almeyda et al.[en bloc\u201d. Even with advancement in imaging techniques, the diagnostic dilemma remains.[et al.,[Radiographically, schwannoma is commonly unilocular and associated with bone resorption. It may r2528atients). MRI findatients). Yamazaki).[et al. reportedl.[et al. reportedl.[et al. Only a fl.[et al. In the cda et al. reported remains. MRI has remains. 
In the c.[et al., the ultret al.[et al.[Although schwannomas originate from the nerve tissue, locating the nerve of origin exactly can be impossible. Direct relation with a nerve can be demonstrated in only approximately 10\u201350% of the cases.2840\u201343 T28et al. stated tet al. Arda et l.[et al. presenteFor the differential diagnosis, neurofibroma, granular cell tumors, lipoma, fibroma, leiomyoma, rhabdomyoma, nerve sheath myxoma, adenoma, neuroma, granular cell tumor, neurothekeoma and perineurioma should be considered.314647 Th3146In conclusion, we have reported here a rare case of schwannoma with secondary erosion of the zygomatic arch. The tumor may have originated from a branch of infraorbital nerve and may have extended into the zygomatic arch, creating a bony defect."} {"text": "However, this dose corresponds to what would have been absorbed by a bee in an entire day of foraging. In the context of this study, the difference between acute and sub-chronic exposure is critical. Furthermore, it is already well-established that physiological and behavioral effects vary significantly depending on whether the same dose is applied in one treatment or in many treatments over a longer period of time. For example, a human tobacco smoker inhales on average between 10.5 and 78.6 mg of nicotine per day without any immediate lethal consequences (Benowitz et al., Henry et al. apply the thiamethoxam insecticide in an acute manner and claim that the dose used for oral treatment (i.e., 1.34 ng.beemhf value by 17.5\u201320.4%.The model presented by Henry et al. is basedThe ecological relevance of Henry et al.'s study is compromised by four main methodological issues. The daily range of thiamethoxam exposure is incorrectly estimated, the applied dose is uncommonly encountered, thiamethoxam is applied in an acute rather than a sub-chronical manner and the use of an incorrect formula falsely inflates the bees' homing failure rate. It is also important to acknowledge that Henry et al.'s study re"} {"text": "Echinostoma (Echinostomatidae) with 37 collar spines that comprise the so-called \u2018revolutum\u2019 species complex, qualify as cryptic due to the interspecific homogeneity of characters used to differentiate species. Only five species were considered valid in the most recent revision of the group but recent molecular studies have demonstrated a higher diversity within the group. In a study of the digeneans parasitising molluscs in central and northern Europe we found that Radix auricularia, R. peregra and Stagnicola palustris were infected with larval stages of two cryptic species of the \u2018revolutum\u2019 complex, one resembling E. revolutum and one undescribed species, Echinostoma sp. IG. This paper provides morphological and molecular evidence for their delimitation.The digenean species of R. auricularia, 357 R. peregra and 577\u2009S. palustris were collected in seven reservoirs of the River Ruhr catchment area in Germany and a total of 573 R. peregra was collected in five lakes in Iceland. Cercariae were examined and identified live and fixed in molecular grade ethanol for DNA isolation and in hot/cold 4% formaldehyde solution for obtaining measurements from fixed materials. Partial fragments of the mitochondrial gene nicotinamide adenine dinucleotide dehydrogenase subunit 1 (nad1) were amplified for 14 isolates.Totals of 2,030 Echinostoma spp. of the \u2018revolutum\u2019 species complex. 
A total of 14 partial nad1 sequences was generated and aligned with selected published sequences for eight species of the \u2018revolutum\u2019 species complex. Both NJ and BI analyses resulted in consensus trees with similar topologies in which the isolates from Europe formed strongly supported reciprocally monophyletic lineages. The analyses also provided evidence that North American isolates identified as E. revolutum represent another cryptic species of the \u2018revolutum\u2019 species complex.Detailed examination of cercarial morphology allowed us to differentiate the cercariae of the two revolutum\u2019 group of Echinostoma.Our findings highlight the need for further analyses of patterns of interspecific variation based on molecular and morphological evidence to enhance the re-evaluation of the species and advance our understanding of the relationships within the \u2018 Echinostoma Rudolphi, 1809 (Echinostomatidae) with 37 collar spines that comprise the so-called Echinostoma \u2018revolutum\u2019 complex, qualify as cryptic , E. echinatum and E. jurini , the North American E. trivolvis and the African E. caproni Richard, 1964, were considered valid in the most recent revision of the group using for species delimitation a single morphological feature of the larval stages , the specificity towards the first intermediate host , the ability to infect avian or mammalian hosts (or both) and geographical range on a global scale (continents) [E. echinatum has not been formally described and justified in a taxonomic publication and is not recognised as valid [see 6 for details]. However, recent molecular studies have demonstrated a higher diversity within the \u2018revolutum\u2019 species complex. Thus one African species, Echinostoma deserticum Kechemir et al., 2002, and a yet unidentified species from New Zealand were distinguished based on molecular data [E. trivolvis was found to represent a species complex [Echinostoma spp. have also been obtained. E. revolutum was recorded in Australia [Echinostoma paraensei Lie & Basch, 1967 in Australia and South America [E. cf. robustum in North and South America [The digenean species of rd et al. for a retinents) , E. miyagawai Ishii, 1932, E. robustum Yamaguti, 1935, E. bolschewense , E. nordiana , E. friedi Toledo et al., 2000 [E. revolutum[E. friedi (GenBank AJ564379).However, in contrast with the wealth of sequences gathered recently from North America, which have revealed high diversity (six cryptic lineages) within the \u2018hinostoma,11, datal., 2000 -22, sequrevolutum,12-14 anRadix auricularia , Radix peregra and Stagnicola palustris were infected with larval stages of two species of the Echinostoma \u2018revolutum\u2019 complex of cryptic species, one resembling E. revolutum sensu stricto (s.s.) and one undescribed species . Here wsee also , Kostadio [et al. and Detwl.[et al. to the rR. auricularia, 357 R. peregra and 577\u2009S. palustris were collected during 2009\u20132012 in seven reservoirs of the River Ruhr catchment area : Baldeneysee ; Harkortsee ; Hengsteysee ; Hennetalsperre ; Kemnader See ; Sorpetalperre ; and Versetalsperre . Seven distinct samples of R. peregra were collected in five localities in Iceland: Lakes Family Park and Nordic House in Reykjavik; Opnur ; Raudavatn ; and Helgavogur, Lake Myvatn in May and August 2012. Snails were collected randomly with a strainer or picked by hand from stones and floating vegetation along the shore at several sampling sites at each reservoir. 
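The species delimitation described above rests on pairwise nucleotide divergence (uncorrected p-distance) among the aligned nad1 fragments, alongside the NJ and BI trees. As a rough illustration of that distance metric only, the sketch below computes a p-distance for two made-up sequence fragments; the sequences, the gap handling and the Python implementation are illustrative assumptions, not the study's data or software (the authors assembled and analysed their sequences in MEGA v5).

```python
# Minimal sketch (not the authors' pipeline): uncorrected p-distance between two
# aligned nad1 sequences, i.e. the proportion of differing sites among positions
# where both sequences carry an unambiguous base. Gap/ambiguity handling differs
# between programs; MEGA v5 was used in the study itself.

VALID = set("ACGT")

def p_distance(seq1: str, seq2: str) -> float:
    """Proportion of nucleotide differences over comparable (ungapped) sites."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must come from the same alignment")
    compared = differing = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a in VALID and b in VALID:  # ignore gaps ('-') and ambiguous bases
            compared += 1
            if a != b:
                differing += 1
    return differing / compared if compared else float("nan")

# Toy example with made-up fragments (real nad1 fragments are several hundred bp):
iso_a = "ATGGCTTTAGGTTTACTT-TCT"
iso_b = "ATGGCATTAGGATTACTTATCT"
print(f"p-distance: {p_distance(iso_a, iso_b):.3f}")
```

Reported values such as the 18.9\u201319.1% divergence between lineages correspond to this proportion expressed as a percentage.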
In the laboratory, snails were labelled and placed individually into beakers with a small amount of lake water, and kept under a light source for up to 5\u2009days to stimulate emergence of cercariae. Thereafter, snails were measured, dissected and examined for prepatent infections.Totals of 2,030 et al.[R. peregra and R. ovata have recently been treated as junior synonyms of R. balthica we used the name R. peregra following the molecular studies of Bargues et al.[et al.[Cercariae were examined and identified live using the data from the keys of Falt\u00fdnkov\u00e1 et al.,25 and oet al.,18-22. Det al.. Upon pret al.. Althouges et al. and Hu\u0148ol.[et al. which prnad1) were performed in 25\u2009\u03bcl reactions using Ready-To-Go-PCR Beads containing ~2.5 units of puReTaq DNA polymerase, 10\u2009mM Tris\u2013HCl (pH\u20099.0), 50\u2009mM KCl, 1.5\u2009mM MgCl2, 200\u2009mM of each dNTP and stabilisers including BSA, 10\u2009mM of each PCR primer, and 50\u2009ng of template DNA. The following PCR primers were used: forward NDJ11 amplifications of partial fragments of the mitochondrial gene nicotinamide adenine dinucleotide dehydrogenase subunit 1 ( JB11 in ) 5\u2032-AGA PCR amplicons were purified using Qiagen QIAquick\u2122 PCR Purification Kit and sequenced directly for both strands using the PCR primers. Sequencing was performed on an ABI Prism 3130xl automated sequencer using ABI Big Dye chemistry according to the manufacturer\u2019s protocol. Contiguous sequences were assembled and edited using MEGA v5 [nad1 sequences for Echinostoma spp. (see Table\u2009Newly-generated and published 6 generations via 4 simultaneous Markov Chain Monte Carlo chains (nchains\u2009=\u20094) with a sampling frequency of 100. The first 25% of the samples were discarded (sump burnin\u2009=\u20092500) as determined by the stationarity of lnL assessed with Tracer v. 1.4 [BI log-likelihoods were estimated with default prior probabilities and likelihood model settings over 10r v. 1.4 ; the remr v. 1.4 . GeneticEchinostoma spp. in the snail populations sampled in three of the seven reservoirs in the River Ruhr drainage in Germany and in two of the five lakes in Iceland but occasionally higher values were registered 37 collar spines with an arrangement 5-6-15-6-5 were identified as E. revolutum based on cercarial morphology and especially the presence of 12 small para-oesophageal gland-cells with long ducts, located between pharynx and ventral sucker [R. peregra from Iceland and six ex R. auricularia from Germany, further referred to as Echinostoma sp. IG (indicating the origin of the isolates i.e. Iceland and Germany) exhibited slight differences from the isolates identified as E. revolutum as follows: (i) collar spines with blunt of Kostadinova et al.[Echinostoma sp. IG formed a strongly supported reciprocally monophyletic lineage, basal to Echinostoma spp., which also incorporated the sequence for an isolate from Wales (UK) provisionally identified as Echinostoma cf. friedi by Kostadinova et al.[Echinostoma spp. analysed (p-distance range 18.9-19.1%).Both NJ and BI analyses resulted in consensus trees with similar topologies . On theva et al.. The isoE. revolutum and those obtained from natural infections in Lymnaea elodes and Ondatra zibethicus (L.) in North America by Detwiler et al.[revolutum\u2019 species complex.Unexpectedly, the European isolates of er et al. formed tEchinostoma revolutum of Morgan and Blair [Echinostoma friedi of Marcilla et al. 
based on an isolate of this species recently described by these authors [i.e. Echinostoma robustum sensu Detwiler et al.[E. robustum from North America exhibited a complex structure suggesting the existence of at least three species [Echinostoma spp. of the \u2018revolutum\u2019 species complex parasitising L. stagnalis and Radix spp. are conspecific or represent as yet undiscovered cryptic species. We believe that \u2018reciprocal illumination\u2019 sensu Hennig [revolutum\u2019 complex of cryptic species.However, the distinguishing features are difficult to detect and/or subject to variation (reviewed in Kostadinova and Gibson ). For exv\u00e1 et al. also pron Europe ,24, infed, 1805) ,37-45. Fu Hennig of morphEchinostoma sp. IG was found to be conspecific with an isolate from Wales (UK) provisionally identified as Echinostoma cf. friedi by Kostadinova et al.[et al.[nad1 gene of Detwiler et al.[E. robustum (U58102) and E. friedi (AY168937) reveals that they are found within the same monophyletic clade and thus do not qualify as distinct species according to a phylogenetic definition. Additionally, they are genetically similar is genetically very similar to E. robustum\u201d. Our results clearly indicate that the sequence for E. friedi from its type-locality in Spain and for the European isolate labelled as E. revolutum (AF025832) of Morgan and Blair [i.e. substantially lower than that (i.e. 18.9-19.1%) between the lineage containing E. cf. friedi (AY168937) of Kostadinova et al.[E. revolutum by Morgan and Blair [et al.[E. friedi of Marcilla et al. (AJ564379) but have mislabelled it (as AY168937).va et al.. The linl.[et al.) and thier et al.. These ai AY16893 reveals nd Blair ,13 repreva et al. and the i AY16893 reveals nd Blair ,13. We br [et al. have in et al.[E. robustum of the isolates of the \u2018Australian-German\u2019 clade of Echinostoma spp. of Morgan and Blair [E. friedi should await examination of a larger number of molecularly characterised natural isolates of the European species of the \u2018revolutum\u2019 complex since our knowledge on cryptic diversity in this group is still limited. This suggestion is supported by the discovery of two genetically distinct, geographically separated lineages of E. revolutum: E. revolutum s.s. from Europe and E. revolutum of Detwiter et al.[Kostadinova et al. indicatend Blair , but sugnd Blair ,12,13 aner et al. from Norer et al. appears er et al.,47.revolutum\u2019 group of Echinostoma.The results of our study suggest that further analyses of patterns of interspecific variation based on a combination of molecular and well-documented morphological data would enhance the re-evaluation of the species and advance our understanding of the relationships within the \u2018The authors declare that they have no competing interests.CS, MS and KS obtained the samples. CS, AF, MS and SG undertook the morphological study. SG carried out the sequencing and phylogenetic analysis. CS, SG, AF and MS prepared the first draft of the MS. KS, BS and AK conceived and coordinated the study and helped to draft the MS. All authors read and approved the final manuscript."} {"text": "Current Biology, different combinations of the present authors used repetitive transcranial magnetic stimulation (rTMS) to disrupt activity in human superior parietal cortex, and reported seemingly contradictory results When faced with ambiguous sensory input, conscious awareness may alternate between the different percepts that are consistent with the input. 
Visual phenomena leading to such multistable perception, where constant sensory input evokes different conscious percepts, are particularly useful for investigating the processes underlying perceptual awareness. Rather than being combined into a single percept, the incompatible images compete and perception alternates between each monocular view every few seconds (Carmel et al.). A different kind of bistability arises in structure-from-motion perception. For example, a field of moving dots can be seen as a sphere that rotates clockwise or counterclockwise. To resolve this apparent contradiction, we first revisited the brain-structure/percept-duration correlation reported by Kanai et al. Having established that the structure of the posterior SPL predicts percept duration, we found that, for the same bistable stimulus, disruption of posterior SPL slowed perceptual switching. These findings imply that further research on separate functions residing within the SPL is required in order to develop an understanding of visual awareness. One promising theoretical approach to guide such research may be found in hierarchical Bayesian network theory. The present findings resolve an intriguing contradiction. Finally, a further twist to the story comes from a recent study by Zaretskaya et al."} {"text": "Body size can vary throughout a person's lifetime, inducing plasticity of the internal body representation. Changes in horizontal width accompany those in dorsal-to-ventral thickness. To examine differences in the perception of different body axes, neural correlates of own-body-size perception in the horizontal and dorsoventral directions were compared using functional magnetic resonance imaging. Original and distorted images of the neck-down region of their own body were presented to healthy female participants, who were then asked whether the images were of their own body or not based explicitly on body size. Participants perceived body images distorted by \u221210% as their own, whereas those distorted by +30% were judged as belonging to others. Horizontal width images yielded slightly more subjective own-body perceptions than dorsoventral thickness images did. Subjective perception of own-body size was associated with bilateral inferior parietal activity. In contrast, other-body judgments showed pre-supplementary motor and superior parietal activity. Expansion in the dorsoventral direction was associated with the left fusiform gyrus and the right inferior parietal lobule, whereas horizontal expansions were associated with activity in the bilateral somatosensory area. These results suggest neural dissociations between the two body axes: dorsoventral images of thickness may require visual processing, whereas bodily sensations are involved in horizontal body-size perception. Somatosensory rather than visual processes can be critical for the assessment of frontal own-body appearance. Visual body thickness and somatosensory body width may be integrated to construct a whole-body representation. Height is essentially stable throughout adulthood, but the other dimensions can vary. Although body width and thickness are in the same transverse direction in two dimensions for upright posture, body thickness differs from width in several ways: it is generally less than width (except for the feet), and viewing the body from the side is relatively infrequent when compared with viewing the body from the front. Accordingly, the dorsoventral view of thickness may require precise and fine visual processing.
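The stimuli in this study are single-axis size distortions of a participant's own body photograph (\u221230%, \u221210%, +10% and +30% in width or in depth). The minimal sketch below shows one way such width distortions could be generated by rescaling a single image dimension; the file names, the use of the PIL library and the frontal-view input are illustrative assumptions, since the stimulus-generation procedure is not specified in this text.

```python
from PIL import Image

# Illustrative sketch only: single-axis distortions like the -30%, -10%, +10%
# and +30% width stimuli described above, produced by rescaling image width
# while keeping height fixed. Input/output names are hypothetical.
DISTORTIONS = [-0.30, -0.10, 0.10, 0.30]

def make_width_distortions(path: str) -> None:
    original = Image.open(path)
    w, h = original.size
    for d in DISTORTIONS:
        distorted = original.resize((round(w * (1 + d)), h))  # width-only change
        distorted.save(f"body_width_{int(d * 100):+d}.png")

# make_width_distortions("own_body_frontal.png")  # hypothetical input image
```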
Visual perception of own-body width has been examined in normal subjects and in patients with eating disorders body image, rather than the real body image, is associated with enhanced activity in the inferior parietal lobule (IPL) and fusiform gyrus participated in this study. The mean body mass index was 19.5 \u00b1 1.3 kg/mColor photographs of the participants' own bodies (neck to knees) were used . ParticiParticipants were informed that their own and others' neck-down body images would be presented. Participants were asked to judge whether the presented image was of their own body or not by pressing two buttons on a four-button response pad with the right hand. They were also informed that all other participants wore an identical suit and that they were photographed in an identical posture to make the images indiscernible except for body size. Participants were instructed to judge the images based on body size. The buttons for self or others were counterbalanced between participants. E-prime software was used to control the presentation of stimuli and to record responses.Each image was presented for 1 s, with a mean interstimulus interval of 7 s . The order of images was pseudo-randomized. After the experiment, participants were informed that all of the images they were shown were of their own original and distorted body, and they were asked whether they had noticed this.All brain images were acquired using a 3-T Siemens Trio magnetic resonance imaging (MRI) scanner . Functional images were obtained using a gradient-echo EPI sequence with the following parameters: 36 axial slices in the AC\u2013PC plane; 2000 ms TR, 30 ms TE, 90\u00b0 flip angle, 192 \u00d7 192 mm FOV, 3 \u00d7 3 mm in-plane resolution, 3-mm slice thickness, and no gap; the first six images were discarded. A T1-weighted anatomical scan of each participant was also obtained . Four-hundred and eighty-six scans were acquired for each participant.spm8 software package . Individual slices of a functional volume were temporally corrected for differences in acquisition time with reference to the middle (18th) slice. To correct for head motion, functional images of each participant were realigned with reference to the first image. Anatomical images were co-registered with the first functional images and normalized to the Montreal Neurological Institute (MNI) brain template. Functional data were then normalized using the same transformation parameters with a voxel size of 3 mm \u00d7 3 mm \u00d7 3 mm, smoothed in the spatial domain . Functional data were analysed using an event-related design. A general linear model was applied for each of the two orientations by five sizes and self\u2013other judgments; the analysis was modeled using the canonical hemodynamic response function and its temporal derivative for each event type. Subjective \u2018self\u2019 and \u2018others\u2019 body perceptions associated with faster response times (RTs) were further analysed using parametric modulation. Random effects analysis was performed. Global scaling was not applied. Statistical parametric maps were generated for each contrast of the t-statistic on a voxel-by-voxel basis. These t-values were then transformed into z-scores in the standard normal distribution. The threshold of significance was set at P < 0.001, and more than 10 continuous voxels were reported. 
This threshold was uncorrected for multiple voxel-wise comparisons to balance the trade-off between Type I and Type II errors was used for image rendering.Preprocessing and data analysis were performed using the t-tests were conducted.To examine the brain regions involved in the perception of horizontal expansion, weightings of 0.7, 0.9, 1.0, 1.1 and 1.3 were applied to horizontal \u221230%, \u221210%, original, +10% and +30% images, respectively, and contrasted with weightings of 1.0 for each of the five dorsoventral images. For dorsoventral expansion, weightings of 0.7, 0.9, 1.0, 1.1 and 1.3 were used for \u221230%, \u221210%, original, +10% and +30% dorsoventral images, respectively, and contrasted with weightings of 1.0 for each of the five horizontal images. The two contrasts were calculated for each participant, and one-sample t-tests. Those subjective own- and other-body judgments associated with faster RTs were also analysed.Neural responses to subjective own-body perception vs. that of other-body perception based on individual responses were also examined using one-sample Mean responses for original and distorted images are shown in anova [two orientations \u00d7 five body sizes ] for own-body judgment revealed no differences between directions , but significant differences between sizes . Post hoc analyses (Ryan's method) showed that \u221210% distortions were judged as own-body more often than +30% , +10% and \u221230% distortions. Original images were judged as own-body more often than both +30% and +10% distortion images .A 2 \u00d7 5 anova for mean RTs showed no effect of image orientation , suggesting no difference in task difficulty between width or thickness. However, the anova did reveal significant differences between sizes , and post hoc analyses revealed faster responses for +30% distortions than for \u221210%, \u221230% and original images .A two-way After the experiment, the participants were asked whether they had noticed that all images were of their own body, with no other bodies presented. Most participants (eight out of 11) were sure that other bodies had been presented. Three participants reported that they had suspected as much mid-way through the test; however, they were not convinced, and their performance was similar to that of the other participants.z-coordinate of activity in the right SI was 50, while the z-coordinate for activity in the left SI was 70 for other than own-body perception. No effects of faster RTs on own-body perception were observed.Faster RTs were associated with activation of the right insula (MNI coordinates: 34, \u22126, 16; The neural mechanisms involved in the perception of body size were found to differ between the horizontal and dorsoventral directions. Perceiving wider bodies induced greater activity in somatosensory areas, whereas the perception of deeper bodies resulted in enhanced activity in the higher visual and posterior parietal areas. Bodily sensations could contribute more to frontal body appearance than visual processes. In addition, when compared with other-body judgment, own-body judgment yielded dissociated neural activity in the posterior parietal area. Therefore, body representation in the brain may be a perceptive and cognitive integration of distributed components.et al., et al., et al., et al., et al., et al., Greater activity in the SI and SII was observed for body width rather than for dorsoventral thickness. Familiar frontal views of body images may induce bodily sensations. 
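The parametric contrasts described above weight the five size levels of one orientation (0.7, 0.9, 1.0, 1.1 and 1.3 for \u221230%, \u221210%, original, +10% and +30%) against flat weights of 1.0 on the five levels of the other orientation. The short sketch below assembles those two contrast vectors; the condition ordering and the sign convention (baseline conditions entered with negative weights so each contrast sums to zero) are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Sketch only: the two "expansion" contrasts over 10 conditions, assuming the
# ordering [horizontal -30%..+30%, dorsoventral -30%..+30%].
sizes = ["-30%", "-10%", "original", "+10%", "+30%"]
expansion_weights = np.array([0.7, 0.9, 1.0, 1.1, 1.3])
flat_weights = np.ones(5)

# Horizontal expansion: graded weights on horizontal images contrasted with
# flat weights on dorsoventral images, and vice versa for dorsoventral expansion.
horizontal_expansion = np.concatenate([expansion_weights, -flat_weights])
dorsoventral_expansion = np.concatenate([-flat_weights, expansion_weights])

for name, c in [("horizontal expansion", horizontal_expansion),
                ("dorsoventral expansion", dorsoventral_expansion)]:
    print(name, c, "sum =", c.sum())  # both sum to zero, as a contrast should
```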
Moreover, somatosensory representations may contribute to slightly greater own-body judgments (Frassinetti et al.). The area of the fusiform gyrus found to be associated with perceiving dorsoventral body thickness in this study may be the body-selective area, namely, the FBA (Peelen & Downing). The differences between horizontal and dorsoventral images found in this study may be derived from viewpoint differences. Body images rotated by 0\u201345\u00b0 induced repetitive neuronal reduction in the right FBA, whereas those rotated by 60\u00b0 were processed differently (Taylor et al.). The dorsoventral image is orthogonal to the horizontal image. Consistent with other lesion and neuroimaging studies, we observed posterior parietal cortex involvement in body perception. The right parietal cortex is associated with the conscious perception of the body, whereas left parietal activity is involved in monitoring action (Daprati et al.). In patients with anorexia nervosa, alterations in IPL activity have been reported (Pietrini et al.). The sense of body ownership has been studied using illusory tactile attribution to visual body-like objects, and sensorimotor inputs can facilitate visual attribution of body ownership associated with the premotor cortex (Petkova et al.). We observed that slightly slimmer body images were most acceptable as own-body images in this study. On the contrary, distorted images of the hand appear to be less accepted if they depict a shrunken view than if they depict an enlarged view (Pavani & Zampini). Visual size can vary depending on distance and angle. Therefore, the size of a person's own body in the mirror is a relative feature. Our finding of moderate underestimation of own-body size may be derived from these factors. Although binocular disparity is essential for depth perception, the close proximity of one's own body under direct vision may prevent the utilization of visual depth information. However, somatosensory information is more robust, even though it is not as precise as vision. Blocking somatosensory information with anesthesia induces the sensation of swelling both in large and small body parts (Gandevia & Phegan). Gender differences in brain anatomy have also been reported (Luders & Toga). In conclusion, the neural substrates involved in the processing of frontal and side views of identical body images differ in the human cortex. Visual and somatosensory processes and their integration may construct a whole-body representation and may be responsible for the dissociation of own-body perception from other-body perception."} {"text": "CDKN2A germline mutation. This review is designed to depict several of the hereditary pancreatic cancer syndromes with particular attention given to the clinical application of this knowledge into improved control of pancreatic cancer. Pancreatic cancer\u2019s high mortality rate equates closely with its incidence, thereby showing the need for development of biomarkers of its increased risk and a better understanding of its genetics, so that high-risk patients can be better targeted for screening and early potential lifesaving diagnosis.
Its phenotypic and genotypic heterogeneity is extensive and requires careful scrutiny of its pattern of cancer associations, such as malignant melanoma associated with pancreatic cancer, in the familial atypical multiple mole melanoma syndrome, due to the It is estimated that in the United States during 2009, 21,050 males and 21,420 females will be diagnosed with pancreatic cancer (PC), giving a total of 42,470. Its mortality is estimated at 18,030 men and 17,210 women giving a total of 35,240. Its mortality closely approximates its incidence [PC\u2019s high lethality rate results from its aggressive metastasis coupled with its low probability for diagnosis at an early stage when surgery would have promising curative results. The best prospects for a cure, center on its early detection. Ideally, tests would enable its diagnosis in asymptomatic individuals, because once clinical signs and symptoms of malignancy have manifested, patients may already have a significant tumor burden . At thiset al. [Poley et al. studied et al. [Chu et al. reviewedet al. .A study from Johns Hopkins involved 38 asymptomatic high-risk patients, 37 of whom had familial PC (two or more first- and/or second-degree relatives with PC) and one had Peutz-Jeghers syndrome, each of whom was screened by EUS. Findings showed abnormal EUS exams which were then followed by EUS-FNA, CT, and ERCP. Findings disclosed six individuals with definitive pancreatic lesions . A total of 29 individuals had abnormalities on EUS. Findings disclosed an overall yield of significant masses to be 5.3% (2/38). Noteworthy was a single ductal adenocarcinoma, which was not detected by either the follow-up CT or ERCP evaluations .Our purpose is to update the genetic epidemiology of PC in the interest of advancing progress in its early diagnosis, screening, and management.Review of the epidemiology of PC depicts a disease whose risk correlates with increasing age; only rarely are PC affecteds younger than 25 and it is even relatively uncommon in those younger than 45 years . There iet al. [et al. [The genetic epidemiology of PC is exceedingly heterogeneous. For example, Zer\u00f3n et al. discuss et al. . Blackfo [et al. note tha [et al. .et al. [Chu et al. note thaet al. .et al. [While young age of onset is a hallmark of many hereditary cancer syndromes, the implications of young-onset in familial PC (FPC) family members remains elusive. Brune et al. studied There has been a recent groundswell of knowledge about hereditary forms of cancer, although its translation into the clinical setting has been problematic. For example, a comprehensive history of cancer in a family, often the linchpin to this effort, has been insufficiently recorded in many patients\u2019 medical records, thereby compromising its clinical significance . Obstaclet al. [Shi et al. emphasizCDKN2A mutation in the familial atypical multiple mole melanoma-pancreatic cancer (FAMMM-PC) prone syndrome [Many high-risk patients can benefit immensely from genetic counseling. This becomes of extreme importance when a cancer-causing germline mutation has been identified in a family, such as the syndrome ,17,18. Set al. [Real et al. stressedet al. [Hruban et al. postulatet al. [Hruban et al. note thaet al. .et al. [Hruban et al. also notet al. [Sipos et al. note thaet al. [MSH2 germline mutation. 
Interestingly, the patient\u2019s adenocarcinoma of the colon and IPMN of the pancreas revealed identical immunohistochemical staining profiles showing loss of expression of MSH2 and MSH6 proteins with high levels of MSI. The authors concluded that \u201c\u2026The immunohistochemical staining and microsatellite instability patterns of the adenocarcinoma of the colon and IPMN provides strong evidence to support the consideration of IPMN as part of the spectrum of lesions found in LS.Sparr et al. discuss Numerous case-control studies have described families with two or more first-degree relatives with PC which fit a familial category . When suCDKN2A for the FAMMM syndrome, mismatch repair (MMR) for Lynch syndrome, TP53 for Li-Fraumeni syndrome, APC for familial adenomatous polyposis, and BRCA2 for the hereditary breast-ovarian cancer syndrome, indicate that the PC is part of each disorder\u2019s cancer spectrum. Landi [et al. [Landi has descm. Landi has alsom. Landi notes th [et al. provide et al. [6 SNPs. Findings disclosed that \u201c\u2026pancreatic cancers contain an average of 63 genetic alterations, the majority of which are point mutations. These alterations defined a core set of 12 cellular signaling pathways and processes that were each genetically altered in 67 to 100% of the tumors. Analysis of these tumors\u2019 transcriptomes with next-generation sequencing-by-synthesis technologies provided independent evidence of the importance of these pathways and processes\u2026\u201d These genetically altered core pathways and regulatory processes only became evident when the coding regions of the genome were analyzed in depth. Furthermore, \u201c\u2026Dysregulation of these core pathways and processes through mutation can explain the major features of pancreatic tumorigenesis.\u201d Jones et al. [CDKN2A, SMAD4, and TP53 tumor suppressor genes and in the KRAS oncogene have been identified in this lethal cancer. They emphasize that these discoveries, important in comprehending the natural history of PC, spurred efforts for developing improved diagnostic and therapeutic opportunities, since the vast majority of human genes have not been analyzed in this particular cancer. Recognizing that all human cancers are primarily genetic disorders, their plan is to identify additional genes and signaling pathways in the hope that this effort will guide future research on PC. They concluded that the key to understanding the pathogenesis of PCs rests on an appreciation of a core set of genetic pathways and processes. Importantly, they identified 12 partially overlapping processes that are genetically altered in the great majority of PCs; nevertheless, the pathway components that are altered in each tumor vary widely. For example, two of the tumors each contained mutations of a gene involved in the TGF-\u03b2 pathway, one being SMAD4 and the other being BMPR2. Interestingly, these two tumors each contained mutations of genes involved in most of the other 11 core processes and pathways, but the genes altered in each tumor were largely different. However, from the practical standpoint Jones et al. [Jones et al. performes et al. indicates et al. indicateet al. [BRCA germline mutations. Al-Sukhni et al. [BRCA2 germline mutations with PC has been well established. However, the role of its BRCA1 counterpart mutations is less clear. 
Their study disclosed that the loss of heterozygosity at the BRCA1 locus \u201c\u2026occurs in pancreatic cancers of germline BRCA1 mutation carriers, acting as a \u2018second-hit\u2019 event contributing to pancreatic tumorigenesis.\u201d They concluded that BRCA1 germline mutations may be considered for PC screening. Similar results were found among BRCA1-prone families by Lynch et al. [Wang et al. , when coi et al. note thah et al. .et al. [Couch et al. have recet al. [BRCA2 gene.Hiripi et al. discuss et al. were the first to describe pancreatic cancer as an integral lesion in the FAMMM syndrome [CDKN2A germline mutation was described within these families [Lynch syndrome ,18,36,37families ; malignaCDKN2A germline mutation which is believed to be the pathogenic causal mutation for the syndrome in this family. CMM indicates melanomas and it is noteworthy that the proband had 13 melanomas. CDKN2A germline mutation. Of keen interest is that she did not have any evidence of the FAMMM phenotype comprised of multiple atypical nevi. However, her father (III-2) had the FAMMM phenotype, two melanomas, a sarcoma, and he died of esophageal cancer. He was a nonsmoker and not a consumer of alcohol. It is noteworthy that the proband\u2019s paternal lineage showed four cases of pancreatic cancer as evidenced in II-7, II-9, II-10, and the proband\u2019s paternal great\u2011grandfather (I-4). The yellow stars indicate the presence of pancreatic cancer.CDKN2A mutation as did two of the siblings . Importantly, the proband\u2019s mother had melanoma and died of pancreatic cancer.et al. [Gargiulo et al. note thaet al. [Kastrinos et al. studied PALLD, the gene that encodes the protein palladin, in PC is controversial. More research is clearly needed. The heterogeneity of hereditary and familial forms of PC must be considered, given the possibility that the palladin protein may be involved in some forms of PC but not in others.The issue of the role of PALLD) was identified by Pogue-Geile et al. [PALLD gene is located. This subject is reviewed by Klein et al. [et al. \u201c\u2026implicated an oncogenic function for palladin after finding overexpression of PALLD mRNA in pancreatic cancer tissues.\u201d However, Klein et al. note that since the Pogue-Geile et al. paper was published, subsequent investigations failed to find evidence linking palladin to familial PC. Furthermore, Klein et al. note that investigators have failed to evaluate the full sequence of PALLD in patients with familial PC in order to determine if sequence variants in PALLD might be contributing to PC susceptibility. Given this background, Klein et al. sequenced the entire coding region of PALLD in 48 individuals with familial PC. Importantly, they did not find any deleterious mutations and were not able to show evidence to implicate mutations in PALLD as a cause of familial PC.The palladin gene ; the protein was found to be overexpressed in a rare inherited form of PDA. It is also overexpressed in a number of sporadic pancreas tumors as well as in premalignant precursors . For exall lines , resultsll lines . Goicoec [et al. , comment [et al. note thaet al. [PALB2 which may be a PC susceptibility causative germline mutation. A study by Slater et al. [PALB2 mutations might be causative for familial PC in a small subset of European families, especially in those also manifesting breast cancer. More research is clearly needed.Recently, Jones et al. identifir et al. suggestset al. 
[miR-200a and miR-200b, but this expression did not affect SIP1 expression, since \u201c\u2026the SIP1 promoter is silenced by hypermethylation and in these cancers E-cadherin is generally expressed. Both miR-200a and miR200b were significantly elevated in the sera of pancreatic cancer and chronic pancreatitis patients compared with healthy controls (P \u2264 0.0001), yielding receiver operating characteristic curve areas of 0.861 and 0.85, respectively\u2026\u201d These authors concluded that most PCs display hypomethylation in concert with overexpression of miR-200a and miR-200b silencing of SIP1 promoter methylation and retention of E-cadherin expression. Of major clinical importance is their suggestion that elevated serum levels of miR-200a and miR-200b in the majority of patients with PC might harbor diagnostic potential.Li et al. discuss PancPRO is the first risk prediction model for PC which is based upon findings from one of the largest registries of familial PC enabling accurate risk assessment . It has et al. [et al. [There are few biomarkers which enable high sensitivity and specificity for early diagnosis of PC. Kim et al. have sho [et al. investigCreating hypotheses for reducing PC morbidity and mortality is clearly an exceedingly difficult task. The most promising areas seem to be in improved imaging or in new gene probes.et al. [Decker et al. have recet al. ). The roet al. [et al. [et al. [The most sensitive imaging modality for the diagnosis of PC is probably endoscopic ultrasound (EUS), though there is poor interobserver agreement on EUS interpretations . Kimmey et al. reported [et al. showed t [et al. describe [et al. have deset al. [et al. [et al. [Others have tried to find diagnostic clues through analysis of pancreatic juice. Yan et al. looked a [et al. analyzed [et al. found saet al. [In conclusion, Chu et al. note thaet al. . The medet al. . Clearlyet al. [Chu et al. call attet al. . In addiet al. ,63. Moreversus relative encouragement when detected early, particularly in a stage I disease? One answer, from our perspective, rests upon the need to focus heavily upon individuals at inordinately high risk for PC when based upon environmental and genetic factors and their interaction. Such patients will then be the ones who will be candidates for innovative approaches to safe screening measures and, in turn, they will be high order candidates for molecular genetics, pathophysiology, pathology, and environmental modification, and genetic investigations.Why are we addressing statistics on the one hand that are discouraging, as in the case of PC\u2019s late diagnosis with metastatic spread,"} {"text": "Dear Editor,et al.[Prolonged exposure to acid and bile induces chromosome abnormalities that precede malignant transformation of benign Barrett\u2019s epithelium\u201d, which has been published in the recent issue of Molecular Cytogenetics. Barrett\u2019s esophagus results from gastroesophageal reflux and harbors an increased risk for the development of esophageal adenocarcinoma [in vitro approach the authors modeled the effect of gastroesophageal reflux on immortalized Barrett\u2019s esophagus cells [et al. found that intermittent exposure of BAR-T cells to acid and bile for 18 to 78 weeks caused a spectrum of genetic abnormalities typical for cancer development, including polyploidy, loss of chromosomes and the development of transformed clones. 
In addition, the genetic changes evoked by acid and bile exposure format the target protein receptors for tumor stimulating growth factors, i.e. epidermal growth factor (EGF), which are known to promote the growth of gastrointestinal cancers [With interest we read the article by Bajpai et al., entitleer risk) ,3. Usingus cells . Bajpai cancers . In contBarrett\u2019s esophagus, when compared to ablation and post-ablational proton pump inhibitor (PPI) therapy, which solely changes the pH of the reflux, but not the reflux per se[Conceptually, Barrett\u2019s esophagus results as the consequence of a complex neurohumoral response of the esophageal mucosa to gastroesophageal reflux including acid and bile ,3. Thus,ux per se.et al.[Given that the genetic abnormalities assessed by Bajpai et al. can be vet al.,6,7 and et al.. Therefoet al. and endoet al.,4,6,7. Tet al.,5. In coet al.. Endoscoet al.,6,7. TakSincerely,Reza Asari, Martin Riegler, Sebastian F. Schoppmann."} {"text": "Exposure to extreme altitude presents many physiological challenges. In addition to impaired physical and cognitive function, energy imbalance invariably occurs resulting in weight loss and body composition changes. Weight loss, and in particular, loss of fat free mass, combined with the inherent risks associated with extreme environments presents potential performance, safety, and health risks for those working, recreating, or conducting military operations at extreme altitude. In this review, contributors to muscle wasting at altitude are highlighted with special emphasis on protein turnover. The article will conclude with nutritional strategies that may potentially attenuate loss of fat free mass during high altitude exposure. It is well-documented that weight maintenance above 5000 m is extremely difficult, if not impossible, in free living adults. A number of explanations have been proposed including anorexia, elevated basal metabolic rate, body water loss, and altered satiety hormones. However, the majority of studies support that altitude-induced weight loss is largely a function of negative energy balance secondary to inadequate energy intake ,7,8,9. EOf particular concern is the composition of weight loss. Above 5000 m fat free mass (FFM) accounts for as much as 60%\u201370% of weight loss ,8,9. Lesi.e., caloric restriction and low protein intake) on protein turnover as influenced by protein synthesis and protein degradation. Nutritional strategies that may potentially attenuate loss of FFM during high altitude exposure are reviewed.Currently, there is limited research on the mechanisms of muscle wasting at altitude. In this review potential contributors to muscle wasting at altitude are discussed with a special emphasis on the independent effects of hypoxia and concurrent altitude-related nutritional issues ,24. Doubi.e., sea level), FFM comprises approximately 25% of body weight loss [i.e., increased protein) [i.e., high altitude) FFM comprises as much as 60%\u201370% of weight loss [et al. [vastus lateralis cross-sectional area in 14 mountaineers following eight weeks above 5000 m. Further, on an ascent of Mt. Shishapangma (8046 m), subjects lost 1.9 kg FFM compared to 0.9 kg fat mass [et al. [Caloric restriction induces weight loss which is generally a mix of water, fat and FFM. With caloric restriction under normoxic conditions (ght loss . This caprotein) ,31,32,33protein) . Howeverght loss ,34. In s [et al. found a et al. [et al. 
[There are some studies that support a greater loss of fat mass (FM) than FFM at high altitude. However, differences may be due to altitude attained and limitations of measurement techniques. Fulco et al. compared [et al. compared [et al. . Further [et al. and has [et al. . HoweverMuscle wasting is a well-documented occurrence with chronic high altitude exposure. Skeletal muscle is of particular importance given it contains the majority of body proteins and comprises almost half of human body weight . As suchet al. [et al. [et al. [vs. three-fold increase in MPS). Similarly, in a comparison of subjects who were either flown or walked to 4559 m, those walking had a 35% increase in fractional synthetic rate compared to those not walking [Perhaps the largest contributor to muscle wasting at altitude is hypoxia. Evidence suggests that hypoxic exposure may impair MPS through downregulation of mTOR by the hypoxia-induced REDD1 gene . Of partet al. also fou [et al. found a [et al. exposed walking . However walking suggestiet al. [et al. [i.e., calories and protein) may potentiate the problem. Therefore, targeting diet and nutrient intake may be a more practical approach to influencing MPS at altitude.Limited research suggests that muscle proteolysis may also be affected with hypoxic exposure. Using a rat model, Chaudhary et al. examined [et al. reported [et al. ,46, nutret al. [et al. [As previously discussed, a primary contributor to weight loss at altitude is low energy intake. Energy status influences MPS. Thus it is likely subcaloric intake contributes to both impaired MPS and increased proteolysis at altitude. This effect is exacerbated in combination with hypoxia as demonstrated by Favier et al. who expoet al. . Sea levet al. . Converset al. . Howeveret al. . In the [et al. demonstret al. [et al. [As a consequence of low energy intake, protein intake is also compromised with high altitude exposure. Protein intake, particularly the branched-chain amino acids (BCAA), is critical for the regulation of MPS . Severalet al. found no [et al. similarl [et al. .Regarding high altitude protein supplementation studies, data is limited. In rats supplemented with 10, 20 or 40 g of protein for 26 days at 6000 m, protein supplementation did not prevent the decrease in muscle growth . In ski et al. [et al. [et al. [et al. [Targeting two primary areas of nutrition may be useful to improve FFM retention at altitude: caloric intake and protein intake. Of these two areas, only the latter would seem practical and feasible. As demonstrated in previous studies, weight maintenance is extremely difficult at progressively higher altitudes above 5000 m. Butterfield et al. was able [et al. was able [et al. found th [et al. providedGiven the present body of research, increasing protein intake at altitude would appear a logical choice to aid retention of FFM. However, there are many limitations to this approach. At altitude increasing protein intake may not be feasible or practical. In a recent trek to Everest base camp, trekkers supplemented with protein were able to maintain a protein intake of only 1.1 g protein/kg/day which isLeucine is a branched-chain amino acid that is not only a substrate for the synthesis of new proteins but is critical in mTOR cell signaling and muscle protein synthesis regulation ,65,66. LThe above information presents the possibility of a unique application of leucine to muscle wasting at altitude. 
Metabolite profiling in yeast cells has shown dramatically low levels of cellular leucine secondary to downregulation of amino acid permeases (leucine transporters) under hypoxic conditions . Furtheret al. [Under normoxic situations where high quality meal protein and energy intakes are sufficient, supplemental leucine would not be expected to exert a stimulatory effect upon protein synthesis. Glynn et al. demonstret al. . It is wi.e., altitude), or when the muscle is resistant to stimuli , small amounts of protein enriched with leucine may enhance the MPS feeding response. In a study investigating the effects of 25 g whey protein (3 g leucine) compared to 6.25 g whey protein enriched with a leucine content equivalent to 25 g of whey (0.75 g leucine + 2.25 g leucine) and 6.25 g whey protein (0.75 g leucine), both whey and low dose whey + leucine increased MPS similarly 1\u20133 h post-feeding. However, the higher dose whey protein was more effective at sustaining increased MPS following resistance-exercise [i.e., higher leucine) (40 g whey) was found to induce the greatest stimulus in MPS following resistance exercise [The majority of research supports the amount of 20\u201325 g of high quality protein ingested at one time as the maximal dose for subsequent MPS stimulation. In situations where it is not possible to ingest sufficient protein endurance exercise increased MPS by 33% compared with 10 g EAA (1.87 g leucine) although both increased Akt and mTOR phosphorylation 30 min post-exercise. Coffey et al. [Sea level studies suggest leucine may also be of benefit when ingested during exercise. Although studies have investigated the effects of leucine ingested post-exercise, few have examined the effects of leucine during exercise. Pasiakos et al. demonstry et al. reportedet al. [vs. \u22120.8 kg). In addition, Schena et al. [Three studies have examined the effects of branched chain amino acids (BCAA) and leucine on body composition at altitude. Bigard et al. examineda et al. investiga et al. . The altChronic high altitude exposure is associated with significant weight loss primarily comprised of FFM. Loss of FFM has negative consequences related to decreased physical performance and increased risk of illness and injury. Studies have demonstrated at lower elevations that when subjects are provided with sufficient food and perform limited activity, energy balance can be maintained. As altitude progresses, weight maintenance becomes virtually impossible. Hypoxia, negative energy balance, and insufficient high quality protein limit the body\u2019s ability to synthesize protein secondary to inhibition of the mTOR pathway and perhaps accelerated proteolysis via upregulation of the UP system. Although this could be viewed as a favorable adaptation in the context of survivability, it is not in terms of performance and health. Increasing caloric and protein intake is difficult at high altitude due to feasibility and perturbations in appetite regulation. Therefore, the most practical strategy to improve FFM retention at altitude may be in the form of supplemental leucine. Clearly more research is needed in this area to determine the exact mechanisms related to altitude-induced muscle wasting, protein requirements and effectiveness of leucine for the retention of FFM during both acute and long-term high altitude exposure."} {"text": "Similarly as we did in 2010, please, find below our annual overview of the published papers in the International Journal of Cardiovascular Imaging in the year 2011. 
We believe that we have received again very interesting papers over the last year, which we have subdivided over the well-known areas, i.e. X-ray angiography, intravascular imaging, echocardiography, nuclear cardiology, magnetic resonance imaging, and computed tomography. In 2011 we published two Topical issues, one on QCA, IVUS and OCT in interventional cardiology, and one on Transcatheter Valvular Interventions. In addition, the Asian Society of Cardiovascular Imaging published two ASCI Special issues with Adjunct Editor YH Choe and Guest Editors BW Choi, H Sakuma and J Lee, the first one being Vol. 27, no. 5, the second one Vol. 27, Suppl 1. The second volume in 2011 was a Topical issue on QCA, IVUS and OCT in interventional cardiology. In the QCA section a number of important developments in this field were presented, starting with an overview on angiographic imaging and scoring techniques by Ng et al., followed by a report from Costa et al. Al-Hay et al. describe the efficacy and safety of the Amplatzer septal device for percutaneous occlusion of Fontan fenestration in a retrospective study. Intravascular ultrasound (IVUS) and intravascular optical coherence tomography (IOCT) are two catheter-based intravascular imaging technologies. IVUS has played a pivotal role in the evolution of interventional cardiology, allowing a better comprehension of coronary artery disease (CAD) as well as technique and device improvements. IOCT, a near-infrared light-based technology, was more recently introduced and has 10 times the spatial resolution of IVUS and a very high contrast between lumen and vessel wall contour, since images are obtained in a virtually blood-free environment. These improvements enable, for the first time, direct in vivo characterization of metrics such as fibrous cap quantification and stent coverage. IVUS radiofrequency-derived technology facilitates plaque composition interpretation and quantification, which consequently can be applied in a wide range of research activities. We are presenting here selected publications that exemplify not only the research potential, but also clinical applications of these technologies. The International Journal of Cardiovascular Imaging received in the year 2011 several interesting contributions focused on the evaluation of vascular response following drug-eluting stent implantation as well as scaffold assessment, including studies by Chami\u00e9 et al. and Garc\u00eda-Garc\u00eda et al. The assessment of coronary bifurcation lesions by IVUS was extensively reviewed by Costa et al. Finally, a special highlight goes to the paper by Tu et al. In summary, intravascular methods have been applied in a wide range of indications, from plaque characterization to device evaluation. Most of the papers published in 2011 in the International Journal of Cardiovascular Imaging focused on a variety of research applications of these methods.
Considering the robustness achieved by these methods and continuous evolution we expect, for the upcoming years, more papers with emphasis in clinical applications of these methods in particular their potential impact in patient care.Just like last year, The International Journal of Cardiovascular Imaging received in 2011 several interesting contributions dedicated to conventional echocardiography but also to emerging ultrasound techniques including intravascular ultrasound, stress echocardiography, tissue Doppler imaging, strain analysis and three-dimensional echocardiography .In this era of modern technology, we sometimes forget the value of \u2018basic\u2019 techniques applied in a continent with a huge cardiovascular morbidity and mortality: Africa. Mocumbi et al. discusseThe prognostic value of dobutamine stress echocardiography (DSE) for risk stratification of patients aged \u226580\u00a0years is not clearly defined. Innocenti et al. obtainedP\u00a0<\u00a00.001) when compared with athletes (\u221215.2\u00a0\u00b1\u00a03.6\u00a0%) and control subjects (\u221216.0\u00a0\u00b1\u00a02.8\u00a0%). This technique could offer a unique approach to quantify global as well as regional systolic dysfunction, and might be used as new additional tool for the differentiation between physiologic and pathologic left ventricular hypertrophy. Simsek et al. [Sachdeva et al. evaluatek et al. evaluatek et al. measuredk et al. measuredChen et al. evaluateIn a special focus issue, the role of intravascular ultrasound (IVUS) was reviewed by several authors. Costa et al. discussetranscathether valvular interventions. Multi-modality imaging is of paramount importance during the selection process of patients, the implantation of the valve and during follow-up. The role of conventional echocardiography but also the role of three-dimensional transoesophageal echocardiography during the implantation of the MitraClip device or during the implantation of percutaneous aortic valves was highlighted by Maisano and Siegel [Finally, the last issue of the Journal was dedicated to emerging technology: d Siegel , 36.In 2011 several excellent papers were published in the field of nuclear cardiology. In this overview we highlight a selection of these papers.Cell therapy is an interesting therapeutic option in heart failure patients as it potentially improves contractility and restores regional ventricular function. However cell therapy remains complex and, next to determining the best cell type, the optimal delivery strategy, the bio-distribution and the survival of implanted stem cells after transplantation needs to be elucidated. Van der Spoel et al. presenteSPECT myocardial perfusion imaging is commonly used for comprehensive interpretation of metabolic PET FDG imaging in ischemic dysfunctional myocardium. Nkoulou et al. evaluateP\u00a0=\u00a00.08). LV volumes were also significantly lower when evaluated by gated SPECT as compared to MRI. In 4 patients (21\u00a0%) the LVEDV index was considered normal by gated SPECT and increased by MRI if MRI-derived normal values were used. No differences in LVEF were found between gated SPECT and MRI when MRI-derived LVEF was below 40\u00a0%. However, gated SPECT showed lower LVEF when MRI-derived LVEF was over 40\u00a0% .Sipola et al. prospectThe authors concluded that LV volumes are lower by GSPECT as compared to MRI and that no direct comparisons can be made between methods in follow-up studies. 
The authors also suggest that abnormal gated SPECT results should be confirmed by another imaging modality, such as MRI, if these findings have therapeutic consequences. These findings were also discussed in an editorial comment by van der Wall et al. . They poP\u00a0<\u00a00.05). However, only 1.9\u00a0% of adenosine SPECT studies were terminated due to brady-arrhythmias with 1 patient requiring aminophylline. These findings indicate that denervation super-sensitivity can persist late after cardiac transplantation. As a result, adenosine induced brady-arrhythmias may occur more frequently than in non-transplant patients, but these arrhythmias are generally short-lasting and benign.In patients after cardiac transplantation, denervation super-sensitivity to adenosine is well described, particularly early after transplant. Al-Mallah et al. now repoP\u00a0<\u00a00.05). However, in these patients with documented obstructive CAD, the prevalence of a positive stress test was comparable between both groups . From these data it can be concluded that the higher burden of CAD observed in AF patients is not associated with a higher burden of myocardial ischemia.Atrial fibrillation (AF) has been linked to the presence of underlying coronary artery disease (CAD). Nucifora et al. evaluatePatients with ischemic cardiomyopathy can show varying degrees of LV remodeling after cardiac surgical revascularization. Skala et al. directlyThere were a number of papers relating to the evaluation of patients with ischemic heart disease by cardiac magnetic resonance (CMR) in 2011. Lubbers et al. examinedDelayed contrast enhanced CMR was found to be useful for differentiating cardiac tumors from thrombi in a small series of stroke patients . Late gaFern\u00e1ndez-Golf\u00edn et al. found siContrast enhanced whole-heart MRA at 3 T was found to be useful for assessing coronary venous anatomy . Normal Metz et al. found thAs in previous years, a large number of very high quality, original articles on cardiac CT have been published in 2011. Many of these studies focused on the expanding role of coronary CT angiography (CTA) beyond the detection of lumen stenosis alone. Wang et al. showed the incremental value of dual-energy CT to assess significance of coronary lesions, comparing this technique with quantitative catheter based angiography and SPECT , 66. OthThe journal also published several important epidemiologic papers in different clinical cohorts. Lee et al. reportedTwo important studies focused on the presence of coronary artery disease in patients with a zero calcium score. Ergun et al. reportedRadiation dose remained a hot topic in 2011, and the journal has published several manuscripts related to efforts of exposure reduction , 76. UsiThe journal has also published a number of interesting meta-analyses. Abdulla and colleagues evaluated 10 coronary CTA studies for the purpose of determining the risk of major adverse events . During Beyond coronary imaging, a special issue of the journal focused on imaging in the context of transvascular interventions for valvular and other structural heart disease , 89\u201398."} {"text": "Apoptosis of uninfected bystander cells is a key element of HIV pathogenesis and believed to be the driving force behind the selective depletion of CD4+ T cells leading to immunodeficiency. 
While several viral proteins have been implicated in this process the complex interaction between Env glycoprotein expressed on the surface of infected cells and the receptor and co-receptor expressing bystander cells has been proposed as a major mechanism. HIV-1 utilizes CD4 as the primary receptor for entry into cells; however, it is the viral co-receptor usage that greatly influences CD4 decline and progression to AIDS. This phenomenon is relatively simple for X4 viruses, which arise later during the course of the disease, are considered to be highly fusogenic, and cause a rapid CD4+ T cell decline. However, in contrast, R5 viruses in general have a greater transmissibility, are encountered early during the disease and have a lesser pathogenic potential than the former. The above generalization gets complicated in numerous situations where R5 viruses persist throughout the disease and are capable of causing a rigorous CD4+ T cell decline. This review will discuss the multiple factors that are reported to influence HIV induced bystander apoptosis and pathogenesis including Env glycoprotein phenotype, virus tropism, disease stage, co-receptor expression on CD4+ T cells, immune activation and therapies targeting the viral envelope. While HIV directly and selectively infects CD4+ T cells, the low levels of infected cells in patients is discordant with the rate of CD4+ T cell decline and argues against the role of direct infection in CD4 loss. In agreement with this, in natural Simian Immunodeficiency Virus (SIV) infections in the wild there is no loss of CD4+ T cells or immunodeficiency in infected animals despite high levels of viremia ,2. Earlychanisms . This lechanisms ,6,7,8. W debated . It has debated ; 3) Cel debated ; (4) Bys debated . Of thes Cel deba debated ,13.et al. [It is evident from studies over the years that direct infection is not sufficient to account for all the CD4 loss in HIV infections. This has led to the belief that HIV is able to kill uninfected bystander cells via apoptosis . The firet al. who demoet al. ,14,15. TThe primary purpose of Env glycoprotein is to facilitate the fusion of viral and cellular membranes resulting in viral entry. The Env glycoprotein of HIV is arranged on the surface of the virus and virus-infected cells as a hetero-trimer. Each monomer is composed of a receptor-binding surface unit gp120) and a fusogenic transmembrane unit (gp41) that mediates fusion of membranes ,17. The and a fu However, recent studies suggest that the process of Env fusion is more complex than previously thought. It is now believed that the HIV Env not only facilitates infection of isolated cells but that productive transmission of virus occurs at the contact site between infected and uninfected cells referred to as the virological synapse . This inet al. [Although the role of the Env glycoprotein is primarily to mediate fusion of the viral and cellular membranes allowing for viral entry, it is also known that the HIV Env glycoprotein is capable of inducing CD4+ T cell apoptosis. Laurent-Crawford et al., the authors selectively inhibited signaling via CD4 and CXCR4 and found that the conventional signaling pathways known to be associated with these receptors are not involved in the process of Env mediated apoptosis [et al. 
demonstrated that inhibiting the Env mediated fusion process using gp41 fusion inhibitors abolishes bystander apoptosis [That HIV Env binds CD4 and a co-receptor suggests that these cell surface expressed receptors play a pivotal role in HIV Env mediated bystander apoptosis. This is supported by the observations that inhibiting Env CD4 interactions also inhibits Env mediated apoptosis . Subsequpoptosis . Later, poptosis . This copoptosis ,29,30. Wet al. have shown that abortive infection by HIV can also lead to bystander apoptosis in ex vivo cultures of human tonsil tissue [Two major mechanisms have been proposed for cell death induced by the HIV Env. These include (1) bystander apoptosis via interaction of HIV Env expressing cells with surface receptors CD4 and CXCR4/CCR5 leading to syncytia formation that eventually yield to apoptosis , (2) Parl tissue . et al. showed that fusion of cells mediated by gp41 leads to syncytia formation that subsequently undergo apoptosis [in vitro and in vivo and is a hallmark of HIV infections in humans, monkey models and mouse models of HIV infections. Moreover, syncytia formation has been linked to HIV pathogenesis and progression to AIDS, with syncytia inducing (SI) phenotype viruses appearing later during the disease and associated with rapid CD4+ T cell decline [Studies by Perfettini poptosis . The pro decline . in vitro studies but may also hold true in the case of HIV infected patients. The levels of Cyclin B have been found to be elevated in T cells from HIV infected patients [et al. [in vivo is a result of apoptosis or other mechanism like immune clearance or disintegration remains to be seen.The process of syncytia formation starts by fusion of two cellular membranes that lie in close proximity, followed by mixing of their cytoplasmic contents and eventually the nuclear membranes leading to abortive entry into mitosis . These bpatients ,45. Morepatients ,42. Finapatients ,43. In a [et al. demonstret al. showed that apoptosis induced by the Env glycoprotein required intimate cell to cell contact and binding to the co-receptor CXCR4 that could be reversed by addition of CXCR4 inhibitor AMD3100 [et al. demonstrated that the use of varied kinds of effector cells in assays to determine bystander apoptosis mediated by HIV Env plays an important role in this process and may account for some of the differences seen between laboratories [Another phenomenon associated with HIV Env mediated apoptosis is the process of hemifusion induced by gp41. Hemifusion is a process that involves transient interaction of cellular membranes in a manner that results in mixing of only the outer leaflets of the plasma membrane bilayers . The HIV AMD3100 . Later, AMD3100 . Other s AMD3100 ,29,50,52 AMD3100 ,50. Furt AMD3100 . More reratories . Overallratories ,49,50,51The constant and rapid evolution of HIV Env within a patient has been the focus of extensive research. The variability in HIV Env is also a major limitation in the development of a successful vaccine against HIV. On the other hand the Env phenotypic variation may also play a significant role in HIV bystander apoptosis. Some of the phenotypic characteristics associated with HIV Env that determine virus pathogenesis are described below.in vitro studies regarding HIV Env mediated fusion and apoptosis is in accordance with findings in vivo from infected patients and animal models of HIV infection. The phenotype of Env glycoprotein has long been associated with HIV pathogenesis. 
In the SHIV model of HIV infection the fusogenic activity of Env correlates directly with CD4 loss [in vitro [in vivo [The data from CD4 loss ,55. AlonCD4 loss ,56, incrCD4 loss and CXCRCD4 loss tropism CD4 loss ,29,50. MCD4 loss ,29,50 bo[in vivo .et al., the authors demonstrated that the Env genes from viruses isolated at the late stages of HIV infection are more fusogenic than early stage viruses and in some cases this phenomenon maps to Asn 362 located near the CD4 binding site [et al. [HIV Env glycoprotein is a highly variable protein that constantly evolves within a patient throughout the disease. It is estimated that the genetic variation in the Env gene is the highest compared to any other HIV-1 genes. This evolution of Env has been associated with disease progression and enhanced pathogenesis of late stage viruses compared to early or chronic stage viruses . This phing site . Increas [et al. . These fet al. demonstrated that in fact these Envs are characterized by higher apoptosis induction [Although the above findings support the notion that Env fusion is related to pathogenesis not only for X4 but also R5 viruses, there is limited information on whether bystander cell apoptosis by Env genes from different stages of the disease are different. However, analysis of late stage Envs by Wade nduction . Interesnduction . Thus, tet al. demonstrated that disease progression in subtype CRF01_AE infected individuals is much faster than subtype B infected individuals [The high level of variability between different subtypes of HIV that in many cases correlates with disease progression rate has also been reported. The evolution of HIV is markedly different in different subtypes. The most well characterized phenomenon is co-receptor usage evolution that seems to vary considerably between subtypes ,68,69. Rividuals . Thus phCoreceptor switching from early CCR5 usage to late CXCR4 usage in HIV infections is associated with rapid CD4 decline and AIDS development. The fact that CXCR4 is expressed on virtually all CD4+ T cells compared to CCR5, which is seen in only 5%\u201310% of CD4+ T cells, has been suggested as the explanation behind this phenomenon. However the fact remains that in a large proportion of patients, the co-receptor switch is not required and patients with exclusively R5 viruses progress to AIDS. The complexity of R5 virus infection is further accentuated by the differential levels of R5 expression among individuals. The classical CCR5\u039432 deletion mutation is well characterized for HIV resistance and long term non-progressors . CCR5\u039432et al. [et al. [et al. used HeLa cell lines with different levels of CCR5 expression to demonstrate that CCR5 expression levels can affect fusion mediated by different R5 isolates [As HIV Env mediated fusion is an interplay between Env fusion activity and receptor/co-receptor expression levels , it is net al. , SCID-hu [et al. presente [et al. ,83. Platisolates . Whetherisolates . More imAutophagy like apoptosis is an essential cellular mechanism that maintains homeostasis in higher organisms. It is involved in protein degradation and recycling, maintenance of cellular organelles, cell growth and restriction of intracellular pathogens including viruses, bacteria and parasites. Autophagy is a defense mechanism used by the cell to restrict incoming pathogens. 
This may result in (1) elimination of the pathogen, (2) modulation of this process by the pathogen to restrict their elimination or (3) organisms taking advantage of the autophagosomes for their own replication ,86. The et al. found that the addition of Enfuvirtide to HAART therapy regimen can result in greater and faster immunological recovery possibly via effects on bystander apoptosis [et al. [et al. [et al. [in vitro data suggest that the resistant mutants arising as a result of Enfuvirtide therapy have reduced cell to cell fusion capacity. This also parallels the reduced bystander apoptosis induction by these mutants while retaining virus infection and replication capacity [et al. showed that the presence of V38A mutation in combination with N140I polymorphism is associated with reduced HIV mediated cytopathic effects [The importance of gp41 in mediating bystander apoptosis makes it an attractive target for therapy. Enfuvirtide was the first peptide inhibitor targeting gp41 induced fusion process that inhibits HIV entry in ways that parallel inhibition via neutralizing antibodies . Targetipoptosis . Further [et al. showed t [et al. . These f [et al. who repo [et al. . Our in capacity . In a mo effects . These f effects ,106,107.Pathogenic HIV infections can be distinguished from non-pathogenic SIV infections by the presence of immune activation ,111 seenex vivo human lymphoid tissue with HIV-1 show a unique pattern of T cell activation, characterized by CD25+/HLADR+ cells that facilitate virus replication [et al. [et al. [Although it is widely accepted that pathogenic HIV infections lead to chronic immune activation it is not clear what mediates this phenomenon. Immune activation has been shown to occur in isolated lymph node histocultures and requires active virus replication . Intereslication . Intereslication . Among s [et al. demonstr [et al. . Interes [et al. . Hence ain vivo lead to accelerated CD4+ T cell death. This is seen not only in the infected but also the uninfected cell population. Moreover HIV infections also affect B cell and CD8 T cell function most likely due to lack of adequate cognate CD4 help that is essential for proper functioning of both T and B cell population. As a result HIV infections alter homeostasis of the entire immune system that may be at least in part due to apoptosis [in vitro has revealed accelerated cell death in both infected as well as uninfected T cells [HIV infections poptosis . Various T cells . Studies T cells markers T cells ,132. Thi T cells . et al. [in vitro as well as the pathogenesis in vivo [Answers to some of the fundamental questions regarding AIDS have been facilitated by the development of animal models of HIV infection. One of the most extensively used models is the SIVmac infection in Rhesus Macaques (RM) versus SIVsm infection in Sooty Mangabeys (SM) or SIVagm in African green Monkeys. While SIVsm infection in SM results in a nonpathogenic infection characterized by high levels of viremia in the absence of CD4 decline; infection of RM with SIVmac results in rapid loss of CD4 cells leading to immunodeficiency ,134,135.et al. . In the et al. . Furtheret al. . Other ret al. . Intereset al. ,141 and in vivo ,55,142. in vivo has also been greatly facilitated by the development of mouse model systems like the SCID-hu, hu-PBL-SCID and humanized mice systems [\u2212/\u2212 or the equivalent Rag\u2212/\u2212\u03b3c\u2212/\u2212 mice leads to development of a fairly representative human immune system. 
These human immune cells are susceptible to HIV infection by both CXCR4 utilizing and CCR5 utilizing strains and can support virus infection for several months [et al. [in-vitro studies, in-vivo from HIV infected patients, and an animal model of HIV infection indicate a role for HIV Env mediated fusion and bystander apoptosis in progression to AIDS.The testing of HIV pathogenesis systems . In the l months . Humanizl months ,145,146. [et al. demonstr [et al. . Recentl [et al. . Thus, fOver the years an increasing amount of data has been accumulating regarding the role of bystander apoptosis induction in HIV infection and its role in disease progression. It has become evident that the process is not as simple as previously thought. A number of host and viral factors discussed in this review actually work in concert to regulate this phenomenon . While a"} {"text": "Environmental policies at the European and global level support the diversion of wastes from landfills for their treatment in different facilities. Organic waste is mainly treated or valorized through composting, anaerobic digestion or a combination of both treatments. Thus, there are an increasing number of waste treatment plants using this type of biological treatment. During waste handling and biological decomposition steps a number of gaseous compounds are generated or removed from the organic matrix and emitted. Different families of Volatile Organic Compounds (VOC) can be found in these emissions. Many of these compounds are also sources of odor nuisance. In fact, odors are the main source of complaints and social impacts of any waste treatment plant. This work presents a summary of the main types of VOC emitted in organic waste treatment facilities and the methods used to detect and quantify these compounds, together with the treatment methods applied to gaseous emissions commonly used in composting and anaerobic digestion facilities. Gaseous emissions in composting facilities are typically constituted by nitrogen-based compounds, sulphur-based compounds and a wide group of compounds denominated Volatile Organic Compounds (VOCs) [Solid waste management is becoming a global problem in developed countries. According to international recommendations and legislation, different technologies are being used to reduce landfill disposal of organic wastes, improving recycling of organic matter and nutrients . Among ts (VOCs) . VOCs ars (VOCs) , althougi.e., at the tipping floors, at the shredder and during the initial forced aeration composting period. Also, incomplete or insufficient aeration during composting can produce sulphur compounds of intense odor, whereas incomplete aerobic degradation processes result in the emission of alcohols, ketones, esters and organic acids [et al. [Volatile Organic Compounds is a denomination used to refer to a wide group of organic compounds whose vapor pressure is at least 0.01 kPa at 20 \u00b0C . VOCs aric acids . Van Dur [et al. identifi [et al. also fouet al. [et al. [et al. [Delgado-Rodr\u00edguez et al. reported [et al. , when co [et al. . At the [et al. . These a [et al. found thet al. [et al. [et al. [et al. [Tolvanen et al. and Komi [et al. studied [et al. stated t [et al. found vai.e., amount of VOC emitted per weight of waste treated) or from a Life Cycle Assessment (LCA) perspective, since their contribution is mainly related to the Photochemical Oxidation Potential (POP). 
Emission factor values of 1.70 and 0.59 kg of VOC Mg\u22121 of OFMSW treated were reported by Baky and Eriksson [et al. [\u22121 of OFMSW treated in two different full scale composting plants using different technologies. In LCA studies, emissions of VOC to the atmosphere are expressed in kg ethylene equivalent/kg of emission and included in the POP category [VOC emissions in biological treatment processes have also been studied as environmental loads related to these processes. Regarding this, VOC emissions have been reported as emission factors and gas detection tubes, with those obtained with tubes being higher (plants studied in both cases correspond to different treatments). It is clear that both analytical methodologies need a deeper and more complete comparison to be fully reliable being, at present, GC-MS the most powerful tool to quantify VOC emissions.p-isopropyltoluene [There are also other compounds that have not been listed in ltoluene ,14,20 orltoluene ,20.The presence of VOCs has also been investigated in landfill gas and ambient air surrounding landfill facilities ,25. Aromet al. [The presence of odors is the main concern associated with VOC emissions and it has been investigated by a wide number of researchers. 2-butanone, \u03b1-pinene, tetrachloroethylene, dimethyl disulfide, \u03b2-pinene, limonene, phenol and benzoic acid were included in the study of Bruno et al. as repreet al. [et al. [Pierucci et al. found th [et al. concludeet al. [Mao et al. determinet al. [p-cymene, ammonia and acetic acid. Correlations found were different when low or high concentrations of these compounds were considered. For ethylbenzene, dimethyl sulfide, trimethylamine and p-cymene, which presented a very low olfactory threshold (0.002 ppm), a linear relationship between concentration and odor was only found for concentration values within the 0.25\u2013100 ppm range. A linear correlation was also found for odors and acetic acid (in the 0.1\u201350 ppm range), while it was not possible for ammonia when concentrations within 5 and 100 ppm were considered.Tsai et al. investiget al. [et al. [et al. [et al. [i.e., not digested waste, digested waste and post-digested waste) odor emissions measured by olfactometry decrease, although no correlation between total VOC concentration and olfactometry measurements could be established. Data obtained from the electronic nose measures showed that odor reduction due to the increment of biological stability was accompanied by a change in the organic compounds present in air samples. Further measurements using GC-MS confirmed these results as VOC mainly present in air samples obtained for the air surrounding fresh waste were terpenes (61%), alcohols (18%) and esters (9%), while air samples from digested waste still presented a high presence of terpenes (51%) and carbonyl compounds (40%), being these same compounds predominant in post-digested waste (58% of terpenes and 34% of carbonyl compounds). Regarding this point, it is evident that a reliable correlation between the odor values obtained from electronic noses or olfactometry measurements and the chemical composition determined by CG-MS still has not been established. Consequently, no international consensus about the suitability of these techniques has been reached.On the other hand, D\u2019Imporzano et al. and Orzi [et al. tried to [et al. found a [et al. studied 2.et al. [et al. [The main sources of VOCs and other gaseous contaminants in composting plants are area emission sources. 
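Looking back at the emission-factor and photochemical oxidation potential (POP) figures quoted above, the sketch below shows, under stated assumptions, how a plant-level emission factor (kg VOC per Mg of OFMSW treated) and an LCA-style POP score in kg ethylene equivalents are typically assembled. The speciated masses and the POCP characterization factors used here are placeholders for illustration only, not values taken from the cited studies (ethylene itself is 1 by definition).

```python
# Illustrative only: assembling a VOC emission factor and an LCA-style POP score.
# Masses and POCP characterization factors below are placeholders, not values
# from the studies cited in this review.

emissions_kg = {"limonene": 120.0, "alpha-pinene": 45.0, "ethanol": 30.0}  # campaign totals (hypothetical)
waste_treated_mg = 150.0  # Mg (tonnes) of OFMSW processed in the same period (hypothetical)

total_voc_kg = sum(emissions_kg.values())
emission_factor = total_voc_kg / waste_treated_mg
print(f"Emission factor: {emission_factor:.2f} kg VOC per Mg OFMSW")

# POP contribution = sum over compounds of (mass emitted x POCP factor),
# with POCP expressed in kg ethylene-equivalents per kg of compound.
pocp = {"limonene": 0.5, "alpha-pinene": 0.5, "ethanol": 0.4}  # placeholder factors
pop_score = sum(mass * pocp[name] for name, mass in emissions_kg.items())
print(f"POP score: {pop_score:.1f} kg ethylene-eq")
```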
Composting or, in general, waste treatment plants, can be completely confined with process emissions treated through scrubbers and/or biofilters or completely open, where the composting process takes place commonly in windrows , or a mixture of both situations. Then, biofilter and composting windrows surfaces are the sources of VOCs. In addition, emissions from reception and storage areas should be considered if not confined. Also the emissions of VOCs during some pre (waste conditioning) and post treatment (compost sieving) operations can be significant. In fact, Albretch et al. stated t [et al. showed a2.1.et al. [et al. [et al. [As can be deduced from the nature of the emitting surfaces, obtaining representative and comparable samples of gaseous emissions is not straightforward. In addition, no impact on the characteristics and conditions of the source should be caused when sampling and no influence of the equipment used on the sample should be guaranteed . In the et al. located [et al. also use [et al. . The flu [et al. covered et al. [\u22123) and air velocity (m\u00b7s\u22121) results in VOC mass flow released per windrow surface area unit (mg\u00b7s\u22121\u00b7m\u22122). Measures of VOC emissions were repeated at different days. Data obtained from emission measurements during a single sampling day were represented in a three dimensional graph with windrow length and perimeter in x and y axes respectively . VOC mass flow values per square meter were placed in the z-axis to obtain an emission surface. The three dimension emission surface was then projected in a two-dimension graph (windrow perimeter at x-axis and windrow length at y-axis). Multiplying the pollutant mass flow per area unit by the corresponding area in the graph resulted in the compound mass flow and the sum of the different quantities obtained corresponds to the total mass flow of VOC (g\u00b7s\u22121). VOCs were determined as total carbon concentration by GC using a FID (flame ionization detector).Cadena et al. proposedet al. ,36. Air 2.2.Detection and quantification of VOCs has been performed by different techniques. Although the gas chromatography is preferred by a number of researchers, gas detection tubes, electronic noses and olfactometry have also been used for VOC determination in gaseous samples.et al. [et al. [et al. [The most common technique used in VOC detection and quantification is gas chromatography. However, in most of the cases due to the low concentration of these compounds in the samples to analyze, a pre-concentration step is needed to ensure the complete detection of all VOCs present and the accuracy of the analysis. Different materials and settings are used with this purpose. Defoer et al. pre-conc [et al. collecte [et al. , who staet al. [et al. [et al. [Komilis et al. used act [et al. collecte [et al. when det [et al. ,38.et al. [et al. [et al. [In addition to the use of GC-MS for VOC determination, there is the possibility of using gas detection tubes. In fact, Mao et al. and Tsai [et al. present [et al. also use2, S5: low polarity aromatic and alkane compounds, S6: methane, S7: sulphur compounds and terpenes, S8: alcohols, ketones and partially aromatic compounds, S9: sulphur containing and aromatic compounds and S10: methane at high concentration . Although very attractive, this methodology still needs a scientific validation for a large number of odors and sources. 
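Referring back to the windrow emission-surface approach described above (a surface flux obtained as VOC concentration multiplied by air velocity, then summed over the mapped windrow area), the following is a minimal sketch of that area-weighted summation. The sampling points and the areas they represent are hypothetical and only illustrate the arithmetic of the method.

```python
# Minimal sketch of the emission-surface calculation described above:
# flux per unit area = VOC concentration (mg·m^-3) x air velocity (m·s^-1),
# total emission     = sum over the windrow surface of flux_i x area_i.
# All numbers are hypothetical sampling points, not data from the cited work.

sampling_points = [
    # (concentration mg/m^3, air velocity m/s, surface area represented m^2)
    (12.0, 0.05, 4.0),
    (30.0, 0.08, 4.0),
    (8.0,  0.03, 6.0),
]

total_mg_per_s = 0.0
for conc, velocity, area in sampling_points:
    flux = conc * velocity          # mg·s^-1·m^-2 released by this patch of surface
    total_mg_per_s += flux * area   # contribution of the patch to the whole windrow

print(f"Total VOC mass flow: {total_mg_per_s:.2f} mg/s ({total_mg_per_s / 1000.0:.4f} g/s)")
```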
As stated above in this paper (Section 1.1), olfactometric techniques have also been used for VOC measurement although their suitability for VOC quantification still has not been demonstrated.The electronic nose (EN) has been also used for odor determination and VOC families\u2019 identification ,29. An e3.Although gaseous emissions can be directly released to the atmosphere in non-confined waste biological treatment facilities , effectiBiological treatments of waste gases are based on the use of naturally selected microbial strains capable of contaminant removal which act as carbon source, energy source or both . Differe\u22123\u00b7h\u22121 have been reported for typical biofilters colonized with bacteria [Biofilters are a type of bioreactors in which a humidified polluted air stream is passed through a porous packed bed on which a mixed culture of pollutant-degrading organisms is immobilized. Generally the pollutants in the air flow are transported from the gaseous phase to the microbial biofilm (through liquid phase or moisture) where the biological oxidation of VOCs occurs . The by-bacteria . These abacteria . Howeveret al. [\u22123\u00b7biofilter\u00b7h\u22121. Removal efficiencies were highly dependent on the composted waste. The same authors also pointed a biofilter basal emission of VOCs of approximately 50 mg of C\u00b7m\u22123. However, in the literature, VOC biofiltration is frequently studied in laboratory scale biofilters using synthetic gases with two or three mixed compounds or even a single compound [Pagans et al. reportedcompound ,48, whicet al. [\u22123 biofilter h\u22121 for the old material and 17.1 and 27 g\u00b7C\u00b7m\u22123\u00b7biofilter\u00b7h\u22121 for the new material. These results indicate that the biofilter performance was improved as a result of material replacement. In one of the two biofilters the authors observed the pattern reported by Devinny et al. [Another aspect to comment in biofilter operation and maintenance at an industrial level is the need for attention to packing material needs for a correct performance of the gas treatment equipment. In fact, watering of the material is needed, as well as a periodic replacement of the biofilter material. Col\u00f3n et al. reportedy et al. in relaty et al. . For they et al. . Proper 4.VOCs present in gaseous emissions of biological treatment facilities of organic wastes have been investigated and characterized by different researchers. The main VOCs related to organic waste handling and treatment are terpenes, although ketones, alcohols and organic acids have also an important contribution to the total VOC content measured. The presence of sulphur and nitrogen compounds has to be also highlighted. Different relationships have been established between VOC concentration and odor level on specific situations and for a specific range of VOCs and odor concentrations. There is no universal correlation covering the whole concentration range and the entire group of VOC families.The analysis of VOCs is mainly performed by GC-MS although other methods such as gas detection tubes or electronic noses have been used in some cases. GC-MS appears as the most reliable method covering a wide range of VOC concentrations and being capable to identify and quantify a large number of compounds from different families. Gas detection tubes permit the measurement of many compounds present at relatively high concentrations. 
As an economical and quick method, it can be useful for the self management of gaseous emissions in waste treatment plants. Electronic noses permit the characterization of the families of VOC present in a gaseous sample and the comparison between VOC emissions from different points of a treatment plant of different installations, although they must be considered a first approach.Different methods for air emissions sampling in waste treatment plants have been proposed. The lack of homogeneity among these methods is one of the drawbacks in VOC emission measurement from this type of facilities.Finally, it must be highlighted that biofiltration is the gaseous pollutant abatement technology that is most widely used in organic waste treatment plants for VOC removal."} {"text": "Gadus morhua) and in the analysis of traits such as temperature tolerance, growth characteristics and sexual maturation. We used an Illumina GoldenGate panel and the KASPar SNP genotyping system to analyse SNPs in three Atlantic cod families, one of which was polymorphic at the Hb \u03b21 locus, and to generate a genetic linkage map integrating Pan I and multiple Hb loci.Haemoglobin (Hb) and pantophysin (Pan I) markers have been used intensively in population studies of Atlantic cod have been mapped on linkage group (LG) 2 while the other five were placed on LG18. Pan I was mapped on LG 1 using a newly developed KASPar assay for a SNP variable only in Pan IThe genetic linkage map presented here incorporates the marker Pan I, together with multiple Hb loci, and integrates genetic linkage data produced by two different research groups. This represents a useful resource to further explore if Pan I and Hbs or other genes underlie quantitative trait loci (QTL) for temperature sensitivity/tolerance or other phenotypes. Gadus morhua) represents one of the most valuable commercial resources for international fisheries [The Atlantic cod matches the number of haploid chromosomes reported for the Atlantic cod by Fan and Fox [We used a curated Illumina GoldenGate panel containing 1536 SNPs , includi\u00ae4 . The gen\u00ae4 . The 23 al. 2010 ; a 1:1 c and Fox . The com and Fox ; therefo and Fox ,26. Alle and Fox ,20 suggeA and Pan IB described at the Pan locus can be determined by assessing the polymorphism present at a DraI site located in intron 4 [A homozygotes with thet al. . To iden [et al. ) and Moe [et al. we perfo [et al. using 12is group . Using t [et al. might re [et al. ,12 where [et al. ,14 sequeThe genetic linkage map presented here, that includes the marker PanI and multiple Hb loci, represents a useful resource for studying genotype-phenotype relationships, for QTL studies, as well as for population studies. Our data indicate that Hb genes are located on two different linkage groups while Pan I locus was mapped on a third linkage group. Further studies are needed to elucidate which of these genes/linkage groups will correlate with phenotypic traits. The Hb \u03b21 gene, which has been linked to variation in haemoglobin oxygen binding capacity and water temperature preference ,28 are indicated on the left of each linkage group, with SNP identifiers on the right. PanI and Hb loci are in bold, italicized and highlighted in red while SNPs common to Moen et al. 2009 are in bold and highlighted in red.Click here for fileet al. 2009Linkage group correspondence between the genetic linkage groups presented in this study and Moen . 
This file contains the data related to the linkage group correspondence between the genetic linkage groups presented in this study and Moen et al. 2009.Click here for file"} {"text": "Clinical laboratories have a vital role to play in translating research findings into clinically useful measurements. This involves assessment of new procedures to ensure that they are analytically valid and also continually integrating the findings from new studies into routine practice. These important tasks are covered in this special issue on laboratory medicine which includes a wide variety of laboratory-related topics as illustrated by three review articles and 12 research papers. The first review article by D.-H. Ko et al. comprehensively examines the methods available for haptoglobin typing and discusses the characteristics, clinical applications, and limitations of each method. A. Noto et al. describe the role of neutrophil gelatinase-associated lipocalin (NGAL) for managing acute kidney injury and the potential benefits derived from the combined clinical use of urine NGAL and metabolomics in kidney disease. The third review by E. Urrechaga et al. looks at new laboratory biomarkers for hypochromia including their clinical significance and utility in daily practice.Several papers examine issues in diagnostic hematology, one of the traditional areas of laboratory testing. Y. Nam et al. assess hypercoagulability using a relatively new thrombomodulin-induced thrombin generation assay (TGA) in patients with liver cirrhosis. They show that, although routine coagulation tests did not detect the thrombotic tendency in cirrhosis, the TGA could detect hypercoagulability in cirrhosis. The paper by M. Hur et al. emphasizes that prothrombin time international normalized ratio (INR) measurements by point-of-care testing coagulometers still need to be confirmed with INR measurements in the laboratory. H. R. Lee et al. evaluate the relationship between mean platelet volume (MPV) and characteristics of cord blood (CB) units and shows that MPV may be one of the most useful parameters to assist with making decisions about the priority for processing specimens in the cord blood bank. F. Chongliang et al., in their interesting study on complete blood counts, suggest that a model that uses levels of neutrophils, lymphocytes, and platelets is potentially useful in the objective evaluation of survival time or disease severity in unselected critically ill patients.Three interesting papers come from the areas of serology/immunology, chemistry, and microbiology. K. Lee et al. evaluate the overall efficacy of reverse sequence screening for syphilis (RSSS) and examine the practical issues associated with the routine investigation of syphilis. The study by M. Han et al. analyzes the degree of concordance between the various multiple allergen simultaneous test assays and a reference method. J. Gervasoni et al. evaluate various assay kits for 25-hydroxyvitamin D2/D3 and show that they have acceptable agreement with liquid chromatography-tandem mass spectrometry.M. tuberculosis. S. Kim et al. introduced a new allele-specific real-time PCR system for TPMT genotyping, which can be used to improve the efficacy and safety of thiopurine treatments in clinical practice. R. Januchowsk et al. analyzed MDR gene expression in drug-resistant ovarian cancer cell lines. They suggest that it is possible to predict cross-resistance to other drugs when the classical MDR, which is correlated with P-gp expression, is involved. S. M. Hwang et al. 
evaluated the human platelet antigen (HPA) genotype and/or the CD109 mRNA expression in various human cell types. They demonstrate that the 4-1BB signal pathway plays a key role in organ transplantation tolerance. Lastly the paper by Y. Shi et al. demonstrates that gene silencing of 4-1BB by RNA interference inhibits the acute rejection in rats with liver transplantation.Five papers deal with the growing area of molecular diagnostics. T. Kaewphinit et al. combined a loop-mediated isothermal amplification method with a chromatographic lateral-flow dipstick and show that it can specifically and rapidly detect the IS6110 gene of We hope that these papers in this special issue of BioMed Research International will help to confirm the important role that clinical laboratories play in translational research and the need to continuously update laboratory practice as new findings and developments appear.Mina HurAndrew St. JohnAntonio La Gioia"} {"text": "Patellar tendinopathy (PT) presents a challenge to orthopaedic surgeons. The purpose of this review is to revise strategies for treatment of PTA PubMed (MEDLINE) search of the years 2002\u20132012 was performed using \"patellar tendinopathy\" and \"treatment\" as keywords. The twenty-two articles addressing the treatment of PT with a higher level of evidence were selected.Conservative treatment includes therapeutic exercises (eccentric training), extracorporeal shock wave therapy (ESWT), and different injection treatments . Surgical treatment may be indicated in motivated patients if carefully followed conservative treatment is unsuccessful after more than 3\u20136 months. Open surgical treatment includes longitudinal splitting of the tendon, excision of abnormal tissue (tendonectomy), resection and drilling of the inferior pole of the patella, closure of the paratenon. Postoperative inmobilisation and aggressive postoperative rehabilitation are also paramount. Arthroscopic techniques include shaving of the dorsal side of the proximal tendon, removal of the hypertrophic synovitis around the inferior patellar pole with a bipolar cautery system, and arthroscopic tendon debridement with excision of the distal pole of the patella.Physical training, and particularly eccentric training, appears to be the treatment of choice. The literature does not clarify which surgical technique is more effective in recalcitrant cases. Therefore, both open surgical techniques and arthroscopic techniques can be used. There is agreement within the literature that the patellar tendon is particularly vulnerable to injury and often difficult to manage successfully \u20135. The pDanielson et al. found a The purpose of this review is to revise strategies for the treatment of PT.PubMed articles (MEDLINE) in English related to the treatment of PT were searched, using \u201cpatellar tendinopathy\u201d and \u201ctreatment\u201d as key words. Between 2002 and 2012, we found 186 references. We chose the 22 references that had the higher level of evidence and that were closely related to the treatment of PT.There are several strategies for the management of PT: therapeutic exercises, extracorporeal shock wave therapy (ESWT), injections, open surgical procedures and arthroscopic techniques. It is commonly accepted that surgical treatment must be indicated in motivated patients if carefully followed conservative treatment is unsuccessful after 3\u20136\u00a0months \u201322.Hyman studied PT in volleyball athletes . 
He founESWT appears to be a promising treatment in patients with chronic PT. ESWT is most often applied after the eccentric training has failed. Zwerver et al. studied Injection treatments are increasingly used as treatment for PT. Van Ark et al. describeUltrasound-guided injection of autologous skin-derived tendon-like cells has been show to be more effective than plasma alone for the treatment of refractory PT .Pascual-Garrido et al. tried toFerreti et al. analyzedKaeding et al. found a Shelbourne et al. reportedFifteen patients with PT were treated by Wilberg et al. . All patOgon et al. describeKelly examined the results of arthroscopic tendon debridement with excision of the distal pole of the patella for refractory PT . He concLorbach et al. performePascarella et al. analyzedBayar et al. reportedSurgical treatment (patellar tenotomy) was compared with eccentric training by Bahr et al. . No advaWillberg et al. comparedCucurulo et al. evaluatePT is a common, painful, overuse disorder. Although many different treatment methods have been described, there is no consensus regarding the optimal treatment for this condition \u201322.In this review, no advantage has been demonstrated between surgical treatment and eccentric strength training . TherefoAnother review has shown strong evidence for the use of eccentric training to treat PT . There wIn conclusion, it is commonly accepted that surgical treatment must be indicated in motivated patients if carefully followed conservative treatment is unsuccessful after 3\u20136\u00a0months \u201322. The"} {"text": "The purpose of this study was to evaluate gender-wise diversity of digital dermatoglyphic traits in a sample of Sinhalese people in Sri Lanka.Four thousand and thirty-four digital prints of 434 Sinhalese individuals were examined for their digital dermatoglyphic pattern distribution. The mean age for the entire group was 23.66\u00a0years (standard deviation\u2009=\u20094.93\u00a0years). The loop pattern is observed more frequently compared to whorl and arch in the Sinhalese population. Females have a more ulnar loop pattern than males . The plain whorl pattern is observed more frequently in males compared to females .The double loop pattern is observed more frequently on the right and left thumb (digit 1) of both males and females. Pattern intensity index, Dankmeijer index and Furuhata index are higher in males.Ulnar loop is the most frequently occurring digital dermatoglyphic pattern among the Sinhalese. All pattern indices are higher in males. To some extent, dermatoglyphic patterns of Sinhalese are similar to North Indians and other Caucasoid populations. Further studies with larger sample sizes are recommended to confirm our findings. Dermatoglyphics is the tet al. [et al.\u2019s [et al. [et al.\u2019s [et al.\u2019s [et al. [et al. [To date, sexual dimorphism of qualitative dermatoglyphic traits has been studied in various populations around the world. In 1892, Sir Francis Galton examined 5,000 digital prints from different populations in which he observed the pattern distribution as loop 67.5%), whorl (26%) and arch (6.55%) patterns . Chattop7.5%, who [et al. , in his et al.\u2019s study onet al.\u2019s study onet al.\u2019s among thet al.\u2019s study on [et al. observed [et al. , in her [et al. whose stDermatoglyphic data of Sinhalese people (an Indo-Aryan ethnic group native to the island of Sri Lanka) are scarce in the literature. 
The main objectives of the current study are to determine the sexual dimorphism of digital dermatoglyphic traits and pattern indices in a sample of the Sinhalese population and compare them with other populations.The present study was conducted from January 2010 to January 2012 at the Department of Forensic Medicine, Faculty of Medicine and Allied Sciences, Rajarata University of Sri Lanka.Ethical clearance for this study was obtained from the Ethical Clearance Committee of the institute. All subjects were informed about the purpose, nature and possible risks of the study, before written informed consent was obtained.The participants in this study were undergraduate students from different faculties in the university. We calculated that a sample size of 434 participants was sufficient to detect a 50% prevalence of ulnar loop, with an absolute precision of 5% of the total population (according to the 2009 census) . There aDigital prints were classified according to the Galton-Henry system ,22. We c1. Loops\u25cf\u2003Ulnar loop (UL)\u25cf\u2003Radial loop (RL)2. Whorls\u25cf\u2003Plain whorl (PW)\u25cf\u2003Double loop whorl (DLW)\u25cf\u2003Central pocket loop (CPL)\u25cf\u2003Accidental whorl (AW)3. Arches\u25cf\u2003Plain arch (PA)\u25cf\u2003Tented arch (TA)The pattern intensity index:{(2\u2009\u00d7% whorl\u2009+% loop) \u00f7 2} ,24;arch/whorl index of Dankmeijer:{(% arches \u00f7% whorl)\u2009\u00d7\u2009100} ;and whorl/loop index of Furuhata:{(% whorl \u00f7% of loop)\u2009\u00d7\u2009100} , were caAnalysis was carried out using SPSS 17 Categorical data are presented as frequencies.A total of 4,340 fingerprints from 434 Sri Lankan Sinhalese were analyzed for different digital patterns. The mean age of the group was 23.66\u00a0years (standard deviation\u2009=\u2009\u00b14.93\u00a0years).The loop pattern is the most common pattern in the Sinhalese population followed by whorl and arch , Similarly, the most frequently observed pattern in both hands of males is also loop. However, the overall frequency of loop pattern is higher in females (60.92%) than males (58.53%). The whorl pattern is observed more frequently in males (36.54%) compared to females (34.52%). The frequency of arch pattern is 4.56% in females and 4.93% in males for both hands, followed by plain whorl , double loop whorl , central pocket loop , plain arch , tented arch , radial loop and accidental whorl .In the left hands of all subjects,the pattern distribution in descending order is: ulnar loop , plain whorl , double loop whorl , central pocket loop , radial loop , plain arch , tented arch and accidental whorl . Similarly, for the right hand, the distribution is: ulnar loop , plain whorl , double loop whorl , central pocket loop , plain arch , tented arch , radial loop , and accidental whorl .The decreasing order of digital dermatoglyphic pattern types from finger to finger is shown in Table\u00a0The double loop whorls are found more frequently on the thumb than on the other fingers , middle finger , ring finger , little finger ).The frequencies of dermatoglyphic pattern indices among Sinhalese are shown in Table\u00a0The pattern intensity index is found higher in males (13.16) compared to females (12.99). Similarly, the index of Dankmeijer is found higher in males (13.49) than females (13.22). 
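As a quick check of the index definitions given above, the sketch below recomputes the three indices from the male pattern frequencies reported in this study (whorl 36.54%, loop 58.53%, arch 4.93%); the Furuhata value reported in the next sentence is recovered in the same way. Note that the published pattern intensity values (13.16 for males, 12.99 for females) are reproduced when the divisor is 10, i.e. the ten digits scored per individual, which is the usual Cummins-Midlo form; the divisor of 2 printed above is presumably a transcription slip.

```python
# Recomputing the dermatoglyphic pattern indices from the male pattern
# frequencies reported in this study (percentages of all male fingerprints).
whorl, loop, arch = 36.54, 58.53, 4.93

pattern_intensity = (2 * whorl + loop) / 10   # mean triradii per individual (10 digits)
dankmeijer = arch / whorl * 100               # arch/whorl index
furuhata = whorl / loop * 100                 # whorl/loop index

print(f"Pattern intensity index: {pattern_intensity:.2f}")   # 13.16, as reported
print(f"Dankmeijer index:        {dankmeijer:.2f}")          # 13.49, as reported
print(f"Furuhata index:          {furuhata:.2f}")            # ~62.4, as reported next
```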
The index of Furuhata is found higher in males (62.44) compared to females (56.65).In this study, an attempt has been made to study the sexual dimorphism of dermatological traits and pattern indices among a sample of Sinhalese in Sri Lanka. They are typified by having a high frequency of loops compared to whorls and arches. Ulnar loop is the most commonly observed pattern followed by PW, DLW, CPL, PA, TA, RL and AW in males and similarly, UL is the commonest pattern followed by PW, DLW, CPL, TA, PA, RL and AW in females.A large number of dermatoglyphics studies have been performed over the last century in many countries around the world. The results of the following studies are in line with the present study Table\u00a0.et al. [, in their study on South Indian people, observed that UL was the most common pattern followed by PW, DLW, CPL, PA, TA, RL and AW in males while UL was the most common pattern followed by PW, DLW, CPL, PA, TA, RL and AW in females. Similarly, studies done by Chattopadhyay et al. [et al. (among Black Americans) [et al. (among Indigenous black Zimbabweans) [Srivastava , in his et al. , in thei Bengal) , Namouch Bengal) , Qazi etericans) , Boroffiericans) and Igbiabweans) , observeet al. [et al. (among Tibetans) [et al. (among Muzziena Bedouin) [et al. [The results of the studies done by Banik agaland) , Biswas agaland) , Tiwari ibetans) , KarmakaBedouin) and Cho Bedouin) ,13 are nBedouin) . AccordiBedouin) . HoweverBedouin) stated et al.[et al.[The pattern indices of Sinhalese are compared with several previous studies on different populations in Table\u00a0et al. observedl.[et al. and Biswl.[et al.observed et al.[et al.[et al.[Studies done by Biswas and Cho et al. and Tiwal.[et al. observedl.[et al. and Biswl.[et al., whereasl.[et al. found a Mahavansa, the great chronicle of Sri Lanka, which was written by the Mahanama thero in the fifth century AD [Mahavansa, Sinhalese people originated from a group of 700 people of Indo-Aryan stock led by Prince Vijaya (543 BC to 505 BC), who was a son of the North Indian king, Sinhabahu [In general, dermatoglyphics patterns of Sinhalese are more similar to the Caucasoid populations. The origin of the Sinhalese population of Sri Lanka is disputed. However, studies based on human leukocyte antigen (HLA) have shown that Sinhalese are more likely to originate from the Aryans than the Dravidians . Sinhalentury AD . Accordiinhabahu .It appears that the dermatoglyphic data would certainly support similarities between the Sinhalese and people of North India.In conclusion, the most common fingerprint pattern observed among Sinhalese is ulnar loop. All pattern indices are found to be higher in males. To some extent, the dermatoglyphic patterns of the Sinhalese are similar to North Indians and other Caucasoid populations. Further studies with larger sample sizes are needed to substantiate our findings.AW: Accidental whorl; CPL: Central pocket loop; DLW: Double loop whorl; PA: Plain arch; PW: Plain whorl; RL: Radial loop; TA: Tented arch; UL: Ulnar loop; HLA: Human leukocyte antigen; AD: Anno Domini; BC: Before Christ.The authors declare that they have no competing interests.BTBW and GKR carried out the design of the study and performed the statistical analysis, interpretation of data, and drafting of the manuscript. All authors participated in collecting data, editing the manuscript and helped coordinate research activities. 
All authors read and approved the final manuscript."} {"text": "To the Editor: In a recently published study, van Ingen et al. analysis, spoligotyping, and mycobacterial interspersed repetitive units\u2013variable number tandem repeat typing. We showed that, in addition to the markers described by van Ingen et al. (M. orygis\u2013specific type and exactly matches that of a previous isolate of the oryx bacillus (SB0319) from the M. bovis spoligotype database 587 was not the only spoligotype specific for"} {"text": "As a minor component of vitamin E, tocotrienols were evident in exhibiting biological activities such as neuroprotection, radio-protection, anti-cancer, anti-inflammatory and lipid lowering properties which are not shared by tocopherols. However, available data on the therapeutic window of tocotrienols remains controversial. It is important to understand the absorption and bioavailability mechanisms before conducting in-depth investigations into the therapeutic efficacy of tocotrienols in humans. In this review, we updated current evidence on the bioavailability of tocotrienols from human studies. Available data from five studies suggested that tocotrienols may reach its target destination through an alternative pathway despite its low affinity for \u03b1-tocopherol transfer protein. This was evident when studies reported considerable amount of tocotrienols detected in HDL particles and adipose tissues after oral consumption. Besides, plasma concentrations of tocotrienols were shown to be higher when administered with food while self-emulsifying preparation of tocotrienols was shown to enhance the absorption of tocotrienols. Nevertheless, mixed results were observed based on the outcome from 24 clinical studies, focusing on the dosages, study populations and formulations used. This may be due to the variation of compositions and dosages of tocotrienols used, suggesting a need to understand the formulation of tocotrienols in the study design. Essentially, implementation of a control diet such as AHA Step 1 diet may influence the study outcomes, especially in hypercholesterolemic subjects when lipid profile might be modified due to synergistic interaction between tocotrienols and control diet. We also found that the bioavailability of tocotrienols were inconsistent in different target populations, from healthy subjects to smokers and diseased patients. In this review, the effect of dosage, composition and formulation of tocotrienols as well as study populations on the bioavailability of tocotrienols will be discussed. The biological role of tocotrienols, as minor components in vitamin E, has been largely underestimated despite studies showing their unique physiological functions. In fact, tocotrienols possess similar structures to tocopherols characterized by a chromanol head named by \u03b1, \u03b2, \u03b3 or \u03b4 according to the position and degree of methylation ,2. Tocotet al. and Nesaretnem et al. provided extensive insights into the molecular targets of tocotrienols, especially in cancer and inflammation [Although most studies were conducted in rodents and animals, they serve as a basis for clinical evaluations to establish their health benefits in humans. Several reviews have clearly summarized the unique properties of tocotrienols including their antioxidant, anticancer, cardioprotective and neuroprotective effects to name a few ,18. In aammation ,3,19. Ovet al.[et al. [vs 537\u00a0mg \u03b1-tocopherol). 
The rapid disappearance of tocotrienols in the plasma within 24 hours triggered much debate on the bioavailability of tocotrienols on metabolic effects. This could be partly due to the low affinity of \u03b1-tocopherol transport protein (\u03b1-TTP) for tocotrienols. The repacking of \u03b1-tocopherol in the liver into VLDL cholesterol suggests the longer shelf life and higher concentrations of \u03b1-tocopherol in the plasma. However, it was postulated that \u03b1-tocotrienol may be absorbed via an \u03b1-TTP independent pathway [Evidence on the metabolism of tocotrienols is relatively limited when compared with \u03b1-tocopherol . Only a et al. reportedet al.. A cliniet al.. The fol.[et al. reported pathway . In a bi pathway . It is i pathway . Indeed,et al. [0-\u221e) of tocotrienols were shown to be increased by at least 2-fold in the fed state, corresponding with a decrease in the volume of distribution (Vd). When tocotrienol homologues were analyzed individually, the maximum plasma concentrations (Cmax) for \u03b1-, \u03b3- and \u03b4-tocotrienol reached 1.83, 2.13 and 0.34\u00a0\u03bcg/mL respectively. The significant increase in tocotrienol bioavailability under fed state was most probably due to the increase of TAG after a high fat meal, followed by bile secretion. In another study where tocotrienols were given at higher doses , the peak plasma concentrations of \u03b3-tocotrienol did not seem to increase proportionally, i.e. 2.79, 1.55 and 0.44\u00a0\u03bcg/mL for \u03b1-, \u03b3- and \u03b4-tocotrienol [0-\u221e, tocotrienols administered with self-emulsifying systems were increased by 2 to 4-folds compared to non-emulsified tocotrienols. Although the half-lives of tocotrienols were reported to be approximately 4 to 5-fold lower than that of tocopherols (4\u00a0hours vs 20 hours), a dosing schedule of twice daily is sufficient to reach the steady state within 3 days [Several postprandial studies were also designed to investigate the pharmacokinetics of tocotrienols when administered orally. In Yap et al. , evidentet al. . Compariotrienol . Neverthotrienol . The peaotrienol ,31. In fotrienol describeotrienol . In the n 3 days . ResultsIn an effort to determine the therapeutic window for tocotrienols, a number of long term clinical studies were carried out using TRF and tocotrienol derivatives. The majority of these trials were focused on lipid profile as tocotrienols were found to inhibit HMG-CoA reductase -36. Howeet al. [et al. [et al. [et al. [In 1991, Tan and researchers found that TRF was able to decrease total, HDL, LDL cholesterol and TAG levels at a dose of 42\u00a0mg/day . Severalet al. , Rasool [et al. and Raso [et al. did not [et al. -40. Desp [et al. . When se [et al. reported [et al. . Marked 4) and platelet aggregation were also suppressed after 28 days of supplementation. Contrary to Qureshi\u2019s findings, Mensink et al. [et al. [Although TRF is a common term used to describe a mixture of vitamin E rich in tocotrienols, several variations in composition and formulation of TRF are available in the market Table\u00a0. Using pk et al. reportedk et al. . Similar [et al. using se [et al. . However [et al. .25 and lovastatin (HMG-CoA reductase inhibitor) was used [On the other hand, tocotrienols extracted from rice bran oil were found to significantly reduce total, LDL cholesterols, apolipoprotein B-100 and lipoprotein(a) levels, consequently increasing the HDL/total cholesterol and HDL/LDL cholesterol ratios ,52. A mowas used . It shouwas used ,57. Adhewas used . 
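Stepping back to the pharmacokinetic parameters discussed earlier in this section (Cmax, Tmax, AUC0-inf, volume of distribution, half-life), the sketch below shows, on an invented concentration-time profile, how such non-compartmental values are typically obtained. The numbers are hypothetical and only chosen to be of the same order as those quoted above; they are not data from the cited trials.

```python
# Non-compartmental sketch on a hypothetical plasma tocotrienol profile.
# The profile is invented; it is not data from any of the cited studies.
import math

times = [0, 1, 2, 4, 6, 8, 12]                    # hours after an oral dose
conc  = [0.0, 0.9, 1.8, 1.2, 0.7, 0.4, 0.21]      # ug/mL (hypothetical)

cmax = max(conc)
tmax = times[conc.index(cmax)]

# AUC(0-t) by the linear trapezoidal rule.
auc_0_t = sum((conc[i] + conc[i + 1]) / 2 * (times[i + 1] - times[i])
              for i in range(len(times) - 1))

# Terminal slope from the last two points (log-linear decline assumed; in practice
# several terminal points would be regressed), then extrapolation to infinity.
lambda_z = (math.log(conc[-2]) - math.log(conc[-1])) / (times[-1] - times[-2])
half_life = math.log(2) / lambda_z
auc_0_inf = auc_0_t + conc[-1] / lambda_z

print(f"Cmax {cmax} ug/mL at Tmax {tmax} h")
print(f"AUC(0-t) {auc_0_t:.2f} and AUC(0-inf) {auc_0_inf:.2f} ug*h/mL")
print(f"Terminal half-life {half_life:.1f} h")    # ~4.3 h here, close to the 4-5 h quoted above
```

With a half-life in the 4-5 h range, more than 95% of steady state is reached within about five half-lives, which is consistent with the statement above that a twice-daily schedule reaches steady state within three days.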
Neverthwas used . Howeverwas used .25 was extracted from rice bran oil having 17% to 21% of d-desmethyl and d-didesmethyl-tocotrienols which are not present in palm-based TRF [As summarized in Table\u00a0ased TRF ,52,53. Iet al. [A number of clinical trials were conducted to examine the multi-faceted health benefits of tocotrienols in different populations. The bioavailability and efficacy of TRF may vary in different populations. Chin and coworkers comparedet al. reportedet al. . Neverthet al. [The role of tocotrienols in the modulation of immune system was first established in animal models in 1999 -61. Follet al. reportedet al. . After 1et al. [et al. [in vitro and in vivo[In addition, there were several studies investigated the therapeutic efficacy of TRF supplementation on chronic diseases. Tomeo et al. investiget al. . By suppet al. . The datet al. . Among t [et al. reported [et al. . Followid in vivo-68. The d in vivo. The pild in vivo. Althougd in vivo. This stSummarizing the above studies, tocotrienols seemed to respond differently to a range of age groups but did not show consistent efficacies in the target study populations. Most of the studies conducted in patients with chronic diseases had relatively small sample size. This demonstrates the need to conduct randomized controlled trials in larger population to confidently evaluate the therapeutic potentials of tocotrienols.in vitro and animal studies collectively [Despite the lack of knowledge in the bioavailability of tocotrienols, several reviews in the past have ascertained the physiological functions of tocotrienols derived from both ectively ,8,19,70.ectively ,72. Withectively . ImagingIn the context of bioavailability, there are convincing evidence that tocotrienols are detectable at appreciable levels in the plasma after short term and long term supplementations. There is insufficient data on the reference range of plasma concentrations of tocotrienols that are adequate to demonstrate significant physiological effects. Although the pharmacokinetics of tocotrienols are distinctly different from tocopherols which are well studied and remained longer in blood circulation, biodistribution study showed considerable accumulation of tocotrienols in vital organs. In the perspective of therapeutic efficacy, it is evident that the outcome of clinical evaluations is not only affected by the bioavailability of tocotrienols, but also closely dependent on the study designs. In view of the limited understanding, more comprehensive studies in the mechanisms of absorption are warranted.The authors declare that they have no competing interests.All authors contributed equally to the manuscript. All authors have read and approved the final manuscript."} {"text": "The present contribution reviews arguments that this type of induction may have principal interest outside this particular example. One principal effect of medical interest may be that cancer cells will not so easily adapt to the synergistic effects from induction of more than one death pathway as compared to induction of only apoptosis. This work proposes to use the marine algal toxin yessotoxin (YTX) to establish reference model experiments to explore medically valuable effects from induction of multiple cell death pathways. YTX is one of few toxins reported to make such induction. It is a small molecule compound which at low concentrations can induce apoptosis in primary cultures, many types of cells and cell lines. 
It can also induce a non-apoptotic form of programmed cell death in BC3H1 myoblast cell lines. Programmed cell death mechanisms are significant for development, maintenance and self-regulation of multicellular organisms . Cell-inet al. [et al. [Galluzzi et al. recently [et al. . Morphol [et al. . These tProgrammed cell death is an integral part of complex living systems which typically possess redundancy and a capacity of adaptation. Different biochemical mechanisms within programmed cell death may overlap or act as hidden backups for events to take place. Such systems may not in practice be fully observable. Reference model experiments and preliminary generic views may though support their description. The present work provides photographic illustrations indicating diversity in how cells can respond to insults affecting cell death programmes. Such variations may be important for understanding cell death mechanisms. It reflects redundancy and plasticity of cell signalling pathways. Programmed cell death mechanisms vary and they are context dependent . ApoptosKnowledge about the distinct regulatory signalling pathways that can be induced during distinct cell death modalities helps to understand the development of some diseases. Dysregulation or defects in execution of programmed cell death mechanisms can cause cancer and neurodegenerative diseases. Disruption of apoptotic functions can in some cases accelerate transformation of a normal cell into a tumour cell . TumoursYessotoxin (YTX) is a marine algal toxin that can induce programmed cell death at nanomolar concentrations in different model systems ,20,21,22Administration of two different insults inducing for example apoptosis and paraptosis, respectively, may not have the same capacity to kill cancer cells as compared to an insult that can induce both apoptosis and paraptosis. It is presumably harder for a cell to survive after exposure to multiple simultaneous death-inducing stimulus than it is to withstand similar but non-simultaneous ones (as would be the case if the cells where subject to two different drugs with different response/exposure times). A design of a cancer drug may utilise this idea. Therefore, elucidation of the molecular mechanisms and biochemical markers implicated in multiple forms of programmed cell death induced by yessotoxin may help to develop therapeutic approaches related to conditions characterised by excessive cell death or by excessive cell death accumulation. Apoptotic cell death pathways and their biochemical mediators and regulators are relatively known. However, similar detailed knowledge is missing for non-apoptotic forms of programmed cell death mechanisms ,26,27. TActivation of both caspase-dependent and caspase-independent signalling pathways under the same death stimuli has been reported in cases where deletion of damaged cells needs execution of complementary programmes to assure cell death ,31. ApopNon-apoptotic caspase-independent cell death programmes may activate in an apoptotic resistant cancer cell. Such programmes may employ death-inducing proteases such as calpains and cathepsins. These proteases can execute cell death independently of the apoptotic machinery, and they may take a dominant role in the progression of tumour development in some cell types by triggering the TNF-cell death signalling pathway ,37. Non-Alternative cell death signalling pathways can ensure safe elimination of unwanted cells. 
This redundancy of cell removal may protect organisms against development of diseases such as cancer where huge numbers of mutations and failed cell divisions occur during the life span . Cancer YTX can activate different cell death programmes in BC3H1 myoblast cell lines ,23. Figuet al. [et al. [in vitro biochemical assays [Korsnes et al. and Kors [et al. provide [et al. . Autopha [et al. . Autopha [et al. ,41. Paral assays . Inductiet al. [Sperandio et al. reportedet al. ,24. Multet al. ,45,46,47Receptors involved in mediating cell death may also execute the apoptotic or the paraptotic pathway, or both pathways . Furtheret al. [et al. [et al. [YTX is already suggested for different therapeutic approaches ,49. L\u00f3peet al. proposedEvaluation of programmed cell death induction by YTX with different model cellular systems will increase understanding about YTX\u2019s mechanisms of action and cross-talk among signalling pathways involved in cell death. For example, induction of the apoptotic cell death receptor pathway under YTX exposure is still undetermined and it could be a new potential therapeutic use regarding the apoptotic pathway. Targeting apoptosis by employing the death receptor pathway constitutes a potential therapeutic strategy in many cancer cells, since cancer cells are known to their resistance to apoptosis induction. There are currently few reports of chemical agents which can activate both apoptosis and paraptosis in cells. Simultaneous activation may occur under the same insult and in the same cell population ,55,56. Iet al. [It has been suggested that multiple cell death programmes can be activated in the same cell population. The dominant cell death phenotype is determined by the relative speed of the available cell death programmes. Although characteristics of several cell death pathways can be displayed, only the fastest and most effective death pathway is usually evident ,59. Howeet al. suggesteet al. [Sperandio et al. reportedScreening for novel molecules that can trigger simultaneously programmed cell death programmes may be a current pharmacological challenge. However, the absence of model systems that can be useful for the identification of the molecular mechanisms involved in different types of programmed cell death has limited their identification. Many natural chemical compounds can at low concentrations interfere with well-conserved cell signalling pathways. This ability makes them a resource for future therapeutics. Yessotoxin is such a natural small molecule compound which can induce distinct programmed cell death mechanisms in several model cell systems. The induction seems to be concentration-dependent and cell-specific. The actual cell death modalities may involve cross-talk between several signalling pathways. This is though still not clarified . Knowled"} {"text": "Studies investigating the association between Hepatitis B virus (HBV) and hepatitis C virus (HCV) infections and intrahepatic cholangiocarcinoma (ICC) have reported inconsistent findings. We conducted a meta-analysis of epidemiological studies to explore this relationship.A comprehensive search was conducted to identify the eligible studies of hepatitis infections and ICC risk up to September 2011. Summary odds ratios (OR) with their 95% confidence intervals (95% CI) were calculated with random-effects models using Review Manager version 5.0.Thirteen case\u2013control studies and 3 cohort studies were included in the final analysis. 
The combined risk estimate of all studies showed statistically significant increased risk of ICC incidence with HBV and HCV infection . For case\u2013control studies alone, the combined OR of infection with HBV and HCV were 2.86 and 3.63 , respectively, and for cohort studies alone, the OR of HBV and HCV infection were 5.39 and 2.60 , respectively.This study suggests that both HBV and HCV infection are associated with an increased risk of ICC. Clonorchis sinensis and Opisthorchis viverrini)[Intrahepatic cholangiocarcinoma (ICC), which originates from intrahepatic bile ducts, is the second commonest primary hepatic tumour behind hepatocellular carcinoma (HCC), accounting for 3% of all gastrointestinal cancers worldwide . The etiiverrini) and expoiverrini), has beeIt has been shown that Hepatitis B virus (HBV) and hepatitis C virus (HCV) infections are the major causative agent for HCC . SeveralTo identify the relevant literature, searches of PubMed, Embase, Ovid, Cochrane Library, and Scopus database for articles on ICC associated with HBV or HCV infection were conducted up to September 2011. The following MeSH search headings were used: \u2018hepatitis B virus\u2019,\u201d \u201chepatitis C virus,\u201d \u201cbile duct neoplasms\u201d, and \u201ccholangiocarcinoma\u201d. Reference lists of all retrieved articles were manually searched for additional studies. Serum HBsAg and hepatitis C antibody were used as the positive markers of chronic hepatitis virus infection.The inclusion criteria in the meta-analysis are as follows: published full-text report in English language, studies provided sufficient data to calculate the risk estimates with its corresponding 95% confidence interval (CI) of ICC associated with HBV or HCV infection.Abstracts, letters, editorials and expert opinions, reviews without original data, case reports and studies lacking control groups were excluded. The following studies were also excluded: 1) those evaluating patients with HCC or liver metastase; 2) those with incomplete raw data; 3) those with repetitive data.Two reviewers (B.L. and Y.Z.) independently extracted the following parameters from published studies: the name of the first author, publication year, study design, the country in which the study was conducted, sample size, prevalence of HBV or HCV seropositivity in cases and in a control group or in cohort, and OR or hazard ratios (HR) estimates with 95% CI for HBV or HCV infection and ICC.The methodological quality of the included studies assessed using a three-items scoring system measured by the study design , sample size , and reported outcomes of interest . Studies having a score of 2 were considered to be of high quality.\u03c72 test. Publication bias was assessed visually using a funnel plot. All analyses were conducted using Review Manager (RevMan) software 5.0.The literature review refered to PRISMA statement standards.We extracted adjusted OR and HR with 95% CI from the included studies. Summary OR was estimated using random-effects models. Heterogeneity was calculated by means of Q test and Figure\u2009Among the 3 cohort studies, one was from United States , one froThe two reviewers had 100% agreement in their reviews of the data extraction.P\u2009<\u20090.001 and OR\u2009=\u20093.42, 95% CI,1.96-5.99, P\u2009<\u20090.001, respectively). When case\u2013control studies were analyzed alone, the combined OR for the association of HBV and HCV infection with the risk for ICC were 2.86 and 3.63 , respectively. 
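The pooled estimates above were obtained with random-effects models in Review Manager 5.0; such pooling is conventionally performed with the DerSimonian–Laird inverse-variance approach, sketched below. The odds ratios and confidence intervals used in the example are hypothetical placeholders, not the data of the included studies.

```python
import math

def pooled_or_random_effects(odds_ratios, conf_ints):
    """DerSimonian-Laird random-effects pooling of study-level odds ratios.
    `conf_ints` holds (lower, upper) 95% CI bounds, from which standard errors
    of the log-ORs are back-calculated."""
    y = [math.log(o) for o in odds_ratios]                       # log odds ratios
    se = [(math.log(u) - math.log(l)) / (2 * 1.96) for l, u in conf_ints]
    w = [1.0 / s ** 2 for s in se]                               # fixed-effect (inverse-variance) weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))    # Cochran's Q (heterogeneity)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                      # between-study variance
    w_re = [1.0 / (s ** 2 + tau2) for s in se]                   # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    ci = (math.exp(y_re - 1.96 * se_re), math.exp(y_re + 1.96 * se_re))
    return math.exp(y_re), ci, q

# Hypothetical study-level odds ratios with 95% CIs (placeholders, not the included studies):
pooled, ci, q = pooled_or_random_effects([2.5, 4.1, 1.8],
                                         [(1.4, 4.5), (2.0, 8.4), (0.9, 3.6)])
print(f"pooled OR = {pooled:.2f}, 95% CI = {ci[0]:.2f}-{ci[1]:.2f}, Q = {q:.2f}")
```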
When cohort studies were analyzed alone, the combined OR of HBV and HCV infection were 5.39 and 2.60 , respectively. All these results were significant heterogeneous (P\u2009<\u20090.001) and HCV infection were associated with significant increased risk of ICC development.Analysis of these studies ,17,20-22A \u201cfunnel plot\u201d of the studies in the meta-analysis reporting HBV infection and ICC is shown in Figure\u2009Meta-analysis was employed by a recently published study to estimate the correlation between hepatitis virus infection and the risk of ICC and extrahepatic cholangiocarcinoma (ECC) . Becauseet al.[In this meta-analysis, we found positive relationship between HBV/HCV infection and the development of ICC. This conclusion is further supported by the evidences from a series of experimental studies. In a study from the United States, HBV and/or HCV DNA was present in 3 (27%) out of 11 ICC tissue samples obtained at the time of surgical resection . Anotheret al. found thet al.[et al.[et al.[et al.[The mechanism of the development of ICC in patients with HBV or HCV infection is still uncertain. HBV and HCV infection can cause chronic inflammation of the liver. Indeed, chronic inflammation and cancer are closely associated. In this context, long-term expression of several viral oncoproteins, mostly the HCV core protein and HBx protein, might participate in the tumourigenic process. Battaglia et al. found thl.[et al. reportedl.[et al.. There al.[et al.. Clinical.[et al. reportedl.[et al. also foul.[et al.. Taking et al.[et al.[et al.[Some researchers suggest that hepatocytes and cholangiocytes might originate from hepatic progenitor cells (HPCs) . Therefoet al. speculatl.[et al. found thl.[et al.. Viral hl.[et al.,36. One l.[et al.. Indeed,l.[et al. demonstrl.[et al..ICC is characterized by wide variability in incidence and risk factors. HCV seems to be associated with ICC in regions with relatively low prevalence of HBV infection such as United States, Italy and Japan. In contrast, several studies from China ,18,19, aOur study also has some weaknesses which should be considered when interpreting results. First, seropositivity for HBsAg and anti-HCV was used as the sole indicator of HBV and HCV infection. It seems that occult HBV and HCV infection may also play a role in the development of HCC . TherefoIn conclusion, in this meta-analysis of 13 case\u2013control studies and 3 cohorts, we found that HBV and HCV infection are associated with an increased risk of ICC.Yanming Zhou, Yanfang Zhao contributed equally.The authors declare that they have no competing interests.YZ participated in the design and coordination of the study, carried out the critical appraisal of studies and wrote the manuscript. BL, LW, JH, and DX developed the literature search, carried out the extraction of data, assisted in the critical appraisal of included studies and assisted in writing up. YZ and JH carried out the statistical analysis of studies. YZ and JY interpreted data, corrected and approve the manuscript. All authors read and approved the final manuscript.The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1471-2407/12/289/prepub"} {"text": "In March 2012 Henry et al. published a paper that explored whether or not the consumption of thiamethoxam via nectar could be a causal factor of Colony Collapse Disorder (CCD) in honeybees. In the first part of their report, Henry et al. 
for a crw (the larger w the lower the eclosion rate), and the forager mortality rate or forager homing failure (m). In Khoury et al.'s . mhf was then used to increase the value of m for population projection under a dietary thiamethoxam exposed scenario, compared to the \u201cnormal\u201d homing failure m postulated in the non-thiamethoxam exposed scenario. In my previous critic of Henry et al. .\u201dIn order to model the population dynamic under dietary thiamethoxam exposure, Henry et al.'s undertaky et al. I pointemhf) is correct. In this note I show that the calculated mhf value is largely impacted by assumptions, and that regardless of which assumption is taken, Henry et al. constant forager death rate with no forager exposure, and (ii) forager death rate raised by post-exposure homing failure In their supplemental material Henry et al. write asmhf was used by Henry et al. (mhf represents the post-exposure homing failure.From this statement it is clear that: (i) y et al. to raisemhf is expressed as a proportional decrease in homing success in the treatment group relative to the control, as highlighted in the comment of Henry and Decourtye = 0.57 individuals.day\u22121. However, these homing success rates also mean that 0.17 and 0.43 individuals.day\u22121 failed to return in the respective control and treatment groups ([homing failure] = 1 \u2212 [homing success]). This translates into a 1.55 fold (155%) additional increase in homing failure rate relative to the control , and represents a treatment homing failure of 0.43 individuals.day\u22121:mhf expresses the proportional increase in homing failure in the treatment given the control it should be calculated as follows:mhf should equal 0.59 and 1.55 for Experiments 1 and 2 respectively, not 0.102 and 0.310 as stated in Henry et al. \u201d\u201c (emphasis added). On the contrary, it assumes that the additional mortality due to pesticide exposure is proportional to all other sources of mortality or homing failure. This is also regardless of the fact that Henry et al. larger following pesticide exposure than the m parameter that is used under non pesticide exposure conditions.Taking Experiment 2 in Henry et al. as an exy et al. :(1)mhf=y et al. . The mhfy et al. are thery et al. claim thy et al. failed ty et al. . Thus, iy et al. need to et al.'s populatimhf is the homing failure that is solely induced by pesticide treatment, and given Equations 4 and 5, then:mhf does indeed [\u2026] estimates the proportion of exposed foragers that might disappear due solely to post-exposure homing failure, all other sources of mortality or homing failure set apart \u201d\u201c as suggested by Henry et al. .The homing failure of the control group reflects normal attrition or mortality and the homing failure attributable directly to experimental stress. For instance:\u22121, mhf is also expressed in individuals.day\u22121. In using Equation 2 to calculate mhf, we assume that pesticide exposure increases homing failure by a set amount and is not proportional to any \u201cnormal\u201d homing failure. If so, the m parameter of the Khoury et al. ]. In this case we assume that most of the homing failure observed in the control is due to natural predation. However, if we choose Equation 3 we assume that the homing failure attributable to pesticide exposure is not only a fixed value in function of the dose of pesticide, but is proportional to the level of the \u201cnormal\u201d homing failure [assumption (b)]. 
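The difference between the two readings can be made concrete with the Experiment 2 homing probabilities quoted above (0.83 in the control group versus 0.57 in the treatment group). The short sketch below simply restates that arithmetic and labels each quantity by the corresponding assumption; it does not reproduce the published equations themselves, and small discrepancies with the published 1.55 figure reflect rounding of the quoted probabilities.

```python
# Homing probabilities for Experiment 2, as quoted in the text
control_success, treatment_success = 0.83, 0.57
control_failure = 1 - control_success        # ~0.17 individuals.day^-1 fail to return
treatment_failure = 1 - treatment_success    # ~0.43 individuals.day^-1 fail to return

# Henry et al.'s figure: proportional decrease in homing *success* relative to the control
mhf_success_based = (control_success - treatment_success) / control_success   # ~0.31

# Assumption (a): exposure adds a fixed amount of homing failure
mhf_additive = treatment_failure - control_failure                            # ~0.26

# Assumption (b): exposure increases homing failure in proportion to the
# control-group ("normal") failure rate
mhf_proportional = (treatment_failure - control_failure) / control_failure    # ~1.53

print(mhf_success_based, mhf_additive, mhf_proportional)
```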
For example, we would assume that most of the natural homing failure is due to an aging population of foragers which are more susceptible to the effects of pesticides in comparison to young foragers. We favor assumption (a), although others may disagree.The choice of Equations 2 or 3 as the basis for the calculation of \u22121) is far removed from the experimentally determined control homing probability. For example, let us imagine that due to a heightened experimental stress the control homing probability is only 0.7 individuals.day\u22121 and the treatment homing probability is 40% less . If we use assumption (a), after 30 days of exposure we project more than 4000 less individuals in the hive than if we use assumption (b) ,\u201d mhf should have been calculated using Equation 3 as previously described in Guez (mhf, the mhf value presented by Henry et al. (This contribution highlights the influence of assumptions on model projections and the importance of making explicit not only any assumptions, but also how these influence outputs within a given model. It also highlights that the formula used to calculate y et al. (Equatioy et al. critiquey et al. are erroy et al. claimed in Guez . Nonethe in Guez when caly et al. is errony et al. model prI would like to thank C. Conway for her feedback during the elaboration of this paper."} {"text": "In detail, we describe the basic principles and pitfalls of searching mass spectral reference libraries. Determining the molecular formula of the compound can serve as a basis for subsequent structural elucidation; consequently, we cover different methods for molecular formula identification, focussing on isotope pattern analysis. We then discuss automated methods to deal with mass spectra of compounds that are not present in spectral libraries, and provide an insight into de novo analysis of fragmentation spectra using fragmentation trees. In addition, this review shortly covers the reconstruction of metabolic networks using MS data. Finally, we list available software for different steps of the analysis pipeline.The identification of small molecules from mass spectrometry (MS) data remains a major challenge in the interpretation of MS data. This review covers the Mass spectrometry (MS) is a key analytical technology for detecting and identifying small biomolecules such as metabolites -3. It isDENDRAL project that started back in 1965 . Un. Un146].et al as the ranking results are very close to that of a random number generator.\u201d Later, Kumari et al[Mass Frontier that is similar to the one for tandem MS data [Mass Frontier 6 for spectrum prediction, the correct structure was reported in 73% within the TOP 5 hits.For the interpretation of tandem mass spectra, Hill et al proposedder et al used ACDski et al comparedari et al implemen MS data , but int MS data . They fi MS data . Using MIt is worth mentioning that rule-based systems did not have much success in proteomics: There, it is apparent from the very beginning that, in view of the huge search space, only optimization- and combinatorics-based methods can be successful.The problem with rule-based fragmenters is that even the best commercial systems cover only a tiny part of the rules that should be known. Constantly, new rules are discovered that have to be added to the fragmentation rule databases. 
However, all of these rules do not necessarily apply to a newly discovered compound.Sweeney observedEPIC (elucidation of product ion connectivity) [FiD) [ctivity) was the y) [FiD) ,203 enumy) [FiD) , runningMetFrag[in silico fragments currently works only for lipids and is not modeling rearrangements of atoms and bonds. Different from the other approaches, ISIS simulates the spectrum of a given lipid, and does not require experimental data to do so.One problem of combinatorial fragmenters is how to choose the costs for cleaving edges (bonds) in the molecular structure graph. For this, used in . Kangas in [et al used macMOLGEN-MS and MetFrag, and further characteristics . Ludwig et al[Many of the above mentioned techniques are rather complementary yielding different information on the unknown compound. Combining the different results will therefore greatly improve the identification rates. For EI fragmentation data, used a cwig et al proposedde novo sequencing and dereplication of NRPs have been established [Usually the structure of small molecules cannot be deduced from the genomic sequence. However, for particular molecules such as nonribosomal peptides (NRPs) a certain predictability has been established . NRPs arablished ,211,214.vice versa. This fact has been exploited repeatedly, see for example [If we want to assign molecular formulas to the precursor and product ions, we may use the formula of the precursor to filter bogus explanations of the product ions, and example ,146 and spectral trees for multiple stage mass spectrometry [multistage mass spectral trees of Rojas-Cherto et al[n spectra, but do not contain any additional information. We stress that all computational approaches described below target tandem MS, unless explicitly stated otherwise. To compute a fragmentation tree, we need neither spectral libraries nor molecular structure databases; this implies that this approach can target \u201ctrue unknowns\u201d that are not contained in any molecular structure database.Fragmentation trees must not be confused with trometry , or the rto et al et al[et al[B\u00f6cker and Rasche introducet al computedet al and EI fet al were fou al[et al,220 compet al[FT-BLAST . FT-BLAST also offers the possibility to identify bogus hits using a decoy database, allowing the user to report results for a pre-defined False Discovery Rate. Faster algorithms for the computationally demanding alignment of fragmentation trees were presented in [FT-BLAST results were parsed for \u201ccharacteristic substructures\u201d in [et al[To further process fragmentation trees, Rasche et al introducented in . FT-BLASures\u201d in . Rojas-Cin [et al presentein [et al.known compound, and represent hydrophobic fragments and functional groups of the compound, and the way these groups are linked together.Aligning fragmentation trees is similar in spirit to the feature tree comparison of Rarey and Dixon . Featurede novo reconstruction of networks from metabolite mass spectrometry data.Network elucidation based on mass spectrometry data is a wide field. On the one hand, detailed information like quantitative fluxes of the network is achieved by metabolic flux analysis. Here, based on isotope labeled compounds, the flux proceeding from these compounds can be tracked. On the other hand, measured metabolites can be mapped on a known network. This can elucidate distinct metabolic pathways that are differentially \u201cused\u201d dependent on environmental conditions. 
Both of these variants require previous known metabolic network graphs. In this section, we will only cover the pure The reconstruction of networks solely from metabolic mass spectrometry data is a very young field of research. It can be subdivided into two main approaches: either the network reconstruction is based on metabolite level correlation of multiple mutant and wild type samples, or on data from only one sample by using information of common reactions or similarity between metabolites.et al[et al[et al[et al. 2011 [A first approach that used metabolite mass spectrometry data of multiple expressed samples was introduced by Fiehn et al. Their m al[et al and Kose al[et al develope al[et al. The disal. 2011 suggesteet al[et al[In 2006, Breitling et al reconstr al[et al used a set al[Pseudomonas sp. SH-C52 that has an antifungal effect and protects sugar beet plants from infections by specific soil-borne fungi.Watrous et al used addAMDIS[MathDAMP[TagFinder[MetaboliteDetector[TargetSearch[Metab[AMDIS. PyMS[ADAP-GC 2.0 [et al. 2011 [Several open source, or at least freely available, software packages assist with processing and analyzing GC-MS metabolomics data. The freely available AMDIS is the m[MathDAMP helps wiTagFinder,234 suppeDetector detects getSearch iterativrch[Metab is an R DIS. PyMS compriseP-GC 2.0 helps wial. 2011 developeXCMS[XCMS2[XCMS Online[AStream[MetSign[CAMERA[XCMS feature lists and integrates algorithms to extract compound spectra, annotate peaks, and propose compound masses in complex data. MetExtract[IDEOM[XCMS[mzMatch.R[et al[For LC-MS data, XCMS enables CMS[XCMS2 additionCMS[XCMS2 caused bMS Online is the we[AStream enables m[MetSign providesgn[CAMERA is desigetExtract detects act[IDEOM filters DEOM[XCMS and mzMamzMatch.R, enablesmzMatch.R,250 and h.R[et al presenteMZmine[MZmine2[MET-IDEA[MetAlign[For both, GC-MS and LC-MS data, MZmine and MZmie[MZmine2 allow fo[MET-IDEA proceeds[MetAlign is capabTo compare the power of these software packages, an independent validation would be desirable. But up to now, there exists no such comparison. One reason is the limited amount of freely available mass spectra, see Section \u201cConclusion\u201d. Another reason is that some of the packages are developed for special experimental setups or instruments, and have to be adapted for other data, what makes an independent validation difficult.de novo method is able to elucidate the structure of a metabolite solely from mass spectral data. They can only reduce the search space or give hint to the structure or class of the compound. Computational mass spectrometry of small molecules is, at least compared to proteomics, still very much in a developmental state. This may be surprising, as methods development started out many years before computational mass spectrometry for proteins and peptides came into the focus of bioinformatics and cheminformatics research [No computational research ,185. Butresearch ,21 and la. CASMI is a contest in which GC-MS and LC-MS data is released to the public, and the computational mass spectrometry community is invited to identify the compounds. Results and methods will be published in a special issue of the Open Access MDPI journal Metabolites. This is a first step towards reliable evaluation of different computational methods for the identification of small molecules. 
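Of the two reconstruction strategies described above, the correlation-based one is the simplest to illustrate: metabolite intensity profiles measured across many samples (e.g., mutants and wild type) are correlated pairwise, and strong associations are kept as network edges. The sketch below is a generic illustration with made-up data; it is not the algorithm of any of the cited tools.

```python
import numpy as np

def correlation_network(intensities, names, threshold=0.8):
    """Build a simple metabolite association network by thresholding pairwise
    Pearson correlations of intensity profiles (rows = metabolites, columns = samples)."""
    corr = np.corrcoef(intensities)
    edges = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) >= threshold:
                edges.append((names[i], names[j], round(float(corr[i, j]), 3)))
    return edges

# Toy data: 4 metabolites measured in 6 samples (values are made up)
rng = np.random.default_rng(0)
profiles = rng.normal(size=(4, 6))
profiles[1] = 1.2 * profiles[0] + 0.05 * rng.normal(size=6)  # plant one strong association
print(correlation_network(profiles, ["m1", "m2", "m3", "m4"]))
```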
Lately, the importance of computational methods has gained more attention in small molecule research: Citing Kind and Fiehn [In metabolomics, a comparative evaluation of methods is very limited due to restricted data sharing. Recently, a first benchmark test for small molecules was provided as part of the CASMI challengend Fiehn , \u201cthe ulWith the advent of novel computational approaches ,206,207,aCritical Assesment of Small Molecule Identification, http://casmi-contest.org/.The authors declare that they have no competing interests.KS wrote the Sections \u201cMolecular formula identification\u201d and \u201cNetwork reconstruction\u201d. FH wrote the Sections \u201cSearching spectral libraries\u201d and \u201cIdentifying the unknowns\u201d. SB wrote the Section \u201cFragmentation trees\u201d. All authors read and approved the final manuscript."} {"text": "Neuropsychiatric disorders represent the second largest cause of morbidity worldwide. These disorders have complex etiology and patho-physiology. The major lacunae in the biology of the psychiatric disorders include genomics, biomarkers and drug discovery, for the early detection of the disease, and have great application in the clinical management of disease. Indian psychiatrists and scientists played a significant role in filling the gaps. The present annotation provides in depth information related to research contributions on the molecular biology research in neuropsychiatric disorders in India. There is a great need for further research in this direction as to understand the genetic association of the neuropsychiatric disorders; molecular biology has a tremendous role to play. The alterations in gene expression are implicated in the pathogenesis of several neuropsychiatric disorders, including drug addiction and depression. The development of transgenic neuropsychiatric animal models is of great thrust areas. No studies from India in this direction. Biomarkers in neuropsychiatric disorders are of great help to the clinicians for the early diagnosis of the disorders. The studies related to gene-environment interactions, DNA instability, oxidative stress are less studied in neuropsychiatric disorders and making efforts in this direction will lead to pioneers in these areas of research in India. In conclusion, we provided an insight for future research direction in molecular understanding of neuropsychiatry disorders. Neuropsychiatric disorders represent the second largest cause of morbidity worldwide. These disorders have complex etiology. The genetic linkage is only 10% while remaining 90% are sporadic in nature. The World Health Organization has estimated that neuropsychiatric disease burden comprises 13% of all reported diseases. The psychiatric disorders include major depression, anxiety, schizophrenia, bipolar disorder, obsessive-compulsive disorder, alcohol and substance abuse, and attention-deficit hyperactivity disorder. Approximately one in five Americans experience an episode of a psychiatric illness such as schizophrenia, mood disorder (depression and bipolar disorder) or anxiety and a similar situation is predicted to be prevailing in developing countries too. The Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) defines the criteria for a wide array of mental illnesses. Most Psychiatrists use it as a basis for diagnosis. 
But diagnosis is still a challenge as many disorder episodes are overlapping.The prevalence of disorders, and their economics and societal aspects has to be understood through research programs aimed at elucidating the etiologies and pathophysiological mechanisms of these devastating disorders. The final goal will be clear diagnosis, management and drug discovery.Major depression is an affective disorder and the symptoms include feelings of profound sadness, worthlessness, despair and loss of interest in all pleasures. The individuals having depression also experience mental slowing, a loss of energy and an inability to make decisions or concentrate. The symptoms can range from mild to severe, and are often associated with anxiety and agitation. WorldwidBipolar disorder is a chronic mental illness, which begins, in adolescence or early adulthood. The typical bipolar disorders initiates between 15 and 25 years. The bipolar disorder onsets with initial episode of depression; these episodes of depression go as undiagnosed and are not treated appropriately. Bipolar Disorder includes Major Depressive Episodes and Hypomanic episodes. The significant portion of people suffering from manic depression did not have full manic episodes. The classification was divided into Bipolar I and Bipolar II. However, Bipolar II is often a first step to Bipolar I. It is seen that over five years, between 5 and 15% of those with Bipolar II change diagnosis to Bipolar I. Approximately 0.5% of people develop Bipolar II in their lifetimes. The psychobiology of therapeutic approach to anxiety disorders is still complex.). In China the prevalence was only half in India and, rural Latin America, a quarter or less of the European prevalence in rural India). The 10/66-dementia prevalence was higher than DSM-IV dementia, and more consistent across all sites varying between 5.6% (95% CI 4.2-7.0) in rural China and 11.7% (10.3- 13.1) in the Dominican Republic. The validity of the 847 of 1345 cases of 10/66 dementia not confirmed by DSM-IV was supported by high levels of associated disability (mean WHO Disability Assessment Schedule II score 33.7 [SD 28.6]). This indicates that the DSM-IV dementia criterion might underestimate dementia prevalence, especially in regions with low awareness of this emerging public-health problem.[60 in 11 et al.[There are a few reports on the clinical profiles of young-onset dementia from India. Nandi et al. conducteet al.[et al.[There is an international study linking a relationship between diet and dementia. Albanese et al. found thl.[et al. has contet al.[et al.[The amnestic and multiple domain types dementia are two types of dementia among nondemented and nondepressed elderly subjects aged 50 and older. This is a cross sectional community screening study. The subjects are selected by systematic random sampling for the assessment of cognitive function with the help of a validated cognitive questionnaire battery. The results indicated that prevalence of MCI detected based on neuropsychological testing is 14.89% (95% CI: 12.19 to 17.95). The prevalence of the amnestic type is 6.04% (95% CI: 4.40 to 8.1). The multiple domain type is 8.85% (95% CI: 6.81 to 11.32). The data insights that the amnestic type is more common among men and the multiple domain type among women with advancement of age. This is the first study on the prevalence of amnestic type in developing countries. Also, Das et al. has undeet al.[et al.[Rao et al. reviewedl.[et al. 
reportedet al.[Autism spectrum disorder (ASD) is a of complex neurodevelopment disorders, characterized by social impairments, communication difficulties. It is also characterized by abnormal behavior such as restricted, repetitive, and stereotyped behavioral patterns. The genetic imprinting of certain genes is reported in the autism. The paternal imprinting of HTR2A with expression from only one allele is reported. There are no reports on HTR2A and its association with neuropsychiatric disorders from Indian population. The study showed an association of the above-mentioned markers of HTR2A with Autism spectrum disorder (ASD) in Indian population. The genotyping analyses are carried out for probands, parents and controls. The results indicated that HTR2A is unlikely to be a genetic marker for ASD in Indian population. Recentlyet al. showed aet al.[Sexual dysfunction is a major risk factor for anxiety and depression. The reports of the prevalence of female sexual dysfunction (FSD) are scant. Recently, Singh et al. conducteet al.[et al.[Also, Kumar et al. examinedl.[et al. conducteErectile dysfunction (ED) is a complex condition where in men with minimal organic ED may develop a variable degree of psychogenic component sufficient to reduce the efficacy of medical management. Taneja studied To understand the genetic association of the neuropsychiatric disorders, molecular biology has a tremendous role to play. For example, to understand the inheritance pattern of the particular gene associated with the particular disorder, research in of molecular biology is essential. For example, the role of chromatin modulation in depression is new concept. The alterations in gene expression are implicated in the pathogenesis of several neuropsychiatric disorders, including drug addiction and depression. The development of transgenic neuropsychiatric animal models is a great thrust area. No studies from India in this direction. Biomarkers in neuropsychiatric disorders are of great help to the clinicians for the early diagnosis of the disorders. Gene-environment interactions, DNA instability, oxidative stress are less studied in neuropsychiatric disorders and making efforts in this direction will lead to pioneers in these areas of research in India. The molecular biology of the above mentioned areas would help to understand the neuropsychiatric disorders.In recent years, it has become widely recognized that a comprehensive understanding of chromatin biology is necessary to better appreciate its role in a wide range of diseases. The epigThere are fundamental contributions on genetics and gene-environment interactions in the field of neuropsychiatry from India. However, there are major lacunae in genomics, proteomics, biomarkers, drug discovery and development of animal models for neuropsychiatry disorders is a major thirst all over the word. The following are the major contributions from India.et al.[et al.[Genetics plays an important role in neuropsychiatric research and practice. It is important to understand the exact gene borne influences, mode of inheritance and establishing relationship between gene product and syndromes. Genetic markers like mini satellites, tandem repeat sequences to establish linkages are of great advantage. Rao et al. first col.[et al. studied et al.[LRRK2 gene among Indian PD patients, using PCR-RFLP method. They analyzed G2019S, R1441C, R1441G, and R1441H mutations in 800 PD and 212 controls, I2012T and I2020T mutations in 748 PD patients. 
LRRK2 gene encodes Leucine-rich repeat kinase 2, associates with the mitochondrial outer membrane. The authors did not find any of the above mutations, except in one female young onset PD patient who has a heterozygous G2019S mutation. They concluded that LRRK2 mutations may be a rare cause of PD among Indians. Dopaminergic pathway has been widely implicated in the pathophysiology of PD. Juyal et al.[Parkin gene is implicated in the PD, associated with mitochondrial dysfunction. Parkin gene encodes for Parkin protein (subunit of E3 ubiquitin ligase) involved in the proteosomal degradation. Chaudhary et al.[SNCA gene. Nagar et al.[et al.[et al.[P=0.028) marker, nominally significant genotypic associations for tyrosine hydroxylase rs6356 A/G (P=0.04) and dopamine beta-hydroxylase rs1108580 A/G (P=0.025) following the case-control approach. In dopamine beta-hydroxylase, catechol-O-methyltransferase, and dopamine receptor D (2) genes, several significant haplotypic associations were present. Gupta et al.[P-value = 0.0001).There are limited studies on the role of genes in neurological disorders. Punia et al. studied al et al. investigry et al. analyzedar et al. investigl.[et al. comparedl.[et al.\u201333 Srival.[et al. performeet al.[et al.[et al.[et al. reported a positive linkage and association for psychosis to the chromosomal position 18p11.2.[The study indicated that the interacting effects within the COMT gene polymorphisms may influence the disease status and response to risperidone in schizophrenia patients. Kumar et al. analyzedl.[et al. observedl.[et al. studied 18p11.2.et al.[et al.[Verma et al. identifil.[et al. also fouThere are several key issues that need to be resolved before we consider the clinical use of additional HDAC inhibitors to treat neuropsychiatric disorders such as schizophrenia or unipolar depression.. Recent et al.[The studies on candidate gene polymorphisms in a popula tion are useful for a variety of gene-disease association. A number of candidate genes from the monoaminergic pathway in the brain, have been associated with schizophrenia. In this study, diallelic/multiallelic polymorphisms in some dopaminergic, serotonergic and membrane-phospholipid-related genes have been evaluated in a control population recruited from North India. Eight genes tested association with schizophrenia for only two gene polymorphisms, one in the promoter region of the serotonin 2A receptor gene and the other in the tryptophan hydroxylase gene. One new allele for the dopamine transporter gene , has not been reported in any population. Ravi Kumet al. reportedThe concentration of trytophan, quinolinic acid, kynurenic acid, serotonin and 5-hydroxyindoleacetic acid was found to be higher in the plasma of patients with all these disorders; while that of tyrosine, dopamine, epinephrine and norepinephrine was lower. These observations are highly relevant in understanding the neuropsychiatric disorders.et al.[Fragile X syndrome (FRAXA), caused by the expansion of CGG repeats in the 5\u2019 untranslated region of the fragile X mental retardation 1 (FMR1) gene is one of the most common forms of mental retardation. Due to the CCG repeats FMR1 gene becomes transcriptionally inactive by the hypermethylation. Guruju et al. performeet al.[et al.[et al.[Thelma et al. analyzedl.[et al. analyzedl.[et al. analyzedNeuronal cell death is observed in neurological and neuropsychiatric disorders. Several mechanisms and pathways were proposed for the neuronal cell death in these disorders. 
Gene-environment interactions play an important role in the regulation of neuronal cell death, here we focusing on the genes involved in neurological disorders. DJ-1 gene associated with early onset of neurological disorder Parkinson\u2019s disease (PD) which coupled with dementia. DJ-1 belongs to a family protein which includes transcriptional regulators, proteases and chaperones.et al.[et al.[Saeed et al. showed tet al. A genderet al. Karunakal.[et al. showed cet al.[12 after removing the Al using a chelating drug desferoximine.Glutaredoxin helps in the recovery of complex 1 by regenerating the protein thiols. In the ret al. showed tet al.[P<0.02) in bipolar II hypomania, while in bipolar II depression, Na, K, Cu and Al are increased (P<0.001). Na, Mg, P, Cu, and Al are increased significantly (P<0.002) in bipolar V. In all 3 bipolar groups S (P<0.00001), Fe (P<0.002) and Zn (P<0.004) are decreased in all 3 bipolar groups.Interestingly, they found that even after the removal of Al (CCG)12 repeats remain in Z-DNA conformation. Probably the aluminum which is elevated in Fragile X syndrome people may alter FMR1 gene integrity and altered its expression levels. Mustak et al. estimateThe drugs used for the psychiatric disorders work differently among the patients. Genotype of the individuals affects the response of drugs differently. If genotype of the target molecules for the drugs is known it will be great advantage to the clinician to finalize type of the drug and dosage of the drug. Several people worked on the genotypes of patients for different psychiatric disorders in Indian populations. Cytochrome P450 (P450) enzyme plays an important role drug metabolism in the body. P450 enzymes also present brain and involved in the pharmacological modulation of drugs. Cytochrome P4502D constitutively expressed in the pyramidal neurons of CA1, CA2 and CA3 subfields of hippocampus, cerebral cortex, Purkinje and granule cell layers of cerebellum, reticular neurons of midbrain. Tissue-set al.[et al.[The above studies indicate that Cytochrome P4502D may play a role in the metabolism of psychoactive drugs directly at or near the site of action, in neurons, in human brain. Agarwal et al. (2008) sl.[et al. analyzedl.[et al. observedet al.[et al.[et al.[Tiwari et al. analyzedl.[et al. analyzedl.[et al. studied The major conclusions are:Neuropsychiatric burden is going to be in multifold in the next decadeThere is a great need to apply genetic knowledge base to predict and to manage the brain disordersWe need to discover biomarkers as early predictions of disease and as a endpoints for therapeutic interventionGene-environment interaction is necessary as an etiological feature to understand disease risk factorsMolecular genotyping for therapeutic intervention is the need of the hourLast, but the least, life style and diet as intervention events for diseases need to be exploited"} {"text": "Urinary tract infections (UTIs) are among the most common bacterial infections in humans, both in the community and hospital settings. It is a serious health problem affecting millions of people each year and is the leading cause of Gram-negative bacteremia. 
We previously conducted a study on \u201cUrinary Bacterial Profile and Antibiotic Susceptibility Pattern of UTI among Pregnant Women in North West Ethiopia\u201d but the study did not address risk factors associated with urinary tract infection so the aim of the study was to assess associated risk factors of UTI among pregnant women in Felege Hiwot Referral Hospital, Bahir Dar, North West Ethiopia.A total of 367 pregnant women with and without symptoms of urinary tract infection(UTI) were included as a study subject from January 2011 to April 2011. Midstream urine samples were collected and processed following standard bacteriological tests. Data concerning associated risk factors were collected using structured questionnaires and were processed and analyzed using Statistical Package for Social Science (SPSS version 16).Bivarait analysis of socio-demographic characteristics and associated risk factors of UTI showed that family income level (family monthly income level \u2264 500 birr($37.85); P\u2009=\u20090.006, OR\u2009=\u20095.581, CI\u2009=\u20091.658, 18.793 and 501\u20131000 birr ($37.93-$75.70), P\u2009=\u20090.039, OR\u2009=\u20093.429, CI\u2009=\u20091.065, 11.034), anaemia , sexual activity and past history of UTI were found to be factors significantly associated with increase prevalence of UTI. In contrast multiparity, history of catheterization, genitourinary abnormality, maternal age, gestational age and educational status were not significantly associated with UTI among pregnant women.In this study UTI was high among pregnant women in the presence of associated risk factor such as anaemia, low income level, past history of UTI and sexual activity. Urinary tract infection (UTI) is the single commonest bacterial infections of mankind, to help in alleviating the problem we previously conducted a study on UTI which addressed the bacterial profile and antibiotic susceptibility pattern of UTI ,2 but thPregnancy is one of the factors which increase the risk of UTI partly due to the pressure of gravid uterus on the ureters causing stasis of urine flow and is also attributed to the humoral and immunological changes during normal pregnancy . During Though there are few studies -15 conduThe methods were previously described in our study on \u201cUrinary Bacterial Profile and Antibiotic Susceptibility Pattern of UTI among Pregnant Women in North West Ethiopia\u201d except few differences .A hospital based cross sectional study was conducted at Felege Hiwot Referral Hospital (FHRH) from January 2011 to April 2011 to determine associated risk factors of urinary tract infection (UTI) among pregnant women.A pre-designed and structured questionnaire was used for the collection of data on associated risk factors. Collection of information on sign and symptoms of UTI and physical examination of pregnant women were done by Gynecologists and senior nurse.Pregnant women who were taken antibiotics within seven days at the time of recruitment and who were not willing to participate were excluded from this study.Clean catch mid-stream urine samples were collected using sterile, wide mouthed glass bottles with screw cap tops. The pregnant women were also informed to clean their hands with water and then cleanse their periurethral area with sterile cotton swab soaked in normal saline to reduce the risk of contamination. 
Urine specimens were processed in the laboratory within 2 hours of collection and specimens that were not processed within 2 hours were kept refrigerated at 4\u00b0C until it was processed.A calibrated sterile platinum wire loop was used for inoculation of specimens in to the culture media. It has a 4.0 mm diameter designed to deliver 0.01 ml. A loopful of well mixed urine sample was inoculated MacConkey, Manitol Salt Agar and Blood Agar .All plates were then incubated at 37\u00b0C aerobically for 24 hrs. The plates were then examined macroscopically for bacterial growth. The bacterial colonies were counted and multiplied by 100 to give an estimate of the number of bacteria present per milliliter of urine. A significant bacterial count was taken as any count equal to or in excess of 10,000 CFU/ml .Mid-stream urine specimen: - a specimen obtained from the middle part of urine flow: Clean catch urine specimen.Symptomatic UTI refers to patients whose urine is yielding positive cultures (\u2265\u2009105CFU/ml) and who have symptoms referable to the urinary tract.Asymptomatic bacteriuria (ASB) refers to the presence of two consecutive clear-voided urine specimens both yielding positive cultures (\u2265\u2009105CFU/ml) of the same uropathogen, in a patient without urinary symptoms.Maternal Anaemia is defined as haemoglobin concentration less than 11 g/dl.Parity is the number of pregnancy reaching viability or beyond stage of abortion (before 20 weeks/less than 500 g BW).Gestational Age is the age of the fetus estimated by computing from the first day of the last menstrual period (time that precedes conception) until the day of consultation.History of UTI is any history of infection pertaining to the urinary tract diagnosed by a physician.Pregnant women were informed to clean their hands with water and their genital area with swab soaked in normal saline before collection of the clean catch midstream urine samples. All specimens were transported from the hospital to regional laboratory within cold box and those specimens which were not processed with in 2hrs were kept in refrigerator and processed no longer than 18 hours after collection.5CFU/ml of urine were considered significant but specimen who produced < 105 colonies/ml of urine considered insignificant or due to contamination. Culture media were sterilized based on the manufactures instruction. Then the sterility of culture media were checked by incubating 3\u20135% of the batch at 35 \u2013 37\u00b0C overnight and observed for bacterial growth. Those media which showed growth were discarded. The standard reference strains; Staphylococcus aureus (ATCC25923), Escherichia coli (ATCC25922) and P. aeruginosa (ATCC 27853) were used for testing quality of culture media.Only specimens which produced \u2265\u200910The study was conducted after getting a full approval by the health research unit of Jimma University, Amhara national regional state health bureau and FHRH. In addition written informed consent for the study was obtained from the study participants and confidentiality of results was kept. 
The results of urine tests were sent to the responsible person as soon as possible so that the pregnant women could be benefited from the study.A total of 367 pregnant women (37 symptomatic and 330 asymptomatic pregnant women) were investigated for the presence of risk factors associated with urinary tract infections.The assessment of associated risk factors of UTI showed that history of UTI , anaemia , sexual activity and family monthly income (taking family monthly income >\u20092000 birr ($151.40) as reference, family monthly income level \u2264 500 ETB ($37.85); P\u2009=\u20090.006, OR\u2009=\u20095.581, CI\u2009=\u20091.658, 18.793 and 501\u20131000 ETB($37.85-75.70); P\u2009=\u20090.039, OR\u2009=\u20093.429, CI\u2009=\u20091.065, 11.034) showed statistical significant association with UTI.Bivariate analysis of other associated risk factors revealed that age of the pregnant women , gestational age , parity , history of catheterization , genitourinary abnormality and educational status were not significantly associated with UTI was higher among pregnant women who had family monthly income of less than 500 Ethiopian birr ($37.85) and 501\u20131000 Ethiopian birr ($37.93-$75.70) for the overall UTI. Similarly study on the same study subject in Pakistan by Haider et al also sho al[et al on UTI a.9% was h al[et al.et al[et al[et al[et al[et al[Maternal anaemia was also significantly associated with UTI. Similar result were reported by different investigators such as study on UTI by Haider et al in Pakis al[et al among as al[et al among as al[et al among as al[et al in Thailet al[et al[et al[et al[et al[et al[The finding of this study also revealed that past history of UTI had strong association with UTI . Similar findings were reported by Haider al[et al in Qatar al[et al also ide al[et al in Sudan al[et al in Thailet al[et al[et al[Sexual activity was also the other associated risk factor that was found to be significantly associated with UTI. Pregnant women who had recent sexual intercourse of three or more per week were more likely to have UTI than women who had less than three intercourses per week. This may be due to sexual activity increases the chances of bacterial contamination of female urethra. Having intercourse may also cause UTIs in women because bacteria can be pushed into the urethra . The anaet al on UTI i al[et al on UTI i al[et al on UTI aet al[et al[et al[et al[et al[et al[et al[Multiparity was associated with significant bacteriuria in pregnancy. This had been repeatedly recognized to cause a two-fold increase in the rate of ABU in pregnant women ,26. The et al on UTI i al[et al on UTI i al[et al among as al[et al among as al[et al on UTI i al[et al on asympet al[et al[et al[et al[et al[et al[et al[et al[There was no significant difference in the prevalence of UTI with respect to trimester. This is similar with earlier studies by Hamdan et al on UTI i al[et al on UTI i al[et al on UTI i al[et al in Iran, al[et al on asymp al[et al in Pakis al[et al on UTI i al[et al among as al[et al on UTI i al[et al on asympIn this study the chance of UTI was higher among pregnant women in the presence of associated risk factors such as anaemia, low income level, past history of UTI and sexual activity but there was no significant association between prevalence of UTI and risk factors such as multiparity, history of catheterization, genitourinary abnormality, maternal age, gestational age and educational status of pregnant women. 
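The unadjusted odds ratios and 95% confidence intervals reported above correspond to standard 2×2-table calculations; a minimal sketch using the usual Woolf (log-OR) approximation is given below, with hypothetical counts rather than the study data.

```python
import math

def odds_ratio_with_ci(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio and Woolf (log-based) 95% confidence interval
    from a 2x2 exposure-by-outcome table."""
    or_ = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
    se_log_or = math.sqrt(1 / exposed_cases + 1 / exposed_noncases
                          + 1 / unexposed_cases + 1 / unexposed_noncases)
    lower = math.exp(math.log(or_) - 1.96 * se_log_or)
    upper = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lower, upper)

# Hypothetical counts (not the study data): past history of UTI vs. culture-confirmed UTI
print(odds_ratio_with_ci(exposed_cases=15, exposed_noncases=40,
                         unexposed_cases=20, unexposed_noncases=292))
```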
Therefore pregnant women should be assessed for associated risk factors during their regular follow up.There is no additional supporting data; the supporting data is included as Additional file We verify that we all the authors have agreed to share the outcome of this manuscript equally and there is no conflict of interest among us.DT was responsible for selection of the topic and write up of the proposal and also responsible for the final write up. BG was responsible for designing the methodology of the study and also involved in the selection of the topic whereas TW was responsible for the analysis and interpretation of the data. MS was responsible for collection of samples and data. All authors read and approved the final manuscript.Questionnaire for socio-demographic, clinical data of symptomatic UTI and assessment of associated risk factors of UTI among pregnant women in FHRH.Click here for file"} {"text": "Molenberghs et al. contribuMolenberghs et al.\u2019s study is timely and much needed after the recent publication of new evidence concerning this debate (see, e.g., Saj et al., As the authors acknowledge in the general discussion, while their method based on \u201cthe peak coordinates of the critical lesion site\u201d is certainly appropriate for gray matter lesions, it seems problematic for long-range white matter bundles. At variance with gray matter lesions, where one can look for maximum overlap (Vallar and Perani, At present, the only way to explore the possibility that a deficit results from disconnection of a particular fascicle is to track the relevant fascicle, draw the lesions, and see whether or not they are located along the fascicle (see, e.g., Bourgeois et al., Despite these caveats, Molenberghs et al.\u2019s results did reveal that \u201cthe largest region involved in the development of spatial neglect was a white matter lesion corresponding to the superior longitudinal fasciculus\u201d, consistent with the meta-analysis of Bartolomeo et al. . HoweverTo conclude, Molenberghs et al. made an"} {"text": "Inversion of the Williams syndrome (WS) region on chromosome 7q11.23 has previously been shown to occur at a higher frequency in the transmitting parents of children with WS than in the general population, suggesting that it predisposes to the WS deletion. Frohnauer et al. recently reported that the frequency of this inversion is not elevated in the parents of children with WS in Germany relative to the German general population. We have compared Frohnauer et al.'s data to those from three previously published studies , all of which reported a significantly higher rate of 7q11.23 inversion in transmitting parents than in the general population. Results indicated that Frohnauer et al.'s data are consistent with previously reported frequencies of 7q11.23 inversion in North America and Spain in both transmitting parents and the general population. Dear Editor:agree with previously published reports of the frequency of inversion in Williams syndrome (WS) progenitors [not having a child with WS had one spouse with an inversion [We read with interest the article, \"No significantly increased frequency of the inversion polymorphism at the WBS-critical region 7q11.23 in German parents of patients with Williams-Beuren syndrome as compared to a population control\" -6. Frohngenitors -6 and ingenitors -6. The mFrohnauer et al. further report that in 5/24 couples who had a child with WS, one (4/24 couples) or both (1/24 couples) parents had an inversion [1 and Osborne et al. 
Thus, in contrast to Frohnauer et al.'s claim, their data are consistent with the 7q11.23 inversion frequencies previously reported for North American and Spanish cohorts, in both transmitting parents and the general population.
WS: Williams syndrome
The authors declare that they have no competing interests. CAM, CBM and LRO conceived the study. CBM performed the statistical analysis. CAM, CBM and LRO drafted the manuscript. All authors read and approved the final manuscript."} {"text": "In Am J Trop Med Hyg 84:653–661 by Marzouki et al., there was an error in the published version of the article.
In Am J Trop Med Hyg 84:753–756 by Higazi et al., the fourth author's name is listed incorrectly. The correct name should be Wigdan A. Elmubark, not Wigdan A. Mohamed.
In Am J Trop Med Hyg 84:851–857 by Inglis et al., the last author's middle initial was omitted. The author's name should be Christopher H. Heath.
In Am J Trop Med Hyg 84:862–869 by Reyburn et al., an incorrect unit of measurement appears in the print version of the article. It should say "… an increase of 200 mm" in the sixth line of the abstract and on page 865 in the second column, line six."} {"text": "Monophasic fibrous synovial sarcoma (SS) is the most common variant of SS. Only a few cytological studies are available on this entity. Bcl-2 protein expression has been described as a characteristic marker of SS and is useful for its differentiation from other sarcomas. Cytokeratin and CD99 are also used in detecting SS.
To evaluate synovial sarcoma and its variants cytomorphologically.
During a period of 10 years 7 months, i.e., from January 1998 to July 2008, 12 cytologic specimens diagnosed as synovial sarcoma were reviewed. Ten cases were diagnosed as SS on aspiration alone, but two cases required an ancillary technique, i.e., immunocytochemistry staining with bcl-2 and cytokeratin. The smears were stained with Papanicolaou and May-Grünwald-Giemsa stains.
All cytologic specimens in our study had a similar appearance. Most smears were highly cellular and were made up of densely packed three-dimensional groups and singly scattered round to oval cells. Cellular monomorphism and vascular channels within the cell groups were the remarkable findings. Only one case showed cytologic evidence of epithelial differentiation. Bcl-2, cytokeratin and CD99 positivity was seen on immunohistochemistry staining. Results were categorized according to age, sex and morphologic variants.
Although the cytomorphologic features of synovial sarcomas are characteristic enough to permit their recognition, clinical correlation is necessary for accurate diagnosis. The monophasic variant was the most common entity observed in the present study. Synovial sarcoma (SS) is a mesenchymal spindle cell tumor which displays variable epithelial differentiation, including glandular formation, and has a specific chromosomal translocation t(X;18)(p11;q11) [8]. Fine needle aspiration was performed using a 23–25 gauge needle; wet and air-dried smears were prepared and stained with Papanicolaou and May-Grünwald-Giemsa stains, respectively. Ten cases were diagnosed on FNAC alone. Two cases required immunocytochemistry in view of the poor cellularity, and staining for bcl-2 and cytokeratin was done. Of the total twelve cases, histopathologic correlation was available for only five cases. Histopathology confirmed the cytologic diagnosis in these five cases.
Clinicopathologic features are summarized in the Tables. The monophasic subtype showed spindle cell sarcoma with hemangiopericytoma-like foci and the biphasic type contained epithelial-like glands and nests. 
Presence of mast cells is also an important differential diagnostic feature of synovial sarcoma. Our results showed the same in three out of five cases. Depending on the availability of the markers, cytokeratin, S100 protein and CD34 were done. The S100 protein was done to exclude nerve sheath tumor. Cytokeratin and CD34 were positive and S100 protein was negative. The immunophenotype tends toward schwannian differentiation when S100 positivity is demonstrated (only in 50% of the cases), but is more commonly positive for LEU-7, collagen type 4 and myelin basic protein.All smears showed similar morphology. The smears were highly cellular with clusters and dispersed cell population. There were abundant single cells with naked nuclei. Cells were elongated, spindle to oval in shape and had uni and bipolar cytoplasmic processes. Nuclei were pleomorphic with hyperchromasia. There was presence of mast cells with infrequent mitoses and occasional calcifications. Myxoid matrix was seen stained purple to magenta color on May-Gr\u00fcnwald-Giemsa. Cell clusters were showing whorls and pericytic patterns in some and micro acini in a few cases . Clusteret al.[et al.[et al.[et al. showed female preponderance in the eleven cytologic specimens studied. The primary site is thigh in majority of the cases, which is similar with the present study [et al., but our experience shows that metastatic tumor were common after primary tumor. All cytologic specimens were obtained from fine needle aspiration by Viguer et al., except for two cases obtained from tumoral fluid. Present study does not show any evidence of cystic areas even in a single case, and shows monophasic fibrous variant as the most common on cytology and on histopathologic correlation as well [et al. stating that monophasic variant is the most common type of all morphologic variants of SS.FNAC plays a major role in diagnosing pre-operative mesenchymal lesions and distinguishes both benign and malignant soft tissue lesions. There are no major studies from India on synovial sarcoma alone emphasizing the role of FNAC in diagnosing the same except for a few case reports. Kumar et al. and Dey l.[et al. publishel.[et al.\u20135 Biphasl.[et al. A specifl.[et al.8 Many bil.[et al. Hence, il.[et al. In additl.[et al. Present l.[et al. Infrequel.[et al. in his snt study . Recurre as well . This stSS has to be differentiated from two broad based categories of lesions:Mesenchymal lesions with uniform spindle to round cell morphologyTumors with epithelioid cell morphologyBut the diagnostic difficulties are commonly encountered with hemangiopericytoma and fibrosarcoma.[et al., it cannot be considered critical in the cytologic diagnosis of synovial sarcoma. Bcl-2 protein shows intense cytoplasmic positivity but has to be differentiated from bcl-2 positive lymphomas.[osarcoma. Hemangioymphomas. Thirty pymphomas. Sarcomatymphomas. Cytogeneymphomas.et al.[Mohite et al. in theirIn the hands of experienced cytopathologists FNAC in conjunction with ancillary techniques has a diagnostic accuracy approaching 95% for the diagnosis of soft tissue sarcomas.Domanski stated tAlthough cytomorphologic features of synovial sarcoma are characteristic enough to permit its recognition, clinical correlation is necessary for its correct identification."} {"text": "This review highlights our current understanding of metal-induced responses in plants, with focus on the production and detoxification of mitochondrial ROS. 
In addition, the potential involvement of retrograde signaling in these processes will be discussed.A general status of oxidative stress in plants caused by exposure to elevated metal concentrations in the environment coincides with a constraint on mitochondrial electron transport, which enhances ROS accumulation at the mitochondrial level. As mitochondria are suggested to be involved in redox signaling under environmental stress conditions, mitochondrial ROS can initiate a signaling cascade mediating the overall stress response, Al. Al61]. d leaves . This su2O2 . A . A 65]. 2O2 ,71. Over2O2 and lackbidopsis . In durubidopsis ,74. As m . Fi. Fi2O2. lacking functional complex I. This induces signaling throughout the cell to reset its antioxidative capacity completely, thereby coping with the loss of a major NADH sink and enhancing resistance to ozone and Tobacco mosaic virus [et al. [in vivo oxidation state of a redox-sensitive GFP targeted to Arabidopsis mitochondria. They demonstrated that mitochondria are highly sensitive to redox perturbation evoked by Cd, with their redox state recovering slower from an oxidative insult as compared to the cytosol or chloroplasts [Changes in mitochondrial electron transport and/or ROS production can have consequences for all other organelles in the plant cell. Indeed, plant mitochondria have a central position in the cellular carbon and nitrogen metabolism via the TCA cycle and their role in photorespiration . Dutilleet al. have demic virus . Schwarz [et al. studied roplasts . In addiroplasts ,33,77,90roplasts ,89. Howeet al. [Once ROS are formed in mitochondria of metal-stressed plants, they can either diffuse out of the mitochondria to mediate signaling functions or induce protein, lipid and DNA damage in the organelle itself Figure . Compareet al. in Al-stet al. .et al. [et al. [et al. [2O2 treatment. Mitochondrial aconitase seems particularly prone to oxidative damage [in vitro oxidation with Cu and H2O2 [Arabidopsis thaliana cells [Furthermore, plant mitochondrial proteins are susceptible to metal-catalyzed oxidation, leading to the irreversible formation of reactive carbonyl groups on amino acid side chains and hence reduced protein function . Substanet al. . Bartoli [et al. have dem [et al. , who have damage as it isand H2O2 or fragmand H2O2 . In addina cells .2O2 and other ROS as signals modulating plant PCD. The main event in plant PCD is the release of mitochondrial cytochrome c [et al. [Arabidopsis mesophyll protoplasts using fluorescence techniques to monitor the in vivo behavior of plant mitochondria and caspase-3-like activity. After a quick ROS burst, mitochondrial swelling and loss of the transmembrane potential occurred prior to PCD. Application of AsA prior to Al-exposure slowed down but did not prevent these processes [2\u00b0\u2212 production\u2014rather than NADPH oxidase-derived extracellular H2O2\u2014was shown to be a key event in Cd-induced cell death in tobacco cells [et al. [et al. [Although enhanced mitochondrial ROS levels may serve as monitors and signal the extent of environmental stress throughout plant cells, they may also lead to oxidative damage and programmed cell death (PCD) when mitochondrial and/or cellular antioxidative defense and repair systems are overwhelmed. Programmed cell death is an active and genetically controlled process essential for growth and development, as well as for adaptation to altered environmental conditions . In animchrome c . This wa [et al. . In addi [et al. . Recentl [et al. 
investigrocesses . Mitochoco cells . In addi [et al. and rece [et al. . Althouget al. [et al. [et al. [Depending on the intensity of the stressor, metals are able to induce mitochondrial damage and/or signaling outside the mitochondria. An altered organellar redox state generates signals that are transmitted to the nucleus in a process called retrograde signaling. This process occurs between mitochondria, chloroplasts and the nucleus and can be mediated by ROS or oxidative stress-induced secondary signals. Recently, Suzuki et al. reviewed [et al. summariz [et al. . They su [et al. reviewed [et al. ,103.AOX transcription is increased [et al. [To date, no specific components of any mitochondrial retrograde signaling pathway have been identified . Howeverncreased ,104. The [et al. demonstrTobacco mosaic virus in tobacco plants [et al. [Arabidopsis resulted in acute sensitivity to combined drought and light stress, confirming a role for AOX in determining the steady-state cellular redox balance [Arabidopsis protoplasts lacking the gene coding for the major isoform AOX1a showed a dramatically decreased viability during Al exposure as compared to wildtype protoplasts [AOX1a overexpression enhanced Al tolerance, confirming the protective role of AOX against Al-mediated PCD [2O2, salicylic acid and the protein phosphatase inhibitor cantharidin. It is hypothesized that the survival function of AOX is based on its ability to continuously suppress mitochondrial ROS generation. This further prevents oxidative damage that could otherwise evoke disturbed gene expression and favor PCD. In addition, the maintenance of respiration in stressful conditions by the alternative route also contributes to the hypothesis of AOX as a survival protein [et al. [The most intensively studied model for retrograde signaling between the mitochondrion and nucleus resulting in acclimation to stress conditions is AOX ROS in metal-induced PCD has been confirmed in several studies. Functioning as both a target and regulator of stress responses in plants, AOX is of major importance in the mitochondrial metabolism. Due to its ability to reduce mitochondrial ROS production, modulate PCD and TCA cycle activity, AOX is suggested to play a key role in metal-induced responses in plant mitochondria . Further"} {"text": "Systemic autoimmune diseases are a broad range of related diseases characterized by dysregulation of immune system which give rise to activation of immune cells to attack autoantigens and resulted in inappropriate inflammation and multitissue damages. They are a fascinating but poorly understood group of diseases, ranging from the commonly seen rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE) to the relatively rare systemic sclerosis . The mecBased on this background, we assembled this special issue for a better understanding of systemic autoimmune diseases, on aspects of mechanisms of pathogenesis, diagnosis, and treatment, including papers ranging from the basic researches to clinical researches and reviews about systemic autoimmune diseases.\u03b22-AR) and p53 apoptosis effector related to PMP-22(Perp) in the pathogenesis of RA, respectively. It has long been demonstrated that \u03b3\u03b4 T cells play important roles in the development of autoimmune diseases; the precise role of \u03b3\u03b4 T cells in the pathogenesis of SLE was studied by Z. Lu et al. Z. Gu et al. discussed the role of p53/p21 pathway in the pathogenesis of SLE. X. Gan et al. 
demonstrated the role of GITR and GITRL in the primary Sj\u00f6gren's syndrome. The expression of IL-6 and its clinical significance in patients with dermatomyositis was discussed by M. Yang et al. L. Estrada-Capetillo et al. found that DCs from patients with rheumatic inflammatory disease show an aberrant function that may have an important role in the pathogenesis. S. Stratakis et al. studied the mechanisms underlying this beneficial effect of rapamycin in passive and active Heymann nephritis (HN).Studies on the basic research of systemic autoimmune disease in this issue provided us new insights into the mechanism of the pathogenesis of systemic autoimmune disease. The role of HMGB1 in the T-cell DNA demethylation was discussed in the paper of Y. Li et al. SNPs with strong RA association signal in the British were analyzed in Han Chinese by H. Li et al., and the methylation status of miR-124a loci in synovial tissues of RA patients was analyzed by Q. Zhou et al. indicating the epigenetic factor in the pathogenesis of RA. Expression of microRNA-155 was studied by L. Long et al. in RA patients. IL-33 status was tested in RA patients by S. Tang et al. D. Lorton et al. and Y. Du et al. indicated the possible role of beta2-adrenergic receptors and pulmonary function test (PFT) abnormalities capable of identifying asymptomatic, preclinical RA-ILD. L. Pan et al. made a retrospective study to compare the characteristics of connective tissue disease-associated interstitial lung diseases, undifferentiated connective tissue disease-associated interstitial lung diseases, and idiopathic pulmonary fibrosis. Relationship between Brachial-ankle pulse wave velocity (baPWV) and its associated risk factors in Chinese patients with RA was analyzed by P. Li et al. P. \u017digon et al. studied the diagnostic value of antiphosphatidylserine/prothrombin antibodies in systemic autoimmune disease. The correlations of disease activity, socioeconomic status, quality of life, and depression/anxiety in Chinese SLE patients were studied by B. Shen et al. J. Li et al. made a systematic review on efficacy and safety of Iguratimod for the treatment of rheumatoid arthritis.\u03b3R-mediated trogocytosis in the physiological immune system was discussed by S. Masuda et al.Review papers also cover many aspects about systemic autoimmune disease. Advances in the knowledge of costimulatory pathways and their role in SLE were discussed by N. Y. Kow and A. Mak H. Draborg et al. summed up existing data about the relationship between epstein-barr virus and autoimmune disease. T. Marchetti et al. discussed obstetrical antiphospholipid syndrome from pathogenesis to the clinical and therapeutic implications. Y. f. Huang et al. summarized the immune factors involved in the pathogenesis, diagnosis, and treatment of Sjogren's syndrome. The role of IL-33 in rheumatic diseases was reviewed by L. Duan et al. T. Ito made a review on advances in the pathogenesis of autoimmune hair loss disease alopecia areata. A. W. J. M. Glaudemans reviewed the use of 18F-FDG-PET/CT for diagnosis and treatment monitoring of inflammatory and infectious diseases. 
The role of FcThis special issue covers many important aspects in the systemic autoimmune diseases ranging from novel insights into the pathogenesis of autoimmune disease and the use of newly developed diagnostic strategy in the early diagnosis of autoimmune disease to the treatment of these kinds of diseases, which will surely provide us a better understanding about systemic autoimmune disease.Guixiu\u2009\u2009ShiJianying\u2009\u2009ZhangZhixin\u2009\u2009(Jason)\u2009\u2009ZhangXuan\u2009\u2009Zhang"} {"text": "The aim of microfluidic mixing is to achieve a thorough and rapid mixing of multiple samples in microscale devices. In such devices, sample mixing is essentially achieved by enhancing the diffusion effect between the different species flows. Broadly speaking, microfluidic mixing schemes can be categorized as either \u201cactive\u201d, where an external energy force is applied to perturb the sample species, or \u201cpassive\u201d, where the contact area and contact time of the species samples are increased through specially-designed microchannel configurations. Many mixers have been proposed to facilitate this task over the past 10 years. Accordingly, this paper commences by providing a high level overview of the field of microfluidic mixing devices before describing some of the more significant proposals for active and passive mixers. For example, the Reynolds number is of the order of 0.1 in a typical water-based microfluidic system with a channel width of 100 \u03bcm, a liquid flow rate of 1 mm/s, a fluid density of 1 g/cm3 and a viscosity of 0.001 Ns/m2. In such low Reynolds number regimes, turbulent mixing does not occur, and hence diffusive species mixing plays an important role but is an inherently slow process. Consequently, the aim of microfluidic mixing schemes is to enhance the mixing efficiency such that a thorough mixing performance can be achieved within shorter mixing channels, which can reduce the characteristic size of microfluidic devices. Furthermore, the development of efficient mixing schemes is essential for increasing the throughput of microfluidic systems and to realize the concept of micro-total-analysis systems and lab-on-a-chip systems.Microfluidic devices have had a considerable impact on the fields of biomedical diagnostics and drug development, and are extensively applied in the food and chemical industries. The diminutive scale of the flow channels in microfluidic systems increases the surface to volume ratio, and is therefore advantageous for many applications. However, the specific Reynolds number of the multiple species, active mixing schemes improve the mixing performance by applying external forces to the sample flows to accelerate the diffusion process . TypicalThe microfluidic mixers presented above are all considered for the mixing of continuous bulk liquids. However, various discrete droplet-based mixing platforms have also been proposed. One such scheme involves the use of air pressure to form, actuate and mix two liquid droplets in a hydrophobic microcapillary valve device. Recently, many studies have presented mixing devices using liquid droplets based on electro-wetting phenomena. In these schemes, electro-wetting actuation is applied to separate liquid droplets from the bulk and to drive them to specific locations where they are repeatedly combined, mixed and separated. The microfluidic mixing of liquid droplets through the application of electro-wetting-induced droplet oscillations has also been demonstrated. 
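The low-Reynolds-number argument made earlier for these devices can be reproduced directly from Re = ρvD/μ. The following is a minimal sketch using the representative values quoted in the text (100 μm channel width, 1 mm/s flow, water-like density and viscosity); the helper name reynolds_number is ours:

```python
def reynolds_number(density, velocity, length, viscosity):
    """Re = rho * v * D / mu (dimensionless)."""
    return density * velocity * length / viscosity

# Representative water-based microchannel values quoted in the text.
rho = 1000.0   # kg/m^3  (1 g/cm^3)
v = 1e-3       # m/s     (1 mm/s)
d = 100e-6     # m       (100 um channel width)
mu = 1e-3      # N*s/m^2 (0.001 Ns/m^2)

print(f"Re = {reynolds_number(rho, v, d, mu):.2f}")   # ~0.1, deep in the laminar regime
```

At such Reynolds numbers mixing is diffusion-limited: for a solute diffusivity of order 10⁻⁹ m²/s, the time to diffuse across the 100 μm channel is of order w²/D ≈ 10 s, which is why both the continuous-flow strategies and the droplet-based schemes outlined above aim to shorten the diffusion path or enlarge the interfacial area.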
These microfluidic mixing schemes take advantage of the open structure of the flow channel, allowing the mixed sample to be more easily transported to its required destination.2.Active microfluidic mixers enhance the mixing performance by stirring or agitating the fluid flow using some form of external energy supply. As shown in 2.1.et al. [et al. [et al. [et al. [et al. [Liu et al. demonstr [et al. employed [et al. presente [et al. proposed [et al. presente2.2.et al. [When the dielectrophoretic (DEP) force is applied to mixing, non-uniform alternating electrical fields induce the motion of polarized particles/cells . The eleet al. proposed2.3.et al. [Electrokinetic time-pulsed microfluidic mixers apply an electrokinetic driving force to transport the sample fluids while simultaneously inducing periodic perturbations in the fet al. proposed2.4.In pressure perturbation mixers, perturbations within the fluid streams are generated by velocity pulsing . In a ty2.5.et al. [In the micromixer presented by El Moctar et al. , two flu2.6.et al. [Tsai et al. presenteet al. [The intrinsically oscillatory flow generated by the bubble actuated nozzle-diffuser micropump was shown to induce a wavy liquid interface, accelerating the mixing process. Xu et al. showed t2.7.et al. [et al. [T between one of the wire electrodes and the circular side-wall electrode and then between the second wire electrode and the side-wall electrode. Particle tracing revealed that chaotic flows were induced and resulted in a satisfactory mixing result within 40 periods. Wang et al. [The magneto-hydrodynamic (MHD) flow effect has been used by various researchers to realize micromixers. For example, Bau et al. develope [et al. . The devg et al. develope2.8.et al. [et al. in [The use of electrokinetic instability (EKI) as a mixing technique for electrokinetically-driven microfluidic flows with conductivity gradients has received considerable attention in recent years ,47. In aet al. developet al. in . However3.Passive micromixers contain no moving parts and require no energy input other than the pressure head used to drive the fluid flows at a constant rate. Due to the inherently laminar characteristics of micro-scaled flows, mixing in passive micromixers relies predominantly on chaotic advection effects realized by manipulating the laminar flow within the microchannels or by enhancing molecular diffusion by increasing the contact area and contact time between the different mixing species. 3.1.et al. [et al. [et al. [et al. [In miniaturized flow systems with Reynolds numbers varying from 2 to 100, flow structures can be artificially induced, assisting flow segmentation through inertia effects. In the zigzag channel considered in , segmentet al. who fabr [et al. presente [et al. proposed [et al. fabricat3.2.et al. [et al. [Micromixers with intersecting channels can be used to split, rearrange and combine component streams to enhance mixing . He et aet al. proposed [et al. presente3.3.et al. [s on the mixing efficiency was investigated in a series of experimental trials. The results indicated that for Re = 0.26, the mixing efficiency increased from 65% to 83.8% as the geometry ratio s/w was increased from 1 to 8. For low values of s/w, the number of angles increased, resulting in an increase in the effective width and a reduction in the effective length. For low Reynolds number flows, the most efficient zigzag configuration corresponding to s = 800 \u03bcm obtained a mixing efficiency of 83.8%. 
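The mixing efficiencies quoted here, and for the other passive designs below, are typically computed from the spread of the tracer concentration over a channel cross-section. The exact formula used in each cited study is not given in the text; the sketch below shows one commonly used variance-based index (the helper name mixing_efficiency and the sample concentrations are illustrative assumptions):

```python
import numpy as np

def mixing_efficiency(c, c_mixed=0.5):
    """Variance-based mixing index, in percent.

    c       : tracer concentrations sampled across the outlet cross-section
              (0 = pure solvent stream, 1 = pure tracer stream)
    c_mixed : concentration of the perfectly mixed state (0.5 for two equal streams)

    100% -> fully mixed (zero variance); 0% -> completely segregated streams.
    """
    c = np.asarray(c, dtype=float)
    sigma = np.sqrt(np.mean((c - c_mixed) ** 2))
    sigma_max = np.sqrt(c_mixed * (1.0 - c_mixed))   # spread of the unmixed state
    return 100.0 * (1.0 - sigma / sigma_max)

# Example: concentrations sampled across the outlet of a partially mixed channel.
outlet = [0.18, 0.27, 0.41, 0.52, 0.60, 0.71, 0.79]
print(f"mixing efficiency ~ {mixing_efficiency(outlet):.1f}%")
```

Read this way, the percentages reported for the zigzag design simply express how far the outlet concentration profile has been homogenized relative to the fully segregated inlet.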
For Re = 267, the mixing efficiency increased rapidly to 98.6% as the geometry ratio was increased to 4, but reduced slightly to 88.1% as the geometry ratio was further increased, thus indicating the existence of an optimal zigzag geometry. In [et al. presented a passive micromixer with a modified Tesla structure. In the proposed design, the species streams flowed close to the angled surface due to the Coanda effect, and this effect was used to guide the fluid streams to collide with one another. Mixing cells in opposite directions were then used to repeat the transverse dispersion caused by the flow impact. In the micromixer, one of the fluids was divided into two sub-streams and one of these two sub-streams was then merged with the second fluid stream from the main channel in the micromixer. The two streams were then mixed with the second sub-stream, resulting in a strong impact around the sub-channel of the micromixer. The results showed that the micromixer attained an excellent mixing performance at higher flow rates, and was characterized by a pressure drop of less than 10 kPa for flow rates of approximately 10 \u03bcL min\u22121. However, at lower flow rates, the mixer was constrained to the diffusive mixing regime, and hence the mixing performance was limited.Mengeaud et al. presente3.4.et al. [et al. [et al. [Re = 1, the mixing performance of both mixers varied inversely with the mass fraction of the sample due to the dominance of molecular diffusion. However, when the Reynolds number was increased to 10, the inverse trend was observed in the serpentine mixer. This phenomenon was attributed to an enhanced flow advection effect at large sample mass fractions. However, this effect was not observed in the herringbone mixer when the Reynolds number was increased over a similar range. Liu et al. [et al. [et al. [i.e., sense of rotation of two rotational flows, aspect ratio of channel and ratio of bypass channel to whole width) and found at proper combination of the variables, almost global chaotic mixing was observed in the Stoke flow regime. Moon et al. [Stroock et al. proposed [et al. presente [et al. consideru et al. fabricat [et al. investig [et al. simulaten et al. presente3.5.et al. [et al. [Re = 1.24 \u00d7 104 with a Rhodamine diffusivity of 2.8 \u00d7 10\u221210 m2s\u22121). Laser scanning of the entrance zone of the micromixer identified a bright image only in the half-zone containing Rhodamine. Bright images were also observed at the no-barrier zone in the first half-cycle, thus confirming the cross-sectional rotating flow effect induced by the slanted grooves. When the streams entered the barrier zone in the next half-cycle, laser scanning showed that the flow had rotated yet further. The experimental results confirmed that the barrier embedded micromixer yielded excellent species mixing within a short length of channel. Recently, Singh et al. [xN, the number of parallel cross-bars per element, pN, and the angle between opposite cross-bars. An optimum series for all possible SMX(n) designs to obey the universal design rule is pN = (2/3) xN \u2212 1, for xN = 3, 6, 9, 12\u2026Keoschkerjan et al. fabricat [et al. ,73 develh et al. analyzed3.6.et al. [et al. [p and depth ratio \u03b1 of the groove on the mixing performance of a staggered herringbone mixer with patterned grooves. 
Because the two vortices within the mixing channel are determined by the asymmetry index p, vortices with dissimilar scales were shown to provide a better mixing performance than two equally-sized vortices. Furthermore, the results showed that the intensity of the vertical fluid motions at the side edges of the grooves increased with increasing groove depth \u03b1, resulting in a significant improvement in the mixing effect.Johnson et al. ,75 prese [et al. investig3.7.et al. [et al. [et al. [et al. [Jen et al. presente [et al. presente [et al. presente [et al. presente3.8.\u2212). When these surfaces come into contact with a solution containing ions, the positive ions are attracted to the surface, forming an important diffuse layer [In microfluidic systems, very high pressure gradients are generally required to drive and manipulate the fluid flow. Due to the small characteristic scale of the microchannels in typical microfluidic devices, surface forces dominate, and high friction effects are generated. Conventionally, microchannels are fabricated using silicon dioxide. Silicon dioxide surfaces are typically negatively charged due to their deprotonated silanol groups (\u2261Si\u2013Ose layer . If an ese layer . Mixing se layer . By selese layer . In nume4.Advances in MEMS techniques in recent decades have enabled the fabrication of sophisticated biochips for a diverse range of applications. Compared to their traditional macro-scale counterparts, micromixers have a shorter operation time, a lower cost, an improved portability and a more straightforward integration with other planar bio-medical chips. This paper has presented a systematic review of the major micro-mixers presented in the literature over the past 20 years or so. It has been shown that depending on their mode of operation, these micromixers can be broadly categorized as either active or passive. The operational principles and mixing performance of each type of micromixer have been systematically discussed, and their relative advantages and disadvantages highlighted where appropriate. Overall, the results presented in this review confirm the applicability of micromixers for a diverse range of low-cost, high-performance microfluidic applications."} {"text": "In addition, a portable system for monitoring salivary \u03b1-amylase activity was launched in Japan at the end of 2005. The correlation between exercise and salivary \u03b1-amylase has since been extensively investigated. The present review summarizes relevant studies published in the English and Japanese literature after 2006. A search of the PubMed and CiNii databases identified 54 articles, from which 15 original articles were selected. The findings described in these publications indicate that exercise consistently increases mean salivary \u03b1-amylase activities and concentrations, particularly at an intensity of >70% VO2max in healthy young individuals. Thus, these studies have confirmed that salivary \u03b1-amylase levels markedly increase in response to physical stress. Salivary \u03b1-amylase levels may therefore serve as an effective indicator in the non-invasive assessment of physical stress.The secretion of salivary \u03b1-amylase is influenced by adrenergic regulation of the sympathetic nervous system and the hypothalamic-pituitary-adrenal axis; thus, exercise affects the levels of salivary \u03b1-amylase. 
Granger Salivary \u03b1-amylase secretion is influenced by adrenergic regulation of the sympathetic nervous system and the hypothalamic-pituitary-adrenal axis . Therefoet al and CiNii (http://ci.nii.ac.jp/) databases. The latter is a database maintained by the Japanese National Institute of Informatics , which comprises literature published by Japanese authors in academic journals or university memoirs and is listed in the database of the Japanese National Diet Library .Information was collected from the PubMed or peak power output of the study participants and four used ergometers (Eight studies defined exercise intensity as a ratio (%) of the maximum or peak oxygen uptake . Allgrove et al . Leicht et al (et al (2max for 2 h. The findings of Rosa et al (2max for 1 h supported these results; the mean salivary \u03b1-amylase concentrations were increased but the increase was not statistically significant. Three of the four studies, with the exception of the study of wheelchair athletes, comprised small cohorts, which may account for this discrepancy.By contrast, treadmill running generated mixed results. Fortes and Whitham observedht et al reportedl (et al , salivarsa et al from a set al (et al (et al (et al (Five studies demonstrated changes in salivary \u03b1-amylase in response to exercise without specifying the exercise intensity \u201318. In oet al adopted et al . Salivaret al . Allgrovl (et al examinedl (et al observedl (et al and relal (et al \u201316.et al (et al (et al (Chiodo et al and Diazl (et al investigl (et al . Diaz etl (et al compared2max in healthy young individuals. Therefore, studies published following those reviewed by Granger et al (In conclusion, exercise has consistently been shown to increase mean salivary \u03b1-amylase activity and concentration in all studies examined in the present review, including those in which changes were not significant, with the exception of the 20-min forest walk . The effer et al confirm"} {"text": "There has been much interest in the mechanisms by which calcium may attenuate weight gain or accelerate body fat loss. This review focuses on postprandial energy metabolism and indicates that dietary calcium increases whole body fat oxidation after single and multiple meals. There is, as yet, no conclusive evidence for a greater diet induced thermogenesis, an increased lipolysis or suppression of key lipogenic enzyme systems. There is however convincing evidence that higher calcium intakes promote a modest energy loss through increased fecal fat excretion. Overall, there is a role for dietary calcium in human energy metabolism. Future studies need to define threshold intakes for metabolic and gastrointestinal outcomes. Body weight and body energy content remains quite stable in most adults for long periods of time; despite daily fluctuations in energy intake and energy expenditure. This requires the presence of regulatory processes able to match fuel supply to energy requirements. The maintenance of a physiological set point for body weight is complex and includes aminostatic and glucostatic controls of feeding, metabolic or nutrient partitioning, input from the sympathetic nervous system, signals from adipose tissue as well as additional behavioral influences. Genetics, the environment and psychosocial factors also impact on these regulatory processes ,2,3. Thaet al. [2+) held the key to fat deposition and obesity. According to this early scheme \u2014a key enzyme regulating lipid deposition\u2014while stimulating adipose tissue lipolysis. 
An increased fat oxidation and thermogenesis through up-regulation of uncoupling proteins (UCP) was also suggested. A role for calcium in the regulation of body weight has gained much interest since the early observation that increased intakes of calcium (as yoghurt) increased the loss of body fat and favoured a redistribution of fat away from the abdomen . Based pet al. proposedy scheme increaseWe now know that dietary calcium may also act at the level of the gastrointestinal tract to increase energy loss, through increased fecal fat excretion . TordoffIn the main, energy balance is determined by the matching of macronutrient intake to energy expenditure and the ability to channel nutrients into oxidative versus storage pathways (nutrient partitioning). Most individuals reach a state of approximate weight maintenance in which the average composition of the fuels they oxidize matches the nutrient distribution in their diets . Acute iThere are three main contributors to total energy expenditure (TEE) of man: basal metabolic rate (BMR) has the largest energetic demand accounting for 60\u201370% of TEE in sedentary individuals, diet induced thermogenesis (DIT) is a variable 10% and physical activity makes up the remainder. BMR is determined by fat free mass (FFM) to a great extent (80\u201385%) and by fat mass (FM) if the person is obese. Smaller contributions are made by age, habitual physical activity and genetics. DIT is traditionally calculated as the change in postprandial energy expenditure from fasting and expressed as a percentage of the energy content of the meal. DIT has two components: an obligatory process of energy expenditure that is strongly related to meal size, composition and the physiological characteristics of the individual, and a regulatory component that is strongly influenced by the SNS. A higher DIT for a given meal would imply that less energy is available for storage, and hence such a meal would be less conducive to weight gain. Abdominal adipocytes are particularly sensitive to SNS mediated lipolysis; a process whereby stored fat is broken down and mobilized. This paper focuses on the potential postprandial events that would contribute to an anti-obesity effect of calcium based on the model in et al. [Some methodological considerations are pertinent to the overall discussion on calcium and postprandial events. It is accepted that the previous evening\u2019s meal composition affects the next day\u2019s glycaemic response and macronutrient oxidation . Since pet al. observedet al. ,18, noneThe absorption of calcium is not straight forward and is highly variable between subjects. There are many factors that may confound its absorption . After set al. [vs. 243 mg) modified postprandial FOR, DIT, FFA and glycerol with carry over effects to a standard lunch, low in D calcium (48 mg). Overall, there was a greater DIT and FOR after the higher breakfast-lunch trial [The rise in insulin in response to mixed meal ingestion drives thermogenesis and promotes an increased carbohydrate oxidation rate (COR), with a suppression of fat oxidation rate (FOR). Circulating free fatty acids (FFA) and glycerol are also suppressed. Cummings et al. tested wet al. ,27. The et al. ,27,28. Wet al. ,29. We och trial ,29. Thisch trial . Togetheet al. [i.e., rest, sleep and exercise. Similarly postprandial 24 h FOR was not statistically different between treatments but a clear rank order (D > ND > control) was evident with a difference of ~8 g/d between high D and control diets. 
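The fat and carbohydrate oxidation rates (FOR, COR) compared across these trials are derived from indirect calorimetry. The sketch below uses Frayn-type stoichiometric equations, with the coefficients as commonly published and purely illustrative gas-exchange values (none of the numbers are taken from the studies cited); it also repeats the kind of back-of-the-envelope extrapolation applied to the ~8 g/d difference just mentioned. The helper name substrate_oxidation is ours:

```python
def substrate_oxidation(vo2, vco2, urinary_n=0.0):
    """Whole-body fat and carbohydrate oxidation (g/min) from gas exchange.

    vo2, vco2 : oxygen uptake and CO2 production in L/min (STPD)
    urinary_n : urinary nitrogen excretion in g/min (protein correction)

    Frayn-type equations (coefficients as commonly published):
        fat = 1.67*VO2 - 1.67*VCO2 - 1.92*N
        cho = 4.55*VCO2 - 3.21*VO2 - 2.87*N
    """
    fat = 1.67 * vo2 - 1.67 * vco2 - 1.92 * urinary_n
    cho = 4.55 * vco2 - 3.21 * vo2 - 2.87 * urinary_n
    return fat, cho

# Illustrative resting postprandial values only (not data from the trials above).
fat_g_min, cho_g_min = substrate_oxidation(vo2=0.25, vco2=0.21)
print(f"fat oxidation ~ {fat_g_min*60:.1f} g/h, CHO oxidation ~ {cho_g_min*60:.1f} g/h")

# Rough annual extrapolation of an ~8 g/day difference in fat oxidation,
# assuming ~9.4 kcal per g fat and ~7700 kcal per kg adipose tissue.
extra_fat_kg_per_year = 8 * 9.4 * 365 / 7700
print(f"~{extra_fat_kg_per_year:.1f} kg/year if fully translated into fat balance")
```

This order-of-magnitude check is consistent with the authors' own extrapolation described next.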
The authors calculated that such a difference could account for a loss of ~3 kg body weight if extrapolated to a year. Interestingly these trends for a greater 24 h FOR were paralleled by a similar trend for a greater negative fat balance across trials. In fact we note that in this study there was a significant negative fat balance following high D but not after the other two. The study utilized high protein diets from either dairy or non-dairy sources, and also provided more dietary fibre than habitual consumed by the participants. Both components would be expected to influence calcium absorption and retention [There are four crossover trials that manipulated calcium intake over a week and measured postprandial DIT and FOR . Boon etet al. achievedetention ,32, but et al. [vs. 23%E) for a week, and compared them to a control diet . Whole body calorimetry (WBC) measurements at the start and end enabled the calculation of change over that week. There was a small but significant drop in weight across all 3 diets; accompanied by a significant drop in RQ. However, there were no differences in 24 h EE, energy balance or FOR. A closer examination of their results indicated that the change in 24 h COR adjusted for energy balance, favoured a greater suppression on the two high calcium diets. Reciprocally, the change in 24 h FOR showed a greater increase (~ 35%) on the high D (15%E protein) diet relative to control [Using a similar design, Jacobsen et al. varied D control . The sma control . In anot control . On the et al. [St-Onge et al. studied Ten obese men and women with calcium intakes <800 mg/d completed a double blind placebo controlled crossover study of 5 weeks. Calcium was provided as a milk mineral sachet twice a day to increase calcium by ~800 mg/d, over and above habitual intake. The control group received a placebo . This elegant study used tracer isotope technology, indirect calorimetry, micro-dialysis of sub-cutaneous fat and fat biopsies for gene expression. The authors reported no statistical difference in DIT or FOR over the 6h postprandial period . In this3 and change in DIT . The relationship of DIT to vitamin D status at baseline is interesting. It suggests that the ability of calcium to stimulate DIT is dependent on its absorption; a function of vitamin D status [3 is linked to thermogenesis either directly or via PTH. Overall, ND calcium decreased RQ at rest and following a meal. In a randomized parallel design, 3 groups of overweight and obese individuals were assigned to either control (~500 mg/d from 1 serve D plus placebo), high calcium and high calcium during 12 weeks of energy restriction . This was a sub-study of a multisite trial on the source of calcium and energy metabolism . There wD status . This maD status . An alteThere was only one long term randomized parallel study conducted over a year . In this3 status and change in FOR [et al. [et al. [Dairy sources contain other bioactive components besides calcium and may be expected to show a greater effect. Dairy naturally contains vitamin D, which may be significantly higher in those countries that have mandatory fortification of their products. While vitamin D will enhance calcium absorption, there is some suggestion of an independent effect from the direct relation between 25(OH)De in FOR and the e in FOR . Furthere in FOR ,35 and be in FOR . A simil [et al. , however [et al. found thet al. [et al. [et al. 
[In summary this analysis shows that in 6 of 9 randomized studies that measured macronutrient oxidation, an increase in FOR following calcium was demonstrated . We coulet al. for reas [et al. and Jaco [et al. which shet al. [et al. [et al. [One expected effect of higher intakes of dietary calcium, is the reciprocal stimulation of lipolysis and suppression of de novo lipogenesis . Coppacket al. and Frayet al. have arget al. ,18,30. W [et al. ,40 are o [et al. observed [et al. . A role [et al. . Recent [et al. . No diffet al. [et al. [et al. [We found only 3 human studies that have measured the mRNA expression of genes regulating fat metabolism, of which two were conducted by the same group. In the earlier study of Boon et al. , D and N [et al. tested a [et al. studied Fecal fat excretion stems from exogenous (dietary) and endogenous sources . Further, unabsorbed fat may be metabolised by bacteria in lower GI tract and hence not be excreted . While net al. [et al. [In et al. found a [et al. favoured [et al. . et al. (44) estimate that additional calcium may account for between 2\u20135 g/d greater loss of fecal fat. The effect of D calcium was more consistent than ND calcium, in part due to the many formulations of ND used. A regression analysis on a small number of subjects, predicted that ~1241 mg/d of D calcium would account for ~5.2 g/d of fat excretion. The authors estimate that such effects at best, would account for a 2.2 kg difference in body weight over 1 year [The study of calcium\u2019s effect on fecal fat excretion predates the postulation of the calcium-body weight hypothesis, and there are many older studies that document increases in fecal fat excretion following higher calcium ,42,43. Aet al. estimater 1 year ,46 but iIt is our opinion that calcium modulates human energy metabolism. There is convincing evidence that calcium increases postprandial fat oxidation after a single meal or over a day. There is some suggestion that fasting FOR may be greater on a high calcium diet. However a higher DIT is not always observed. There is convincing evidence for calcium to modestly increase the amount of fat excreted, and thereby contribute to energy loss. The effect of dietary calcium on lipolysis and/or lipogenesis is inconsistent at present, and is a fertile area for future research. Nutrition and public health recommendations demand consistent observations with a strong mechanistic basis. Defining threshold effects for metabolic versus gastrointestinal function based on randomized controlled trials with adequate power, is one way forward."} {"text": "Cell migration and invasion are processes that offer rich targets for intervention in key physiologic and pathologic phenomena such as wound healing and cancer metastasis. With the advent of high-throughput and high content imaging systems, there has been a movement towards the use of physiologically relevant cell-based assays earlier in the testing paradigm. This allows more effective identification of lead compounds and recognition of undesirable effects sooner in the drug discovery screening process. This article will review the effective use of several principle formats for studying cell motility: scratch assays, transmembrane assays, microfluidic devices and cell exclusion zone assays. It isThe migration process in wound healing is coordinated among several different cell types including keratinocytes, fibroblasts, endothelial cells and macrophages that provide a variety of growth factors. 
Platelet derived growth factor (PDGF-BB), basic fibroblast growth factor (bFGF) and granulocyte macrophage colony stimulating factor (GM-CSF) are used as stimulants to facilitate wound healing for patients with diabetic ulcers . These gin vitro [Similar to the complex mechanisms underlying the process of wound healing, many different factors are also involved in cancer cell metastasis. There are diverse mechanisms that cells can employ to initiate and progress invasion and each offers specific pharmacologic targets for development of anti-metastatic therapies . In ordein vitro . TherefoThe goal of HTS is to accelerate the drug discovery process by rapidly evaluating large compound libraries . To achiet al. [in vivo-like data for all segments of the drug discovery pipeline, such as target validation, screening and toxicology.\u201d Similarly, Carragher [in vitro assays that employ 3-dimensional matrices to provide relevant microenvironments for cellular studies [in vivo mechanisms of invasion [Justice et al. assessedarragher points o studies ,10. Inde studies . As a coinvasion . Such moinvasion ,19.In this article, we review the different assay formats employed to study cell motility. For many years, Boyden chamber based transmembrane assays and scratch wound assays were the only widely available formats to study cell migration and invasion. However, new technologies such as microfluidic chambers and exclusion zone assays have recently emerged as alternative phenotypic screening assays that provide additional or complementary information to researchers interested in high content analysis. The advantages and optimal utility of these formats will be covered with specific examples provided for each.2.Scratch assays were first used as models of wound healing for epithelial or mesenchymal cells . In thiset al. designed a 384-well format scratch assay using a 96-head pin tool array to scratch cell monolayers [A majority of researchers employing the scratch assay method utilize multiwell plates that contain 96 or fewer wells. Yarrow nolayers . They idnolayers .et al. [Simpson et al. screenedet al. ,27.et al. describes the inherent limitations of scratch assays as an inability to achieve reproducible and quantitative results [in vitro results distinguished different levels of aggressiveness and invasiveness for three different cell lines which corresponded to their behavior in vivo [Kam results . To addr results . The in in vivo .i.e., to close the wound; (3) the assay surface can be coated with an ECM of choice prior to the experiment; and (4) the movement and morphology of the cells can be visually observed in real time and images captured throughout the experiment thereby permitting velocity measurements [Scratch assays provide several distinct advantages in that: (1) the assay can be performed in any readily available plate configuration; (2) cells move in a defined direction, urements . Howeverurements and the urements -30. Moreurements ,29 and turements and elecurements exist asurements . While Eurements . Additiourements .3.Boyden originally described an assay technique as depicted in 50 values 20-fold less than that needed to abolish cell proliferation [Ogasawra and co-workers utilizedferation . Howeveret al. [In an effort to improve sensitivity and throughput of transmembrane assays, Mastyugin et al. used a met al. . While tet al. .et al. 
[An alternative assay that uses a measurement of transepithelial electrical resistance (TEER) to indicate the integrity and permeability of cell monolayers was described by Mandic et al. . In thiset al. .Transmembrane assays offer the distinct advantages of being able to analyze migration in response to a chemotactic gradient and can be used with adherent as well as non-adherent cells. Filter membranes may be coated with ECM proteins in an effort to better approximate physiological conditions. However, there are several drawbacks of these assays in that they are technically challenging to set up, the gradient is non-linear and equilibrates between both compartments over time , it is d4.Microfluidic systems such as microarrays, gradient devices, valved arrays and individually addressable channel arrays have recently emerged with the potential to be physiologically relevant and improve assay content to provide cell microenvironments for drug discovery . An examA microfluidic assay was used to demonstrate tumor cell migration through 3-dimensional matrices . In thisMicrofluidic chamber based assays can offer distinct advantages in situations where reagent availability is limited. A 800 nL volume of matrix is required to fill channels and only 1,000\u20132,000 cells are dispensed per channel which represents a 10\u2013100 fold reduction from other methods without sacrificing assay robustness and better enabling the use of rare primary cells . However5.et al. explain: \u201cThe scratch process destroys the removed cells, which release their intracellular content into the medium; this process is also quite traumatic for the cells on the newly formed border. Indeed these border cells may become partially permeable as a result of the brutal tearing off of the adhesive junctions they maintain with their neighbors\u201d [Cell exclusion zone assays originated from the need to study cell migration on an uncompromised surface uncoupled from contributions of cell damage and permeabilization that can arise from scratch wounds . Poujadeighbors\u201d . Thus, tighbors\u201d .Cell exclusion zone assays, as illustrated in The Oris\u2122 Cell Migration Assay was validated for HTS using 3-day plate uniformity and replication of potency protocols established by Eli Lilly and the NIH Chemical Genomics Center which tested three assay plates with interleaved high, mid, and low inhibitor concentrations per day for 3 days . Z\u2032 fact50 values for latrunculin A of 135 nM and 132 nM, respectively on MDA-MB-231 cells [To facilitate the use of automated liquid handling equipment for all steps of the assay, the Oris\u2122 Pro Cell Migration Assays use a self-dissolving biocompatible gel (BCG) spotted in the centers of either 96- or 384-well plates instead of the silicone cell seeding stoppers. The BCG spots act as temporary barriers to prevent cells from settling and attaching during the seeding process and, once dissolved, reveal uniform areas into which cells may migrate . Vogt ha31 cells . Gough r31 cells .The Oris\u2122 Cell Invasion Assays incorporate an ECM overlay in order to form a 3-dimensional environment to study invasion into the cell exclusion zone along the x, y, and z-axes . This asin vivo. Cell exclusion zone assays offer robust and reproducible data since the starting dimensions of the detection zone are accurately and precisely positioned in the assay wells. 
However, these assays are limited for use with adherent cells and cannot be used to establish chemotactic gradients.Cell exclusion zone assays offer the distinct advantage of not damaging the cells or the ECM as occurs in scratch assays. This assay format also allows continuous visual assessment of the cells throughout the experiment with the ability to acquire multiplexed data unlike the transmembrane assays where the filter restricts observation. In this way, information can be collected regarding morphology, velocity, distance and direction of migrating or invading cells as well as additional phenotypic effects of test compounds. In the Oris\u2122 Cell Invasion Assays, the cell monolayer is entirely surrounded by the ECM thus reflecting a more physiologically relevant environment in which to study cell invasion and the effects of potential cancer therapeutics. In contrast, Brekhman and Neufeld point ou6.et al. [et al. [et al. [Several recent studies examined the effects of oncogene expression or signal transduction mechanisms on cell motility in parallel scratch and transmembrane assays and found a good correlation of results. Valster et al. demonstr [et al. found th [et al. . RNA int [et al. . Bauer e [et al. examined [et al. .et al. [et al. [Studies also compared cell exclusion zone assays to either scratch and/or transmembrane assays. Jiang et al. demonstr [et al. reported [et al. .Researchers at Platypus Technologies compared MDA-MB-231 cell migration on collagen I coated surfaces using both the Oris\u2122 Cell Migration Assay and the scratch assay. Experiments were performed in parallel on four different days to compare the performance of each assay. For each independent experiment, the average area closure achieved using the Oris\u2122 Cell Migration Assay ranged fet al. [in vitro transmembrane invasion assays favor cells with spindle cell morphology and might be less appropriate for cells with epithelioid morphology. They mention that epithelioid cells might be more dependent on mechanisms of motility that rely upon influences from 3-dimensional microenvironments [There are also examples in which alternate assay formats for studying cell migration or invasion can give different, yet complementary results. Attoub and co-workers used scret al. examinedet al. . Howeveret al. . Cells iet al. which pret al. point ouronments . Table 17.in vitro and their efficacy as cancer therapies in human clinical trials [There has been a disappointing lack of correlation between compounds that inhibit migration and invasion l trials ,54. Howel trials . Carraghl trials discussein vivoOffer ECMs and 3-dimensional environments to mimic cell behavior Allow for real-time visualization of cellsi.e., high-content analysis)Permit phenotypic and multiparametric analysis of cells (Facilitate high-throughput screening (automated liquid handling and high content imaging)To accelerate discovery of therapeutics affecting cell motility, cell-based assays should be utilized that:When researchers utilize only single target HTS assays, they obtain limited information on how potential therapeutics may influence complex multifaceted events such as tumor cell migration and invasion. As drug discovery screening continues to transition from biochemical to cell-based assays and from high throughput to high content screening, 3-dimensional culture technologies and phenotypic screens will become essential for increasing the relevance of screening assays ,19. 
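Two simple quantities underlie most of the readouts discussed in this review: the percent closure of the wound or cell-free detection zone, and the Z′ factor used to judge whether an assay is robust enough for HTS, as in the plate-uniformity validation described above. A minimal sketch of both calculations follows; the well areas and signal values are illustrative only, not data from the cited studies:

```python
import statistics

def percent_closure(area_t0, area_t):
    """Percent closure of a scratch wound or cell-free detection zone."""
    return 100.0 * (area_t0 - area_t) / area_t0

def z_prime(positive, negative):
    """Z' factor for assay robustness (Zhang et al., 1999):
       Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|
       Values above 0.5 are generally taken to indicate an excellent HTS assay."""
    sd_p, sd_n = statistics.stdev(positive), statistics.stdev(negative)
    mu_p, mu_n = statistics.mean(positive), statistics.mean(negative)
    return 1.0 - 3.0 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Illustrative zone areas (pixels) at t = 0 and t = 24 h.
print(f"closure = {percent_closure(52000, 14000):.1f}%")

# Illustrative migration signals for untreated (maximum) and inhibitor-treated (minimum) wells.
untreated = [980, 1010, 995, 1020, 990, 1005]
inhibited = [210, 230, 205, 220, 215, 225]
print(f"Z' = {z_prime(untreated, inhibited):.2f}")
```

In practice the areas would come from automated image segmentation of the pre-migration and post-migration images, and the Z′ statistic would be computed per plate across the interleaved control and inhibitor wells.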
Prof"} {"text": "Dear Editor,We read with great interest the article by Hossain et al., in April 2013 \u201cIs discontinuation of clopidogrel necessary for intracapsular hip fracture surgery? Analysis of 102 hemiarthroplasties\u201d. An increasing number of patients presenting for anesthesia are taking clopidogrel and an even greater number are on combination antiplatelet therapies, potentially increasing the risk of intraoperative and perioperative bleeding.Hossain et al. found no\u00ae Platelet Mapping\u2122 (TEG-PM) assay for preoperative assessment of platelet ADP receptor inhibition. Collyer et al. [It is our opinion that correlation between platelet function and responsiveness to clopidogrel is of paramount importance. There is significant individuality in patient responsiveness to clopidogrel, suggesting that an individualized, evidence-based approach is needed to assess the risk of adverse outcomes in patients receiving regular clopidogrel therapy. We suggest the use of the Thrombelastograph r et al. , 3, provWe assessed the ability of TEG-PM to detect preoperative platelet function secondary to clopidogrel and/or aspirin therapy . In an eTwo important findings were: (1) a trend towards platelet recovery by day 5 off clopidogrel, (2) 60\u00a0% of the patient population were not effectively inhibited while on therapy . In a pr"} {"text": "The high prevalence of cardiovascular disease (CVD) is largely attributable to the contemporary lifestyle that is often sedentary and includes a diet high in saturated fats and sugars and low ingestion of polyunsaturated fatty acids (PUFAs), fruit, vegetables, and fiber. Experimental data from both animals and humans suggest an association between increased dietary fiber (DF) intakes and improved plasma lipid profiles, including reduced low density lipoprotein cholesterol (LDL-C) concentrations. These observations underline that the intake of DF may protect against heart disease and stroke. Dietary fibers (DF) are highly complex substances described as nondigestible carbohydrates and lignins resistant to digestion and absorption in the small intestine . CommonlFruit, vegetables, wholegrains, and cereals are the major sources of DF components. Total DF can be divided into two groups: viscous fiber and non-viscous fiber .DF are indigestible substances resistant to human digestive enzymes without nutritional or energetic value . DF was an almost unknown phrase and fibers were considered only annoying intestinal wastes until the 1970s when a wide range of potential therapeutic applications were suggested .The high prevalence of cardiovascular disease (CVD) is largely attributable to the contemporary sedentary lifestyle combined with a diet high in saturated fats and sugars, and low ingestion of polyunsaturated fatty acids (PUFAs), fruit, vegetables, and fiber. Epidemiological studies have confirmed a strong association between fat intake, especially saturated- and transfatty acids, plasma cholesterol levels, and rate of coronary heart disease (CHD) mortality . In contThe term DF includes a wide range of molecular structures, a highly complex mixture of different non-starch polysaccharides (NSPs), which include cellulose, \u03b2-glucans, hemicellulose, pectins, gums, polysaccharides of algae (agar and carrageenan) and lignin ,10. Sourd-galacturonic acid is a principal constituent. They are structural component of plant cell walls and also act as intercellular cementing substances. 
The backbone structure of pectin is an unbranched chain of axial-axial\u03b1-(1-4)-linked d-galacturonic acid units. Pectin is highly water soluble and is almost completely metabolized by colonic bacteria. Other NCPs include gums, mucillages, and algal polysaccharides. Lignin is a cross-linked racemic macromolecule, a three-dimensional aromatic polymer of phenylpropane derivatives (containing about 40 oxygenated phenylpropane unit), and is associated with cellulose in plant cell walls. Lignins vary in molecular weight and methoxyl content. Due to strong intramolecular bonding which includes carbon to carbon linkages, lignin is very inert [The two major classes of DF are polysaccharides and lignin. Polymers of phenylpropane, are found in most plant structures in association with cellulose, and is the most widespread organic molecule on Earth and the major component of plant cell walls. It is a linear polymer made up of 10,000 to 15,000 glucose molecules bonded in a 1\u21924 glycosidic linkage. Cellulose molecules contain many polar hydroxyl groups, which allow them to interact with adjacent molecules to form fibers. These fibers are structurally strong and resistant to chemical attack. The more important non-cellulosic polysaccharide (NCPs) is hemicellulose and pectic substance. Hemicellulose are cell wall polysaccharides solubilized by aqueous alkali, which contain backbones of \u03b2-1,4-linked pyranoside sugar, and differ from cellulose in size (less than 200 sugar residues). The hemicelluloses are subclassified on the basis of the principal monomeric sugar residue . Pectin ry inert .The soluble DF has strong probiotic characteristics, which can be considered a non-digestible food ingredient that can selectively stimulate the growth and metabolic activity of a limited number of microbial groups, important for the proper functioning of the body. Recently, there has been a further substantial revision of the view of what exactly constitutes DF with the emerging recognition of the contribution of resistant starch and oligosaccharides to \u2018fiber\u2019 action . It was Consumption of DF has been associated to lower risk of CVD for some time , but theSoluble fiber clearly lowers cholesterol to a small but significant degree and one would expect that this would reduce coronary heart disease (CHD) events . The spevs. those who eat them rarely [Many epidemiological studies examined the relationship between wholegrain consumption and CHD. In these studies the researchers concluded that a relationship between wholegrain intake and CHD is seen with at least a 20% and perhaps a 40% reduction in risk for those who eat wholegrain food habitually m rarely . Notwithm rarely .et al. [A study about the relationship of long-term intake of dietary fiber by 68,782 women showed a substantial lowering of relative risk (0.53) for women in the highest quintile of fiber consumption (22.9 g/day) compared with the lowest (11.5 g/day). These intakes are low compared to those recommended by health authorities 30 g/day). Only the effect of cereal fiber was significant. However, the question of the relationship of other contributors to the effects of DF and CHD risk remains unanswered. Elevated plasma total and low-density lipoprotein cholesterol (LDL-C) concentrations are established risk factors for coronary morbidity and mortality. There are abundant human and animal data showing that diets high in soluble fiber lower plasma cholesterol . One pop g/day. Oet al. observedet al. ,34. Any et al. . It was et al. . 
This woet al. ,37.An analysis of dietary factors and cardiovascular risk performed in a sample of 3452 Swiss adults demonstrate that a healthy diet characterized by high consumption of dietary fiber was associated with lower rates of serum triglycerides and higher values of high density lipoprotein cholesterol (HDL-C) . An imprThe American Heart Association (AHA) recommends a total dietary fiber intake of 25 g/day to 30 g/day from foods (not supplements) to ensure nutrient adequacy and maximize the cholesterol-lowering impact of a fat-modified diet . Finallyet al. [et al. [et al. [vs. 42.4 g/day) was 0.82 . The authors acknowledge that a greater intake of wholegrains was related to an overall healthier diet and lifestyle. They performed stratified analyses to address potential confounding by healthy participant characteristics and found no discrepant results with respect to the main finding of protection from heart disease with wholegrains in subgroups including never smokers and non-drinkers.Wholegrain consumption has been shown to be linked to improvements in body mass index (BMI) ,43 insulet al. did not et al. \u201352. The [et al. carried [et al. \u201356 which [et al. followedvs. those who eat them rarely [A further review has also reported the results of more recent prospective studies of wholegrain consumption The reviewers concluded that a relationship between wholegrain intake and CHD is seen with at least a 20% and perhaps a 40% reduction in risk for those who eat wholegrain food habitually m rarely .Jacobs and Gallaher consider that, given the wide variability in study designs, an estimated risk reduction of 20\u201340% is \u2018impressively robust\u2019. Study participants included men and women; participants from the US and Norway. Findings, which were consistently positive for wholegrains, occurred using a variety of different data collection methodologies.et al. [In conclusion, Jacobs and Gallaher and other reviewers find that there is good evidence that wholegrain foods substantially reduce the risk of CHD. Whether all grains are equal in this respect cannot be concluded from these studies, nor can the effectiveness of different parts of the grain. Results from Jensen et al. suggest et al. [vs the lowest quintile). However, the inverse relation remained suggesting that other constituents may confer additional protection. The authors acknowledge that their findings may not be generalizable to other populations (98% of their cohort were white).Liu et al. examinedet al. [vs the lowest quintiles). Mozaffarian et al. [Steffen et al. did incln et al. followedet al. [Fung et al. examinedIn conclusion, there are few studies to date that specifically examine the relationship between wholegrain intakes and risk of stroke. The studies described above have shown mixed results when adjustment has been made for potential confounders. However, the trends are strongly suggestive of a protective effect of wholegrain on risk of stroke.et al. [A number of researchers have found that a higher intake of cereal fiber is associated with a lower risk of CHD, although results tend not to be statistically significant after adjusting for multiple confounding factors ,53,60\u201362et al. pooled tet al. [et al. [i.e., wholegrain intake) . Other cereal fibers had a non significant influence , other cold cereals , cooked cereals ). The authors did not state whether these other cold or cooked cereal fibers were wholegrain or refined grain products.Anderson et al. also per [et al. . After aet al. 
[In contrast, Jensen et al. investiget al. [Bazzano et al. examinedet al. [vs the lowest quintile of total dietary fiber intake was observed for a lower BMI in those with the highest intake. For women, only vegetable fiber displayed an association. To date, detailed clinical data and the incidence of cardiovascular events in relation to fiber intake from this study have not been published.Lairon et al. investiget al. [et al. [et al. [et al. [et al.\u2019s cohort, Ness and co-workers acknowledge that dietary advice stopped after 2 years and recent data were collected when around half of the original participants had died. The researchers cannot discount the possibility that the diets of those who survived are different from those who did not. Follow-up diet was assessed with a limited number of questions focused on fiber intake; questionnaire data were not collected on other aspects of current diet and it is thus possible that important differences in diet were not detected. The authors concluded that the failure of longer term data to confirm the initial but non-significant increased risk of CHD deaths in men who were given advice to eat more cereal fiber, suggested that increased fiber intake was not harmful. The results do, however, suggest that increasing cereal fiber intake by a small degree does not confer any immediate survival advantage. It must be borne in mind, however, that this was a secondary prevention trial of CHD, in which the influence of environmental factors on the pathological process is likely to be weaker than in a primary prevention trial [Ness et al. followed [et al. . Between [et al. allocate [et al. followedon trial .et al. [Jacobs et al. argued tThese results highlight the emerging \u2018wholegrain story\u2019 which argues that health benefits stem from more than just the fiber; the wholegrain is nutritionally more important because it delivers a whole package of nutrients and phyto-protective substances that may work synergistically to promote health . Thus alThere is little epidemiological data on the association between soluble fiber and CHD even though soluble fiber clearly lowers serum cholesterol concentration and also lowers glucose in diabetics. A cholesterol-lowering claim is allowed for oat-derived beta glucan by the United States Food and Drug Administration .In a study of female health professionals, fiber was associated with protection from heart disease . The asset al. [In a secondary prevention study in men with angina, Burr et al. found noet al. [A meta-analysis of eleven clinical intervention trials involvinet al. confirmeFurthermore, two other well-known cardiovascular risk factors (type 2 diabetes and obesity) seem to be influenced by intake of DF. Vegetarians who consume a high-fiber lacto-ovo vegetarian diet appear to have a lower risk of mortality from diabetes-related causes compared to nonvegetarians . ConsumpA number of recent studies give novel insights that might help establish a metabolic link between insoluble DF consumption and reduced diabetes risk. Potential candidates are improved insulin sensitivity and the modulation of inflammatory markers, as well as direct and indirect influences on the gut microbiota . A breakCurrently, the American Diabetes Association recommends that diabetic patients consume 14 g/1000 kcal/day of fiber because a high amount of fiber is necessary to improve glycemic control. 
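The American Diabetes Association figure quoted above (14 g of fiber per 1000 kcal/day) and the American Heart Association range cited earlier (25-30 g/day from foods) can be related to an individual's energy intake with simple arithmetic. The short sketch below performs that calculation; the example energy intakes are illustrative only.

ADA_G_PER_1000_KCAL = 14.0   # American Diabetes Association guideline cited above
AHA_RANGE_G = (25.0, 30.0)   # American Heart Association range cited earlier

def ada_fiber_target(daily_kcal):
    """Fiber target (g/day) implied by the 14 g per 1000 kcal recommendation."""
    return ADA_G_PER_1000_KCAL * daily_kcal / 1000.0

for kcal in (1600, 2000, 2500):   # illustrative daily energy intakes
    target = ada_fiber_target(kcal)
    print(f"{kcal} kcal/day -> {target:.0f} g fiber/day "
          f"(AHA food-based range: {AHA_RANGE_G[0]:.0f}-{AHA_RANGE_G[1]:.0f} g/day)")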
This amount is between 2 and 3 times higher than that consumed by individuals in many developed countries .The association between fiber intake and obesity or CHD is confirmed by several epidemiological studies. DF can modulate body weight by various mechanisms. Fiber-rich foods usually have lower energy content, which contributes to a decrease into the energy density of the diet . Foods ret al. [2; one moderately obese group, with a BMI between 27.1 and 40 g/m2; and one severely obese group, with a BMI > 40.0 kg/m2). These authors showed that fiber intake was significantly higher in the normal weight group and was inversely associated with BMI after adjusting for several potential confounders . The CARDIA (Coronary Artery Risk Development in Young Adults) study, a multicenter population-based cohort study carried out over 10 years, examined 2909 young individuals to determine the relationship between total DF intake and plasma insulin concentrations, weight and other CVD risk factors. After adjusting for BMI and multiple dietary and potential non-dietary confounders , the study reported an inverse association between total fiber intake, plasma insulin concentrations and body weight gain [Alfieri et al. assessedght gain suggestiResults from all epidemiologic studies described above underline that the intake of wholegrain foods clearly protects against CHD and stroke but the exact mechanism is not yet clear. Moreover, the intake of high carbohydrates (from both grain and non-grain sources) in large amounts is associated with an increased risk of CHD in overweight and obese women even when fiber intake is high, but this requires further confirmation in normal-weight women."} {"text": "Among the most intriguing molecular factors that have been uncovered are the sirtuins, a family of NAD to flies. Mammalser et al.010 add tAging in the heart is characterized by hypertrophy and fibrosis, although its exact causes are unknown. Employing SIRT3\u2212/\u2212 mice, Hafner et al. present intriguing evidence linking these aging phenotypes to mitochondrial dysfunction and SIRT3. The aut+, the aging phenotypes of SIRT3\u2212/\u2212 mice are only apparent when the mice are older [This study combined with recent work by Someya et al. 2010), Kim et al. (2010), and Sundaresan et al. 2009) show that SIRT3 can delay the onset of age-related pathologies in multiple tissues 010, Kim -7. The p show tha"} {"text": "Vaccination against human papillomavirus (HPV), predominantly targeting young females, has been introduced in many countries. Decisions to implement programs, which have involved substantial investment by governments, have in part been based on findings from cost-effectiveness models. Now that vaccination programs have been in place for some years, it is becoming possible to observe their effects, and compare these with model effectiveness predictions made previously.Australia introduced a publicly-funded HPV vaccination program in 2007. Recently reported Australian data from a repeat cross-sectional survey showed a substantial (77%) fall in HPV16 prevalence in women aged 18\u201324\u00a0years in 2010\u20132011, compared to pre-vaccination levels. We have previously published model predictions for the population-wide reduction in incident HPV16 infections post-vaccination in Australia. We compared prior predictions from the same model (including the same assumed uptake rates) for the reduction in HPV16 prevalence in women aged 18\u201324\u00a0years by the end of 2010 with the observed data. 
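The comparison described in this article, prior model predictions set against the observed fall in HPV16 prevalence, comes down to comparing relative reductions. The sketch below shows the underlying calculation: it derives a percent reduction from pre- and post-vaccination prevalence estimates and checks it against a model-predicted range. The prevalence fractions used are placeholders chosen only so the observed reduction matches the 77% reported in the text; the 70-79% predicted range is the figure quoted in this article, and no other numbers are real study data.

def percent_reduction(prevalence_pre, prevalence_post):
    """Relative reduction in prevalence, in percent."""
    return 100.0 * (1.0 - prevalence_post / prevalence_pre)

# Placeholder prevalence fractions (share of 18-24 year old women positive for HPV16),
# chosen only so that the observed reduction matches the 77% reported in the text.
pre_vaccination_prevalence = 0.100
post_vaccination_prevalence = 0.023

observed = percent_reduction(pre_vaccination_prevalence, post_vaccination_prevalence)
predicted_range = (70.0, 79.0)   # model-predicted reductions quoted in this article

print(f"Observed reduction: {observed:.0f}%")                                  # -> 77%
print(f"Predicted reduction: {predicted_range[0]:.0f}-{predicted_range[1]:.0f}%")
print("Within predicted range:", predicted_range[0] <= observed <= predicted_range[1])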
Based on modelled vaccine uptake which is consistent with recent data on three-dose uptake , we had predicted a 70% reduction in prevalence in 18\u201324\u00a0year old females by the end of 2010. Based on modelled vaccine uptake consistent with recent national data for two-dose coverage and similar to that reported by women in the cross-sectional study, we had predicted a 79% reduction.A close correspondence was observed between the prior model predictions and the recently reported findings on the rapid drop in HPV prevalence in Australia. Because broadly similar effectiveness predictions have been reported from other models used for cost-effectiveness predictions, this provides reassurance that the substantial public investment in HPV vaccination has been grounded in valid estimates of the effects of vaccination. Vaccination against human papillomavirus (HPV), predominantly targeting young females, has been introduced in many countries. Implementation of publicly-funded programs has involved substantial investment by governments. In Australia, for example, the government budgeted approximately A$580 million over the first five years of its National HPV Vaccination Program for females 2007\u20132011) . Now that vaccination programs have been in place for some years, it is becoming possible to observe their effects on health outcomes , outside of the context of clinical trials. Australia implemented a National HPV Vaccination Program in 2007, with routine vaccination of 12\u201313\u00a0year old females and catch-up in females aged 13\u201326\u00a0years to 2009 with the quadrivalent vaccine . The National HPV Vaccination Program is primarily delivered through schools, although missed doses can be obtained from primary care. The catch-up program was delivered through schools for school-aged girls, and through primary care for older 18\u201326\u00a0years) females or females who were no longer at school. Commencing from 2013, the National HPV Vaccination Program was extended to include routine vaccination of 12\u201313\u00a0year old males and a two year catch-up program for boys aged 14\u201315\u00a0years during 2013 and 2014. Australia offers unique opportunities to observe the early impacts of HPV vaccination, as its program commenced earlier and its catch-up component was more extensive than in most other countries. Post-vaccination reductions have already been documented in Australia in genital warts [\u201326\u00a0yearset al. recently reported a substantial fall in HPV prevalence in a repeat cross-sectional study in Australian women aged 18\u201324\u00a0years recruited during 2010\u20132011, compared to women of the same age recruited in 2005\u20132007, which was prior to the commencement of the vaccination program [Tabrizi program . HPV16 p program . Self-re program -11. The program ,11. Undeet al., which required specific examination of the prior model predictions for HPV prevalence in the specific age group and at the specific timepoint reported by Tabrizi et al.[et al.[We have previously published model predictions for the reduction in incident HPV16 infections post-vaccination in Australia, across the female population ,13. The zi et al.. We alsol.[et al.,12.et al. also comprised the subset of women from WHINURS who were 18\u201324\u00a0years of age at the time of recruitment.In line with methods used by several other groups , we usedet al.[et al.[To compare model predictions with the findings of Tabrizi et al., we extrl.[et al.) 
among wWe had modelled uptake in terms of effective vaccination; initially we had assumed three vaccine doses would be needed to achieve the efficacy levels observed in clinical trials . Howeveret al.[et al.[et al. [et al. (70.6%) [et al. were unchanged compared to our previous analysis . The earet al.. Table\u00a01l.[et al.. Full del.[et al.. In the l.[et al.-11,19), eported) . The hig (70.6%) . The lowet al. (70.6%) [et al. ), the model predicted a reduction in HPV16 prevalence of 70% by the end of 2010 [et al. (70.6%) [et al. (70.6%) [In the main scenario examined , but lower than self-reported uptake in Tabrizi (70.6%) ), we had (70.6%) . When we et al.) . Based ot period 10\u20132011), (70.6%) ), we pre (70.6%) ), we preet al., who reported a 77% reduction in HPV16 prevalence, [This predicted 64%-79% fall in HPV16 infections in 18\u201324\u00a0year old females by the end of 2010 accords well with the data from Tabrizi et al., where three-dose uptake was potentially somewhat higher than average for other females of the same age [et al. was substantial, after adjusting for differences in age and hormonal contraceptive use in the pre- and post-vaccination study samples [While our model incorporated the best estimates of uptake available at the time, and attempted to encompass a broad feasible range, there are now more data available on both uptake achieved by the program and the efficacy of two versus three doses of quadrivalent vaccine. The uptake we assumed for our original predictions is genersame age . Two-dos samples .Models of HPV vaccination typically assume high vaccine efficacy in the base case , based oet al. noted that, as far as they were aware, their findings \u201cwere the first genoprevalence-based evidence of the protective effect of HPV vaccination outside the setting of a clinical trial\u201d[The joint task force of the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the Society for Medical Decision Making recommended as best practice for medical decision-making models that model predictions of future events should be tested against observed data, when these data subsequently become available . This isal trial\u201d.The investments in HPV vaccination programs around the world have been substantial. Such investments in affluent countries have been made, in part, on the basis of predictions from cost-effectiveness models. However, prior model predictions for vaccine effectiveness in a 'real world\u2019 population have not previously been compared to observed data after the vaccination program has been implemented.The close correspondence between these prior model predictions and the recently reported findings on the rapid drop in HPV prevalence rates in Australia providesHPV: Human papillomavirus; NHVPR: National HPV vaccination program register.MS: has no commercial or other association that might pose a conflict of interest. KC: declares that she is involved as a principal investigator in a new trial of primary HPV screening in Australia, which is supported by Roche Molecular Systems and Ventana, CA, USA. KC receives salary support from the NHMRC Australia (CDF1007994).MS: Designed the study; analysed and interpreted the data; drafted the manuscript. KC: Conceived and designed the study; helped draft the manuscript. Both authors read and approved the final manuscript.Age-specific vaccine uptake in females in modelled scenarios versus reported uptake in Australian female population.Click here for file"} {"text": "ABCA1 gene. 
In this study we evaluated the single nucleotide polymorphisms (SNPs) of ABCA1 gene. We analyzed SNPs in chromosome 9 such as rs2230806 (R219K) in the position 107620867, rs2230808 (R1587K) in the position 106602625 and rs4149313 (I883M) in the position 106626574 according to gender and lipid profile of Greek nurses.One of the important proteins involved in lipid metabolism is the ATP-binding cassette transporter A1 (ABCA1) encoding by ABCA1 gene polymorphisms. Additionally, lipid profile was evaluated.The study population consisted of 447 (87 men) unrelated nurses who were genotyped for ABCA1 gene polymorphisms did not differ according to gender. However, only R219K genotype distribution bared borderline statistical significance (p\u2009=\u20090.08) between the two studied groups. Moreover, allele frequencies of R219K, R1587K and I88M polymorphisms did not differ according to gender. In general, blood lipid levels did not seem to vary according to ABCA1 gene polymorphisms, when testing all subjects or when testing only men or only women. However, a significant difference of LDL-C distribution was detected in all subjects according to R1587K genotype, indicating lower LDL-C levels with KK polymorphism (p\u2009=\u20090.0025). The above difference was solely detected on female population (p\u2009=\u20090.0053).The distribution of all three studied ABCA1 gene polymorphisms frequency, distribution and lipid profile did not differ according to gender. However, in the female population the KK genotype of R1587K gene indicated lower LDL-C levels. Further studies, involving a higher number of individuals, are required to clarify genes and gender contribution.The ABCA1 gene in macrophages increases atherosclerotic lesions in hyperlipidemic mice . This study in line with our previous work [ABCA1 gene polymorphisms mention above and lipid profile in Greek male and female nurses.ATP-binding cassette transporter A1 (ABCA1) mediates the transport of cholesterol and phospholipids from cells to lipid-poor apolipoproteins. Animals and human studies documented that defects in the ABCA1 pathway are significant determinants of coronary artery disease (CAD) [mic mice ,3, and omic mice ,5. The AThe genotyping of 447 nurse students median age 22 (21\u201325) years old, who were attended to the University of Nursing of Technological and Educational Institution, was performed. All students had no personal history of CAD and were not taking any drugs. Other exclusion criteria were diabetes mellitus, thyroid and liver disease, high alcohol consumption, professional athleticism and any chronic disease.All students were attended to the University every day and were staying for 8\u201310 hours. Students were eating at the school canteen which served typical Mediterranean food. Only one meal daily (dinner) was most likely to be different in each student.The University of Nursing of Technological and Educational Institution Ethics Committee approved the protocol of this study. All subjects signed an informed consent form.Plasma total cholesterol (TC), triglycerides (TGs), high density lipoprotein cholesterol (HDL-C) and apolipoprotein A1 (Apo A1) were measured using enzymatic colorimetric methods on Roche Integra Biochemical analyzer with commercially available kits (Roche). 
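The lipid panel just listed (TC, TGs, HDL-C) is what the following sentence uses to derive LDL-C with the Friedewald formula, applied only when TGs are below 400 mg/dl. A minimal sketch of that standard calculation is given here; the input values are illustrative and are not data from this study.

def friedewald_ldl(total_chol, hdl_chol, triglycerides):
    """Estimate LDL-C (mg/dl) as TC - HDL-C - TG/5, with all inputs in mg/dl.

    By convention the estimate is not applied when TG >= 400 mg/dl.
    """
    if triglycerides >= 400:
        raise ValueError("Friedewald formula not applicable for TG >= 400 mg/dl")
    return total_chol - hdl_chol - triglycerides / 5.0

# Illustrative values (not study data): TC 190, HDL-C 55, TG 100 mg/dl
print(friedewald_ldl(190, 55, 100))   # -> 115.0 mg/dl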
The serum low density lipoprotein cholesterol (LDL-C) concentration was calculated using the Friedewald formula only in subjects with TGs concentration\u2009<\u2009400\u2009mg/dl.et al.[et al.[The ABCA1 gene polymorphisms were detected using polymerase chain reaction (PCR) and restricted fragment length polymorphism analysis (RFLP\u2019s). The PCR was performed using Taq polymerase KAPATaq. The oligonucleotide primers used for R219K and R1587K polymorphisms were described by Saleheen D et al.[\u03bfC for 5\u2009min, thirty cycles of 95 \u03bfC for 30\u2009s, 65 \u03bfC for 30\u2009s and 72 \u03bfC for 30\u2009s and final extension to 72 \u03bfC for 7\u2009min, producing a fragment of 132\u2009bp. This fragment was subsequently cleaved by EcorV, creating fragments for I allele 97\u2009bp and 35\u2009bp and for M allele 132\u2009bp, which were subjected to electrophoresis on an agarose gel 4% and visualized with ethidium bromide . The Kruskal \u2013 Wallis H statistic was employed in order to detect differences in lipid levels according to three different polymorphisms of ABCA1 gene . ABCA1 polymorphisms\u2019 distribution according to lipid profile was tested using the Kruskal Wallis test in men, in women and in all patients. All results were corrected for multiple testing (Bonferroni correction). All tests were two-tailed and statistical significance was established at 5% (p\u2009<\u20090.05). Data were analysed using Stata \u2122 .The Shapiro-Wilk test was performed to test for normal distribution of continuous variables. The results are given as median and interquartile range (IQR), whereas, all qualitative variables are presented as absolute or relative frequencies. The Mann\u2013Whitney Comparison of various characteristics according to gender showed that the study group was homogenous regarding age (p\u2009>\u20090.05) Table\u2009. HoweverABCA1 gene polymorphisms studied did not differ according to gender .In previous study we evaluated the influence of et al.[et al. [ABCA1 gene as risk factor for ischemic stroke. Srinivasan et al.[et al. [et al. [et al.[et al.[ABCA1 gene polymorphisms was not different according to gender. Noteworthy to mention is that the R219K distribution bared borderline statistical significance (p\u2009=\u20090.08) between two studied groups. In general, blood lipid levels did not seem to vary according to ABCA1 gene polymorphisms, when testing all subjects or when testing only men or only women.One of the first study which evaluated the R219K polymorphism and lipid profile in man was published by Clee SM et al.,9-16. Co.[et al. did not an et al. found th.[et al. found th [et al. found a . [et al. did not l.[et al. studied et al.[et al.[et al.[Clee et al. in the set al. associatl.[et al. did not l.[et al. found thl.[et al. where onl.[et al.. Howeveret al.[et al.[et al.[et al.[et al.[et al.[I883M gene, which were also present between the highest and lowest HDL-C concentration groups . On the contrary, Kitjaroentham et al.[et al.[et al.[et al.[ABCA1 gene polymorphisms i27943 (rs2575875) and R219K (rs2230806) have a lower postprandial response as compared to minor allele carriers. The fact that in current study blood lipid levels did not seem to vary according to ABCA1 gene polymorphism could be influence, at least in part, because they were measured in the fasting state.Tan et al. studied l.[et al. did not l.[et al. correlatl.[et al. among yol.[et al. in popull.[et al. found soam et al. did not l.[et al. found thl.[et al. found thl.[et al. 
suggestiA limitation of this study is the relatively small number of men\u2019s group. However, the effort was put for sample to be homogenous, living in the similar conditions as it happened with our study population.Another limitation of this type of study is that, studies based on the candidate-gene approach, which have been demonstrated genotype-phenotype associations, are not always replicable.ABCA1 gene polymorphisms on lipid profile in accordance to gender. The ABCA1 gene polymorphisms frequency, distribution and lipid profile did not differ according to gender. However, in the female population the KK genotype of R1587K gene indicated lower LDL-C levels.There are only few studies which compare the influence of The authors declare that they have no competing interests.VK participated in the development of hypothesis, drafting of the manuscript and carried out the genetic analysis, AM participated in the molecular genetic studies, AK performed the statistical analysis and drafting of the manuscript, GV and AK collected the blood samples, SM participated in the drafting and revising the manuscript, DD participated in revising the manuscript critically for important intellectual content, CM participated in the study design and its coordination and GK conceived the study and participated in the development of the hypothesis, the study design and drafting of the manuscript. All authors read and approved the final manuscript."} {"text": "An association between fluid intake and limb swelling has been described for 100-km ultra-marathoners. We investigated a potential development of peripheral oedemata in Ironman triathletes competing over 3.8\u2009km swimming, 180\u2009km cycling and 42.2\u2009km running.In 15 male Ironman triathletes, fluid intake, changes in body mass, fat mass, skeletal muscle mass, limb volumes and skinfold thickness were measured. Changes in renal function, parameters of skeletal muscle damage, hematologic parameters and osmolality in both serum and urine were determined. Skinfold thicknesses at hands and feet were measured using LIPOMETER\u00ae and changes of limb volumes were measured using plethysmography.p <0.05), fat mass, skinfold thicknesses and the volume of the arm remained unchanged (p >0.05). The decrease in skeletal muscle mass was associated with the decrease in body mass (p <0.05). The decrease in the lower leg volume was unrelated to fluid intake (p >0.05). Haemoglobin, haematocrit and serum sodium remained unchanged (p >0.05). Osmolality in serum and urine increased (p <0.05). The change in body mass was related to post-race serum sodium concentration ([Na+]) and post-race serum osmolality .The athletes consumed a total of 8.6\u2009\u00b1\u20094.4\u2009L of fluids, equal to 0.79\u2009\u00b1\u20090.43\u2009L/h. Body mass, skeletal muscle mass and the volume of the lower leg decreased (ad libitum fluid intake maintained plasma [Na+] and plasma osmolality and led to no peripheral oedemata. The volume of the lower leg decreased and the decrease was unrelated to fluid intake. Future studies may investigate ultra-triathletes competing in a Triple Iron triathlon over 11.4\u2009km swimming, 540\u2009km cycling and 126.6\u2009km running to find an association between fluid intake and the development of peripheral oedemata.In these Ironman triathletes, Specific gravity was analysed using Clinitek Atlas\u00ae Automated Urine Chemistry Analyzer . Creatinine and urea were measured using COBAS INTEGRA\u00ae 800. 
Electrolytes were determined using ISE IL 943 Flame Photometer .Pre- and post-race, venous blood samples were drawn and urine samples were collected. Two Sarstedt S-Monovettes for chemical and one Sarstedt S-Monovette for haematological analysis were drawn the afternoon before the start of the race and upon arrival at the finish line. Monovettes for plasma were centrifuged at 3,000\u2009g for 10\u2009min at 4 \u00b0Celsius. Plasma was collected and stored on ice. Urine was collected in Sarstedt monovettes for urine (10\u2009ml). Blood and urine samples were transported immediately after collection to the laboratory and were analysed within six hours. Immediately after arrival at the finish line, identical measurements were applied. In the venous blood samples, haemoglobin, haematocrit, (p >0.05). The change in body mass was related to both post-race serum [Na+] . The decrease of the volume of the lower leg was unrelated to fluid intake (p >0.05). Fluid intake was neither related to the changes in the thickness of adipose subcutaneous tissue nor to the changes in skin-fold thicknesses (p >0.05). Sodium intake was not related to post-race serum [Na+] (p >0.05). Post-race serum [Na+] was unrelated to both the change in the potassium-to-sodium ratio in urine and TTKG (p >0.05). The increase in serum urea was not related to the increase in serum osmolality (p >0.05). The change in serum urea was unrelated to the change in skeletal muscle mass (p >0.05). The change in the thickness of the adipose subcutaneous tissue at the medial border of the tibia was significantly and positively associated with the change in creatinine clearance . The increase in the thickness of adipose subcutaneous tissue at the medial border of the tibia was not related to the non-significant change in skin-fold thickness of the calf (p >0.05). The non-significant changes in skin-fold thicknesses were neither related to overall race time nor to the split times (p >0.05).Fluid intake was unrelated to the decrease in body mass ,41,42. Rer et al., Speedy l.[et al., and Noal.[et al. describiKavouras and ShirKavouras describeKavouras ,48.A further finding was that the circumferences of the thigh and the calf decreased by 2.7% and 2.4%, respectively, whereas the circumference of the upper arm remained unchanged. This indicates that the estimated skeletal muscle mass at the lower limbs became reduced. Since the change in the estimated skeletal muscle mass showed no association with the change in plasma urea, we presume that no substantial degradation of myofibrillar proteins must have occurred, and the loss in estimated skeletal muscle mass might be due to a depletion of intramyocellular stored energy, such as muscle glycogen and intramyocellular lipids . We furtet al.[et al.[et al.[et al.[However, the reduction in limb circumference could also be due to a reduction in interstitial fluid. The decrease in the lower leg volume might also suggest an action of the \u2018muscle pump\u2019 during exercise helping to clear pre-race swelling. Perhaps the tapered athlete started with oedemata, the result of being relatively inactive in a pre-race taper. If so, perhaps the muscle pump cleared this oedemata during the race, and perhaps clearing was aided by compression socks. Regarding the results concerning the decrease in the circumferences of both the thigh and the calf, we expected that the main areas of decrease would occur in the muscles used most, meaning in the lower leg and thigh muscles. 
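The transtubular potassium gradient (TTKG) referred to above is conventionally derived from paired urine and plasma measurements of potassium and osmolality. The sketch below shows that standard calculation as an assumption about how such an index is usually computed; the input values are illustrative and are not data from this study.

def ttkg(urine_k, plasma_k, urine_osm, plasma_osm):
    """Transtubular potassium gradient.

    urine_k, plasma_k:     potassium concentrations (mmol/l)
    urine_osm, plasma_osm: osmolalities (mosm/kg)
    The index is usually considered interpretable only when urine osmolality
    exceeds plasma osmolality.
    """
    if urine_osm <= plasma_osm:
        raise ValueError("TTKG is not interpretable when urine osmolality <= plasma osmolality")
    return (urine_k / plasma_k) * (plasma_osm / urine_osm)

# Illustrative post-race values (not study data)
print(round(ttkg(urine_k=60.0, plasma_k=4.5, urine_osm=700.0, plasma_osm=295.0), 1))  # -> 5.6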
Because the thigh has a larger skeletal muscle mass than the calf, it is likely that the change in the thigh muscle mass influenced the change in estimated skeletal muscle mass more than the change in calf muscle mass did. Another possible explanation could be that there actually would have been a correlation between the decrease of the lower leg volume and the estimated skeletal muscle mass, but that this correlation was influenced due to a non-quantified change in tissue fluid in the lower leg. As we were using plethysmography for measuring the volumes of the whole limbs, we were not able to differentiate a change in volume between arm and hand or between lower leg and foot, respectively. This could have influenced our results. Lund-Johansen et al. measuredl.[et al. measuredl.[et al. describel.[et al. measuredet al.[et al.[Another interesting finding was that the change in the thicknesses of adipose subcutaneous tissue at medial border of the tibia was significantly and positively associated with the change in creatinine clearance. However, correlation analysis does not prove cause and effect; therefore this correlation must be questioned. In case of an impairment of renal function, we would expect a development of peripheral oedemata ,51. Howeet al. describil.[et al. showed, et al.[et al.[Na or FeUrea is also an argument against any association between a change of the adipose subcutaneous tissue and a change in renal function. FeNa and FeUrea are parameters which can be used to detect an impairment of the renal function [In the present study, a reason why the thickness of the adipose subcutaneous tissue of the lower leg showed no increase might be due to the compression, which might be induced by wearing socks and running shoes. Knechtle et al. also desl.[et al. describifunction ,54. Sincet al.[et al.[et al.[et al.[et al.[et al.[et al.[et al.[The present Ironman triathlon with a mean average race time of about eleven hours was rather short when compared to the studies from Milledge et al., Williaml.[et al. and the l.[et al. where thl.[et al., Irving l.[et al. and Knecl.[et al. showing l.[et al., the eccl.[et al. also deml.[et al. that theThe type of oedemata that develops following an Ironman triathlon is not necessarily the result of frank rhabdomyolysis. Leg swelling is often of oedematous nature where biIt should also be noted that this kind of oedema cannot be said to be due to aggressive overdrinking completely unrelated to thirst. Excess water is ingested because the debris of prolonged exercise increases the osmolality of body water, appropriately increasing thirst. The mean fluid intake in these Ironman triathletes was 0.79\u2009\u00b1\u20090.43\u2009L/h. In a recent study on 100-km ultra-marathoners showing an association between fluid intake and limb swelling, the athletes consumed 0.63\u2009\u00b1\u20090.20\u2009L/h . Obvious. An implication for future research would therefore be to measure the volume of hands and feet separately from the arms and the legs using plethysmography. It would as well be useful to have a measurement method that allows us to differentiate the volume changes occurring in a body part into the different body compositions. Bioelectrical impedance analysis [et al.[et al.[et al.[et al.[A strength of this study was that anthropometric measurements were performed immediately upon arrival at the finish line. 
A limitation of the present study was that by measuring the entire lower leg volume, or arm volume, we could not precisely quantify nor locate specifically where the changes in volume occurredanalysis for examanalysis . Howeveranalysis since planalysis ,65. Regas [et al., Milledgl.[et al. and Willl.[et al. describil.[et al.,2,6,66 il.[et al.. Furtherl.[et al.,68. NSAIl.[et al.. Hew et l.[et al. reportedl.[et al. and athll.[et al.. Finally+ was maintained and serum osmolality increased because body mass decreased. Considering the findings of Milledge et al.[et al.[To summarize, the volume of the lower extremity decreased and this decrease was unrelated to fluid intake in the present male Ironman triathletes. We found no increase in the thickness of adipose subcutaneous tissue of the hands and feet. Renal function was altered. Serum [Nage et al. and Willl.[et al., the durl.[et al.. For futl.[et al..The authors declare that they have no competing interests.MM drafted and wrote the manuscript. BK designed the study and assisted the manuscript preparation. BK, JB, PK, CM, AM and BE conducted all the measurements during two field study for data collection before and after the race. CAR and TR assisted in data analyses, statistical analyses, data interpretation and manuscript preparation. All authors have read and approved the final version of the manuscript."} {"text": "As knowledge of the structure and function of nucleic acid molecules has increased, sequence-specific DNA detection has gained increased importance. DNA biosensors based on nucleic acid hybridization have been actively developed because of their specificity, speed, portability, and low cost. Recently, there has been considerable interest in using nano-materials for DNA biosensors. Because of their high surface-to-volume ratios and excellent biological compatibilities, nano-materials could be used to increase the amount of DNA immobilization; moreover, DNA bound to nano-materials can maintain its biological activity. Alternatively, signal amplification by labeling a targeted analyte with nano-materials has also been reported for DNA biosensors in many papers. This review summarizes the applications of various nano-materials for DNA biosensors during past five years. We found that nano-materials of small sizes were advantageous as substrates for DNA attachment or as labels for signal amplification; and use of two or more types of nano-materials in the biosensors could improve their overall quality and to overcome the deficiencies of the individual nano-components. Most current DNA biosensors require the use of polymerase chain reaction (PCR) in their protocols. However, further development of nano-materials with smaller size and/or with improved biological and chemical properties would substantially enhance the accuracy, selectivity and sensitivity of DNA biosensors. Thus, DNA biosensors without PCR amplification may become a reality in the foreseeable future. Detection of specific DNA sequences is needed in many fields: the Human Genome Project is providing massive amounts of genetic information that should revolutionize our understanding and diagnosis of inherited diseases ; pathogeConventional methods for the analysis of specific gene sequences are carried out using gel electrophoresis of DNA fragments amplified by the polymerase chain reaction (PCR) using primers that are sequence-specific for the chosen region of DNA. 
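Sequence-specific primer design, mentioned in the preceding sentence and discussed further below, usually starts from simple sequence properties such as length and an approximate melting temperature. The sketch below uses the common "2 + 4" (Wallace) rule of thumb for short oligonucleotides; the rule and the primer sequence shown are illustrative assumptions, not taken from any study cited here.

def wallace_tm(primer):
    """Rough melting-temperature estimate (deg C) for a short oligonucleotide primer.

    Uses the simple "2 + 4" (Wallace) rule of thumb: 2 degC per A/T and 4 degC per G/C.
    Reasonable only as a first guess for short primers (roughly 14-20 nt).
    """
    primer = primer.upper()
    if set(primer) - set("ACGT"):
        raise ValueError("Primer may contain only A, C, G, T")
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

# Hypothetical 18-mer primer, not taken from any study
primer = "ATGCGTACCTGAAGCTGA"
print(len(primer), wallace_tm(primer))   # -> 18 54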
Although it is simple and effective for the detection of PCR products, gel electrophoresis fails to provide any sequence information about the amplified DNA. Furthermore, the use of PCR is limited to laboratories because of its complexity in obtaining proper primers design and the corresponding empirical conditions for consistent amplification . SoutherIn recent years, DNA biosensors based on nucleic acid hybridization have been vigorously pursued. DNA biosensors are defined as analytical devices incorporating a single-stranded oligonucleotide (probe) intimately associated with or integrated within a physicochemical transducer or transducing microsystem, which may be optical, electrochemical, thermometric, piezoelectric, magnetic or micromechanical. The aim of a DNA biosensor usually is to produce either discrete or continuous measurable signals, which are proportional to the concentration of complementary (target) DNA sequence. Because of their specificity, speed, portability, and low cost, DNA biosensors offer exciting opportunities for sequence-specific DNA detection. However, the concentration of genetic targets is very low in biological samples, making it unsuitable for detection by a DNA biosensor. Therefore, an ultrasensitive method of detecting nucleic acids is clearly essential.In order to achieve high detection sensitivity, researchers have developed many techniques to enhance the response of DNA biosensors by modifying the sensors with different functional materials. Within the growing and increasingly complex area of nanotechnology, great attention has been paid in recent years to nano-structured materials of different chemical composition, produced as nanoparticles, nanowires or nanotubes. Nano-materials are larger than individual atoms and molecules but are smaller than bulk solids, therefore they obey neither absolute quantum chemistry nor the laws of classical physics and have properties that differ markedly from those expected. There are two major phenomena that are responsible for these differences. First is the high dispersity of nanocrystalline systems . As the The nano-materials used in DNA biosensors including nanoparticles, like gold (Au) nanoparticles, Cadmium sulfide (CdS) nanoparticles; nanowires like silicon (Si) nanowires, nanotubes like carbon nanotubes, etc. There are mainly two purposes of using nano-materials in DNA biosensors: as substrates for DNA attachment and as signal amplifiers for hybridization. The aim of this article is to summarize the various nano-materials used in DNA biosensors based on the two purposes in recent years, including the information on applications and future prospects.2.As we know, the most critical step while preparing a DNA biosensor is the immobilization of DNA probe on the surface of a sensing device such as an electrode. The amount of immobilized DNA probe will influence the accuracy, sensitivity, selectivity, and life of a DNA biosensor directly. Because of the high surface-to-volume ratio and excellent biological compatibility, nano-materials can enlarge the sensing surface area to increase the amount of immobilized DNA greatly, and the DNA mixed with nano-materials can keep its biological activity well.2.1.Over the past decade, the unique properties of nanoparticles have continued to attract considerable research attention. 
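The size effect described above, namely that the surface-to-volume ratio grows as crystal size shrinks, is easy to see for an idealised spherical nanoparticle, where the ratio equals 6/d (or 3/r). The sketch below tabulates this for a few diameters; it is purely illustrative and assumes perfect spheres.

import math

def sphere_surface_to_volume(diameter_nm):
    """Surface-to-volume ratio (nm^-1) of an ideal sphere of the given diameter."""
    radius = diameter_nm / 2.0
    surface = 4.0 * math.pi * radius ** 2
    volume = (4.0 / 3.0) * math.pi * radius ** 3
    return surface / volume          # equals 3 / radius = 6 / diameter

for d in (5, 20, 100, 1000):         # nanoparticle versus near-bulk diameters, in nm
    print(f"d = {d:>5} nm  ->  S/V = {sphere_surface_to_volume(d):.3f} nm^-1")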
Nanoparticles, especially metal nanoparticles, offer excellent prospects for chemical and biological sensing because of their unique optical, electrical, and thermal properties as well as catalytic properties \u201319. The 2.1.1.et al. [et al. [Gold (Au) nanoparticles are a hot study topic lately and they play a key role in DNA biosensors. Thiol\u2013Au (SH\u2013Au) linkages usually were used to bind Au nanoparticles covalently with solid electrodes or DNA because of the strong affinity of covalent bonds between sulfur atoms and gold atoms. Jin et al. immobili [et al. also fab [et al. \u201333 have et al. [2O3) substrates by a physical method to prepare a DNA biosensor. Pan [et al. [et al. [Au nanoparticles as substrates for DNA attachment though other linkages were also reported in several publications. Spadavecchia et al. depositesor. Pan prepared [et al. fabricat [et al. deposite2.1.2.et al. [Platinum (Pt) nanoparticles were also used as substrates to improve DNA immobilization due to their high catalytic activities. Zhu et al. fabricat2.1.3.2) is a thermally stability, chemically inert, non- toxicity inorganic oxide with affinity for the groups containing oxygen, and therefore it is an ideal candidate material for immobilization of biomolecules containing oxygen or phosphate groups. Thus, ZrO2 is an ideal material for immobilization of DNA. Zhu et al. [2 thin films onto the bare gold electrode by cycling the potential between \u22121.1 and +0.7 V (versus Ag/AgCl) at a scan rate of 20 mV/s. Oligonucleotide probes with phosphate group at the 5\u2032 end were attached onto the ZrO2 thin films. Yang et al. [2 coupled with carbon nanotubes to improve DNA attachment to the electrode.Zirconia has many unique properties, such as a high dielectric constant, a large band gap, high electron affinity, and it is also relatively easy to present as a thin film. Shrestha et al. [6O11 surface. Pr6O11 was deposited on a tin-doped indium oxide (ITO) surface to form an ultrathin layer with a larger internal surface area, and then the thiol-modified oligonucleotide attached to an amine-modified Pr6O11 surface.Praseodymium oxide were first noticed and characterized in 1991 by Iijima of NEC C2.2.1.et al. [N-ethyl-N\u2032-(3-dimethylaminopropyl) carbodiimide (EDC) and N-hydroxysuccinimido-biotin (NHS), then the synthesized ssDNA was covalently immobilized on the CNTs modified gold electrodes. Abdullin et al. [et al. [et al. [et al. [Tang et al. modifiedn et al. modified [et al. , Erdem e [et al. and Lian [et al. .2.2.2.et al. [et al. [et al. [Polypyrrole (PPy) is a common material which acts as a bridge to connect DNA with CNTs. Xu et al. ,52 repor [et al. and Qi e [et al. also repet al. [2 for chitosan-CNTs-modified electrodes and chitosan-modified electrodes, respectively). Bollo et al. [Chitosan is another material used as a bridge for DNA attachment to CNTs. Li et al. fabricato et al. also diset al. [et al. [et al. [Other materials used for DNA attachment to CNTs were also reported. Guo et al. describe [et al. covalent [et al. studied 2.3.Nanowires fabricated by polyaniline were utilized as substrates for DNA attachment in some literatures ,61. Nanoet al. [2 sensor areas and superior DNA bonding stability over 30 hybridization/denaturation cycles.Nebel et al. introduc2.4.et al. [Nanoclusters also reported as substrates for DNA attachment. Zhu et al. synthesi3.When we detect small-molecule ligands of biomedical interest, ligand binding may not significantly perturb the biosensor interface. 
In this situation, signal amplification may be useful. Signal amplification by labeling the analyte with nano-materials has been reported for DNA biosensors in many literatures.3.1.3.1.1.et al. [2, whereas no change could be observed for goat antifluorescein without the Au nanoparticle conjugate. This allowed construction of high-sensitivity electrochemical impedance biosensors at a single low frequency, where the signal was sensitive to the interfacial Rct. This change in the electrochemical impedance signal of binding to goat antifluorescein conjugated with Au nanoparticles could be attributed to the much higher electrochemical activity of Au surfaces relative to the underlying organic layer. Amplification of voltammetric signal was also characterized by many researchers [Due to their electrochemical properties, Au nanoparticles have been used as signal amplifiers in many electrochemical DNA biosensors. Wang et al. demonstrearchers \u201367. Li aearchers developeet al. [et al. [et al. [Optical properties of Au nanoparticles were also used for optical DNA biosensor. Yao et al. used Oli [et al. and Mart [et al. .Au nanoparticles were also reported to be used as signal amplifiers for quartz crystal microbalance (QCM) biosensors ,72\u201375. A3.1.2.et al. [+ with concentrated HNO3 then released from the hybrids, but complete dissolution of a gold tag required more severe conditions (1 M HBr containing 0.1 mM Br2) and the electrode might be damaged in this medium. Similar electrochemical characterizations of Ag nanoparticles used in DNA biosensors were done by Fu et al. [et al. [Ag nanoparticles have desirable compositions as oligonucleotides labels in electrochemical detection assays because Ag nanoparticles exhibit better electrochemical activity than Au ones. Cai et al. reportedu et al. and K\u2019Ow [et al. .3.1.3.et al. [et al. [Cadmium sulfide (CdS) nano-material is a semiconductor material with attractive electrochemical properties. CdS as oligonucleotide labeling tags for detection of DNA hybridization have been reported by several researchers. Xu et al. used a t [et al. also use3.2.et al. [3Fe(CN)6 at a bare glassy carbon electrode and an electrochemically activated glassy carbon electrode which was treated in the absence or presence of multi-walled CNTs under identical experimental conditions. The results indicate that the electrochemical response of the K3Fe(CN)6 at the CNTs activated electrode was the strongest among these three kinds of electrodes.The conductive properties of CNTs suggest that they could mediate electron transfer reactions and enhance the relative electrochemical reactivity with electroactive species in solution when used as the electrode material or substrates modified on solid electrodes. As electrode or substrates modified materials, CNTs show better electrochemical behavior than traditional glass carbon electrodes, carbon paste electrodes and any other carbon electrodes . Therefoet al. comparedet al. [Escherichia coli were used as a target and a control, respectively. Southern blotting, which used photostable Raman signals of nanotubes instead of fluorescent dyes, demonstrated excellent sensitivity and specificity of the probes. Their results showed that SWCNTs may be used as generic nano-biomarkers for the precise detection of specific kinds of genes. Cao et al. [The unique optical properties make CNTs ideal optical probes for tagging biomolecules. CNTs exhibit strong Raman signals as well as fluorescence emissions in the near infrared region. 
And such signals do not blink or photobleach under prolonged excitation, which is an advantage in optical nano-biomarker applications. The intense Raman scattering from CNTs provides a large amount of information about the structure and properties of nanotubes with some of the highest known cross-sections for single molecules. Hwang et al. presenteo et al. developeet al. [et al. [et al. [Recently, CNTs have also been utilized as a novel support material to concentrate nanoparticles or enzyme molecules on it as a more powerful DNA hybridization indicator than using a single nanoparticle or enzyme molecule indicating DNA hybridization with embedded Au nanoclusters. It was shown that the sensitivity of the SPR biosensor could be improved by adjusting the size and volume fraction of the embedded Au nanoclusters in order to control the surface plasmon effect. The DNA hybridization experimental results could achieve 10-fold improvement in the resolution performance compared with the conventional SPR biosensors.Hu et al. develope4.In conclusion, different nano-materials have been successfully applied to DNA biosensors as substrates for DNA attachment and as labels for signal amplification, specifically. (1) When nano-materials are used as substrates for DNA attachment, the smaller size the nano-material is, the better result a DNA biosensor can provide. As the size of a crystal is reduced, the surface-to-volume ratio of a crystal is increased, thus it can enlarge the electrode surface area to greatly enhance the amount of immobilized DNA. But this conclusion is only applicable under the condition that the shape of nano-materials is regular, otherwise we cannot obtain a repeatable result. Similar conclusion holds true when nano-materials are used as signal amplifiers.et al. [(2) Due to the diverse properties of different nano-materials, utilizing two or more types of nano-materials could enhance the good qualities as well as offset the insufficiency of each individual nano-material, which could produce better results than that using only one type of nano-materials. For example, Cai et al. describeet al. ,96, incl(3) Although great efforts have been made in all DNA biosensor technologies in order to eliminate the role of the PCR from their protocols, this goal has not yet been achieved . Many sc"} {"text": "The ongoing retreat of glaciers globally is one of the clearest manifestations of recent global warming associated with rising greenhouse gas concentrations. By comparison, the importance of greenhouse gases in driving glacier retreat during the most recent deglaciation, the last major interval of global warming, is unclear due to uncertainties in the timing of retreat around the world. Here we use recently improved cosmogenic-nuclide production-rate calibrations to recalculate the ages of 1,116 glacial boulders from 195 moraines that provide broad coverage of retreat in mid-to-low-latitude regions. This revised history, in conjunction with transient climate model simulations, suggests that while several regional-scale forcings, including insolation, ice sheets and ocean circulation, modulated glacier responses regionally, they are unable to account for global-scale retreat, which is most likely related to increasing greenhouse gas concentrations. The extent to which greenhouse gases forced glacier retreat during the last deglaciation remains unclear. 
Here, the authors recalculate cosmogenic nuclide ages for 195 glacier moraines and show that deglacial glacier retreat was broadly globally synchronous with rising levels of atmospheric CO2. Goehring et al.et al.3He. However, recent advances have pointed out shortcomings in previous scaling models, and more recent 10Be calibrations are yielding values that are consistently and significantly lower than those included in Balco et al.10Be and 3He production rates.Accurate cosmogenic-nuclide production-rate estimates are critical for surface-exposure dating applications, particularly for global comparisons such as in this study. In addition to reliable calibrations of production rates at sites covering as wide a spatial and temporal range as possible, one needs consistent implementation of accurate production-rate scaling models to transform those rates to other sites of interest. The online calculator of Balco et al.et al.in situ cosmogenic nuclides. All scaling calculations here were done using nuclide-specific formulations and the atmospheric, geomagnetic and solar framework considered in Lifton et al.A new scaling modelet al.RC) to describe the dependence of the cosmic-ray flux on position within the geomagnetic field (including both dipolar and non-dipolar components). Cutoff rigidity is defined as the minimum rigidity that an incident primary cosmic-ray particle may possess and still be able to interact with the atmosphere at a given location . Calculating whole-sky cutoffs globally is prohibitively computationally expensive for our purposes . This value is similar to that arrived at by Heyman\u03c72), or excluding sites with ambiguously defined \u2018too large scatter' while including sites with much larger apparent scatter. We did, of course, focus solely on post-2005 calibration data sets, that is, those not contained within Balco et al.et al.et alet al.10Be at g\u22121 per year with or without it.A number of high-quality results . As a wh3He analysis, combining new data sets from Amidon and Farleyet al.et al.et al.3He data tend to be more scattered than the 10Be data set, both within and between sites, but all site results pass Chauvenet's criterion and thus are included. Grouping the data by study yields an arithmetic mean and s.d. for the sea-level and high-latitude 3He production rate for LSD scaling of 122\u00b114 3He at g\u22121 per year (1\u03c3).We did a similar calculation for our et al.10Be and 11.5% for 3He. We excluded cosmogenic ages deemed outliers by the original authors . All exposure age calculations used the new 10Be and 3He calibrated global-production rates described above with nuclide-specific LSD scaling and the atmospheric, geomagnetic and solar framework described in Lifton et al.There is a considerable literature on how best to model moraine ages from individual boulder ages in the typical case that the scatter exceeds analytical uncertainty and thus must reflect geomorphic processes. Two competing processes are likely to dominate on deglacial-age moraines: prior exposure contributes inherited nuclides that lead to overestimates of moraine age, while boulder exhumation yields underestimates of the true moraine age. Applegate The four single-forcing transient simulations of the TraCE simulationsHow to cite this article: Shakun, J. D. et al. Regional and global forcing of glacier retreat during the last deglaciation. Nat. Commun. 
6:8059 doi: 10.1038/ncomms9059 (2015).Supplementary Figures 1-9, Supplementary Tables 1-2, Supplementary Notes 1-2 and Supplementary ReferencesCosmogenic surface exposure age data for all reconstructed glaciers"} {"text": "Viral polymerases replicate and transcribe the genomes of several viruses of global health concern such as Hepatitis C virus (HCV), human immunodeficiency virus (HIV) and Ebola virus. For this reason they are key targets for therapies to treat viral infections. Although there is little sequence similarity across the different types of viral polymerases, all of them present a right-hand shape and certain structural motifs that are highly conserved. These features allow their functional properties to be compared, with the goal of broadly applying the knowledge acquired from studying specific viral polymerases to other viral polymerases about which less is known. Here we review the structural and functional properties of the HCV RNA-dependent RNA polymerase (NS5B) in order to understand the fundamental processes underlying the replication of viral genomes. We discuss recent insights into the process by which RNA replication occurs in NS5B as well as the role that conformational changes play in this process. Polymerases are crucial in the viral life cycle. They have an essential role in replicating and transcribing the viral genome and as a result are key targets for therapies to treat viral infection. A virus may not need to encode its own polymerase depending on where it spends most of its life cycle. Some small DNA viruses that spend all their time in the cell nucleus can make use of the host cell\u2019s polymerases. However, viruses that remain in the cytoplasm do need to encode their own .in vitro without accessory factors. This is primarily because the sizes of genomes that can be packaged in the viral capsid are limited . ,7. 1,7].channel) ,10 (see vs. deoxyribose NTPs (dNTP) is regulated by the interaction of the polymerase with the 2\u2032-OH of the NTP. In general, DNA polymerases that incorporate dNTP in the growing daughter strand have a large side chain that prevents binding of an rNTP with a 2\u2032-OH. However, RNA polymerases utilize amino acids with a small side chain and form H-bonds with the 2\u2032-OH of the rNTP. The polymerase active site often binds the correct NTP with 10\u20131000-fold higher affinity than incorrect NTPs [In the active site, the correct NTP to be added to the daughter strand is selected by Watson-Crick base-pairing with the template base. The selectivity for ribose (rNTP) ect NTPs .While viThere are several structural motifs necessary for catalysis. Motif B contains a consensus sequence of SGxxxT and is located at the junction of the fingers and palm domains [de novo initiating RdRps such as that present in HCV [Motifs A and C have been closely studied because they are located in the active site. Motif C includes the GDD amino acid sequence that is the hallmark of RdRps. These conserved residues are bound to the metal ions and dsRNA viruses see . Genome et al. [The first X-ray structure of an RdRp was generated for Poliovirus (PV) polymerase in 1997 . X-ray set al. .A characteristic trait of RdRps is the extensive interaction between fingers and palm domains . RdRps hRdRps were originally thought to be found uniquely in viruses. However, in 1971 the first eukaryotic RdRp was found in Chinese Cabbage . Later oThe Flaviviridae family has been widely studied because many members of this family cause diseases in humans. 
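Since motif C's GDD tripeptide is described above as the hallmark of RdRps, one quick sanity check when inspecting a candidate polymerase sequence is simply to locate that motif. The snippet below is a minimal illustration; the sequence fragment is an invented placeholder rather than a real NS5B sequence.

```python
import re

def find_gdd(seq):
    """Return 0-based positions of the GDD tripeptide in a protein sequence."""
    return [m.start() for m in re.finditer("GDD", seq)]

# Hypothetical fragment standing in for the palm region of an RdRp;
# a real analysis would use a full sequence from a database such as UniProt.
fragment = "MLVCGDDLVVI"
print(find_gdd(fragment))   # -> [4]
```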
Within this family there are three genera: Flaviviruses, Hepaciviruses and Pestiviruses see . HCV is n\u201d. Residues at the \u201cn\u201d and \u201cn + 1\u201d positions of the template define the P-site and N-site.All known polymerases synthesize nucleic acid in the 5\u2032 to 3\u2032 direction . Thus, rde novo) if this hydroxyl group is provided by the first NTP [de novo mechanism. Most viruses in Picornaviridae and Caliciviridae families utilize a primer-dependent mechanism, but exceptions are found, such as noroviruses in the Caliciviridae family, that synthesize the (\u2212) strand de novo [de novo mechanism has a large thumb domain and narrower template channel suited to accommodate only the ssRNA and NTP [de novo polymerases can be induced to become primer dependent [At the initiation stage, the formation of the first phosphodiester bond is key for polymerization of the nucleotides to begin. To form this phosphodiester bond a hydroxyl group corresponding to a nucleotide 3\u2032-OH is needed. Depending on how this 3\u2032-OH is supplied two mechanisms are differentiated: primer dependent in the case that a primer provides the required hydroxyl group, or primer independent is dictated by the template. Both HCV and BVDV from the Flaviviridae family have been observed to require high concentrations of GTP for the initiation of RNA synthesis regardless of the RNA template nucleotide [et al. [et al. [de novo initiation or in allosterically regulating the conformational changes needed for replication [When the cleotide ,33 which [et al. also sug [et al. pointed [et al. . We notelication .Because base-pairing alone is insufficient to stabilize the dinucleotide product in the \u201cP-site\u201d, specialized structural elements are employed . Besidesde novo mechanism the first dinucleotide is not sufficiently stable and an initiation platform is needed to provide additional stabilization. This reduced stability sometimes results in abortive cycling for the de novo mechanism. However, an advantage of the de novo mechanism is that no additional enzymes are needed to generate the primer [An advantage of the primer-dependent mechanism is that a stable elongation complex is formed more easily. There is limited abortive cycling, if any, and no requirement for large conformational rearrangements . In conte primer .(1)incorporation of the incoming NTP into the growing daughter strand by formation of the phosphodiester bond(2)release of pyrophosphate(3)translocation along the template.After the template and primer or initiating NTP are bound to the enzyme, the steps required for single-nucleotide addition are :(1)incoThese three steps are repeated cyclically during elongation until the full RNA strand is replicated.2+ or Mn2+) in the active site bound to two conserved aspartic acid residues. These metal ions have been shown to be essential for catalysis via the so-called \u201ctwo metal ions\u201d mechanism. This mechanism was proposed by Steiz in 1998 [In order to facilitate nucleotide addition, all polymerases have two metal ions ,14,38,42ng RdRps and non-nucleoside inhibitors (NNIs) see . NIs binet al. [Four NNI sites have been identified: two in the thumb (NNI-1 and NNI-2) and two in the palm (NNI-3 and NNI-4) see . Brown aet al. studied This may facilitate their use in combination therapies by degrading complementary functionalities in the enzyme. 
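As a toy illustration of the template-directed, 5′ to 3′ synthesis and Watson-Crick selection described above, the sketch below generates the daughter strand complementary to an RNA template. It deliberately ignores initiation, NTP binding kinetics, the two-metal-ion chemistry and fidelity checking, and the template sequence is invented for demonstration.

```python
# Watson-Crick pairing rules: the template base in the active site
# selects the incoming NTP added to the growing daughter strand.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def replicate(template_5to3):
    """Return the daughter strand (written 5'->3') for an RNA template.

    The polymerase reads the template 3'->5' and extends the daughter
    strand 5'->3', one nucleotide (phosphodiester bond) at a time.
    """
    daughter = []
    for base in reversed(template_5to3):      # read template 3' -> 5'
        daughter.append(PAIR[base])           # select NTP by Watson-Crick pairing
    return "".join(daughter)                  # daughter written 5' -> 3'

# Invented (+)-strand fragment; the real HCV genome is roughly 9.6 kb.
plus_strand = "GGACUUCGAUG"
minus_strand = replicate(plus_strand)
print(minus_strand)                              # CAUCGAAGUCC
assert replicate(minus_strand) == plus_strand    # copying the copy restores the original
```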
Understanding the molecular mechanisms by which small molecules in general and NNIs in particular inhibit the function of NS5B is essential for rationally design NS5B inhibitors. Such molecules may ultimately serve as a basis for more efficacious or cost-effective HCV therapies, either individually or in combination.et al. [et al. [et al. [et al. [One informative example that illustrates the useful interplay between determining the roles of structural and functional elements of NS5B and understanding the efficacy of NNIs is provided by recent studies of Gilead pharmaceuticals. Boyce et al. assessed [et al. discover [et al. ,47. Boyc [et al. found th [et al. .et al. [et al. [et al. [et al. [et al. [Simulation studies by Davis and Thorpe suggest that the enzyme C-terminus reduces conformational sampling in NS5B, likely eliminating transitions between the closed and open conformations necessary for the initiation and elongation phases of replication respectively . These oet al. . Other s [et al. ,47 indic [et al. . However [et al. indicate [et al. measuredet al. [et al. [In contrast to NNI-2, Boyce et al. observedet al. ,71. This [et al. that suget al. [et al. [With regard to palm inhibitors, Boyce et al. observed [et al. observed [et al. .et al. [in vitro using enzyme variants without the C-terminus. It is likely that any ligands employing the inhibitory mechanisms described by Boyce et al. [The findings of Boyce et al. are impoe et al. would noet al. [Studies such as those of Boyce et al. or Daviset al. ,47,69 maet al. ,74. We net al. ,47,69.Understanding the molecular mechanisms involved in inhibition by NNIs could facilitate the design and deployment of these molecules. The insights acquired may also be transferable to other polymerases to better understand the relationship between structure, function and dynamics in these enzymes. Due to the fact that individual NNIs can have distinct sites of binding, it should be possible to combine multiple NNIs such that their total inhibitory effect is enhanced relative to applying any given inhibitor on its own . It may Flaviviridae viruses are (+) RNA viruses with RdRp polymerases that utilize the de novo mechanism for initiation. While Flaviviridae polymerases possess elements common to other RdRps such as the fingertips region, they are also unique in possessing the \u03b2-flap that may be used as an initiation platform during genome replication.Flaviviridae family within the Hepacivirus genus and employs NS5B as the RdRp that replicates its genome. There are two key steps involved in the replication process: (1) the formation of the initial dinucleotide and (2) the transition from initiation to processive elongation. Structural elements of NS5B that likely have a crucial role in these steps are the C-terminal linker and the \u03b2-flap (see The important pathogen HCV is a member of the flap see . Initiatflap see . FinallyThus, the available evidence suggests that NS5B possesses an intrinsic capacity to be regulated via allosteric effectors including NNIs, the \u03b2-flap and the C-terminal linker. In addition, the role of these effectors seems to be strongly modulated by the specific context of the interaction. Understanding how these structural elements govern enzyme activity and how they interface with inhibitors is important for understanding the molecular mechanisms of allosteric inhibition in NS5B. 
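The expectation noted above, that NNIs binding distinct sites could be combined for a larger total effect, can be given a simple quantitative baseline. The sketch below computes the combined fractional inhibition expected if two inhibitors act independently (Bliss independence); the inhibition values are arbitrary examples, and real combinations would have to be measured, since allosteric coupling between sites can push the true effect above or below this baseline.

```python
def bliss_combined(f_a, f_b):
    """Expected combined fractional inhibition for two independent inhibitors.

    f_a, f_b: fractional inhibition (0-1) of each compound alone.
    Under Bliss independence the residual activities multiply:
    (1 - f_ab) = (1 - f_a) * (1 - f_b).
    """
    return 1.0 - (1.0 - f_a) * (1.0 - f_b)

# Arbitrary example: a thumb-site NNI giving 60% inhibition alone and a
# palm-site NNI giving 50% inhibition alone.
print(bliss_combined(0.6, 0.5))   # 0.8 -> 80% expected if the two act independently
```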
Such knowledge paves the way for rational design of inhibitors and combination therapies both for NS5B and for the polymerases to which these insights can be generalized. This information may also be useful in designing enzymes with attenuated activity, as would be required if one sought to develop a strain of HCV that could serve as the basis for a vaccine. Attenuating HCV by degrading the activity of NS5B is one strategy that could prove useful in this regard. One potential drawback to such efforts is the high mutation rate of HCV that results from the error-prone nature of NS5B. However, it is possible that one could circumvent this issue by generating a polymerase that not only possesses reduced efficacy, but also displays increased fidelity and thus faithfully replicates the viral genome.Viral polymerases and, specifically, RdRps share many common structural, functional and dynamic features. Thus, the knowledge obtained in understanding how NS5B functions may be transferable to polymerases from closely related viruses such as Dengue or West Nile virus, or even to other more distantly related polymerases such as reverse transcriptase from HIV and 3D-pol from poliovirus."} {"text": "Medical image analysis is performed in order to facilitate medical research and ultimately provide better healthcare. It is critical to the advancement of imaging-based medical research, for example, using magnetic resonance (MR) imaging to probe brain structural and functional changes related to a disease or cognitive process.This special issue highlights new methods of signal and image processing, computer vision, machine learning, and statistical analysis and their application in medical image analysis. It is organized into two groups of papers: predictive modeling and image processing.In the first group, we include a set of papers for imaging-derived biomarker detection and clinical decision support systems. The overall theme is how to detect biomarkers characterizing a disease and to build decision support systems using computer algorithms. B. Tay et al. propose a classification scheme for identifying healthy individuals and patients with spinal cord injuries based on fractional anisotropy values obtained from diffusion tensor imaging data. H. Liu et al. address cirrhosis classification based on multimodality imaging and propose a new method to extract texture features for multilabel classification . G.-P. Liu et al. employ deep learning and multilabel learning to construct the syndrome diagnostic model for chronic gastritis. M. Yang et al. describe a method to automatically detect corticospinal tract damage in chronic stroke patients and demonstrate that the detected biomarker is associated with motor impairment. Y. Liu et al. describe a novel machine learning algorithm which combines tract-based spatial statistics and Bayesian data mining to quantify white matter changes in mild traumatic brain injury.The second group of papers consists of a wide range of medical image processing algorithms, including segmentation, visualization, and enhancement. For brain tissue segmentation, S. Ji et al. describe a multistage method based on superpixel and fuzzy clustering for brain MRI segmentation. Y. Zhong et al. propose a method for regularization of fMRI data to address the limitations of traditional spatial independent component analysis. J. Gao et al. describe a global search algorithm based method to separate MR images blindly. Y. P. Du et al. 
develop a method to reduce partial volume effect with voxel-shifted interpolation and this algorithm substantially improves the detection of BOLD signal in fMRI. Z. Jin et al. use a high-pass filter based method to enhance the visibility of the venous vasculature and reduce the artifacts in the venography. A framework for tracking left ventricular endocardium through 2D echocardiography image sequence is proposed by H. Ketout and J. Gu. E. Bengtsson and P. Malm review automated analysis of the cell samples to screen for cervical cancer.This special issue is selective. Among 27 submissions, 12 were selected. It is our hope that this impressive group of papers will help the medical image analysis community in their efforts to advance imaging-based medical research.Rong ChenRong ChenZhongqiu WangZhongqiu WangYuanjie ZhengYuanjie Zheng"} {"text": "Results from studies where human, animal and cellular models have been utilised to investigate the effects of EPA and/or DHA on white adipose tissue/adipocytes suggest anti-obesity and anti-inflammatory effects. We review here evidence for these effects, specifically focusing on studies that provide some insight into metabolic pathways or processes. Of note, limited work has been undertaken investigating the effects of EPA and DHA on white adipose tissue in humans whilst more work has been undertaken using animal and cellular models. Taken together it would appear that EPA and DHA have a positive effect on lowering lipogenesis, increasing lipolysis and decreasing inflammation, all of which would be beneficial for adipose tissue biology. What remains to be elucidated is the duration and dose required to see a favourable effect of EPA and DHA in vivo in humans, across a range of adiposity.Adipose tissue function is key determinant of metabolic health, with specific nutrients being suggested to play a role in tissue metabolism. One such group of nutrients are the Adipose tissue, the largest organ in the human body, was historically considered to be metabolically inert. However, white adipose tissue is now considered an endocrine organ as it secretes adipokines (and hormones) which act locally and distally through autocrine, paracrine and endocrine effects . Althougn-3 (or \u03c9-3) fatty acids, specifically those derived from marine sources. n-3 fatty acids have been suggested to lower the risk of a number of non-communicable metabolic diseases including cardiovascular disease, obesity and diabetes [n-3 polyunsaturated fatty acids (LCPUFA), specifically eicosapentaenoic acid and docosahexaenoic acid on white adipose tissue metabolism and function. Although other n-3 fatty acids such as \u03b1-linolenic acid and docosapentanoic acid are of potential interest, data are limited. A number of reviews on the effect of fish oil or n-3 fatty acids on adipose tissue have previously been undertaken [in vitro cellular studies regarding the specific effects EPA and DHA have on the metabolism and function of white adipose tissue from different depots. Specifically, we will discuss the mechanisms by which EPA and DHA are proposed to reduce adiposity along with discussion regarding how n-3 fatty acids may influence markers of adipose tissue inflammation and cytokine production.A class of fatty acids that has received a lot of attention over the last 30 years is the diabetes ,13,14. Hdertaken ,18,19,20de novo by fish. 
Fish accumulate them through consumption of water plants, such as plankton and algae, which are part of the marine food chain [n-6) and oleic acid (18:1n-9) may be given, the EPA and DHA content of the fish will decrease [EPA and DHA, commonly referred to as fish oil fatty acids, are not synthesized od chain . Therefodecrease ,23. Maridecrease .Within the human diet, EPA and DHA can be produced from ALA but the capacity of conversion is low in humans, although higher in women of child-bearing age than men . Thus, iAs the fatty acid composition of adipose tissue has a half-life between 6 months and 2 years, it reflects long-term dietary intake along with endogenous metabolism . The abun-3 supplementation are limited and findings inconsistent with some [et al. [et al. [n-3 fatty acid supplementation. However, as visceral adipose tissue samples are often obtained during elective surgery, it would be challenging to undertake a well-controlled study. Taken together, the data presented in n-3 fatty acid intake, over the weight maintenance period [Studies measuring the change in adipose tissue fatty acid composition, as a marker of compliance to ith some ,30,31,32ith some ,34 notin [et al. . Of notee period . These ci.e., body weight, BMI, waist circumference or waist to hip ratio). From the 21 studies they found no evidence to support an anti-obesity role of n-3 LCPUFA [et al. [Measuring an anti-obesity effect of increased EPA and DHA consumption in humans is challenging, not least as there are many other factors to control for and methodology sensitive to small changes in adipose tissue mass needs to be used. In 2009, Buckley and Howe reviewed3 LCPUFA . It is p [et al. noted th [et al. , as well [et al. . Moreove [et al. ,46,47,48 [et al. ,53,54,55 [et al. ,67,68,69de novo lipogenesis or re-esterification of fatty acids within the tissue; alternatively it may occur due to a lower flux of fatty acids to the tissue. In the latter situation, fatty acids could be repartitioned to other tissues (such as muscle) for disposal, rather than going to adipose tissue for storage. In humans, the absolute contribution of de novo synthesized fatty acids to adipose tissue triacylglycerol is potentially small [de novo lipogenesis (or fatty acid esterification/re-esterification) in vivo in humans is challenging. Therefore, it is not surprising that studies have not been undertaken investigating how EPA and DHA supplementation influence these processes in humans. Although not a direct measure of fatty acid synthesis or esterification/re-esterification within the tissue, the measurement of the expression of genes related to these processes provides some insight to the effect of EPA and DHA on these processes. Camargo et al. [A decrease in fatty acid deposition within adipose tissue may occur due to a decrease in triacylglycerol synthesis via decreased ly small and measo et al. reportedet al. [-/-) mouse model whilst others have typically used C57Bl/6 mice or Wistar rats.Work in animal models has typically found EPA and DHA to limit lipid accumulation in adipose tissue . The majet al. did not et al. . This diet al. [et al. [Studies investigating the effects of dietary EPA and DHA on adipose tissue function have also been undertaken in fish . Todorceet al. demonstret al. . Diet coet al. . Further [et al. in grassn-3 fatty acids may decrease adipogenesis and lipogenesis and thus exposure in utero to these fatty acids may lower the risk of obesity in offspring. 
In 2011 Muhlhausler and colleagues [n-3 LCPUFA supplementation during pregnancy and lactation on postnatal body composition of offspring. Although 13 potential studies were identified, only four met the inclusion criteria and the authors found from albeit limited data that there was a suggestion that the offspring from n-3 LCPUFA supplemented dams had a lower fat mass [n-3 LCPUFA [n-3 fatty acids decreases adipogenesis and lipogenesis and is an area that warrants further investigation.The process of adipogenesis (or an increase in fat mass) involves the differentiation of preadipocytes to mature adipocytes, is a complex and tightly regulated process involving a cascade of transcription factors which are sensitive to the nutritional environment . In a colleagues reviewedfat mass . In cont3 LCPUFA . Thus itin vitro evidence regarding the mechanistic effects of EPA and DHA on triacylglycerol accumulation/lipid deposition comes from the clonal murine cell line, 3T3-L1 the cells were exposed to, along with the duration of exposure.Overall, the effects of EPA and DHA as well as EPA n-3 fatty acids on adipocyte apoptosis and only limited work has been undertaken in animal and in vitro cellular models. Although outside the scope of this review, there have been a large number of studies investigating the effect of n-3 fatty acids and cancer in relation to apoptosis, as reviewed by Wendel and Heller [To our knowledge, there have been no studies in humans investigating the effect of d Heller .in vitro and in vivo studies have reported apoptosis in white adipose tissue along with alternations in adipose tissue mass. Thus consideration is required when looking at adipose tissue mass in relation to cell number as they might be partly regulated by pre-adipocyte/adipocyte apoptosis [et al. [et al. [n-3 fatty acids and regulation of cellularity in adipose tissue. Using a rodent model the authors suggested that increased intakes of EPA and DHA but alsiewed by , there am et al. reported [et al. treated in vitro studies suggests that high doses of EPA and DHA may induce adipocyte apoptosis. How targeting the apoptotic pathway in white adipose tissue would decrease obesity and influence adipose tissue function and overall metabolic health in humans remains to be elucidated.Taken together, the available data from animal and n-3 fatty acids [n-3 fatty acids (EPA and DHA) improved adipose tissue insulin sensitivity compared to diets high saturated or monounsaturated fat in individuals with the metabolic syndrome [in vitro studies, translation to the appropriate dose, along with duration required to see an effect in humans needs to be elucidated.Although an increase in fatty acid oxidation, via \u03b2-oxidation has been suggested to play a role in a reduction of triacylglycerol accumulation in adipocytes, evidence for this in white adipose tissue is limited; fatty acid oxidation and mitochondrial function has been studied more often in brown adipose tissue. The number and activity of mitochondria within adipocytes has been suggested to contribute to insulin resistance and type 2 diabetes . Changesty acids has been reported to increase in young, healthy men (n = 5) after 3 weeks of supplementation with fish oil (6 g/day) when compared to a control diet containing equal amounts of total dietary fat [et al. [et al. [tary fat . Only a [et al. reported [et al. reportedIn vitro cellular studies have found increased \u03b2-oxidation in 3T3-L1 adipocytes after incubation with 100 \u00b5M of EPA for 24 h [et al. [et al. 
[n-3 fatty acids on mitochondrial oxidative phosphorylation (OXPHOS) and fatty acid oxidation in white adipose tissue. In this comprehensive review they reported that in a murine model, supplementation with n-3 fatty acids in combination with mild calorie restriction induced mitochondrial OXPHOS in epididymal white adipose tissue only, independent of UCP1 induction; other studies in rodents have reported increased levels of UCP1 mRNA and/or protein in BAT in response to n-3 fatty acid supplementation [in vitro cellular model of isolated stromal-vascular (SV) cells from inguinal adipose tissue of suggested that EPA enhanced energy dissipation capacity by recruiting brite adipocytes to stimulate oxidative metabolism. From the limited data available it appears that EPA and DHA increase fatty acid \u03b2-oxidation in adipocytes, however the mechanisms responsible and the effect on mitochondrial OXPHOS and thermogenesis in human adipose tissue remains to be elucidated.for 24 h . The incfor 24 h . As EPA for 24 h . EPA andfor 24 h . Todorce [et al. demonstr [et al. . In 2013 [et al. reviewedentation . Recentlentation using anAn expansion of adipose tissue mass, is often associated with macrophage infiltration which may lead to inflammatory responses, which have been implicated in the development of pathological changes in adipose tissue physiology ,7,8,9. Tn-3 fatty acid supplementation, for periods between 8 weeks up to 6 months, on the expression of genes related to inflammation in human subcutaneous white adipose tissue have been undertaken. Overall results are variable, with some suggesting consumption of EPA and DHA decreases the expression of genes related to inflammation, whilst other report no change of individuals with chronic kidney disease (CKD) who were randomised to take either a low (n = 6) or high (n = 6) dose of MaxEPA for 10 weeks. In contrast, Itariu et al. [Studies investigating the effect of o change . For exa per gram of fat for 6 weeks resulted in a reduction in macrophage infiltration in combination with decreased expression of inflammatory genes in white adipose tissue [et al. [\u2212/\u2212 mice and showed similar results to Todoric et al. [\u03b1, metalloprotease 3 (MMP3), and serum amyloid A3 (SAA3) in white adipose tissue [n-3 fatty acids have the potential to modulate immune response in adipose tissue.Work in murine models has found consumption of e depots . Todoric or in measuring the effects of n-3 fatty acids on leptin at different stages of adipogenesis. Culturing human primary adipocytes in either EPA or DHA resulted in a down-regulation of IL6 and TNF\u03b1 secretion [et al. [in vivo, as the anti-inflammatory effects of EPA and DHA on TNF\u03b1 expression would be modulated through the direct effect of these fatty acids on macrophages; cells that were not present in their in vitro culture [The effects of ipocytes and primipocytes . Reselan [et al. reported [et al. and Pere [et al. , where e [et al. . Thus, tecretion . In contecretion ; the und [et al. who note culture . It remavs. cell-lines). Moreover, in vitro cellular cells often investigate the effects of EPA and DHA on adipocytes and it is plausible a different response may be found in whole adipose tissue due to the presence of other cell types and their interaction with adipocytes. 
Although the effects of n-3 fatty acid supplementation on the fatty acid composition of subcutaneous abdominal and gluteal adipose tissue have been investigated, mechanistic studies (in vivo and in vitro) appear to be limited primarily to subcutaneous abdominal adipose tissue and/or adipocytes. Evidence for an effect of n-3 fatty acids in human visceral adipose tissue is sparse and therefore not well understood. Evidence for a reduction in fat accumulation in animal models, along with an anti-inflammatory effect, appears to be consistent when intakes of EPA and DHA are high; however, recommendations for human intakes are between 0.5% and 2% of total energy intake. In recent years evidence demonstrating that an increased consumption of EPA and DHA may have a beneficial effect on white adipose tissue function and metabolism is starting to emerge. Although the current literature cannot support an exact mechanistic role of EPA and DHA in adipose tissue biology, it is apparent that these fatty acids have the potential to be potent modulators of adipose tissue and adipocyte function. More work has been undertaken using animal and cell models; therefore, consideration is required regarding the dose and duration of EPA and DHA and the animal and cell model used (e.g., primary cells vs. cell lines)."} {"text": "The starting point of modern biosensing was the application of actual biological species for recognition. Increasing understanding of the principles underlying such recognition, however, has triggered a dynamic field in chemistry and materials sciences that aims at joining the best of two worlds by combining concepts derived from nature with the processability of manmade materials, e.g., sensitivity and ruggedness. This review covers different biomimetic strategies leading to highly selective (bio)chemical sensors: the first section covers molecularly imprinted polymers (MIP) that attempt to generate a fully artificial, macromolecular mold of a species in order to detect it selectively. A different strategy comprises devising polymer coatings to change the biocompatibility of surfaces, which can also be used to immobilize natural receptors/ligands and thus stabilize them. Rationally speaking, this leads to self-assembled monolayers closely resembling cell membranes, sometimes also including bioreceptors. Finally, this review will highlight some approaches to generate artificial analogs of natural recognition materials and biomimetic approaches in nanotechnology. It mainly focuses on the literature published since 2005. One could regard the advent of plastics as the starting point of this development. Whereas in the beginning the main fascination arose from being able to generate unprecedented materials, the aspect of studying natural structures as a model for technologically processed materials has been gaining increasing importance. This is perfectly reflected in the following statement: "The inspiration from nature is expected to continue leading to technological improvements and the impact is expected to be felt in every aspect of our lives". Molecular imprinting actually aims at designing "artificial receptors" or "artificial antibodies" that show biological functionality, despite consisting of a fully artificial, synthetic matrix. Binding to the target analytes occurs by the same non-covalent interactions as in biological systems, but still the backbone is completely different. 
In the case of membrane mimics, two different strategies are currently followed: the first uses artificial polymers that either repel biospecies or attract them and hence, establish affinity layers for biosensors. These are sometimes combined with natural or nature-analogous receptor/indicator materials to make use of specific interactions in an artificial environment and thus, stabilize the receptor of interest. The second one aims at mimicking natural cell membranes, e.g., with Langmuir\u2013Blodgett techniques and immobilizing receptors within them. Compared to polymers, this should immobilize the respective target receptor in an environment that is much closer to its original one.etc. aim at focusing on the functional part of a biomolecule and reproducing only this functional part to make synthesis simpler. Nanoparticles and nanostructures themselves are not necessarily biomimetic, but, in combination with different receptor strategies, they open the way for novel biosensing approaches. Within this review, we will highlight sensor strategies falling into these categories and discuss the different aspects of recognition within the systems.Artificial peptides, DNAzymes, Molecular imprinting is a very straightforward technique to achieve artificial receptors based on highly cross-linked polymers for a wide range of analytes in almost any dimensions ,4. In deAfter a groundbreaking paper by Alexander and Vulfson , the litet al. [d) of MIP for the template peptide was similar to that of monoclonal antibodies, namely 3.17 nM, calculated through Scatchard analysis. A limit of detection (LoD) of 2 ng/mL was achieved and practical performance was tested on real samples of human urine with satisfactory results. According to the authors, these LoD of HIV-1 gp41 were comparable to the reported ELISA method. On the basis of high its hydrophilicity and biocompatibility, dopamine excels over other functional monomers for this application. Furthermore, this simple epitope method can be adapted to other biomolecules. A further diagnostically important protein is myoglobin, which, among others, can be utilized as a cardiac marker. Rather than going for conventional epitope imprinting, Liao et al. [et al. [Firstly, let us regard \u201cstraightforward\u201d protein imprinting: human immunodeficiency virus type 1 (HIV-1) related protein , for instance, has attracted scientific interest, because it is the transmembrane protein of HIV-1 and plays a crucial role in membrane fusion between individual virions and T cells during infection. As such, it plays an important role in the efficacy of therapeutic intervention, because it indicates the extent of HIV-1 disease progression. By implementing the epitope imprinting strategy\u2014where only a substructure of the analyte of interest is used as a template\u2014Lu et al. developeo et al. presente [et al. , for ins\u22121 enzyme and response times of a few minutes. The highest sensitivity was achieved in the case of solution-based imprinting with native trypsin: this system could differentiate denatured trypsin from the native form of enzyme. As already mentioned above, in the case of entire cells, so-called \u201cplastic\u201d copies of the original cells are promising templates for imprinting. For instance, Saccharomyces cerevisiae\u2014baker\u2019s yeast\u2014undergoes a range of growth stages, each of which may be recognized biomimetically. Seidler et al. [et al. [S. cerevisiae and S. 
bayanus, which strongly corroborates the claim that the functional pattern of the initial yeast cells is correctly reproduced within the polymeric matrix; both native and \u201cplastic\u201d cells yielded the same sensitivity while selectivity exceeded a factor of three in the second case.Sensors reached detection limits of 100 ng\u00b7mLr et al. produced [et al. fabricatet al. [MIP can withstand harsh environments and yet retain recognition abilities for the detection of species in complex media. In this connection, Hayden et al. further et al. .et al. [Double imprinting approaches can also be applied to literally generate \u201cplastic antibodies\u201d that are inherently robust and selective. For instance, Schirhagl et al. developeet al. [Of course, such double-imprinting techniques are especially interesting for generating antibody replica. If immunoglobulin is applied as a template in the MIP, and the resulting substrates as stamps in a second imprinting procedure, this leads to \u201cpositive images\u201d of natural antibodies in the polymer. Inherently, such double-imprinted layers can be produced in bulk for industrial processing and are more robust as compared to natural counterparts. Additionally, cost-efficiency increases, because the templates are required only in the first step. MIPs have also been proposed for cell surface proteins, such as wheat germ agglutinin (WGA) lectin, a model compound for interactions between viruses and cells. Wangchareansak et al. developeWhen comparing MIP-based and natural recognition materials with one another, the following conclusions can be drawn: typically. the selectivity of MIP is in the same order of magnitude as that of natural antibodies or ligand\u2013receptor interactions, or they may even reach it. Mass sensitive measurements revealed that in the case of proteins, MIP functions as a \u201ccondensation nuclei\u201d, thus, those systems feature a strongly increasing sensitivity. BET analysis of the WGA MIP mentioned above revealed that the sensor signal at 160 \u00b5g/mL WGA corresponds to an average of ten molecular layers on the MIP, whereas affinity to NIP is low. However, the overall sensitivity of any given biosensor is determined by the combination of the recognition material and the respective transducer.et al. [et al. [2-glycoprotein-I (\u03b22GP-I): \u03b22GP-I binds to negatively charged membranes and exposes the correct epitopes only in this bound state. Otherwise, the binding sites are hidden within the protein, which makes immunoassays with the whole protein useless. To provide an environment comparable to the physiological conditions in the human body, the authors co-immobilized a multilayer assembly of covalently attached polymeric DCPEG/PEG (lipid anchor) and a liposome membrane on the transducer. Quantification of anti-\u03b22GP-I antibodies and calibration of the sensor chip in buffer was done using reflectometric interference spectroscopy. This strategy can be extended to distinguish between healthy and ill individuals within routine clinical diagnostics. Furthermore, it is an impressive example for the fact that in bioanalysis not only the target analyte plays a role, but also its environment. 
In this way, the sensor surface mimics not \u201conly\u201d an antibody or binding site, but also the membrane structure necessary to ensure biological activity.A different approach suggests synthesizing MIP membranes have been reported based on self-assembly to more closely mimic natural functionality, a huge amount of which actually happens in (cell) membranes. For example, Zhao et al. used sucet al. . However [et al. further [et al. have devet al. [in vivo diagnostics without showing any significant response to the anti-Salm-functionalized surface . Astonishingly, high selectivity and sensitivity (<3 ng/cm2 for undiluted blood plasma) was achieved for an activated leukocyte cell adhesion molecule , a potential carcinoma biomarker. Such combinations of artificial and natural materials have also been reported by other groups. Henry et al. [Membranes play a key role for life, as they are interfaces between cells and their surroundings. As such, they are also interesting starting points for biomimetic sensor development. Here, two seemingly contradictory goals are of main interest: on the one hand\u2014and as mentioned already in the previous section\u2014the design of artificial receptors, and on the other hand, effectively preventing nonspecific protein adsorption from real-life media , which is absolutely key for implanted medical devices. Addressing this issue, Vaisocherova et al. applied y et al. , for insi.e., not in combination with natural materials, the aim can also be facilitating the formation of adhered biolayers rather than preventing them. One example for this strategy has been reported by Mueller et al. [et al. [When using pure polymer films, r et al. , who mod [et al. applied et al. [et al. [6 particles/mL after short incubation and can be monitored by naked eye or readily available scanners. Although, detection limits are a few order of magnitudes higher than in modern PCR strategies, the technique offers unparalleled simplicity in its approach.Polymer films can also function as \u201caffinity receptors\u201d themselves: Ibri\u0161imovi\u0107 et al. for inst [et al. , who con [et al. used glaet al. [\u22126 M to 10\u22122 M. In a related, yet completely different approach, Yang et al. [Such straightforward receptor materials, of course, allow for evaluating novel transducers, e.g., derived from modern nanotechnology as bio(mimetic) sensors. Chang et al. , for insg et al. synthesiet al. [et al. [Salmonella typhimurium can be detected in situ with highly selectivity at 90% relative humidity by using a lead zirconatetitanate (PZT)/gold-coated glass cantilever. Its sensitivity was found to be 1 \u00d7 103 and 500 cells/ml in 2 mL of liquid with a 1 and 1.5 mm dipping depth, respectively. Such a limit of detection is more than one order of magnitude lower than that of the commercial Raptor sensor.Generally speaking, electrochemical impedance spectrometry is gaining increasing importance in the detection of bioanalytes down to viruses due to iet al. , chlorid [et al. showed tet al. [2O2 produced in situ can trigger ECL reaction in the sensing layer. In another approach of using supported lipid bilayers (SLB) mimicking natural structures, Choi et al. [in vitro by label-free surface plasmon resonance spectroscopy (SPRS). Biomimetic sensor chips were fabricated by the fusion of unilamellar lipid vesicles on a hydrophilic Au surface for SPRS. This setup enables real-time measurements of protein aggregation. 
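Several of the studies discussed in this section report dissociation constants or detection limits extracted from equilibrium binding data, such as the 3.17 nM Kd obtained by Scatchard analysis for the MIP described earlier. The sketch below shows the arithmetic of a Scatchard linearization on simulated, noise-free Langmuir binding data; the concentrations and Bmax are arbitrary, and real sensor data would require proper error handling and regression diagnostics.

```python
import numpy as np

# Simulate equilibrium binding of a target to a fixed number of sites
# following a simple Langmuir isotherm: B = Bmax * F / (Kd + F).
Kd_true = 3.17e-9        # dissociation constant in M (value borrowed from the MIP example above)
Bmax = 1.0               # arbitrary units of bound target at saturation
free = np.array([0.5, 1, 2, 5, 10, 20]) * 1e-9    # free target concentrations, M
bound = Bmax * free / (Kd_true + free)

# Scatchard analysis: fit bound/free against bound;
# the slope is -1/Kd and the x-intercept is Bmax.
slope, intercept = np.polyfit(bound, bound / free, 1)
Kd_est = -1.0 / slope
print("estimated Kd = %.2f nM" % (Kd_est * 1e9))   # ~3.17 nM for noise-free data
```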
Self-assembly is one of the fundamental driving forces of life that leads, for example, to the phospholipid bilayers constituting cell membranes. The Langmuir\u2013Blodgett technique is a way to build up artificial membranes. It makes it possible, for instance, to immobilize lipid bilayers (LB) that themselves can stabilize biomolecules to achieve biomimetic \u201ccell membranes.\u201d In contrast to the methods previously mentioned, the biomolecule is therefore immobilized in an artificial membrane that more closely resembles a natural one, as compared to a polymer thin film. Using this approach, Jiao et al. demonstri et al. demonstrThe respective membrane behavior was analyzed by investigating specific types of Cu/Zn-superoxide dismutase (SOD1) species, which are linked to neurodegenerative disease. In an unusual manner, SOD1 aggregates degenerate membranes closely related with aberrant characteristics of conformationally disordered and/or mostly aggregated proteins. The presented biomimetic sensor build-up can be further applied for the development of biomarkers to analyze different injured cells.In recent years, peptide-based sensors have been developed, including a range of different strategies. One of these can be regarded as biomimetic, namely synthesizing peptides that are designed to make use of well-known interactions between a single or a few amino acids and the respective target compounds. However, selecting them can be difficult, when the target molecule is small and/or it considerably changes its conformation upon immobilization . Other b9 hematologic cells). To cope with the challenge, Myung et al. [2. Hence, they can be separated from one another and detected finally by CTC devices that offer enhanced selectivity and sensitivity without involving complex fabrication processes. These differences are governed by the fact that the ratio of cell rolling and stationary binding on the surface depends on the respective immobilized artificial peptide. When designing artificial recognition sequences, applying molecular modeling of the actual interaction center usually substantially improves the overall experimental time, as candidate receptors can be selected \u201cin silico.\u201d An example for this is the rapid and direct QCM detection of the mussel heat shock protein HSP70 in crude extracts of the mussel mantle by artificial heptapeptides [The detection of circulating tumor cells (CTCs) is clinically important for diagnosis and prognosis of cancer metastasis. Separation and detection of these cells are challenging due to extremely low quantity in patient blood of that species and immobilized it onto a streptavidin-coated dextran sensor surface. Employing surface plasmon resonance (SPR), detection limits of 0.5 nM were obtained with linearity from 5 to 1,000 nM. Escherichia coli and S. aureus yield no significant signal in selectivity pattern studies. Salmonella detection.An example for a biomimetic DNA-based system has been presented by Pelossof et al. . They cog et al. developeet al. [Molecular self-assembly processes are crucial for generating new functional materials with suitable prosperities for sensor development. This concept is biomimetic to its core, as nature has, for example, constructed thousands of nanostructures from 20 amino acids . Chemicaet al. discusseet al. ,43,44.et al. [Motivated by biological processes, Sundh et al. developeet al. [10) onto the modified electrode simultaneously. 
Using cyclic voltammetry, a sensitivity of \u22120.04988 \u03bcA/mM for NADH with the modified electrode could be achieved.The outstanding effect of nanoparticles (NP) has been demonstrated by Lee et al. through 3O4 nanoparticles (CMNP) as signal amplifiers, Lin et al. [Escherichia coli as a target. For this purpose, CMNP were functionalized with a chitosan layer for binding negatively charged E. coli via electrostatic attraction. The CMNPs binding to the sensor resulted in increased mass loading ultimately decreasing its resonance frequency without interference by serum proteins. The method can be extended further to other bacteria. For example, slightly varying the NP approach, the immobilized Au nanoparticles were functionalized with single-strand DNA (ssDNA) probes specific to the eaeA gene of E. coli O157:H7 [et al. [Similarly, employing chitosan-modified magnetic Fen et al. reported O157:H7 . First r O157:H7 . In a di [et al. demonstret al. [2O2. Combining the above two strategies , Kim et al. [et al. [3O4) and multi-walled carbon nanotubes (CNT) coated onto indium tin oxide (ITO) coated glass plate by electrochemical synthesis of polyaniline (see N. gonorrhoeae) genosensors leading to detection limits of 1 \u00d7 10\u221219 M bacteria within 45 s of hybridization time at 298 K.Mesoporous materials constitute another interesting class of substrates for biomimetic sensor setups. For instance, Li et al. synthesim et al. employed [et al. : they deline see . The reset al. [et al. [et al. [2O3 magnetic nanoparticles and fluorescent quantum dots were embedded together into single swelling poly(styrene/acrylamide) copolymer nanospheres. In this way, fluorescent-magnetic bifunctional nanospheres can be generated. Their approach included the fabrication of smart wheat germ agglutinin (WGA)- trifunctional nanobiosensors (TFNS), peanut agglutinin (PNA)-TFNS and Dolichos biflorus agglutinin (DBA)-TFNS composites that combine high fluorescence yield, magnetic properties and selective detection of N-acetylglucosamine, D-galactosamine and N-acetylgalactosamine residues on the A549 cell surface. Such biomimetic lectin-modified nano biosensors (lectin-TFNS) can thus be employed for analysis of glycoconjugates on A549 cell surface.Being a comparably new class of nanomaterials, quantum dots (QDs) are considered to be promising matrices for sensor layers. For example, employing QDs as biomimetic material, Huang et al. develope [et al. applied [et al. . Fe2O3 mAs can be seen, mimicking biological functionality has become a field of substantial scientific interest and has generated a range of different strategies for actually reaching that goal. Whereas, the idea behind molecular imprinting is to generate as high selectivity as possible in a fully artificial material, polymer thin-film coatings and self-assembled monolayers follow different goals and strategies: first, they are optimized to tune the affinity between cells and surfaces, which can lead both to materials preventing cell adhesion, as well as to ones supporting it. Furthermore, such systems are useful to incorporate and stabilize biological receptors, hence, generating a monofunctional surface containing a specified interaction center.Diagnostics and healthcare, as well as security topics are the main driving forces behind biosensing. Those fields do not necessarily require reversible sensors, but disposable systems. 
Therefore, they do not require the main advantages of biomimetic materials, which are their ruggedness and re-usability. Sensitivity of biomimetic materials reaches, or even exceeds, that of e.g., antibody-based test formats. However, polymerase chain reaction (PCR) has made substantial progress during the last two decades, allowing for extremely low detection limits for microorganisms due to its amplification properties. The market of diagnostic tools is inherently conservative and thus reluctant to replace well-established techniques with novel ones due to lack of experience or expertise. Especially in MIP, studies on batch-to-batch reproducibility and upscaling to (pilot) plant level still have to be done. Compared to the substantial scientific effort, however, such systems are commercially not (yet) very successful. It is difficult to attribute this to concrete reasons, but the points outlined above definitely play a role. In conclusion, biomimetic sensing is still a matter of research at the academic level, rather than commercial development. Replacing—or at least complementing—bioreceptors will thus not be possible in the immediate future. However, the main advantages of artificial systems will make them interesting candidates for measuring applications requiring long-term stability, such as process control or monitoring air/water quality over extended periods of time. Once established in those markets, application in the diagnostic area seems more realistic, as artificial materials offer inherent cost advantages."} {"text": "Epigenetics provides a molecular mechanism of inheritance that is not solely dependent on DNA sequence and that can account for non-Mendelian inheritance patterns. Epigenetic changes underlie many normal developmental processes, and can lead to disease development as well. While epigenetic effects have been studied in well-characterized rodent models, less research has been done using agriculturally important domestic animal species. This review will present the results of current epigenetic research using farm animal models. Much of the work has focused on the epigenetic effects that environmental exposures to toxicants, nutrients and infectious agents have on either the exposed animals themselves or on their direct offspring. Only one porcine study examined epigenetic transgenerational effects; namely the effect that diet micronutrients fed to male pigs have on liver DNA methylation and muscle mass in grand-offspring (F2 generation). Healthy viable offspring are very important in the farm and husbandry industry and epigenetic differences can be associated with production traits. Therefore, further epigenetic research into domestic animal health and how exposure to toxicants or nutritional changes affects future generations is imperative. Mendelian genetic theories have guided much of the biological research performed in recent history. It has long been assumed that specific phenotypes arise only from DNA sequence. However, non-Mendelian inheritance patterns challenge these theories and suggest that an alternate process might exist to account for certain mechanisms of inheritance. Epigenetics provides a molecular mechanism that can account for these non-Mendelian observations. Despite the amount of epigenetic and transgenerational epigenetic inheritance research being done on a multitude of mammal, insect, and plant models [16–21], far less has been done in agriculturally important species. In dairy cattle, for example, one group
found that during lactation the (STAT)5-binding lactation enhancer, which is part of the \u03b1S1-casein encoding gene, is hypomethylated[Escherichia coli infection of the mammary gland, this region becomes methylated at three CpG dinucleotides which accompanies a shut down of \u03b1S1-casein synthesis[Streptococcus uberis[et al. preformed a generational study to see if a mother dairy cow affected the milk production of her offspring[The relationship of DNA methylation and milk production in dairy cattle has been investigated. During lactation the bovine \u03b1S1-casein gene is hypomethylated. Researcthylated. Howeverynthesis. These ous uberis. In addius uberis. Gonz\u00e1leffspring. They foffspring. Becauseet al.[More research has been done on histone modification related to nutritional changes than on DNA methylation. Short-chain fatty acids are particularly important in ruminant digestion, and are used for cell energy production and use. Butyratet al. show thaet al., but hiset al.[in vitro fertilized, and somatic-cell nuclear transfer embryos, and suggests that these methylation differences may account for the different success rates and health of calves born from these reproductive technologies[et al.[The influence of epigenetics on disease has been studied in many animal models such as rats, mice, and humans, but very little has been done with cattle. One bovine developmental disease called large-offspring syndrome (LOS) has been found to have epigenetic components during embryonic growth. LOS has largely been associated with reproductive technologies commonly used with cattle such as in vitro fertilization and somatic-cell nuclear transfer. Symptomet al. has reponologies. A numbenologies and bovinologies, which i[et al. looked a[et al..No studies have been published showing epigenetic transgenerational inheritance in cattle.Swine are often used as animal models to study human disease because of the similar physiology between the two species. Because of this, much of the epigenetic porcine research involves exposure and response, with very little of the current research being transgenerational.et al.[et al.[Epigenetic effects due to histone modification and acetylation have been studied in a porcine model both in order to increase meat production and to develop a potential treatment for muscular degenerative disease. Sulforaphane is a bioactive histone deacetylase inhibitor often found in edible vegetation like broccoli. Fan et et al. treated et al.. Liu et [et al. also loo[et al.. Another[et al.. Researc[et al.. However[et al.. This st[et al..et al. demonstrated that neonatal estrogen exposure in piglets can lead to epigenetic changes that affect uterine capacity and environment[Research conducted by Tarletan ironment. This leironment. Anotherironment. Howeverironment.et al. preformed a three generational study to look at the effect of feeding on male epigenetic inheritance. The experimental group F0 generation males were fed a diet high in methylating micronutrients, and the resulting F2 generation had a lower fat percentage and higher shoulder muscle percentage as compared to controls. 
They also found significant differences in DNA methylation between the control and experimental groups, especially in the liver, which was proposed to epigenetically affect fat metabolism pathways[One recent transgenerational porcine study has been reported, Table\u00a0pathways.et al.[As shown in the bovine model and porcine model, maternal nutritional impact is a common topic in epigenetic research, and ovine studies are no exception. Zhang et al. looked iet al.. Other net al., 52. HowNo studies have been published showing epigenetic transgenerational inheritance in sheep.et al.[in vitro the propagation in the infected cells was slowed. Observations suggested that DNA methylation in the host may be associated with virus resistance or susceptibility[Marek\u2019s disease in chickens is a manifestation of Marek\u2019s disease virus and progresses to become a T-cell lymphoma that affects chickens and other birds. Vaccines have been developed but they are not completely successful. Tian etet al. set to ftibility.Different developmental epigenetic patterns have been studied between chicken types. One study looked at differential DNA methylation in breast muscle between slow-growing and fast-growing broiler chickens. They foAs one review indicated, many poultry studies indicate that there may be epigenetic effects, and even transgenerational epigenetic inheritance, though very few studies actually test for DNA methylation or histone modification in their research.No studies have been published showing epigenetic transgenerational inheritance in chicken.While a good amount of epigenetic research has been preformed on domesticated farm animals still more needs to be done, Table\u00a0Epigenetics: Molecular factors/processes around the DNA that regulate genome activity independent of DNA sequence, and are mitotically stable.absence of direct environmental influences, that leads to phenotypic variation.Epigenetic: Transgenerational Inheritance: Germline-mediated inheritance of epigenetic information between generations in the Epimutation: Differential presence of epigenetic marks that lead to altered genome activity."} {"text": "Ixodes scapularis ticks collected in Texas and Mexico were infected with the Lyme disease spirochete Borrelia burgdorferi . However, our analyses of their initial data and a recent response by Esteve-Gassent et al. provide evidence that the positive PCR results obtained from both ribosomal RNA intergenic sequences and the flagellin gene flaB are highly likely due to contamination by the B. burgdorferi B31 positive control strain.Feria-Arroyo et al. had reported previously that, based on PCR analysis, 45\u00a0% of Ixodes scapularis ticks collected in Texas and Mexico were infected with the Lyme disease spirochete Borrelia burgdorferi, based on nested PCR amplification and sequencing of the 16S rDNA-23S rDNA intergenic spacer region (IGS). Positive PCR results were also reported for the B. burgdorferi flagellin gene flaB, but no flaB sequences were provided in the article. The results reported by Feria-Arroyo et al. [Borrelia of any kind. In a comprehensive reanalysis of their data [B. burgdorferi B31, used in their study. This commonly used strain was the first isolate of B. 
burgdorferi, and was obtained from ticks collected on Shelter Island, New York in 1981 [flaB sequences from several of the tick specimens from Texas and claimed that \u201cInfection levels using a second genetic marker (flaB), confirmed the results originally obtained by the 16S rRNA-23S rRNA gene intergenic spacer (IGS) of B. burgdorferi\u201d. However, a simple comparison of these flaB sequences to existing genomic sequences indicated that they are essentially identical to the B. burgdorferi B31 sequence, contrary to their conclusions [1) The findings reported by Feria-Arroyo and coworkers [B. burgdorferi in their sample of ticks from Texas and Mexico are erroneous; and (2) The inaccurate information, most plausibly, was the consequence of laboratory contamination of the samples in the chain of possession and faulty analysis of their results and the scientific literature. Our presumption is that the contamination was not intentional but inadvertent. . The inThe Esteve-Gassent et al. response does not address the IGS sequence identity between the Texas samples and the positive control strainIn particular, we bring attention to the following points.Ixodes scapularis ticks collected in Texas and Mexico were infected with Borrelia burgdorferi. This conclusion was based on positive PCR results obtained with these samples using primers for the 16S rDNA-23S rDNA intergenic spacer region (IGS). In our prior Letter to the Editor [B. burgdorferi B31, indicating cross-contamination of the Texas samples with B31 DNA. Extensive regions (664 to 925 bp) were identical to the corresponding B31 sequence in 10 specimens, and were \u226598.9\u00a0% identical in the remaining 11 specimens , confirmed the results originally obtained by the 16S rRNA-23S rRNA gene intergenic spacer (IGS) of B. burgdorferi.\u201d It should first be noted that the flagellin protein gene flaB is highly conserved in B. burgdorferi, so it is useful for detecting B. burgdorferi strains but not for distinguishing between them. Indeed, Figure one in the Esteve-Gassent et al. response [et al. argued that the infected ticks reported in our study were found infected with B. burgdorferi likely due to contamination of the PCR reactions with DNA from the strain B31 of B. burgdorferi, the positive control used in the study. Nevertheless, B. burgdorferi B31 flaB has a cytosine (C) at position 75 in this alignment while the Texas isolates had an adenine (A). The A in the Texas isolate makes them more similar to strains N40 and 297 than to B31. Contamination of our samples with strains N40 and 297 is impossible, since these strains are not present in the laboratory in which molecular analyses were carried out\u201d.\u201cNorris In their response, Esteve-Gassent et al. state thresponse indicateresponse go on toB. burgdorferi B31 chromosome sequence (GenBank Accession No. AE000783.1) has an adenine (A) at the indicated position. A recent resequencing of the B31 genome (CP009656) has the same sequence. In fact, nearly all of the reported B. burgdorferi flaB sequences (over 100) have an adenine at this position. The only exceptions are three reported sequences: two from Dr. Reinhardt Wallich in Germany (X15661 and X16833) and one from a group in the United Kingdom (Y15088). Because the sequence difference is restricted to this one nucleotide, the particular clone used by these groups likely had a point mutation at this position.We were puzzled by this statement in that the widely accepted flaB. 
In the B31 genomic sequence, the gene is annotated as BB0147 and the descriptor used is \u201cP41\u201d, one of the initial descriptions of this protein as a 41-kDa protein antigen. Esteve-Gassent et al. [BB0147 is the same as flaB, although a simple BLAST search would have demonstrated this fact.Esteve-Gassent et al. apparentflaB sequences from Texas samples in the GenBank database; three sequences included in their response have not been provided to GenBank. These GenBank sequences are longer than those provided in Fig. one of their response, and are 234 to 238 nt in length. An alignment of the GenBank sequences with the corresponding flaB region from the B. burgdorferi B31 genome sequence (nt 147949\u2013148187) is shown in Fig.\u00a0flaB sequences reported by Esteve-Gassent et al. [flaB gene, with the exception of a few differences (apparent sequence errors) at the ends of their sequences and one near the middle of one of their sequences (MMWMA69-70). The comparable sequences from B. burgdorferi N40 and 297 and the GenBank entry from Dr Wallich\u2019s group (X15661) are shown at the bottom of the alignment. The position of the cytosine in question (nt 67) in X15661 is indicated.Esteve-Gassent et al. have depflaB sequences reported by Esteve-Gassent et al. [B. burgdorferi B31 flaB has a cytosine (C) at position 75 in this alignment while the Texas isolates had an adenine (A)\u201d is incorrect. In fact, we have confirmed experimentally by Sanger sequencing that low-passage B31 contains an adenine at this position, in order to further rule out the possibility of a sequence error at this nucleotide. The flaB sequences are the only new information provided in the response by Esteve-Gassent et al. [flaB sequences are the same as those in the positive control (strain B31). The flaB results in no way affect the interpretation of the IGS results, which are much more important because of the heterogeneity in IGS sequences observed among B. burgdorferi strains.3.The reverse complement of the BWTX12-16 DNA sequence is mistakenly displayed in Fig. oneThese results clearly demonstrate that the t et al. are indet et al. , yet theB. burgdorferi in the regions sampled likely exceed the values found by Feria-Arroyo et al. study [\u201cBWTX12-16, a questing tick, has a significantly different sequence from either of the controls or the other Texan samples, suggesting that the degree of genetic variation of l. study , which oIn this figure in the response from Esteve-Gassent et al. , the BWTB. burgdorferi strains 297, N40 and B31, and most of the other Texas samples. Indeed, the reverse complement of the BWTX12-16 sequence was mistakenly used in Fig. one, resulting in the appearance of many nucleotide differences. In addition, the last 12 nucleotides of BWTX12-16 sequence in Fig. one do not match any sequence, including the BWTX12-16 sequence from GenBank that Esteve-Gassent et al. [4.Figure two of the Esteve-Gassent et al. response reinforces sequence identity with B31 in most tick samples from Texas, but also indicates the presence of sequence errors in some of the sequences from Texas samplesHowever, Figure two of their response shows that the predicted amino acid sequence of BWTX12-16 is identical to those of the B. burgdorferi B31 sequence from \u201cT\u201d to \u201cN\u201d, most of the Texas samples have identical sequences to B31. However, two samples (GE64 and MM68) have extensive regions that are different from the FlaB sequence in B. burgdorferi strains. 
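The comparisons described above (overall identity to the B31 reference and the state of a single diagnostic nucleotide) are straightforward to reproduce computationally. The sketch below is illustrative only: the sequences are hypothetical placeholders, not the actual B31 or Texas GenBank entries, and a real analysis would align the deposited accessions (e.g., with BLAST) rather than hard-coded strings.

```python
# Minimal sketch: percent identity between an aligned query fragment and a
# reference, plus the nucleotide at one diagnostic alignment position.
# Sequences below are hypothetical placeholders, not real GenBank entries.

def percent_identity(query: str, reference: str) -> float:
    """Ungapped identity over an already-aligned, equal-length region."""
    if len(query) != len(reference):
        raise ValueError("sequences must be aligned to equal length first")
    matches = sum(q == r for q, r in zip(query, reference))
    return 100.0 * matches / len(reference)

def base_at(seq: str, position_1based: int) -> str:
    """Nucleotide at a 1-based alignment position."""
    return seq[position_1based - 1]

if __name__ == "__main__":
    reference_b31 = "ATGGCAGATTTAATCAATA"   # hypothetical reference fragment
    query_sample  = "ATGGCAGATTTAATCAATA"   # hypothetical tick-sample fragment
    print(f"identity: {percent_identity(query_sample, reference_b31):.1f}%")
    print("diagnostic site state:", base_at(query_sample, 7))
```

Such a check makes explicit whether a sample sequence matches the positive-control strain over its full length and which base it carries at the contested position.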
The GE64 nucleotide sequence has two frameshifts and MM68 has a single frameshift, which result in the aberrant amino acid sequences shown in Fig. two of Esteve-Gassent et al. [5.Figure three of the response by Esteve-Gassent et al. inappropWith correction of amino acid 22 in the mples GE6 and MM68http://phylogeny.lirmm.fr/phylo_cgi/index.cgi and shows that all of the sequences from GenBank are identical, except for single nucleotide differences in the Texas sample MM69-10, the Wallich et al. flaB sequence X15661 and strain N40. The corresponding Figure from Esteve-Gassent et al. [6.B. burgdorferi strainsThe conclusions of Esteve-Gassent et al. are contOnly nucleotide or amino acid sequences that cover the same range of sequence and are free from sequence errors can be utilized to construct reliable phylogenetic trees. In Fig. three-a of the response , both N4B. burgdorferi with identical or nearly identical IGS genotypes.The following paragraph indicates their resistance to consider the possibility of DNA contamination and sequence errors in their results, with the conclusion that samples acquired from ticks feeding on white-tailed deer, gembok and dogs in a large geographical area harbour et al. stated in their letter that due to the low variability observed in the IGS from the Texas samples most, if not all of them, were likely to have been originated from the same clone which they assume could be the product of contamination with the B31 strain. We disagree with the interpretation put forward by Norris et al. and instead think it is more likely that the lack of variability reported in Feria-Arroyo et al. reflects the level of B. burgdorferi variability present in the Texas-Mexico transboundary region. Several of the ticks included in the Feria-Arroyo study were collected from white-tailed deer, gemsbok and dog. These mammalian hosts, particularly white-tailed deer, harbour ticks from several lineages. Thus, ticks collected from white-tailed deer, even if collected from the same individual, are likely to carry a representation of the B. burgdorferi strains present in a particular location. Thus, the B. burgdorferi genetic diversity reported by Feria-Arroyo et al., likely represents the genetic variation present in the Texas-Mexico transboundary region\u201d.\u201cNorris Esteve-Gassent et al. state:\u201cNB. burgdorferi isolates in other geographical regions. They also do not address the overall high sequence identity of the IGS sequences (99.8\u00a0% to 100\u00a0%) and random distribution of sequence differences shown in Table one of the Letter to the Editor from Norris et al. [\u221217. These points are addressed thoroughly in the prior Letter [7.B. burgdorferi infection in Texas and MexicoThe article by Feria-Arroyo et al. providesHere, the authors provide no citations that indicate a similar sequence homogeneity among s et al. . In conts et al. ). In ours et al. , we had r Letter and are B. burgdorferi was detected in 45\u00a0% of I. scapularis ticks\u2026\u201d.To quote the Feria-Arroyo et al. article , \u201cInfect\u201cNorris et al. suggest that the Feria-Arroyo et al. 
publicatThis degree of positivity is contrary to all prior Texas studies (except for one prior article by members of this group) and would indicate a tick infection rate as high as that found in the \u2018hot spots\u2019 of Lyme disease in the Northeastern United States , to which BioMedCentral adheres, states that \u201cJournal editors should consider retracting a publication if they have clear evidence that the findings are unreliable, either as a result of misconduct or honest error .\u201d We believe that the latter assessment is the case. Therefore, we recommend the retraction of the Feria-Arroyo et al. article [Parasites & Vectors.We conclude that, as in the case of the article by Feria-Arroyo et al. , the dat article from Par"} {"text": "We identified three issues with the study by Second, the Revised Center for Epidemiologic Studies Depression Scale is not tThird, Wang et al. did not address health status in their reported association. There isconvincing evidence supporting the association between cardiovascular disease (CVD) andboth ambient air pollution e.g., and depr"} {"text": "Electrical properties of living cells have been proven to play significant roles in understanding of various biological activities including disease progression both at the cellular and molecular levels. Since two decades ago, many researchers have developed tools to analyze the cell\u2019s electrical states especially in single cell analysis (SCA). In depth analysis and more fully described activities of cell differentiation and cancer can only be accomplished with single cell analysis. This growing interest was supported by the emergence of various microfluidic techniques to fulfill high precisions screening, reduced equipment cost and low analysis time for characterization of the single cell\u2019s electrical properties, as compared to classical bulky technique. This paper presents a historical review of single cell electrical properties analysis development from classical techniques to recent advances in microfluidic techniques. Technical details of the different microfluidic techniques are highlighted, and the advantages and limitations of various microfluidic devices are discussed. Study of the cell has emerged as a distinct new field, and acknowledged to be one of the fundamental building blocks of life. Moreover, the cells have unique biophysical and biochemical properties to maintain and sense the physiological surrounding environment to fulfill its specific functions ,2. CelluSingle cell analysis (SCA) has become a trend and major topic to engineers and scientists in the last 20 years to develop the experimental tools and technologies able to carry out single cell measurement. In addition, in depth analysis and more fully described activities of cell differentiation and cancer can only be accomplished with single cell analysis . In convElectrical properties of cells provide some insight and vital information to aid the understanding of complex physiological states of the cell. Cells that experience abnormalities or are infected by bacteria may have altered ion channel activity , cytoplaThe classical technique for a cell\u2019s electrical properties analysis was originated in 1791, when Luigi Galvani conducted the first experiment for measuring electrical activity in animals, which is evoking muscular contractions in frog nerve muscle preparations by electrical stimulation with metal wires . 
From thThe patch clamp technique is unique in enabling high-resolution recording of the ionic currents flowing through a cell\u2019s plasma membrane. Since the introduction of the patch-clamp technique by Neher and Sakmann in 1976, patch-clamp was adopted by researchers in cellular and molecular biology research areas for studying and providing valuable information of biological cell electrical properties ,27. The et al. sparked an approach for obtaining information about the characteristics and distribution of ion channels in living cells [et al. combined the whole-cell patch clamp with fluorescence ratio imaging for measuring the electric properties of a cell membrane [The work of Hamill ng cells . They usmembrane . FluoresThe conventional patch clamp technique has several disadvantages. First, the patch clamp technique is time consuming process ,35. The et al. developed a dual nanoprobe integrated with nanomanipulator units inside environmental scanning electron microscope (ESEM) to perform electrical probing on single cells for novel single cell viability detection [Salmonella typhimurium, Escherichia coli, Lactobacillus sakei and Listeria innocua), which were well correlated with the hydration state of bacteria. Nanoprobes could potentially be used to perform single cell\u2019s electrical characterization. The nanoprobe capable to measure direct electrical properties of single cell and quantitatively determine the viability of single cells. M. R. Ahmad etection . Figure etection . This teetection . Recentletection . They deAn advance in microfabrication technique, such as soft lithography, creates new opportunities for producing structures at micrometer scale inexpensively and rapidly . For thiA cell shows a rotated ability when it is placed into a rotating electric field within a medium with a non-uniform electric field. Analysis of these phenomena called an electrorotation (ROT), is commonly used for measuring the dielectric properties of cells without invasion. ROT measurement theory is based on rotational speed of cells/particles when the cell and the suspending medium have different electric polarizability, by referring to the frequency of a rotational electric field. This electric field is generated by quadrupole (arranged in a crisscross pattern) electrodes and each electrode is connected to an AC signal with a 90\u00b0 phase difference from each other.The quadrupole electrodes connected to sine wave was a famous design in ROT technique ,49,50. Fversus frequency of the applied field. Jun Yang et al. [i.e., T- and B-lymphocytes, monocytes, and granulocytes. From this experiment, ROT was capable to characterize the dielectric properties of cell subpopulations within a cell mixture. In addition, ROT was utilized to determine the cell viability at real time assessment [i.e., Giardia Intestinalis and Cyclospora Cayetanensis [In the ROT technique, the amplitude of the electric field remains unchanged because the cells are only rotated at a certain position in an electric field . Therefog et al. used fresessment ,60,61. Ctanensis . An ellitanensis was utilet al. and M. Cristofanilli et al., utilized ROT technique to analyze single cells and it took approximately 30 min to test a single cell [Recently, the concept of negative quadrupole dielectrophoresis (nQDEP) and ROT signals superposed on each other in electrorotation technique was reported b. An accgle cell ,63. Thesgle cell . Neverthgle cell . Table 1et al. 
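For readers unfamiliar with how an electrorotation spectrum is interpreted, the steady-state rotation rate is commonly related to the imaginary part of the Clausius–Mossotti factor. The relations below are the standard single-particle textbook form, not necessarily the exact model used in the studies cited above; the symbols (field amplitude E, medium viscosity η, complex permittivities of particle and medium) are introduced here for illustration.

```latex
% Standard electrorotation (ROT) relations; textbook single-particle form.
\begin{align}
  \Omega(\omega) &= -\,\frac{\varepsilon_m \,\mathrm{Im}\!\left[K(\omega)\right]}{2\eta}\,E^{2},\\
  K(\omega) &= \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}{\varepsilon_p^{*}+2\,\varepsilon_m^{*}},
  \qquad \varepsilon^{*}=\varepsilon-\mathrm{i}\,\frac{\sigma}{\omega}.
\end{align}
% \Omega: rotation rate; E: field amplitude; \eta: medium viscosity;
% \varepsilon_p^{*}, \varepsilon_m^{*}: complex permittivities of particle and medium.
```

Fitting the measured rotation rate against field frequency to an expression of this kind is what allows membrane capacitance and cytoplasmic conductivity to be extracted from ROT data.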
[Flow cytometry is a fundamental and powerful analytical tool in cell biology and cellular disease diagnosis for many years. Flow cytometry has an ability to address some problems in single cell analysis such as identifiying, counting and sorting cells ,69. Baseet al. developeet al. [et al. [et al. [et al. [Gawad et al. develope [et al. designed [et al. . In addi [et al. to monit [et al. . Recentl [et al. . The devet al. [Holmes et al. demonstret al. and RBC et al. . In ordeet al. developeE. coli and B. subtilis was well achieved. T. Sun et al. extends impedance measurements from one dimension to two or three dimensions by utilized electrical impedance tomography (EIT) [A label-free cell cytometry based on electrophysiological response to stimulus was reported . This mehy (EIT) . A circuhy (EIT) . Table 2Micro electrical impedance spectroscopy (\u00b5-EIS) is a technique where dielectric properties in a frequency domain of a cell is measured to characterize and differentiate the various types of cell. Mainly this technique analyzed the current response when a single cell was trapped in a trapping system where an alternating current (AC) was applied across the trapping zone. A trapping system is a major contribution and significant part in \u00b5-EIS device. For this reason, development of a trapping system is very crucial and varieties of the trapping system have been developed, such as hydrodynamic traps, negative pressure traps and DEP traps.et al. [First development of micro electrical impedance spectroscopy (\u00b5-EIS) was reported in 2006 . They deet al. developeet al. [et al. performed impedance measurement of HeLa cell based on two geometry structures of micropillars trapping system, namely, parallel and elliptical geometry [et al. demonstrated a hydrodynamic trapping device which has a differential electrode arrangement that measures multiple signals from multiple trapping sites. Measurements was performed by recording the current from two electrode pairs, one empty (reference) and one containing HeLa cells [Furthermore, the concept of vertical trapping system in \u00b5-EIS has been used to monitor the dynamic change of single cell electrical properties over a period of time ,99. Hydret al. . Figure geometry . Malleo La cells . The devet al. to capture a single HeLa cell, then impedance measurement was performed [Recently, the concept of dielectrophoresis (DEP) for trapping system was reported . The nonerformed . Figure erformed . Despiteerformed and timeerformed . Table 3\u22121 and is capable of testing a large number of cells for obtain statistically meaningful data [The rapid development of single cell analysis tools can be seen based on the hundreds of review and technical papers currently published every year . Clearlyful data . Microflet al. utilized magnetically activated cell sorting (MACS) for obtaining the subpopulations from human peripheral blood (B-lymphocytes and monocytes), thus performing the single cell electrical properties measurement by electrorotation techniques [Electrical measurements can also be incorporated with a cell sorting unit to collect cells having different physical properties for further biochemical assaying. AC dielectrophoretic (DEP) for sorting live cells from interfering particles of similar sizes by their polarizabilities under continuous flow was reported . DEP forchniques .Microfluidic devices have demonstrated great potential in realizing electrical measurements on single cells at a higher testing speed and label free approach. 
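To make the frequency dependence exploited by impedance cytometry and µ-EIS concrete, the following sketch evaluates a simple, generic equivalent-circuit model of a trapped cell between two electrodes: electrode double-layer capacitances in series with the medium resistance, which is shunted by a membrane-capacitance/cytoplasm-resistance branch. The topology and all element values are illustrative assumptions, not the circuit or parameters of any device cited above.

```python
import numpy as np

# Generic equivalent-circuit sketch for single-cell impedance spectroscopy.
# All element values are illustrative assumptions, not fitted data.
C_DL  = 1e-9    # electrode double-layer capacitance [F] (one electrode)
R_MED = 100e3   # suspending-medium resistance [ohm]
C_MEM = 1e-12   # cell-membrane capacitance [F]
R_CYT = 50e3    # cytoplasm resistance [ohm]

def impedance(freq_hz: np.ndarray) -> np.ndarray:
    """Complex impedance of the series/parallel network versus frequency."""
    w = 2 * np.pi * freq_hz
    z_dl   = 1.0 / (1j * w * C_DL)               # one double layer, counted twice below
    z_cell = 1.0 / (1j * w * C_MEM) + R_CYT      # membrane in series with cytoplasm
    z_mid  = 1.0 / (1.0 / R_MED + 1.0 / z_cell)  # medium shunted by the cell branch
    return 2 * z_dl + z_mid

if __name__ == "__main__":
    freqs = np.logspace(3, 8, 6)                 # 1 kHz to 100 MHz
    for f, z in zip(freqs, impedance(freqs)):
        print(f"{f:10.0f} Hz  |Z| = {abs(z):12.1f} ohm  "
              f"phase = {np.degrees(np.angle(z)):6.1f} deg")
```

In such a model the double layer dominates at low frequencies, current mostly bypasses the cell through the medium at intermediate frequencies, and the membrane capacitance is effectively short-circuited at high frequencies so the cytoplasm contributes; this is why multi-frequency measurements can separate membrane and cytoplasmic properties of a single cell.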
Electrical measurements on single cells can be used to indicate possible diseases and it suitable for disease prescreening application. From prescreening processes, future examinations can be done to evaluate the disease condition. in situ observation of the cell response [The presented review of selected research works on single cell electrical properties provides information on technological development in single cell electrical characterization from traditional approaches to current microfluidic approaches. Microfluidics technology opens a new paradigm in cellular and microbiology research for early disease detection and provides critical information needed by research scientists and clinicians for improved clinical diagnosis and patient outcome. The recent excellent achievements in microfabrication techniques have enabled the rapid development of microfluidic technologies for further practical applications for the benefit of mankind. Furthermore, microfluidic technological progress has provided additional advantages such as reduced complexity of experiment handling, lower voltage on the electrodes, faster heat dissipation, small volume of reagents used, and response ."} {"text": "We are in the era of precision medicine, but I am not sure whether we are at the beginning, in the middle, or at the end. On the one hand, we have only a few targets in certain cancer types for which we have precise drugs, like the tyrosine kinase inhibition in chronic myeloid leukemia; on the other hand, it seems that targeting most cancers requires action towards several different pathways. This review indicates that our knowledge accumulates fast, but that simple solutions might not be expected anymore. Nevertheless, I do think that we are on the right path, but it will be a long one.high and BCRlow) subsets of patients: higher levels of BTK/SYK/BLNK/CARD11/PLCG and lower expression of MALT1/BCL10 genes was considered indicative of low BCR activation, amplified expression of TLR6/TLR7/TLR9 was regarded as BCRhigh; the latter group had also enhanced expression of genes associated with NF-\u03baB pathway and a poorer clinical outcome. Also in follicular lymphoma (FL) BCR signaling is critical. In fact, there is aberrant glycosylation of the immunoglobulin receptor in FL as a result of which DC-SIGN can activate the lymphoma cells. Linley et al. [A fine example of complexity but nevertheless potential new treatment options is given by the work of Thijssen et al. . It is wy et al. show thaTrying to understand the process of transformation of FL also does not result in an easy answer. Extensive work on many cases of transformed FL by Kridel et al. resultedUsing large data sets in a clever way can provide insight in the complexity of lymphoma-immune response interaction and Care et al. provide n\u00a0>\u00a0150)) identified somatic mutations in PCBP1 in 3/17 (18\u00a0%) BL . They confirmed the recurrence of PCBP1 mutations by Sanger sequencing in an independent validation cohort, finding mutations in 3/28 (11\u00a0%) BL (still not a lot of cases\u2026.) and in 6/16 (38\u00a0%) BL cell lines. In silico evaluation of the mutations indicates that these alter the function of the gene, which includes nuclear trafficking and pre-mRNA splicing. How this affects BL pathogenesis remains elusive. A subset of the collaborative group Doose et al. ([Burkitt lymphoma (BL) remains an intriguing lymphoma, for which we know already a lot of the molecular background and it is already well curable in children but not in adults. 
The genetic hallmark of BL is the translocation t, or one of its light chain variants, resulting in IG-MYC juxtaposition, but other genetic alterations and Epstein-Barr virus (EBV) play a role as well. Piccaluga et al. compared in 3/28 \u00a0% BL (stFH) cells. Microarray analyses revealed high levels of transcripts encoding IL-21 associated with high levels of serum IL-21. Then, they developed IL-21 receptor (IL21R)-deficient SJL mice, which appeared to have reduced numbers of TFH cells, lower serum levels of IL-21, and few germinal center B cells, and did not develop B cell tumors. Surprisingly, they also noted features similar to human angioimmunoblastic T cell lymphoma (AITL), which is a malignancy of TFH cells. Subsequently, they performed gene expression analyses of human AITL samples and showed that all cases expressed elevated levels of transcripts for IL21, IL21R, and a series of genes associated with TFH cell development and function. So, unexpectedly, they have developed a mouse model with features of AITL based on which they suggest that patients with this disease might benefit from therapeutic interventions that interrupt IL-21 signaling.Hypothesis-driven research sometimes leads to unexpected findings. Jain et al. set off Anaplastic lymphoma kinase (ALK)-positive anaplastic large cell lymphoma (ALCL) is unusual among the lymphoma types, since it has a clear driver mechanism that it shares with other malignancies, especially ALK-positive lung cancer. Therefore, targeted treatments are studied across cell of origin concepts, but there appear to be still cell type-dependent resistance mechanisms. Crizotinib is the best known ALK tyrosine kinase inhibitor and is used to treat ALK-associated cancers. Mitou et al. showed tn\u00a0=\u00a021,372) and reviewed pathology reports from a regional subset of 2007 through 2011. cHL rates were stable until 2007 and then decreased. Nodular sclerosis rates declined after 2007 by 6\u00a0% annually, with variation by gender, age, and race/ethnicity. In 1992 through 2011, mixed cellularity rates declined, whereas not otherwise specified (NOS) rates rose. Eighty-eight of 165 reviewed NOS pathology reports addressed classification choice. Twenty (12\u00a0%) justified the classification, 21 (13\u00a0%) described insufficient biopsy material, and specific subtype information was missing for 27 (16\u00a0%). They conclude that recent nodular sclerosis rate declines largely represent true incidence changes but that rate decreases for mixed cellularity and other less common subtypes, and increases for NOS (comprising \u223c30\u00a0% of cHL cases in 2011!!), likely reflect changes in diagnostic and/or classification practice.Cancer incidence changes, due to different reasons, one of them being changes in diagnostic criteria. To understand whether Hodgkin lymphoma (HL) really is changing in appearance or that diagnostic criteria give that impression, Glaser et al. analyzedOne of the most serious consequences of organ transplantation is the occurrence of post-transplant lymphoproliferative disease (PTLD), especially problematic in heart and lung transplantation. Kuramarasinge et al. describeLymphomas are classified based on morphology, phenotype, genotype, and clinical features, and mediastinal (m)DLBCL is one of the entities in which localisation is key for its recognition or the leg-type DLBCL that can present anywhere in the skin). 
It therefore does make sense to investigate whether mDLBCL can present at other sites, where it should then be diagnosed as mDLBCL type. Yuan et al. used theLu et al. as well Pathologists like to see what happens in a tumor rather that relying on something that comes out of a sequencing machine or is seen in a dark room with fluorescence and actually that makes some sense: the results of genetic changes are what counts for a cell. Agarwal et al. studied As discussed above, extranodal NK/T cell lymphoma, nasal type, may present outside of the nasal region. Fang et al. describeRecognizing the leg-type DLBCL of the skin requires a set of antibodies but remains sometimes problematic. Robson et al. investigTumors like to confuse pathologists, especially those who rely very heavily on immunohistochemistry to classify lymphomas. Experienced pathologists of course know that lymphomas do not read our books and cancer cells do have aberrations, including aberrant phenotypes. It is therefore of no surprise that CD10-positive MCL exists and Akhter et al. wonderedAnother example of an aberrant feature in MCL is plasma cell differentiation. Ribera-Cortada et al. investigMagro et al. did a laDeng et al. go for aThe diagnosis of BL can be challenging in adults, especially in case there is expression of BCL2. A few authors from the group that studied BL for gene and MYC expression in a relatively small series of cases (see above) present the expression of BCL2 in 150 cases of conventionally diagnosed BL using two different BCL2 antibodies . BCL2 exFH differentiation and enrichment of an interleukin 12-induced gene signature. Tissue samples with IDH2 mutations displayed a prominent increase in H3K27me3 and DNA hypermethylation of gene promoters. However, data regarding clinical features were not presented, so it is not clear whether this is just a phenomenon of variation within an entity or a real subgroup.Classification of T cell lymphomas is difficult, but the AILD-type has relatively well defined criteria. Wang et al. complicaWith the increasing availability of whole genome sequencing increasingly also rarer tumor types are completely sequenced. Ungewickell et al. report rDiagnosing lymphomas after treatment can be quite difficult, but sometimes the clinical situation is such that a biopsy can only be taken after some treatment. This may especially be the case in primary central nervous system lymphomas (PCNSL). Onder et al. analyzedIn my previous review of the literature , I was pCencini et al. investigSuppressor of cytokine signaling 1 (SOCS1) mutations are among the most frequent somatic mutations in HL, yet their prognostic relevance is unexplored; for Lennerz et al. sufficiePastore et al. performeThe work of Novak et al. brings nAlso, exome sequencing is a powerful tool, and Jiang et al. hereby iOkamata et al. investign\u00a0=\u00a073). Cases with high FOXP1 and low HIP1R expression frequency exhibited poorer survival. They conclude that HIP1R expression is strongly indicative of survival when utilized on its own or in combination with FOXP1, and the molecule is potentially applicable for subtyping of DLBCL cases. This latter conclusion goes a bit far, since the expression of these markers is not a better classifier that the previously published classifiers.Huntingtin-interacting protein 1-related (HIP1R) is an endocytic protein involved in receptor trafficking, including regulating cell surface expression of receptor tyrosine kinases. Wong et al. had showWang et al. 
evaluateSeveral techniques can detect very low levels of tumor cells . Drandi et al. used DroHurabielle et al. investigDeep learning, an approach combing Big Data and novel bioinformatics tools (3 buzz words in only 10 words!), is increasingly popular: it uses large amounts of data to create new knowledge without the bias of a hypothesis (at least that is the theory). Deeb (nomen est omen) et al. present Insuasti-Beltran et al. developeRyan et al. used a nhttp://www.bioinformatics.leeds.ac.uk/labpages/softwares/ or on github https://github.com/Sharlene/BDC.Sha et al. use a biAll and this selection from the vast literature hopefully provides you with some new insights and some helpful new tools but probably also with the realization that the world of lymphomas is not loosing its complexity."} {"text": "The rapid accumulation of whole-genome data has renewed interest in the study of using gene-order data for phylogenetic analyses and ancestral reconstruction. Current software and web servers typically do not support duplication and loss events along with rearrangements.MLGO is a web tool for the reconstruction of phylogeny and/or ancestral genomes from gene-order data. MLGO is based on likelihood computation and shows advantages over existing methods in terms of accuracy, scalability and flexibility.To the best of our knowledge, it is the first web tool for analysis of large-scale genomic changes including not only rearrangements but also gene insertions, deletions and duplications. The web tool is available from http://www.geneorder.org/server.php. Comparative genomics, evolutionary biology, and cancer research all require tools to elucidate the history and consequences of the large-scale genomic changes, such as rearrangements, duplications, losses. However, using gene-order data has proved far more challenging than using sequence data and numerous problems plague existing methods: oversimplified models, poor accuracy, poor scaling, lack of robustness, lack of statistical assessment, etc.As whole genomes are sequenced at increasing rates, using gene-order datainversion operation reverses both the order and orientation of a segment of a chromosome. A transposition is an operation that swaps two adjacent segments of a chromosome. In case of multiple chromosomes, a translocation breaks a chromosome and reattaches a part to another chromosome, while a fusion joins two chromosomes and a fission splits one chromosome into two. Yancopoulos et al. [double-cut-and-join (DCJ) operation that accounts for all rearrangements used to date. None of these operations alter the gene content of genomes, whereas deletions (or losses) delete segments of (one or more) contiguous genes from a chromosome, while insertions introduce a segment of (one or more) contiguous genes from external sources into a chromosome. and duplications copies an existing segment within the genome and inserts into a chromosome. Finally, whole genome duplication (WGD) creates an additional copy of the entire genome of a species.Genome rearrangement operations change the ordering of genes on chromosomes. An s et al. proposedet al. [et al. [et al. [et al. [et al. [As phylogenies play a central role in biological research, over the past decade many methods were developed to reconstruct phylogenies from gene-order data. The first algorithm for phylogeny inference from gene-order data was BPAnalysis based on breakpoint distances . Moret eet al. later exet al. develope [et al. have dem [et al. and Fast [et al. with an [et al. . 
Likelihood-based approaches were subsequently proposed and developed further, notably with MLWD. If the tree is fixed, then computing its parsimony score is known as the Small Parsimony Problem (SPP). Ancestral reconstruction has been studied through several optimization schemes for SPP on gene-order data, for example using adjacencies. Relatively few of these tools are offered through web servers; Lin et al. had developed one such server. We present a new tool MLGO for the reconstruction of phylogeny and/or ancestral genomes from gene-order data. MLGO relies on two methods we have developed: MLWD for phylogeny reconstruction and, for ancestral reconstruction, the approach of Hu et al. MLGO preprocesses the gene-order data, configures the transition model, reconstructs a phylogeny, and finally solves the SPP on that phylogeny. Given a set of n genes labeled as {1,2,⋯,n}, gene-order data for a genome consists of lists of genes in the order in which they are placed along one or more chromosomes. Each gene is assigned an orientation that is either positive, written i, or negative, written −i. Two genes i and j form an adjacency if i is immediately followed by j, or, equivalently, −j is immediately followed by −i. If gene k lies at one end of a linear chromosome, we let k be adjacent to an extremity o that marks the beginning or ending of the chromosome; such a pair is called a telomere. The data preprocessing and the configuration of the transition model follow the approach of MLWD. Because a genome with n genes and O(1) chromosomes has only n+O(1) adjacencies and telomeres, the transition probability from 0 to 1 is set to be n times less likely than that from 1 to 0. Despite the restrictive assumption that all DCJ operations are equally likely, this result is in line with the observed bias in transitions of adjacencies given by Sankoff and Blanchette. A distinct advantage of using sequence encoding is the ability to use the bootstrap method to assess the robustness of the inferred phylogeny. Doing so with raw gene-order data is not possible, because a chromosome with n distinct genes presents a single character (the ordering) with 2^n × n! possible states (the first term is for the strandedness of each gene and the second for the possible permutations in the ordering). This single character is equivalent to an alignment with a single column, albeit one where each character can take any of a huge number of states—we cannot meaningfully resample a single character. The binary encoding effectively maps this single character into a high-dimensional binary vector, so that the standard phylogenetic bootstrap can be used. Using the phylogeny thus computed, we then proceed to solve the SPP, now following the approach of Hu et al. MLGO is written in C++ and Perl as a web tool. We used the genomes of 12 fully sequenced drosophila species to demonstrate the performance of MLGO; the reconstructed phylogeny is shown in the accompanying figure, including the placement of D. simulans, D. sechellia and the rest. The total running time for reconstructing the phylogeny of 12 drosophila species is less than 1 minute, while ancestral reconstruction adds less than 30 minutes. We also tested the performance of MLGO on 15 Metazoan genomes from the eGOB (Eukaryotic Gene Order Browser) database.
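As an illustration of the adjacency encoding described above, the short sketch below converts a signed gene order into its set of adjacencies and telomeres; each such element corresponds to one binary (present/absent) character in the encoding on which the likelihood model operates. This is a simplified reading of the encoding for illustration, not MLGO's actual internal implementation.

```python
# Sketch of the adjacency/telomere encoding of a signed gene order.
# A chromosome is a list of signed integers; e.g. [1, -3, 2] means
# gene 1 (forward), gene 3 (reversed), gene 2 (forward) on one linear chromosome.
# Simplified for illustration; not MLGO's internal representation.

def encode_linear_chromosome(chromosome):
    """Return the set of adjacencies and telomeres of one linear chromosome."""
    features = set()
    # Telomeres: the signed genes that follow/precede the extremity 'o' at each end.
    features.add(("telomere", chromosome[0]))
    features.add(("telomere", -chromosome[-1]))
    # Adjacencies: consecutive signed genes; (i, j) and (-j, -i) are the same adjacency,
    # so a canonical representative is chosen deterministically.
    for i, j in zip(chromosome, chromosome[1:]):
        features.add(("adjacency", min((i, j), (-j, -i))))
    return features

if __name__ == "__main__":
    genome_a = [[1, -3, 2, 4]]   # one linear chromosome
    genome_b = [[1, 2, -3, 4]]
    feats_a = set().union(*(encode_linear_chromosome(c) for c in genome_a))
    feats_b = set().union(*(encode_linear_chromosome(c) for c in genome_b))
    universe = sorted(feats_a | feats_b)   # the binary "alignment" columns
    for name, feats in (("A", feats_a), ("B", feats_b)):
        print(name, [int(f in feats) for f in universe])
```

The resulting 0/1 vectors are what make column resampling, and hence the standard phylogenetic bootstrap, meaningful for gene-order data.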
, all majdatabase , and thedatabase .Figure 2As whole genomes are sequenced at increasing rates, using gene-order data for phylogenetic analyses and ancestral reconstruction is attracting increasing interest, especially coupled with the recent advances in identifying conserved synteny blocks among multiple species \u201334.MLGO is the first web tool for likelihood-based inference of both the phylogeny and ancestral genomes. It provides fast and scalable analyses with bootstrap support of large-scale genomic changes including not only rearrangements but also gene insertions, deletions and duplications.Project name: MLGOProject home page:http://www.geneorder.org/server.phpOperating system(s): Platform independentProgramming language: PerlOther requirements: NoneLicense: GNURestrictions for use by non-academics: NoneThe web tool is available from http://www.geneorder.org/server.php.a We use the term \u201cgene\u201d as this is in fact a common form of syntenic blocks, but other kinds of markers could be used."} {"text": "The development of diabetes mellitus is the interplay between insulin secretion and insulin resistance, while insufficient compensatory beta cell function plays a major role during the natural progression of the disease. In the past decades, a lot of work had been carried out to investigate the initiation and regulation of beta cell dysfunction, yet the underlying mechanism is largely unknown. In this issue, 11 interesting papers are compiled to discuss from experimental and clinical aspects the mechanism of beta cell dysfunction in the development of diabetes. in vitro and investigated its effect on beta cell apoptosis. In another paper, the attenuating effect of metformin against high glucose-induced suppression of cell proliferation and osteogenic-related gene expression in osteoblast was investigated by X. Shao et al.MicroRNAs (miRNAs) are small noncoding 18\u201325 nucleotides that bind to the complementary 3\u2032UTR regions of target mRNAs and function in transcriptional and posttranscriptional regulation of gene expression. In recent years, miRNAs were reported to regulate several metabolic pathways including insulin secretion, cholesterol biosynthesis, carbohydrate, and lipid metabolism. In this issue, 3 papers touched upon the regulatory effect of some miRNAs on beta cell function from different points of view. Dr. X. Chang et al. reported that the mean level of miR-375 methylation was significantly lower in T2DM patients from Kazak population than Han population, which might partly explain from genetic background that even though Kazak population clusters more risk factors for T2DM, prevalence rate of T2DM is 6 times less than that of the Han population in the same region. Q. Zhang et al. showed that 8-week treatment of Tianmai Xiaoke Tablet, a chromium picolinate hypoglycemic agent, in diabetic rat significantly upregulated the expression of multiple miRNAs such as miR-375 and miR-30d, which might be part of its effect on improving glucose control. X. Lin et al. investigated the direct suppression of Bcl-2 by miR-34a, which might account for palmitate-induced apoptosis in MIN6 cell, the latter of which is believed to be the most important mechanism of beta cell dysfunction. Another 2 manuscripts also discussed beta cell apoptosis. Dr. L. Zhou et al. examined the 3 signaling pathways of MAPKs in INS-1 cells treated with glucolipotoxicity conditions and concluded that P38 might be involved in the regulation of beta cell apoptosis through phosphorylation of IRS-2. Z. 
Zhang et al. simulated an intermittent high glucose situation in vitro and investigated its effect on beta cell apoptosis. Three papers in this issue present the unique clinical picture of diabetes in the Chinese population. W. Tang et al. investigated the relationship between serum uric acid and residual beta cell function in 1021 T2DM patients. They concluded that patients with higher serum uric acid had greater insulin secretion at the early stage, but their residual beta cell function decayed more rapidly. Y. Ma et al. compared newly diagnosed T2DM patients with or without hyperlipidemia and found that the former were younger and had worse beta cell function. H. Lu et al. reported that ketosis-onset type 2 diabetic patients had better beta cell function and were more insulin resistant. Two manuscripts brought new exploration in the treatment of type 1 diabetes. Dr. W. Li et al. revealed that, apart from a higher density of insulin-producing beta cells, small islet transplants expressed less angiotensin and more angiotrophic VEGF-A, which might be beneficial for the facilitation of microcirculation and revascularization in small islets. H. Luo et al. presented a novel protocol that reprogrammed primary hepatocytes into functional insulin-producing cells using multicistronic vectors carrying Pdx1, Ngn3, and MafA. These cells activated the expression of multiple beta cell genes, synthesized and stored considerable amounts of insulin, and released the hormone in a glucose-regulated manner. We hope to bring about extensive concern and energetic discussion about beta cell function from experimental as well as clinical aspects. We wish that our readers enjoy this special issue. Yanbing Li, Li Chen, Chen Wang, Dongqi Tang"} {"text": "Transcutaneous electric nerve stimulation (TENS) is a non-pharmacological method which is widely used by medical and paramedical professionals for the management of acute and chronic pain in a variety of conditions. Similarly, it can be utilized for the management of pain during various dental procedures as well as pain due to various conditions affecting the maxillofacial region. This review aims to provide an insight into the clinical research evidence available for the analgesic and non analgesic uses of TENS in pediatric as well as adult patients related to the field of dentistry. Also, an attempt is made to briefly discuss the history of therapeutic electricity, the mechanism of action of TENS, the components of TENS equipment, its types, techniques of administration, advantages and contraindications. With this we hope to raise awareness among the dental fraternity regarding its dental applications, thereby increasing its use in dentistry. Key words: dentistry, pain, TENS. Pain has been a constant tormentor of mankind since time immemorial. Techniques used to control pain are broadly divided into pharmacological and non-pharmacological methods. The most common pharmacological means to curb pain in dentistry are the use of local anesthesia during dental procedures and analgesics for postoperative pain. Use of local anesthesia instills fear in many patients as it requires the use of the 'horrifying' syringe. A non-pharmacological method for pain control is the use of transcutaneous electrical nerve stimulation [TENS], a modality recognized by the FDA. The earliest recorded use of electricity for pain relief involved the application of a type of electric fish. In the modern era, John Wesley introduced electrotherapy in the 18th century for the relief of pain from sciatica, headache, kidney stone, gout, and angina pectoris.
Use of electricity for relief of dental pain was first described in 19th century by a physician named Francis. In 20th century, various dental handpieces that provided an electrical current to the tooth via the bur were used to relieve pain during cavity preparation. After a lot of research, TENS or electronic dental anesthesia as it is called in dentistry has established itself as an anesthetic agent .Analgesic effect of TENS is based on two main theories- Gate control theory of pain and endogenous opiod theory.Gate control theory of pain proposed by Melzack and Wall in 1965,In 1969, Reynolds showed tClinically, TENS is applied at varying frequencies, intensities, and pulse durations of stimulation. Depending upon frequency of stimulation, TENS is broadly classified into 2 categories: [1] High frequency TENS [>50Hz]. [2] Low frequency TENS [<10 Hz] -11. High- Tens EquipmentMain parts of TENS system are: [1] TENS unit. [2] Lead wires. [3] Electrodes- TENS unitIt is an electric pulse generator. It has two variations: 1] \u201cClinical\u201d model- This is used by dentist in the clinic and is connected to the buildings electrical outlet to generate power . [2] \u201cPa \u201cClinica- Lead WireThese connect electrodes to TENS unit to establish electrical connection Fig. .- ElectrodesBy means of electrodes, electric flow from TENS unit is converted into an ionic current flow in the living tissue. Electrodes can be placed extraorally or intraorally. Extraoral electrodes are of two types: [1] Carbon- impregnated silicone rubber electrodes- They are flexible and coupled to the skin surface through the use of electrically conductive gel. They are retained in place with surgical tape. [2] Tin plate or aluminum electrodes- These don\u2019t conform to the body and are coupled to the skin surface with tap water retained within cotton pad or sponge.The intraoral electrodes are cotton roll electrodes, clamp electrodes and adhesive electrodes. Adhesive electrodes are the most widely used type nowadays. These electrodes are thin and flexible so can adapt easily to the oral mucosa Fig. 1)1).Three main types of TENS are described in the literature \u2013 1. Conventional TENS 2.Acupuncture-like TENS [AL-TENS] and 3. Intense TENS. Different TENS techniques are used to selectively activate different afferent nerve fibers .1. Conventional TENS`It is the most commonly used method for delivering currents in clinical practice. It uses high frequency [between 10-200 pulses per second [pps]], low intensity [amplitude] pulsed currents to activate the large diameter A\u03b2 fibers without concurrently activating small diameter A\u03b2 and C [pain-related] fibres or muscle efferents . It prod2. Acupuncture-like TENS [AL-TENS]It uses low frequency , high intensity pulsed currents to activate the smaller diameter A\u03b4 fibers arising from muscles [ergoreceptors] by the induction of phasic muscle twitches. It produces extrasegmental analgesia which has a delayed onset [> 30 min after switch-on] and offset [>1 h after switch-off]. AL-TENS can be used for about 30 minutes at a time as fatigue may develop with ongoing muscle contractions.3. Intense TENSIt uses high frequency [upto 200 pps], high intensity pulsed currents which are just bearable to the patient. It activates small diameter A\u03b4 cutaneous afferents and produces extrasegmental analgesia which has a rapid onset [< 30 min after switch-on] and delayed offset [>1 h after switch-off]. 
Intense TENS can be used for about 15 minutes at a time as the stimulation may be uncomfortable.1. It is non-invasive, safe and can 2. As compared to local anesthesia there is no postoperative anesthesia after the TENS unit is turned off .3. Patients are able to self-administer TENS treatment and learn to titrate dosages accordingly to manage their painful condition. This results in positive acceptance by the patients .1. Apprehensive patients- usage of TENS requires patient co-operation, hence the procedure shouldn\u2019t be at-tempted in patients with a communication handicap or a mental disability.2. Patients with cardiac pacemakers-,13. If t3. Patients with cerebrovascular problems- patients with a history of aneurysm, stroke and transient ischaemia shouldn\u2019t be treated using TENS, as it stimulates peripheral blood flow which can be fatal in such cases .4. Epileptic patients- TENS \u201cpulses\u201d have the potential to trigger a seizure .5. Pregnant patients- As such there are no specific side effects. However, since there has been no FDA approval, the usage is frowned upon .6. Acute pain cases/pain of unknown etiology- usage of TENS in undiagnosed cases may hinder in the diagnosis (Apart from its analgesic effect, TENS can also be used to produce non-analgesic physiological effects and has been found to be beneficial in the management of xerostomia. Various applications of TENS in dentistry are summarized below1. Dental treatment in pediatric patientsA commonly observed negative behavior in pediatric patients is fear towards syringes. Use of TENS has positive effects on the behavior of pediatric patient which in turn decreases the anxiety levels as it removes the \u201cfear of needle\u201d. Studies have shown that 53 -78% children prefer TENS over local anesthesia -16. In pet al. (Abdulhameed et al. in 1989 et al. (et al. (teDuits et al. in 1993 (et al. in 1997 Harvey and Elliott in 1995 Baghdadi in 1999 et al. (Munshi et al. in 2000 et al. (P>0.05]. They concluded that TENS can be a useful adjunct in pediatric patients during various minor dental procedures.Dhindsa et al. in 2011 2. Dental treatment in adult patientsIn adults TENS has been used successfully as an excellent analgesia during various procedures like rubber dam placement, cavity preparation, pulp capping and other endodontic procedures, prosthetic tooth preparations, oral prophylaxis as well as extractions. It is also used to reduce the discomfort from injection of local anesthesia and to alleviate periodontal pain associated with orthodontic separation.Roth and Thrash in 1986 et al. (Malamed et al. in 1989 William Stenberg in 1994 Yap and Ho in 1996 Quanstrom and Libed in 1994 et al. (Meechan et al. in 1998 According to Hochman , TENS isTENS has also been used in combination with nitrous oxide-oxygen or diazepam to achieve analgesia during dental treatment. Quanstrom and Milgrom in 1989 3. In chronic pain of maxillofacial regionTENS has been used successfully to alleviate chronic pain of TMJ syndrome, trigeminal neuralgia, and post herpetic neuralgia.- In TMJ syndromeKatch reported- In trigeminal neuralgiaet al. (Singla et al. conducteet al. (Yameen et al. used TENThorsen and Lumsden reported- In post-herpetic neuralgiaIn post-herpetic neuralgia most of the larger myelinated afferent nerve fibers are destroyed and therefore, normal presynaptic inhibition of inputs of C fibers does not occur . This isNathan and Wall in 1974 et al. (Mittal et al. in1998 u4. In acute orofacial painHansson and Ekblom studied 5. 
In patients with xerostomiaet al. (Application of TENS increases the salivary flow rate in healthy individuals as well as in xerostomic patients. Hargitai et al. in 2005 et al. (Pattipati et al. in 2013 et al. (Weiss et al. in 1986 et al. (Steller et al. in 1988 et al. (Talal et al. in 1992 et al. (Wong et al. in 2003 et al. (Wong et al. in 2012 In conclusion, though TENS can\u2019t replace local anesthesia, it can be used for pain relief during various dental procedures. Its analgesic and non analgesic physiologic effect can be used in the management of a variety of conditions affecting maxillofacial region."} {"text": "Salinity is a stressful environmental factor that limits the productivity of crop plants, and roots form the major interface between plants and various abiotic stresses. Rice is a salt-sensitive crop and its polyploid shows advantages in terms of stress resistance. The objective of this study was to investigate the effects of genome duplication on rice root resistance to salt stress.+ content, H+ (proton) flux at root tips, and the microstructure and ultrastructure in rice roots were examined. We found that tetraploid rice showed less root growth inhibition, accumulated higher proline content and lower MDA content, and exhibited a higher frequency of normal epidermal cells than diploid rice. In addition, a protective gap appeared between the cortex and pericycle cells in tetraploid rice. Next, ultrastructural analysis showed that genome duplication improved membrane, organelle, and nuclei stability. Furthermore, Na+ in tetraploid rice roots significantly decreased while root tip H+ efflux in tetraploid rice significantly increased.Both diploid rice (HN2026-2x and Nipponbare-2x) and their corresponding tetraploid rice (HN2026-4x and Nipponbare-4x) were cultured in half-strength Murashige and Skoog medium with 150\u00a0mM NaCl for 3 and 5\u00a0days. Accumulations of proline, soluble sugar, malondialdehyde (MDA), Na+ entrance into the roots.Our results suggest that genome duplication improves root resistance to salt stress, and that enhanced proton transport to the root surface may play a role in reducing Na Plants Flowers ; Glenn a Flowers ; Blumwal Flowers ; Munns a Flowers ; Horie e Flowers ). During Flowers ; Shinoza Flowers ; Kronzuc Flowers ; Horie e Flowers ; Hauser Flowers ). Among h et al. ; Brini ah et al. ), and pla et al. ; Anil eta et al. ) and exta et al. ). Plantsnd Munns ; Kavi Kind Munns ; Yamada nd Munns ; Chyzhyknd Munns ; Jayaseknd Munns ); Xu andnd Munns ; ). Ion am et al. ; Gorham m et al. ); Glenn m et al. ; ). Osmotm et al. ).+ enters the shoots of plants remain unclear . Plantsd Britto ), but apd Britto ; Ochiai d Britto ). The apd Britto ; Anil etd Britto ; Gong etd Britto ). The mae et al. ), bypasse et al. ; Gong ete et al. ). In rice et al. ; Ranathue et al. ). Caspare et al. ; Krishnae et al. ; Zhou ete et al. ). Charace et al. ; Steudlee et al. ; Steudlee et al. ; Barrowce et al. ; Miyamote et al. ; Lee et e et al. ); Zimmere et al. ). Furthee et al. ).Triticum macha and the endemic tetraploid wheat Triticum timopheevii were among the most tolerant to salt stress . In tetnd James ). Other nd James ). In othnd James ; Mouhayand James ). In hexl et al. ). Rice il et al. ; Lutts el et al. ; Hasanuzl et al. ). Few stl et al. ; Cai et l et al. ; He et al et al. ; He et al et al. ). Some rl et al. ). 
PolyplThe length, fresh weight, dry weight, and number of roots of polyploid rice cultivars were investigated to characterize the effects of genome duplication under salt stress. Our results demonstrated that salt stress significantly restricted rice root growth, irrespective of being diploid or tetraploid rice, and genome duplication improved root resistance in tetraploid rice by contributing to faster and better root growth in the presence of 150\u00a0mM NaCl and reached 120.99\u00a0\u03bcg\u00a0g\u20131 in Nipponbare-4x. However, the increase in free proline in Nipponbare-4x compared with Nipponbare-2x was 23.30%. In addition, the increase in HN2026-4x was 19.55%.Free proline in roots of diploid and tetraploid rice subjected to 150\u00a0mM NaCl for 5\u00a0days was measured Figure\u00a0. The amoGenome duplication led to a similar increase in different rice cultivars in terms of soluble sugar accumulation under salt stress, and the difference was significant between tetraploid and diploid rice subjected to salt stress. However, no significant changes were found between the two different cultivars for tetraploid or diploid rice Figure\u00a0. The amo\u20131) and HN2026-4x (60.10\u00a0\u03bcmol\u00a0g\u20131) accumulated less MDA in their roots compared to the corresponding diploid cultivars under normal condition. Under salt stress, the amount of MDA in HN2026-4x conditioned with salt was lowest among all cultivars accumulated to similar levels in all rice cultivars tested, and no significant difference was detected between diploid and tetraploid rice cultivars without salt stress. However, the amount of MDA in the roots of various rice cultivars under salt stress was significantly greater than in the control Figure\u00a0. In contTo increase our understanding of the root response in polyploid rice, the anatomical structure of roots in Nipponbare-2x and -4x cultivars were observed on plants under salt stress for 3 and 5\u00a0days because Nipponbare-4x was thought to be more resistant to salt. Histological analysis indicated that genome duplication regulated the root response to salt stress. The root microstructure in diploid and tetraploid rice was similar without salt stress, and no evident morphological differences in the epidermis, cortex, vascular system, or aerenchyma were observed to facilitate oxygen transport. However, the diameter of the longest root in tetraploid rice was larger than that in the corresponding diploid Figure A. The rep Figure B. We obs+ in the soil . Salinid Tester ; Hasanuzd Tester ; Brini ad Tester ). As we d Tester ; Al-Khayd Tester ; Yamada d Tester ; Jayasekd Tester ; Munns ad Tester ; Krishnad Tester ; Szabadod Tester ).The ricd Tester ). Osmotid Tester ; Munns [d Tester ; Ranathud Tester ; Ranathud Tester ; Munns ad Tester ). On thed Tester ); Radic d Tester ; ). Our r+ accumulate in plants, particularly when transport from root to leaves is over the threshold damage ; Glenn Flowers ; Zhu [20 Flowers ). Roots s et al. ); L\u00e4uchls et al. ; ). Baseds et al. ). In moss (Munns ). Howeveo et al. ; Yadav eo et al. ; Garcia o et al. ; Gong eto et al. ), which l et al. ). Casparl et al. ; Schreibl et al. ; Schreibl et al. ). In thind Heydt ; Barrowcnd Heydt ; Miyamotnd Heydt ; Lee et nd Heydt ). Howeved Wendel ). We nexd Wendel ; Blanc ad Wendel ; Saleh ed Wendel ). Adaptai et al. ; Gersteii et al. ; Saleh ei et al. ); Dhar ei et al. ). Polypli et al. ). 
Howeve+-ATPase is inhibited and may contribute to a weaker acidification of the apoplast, and thus to growth inhibition ). Na+ sg et al. ; Brini ag et al. ). The ceb et al. ). At a 2d Boutry ; Palmgred Boutry ; Gaxiolad Boutry ; Cosgrovd Boutry ; Gaxiolad Boutry ). The hi+ \u201cexclusion\u201d are now known to reside on different chromosomes in various genomes of species in the Triticeae, further studies are required to identify the underlying mechanisms controlling genes for the various traits that could act additively or even synergistically, which may enable substantial gains in salt tolerance . The reh et al. ; Yildiz h et al. ; Mouhayah et al. ). Severah et al. ; Chinnush et al. ; Sahi eth et al. ). Polypl+ absorption, and Na+ content in Nipponbare-2x greatly increased compared to that in Nipponbare-4x under salt stress. The high H+ efflux in tetraploid rice led to low pH conditions and may have contributed to increased root growth, the length of the root, and the fresh and dry weights of the root, which were restricted in the diploid compared to the tetraploid under salt stress. Overall, our results suggest that genome duplication improved root resistance to salt stress and enhanced proton transport to the root surface, which may play a role in reducing Na+ entry into the roots.Rice is a very important and salt-sensitive crop. Previous reports have suggested that polyploid rice has some superiority in stress resistance. However, few studies have focused on the effect of genome duplication on rice root response under salt stress. The objective of this study was to investigate how genome duplication regulates the rice root response to salt stress. Our results demonstrated that salt stress significantly restricted rice root growth in both diploid and tetraploid rice, and that genome duplication improved the root growth in tetraploid rice, with faster and better root growth in the presence of 150\u00a0mM NaCl. Free proline accumulated in tetraploid rice cultivars under salt stress varied greatly, which increased compared to that in the diploid cultivars. Genome duplication significantly decreased the MDA content in tetraploid rice compared to diploid cultivars subjected to salt stress, which suggests that the membrane integrity improved in the tetraploid compared to the diploid. Investigation of the anatomical structure of roots under salt stress showed a high frequency of epidermis cells maintaining their normal structure, and a gap appeared between the cortex and pericycle cells in tetraploid rice roots. These protective mechanisms improved the root adaptability to salt stress. Ultrastructural analysis showed that genome duplication also improved the root response, including the epidermis cell protective mechanism formation, and membrane organelle and nuclei stability. Anatomical structure and ultrastructure of roots in Nipponbare-4x may play critical roles in counteracting Na\u20132\u00a0s-\u20131 using fluorescent lighting with a day and night cycle of 12\u00a0h each. Seedlings were then cultured in \u00bd MS medium . Seeds nd Skoog ) with 15We choosed the longest root ) of eveThe proline content in roots in the presence of 150\u00a0mM NaCl for 5 days was investigated. The free proline content was extracted and quantified using the acid ninhydrin method as described by Bates et al. . The co\u20131)\u2009=\u20096.45 (A532\u2013A600) \u2013 0.56 A450. 
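The absorbance relation quoted above corresponds to the thiobarbituric-acid-based MDA estimate described in the next paragraph (the Hodges et al. approach). A minimal Python sketch of that arithmetic, assuming the three absorbance readings are already blank-corrected and leaving the conversion to a per-gram-fresh-weight value (which depends on extract volume and sample mass, not specified here) to the reader; the example numbers are hypothetical:

```python
def mda_equivalents(a532, a600, a450):
    """MDA equivalents from the formula quoted in the text:
    6.45 * (A532 - A600) - 0.56 * A450.
    A600 corrects for non-specific turbidity and A450 for
    interfering compounds, per the Hodges et al. approach."""
    return 6.45 * (a532 - a600) - 0.56 * a450

# Hypothetical absorbance readings for one root extract
print(round(mda_equivalents(a532=0.412, a600=0.035, a450=0.210), 3))
```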
All samples were tested in three independent experiments with three replicates each.The content of MDA in roots subjected to 150\u00a0mM NaCl for 5 days was calculated. The method was in accordance with the results of Hodges et al. . BrieflThe soluble reducing sugar content in roots exposed to salt treatment for 5 days was measured as described previously . Briefl4 (Sigma) in PBS (pH\u00a07.2). The tissues were washed in PBS, dehydrated in a graded ethanol series, and embedded in EPON812 . Half-thin sections (100\u00a0nm) were examined at every stage, and observations and photographic recordings were performed with a BX51 microscope . Ultrathin sections (50\u201370\u00a0nm) were double-stained with 2% (w/v) uranyl acetate (Sigma) and 2.6% (w/v) lead citrate (Sigma) aqueous solution and examined with a transmission electron microscope at 100\u00a0kV.Rice roots exposed to salt treatment for 3 and 5 days were dissected and vacuum-infiltrated with 3% (w/v) paraformaldehyde and 0.25% glutaraldehyde (Sigma) in phosphate-buffered saline (PBS) for 30\u00a0min (pH\u00a07.2). The fixed roots were renewed with fresh solution and post-fixed in 1% OsO3 and the concentration of Na+ was determined using ICP-AES . All samples were tested in three independent experiments with three replicates each.After seedlings were cultured in \u00bd MS medium (Murashige and Skoog ) with 15+ fluxes were measured noninvasively using SIET . Rice plants were equilibrated in measuring solution for 20\u201330\u00a0min, and these equilibrated rice plants were transferred to the measuring chamber, which was a small plastic dish (3\u00a0cm diameter) containing 2\u20133\u00a0ml of fresh measuring solution. When the root became immobilized at the bottom of the dish, the microelectrode was vibrated in the measuring solution between two positions (5\u00a0\u03bcm and 35\u00a0\u03bcm from the root surface) along an axis perpendicular to the root. The background was recorded based by vibrating the electrode in measuring solution not containing roots. The microelectrode was made and silanized by Xuyue Science and Technology Co., Ltd. . All samples were tested in three independent experiments with three replicates each.HAll values are shown as the mean of five replicates, and the average was calculated. The results were analyzed for variance using the SAS/STAT statistical analysis package to determine significant differences. Means followed by common letters are not significantly different at P\u2009=\u20090.05 using a protected least-significant difference.The authors declare that they have no competing interests.Yi Tu and Aiming Jiang contribute equally to this paper, they cooperted to finish all experiments. Lu Gan, Md. Mokter Hossain and Jinming Zhang made contribution of making figure and table. Bo Peng, Yuguo Xiong and Zhaojian Song were responsible for materails planting , nursing and data analysis. Detian Cai and Jianhua Zhang gave this research important guidance and revised the manuscript. Yuchi He and Weifeng Xu cooperated to design the the whole research and write the manuscript, they are sharing the corresponding person for giving final approval of the version to be submitted. All authors read and approved the final manuscript."} {"text": "Day surgery, coming to and leaving the hospital on the same day as surgery as well as ambulatory surgery, leaving hospital within twenty-three hours is increasingly being adopted. There are several potential benefits associated with the avoidance of in-hospital care. 
Early discharge demands a rapid recovery and low incidence and intensity of surgery and anaesthesia related side-effects; such as pain, nausea and fatigue. Patients must be fit enough and symptom intensity so low that self-care is feasible in order to secure quality of care. Preventive multi-modal analgesia has become the gold standard. Administering paracetamol, NSIADs prior to start of surgery and decreasing the noxious influx by the use of local anaesthetics by peripheral block or infiltration in surgical field prior to incision and at wound closure in combination with intra-operative fast acting opioid analgesics, e.g., remifentanil, have become standard of care. Single preoperative 0.1 mg/kg dose dexamethasone has a combined action, anti-emetic and provides enhanced analgesia. Additional \u03b1-2-agonists and/or gabapentin or pregabalin may be used in addition to facilitate the pain management if patients are at risk for more pronounced pain. Paracetamol, NSAIDs and rescue oral opioid is the basic concept for self-care during the first 3\u20135 days after common day/ambulatory surgical procedures. Day or ambulatory surgery; coming and leaving the hospital the same day or within 23-hours after surgery is becoming increasingly adopted. There are several potential benefits associated with the avoidance of in-hospital care. The risk for hospital acquired infections and the potential negative effects of increased immobilization and subsequent risk for thrombo-embolic and anenergetic sequalae are also lowered. Enhanced recovery and shortened hospital stay has also been shown to reduce the risk for cognitive side effects.Early discharge demands a rapid recovery and low incidence and intensity of surgery and anaesthesia related side-effects; such as pain, nausea and fatigue. Patients must be fit enough and symptom intensity so low that self-care is a feasible in order to secure quality of care. Multi-modal or balanced analgesia has becoet al. } and 24 h [MD \u22120.48 ] after surgery. Dexamethasone-treated patients used less opioids at 2 h [MD \u22120.87 mg morphine equivalents (95% CI: \u22121.40 to \u22120.33)] and 24 h [MD \u22122.33 mg morphine equivalents ], required less rescue analgesia for intolerable pain [relative risk 0.80 ], had longer time to first dose of analgesic [MD 12.06 min ], and shorter stays in the post-anaesthesia care unit [MD \u22125.32 min (95% CI: \u221210.49 to \u22120.15)]. There was no dose-response with regard to the opioid-sparing effect. There was no increase in infection or delayed wound healing with dexamethasone, but blood glucose levels were higher at 24 h [MD 0.39 mmol L\u22121 ]. Higher dose, 0.2 mg may however increase the risk for postoperative cognitive dysfunction as suggested in a study by Fang et al. [Single intra venous 0.1 mg per kilogram preoperative dose of dexamethasone has been shown to have combined anti-emetic and enhance analgesia . De Olivet al. conclude [et al. conducteg et al. .et al. [et al. [\u22121 provided effective postoperative analgesia and reduced morphine requirement when administered intravenously or in wound infiltration with bupivacaine. However, the incidence of complications was less with wound infiltration. Patients receiving i.v. clonidine had more hypotension (p < 0.01) and sedation (p < 0.001) compared with other groups.Both clonidine and dexmedetomidine has been studied as adjutants to paediatric general anaesthesia in order to reduce and ameliorate emergence agitation. A meta-analysis by Dahmani et al. 
from 201 [et al. found inet al. [He et al. found inet al. [p < 0.001). Systolic blood pressure was significantly lower in group D compared with group P from the beginning of the operation. HR, RR, and SpO(2) were comparable between the two groups. There were 8 cases (25.8%) of hypertension in group P, and 1 case (3.2%) in group D (p < 0.05). In contrast, 1 case (3.2%) of hypotension and 1 case (3.2%) of bradycardia occurred in group D. Whether the favourable effects seen from the use of dexmedetomidine as a complement to anaesthesia for cardiac surgery can be translated into day surgery for the elderly needs much further studies. Ji et al. [et al. [Dexmedetomidine has been studies not only as an adjunct to nerve block but as an alternative to propofol for sedation. Na et al. studied i et al. recently [et al. found shet al. [There is a meta-analysis evaluating the addition of \u03b1-2-agonists on postopetive pain not explicitly following day surgery by Blaudszun et al. . This reTawfic made a ret al. [et al. [et al. [p < 0.001) in Pregabalin versus control. Pregabalin was also associated to significantly less (p < 0.001) patients with postoperative nausea, vomiting, sedation and dizziness versus control. Overall patient satisfaction with pain management was significantly higher (p < 0.001) in the pregabalin group. Sarakatsianou et al. [et al. [Peng et al. did not [et al. found 15 [et al. studied u et al. found li [et al. showed iet al. [et al. [There is rather sparse data around the use, benefits from gabapentin as part of the pain management after ambulatory surgery. Kazak et al. found be [et al. found a et al. [Duari et al. conducte4 50 mg/kg in 100 mL of normal saline over 15 min before anesthesia induction, followed by an infusion of 15 mg/kg/h) has been shown by De Oliveiera et al. [There are also other drugs, substances, that have been studied showing potential positive effects. Magnesium sulphate in the control group (p < 0.001). Similarly, the severity of pain was six-fold less in the treatment group than in the control group. The incidence of nausea in the PACU was significantly less in the treatment group; 4.7% vs. 29.5% in the control group (p < 0.05). Patients from the treatment group satisfied Postanaesthesia Discharge Score significantly earlier than those in the control group . They concluded that concomitant use of local anaesthetic, NSAID and opioid drugs proved to be highly effective in our patients, resulting in faster recovery and discharge. One of the goals is to reduce opioid related side effects and subsequently facilitate adequate pain relief but with a minimum of side effects. Zhao et al. [The concept of multi-modal analgesia was shown effective for cholecystectomi already in 1996 by the Chung group in Toronto . Patiento et al. from theo et al. one of tThere is still no strong evidence for a pre-emptive effect from preoperative start of analgesia administration in clinical practice. Providing pain relief in order to secure adequate concentrations, effect after surgery, preventing pain to become severe is sound. Thus starting administration of paracetamol and an oral NSAID as premed has subsequently become well-accepted practice in many day surgical programs . The aimet al. [Aluri et al. conducteet al. [p < 0.001. Postoperative rise in individual creatinine levels demonstrated a non-significant rise in the multimodal group, 33.0 \u00b1 53.4 vs. 19.9 \u00b1 48.5, p = 0.133. 
Patients in the multimodal group suffered less major in-hospital events in crude numbers: myocardial infarction (MI) , stroke , dialysis , and gastrointestinal (GI) bleeding . 30-day mortality was 1 vs. 2, p = 0.54. The authors conclude that in patients undergoing cardiac surgery, a multimodal regimen offered significantly better analgesia than a traditional opiate regimen. Nausea and vomiting complaints were significantly reduced. No safety issues were observed with the multimodal regimen.Multi modal analgesia has also been shown safe and effective for the management of pain following major surgery. Rafiq et al. studied et al. [et al. [Although the classical papers by Eriksson et al. and Michet al. suggest [et al. around p [et al. .In conclusion, a multi-modal procedure specific analgesic strategy facilitates the postoperative course following day surgery; improves quality of care, reduce experience of PONV and shorten time to discharge. The exact impact of multi-modal\u2014balanced analgesia on resumption of daily living and quality of life is not well-documented. Providing paracetamol in standard dose, adding an NSAID when feasible and not contra-indicated in lowest effective dose fort short period and furt"} {"text": "Biology has become the land of the \u201c-omics,\u201d including genomics , transcrThe Peer-reviewed papers are collected in the special issue. They are approximately divided into three areas: bioinformatics, functional genomics, and functional genetics. The majority of the papers are purely bioinformatics related papers. We define bioinformatics papers as those using computational tools or developing methods to analyze functional \u201c-omics\u201d data without using wet labs. Two papers fell into the category of functional genomics, which is focused on using whole genome level wet-lab technology to find important molecules and investigate their potential functions. Five papers are considered as functional genetics papers. Functional genetics is a broad concept here and these papers are concentrated on studying the molecular functions and mechanisms of individual molecules using wet-lab experimental approaches.Bioinformatics. In the bioinformatics papers, four papers deal with transcriptomics data. F. Wang et al. developed a novel approach for coexpression analysis of E2F1-3 and MYC target genes in chronic myelogenous leukemia (CML); they found a significant difference in the coexpression patterns of those candidate target genes between the normal and the CML groups. It is challenging to analyze the quantity of image data on gene expression. A. Shlemov et al. developed a method called 2D singular spectrum analysis (2D-SSA) for application to 2D and 3D datasets of embryo images related to gene expression; it turned out to work pretty well. J. Li et al. characterized putative cis-regulatory elements (CREs) associated with male meiocyte-expressed genes using in silico tools. They found that the upstream regions (1 kb) of the top 50 genes preferentially expressed in Arabidopsis meiocytes possessed conserved motifs, which were potential binding sites of transcription factors. NAGNAG alternative splicing plays an important role in biological processes and represents a highly adaptable system for posttranslational regulation of gene function. Interestingly, X. Sun et al. identified about 31 NAGNAG alternative splicing sites that were identified in human large intergenic noncoding RNAs (lincRNAs). N. meningitidis and N. 
gonorrhoeae, which belong to Neisseria, a genus of gram-negative bacteria. D. Yu et al. selected 18 Neisseria genomes, preformed a comparative genome analysis, and identified 635 genes with recombination signals and 10 genes that showed significant evidence of positive selection. Further functional analyses revealed that no functional bias was found in the recombined genes. The data help us to understand the adaptive evolution in Neisseria.Three papers are focused on the deification of new gene family members and gene evolution. Conotoxins are small disulfide-rich neurotoxic peptides, which can bind to ion channels with very high specificity and regulate their activities. H. Ding et al. developed a novel method called iCTX-Type, which is a sequence-based predictor that can be used to identify the types of conotoxins in targeting ion channels. A user-friendly web tool is also available. Y.-Z. Zhou et al. analyzed the evolution pattern and function diversity of PPAR gene family members based on 63 homology sequences of PPAR genes from 31 species. They found that gene duplication events, selection pressures on HOLI domain, and the variants on promoter and 3\u2032UTR are critical for PPARs evolution and acquiring diversity functions. There has recently been considerable focus on its two human pathogenic species Azospirillum amazonense could have been acquired from distantly related bacteria from horizontal transfer. They also demonstrated that the coding sequence related to production of phytohormones, such as flavin monooxygenase and aldehyde oxidase, is likely to represent the tryptophan-dependent TAM pathway for auxin production in this bacterium. They conclude that the genomic structure of the bacteria has evolved to meet the requirement for adaptation to the rhizosphere and interaction with host plants.One paper tried to solve the key algorithm issue called the all-pairs suffix-prefix matching problem, which is crucial for de novo genome assembly. M. H. Rachid et al. developed a space-economical solution to the problem using the generalized Sadakane compressed suffix tree. One paper conducted a comparative genomics analysis. R. Cecagno et al. found that the versatile gene repertoire in the genome of rhizosphere bacterium\u03b1 structure of porcine reproductive and respiratory syndrome virus PRRSV.One article conducted a meta-analysis. H. Ye et al. have demonstrated that rs2228671 is a protective factor of CHD in Europeans. One paper is concentrated on the microorganism bioinformatics. Y. Ding et al. recognized the roles of the synonymous codon usage in the formation of nsp1Functional Genomics. There are two papers that conducted gene association studies based on genome wide data. J. Li et al. found that the presence of ATT\u03b54haplotype was associated with an increased risk of mental retardation (MR) in children but did not find any significant association between single loci of the four common ApoE polymorphisms and MR or borderline MR. J. Zhou et al. did not find an association between rs7529229 and chronic heart disease (CHD) in Han Chinese. However, their meta-analyses indicated that rs7529229 was associated with the CHD risk in Europeans.Functional Genetics. There are 5 articles that investigate the individual gene function in different areas. Two papers are related to neural diseases. G.-M. Chang et al. found that activating NF-\u03baB signaling pathway can protect intestinal epithelial cell No. 6 against fission neutron irradiation. X.-S. Liu et al. 
demonstrated that hepatocyte growth factor (HGF) could promote the regeneration of damaged Parkinson's disease (PD) cells at higher efficacy than the supernatant from hUC-MSCs alone. Thus, the combination of hUC-MSC with HGF could potentially be a new biological treatment for PD. One paper is focused on cancer. N. Ji et al. found that celastrol had antiprostate cancer effects partially through the downregulation of the expression level of hERG channel in DU145 cells, suggesting that celastrol may be a potential agent against prostate cancer with a mechanism of blocking the hERG channel. One paper is studying heart disease. Z. Lu et al. reported that the levels of NT-proBNP and CCR were closely related to the occurrence of HF and were independent risk factors for heart failure (HF). Meanwhile, there was a significant negative correlation between the levels of NT-proBNP and CCR. One interesting paper is trying to understand the function of Japanese encephalitis virus (JEV), and they have demonstrated that RNA recombination in JEV occurs unequally in different cell types. They conclude that the adjustment of viral RNA to an appropriately lower level in mosquito cells prevents overgrowth of the virus and is beneficial for cells to survive the infection.In summary, this special issue presents a broad range of topics from functional genomics, genetics, and bioinformatics. It covers a variety of diseases such as cancer, heart, and neural and infectious diseases. The study organisms include human, mouse, plant, and microorganisms. We hope that the readers will find interesting knowledge and methods in the issue.Youping DengHongwei WangRyuji HamamotoDavid SchafferShiwei Duan"} {"text": "Dear Editor,et al.[ADIPOQ gene is more associated with the increasing risk of coronary artery disease (CAD) in subjects with type 2 diabetes. Adipocytes were described in myocardial tissue of CAD patients and their role recently discussed[Q gene of adiponectin has been reported for 3\u2019-UTR, which harbours some genetic loci associated with metabolic risks and atherosclerosis[etal.[et al.[ADIPOQ SNPs in diagnosing susceptibility to CAD and the relationship with plasma adiponectin level. In normal, non diabetic, normoglycemic subject, this relationship does not seem to work. Therefore the question is how much predictive this SNP haplotype may be to foresee metabolic syndrome and CAD onset risk in young health subjects? Maybe, the role of adiponectin in cardiovascular physiology depends on its ability to target adiponectin receptors and to negatively regulate obesity. Some authors reported in healthy volunteers an absence of correlation between circulating adiponectin levels and biochemical markers, particularly lipoproteins and suggested that SNP +276G>T was related to an independent effect on adiponectin levels and on lipoprotein metabolism[The recent article by Mohammadzadeh et al. on the ldiscussed. Susceptsclerosis. Actuallsclerosis. This evsclerosis. Moreovesclerosis. CAD is sclerosis. In the is[etal., T allelis[etal. or a diris[etal.. The papl.[et al. assessesetabolism. On the etabolism.et al.[ADIPOQ +276G>T should be related to susceptibility to glucose metabolism, while indirectly to lipid metabolism and fat-related cardiovascular damage.The interesting study by Mohammadzadeh et al. 
suggests"} {"text": "Methods for the detection of specific interactions between diverse proteins and various small-molecule ligands are of significant importance in understanding the mechanisms of many critical physiological processes of organisms. The techniques also represent a major avenue to drug screening, molecular diagnostics, and public safety monitoring. Terminal protection assay of small molecule-linked DNA is a demonstrated novel methodology which has exhibited great potential for the development of simple, sensitive, specific and high-throughput methods for the detection of small molecule\u2013protein interactions. Herein, we review the basic principle of terminal protection assay, the development of associated methods, and the signal amplification strategies adopted for performance improving in small molecule\u2013protein interaction assay. Speciet al. as well as researchers from other groups have developed a series of methods for sensitive and specific detection of the interactions between proteins and small molecules 3\u2212/4\u2212, accordingly resulting in an increased electrochemical signal. The method was demonstrated to have a detection range of 0.3\u201320 ng/mL for FR and can be used for the investigation of small molecule\u2013protein pairs with nanomolar dissociation constants.Nucleases, both of endonucleases and exonucleases, are useful tools in developing DNA signal amplification methods. Nickase is a kind of endonuclease which needs a specific recognition site in dsDNA but only cleaves one strand of the duplex . If thereraction . In this3.3.2.In Zhou\u2019s study, they developed a strategy of Exo III-assisted recycling cleavage of fluorescent probe for SA\u2013biotin interaction detection . Based o3.4.et al. [Combining the RCA technique with an exonuclease-assisted recycling cleavage of fluorescent probe in terminal protection assay, Chu et al. have rea4.4.1.et al. constructed a small molecule-linked DNA conversion for screening the small molecule\u2013protein interaction [et al. [The terminal protection assay strategies proposed by Jiang and his group utilized the unique properties of exonucleases. Without the participation of exonuclease, Wu eraction . This no [et al. have acc4.2.et al. [et al. [The foundation of terminal protection assay of DNA is that binding a protein to small molecule-labeled DNA will dramatically increase the steric hindrance around the binding site and thus inhibit the action of exonucleases. According to the principle of altering work environment of enzymes, Jiang et al. proposedet al. . Fok I iet al. . Like ot [et al. have demo [et al. .5.et al. [The review traces the recent development in the field of small molecule\u2013protein interaction assays upon the terminal protection of small molecule-labeled DNA. Terminal protection is a generalized discovery demonstrated by Jiang et al. , that sm"} {"text": "Clinical laboratory tests are the scientific basis for the diagnosis and management of diseases. In addition to the traditional areas of laboratory testing, including clinical chemistry, hematology, microbiology, immunology, and transfusion medicine, genetic testing is also broadening its role in the clinical laboratory field. While many new research procedures and analytical methods are becoming available, they all should be strictly validated before being adopted in the clinical practice as routine assays. Clinical laboratories have an important role to play in this translational process from bench to bedside. K-ras wild-type patients. 
Y. Liu and G. Shi review the recent advances regarding the role of chemotactic G protein-coupled receptors in control of migration of subsets of dendritic cells.Based on the success of the inaugural issue in 2013, Biomed Research International annualized the special issue on laboratory medicine, and this special issue is the second one succeeding the first success. This issue includes a wide variety of laboratory-related topics as illustrated by four review articles and 15 research papers. A review article by J.-L. Choi et al. provides a comprehensive review on progresses in clinical application of platelet function tests. I. Mozos describes laboratory markers of ventricular arrhythmia risk in renal failure. Another review by T.-K. Er et al. deals with current approaches for predicting a lack of response to anti-EGFR therapy inTwo interesting papers cover the basic field of laboratory medicine. The paper by H. Jin et al. was the first to compare the characteristics of erythrocytes derived from cord blood and granulocyte colony-stimulating factor-mobilized adult peripheral blood. H.-Y. Kim et al. showed that increased mitochondrial DNA copy number might be a useful biomarker associated with polycyclic aromatic hydrocarbons toxicity and hematotoxicity. Candida guilliermondii and Candida famata correctly by using matrix-assisted laser desorption/ionization-time of flight mass spectrometry and gene sequencing. Y. J. Hong et al. evaluated a multiplex real-time PCR and melting curve analysis for the detection of herpes simplex and varicella-zoster virus in clinical specimens. Y. Kim et al. evaluated three automated nucleic acid extraction systems for identification of respiratory viruses in clinical specimens by multiplex real-time PCR. An interesting paper by S. Kim et al. showed differences in disease risk estimations among three commercial genetic-testing services, implying that the genetic services need further evaluation and standardization. Another paper by S. Kim et al. analyzed in vivo expressions of the pharmacodynamic marker inosine monophosphate dehydrogenase (IMPDH) mRNA to investigate its usefulness in assessing effects of mycophenolic acid. Y. J. Hong et al. reported the significance of Lewis phenotyping in various tissues and concluded that the gastric Lewis phenotype must be used for the study on the association between the Lewis phenotype and Helicobacter pylori. C.-W. Park et al. proposed that the degree of hepatitis C virus (HCV) quasispecies measured by ultradeep pyrosequencing might be useful to predict progression of hepatocellular carcinoma in the patients with chronic HCV.Seven papers deal with the growing area of molecular diagnostic applications. The paper by S. H. Kim et al. identifiesSix papers come from the conventional areas of hematology and chemistry. L. A. S. Nunes et al. established reference intervals for the hemogram and iron status biomarkers in a physically active male population. T.-D. Jeong et al. investigated the association between the reduction in the estimated glomerular filtration rate and the prevalence of monoclonal gammopathy of undetermined significance in healthy Korean males. Another paper by T.-D. Jeong et al. indicated that total cholesterol concentration is correlated with the levels of bone turnover markers, suggesting that it might predict osteoporosis in premenopausal women. T. Ruskovska et al. 
emphasized that the variability of total (anti)oxidant results obtained using different assays should be taken into account when interpreting data from clinical studies of oxidative stress, especially in complex pathologies such as chronic hemodialysis. E. Gruszewska et al. concluded that the changes in concentrations of total sialic acid and free sialic acid during the same liver diseases indicate significant disturbances in sialylation of serum glycoproteins. Lastly, B. De Berardinis et al., on behalf of GREAT international, reported the usefulness of combining Galectin-3 and bioimpedance vector analysis in predicting short- and long-term events in patients admitted for acute heart failure. Given that laboratory medicine plays a vital role in translating research findings into clinical practice, we hope that this special issue will broaden the readership of Biomed Research International and contribute to the scientific development in this field. Mina Hur, Patrizia Cardelli, Giulio Mengozzi"} {"text": "The evidence on the role of particular lifestyles, smoking, binge drinking, lack of physical activity, and poor health care seeking, in increased risks for mortality and morbidity is compelling. Extant research on behaviour change interventions highlights the importance of structural support and a range of requisites such as capability, motivation, and opportunities. Through this special issue, robust research on behaviour change, as envisaged and implemented across the range of public health disciplines, is presented to enhance our understanding of how behaviour change research could inform public health practice at local, regional, and global levels. The selected papers in this issue provide avant-garde research across the various levels of the behaviour change research cycle. Research on the motivators of certain behaviours is presented through three studies. W. Liang and T. Chikritzhs explore survey data and find a significant association between heavy alcohol use and risk of violence. Likewise, pro- and anti-tobacco imagery as seen in media and other outlets appears to be an important determinant of attitudes towards and uptake of smoking among adolescents; G. Waqa et al. conclude therefore that public health warnings can work in real life. Most behaviour change (positive or negative) happens when individuals find themselves in a completely new context, and this hypothesis is demonstrated in the study conducted with Filipino migrants by D. Maneze et al., where competing priorities of daily living were perceived by study participants as a key barrier to their health-seeking behaviour. Research on the role of technology in supporting behaviour change is increasingly important, although the evidence remains sparse. In their systematic review, J. L. Watterson et al. focus on mobile health technologies in low- and middle-income settings to improve behaviours related to maternal and child health; they find some evidence of effectiveness in changing behaviour to improve antenatal/postnatal care attendance or childhood immunisation rates. Likewise, “nudging” through a smartphone application, SmartAPPetite, is the subject of a study by J. Gilliland et al., in which they find the application was effective in increasing awareness and consumption of healthy foods. The use of technology as a research tool is also an increasingly valuable innovation and is explored in the study by E. Bisung et al.
using \u201cphotovoice.\u201d The method employs the use of photography as an effective participatory research tool to understand behaviours, create awareness, and support sanitation and hygiene related behaviour change at community levels.Theexotic\u201d public health practice. It is exotic in the sense that implementing behaviour change can often cross the boundaries of national health services and this can happen at home, as demonstrated by E. L. Melbye and H. Hansen in relation to prevention focused feeding strategies, as well as at school classroom as investigated by M. Bronikowski et al. They test the hypothesis whether support given through teachers and peers can have positive effect on stimulating adolescents' physical activity levels.The real-life evidence presented in this special issue offers an interesting notion of \u201cOther real-life evaluations include studies by A. Gigantesco et al. on school-based mental health programme, by B. AM Schutte et al. on BeweegKuur lifestyle intervention implemented in Dutch primary healthcare settings, by H. Limm et al. on a health promotion program based on a train-the-trainer approach, and by L. H. Norton et al. on pedometer or instructor-led group protocol to increase physical activity levels. How socioeconomics can impact lifestyle risk factors is the subject of enquiry in the study by S. Streel et al. while P. Sedlak et al. present long term lifestyle changes in preschool children. stakeholder engagement. A.-M. Hendriks et al. provide a policy analysis of local decision making in Fiji. Obesity prevention has been difficult to implement and major impediments include power inequalities across various actors due to lack of engagement. A. Linden provides an informative analysis of the missing data generated as the unintended consequence of lack of patient (or service users) engagement in public health research. Stakeholder engagement throughout and beyond the research process therefore seems inevitable for a successful behaviour change agenda.Finally, implementing behaviour change is replete with challenges not least of which is\u2009Subhash\u2009\u2009Pokhrel\u2009\u2009\u2009Nana\u2009\u2009K.\u2009Anokye\u2009\u2009\u2009Daniel\u2009\u2009D.\u2009\u2009Reidpath\u2009\u2009\u2009Pascale\u2009\u2009Allotey"} {"text": "Reducing the frequency of milk recording decreases the costs of official milk recording. However, this approach can negatively affect the accuracy of predicting daily yields. Equations to predict daily yield from morning or evening data were developed in this study for fatty milk components from traits recorded easily by milk recording organizations. The correlation values ranged from 96.4% to 97.6% (96.9% to 98.3%) when the daily yields were estimated from the morning (evening) milkings. The simplicity of the proposed models which do not include the milking interval should facilitate their use by breeding and milk recording organizations.Reducing the frequency of milk recording would help reduce the costs of official milk recording. However, this approach could also negatively affect the accuracy of predicting daily yields. This problem has been investigated in numerous studies. In addition, published equations take into account milking intervals (MI), and these are often not available and/or are unreliable in practice. The first objective of this study was to propose models in which the MI was replaced by a combination of data easily recorded by dairy farmers. 
The second objective was to further investigate the fatty acids (FA) present in milk. Equations to predict daily yield from AM or PM data were based on a calibration database containing 79,971 records related to 51 traits . These equations were validated using two distinct external datasets. The results obtained from the proposed models were compared to previously published results for models which included a MI effect. The corresponding correlation values ranged from 96.4% to 97.6% when the daily yields were estimated from the AM milkings and ranged from 96.9% to 98.3% when the daily yields were estimated from the PM milkings. The simplicity of these proposed models should facilitate their use by breeding and milk recording organizations. This dataset contained observed yields from consecutive AM and PM milkings. Daily yields were also calculated. These samples were analyzed by MIR spectrometry using a FOSS Milkoscan FT6000 . FA content (g/dL of milk) was estimated by applying the MIR calibration equations described in The second validation dataset included milk samples composed of 50% morning milk and 50% evening milk. These samples were collected from 138,141 Holstein cows belonging to 1291 herds that participated in the Walloon milk recording system from October 2007 to June 2012.Samples were collected from all of the cows milked in the herds on a given test day, and these samples were analyzed using MIR spectrometry according to the normal milk recording procedure . The fincis-9.Models were developed to investigate whether daily yields can be estimated by replacing the MI effect with difi.e., the sum of the squared differences between each observation and its predicted value) for the estimated model.The accuracy of the AM-PM predictions was evaluated using two criteria. First, root mean squared error (RMSE) was calculated (Equation (4)), which represents the SD of the difference between observed and estimated daily yields. The model with the smallest RMSE and the highest coefficient of determination (or correlation) was considered to provide the best fit.The second criterion was R\u00b2, defined as the coefficient of determination. The square root of this value is the correlation which represents the relationship between the observed and predicted values. Statistical parameters were calculated using the GLMSELECT procedure in the SAS/STAT software package .A validation was applied on the best fitted model using the two available validation sets. The estimated statistical parameters were RMSE, the standard deviation of prediction (\u03c3\u0177) and Ry,\u0177.et al. [In the calibration set, the average milk production between October 2007 and April 2013 was 26.3et al. from theet al. [et al. [et al. from fat [et al. observedCorrelations between milking and daily yield traits varied from 92.4% to 97.9% . The strong positive correlations between daily and AM or PM yields observed in et al. [et al. [et al. [et al. [et al. [et al. [et al. [The FAT content and FAT yield correlation values were similar than those observed by Liu et al. . These a [et al. found 90 [et al. were oft [et al. . For mil [et al. found 85 [et al. and Liu [et al. or our set al. [et al. [As also shown by Liu et al. and Berr [et al. for milket al. [et al. [The PROC GLMSELECT procedure selected always combined effects. There were not individual effects such as only DIM or only lactation number in the selected equations. Such complexity of equations was not mentioned in previous studies ,10,13. Het al. 
[In order to appreciate the good fitting of a model, Liu et al. indicateet al. [et al. [Except for milk yield, the observed correlations suggested that the estimations of daily yield were better when PM milking data were used. Indeed, the calibration correlation values were found to range from 96.4% to 97.6%, and from 96.9% and 98.3%, when estimations were realized from AM or PM milkings, respectively . Except et al. and Berr [et al. . Howeveret al. [vs. 97.7% and 96.5% vs. 97.4% for the AM and PM milking data, respectively) . The \u03c3\u0177 ctively) .et al. [vs. 94.3% and 96.8% vs. 94.0% for Ry,\u0177, respectively; and 90.52 vs. 106.0 g/day and 85.49 vs. 109.0 g/day for RMSE, respectively). The \u03c3\u0177 values were slightly higher in the present study .et al. [et al. [Better AM/PM predictions were observed for milk yield compared to fat content and yield. It was also observed by Liu et al. and Berr [et al. . These lObserved AM/PM calibration Ry,\u0177 values for fatty acid traits were all within the same range and were higher than 96% suggesting a good prediction.i.e., real observed data) were bigger than the WAL validation set . One hypothesis is that these differences were due to the initial step used to predict AM/PM values for the calibration set. A potential confirmation of this hypothesis comes from the fact that small differences were observed between the RMSE or Ry,\u0177 observed from the first and second validation data sets for the equations predicting daily milk yield whose AM and PM milk records were always observed. However, the predictability stayed good with Ry,\u0177 never lower than 92.0%.As expected, validation Ry,\u0177 values obtained from the two validation sets were lower than calibration Ry,\u0177 values. Validation RMSE values were higher than the observed calibration RMSE values . Howeveri.e., these results were predicted using the same methodology as the one used for the calibration set) suggest a good robustness of the developed equations which was the main interest of the proposed methodology to build the calibration dataset. Indeed, as the first validation set which was composed of real records, was not large enough to cover the entire lactation, many parities, herds or cows, the theory of selection index was used to predict AM\u2013PM records from 50% AM/50% PM collected records. Better results could be obtained by using only real observations but a large sampling procedure (larger than the one conducted for the LUX data) should be conducted to present a sufficient variability for DIM, parity, month of test as well as studied traits. The advantage of the selection index theory applied in this study is to use data routinely available at large scale to build the predictive models and, therefore, to require a smaller dataset containing real observations to validate the obtained models.Small differences observed between calibration and WAL validation results . The obtained results show the interest to use the theory of the selection index to construct the calibration set in order to have more robust equations thanks to a large calibration set. With validation Ry,\u0177 higher than 92% obtained from observed records for all studied traits, the results are promising, although further studies are needed to confirm these results by using a larger database. 
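As a minimal illustration of the goodness-of-fit statistics used throughout this comparison, the sketch below computes RMSE (the square root of the mean squared difference between observed and predicted daily yields, as described for Equation (4)), the standard deviation of the predictions (sigma_yhat), and the correlation Ry,yhat for one trait. The arrays and values are hypothetical, not data from the study:

```python
import numpy as np

def validation_stats(observed, predicted):
    """RMSE, standard deviation of the predictions (sigma_yhat) and the
    correlation Ry,yhat between observed and predicted daily yields."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    sigma_yhat = np.std(pred, ddof=1)
    r_y_yhat = np.corrcoef(obs, pred)[0, 1]
    return rmse, sigma_yhat, r_y_yhat

# Hypothetical daily fat yields (g/day): observed vs. predicted from AM data only
obs = [820, 765, 910, 640, 705]
pred = [805, 780, 895, 660, 715]
rmse, sigma_yhat, r = validation_stats(obs, pred)
print(f"RMSE = {rmse:.1f} g/day, sigma_yhat = {sigma_yhat:.1f} g/day, Ry,yhat = {r:.3f}")
```

The squared correlation returned here is the coefficient of determination used to rank the candidate models in the calibration step.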
Moreover, the results obtained also show that it is possible to replace the MI parameter with a combination of more reliable parameters such as milk production and fat content, stage of lactation classes, the test month, and the calving month. The application of the models developed in this study has the potential to reduce the number of collected samples per test-day, thereby reducing the costs associated with official milk recording, while still maintaining a high accuracy of predicted daily yields. The main objective of this study was to propose a practical, simple, and robust method for accurately estimating daily FA yields from a single milking."} {"text": "The Y chromosome is a superb tool for inferring human evolution and recent demographic history from a paternal perspective. However, Y chromosomal substitution rates obtained using different modes of calibration vary considerably, and have produced disparate reconstructions of human history. Here, we discuss how substitution rate and date estimates are affected by the choice of different calibration points. We argue that most Y chromosomal substitution rates calculated to date have shortcomings, including a reliance on the ambiguous human-chimpanzee divergence time, insufficient sampling of deep-rooting pedigrees, and the use of inappropriate founding migrations, although the rates obtained from a single pedigree or calibrated with the peopling of the Americas seem plausible. We highlight the need for using more deep-rooting pedigrees and ancient genomes with reliable dates to improve the rate estimation. The paternally inherited Y chromosome has been widely applied in anthropology and population genetics to better describe the demographic history of human populations. In 2000, Thomson et al. screened three Y chromosome genes for sequence variation in a worldwide sample set, using denaturing high-performance liquid chromatography (DHPLC), and derived a substitution rate on the order of 10-9 per site per year calibrated against the human-chimpanzee divergence, although the uncertainty of this estimate was not given. A later study attempted to address this in 2006, by sequencing nearly 13 Mb (more than 20% of the whole chromosome) of the male-specific region of the chimpanzee Y chromosome. That analysis yielded a slightly higher rate, at 1.5 × 10-9, despite also using a chimpanzee-human calibration time that was 20% older than the previous study (6 million years). What is hopefully clear from the above is that, although direct comparisons of human and chimpanzee Y chromosomes offer us a powerful means to better understand the evolutionary process in our sex chromosomes during the past 5 to 6 million years, the approach is clearly susceptible to a number of assumptions that need to be made. First, there is uncertainty over the exact timing of the human-chimpanzee divergence, as fossil records and genetic evidence have given a range of 4.2 to 12.5 million years ago. Given the above, a variety of other methods have been proposed, including a Y chromosome base-substitution rate measured in a deep-rooting pedigree, rates adjusted from autosomal mutation rates, and rates based on archaeological evidence of founding migrations. We address each of these in turn. In 2009, Xue et al. sequenced the Y chromosomes of two individuals separated by 13 generations in a deep-rooting pedigree and obtained a rate of 1.0 × 10-9 per site per year (95% CI: 3.0 × 10-10 to 2.5 × 10-9) under the assumption that the generation time is 30 years. It is notable that this pedigree-based estimate overlaps with the evolutionary rates estimated from human and chimpanzee comparisons.
For pedigree-based substitution rate estimation, there are at least two criteria to be taken into careful consideration. First, the pedigree must be biologically true and the generation information validated. The pedigree used by Xue et al. is a Chinese family carrying the DFNY1 Y-linked hearing-impairment mutation. The same Y-linked disease-related mutation has validated the authenticity of their genealogy. Second, the detected mutations must be true. In this regard, Xue et al. used a variety of methods to verify the candidate mutations, thus validity of the rate: The Y chromosomes of the two individuals were sequenced to an average depth of 11\u00d7 or 20\u00d7, respectively, thus mitigating the possibility of sequencing and assembling errors; they also reexamined the candidate mutations using capillary sequencing.In 2009, Xue et al. sequenceet al. [et al. [et al. [This pedigree-based rate has been widely used in Y chromosome demographic and lineage dating. Cruciani et al. applied [et al. also use [et al. used thiet al. is O3a; however, other haplogroups probably have experienced very different demographic history and selection process, and might have different substitution rates as compared with haplogroup O3a. Second, the substitution rate was estimated using two individuals separated only 13 generations, thus, the question is whether the substitution rate estimated at relatively short time spans could be used in long-term human population demographic analysis without considering natural selection and genetic drift. Actually, many studies have noted that molecular rates observed on genealogical timescales are greater than those measured in long-term evolution scales [Although this pedigree-based substitution rate is widely accepted, some concerns have also been raised. First, the mutation process of Y chromosome is highly stochastic, and the rate based on a single pedigree and only four mutations might not be suitable for all the situations. For instance, the haplogroup of the pedigree used in rate estimation of Xue n scales .et al. [-10 per site per year . Strikingly, this substitution rate is only approximately half of the previous evolutionary rates and pedigree rate, although is very similar to estimates of autosomal rate [et al.\u2019s pedigree rate and Mendez et al.\u2019s rate which was also obtained from pedigree analysis. Mendez et al. [et al. [In 2013, in collaboration with the FamilyTreeDNA Company, Mendez et al. identifiet al. ,7 or froet al. ; insteadmal rate . In partz et al. . While M [et al. . Several [et al. . Using t [et al. -9, whichet al. [et al. [. Mendez et al. [et al. [Elhaik et al. also cri [et al. . Mendez z et al. assumed z et al. . Rather z et al. . The unr [et al. seem to et al. [-9 per site per year (95% CI: 0.72\u2009\u00d7\u200910-9 to 0.92\u2009\u00d7\u200910-9), and estimated the TMRCA of Y chromosomes to be 120\u2013156 kya (haplogroup A1b1-L419). In comparison, the mitochondrial genome TMRCA was 99 to 148 kya. Thus the authors concluded that the coalescence times of Y chromosomes and mitochondrial genomes are not significantly different, which disagrees with the conventional suggestion the common ancestor of male lineages lived considerably more recently than that of female lineages [et al.\u2019s calibration method, Francalacci et al. [et al. [-9 per site per year (95% CI: 0.42\u2009\u00d7\u200910-9 to 0.70\u2009\u00d7\u200910-9). This rate is extremely low and only half of the pedigree-based rate.In 2013, Poznik et al. reportedet al. . 
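To make concrete how strongly date estimates depend on the assumed rate, the following back-of-envelope sketch applies the simple molecular-clock relation t ≈ n/(μ·L), where n is the number of substitutions accumulated on one lineage over L callable sites at rate μ per site per year. The substitution count and callable length are invented for illustration; the pedigree-based value is the 1.0 × 10-9 figure cited for Xue et al., while the other two are taken as midpoints of the confidence intervals quoted above and should be treated as assumptions:

```python
# Illustrative only: how the same data convert into different dates under
# the classes of rates discussed in the text (all values per site per year).
rates = {
    "pedigree-based (Xue et al.)":              1.0e-9,
    "Americas-founding calibration (Poznik)":   0.82e-9,  # assumed midpoint of the 0.72-0.92e-9 CI
    "Sardinian-founding calibration":           0.53e-9,  # assumed midpoint of the 0.42-0.70e-9 CI
}

n_substitutions = 900      # hypothetical derived variants on one Y lineage
callable_sites = 9.0e6     # hypothetical callable male-specific sequence (bp)

for label, mu in rates.items():
    tmrca_years = n_substitutions / (mu * callable_sites)
    print(f"{label:42s} -> ~{tmrca_years / 1000:5.0f} kya")
```

Because dates scale inversely with the assumed rate, a rate roughly half the pedigree value roughly doubles the inferred coalescence ages, which is the effect discussed in the following paragraphs.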
A key ai et al. also use [et al. generateet al., how do they know the Q-M3 and Q-L54*(\u00d7M3) diverged at the exact same time of initial peopling of Americas? In fact, individuals belonging to haplogroup Q-M3 have also been found in Siberia [et al. [et al. In Francalacci et al.\u2019s case, the current Sardinian people might be directly descended from that initial expansion 7.7 kya, but there is also possibility that they are descended from a later successful founder population. If the latter is true, Francalacci et al. [The main concern of the above two rates is the calibration point. In Poznik Siberia , suggest Siberia . Y chrom [et al. estimatei et al. have undAlthough using the archaeological evidence for calibration in Y chromosomal substitution rate estimation is correct in principle, we have to pay great attention to whether the calibration point is reliable and suitable or not. In addition, more calibration dates could lead to more robust estimates. Besides the initial peopling of Americas and the initial expansion of the Sardinian population, the peopling of Oceania might be another good calibration point.To simply illustrate the considerable effect of using the different proposed Y chromosomal substitution rates for me estimation, we used the Y chromosome dataset of 1000 Genome Project to calcuSome of the most widely-cited Y chromosomal substitution rate estimates have several shortcomings, including a reliance on the ambiguous human-chimpanzee divergence time, insufficient sampling of deep-rooting pedigrees, and using inappropriate founding migrations. Here, we propose two possible approaches to obtain greater precision in measuring Y chromosomal substitution rate. First is the pedigree-based analysis, we can collect and sequence some reliable deep-rooting pedigrees representing a broad spectrum of worldwide Y chromosomal lineages or at least common haplogroups of East Asia. Recording the family trees has been a religious tradition of Han Chinese, and some family trees even span more than 100 generations, linking the contemporary individuals to their ancestors over 2 to 3 kya, although their authenticity requires careful validation ,30. Morebp: base pairs; CI: confidence interval; DHPLC: denaturing high-performance liquid chromatography; kya: thousand years ago; SNP: single nucleotide polymorphism; TMRCA: time to the most recent common ancestor.The authors declare that they have no competing interests.CCW conceived of the study, performed data analysis and wrote the manuscript. HL supervised the study and assisted in drafting and revising the manuscript. MTPG, LJ are involved in the discussion and revised the manuscript. All authors read and approved the manuscript."} {"text": "Arabidopsis genes involved in the control of lateral root initiation. This close relationship stresses the conservation among plant species of an auxin responsive core gene regulatory network involved in the control of post-embryonic root initiation. In addition, we report on several genetic regulatory pathways that have been described only in rice. 
The complementarities and the expected convergence of the direct and reverse genetic approaches used to decipher the genetic determinants of root development in rice are discussed in regards to the high diversity characterizing this species and to the adaptations of rice root system architecture to different edaphic environments.In this review, we report on the recent developments made using both genetics and functional genomics approaches in the discovery of genes controlling root development in rice. QTL detection in classical biparental mapping populations initially enabled the identification of a very large number of large chromosomal segments carrying root genes. Two segments with large effects have been positionally cloned, allowing the identification of two major genes. One of these genes conferred a tolerance to low phosphate content in soil, while the other conferred a tolerance to drought by controlling root gravitropism, resulting in root system expansion deep in the soil. Findings based on the higher-resolution QTL detection offered by the development of association mapping are discussed. In parallel with genetics approaches, efforts have been made to screen mutant libraries for lines presenting alterations in root development, allowing for the identification of several genes that control different steps of root development, such as crown root and lateral root initiation and emergence, meristem patterning, and the control of root growth. Some of these genes are closely phylogenetically related to The online version of this article (doi:10.1186/s12284-014-0030-5) contains supplementary material, which is available to authorized users. Roots are essential organs for exploring and exploiting soil resources, such as water and mineral nutrients. Different root architecture ideotypes that are adapted to different soil mineral nutrient balances or water statuses have been proposed. It is generally acknowledged that a deeper, thicker and more branched root system with a high root to shoot ratio can enhance the tolerance of rice to water deficits for thoO. sativa, which induces high risk of false-positive associations, the analyses were conducted on panels composed of accessions belonging to a single varietal group. This type of panel gives access to new varietal group-specific QTLs that cannot be detected by classical mapping in indica x japonica populations. However, even if the resolution on QTL position is clearly improved in GWAS compared with mapping populations, the number of possible candidate genes is still large and requires complementary evidence to determine the functional gene(s).The main limitation with QTL detection in these mapping populations is the large size of the confidence interval. Even meta-analyses, on average, did not enable a decrease in the meta-QTL confidence intervals to less than half the original size of the segment . GenomePHOSPHORUS UPTAKE 1 (PUP1), a QTL contributing to phosphorus (P) uptake in low P content soils. The gene underlying the QTL, later termed PHOSPHORUS-STARVATION TOLERANCE 1 (PSTOL1), was cloned and appeared to encode a receptor-like cytoplasmic kinase . The geDEEPER ROOTING 1 (DRO1), which controls root growth angle and enhances deep rooting transcription factors. DRO1 modulates root gravitropic response, likely via a modulation of epidermal cell elongation that enables roots to orientate their growth relative to the pull of gravity. 
The expression of DRO1 in the IR64 background increases the angle between roots and the horizontal axis, inducing deeper rooting. Compared to IR64, the near isogenic lines carrying DRO1 showed less deleterious effects, such as leaf wilting or delayed flowering, under moderate to severe drought stresses and better yield under stress, with no yield penalty under no stress conditions.Another cloned gene related to a root development QTL is a et al. ). DRO1 ePSTOL1 in the reference cultivar Nipponbare. A final reason is that many genes do not have an assigned function. For example, the DRO1 product has no similarity to known proteins and, wia et al. ). For thThe sequencing and annotation of the rice genome mutant is impaired in OsGNOM1, which encodes a membrane-associated guanine-nucleotide exchange factor for the ADP-ribosylation factor G protein and NAL3, a pair of duplicated genes. The nal2/nal3 double mutant shows a complex phenotype, with altered development of different parts of the plant and, notably, a strong reduction of the density of lateral roots. This latter effect is associated with an increase in root hair number and length and AUXIN SIGNALING F-BOX2 (AFB2), are negatively regulated by OsMir393a and OsMir393b regulatory protein ubiquitin ligase, which ubiquitinates the AUX/IAA protein, targeting it for degradation. These interactions require other chaperone and co-chaperone proteins (Guilfoyle and Hagen [Oscyclophilin2 (Oscyp2) and lateral rootless2 (lrl2), which are characterized by an absence of lateral roots and are caused by mutations in the same gene transcription factor belonging to a specific clade regrouping LBD proteins involved in the control of post-embryonic root initiation in different species that interacts in vitro with the OsARF16 transcription factor . This li et al. ; Liu et i et al. ; Steinmai et al. ). Crl4 ai et al. ). This ro et al. ). In nalo et al. ). OsTIR1n et al. ; Xia et n et al. ). Plantsn et al. ; Xia et n et al. ). AUX/IAnd Hagen ; Vannestnd Hagen ; Mockaitnd Hagen ; Lavenusnd Hagen ). In ricg et al. ; Zheng eg et al. ). In addg et al. ). OsCYP2g et al. ). OsIAA1g et al. ; Zhu et g et al. ). OsIAA1i et al. ) and advu et al. ) mutantst et al. ). Crl1 ai et al. ). Taken i et al. ; Lavenusi et al. ; Hochholi et al. ). In rici et al. ).crown root less5 (crl5) mutant exhibits a reduced number of crown roots due to a reduction in crown root initiation transcription factor. Its expression is induced by auxin via an ARF transcription factor. Cytokinins negatively regulate the initiation of crown and lateral roots in rice that are repressors of cytokinin signaling . CRL5 ei et al. ; Kitomi i et al. ), but plo et al. ). Similao et al. ). WOX11 HEME OXYGENASE 1 (HOX1) gene controls the formation of lateral roots via the production of carbon monoxide. HOX1 is regulated by auxin and stress related signals, such as jasmonate and nitric oxide , which QUIESCENT CENTER HOMEOBOX (QHB) gene is an ortholog to the WUSCHEL-related WOX5, a QC-specific gene in Arabidopsis that contributes to QC and root stem cell maintenance CLV1 , which is composed of a small number of mitotically inactive cells, is indispensable for the maintenance of the undifferentiated status of the surrounding stem cells from which root tissues are produced . The maa et al. ; Breunina et al. ; Ditengoa et al. ). QHB isi et al. ; Ni et ai et al. ; Kamiya i et al. ; Liu et i et al. ). The CLu et al. ; Suzaki u et al. ). This rr et al. ). In ricu et al. ). Nevertu et al. 
).Arabidopsis, SCARECROW (SCR), a transcription factor from the GRAS family, also contributes to the specification of the QC , encodes an LRR RLK protein . In rici et al. ). The exu et al. ). Togetho et al. ; Cui et o et al. ). In ric et al., ). In add et al., ; Ni et a et al., ; Kamiya et al., ). Rice r et al., ; Coudert et al., ). An alug et al. ). In thig et al. ). This rOscand1 mutant, crown root meristems are properly initiated and formed but do not emerge SCFTIR1 ubiquitin ligase involved in the degradation of AUX/IAA in response to auxin knock-down plants, the emergence of lateral roots is blocked due to a perturbation of cell cycle activity in newly formed lateral root meristems or the root tissues. Crown root emergence is stimulated by submergence. This stimulation is mediated by ethylene, which induces the expression of cell cycle regulatory genes in crown root meristems and promotes, in synergy with gibberellic acid, the death of epidermal cells at the site of root emergence . In Arag et al. ). In then et al. ). OsORC3OsDGL1 encodes DOLICHYL DIPHOSPHOOLIGOSACCHARIDE-PROTEIN GLYCOSYLTRANSFERASE 48\u00a0kDa subunit precursor necessary for the proper structuration of the polysaccharide matrix of root cell walls. Osdgl1 mutant plants exhibit a short root phenotype due to a defect in root cell elongation and division is specifically expressed in root tips mutant also exhibits a short root phenotype that is due to defective root cell elongation and is characterized by a perturbation in iron homeostasis mutant, the elongation of root cells is strongly inhibited under salt stress conditions, resulting in a short root phenotype transcription factors OsNAC5, OsNAC9 and OsNAC10 in roots enhances the water stress tolerance of plants . Anotheg et al. ). In Osgg et al. ). Expansn et al. ). Plantsa et al. ). This pa et al. ). OsSPR1a et al. ). RSS3 ia et al. ). RSS1 pg et al. ; Redillag et al. ; Jeong eg et al. ). The ovOsiaa13, c68 or crl1 encodes a component of the CHROMATIN ASSEMBLY FACTOR1 that contributes to crown root formation . Data ci et al. ). In thee et al. ). It is DRO1 and PSTOL1, with functions related to drought tolerance or adaptation to soil with low phosphate content, respectively . The idThese approaches need to be more closely linked to accelerate the discovery of new genetic determinants, which will result in the generation of varieties with improved root architecture. We hope that this review helps to link the data obtained by these complementary approaches.SUB1, a gene controlling submergence tolerance, in various Asian countries . In less et al. ). DRO1 a"} {"text": "The aim was to study the associated factors and extent of short lactations in Sahiwal cattle maintained under organized herd.The present study was conducted on Sahiwal cattle (n=530), utilizing 1724 lactation records with respect to lactation length (LL), spread over a period of 15 years (1997-2011), maintained at Livestock Research Center, National Dairy Research Institute, Karnal. Observations of LL were analyzed by descriptive statistical analysis in order to know the extent of short lactation of animals in the herd. Paternal Half sib method was used to estimate the genetic parameters, i.e., heritability, genetic, and phenotypic correlation. The influence of various non-genetic factors on LL was studied by least squares analysis of variance technique.The least squares means for LL was found to be 215.83\u00b13.08 days. 
Only 32.48% of total lactation records were fell in the range of 251-350 days of LL, while more than three-fourth (76%) of total observations were failed to reach the standard level of 305 milking days. LL class ranges from 251 to 300 days accommodated maximum number of observations (19.2%). The heritability estimate of LL was 0.22\u00b10.07. Positive correlations were found between LL and service period, LL and 305 or less days milk yield, LL and calving interval; whereas dry period was negatively correlated with the LL. The least squares analysis had shown that LL was significantly (p<0.01) influenced by the period of calving, type of calving, and season of drying. Significantly higher LL (276.50\u00b17.21 days) was found in animals calved in the first period than those calved in other periods. The cows dried during summer season had the shortest LL (188.48\u00b17.68 days) as compared to other seasons.Present findings regarding short lactations occurrence may be alarming for the indigenous herd, demanding comprehensive study with the larger data set. Since LL was influenced by various environmental factors suggesting better managerial tools, besides special attention on the milch animals going to dry during the summer season. In the present scenario, dairying has emerged as a constant source of income for millions of rural families round the year thus plays a significant role in the Indian agricultural economy. Sahiwal is the best milch breed in the tropics including India . This daThough India is top milk producer (133 million tons) in the world . This miIt has been stated that most of the indigenous cattle were found with average lactation length (LL) below 305 days and EthiTherefore, it is most important to improve the production performance of our native stock to meet the expanding demand of milk and milk products for vigorously growing the population of our country. Keeping the above intricacies, the present study has been designed to assess the probable causes and extent of short lactations in Sahiwal cattle maintained under organized herd.The present investigation did not require ethical approval as only phenotypic records obtained from history sheets were used for investigation.viz. winter (December to March), summer (April to June), rainy (July to September), and autumn (October and November). A subtropical climate with maximum air temperature during summer about 45-48\u00b0C and minimum temperature during winter near to 1-4\u00b0C prevails in the area. In the study area relative humidity ranges between 41% and 85% and annual rainfall between 760 mm and 960 mm.The present investigation was conducted on Sahiwal cattle (n=530) maintained at Livestock Research Centre of National Dairy Research Institute, Karnal, Haryana. Study area is located at 29\u00b042\u2032N latitude and 72\u00b0 02\u2032E longitude with an altitude of 250 m above the mean sea level in the bed of Indo-Gangetic alluvial plain. There are four major seasons in the year The data for the present study were collected from history sheets of Sahiwal cattle maintained at dairy cattle breeding Division of National Dairy Research Institute, Karnal. The data comprising of 1724 lactation records of Sahiwal cattle (n=530) spread over a period of 15 years (1997-2011) were utilized for this study. 
Incomplete lactations, i.e., transfer, sale or death of an animal during lactation were not included in the present study.th parity, further lactation records were grouped together in a single class.Observations of LL were analyzed by descriptive statistical analysis in order to know the extent of short lactation of animals in the herd. For this purpose, all lactation records were grouped into different classes, keeping a class interval of 50 days. Paternal half-sib method, as described by Becker , was useFor heritability:ij = \u03bc + Si + eijYwhere,ij = Adjusted value of jth progeny of ith sireY\u03bc = Overall meani = Effect of ith sireSij = Random error, NID eFor least squares analysis:where,\u03bc = Overall meani = Effect of ith season of calvingSj = Effect of jth period of cavingPk = Effect of kth parityAl = Effect of lth type of calvingTm = Effect of mth season of dryingDijklmn = Random error, NID ei, Pj, Ak, Tl, and Dm are fixed effectsSLeast squares analysis revealed an average LL of 215.83\u00b13.08 days, which is far below than the standard 305-days of milking days. Lower LL values were recorded as 204.33\u00b170.35 days reported in local cows in Ethiopia ; 213.90\u00b1et al. [et al. [It was quite clear from et al. stated t [et al. that mayet al. [et al. [et al. [et al. [et al. [The heritability estimate of LL was 0.22\u00b10.07. The findings of this study were near to the findings of Goshu et al. and Endr [et al. , whose h [et al. in HF in [et al. , Kathira [et al. observedGenetic and phenotypic correlation between LL and other traits have been presented in et al. [et al. [In this study, the genetic correlation of LL with SP was non-estimable as the estimate was greater than unity. High and positive estimate of genetic correlation 0.62\u00b10.18) between LL and SP in Sahiwal cattle was observed by Kumar ; whereaset al. ; whereas between et al. [et al. [Genetic correlation of LL with 305 DMY was obtained high, positive (0.85\u00b10.05), and significant (p<0.01) in the present study. High and positive (0.95\u00b10.09) genetic correlation between above traits were also repsorted by Manoj . Whereaset al. reportedet al. and Rehmet al. , whose e [et al. and Kath [et al. , whose eet al. [et al. [et al. [et al. [The results of the present study revealed that there is high, negative (\u22120.71\u00b10.61), genetic correlation between LL with DP which is statistically non-significant. Negative genetic correlations between above traits have also reported by Goshu et al. , Kathiraet al. , and Kumet al. , whose eet al. and Moge [et al. , which w [et al. and Kuma [et al. , whose e [et al. and Moge [et al. obtainedet al. [et al. [Genetic correlation of LL with CI was obtained high, positive (0.73\u00b10.16), and significant (p<0.01). High and positive estimates of above correlation were also reported by Moges et al. and Kumaet al. , whose e [et al. and Kuma [et al. , whose eet al. [et al. [The results of the present study revealed that the influence of season of calving on LL was statistically non-significant. Similar findings were also reported by Habib et al. , Nawaz e [et al. and Raja [et al. ; whereas [et al. ,10,12.et al. [et al. [st period (1997-1999) and lowest (182.60\u00b15.93 days) for the period of 2003-2005 , which is supported by most researchers ,15,19. Het al. and Endr [et al. . The ave003-2005 . Averageet al. [The effect of parity on LL was non-significant, which is supported by Habib et al. ; whereaset al. 
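For the paternal half-sib model given above (Y_ij = μ + S_i + e_ij), heritability is estimated from the between- and within-sire mean squares of a one-way ANOVA as h² = 4σ²_s / (σ²_s + σ²_e). The sketch below assumes balanced, simulated daughter groups; real herd data are unbalanced and would require Becker's correction for unequal progeny numbers or a REML mixed model.

# Paternal half-sib heritability sketch for the model Y_ij = mu + S_i + e_ij:
# h^2 = 4 * sigma2_sire / (sigma2_sire + sigma2_error).
# Balanced, simulated data; unbalanced field data need Becker's k correction
# or a REML mixed model instead of this textbook ANOVA.
import numpy as np

rng = np.random.default_rng(2)
n_sires, daughters_per_sire = 40, 12
sire_effects = rng.normal(0, 8, n_sires)                       # between-sire SD = 8
y = np.array([216 + s + rng.normal(0, 35, daughters_per_sire)  # residual SD = 35
              for s in sire_effects])                          # shape (sires, daughters)

k = daughters_per_sire
ms_between = k * y.mean(axis=1).var(ddof=1)       # mean square between sires
ms_within = y.var(axis=1, ddof=1).mean()          # mean square within sires

sigma2_e = ms_within
sigma2_s = (ms_between - ms_within) / k
h2 = 4 * sigma2_s / (sigma2_s + sigma2_e)
print(f"estimated heritability h^2 ≈ {h2:.2f}")

The same data layout, with season, period, parity and type of calving added as fixed effects, underlies the least-squares model described above.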
.The results of the present study revealed that the effect of type of calving on LL was statistically significant (p<0.01). Significantly higher LL (217.52\u00b13.17 days) was observed in normal calver than cows with abnormal calving (185.21\u00b111.48 days) which might be due to uterine disorders and low body condition owing to poor feed intake in affected cows .Effect of season of drying was found to be significant (p<0.01) on LL. Animal which dry during the autumn season were reported with significantly longer LL (241.78\u00b17.88 days) as compared to those which dry in summer and rainy seasons . The finThe findings of this study reflected that short lactation problem may be alarming for the indigenous herd. Since LL is mainly influenced by environmental factors, therefore proper management measures and good feeding practices are of paramount importance to reduce the occurrence of short lactation in Sahiwal cows. Furthermore, a study on large-scale data is pertinent to assess the problem of short lactation in real sense, which may pave the way for ameliorative actions needed to increase milking days of indigenous milch animals in years to come.USN, RKM, and SSL designed the study. USN conducted the study and analyzed the data. RNY contributed in the data collection. KKV and AKV revised the manuscript. All authors read and approved the final manuscript."} {"text": "Childhood exposure to lead remains a critical health control problem in the US. Integration of Geographic Information Systems (GIS) into childhood lead exposure studies significantly enhanced identifying lead hazards in the environment and determining at risk children. Research indicates that the toxic threshold for lead exposure was updated three times in the last four decades: 60 to 30 micrograms per deciliter (\u00b5g/dL) in 1975, 25 \u00b5g/dL in 1985, and 10 \u00b5b/dL in 1991. These changes revealed the extent of lead poisoning. By 2012 it was evident that no safe blood lead threshold for the adverse effects of lead on children had been identified and the Center for Disease Control (CDC) currently uses a reference value of 5 \u00b5g/dL. Review of the recent literature on GIS-based studies suggests that numerous environmental risk factors might be critical for lead exposure. New GIS-based studies are used in surveillance data management, risk analysis, lead exposure visualization, and community intervention strategies where geographically-targeted, specific intervention measures are taken. The use of GIS in environmental risk factor studies on childhood lead exposure became a focus of research activity in the late 1990s. This prompted the CDC to develop a guideline for the use of GIS in childhood lead poisoning studies in 2004 . FundingEcological studies focusing on the distribution of blood lead levels, susceptible populations, and exposure sources have been cited to address childhood lead exposure. Identification of environmental risk factors and understanding of the distribution of the lead in the environment is important for health departments in better targeting at risk populations ,7,8,9. EDespite being a preventable environmental problem, lead poisoning remains a major health threat and a persistent source of illness in the United States. Its estimated cost is $50.9 billion . ChangesIn lead poisoning studies, GIS was used in various stages from data preparation, to multivariate mapping of BLLs with their risk factors, to spatial and statistical analysis. 
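Before any of the mapping or spatial-analysis stages just mentioned, screened children's addresses have to be geocoded; a minimal sketch of that data-preparation step is given below. It assumes the geopy package and the public Nominatim service, whereas the studies reviewed in the following paragraphs typically geocode against local street or cadastral parcel reference files, not least for confidentiality.

# Minimal address-geocoding sketch for the data-preparation stage. Uses the
# geopy package with the public Nominatim (OpenStreetMap) geocoder; production
# studies usually geocode against local street or parcel reference files so
# that confidential addresses never leave the health department.
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="lead_screening_demo")   # identify your application

addresses = [
    "1600 Pennsylvania Ave NW, Washington, DC",            # illustrative addresses only
    "350 Fifth Ave, New York, NY",
]

for addr in addresses:
    location = geolocator.geocode(addr, timeout=10)
    if location is None:
        print(f"NO MATCH : {addr}")                        # unmatched records lower the match rate
    else:
        print(f"{location.latitude:.5f}, {location.longitude:.5f} : {addr}")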
At the data preparation stage, address geocoding is the most used tool to transfer tabular data sets, such as screened children addresses, into GIS ,18,19,23A review of GIS-utilized studies on childhood lead poisoning has not been conducted. There are some non-GIS based reviews on lead poisoning in relation to cardiovascular diseases , resuspeet al. [Navas-Acien et al. studied et al. published two reviews about the relationship between lead in soil and children blood lead levels in 2008 and 2011 [et al. [vs. \u201cblood lead\u201d studies. They created a statistical model in order to investigate the atmospheric soil seasonality and the prediction model for atmospheric soil in the US. In terms of soil lead topology, they reviewed studies indicating that lead in soil decayed exponentially away from the historical main roads [et al. [et al. [Laidlaw and 2011 ,60. In 2 [et al. claimed in roads ,62. Anot [et al. also suget al. [et al. found that children on the Mexican side of US-Mexico border had higher BLLs compared to the children who lived in the US side of the border. However, poverty could be a confounding factor in the area [Brown et al. presentethe area . Anotherthe area in theirthe area ,66. In tet al. published two reviews about environmental aspects of lead poisoning in consecutive years 2010 and 2011 [et al. investigated the effect of traffic on lead poisoning regarding lead emissions and additives used in eight California urbanized areas. The authors used three datasets in order to show the gasoline lead contribution in the environment; annual lead amounts from 1927 to 1984, 1982 lead additive quantities for eight urbanized areas in California, and California fuel consumption data from 1950 to 1982. The review showed that there was a correlation between the lead amount in soil and size of the cities. Community location was also related to the lead amount. Inner cities where high traffic volume occurs had higher amounts of leaded soil compared to the suburbs. The review also showed that the distance decay characteristics of lead in soil were similar throughout the US. There was a strong correlation between children BLLs and lead in soil. Mielke\u2019s review confirmed the relationship between children BLLs and seasonality. Mielke et al. found a negative relationship between lead in soil and school erformance of children. In their second review in 2011, they expanded their previous California study to 90 urbanized areas throughout the US. Their findings corroborated the previous findings.Mielke and 2011 ,59. In tA literature search was conducted to identify recent articles discussing childhood lead poisoning and the use of GIS and risk modeling. Several online databases were queried, including JSTOR, CINAHL, Web of Science, ScienceDirect, and PubMed. The following key words were used individually and in combination as inclusion criteria for articles to be considered for this review; children, childhood, pediatric, Pb, lead, poisoning, toxicity, geographic, information, systems, and GIS. Our review covers a 21 year period which includes GIS-based studies published since 10 \u00b5g/dL thresholds were first introduced in 1991 until the new threshold of 5 \u00b5g/dL in 2012. Initial searches yielded approximately 981 results. The abstracts of these papers were reviewed to confirm applicability. 
After considering additional exclusion criteria , 23 papers remained.Reviewed articles were summarized and grouped into five categories: screening methodology design, risk modeling studies, environmental risk factors, spatial analysis of genetic variation, and political ecology. All of the reviewed articles obtained their lead toxicity data from health departments. In these studies, blood lead screening data was collected by clinics or health workers without GIS. Data collection methods may vary among states.et al. in 1998, Reissman et al. in 2001, Roberts et al. in 2003, and Vaidyanathan et al. in 2009 [Studies on childhood lead poisoning surveillance that used GIS include Lutz in 2009 ,7,8,9. T in 2009 . The guiet al. [et al. found that the screening data thoroughly represents the targeted population in Knoxville, TN.The questions included in the questionnaire are: \u201cDoes your child live in or regularly visit a house that was built before 1950?\u201d; \u201cDoes your child live or regularly visit a house built before 1978 with recent or ongoing renovations or remodeling within the last six months?\u201d; and \u201cDoes your child have siblings or playmate who has or did have lead poisoning?\u201d Some states had additional questions added to the CDC questionnaire. Lutz et al. defined et al. ,69. Lutzet al. [et al. study [et al. considered the \u201cat-risk\u201d population as children between 6 and 35 months of age who reside in a home built before 1950 or live in a target zone where more than 27% of houses were built before 1950. The authors compared the percentage of screened children with corresponding target zones by both census tracts and ZIP codes. The study found that the percentage of children with EBLLs is strongly associated with old housing. The study also showed that the significant numbers of children who live in at risk areas were not being tested throughout the county. The second part of the study mapped the children who are younger than 7 years old with confirmed BLL \u226520 \u00b5g/dL and the houses where more than one child resides with confirmed BLL \u226520 \u00b5g/dL.Reissman et al. used GISl. study , Reissmaet al. [et al. [et al. [Roberts et al. conducte [et al. and Reis [et al. , the autet al. [et al. study, all of the studies in this section demonstrate that corresponding health departments failed to account for \u201cat-risk\u201d populations. The studies also demonstrate that GIS could be an effective tool to target \u201cat-risk\u201d neighborhoods by health departments.Vaidyanathan et al. developeet al. [This section refers to nine articles on risk model development for childhood lead poisoning ,16,17,18et al. conducteet al. [et al. also prioritized the Durham, NC region in four risk areas: (1) predicted parcels which are most likely to contain leaded paint; (2) predicted parcels which are less likely to contain leaded paint; (3) predicted parcels which are lesser likely to contain leaded paint; and (4) predicted parcels which are least likely to contain leaded paint. Unlike the Sargent et al. study, Miranda et al. found that the dependent variable is correlated with the percentage of the African American population as well as median income and construction year of housings. One major shortcoming of the model is missing data since address geocoding rates may be under 50%. This study was later updated by Kim et al. [et al. [et al. [et al. also deployed an address geocoding based on the cadastral parcel reference system. Also similar to the Miranda et al. 
study [et al. [Miranda et al. used a tm et al. in 2008. [et al. , Griffit [et al. and Kim l. study , the geo [et al. study anet al. [i.e., census block group, census tract, and ZIP code) to monitor social inequalities in childhood lead poisoning. The authors used blood lead level screenings of children who live in Rhode Island. The screening period was between 1994 and 1996. Different from Miranda et al. [et al. used a street reference system dataset) for their address geocoding process. Street reference systems generally produce higher geocoding success rates compared to cadastral parcel reference systems. For instance, the Kriger et al. study produced more than 90% of geocoding success in all geographic units, census block groups, census tracts, and ZIP codes. However, one potential weakness of the method is that street geocoding results may be distant from the actual location of houses since the method uses a linear interpolation on street segments in the reference file. The authors found that the choice of measure and the level of geography matter. Census tract and census block group socioeconomic measures detected stronger socioeconomic gradients than the zip code units. The results indicate that BLLs are strongly associated with poverty but not education level, occupation, and wealth. A similar sensitivity analysis was conducted by Kaplowitz et al. [et al. assessed predictive validity of different geographic units for their risk assessment. According to their study, census block groups explain more variance in BLL than high and low risk ZIP codes. Their study confirmed that children\u2019s BLL is more closely associated with characteristics of their immediate environment than with characteristics of a larger area such as a census tract or ZIP code.Kriger et al. examineda et al. , Kriger z et al. in 2010.et al. [Haley and Talbot presenteet al. , this stet al. [et al., on the other hand, used cadastral parcels as the reference files. Geocoding success rate is generally much higher in geocoding process with street reference files than ones with cadastral parcel reference files. However, cadastral parcel reference files provide more precise geocoding results and the construction year of housing units. The authors compared cadastral and TIGER based geocoded addresses in three sections including, census tract, census block group, and census blocks of 1990 and 2000 census demographics. The study shows that there is a noticeable but not considerably high positional error difference in their spatial statistical analyses using the two methods. The regression analysis in the study was employed in two different BLL thresholds, 5 and 10 \u00b5g/dL. Regardless of the threshold level, the results indicate that EBLLs are significantly associated with the percentage of the African American population and average house value in the census block and census block group analyses.Griffith et al. conducteet al. [et al. uses a GIS scripting tool to deduplicate pediatric blood data. This study also differs from others by producing a kriging map for the area. The kriging map of Chicago shows that Westside area has the highest risk of EBLLs in the city. The authors also used TerraSeer\u2019s Space-Time Intelligence Systems (STIS) to explore the krigged prevalence rates in order to analyze spatial patterns [Using descriptive discriminant and odds ratio analyses, Oyana et al. ,18 creatpatterns . Moran\u2019spatterns and LISApatterns were useet al. [et al. 
used command line address matching software, which is one of the oldest address geocoding engines. In terms of environmental factors, Mielke et al. [This section discusses eight studies that address environmental risk factors ,24,25,26et al. conductee et al. studied et al. [et al., and Mielke et al. [et al. also showed that BLLs are correlated with percentage of children at risk, population density, mean housing value, and percentage of the African American population.Griffith et al. employede et al. ,58,59. Iet al. [et al. mapped the distribution of these 76 point sources as well as five point sources containing 19 soil samples with the values ranging from 100 to 7870 \u00b5g/g soil lead. They compared the children BLLs with Bocco and Sachez [Gonzalez et al. investigd Sachez study\u2019s et al. [In 2007, Miranda et al. exploredet al. [et al. used GIS to show children locations in a jittered representation even though they run the statistical model based on actual point locations. Using the geocoded locations, Miranda et al. was able to join children locations and buffer zones, which were created from the airport boundaries. The authors assigned dummy variables to children locations based on the boundaries mentioned above and seasons for the screening time. The model includes the age of housing, screening season, and demographic variables. The authors also used inverse population weights to eliminate the possible bias caused by high numbers of screening cases on parcels. The study found a significant positive association between logged BLLs and the distances to the airport locations. It further shows that seasonality is an important factor in estimating BLLs. In fall, spring, and summer seasons, children were found having higher BLLs on average compared to winter season screenings. Age of housing was negatively associated with BLLs while the median household income and minority neighborhoods had positive associations with BLLs.Another environmental study by Miranda et al. conducteet al. [Mielke et al. conducteet al. [et al. [In 2013, Mielke et al. analyzed [et al. , this stet al. [One of the reviewed studies focused on the genetic variation of childhood lead poisoning problems . Since oet al. , which gHanchette\u2019s study focused on the political ecology aspect of childhood lead toxicity. The author used Moran\u2019s I and LISAThis article reviewed twenty-three GIS-based studies examining spatial modeling of childhood lead poisoning and risk factors that were published after 1991, the year the CDC\u2019s threshold updated to10 \u00b5g/dL. GIS use in lead studies revealed greater detail about the magnitude of lead poisoning within populations. Reviewed articles indicate that surveillance and screening practices have extended considerable amount of importance in targeting \u201cat-risk\u201d populations. However, the literature shows that some health departments failed to account for \u201cat-risk\u201d populations ,8,9. ThiRisk factors for childhood lead poisoning have been thoroughly parsed out in childhood lead poisoning research. Unfortunately, address geocoding methods, the parameters used, and the uncertainties they presented were not included in a similar level of detail in the research. Most of the reviewed studies did not provide the input parameters such as the reference system and the match rate. 
Since these parameters have a direct impact on the results of the spatial analyses, their omission makes it difficult to conduct legitimate comparisons among the various articles. To date, no safe blood lead threshold for the adverse effects of lead on children has been identified. Future lead poisoning studies should also be concerned with data aggregation and the choice of geographical analysis. Data aggregation is done for two reasons: to link socio-economic and environmental measures to lead data and to ensure data confidentiality. In the former case, geocoded addresses may fall far away from their actual locations, resulting in boundary problems during data aggregation to census block groups, census tracts, or ZIP code areas. Very few studies examined these aggregation problems and spatial scale effects when monitoring risk factors. Environmental studies on lead paint usage before 1978 have shown a link between house age and elevated BLLs. Soil studies can also reveal sources of lead toxicity. Several studies have shown that the distribution of lead toxicity among young children can be explained by proximity to high-volume traffic areas. The relationship between vehicular lead deposits and children with elevated BLLs is contentious: Griffith et al. found no such association, whereas other studies [24,26] reached different conclusions. The environmental studies in this review also indicate a correlation between BLLs and African American populations. However, very few studies investigated the individual characteristics of children.
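Several of the studies reviewed above (e.g., Oyana et al. and Hanchette) relied on Moran's I and LISA statistics to test for spatial clustering of blood lead levels. The sketch below computes global Moran's I from first principles with a row-standardized k-nearest-neighbour weight matrix on simulated tract centroids; packages such as libpysal/esda provide the same statistic together with permutation-based inference.

# Global Moran's I for spatial autocorrelation of blood lead levels, computed
# directly from a row-standardized k-nearest-neighbour weight matrix. Tract
# centroids and BLL values are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(4)
n, k = 150, 5
coords = rng.uniform(0, 100, size=(n, 2))                    # tract centroids (arbitrary units)
bll = 3 + 0.04 * coords[:, 0] + rng.normal(0, 1, n)          # mild spatial gradient -> clustering

# k-nearest-neighbour binary weights, then row-standardize
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)
W = np.zeros((n, n))
for i in range(n):
    W[i, np.argsort(d2[i])[:k]] = 1.0
W = W / W.sum(axis=1, keepdims=True)

z = bll - bll.mean()
moran_I = (n / W.sum()) * (z @ W @ z) / (z @ z)
expected_I = -1.0 / (n - 1)
print(f"Moran's I = {moran_I:.3f} (expected under no autocorrelation: {expected_I:.3f})")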
BecauseAnaplastic large cell lymphoma (ALCL) is a peripheral T cell lymphoma presenting mostly in children and young adults. Malcolm et al. present Perry et al. describeIt is now well known that rare cancers are common, but hard to study. O\u2019Suoji et al. used datn\u2009=\u20091), histologic transformation (n\u2009=\u20092), and intercurrent deaths (n\u2009=\u20092). The estimated 10-year overall survival was 94.0\u00a0% and the 10-year progression-free survival 76\u00a0%. A similar study was performed by Kenderian et al. [Population-based data are very relevant to get a good idea on the features of rare diseases, since centers will have a selection of patients resulting in a certain bias. Strobbe et al. made usen et al. with a fAcquired C1-inhibitor (C1-INH) deficiency (C1-INH-AAE) is a rare condition resulting in acquired angioedema (AAE) and about 33\u00a0% of the patients develop NHL. Castelli et al. report tA subset of DLBCL is CD30 positive. Gong et al. studied Journal of Hematopathology by van den Brand et al. [BCL2 rearrangement, which was strongly overlapping with the morphological features of NMZL. This study raises the hypothesis that a subset of LG B-NHL with a follicular growth pattern but without a BCL2 translocation actually represents NMZL.An interesting hypothesis was put forward in the previous issue of the d et al. : quite sBatlle-L\u00f3pez et al. used thrLu et al. approachRoth et al. describeMoench et al. investigTanaka et al. investigThe splicing factor neuro-oncological ventral antigen 1 (NOVA1) is present in T cells of tertiary lymphoid structures. Kim et al. found thDobashi et al. investigAdult T cell leukemia/lymphoma (ATLL) is a rare T cell neoplasm caused by human T cell leukemia virus type 1 occurring in specific regions on the world. Tobayashi et al. analyzedLee et al. investigThe diagnosis of panniculitis-like T cell lymphoma (SPTCL) can be difficult. Especially, cases of SPTCL and lupus erythematosus panniculitis (LEP) can have clinical and histopathologic overlap, raising the possibility that they represent opposite ends of a disease spectrum. SPTCL, however, is typically associated with greater morbidity and risk for hemophagocytic lymphohistiocytosis (HLH). LeBlanc et al. present Lorenzi et al. report tInitially MYD88 mutations were seen as typical for lymphoplasmacytic lymphoma (LPL) but the spectrum has expanded. Rovira et al. describeMost of the post-transplant lymphoproliferative diseases (PTLD) are EBV driven, but a subset is EBV negative. Finalet Ferreiro et al. performeAs I wrote in my editorial , we knowJain et al. incidentKi-67 has been shown to be a good prognostic marker in MCL. Hoster et al. comparedAll in all, the briefest paragraph on prognostic factors in the series of review, you can read the argumentation in the previous issue .Bone marrow biopsies are slowly being replaced by other methods for staging. Cho et al. investigDi Martino et al. comparedSometimes a complete new technique that is simple but really different comes up and it is interesting to see if the promise becomes real (remember AGNORs\u2026..). Aesif et al. describeIt is now well demonstrated that free circulating DNA is partially derived from neoplastic cells and contains at least part of the genetic alterations present in the tumor. Camus et al. developeLee et al. used theDubois et al. describeGene alterations are important, but proteins do the work. Wu et al. 
applied And finally, stop grading FL by eye, use FLAGS !"} {"text": "Orthodontic patients show high prevalence of tooth-size discrepancy. This study investigates the possible association between arch form, clinically significant tooth-size discrepancy, and sagittal molar relationship.Pretreatment orthodontic casts of 230 Saudi patients were classified into one of three arch form types using digitally scanned images of the mandibular arches. Bolton ratio was calculated, sagittal molar relationship was defined according to Angle classification, and correlations were analyzed using ANOVA, chi-square, and t-tests.No single arch form was significantly more common than the others. Furthermore, no association was observed between the presence of significant Bolton discrepancy and the sagittal molar relationship or arch form. Overall Bolton discrepancy is significantly more prevalent in males.Arch form in a Saudi patient group is independent of gender, sagittal molar relationship, and Bolton discrepancy. Orthodontic diagnosis and treatment planning require properly trimmed study casts in order to analyze dental relationships. One of these measurements is tooth-size discrepancy, which is defined as disproportionate sizing of opposing teeth . Bolton Many investigators evaluated the effect of tooth-size discrepancy on occlusion among different malocclusion groups, sexes, and ethnicities. Nie and Lin showed that tooth-size discrepancy was highly prevalent in Class III and uncommon in Class II . Araujo et al. found that male crown measurements are slightly larger and show higher variability than female measurements, which in turn demonstrates differences in tooth-size discrepancy (TSD) between sexes [et al. showed significant sex differences for overall ratio among normal occlusion subjects [et al. [et al. [et al. [Lavelle found that overall and anterior ratios are higher among males than females, regardless of race . Santoroen sexes . Uysal esubjects . Howeversubjects , Araujo subjects , Akyalci [et al. , Basaran [et al. , Nie and [et al. , Al-Tami [et al. , and End [et al. reportedet al. found that African American subjects had higher prevalence of clinically significant anterior tooth-size discrepancies than did Caucasians and Hispanics; and discrepancies among Hispanic patients were more likely due to mandibular anterior excess [et al. reported that Bolton ratios apply to white women only but are not applicable to white men, blacks, or Hispanics [Ethnicity is a factor in tooth-size ratios. Individuals of African ethnic background have been reported to have larger teeth than Caucasian individuals . Dominicr excess . The matr excess . Howeverispanics . Other sispanics , 17. Howet al. investigated the possibility of an ideal orthodontic arch form that might be identified for treated and untreated individuals, but found no a specific arch form [et al. found five predominant mandibular dental arch forms in their sample of French individuals with normal occlusion [et al. compared morphological difference between Caucasian and Japanese mandibular arches and concluded that no single arch form is specific to any Angle classification or ethnic group [et al., Gafni et al., and Bayome et al. followed the method prescribed by Nojima et al. to determine the arch forms in different populations [et al. 
evaluated longitudinal arch width and form and concluded that maxillary arch forms were mostly tapered, and that mandibular arches were tapered and narrow-tapered [Preformed archwires are commonly used in orthodontic practice . Severalrch form . Raberincclusion . Nojima ic group . Kook etulations \u201325. Tane-tapered .et al. identified 23 mandibular arch forms in a Brazilian group and concluded that a single arch form cannot represent the normal dental arch [Trivino tal arch .et al. found that preformed archwires were significantly narrower than normal dental arches [et al. developed a method to classify dental arch forms to ensure both goodness of fit and pragmatic clinical application [et al. concluded that smaller teeth were associated with \u201cwide\u201d or \u201cpointed\u201d maxillary arch forms and \u201cflat\u201d mandibular arch forms [Oda l arches . Subjectl arches . Recentllication . In an ach forms .Few studies have explored the predominant arch forms and the prevalence of Bolton tooth-size discrepancy among Saudi patients. Thus, this study examines the arch form distribution in a sample of Saudi orthodontic patients, to evaluate the percentage of patients who present with a significant tooth-size discrepancy, and to investigate the possible association between arch form, clinically significant tooth-size discrepancy, and sagittal molar relationship.All available pretreatment orthodontic records of patients who attended the orthodontic clinics at the College of Dentistry, King Saud University, and a private orthodontic clinic in Riyadh, Saudi Arabia, were reviewed, and orthodontic casts from 230 patients matching the following selection criteria were included: Good-quality pretreatment study casts; fully erupted permanent teeth at least from first molar to first molar; absence of tooth crown size alteration ; no history of trauma or orthodontic treatment; and Saudi ethnicity. Ethical approval was obtained from the College of Dentistry Research Center (Registration No. NF 2271).Molar relationship was assessed according to Angle\u2019s definition. Molar Class I was defined as occurring where the mesiobuccal cusp of the upper first molar occluded with the mesiobuccal groove of the lower first molar, or within less than half a cusp width anteriorly or posteriorly. Mismatched right and left molar classifications were considered \u201casymmetric\u201d.Mandibular models were digitally scanned and a ruler was used for size calibration. The most facial aspect of 13 proximal contact areas around the arch was digitized using AutoCAD software . The clinical bracket point for each tooth was located facially via a line perpendicular to that connecting the mesial and distal contact points of each tooth , 23, 32.A digital caliper was used to measure the mesiodistal crown diameters of all teeth (from first molar to first molar) to the nearest 0.01\u00a0mm . The widError of method: for intra-examiner reliability, measurements were compared via coefficient of reliability and kappa statistics. Within a two-week period, the mesiodistal widths of 10 pairs of casts were re-measured by the same investigator, and a high coefficient of reliability was observed (r\u2009=\u20090.936). 
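From mesiodistal widths such as those measured above, the Bolton ratios are obtained as the mandibular sum divided by the corresponding maxillary sum, multiplied by 100 (six anterior teeth for the anterior ratio, first molar to first molar for the overall ratio). The widths in the sketch below are illustrative values, not study measurements, and the flagging thresholds are the ±1 SD ranges reported later in the results.

# Bolton ratio sketch. The overall ratio uses the 12 mandibular vs 12 maxillary
# teeth (first molar to first molar); the anterior ratio uses the 6 anterior
# teeth. Widths are illustrative mesiodistal measurements in mm, not study data.

def bolton_ratio(mandibular_widths, maxillary_widths):
    return 100.0 * sum(mandibular_widths) / sum(maxillary_widths)

# hypothetical mesiodistal widths (mm), canine to canine
mand_anterior = [6.5, 5.9, 5.3, 5.3, 5.9, 6.5]
max_anterior  = [7.6, 6.5, 8.5, 8.5, 6.5, 7.6]
# hypothetical premolar and first-molar widths (mm)
mand_posterior = [7.0, 7.2, 11.0, 11.0, 7.2, 7.0]
max_posterior  = [7.0, 6.8, 10.3, 10.3, 6.8, 7.0]

anterior = bolton_ratio(mand_anterior, max_anterior)
overall = bolton_ratio(mand_anterior + mand_posterior, max_anterior + max_posterior)

# flag a clinically relevant discrepancy using the +/- 1 SD ranges quoted in the results
print(f"anterior ratio {anterior:.2f} -> outside 75.55-78.85: {not 75.55 <= anterior <= 78.85}")
print(f"overall ratio  {overall:.2f} -> outside 89.39-93.21: {not 89.39 <= overall <= 93.21}")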
Arch forms were re-determined by the same investigator for 19 lower casts and perfect agreement was observed between the first and second evaluations (kappa score of 1).Descriptive analysis including the prevalence of Bolton discrepancy and distribution of arch form types among the sample.t-test, and ANOVA were used to evaluate the presence of an association.Chi-square, Data were evaluated using PASW\u00ae Statistics 18 , and the level of significance was set at p\u2009<\u20090.05. The following tests were used:The demographic characteristics of the sample group and the distribution of sagittal molar classes and arch forms are shown in Table\u00a0Approximately half 49.1\u00a0%) of the sample showed an anterior Bolton tooth-size discrepancy i.e. exceeding \u00b11 standard deviation (SD) (<75.55 or >78.85), while only 39.1\u00a0% showed an overall Bolton discrepancy (<89.39 or >93.21) . No assot-test results showed a significant difference in the prevalence of overall Bolton discrepancy between males (mean\u2009=\u200992.306) and females (mean\u2009=\u200991.545) (p\u2009=\u20090.013). No significant difference was observed in the anterior ratio between males (mean\u2009=\u200977.883) and females (mean\u2009=\u200977.329) (p\u2009=\u20090.08).ANOVA showed no significant difference in anterior ratio or overall ratio by sagittal molar class or arch form for the study sample as a whole. However, The distribution of the cases based on the amount of tooth-size correction required to balance the anterior Bolton discrepancy in the maxillary teeth (reduction or addition) is shown in Fig.\u00a0Previous studies have reported significant differences in head form and arch form between various ethnic groups , 36\u201338. et al., who reported little difference in arch forms between malocclusions groups [Our Saudi group showed a significantly different distribution of arch forms compared with Egyptian, Caucasian , Israelis groups . Howevers groups , 23. Egys groups .Table 4TSimilarly to previous studies, our group showed no sex differences in arch form , 39\u201342. et al. (28\u00a0%), and American population reported by Freeman et al. (30.6\u00a0%) [A large proportion of orthodontic patients present with tooth-size discrepancy. Those who have anterior or overall ratios beyond 2 SDs are considered to have a significant Bolton discrepancy. In the Saudi sample, 17.4\u00a0% of patients had significant anterior tooth-size discrepancy. This figure matches the findings for a British orthodontic population (17.4\u00a0%) and a Croatian population (16.28\u00a0%) but was lower than the prevalence in a Turkish population reported by Uysal and Sari (21.3\u00a0%), Dominican American population reported by Santoro (30.6\u00a0%) \u201345. The (30.6\u00a0%) .et al. in an Irish population [et al. [et al. in Saudi population and Croatian populations respectively [In the present study, sagittal molar classification was not related to the distribution of the tooth-size discrepancy groups. This was in agreement with the findings of Uysal and Sari, who reported no difference in tooth-size ratios between malocclusion groups in a Turkish population, the findings of Crosby and Alexander in an American population, and O\u2019Mahony pulation , 47, 48. [et al. , 5. Thisectively , 45. In Most prior studies reported no significant differences in anterior or overall tooth-size ratio between males and females , 43, 48.Arch form types were not related to the presence of tooth-size discrepancy. 
Therefore, arch form is likely determined by patient-specific genetic and environmental factors, and orthodontists need to recognize the uniqueness of each case in their treatment planning.In Saudis, there were more ovoid cases forms than tapered and square but no single arch form was significantly more common.Arch form types were not associated with gender, sagittal molar relationship, or the presence of tooth-size discrepancy.Sexual dimorphism was evident in the prevalence of overall Bolton tooth-size discrepancy.Based on the results of the present study, the following conclusions can be drawn:"} {"text": "Trypanosoma cruzi, member of Trypanosomatidae family, Kinetoplastida order. Chagas disease is recognized by the World Health Organization as one of the main neglected tropical diseases, affecting about 8\u201315 million individuals in 18 countries in Central and South America, where it is endemic, as well as countries in North America and Europe (Chagas disease was discovered by Carlos Chagas in Brazil in 1909 (d Europe . At leasd Europe .Trypanosoma cruzi presents a complex life cycle both in the vertebrate and invertebrate hosts, involving dramatic changes in cell shape (ll shape . Its lifT. cruzi to penetrate into host cells. In the first article, de Souza and de Carvalho (T. cruzi always penetrates the host cell through an endocytic process with the formation of a transient parasitophorous vacuole. The second article by Barrias et al. (T. cruzi with cardiomyocytes, an important host cell, because in vivo many of the parasite strains have a tropism for the heart. In the fourth article, Tonelli et al. (T. cruzi are represented by trans-sialidases. These proteins are involved in parasite\u2013host cell recognition, infectivity, and survival. The sixth and seventh articles by Nde et al. (T. cruzi with the host cells. Infective trypomastigotes up-regulate the expression of laminin-gamma\u22121 and thrombospondin to facilitate recruitment of parasites to enhance cell infection. The extracellular matrix interactome network seems to be regulated by T. cruzi and its gp 83 ligand. The eighth article by Maeda et al. (T. cruzi trypomastigotes elicit an inflammatory edema that stimulates protective type-1 effector cells through the activation of the kallikrein\u2013kinin system, providing a framework to investigate the intertwined proteolytic circuits that couple the anti-parasite immunity to inflammation and fibrosis.This thematic issue deals with the ability of Carvalho make a rs et al. reviews s et al. analyze i et al. call thei et al. point oue et al. and Ferre et al. respectia et al. reviews a et al. initiallThe author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Vermeulen et al. 2014 found a statistically significant dose\u2013response association and described elevated lung cancer risks even at very low exposures.Vermeulen et al. 2014 published a meta-regression analysis of three relevant epidemiological US studies that estimated the association between occupational diesel engine exhaust (DEE) exposure and lung cancer mortality. The DEE exposure was measured as cumulative exposure to estimated respirable elemental carbon in \u03bcg/mWe performed an extended re-analysis using different modelling approaches and explored the impact of varying input data .P\u2009=\u20090.6). 
A (non-significant) threshold estimate for the cumulative DEE exposure was found at 150\u00a0\u03bcg/m3-years when extending the meta-analyses of the three studies by hockey-stick regression modelling . The data used by Vermeulen and colleagues led to the highest relative risk estimate across all sensitivity analyses performed. The lowest relative risk estimate was found after exclusion of the explorative study by Steenland et al. 1998 in a meta-regression analysis of Garshick et al. 2012 (modified), Silverman et al. 2012 and M\u00f6hner et al. 2013. The meta-coefficient was estimated to be about 10\u201320\u00a0% of the main effect estimate in Vermeulen et al. 2014 in this analysis.We reproduced the individual and main meta-analytical results of Vermeulen et al. 2014. However, our analysis demonstrated a heterogeneity of the baseline relative risk levels between the three studies. This heterogeneity was reduced after the coefficients of Garshick et al. 2012 were modified while the dose coefficient dropped by an order of magnitude for this study and was far from being significant (The findings of Vermeulen et al. 2014 should not be used without reservations in any risk assessments. This is particularly true for the low end of the exposure scale. The authors reported that some of them worked as members of the IARC Working Group , which oup (see aimed to3 for DEEE (diesel engine exhaust emissions), measured as elemental carbon. This recommendation is not health-based but reflects mainly socio-economic considerations. Cherrie et al. 3P\u2009=\u20090.63Another aspect is important for correct evaluation of the Garshick study . Thus, the US mining study is particularly relevant for the meta-analysis. Critical comments and a list of open questions were published [The highest exposure value included in the meta-study came from the DEMS case\u2013control study by Silverman et al. : 1036\u00a0\u03bcgublished on the Dublished , 12, to ublished . Howeverublished and K. Cublished ). It is ublished team wasublished and publublished , 30.In a Letter to the Editor on Vermeulen et al. , Crump reportedMoolgavkar et al. re-analy1) Time-dependent factors are superimposed, so that model coefficients should not be estimated without considering the interaction with age. Accordingly, stating an isolated risk coefficient \u2013 as described in Attfield et al. \u2013 does n2) One mine (limestone mine) is an outlier in the data. The DEE exposures in this mine are the lowest, but the risk estimates are the highest of all the mines. A Cox regression analysis of the data shows tP-value of P\u2009=\u20090.001 . The analyses by Crump et al. [P\u2009\u2265\u20090.65 or the trend is negative. The authors wrote: \u201cMost importantly, we used the radon concentration data for the DEMS cohort provided by the DEMS investigators. When adjustment was made for radon, a known human lung carcinogen, the effect of REC on the association with lung cancer mortality was confined to only the three DEMS REC estimates. Most notably, there was no evidence of an association with any of the six alternate REC estimates, including REC6. When T2 trend tests were conducted, based on the use of individual worker REC estimates, the results were less statistically significant and in many cases the trends were negative. Indeed, for miners who always worked underground, five of the six REC metrics exhibited negative trends.\u201dCrump et al. re-analyp et al. 
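The hockey-stick regression mentioned above models ln(relative risk) as flat below a breakpoint in cumulative REC exposure and linear beyond it. The sketch below profiles the breakpoint over a grid and fits the slope by unweighted least squares on simulated study points; the actual meta-regression additionally weights study-specific estimates by their precision.

# Hockey-stick (threshold) dose-response sketch: ln(RR) is 0 up to a breakpoint
# t and increases linearly above it. The breakpoint is profiled over a grid and
# the slope fitted by least squares. Exposure/ln(RR) points are simulated and
# unweighted; a real meta-regression would use inverse-variance weights.
import numpy as np

def fit_hockey_stick(x, y, grid):
    best = (np.inf, None, None)                       # (sse, threshold, slope)
    for t in grid:
        z = np.clip(x - t, 0, None)                   # 0 below threshold, linear above
        slope = (z @ y) / (z @ z) if z.any() else 0.0
        sse = ((y - slope * z) ** 2).sum()
        if sse < best[0]:
            best = (sse, t, slope)
    return best

rng = np.random.default_rng(5)
rec = np.array([5, 20, 50, 100, 150, 300, 500, 700, 1000], dtype=float)   # µg/m3-years
log_rr = np.clip(rec - 150, 0, None) * 0.0012 + rng.normal(0, 0.05, rec.size)

sse, threshold, slope = fit_hockey_stick(rec, log_rr, grid=np.linspace(0, 600, 121))
print(f"estimated breakpoint ≈ {threshold:.0f} µg/m3-years, slope above it ≈ {slope:.4f} per unit")

Profiling the breakpoint in this way also makes visible how flat the fit can be around the estimated threshold, consistent with the wide uncertainty of the non-significant estimate quoted above.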
In Crump et al.'s re-analysis, the associations were thus considerably less certain than in the original data analysis (P < 0.000001). Neither the paper by Crump et al. nor the individual data for this research project were available. Threshold analyses for dust should focus on a concentration threshold. The limitations mentioned affect both the analysis by Vermeulen et al. and our own. The meta-regressions performed here show significant variations in the results, depending on the study data incorporated and the analytical methods applied. The data used by Vermeulen and colleagues led to the highest risk estimates in our meta-analysis. After excluding the explorative study by Steenland et al., the lowest risk estimates were obtained. Toxicological results of the current ACES study, based on controlled long-term experiments in rats exposed to new technology diesel exhaust (NTDE), did not show tumour growth or precancerous conditions. Regardless of the fundamental restrictions mentioned above, the present re-analysis also revealed that the results of the meta-regression study by Vermeulen et al. should not be used without reservations in any risk assessments. The present re-analysis largely succeeded in reproducing the individual cohort results and main meta-findings of Vermeulen et al. Therefore, the results of their meta-regression analysis should not be applied without reservations, particularly at the low end of the exposure scale.
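The Cox regression and radon-adjustment issues raised in the critique can be illustrated with a small simulated example in which the same proportional-hazards model is fitted with and without the correlated co-exposure. The sketch assumes the lifelines package; exposure distributions, effect sizes and follow-up are invented and are not DEMS data.

# Sketch of a Cox proportional-hazards model with a co-exposure adjustment, of
# the kind discussed above (REC with and without radon in the model). Uses the
# lifelines package on simulated person-level data; all numbers are illustrative.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 2000
rec = rng.gamma(2.0, 100.0, n)                     # cumulative REC exposure (µg/m3-years)
radon = 0.004 * rec + rng.gamma(2.0, 0.5, n)       # correlated co-exposure
hazard = 0.004 * np.exp(0.0 * rec + 0.25 * radon)  # in this simulation only radon is causal
time_to_event = rng.exponential(1.0 / hazard)
follow_up = np.minimum(time_to_event, 30.0)
event = (time_to_event <= 30.0).astype(int)

df = pd.DataFrame({"T": follow_up, "E": event, "rec": rec, "radon": radon})

unadjusted = CoxPHFitter().fit(df[["T", "E", "rec"]], duration_col="T", event_col="E")
adjusted = CoxPHFitter().fit(df, duration_col="T", event_col="E")
unadjusted.print_summary()   # REC may appear weakly associated when radon is omitted
adjusted.print_summary()     # the association attenuates once radon is in the model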
While a number of international treaties call for the implementation of EAF, there are still gaps in the underlying methodology. One aspect that has received substantial scientific attention recently is fisheries-induced evolution (FIE). Increasing evidence indicates that intensive fishing has the potential to exert strong directional selection on life-history traits, behaviour, physiology, and morphology of exploited fish. Of particular concern is that reversing evolutionary responses to fishing can be much more difficult than reversing demographic or phenotypically plastic responses. Furthermore, like climate change, multiple agents cause FIE, with effects accumulating over time. Consequently, FIE may alter the utility derived from fish stocks, which in turn can modify the monetary value living aquatic resources provide to society. Quantifying and predicting the evolutionary effects of fishing is therefore important for both ecological and economic reasons. An important reason this is not happening is the lack of an appropriate assessment framework. We therefore describe the evolutionary impact assessment (EvoIA) as a structured approach for assessing the evolutionary consequences of fishing and evaluating the predicted evolutionary outcomes of alternative management options. EvoIA can contribute to EAF by clarifying how evolution may alter stock properties and ecological relations, support the precautionary approach to fisheries management by addressing a previously overlooked source of uncertainty and risk, and thus contribute to sustainable fisheries. Maintaining a healthy ecosystem while balancing competing interests of stakeholders is one of the main goals of the EAF FAO . Althouget al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. Assessments of exploited fish stocks are often highly uncertain evolutionary change and microevolutionary change influences ecological processes , which may be impacted if FIE changes trophic interactions, size structures, or migration distances. Provisioning services benefit humans through tangible products such as fisheries yields, recreational fishing experiences, and economic rents and are likely to be modified by FIE through changes in the characteristics and demography of stocks and the dynamics of communities. Cultural services benefit humans through the values ecosystems offer for education, recreation, spiritual enrichment, and aesthetics, which may all be affected if FIE occurs.Fisheries-induced evolution: \u2018Genetic change in a population, with fishing serving as the driving force of evolution\u2019 the natural system, including the target stock, non-target species, and the surrounding ecosystem and its physical environment, (ii) the resulting ecosystem services generated by targeted fish stocks, (iii) the management system, and (iv) the socioeconomic system Fig.\u2003. Each ofet\u2003al. per capita resource availability and thus to faster individual growth and reduced age at maturation and output controls are intended to alter fishing pressure. However, several factors within the socioeconomic subsystem may shape realized fishing pressures because they influence the decisions taken by individual fishers about their fishing activities and non-consumptive use values , and arises from provisioning and cultural services and primarily arises from regulating services. 
Option value comes from the potential future use of living aquatic resources or related ecosystem components such as yet to be discovered resources with medicinal or industrial use and can arise from all ecosystem services. Non-use value comes from attributes inherent to a living aquatic resource or related ecosystem components that are not of direct or indirect use to members of society but still provide value to stakeholders now mature at younger ages and smaller sizes than in the past in the North Sea and west of Scotland are now more fecund than 30\u2003years ago quantification of the losses or gains in utility that may result from FIE and (ii) evaluation of alternative management regimes while accounting for the potential effects of FIE. The first type, illustrated in Fig.\u2003et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. The second type of EvoIA, illustrated in Fig.\u2003et\u2003al. Despite efforts to predict the direction of FIE for different kinds of selection regimes also be considered. When evaluating the causal relationships between these two groups of quantities, it is crucial to recognize that fishing parameters do not change quantitative traits directly. Instead, they alter the selection pressures operating on phenotypes and thus the expected rates of evolutionary change. When these rates are integrated over a given time period, they yield the magnitude by which the quantitative trait will change in response to the altered fishing parameters. Because selection pressures may differ over the lifetime of individuals, an assessment of the relative strength of larval, juvenile, and adult selection pressures is warranted desirability means that the utility component increases (decreases) as the considered trait value increases. Vulnerability is the sensitivity with which a change in a fishing parameter alters the rate of change in a utility component. When the absolute value of vulnerability is high, the utility component quickly changes in response to the considered change in fishing. Positive (negative) vulnerability means that the rate of change in the utility component increases (decreases) in response to an increase in the considered fishing parameter.Linear impact analyses are based on sensitivity measures. Once a sensitivity measure has been estimated, the impacts of changes in a fishing parameter are obtained simply by multiplying this measure with the magnitude of change in the causative parameter and, where the result is a rate, by multiplying it with the duration of the considered time period. If changes in several fishing parameters are considered at once, their aggregated impact is obtained by summing their individual impacts. The following four sensitivity measures Fig.\u2003 may be os Conrad . In the evolutionary vulnerability, as the sensitivity with which a change in a fishing parameter alters the rate of change in a utility component through FIE. Following the multivariate chain rule of calculus, we define this as the product of adaptability and desirability summed over all considered quantitative traits should be reduced relative to the current reference points for this stock, which ignore FIE. This is because the estimated optimal fishing mortality when FIE is ignored (\u2018static\u2019 FMSY) is well above the evolutionarily optimal fishing mortality (\u2018evolutionary\u2019 FMSY). Hence, even if the stock would be fished at the currently estimated \u2018static\u2019 FMSY, this mortality would still be too high and decrease the future yield. 
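As a compact restatement of the linear impact analysis and the evolutionary vulnerability defined above (the symbols below are introduced here for illustration and are not taken from the original text): let f be a fishing parameter, z_1, ..., z_k the quantitative traits under selection, and U a utility component. Taking adaptability as a_i, the sensitivity of the rate of evolutionary change in trait z_i to a change in f (its full definition is partly garbled in this excerpt), and desirability as d_i, the sensitivity of U to trait z_i, the multivariate chain rule gives the evolutionary vulnerability and the linear impact of a change Delta f sustained over a period T as
\[ v_{\mathrm{evo}} = \sum_{i=1}^{k} a_i\, d_i, \qquad \Delta U \approx v_{\mathrm{evo}}\,\Delta f\,T, \]
with the aggregated impact of several simultaneously changed fishing parameters obtained by summing such terms over the parameters, as described in the text.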
The currently advised reference points can therefore not be considered sustainable.The EvoIA of North Sea plaice by Mollet et\u2003al. is among. et\u2003al. . Their met\u2003al. analysing environmental variables, (ii) estimating selection pressures, and (iii) examining multiple stocks. The three paragraphs below outline these approaches in turn.et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. Some methods control for environmental variance in life-history traits by including relevant additional explanatory variables in the fitted statistical models, thus aiming to remove the effects of phenotypic plasticity from genetic trends. While the removal of all other known effects will never be possible, residual year or cohort effects may indicate evolutionary change. For instance, the estimation of probabilistic maturation reaction norms (PMRNs) was developed to disentangle genetic and environmentally induced changes in age and size at maturation, by accounting for growth variation (Dieckmann and Heino et\u2003al. et\u2003al. et\u2003al. et\u2003al. (et\u2003al. et\u2003al. et\u2003al. et\u2003al. Although the statistical methods mentioned above can be applied using data commonly available from harvested fish, it remains impossible to separate genetic responses from all potential plastic responses in life-history traits for most wild fish stocks (Dieckmann and Heino . et\u2003al. measured. et\u2003al. or size Regardless of the nature of the phenotypic trends in commercial fish stocks, an additional challenge in EvoIA is to link the observed trends to fishing pressure. This is directly related to the general problem of inferring causation from correlation in insufficiently controlled settings. One way to alleviate \u2013 albeit not remove \u2013 this problem is to include multiple fish stocks in a single analysis. For example, one can test whether fishing pressure is correlated with rates of trait changes across multiple fish stocks, as suggested by Sharpe and Hendry . Howeveret\u2003al. et\u2003al. An additional complication arises when fisheries are targeting mixed assemblages of fish from several different evolutionary units, such as in the migrating Atlantic herring (Ruzzante et\u2003al. et\u2003al. EvoIAs typically require examination of the demography and evolution of populations and, ideally, ecological communities Fig.\u2003. We can et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. Models used for EvoIA can also be classified according to the variables structuring the demographic component of stock dynamics. In the context of modelling FIE, researchers have used age-structured models (e.g. Law and Grey et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. A further distinction among process-based models arises from methods used for quantifying the effects of selection, and thus for describing the evolutionary component of stock dynamics Fig.\u2003. In modeet\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. Depending on the objectives of a specific EvoIA, a population's demographic and evolutionary dynamics may best be described by different combinations of the alternative model choices described above. Nevertheless, one type of models, coined \u2018eco-genetic\u2019 models (Dunlop et\u2003al. 
EvoIAs need to evaluate the socioeconomic implications of the impacts of fishing on ecosystem services and utility values. Usually, this can be achieved by coupling a biological model of a stock to a socioeconomic model describing the utility components stakeholders derive from that stock. The complexity of the latter models may range from relatively simple, focusing on a small set of readily quantifiable utility components, such as yield or profit (e.g. Dankel et\u2003al. et\u2003al. (et\u2003al. (To date, most attempts to quantify changes in utility arising from fishing have included only a small subset of traditional utility components (but see Dichmont . et\u2003al. demonstr (et\u2003al. analysedet\u2003al. et\u2003al. et\u2003al. (et\u2003al. (In recognition of the potentially significant changes in utility that could result from FIE, some recent studies have attempted to quantify changes in utility brought about by demographic, plastic, and evolutionary changes (e.g. McAllister and Peterman . et\u2003al. studied (et\u2003al. showed h (et\u2003al. also spe (et\u2003al. used a m (et\u2003al. estimate (et\u2003al. found thAn additional challenge arising when assessing the corresponding socioeconomic dynamics associated with fisheries is to account for the disparity of time horizons among stakeholders. For example, fishers often focus their interests on relatively short-term developments, whereas conservation groups usually advocate an emphasis on longer-term considerations. As we have already discussed above, attempts to capture such differences in the time horizons of stakeholders often involve the use of different discount rates, which convert future costs or benefits into different net present values that reflect the interests of different stakeholders. While this approach is meant to account for the different time preferences and opportunity costs of resource users, it has been argued that using market-based discount rates for managing natural resources is inherently problematic (e.g. Arndt et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. Management-strategy evaluation is a framework for assessing and comparing the differential merits of management strategies in the face of uncertainty (Smith In the EvoIA framework, MSE methods can be used either for relatively simple tasks, such as examining whether a specific alternative management strategy should be adopted instead of a currently applied strategy, or for more complex tasks, such as selecting an optimal management strategy by evaluating a continuum of possible management options according to a given global utility function. MSE could thus offer a possible platform for embedding EvoIA in current practices for assessment and management by drawing on existing operating models and by extending these as necessary to cover the relevant ecological, evolutionary, and socioeconomic components. A particular appeal of interfacing EvoIA with MSE is the explicit treatment of uncertainty in MSE. Sources of uncertainty include observation error limiting the accuracy of monitoring efforts, parametric and structural uncertainty associated with operating models, process uncertainty resulting from fluctuations in the natural and socioeconomic subsystems, and implementation uncertainty involved in adopting and enforcing management measures. 
For example, uncertainty about estimated selection differentials or selection responses could be accommodated relatively easily by considering these quantities in terms of their distributions, while qualitatively different predictions about evolutionary dynamics could be treated as alternative hypotheses about the operating model.et\u2003al. et\u2003al. et\u2003al. (Overexploited and collapsed fish stocks, poor recovery after fishing ceases, and altered interspecific interactions indicate that fisheries science and management are not accounting for all relevant factors that influence the dynamics of aquatic ecosystems (Francis . et\u2003al. , outliniet\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. The majority of methods highlighted in this paper are already in place. Yet, most of these methods have been developed in isolation and have been used for disparate purposes. In principle, these methods can be used to investigate any kind of environmental impact on marine systems, but we have here focused solely on the impacts of exploitation. EvoIA provides a framework for combining these methods towards the common purpose of assessing impacts of FIE on the utility of living aquatic resources. Nevertheless, it goes without saying that a continuous development of new methods will further strengthen the EvoIA approach. First, in addition to PMRNs (Dieckmann and Heino et\u2003al. et\u2003al. A possible application of EvoIA concerns the determination of reference points for fisheries management in a way that accounts for FIE (Hutchings et\u2003al. et\u2003al. et\u2003al. et\u2003al. et\u2003al. In many cases, fishing may be assumed to exert the main selection pressure on a fish stock (Heino et\u2003al. et\u2003al. et\u2003al. et\u2003al. Great complexity characterizes the possible impacts of FIE. In some cases, these impacts are desirable, such as when a declining age at maturation increases a stock's resilience to high fishing pressure (Heino et\u2003al. et\u2003al. et\u2003al. The overlap between EvoIA and EAF-based management, in terms of goals and methods, is substantial (Francis et al. (Undoubtedly, the EvoIA approach outlined here is highly complex and a full-scale EvoIA will be a challenging task. Beyond accounting for FIE in the estimates of demographics and sustainability, the effective incorporation into fisheries management will largely depend on the extent to which the various components proposed are taken up by fishery managers. Furthermore, because of the many building blocks \u2013 each with many parameters of which many are highly uncertain and inherently difficult to estimate \u2013 it can be easy to dismiss this approach as a purely academic exercise without practical value. However, the complicated characteristics of ecological, evolutionary and socio-economic processes do not lend themselves well to simplified analyses. Thus, the EAF mandates that the scientific basis for management decision rely on analyses that are as complicated as necessary to incorporate all relevant factors. Moreover, the fact that we, in many cases, may have to rely on models including a high level of uncertainty should in any case not be an excuse for inaction. As a start, progressively building and extending assessment models by including evolutionary thinking into practices will be more realistic than an immediate implementation of the whole framework. 
However, because there is a strong need for immediate operational advice, we have, in Table\u2003et al. for geneImproved assessment of the evolutionary impacts of fishing can lead to better management practices and more accurate predictions of stock dynamics and ecosystem effects. Failure to investigate the presence of, and account for, FIE in stock assessments, management advice, and policy making may exacerbate the negative consequences of phenotypic changes already commonly observed across the fish stocks we aim to sustain."} {"text": "There have been a number of studies evaluating the association of aneuploidy serum markers with adverse pregnancy outcome. More recently, the development of potential treatments for these adverse outcomes as well as the introduction of cell-free fetal DNA (cffDNA) screening for aneuploidy necessitates a re-evaluation of the benefit of serum markers in the identification of adverse outcomes. Analysis of the literature indicates that the serum markers tend to perform better in identifying pregnancies at risk for the more severe but less frequent form of individual pregnancy complications rather than the more frequent but milder forms of the condition. As a result, studies which evaluate the association of biomarkers with a broad definition of a given condition may underestimate the ability of such markers to identify pregnancies that are destined to develop the more severe form of the condition. Consideration of general population screening using cffDNA solely must be weighed against the fact that traditional screening using serum markers enables detection of severe pregnancy complications, not detectable with cffDNA, of which many may be amenable to treatment options. Prenatal screening for birth defects was initially implemented using a single biochemical marker to identify a single condition in the second trimester of pregnancy ,2. Over More recently, there have been new developments that need to be considered when evaluating aneuploidy screening markers for other adverse outcomes. Evaluation of cell-free fetal DNA (cffDNA) in maternal blood offers the opportunity to significantly improve the detection of Down syndrome while substantially reducing false positive rates ,26,27,28et al. [et al. [Initial meta-analysis of 31 randomized trials indicated that aspirin had only a small beneficial effect on reducing the incidence of preeclampsia as a whole . Howeveret al. and Robe [et al. ,40 showe [et al. . In addi [et al. . Other s [et al. ,43,44,45 [et al. ,47,48. F [et al. ,50,51. BSeveral studies have evaluated first trimester free \u03b2 human chorionic gonadotropin (free hCG\u03b2) and pregnancy associated plasma protein A (PAPP-A) as markers for preeclampsia ,54,55,56et al. [et al. [Morris et al. performe [et al. found a [et al. also shoAmong preeclampsia pregnancies, approximately 70% of perinatal deaths and 60% of cases of severe neonatal morbidity occur in early onset (<34 weeks) preeclampsia even though these cases represent only about 10% of all preeclampsia cases . As a reet al. [et al. [et al. [Olsen et al. found thet al. ,60,61. K [et al. found th [et al. developeRecent data has indicated that a direct screen including maternal characteristics, PAPP-A, placental growth factor (PlGF), uterine artery doppler pulsatility index and mean arterial pressure can identify over 90% of early-onset preeclampsia pregnancies in the first trimester ,64,65. 
CUntil recently, the terminology used to describe intrauterine growth restriction (IUGR) has been inconsistent and confusing and the et al. [In general, studies of serum screening markers have used birth weight rather than estimated fetal weight to describe IUGR. There appears to be a tendency for extreme analyte values to be associated with more extreme low birth weight. D\u2019antonio et al. , recentlet al. ,71,72,73et al. ,75 including 2.8% which were early preterm (<34 weeks) . IdentifSeveral studies have evaluated the association of PAPP-A with preterm birth. et al. [Dugoff et al. evaluateet al. [et al. estimated the likelihood ratio for birth before 32 weeks for cervix length <1 cm, 1\u20132 cm, 2\u20133 cm, 3\u20134 cm, 4\u20135 cm, 5\u20136 cm and 6\u20137 cm to be 51.52, 2.66, 0.71, 0.48, 0.24, 0.04 and 0.01, respectively [et al. [et al. determined that the risk of preterm birth in pregnancies with negative fibronectin, large cervical length and no history of preterm birth is 1% compared to 64% in pregnancies with positive fibronectin, small cervical length and history of preterm birth [Data on the incidence of early preterm birth and short cervix can also be converted to likelihood ratios. Using the summarized data from Werner et al. , the likectively . These r [et al. . Iams etrm birth ,83. AlthFirst trimester screening typically takes place beginning at 11 weeks. However, in some programs the blood sample for biochemistry testing is drawn prior to the ultrasound and may be collected as early as 9 weeks. As a result, data on fetal loss can be stratified into 3 timeframes; loss prior to nuchal translucency (NT) ultrasound, loss prior to 24 weeks gestation and loss after 24 weeks gestation.et al. [et al. [et al. [et al. [Cuckle et al. and Kran [et al. evaluate [et al. examined [et al. found me [et al. .et al. [et al. [et al. [A number of studies ,86,87,88 [et al. also eva [et al. found thet al. [The performance of screening for late fetal loss does not appear to be as effective with observed detection rates between 3%\u201320% at a 5% false positive rate ,88,90. Tet al. , who repPlacenta accreta is a life-threatening obstetric complication resulting from abnormal placental implantation. The risk of placenta accreta increases significantly with placenta previa and the number of previous cesarean deliveries . Currentet al. found that high levels of PAPP-A were associated with increased risk of placenta accreta [et al. [et al. [Desai accreta and more [et al. also sho [et al. found thet al. [et al. [et al. [et al. [In the second trimester, Zelop et al. and Kupf [et al. showed i [et al. and Dreu [et al. found thFurther study into the development of algorithms encompassing multiple marker cross-trimester protocols, repeat marker testing, prior history of cesarean section and existence of previa are warranted.The concept of prenatal screening began with the use of AFP for the detection of open neural tube defects (ONTDs) and evolved so that the main focus of serum marker screening is now chromosomal abnormalities ,9,10,11.A second trimester anatomy scan can be effective in identifying neural tube defects in specialized centers focused on high risk pregnancies ; howeverThe serum screen for open neural tube defects is straightforward with labs using either a 2.0 MoM or 2.5 MoM cut-off. The detection rate of open spina bifida is approximately 10 percentage points greater with a 2 MoM rather than a 2.5 MoM cutoff ,105. 
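As background for the likelihood ratios quoted above (this is generic screening arithmetic, not figures from any one of the cited studies): a marker result observed with probability p_aff in affected pregnancies and p_unaff in unaffected pregnancies carries a likelihood ratio that updates the prior risk through
\[ \mathrm{LR} = \frac{p_{\mathrm{aff}}}{p_{\mathrm{unaff}}}, \qquad \mathrm{odds}_{\mathrm{post}} = \mathrm{odds}_{\mathrm{prior}} \times \mathrm{LR}, \qquad \mathrm{risk} = \frac{\mathrm{odds}_{\mathrm{post}}}{1 + \mathrm{odds}_{\mathrm{post}}}. \]
For instance, a hypothetical prior risk of 1 in 100 (odds 1:99) combined with an LR of 50 gives posterior odds of 50:99, i.e. a risk of roughly 34%.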
For adverse outcomes such as IUGR, preeclampsia and preterm birth, the clinical presentation may vary widely with respect to maternal/fetal morbidity and mortality. The more severe forms of these adverse outcomes have significantly higher rates of severe morbidity and mortality. The information presented in this review indicates that there is improved performance of serum markers with respect to the more severe form of various pregnancy complications. Moreover, the most severe cases tend to occur less frequently than the milder forms of these conditions. As a result, published performance data among different studies are often inconsistent due to discrepancies in a number of factors such as the definition and/or description of the severity of the condition, the marker cutoffs used and the maternal characteristics incorporated into risk algorithms. Optimally, a risk-based approach similar to that used in aneuploidy screening would be used for each disease state, in which a consistent definition of the disease state, continuous multiple-marker likelihood estimates and consistent estimates of a priori risks based on maternal characteristics were incorporated. Additionally, refinements to the risk based on follow-up assessments after the completion of serum screening could further improve the process. Clinicians are faced with a difficult dilemma in which they must balance the potential benefits of non-invasive genetic screening while not losing sight of the potential pitfalls in missing other adverse outcomes, especially since there now appears to be opportunity to improve those outcomes with effective treatments. Aspirin shows great promise if administered prior to 16 weeks in reducing the risk of preeclampsia, IUGR, preterm birth and fetal death. Some of the protocols described above include second trimester markers and would require completion by 16 weeks to maximize the benefits of aspirin administration. However, it is likely that the effectiveness of aspirin is not based on a simple dichotomy of <16 weeks and ≥16 weeks, so aspirin may still be effective at 17–18 weeks even if less so than at 16 weeks. More research is needed to evaluate the association between effectiveness and time of initiation of aspirin treatment. Moving forward, the goal should be to develop and implement high-performance direct screening protocols for specifically defined adverse outcomes. When evaluating the adoption of cffDNA testing for aneuploidy, clinicians should ensure that they continue to utilize existing screening protocols or new direct screens to identify pregnancies at risk for adverse outcomes. Otherwise, there may potentially be an increase in the overall morbidity and mortality in the population."} {"text": "In the editorial section, Veronica Magar (743) argues that the sustainable development goals need to respond to a range of inequalities, including gender. Taye Balcha et al. (742) draw on the Ethiopian experience to stress the importance of country ownership and local innovations to achieve these goals. In the news section, Atasa Moceituba and Monique Tsang report on the health effects of global warming in Fiji (746–747). Christiana Figueres talks to Andréia Azevedo Soares about climate change mitigation ahead of the 21st session of the Conference of the Parties to the UN Framework Convention on Climate Change (748–749). Sheik Mohammed Shariful Islam & Reshman Tabassum (806–809) describe the implementation of information and communication technologies for health.
Val\u00e9rie R Louis et al. (750\u2013758) study the effects of a first national insecticide-treated bed-net campaign. Hsien-Ho Lin et al. (790\u2013798) model the potential impact of control measures. Christopher Fitzpatrick et al. (775\u2013784) examine the cost-effectiveness of a programme for drug-resistant tuberculosis. Linda Bartlett et al. (759\u2013767) observe compliance with guidelines for the active management of the third stage of labour. Ramnath Subbaraman & Sharmila L Murthy (815\u2013816) describe obstacles to water access in Mumbai. Aisha NZ Dasgupta et al. (768\u2013774) test family planning cards as a method to evaluate contraceptive use. Allison E Gocotano et al. (810\u2013814) describe how security measures increased people's exposure to cold weather during the visit of Pope Francis. Kai Ruggeri et al. (785\u2013789) argue that better evidence is needed for medical travel policies. Danny J Edwards et al. (799\u2013805) analyse the options for equitable access to new medicines."} {"text": "Objectives: To quantify incisor decompensation in preparation for orthognathic surgery.Study design: Pre-treatment and pre-surgery lateral cephalograms for 86 patients who had combined orthodontic and orthognathic treatment were digitised using OPAL 2.1 [http://www.opalimage.co.uk]. To assess intra-observer reproducibility, 25 images were re-digitised one month later. Random and systematic error were assessed using the Dahlberg formula and a two-sample t-test, respectively. Differences in the proportions of cases where the maxillary (1100 +/- 60) or mandibular (900 +/- 60) incisors were fully decomensated were assessed using a Chi-square test (p<0.05). Mann-Whitney U tests were used to identify if there were any differences in the amount of net decompensation for maxillary and mandibular incisors between the Class II combined and Class III groups (p<0.05). Results: Random and systematic error were less than 0.5 degrees and p<0.05, respectively. A greater proportion of cases had decompensated mandibular incisors (80%) than maxillary incisors (62%) and this difference was statistically significant (p=0.029). The amount of maxillary incisor decompensation in the Class II and Class III groups did not statistically differ (p=0.45) whereas the mandibular incisors in the Class III group underwent statistically significantly greater decompensation (p=0.02). Conclusions: Mandibular incisors were decompensated for a greater proportion of cases than maxillary incisors in preparation for orthognathic surgery. There was no difference in the amount of maxillary incisor decompensation between Class II and Class III cases. There was a greater net decompensation for mandibular incisors in Class III cases when compared to Class II cases. Key words:Decompensation, orthognathic, pre-surgical orthodontics, surgical-orthodontic. Approximately 4% of the population have dentofacial deformity requiring combined surgical-orthodontic treatment and thesPre-surgical orthodontic treatment consists of three concurrent aspects: arch alignment, arch co-ordination and arch decompensation . In mostet al. (et al. (et al. (et al. (Capelozza Filho et al. found de (et al. also fou (et al. examined (et al. found thet al. (Troy et al. similarlet al. found deet al. . In a Chet al. noted thHowever, none of these investigations determined the delivery of incisor decompensation in a complete cohort of patients of all malocclusion groups scheduled for orthognathic surgery in a state-funded healthcare system. 
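For reference, the Dahlberg formula cited in the study design above is conventionally written (the formula itself is not spelled out in this excerpt) as
\[ ME = \sqrt{\frac{\sum_{i=1}^{n} d_i^{2}}{2n}}, \]
where d_i is the difference between the first and second digitisation of the i-th repeated cephalogram and n is the number of double determinations; the resulting measurement error is expressed in the same units (degrees) as the digitised angles.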
This is an area that therefore requires further investigation.The aims of this study were to determine if maxillary and mandibular incisors are adequately decompensated in preparation for orthognathic surgery and to quantify any differences between the maxillary and mandibular incisors for patients presenting with Class II and Class III malocclusionOne hundred consecutive patients who underwent maxillary and/or mandibular orthognathic surgery at a university dental hospital from 1 January 2005 onwards were included. Within the cohort, the following cases were excluded .All the cases were treated by, or under the direction of a Consultant Orthodontist with the 0.022 inch-slot MBT prescription appliance . The \u2018surgical\u2019 archwire was an 0.019x0.025 inch stainless steel archwire. Where necessary, elastics and auxiliary archwires were used, particularly to retrocline / procline the maxillary / mandibular incisors in Class II and Class III cases as necessary (The pre-treatment and pre-surgery lateral cephalograms were recorded before the start of orthodontic treatment and at the end of pre-surgical orthodontic treatment, respectively. These were digitised using OPAL 2.1 lateral cephalometric system [http://www.opalimage.co.uk]. This was installed on a Lenovo R61 machine attached to a Lenovo 2-button USB optical mouse and 15.4-inch TFT active matrix monitor with 1280x800 resolution, aspect ratio 16:10, pixel pitch 0.2373 and contrast ratio 628:1 [www.lenovo.com]. Data were extracted and analysed using Microsoft Excel to determine whether the incisors were decompensated to the normal range for a Caucasian population: 1100 +/- 60 and 900 +/- 60 for the maxillary and mandibular incisors, respectively . Values p < 0.05 for the systematic error (To assess intra-observer reproducibility, 25 lateral cephalograms were digitised on two separate occasions one month apart by the same operator using the same technique in accordance with Houston . Random ic error .p<0.05]. The data were categorized into Class II (U tests were used to determine if there were any statistically significant differences between Class II combined (p<0.05].Descriptive statistics were used to summarise the whole sample. A Chi-square test was used to determine if there was a statistically significant difference in the proportion of cases where the maxillary or mandibular incisors were fully decompensated , 23 with a Class II division 1 malocclusion, 7 with a Class II division 2 malocclusion and 52 with a Class III malocclusion.Sixty-three percent of the maxillary incisor group were judged to be adequately decompensated whilst the value for the mandibular incisors was 80% . This dip=0.45] but the amount of tooth movement for the mandibular incisors was statistically significantly greater [p=0.02] in the Class III group than in the Class II group (The amount of maxillary incisor decompensation in the Class II malocclusion and Class III malocclusion groups did not statistically differ [II group .et al. (et al. (We found that adequate decompensation was more likely to be achieved in the mandible [80%] than in the maxilla 63%] and this difference was statistically significant. That not all patients were fully decompensated before surgery is in line with other investigations. Ahn and Baek found loet al. found on (et al. however, (et al. ,5,6. Thi3% and thet al. (et al. 
(This appears to be the first study that has investigated the differences between the maxillary and mandibular incisors and between patients presenting with either a Class II or Class III malocclusion. We found no statistically significant difference for the mean change during decompensation for maxillary incisors between the Class III and Class II group, whilst the amount of decompensation achieved for the mandibular incisors for Class III patients was statistically significantly greater than in the Class II group. Interestingly, Potts et al. found thet al. identifiet al. and our (et al. . However (et al. .There are a number of possible explanations for full decompensation not being achieved when desired. In Class III cases, inadequate labial bone and lack of periodontal support to allow sufficient advancement of incisors, previous mandibular arch extractions, lower lip neuromuscular resistance to mandibular incisor advancement and poor patient compliance with intra-oral elastic traction are all possible reasons whilst iet al. (et al. (The cohort was completely ascertained as all surgical-orthodontic treatment in this region is undertaken by the NHS. Cases where full decompensation was not planned were excluded as the data for these cases would introduce bias. However, one limitation of this study was the lack of completeness of the records due to the conventional film radiographs for two patients being lost. When analyzing subgroups, the numbers of subjects can become small and in this study there were only seven patients with a class II division 2 incisor relationship. The Class II division 1 and division 2 cases were combined for analysis as per Burden et al. and Prof (et al. . There wet al. (The results of this study have implications for clinical practice. Clinicians should be aware of the need to fully decompensate incisors [where clinically appropriate] in advance of orthognathic surgery. As this has been shown to be more difficult in the maxilla, careful attention should be paid to pre-surgical orthodontic biomechanics. Fixed appliances are used along with inter-proximal reduction, extractions, molar distalization [where appropriate] and cortet al. found thMandibular incisors were decompensated for a greater proportion of cases than maxillary incisors in preparation for orthognathic surgery.There was no difference in the amount of maxillary incisor decompensation between Class II and Class III cases.There was a greater net decompensation for mandibular incisors in Class III cases when compared to Class II cases."} {"text": "The primary goal of this special issue is to showcase cutting-edge research on tracking and identifying objects, analyzing motion, and extracting interesting frames from analog or digital video streams automatically. At the same time, we particularly focus on the efficiency of video surveillance systems and machine learning methods which can be used to analyze video and control the machine automatically. Our aim is to unify the machine learning techniques as an integral concept and to highlight the trends in advanced video intelligence and automated monitoring.With the developments of computer science, communication technology, and internet engineering, intelligent video surveillance systems have become more and more important in today's life. They can be seen everywhere. Intelligent video surveillance is digital, network-based video surveillance but is different from the general network video surveillance\u2014it is higher-end video surveillance applications. 
Intelligent video surveillance systems can automatically recognize different objects and find anomalies on the monitored screen. Thus, they potentially provide the fastest and best way to raise alerts and supply useful information, which can help security personnel deal with a crisis more effectively. Moreover, intelligent video surveillance systems can greatly reduce both false positives and false negatives. The basic information framework can be found in the illustrated figure. In this special issue, there were 51 submissions from more than 16 countries, including China, the United States, Canada, Germany, France, Australia, Japan, Pakistan, Bangladesh, Korea, Malaysia, South Africa, and Romania. Contributions of the accepted papers are summarized as follows. Based on studies of video data sets, innovative results are reported in several papers. Y. D. Khan et al. proposed a sufficiently accurate yet computationally inexpensive method to recognize human actions from videos; H. Fan et al. proposed a novel part-based tracking algorithm using online weighted P-N learning; J. Hariyono et al. presented a pedestrian detection method from a moving vehicle using optical flow and histograms of oriented gradients (HOG); O. A. Arigbabu et al. presented an effective approach for estimating body-related soft biometrics, proposing a novel method based on body measurements and an artificial neural network for predicting the body weight of subjects and incorporating an existing single-view metrology technique for height estimation in videos with low frame rates; X. Hu et al. proposed a novel local nearest neighbor distance (LNND) descriptor for anomaly detection in crowded scenes; R. Mustafa et al. presented a novel method for detecting nipples in pornographic image contents; J. Zhang et al. set up a new image multilabel annotation method based on double-layer probabilistic latent semantic analysis (PLSA); Z. Wang et al. constructed an accurate pedestrian detection system by combining a cascade AdaBoost detector and a random vector functional-link net; H. Wang et al. proposed a novel vehicle detection algorithm based on a 2D deep belief network (2D-DBN) within a deep learning framework; J. Li et al. proposed a human action recognition scheme to detect distinct motion patterns and to distinguish the normal from the abnormal status of epileptic patients by learning from video recordings of the patients' movements, work that is particularly interesting for health-care systems for epileptic patients; S. Zhu proposed a new approach to automatically recognize pain expressions from video sequences, categorizing pain into 4 levels: no pain, slight pain, moderate pain, and severe pain. Some notable contributions are devoted to the field of biometrics. Z. Chen et al. presented a novel real-time method for hand gesture recognition using finger segmentation; D. Li et al. introduced a cost-sensitive learning technique to reweight the probability of test affective utterances at the pitch envelope level, effectively enhancing robustness in emotion-dependent speaker recognition; H.-M. Zhu and C.-M. Pun proposed an adaptive and robust superpixel-based hand gesture tracking system in which hand gestures drawn in free air are recognized from their motion trajectories; Y. Daanial Khan et al. proposed a biometric technique for identifying a person using the iris image. There are also novel contributions on knowledge management and service selection in cloud computing. Y.
Jiang et al. proposed a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems; Y. Guo et al. proposed a comprehensive causality extraction system (CL-CIS) integrated with the means of category-learning; J. Zhai et al. proposed a novel cost function and improved the discrete group search optimizer (D-GSO) algorithm; H. Zhang et al. proposed a novel web reputation evaluation method quality of service (QoS) information."} {"text": "In the discussion section, the mechanism proposed for the rescue effect involving activation of the nuclear factor \u03baB (NF-\u03baB) pathway was scrutinized. This mechanism could explain the promotion of cellular survival and correct repair of DNA damage, dependence on cyclic adenosine monophosphate (cAMP) and modulation of intracellular reactive oxygen species (ROS) level in irradiated cells. Exploitation of the NF-\u03baB pathway to improve the effectiveness of RIT was proposed. Finally, the possibility of using zebrafish embryos as the model to study the efficacy of RIT in treating solid tumors was also discussed.The rescue effect describes the phenomenon where irradiated cells or organisms derive benefits from the feedback signals sent from the bystander unirradiated cells or organisms. An example of the benefit is the mitigation of radiation-induced DNA damages in the irradiated cells. The rescue effect can compromise the efficacy of radioimmunotherapy (RIT) . In this paper, the discovery and subsequent confirmation studies on the rescue effect were reviewed. The mechanisms and the chemical messengers responsible for the rescue effect studied to date were summarized. The rescue effect between irradiated and bystander unirradiated zebrafish embryos Conventional radioimmunotherapy (RIT) aims to deliver a lethal ionizing-radiation dose to tumor cells through a monoclonal antibody labeled with a radionuclide that has specificity for an antigen associated with the tumor cells. RIT has been an attractive tool for treating local and diffuse tumors with ionizing radiation. However, the efficacy of RIT might be compromised by a phenomenon called \u201crescue effect\u201d which was discovered relatively recently by our group in 2011 [in vitro experiments [Rescue effect is closely related to a more extensively studied non-targeted effect of ionizing radiation known as radiation-induced bystander effect (RIBE), which was first observed in eriments . RIBE ineriments ,6,7,8,9)eriments , transfoeriments , interleeriments , interleeriments and nitreriments ,15,16 aneriments .et al. [et al. [The rescue effect describes the phenomenon where irradiated cells or irradiated organisms derive benefits from the feedback signals released from the bystander unirradiated cells or organisms. An example of the benefit is the mitigation of radiation-induced DNA damages. Chen et al. discover [et al. found thAs such, the efficacy of RIT can be compromised in the presence of rescue effect. et al. [et al. [et al. [In this paper, the discovery of rescue effect will first be reviewed in et al. observed [et al. demonstr [et al. revealedet al. [et al. [Studies on the mechanisms and the chemical messengers responsible for communicating the rescue effect have been scarce. He et al. confirme [et al. confirmein vivo sharing the same medium [et al. [The rescue effect was also discovered between irradiated and bystander unirradiated zebrafish embryos e medium . Subsequ [et al. exploredet al. [et al. [In 2011, Chen et al. . 
Chen etet al. [all cells) without co-culture with bystander cells or those (IRby cells) co-cultured with bystander cells induced significant increases in the numbers of 53BP1-positive foci at 30 min post-irradiation, but with no statistically significant differences between the numbers of foci for IRall and IRby cells. By 24 h post-irradiation, the numbers of 53BP1 positive foci in IRall or IRby cells dropped significantly. In particular, the number of 53BP1 foci in IRby cells was significantly smaller than that in IRall cells, indicating that the bystander cells helped repair the DNA DSBs in the irradiated cells. The different manifestations of the rescue effect at 30 min and 24 h post-irradiation was likely due to the time required to facilitate the DNA repair. The irradiation conditions were designed to ensure that all cells in the designated irradiated population were actually irradiated, since any unirradiated cell would become a bystander cell. Contamination in the designated irradiated population with unirradiated bystander cells would lead to erroneous results, e.g., the rescue effect would then be present within the IRall cell population itself.Chen et al. observedet al. [Chen et al. revealedi.e., colonies counted/(cells seeded \u00d7 plating efficiency), at 24 h post-irradiation of IRall was lower than that of IRby, although the differences were not statistically significant. These results indicated that the bystander cells helped reduce the number of apoptotic irradiated cells and promoted survival of irradiated cells.The number of apoptotic cells, which were annexin V-positive FL1-H), was significantly increased at 72 h post-irradiation. With partnered bystander cells, this number was significantly decreased [-H, was set al. [Chen et al. also conSubsequent to our discovery of the rescue effect, various other groups confirmed the presence of the rescue effects in different cell systems.et al. [et al. [et al. [et al. [et al. [et al. [137Cs gamma irradiator.Widel et al. confirme [et al. who obse [et al. who obse [et al. used 2 o [et al. used 20 [et al. used 70 et al. [Furthermore, Widel et al. found thet al. [137Cs gamma irradiator, then partnered them with non-irradiated bystander ZF4 cells for 1 h, and finally irradiated them with gamma radiation for another 20 h with a total dose of 58 mGy (dose rate of 70 mGy/day) or 460 mGy (dose rate of 550 mGy/day); (b) gamma radiation for 24 h. Schematic diagrams showing the two different treatments are shown in Pereira et al. comparedet al. [Desai et al. adopted The number of \u03b3-H2AX foci fluorescence intensity per cell nucleus was chosen as the studied biological endpoint. The authors showed that the \u03b3-H2AX foci fluorescence intensity per nucleus of irradiated A549 cells partnered with unirradiated A549 cells surged significantly with time, with the peak value recorded at 3 h post-irradiation significantly higher than the corresponding value obtained from a co-culture with bystander WI38 cells. These latter observations suggested mitigation of the proton-induced DNA damage in the irradiated A549 cells by the bystander WI38 cells, which hinted at a rescue effect provided by these WI38 cells, or that bystander WI38 cells provided a much stronger rescue effect than bystander A549 cells. et al. [Despite the occurrence of the rescue effect, the authors did not observe detrimental bystander effects on the bystander WI38 cells induced by the irradiated A549 cells. 
This likely showed that the bystander signals from the irradiated cells, which triggered the generation of rescue signals in the bystander cells, might not necessarily lead to observable damages. Relatedly, Kong et al. also fouThe authors also studied the involvement of GJIC in the rescue effect from bystander WI38 cells to irradiated A549 cells, knowing that GJIC existed between WI38 and A549 cells. Before irradiation, the A549 cells were treated with lindane to block the GJIC. However, this lindane treatment did not significantly alter the \u03b3-H2AX foci intensity per irradiated A549 nucleus, which suggested that GJIC was not involved in this rescue effect, and that probably soluble factors might play an important role.On the other hand, the authors did not observe significant changes in the \u03b3-H2AX foci intensity per irradiated WI38 nucleus regardless of partnering with WI38 or A549 cells, which meant no rescue effect was induced by bystander A549 cells on irradiated WI38 cells.et al. [241Am source with a dose of 40 cGy at a dose rate of 0.244 Gy/min. The energy of the alpha particles reaching the cells was 4.4 MeV which corresponded to a Linear Energy Transfer (LET) of 100 keV/\u00b5m.He et al. examinedet al. [18 and the green fluorescent dye DiOC18, respectively. After the desired co-culture period, the red fluorescent irradiated cells were completely washed away from the cell co-culture with PBS for examination. Without the co-culture, the cAMP level in the irradiated cells rapidly increased at the beginning and peaked at 30 min after irradiation, then gradually decreased to the lowest level at 6 h after irradiation, and stabilized until 12 h. After the 6-h co-culture, the intracellular cAMP level in the irradiated cells was recovered while that in the bystander cells was reduced. He et al. [He et al. used thee et al. proposedet al. [et al. [Furthermore, He et al. demonstr [et al. proposedet al. [He et al. further et al. [Although cAMP could help protect cells against radiation-induced DNA damages , the undet al. proposedet al. .et al. [Lam et al. used the241Am source . In fact, alpha-particle irradiation was utilized for two separate purposes, namely, (1) to prepare irradiated cells on which the effects of different treatments would be studied; or (2) to prepare the CM. Alpha-particle irradiation of cells always took place in a \u201cMylar-film dish\u201d, which was a 100 mm diameter tissue culture dish with a hole of 10 mm diameter at the center and covered by a Mylar film with a thickness of 3.5 \u00b5m. The thin Mylar film allowed the alpha particles to go through without causing significant energy losses in the alpha particles.The authors used the human cervical cancer HeLa cells for their studies, employed alpha particles for irradiation and relied on the number of 53BP1 foci/cell as the studied biological endpoint. The alpha-particle dose of 5 cGy was delivered using an Preparation of the CM was a special methodology designed to physically separate the rescue signals from bystander signals. This methodology is schematically shown in The experimental setup and procedures to prove the presence of rescue effect and to study the role of NF-\u03baB activation in the rescue effect are shown in et al. [Using the setup shown in et al. 
found siDanio rerio) [241Am source (with an \u03b1-particle energy of 5.49 MeV under vacuum and an activity of 4.26 kBq), and used the number of apoptotic signals on the irradiated embryos at 24 h post fertilization (hpf) as the studied biological endpoint. These results were particularly relevant to studies on human disease as the human and zebrafish genomes shared considerable homology, including conservation of most DNA repair-related genes [Rescue effect was also induced between irradiated and bystander unirradiated zebrafish embryos (o rerio) ,24. All ed genes . In relaed genes ,36,37,38et al. [Choi et al. employedet al. [i.e., the bystander signals by definition, were performing functions similar to the rescue effect on other irradiated embryos. The latter result strongly suggested that similarity between the bystander and rescue signals, but probably with different concentrations.Kong et al. further et al. [et al. [\u2212) to cause DNA damage and lipid oxidation [et al. [Furthermore, Kong et al. studied [et al. , it did xidation , and thu [et al. also repin vitro rescue effects have been confirmed between different combinations of irradiated cells and bystander cells, including human primary fibroblast (NHLF) cells and cancer (HeLa) cells [To date, a) cells , human ma) cells , irradiaa) cells , lung ada) cells , human ma) cells , and irra) cells . The bioa) cells ,22; (2) a) cells ,20; (3) a) cells ,18,21; varied significantly according to the cell types (irradiated and bystander cells), the biological endpoints and the radiation dose. For using the numbers of 53BP1 foci as the biological endpoint, the magnitude of the rescue effect was about 13% . For usi [et al. found th [et al. revealed [et al. did not As mentioned in et al. [et al. [et al. [et al. [et al. [Moreover, Desai et al. reported [et al. , as well [et al. to be in [et al. found th [et al. for the et al. [et al. [(1) Expression of NF-\u03baB target genes in general promotes cellular survival. The anti-apoptotic proteins regulated by NF-\u03baB were reviewed by Magn\u00e9 et al. . Moreoveet al. , where pet al. . In addiet al. ; (3) cAMet al. , where Cet al. ; (4) Ceret al. . As such [et al. for the et al. [et al. [et al. [et al. [As described in et al. and Kong [et al. confirme [et al. conclude [et al. . The IKK [et al. . Moreove [et al. . As such [et al. .At the time of discovery of the rescue effect, it was already recognized that the effect would have far reaching consequences on the treatment procedures of tumors using ionizing radiation, particularly when it was discovered that unirradiated normal cells could rescue irradiated cancer cells . As explin vivo might help illustrate the rescue effect induced by RIT within solid tumors or the rescue effect between tumors arising from micrometastatic disease targeted by RIT. Application of RIT to treat solid tumors has been an attractive idea because it can target both known and occult lesions. Although RIT has been successfully applied to treat lymphoma, it has been generally facing a number of obstacles in treating solid tumors including, among others, heterogeneities in blood flow, tumor stroma, expression of target antigens and radioresistance [On the other hand, the rescue effect studied using zebrafish embryos sistance . Indeed,sistance ). Effortsistance ,51,52). sistance . 
In factsistance ,55,56,57sistance ), and atsistance ,60.The present paper reviewed the discovery in 2011 and research progress of a phenomenon called the rescue effect where irradiated cells or irradiated organisms derived benefits from the feedback signals released from the bystander unirradiated cells or organisms. The rescue effect can compromise the efficacy of all radiotherapy including RIT, noting in particular that unirradiated normal cells can rescue irradiated cancer cells. The mechanisms and the chemical messengers involved in the rescue effect proposed to date were described. In particular, activation of the NF-\u03baB pathway in irradiated cells has been identified as the crucial step for the rescue effect. The activation of the NF-\u03baB pathway can also explain the promotion of cellular survival and correct repair of DNA damages, the dependence on cAMP and the modulation of intracellular ROS level in the irradiated cells, which have been observed in previous studies on the rescue effect. The rescue effect has also been observed between irradiated and bystander unirradiated zebrafish embryos, which may help illustrate the rescue effect induced by RIT within solid tumors or between tumors."} {"text": "The interest towards polysaccharides of natural origin is continuously growing during the past decade. Fields of interest for their applications are widening, ranging between ecocommodities, food supplements, cosmetics, pharmaceuticals, and biomedical uses. Exploitation of new sources of polysaccharides of different origin is well documented in recent literature. Since this tendency is involving biomaterials science in a pressing way, this journal set up to publish a special issue devoted to this topic. The result is a collection of twelve original research articles, whose authors belong to academic or research institutions of eleven different countries from Asia, Europe, and Australia. Papers are representative of a large share of biomedical applications, related chemical modifications, and manufacturing methodologies.From a chemical point of view, the perspective is to replace traditional methods for production and modification of natural polysaccharides with more ecofriendly, efficient, and targeted methodologies. N. Chopin et al. from France (\u201cA Direct Sulfation Process of a Marine Polysaccharide in Ionic Liquid\u201d) described a new sulfation method to produce high specificity derivatives with biological activity. On the other hand, S. Islam et al. from Australia (\u201cComparison and Characterisation of Regenerated Chitosan from 1-Butyl-3-methylimidazolium Chloride and Chitosan from Crab Shells\u201d) had exploited the effect of ionic liquid solvents on chemical-physical and functional properties of chitosan and demonstrated that ionic liquid solvents represent a good medium for dissolution of chitosan, also useful for blending with other polysaccharides.J. Varshosaz et al. from Iran , A. Rees et al. from UK (\u201c3D Bioprinting of Carboxymethylated-Periodate Oxidized Nanocellulose Constructs for Wound Dressing Applications\u201d), and F. Hong et al. from China had focused on nanotechnology applied to polysaccharides, particularly on the use of sophisticated techniques to optimize the design of final products. J. Varshosaz et al. prepared agar nanospheres for controlled drug release using sophisticated Design-Expert software with the aim of optimizing the release characteristics by a careful control of fabrication conditions. A. Rees et al. 
reported on three-dimensional (3D) bioplotter applied to nanocellulose. 3D bioplotting allows the construction of complex shapes, otherwise unfeasible through traditional manufacturing techniques. The authors prepared nanocellulose in form of short nanofibrils of reduced viscosity and used this novel material as a bioink for printing 3D porous structures for wound healing applications. F. Hong et al. described a novel double tubes bioreactor, based on previous patents and papers of KLEMM group and GATENHOLM group, designed for the production of bacterial nanocellulose tubes for small vascular implants. Y. Shirosaki et al. from Japan (\u201cPreparation of Porous Chitosan-Siloxane Hybrids Coated with Hydroxyapatite Particles\u201d) developed a method for apatite deposition on a novel porous material based on chitosan, potentially interesting as bone substitute in craniofacial surgery. S. Uthaman et al. from Korea (\u201cPolysaccharide-Coated Magnetic Nanoparticles for Imaging and Gene Therapy\u201d) reviewed recent progresses of polysaccharide-coated magnetic nanoparticles for imaging and gene delivery, highlighting the role of polysaccharide coating and its advantages.) explored the field of tissue engineering by investigating, respectively, the role of fabrication parameters on efficiency of alginate microcapsules for cell immobilization and the surface modification of stents by polysaccharides deposition to favor protein absorption and reendothelialization after implantation in human vessels. L. Russo et al. from Italy developed novel hyaluronic acid hydrogels with mechanical properties similar to those of the ECM of natural tissues.P. Montanucci et al. from Italy (\u201cInsights in Behavior of Variably Formulated Alginate-Based Microcapsules for Cell Transplantation\u201d) and S. Benni et al. from France Hayne ssp. raddiana Polysaccharide on Streptozotocin-Nicotinamide Induced Diabetic Rats\u201d) investigated the antidiabetic activity of a polysaccharide extracted from Acacia tortilis, a tree widespread in the globe, mainly in North Africa and Asia.Polysaccharides have also been exploited for their potential healing effects. M. Matoba et al. from Japan highlighted the antiadhesive effect of alginate in postoperative treatments to prevent adhesions induced by PGA meshes. P. K. Bhateja and R. Singh from India (\u201cAntidiabetic Activity ofUniformly, the authors highlighted the potentiality of this emerging class of biocompatible macromolecules in biomaterials field as source of both new molecules with therapeutic effects and new materials for regenerative medicine and realization of biomedical devices.\u2009\u2009LaurienzoPaola\u2009\u2009C.\u2009\u2009FernandesJo\u00e3o\u2009\u2009Colliec-JouaultSylviaJ. Helen Fitton"} {"text": "When all the individuals in a social group can be easily identified, one of the simplest measures of social interaction that can be recorded is nearest-neighbour identity. Many field studies use sequential scan samples of groups to build up association metrics using these nearest-neighbour identities. Here, I describe a simple technique for identifying clusters of associated individuals within groups that uses nearest-neighbour identity data. Using computer-generated datasets with known associations, I demonstrate that this clustering technique can be used to build data suitable for association metrics, and that it can generate comparable metrics to raw nearest-neighbour data, but with much less initial data. 
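The clustering step described above can be sketched in code. This excerpt does not spell out the exact clustering rule, so the following minimal Python sketch assumes a common interpretation: each recorded nearest-neighbour link is treated as an undirected edge, clusters are the connected components of that graph, and co-membership counted over repeated scans builds an association matrix. The function names and the toy scan data are illustrative, not taken from the study.

from collections import defaultdict

def nn_clusters(nearest_neighbour):
    """Group individuals into clusters by chaining nearest-neighbour links.

    `nearest_neighbour` maps each individual ID to the ID of its nearest
    neighbour in one scan sample. Links are treated as undirected edges and
    clusters are the connected components of the resulting graph.
    """
    adjacency = defaultdict(set)
    for focal, neighbour in nearest_neighbour.items():
        adjacency[focal].add(neighbour)
        adjacency[neighbour].add(focal)

    clusters, seen = [], set()
    for start in list(adjacency):
        if start in seen:
            continue
        # Depth-first search collects one connected component.
        stack, component = [start], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adjacency[node] - component)
        seen |= component
        clusters.append(component)
    return clusters

def association_matrix(scans):
    """Count, over many scans, how often each pair shared a cluster."""
    counts = defaultdict(int)
    for scan in scans:
        for cluster in nn_clusters(scan):
            members = sorted(cluster)
            for i, a in enumerate(members):
                for b in members[i + 1:]:
                    counts[(a, b)] += 1
    return dict(counts)

# One hypothetical scan: every identified individual and its nearest neighbour.
scan = {"A": "B", "B": "A", "C": "B", "D": "E", "E": "D"}
print(nn_clusters(scan))            # e.g. [{'A', 'B', 'C'}, {'D', 'E'}]
print(association_matrix([scan]))   # pairwise co-membership counts

Accumulating such co-membership counts over many scans yields a pairwise matrix that can feed standard association indices, which is the kind of summary data the technique is intended to produce.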
This technique could therefore be of use where it is difficult to generate large datasets. Other situations where the technique would be useful are discussed. Once we have information at this basic level of interaction, we can then begin to build networks and test hypotheses regarding their structure \u20133. DiffeWhen individuals within a group can be easily identified, field studies typically use either focal sampling, where pre-selected individuals are followed for a given length of time, collecting sequential metrics about their associations with other individuals, or scan sampling, where the associations of all measurable individuals are recorded at a given moment . Both teScan sampling of all individuals can give a quick measurement of intragroup association in the field, forcing a record to be taken for all individuals. The simplest association metric involves identifying the nearest neighbour of each individual. This is a fast and reliable technique that is frequently implemented in studies of primates \u20138 and hePapio ursinus, tend to cluster, so that each individual is within 5\u2009m of a nearest neighbour , it , it spatalways recorded. For example, Schreier & Swedell [Papio hamadryas, groupings using sequential scan samples, recording association between only these individuals without considering closer baboons who were not leader males. It could also be the case that some individuals may be absent or simply unidentifiable during one or more of the sampling scans. In this case, the clustering metric would be biased to the same degree as any other association metric, and should deliver similar biased results .The method I describe relies on data being collected for all the individuals in a group during a sample period, rather than something more similar to focal sampling , Swedell collecteet al. [et al. [et al. [The construction of a measure similar to nearest-neighbour clusters has been implicitly used in some field studies where subgroup membership is recorded, rather than nearest-neighbour identities. For example, Ramos-Fern\u00e1ndez et al. describe [et al. and Hiro [et al. place in [et al. use inte [et al. ,31\u201333). et al. [et al.\u00a0[et al. [The technique described here can be used to generate a matrix of associations between identifiable individuals that is demonstrably faster than simply considering just the counts of nearest-neighbour association. Once generated, these summary metrics still need to be processed to give meaningful comparable measures of association. For examples of how analyses can be conducted, I recommend the studies described in Henzi et al. and Ramoet al. , and theet al. and Whitet al. . A \u2018soci [et al.\u00a0 and furt\u00a0[et al. , which c\u00a0[et al. gives re"} {"text": "The aim of this n=15) after removing the coronal 3 mm of the obturating materials. In the MTA group, white MTA plug was placed in pulp chamber and coronal zone of the root canal. In CEM cement group, CEM plug was placed in the tooth in the same manner. In both groups, a wet cotton pellet was placed in the access cavity and the teeth were temporarily sealed. After 24 h the teeth were restored with resin composite. In the negative control group the teeth were also restored with resin composite. The color change in the cervical third of teeth was measured with a colorimeter and was repeated 3 times for each specimen. The teeth were kept in artificial saliva for 6 months. After this period, the color change was measured again. 
Data were collected by Commission International de I'Eclairage's L*a*b color values, and corresponding \u0394E values were calculated. The results were analyzed using the one-way ANOVA and post-hoc Tukey\u2019s test with the significance level defined as 0.05. Forty five endodontically treated human maxillary central incisors were selected and divided into three groups (P<0.05). There was no significant differences between CEM group and control group in mean discoloration. The mean tooth discoloration in MTA group was significantly greater than CEM and control groups (According to the result of the present study CEM cement did not induce tooth discoloration after six months. Therefore it can be used in vital pulp therapy of esthetically sensitive teeth. Biomaterials are used in many endodontic treatments, including repair of tooth perforation, as root end filling material, treatment of teeth with open apices and as pulp capping agents. One of these biomaterials which is being widely used is mineral trioxide aggregate (MTA). MTA is a biocompatible material with profound sealing properties that make it suitable for sealing root perforations -5; and ret al. , whereas a plane represents the degree of red or green [+a (red) and -a (green)] and b plane corresponds with the degree of yellow or blue [+b (yellow) and -b (blue)] within the sample. To position the tip of the colorimeter in the same location on each specimen, a silicon rubber mold was prepared. The colorimeter was calibrated on white calibration plate according to the manufacturer\u2019s instruction. The color of the cervical third of the teeth was assessed three times and the mean value was considered as the final measurement at the baseline examination. The teeth were then kept in an incubator 37\u00b0C in artificial saliva for 6 months, whereas artificial saliva was replenished each week. After this period, color assessment was made using the colorimeter in the manner described for baseline readings. The calculation of the discoloration (\u0394E*) between the two color measurements is as follows: \u0394E*=[(\u0394L*)2+(\u0394a*)2+(\u0394b*)2]1/2. The human eye cannot perceive color difference between two specimens (\u0394E) values less than 1. \u0394E values between 1 and 3.3 represent a clinically acceptable range [\u0394E values of 3.3 and higher are reported to be unacceptable for human eyes in clinical conditions [A colorimeter was used relative to standard illuminant with a white background to measure the color of each specimen in a standardized condition according to the CIE LAB There was no significant differences between CEM cement group and control group in mean discoloration value. The mean tooth discoloration in MTA group was significantly more than CEM cement and control groups (2O3), periclase (MgO) and FeO has been lowered in white MTA compared to grey MTA but these metal oxides are still present in white preparations [et al. [et al. [Many studies have pointed to tooth discoloration as one of the major drawbacks of MTA -9; whicharations . Marcian [et al. examined [et al. also repSimilarly EndoCem Zr and Retro MTA (both containing zirconium oxide) caused less discoloration than ProRoot MTA and Angelus MTA (which contain bismuth oxide) . 3, P2O5, and SiO2 [et al. [et al. [CEM cement, on the other hand contains CaO, SOand SiO2 . Despiteand SiO2 also obs [et al. showed t [et al. also obsFelman and Parashos restoredet al. [et al. [E.faecalis. Kangarlou et al. [Candida albicans even in low concentration. 
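The discoloration formula and the perceptibility thresholds quoted above translate directly into a short calculation. The following Python sketch applies ΔE* = [(ΔL*)² + (Δa*)² + (Δb*)²]^1/2 to a pair of CIELAB readings and classifies the result against the cited thresholds (ΔE < 1 imperceptible, 1–3.3 clinically acceptable, ≥3.3 clinically unacceptable); the numeric readings are illustrative values, not measurements from the study.

import math

def delta_e(lab_baseline, lab_followup):
    """CIE76 colour difference: dE* = sqrt(dL*^2 + da*^2 + db*^2)."""
    dL, da, db = (b - a for a, b in zip(lab_baseline, lab_followup))
    return math.sqrt(dL ** 2 + da ** 2 + db ** 2)

def interpret(de):
    """Perceptibility thresholds cited in the text."""
    if de < 1.0:
        return "imperceptible to the human eye"
    if de < 3.3:
        return "perceptible but clinically acceptable"
    return "clinically unacceptable"

# Illustrative L*, a*, b* readings (not data from the study).
baseline = (72.4, 1.8, 14.6)
six_months = (68.1, 2.4, 17.9)
de = delta_e(baseline, six_months)
print("dE* = %.2f: %s" % (de, interpret(de)))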
Similar to the result of this study, Eghbal et al. [CEM cement is a hydrophilic endodontic biomaterial. It has pH values similar to MTA . Samiee et al. showed bet al. , 32. CEMet al. . Razmi e [et al. showed bu et al. also eval et al. evaluate\u0394E*) with a colorimeter confers advantages such as repeatability, sensitivity and objectivity, despite some limitations [Color measurement has several techniques and in the present study the quantitative technique was used. The quantitative measurement of color difference (itations , 37.According to the results of this study, CEM did not have discoloration properties unlike other endodontic materials like MTA. CEM cement can be advocated as an endodontic biomaterial in esthetically sensitive teeth."} {"text": "It remains a challenge to discuss every 3\u00a0months the published literature on the pathology of lymphomas. A pleasant challenge, because I need to read many interesting works, but also an impossible one: I cannot read everything. Nevertheless, I hope that also this month, I can provide the readership with an interesting summary of my personal selection.CD30 expression is a hallmark of Hodgkin lymphoma (HL), and this is associated with activation of the nuclear factor-\u03baB (NF-\u03baB) pathway. Epstein-Barr virus (EBV) latent membrane protein-1 (LMP-1) and ligand-independent signaling by overexpressed CD30 are both known to cause activation of NF-\u03baB in lymphomas, but in normal cells hyperactivation of NF-\u03baB triggers cellular senescence and apoptosis. Ishikawa et al. [Another example is provided by Oelmann et al. who studNOTCH2 and MYD88 genes, but the other 23 had not been associated with SMZL before. However, in none of 24 additional sMZL, these latter mutations were also found. Therefore, this approach was not very successful in finding recurrent genetic alterations in this study, but further work is needed.Nowadays, whole genome or exome sequencing has become a widely available tool so we can expect rapid increase in the knowledge on genetic changes in many tumors. Peveling-Oberhag et al. investigThe gene encoding the lysine-specific histone methyltransferase KMT2D has emerged as one of the most frequently mutated genes in follicular lymphoma (FL) and diffuse large B cell lymphoma (DLBCL), but the biological consequences of these mutations are not known. Two groups of researchers took on the challenge to determine the role of these mutations in lymphoma development and come with similar results. Ortega-Molina et al. show in BL and FL both have features of GC B cells but are biologically and clinically quite distinct. Kretzmer et al. performeHartmann et al. tried toActivated B cell (ABC) DLBCL relies on B cell receptor (BCR) signaling but only a part of patients benefit form BCR pathway inhibition. Young et al. developeSome lymphomas are closely related to specific environments in the body. An example is the tight relation with infiltration in the epithelium of some cutaneous T cell lymphomas (T-NHL). Adachi et al. investigAs mentioned above for sMZL, WGS was performed by da Silva Almeida et al. on normaOhtani et al. give furNasal natural killer T cell lymphoma (NKTL) is a highly malignant tumor that is closely associated with Epstein-Barr virus (EBV) infection. Latent membrane protein 1 (LMP1) is encoded by EBV and plays an important role in EBV-induced cell transformation. Therefore, Sun et al. 
assessedHLA gene is associated with Epstein-Barr virus (EBV)-positive HL (both mixed and nodular sclerosis type) but not with EBV negative HL. They could do this since they had a large series of cases (1200), which was compared with 5726 controls. The results were confirmed in an independent series of 468 cases and 551 controls.The genetic susceptibility for the development of disease can only be unraveled when the disease is well defined. Delahaye-Sourdeix et al. provide n\u2009=\u20098). Two patients are in complete remission at 26 and 216\u00a0months. Nine patients died 8.0\u2009\u00b1\u20096.5\u00a0months after diagnosis. Of the five cases with late PTLD occurring 4\u201323\u00a0years after transplantation, one had pulmonary lymphomatoid granulomatosis (the only endothoracic case), one cutaneous large T cell lymphoma, two had anaplastic large cell lymphomas, and one HL. Two of the five cases were EBV-negative, including one followed by a second EBV\u2009+\u2009positive PTLD after 8\u00a0years of complete remission. Two patients were alive and well (follow-up: 44 and 151\u00a0months). These data are in line with the suggestion that early PTLD is generally an immune deficiency and EBV-associated disease, but that late lesions have a different and variable pathogenesis.De Montpr\u00e9ville et al. describeThe distinction between extensive progressive transformed germinal centers (PTGC) and nodular lymphocyte predominant (NLP) HL is not easy. The presence of regular large CD20-positive cells has been described as indicative of NLPHL, but in practice, I prefer the criterium that the lesions need to have a mass effect, a disturbance of the lymph node architecture. Hartmann et al. describeA typical feature of NLPHL is the increased number of CD57-positive CD4+ T cells , cells tEladl et al. describeClassical (c) HL is defined by morphology and aberrant B cell program. Rare cases of otherwise typical HL have strong CD20 expression. Benharrough et al. describeNodal marginal zone lymphoma (NMZL) remains an ill-recognized entity that easily may be misdiagnosed as FL. In this issue of the Journal of Hematopathology, van den Brand et al. hypothesChoung et al. investigLymphoplasmacytic lymphoma (LL) secreting IgA or IgG is rare. Cao et al. evaluateMYD88 mutation is a hallmark of LL, but is also described in other lymphomas, including primary testicular DLBCL. Oishi et al. investigMantle cell lymphoma (MCL), characterized by the t and cyclin D1 overexpression, commonly has overexpression of SOX11. Silencing of SOX11 in MCL cells promotes the shift into an early plasmacytic differentiation phenotype. Ribera-Cortada et al. correlatA bit better defined variant of MCL is the blastoid subtype, although there is still substantial interobserver variation in recognition of the morphological feature. Bhatt et al. analyzedMottok et al. analyzedEndemic (e) BL is found in children in equatorial regions and represents the first historical example of a virus-associated human malignancy. EBV infection and MYC translocations are hallmarks of the disease; it is unclear whether other factors may contribute to its development. Abate et al. analyzedSome cases of morphological and phenotypical characteristic BL lack a MYC translocation. De Falco et al. comparedXue et al. describeScarfo et al. used advDNMT3A, KRAS, JAK3, STAT3, STAT5B, GNB1, and TET2 genes, genes implicated previously in other T cell neoplasms. 
The outcome was heterogenous: two patients are alive without disease, four are alive with disease, and six died of disease. In conclusion, PCNSTLs are histologically and genomically heterogenous with frequent phenotypic aberrancy and a cytotoxic phenotype in most cases.Menon et al. report tAggerwal et al. encounteNow that more and more knowledge on IgG4-related disease is accumulating, retrospective studies in different organs reveal relevant information. Ferry et al. reviewedFCGR) genes influence response to rituximab they prospectively obtained specimens of 408 previously untreated, low tumor burden FL patients treated with single agent rituximab. The response rate to initial rituximab was 71\u00a0% but no FCGR genotypes or grouping of genotypes were predictive of this response or response duration. Although this is a disappointing result, the material and data are unique, and may provide positive results in the future.Finding predictive markers for expensive drugs is considered a very important method to reduce health care costs, but is not easy task. This is examplified by the work of Kenkre et al. . Based oAnother potential promising approach was chosen by Nelson et al. who deveChuang et al. took a mCopie-Bergman et al. analyzedMiura et al. tried toOkina et al. investigThese studies reveal once again that we have many prognostic factors, but there is still little use of them in clinical trials.Detection of light chain restriction (LCR) in FFPE-tissue remains often not successful and is a potential help in some cases. Arora et al. describeAccording to Kirsch et al. early di"} {"text": "The topic of recent advances in information technology has attracted a wide range of articles on technology theory, applications from many aspects, and design methods of information technology. Reviewing the papers in this topic, it is clear that all fields such as computer science, cloud computing, wireless sensor network, prediction, image annotation, and storage have been involved. And the publications about recent advances in information technology tackled significant recent developments in the fields mentioned above, both of a foundational and applicable character.Also, we can easily find that most contributors regard \u201cinformation technology\u201d as synonymous with tools such as the computer, mobile, and tablet and such issues as instructional design, mobile learning, social networking, and open sources. Through the topic's development, research designs are appropriate for studying the potential of information technology applications under controlled situations.This special issue includes a collection of 100 papers selected from 466 submissions to 36 countries or districts. All submitted papers followed the same standard (peer-reviewed by at least three independent reviewers) as applied to regular submissions.A self-adaptive parameter optimization algorithm in a real-time parallel image processing system\u201d by G. Li et al. proposed an adaptive load capacity balance strategy on the servo parameters optimization algorithm (ALBPO) to improve the computing precision and achieve high detection ratio while not reducing the servo circle.The paper entitled \u201cImproved algorithm for gradient vector flow based active contour model using global and local information\u201d by J. Zhao et al. proposes an improved approach based on existing gradient vector flow methods. 
Main contributions of this paper are a new algorithm to determine the false part of the active contour with higher accuracy from the global force of gradient vector flow, and a new algorithm to update the external force field together with the local information of the magnetostatic force. In the paper entitled \u201cA topology visualization early warning distribution algorithm for large-scale network security incidents\u201d by H. He et al., a comprehensive early warning system that combines active measurement and anomaly detection is presented; the key visualization algorithm and technology of the system are mainly discussed. The paper entitled \u201cA novel approach to word sense disambiguation based on topical and semantic association\u201d by X. Wang et al. proposes a novel approach to word sense disambiguation based on topical and semantic association: for a given document, supposing that its topic category is accurately discriminated, the correct sense of an ambiguous term is identified through the corresponding topic and semantic contexts. In the paper entitled \u201cA fast map merging algorithm in the field of multirobot SLAM\u201d by Y. Liu et al., a map merging algorithm based on virtual robot motion is proposed for multirobot SLAM; a thinning algorithm is used to construct the skeleton of the grid map's empty area, and a mobile robot is simulated in one map. The paper entitled \u201cHexahedral localization (HL): a three-dimensional hexahedron localization based on mobile beacons\u201d by L. Liu et al. proposes a three-dimensional range-free localization scheme named hexahedral localization: the space is divided into hexahedrons, and all the unknown nodes are then located by exploiting the perpendicular properties of the trajectory. The paper entitled \u201cA two-level cache for distributed information retrieval in search engines\u201d by W. Zhang et al. proposes a distribution strategy for the cache data; the experiments show that the hit rate, the efficiency, and the time consumption of the two-level cache compare favorably with other cache structures. The paper entitled \u201cAnalysis of DNS cache effects on query distribution\u201d by Z. Wang studies the DNS cache effects on query distribution at the CN top-level domain (TLD) server, and the approximate TTL distribution for domain names is inferred quantitatively. The paper entitled \u201cThe generalization error bound for the multiclass analytical center classifier\u201d by Z. Fanzi and M. Xiaolong presents a multiclass classifier based on the analytical center of the feasible space (MACM); this classifier is formulated as a quadratically constrained linear optimization and does not need to repeatedly construct classifiers to separate a single class from all the others. The paper entitled \u201cThe estimate for approximation error of neural network with two weights\u201d by F. Zeng and Y. Tang constructs a new BP neural network and proves that the network can approximate any nonlinear continuous function. In the paper entitled \u201cAn automatic image inpainting algorithm based on FCM\u201d by J. Liu et al., an automatic image inpainting algorithm that identifies the repaired area by the fuzzy C-means (FCM) algorithm is proposed.
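As a point of reference for the FCM step mentioned in the inpainting paper above, the following Python sketch implements a plain fuzzy C-means iteration on pixel intensities. It is a generic textbook version rather than the authors' algorithm, and the cluster count, fuzzifier m, and toy data are assumptions made purely for illustration.

import numpy as np

def fuzzy_c_means(values, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means on a 1-D array of pixel intensities.

    Returns cluster centres and the fuzzy membership matrix (pixels x clusters).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).ravel()
    # Random initial memberships, normalised so each pixel's row sums to 1.
    u = rng.random((x.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centres = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        dist = np.abs(x[:, None] - centres[None, :]) + 1e-12
        # Standard membership update: inverse distances raised to 2/(m-1), row-normalised.
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centres, u

# Toy "image": dark background intensities plus a small bright patch.
pixels = np.r_[np.full(90, 30.0), np.full(10, 220.0)]
pixels = pixels + np.random.default_rng(1).normal(0.0, 5.0, pixels.size)
centres, u = fuzzy_c_means(pixels, n_clusters=2)
labels = u.argmax(axis=1)  # hard assignment derived from the fuzzy memberships
print("cluster centres:", np.round(centres, 1))
print("pixels assigned to the brighter cluster:", int((labels == centres.argmax()).sum()))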
FCM algorithm classifies the image pixels into a number of categories according to the similarity principle, making the similar pixels cluster into the same category as possible.In the paper entitled \u201cUsing the high-level based program interface to facilitate the large scale scientific computing\u201d by Y. Shang et al. makes further research on facilitating the large-scale scientific computing on the grid and the desktop grid platform. The related issues include the programming method, the overhead of the high-level program interface based middleware, and the anticipated data migration.The paper entitled \u201cA cooperative model for IS security risk management in distributed environment\u201d develops a cooperative model for IS security risk management in a distributed environment. In the proposed model, the exchange of security information among the interconnected IS under distributed environment is supported by Bayesian networks (BNs).The paper by N. Feng and C. Zheng entitled \u201cVocal emotion of humanoid robots: a study from brain mechanism\u201d by Y. Wang et al. explored the brain mechanism of vocal emotion by studying previous researches and developed an experiment to observe the brain response by fMRI, to analyze vocal emotion of human beings.The paper entitled \u201cReal-time tracking by double templates matching based on timed motion history image with HSV Feature\u201d by Z. Li et al. presents a novel method with appearance model described by double templates based on timed motion history image with HSV color histogram feature (tMHI-HSV).The paper entitled \u201cAn improved piecewise linear chaotic map based image encryption algorithm\u201d by Y. Hu et al. proposes a symmetric cryptographic system using MPWLCM chaotic system to encrypt grayscale image. The scheme possesses high sensitivity to plain image and key, so it has a good ability to resist differential attack.The paper entitled \u201c\u03bb)-based control model for real-time traffic light coordinationA Sarsa-based real-time traffic control optimization model that can maintain the traffic signal timing policy more effectively. The Sarsa(\u03bb)-based model gains traffic cost of the vehicle, which considers delay time, the number of waiting vehicles, and the integrated saturation from its experiences to learn and determine the optimal actions.The paper entitled \u201cMixed pattern matching-based traffic abnormal behavior recognition\u201d propose a trajectory pattern learning method based on dynamic time warping (DTW) and spectral clustering. The paper introduced the DTW distance to measure the distances between vehicle trajectories and determined the number of clusters automatically by a spectral clustering algorithm based on the distance matrix.J. Wu et al. in their paper entitled \u201cA reward optimization method based on action subrewards in hierarchical reinforcement learning\u201d by Y. Fu et al. a hierarchical reinforcement learning method based on action subrewards is proposed to solve the problem of \u201ccurse of dimensionality,\u201d which means that the states space will grow exponentially in the number of features and low convergence speed.In the paper entitled \u201cConstructing better classifier ensemble based on weighted accuracy and diversity measure\u201d by X. 
Zeng a weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of the classifier ensemble, assisting in the ensemble selection task.In the paper entitled \u201cSimplified process model discovery based on role-oriented genetic mining\u201d by W. Zhao et al. proposes a genetic programming approach to mine the simplified process model. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models.The paper entitled \u201cAn improved topology-potential-based community detection algorithm for complex network\u201d by Z. Wang et al. puts forward a mass calculation method for complex network nodes, which is inspired from the idea of PageRank algorithm.The paper entitled \u201cGait correlation analysis based human identification\u201d by J. Chen, silhouette correlation analysis based human identification approach is proposed. By background subtracting algorithm, the moving silhouette figure can be extracted from the walking images sequence.In \u201cInformation spread of emergency events: path searching on social networks\u201d by W. Dai et al. collects Internet data based on data acquisition and topic detection technology, analyzes the process of information spread on social networks, describes the diffusions and impacts of that information from the perspective of random graph, and finally seeks the key paths through an improved IBF algorithm.The paper entitled \u201cApplication of butterfly Clos-network in network-on-chip\u201d by H. Liu et al. studied the topology of network-on-chip (NoC). By combining the characteristics of the Clos-network and butterfly network, a new topology named BFC (butterfly Clos-network) network was proposed.The paper entitled \u201cA linear method to derive 3D projective invariants from 4 uncalibrated images\u201d by Y. Wang et al. presents a method to compute projective invariants of 3D points from four uncalibrated images directly.The paper entitled \u201cComparative study of multimodal biometric recognition by fusion of iris and fingerprint\u201d by H. Benaliouche and M. Touahria a novel combination of iris and fingerprint biometrics is presented in order to achieve best compromise between a zero FAR and its corresponding FRR.In the paper entitled \u201cDesign of jitter compensation algorithm for robot vision based on optical flow and Kalman filter\u201d by B. R. Wang et al. the condition number of coefficient matrix was proposed to quantificationally analyse the effect of matching errors on parameters solving errors.In the paper \u201c \u201cDynamic multiobjective optimization algorithm based on average distance linear prediction model\u201d by Z. Li et al. defines a kind of dynamic multiobjective problem with translational Pareto-optimal set (DMOP-TPS) and proposes a new prediction model named ADLM for solving DMOP-TPS.The paper entitledApplication of genetic algorithm to hexagon-based motion estimation\u201d by C.-M. Kung et al. is to propose a new technique which focuses on combing the hexagon-based search algorithm, which is faster than diamond search, and genetic algorithm.The aim of the paper entitled \u201cDynamic scene stitching driven by visual cognition model\u201d by L.-h. Zou et al. 
investigates dynamic video sequence stitching, especially under the situation that the scene, captured on a movable platform, contains moving objects or other important interesting regions.The paper entitled \u201cAutomatic recognition of seismic intensity based on RS and GIS: a case study in Wenchuan Ms8.0 earthquake of China\u201d propose a RS/GIS-based approach for automatic recognition of seismic intensity, in which RS is used to retrieve and extract the information on damages caused by earthquake, and GIS is applied to manage and display the data of seismic intensity.Q. Zhang et al. in the paper entitled \u201cA robust H.264/AVC video watermarking scheme with drift compensation\u201d by X. Jiang et al. a robust video watermarking scheme with drift compression is proposed. The devised MB selection scheme can lower the influence on video quality and reduce the possibility of drift distortion.In the paper entitled \u201cAn improved feature selection based on effective range for classification\u201d by J. Wang et al.A novel efficient statistical feature selection approach called improved feature selection based on effective range (IFSER) is proposed in the paper entitled \u201cModeling of task planning for multirobot system using reputation mechanism\u201d by Z. Shi et al. a task planning method based on reputation mechanism is proposed. Reputation plays an important role in the collaboration among people.In the paper entitled \u201cP-bRS: a Physarum-based routing scheme for wireless sensor networks\u201d by M. Zhang et al. proposes a novel Physarum-based routing scheme (P-bRS) for WSNs to balance routing efficiency and energy equilibrium.The paper entitled \u201cReliability prediction of ontology-based service compositions using petri net and time series models\u201d by J. Li et al. presents a comprehensive dependability prediction model for OWL-S processes.The paper entitled \u201cA new approach for clustered MCs classification with sparse features learning and TWSVM.\u201d The proposed method is based on sparse feature learning and representation, which expresses a testing sample as a linear combination of the built vocabulary (training samples).A novel approach described to aid breast cancer detection and classification using digital mammograms is presented by X.-S. Zhang in his paper entitled \u201cThe study of cooperative obstacle avoidance method for MWSN based on flocking control\u201d by Z. Chen et al. studied the features of target tracking in mobile wireless sensor network and the concept, features, category, application areas of flocking control mode, and obstacle avoidance algorithm.The paper entitled \u201cA systematic comparison of data selection criteria for SMT domain adaptation\u201d by L. Wang et al. performs an in-depth analysis of three different sentence selection techniques.The paper entitled \u201cAn exponentiation method for XML element retrieval\u201d by T. Wichaiwong investigates retrieval techniques and related issues over a strongly structured collection using the exponentiation weight for the document's structure over the content-and-structure query, in the data-centric track of the INEX 2011.The paper entitled \u201cResearch and application for grey relational analysis in multigranularity based on normality grey number\u201d by J. Dai et al., combining with the probability distribution of the data, the conception of the normality grey number is proposed.In the paper \u201cTrusted computing strengthens cloud authentication\u201d by E. Ghazizadeh et al. 
proposes the use of trusted computing, federated identity management, and OpenID Web SSO to solve identity theft in the cloud.The paper entitled \u201cAn efficient fitness function in genetic algorithm classifier for landuse recognition on satellite images\u201d proposes a new index, DBFCMI, by integrating two common indices, DBI and FCMI, in a GA classifier to improve the accuracy and robustness of classification.The paper by M.-D. Yang et al. entitled \u201cUnregistered biological words recognition by Q-learning with transfer learning\u201d by F. Zhu et al. proposes a novel approach to recognize words based on transfer learning, by which we turn the process of recognizing the terms into a property marking process by redefining the property of terms according to features of terms and the corresponding context.The paper entitled \u201cMeasuring semantic relatedness between Flickr images: from a social tag based view\u201d by Z. Xu et al. mainly discusses the semantic relatedness measures systematically, puts forward a method to measure the semantic relatedness of two images based on their tags, and justifies its validity through the experiments.The paper \u201cA GA-based approach to hide sensitive high utility itemsets\u201d by C.-W. Lin et al., a GA-based algorithm is proposed to find the feasible combination for data sanitization. Each gene in a chromosome represents a possible transaction to be inserted.In the paper entitled \u201cA novel macroblock level rate control method for stereo video coding\u201d by G. Zhu et al. proposes a novel macroblock (MB) level rate control method based on binocular perception.The paper entitled \u201cEfficient parallel video processing techniques on GPU: from framework to implementation\u201d by H. Su et al. proposes serial optimization methods, including the multiresolution multiwindow for motion estimation, multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and direction-priority deblocking filter.The paper entitled \u201cGenetic algorithm and graph theory based matrix factorization method for online friend recommendation\u201d by Q. Li et al. proposes a hybrid genetic algorithm and graph theory based online recommendation algorithm. Hybrid genetic algorithm is used for multiobjective combinatorial problem.The paper \u201cDynamic cooperative clustering based power assignment: network capacity and lifetime efficient topology control in cooperative ad hoc networks\u201d by X.-H. Li et al. proposes a dynamic cooperative clustering based power assignment (DCCPA) algorithm to solve a new topology control problem: network capacity and energy efficient in cooperative wireless ad hoc networks.The paper entitled \u201cA generalized quantum-inspired decision making model for intelligent agent\u201d by Y. Hu and C. K. Loo, a generalized quantum-inspired decision making model (QDM) is proposed. QDM helps to extend previous research findings and model more complicated decision space.In the paper entitled \u201cUnsupervised chunking based on graph propagation from bilingual corpus\u201d by L. Zhu et al. presents a novel approach for unsupervised shallow parsing model trained on the unannotated Chinese text of parallel Chinese-English corpus.The paper entitled \u201cResearch on universal combinatorial coding\u201d by J. Lu et al., the concept of universal combinatorial coding is advanced and the related properties are analyzed. 
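For the tag-based image relatedness work by Z. Xu et al. summarized above, this overview does not give the exact measure, so the sketch below uses plain Jaccard overlap of tag sets as an assumed stand-in for a tag-based relatedness score; the tag lists are hypothetical.

def tag_relatedness(tags_a, tags_b):
    """Jaccard overlap of two images' tag sets: |A intersection B| / |A union B|."""
    a, b = set(tags_a), set(tags_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical Flickr-style tag lists.
image_1 = ["beach", "sunset", "sea", "holiday"]
image_2 = ["sunset", "sea", "clouds"]
image_3 = ["office", "meeting"]

print(tag_relatedness(image_1, image_2))  # 0.4 (2 shared tags out of 5 distinct)
print(tag_relatedness(image_1, image_3))  # 0.0 (no shared tags)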
Universal combinatorial coding can stride over the three branches of coding theory.In the paper entitled \u201cUnsupervised Chunking based on graph propagation from bilingual corpus\u201d present a novel approach for unsupervised shallow parsing model trained on the unannotated Chinese text of parallel Chinese-English corpus.L. Zhu et al. in their paper entitled \u201cMeasurement and analysis of P2P IPTV program resource\u201d by W. Wang et al. focuses on characteristic analysis of program resources, including the distributions of length of program names, the entropy of the character types, and hierarchy depth of programs.The paper entitled \u201cSimple-random-sampling-based multiclass text classification algorithm\u201d by W. Liu et al. investigates the power law distribution and proposes a SRSMTC algorithm for the MTC problem. This algorithm is an excellent industrial algorithm for its easy implementation and convenient transfer to many applications.The paper entitled \u201cAn improved artificial bee colony algorithm based on balance-evolution strategy for unmanned combat aerial vehicle path planning\u201d by B. Li et al. a novel artificial bee colony (ABC) algorithm improved by a balance-evolution strategy (BES) is applied in this optimization scheme.In the paper entitled \u201cNew similarity of triangular fuzzy number and its application\u201d by X. Zhang et al., a new method shape's indifferent area and midpoint (SIAM) to measure the similarity of two triangular fuzzy numbers is proposed, which considers the shapes and midpoints of them.In the paper entitled \u201cModeling and computing of stock index forecasting based on neural network and Markov chain\u201d by Y. Dai et al. presents a new forecast method by the combination of improved back-propagation (BP) neural network and Markov chain, as well as its modeling and computing technology.The paper entitled \u201cThe theoretical limits of watermark spread spectrum sequence\u201d by N. Jiang and J. Wang proposes LAC&TCC properties and gives the theoretical limits of SS watermarking sequences under the attacks of cropping and translation.The paper entitled \u201cA survey on investigating the need for intelligent power-aware load balanced routing protocols for handling critical links in MANETs\u201d by B. Sivakumar et al. discusses a short survey on the specialized algorithms and protocols related to energy efficient load balancing for critical link detection in the recent literature.The paper entitled \u201cStock price change rate prediction by utilizing social network activities\u201d by S. Deng et al. proposed a model to generate heuristically optimized trading rules by utilizing social network activities and historical traded prices and transaction volumes. The proposed model extracts three kinds of features from multiple sources.The paper entitled \u201cOntology-based multiple choice question generation\u201d by M. Al-Yahya aims to address this issue by assessing the performance of these systems in terms of the efficacy of the generated MCQs and their pedagogical value.The paper entitled \u201cA simulation approach to decision making in IT service strategy\u201d by E. Orta and M. Ruiz explores the use of simulation modeling in this scope and the works founded show that different simulation approaches have been used.The paper entitled \u201cA hybrid approach to protect palmprint templates\u201d by H. Liu et al. 
proposes a hybrid approach that combines random projection and fuzzy vault to improve the performances at these three points.The paper entitled \u201cOntoTrader: an ontological web trading agent approach for environmental information retrieval\u201d by L. Iribarne et al. showed how traditional traders, properly extended to operate in WIS, are a good solution for information retrieval.The paper entitled \u201cAn effective news recommendation method for microblog user\u201d by W. Gu et al. proposed NEMAH system architecture to tackle the personalized news recommendation based on microblog and subclass popularity prediction.The paper entitled \u201cReHypar: a recursive hybrid chunk partitioning method using NAND-flash memory SSD\u201d by J. No et al. presents a new form of hybrid data allocation scheme, called recursive hybrid chunk partitioning (ReHypar), which can be used in the hybrid structure whose address space is organized by integrating a small portion of NAND-flash SSD partition with the much larger HDD partition.The paper entitled \u201cBgCut: automatic ship detection from UAV images\u201d by C. Xu et al., an improved universal background model based on Grabcut algorithm is proposed to segment foreground objects from sea automatically.In the paper entitled \u201cChinese unknown word recognition for PCFG-LA parsing\u201d by Q. Huang et al. investigates the recognition of unknown words in Chinese parsing. Two methods are proposed to handle this problem.The paper entitled \u201cA novel hybrid self-adaptive bat algorithm\u201d have hybridized this algorithm using different DE strategies and applied these as a local search heuristics for improving the current best solution directing the swarm of a solution towards the better regions within a search space.I. Fister Jr. et al. in the paper entitled \u201cAn improved mixture-of-Gaussians background model with frame difference and blob tracking in video stream\u201d adopt a blob tracking method to cope with this situation.L. Yao and M. Ling in the paper entitled \u201cA dynamic ensemble framework for mining textual streams with class imbalance\u201d propose a new ensemble framework, clustering forest, for learning from the textual imbalanced stream with concept drift (CFIM).G. Song and Y. Ye in their paper entitled \u201cResearch on dynamic routing mechanisms in wireless sensor networks\u201d by A. Q. Zhao et al., a collection tree protocol based, dynamic routing mechanism was proposed for WirelessHART network.In the paper entitled \u201cAn approach for integrating the prioritization of functional and nonfunctional requirements\u201d propose an approach which aims to integrate the process of prioritizing functional and nonfunctional requirements.M. Dabbagh and S. P. Lee in the paper entitled \u201cA collaborative recommend algorithm based on bipartite community\u201d propose a bipartite community partitioning algorithm according to the real data environment of collaborative recommendation.Y. Fu et al. in the paper entitled \u201cA novel two-stage illumination estimation framework for expression recognition\u201d by Z. 
Zhang et al., a two-stage illumination estimation framework is proposed based on three-dimensional representative face and clustering, which can estimate illumination directions under a series of poses.In the paper entitled \u201cTrajectory-based morphological operators: a model for efficient image processing\u201d propose a new model for computing mathematical morphology operations, the so-called morphological trajectory model (MTM), in which a morphological filter will be divided into a sequence of basic operations.A. Jimeno-Morenilla et al. in their paper entitled \u201c \u201cMultiobjective resource-constrained project scheduling with a time-varying number of tasks\u201d propose an innovative evolutionary algorithm-based approach, called mapping of task ID for centroid-based adaptation with random immigrants (McBAR) and applied this to search for optimal schedules as solutions to the problems.M. B. Abello and Z. Michalewicz in the paper entitledPEM-PCA: a parallel expectation-maximization PCA face recognition architecture\u201d present the parallel architecture proposal including the first PCA optimization (EM-PCA) and the second parallel face recognition architecture in three substages (PEM-PCA).K. Rujirakul et al. in the paper entitled \u201c\u03bc: Multilingual Sentence Boundary Detection ModeliSentenizer-\u201d D. F. Wong et al. present a multilingual sentence boundary detection system (iSentenizer-\u03bc) for Danish, German, English, Spanish, Dutch, French, Italian, Portuguese, Greek, Finnish, and Swedish languages. The proposed system is able to detect the sentence boundaries of a mixture of different text genres and languages with high accuracy.In the paper \u201cPalmprint based multidimensional fuzzy vault scheme\u201d present a multidimensional fuzzy vault scheme (MDFVS) in which a general subspace error-tolerant mechanism is designed and embedded into FVS to handle intraclass variances.H. Liu et al. in the paper entitled \u201cA relationship: word alignment, phrase table, and translation quality\u201d focus on formulating such a relationship for estimating the size of extracted phrase pairs given one or more word alignment points.L. Tian et al. in the paper entitled \u201cThe application of similar image retrieval in electronic commerce\u201d focus on the online marketing platform based on similar image retrieval that connects information platform with purchase platform.Y. Hu et al. in the paper entitled \u201cTowards emotion detection in educational scenarios from facial expressions and body movements through multimodal approaches,\u201d by M. Saneiro et al., an annotation methodology to tag facial expression and body movements that conform to changes in the affective states of learners while dealing with cognitive tasks in a learning process is presented.In the paper entitled \u201cA novel resource management method of providing operating system as a service for mobile transparent computing\u201d by Y. Xiong et al. presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals.The paper entitled \u201cDistributed SLAM using improved particle filter for mobile robot localization\u201d by F. Pei et al., the improved particle filter was proposed to estimate the state vector of distributed SLAM. 
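Since the distributed-SLAM paper above builds on particle filtering, a generic bootstrap (SIR) particle filter for a one-dimensional random-walk state is sketched below as background. It is not the authors' improved filter; the noise levels, particle count, and synthetic trajectory are illustrative assumptions.

import numpy as np

def bootstrap_particle_filter(observations, n_particles=500, motion_std=0.3,
                              obs_std=0.5, seed=0):
    """Generic SIR (bootstrap) particle filter for a 1-D random-walk state.

    Each step: propagate particles with the motion model, weight them by the
    observation likelihood, then resample in proportion to the weights.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)  # initial belief
    estimates = []
    for z in observations:
        # Predict: random-walk motion model.
        particles = particles + rng.normal(0.0, motion_std, n_particles)
        # Update: Gaussian likelihood of the measurement z.
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Resample (multinomial) to concentrate particles on likely states.
        particles = rng.choice(particles, size=n_particles, p=weights)
        estimates.append(particles.mean())
    return np.array(estimates)

# Synthetic 1-D trajectory and noisy measurements of it.
rng = np.random.default_rng(42)
true_path = np.cumsum(rng.normal(0.0, 0.3, 30))
measurements = true_path + rng.normal(0.0, 0.5, 30)
estimates = bootstrap_particle_filter(measurements)
print("final true position %.2f, filter estimate %.2f" % (true_path[-1], estimates[-1]))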
The performance of the particle filter is affected by several factors in its operation.In the paper entitled \u201cUnsupervised quality estimation model for English to German translation and its application in extensive supervised evaluation\u201d propose an unsupervised MT evaluation metric using universal part-of-speech tagset without relying on reference translations.A. L.-F. Han et al. in their paper entitled \u201cCharacterizing the effects of intermittent faults on a processor for dependability enhancement strategy\u201d by C. (Saul) Wang et al. focuses on the effects of intermittent faults on NET versus REG on one hand and the implications for dependability strategy on the other.The paper entitled \u201cA system for sentiment analysis of colloquial Arabic using human computation\u201d by A. S. Al-Subaihin and H. S. Al-Khalifa is to design and implement a system that can extract the sentiments of Arabic content found in Arabic websites that has informal nature.The aim of the paper entitled \u201cMoving object detection using dynamic motion modelling from UAV aerial images\u201d by A. F. M. S. Saif et al. presents moments based motion parameter estimation called dynamic motion model (DMM) to limit the scope of segmentation called SUED which influences overall detection performance.The paper entitled \u201cA novel artificial bee colony algorithm based on internal-feedback strategy for image template matching\u201d choose the well-known normalized cross correlation model as a similarity criterion. The searching procedure for the best-match location is carried out through an internal-feedback artificial bee colony (IF-ABC) algorithm.B. Li et al. in their paper entitled \u201cDevelopment of biological movement recognition by interaction between active basis model and fuzzy optical flow division\u201d by B. Yousefi and C. K. Loo, a human action recognition method has been proposed; this method is based on interrelevant calculated motion and form information followed the biologically inspired system.In the paper entitled \u201cAlgorithm for image retrieval based on edge gradient orientation statistical code\u201d by J. Zeng et al., the statistical method of 8-direction chain code was applied to the statistics for edge gradient direction, to propose a novel edge gradient orientation statistical code (EGOSC), which was sequentially used as feature vector to represent the shape.In the paper entitled \u201cA new gravitational particle swarm optimization algorithm for the solution of economic emission dispatch in wind-thermal power system\u201d by S. Jiang et al., a new hybrid optimization approach, namely, gravitational particle swarm optimization algorithm (GPSOA), is proposed in this paper to solve economic emission dispatch (EED) problem including wind power.In the paper entitled \u201cAn efficient hierarchical video coding scheme combining visual perception characteristics\u201d by P. Liu and K. Jia, a hierarchical video coding scheme based on human visual systems (HVS) is proposed in this paper. The proposed scheme uses a joint video coding framework that consists of visual perception analysis layer (VPAL) and video coding layer (VCL).In the paper entitled \u201cSignal waveform detection with statistical automaton for internet and web service streaming\u201d propose an approach to signal waveform detection for Internet and Web streaming, with novel statistical automatons.K.-K. Tseng et al. 
in the paper entitled \u201cParallel simulation of HGMS of weakly magnetic nanoparticles in irrotational flow of inviscid fluid\u201d present the process of high gradient magnetic separation (HGMS) using microferromagnetic wire for capturing the weakly magnetic nanoparticles in the irrotational flow of inviscid fluid.K. Hournkumnuard et al. in the paper entitled \u201cCreation of reliable relevance judgments in information retrieval systems evaluation experimentation through crowdsourcing: a review\u201d by P. Samimi and S. D. Ravana is intended to explore different factors that have an influence on the accuracy of relevance judgments accomplished by workers and how to intensify the reliability of judgments in crowdsourcing experiment.The paper entitled \u201cMonte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement\u201d by J. Siswantoro et al., a nondestructive method for volume measurement of irregularly shaped food products that is based on the Monte Carlo method using a computer vision system is proposed.In the paper entitled \u201c"} {"text": "Adult intussusception is rare and usually caused by a tumor acting as the lead point. Therefore, laparotomy should be considered for the treatment. Laparoscopic procedures for use in cases of adult intussusception have been recently reported; however, there is no consensus regarding the safety and efficacy. Here, we describe a successful case of laparoscopic management of an octogenarian adult intussusception caused by an ileal lipoma, which was preoperatively suspected. An 87-year-old male presented with progressive abdominal distention and vomiting. Contrast radiography of the small intestine showed an ileal tumor, and magnetic resonance imaging indicated a target-like mass, consistent with an ileal intussusception. The patient was suspected with an intussusception due to an ileal lipoma, and laparoscopic surgery was performed. An approximately 10-cm-long ileal intussusception with a preceding tumor was present, and partial resection of the ileum, including the tumor, was performed. Macroscopic examination of the excised specimen showed a pedunculated tumor measuring 4.0\u2009\u00d7\u20093.5\u2009\u00d7\u20091.9\u00a0cm with an uneven surface, yielding a histological diagnosis of lipoma. The patient had an uneventful recovery and was discharged on postoperative day 8. This successful case showed that laparoscopic surgery can be a useful, safe, and efficacious procedure for adult intussusception, even in octogenarians. Adult intussusceptions constitute approximately 5% to 10% of all intussusceptions and are usually caused by a tumor that acts as the lead point . While lAn 87-year-old male presented at our department with progressive abdominal distention and vomiting for 2\u00a0weeks. His medical history included hypertension and revealed that he had undergone laparoscopic-assisted ileocecal resection for cecal cancer in January 2004. Physical examination revealed a distended abdomen and a tender right lower quadrant. Laboratory test results, including serum levels of carcinoembryonic antigen and carbohydrate antigen 19\u20139, were within normal limits. Abdominal radiography revealed a prominently dilated small intestine with some air-fluid interfaces. A long tube was inserted to reduce the internal pressure of the small intestine, and Gastrografin contrast radiography was performed. 
It showed an approximately 40-mm-diameter ileal tumor 1, commenced oral fluids on POD 2, and was discharged on POD 8.et al. [Intussusception is rare in adults compared with that in children, and the frequency is approximately 5% to 10% of all intussusceptions . Of the et al. reviewedet al. . We had et al. .et al. [et al. [http://www.ncbi.nlm.nih.gov/pubmed/), there have been 14 reported cases described by the words \u2018adult intussusception,\u2019 \u2018lipoma,\u2019 and \u2018laparoscopy\u2019 including the present case (Table\u00a0et al. [et al. [First treatment for adult intussusception is debatable. Many cases of adult intussusceptions are caused by an identifiable etiology; laparotomy should be considered as the first line of treatment. As a result of the marked improvement in laparoscopic devices and surgical techniques, there are reports of its use in cases of adult intussusception ,11. Howeet al. and Tart [et al. reportede\u00a0et al. . This di [et al. reportedLaparoscopic surgery in experienced hands may be considered for the treatment for adult intussusception, even in octogenarians, as it is a safe and efficacious procedure.Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal."} {"text": "The purpose of this article was to review the biocompatibility, physical, and mechanical properties of the polyamide denture base materials. An electronic search of scientific papers from 1990-2014 was carried out using PubMed, Scopus and Wiley Inter Science engines using the search terms \u201cnylon denture base\u201d and \u201cpolyamide denture base\u201d. Searching the key words yielded a total of 82 articles. By application of inclusion criteria, the obtained results were further reduced to 24 citations recruited in this review. Several studies have evaluated various properties of polyamide (nylon) denture base materials. According to the results of the studies, currently, thermo-injectable, high impact, flexible or semi-flexible polyamide is thought to be an alternative to the conventional acrylic resins due to its esthetic and functional characteristics and physicochemical qualities. It would be justifiable to use this material for denture fabrication in some cases such as severe soft/ hard tissue undercuts, unexplained repeated fracture of denture, in aesthetic-concerned patients, those who have allergy to other denture base materials, and in patients with microstomia.\u00a0 Although polyamide has some attractive advantages, they require modifications to produce consistently better properties than the current polymethyl methacrylate (PMMA) materials. Moreover, since there is a very limited knowledge about their clinical performance, strict and careful follow-up evaluation of the patients rehabilitated with polyamide prosthesis is recommended. These of2)4-COOH.-15 Nylon2)4-COOH.-17 Moreo2)4-COOH. toxicolo2)4-COOH. use of h2)4-COOH. On the oThis study is a structured literature review of articles published from 1990 to 2014. PubMed, Scopus, and Wiley Inter Science databases were used to search \u201cnylon denture base\u201d and \u201cpolyamide denture base\u201d key words. The search was limited to English language publications. The articles were reviewed by two experts in the field of prosthodontics. Searching the key words yielded a total of 82 articles. 
As the inclusion criteria, the publications had to be exactly related to the key words; no editorials and manufacturer-supported publications were accepted for review process. By application of inclusion criteria, the obtained results further reduced to 24 citations that formed the basis for this review .Flexural PropertiesThere are some studies that have evaluated the mechanical properties like flexural strength, modulus of elasticity, deflection at breakage, and tensile strength of nylon as a denture base material.et al. in 2005 [Yunus in 2005 evaluateTakabayashi in 2010 [et al. in 2011,[Hamanaka in 2011, comparedet al.[et al. in 2013,[et al.[In 2012, Ucar et al. evaluateet al.-25 Anothet al. reported in 2013, mechanic3,[et al. comparedWater sorption and water solubilityet al. in 2003,[Lai in 2003, studied et al. in 2014, the sorption and solubility of heat-cured polymethyl methacrylate denture base resin and flexible denture base resin were compared and it was found that heat-cured PMMA had more sorption and solubility values than flexible (thermoplastic polyamide nylon) resin.[In the study carried out by Takabayashi in 2010,), but Lun) resin. The studn) resin.Hardnesset al. in 2012[et al. (2014)[Ucar . in 2012compared l. (2014) PMMA demColor stability and stain resistanceet al. in 2003,[et al. in 2010[et al. in 2011[In the study of Lai in 2003, the colo. in 2011 compared. in 2011-33 It wa. in 2011 Another . in 2011-36Bond strength to other materialsAuto-polymerizing resin is often used as reline or repair material for PMMA denture base, but theret al. in 2009[et al. showed that bond strength of repair materials increased significantly after chemical treatments of denture base materials.[Katsumata . in 2009 studied . in 2009 This proaterials. However In another study in 2013, it was dthat tribochemical silica coating and 4-META/MMA-TBB resin could cause the greatest post-thermocycling bond strength to polyamides among the different surface treatment methods used in this study . et al. in 2013[2O3. Therefore air abrasion of polyamide resins should be avoided in order not to impair their bond strength to silicon-based soft denture liners.Polyamide was exceedingly difficult to bond to an auto-polymerizing repair resin; also the shear bond strength could improve using tribochemical silica coating followed by application of 4-META/MMA-TBB resin. Korkmaz . in 2013 used peeDimensional accuracy of nylon denture base materialset al.[et al. in 2004[Despite several advantages of PMMA, one of its main disadvantages is polymerization shrinkage during processing. For solving this problem, various injection-molded materials and processing techniques are now available. The studet al. was the . in 2004 compared. in 2004-45Surface roughnessA rougher surface can cause discomfort to patients and also discoloration of the prosthesis; it may contribute to microbial colonization and biofilm formation.et al. in 2010[et al. in 2014[Abuzar . in 2010 evaluate. in 2010 Similar . in 2010 the conv. in 2014who evaluEffect of denture cleansers on polyamide denture base materialset al. in 2013,[Adhesion of microorganisms, especially yeasts, to the denture base materials is an important issue that compromises its service and efficacy. Although in 2013, the effe in 2013,Cytotoxic evaluation of polyamideet al.[There are several studies in regard to cytotoxicity of denture base materials.-59 It haet al. investigIn 2013, a clinicet al. in 2013[Sinch . 
in 2013 reported. Physical and clinical properties of polyamides were briefly discussed in this review article. Although the flexural strength, modulus of elasticity and rigidity of nylon (polyamide) denture base materials are relatively low, they demonstrate high impact strength, toughness and resistance to fracture. It has been suggested that adding glass fibers to polyamides could increase their stiffness and other mechanical properties. The use of these materials for non-metal clasp dentures has some advantages regarding esthetics and degree of retention. However, these materials show some degree of color instability in different beverages. The bond strength of these materials to repair resins is low, but it can be significantly enhanced by silica coating with the Rocatec system. Use of denture cleansers increases the surface roughness of these materials, and their cytotoxicity increases after long-term use. It has also been demonstrated that polyamides have a rougher surface than other resin materials, which promotes greater bacterial and fungal colonization. Therefore, selection of a thermoplastic resin and adaptation of the design for each clinical case should be made only after a thorough understanding of the properties of polyamide materials."} {"text": "The aim of this study was to evaluate some reproduction performances in Ouled-Djellal rams. This study involved genital organs removed after slaughter from 54 rams at the municipal slaughterhouse of Batna (East Algeria). The measurements of survival and mobility of epididymal sperm at 0, 24, 48 and 72 h after collection revealed significant (p<0.05) to highly significant (p<0.001) differences according to time. Concerning sperm motility, the values were 91.00±2.40%, 89.20±2.40%, 77.00±6.20% and 62.60±1.20% at 0, 24, 48 and 72 h, respectively. Likewise, the viability rates of live sperm were 82.15±1.48%, 77.67±1.74%, 66.56±1.95% and 52.30±1.46% at 0, 24, 48 and 72 h, respectively. This study revealed that epididymal spermatozoa stored at 4 °C for 72 h kept their mobility and vitality at nearly half of the original values. The Ouled-Djellal (OD) breed is the most dominant in this region, representing nearly 60% of the 22.868 million heads. The aim of the present study was to determine the effect of storage time on the quality of epididymal spermatozoa kept at 4 °C by evaluating their motility and viability at 0, 24, 48 and 72 h after the rams' slaughter. Ethical approval was not necessary; the samples were taken from slaughtered animals. This study involved 54 OD rams at the municipal slaughterhouse of Batna (East Algeria). The genital organs were removed after slaughter of the animals. Before sperm collection, paraffin oil was injected into the vas deferens, an incision was then made in the apical region of the epididymal tail, and the epididymal sperm was collected and stored at 4 °C without dilution. Mobility and vitality were then evaluated, and statistical analysis was performed with the software GraphPad Prism®5,
Version 5.03, to calculate the mean, the standard deviation and the standard error of the mean (SEM) and the statistical signification was set at p<0.05.We used the Software Graph Pad PrismThe analysis of massal motility and vitality are summarized in in vitro with extended media, although differences decreased at 48 and 72 h [et al., [et al., [et al., [et al., [et al., [et al., [et al., [etal., [The post-mortem recovery of epididymal sperm from dead animals is a useful method that would permit the creation of germplasm banks to preserve endangered breeds and contributes to the preservation of biodiversity . The speand 72 h . In bull[et al., obtained[et al., and Mir [et al., . Lones e[et al., have obt[et al., have not[et al., and Mir [et al., in rams,[etal., in bullset al., [etal., [in vitro fertilization (IVF) rates of in vivo matured oocytes with epididymal sperm, and conceived that epididymal sperm was better for IVF than ejaculated semen due to deficiency of contact with seminal plasma.The motility of spermatozoa varies with the transport temperature (ambient or refrigeration) and its et al., did not [etal., have not[etal., obtainedet al., [et al., [etal., [According to Tamayo-Canul et al., , the sto[et al., revealed[et al., . However[et al., . It is w[et al., ,8,18,28.[et al., . In Iber[etal., concludeThis study showed that epididymal sperm can be used for more or less short time after storage. A storage time up to 72 h at 4\u00b0C can lead to a reduction of nearly half in motility and viability rates. We can conclude that the cauda epididymal sperm stored at the above-mentioned conditions constitutes, despite an obvious reduction in viability, an alternative source of gametes of meritorious parents for artificial insemination or IVF. However, for a better evaluation of the fertility or performance, rams should be tested for different trials such as scrotal measurement, semen examination, libido testing, hormonal profile and other examinations."} {"text": "R2 = 0.86, p = 4.88 \u00d7 10\u221223), male age standardised death rate and percent all cause death attributed to MND . Conclusion: Legacy petrol lead emissions are associated with increased MND death trends in Australia. Further examination of the 20 year lag between exposure to petrol lead and the onset of MND is warranted.Background: The age standardised death rate from motor neuron disease (MND) has increased from 1.29 to 2.74 per 100,000, an increase of 112.4% between 1959 and 2013. It is clear that genetics could not have played a causal role in the increased rate of MND deaths over such a short time span. We postulate that environmental factors are responsible for this rate increase. We focus on lead additives in Australian petrol as a possible contributing environmental factor. Methods: The associations between historical petrol lead emissions and MND death trends in Australia between 1962 and 2013 were examined using linear regressions. Results: Regression results indicate best fit correlations between a 20 year lag of petrol lead emissions and age-standardised female death rate ( Motor Neuron Disease (MND) is a general term for neurological disorders that affect motor neurons which are cells that control voluntary muscles including those affecting breathing speaking, swallowing and walking. Amyotrophic lateral sclerosis (ALS) is the commonest form of MND, affecting both corticomotor neurons in the cerebrum and the anterior horn cells in the brainstem and spinal cord . This neet al. (1997) [et al. (1994) [et al. 
(2003) [et al. (2000) [et al. (2013) [et al. (2005) [et al. (2005) [et al. (2005) [A review of the literature indicates that MND mortality rates have generally shown increasing trends. Veiga-Cabo . 1997) document. (1997) found tht al. 199 [4] foun. (2003) found th. (2000) studied documen. (2005) , observeet al., 2014) [While 10% of all MND cases are caused by monogenic mutations that are increasingly identified [., 2014) . Althoug., 2014) . Exposur., 2014) .Lead is a potent neurotoxin and there is a growing body of evidence that past lead exposure plays a possible causal role in a subset of MND patients . Multiplet al. (2014) [et al. (2014) [Wang . (2014) performe. (2014) combinedet al. (2002) [et al. (2003) [et al. (2002) [ALAD) gene and the vitamin D receptor gene (VDR) and the influence of genotype on the previously observed association of MND with lead exposure. They concluded that genetic susceptibility conferred by polymorphisms in the ALAD gene may affect MND risk, possibly through a mechanism related to internal lead exposure (such as from bone lead). There are two retrospective case-control studies that found an association between bone lead and MND. Kamel . (2002) assessed. (2003) used the. (2002) to inveset al. (2005) [et al. (2002) [et al. (2005) [et al. (2005) [ALAD gene was associated with a 1.9-fold increase in MND risk.The Kamel . (2005) study us. (2002) along wi. (2005) observed. (2005) observedet al. (2015) [Eum . (2015) measuredIn Australia, the majority of non-occupational lead exposure occurs in mining towns ,28 and oRecent progress has been made in the understanding numerous chronic health effects associated with lead exposure. For example, there are strong associations between lead and a multitude of diseases such as autism ,32 preecA major source of non-occupational lead exposure is from atmospheric emissions from leaded petrol. Historically this exposure source has been the principal driver of blood lead levels in the Australian population . Leaded The accumulated lead emitted from automotive tailpipes was deposited into urban soils and dusts with a legacy of lead still present within the top 5 cm of soil. Because lead stays as a residue until geologic processes gradually cover or remove its expected \u201chalf-life\u201d is approximately 700 years . The priInformation about the MND deaths was collected on death certificates and certified by either a medical practitioner or a coroner. Registration of deaths is compulsory in Australia and is the responsibility of each state and territory Registrar of Births, Deaths and Marriages under jurisdiction specific legislation. On behalf of the Registrars, deaths data are assembled, coded and published by statistical agencies. These agencies have varied since 1900 and have included state based statistical offices, the Commonwealth Statistician's Office and the Commonwealth Bureau of Census and Statistics, now known as the Australian Bureau of Statistics (ABS). Cause of Death Unit Record File data are provided to the Australian Institute of Health and Welfare (AIHW) by the Registries of Births, Deaths and Marriages and the National Coronial Information System (managed by the Victorian Department of Justice) and include cause of death coded by the ABS (ICD10 code G12.2). The data are maintained by the AIHW in the National Mortality Database.The Australian MND death rate is available for a fifty four year period ,43. The 2R value and smallest p-value. 
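To illustrate the lag-selection step described here (ordinary least-squares regressions of an MND death-rate series on accumulated petrol lead emissions shifted forward by 10 to 24 years, keeping the lag with the largest R2 value and smallest p-value), a minimal sketch in Python follows. The function name and the input series are hypothetical placeholders filled with synthetic data; they are not the study's data or code.

```python
# Minimal sketch of the forward-lag selection procedure (illustrative only).
import numpy as np
from scipy import stats

def best_forward_lag(emission_years, emissions, outcome_years, outcome, lags=range(10, 25)):
    """Return (lag, R2, p) for the forward lag with the largest R2."""
    results = []
    for lag in lags:
        # Emissions lagged forward by `lag` years are paired with the outcome series.
        lagged_years = emission_years + lag
        common = np.intersect1d(lagged_years, outcome_years)   # years present in both series
        x = emissions[np.isin(lagged_years, common)]
        y = outcome[np.isin(outcome_years, common)]
        slope, intercept, r, p, se = stats.linregress(x, y)
        results.append((lag, r ** 2, p))
    return max(results, key=lambda t: t[1])

# Hypothetical example data (synthetic, for illustration only):
em_years = np.arange(1933, 2003)
emissions = np.cumsum(np.random.rand(em_years.size))           # "accumulated" emissions
out_years = np.arange(1962, 2014)
death_rate = 1.3 + 0.02 * (out_years - 1962) + np.random.normal(0, 0.1, out_years.size)

lag, r2, p = best_forward_lag(em_years, emissions, out_years, death_rate)
print(f"best forward lag = {lag} years, R^2 = {r2:.2f}, p = {p:.2g}")
```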
Twenty year lag of MND response was thus selected for further evaluation for accumulated petrol lead emissions.Taking into account the combination of age of exposure and the bone lead reservoir of exposure we assume a lag in the onset of MND response. The best fit of MND response lag was determined by performing linear regressions between accumulated petrol lead emissions between 1933 and 2002 with forward lags of 10 to 24 years for male and female MND death rates and for percent MND all cause death in Australia between 1962 and 2013. The best fit was 20 years for lag response because 20 years showed the largest The petrol lead emission trend calculated by Kristensen (2015) was convAs illustrate in The combined male and female percent all cause death attributed to MND increased 300% from 0.12% in 1959 to 0.48% in 2013 and this result is illustrated in R2 = 0.96, p = 9.4 \u00d7 10\u221223). Likewise regression results between a 20 years lag of petrol lead emissions and age-standardized death rates indicate positive correlations for females . As illustrated in R2 = 0.98, p = 2.6 \u00d7 10\u221244) . The data used in the regression analysis and the statistical results are presented in a Regression results between a 20 years lag of petrol lead emissions and age-standardized death rates indicate positive correlations for males [et al. (2000) [There are other limitations of the study. Chio . (2000) acknowle. (2000) notes thet al. (2000) [et al. (2000) [Chio . (2000) also sug. (2000) suggest et al. (2015) [Another factor that had the potential to effect the international MND mortality trends was the international diagnostic criteria for ALS/MND diagnoses named the El Escorial diagnostic criteria which was implemented in the late 1990s for the purpose of standardising the steps of diagnosis, clarifying the complexity of various clinical features and ensuring diagnostic criteria. Agosta . (2015) state thIt must be noted that there are other factors causing ALS, such as genetics in familial ALS , occupatet al. (2005) [et al., (2013) [et al. (2005) [et al. (2005) [et al. (2002) [et al. (1992) [et al. (2007) [et al. (2009) [The introduction emphasized the increasing trend of MND mortality; however, there have also been some exceptions to the international trend of MND mortality rate increases. For examples, while Noonan . (2005) observed, (2013) indicate. (2005) conclude. (2005) also not. (2002) found th. (1992) observed. (2007) noted th. (2009) found thWe acknowledge that we cannot conclude that the association between the forward lag of petrol lead emissions are causally related. However if these variables were found to be causally related, then based on the plateau of the forward lag of the accumulated petrol curve at the end of the petrol lead accumulation curve and the Urban environments are the habitat for most of the world\u2019s peoples and it is essential to fully understand the qualities which sustain life. Non-occupational lead exposure from the legacy of petrol lead emissions was a major contaminant introduced into urban environments worldwide and it provides a case example of the need for further study of processes that contaminate urban environments. The history of the use in petrol lead worldwide has been described as occurring in two phases. The first phase involved the rise in total lead emissions between the 1930s and the 1970s, and the second phase involved the phase down of lead in various countries between the 1970s and the present time . 
After tWe postulate that the petrol lead source and its legacy of aerosol emission and accumulation became a driver for increasing the incidence of MND. The lead emissions contaminate surface soil and are then resuspended into the atmosphere during dry periods with fine lead enriched soil dust migrating into homes ,63,64. CMultiple lines of evidence suggest that lead exposure plays a significant role in the aetiology of MND. These lines of evidence include the meta-analysis between MND and occupational lead exposure , previouNon-occupational lead exposure remains a pervasive and a continuing worldwide problem from air lead, soil lead and lead dust in homes, especially in urban environments. Primary prevention of exposure is paramount to solving the negative health outcomes associated with lead exposure. The Precautionary Principle as defined by the United Nations Educational, Scientific and Cultural Organisation (UNESCO) (2005) establisThe emissions from leaded petrol have left a legacy in the environment. The legacy shows up as many forms of chronic diseases in the population. Diseases in the population can be both from early childhood exposure and later exposure during adulthood resulting in the accumulated storage of lead in bones and its subsequent release later in life. Regression results indicate positive correlations between a 20 year lag of petrol lead emissions and MND trends in Australia. The results indicate that non-occupational exposures to the population from past Australian petrol lead emissions and its environmental accumulation may have contributed to the development of sporadic MND. We predict that when evaluated in other countries similar findings will also be found. Further examination of a lag between exposure to petrol lead accumulation in urban soils and dusts and MND trends is warranted. If these studies confirm that exposure to lead from past emissions of lead in petrol is causally related to the development of MND, then lead contaminated urban soils and house dusts may require extensive remediation or isolation to prevent further development of sporadic MND in future generations."} {"text": "The aim of this systematic review was to synthesize and appraise the evidence of the benefits of presbyopic correction on the cornea for visual function.Comprehensive search was conducted in MEDLINE using keywords like \u201cpresbylasik\u201d, \u201cpresbyopic refractive surgery\u201d, \u201ccorneal pseudoaccommodation\u201d and \u201ccorneal multifocality\u201d. We reviewed corrected and uncorrected visual acuities for distance and near , uncorrected near visual acuity (UNVA), corrected distance visual acuity (CDVA), distance corrected near visual acuity (DCNVA), corrected near visual acuity (CNVA)), along with the refractive outcomes in spherical equivalent (SE) and astigmatism comparing the differences observed between preoperative myopic and hyperopic patients, as well as among techniques.Thirty-one studies met the inclusion and quality criteria. Monovision provides excellent distance and near uncorrected acuities, but with a 17% retreatment and a 5% reversal rate. Initial multifocal ablations result in 12% loss of 2 or more lines of CDVA, and a 21% retreatment rate. Laser Blended Vision provides excellent UDVA, but with a 19% retreatment rate. Initial experiences with Supracor show moderate predictability and a 22% retreatment rate. Intracor results in 9% loss of 2 or more lines of CDVA. 
KAMRA provides excellent UDVA, with only a 1% retreatment rate, but a 6% reversal rate. Initial experiences with PresbyMAX provided excellent UNVA and DCNVA, showing excellent predictability and a 1% reversal rate.The findings have implications for clinicians and policymakers in the health-care industry and emphasize the need for additional trials examining this important and widely performed clinical procedure. Surgical presbyopic correction has seen a tremendous increase in interest in the recent times. The application of excimer laser systems in surgically correcting presbyopia is as old as the laser refractive surgery . RefractMoreira et al. said in 1993 [The aim of this systematic review was to synthesize and appraise the evidence of benefits of presbyopic correction on the cornea for visual function.We categorized the found techniques as:Monovision is another extended technique where thVinciguerra et al. proposedA pseudo-accommodative cornea is realized basically in the form of a peripheral near zone (concentric ring for near vision) or in thCharman concludeArtola et al. found evDai was one Ortiz et al. charactePresbyLASIK is another important addition to the techniques of multifocal ablations. The term presbyLASIK indicates a corneal surgical procedure based on traditional laser in situ keratomileusis (LASIK) to create a multifocal surface able to correct any visual defect for distance while simultaneously reducing the near spectacle dependency in presbyopic patients ,12.Little literature is found concerning monocular distance corrected performance after presbyLASIK .In a previous report , presbyIn another report , statistIntracorneal ablations in the form of concentric rings are used to produce a weaker region in the central part of the cornea resulting in a hyperpolate shape. This flapless procedure is restricted within the boundaries of the corneal stroma.Three types of corneal inlays are available today for presbyopic treatments. The KAMRA corneal inlay uses a pinhole effect. PresbyLens is based on microscopically changing the shape of the eye\u2019s surface. Flexivue Microlens is a refracting inlay with a refractive index different from the cornea.Based on reports and presentations, near vision can be improved while retaining distance visual acuity with the KAMRA presbyopic inlay. Although corneal inlays are placed only in one eye, they differ from monovision by not compromising on the distance vision.However, inlays also represent a compromise. For the KAMRA, light entering the eye is restricted, which may reduce contrast and night vision, and there can be optical side effects. The benefit of using inlays is the ability to remove and reverse the effects of the treatment. Intracorneal inlay and simultaneous refractive surgery has also presented safe and efficacious results in patients with presbyopia and emmetropia.Hybrid techniques are designed to combine the benefits of two of the above-mentioned approaches and suppress the related disadvantages. These hybrid modifications include: conductive keratoplasty (CK) , Supracor and PresbyMAX , Intracor , KAMRA (full correction in distance eye combined with pinhole based extended depth-of-focus and monovision in the near eye), as well as laser blended vision .Reinstein et al. 
successfWe conducted a comprehensive search in MEDLINE using keywords like \u201cpresbylasik\u201d, \u201cpresbyopic refractive surgery\u201d, \u201ccorneal pseudoaccommodation\u201d and \u201ccorneal multifocality\u201d.We reviewed corrected and uncorrected visual acuities for distance and near ), along with the refractive outcomes in SE and astigmatism comparing the differences observed between preoperative myopic and hyperopic patients, as well as among techniques. We used the standard equivalent Snellen acuities for distance vision combined with the revised Jaeger scale for near vision .Thirty-one studies met the inclusion and quality criteria.This is probably the most extensively used and old (but not outdated) technique for alleviating presbyopic symptoms.Ayoubi et al. comparedBraun, Lee and Steinert evaluateReilly et al. assessedWright et al. measuredSee Table\u00a0Stahl assessedYe et al. evaluateJackson, Tuan, and Mintsioulis evaluatePinelli et al. analyzedUy and Go investigSee Table\u00a0Ryan and O\u2019Keefe reportedSee Table\u00a0Uthoff et al. investigSee Table\u00a0Bohac et al. reportedHolzer et al. investigSee Table\u00a0Lindstrom et al. providedTomita et al. evaluateY\u0131lmaz et al. evaluateSeyeddain et al. evaluateSee Table\u00a0Alarcon et al. evaluateSee Table\u00a0Monovision is highly rated by patients even though binocular vision is compromised .Although the depth of focus acts as a useful marker, acuity at typical near vision distances is a more suitable metric that is closely related to patients\u2019 real expectations and concerns . High leContact lens monovision and LASIK-induced monovision traditionally use a nomogram for near addition, with the degree of anisometropia increasing from approximately \u22121.50 D for a 45-year-old patient up to \u22122.50 D for a 65-year-old patient .CK seems to produce functional corneal multifocality with definable introduction of surgically induced astigmatism and higher-order optical aberrations, and development of a more prolate corneal contour. These optical factors may militate toward improved near vision function .PresbyLASIK treatment constitutes a new modality in the correction of presbyopia after monovision LASIK ,52. In sPinelli et al. investigSemoun et al. were the first to describe a P. acnes (Propionibacterium acnes) infection after a presbyopic LASIK procedure. This unusual case of infectious keratitis emphasizes the fact that even though such cases may rarely present in the clinics, the patients must be informed about the potential risks of such infections .The reversibility of the presbyLASIK procedures have been viewed controversially. These viewpoints have been discussed in the work of Luger et al. . In anotArtola et al. found evidence for delayed presbyopia after PRK in a non-presbyLASIK protocol for myopia .The initial results of Intracor raised the expectations for this technique. However, subsequent reports on corneal ectasia and concerns regarding the retreatment and reversal possibilities raised questions about the safety of this technique .As for the corneal inlays, some more clinical and theoretical work has been published so far, including reading performance and patient satisfaction, optimumSee Table\u00a0We introduced a hybrid category in the results. This category combines several methods together to benefit from their advantages and reduce the impact of their disadvantages. 
In other words, one can say that all corneal presbyopic correction methods have evolved to hybrid techniques that combine their original approach in different powers for either eye or the original approach with certain amount of monovision .The current developments throughout the corneal presbyopic correction spectrum indicate a converging trend towards hybrid techniques."} {"text": "We read with great interest the recent paper by C. B. Papini et al. in which the authors examined \u201cimpact of a community-based exercise program in primary care on inflammatory biomarkers and hormone levels\u201d in the 1-year quasiexperimental study [\u03b1, and CRP [The authors did not discuss exclusion criteria in this paper. However it is well established that any type of systemic inflammation, autoimmune disorders, and malignant or chronic illnesses may affect inflammatory biomarkers and hormone levels . Also ob and CRP , 4. Beca and CRP .In our opinion, future clinical studies assessment of considering these conditions may be helpful for exact results. We hope that bearing in mind these conditions would add to the value of the well-written paper of C. B. Papini et al. ."} {"text": "By binding additional viral proteins, gH/gL forms multimeric complexes which bind to specific host cell receptors. Both Epstein\u2013Barr virus (EBV) and human cytomegalovirus (HCMV) express alternative multimeric gH/gL complexes. Relative amounts of these alternative complexes in the viral envelope determine which host cells are preferentially infected. Host cells of EBV can modulate the gH/gL complex complement of progeny viruses by cell type-dependent degradation of one of the associating proteins. Host cells of HCMV modulate the tropism of their virus progenies by releasing or not releasing virus populations with a specific gH/gL complex complement out of a heterogeneous pool of virions. The group of Jeremy Kamil has recently shown that the HCMV ER-resident protein UL148 controls integration of one of the HCMV gH/gL complexes into virions and thus creates a pool of virions which can be routed by different host cells. This first mechanistic insight into regulation of the gH/gL complex complement of HCMV progenies presents UL148 as a pilot candidate for HCMV navigation in its infected host. Epstein-Barr virus (EBV) and human cytomegalovirus (HCMV) spread in their hosts are considered to be highly directed. Current models how EBV or HCMV particles are navigated through the infected host are based on alternative gH/gL complexes in the virus envelope which target different host cells. EBV envelopes are complemented with dimeric gH/gL and trimeric gH/gL/gp42 complexes and HCMV envelopes with trimeric gH/gL/gO and pentameric gH/gL/UL128-131 complexes. Both EBV and HCMV show differences in cell tropism of virus progenies derived from different host cells. Several studies have addressed mechanisms of how host cells shape the tropism of virus progenies by influencing the gH/gL complement of virions.In vivo, it has been shown that EBV particles shed in saliva are high in gp42 and may thus be directed to B cells, which are the first target cells in horizontal transmission of EBV [For EBV, a switch in cell tropism of the virus derived from B cells or epithelial cells has been described ,2. Infecn of EBV .in vivo spread of gO-knockout mutants of murine cytomegalovirus (MCMV). 
Entry of gO-knockout mutants of MCMV into the first target cells is impaired, but subsequently shows a focal spread pattern in organs comparable to the wildtype (WT) virus [For HCMV, differences in cell tropism of virus progenies released from different cell types cannot be explained so easily. Virions complemented with gH/gL/gO and gH/gL/UL128-131 can infect all HCMV host cells, including endothelial and epithelial cells, whereas virions complemented only with gH/gL/gO show a tropism restricted mainly to fibroblasts. Interestingly, virions lacking gH/gL/gO also show a massive loss of infectivity for epithelial and endothelial cells ,5, whichT) virus . Cell tyT) virus . In summet al. [et al. [et al. [An elegant study by Li et al. offers aet al. . Data fr [et al. also sho [et al. . Interes [et al. propose [et al. , an incr [et al. .et al. [et al., a relative increase in infection in fibroblast cultures when compared to infection in epithelial cell cultures [et al. [I would like to propose an alternative model which is also compatible with the data from Li et al. (Figure et al. . When ULcultures . Due to [et al. of how U [et al. . Fibrobl [et al. (Figure et al. [in vivo? A comparative sequence analysis of HCMV isolates has shown that UL148 is a highly conserved protein, which indicates that viral infection success in vivo is coupled to an intact UL148 [in vivo.Do the findings from Li et al. qualify ct UL148 ,7. By in"} {"text": "Pinus ponderosa) and mixed-conifer forests were incomplete, missing considerable variability in forest structure and fire regimes. Stevens et al. (this issue) agree that high-severity fire was a component of these forests, but disagree that one of the several sources of evidence, stand age from a large number of forest inventory and analysis (FIA) plots across the western USA, support our findings that severe fire played more than a minor role ecologically in these forests. Here we highlight areas of agreement and disagreement about past fire, and analyze the methods Stevens et al. used to assess the FIA stand-age data. We found a major problem with a calculation they used to conclude that the FIA data were not useful for evaluating fire regimes. Their calculation, as well as a narrowing of the definition of high-severity fire from the one we used, leads to a large underestimate of conditions consistent with historical high-severity fire. The FIA stand age data do have limitations but they are consistent with other landscape-inference data sources in supporting a broader paradigm about historical variability of fire in ponderosa and mixed-conifer forests than had been traditionally recognized, as described in our previous PLOS paper.In a recent PLOS ONE paper, we conducted an evidence-based analysis of current versus historical fire regimes and concluded that traditionally defined reference conditions of low-severity fire regimes for ponderosa pine ( The accompanying paper by Stevens et al. is critiHere, we first briefly summarize points of agreement between Stevens et al. and us, In Odion et al. 2014) [014 [2], We did not intend to suggest that tree recruitment occurred only with fire. Stevens et al. hypothesize that pulsed recruitment in the absence of fire has shaped the age distributions in many FIA plots. We agree that this process occurred historically. 
There is also agreement that a dominant cohort of trees will establish after high-severity fire, but that later in stand development understory recruitment can happen with favorable climate or following insect outbreaks. This, along with the presence of some trees that pre-date the fire, will create an uneven-aged stand, but there may still be a dominant overstory size class established after fire.Stevens et al. report tStevens et al. replaced the traditional 70\u2013100% mortality definition for high-severity fire that weStevens et al. suggest,There is also a logical problem: if high-severity fire predominantly caused 90\u2013100% mortality historically, and 70\u201389% mortality was rare, then there would be very little difference between the number of FIA plots with 90\u2013100% mortality and the number with 70\u2013100% mortality. But, Stevens et al. found a large difference when using these basal area thresholds. Therefore, plots with 70\u201389% mortality were not rare, and narrowing the fire-severity definition is not supported.Stevens et al. state thStevens et al. point ouTo understand historical forest structure and fire, it is common to reconstruct the size of trees in the 1800s by subtracting tree growth since that time . Steven2 of dead tree basal area. There are 5 live trees of 0.5 m in diameter at breast height (dbh) in the 1-ha FIA plot for a total of 1 m2 live, surviving basal area. The surviving trees have a higher growth increment in earlier years which decreases as they age. However, when the mean growth rate is calculated using 1594 mature ponderosa pine in dry forests in Oregon reThe concern raised by Stevens et al. pertains"} {"text": "Staphylococcus aureus is an emerging and rare infection and so far the related data are scarce.Pyogenic abscess of psoas muscles is a rare condition. Psoas abscess due to methicillin-resistant Staphylococcus aureus in a 54-year-old Arab Jordanian woman with breast cancer who had neutropenia after starting chemotherapy. She was diagnosed 50 days after onset of symptoms. However, despite this delay in diagnosis and the large size of the abscesses, she had a full recovery. She was treated with antibiotics and percutaneous drainage and was doing very well at a follow up of 18 months.We report the rare case of primary and bilateral large psoas abscesses due to methicillin-resistant Staphylococcus aureus might have insidious presentation with extensive disease especially in immunocompromised patients. However, it can be managed effectively with percutaneous catheter drainage and appropriate antibiotic therapy.Psoas abscess due to methicillin-resistant Pyogenic abscess of psoas muscles is a rare condition with a mortality rate of 100 % if left untreated , a signiStaphylococcus aureus is the leading cause for primary psoas abscess, psoas abscesses due to methicillin-resistant Staphylococcus aureus (MRSA) have been rare until recently when increasing rates have been reported in the last few years [Although ew years , 4, 5. Tew years . Here, w9/L (neutrophils 10 %). Therefore, she was treated with imipenem with moderate improvement and was discharged after 4 days on cefixime orally.A 54-year-old Arab Jordanian woman who had a history of breast cancer 11 years prior to her current complaint, presented in 2012 with a left breast mass due to recurrence of her cancer. The mass of the recurrent cancer was excised and chemotherapy with docetaxel, doxorubicin, and cyclophosphamide was started. 
However, in October 2012 and 7 days after the second cycle, she was admitted because of fever, chills, and right lower quadrant pain of 3 days\u2019 duration. Her physical examination showed epigastric and right lower quadrant tenderness and her white blood cell count (WBC) was 1.28\u00d7109/L (neutrophils 15 %), and platelets of 161\u00d7109/L. During her stay, however, she complained of left hip and knee pain. Her examination showed tenderness over her great trochanter with full range of movement. Therefore, a dedicated left hip magnetic resonance imaging (MRI), without pelvic cuts, was performed and was reported as normal. Her blood and urine cultures remained negative. She was treated with imipenem and vancomycin and was later discharged after 8 days of hospital stay.Later, and after the third cycle of chemotherapy which consisted of docetaxel and cyclophosphamide, she was admitted again for fever. Her laboratory tests were unremarkable except for hemoglobin of 7.9 g/dL, WBC 2.6\u00d7109/L (neutrophils 73 %). An abdomen computed tomography (CT) was subsequently performed, 50 days after onset of symptoms, and showed large bilateral psoas abscesses: the right measuring 13\u00d78\u00d75 cm and the left 9\u00d75\u00d73.5 cm after 4 weeks of drainage without complications and later on completed another three cycles of chemotherapy. At follow up of 18 months, she was doing well and had no evidence of recurrence of infection on repeat CT imaging.Psoas abscess is classified as either primary or secondary. A primary abscess occurs due to the hematogenous or lymphatic spread of the causative organism from a distant site. Secondary abscess occurs as a result of the direct expansion of a nearby infectious or inflammatory process into the psoas muscle . PrimaryOur case has several rare and interesting features: the primary type, the infection with MRSA, the bilateral location, the large size of the abscess, the immunocompromised host due to chemotherapy and neutropenia, and the excellent outcome of the patient.et al. [et al. [et al. [Bilateral psoas abscesses are rare; Ricci et al. in a lar [et al. in a com [et al. , in a caS. aureus; Ricci et al. [S. aureus caused 88 % of cases followed by Streptococcus (5 %) and Escherichia coli (3 %). Lai et al. [S. aureus was still the dominant etiology. However, despite the fact that S. aureus was the predominant etiology, MRSA remained a rare etiology until recently [et al. [et al. [et al. [et al. [The predominant organism in primary psoas abscess is i et al. , in theihia coli %. Lai erecently , 8. Two [et al. and Kim tococcus % and Es [et al. reportedtococcus % and Es [et al. and 10 omecA would have been more sensitive to differentiate between community and health care MRSA [The phenotype of the isolated MRSA here suggests it is a community-associated MRSA. However, this phenotype criterion remains inaccurate and nucleic acid amplification methods to detect are MRSA . Since ware MRSA , 13. We are MRSA . Later, are MRSA .et al. [Currently, whether PCD or surgical intervention should be used in patients with psoas abscess remains controversial because few studies with large sample sizes have been performed . Howeveret al. in a retet al. .MRSA is an increasingly reported and emerging etiology in psoas abscess. Our case has several rare and interesting features specifically: the primary type, the infection with MRSA, the bilateral location, the large size of the abscesses, the immunocompromised host, and the excellent treatment outcome. 
Psoas abscess is an important condition in the differential diagnosis of patients with low back and hip pain. MRI or CT scanning can establish the diagnosis and define the extent of the psoas abscess. Percutaneous catheter drainage with appropriate antibiotic therapy can be used effectively in management. Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal."} {"text": "Single Nucleotide Polymorphism (SNP) markers were used in the characterization of 113 cowpea accessions comprising 108 from Ghana and 5 from abroad. Leaf tissues from plants cultivated at the University of Ghana were genotyped at KBioscience in the United Kingdom. Data were generated for 477 SNPs, of which 458 revealed polymorphism. The results were used to analyze genetic dissimilarity among the accessions using the Darwin 5 software. The markers discriminated among all of the cowpea accessions, and the dissimilarity values, which ranged from 0.006 to 0.63, were used for a factorial plot. Unexpectedly high levels of heterozygosity were observed in some of the accessions. Accessions known to be closely related clustered together in a dendrogram drawn with the WPGMA method. A maximum length sub-tree comprising 48 core accessions was constructed. The software package Structure was used to separate the accessions into three groups, and the programme correctly identified varieties that were known hybrids; the hybrids were those accessions with numerous heterozygous loci. The Structure plot showed closely related accessions with similar genome patterns. The SNP markers were more efficient in discriminating among the cowpea germplasm than the morphological, seed protein polymorphism and simple sequence repeat studies reported earlier on the same collection. The online version of this article (doi:10.1186/2193-1801-3-541) contains supplementary material, which is available to authorized users. Cowpea [Vigna unguiculata (L.) Walp] is an important staple food crop in Ghana and many other parts of the world. Molecular markers such as SNPs are powerful tools for genetic diversity studies in living organisms. The germplasm was obtained from the West Africa Centre for Crop Improvement (WACCI), University of Ghana. Four accessions were breeding lines selected from accession GH4524 based on seed coat colour differences. The dissimilarity matrix was used for a factorial plot, and a maximum length sub-tree was constructed to select representative core accessions. Detection of the underlying genetic populations among the studied cowpea accessions was carried out with the Structure software. ‘Zaayura’ and IT97K-556-6 shared a rare allele. IT97K-556-6 is an IITA line, while ‘Zaayura’ is a commercial variety released by CSIR - Savanna Agricultural Research Institute. Another rare allele was shared by UCR779 and ‘Zaayura’; UCR779 is a Botswana landrace resistant to aphids. Meaningfully, they also fell in their appropriate seed coat colour clusters. These are elite germplasm (improved varieties) and have been selected for similar traits over a long period of time. The improved varieties from both Ghana and abroad are found on or above the red line in the figure. No elite genotype fell in the dark seed coat coloured cluster. The clustering of the commercial varieties ‘Bawuta’ and ‘Zaayura’ is very significant.
This is because Padi \u2018Tuya\u2019 and a number of varieties released by CSIR \u2013 SARI are known to have parentage from California Black eye Additional file 1:Factorial plot of the cowpea accessions.(DOCX 48 KB)Additional file 2:"} {"text": "In this Special Issue, entitled \u201cFood choice and Nutrition: A Social Psychological Perspective\u201d, three broad themes have been identified: (1) social and environmental influences on food choice; (2) psychological influences on eating behaviour; and (3) eating behaviour profiling. The studies that addressed the social and environmental influences indicated that further research would do well to promote positive food choices rather than reduce negative food choices; promote the reading and interpretation of food labels and find ways to effectively market healthy food choices through accessibility, availability and presentation. The studies on psychological influences found that intentions, perceived behavioural control, and confidence were predictors of healthy eating. Given the importance of psychological factors, such as perceived behavioural control and self-efficacy, healthy eating interventions should reduce barriers to healthy eating and foster perceptions of confidence to consume a healthy diet. The final theme focused on the clustering of individuals according to eating behaviour. Some \u201ctypes\u201d of individuals reported more frequent consumption of fast foods, ready meals or convenience meals or greater levels of disinhibition and less control over food cravings. Intervention designs which make use of multi-level strategies as advocated by the Ecological Model of Behaviour change that proposes multi-level strategies are likely to be more effective in reaching and engaging individuals susceptible to unhealthy eating habits than interventions operating on a single level. The Special Issue on the social psychological issues related to food choice and nutrition has attracted a wide variety of papers from around the world and across population groups. Three broad themes were identified through the papers: (1) social and environmental influences on food choice; (2) psychological influences on eating behaviour; and (3) eating behaviour profiling.et al. [i.e., having family and friends who rarely consume soft drinks), stricter family rules, greater perceived behavioural control and confidence were less likely to consume soft/energy drinks [et al. [Six papers focused on the social and environmental influences on food choice. Deliens et al. surveyedy drinks . Tanja e [et al. surveyed [et al. . These f [et al. ,4 and fr [et al. positive [et al. .et al. [et al.\u2019s [et al. [et al. [i.e., whole fruit, fruit salad, vegetarian daily specials) in a school canteen. Their results showed that the intervention was effective in facilitating subsequent selection of more healthy food choices among secondary school students. Taken together, these studies suggest that further research would do well to promote positive food choices rather than reduce negative food choices; promote the reading and interpretation of food labels and find ways to effectively market healthy food choices through food architecture models.Other social-environmental papers published in this issue focused on attention to food labels and enviet al. found thet al.\u2019s study iset al.\u2019s conducte [et al. , found t [et al. conducte [et al. , found tet al. [The second main theme in this set of papers is centred around the psychological influences on eating behaviour. 
Perceived behavioural control (the perceived ease or difficulty in performing a behavior) and confidence were found to statistically predict eating behaviour in several studies involving university students ,11 and yet al. . Finallyet al. suggesteet al. , Social et al. and Selfet al. ,17 appeai.e., end of aisle displays) to influence the impulsive involved group. They also found that impulsive involved individuals relied heavily on ready-made sauces and mixes which may indicate a lack of cooking skills. As such, healthy eating interventions may do well to promote the use of healthier processed foods such as canned and frozen vegetables and beans in cooking rather than focusing on cooking from scratch using fresh ingredients. Other research found that a main outcome of a cookery skills intervention was that participants learnt how to make healthy meals from scratch that were both tasty and time efficient [The final theme identified was a focus on eating behaviour profiling or the clustering of individuals according to eating behaviours. Two papers used approaches to identify typologies of individuals ,19. Daltfficient . The impfficient . It may i.e., barriers concerning time, cost and taste) and confidence to prepare health meals be enhanced. Additionally, further work is required on food labels, both in terms of who responds to them and how people make sense of them. Finally, given the importance of psychological factors such as perceived behavioural control and self-efficacy, healthy eating interventions should reduce barriers to healthy eating and foster perceptions of confidence to consume a healthy diet. Health behaviour change theories, including those outlined above, may be usefully applied to foster such confidence.In conclusion, both socio-psychological and environmental strategies appear effective in changing eating behaviour and associated outcomes. It would be interesting in future research to employ intervention designs which make use of multi-level strategies as advocated by the Ecological Model of Behaviour Change , which p"} {"text": "Toward this end, animal model studies in some ways are necessary to precisely analyze the in vivo situation, and also are essential for developing countermeasures against virus infections. Since a full variety of viruses with distinct biological properties exist, we virologists should study \u201cthe target virus\u201d in a specialized manner, in addition to common theoretical/experimental approaches. The Research Topic entitled \u201cAnimal model studies on viral infections\u201d collects articles that describe the studies on numerous virus species for their animal models, or those at various stages toward animal experiments.One of the major missions of animal virology is to understand how viruses replicate and cause asymptomatic/symptomatic conditions in individuals . Also, a bovine model for HTLV-1 pathogenesis has been described by Aida et al. descriptions/evaluations/new challenges of animal model studies for investigating the biology of viruses; (ii) experimental materials/methods for upcoming animal model studies; (iii) observations important for animal model studies. (i) Reynaud and Horvat have desa et al. . Challena et al. , and alsa et al. . Anothera et al. . (ii) Koa et al. has desca et al. have suma et al. has repoa et al. have suma et al. have suga et al. have shoa et al. have repa et al. have desin vivo and for overcoming virally-caused infectious diseases. 
We human virologists should make every effort to fight against numbers of unique pathogenic viruses.We are proud to add our \u201cAnimal model studies on viral infections\u201d to a series of Research Topic in Frontiers in Microbiology. A wide variety of DNA and RNA viruses are covered by this special issue consisting of original research, review, mini-review, methods, and opinion articles. As we described in the beginning, animal studies are certainly required for understanding virus replicative/pathogenic properties The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Based on their ability to recognize and eliminate various endo- and exogenous pathogens as well as pathological alterations, Natural Killer (NK) cells represent an important part of the cellular innate immune system. Although the knowledge about their function is growing, little is known about their development and regulation on the molecular level. Research of the past decade suggests that modifications of the chromatin, which do not affect the base sequence of the DNA, also known as epigenetic alterations, are strongly involved in these processes. Here, the impact of epigenetic modifications on the development as well as the expression of important activating and inhibiting NK-cell receptors and their effector function is reviewed. Furthermore, external stimuli such as physical activity and their influence on the epigenetic level are discussed. Natural killer cells (NK cells) are historically named by their ability to kill target cells without prior priming on a \u201cnatural\u201d way . As a pa\u2212 CD56+) [bright and a CD56dim subgroup. CD56bright NK cells express high amount of the NK cell marker CD56 and are characterized by a lower cytotoxic capacity but high secretion of cytokines upon stimulation [bright NK cells represent the minority of NK cells and are mainly located in secondary lymphoid tissues [dim NK cells express CD56 in low amounts on the cell surface and display a high cytotoxic capacity [dim are the majority of NK cells circulating through the body [Their phenotype is determined by surface expression of the NK cell marker CD56 and a concomitant absence of the T-cell marker CD3 (CD3\u2212 CD56+) . In genemulation ,2. CD56b tissues ,7. In cocapacity ,2. With the body ,7.+ NKG2Chi NK cells has adaptive functions and mediates a fast recurring response against viral infections like Cytomegalo Virus (CMV) [Current literature suggests a third subgroup of NK cells, described as memory NK cells. This population of CD57us (CMV) .The effector function of NK cells is regulated by an orchestra of activating and inhibitory receptors on the cell surface. These receptors are encoded by the germ line and recognize structures of high molecular weight . All inhThe activation of NK cells effector function is mediated by a critical balance of signals from inhibitory and activating receptors. When an activation of effector function is achieved, NK cells secrete granules containing effector molecules like perforin and granzyme B. Perforin is a 70 kDa protein containing a conserved membrane attack complex of Complement/Perforin (MACPF) domain . 
The MACIn 1942, about 20 years before the Nobel-Prize for defining the ultrastructure of the DNA was awarded, Conrad Waddington defined epigenetic as \u201cThe branch of biology which studies the causal interactions between genes and their products, which bring the phenotype into being\u201d. Newer definitions of epigenetic describe it as heredity which is independent of the DNA sequence and as research of changes in gene expression and mitotic heredity of gene expression patterns . The actDNA methylation is catalyzed by DNA-methyl-transferases (DNMTs) by the addition of a methyl-group on cytosines in the 5th position. Therefore, DNA methylation occurs at cytosine-guanine-dinucleotides (CpGs) and mostly in regions with a high CpG density. At the beginning of epigenetic, research DNA methylation was described to be inhibitory for gene expression. It is described that methylation of CpG rich regions could inhibit the binding of transcription factors that recognize these regions and furtHistone modifications result in opening or closing the chromatin structure. Depending on the posttranslational modification, repression or activation of gene transcription is achieved. For example, the acetylation of lysine residues on histone 3 and 4 is associated with active transcription, whether methylation of lysine residues on these histones could be activating or inhibitory . The mosFinally, the expression of microRNA molecules is described as epigenetic mechanism. These small endogenous, single-stranded RNA molecules modulate gene expression by binding to complementary sites in the 3\u2032-untranslated region (3\u2032-UTR) of the target genes mRNA . This inet al. [\u2212 Lymphocytes. In contrast, NK cells and other KIR expressing Lymphocytes reveal a demethylation of KIR genes that resulted in the abolishment of KIR gene expression. Similar results were reported by Gao et al. [et al. [Only a few studies have investigated the influence of epigenetic modifications on the differentiation and maturation of NK cells. Santourlidis et al. showed to et al. , who des [et al. found th [et al. . Further [et al. . MiR-181 [et al. . Notch s [et al. ,24.+ and NKG2Chi [Regarding epigenetic modifications during differentiation and maturation, most studies focused on memory NK cells which display elements of the adaptive immune system and are induced by an infection with cytomegalovirus (CMV). Memory NK cells have been discovered in mouse models and have been described to have properties of memory T cells, such as being self-renewing, long-lived and to show expansion upon a second viral challenge . These c NKG2Chi as well NKG2Chi . Deficie NKG2Chi , which i NKG2Chi showed s+ and CD4+ T cells, what resulted in higher similarities between memory NK cells and CD8+ and CD4+ T cells than between memory and canonical NK cells [H1 cells [Furthermore, the DNA methylation profile of canonical and memory NK cells was compared to CD8NK cells . The comNK cells . ImprintAs mentioned above, NK cell activity is related to a complex interaction of activating and inhibiting receptors on the NK cell surface. Interestingly, both activation patterns (cytokines or secretion of cytotoxic agents) seem to be epigenetically regulated.et al. [Incubation with HDAC inhibitors (HDACi) like suberoylanilide hydroxamic acid (SAHA) or valproic acid (VPA) suppresses Interleukin (IL)-2-activated NK cell cytotoxicity . These ret al. who founet al. [Besides histone modifications, the DNA methylation is important to orchestra the NK cell effector function. 
To investigate the effect of DNA methylation, demethylating agents like 5-azacytidine (Aza) and decitabine (Deci) are used. Aza is an analogue to the DNA base cytidine, whereas Deci is the analogue to desoxycytosine. Schmiedel et al. describeet al. ,29 and aet al. ,29. The et al. .et al. [et al. [et al. [It has to be noted, that IL-2 seems to be a cofactor for DNA demethylating or histone acetylating agents. While Gao et al. and Schm [et al. used IL- [et al. tried th [et al. , wherebyAs mentioned above, epigenetic modifications are involved in NK cell differentiation and regulation of effector function. NK cell receptors are differentially expressed through the development of HPCs to mature NK cells and are crucially involved in the induction and inhibition of NK cell effector function.et al. [The suppression of NK cell cytotoxicity by incubation with HDACi observed by Ogbomo et al. was due KIR belong to the Ig superfamily and are named by the number of extracellular immunoglobulin domains and their type of signaling, which can be stimulatory (S) or inhibitory (L) .et al. [+ and KIR3DL1\u2212 NK cells showed a densely methylated KIR3DL1 promoter in KIR3DL1\u2212 cells and a completely unmethylated promoter in KIR3DL1+ cells [et al. [Gao et al. investiget al. ,31. Furt1+ cells ,32. Theset al. [During the development of HPCs to mature NK cells, the DNA demethylation of KIR genes leads to KIR expression. But DNA methylation does not just determine which KIR gene is expressed, it also determines which allele expresses the KIR gene. KIR genes are mostly expressed in a mono-allelic manner with non-expressed alleles exhibit DNA hypermethylation and expressed alleles DNA hypomethylation . Furtheret al. mentioneet al. [Besides chromatin modifications, KIR genes are also regulated by microRNA. As reported by Davies et al. , KIR genet al. .et al. [Cichockie et al. describeet al. .In contrast to DNA methylation, histone acetylation seems to play a less important role in KIR gene regulation. Incubation of NK cells with HDACi did not affect KIR gene expression ,31. Moreet al. [In addition to KIR gene regulation, epigenetic modifications have been reported to be involved in the expression of NKG2D, which is one the most important activating NK cell receptor. Fernandes-Sanchez et al. found a et al. . Furtheret al. . Furtheret al. , is unknet al. [et al. [Besides histone modifications and DNA methylation, microRNAs are involved in the regulation of NKG2D expression. Espinoza et al. have shoet al. . Intereset al. . This im [et al. describeet al. [Life style factors such as physical (in-) activity and nutrition are supposed to influence the immune system and its function . NK cellet al. ).et al. [Since physical activity and exercise are known to induce short- and long-term epigenetic alterations in various cell types and tissues ,38,39,40et al. who haveet al. focused on changes in histone acetylation and DNA methylation in NK cells in response to exercise. In a first study, the influence of chemotherapeutic treatment and a single bout of exercise on global H3K9Ac and H4K5Ac in NK cells of B-cell non-hodgkin lymphoma patients and a healthy control group has been determined [et al. [Against this background, Zimmer termined . After ttermined , which ltermined . Interes [et al. who demoAnother mechanism by which regular physical activity may alter the epigenome of NK cells is a reduction of psychophysiological stress which is associated with the secretion of glucocorticoids . As reviet al. [et al. [et al. 
[Krukowski et al. revealed [et al. confirme [et al. . Therefo [et al. and conc [et al. , it mighAll studies mentioned above also analyzed the histone acetylation at the promoter regions of perforin and granzym B and could show that treatment with Dexa resulted in a reduced histone acetylation ,47,48 whex vivo experiments. Therefore, functional alterations of NK-cells which were detected in such experiments are hard to transfer from bench to practice. Translational research approaches combining basic research findings with clinical outcomes are necessary to approve results from ex vivo experiments. Further research is needed to fulfill the understanding of the epigenetic regulation of NK cells and potential therapeutic implications.Epigenetic modifications seem to play a key role in NK cell development and regulation. On the one hand, a stable epigenetic imprinting is used for maintaining the way of differentiation. On the other hand, the dynamic properties of epigenetic modifications are used for adapting/reacting on internal and external stimuli . However, the significance of the described findings is limited by study designs. Although human NK-cells were frequently used, in most cases treatment took place in"} {"text": "The study develops an integrated humidity microsensor fabricated using the commercial 0.18 \u03bcm complementary metal oxide semiconductor (CMOS) process. The integrated humidity sensor consists of a humidity sensor and a ring oscillator circuit on-a-chip. The humidity sensor is composed of a sensitive film and branch interdigitated electrodes. The sensitive film is zinc oxide prepared by sol-gel method. After completion of the CMOS process, the sensor requires a post-process to remove the sacrificial oxide layer and to coat the zinc oxide film on the interdigitated electrodes. The capacitance of the sensor changes when the sensitive film adsorbs water vapor. The circuit is used to convert the capacitance of the humidity sensor into the oscillation frequency output. Experimental results show that the output frequency of the sensor changes from 84.3 to 73.4 MHz at 30 \u00b0C as the humidity increases 40 to 90 %RH. Liang et al. [2O3 film and Pt interdigitated electrodes. The sensing material of ZnO-In2O3 was deposited by radio-frequency sputtering.Humidity sensors are widely used in industrial, electronic, and biomedical equipment. Conventional humidity sensors have the disadvantages of large volume and high cost. On the contrary, the advantages of humidity microsensors include small volume, low cost, high performance and easy mass-production . Recentlet al. proposed [et al. also useg et al. presenteet al. [et al. [et al. [et al. [et al. [et al. [et al. [Zinc oxide can be applied as a piezoelectric, gas-sensing and photoelectric material. Many studies have utilized zinc oxide as the sensitive material of humidity microsensors. For instance, Zhang et al. fabricat [et al. presente [et al. employed [et al. used a h [et al. proposed [et al. \u20139 are su [et al. \u20139 were n [et al. . In this [et al. ,6,8,9, aet al. [The commercial CMOS process has been utilized to manufacture various microactuators and microsensors ,11. Micret al. , was fabet al. \u20139. The cet al. to coat 2.2. The sensitive film of the sensor is zinc oxide, and the film is coated on the interdigitated electrodes. 
When the sensitive film absorbs or desorbs water vapor, the sensor produces a variation in capacitance. The output frequency f_sensor of the ring oscillator circuit is set by the delay times in the oscillator loop, where τ_sensor is the delay time associated with the humidity sensor, τ_inv is the delay time associated with the inverters, C_sensor is the humidity sensor capacitance, C_load is the load capacitance, and ΔV and I_ave are the threshold voltage and average current, respectively. According to this relation, a larger sensor capacitance lengthens the loop delay and lowers the output frequency.
The sensitive material of zinc oxide was prepared by the sol-gel method. The preparation steps were as follows: zinc acetate was dissolved in iso-propanol (100 mL), and the mixture was denoted solution A. Sodium hydroxide (0.5 g) and poly(vinyl pyrrolidone) (2 g) were added to iso-propanol (50 mL), and the mixture was denoted solution B. Solution B and hexamethylenetetramine (0.7 g) were added to solution A with vigorous stirring at 75 °C for 2 h. The mixture was transferred into a Teflon-lined stainless steel autoclave, sealed and maintained at 120 °C for 12 h. After the reaction, the resulting products were filtered and washed with deionized water and ethanol. Finally, the zinc oxide film was coated on the substrate, followed by calcination at 350 °C for 2 h. The surface morphology of the zinc oxide film was measured by scanning electron microscopy.
The integrated humidity sensor chip was manufactured using the commercial 0.18 μm CMOS process of TSMC.
A spectrum analyzer, a test chamber and an LCR meter were employed to test the characteristics of the integrated humidity sensor. The capacitance variation of the humidity sensor was measured by the LCR meter, and the output frequency was recorded by the spectrum analyzer. The humidity and temperature of the test chamber could be tuned; the chamber could supply a humidity range of 30–95 %RH and a temperature range of 25–100 °C. To understand the capacitance variation of the humidity sensor, the sensor without the ring oscillator circuit was tested under different humidity levels: the sensor was set in the test chamber, the chamber provided different humidity to the sensor, and the LCR meter recorded the capacitance variation. The output frequency of the humidity sensor with the ring oscillator circuit was then measured. The ring oscillator circuit converted the capacitance variation of the humidity sensor into the oscillation frequency output. The sensor with the circuit was set in the test chamber, the power supply provided a bias voltage of 3 V to the circuit, and the spectrum analyzer detected the output frequency of the sensor. To characterize the influence of temperature, the integrated humidity sensor was tested under different temperatures. Comparable humidity sensors have been reported by Hu et al. and Dai [21].
An integrated humidity sensor has been fabricated using the commercial 0.18 μm CMOS process and a post-process. The integrated humidity sensor contained a humidity sensor and a ring oscillator circuit. The humidity sensor was a capacitive type: it generated a change in capacitance when it sensed water vapor, and the ring oscillator circuit converted this capacitance variation into the output frequency. The humidity sensor consisted of branch interdigitated electrodes and a sensitive film of zinc oxide prepared by the sol-gel method.
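To make the capacitance-to-frequency readout described above concrete, the following sketch assumes a simple delay model in which the sensing stage contributes a delay of roughly (C_sensor + C_load)·ΔV/I_ave and the oscillation frequency is the reciprocal of twice the total loop delay. The stage count, the exact delay expression and the component values are illustrative assumptions rather than the circuit parameters of the fabricated chip; only the 84.3 to 73.4 MHz shift over 40 to 90 %RH is taken from the reported data.

```python
def sensor_delay(c_sensor, c_load, delta_v, i_ave):
    """Delay contributed by the sensing stage: tau = (C_sensor + C_load) * dV / I_ave."""
    return (c_sensor + c_load) * delta_v / i_ave


def ring_oscillator_frequency(tau_sensor, tau_inv, n_inverters):
    """Oscillation frequency as the reciprocal of twice the total loop delay
    (sensor stage plus n identical inverter stages) -- an assumed simple model."""
    return 1.0 / (2.0 * (tau_sensor + n_inverters * tau_inv))


# Illustrative (assumed) values only.
c_load = 50e-15      # 50 fF load capacitance
delta_v = 0.5        # V, threshold voltage swing
i_ave = 100e-6       # A, average charging current
tau_inv = 20e-12     # s, per-inverter delay
n_inv = 5            # number of inverter stages

for c_sensor in (100e-15, 150e-15, 200e-15):  # rising humidity -> rising capacitance
    tau_s = sensor_delay(c_sensor, c_load, delta_v, i_ave)
    f = ring_oscillator_frequency(tau_s, tau_inv, n_inv)
    print(f"C_sensor = {c_sensor * 1e15:.0f} fF -> f = {f / 1e6:.1f} MHz")

# Average sensitivity implied by the reported data: 84.3 -> 73.4 MHz over 40 -> 90 %RH.
sensitivity = (73.4 - 84.3) / (90 - 40)
print(f"average sensitivity ~ {sensitivity:.3f} MHz per %RH")  # about -0.218 MHz/%RH
```

With this simple model the frequency falls monotonically as the sensor capacitance rises, which matches the direction of the reported response.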
The post-process included a wet etching to remove the sacrificial oxide layer and a zinc oxide film to coat on the interdigitated electrodes. The experimental results revealed that the output frequency of the sensor changed from 84.3 to 73.4 MHz at 30 \u00b0C when the humidity increased 40 to 90 %RH."} {"text": "Cell adhesion is essential in cell communication and regulation, and is of fundamental importance in the development and maintenance of tissues. The mechanical interactions between a cell and its extracellular matrix (ECM) can influence and control cell behavior and function. The essential function of cell adhesion has created tremendous interests in developing methods for measuring and studying cell adhesion properties. The study of cell adhesion could be categorized into cell adhesion attachment and detachment events. The study of cell adhesion has been widely explored via both events for many important purposes in cellular biology, biomedical, and engineering fields. Cell adhesion attachment and detachment events could be further grouped into the cell population and single cell approach. Various techniques to measure cell adhesion have been applied to many fields of study in order to gain understanding of cell signaling pathways, biomaterial studies for implantable sensors, artificial bone and tooth replacement, the development of tissue-on-a-chip and organ-on-a-chip in tissue engineering, the effects of biochemical treatments and environmental stimuli to the cell adhesion, the potential of drug treatments, cancer metastasis study, and the determination of the adhesion properties of normal and cancerous cells. This review discussed the overview of the available methods to study cell adhesion through attachment and detachment events. Adhesion plays an integral role in cell communication and regulation, and is of fundamental importance in the development and maintenance of tissues. Cell adhesion is the ability of a single cell to stick to another cell or an extracellular matrix (ECM). It is important to understand how cells interact and coordinate their behavior in multicellular organisms. In vitro, most mammalian cells are anchorage-dependent and attach firmly to the substrate . AccordiSrc and Ras oncogenes reduces the adhesiveness to fibronectin (Fn) by impairing \u03b15\u03b21 integrins, the activation of oncogene ErbB2 in breast cancer up-regulates \u03b15\u03b21 integrin and enhances adhesion . Th. Thin viThe mechanical interactions between a cell and its ECM can influence and control cell behavior and function. The essential function of cell mechanobiology and its progressively important role in physiology and disease have created tremendous interests in developing methods for measuring the mechanical properties of cells. In general, cell adhesion studies can be categorized into cell attachment and detachment events. Numerous techniques have been developed to analyze cell adhesion events through the study of single cells as well as the populations of cells. 
Cell adhesion attachment events are focusing on the cell attachment mechanism to the substrate, while the detachment events involve the application of load to detach the adhered cells on the substrate .Cell attachment studies cover the analysis from the formation of a molecular bond between the cell\u2019s surface receptors and the complementary ligands (on the ECM\u2019s surface) to the observation of a population of cells\u2019 responses through the cells\u2019 behavior and changes of morphology during the attachment events. In cell migration, the cell adhesion plays a pivotal role in the driving force production . The adhPolyacrylamide-Traction Force Microscopy (PA-TFM). Numerous techniques have been developed to understand cell adhesion by characterizing single cells during their attachment events. PA-TFM is one of the widely used techniques to study single cells\u2019 traction force, the force exerted by cells through contact to the substrate surface. Cells will be cultured on the polyacrylamide gel functionalized with the cells\u2019 adhesive ligands and fluorescent beads embedded near the gel surface [et al. [et al. [et al. [MCF10A on PA substrates with a range of substrate compliances. Their findings showed that the wound could initiate a wave of motion that directs cells\u2019 coordination towards the wound edge and substrate stiffness influenced the collective cell migration speed, persistence, and directionality as well as the coordination of cell movements [et al. [ surface . Upon th surface ,48,49. R [et al. reported [et al. improved [et al. to studyovements . Tractio [et al. reported [et al. .Micropatterning. Micropatterning or micropillar) is a method that provides a micrometer scale: a soft, three-dimensional complex and dynamic microenvironment for both single cell studies and also for the multi-cellular arrangements in populations of cells. It relies on basic elastic beam theory, which makes force quantification easier and more reliable, as there is only one traction force field for each micropost/micropillar displacement map [et al. [et al. [N-isopropyla-crylamide) (PNIPAM) as an actuator which induces cell detachment when the temperature is reduced below 32 \u00b0C. It has been reported that the micropatterning technique is able to independently tune the biochemical, mechanical, and spatial/topography properties of biomaterials that could provide the opportunity to control cell fate for tissue engineering and regenerative medicine applications [ment map . Cell miment map . At the ment map . Micropa [et al. found th [et al. . Mandal [et al. introducications . Laminarications .Three-Dimensional Traction Force Quantification (3D-TFM). The ability to grow cells within ECM gels (3D culture) is a major advantage to understand in vivo cellular cell behaviors, ranging from differentiated function to maintenance of stem cell functions [et al. [MDA-MB-231 breast carcinoma, A-125 lung carcinoma) and found that the directionality is important for cancer cell invasion rather than the magnitude of traction, and the invasive cells elongated with spindle-like morphology as opposed to the more spherical shape of non-invasive cells [\u03b2Pix with srGAP1 that is critical for maintaining suppressive crosstalk between Cdc42 and RhoA during 3D collagen migration. Fraley et al. [unctions ,62. The unctions ,65,66,67unctions . Koch et [et al. used theve cells . The disve cells managed Wash Assay. 
In the population cell adhesion studies, the process of cell attachment can be divided into two types; static culture and dynamic culture, depending on the cell adherence mechanism during the cell culturing. Static culture is the stagnant condition of the cell culture medium during the incubation for cell adhesion, which is applicable for the culturing of cells inside microwell plates is one of the widely used piezoelectric acoustic wave resonator [esonator biosensoesonator . The senesonator upon celesonator ,83,84,85esonator . The adhesonator ,83,86,87et al. [f) and acoustic wave energy dissipation can be used to gain direct measurements of the physical properties of the layers in contact with the chip [et al. [MC3T3-E1 and NIH3T3 cells on different precoated biocompatible surfaces. Braunhut et al. [et al. [et al. [Zhu et al. found thet al. reportedet al. ,88. The the chip ,88,89. T [et al. ,85 analyt et al. ,91,92,93t et al. . Recentl [et al. have dev [et al. producedMicrofluidics. In contrast to the static culture, the dynamic culture applies fluid movement during the cell culturing and adhesion process. Low fluid shear flow is needed to help the cell attachment process as it mimics the blood flow in the human body. Cells are continuously exposed to hemodynamic forces generated by blood flow in most biological systems. The balance between the adhesive forces generated by the interactions of membrane-bound receptors and their ligands with the dispersive hydrodynamic forces determines cell adhesion [et al. [et al. [et al. (2007) to analyze the deformation and biological and migratory capability of various tumor cell lines to the lining of HMEC cells inside the channels [et al. [et al. [et al. [et al. [MDA-MB-231 cancer cells by analyzing the cells\u2019 adhesion capability to the endothelial monolayer inside the channel.adhesion . Cell ad [et al. reported [et al. applied channels . Nalayan [et al. have bui [et al. have dev [et al. studied [et al. to demonCell adhesion detachment studies involve load application to the adhered cells on the ECM to free the cells from their cell-matrix bonding . The appCytodetachment. Cytodetachment technique uses an atomic force microscopy (AFM) probe to physically detach individual cells in an open medium environment such as petri dish [et al. [L929 on four different materials by using cytodetachment technique. Using an image analysis system and fiber optic sensor, the apparent cell adhesive area was measured and the adhesive strength and detachment surface energy were calculated by dividing them by the cell adhesive area [NHIK 3025) were found to attached stonger and faster to the hydrophilic substrate using technique 1 at 37 \u00b0C when compared to 23 \u00b0C. There are multiple adhesion measurement studies combining the cytodetachment method with the laser tweezer work station [et al. [et al. [hFOB) cells was influenced by the cell\u2019s shape grown on the Ca-P grooved micropattern surface.tri dish . Cells htri dish . Yamamot [et al. studied ive area . Their five area . Human c station and opti station ,105. The [et al. used rab [et al. reportedMicropipette Aspiration Technique. Micropipette aspiration is a widely used technique for measuring the mechanical properties of single cells. For single cell adhesion measurement, this technique detaches an immobilized cell by applying suction force to a portion of the cell surface employed by micropipette suction [et al. [\u221212 N [et al. [ suction under ob suction . 
Adhesio suction , the cor suction , and the suction . Gao et [et al. used cel. [\u221212 N . Micropi [et al. have dev [et al. .Single Cell Force Spectroscopy (SCFS). Force spectroscopy measurement methods were developed to measure the strength of cell adhesion down to single cell level. Commonly, the methods will use a microscope to observe the cell while force is applied to detach the cell using a nano/micromanipulator or micropipette. The imaging mode is used to study the structures and mechanics of isolated biomolecules [olecules ,111,112,olecules ,114, andolecules ,115, whiAFM Probe Force Measurement. AFM probe force measurement is widely used to measure the stiffness [et al. [Wnt11 signaling in modulating cell adhesion at single cell level. Weder et al. [Saos-2) during different phases of the cell cycle and found that the cells are loosely attached to the substrate during the cells\u2019 round up (M phase) compared to during the interphase. Hoffmann et al. [NK cell receptor 2B4 on the early adhesion processes of NK cells using AFM-probe SCFS. In addition, AFM is flexible and can be integrated with the standard modern inverted and transmission optical microscope. Lee et al. [NIH/3T3 fibroblast cells and found the cell adhesion strength increased with culturing time, and growth factor was found to enhance the adhesion strength between the cell and substrate. The study was continued with the combination of the optical trapping technique to further study the single cell adhesion properties of MCDK cells in different phases of adhesion [MDCK cells through all the different phases of cell adhesion. Beaussart et al. [Staphylococcus epidermidis and Candida albicans.tiffness ,118,119 tiffness ,105,120 tiffness . This prtiffness . When thtiffness . Various [et al. reportedr et al. studied n et al. determint et al. studied Biomembrane Force Probe (BFP). BFP is a sensitive technique that allows the quantification of single molecular bonds. It is a versatile tool that can be used in a wide range of forces (0.1 pN to 1 nN) and loading rates (1\u2013106 pN/s) [et al. [\u22122 to 103 picoNewton (pN) for probing molecular adhesions and structures of a living cell interface [et al. [06 pN/s) . This te06 pN/s) . The pro [et al. ,127 deventerface to impronterface . Gourier [et al. proved tOptical Tweezers (OT). OT uses a highly focused laser beam to trap and manipulate microscopic, neutral objects such as small dielectric spherical particles that experience two kinds of forces: namely, the scattering force and the gradient force [et al. [Saccharomyces cerevisiae cells [nt force ,130. Thent force . Single nt force ,135,136 nt force used OT nt force . Thoumin [et al. were ablae cells ,135,138 Centrifugation Assay. Centrifugation assay is one of the frequently used techniques to measure cells adhesion strength due to their simplicity and the wide availability of equipment in most laboratories. Cells will be seeded in a multiwell plate and undergo treatments by culturing (cell culturing is similar to wash assay) before being spun for the cell detachment process. During the spinning, cells will experience a body force acting in the direction normal to the bottom of the plate that pulls them away from the surface [ surface . To asse surface ,140, qua surface , or by u surface . In manyet al. [HT 1080 cells compared to the cells binding to ECM proteins. Garc\u00eda et al. [et al. [HT-1080, NIH3T3 and MC3T3-E1) on different concentrations of collagen or fibronectin. Koo et al. 
[This method was used by Channavajjala et al. to undera et al. ,144,145 [et al. ,147 demoo et al. used theSpinning Disk. The spinning disk technique utilizes shear stress generated from a rotating disk device. Cells are first seeded on circular glass coverslipsor on the surface of a disk . These disks are later fixed onto a rotating device, that is placed inside a chamber filled with buffer solution [MC3T3-E1 cells on RGD peptides on self-assembled monolayers [et al. [et al. observed the effect of transformed oncogene v-src on the adhesion strength of chick embryo fibroblasts [et al. [IGF-I). The effect of surface charges on different substrates has also been studied using HT-1080 cells following fibronectin coating [et al. [et al. [IMR-90 human fibroblasts adhered to fibronectin-coated glass using the spinning disk technique.solution ,148. Thesolution or automsolution . The spisolution ,151, humsolution ,153, andnolayers . Lee et [et al. used thi [et al. ,155. Boeroblasts and humaroblasts to under [et al. investig coating . Reuteli [et al. were ablFlow Chamber. There are two types of flow chambers used for adhesion strength measurement: the radial and parallel flow chambers. The radial flow chamber methods involve flowing fluid in a chamber over adhered cells on a stationary substrate where a wide range of radially dependent shear stresses are applied [ applied to detac applied . The rad applied . The lin applied .et al. [et al. [HSMCs) initial attachment strength and migration speed on a range of fibronectin and collagen type IV concentration. Their finding\u2019s suggested that cell-substrate initial attachment strength is a central variable governing cell migration speed and the cell\u2019s maximal migration occurs at an intermediate level of cell-substrate adhesiveness [3T3 fibroblasts on the fibronectin-coated of self-assembled monolayers on glass surfaces [MC3T3-E1 cells to multilayer polyallylamine hydrochloride (PAH) heparin films was analyzed in order to evaluate biocompatibility of various film chemistries [et al. [et al. [Escherichia coli K12-D21 in the in the mid-exponential and stationary growth phases under flow conditions. Chinese hamster ovary cells (CHO) were used to explore the adhesion behavior towards different substrates with different treatments [The radial flow chamber technique has been used by Cozens-Roberts et al. to inves [et al. studied siveness . A numbesurfaces ,166,167.mistries . Rezania [et al. ,172,173 [et al. ,170,171 [et al. showed teatments .The parallel plate flow chamber consists of a bottom plate and an upper plate separated by a distance of the channel\u2019s height to form a rectangular flow channel. Cells are grown on a coverslip and positioned in the flow chamber, constructed by sandwiching a thin rubber gasket between two plates and mounted on a microscope to allow direct observation of the cells during experiments . The floet al. [l-lactide (PLL) films [et al. [VCAM-1) transduced human endothelial cells under physiological flow conditions. Palange et al. [The parallel flow technique was first introduced to study endothelial cell adhesion ,179 and et al. modifiedet al. ,187. TheL) films ,189, polL) films , polyethL) films , and numL) films ,193,194. [et al. were able et al. investigMicrofluidics. Microfluidic lab-on-a-chip technologies represent a revolution of the flow chamber in laboratory experimentation, bringing the benefit of miniaturization, integration, and automation to many research-based industries. 
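Both the macroscopic parallel plate flow chambers and their microfluidic counterparts described here detach or stress adherent cells with laminar flow, and the nominal wall shear stress in a thin rectangular channel is commonly estimated as τ = 6μQ/(wh²), assuming fully developed laminar flow and a channel width much greater than its height. The sketch below applies this relation with illustrative channel dimensions and flow rates that are assumptions, not values from the cited studies.

```python
def wall_shear_stress(flow_rate_m3_s, viscosity_pa_s, width_m, height_m):
    """Nominal wall shear stress (Pa) in a thin rectangular flow channel:
    tau = 6 * mu * Q / (w * h^2), valid for laminar flow with w >> h."""
    return 6.0 * viscosity_pa_s * flow_rate_m3_s / (width_m * height_m ** 2)


# Illustrative (assumed) parameters for a PDMS-on-glass microchannel.
mu = 1.0e-3      # Pa·s, approximate viscosity of culture medium at 37 °C
width = 1.0e-3   # 1 mm channel width
height = 100e-6  # 100 µm channel height

for q_ul_min in (1, 10, 100):       # flow rate in µL/min
    q = q_ul_min * 1e-9 / 60.0      # convert µL/min to m^3/s
    tau = wall_shear_stress(q, mu, width, height)
    print(f"{q_ul_min:>4} µL/min -> wall shear stress ~ {tau:.2f} Pa "
          f"({tau * 10:.1f} dyn/cm^2)")
```

Sweeping the flow rate in this way is how such chambers apply a graded detachment stress to a cell population within a single device geometry.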
These greatly reduce the size of the devices and make many portable instruments affordable with quick data read-outs. The use of small sample volumes leads to greater efficiency of chemical reagents, straightforward construction and operation processes, and low production costs per device, thereby allowing for disposability and fast sampling times. The ability for real-time observation makes microfluidics bring high promises for cell adhesion studies. In recent years, cell adhesion studies have been carried out in a miniature form of the traditional parallel plate flow chambers as discussed above, using flow in rectangular microchannels to apply shear stresses to cells. These devices are typically constructed from optically transparent PDMS bonded to glass using the soft lithography rapid prototyping process that allows many nearly identical devices to be manufactured in a short amount of time [ of time . The opt of time . Small d of time .et al. [NIH/3T3 mouse fibroblast and bovine aeortic endothelial cells) and materials [et al. [et al. [NIH3T3 fibroblast cells that had been allowed to adhere for 24 h on the collagen and fibronectin coatings on glass [A microfluidic device consisting of eight parallel channels has been used to assess the effect of varying collagen and fibronectin concentrations on the adhesion strength of endothelial cells . A serieet al. used a materials . Recentl [et al. integrat [et al. have upgon glass . In thisSingle cell approaches allow for precise measurements of the separation of the cell from the substrate. Specialized equipment, which is bulky and expensive, is often required for manipulation and alignment of the probe and testing can be time-intensive. The single cell adhesion measurement approach provides more precise measurement of the individual cell when compared to the population cell approach. The single cell measurement approach allows the system to image biomolecules at nanometer-scale resolution, to have a dynamic range of forces able to be applied to cells, and to process samples in their physiological medium and aqueous buffer . It has Studying human diseases from a biomechanical perspective can lead to a better understanding of the pathophysiology and pathogenesis of a variety of illnesses because changes occurring at the molecular level will affect, and can be correlated to, changes at the macroscopic level. Research on biomechanics at the cellular and molecular levels not only leads to a better elucidation of the mechanisms behind disease progression, it can also lead to new methods for early disease detection, thus providing important knowledge in the fight and treatments against the diseases. Sickle cell disease (SCD) could be characterized by observing the red blood cells\u2019 (RBC) adhesiveness and deformability ,204,205.Cell adhesion is a very important process in the human biological system. Studying both cell attachment and detachment events provides essential knowledge in understanding many important functional processes in the human body, which lead us to find the causes and problems that trigger certain diseases and thus develop the strategy for curing and improving them. Many different techniques and adhesion assays have been developed to study cell adhesion applicable to a wide range of fields. Every method is unique and was developed for specific important and independent purposes, which makes them difficult to compare in finding the best method applicable for cell adhesion studies. 
Choosing an appropriate technique is highly dependent on the purpose of the information that a person desires to obtain. Both single cell and population studies are equally important and required to fully understand how cells behave and function in the human system."} {"text": "Exemplified by cancer cells' preference for glycolysis, for example, the Warburg effect, altered metabolism in tumorigenesis has emerged as an important aspect of cancer in the past 10\u201320 years. Whether due to changes in regulatory tumor suppressors/oncogenes or by acting as metabolic oncogenes themselves, enzymes involved in the complex network of metabolic pathways are being studied to understand their role and assess their utility as therapeutic targets. Conversion of glycolytic intermediate 3-phosphoglycerate into phosphohydroxypyruvate by the enzyme phosphoglycerate dehydrogenase (PHGDH)\u2014a rate-limiting step in the conversion of 3-phosphoglycerate to serine\u2014represents one such mechanism. Forgotten since classic animal studies in the 1980s, the role of PHGDH as a potential therapeutic target and putative metabolic oncogene has recently reemerged following publication of two prominent papers near-simultaneously in 2011. Since that time, numerous studies and a host of metabolic explanations have been put forward in an attempt to understand the results observed. In this paper, I review the historic progression of our understanding of the role of PHGDH in cancer from the early work by Snell through its reemergence and rise to prominence, culminating in an assessment of subsequent work and what it means for the future of PHGDH. As a genetic disease, cancer is primarily caused by mutations in oncogenes and tumor suppressors, which serve to control tissue homeostasis . Altered in vitro and in vivo [The \u201cWarburg effect\u201d named in his honor describes a process wherein cancer cells preferentially use fermentative glycolysis-based glucose metabolism instead of entering the tricarboxylic acid cycle (TCA) and subsequent electron transport chain, even under aerobic conditions \u20136. The c in vivo .Metabolic reprograming in tumorigenic cells is not limited to deregulation by oncogenes and tumor suppressors but can also result from genomic modifications to the metabolic enzymes themselves, independently contributing to biomass accumulation and proliferative growth \u201314. ThisOne example is the conversion of glycolytic intermediate 3-phosphoglycerate into phosphohydroxypyruvate by the enzyme phosphoglycerate dehydrogenase (PHGDH)\u2014a rate-limiting step in the conversion of 3-phosphoglycerate to serine Figures and 2 1. In the In a review of serine metabolism published in 1984, Snell provides in vivo did not alter the effect [Two years later, Snell and Weber publishee effect , leadinge effect would laSome 19 months later in July 1987, Snell et al. [Finally in January 1988, Snell et al. publishe PHGDH gene. Citing work by Achouri et al. [ PHGDH gene from a rat hepatoma in 1997, Cho et al. [\u03bb ZAP human Jurkat T-cell cDNA library to look for similarity to the known rat PHGDH gene. One 852\u2009bp clone showed partial similarity to the 3\u2032-region and was subsequently used to rescreen the library as a novel probe [ PHGDH sequences and 3D modeling allowed for the assignment of substrate-binding, nucleotide-binding, and regulatory domains. The overall sequence was found to share 94% homology with rat PHGDH [ PHGDH [Following Snell et al.'s final publication in 1988 , the groi et al. 
that repo et al. used 300el probe . Of fiveat PHGDH and 93% [ PHGDH .17-Asp sequence known to be involved in binding the adenosine portion of NAD+, showed the highest degree of homology [When the sequence was compared via individual functional domains, the nucleotide-binding domain, containing a consensus Gly-Xaa-Gly-Xaa-Xaa-Gly-Xaahomology . The C-thomology , 30.32P-labeled version of the 2,478\u2009bp PHGDH cDNA probe on a human multiple tissue northern blot assay [Tissue distribution of PHGDH-specific mRNA was then analyzed using a ot assay . Two traot assay . In contot assay found a ot assay note thaot assay may not ot assay . Taken tot assay suggest ot assay .Km, Vmax\u2061, and/or stability. Akin to their analysis of normal tissue, Cho et al. [Cho et al. then souo et al. used a no et al. . The higo et al. . On the o et al. establiso et al. \u201320 and s Nature and Nature Genetics, the work of Pollari et al. [Studies of human PHGDH began to appear in subsequent years, prior to prominently reemerging in 2011 with the work of Possemato et al. and Locai et al. set the i et al. led to ti et al. . Subsequ in vivo via intracardiac inoculation of immunodeficient mice. Comparison of parental MDA-MD-231 and an enhanced daughter variant with heightened metastatic abilities, MDA-MB-231(SA), revealed genetic aberrations that were highly conserved between the two lines [P < 0.001) in strongly versus weakly metastatic cells, PSAT1 expression was not significant and PSPH failed to correlate with metastatic ability at all [Highlighting the incurable nature and significant morbidity of bone metastatic disease, Pollari et al. used breP < 0.001) in addition to shorter overall survival time (P = 0.002). Further assessment for \u201cclinically relevant features\u201d in a subset of 251 samples pointed to additional associations between enzymatic PHGDH and PSAT1 expression and several recognized risk features, including estrogen and progesterone receptor negative status, mutated p53, higher tumor grade, heightened expression of the cell proliferation markers PCNA and Ki-67, and higher levels of ERBB2 [Pollari et al. concludeof ERBB2 . Nature. With the stated aim of identifying metabolic genes required for tumorigenesis, Possemato et al. [In August 2011, Possemato et al. publisheo et al. cross-reo et al. construco et al. . PHGDH exists in a region of chromosome 1p commonly amplified in several types of cancer, including cancers of the breast and skin (melanoma) [ in vivo screen also decreased PHGDH protein expression; two of differing knockdown efficacies inhibited tumor growth [Consultation of cancerous somatic genome-wide copy number alterations reported in the work of Beroukhim et al. in an efelanoma) . None ofr growth . PHGDH had the most significantly elevated expression in estrogen receptor-negative breast cancer cells. Among 82 human breast tumor samples assessed in an immunohistochemical assay (not discussed in this review) PHGDH protein levels significantly correlated with estrogen receptor-negative status [The initial results of Possemato et al. corrobore status . In whate status conclusie status . PHGDH had 8\u201312-fold higher PHGDH protein expression relative to nontransformed cell lines (lacking gene-based amplification). Where the subsidiary analysis becomes interesting is in its secondary observation that PHGDH protein levels were elevated in two estrogen receptor-negative cell lines, lacking a PHGDH copy number gain [Turning to yet another line of evidence, the letter by Possemato et al. 
continueber gain \u2014a findinTo better understand the metabolic consequences associated with such increased PHGDH expression, Possemato et al. used metIn their penultimate experiment, Possemato et al. sought tSuch seemingly contradictory results with respect to serine flux presented a conundrum for Possemato et al. . On the 13C-glutamine further revealed that absolute flux from glutamine to aKG and to other TCA intermediates was significantly reduced in cells with RNAi-mediated suppression of PHGDH [Thus, in their final experiment for the paper, the researchers sought to test the notion that PHGDH, PSAT1, and PSPH reactions may produce metabolites beyond serine critical for cell proliferation . They hyof PHGDH . In cellof PHGDH . Based oof PHGDH believe Nature Genetics addressing PHGDH expression in melanoma cells, Locasale et al. [13C-glucose using targeted chromatography and mass spectrometry in HEK293T cells. Labeled glucose was incorporated into 13 metabolites across multiple pathways over a 30\u2009min span [13C-phosphoserine labeling paralleled 13C-serine labeling\u2014data further corroborated by NMR experiments which indicated that a \u201csubstantial fraction\u201d of glucose is diverted from 3-phosphoglycerate toward the conversion of serine and glycine in these cells. To measure the total amount of glucose-derived serine being made, Locasale et al. [13C-glucose and measured metabolites in cell extracts using targeted chromatography and mass spectrometry. Labeled serine accounted for about one-half with corresponding amounts of glucose incorporation detected in subsequent nucleotide and nucleoside intermediate formation. Expression of PHGDH was verified by Western blot; Locasale et al. [In their nearly concurrent publication ine et al. challenge et al. , suggeste et al. began thmin span . Importae et al. culturede et al. reported PHGDH gene. To that end, they identified PHGDH in the work of Slamon et al. [ PHGDH was found in a region of chromosome 1p (1p12) known to exhibit recurring copy number gains in 16% of all cancers [ PHGDH revealed localized amplification within the coding region of the gene. In an effort to verify these findings, the researchers examined focal copy number gain using fluorescence in situ hybridization (FISH) with an esophageal squamous cell carcinoma line (T.T. cells) known to contain amplification of PHGDH. Stable PHGDH knockdown using shRNA reduced the proliferation rate. Moving on to test whether the decreased proliferation was due to reduced ability to utilize the serine biosynthetic pathway, Locasale et al. [Indication of selective glucose diversion led to the notion that there may be a context in which pressure exists for tumors to increase PHGDH activity ; much lin et al. and note cancers . Furthere et al. generate PHGDH amplification results and noted that since amplification in a single tumor type was most commonly found in melanoma, it may be of use to consider PHGDH expression and copy number gain in human melanoma tissue samples. To that end, they used immunohistochemistry to measure PHGDH expression and found that high expression (defined by an IHC score > 1) was observed in 21% of samples. Corresponding copy number gains were detected with FISH in a subsample of 21 out of 42 tumors assayed. 
To investigate whether melanoma cell lines containing PHGDH amplification would be sensitive to decreased expression of PHGDH, cell lines with and without copy number gains were assessed with a methodology similar to that of the studies conducted by Possemato et al. [Echoing the need for human data first espoused by Snell and Cho o et al. took to P < 0.0001) with several clinical parameters. In an effort to validate and expand these results, Locasale et al. [P = 0.002) and basal subtypes (P = 0.004) but did not associate with general parameters such as metastasis (as was previously reported) or with tumor size. Consistent with a reliance of a subset of breast cancers on PHGDH, protein expression was required for growth in a panel of three breast cancer cell lines (including the BT-20 cell line that carries amplification). Reduced PHGDH expression decreased phosphoserine levels in PHGDH amplified BT-20 cells, while nontumorigenic breast cancer epithelial MCF-10a cells did not require PHGDH for growth, exhibit alterations in glycolysis on shRNA knockdown of PHGDH, or show detectable labeling of phosphoserine from glucose [However, while copy number gain offers one mechanism to divert flux into serine biosynthesis, Locasale et al. also note et al. assessed glucose . PHGDH on chromosome 1p12 is amplified in some 6% of breast cancers and 40% of melanomas. Beyond genomic amplification, a larger fraction of tumors were found to have elevated PHGDH protein levels, including 70% of estrogen receptor-negative breast cancers. High PHGDH expression (with or without genomic amplification) was associated with dependence on the enzyme for growth either via serine utilization or a hypothesized benefit of aKG to the TCA\u2014a phenomenon that Deberardinis [Given the importance granted to these two papers and the dense nature of the results that they report, it seems prudent to take a moment to review. Both studies published in mid-2011 report that the gene encoding PHGDH is amplified in a significant subset of human tumors and underscore that diversion of glycolytic intermediates into the serine biosynthetic pathway may contribute to tumorigenesis in cancer cells , 21, 34.rardinis and Setorardinis , 21, 34. in vitro and in vivo. In nude mice injected with stable, PHGDH shRNA-silenced glioma cells, overall survival was prolonged relative to mice injected with wild-type cells. The oncogenic transcription factor FOXM1 was also downregulated in PHGDH shRNA-silenced glioma cells. Using liquid chromatography/liquid chromatography-mass spectrometry, Liu et al. [In a series of papers that followed Liu et al. , Jing etu et al. identifiu et al. and, in u et al. .P < 0.05) [P < 0.05)\u2014both of which also associated with tumor progression, stage, and size (P < 0.05) [P = 0.029). On the whole, protein expression of PHGDH tended to be high in patients classified as mixed or basal-like subtype and low in patients classified as immune-related, molecular apocrine, or null subtype but was not statistically different when considered across all six TNBC types (P = 0.070). Among mixed subtype cases, 89.3% showed partial expression of basal markers in their mix [The work of Jing et al. follows < 0.05) . Express < 0.05) . Finally < 0.05) publishe < 0.05) with est < 0.05) ). 
Using < 0.05) assembleheir mix .Historic progression aside, the question remains \u201cWhat do these findings actually mean for the utility of PHGDH in cancer cells?\u201d The answer is not clear, rendering PHGDH a fascinating and, at first glance, unintuitive metabolic target in cancer . Known tde novo production of serine and, by way of serine hydroxymethyltransferase, glycine from which multiple amino and nucleic acids can be made. What is interesting about this aspect of the pathway is that cells grown in standard in vitro culture conditions consisting of abundant extracellular serine have been found to be sensitive to PHGDH knockdown [ de novo synthesis of an otherwise nonessential amino acid preferentially occurs such that in its absence cell proliferation suffers or does not occur. This notion of dependency or \u201caddiction\u201d to a \u201cflexible flux\u201d [Considered in more detail, the first and perhaps most readily apparent benefit involves the products of serine biosynthesis itself\u2014nockdown . Perhapsnockdown suggest,le flux\u201d , 34 lead c-Myc with serine hydroxymethyltransferase overexpression leading to stimulated proliferation in c-Myc-deficient cells. At the same time, both serine and glycine are abundant in the plasma, so as noted previously by Mullarky et al. [Tying into the conversion of serine to glycine by the enzyme serine hydroxymethyltransferase, the second notion stems from recognition that this conversion provides a major source of methyl groups for the one carbon pools required for biosynthesis and DNA methylation , 6. By cy et al. .Perhaps the role of aKG used by Possemato et al. to expla3\u2212) and the reversible transamination of aspartate to form oxaloacetate (including generation of glutamate and depletion of aKG) coupled with the oxidation of glutamate back to aKG (glutamate + NAD+ + H2O \u2192 NH4+ + aKG + NADH + H+) or the conversion of propionyl-CoA to succinyl-CoA (propionyl-CoA + ATP + HCO3\u2212\u2009\u2009\u2192 succinyl-CoA + ADP + Pi) in the \u03b2-oxidation of fatty acids [The suggestion that aKG synthesized from serine is critical for tumor growth is not easy to understand from a metabolic point of view given the numerous potential physiological sources of the compound . It is gty acids . Neverthty acids acknowle+. How this is accomplished remains unclear. Mullarky et al. [+ on mitochondrial electron transport chain complex II, may provide a means. In such a situation, the purported pathway could provide an additional benefit to cancer cells by enabling mitochondrial ATP synthesis to occur with less production of reactive oxygen species, passing the electrons from NADH to complex II instead of conventional passage to complex I [Allowing for a greater amount of speculation, Mullarky et al. contend y et al. contend omplex I .Finally, Mullen and DeBerardinis succinct PHGDH amplification likely stems from \u201ca combinatorial effect of pathway flux toward biomass production, changes in redox status, energy metabolism, and possibly some signaling functions\u201d in a manner that \u201clikely varies based on environmental factors, tissue of origin, and cooperating oncogenic mutations\u201d [Across all of these theories, the one thing that becomes collectively clear is that although glycolysis, the TCA cycle, and glutamine metabolism are central to the functioning of normal, and at least to some extent cancerous, cells, they do not act alone . They artations\u201d . 
If the Moving forward, the challenge will be to determine whether or not cancer cells expressing elevated levels of PHGDH require additional serine for growth. Evidence presented throughout this paper speaks to a complicated and uncertain result, for, as we have seen in the work of Locasale et al. , higher \u03b6 [\u03b6 function in mice results in increased tumorigenesis and heightened expression of PHGDH as well as PSAT1. Mechanistically, crystal structures of PHGDH dimers reveal phosphorylation sites at Ser55, Thr57, and Thr78 thought to be highly conserved among humans, rats, monkeys, and mice [Ongoing work in the field has begun to consider intersecting pathways such as reported correlations between serine biosynthesis and p73 expression in human lung adenocarcinomas as well \u03b6 . Recent \u03b6 providesand mice . In prosand mice , and in and mice . Additioand mice and Al-Dand mice .In terms of using PHGDH as a potential therapeutic target, one would need to establish a significant difference between the requirement of the enzyme's activity in cancer and in normally proliferating cells . Over th"} {"text": "Keratitis is an inflammatory condition, characterized by involvement of corneal tissues. Most recurrent challenge of keratitis is infection. Bacteria, virus, fungus and parasitic organism have potential to cause infection. TLR are an important class of protein which has a major role in innate immune response to combat with pathogens. In last past years, extensive research efforts have provided considerable abundance information regarding the role of TLR in various types of keratitis. This paper focuses to review the recent literature illustrating amoebic, bacterial, fungal and viral keratitis associated with Toll-like receptor molecules and summarize existing thoughts on pathogenesis and treatment besides future probabilities for prevention against TLR-associated keratitis. Toxoplasma gondii acts as a first ligand for TLR11 that has been described as an important clue regarding the gene expression studies on mice but not on human. MyD88 plays an essential role in TLR signaling and is involved with TLR expression in eye is a painful and most serious corneal infection, caused by strain of protozoan\u2014El-Sibae . DiagnosPseudomanas aeruginosa, Staphyloccus aureus and Streptococcus pneumonia. Gram-negative bacteria, Pseudomanas aeruginosa, are a major cause of bacterial keratitis in Unites States and have been found as a risk factor in contact lens wearer and adenovirus keratitis. HSV is an enveloped double-stranded DNA virus and considerably causes ocular infection. Epidemic studies of ocular HSV infection showed that HSV-1 is a prominent cause of viral keratitis demonstrated that corneal blindness caused by microbial keratitis is emerging as a main cause of visual impairment after cataract and glaucoma production in cornea of Wistar rat using inhibitors like PDTC and U0126. Similar findings have been obtained by Ren et al. (Acanthamoeba.Ren and Wu investign et al. in humanPseudomonas aeruginosa endotoxin-induced keratitis in BALB/c, C3H/HeN mice. They measure stromal thickness, haziness and neutrophil recruitment by in vivo scanning confocal microscopy and immune histochemistry, respectively. Saint Andre et al. (Wolbachia play an important role in corneal inflammation induced by bacterial lipopeptides that activate TLR2/TLR6/MyD88 signaling in the cornea. Zhang et al. (Pseudomonas aeuroginsa infection. 
They determined IK\u03b2-\u03b1 phosphorylation and degradation, expression of IL-6, IL-8 in mRNA, and secretion using Western blotting, RT-PCR and ELISA, respectively. Johnson et al. (Staphyloccousaureus or synthetic lipopeptide Pam3cys. Huang et al. (Pseudomonas aeruginosa. Carlson et al. (Pseudomonas keratitis. Huang et al. (Pseudomonas aeruginosa but have higher potential for systemic infection. Ito and Hamerman (Pseudomaonas aeruginosa infection in mice. Huang et al. (Pseudomonas aeruginosa. Hayashi et al. (It has been seen in various researches that modulation in TLR signaling occurs in corneal epithelium cells in several ways after infection with bacterial pathogens, responsible for bacterial keratitis. Studies shows a role of different types of TLR and mechanism associated with TLR-linked bacterial keratitis. Khatri et al. determine et al. show thag et al. examinedn et al. demonstrn et al. exploredg et al. showed tn et al. demonstrn et al. examinedn et al. demonstrn et al. show thag et al. show thaHamerman observedHamerman displayeg et al. showed ti et al. revealedi et al. demonstri et al. demonstrKariko et al. reportedFusariumsolani fungus in BALB/c mice. Gao and Wu (Aspergillus fumigates and Fusarium solani. Jin et al. (Fusarium solani. Yuan and Wilhelmus (Candida albican infection using BALB/c and C57BL/6 mice. Sun et al. (Fusarium keratitis. Hua et al. (Fusarium solani keratitis in BALB/c mice and TLR4 also involved in controlling fungal infection during Fusarium oxysporum keratitis in C57BL/6 mice. Tarabishy et al. (\u2212/\u2212 and TLR4\u2212/\u2212 mice, but not TLR2\u2212/\u2212 mice, have an initially increased F. oxysporum burden because of reduced fungal clearance in the cornea. In C. albicans keratitis, TLR-knockout mice are helpful to determine the effect of TLR2 and 4 in experiment yet, the severity of fungal keratitis in murine mutant strains was similar to that in wild-type control mice, more fungi were recovered after 3\u00a0days of infection from TLR2\u2212/\u2212 than from TLR4\u2212/\u2212 mouse corneas. The results of this study were similar to another study done by Villamon et al. (Oropharyngeal candidiasis, the level of TLR2 and 4 increased after fungal infection in mice. MyD88 is also essential for TLR2 and TLR4 signaling, yet the role of TLR2 and TLR4 is not clear in the development of corneal opacification; TLR4\u2212/\u2212 mice have an impaired ability to clear the infection so TLR4\u2212/\u2212 is important in eradicating fungal infection. Gao and Wu (Aspergillus fumigates and Fusarium solani. Bellocchio et al. (A. fumigatus\u00a0and to the pathogenic yeasts C. albicans and Cryptococcus neoformans (Table\u00a0Hu et al. observedo and Wu and Zhaoo and Wu elucidatn et al. observedilhelmus showed tn et al. determina et al. and Taraa et al. showed ty et al. displayen et al. . Zhang en et al. showed to and Wu , Zhao ano and Wu , Guo et o and Wu , Guo ando and Wu , Zhao eto and Wu and Hu eo and Wu showed to et al. and Meieo et al. elucidatns Table\u00a0.Table\u00a01LStaphylococcus. Huang et al. (Pseudomonas aeruginosa keratitis in BALB/c mice. Zhang et al. (Pseudomonas aeruginosa infection in human corneal epithelium cell. Guo et al. (Apergillus fumigatus keratitis by suppressing corneal inflammation. Wilhelmus (Acanthamoeba keratitis usually affect contact lens wearer and to prevent this problem improvement in disinfecting solution has been done because Verani et al. (Acanthamoeba sometimes anti-Acanthamoebic efficacy of solution was not sufficient. 
Hara et al. (Sun and Pearlman investigg et al. elucidatg et al. used anto et al. determinilhelmus showed ti et al. shows tha et al. proposeda et al. demonstrIn above studies, animals like rabbit and mice have been mostly used for studying different types of keratitis. There has been found several disadvantages of using animal models because of physiological differences between human and animal eye like corneal size, corneal thickness, arrangement of corneal collagen (Hayes et al. In many research broad spectrum antibiotics like vidarabine, trifluridine, acyclovir or gancyclovir linked with several side effects/risks because broad spectrum antibiotics can disturb normal flora in eye and drugs can be associated with drug resistance by pathogenic bacteria.Improvement in corneal cell culture models would be useful in pathogenesis of ocular diseases because it can reduce risk associated with killing of animals. Some researchers use primary cell culture of corneal epithelial cells and cell lines with long lifespan as in vitro models for ocular toxicology studies and to explore human corneal epithelial cell biology but it is difficult to cultivate primary human corneal epithelium because of paucity of accessible tissue So, SV 40-immortalized HCEC lines with properties that have resemblance with normal human corneal epithelial cells can be used in pathogenesis of keratitis problem. It also has been observed that artificial corneal epithelium cell under serum-free conditions can act as better model than normal corneal epithelium cell for ocular surface studies. These cell cultures are useful for studying gene regulation and tissue development studies.Understanding complex mechanisms associated with TLR-linked corneal inflammation will be helpful to device new therapeutic approach to modulate immune responses associated with TLR. Since some studies show that understanding of molecular pathways of TLR and RIG/Mda 5, which activate immune response against antiviral infection, will lead to novel approach for treating antiviral infection. Small molecules have been used for better understanding of molecular basis of infective keratitis. Toll like receptor namely TLR7 and TLR 8 can independently mediate recognition of small compounds like \u201cimidazoquinoline R-848\u201d suggest possible redundancy in these receptors. Understanding the function and biology of the corneal LPS receptor complex may lead to novel therapies for the management of ocular Gram-negative bacterial infections has been seen.Micavibrioaeruginosavorous and Bdellovibriobaceriovorous may be susceptible to attack pathogenic MDR bacteria. These good bacteria can combat with bad bacteria.RNAi pathway is often providing good advantage to investigate function of genes in cell culture studies according to previous researches. RNAi is fast, uncomplicated and reliable effort to repress targeted genes expression. Therapeutic potential of RNAi has been revealed in several diseases like viral infection, hepatitis and ocular neovascularization. siRNA technique provides great advantage due to facile delivery of siRNA on cornea. It also has major applications in gene knockdown, functional genomics, medicinal field and biotechnology field studies. RNAi drugs show better response than antisense RNA molecules and antibody-based drugs. RNAi may be more effective than antisense RNA in human cancer cell lines. To deal with antibiotic resistance bacteria problem, some bacteria can be used to combat with drug-resistance bacteria. 
Predator bacteria like"} {"text": "It is now well established that psychosocial factors can adversely impact the outcome of spine surgery. This article discusses in detail one such recently-identified \u201crisk\u201d factor: demoralization. Several studies conducted by the author indicate that demoralization, an emotional construct distinct from depression, is associated with poorer pain reduction, less functional improvement and decreased satisfaction among spine surgery patients. However, there are indications that the adverse impact of risk factors such as demoralization can be mitigated by psychosocial \u201cmaximizing\u201d factors\u2014characteristics that propel the patient towards positive surgical results. One of these maximizing factors, patient activation, is discussed in depth. The patient activation measure (PAM), an inventory assessing the extent to which patients are active and engaged in their health care, is associated not only with improved spine surgery results, but with better outcomes across a broad range of medical conditions. Other maximizing factors are discussed in this article. The author concludes that the past research focus on psychosocial risk factors has limited the value of presurgical psychological screening, and that future research, as well as clinical assessment, should recognize that the importance of evaluating patients\u2019 strengths as well as their vulnerabilities. For example, Rajaee et al. examinin [et al. who founet al. [et al. [et al. [Spine surgery can be, and often is, quite effective. For example, Weinstein et al. in the S [et al. found th [et al. ).et al. [et al. [On the other hand, it is well established that spine surgery not infrequently fails to provide pain relief and improved functional ability. For example, a recent analysis of discectomy patients found 28% had unfavorable outcomes , with a et al. found th [et al. for a syet al. [et al. [et al. [et al. [A large and growing body of research is finding that psychosocial risk factors can contribute significantly to the variability in spine surgery outcome , and ot [et al. ; Trief, [et al. ; Voohies [et al. ; Chiacha [et al. ) demonst [et al. ) patientet al. [2 = 0.88, and internal consistency (r2 ranging from 0.87 to 0.93 depending on the population tested), with no significant differences between average scores of men and women , was based on the use of the original Minnesota Multiphasic Personality Inventory (Hathaway and McKinley ) or its et al. found thn-Porath , pp. 24\u2013et al. [Our research has fouet al. ) and pooet al. ). Furtheet al. T-scoreset al. by our gElevated scores on scale RCd have also been found to be associated with poorer conservative treatment outcomes in chronic low back pain. Tarescavage, Scheman and Ben-Porath examininet al. [et al. [et al. [A substantial literature exists on demoralization, especially in the context of other chronic medical conditions. Most of these studies use instruments other than the MMPI-2-RF, including the Diagnostic Criteria for Psychosomatic Research or the [et al. ). A rece [et al. ) found ti.e., inability to experience pleasure . Demoragueiredo ). Severagueiredo examinin [et al. found am [et al. ; Adogwa [et al. ), which [et al. ), demoraThe feelings of ineffectiveness, helplessness and the sense of giving up that comprise the core of demoralization stand in sharp contrast to the behaviors and general health orientation that are associated with positive health outcomes. 
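The reliability figures quoted above (test-retest and internal-consistency estimates in the 0.87 to 0.93 range) are the kind of statistics obtained with standard psychometric formulas. Purely as a generic illustration, and with fabricated item responses rather than any MMPI-2-RF data, an internal-consistency (Cronbach's alpha) calculation can be sketched as follows.

```python
# Generic sketch of an internal-consistency (Cronbach's alpha) calculation.
# The item responses below are fabricated for illustration only.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```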
In order to achieve and maintain good health, individuals must be able take control over diet and exercise and seek out health information. Individuals also need to recognize when illness occurs, and be able to communicate with health care providers. They need to work with their physicians on plans to overcome or mitigate illness, and have the fortitude to follow through on these plans. Such an effective health orientation is captured by the Patient Activation Measure .1c, and testing for low-density lipoprotein cholesterol, among others . Belief that taking an active role in health is important; (2). Having the confidence and knowledge to take action; (3). Taking health-related action; (4). Staying the course under stress. In the original studies, PAM scores correlated significantly with the use of a glucose journal in diabetes, with following a low fat diet in patients with high cholesterol, with routinely exercising for patients with arthritis, and for seeking out information from health care providers. Further studies have found the PAM correlates significantly with both health outcomes and health care utilization. For example, in diabetics PAM scores predicted testing for, and control of Hemoglobin And Jones ). In an nd Jones ).Two previous studies have examined the role of patient activation in spine surgery patients. Skolasky, Mackenzie, Wegener and Riley examinedet al. Unpublished data [2 = 0.33, p < 0.01 ), reduction of negative affect as assessed by change scores on Likert-type emotion ratings , and with patient satisfaction at an average of about 5 months post-op.We have been examining the relationship of the PAM to the outcome of spine surgery. Thus far, we have giviz, the assessment of patient characteristics that may militate towards improved outcomes. Certainly, my own research as well [et al. ; denBoer [et al. ), which [et al. ) have foet al. [Other potential \u201cmaximizing factors\u201d warrant exploration. Consider, for example, a patient who has a high level of family problems . Such a patient may simultaneously have a strong social support system outside the family, or even be satisfied with the level of support received by family members, despite the problems that exist within the family. Social support has been found to be an important predictor of improved health outcomes. For example, Mutran, Reitez, Mossey and Fernandez , examiniet al. found thet al. ). Thus, et al. [et al. [et al. [A third potential characteristic that may help to maximize spine surgery results revolves around expectations for the outcome of spine surgery. Spine surgery has three major goals: reduce pain; improve functional ability; and correct the underlying physical pathology responsible for the pain and functional deficits. The extent to which these three goals are achieved, however, varies widely. Some patients coming to surgery expect total pain relief and a complete return to pre-morbid activity levels, while others may consent to surgery even though their expectation is that minimal improvements will occur. So, it is reasonable to consider whether patient expectations bear a relationship to surgical results. Although the results are not completely consistent, several studies show that greater expectations of improvement assessed pre-operatively are associated with more sanguine surgical results. For example, Yee et al. examininet al. ) found h [et al. examinin [et al. ). Thus, --Positive pain coping strategies, such as optimism and Haret al. );--et al. 
[Spirituality and forgiveness .Patient activation, social support and positive outcome expectations are but three of a host of psychosocial factors that might potentially be associated with improved spine surgery results. Some other factors that have been found to correlate with improved outcome of treatment for pain, and may militate towards better spine surgery response include: It would be of great value to explore these and other positive factors that may contribute to better spine surgery results.A number of psychosocial risk factors for reduced spine surgery outcome are by now well established. Depression, somatic sensitivity, demoralization, substance abuse, vocational issues such as workers\u2019 compensation and litigation\u2014all these are shown to have strong empirically-derived correlations with diminished results. However, the focus of PPS upon psychosocial risk factors has ignored much of the complexity of each case and provided limited insight into factors that may improve surgical outcomes. Research on patient activation, social support and surgical outcome expectations point to the importance of examining psychosocial \u201cmaximizing factors\u201d\u2014those patient characteristics that may mitigate the adverse impact of established risk factors, and may propel the patient towards good surgical response. In order to provide a full and effective picture of each patient\u2019s capacity for achieving reduction in pain and improvement in functional ability, the field of presurgical psychological screening must begin to focus as much on the patient\u2019s strengths as upon his or her vulnerabilities."} {"text": "Selected papers from the 13th International Conference on Bioinformatics (InCoB2014), July 31-2 August, 2014 in Sydney, Australia have been compiled in this supplement. These range from network analysis and gene regulatory networks to systems level biological analysis, providing the 2014 update to InCoB's computational systems biology research. Sydney, Australia hosted the 13th InCoB , the official conference of the Asia-Pacific Bioinformatics Network (APBioNet) . This inAuthors were offered the BMC track and PeerJ . The detet al.[et al.[et al.[et al.[The complexity of biological systems of often represented in terms of networks where biomolecules that interact with each other are represented as interaction partners. Characterizing a complex biological network by measuring its topology using 11 parameters and identifying nodes and sub-networks, is computationally challenging. Chin et al. have devet al. have devet al. have anal.[et al. have coml.[et al.. For sysl.[et al. have appet al.[cisMEP database comprising predicted cis-regulatory modules integrated with available epigenetic data, as a first step to support research in this area. Approaching the transcription regulation problem from another angle, Yan and Wang [Understanding transcriptional regulation of genes remains a challenging problem, dependent on the binding of several transcription factors as well as epigenetic changes. Given the sparseness of experimental genome-wide epigenetic profiles, Yang et al. have devand Wang propose et al.[et al.[A comprehensive understanding of the symbiosis between the human microbiome and the host organism is essential for defining its role in human health and disease. Yang et al. have appl.[et al. have anaThe articles in this supplement span network analysis, regulatory networks as well as systems-level analysis. 
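Topological characterization of an interaction network, as in the network-analysis papers summarized above, usually amounts to computing per-node and global graph statistics. The sketch below uses a toy graph and a handful of common metrics; these are illustrative choices, not the exact eleven parameters referred to in the supplement.

```python
# Toy example of topological characterization of an interaction network.
# The edges are invented; a real analysis would load a PPI or regulatory network.
import networkx as nx

g = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")])

metrics = {
    "degree": dict(g.degree()),
    "betweenness": nx.betweenness_centrality(g),
    "clustering": nx.clustering(g),
    "avg_shortest_path": nx.average_shortest_path_length(g),
    "density": nx.density(g),
}
for name, value in metrics.items():
    print(name, value)
```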
With ongoing NIH Big Data to Knowledge (DB2K) and otheThe authors declare that they have no competing interests.SR wrote the introduction. CS and SR (Program Committee Co-chairs) managed the review and editorial processes, respectively. TWT supported the post-acceptance manuscript processing."} {"text": "Translational bioinformatics is an emerging field that aims to exploit various kinds of biological data for useful knowledge to be translated into clinical practice. However, the flooding of the huge amount of omics data makes it a big challenge to analyze and to interpret these data. Therefore, it is highly demanded to develop new efficient computational methodologies, especially data mining approaches, for translational bioinformatics. Under these circumstances, this special issue aims to present the recent progress on data mining techniques that have been developed for handling the huge amount of biological data arising in translational bioinformatics field.In data mining, one of the most important problems is how to represent the data so that the computational approaches could handle these data appropriately. In this special issue, B. Gan et al. utilized the latent low-rank representation to extract useful signals from noisy gene expression data and then classified tumors with sparse representation classifier and obtained promising results on benchmark datasets. C. Zhao et al. proposed a new feature representation of facial complexion for diagnosis in traditional Chinese medicine and achieved high recognition accuracy. G. Zhang et al. formulated the skin biopsy image annotation as a multi-instance multilabel (MLML) problem and automatically annotated the skin biopsy images with a sparse Bayesian MLML algorithm based on region structures and texture features. Except for feature extraction, feature selection is also very important in data mining. Z. Ji et al. proposed a particle swarm optimization-based feature selection approach to predict syndromes for hepatocellular carcinoma and improved diagnosis accuracy. With the accumulation of various data in translational bioinformatics, it is becoming a challenging task for traditional intelligent approaches to handle and interpret these data; S. Li et al. presented a survey on the recent progress about the hybrid intelligences and their applications in bioinformatics, where the hybrid intelligence is more powerful and robust compared with traditional intelligent approaches.The rapid accumulation of various kinds of biological data requires more powerful statistical approaches to extract useful signals from the huge amount of noisy data. L. Sun et al. built a new pipeline to investigate the DNA methylation profiles in male and female nonagenarians/centenarians and identified some differentially methylated probes between male and female nonagenarians/centenarians, which provide insights into the mechanism of longevity gender gap of human beings. Z. Teng et al. developed a new algorithm to predict protein function based on weighted mapping of domains and GO terms, which outperforms other popular approaches on benchmark datasets. J.-L. Huang et al. presented an online cross-species comparative system to identify conserved and exclusive simple sequence repeats within model species, which can facilitate both evolutionary studies and understanding of gene functions. L. Guo et al. 
proposed a new approach to identify microRNAs (miRNAs) associated with breast cancer and found that miRNA gene clusters demonstrate consistent deregulation patterns despite their different expression levels, which may provide insights into the regulatory roles of miRNAs in tumors.\u03b2 to ovarian carcinoma immunoreactive antigen-like protein 2 (OCIAD2) by exploring the pathway bridge, and the resultant pathway explained how TGF\u2009\u03b2 affects the expression of OCIAD2 in cancer microenvironment.Recently, network biology is becoming a promising research field by organizing different kinds of data into a network representation. T. Jacquemin et al. proposed a new approach to identify disease associated protein complexes based on a heterogeneous network that consists of a disease similarity network and a tissue-specific protein-protein interactions network and successfully found disease associated complexes. X. Li et al. proposed a new pipeline to detect symptom-gene associations by integrating multiple data sources and found some potential disease genes. It is known that DNA mutations will affect gene expression. However, it is difficult to know which mutations will affect the gene expression and how the genes are regulated within the biological system. D. Kim et al. developed a novel approach that can both identify the Quantitative Trait Loci and infer the gene regulation network and successfully identified the genes associated with psychiatric disorder. R. Zhang et al. presented a new approach to identify the pathways linking TGF\u2009Xing-Ming ZhaoJean X. GaoJose C. Nacher"} {"text": "We genotyped variants using PCR and direct sequencing and evaluated estimated haplotypes of MDR1 variants. The analysis revealed few differences in SNP genotype frequencies between patients and controls, or in clinical parameters among the patients. Genotype distribution of MDR1 SNPs rs1236, rs2677, and rs3435 showed significant (p < 0.05) association with different medication regimes . Some marginal association was detected between ANGPTL4, GPC5, GLCCI1, and NR3C1 variants and different medication regimes, number of relapses, and age of onset. Conclusion. While MDR1 variant genotype distribution associated with different medication regimes, the other analyzed gene variants showed only little or marginal clinical relevance in INS.Polymorphic variants in several molecules involved in the glomerular function and drug metabolism have been implicated in the pathophysiology of pediatric idiopathic nephrotic syndrome (INS), but the results remain inconsistent. We analyzed the association of eleven allelic variants in eight genes Childhood-onset idiopathic nephrotic syndrome (INS) is a common kidney disease in children. It is characterized by minimal glomerular changes in light microscopy and podocyte foot process effacement in electron microscopy. The great majority (80\u201390%) of INS patients show good responsiveness to steroid treatment, but recurrent episodes occur in at least 70% of the patients. Development of renal failure occurs rarely.Angptl4), glycoprotein, which is upregulated in rats with steroid sensitive proteinuria in patients who received IS drug medication instead of GCs only.ntal INS , 25. The4 levels . RecentlGPC5) gene and acquired NS through a genome wide association study and replication analysis. They showed that glypican-5 is localized on podocyte cell surface membranes and that the risk genotype (AA) of the GPC5 SNP rs16946160 was associated with higher expression. 
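Genotype-phenotype association analyses of the kind summarized here typically reduce to contingency-table tests on genotype counts, plus allele-frequency comparisons between groups. The following sketch uses fabricated counts for a hypothetical SNP; it does not reproduce the MDR1 or GPC5 data of the study.

```python
# Generic genotype-association sketch: chi-square test on a 2 x 3 genotype table.
# Counts are fabricated for illustration, not taken from the study described here.
import numpy as np
from scipy.stats import chi2_contingency

#                 AA   AG   GG
table = np.array([[ 5,  30,  65],   # patients
                  [12,  40,  48]])  # controls

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")

# Allele frequency of the A allele in each group
a_freq = (2 * table[:, 0] + table[:, 1]) / (2 * table.sum(axis=1))
print("A allele frequency (patients, controls):", a_freq.round(3))
```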
In our study, we observed an association between rs16946160 A allele and early disease onset (16 versus 5%) but we did not find an association between this SNP and INS in general. It is, however, notable that none of our patients and only one of the controls carried the AA genotype. Okamoto et al. found the A allele frequency of controls to be 0.168 and dbSNP (http://www.ncbi.nlm.nih.gov/snp/) puts it at 0.161. In our study, it was only 0.08. Thus, it is possible that due to the frequency differences between populations the association between the risk genotype and INS is not visible in our patients.Recently, Okamoto et al. identifi nNOS gene polymorphism rs2682826, the TT genotype was associated with INS but not with GC responsiveness. NO attenuates many functions in the kidney and all forms of NOS are expressed in the kidney but the role of NO in renal disease is unclear. In our study, we did not find an association of rs2682826 genotypes with INS or with any clinical features of the disease.Alasehirli et al. found th IL-13 and MIF, whose genetic variants have been associated with NS, were also negative. While Wei et al. [ IL-13 gene correlate with long term outcome of INS, we did not see any association between the analyzed SNP and the number of relapses, response to medication, or any other feature. MIF is counterregulated by glucocorticoids and the rs755622 SNPs have been studied in association with NS. Vivarelli et al. [Our results of the two cytokines,i et al. reportedMDR-1 gene codes for a membranous P-gp, which is a multidrug transporter expressed in the proximal tubule cells. Certain SNPs in MDR1 gene are believed to affect the expression of the gene or activity of the protein it codes. The common SNP rs1045642 in exon 26 has garnered a lot of attention. It is synonymously variant and it has been suggested that it is not causal itself but linked with another polymorphism or has an effect on DNA structure or RNA stability [tability . Of the tability , 28. Thetability , 12 whiltability , 22. MDR1 SNP rs1045642 CC genotype showed association with higher age of onset (20 versus 0%). Youssef et al. reported similar association for SNPS rs2032582 as well as rs1045642. Other studies did not show this association [We did not find any difference in the Finnish patients and controls in rs2032582 genotypes. Wasilewska et al. , Jafar eociation , 22. MDR1 SNPs showed association with treatment choices, T allele and TT genotype being more common in patients who needed IS drugs compared to those who were only medicated with GCs, which indicates that T and TT are associated with more complicated form of the disease. Surprisingly, only rs1045642 showed significant association between genotype distribution and GC responsiveness , although it must be noted that only ten of our patients were not responsive to GCs; this small cohort size may affect these results.In our study, all three MDR1 variants. In previous studies, Wasilewska et al. [We carried out haplotype analysis of the threea et al. reporteda et al. and Yousa et al. , althouga et al. reached a et al. , 22, 30.a et al. found haNR3C1 codes for glucocorticoid receptor (GR) that can affect the regulation of many biological functions, including responsiveness to GC, and its functional variability may play a role in the therapeutic response to GC. 
In this study, we analyzed NR3C1 SNP rs41423247 and curiously found that patients with more than five relapses carried more frequently heterozygous GC genotype than those with less than five relapses (68 versus 33%). The amount of both CC and GG homozygotes was diminished in these frequent relapsers. The allele distribution between groups with over and under five relapses showed no difference. In some previous studies, G allele has been associated with increased GC sensitivity [sitivity , 32 whil GLCCI1. Tantisira et al. [An interesting new gene in the context of INS isa et al. first sha et al. showed ta et al. looked tThe studied genetic variants have little role in the course of NS in Finnish patients. A notable exception to this is MDR1 SNPs whose genotype and allele distribution show significant association to different medication regimes. The genetic background to GC sensitivity is very heterogenic and varies between ethnic groups, which may have to be considered when drawing up treatment strategies for individual patients. More work needs to be done to discover other contributing molecules before the genetics of steroid responsiveness in NS can be understood."} {"text": "It was treated with 6% lime and incubated for 1 week under room temperature in 2 kg sealed polythene bags and was evaluated for proximate composition after incubation. Different isonitrogenous complete diets containing 0-50% of lime treated olive cake on ADF replacement basis were formulated as per the requirement of adult male goats. In ADF replacement, fiber and concentrate sources were replaced by lime treated olive cake by replacing the 0-50% ADF percentage of the total 40% ADF value of complete feed. The formulated complete diets were tested for in vitro dry matter degradation (IVDMD) values were comparable at all replacement levels. However, a point of inflection was observed at 40% ADF replacement level, which was supported by truly degradable organic matter (TDOM), microbial biomass production (MBP), efficiency of MBP and partitioning factor values (PF).Treatment of olive cake with 6% slaked lime increased availability of cellulose and alleviated digestibility depression caused by high ether extract percentage. Organic matter, nitrogen free extract, ADF and neutral detergent fiber were significantly lowered by lime treatment of olive cake. The cornell net carbohydrate and protein system analysis showed that non-degradable protein represented by acid detergent insoluble nitrogen (ADIN) was 21.71% whereas the non-available protein represented by neutral detergent insoluble nitrogen (NDIN) was 38.86% in crude olive cake. The in vitro studies that Indian olive cake can be included in complete feed at 30% level for feeding in small ruminants without compromising in vitro degradability of the feed.In our study, we concluded that there is comparable difference in composition of Indian olive cake when compared with European olive cake. The most important finding was that about 78% of nitrogen present in Indian olive cake is available to animal in contrary to that of European olive cake. We concluded from The shortage and increasing cost of conventional feed ingredients have driven the attention of research towards utilization of non-conventional feedstuffs in livestock ration. The use of non-conventional feedstuffs minimizes the competition of livestock with humans for conventional food grains and reduces the cost of animal production -3.Olea europaea L.) 
oil industry by-products are promising unconventional feedstuffs ) \u00d7 100/TDOMPF = TDOM/net gas volumeGeneralized linear model analysis of variance procedure was usedThe proximate composition and fiber fractions of olive cake and lime-treated olive cake used in this study are shown in The ADIN was 21.71% whereas the NDIN was 38.86%. Around 78% of nitrogen is likely to be available to the ruminant. Fraction A which gets instantaneously solubilized in rumen and is cent percent digested in the intestine was 33.69%. The protein fractions likely to be degraded in the rumen were 56% and undegradable dietary nitrogen component was 22.29% .The IVDMD values were comparable (p>0.05) between different inclusion levels. The IVDMD values varied from 48.75% to 56.25%. The TDOM (mg/200 mg DM) values varied from 97.50 to 112 with significant (p<0.01) difference among various replacement levels. The 40% ADF replacement level differed significantly (p<0.01) in TDOM from all other levels except 30% and 35% replacement level. The gas volume varied from 24 to 33 ml/200 mg DM and the levels differed significantly (p<0.01) from each other. The MBP value varied from 33.45 to 48.80 mg/200 mgDM. The MBP value was highest in 40% ADF replacement level and differed significantly (p<0.01) from all other levels. The EMP values varied from 33.42 to 46.41(%TDOM) and differed significantly (p<0.01) in various replacement levels. The values were highest in 40%, 45%, and 50% ADF replacement levels. The PF values varied from 3.31 to 4.12 and the values differed significantly (p<0.01) in various replacement levels. The values were highest and comparable (p>0.05) in 40%, 45% and 50% ADF replacement levels.et al. [et al. [et al [et al. [et al. [et al. [et al. [et al. [et al. [et al [et al [Proximate composition of the olive cake was similar to that reported by Ahmad , Ashraf et al. . CP cont [et al. , Al-Masr [et al. , Vargas-. [et al , Mart\u00edn [et al. , Molina- [et al. and Sade [et al. ; but wer [et al. and Gul [et al. . The per [et al. , Al-Masr [et al. , but is [et al. , Bashir [et al. , Alhamad [et al. , Luciano. [et al , Awawdeh. [et al and Vargl [et al .et al., [et al., [et al. [et al. [Chemical composition of olive cake has been shown to be influenced by factors such as geographical origin, procedure of production and processing . Differeet al., , Al-Masret al., , Mart\u00edn [et al., , Al-Masr[et al., and Moli [et al. , which m [et al. .et al. [et al. [et al. [et al. [et al. [et al. [et al. [The NDF and ADF content of the crude olive cake were found to be 80.33 and 62.00%, respectively on DMB. The values were higher to those recorded by Ahmad , Ashraf et al. , Mart\u00edn [et al. , Molina- [et al. , Chiofal [et al. and Rowg [et al. . However [et al. Abarghoe [et al. and Al-M [et al. . The var [et al. , Ashraf [et al. and Bash [et al. .et al. [Slaked lime treatment was used as per Ashraf to improet al. . HoweverOlive cake as a feedstuff blurs the demarcating criteria between roughage and concentrates. By standard classifying pattern of feedstuffs, it should be classified as extremely poor quality roughage. However, its higher EE% makes it better concentrate. Although, it is an oil seed cake, it is extremely poor in protein and in contrast, it is almost 50-60% fiber, which makes it an ideal candidate for quality improvement through lime treatment.et al [2 which is slowly degraded amounts to 5.14%. Fraction C which is undegradable was found to be 21.71%. 
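The in vitro indices reported above follow from a few simple relations, the partitioning factor for instance being given in the text as PF = TDOM/net gas volume. The sketch below works through such calculations on fabricated values; the stoichiometric factor of 2.2 mg of microbial biomass per ml of gas is a commonly assumed constant in this literature, not a value stated in this study.

```python
# Sketch of in vitro gas-production calculations per 200 mg DM incubated.
# Input values are fabricated; the 2.2 mg/ml stoichiometric factor is an assumed
# constant often used in the literature, not taken from this study.

def fermentation_metrics(tdom_mg, gas_ml, stoich_factor=2.2):
    mbp = tdom_mg - gas_ml * stoich_factor          # microbial biomass production, mg
    emp = mbp * 100.0 / tdom_mg                     # efficiency of MBP, % of TDOM
    pf = tdom_mg / gas_ml                           # partitioning factor, as in the text
    return mbp, emp, pf

tdom, gas = 105.0, 28.0                             # mg TDOM and ml gas per 200 mg DM
mbp, emp, pf = fermentation_metrics(tdom, gas)
print(f"MBP = {mbp:.1f} mg, EMP = {emp:.1f} %, PF = {pf:.2f}")
```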
Many previous workers have reported protein degradability of olive cake is poor, owing to the fact that 75-90% of the nitrogen is linked to the ligno-cellulose fraction [The crude olive cake was evaluated for protein fractions as per CNCP system by Sniffen et al . It was fraction -27 therefraction , which win vitro gas production technique as per Menke and Steingass [Ten isonitrogenous complete diets containing variable levels of lime treated olive cake on ADF replacement basis were formulated and tested through teingass .et al. [et al. [et al. [The IVDMD values obtained were compared to select the complete feed with maximum inclusion level of lime treated olive without significant decrease in IVDMD. The IVDMD values obtained were intermediate to the values reported by Vera et al. and Shab [et al. but were [et al. . The IVD [et al. .in vitro analysis that olive cake can be included in complete feed at 30% level for feeding in small ruminants without compromising in vitro degradability.In our study, we concluded that there is a comparable difference in composition of Indian olive cake when compared to European olive cake. The most important finding was that about 78% of nitrogen present in Indian olive cake is available to the animal. We concluded from AI carried out the research work and drafted the manuscript, RKS and AR planned and supervised the research work, BAM and JF helped in conducting lab trial and AI and AR revised the manuscript. All authors read and approved the final manuscript."} {"text": "Pain is the most common health challenge that drives patients to consult physicians. Pain and inflammation are usually symptoms of many diseases. Inability to perceive sensory pain may lead to shortened life expectancy and yet excessive pain may result in low quality of life. Despite the arrays of therapies available for the treatment of pain, it is still not tamed and the currently available drugs have their drawbacks. The challenges to amelioration of pain and inflammation have become an impetus for research into alternative and complementary medicines for the treatment of these ailments. The roles of phytotherapy and nutritional therapies in the treatment of varied types of ailments cannot be overemphasized. Actually, most of the commonly available therapeutic drugs for pain, such as opioids and nonsteroidal anti-inflammatory drugs (NSAIDs), were derived from phytotherapy. Hence, giving attention to this issue is highly recommended as it provides a medium for knowledge acquisition for improvement of the treatment of the twin ailment-pain and inflammation. Physalis alkekengi var. franchetii and Its Main Constituents\u201d as well as R. Boukhary et al., \u201cAnti-Inflammatory and Antioxidant Activities of Salvia fruticosa: An HPLC Determination of Phenolic Contents.\u201d In another study, I. J. M. Santos et al. reported \u201cTopical Anti-Inflammatory Activity of Oil from Tropidurus hispidus .\u201d A review was accepted for inclusion in the issue on effectiveness of acupuncture for treating sciatica by Z. Qin et al. A. Rauf et al. reported on a potential cyclooxygenase inhibitor named daturaolone while W. Kim et al. evaluated the anti-inflammatory potential of a new Ganghwaljetongyeum (N-GHJTY) on adjuvant-induced inflammatory arthritis in rats. This issue therefore is a rich resource for all interested in pain and inflammation research and treatment.The researches published in this issue are broad and they accomplished the aim for which the issue was set out. 
Many manuscripts were received, but only those contributing significantly to the subject matter were accepted after thorough peer review. The new insights include work on systemic models of inflammation contributed by Y. Liu et al., "Effects of Wutou Decoction on DNA Methylation and Histone Modifications in Rats with Collagen-Induced Arthritis," and X. Wu et al., "Protective Effect of Tetrandrine on Sodium Taurocholate-Induced Severe Acute Pancreatitis." Z. Shu et al. reported the anti-inflammatory effects of a plant extract in "Antibacterial and Anti-Inflammatory Activities of Bamidele V. Owoyele, Musa T. Yakubu, Roi Treister"} {"text": "Discovery of prognostic and diagnostic biomarker gene signatures for diseases such as cancer is seen as a major step towards better personalized medicine. During the last decade various methods, mainly from the machine learning and statistical domains, have been proposed for that purpose. However, one important obstacle to making gene signatures a standard tool in clinical diagnosis is their typically low reproducibility, combined with the difficulty of achieving a clear biological interpretation. For that reason, in recent years there has been growing interest in approaches that try to integrate information from molecular interaction networks. Here we review the current state of research in this field by giving an overview of the approaches proposed so far. In recent years, the topic "personalized medicine" has gained much attention. A famous example is the anticancer drug Cetuximab, which binds to the EGF receptor and, consequently, prevents activation of the downstream signaling pathway, thus inhibiting cell proliferation. However, it has been found that Cetuximab can work only if the K-RAS gene is not mutated. Testing patients for mutations of this gene is therefore prescribed in the European Union before application of Cetuximab, to prevent a costly and ultimately ineffective therapy. Another example is the anticancer drug Trastuzumab, which is only effective in patients that highly express the human epidermal growth factor receptor 2 (HER2) at the cell surface, to which the antibody binds. These examples underline the need for identifying reliable biomarkers that predict a patient's response to therapy, including potential adverse effects, in order to avoid ineffective treatment and to reduce drug side-effects and associated costs. Towards that goal a large amount of work has been conducted within the last decade that tries to stratify patients according to disease subtypes or different clinical prognoses. Modern high-throughput technologies now allow screening of massive amounts of OMICs-type data, and so one goal is to associate such data with a patient's clinical prognosis or with membership of a certain disease subtype. Based on OMICs data it has even been possible to identify novel disease subtypes. For example, based on gene expression profiles, five subtypes of breast cancer have been identified .et al. [Prognostic or diagnostic biomarker signatures have been derived in numerous publications for various disease entities. One of the best known is a 70-gene signature for breast cancer prognosis (MammaPrint) by van 't Veer et al. , which h For the construction of biomarker signatures, one typically uses supervised machine learning methods together with algorithms for variable/feature selection.
This is because OMICs data is typically very high dimensional compared to the number of samples/patients in a typical study. The microarray technology nowadays enables measurement of tens of thousands of transcripts at the same time, whereas the sample size is typically in the order of 100\u2013300 patients. This not only imposes high challenges for the interpretation of such data, but also for robust and stable statistical procedures, which are needed to detect those genes that are truly correlated with the clinical phenotype. In this context it should be mentioned that typical machine learning algorithms operating with far more variables/features than samples are prone to the so-called \u201coverfitting\u201d phenomenon: The classifier or Cox regressor can perfectly explain the data used for model construction, but fails in making good predictions on new test data . Therefoet al. [Well known algorithms for this purpose are PAM , SVM-RFEet al. . Moreoveet al. ,10,11. Fet al. ,13. Howeet al. . For thaet al. . In thisNowadays knowledge on protein-protein interactions (PPIs) as well as on canonical pathways can be retrieved easily in a computer readable format from databases, such as KEGG , HPRD 1, PathwayIn general one may divide existing methods integrating network knowledge broadly into two main classes: On one hand there are network centric approaches, which map gene expression data onto a PPI network reconstructed from the literature and then either try to identify discriminative/differential sub-networks between patient groups, or directly compute summary statistics (pathway activity) for pre-defined sub-networks . Afterwards often a conventional classifier or Cox regressor is applied to make predictions based on the expression profiles of sub-network genes. On the other hand data centric approaches are closer to traditional machine learning methods. Here the idea is to bias the gene selection process within a machine learning framework in such a way that connected genes are preferably selected. There are two main techniques for this purpose: One is to construct a mathematical embedding of gene expression data into a network graph space via the so-called kernel trick . AfterwaIn the following we give a more detailed overview about these methods. et al. [i.e., proteins with an extraordinary high degree of interactions. In their paper Taylor et al. show that the average Pearson correlation of the expression of a hub protein and its interacting partners can be used to reliably predict survival of breast cancer patients without any further machine learning based variable or feature selection procedure. An approach, which is possibly most focusing on the network structure itself, is to purely select genes based on topological features of the protein-protein interaction network. An example is the method proposed in Taylor et al. . Here thAnother method to integrate network knowledge is to summarize the expression level of predefined canonical pathways obtained from databases, such as KEGG , into onet al. [Guo et al. report tet al. ) categoret al. [Rather than simply looking at mean or median expression Vaske et al. propose et al. [Teschendorff et al. further et al. show that their \u201ccombined optimal response genes\u201d (CORGs) approach yields better prediction performance than if pathway activity is simply estimated via the mean or median expression level. 
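A minimal version of the pathway-activity idea discussed above, summarizing a predefined gene set into one value per sample via mean z-scores, can be sketched as follows. The expression matrix and gene names are invented placeholders, and this is the simple mean-based baseline rather than the CORGs procedure itself.

```python
# Sketch of a simple mean z-score pathway activity, the baseline that CORGs-style
# methods improve on. Expression values and the gene set are invented placeholders.
import numpy as np

def pathway_activity(expr, genes, pathway):
    """expr: samples x genes matrix; returns one activity value per sample."""
    idx = [genes.index(g) for g in pathway if g in genes]
    sub = expr[:, idx]
    z = (sub - sub.mean(axis=0)) / sub.std(axis=0, ddof=1)   # per-gene z-scores
    return z.mean(axis=1)                                    # average over pathway members

genes = ["TP53", "MYC", "EGFR", "BRCA1"]
expr = np.random.default_rng(0).normal(size=(6, 4))          # 6 samples x 4 genes
print(pathway_activity(expr, genes, pathway=["TP53", "EGFR"]))
```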
A further improvement of the method with respect to the selection of discriminative genes within each pathway is proposed in Yang et al. [Another approach following the same direction is proposed by Trey Ideker and co-workers . In theig et al. . et al. [et al. show that estimated pathway activities are predictive for the respective patient subgroups, and that in cell lines pathway activity also predicts the sensitivity to therapeutic compounds. An extension of the pathway activity classifier to identify oncogene-inducible modules is described in Bentink et al. [Bild et al. estimatek et al. . et al. [Yu et al. propose et al. . Afterwaet al. [The paper by Kammers et al. focuses et al. . Penalizet al. [et al. [et al. [et al. [Rather than looking at predefined canonical pathways or GO groups, another idea that puts a little bit more emphasis on measured data is to reconstruct a protein-protein interaction network for all gene products and then use experimental data to identify differentially expressed sub-networks. One of the first approaches in this direction is described in Chuang et al. . The alget al. , Fortney [et al. , Su et a [et al. , Ahn et [et al. . A particular interesting variant has recently been introduced by Dutkowski and Ideker . They mooptimal differential sub-network. Attempts to obtain an optimal sub-network are described in Chowdhury et al. [et al. [et al. [It has to be mentioned that despite their good performances, all so far mentioned approaches are heuristic and thus cannot guarantee to find the y et al. via bran [et al. via exha [et al. . After cet al. [et al. use a randomized approximation algorithm to obtain polynomial run time complexity. Afterwards the authors employ a 3-NN classifier on averaged expression levels of each sub-network to discriminate response to chemotherapy in breast cancer. In general, identification of an optimally discriminative sub-network is an NP-hard problem ,46 and tet al. , which aAll previously mentioned approaches deal with a PPI network as the central entity. In contrast, data centric approaches focus on the experimental data. Kernel techniques allow fok : can be thought of as a special similarity measure between arbitrary objects x \u2208 , which fulfills additional mathematical requirements, namely symmetry = k for all x,y \u2208 ) and positive semi-definiteness = for all x,y, where (\u00b7) denotes the dot product in Hilbert space and : \u2192 is some arbitrary map) [x and y, but weights each path in dependency on the path length. This is done in an exponentially decreasing way. Diffusion kernels are mathematically equivalent to the fundamental solution of the heat equation in physics, which describes the evolution of heat in a region under certain boundary conditions. If instead of exponentially decreasing weights for path lengths a linear weighting scheme is preferred, one arrives at the pseudo-inverse of the graph Laplacian [p. In general, a kernel function ary map) ,48. Amonary map) is a speaplacian . In the x \u2192 y in the network by the similarity of the gene expression of x and y (using the dot product). This is equivalent to defining a kernel function between x and y as:x and y are the vectors of gene expression values for genes x and y, and Q is the graph kernel matrix between nodes in the network. Consequently the expression data is linearly mapped via the graph kernel matrix Q to some different space. The aforementioned graph kernels allow for easily incorporating measurement data, such as gene expression. 
This is done by weighting each edge et al. [et al. [et al. in particular emphasize the possibility to conduct unsupervised clustering analysis of gene expression data in this way besides more common supervised classification, which yields biologically interpretable results. Several other authors have used graph kernels to identify possibly disease causing genes [Combining gene expression data with network information in such a way has been described by Rapaport et al. and Gao [et al. . In geneng genes ,53. et al. [vs. late recurrence of ER positive breast cancer patients with comparably high accuracy. Moreover, the obtained sub-network markers appear to be biologically plausible. Recently, Chen et al. have intet al. [Instead of augmenting the similarity measure of each pair of genes with network information via embedding techniques, another approach is to directly integrate network information into conventional variable/feature selection techniques. Zhu et al. describeet al. [i.e., embedded into a different space. In their paper the authors demonstrate that SVM-RRFE is not only superior to the conventional SVM-RFE algorithm in predicting an early relapse in breast cancer patients, but can also compete with several other network based gene selection approaches. Moreover, the stability and interpretability of the obtained gene signatures are significantly improved. Johannes et al. introducet al. , which iet al. to identBinder and Schumacher propose et al. [et al. show that this way miRNA and mRNA expression data can be combined in a straightforward way for predicting the risk of a relapse in prostate cancer via penalized Cox regression. Moreover, they demonstrate that their approach enhances prediction performance and gene selection stability compared to several other methods. In a recent paper Gade et al. extend tet al. by consiLasso regression models have gaiIntegration of biological knowledge, specifically from protein-protein interaction networks and canonical pathways, is widely accepted as an important step to make biomarker signature discovery from high dimensional data more robust, stable and interpretable. Consequently there is an increasing amount of methodologies for this purpose. In this review we gave a general overview about these approaches and grouped them into categories. The majority of algorithms at the moment follow the network centric paradigm, specifically by looking for differentially expressed sub-networks. This approach certainly appears attractive, because the gene selection problem is solved in a very elegant and natural way. Moreover, learned sub-networks are usually better to interpret than gene signature obtained via conventional machine learning techniques. One difficulty, however, lies in the fact that usually sub-networks are found by connecting differentially expressed genes. If there are no differentially expressed genes, then the identification of discriminative sub-networks becomes difficult. Moreover, finding an optimally discriminative sub-network is a computationally difficult, NP-hard problem. i.e., the phenotype can be explained by the activity of defined canonical pathways. If this is not the case or if only a few genes within a pathway are slightly differential, then identifying a pathway\u2019s activity to be associated to the clinical phenotype becomes difficult. Hence, smart filtering strategies to focus on the most relevant genes within a pathway, like in the CORGs algorithm [et al. 
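To make the graph-kernel embedding described above concrete, the following sketch builds a diffusion kernel as the matrix exponential of the negative graph Laplacian and maps an expression profile through it, i.e. the linear mapping via the graph kernel matrix Q mentioned in the text. The toy network, expression values and diffusion parameter are assumptions for illustration; none of the cited methods is reproduced exactly.

```python
# Sketch of a diffusion-kernel embedding of expression data on a toy gene network.
# Network, expression values and the diffusion parameter are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of a small toy network over genes g0..g3
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
Q = expm(-0.5 * L)                      # diffusion kernel with beta = 0.5 (assumed)

x = np.array([2.0, 1.5, 0.2, -1.0])     # expression profile of one sample
x_smoothed = Q @ x                      # network-smoothed (embedded) profile

def similarity(u, v, Q=Q):
    """Assumed illustrative network-aware similarity of two expression profiles."""
    return u @ Q @ v

print(x_smoothed.round(2), similarity(x, x))
```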
[Estimating the total activity of canonical pathways appears to be comparably simpler from a computational point of view. However, the approach only works if many genes in a pathway change their expression in a coordinated way, lgorithm or via t [et al. , are useThere are comparably few data centric approaches, which come from the machine learning field, specifically from the area of kernel based methods. Since OMICs data is very high dimensional, linear classifiers are usually sufficient and thus the biased feature selection framework appears to be more natural. Compared to network centric approaches, these methods have the potential advantage that they do not rely so much on the PPI network structure. That means false positives or false negative interactions in the network structure will affect the model less. In addition, they are able to detect sub-networks, which are not part of canonical pathways. Thus they can be seen as more generally applicable. On the other hand they use available biological knowledge in a much less effective way than network centric methods. vs. late relapse of breast cancer patients in 6 microarray datasets. They found a large variability of prediction performances of individual algorithms across datasets, but no general advantage of network based methods. Network based SVM approach by Zhu et al. [In conclusion we see that all approaches that have been proposed so far have specific advantages and disadvantages. Thus there is a strong need for systematic empirical comparisons. Cun and Fr\u00f6hlich conducteu et al. yielded u et al. had the In this context it has to be emphasized that most published methods have been evaluated for one specific clinical question in one disease (mostly breast cancer), only. To get a more complete picture, more comprehensive studies including more clinical questions and more disease entities are needed in order to guide practitioners, under which conditions which method would be a good choice. Nonetheless, there will be always a dataset specific dependency of an algorithm\u2019s performance, which can never be resolved. Careful checking of assumptions is therefore a prerequisite for the successful application of any algorithm."} {"text": "Coastal biotechnology is developing fast with exciting achievements being realized in biological, chemical, and environmental sciences and technologies. The aim of the 1st International Conference on Coastal Biotechnology (1st ICCB) on \u201cCoastal Biotechnology for Sustainable Development,\u201d which was successfully held in the Australian winter of 2012 in Adelaide, Australia, was to bring together the international community working on coastal biotechnology issues, to help strengthen connections in this highly interdisciplinary field, and to inspire new research and development from the scientific and industrial sectors.The 1st ICCB had two themes: \u201cthe changing distribution pattern of coastal bioresources\u201d and \u201ccoastal biotechnology: facing the global and regional changes.\u201d Bioresources are the essence of biotechnology. We first need to understand the biodiversity of these bioresources before we choose the techniques needed to use them to our advantage. Products and industries that emerge from coastal biotechnologies influence research and the development of technologies and policies. We are living in a world with rapid global changes\u2014both naturally and socially. Those changes could alter the patterns of biodiversity especially in marine and coastal environments. 
How we address the global questions of food, fuel, population, environment, and so forth with sustainable developments of coastal bioresources is an issue not only for marine biologists, but also for all people of insight both from academia and industry.Cloning and expression of a cytosolic HSP90 gene in Chlorella vulgaris\u201d by Z. Liu et al.; \u201cBiochemical and anatomical changes and yield reduction in rice (Oryza sativa L.) under varied salinity regimes,\u201d by M. A. Hakim et al.; \u201cCultivation of Isochrysis galbana in phototrophic, heterotrophic, and mixotrophic conditions,\u201d by Y. Alkhamis and J. G. Qin; \u201cPhotosystem II photochemistry and phycobiliprotein of the red algae Kappaphycus alvarezii and their implications for light adaptation,\u201d by X. Guan et al.; \u201cExtraction and separation of fucoidan from Laminaria japonica with chitosan as extractant,\u201d by R. Xing et al.; \u201cPyrolytic and kinetic analysis of two coastal plant species: Artemisia annua and Chenopodium glaucum,\u201d by L. Li et al.; and \u201cEffects of seawater salinity and temperature on growth and pigment contents in Hypnea cervicornis J. Agardh ,\u201d by L. Ding et al.This special issue is devoted to coastal biotechnology, containing 7 papers that pertain to our goals in the ICCB conference. Papers from L. Ding et al., R. Xing et al., and X. Guan et al. are on seaweeds ; the ones from Z. Liu et al and Y. Alkhamis et al. are on microalgae; the one from L. Li et al. focuses on halophyte; and the one from M. A. Hakim et al. is on coastal salt-tolerant crop, rice. These 7 papers are as follows: \u201cThe inspiration of this special issue has grown from the contributions from the combined conferences ICCB and the 8th Asia Pacific Conference on Algal Biotechnology (APCAB 2012). APCAB meetings date back to more than 20 years ago when the first was held in Malaysia in 1992. With its development over the years, APCAB is regarded as the major international forum in algal biotechnology R&D field, serving the Asia-Pacific region, offering not only an examination of achievements in algal biotechnology, but also a platform for discussing the roadmap for future developments."} {"text": "This paper investigates the stability and change of etiological influences ondepression, panic, generalized, separation and social anxiety symptoms, and theirco-occurrence, across adolescence and young adulthood.Depression and anxiety persist within and across diagnostic boundaries. The manner inwhich common A total of 2619 twins/siblings prospectively reported symptoms of depression andanxiety at mean ages 15, 17 and 20 years.Each symptom scale showed a similar pattern of moderate continuity across development,largely underpinned by genetic stability. New genetic influences contributing to changein the developmental course of the symptoms emerged at each time point. All symptomscales correlated moderately with one another over time. Genetic influences, both stableand time-specific, overlapped considerably between the scales. Non-shared environmentalinfluences were largely time- and symptom-specific, but some contributed moderately tothe stability of depression and anxiety symptom scales. These stable, longitudinalenvironmental influences were highly correlated between the symptoms.The results highlight both stable and dynamic etiology of depression and anxietysymptom scales. 
They provide preliminary evidence that stable as well as newly emerging genes contribute to the co-morbidity between depression and anxiety across adolescence and young adulthood. Conversely, environmental influences are largely time-specific and contribute to change in symptoms over time. The results inform molecular genetics research and transdiagnostic treatment and prevention approaches. The inclusion of siblings inevitably resulted in large age ranges; however, 72% of the participants were twins with a tighter age range. Attrition was predicted by socioeconomic status, delinquency and sex, but not by zygosity and internalizing symptoms. The analyses use data from waves 2–4 (hereafter referred to as times 1–3, respectively) of a longitudinal twin and sibling study, the Genesis 1219 (G1219). Full details are provided elsewhere. At each time the participants completed the Short Mood and Feelings Questionnaire and the Spence Children's Anxiety Scale (Spence), a 38-item measure. The internal consistencies and descriptive statistics of all measures were comparable to published samples and are presented elsewhere. The classical twin design compares MZ (genetically identical) and DZ (sharing on average 50% of their segregating genes) twin pairs. Differences in within-pair correlations allow estimation of the influences of additive genetics (A), shared environment (C; factors that contribute to phenotypic similarity between siblings) and non-shared environment (E). Quantitative genetic methods are described comprehensively elsewhere. The fit of sub-models was assessed by χ2 difference tests, together with Akaike's Information Criterion (AIC) and the Bayesian Information Criterion, with lower values suggesting a better fit. If the difference between the AIC of two models was <10, the more parsimonious model was selected. Model fit was indexed by minus twice the log-likelihood (−2LL) of the observations. This is not an overall measure of fit, but provides a relative measure of fit, since differences in −2LL between nested models are distributed as χ2. Univariate genetic analyses were conducted on all variables at each time. Males and females showed differences in variance on all variables except for social anxiety, and a scalar was fitted to account for this difference (Waszczuk et al.). The Cholesky decomposition (Fig. 1a) was used to examine the homotypic continuity of etiological influences separately for each variable. The Cholesky decomposition assumes three distinct sets of genetic and environmental influences on a variable at each time point. A1 and E1 are common factors on the first variable (paths a11 and e11) that can also influence the remaining two variables. A2 and E2 influence the second variable and can also influence the third variable over and above the influences accounted for by A1 and E1. A3 and E3 are unique influences specific to the third variable only. Total A and E effects on each individual measure can be obtained by summing all squared paths to that measure. Multivariate models best suited to investigate specific research questions were chosen a priori. The Cholesky decomposition (Fig. 1a) was used for homotypic continuity; the common pathway model (Fig. 1b) was fitted in order to investigate the stability and change of the etiological influences shared between depression and anxiety symptom scales across development, to inform the mechanisms underpinning heterotypic continuity across time. The model is illustrated in Fig. 1b (with just three variables for clarity); the model was run with all five variables included, each measured at three time points. This model assumes five latent factors, each underlying a variable assessed three times.
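Before fitting full Cholesky or common pathway models, twin data of this kind are often sanity-checked with simple Falconer-style variance estimates derived from the MZ and DZ correlations. The sketch below shows that approximation with invented correlations; it is a rough heuristic, not the maximum-likelihood structural equation modelling actually used in the study.

```python
# Falconer-style ACE decomposition from twin correlations: a quick approximation,
# not the maximum-likelihood structural models fitted in the study.
# The correlations below are invented for illustration.

def falconer_ace(r_mz, r_dz):
    a = 2 * (r_mz - r_dz)        # additive genetic variance (heritability)
    c = 2 * r_dz - r_mz          # shared environment
    e = 1 - r_mz                 # non-shared environment (plus measurement error)
    return a, c, e

a, c, e = falconer_ace(r_mz=0.55, r_dz=0.30)
print(f"A = {a:.2f}, C = {c:.2f}, E = {e:.2f}")   # A = 0.50, C = 0.05, E = 0.45
```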
For example, thedepression latent factor captures the stability of the depression symptoms across times1\u20133. Variance of each latent factor is then decomposed into genetic (Al) andenvironmental (El) influences to assess the etiological factors underpinningthe stability of each symptom. Of note, El is free from time-specificmeasurement error but not from shared measurement error. The genetic and environmentalcorrelations between the latent factors (rAl andrEl) represent the degree of developmental stability commonto depression and anxiety symptom scales. Any remaining variance (not explained by thelatent factor) is then calculated as variable-specific genetic and environmentalinfluences (As and Es). The variable-specific etiological influencesinclude genetic and environmental influences that emerge at later time points, and areallowed to correlate with the within-time influences on all other variables(rAs and rEs), capturingtime-specific associations between them.The common pathway model b was fitr\u00a0=\u00a00.35\u20130.58). The heterotypiccorrelations between the different anxiety symptom scales, and between depression and eachof the anxiety scales, were similar in magnitude and generally moderate.Homotypic correlations were generally larger than heterotypic correlations, but tended todecrease at longer time intervals (times 1\u20133). The longitudinal correlations between the variables across the three time points arepresented in 1 on Fig. 1a) accounted for 45% of thevariance in generalized anxiety symptoms at age 15, but reduced to 21% by age 17 (patha12) and 18% by age 20 (path a13). Third, new genetic factorsemerged at each age (paths a22 and a33). Genetic influences thatemerged at age 17 continued to influence symptoms at age 20 (path a23) ingeneralized anxiety, panic and social anxiety, but not in depression and separationanxiety. Separation anxiety was characterized by particularly high change in geneticinfluences over time. The Cholesky decompositions show the effect of stable and new genetic and environmentalfactors across the three times, separately for each of the five symptom scales. Theresults were similar for depression and each anxiety symptom scale . First, Non-shared environmental influences on all symptoms were largely age-specific. Forexample, the non-shared environmental factors influencing generalized anxiety symptoms atage 15 had a small effect at age 17, and no significant effect at age 20. For 95%confidence intervals see Supplementary Table S1.2, see l\u00a0=\u00a00.61\u20130.76),with the remaining variance explained by modest to moderate, significant non-sharedenvironmental influences (El\u00a0=\u00a00.24\u20130.39) (rphl\u00a0=\u00a00.58\u20130.83) (rAl\u00a0=\u00a00.60\u20130.86) and the non-shared environmentalcorrelations between the latent factors were also high(rEl\u00a0=\u00a00.46\u20130.76) (s\u00a0=\u00a00.01\u20130.26)(s\u00a0=\u00a00.31\u20130.56) and accounted for mostof the non-shared environmental influences on each variable (rphs\u00a0=\u00a0\u22120.12\u20130.56), as did the genetic and non-sharedenvironmental within-time correlations between them . The lat58\u20130.83) . Genetic46\u20130.76) . The var01\u20130.26), since mvariable . The pheeen them . Table 2et al.et al.Model fit statistics for comparisons to the saturated models, and testing whetherparameters can be dropped, are presented in Supplementary Table S2. Model fit statisticscorroborate AE models and in the full models C estimates are very small. 
However, forcompleteness full ACE models are presented in Supplementary Tables S3\u2013S5. Full ACECholesky decompositions suggest smaller genetic innovation than AE models (SupplementaryTable S3). Otherwise dropping C from the models did not have impact on the interpretationof the results. The within-time analyses of these variables, including univariate ACEresults, are presented elsewhere (Lau The current study is the first to investigate how etiological influences contribute todevelopmental stability and change of depression, four anxiety symptom scales, and theirco-occurrence across adolescence and young adulthood. The results provide support forlargely stable and broad genetic influences accounting for co-occurrence and continuity overtime. Environmental influences were generally more specific to time and symptom scales,contributing to change in symptoms over time.et al.et al.et al.et al.et al.et al.et al.et al.Moderate homotypic continuity of depression and each anxiety symptom scale across the5-year period was observed, as expected (Costello et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.The non-shared environmental influences on homotypic continuity of each symptom werelargely time-specific, as expected (Scourfield et al.et al.et al.et al.et al.et al.et al.et al.The current study extends previous findings by investigating longitudinal etiologicalinfluences on homotypic continuity of depression and anxiety symptoms scales separately.A similar pattern of substantial genetic stability and largely time-specificenvironmental influences was observed on all symptoms, possibly due to a substantialoverlap between the genes influencing depression and anxiety (Kendler et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.Heterotypic continuity across the symptom scales was significant, reflecting highco-morbidity between depression and anxiety symptoms (Merikangas, change in co-morbidity over time. However, a modestproportion of environmental influences contributed significantly to the stability ofeach symptom scale, albeit to a lesser extent than the genetic influences. The resultsare in line with some previous findings (O'Connor et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.As expected, environmental influences were largely time- and symptom-specific, thuscontributing to the et al.et al.et al.et al.et al.et al.et al.et al.et al.The genetically informative, representative sample and multiple time points are strengthsof the current study. However, a number of limitations are noteworthy. First, our analysesused only self-report symptom scales and the results should be replicated in clinicalsamples with co-morbid diagnoses and using lifetime diagnostic interviews. This approachwas taken because clinical levels of internalizing disorders are rare in generaladolescent population and questionnaires might capture less severe symptoms of thesedisorders, for example self-reported panic might capture physical symptoms of anxietyrather than panic attacks. However, symptoms of internalizing disorders are importantmarkers of psychopathology (Pickles Our results suggest that both homotypic and heterotypic continuity of depression andanxiety symptoms across adolescence and young adulthood is underpinned largely by stablegenetic influences, while non-shared environmental effects tend to be time- andsymptom-specific. 
The results have multiple implications for future molecular geneticsresearch and clinical practice in the context of co-morbidity. They affirm the need tocontinue examining how the risk and maintenance factors for internalizing psychopathologyoperate across development to inform successful prevention and intervention strategies."} {"text": "L-sensor linear dynamic systems. Firstly, the paper establishes the fuzzy model for measurement condition estimation. Then, Generalized Kalman Filter design is performed to incorporate the novel neighborhood function and the target motion information, improving with an increasing number of active sensors. The proposed measurement selection approach has some advantages in time cost. As such, if the desired accuracy has been achieved, the parameter initialization for optimization can be readily resolved, which maximizes the expected lifespan while preserving tracking accuracy. Through theoretical justifications and empirical studies, we demonstrate that the proposed scheme achieves substantially superior performances over conventional methods in terms of moving target tracking under the resource-constrained wireless sensor networks.Moving target tracking in wireless sensor networks is of paramount importance. This paper considers the problem of state estimation for A wireless network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments . These set al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [Moving target tracking continuously reports the position of the target in terms of its coordinates to a fusion center or a central base station. It transfers one piece of sensor data to its coordinator, and determines its physical location relative to other neighboring coordinators . Moving et al. proposed [et al. proposed [et al. formaliz [et al. proposed [et al. develope [et al. proposed [et al. presente [et al. describe [et al. addresse [et al. proposed [et al. used two [et al. detectedMotivated by the above scenarios and concerns, the design of our approach relies on the prediction structure. In this paper, we propose a linear dynamic system with multiple sensors to track the target and monitor its surrounding area. The task is to extend the WSN lifespan without compromising the desired tracking accuracy. First, the paper establishes the fuzzy model for measurement condition estimation. Then, Generalized Kalman Filter (GKF) design is performed to incorporate the novel neighborhood function and the target motion information, improving with an increasing number of active sensors. The proposed measurement selection approach has some advantages in time cost. As such, if the desired accuracy has been achieved, the parameter initialization for optimization can be readily resolved, which maximizes the expected lifespan while preserving tracking accuracy.The rest of the paper is organized as follows. In n identical sensor nodes are densely deployed over a 2D area using a uniform random distribution. All nodes can only get connectivity information in neighbor nodes and measure Received Signal Strength (RSS) [r. These nodes are connected, that is to say at least one routing path exists between any pair of nodes. Note that two nodes are neighbor nodes if and only if i. The network consists of n nodes, and there are m anchor nodes and L-sensor linear dynamic system [nth sensor at time instant k. 
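As a hedged, generic illustration of the state-estimation step underlying such tracking (a plain linear Kalman filter with a constant-velocity motion model and position-only measurements, not the paper's Generalized Kalman Filter or its fuzzy measurement-selection scheme; all matrices, noise levels and the simulated trajectory are assumptions):

import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],     # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # position-only measurements
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)             # process noise covariance (assumed)
R = 1.0 * np.eye(2)              # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    x_pred = F @ x                               # predict
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                     # update with measurement z
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
truth = np.array([0.0, 0.0, 1.0, 0.5])           # simulated target state
x_est, P = np.zeros(4), 10.0 * np.eye(4)
sq_err = []
for k in range(50):
    truth = F @ truth
    z = H @ truth + rng.normal(scale=1.0, size=2)   # noisy position fix, e.g. from RSS localization
    x_est, P = kalman_step(x_est, P, z)
    sq_err.append(np.sum((x_est[:2] - truth[:2]) ** 2))

print(f"position RMSE over the run: {np.sqrt(np.mean(sq_err)):.2f}")

The RMSE printed here is the single-run analogue of the Monte Carlo-averaged RMSE figure of merit used later in the paper.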
kth timestep, nth sensor, and We assume that th (RSS) in sensoc system .(1)znk=kth timestep is shown in the matrix form [In the practical application, the system that provides such detecting measurements at the rix form .(2)Zk=Hh and v in Equation 12), EquaThe value of Equation , we can i. \u03c3 is a correction parameter to make Equation Sij can buations or other information.i, i and j. In order to make the test more comprehensive, we test the impact of different communication range and number of anchor nodes on the EDE. All results are averaged over 100 different network deployments.Estimated distance error (EDE) is the average absolute difference between the estimated distance and corresponding real inter-node distance. In this section, we compare the values of EDE on three algorithms, including the method based on distance vector in hops (DV-HOP) , LGR andThe impact of communication range on EDE is shown in \u03c3 in Equation is defined by [We set fined by .(32)RMSM is the total number of Monte Carlo test. Here, L-sensor linear dynamic system for measurement selection. After analyzing the relationship between distance and intersection area of neighborhood nodes, we describe the novel neighborhood function as position estimation. The position optimization is smoothed by using the linear GKF to produce better positioning performance. The position prediction from the GKF is utilized for parameter initialization in the probability\u2013possibility transformation. Numerical experiments show that the proposed approach outperforms the existing algorithms in terms of EDE and average RMSE. In the future, we will implement a real world wireless sensor network to track moving targets.In this article, we have described the implementation of energy efficient moving target tracking in wireless sensor networks where measurements from a subset of sensors are employed at each time step. The three-step moving target tracking scheme is proposed to maximize the expected lifespan while preserving tracking accuracy. A fuzzy modeling method is developed under"} {"text": "Until recently, the cross-modal consequences of unisensory deprivation have been extensively studied in almost every sensory domain other than nociception. In this issue of PAIN\u00ae, Slimani et al. Their study has provided evidence that congenitally blind participants are hypersensitive to pain. Slimani et al. Slimani et al. The novel finding of pain hypersensitivity in blindness has several important implications for both basic and clinical science. This study is noteworthy for research on multisensory interactions and plasticity because it shows a strong link between vision and pain. This link is supported by a previous report of increased pain sensitivity in sighted volunteers who were temporarily visually deprived The \u2018hypersensitivity to threat\u2019 hypothesis b predictThe study by Slimani et al. Unfortunately, the assessment of thermal sensitivity requires expensive equipment, which could thwart routine testing of large samples of people. However, there are several low-cost alternative methods (e.g., punctate probes to stimulate A\u03b4 mechanical fibers The hope is that the work by Slimani et al. The author is aware of no conflicts of interest regarding this commentary."} {"text": "H/D values in water from Monte Carlo simulations of a general-purpose beamline for proton energies from 100 to 250 MeV. We also configured and tested the model on in-air neutron equivalent doses measured for a 75 MeV ocular beamline. 
Predicted H/D values from the analytical model and Monte Carlo agreed well from 100 to 250 MeV (10% average difference). Predicted H/D values from the analytical model also agreed well with measurements at 75 MeV (15% average difference). The results indicate that analytical models can give fast, reliable calculations of neutron exposure after proton therapy. This ability is absent in treatment planning systems but vital to second cancer risk estimation.Exposure to stray neutrons increases the risk of second cancer development after proton therapy. Previously reported analytical models of this exposure were difficult to configure and had not been investigated below 100 MeV proton energy. The purposes of this study were to test an analytical model of neutron equivalent dose per therapeutic absorbed dose In many cases, proton therapy is dosimetrically advantageous compared to other forms of external beam radiation therapy because it allows for uniform target coverage with lower doses to healthy tissues ,2,3. Howet al. [et al. [Polf and Newhauser reportednt vault ,11. This [et al. . Anferov [et al. reported [et al. , which tThe purpose of this study was to improve an analytical model of neutronet al. [Building upon the methods of Perez-Andujar et al. , we imprThe analytical model forIn this work, several improvements have been made to the model. Previously, the model required interpolation ofr closed .The previous model relied on lookup tables of 32 250 MeV . In the et al. [The epithermal regime was modeled asr et al. requiredet al. [Previous studies ,14 utiliet al. which utet al. . MCNPX iet al. ,20,21,22Centre de Proton th\u00e9rapie d\u2019Orsay (CPO) in France in a single-scattering proton beamline dedicated to ocular tumor treatments. During the neutron measurements, a closed patient collimator was used together with a pristine Bragg peak. We selected 75 MeV proton beam energy because it is representative of ocular treatments at CPO. Measurements were taken in air.Measurements of3He proportional counter . It is known to be suitable for ambient dose equivalent measurements in the energy range from thermal to 20 MeV [3He proportional counter [Two instrument types were used to acquire neutron ambient dose equivalent, LB 6411 is a cono 20 MeV . Additio counter . The modThe measurements ofPreviously, the model was trained separately at each proton beam energy considered . In thisH/D.The model was trained separately for the 75 MeV measurements. This was necessary because of the considerable differences between the beamlines. Because the measured data from the ocular beamline consist of a single proton beam energy and measurements were taken in air, some modifications were necessary. Theet al. [et al. [The parameters governingr et al. that ther et al. . Use of et al. [. et al. . We founMeasured and calculatedWe have improved an analytical model for predicting et al. [The major finding of this work is that an analytical model of neutronr et al. requireset al. [Another encouraging confirmatory finding is that the analytical model is applicable to low energy proton beams such as those used in ocular treatments. It is important that the model be easily adaptable to other passive scattering treatment systems, so that it can have the greatest possible impact and find use at many different institutions. Importantly, Farah et al. previouset al. [The results of this work are consistent with the findings of other studies. Perez-Andujar et al. 
found thA major strength of this study is that the improved model relies on far fewer free parameters than previous works. The inclusion of measured data from a second passively-scattered proton therapy beamline is another strength of this study. Specifically, the analytical model was configured for use at the lower energy (75 MeV) and compared against experimental data to validate its utility to predict stray radiation from an ocular beamline.One limitation of this study is that we only benchmarked the model with measured data at a single proton beam energy for the ocular beamline. Additionally, the measured data was taken in-air and not in a water phantom. These limitations are minor because we demonstrated good agreement for the model compared with Monte Carlo simulatedet al. [et al. [Future work on leakage radiation from proton therapy should include research and development to translate the analytical models to clinical practice. Specifically, the model should be integrated into treatment planning systems to facilitate routine clinical dose assessments for patients with heterogeneous anatomy and irregular external surfaces. A study from our group has yielded promising preliminary results indicating that the integration of a similar analytical model into a radiotherapy treatment planning system is technically feasible and the leakage-dose algorithm is sufficiently fast for routine clinical treatment planning applications . Specifiet al. , and dose et al. ), and ite et al. ,31, and e et al. .In this work, we improved an analytical model of neutron"} {"text": "Acta Cryst. (2009), E65, o1325\u2013o1326.Erratum to et al. , one H atom was placed incorrectly.In the paper by Rohlicek Since s al. 2014 were ablet al. (2014In our defence, in the powder study, we placed the H atoms geometrically according to a reasonable chemical structure for capecitabine, which shows the tautomeric H atom attached to the N atom of the carbamate group and the plausible formation of an inter\u00admolecular N\u2014H\u22efO hydrogen bond. As shown by Mali\u0144ska al. 2014, the H a al. 2014, therebyet al. (2009et al. (2014With respect to the fact that structure solution from powder diffraction data is based on the proposed molecular structure, readers should beware of the incorrectly placed H atom in Rohlicek al. 2009 and they al. 2014."} {"text": "This review presents an overview of the different techniques developed over the last decade to regulate the temperature within microfluidic systems. A variety of different approaches has been adopted, from external heating sources to Joule heating, microwaves or the use of lasers to cite just a few examples. The scope of the technical solutions developed to date is impressive and encompasses for instance temperature ramp rates ranging from 0.1 to 2,000 \u00b0C/s leading to homogeneous temperatures from \u22123 \u00b0C to 120 \u00b0C, and constant gradients from 6 to 40 \u00b0C/mm with a fair degree of accuracy. We also examine some recent strategies developed for applications such as digital microfluidics, where integration of a heating source to generate a temperature gradient offers control of a key parameter, without necessarily requiring great accuracy. Conversely, Temperature Gradient Focusing requires high accuracy in order to control both the concentration and separation of charged species. In addition, the Polymerase Chain Reaction requires both accuracy (homogeneous temperature) and integration to carry out demanding heating cycles. 
The spectrum of applications requiring temperature regulation is growing rapidly with increasingly important implications for the physical, chemical and biotechnological sectors, depending on the relevant heating technique. The development of lab-on-a-chip requires the integration of multiple functions within a compact platform, which is readily transportable and can deliver rapid data output. One such functionality is the control of temperature, either in terms of profile (homogeneous or gradient) or in terms of the accessible range, and in both cases with the greatest accuracy possible. Indeed, the regulation of temperature is a critical parameter in managing many physical, chemical and biological applications. Prominent examples of applications requiring tight temperature control are Polymerase Chain Reaction (PCR) ,13,14,15The review is organized as follows: The first sections focus on the different techniques reported to date. These techniques are described along with the corresponding specifications if explicitly stated in the original report see . Techniqi.e., by means of commercial heaters with some degree of integration, but entirely external approaches (such as using hot-plates) are not considered here,, i.e., the liquid is directly heated in the bulk material. In All the reported techniques, as well as the conditions reported , are summarized in a table at the beginning of the paper. Generally, it seems that there is currently no consensus on any given technique that would satisfy all the requirements specified by the complete range of applications; however each of the techniques described here has successfully demonstrated the integration of temperature control for specific applications.Note also that temperature mapping techniques are beyond the scope of the current paper, for this we refer the reader to the recent review by Gosse, Bergaud and L\u00f6w . This section describes techniques based on commercial heaters, either to heat liquids prior to being injected into the microsystem, (pre-heated liquids), or the incorporation of commercial components such as Peltier elements. Depending on the final application it is possible to generate either uniform temperature control or temperature gradients, as outlined in the next subsections.A number of techniques using pre-heated liquids have been reported for microfluidic devices. These methods utilize microheaters such as Peltier elements to establish either a uniform temperature or a constant gradient in a given region. Velve Casquillas and co-workers ,37 develThe previous example shows the potential to exploit external Peltier elements, typically by positioning these elements underneath a microchip. Maltezos and co-workers ,3 reportet al. [More sophisticated set-ups have been described by Khandurina et al. who haveet al. [E. Coli K12-specific gene fragment. During thermal cycling, the PCR device is sandwiched between two Peltier elements on the temperature distribution and the power required to circulate the fluid in the heat exchanger. The heating/cooling ramp of the PCR heat exchanger is equal to 150.82 \u00b0C/s, which is considerably higher than results reported elsewhere in the literature.Mahjoob et al. introducet al. [It is also possible to generate temperature gradients using the pre-heated liquids approach as reported by Mao et al. . A lineaet al. and its In a similar approach, Matsui and co-workers integratet al. [i.e., 2D-readable system: abscissa with temperature, and ordinate with concentration). 
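The heating/cooling ramps quoted above for Peltier-driven PCR devices can be pictured with a simple lumped, first-order thermal model; the time constant, set-points and hold times below are illustrative assumptions, not parameters of any of the cited devices.

import numpy as np

tau = 0.5                          # s, effective thermal time constant of chip + fluid (assumed)
setpoints = [95.0, 55.0, 72.0]     # deg C, a generic denaturation/annealing/extension cycle
hold, dt = 5.0, 0.01               # s per step, integration time step

T, temps = 25.0, []
for target in setpoints:
    for _ in range(int(hold / dt)):
        T += dt * (target - T) / tau      # dT/dt = (T_set - T) / tau
        temps.append(T)

# The initial ramp rate toward a set-point is (T_set - T0) / tau; e.g. stepping
# from 55 to 95 deg C with tau = 0.5 s starts at about 80 deg C/s.
print(f"max instantaneous ramp rate: {np.max(np.abs(np.diff(temps))) / dt:.0f} C/s")

A smaller thermal mass (shorter tau) gives a proportionally faster ramp, which is qualitatively how heat-exchanger designs reach rates of the order of 150 C/s mentioned above.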
The temperature field of the chip is controlled by two Peltier elements located underneath a silicon wafer which forms a chip support to optimize thermal transfers, and generates regular temperature gradients of about 0.7 \u00b0C/mm along the storage channels. This original technique is simple and cheap and could potentially be used in high throughput studies, given the small amount of reagents needed (around 250 \u03bcL).Finally, the generation of temperature gradients using Peltier elements can be applied to map-out solubility phase diagrams. Laval et al. devised Peltier elements are widely used to create hot/cold zones, and are able to generate a spatial distribution of temperature with impressive accuracy. However, for many techniques, these elements are not considered as an integral part of the microfluidic chip because of their size, which is typically several millimeters. However, methods have been developed to integrate heating or cooling functionalities directly into microfluidic systems. These approaches are presented in the following sections.et al. [We now turn to integrated techniques, from which heat diffuses from/to the integrated heating/cooling source. The first example we present derives from the use of a chemical reaction. In 2002, Guijt et al. made useet al. in 2006 [2 flux. They concluded that the most efficient solvent they tested was di-ethyl ether with an angle of 10\u00b0, which offers the possibility to cool down to \u221220\u00b0C with a steady state for several minutes. This method is again cheap and clearly suited for microfluidic applications but requires further refinement of the heating control to work efficiently in PDMS channels. This kind of approach was optimized by Maltezos in 2006 for coolThe following section concerns the most widely reported technique in the literature, based on Joule heating temperature control approaches ,44,45,46et al. [Thermal actuation of microfluidic valves by generating a heating pulse has recently been reported. Pitchaimani et al. used a Pet al. [Similarly, Gu et al. used a Pin situ fabrication of wires and surface patterning of metal resistors using classical microelectronic techniques. These techniques are summarized in two larger categories: the generation of a homogeneous temperature profile and generation of a temperature gradient.Finally, different temperature profiles may be required: either homogeneous as in PCR applications, or gradient-like for TGF or droplet actuation techniques. In both cases, it may be crucial to perform a temperature profile with the best achievable accuracy, although some applications do not require a sharp control. In order to meet such stringent requirements, different heating techniques and geometries of heaters have been investigated: the use of ionic liquids, The next two subsections are dedicated to spatial control of the temperature.et al. [To our knowledge, the only reported work using a conductive liquid is from De Mello et al. . The autet al. [The serpentine-like geometry was also studied by Lao et al. with intet al. [25 multiplication factor of DNA. Each thermal zone is about 25 mm \u00d7 10 mm. This study shows a characterization of the microheaters used through the resistance versus temperature plot . The microfluidic system contains three heating regions of different temperatures together with microfluidic channels. The temperature cycling is achieved by making a loop on the three regions. In 2009, Wang et al. designedet al. in approet al. 
[In the same spirit of shape optimization, Selva et al. ,45 proviet al. ,78, and By patterning the substrate with an optimized resistor, it is possible to generate a homogeneous temperature within a cavity with great accuracy and with short response times (standard deviation below 1 \u00b0C and asymptotic regime reached after 2.2 s).et al. [vs. Temperature is done with an IR camera and reveals a good spatial homogeneity in the middle of the serpentine. It also shows a good linearity in the 45\u2013105 \u00b0C range. They achieved a heating rate of 20 \u00b0C/s and a steady state error of about \u00b10.5 \u00b0C. With an applied voltage varying from 0.9 to 2.2 V, the authors obtained a temperature from 45 to 110 \u00b0C it is necessary to generate temperature gradients, either in a controlled way (controlled shape of the temperature profile) or not. For given applications . Nguyen et al. presente [et al. reported [et al. investiget al. [Another 1D droplet handling can be performed using the integration of a serpentine-like micro-heater which locally generates a temperature gradient together with a local decrease in the continuous phase viscosity. Considering such an integration in a 1D geometry, it is possible to control the breakup or switching of a droplet arriving in a T-junction as reported by Yap et al. ,19. The et al. [Jiao et al. ,21 preseIn such a configuration, it is possible to drive droplets by imposing a succession of different temperature gradients along the 2D substrate. The four microheaters actuated independently generate variable surface tension gradients. The droplet can be positioned anywhere in the channel depending on the strength of individual heaters .et al. [At a more integrated level, Darhuber et al. ,25,26,27et al. [Selva et al. also repet al. [et al. [Using this resistor pattern, another phenomenon has been emphasized by Selva et al. : thermomet al. . Under s [et al. report aet al. [etc.) and is capable of more than 10000-fold concentration of a dilute analyte.In order to generate a temperature gradient, copper blocks can also be integrated within a microsystem. Ross et al. describeet al. [An interesting technique of embedded heaters is reported by Vigolo et al. . The autet al. [This technique can be combined with the pre-heated liquid technique as reported by Vigolo et al. for theret al. [ was used to determine the degree of mixing in the chamber, where I is the normalized intensity of each pixel.Temperature gradient can also be used to generate natural convection for mixing purposes. Rapid and homogeneous mixing is difficult to achieve in microscale. Indeed, even if diffusion processes are favored in miniature fluidic systems, a pure diffusion-based mixing can be very inefficient, especially in solutions where macromolecules have a diffusion coefficient several orders of magnitude lower than that of most liquids. However, micromixing in chambers remains challenging even though many in-line micromixers have been developed and successfully demonstrated ,34. Kim et al. presentei.e., without using a thermal source. Up to now, we have seen that temperature control can be performed either in an external way or by integrating heating resistors. In both cases, heat diffuses from a source towards the liquid of interest. This section is devoted to techniques able to heat the liquid in the bulk, Microwave dielectric heating is a fundamentally different approach because of its preferential heating capability and non-contact delivery energy. 
Induced and intrinsic dipole moments align themselves with a time-varying electric field (from 3 to 20 GHz). The energy associated is viscously dissipated as heat into the surrounding media with no interference from the substrate materials. Compared to conventional techniques, enhanced thermocycling rates and reduced reaction times can be achieved . Heatinget al. [Microwave power at several GHz is delivered to the channel by transmission line integrated in the microfluidics device. In most cases, a coplanar waveguide configuration is used. Shah et al. carried et al. was aligvs. various microwave frequencies. Experimental data are compared to a theoretical model based on classical microwave absorption theory. They observed a 0.88 \u00b0C\u00b7mW\u22121 temperature rise at 12 GHz and 0.95 \u00b0C\u00b7mW\u22121 temperature rise at 15 GHz. In agreement with the theory, they concluded that the temperature rise of the fluid is predominantly due to the absorbed microwave power. These results have been confirmed by recent works done on microwave dielectric heating of fluids [The device is characterized by the scattering (S) parameter and the temperature. The S-parameters are the transmission and reflection coefficients. The fluid temperature is obtained by measuring the temperature dependent fluorescence intensity of a dilute fluorophore, Rhodamine B, added to the fluid . Figure f fluids .et al. [Kempitaya et al. generateThey report two distinct heating/cooling behaviors , it offers a cheap and spatially reconfigurable heat source that can be precisely addressed.et al. [0 = 514.5 nm) on a PDMS microchannel by using an inverted microscope. The aqueous phase is heated by adding 0.1 wt% of fluorescein . The maximum thermal gradient that can be obtained at the edge of the beam is given by P/w0 (P is the power of the laser) and reaches (for P around 100 mW) 10 to 20 \u00b0C/mm. The temperature gradient localized at the front of a droplet creates a surface tension gradient and induces droplet displacement in order to sort them. In et al. [In their study, Robert de Saint Vincent et al. used a ln et al. used theet al. [Kim et al. also useet al. [via thermocapillary forces. Contrary to other works, the temperature gradient is generated by the laser absorption in the silicon substrate and not in the liquid. In order to create optically actuated thermocapillary forces, they used an absorbing substrate consisting of a 0.85-mm-thick glass slide coated with a 100-nm-thick layer of indium tin oxide, followed by a 1-\u03bcm-thick layer of hydrogenated amorphous silicon (a-Si:H), which absorbs light in the visible and UV wavelengths. The laser power used is 10 mW, and its wavelength is 635 nm. The magnification is 20 and the beam obtained is a 6 \u03bcm-diameter spot on the surface of the absorbing substrate. They managed gradients in the surrounding oil up to 4 \u00b0C/mm. As one can see in Ohta et al. used thiet al. [Another application stemming from the use of a laser was reported by Weinert et al. . The autet al. [Hettiarachchi et al. presenteThis section introduces a series of applications to illustrate the efficiency of reported heating techniques. We have tried as much as is possible to cover a range of different fields from the chemical, physical and biological sciences, without being exhaustive in any one domain; our purpose is rather to explore the diversity of microfluidic applications exploiting thermal control. 
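To connect the laser-written gradients quoted above (roughly 10 to 20 C/mm at around 100 mW) with the droplet actuation they enable, an order-of-magnitude scaling for the thermocapillary (Marangoni) flow speed can be used, u of order h * |d(gamma)/dT| * |dT/dx| / mu. All numerical values below are assumptions chosen only to illustrate the magnitude, not data from the cited studies.

d_gamma_dT = 5e-5   # N m^-1 K^-1, assumed interfacial-tension sensitivity (water/oil-like)
grad_T = 10e3       # K m^-1, i.e. 10 C/mm, the lower end of the quoted laser-edge gradients
h = 50e-6           # m, assumed channel depth
mu = 10e-3          # Pa s, assumed continuous-phase viscosity

u = h * d_gamma_dT * grad_T / mu
print(f"thermocapillary velocity scale: ~{u * 1e3:.1f} mm/s")   # ~2.5 mm/s

The estimate is only indicative; actual droplet speeds depend strongly on geometry, surfactants and the local value of the gradient.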
The strategies and approaches vary from one application to another; while spatial temperature homogeneity is imperative for many biological applications; a rough temperature gradient is sufficient for most applications in digital microfluidics and a net temperature gradient is required for Temperature Gradient Focusing (TGF) and thermophoresis.in vitro [For biological applications, two main requirements have been reported: 1) maintaining the temperature at 37 \u00b0C to keep cells alive , the eff maintainin vitro ,79. Two bona fide microreactors. The growing demand for handling picoliter to nanoliter volumes of biological samples has driven the development of droplet techniques where a variety of processes, including mixing, splitting and heating are efficiently controlled: these approaches have been designated as digital microfluidics. Such a prospective, demands tight control of fundamental operations such as droplet merging, fission, transport, exchanges, redirection, and storage. This has rekindled interest in the Marangoni surface effect, which refers to tangential stresses induced along an interface by a surface tension gradient [Another approach is the generation of temperature gradients to drive droplets. Indeed, the use of droplet-based microfluidics is largegradient . The prigradient which regradient .i.e., a constant temperature gradient. The detection of chemical species at small concentrations (nanomolar or lower) in small volumes is a core functionality of miniaturized bioanalytical devices. Temperature Gradient Focusing (TGF), on the other hand, involves the application of a temperature gradient across a microchannel or capillary. With an appropriate buffer, the temperature differential creates a gradient in both the electric field and the electrophoretic velocity. Ionic species can thus either be concentrated by balancing the electrophoretic velocity against the bulk flow, or separated according to their individual electrophoretic mobilities. In other words, tuning the temperature allows modulation of the electrophoretic mobility at will. Another application centered on the migration of small particles, is thermophoresis. Thermophoresis (Soret effect) is the phenomenon wherein small particles suspended in a fluid with a temperature gradient, experience a force in the direction opposite to that gradient [Conversely, the following applications, which focus on chemical applications, demand a net temperature gradient, gradient ,89. ThisFinally, thermal control has found application in a broad gamut of diverse fields such as the development of efficient and rapid mixing techniques (for example solutions with low Reynolds number), or the screening of solubility diagrams to study protein crystallization.etc. that are summarized in the following table. This table underlines the fact that despite there being no paradigm to implement microheaters, a huge improvement has been performed in terms of level of integration, cost and response time showing very good ability to integrate the focused application. However, an important feature has been emphasized by several authors [Applications concerned with the control of temperature are numerous and focus on physical, chemical and biotechnological issues. This article shows the great variety of technologies that have been developed to achieve integrated temperature control. 
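For the thermophoresis (Soret effect) application mentioned above, the steady-state concentration profile follows from balancing thermophoretic drift against back-diffusion, c(x)/c0 = exp(-S_T * (T(x) - T0)) with S_T = D_T/D. The Soret coefficient and the temperature profile below are illustrative assumptions.

import numpy as np

S_T = 0.1                              # 1/K, assumed Soret coefficient of the tracked species
x = np.linspace(0.0, 1.0, 6)           # mm, position along the channel
T = 25.0 + 10.0 * x                    # deg C, an assumed 10 C/mm linear gradient
c_rel = np.exp(-S_T * (T - T[0]))      # concentration relative to the cold end

for xi, ci in zip(x, c_rel):
    print(f"x = {xi:.1f} mm  ->  c/c0 = {ci:.2f}")

A positive S_T depletes the warm side, as printed here; a negative S_T accumulates material there instead, which is the basis of the thermophoretic trapping and accumulation strategies referred to above.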
All these techniques present different advantages or drawbacks in terms of easiness of integration, cost, area of control, accuracy of the control, authors ,30,45,82et al. [i.e., an accurate temperature profile is not necessary. For biological applications requiring a homogeneous temperature at the cavity level, Joule heating connected with shape optimization to avoid side effects has been shown to be very efficient with a high level of integration [\u22121\u00b7K\u22121 by adding 21% of silver powder. Such technology would avoid extra steps in the microfabrication process but would require optimization of thicknesses in order to proceed to optical measurements.As stated in the introduction, several strategies have been adopted to control the temperature within a microsystem, and most of the focused applications have been successfully achieved. It is thus difficult to isolate a technique that could be a paradigm. However, for some applications it is possible to extract a technique\u2014that seems to be more accurate and show a high level of integration. In the case of droplet handling, Darhuber et al. ,25,26,27egration ,13,45. Aegration . Finallyegration , such anegration succeede"} {"text": "Although subclinical hypothyroidism (SH) is a common clinical problem, its diagnosis tends to be incidental. According to the definition, it should be asymptomatic, only detectable by screening. The presence or coincidence of any symptoms leads to L-thyroxine treatment. The clinical presentation, especially in younger patients with subclinical hypothyroidism, is still under dispute. Accordingly, the aim of this paper was to review the literature from the past seven years. The literature search identified 1,594 potentially relevant articles, of which 24 met the inclusion criteria. Few studies focus on the symptomatology of subclinical hypothyroidism, and most of them analyzed a small number of subjects. A significant correlation was found by some authors between subclinical hypothyroidism and a higher risk of hypertension, dyslipidemia, and migraine. No evidence of the impact of subclinical hypothyroidism on weight, growth velocity, and puberty was revealed. As the quality of most studies is poor and no definite conclusions can be drawn, randomized, large-scale studies in children and adolescents are warranted to determine the best care for patients with SH. From the biochemical point of view, subclinical hypothyroidism (SH) is characterized by mildly elevated serum TSH concentrations, with normal concentrations of serum free and total triiodothyronine T3) and thyroxine T4), without the typical symptoms of thyroid disease. SH prevalence in adults ranges from 4 to 10% . Accordi and thyr, withoutIn the adult population with subclinical thyroid disease, SH is associated with a risk of progression to overt thyroid disease, lipid disorders, increased risk of atherosclerosis, and mortality due to cardiovascular diseases . The pubAccordingly, the aim of this paper was to analyze studies reporting signs and symptoms presented by children and adolescents diagnosed with subclinical hypothyroidism.In order to identify studies evaluating the clinical manifestations and symptoms of subclinical hypothyroidism in children, a systematic PubMed literature search was conducted. 
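Purely as an illustration of how the boolean, MeSH-based queries detailed in the next paragraph could be executed programmatically (this was not part of the review's methodology), a search of this kind can be run through NCBI's E-utilities, for instance with Biopython. The query string is an approximation of one of the combinations described below and the e-mail address is a placeholder.

from Bio import Entrez

Entrez.email = "name@example.org"    # placeholder; NCBI requires a contact address

term = ('"subclinical hypothyroidism"[All Fields] AND "humans"[MeSH Terms] AND '
        '("infant"[MeSH Terms] OR "child"[MeSH Terms] OR "adolescent"[MeSH Terms])')

handle = Entrez.esearch(db="pubmed", term=term, retmax=200,
                        datetype="pdat", mindate="2008/01/01", maxdate="2014/12/31")
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} records; first PMIDs: {record['IdList'][:5]}")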
The search terms used in the medical subject headings (MeSH) included ) as well as AND (\u201cinfant\u201d [MeSH Terms] OR \u201cchild\u201d [MeSH Terms] OR \u201cadolescent\u201d [MeSH Terms])) and SH hypothyroidism [All Fields] AND \u201chumans\u201d [MeSH Terms] AND (\u201cinfant\u201d [MeSH Terms] OR \u201cchild\u201d [MeSH Terms] OR \u201cadolescent\u201d [MeSH Terms]) and hyperthyrotropinemia [All Fields] AND \u201chumans\u201d [MeSH Terms] AND (\u201cinfant\u201d [MeSH Terms] OR \u201cchild\u201d [MeSH Terms] OR \u201cadolescent\u201d [MeSH Terms]). These terms were combined in various ways to generate a wider search.In addition, the references of selected articles were checked with a view to identify papers not detected by our search strategy. Only full-length publications that met the following criteria were included: (1) long-term prospective or retrospective studies regarding the clinical signs of SH in the pediatric cohort; (2) only studies in English; (3) studies published between January 2008 and December 2014. Exclusion criteria were the following: (1) studies including patients with chronic systematic diseases, genetic syndromes, or autoimmune disorders or under concomitant therapy with lithium salts, antiepileptic agents, glucocorticoids, or iodinated drugs; (2) studies regarding the treatment and effects of L-thyroxine replacement therapy. At least two authors independently selected articles for inclusion and exclusion criteria .After reviewing 1,594 titles, abstracts, and full-length texts, 24 articles meeting the inclusion criteria were selected for the final analysis \u201331. TherIn their cross-sectional controlled long-term study of 36 children with SH and controls, Cerbone et al. assessedIn a case-control study, Kuiper and van der Gaag demonstrSome studies have confirmed the relationship between the levels of TSH and FT3 and change of body weight. The level of FT4 remained stable during follow-up. Leptin was hypothesized to link the weight status with TSH. This would suggest that TSH and FT3 concentrations in obese subjects are a consequence rather than the cause of obesity \u201315.Reinehr et al. analyzedSimilar conclusions were drawn by Ba\u015f et al. , who obsThe study by Grandone et al. encompasThe association between TSH, FT3, FT4, and weight status, as well as their changes during and after a lifestyle intervention in obese children, was also studied by Wolters et al. . They evTwo other studies investigating the correlation between BMI in obese children and hyperthyrotropinemia have produced interesting findings. Hari Kumar et al. comparedRadetti et al. investigR = 0.31; P = 0.02) were independently caused by both younger age and overweight/obesity status.The multicenter study by Rapa et al. was baseIn a study that included 30 SH children and 36 controls, Cerbone et al. evaluateErg\u00fcr et al. conducteZ-scores were found to be higher in subjects with subclinical hypothyroidism than in euthyroid subjects; in males, the scores increased linearly with increasing TSH.In their cross-sectional trial, Chen et al. investigIttermann et al. evaluateCerbone et al. in a croPositive correlation between the components of the metabolic syndrome and serum TSH was also found in Chinese adolescents by Zhang et al. .In their cross-sectional controlled long-term trial, Cerbone et al. also anaZ-scores (BTT Z-scores).di Mase et al. in theirOne cross-sectional study conducted by Fallah et al. assessedP = 0.014) and female gender (P = 0.047). 
The conclusions were as follows: (1) initial normal or slightly elevated TSH levels are likely to remain normal or spontaneously normalize without treatment; (2) patients with initial levels higher than 7.5\u2009mU/L, particularly girls, are at a greater risk for persistent abnormal TSH concentration.Lazar et al. presenteAlso Wasniewska et al. in theirIn their cross-sectional controlled long-term study, Cerbone et al. found thRapa et al. concludeCho et al. investigRapa et al. in theirTwo case-control studies , 30 asseSakka et al. studied Kratzsch et al. observedThe definition of SH is purely biochemical . The estThe analysis of SH-related symptoms was based on two different types of studies. In the first type, SH confirmation was sought as a result of symptoms/clinical presentation , 26\u201330. We only analyzed articles in which subjects presented biochemically pure SH. The prevalence of SH in children and adolescents in the analyzed literature was between 2.9% and 14.1% , 21, 25 SH should be distinguished from physiological or transiently increased TSH in the serum, especially during the recovery phase from nonthyroidal illnesses or after subacute, painless thyroiditis. For this reason, serum TSH should be checked after 3\u20136 months . This obRarely, higher serum TSH concentrations are seen in patients with TSH-receptor mutations causing mild TSH resistance. It can affect up to 0.6% of white population and should be suspected in positive family history of elevated TSH, not coexisting with thyroid autoimmunity .Subclinical hypothyroidism is a common clinical problem. According to the definition, SH is asymptomatic and detectable only by screening. However, there are a lot of controversies concerning population screening for SH as the benefits of therapy are not proven. Single placebo-controlled studies in older populations confirmed that screening and treatment by L-thyroxine improve the quality of life in only 1% of individuals . The symOur study limitation is searching in PubMed base only and restricting the search to English language. Probably a systematic search across multiple databases would yield more results. However, as the quality of most studies is poor and no definite conclusions can be drawn, randomized, large-scale studies in children and adolescents are warranted to determine the best care for patients with SH."} {"text": "Root resorption (RR) is defined as the loss of dental hard tissues because of clastic activity inside or outside of tooth the root. In the permanent dentition, RR is a pathologic event; if untreated, it might result in the premature loss of the affected tooth. Several hypotheses have been suggested as the mechanisms of root resorption such as absence of the remnants of Hertwig's epithelial root sheath (HERS) and the absence of some intrinsic factors in cementum and predentin such as amelogenin or osteoprotegerin (OPG). It seems that a barrier is formed by the less-calcified intermediate cementum or the cementodentin junction that prevents external RR. There are several chemical strategies to manage root resorption. The purpose of this paper was to review several chemical agents to manage RR such as tetracycline, sodium hypochlorite, acids , acetazolamide, calcitonin, alendronate, fluoride, Ledermix and Emdogain. Root resorption (RR) is the loss of dental hard tissues as a result of clastic activities within the pulp or periodontium. It might occur as a physiologic or pathologic phenomenon. 
RR in the primary dentition is a normal physiologic process except when it occurs prematurely , 2. The Etiology of root resorptionRR occurs in two phases: injury and stimulation. Injury is related to non-mineralized tissues covering the external surface of the root (pre-cementum) or internal surface of the root canal (predentin) , 5. The Without further instigation of the clastic cells, resorption is self-limiting. If the damaged surface is small, repair with cementum-like tissue will occur within 2 to 3 weeks. If the damaged root surface is large, bone cells attach to the root in competition with cementum-producing cells and ankylosis takes place , 5. ContMechanisms of root resorptionet al. [Several hypotheses have been suggested regarding the mechanisms of RR. According to a hypothesis, the remnants of Hertwig's epithelial root sheath (HERS) surround the tooth root, like a net impart, which is resistant to resorption and subsequent ankylosis , 6. Haseet al. showed t\u03baB ligand (RANKL). Binding to OPG reduces RANKL concentration and thereby inhibits its ability to bind to receptor activator of nucleic factor \u03baB (RANK) receptors on the surface of osteoclast precursors (circulating monocytes) and stimulate osteoclast production [OPG is a decoy receptor by binding to the receptor activator of nuclear factor oduction . Another hypothesis regarding some forms of external RR is the barrier formed by the less highly calcified intermediate cementum or the cementodentin junction. The intermediate cementum is the innermost layer of cementum which creates a barrier between the dentinal tubules and the periodontal ligament. Under normal conditions, this barrier does not allow irritants such as bacterial by-products to pass from an infected pulp space to stimulate an inflammatory response in the adjacent periodontal ligament. If the intermediate cementum is lost or damaged, pro-inflammatory mediators may diffuse from an infected pulp space into the periodontal ligament, setting up an inflammatory response and subsequent external inflammatory RR .Materials used to manage root resorptionTetracyclinesTetracyclines, including tetracycline-HCl, minocycline, demeclocycline and doxycycline, are a group of broad-spectrum antibiotics that are effective against a wide range of microorganisms . TetracyInflammatory diseases such as periodontitis include a pathological excess of tissue collagenases, which may be blocked by tetracyclines, leading to enhanced formation of collagen and bone formation .et al. [Based on the hypothesis that microorganisms reach the apical area of the recently replanted tooth from the oral cavity , and considering the inhibitory action of tetracyclines in preventing this route of bacterial contamination, Cvek et al. developeet al. .et al. [Bryson et al. evaluateTerranova showed tSodium hypochloriteet al. [Because of its excellent antibacterial activity and its ability to dissolve necrotic tissues, various concentrations of sodium hypochlorite (NaOCl) have been recommended for removing necrotic periodontal ligament remnants . The beset al. protocolAcidsHydrochloric acidet al. [Hydrochloric acid has also been used in association with an enzyme, hyaluronidase, with the aim of decalcifying the cementum without denaturing the collagen matrix and has been shown to significantly decrease RR . Nordenret al. evaluatePhosphoric acid et al. [Phosphoric acid at 50% concentration has been used for the same purpose, and the results revealed that its use alone increased the occurrence of RR . 
Accordiet al. protocolCitric acidIn an attempt to expose collagen fibers on root cementum and promote a contact surface for re-attachment of periodontal ligament collagen fibers, treatment of root surface with citric acid has been proposed. Large number of ankylotic areas and replacement resorption has been observed after treating the root surface with citric acid -25. AfteAscorbic acidAscorbic acid (vitamin C) is a fundamental vitamin in all body tissues. It is essential for the hydroxylation of proline and lysine, which are very important during the synthesis of collagen. Its absorption depends on tissue concentration. If available at an appropriate concentration, ascorbic acid is responsible for maintaining the efficacy and the phagocytosis activity of leucocytes .Vitamin C also plays a role in osteogenesis by the activation of alkaline phosphatase and increase in the functional activity of osteoblasts . VitaminAcetazolamide Acetazolamide is a carbonic anhydrase inhibitor that is used to treat several diseases such as glaucoma, epileptic seizures, cystinuria, periodic paralysis and dural ectasia . Mori anCalcitoninCalcitonin is a hormone synthesized by the thyroid gland which is a proven potent inhibitor of clastic cells and has been indicated for the treatment of external RR. The influence of this hormone as an intracanal dressing after tooth replantation has been investigated, and it has been observed that it causes a decrease in inflammatory RR and better control of the sequelae of dental trauma even in cases with uncertain prognosis .In an experimental study in dogs, Caldart histometThe association of calcitonin and CH as an intracanal medication for replanted teeth has been advocated mainly because of the recognized capacity of reducing the osteoclastic activity, interfering in the proliferation, motility, and vitality of these cells, and reducing the resorption rate. However, this association has not provided better results than CH alone .Alendronate Alendronate is currently being used to inhibit pathologic osteoclast-mediated hard tissue resorption in diseases such as Paget's disease, osteoporosis and osteoclastic malignancies of bone . The affet al. [et al. [in vitro. Moreira et al. [in vitro as well as in vivo. Lustosa-Pereira et al. [et al. [et al. [et al. [Levin et al. assessed [et al. indicatea et al. showed ta et al. indicate [et al. assessed [et al. evaluate [et al. showed tLedermixi.e. triamcinolone and demeclocycline) are capable of diffusing through dentinal tubules and cementum to reach the periodontal and periapical tissues [et al. [Ledermix is a glucocorticosteroid-antibiotic compound. Today, Ledermix paste remains a combination of the same tetracycline antibiotic, demeclocycline HCl (at a concentration of 3.2%), and a corticosteroid (1% triamcinolone acetonide), in a polyethylene glycol base . The two tissues . Abbott [et al. showed tin vivo [-3 to 10-6 mg/mL reversibly. Furthermore, mixing with Pulpdent paste did not modify this anti-mitotic effect [et al. [et al. [et al. [It has been demonstrated histologically that Ledermix eliminated experimentally induced external inflammatory RR in vivo . Furtherin vivo . Ledermic effect . Thong e [et al. found th [et al. evaluate [et al. evaluate [et al. evaluate [et al. . 
EmdogainEnamel matrix derivative (EMD) in the form of a purified acid extract of proteins from pig enamel matrix has been successfully employed to restore functional periodontal ligament, cementum and alveolar bone in patients with severe attachment loss , 54.One attractive possibility for application of Emdogain is for their use in replantation procedures with avulsed teeth. The underpinning mechanism is that root surface conditioning with amelogenin could prevent RR and ankylosis, and stimulate periodontal ligament formation after repositioning of the avulsed tooth. Some early case-reports and animal experimental findings suggested that EMD could be used as a bioactive root conditioning for reintegration of avulsed teeth , but subIn summary, interesting data on the utilization of amelogenin for replantation of avulsed teeth has accumulated. It seems plausible that amelogenin can stimulate regeneration of the tooth attachment apparatus even in cases where the tooth has been stored for significant time outside the oral cavity. However, additional treatment is needed to ensure stable and predictive treatment outcomes, and new protocols for combination of amelogenin treatment with anti-inflammatory and anti-microbial drugs must be developed and tested before the full potential of amelogenin can be exploited in dental traumatology.FluorideSeveral studies have recommended the use of fluoride solutions in different forms and concentrations to treat the root surface in cases of delayed tooth replantation, assuming that the demineralized dentin surface would be more prone to fluoride incorporation and might become more resistant to resorption. et al. [et al. [et al. [Among the fluoride solutions, the use of 2% acidulated sodium phosphate fluoride has shown a decreased inflammatory RR and the predominance of areas of ankylosis and replacement resorption -62. Accoet al. fluoride [et al. showed t [et al. did not et al. [et al. [2 to the root surface prior to replantation effectively reduces resorptive processes during the first postoperative weeks. By subsequently treating the root surface with tetracycline, the adverse effect of SnF2 on periodontal connective tissue repair reduced. In a study on dogs' teeth, Selvig et al. [2 followed by tetracycline resulted in complete absence of inflammatory resorption and ankylosis in the short-term experiment. In another study, Selvig et al. [2 solution from 1% to 0.1% may result in less persistent inflammation, however at the cost of less complete prevention of inflammatory resorption and ankylosis. Kameyama et al. [In a study on monkeys' teeth, Barbakow et al. showed t [et al. indicateg et al. showed tg et al. indicatea et al. revealedet al. [2, saturated citric acid, or saline control in conjunction with periodontal flap surgery. According to their findings SnF2-treated teeth healed with significantly longer junctional epithelium, less connective tissue repair to the root surface, and less bone regeneration than citric acid and control teeth. New cementum formation was limited in all treatment groups. RR was observed in almost all teeth exhibiting connective tissue repair, however to a lesser amount and not as frequent in SnF2 treated teeth due to limited connective tissue repair. Poi et al. [et al. [Wikesjo et al. evaluatei et al. showed t [et al. 
assessedi) Tetracyclines have antimicrobial and anti-resorptive activity.ii) Treatment of root surface with SnF2 followed by tetracycline resulted in complete absence of inflammatory resorption and ankylosis in short-term. iii) Ascorbic acid is capable of influencing tissue repair. iv) Acetazolamide is effective against root resorption. v) Combining calcium hydroxide with calcitonin seems to be effective in controlling inflammatory root resorption. vi) Alendronate is effective on inflammatory root resorption but has no effect on replacement resorption. vii) Ledermix prevents inflammatory root resorption.viii) Emdogain can support functional healing and periodontal regeneration after replantation, even when the avulsed teeth have a severely compromised cementum layer."} {"text": "To the Editor,1 recently published in ArquivosBrasileiros de Cardiologia. The investigators reported that rates of atrialfibrillation (AF) recurrence were similar in hyperthyroid and euthyroid patients andthat the duration of AF was the only predictor of AF recurrence in both.I have read with great interest the article entitled \"Predictors of Atrial FibrillationRecurrence in Hyperthyroid and Euthyroid Patients\" by G\u00fcrdogan et al.,2 have reported that a lowserum thyroid-stimulating hormone (TSH) level is an independent risk factor for AF. Allother factors predisposing to AF were mentioned and discussed in that article.Hyperthyroidism is a well-known risk factor for paroxysmal and permanent AF. Marrakchi etal.3 have found astrong relationship between vitamin D deficiency and nonvalvular AF. Serum vitamin Dlevels correlated with high sensitive C-reactive protein levels and left atrialdiameter, and were significantly associated with AF in Chinese patients with nonvalvularpersistent AF.4 Hanafy et al.5 have revealed the directelectromechanical effects of vitamin D administration on the left atrium and found thatvitamin D could effectively prevent and terminate AF.Additionally, Demir et al.1 should have reported the vitamin D levels of the patients intheir study and discussed the association between the levels of this vitamin and AFrecurrence.In the light of this knowledge, G\u00fcrdogan et al. Arquivos Brasileiros deCardiologia.1We are pleased that Dr. Cerit showed great interest in our article entitled''Predictors of Atrial Fibrillation Recurrence in Hyperthyroid and EuthyroidPatients'' published in 3 However, the relationship between this deficiencyand nonvalvular AF is not dependent on the occurrence of thyroid disorder. In thestudy published by Demir et al.,2the TSH levels were normal in all AF groups. In addition, thyroid dysfunction was anexclusion criterion in the studies by both Demir et al.2 and Chen et al.3 We did not evaluate the vitamin D levels in our study'sparticipants, which we can add as a limitation of our research. However, theparticipants in our study did not report any symptom or treatment of vitamin Ddeficiency. After thyroid surgery, in particular, patients may have vitamin Ddeficiency and hypothyroidism, but none of the patients in our study had priorthyroid surgery.Recent studies have found that vitamin D deficiency is related to nonvalvularAF.Considering the above, large-scale trials are still necessary to evaluate therelationship between vitamin D levels, thyroid function, and AF. We thank Dr. Ceritfor this great contribution to our work.Dr. 
Hasan ARI"} {"text": "On the basis of the continuum theory of micromagnetics, the correlation function of the spin-misalignment small-angle neutron scattering cross section of bulk ferromagnets is computed and discussed. e.g. elemental polycrystalline ferromagnets, soft and hard magnetic nanocomposites, nanoporous ferromagnets, or magnetic steels) is computed. For such materials, the spin disorder which is related to spatial variations in the saturation magnetization and magnetic anisotropy field results in strong spin-misalignment scattering d\u03a3M/d\u03a9 along the forward direction. When the applied magnetic field is perpendicular to the incoming neutron beam, the characteristics of d\u03a3M/d\u03a9 are determined by the ratio of magnetic anisotropy field strength Hp to the jump \u0394M in the saturation magnetization at internal interfaces. Here, the corresponding one- and two-dimensional real-space correlations are analyzed as a function of applied magnetic field, the ratio Hp/\u0394M, the single-particle form factor and the particle volume fraction. Finally, the theoretical results for the correlation function are compared with experimental data on nanocrystalline cobalt and nickel.On the basis of the continuum theory of micromagnetics, the correlation function of the spin-misalignment small-angle neutron scattering cross section of bulk ferromagnets ( Svergun & Koch, 2003e.g. in the analysis of polymers is a very popular method for investigating nanoscale structural and magnetic inhomogeneities in the bulk of materials. In most situations, SANS data are analyzed in reciprocal space, by fitting a particular model to the experimental SANS cross section. An alternative real-space approach to analyzing SANS data is the computation of the (auto)correlation function of the system, for instance by means of the indirect Fourier transformation technique shows a sketch of the nuclear (grain) microstructure of such a material, and Fig. 1b) displays qualitatively the magnetic (spin) distribution at a nearly saturating applied magnetic field.We consider polycrystalline statistically isotropic bulk ferromagnets. Examples of such materials are inert-gas condensed single-phase elemental ferromagnets , the elastic unpolarized SANS cross section V is the scattering volume, For the scattering geometry where the applied magnetic field via the function As shown by Honecker & Michels 2013, near mae.g. particle\u2013matrix) interfaces. The corresponding (dimensionless) micromagnetic response functions can be expressed as A: exchange-stiffness parameter; q and e.g. Fig. 11 in \u00a76.2et al., 2014The anisotropy-field scattering function (in units of h depend only on the magnitude i.e.By assuming that the functions 3.2.b), the total unpolarized SANS cross section et al., 1999For the scattering geometry where the external magnetic field 4.et al., 2003et al., 2004Before addressing the magnetic correlation functions, we will briefly recall the corresponding well known results from nuclear SANS theory Therefore, we define the correlation function We remind the reader that q and the direction \u03b8 of the scattering vector e.g. Fig. 11 in \u00a76.2via the Fourier coefficients et al., 2012et al., 2014i.e. with j\u2019, whereas Bessel functions are represented with an upper-case \u2018J\u2019. 
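To make the reciprocal-to-real-space step above concrete, the following is a hedged sketch rather than the paper's own equations (which retain the full angular dependence of the cross section and the upper-case Bessel functions Jn for the perpendicular geometry). For an azimuthally averaged, statistically isotropic cross section, the normalized correlation function reduces to the familiar spherical-Bessel transform,

c(r) = \frac{\int_0^{\infty} (d\Sigma_M/d\Omega)(q)\, j_0(qr)\, q^{2}\, dq}{\int_0^{\infty} (d\Sigma_M/d\Omega)(q)\, q^{2}\, dq}, \qquad j_0(x) = \sin(x)/x,

so that c(0) = 1 and the decay of c(r) carries the spin-misalignment length scales. A minimal numerical illustration of this transform follows; the squared-Lorentzian model cross section, the 10 nm length scale and the q-grid are assumptions chosen for the example, not values from the paper.

import numpy as np

def correlation_function(q, I, r):
    """Normalized real-space correlation function c(r) obtained from an
    azimuthally averaged, isotropic SANS cross section I(q)."""
    x = np.outer(r, q)                                     # shape (len(r), len(q))
    j0 = np.where(x == 0.0, 1.0, np.sin(x) / np.where(x == 0.0, 1.0, x))
    num = np.trapz(I * q**2 * j0, q, axis=1)
    den = np.trapz(I * q**2, q)
    return num / den

q = np.linspace(1e-3, 3.0, 4000)        # nm^-1, illustrative q-range
l = 10.0                                # nm, assumed micromagnetic length scale
I = 1.0 / (1.0 + (q * l)**2)**2         # squared-Lorentzian model cross section
r = np.linspace(0.0, 80.0, 400)         # nm
c = correlation_function(q, I, r)

# crude correlation length: distance at which c(r) has dropped to 1/e
print(r[np.argmin(np.abs(c - np.exp(-1.0)))])

For this model the printed value is close to the assumed 10 nm, since a squared-Lorentzian cross section corresponds to a nearly exponential correlation function.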
Equation (36)The spin-misalignment SANS cross section for the perpendicular scattering geometry depends on both the magnitude et al., 2014Since for statistically isotropic bulk ferromagnets In a SANS experiment, only the components of the momentum-transfer vector yz plane. By introducing the nth-order Bessel function AIn Appendix 5.q of the momentum-transfer vector i.e.In order to solve equation (36)i.e. the magnitude Under these assumptions (same size and shape), n ranging between 6 and 8 microstructure 6.The following materials parameters were used in the calculations: saturation magnetization 6.1.All results in this section are obtained by numerical integration of equations (45)i.e.Fig. 4b)]. For the chosen limiting case of Increasing et al., 2013b), we note that the correlation functions of bulk ferromagnets enter the origin et al., 2013k is a constant). This observation may be compared to the well known result for nuclear particle scattering, where (for isolated uniform particles) the first derivative of r, the correlation function can be expanded as and 7\u22121 and For soft magnets , the following relation for R; length: L) .It is seen in Fig. 8a) and correlation length . In order to model the effect of dense packing, we have used the Percus\u2013Yevick hard-sphere structure factor for R.Finally, Fig. 9th Fig. 9b. In ori.e. with increasing interparticle interactions, we progressively introduce, in addition to the original (diffuse) spin-misalignment length e.g. the hump in It is clearly seen that with increasing particle volume fraction \u03b7 the range of the correlations decreases. However, the characteristic features of the structure factor only become visible at relatively large values of \u03b7 (above about 20%), while at the lower end of \u03b7 values both The field dependence of this feature is depicted in Fig. 106.2.a)\u201311d) show a)\u201311d), from a spike-type anisotropy at low fields (a) to a clover-leaf-shaped anisotropy at large fields (d), is related to the field dependence of the Fourier coefficients and demonstrates that different terms in the response functions do not R of both materials are in the range 8\u201313\u2005nm, slightly smaller than the ones estimated previously . Therefore, the finding et al., 2003A value for Co using equation (41)et al., 2003e.g. nanocomposites (Michels et al., 2006As is seen in Fig. 147.via a structure factor. The result for et al., 2013et al., 2013et al., 2014et al., 2015et al., 2015On the basis of a recent micromagnetic theory for the magnetic SANS cross section of inhomogeneous bulk ferromagnets, we have studied the corresponding magnetic field-dependent spin-misalignment correlations in real space. The correlation function"} {"text": "A \u201cvitamin D hypothesis\u201d has been proposed to explain the increased prevalence of eczema in regions with higher latitude. This review focuses on the current available evidence with regard to the possible effect of vitamin D on the development of atopic eczema. Observational studies have indicated a link between vitamin D status and eczema outcomes, including lower serum vitamin D levels associated with increased incidence and severity of eczema symptoms. Vitamin D is known to have a regulatory influence on both the immune system and skin barrier function, both critical in the pathogenesis of eczema. However heterogeneous results have been found in studies to date investigating the effect of vitamin D status during pregnancy and infancy on the prevention of eczema outcomes. 
Well-designed, adequately powered, randomised controlled trials are needed. The study design of any new intervention trials should measure vitamin D levels at multiple time points during the intervention, ultraviolet (UV) radiation exposure via the use of individual UV dosimeters, and investigate the role of individual genetic polymorphisms. In conclusion, the current available evidence does not allow firm conclusions to be made on whether vitamin D status affects the development of atopic eczema. Changes to our modern lifestyles, with increased indoor employment and relaxation activities, along with increased sun protection behaviours, have led to limited sunlight exposure for many individuals. This can lead to vitamin D deficiency, as humans predominately derive vitamin D by cutaneous synthesis under the influence of sunlight, with limited vitamin D sourced from dietary intake . The conet al. [et al. [et al. [A potential link between vitamin D and the development of allergic disease, the so-called \u201cvitamin D hypothesis\u201d, first emerged when higher rates of allergic disease were observed in higher latitudes ,3,4,5,6 [et al. found th [et al. , where 3 [et al. . This hiThis review discusses the current evidence with regard to vitamin D and the development of atopic eczema, with a particular focus on the prevention of eczema in early life. MEDLINE and PUBMED database searches were performed using the keywords \u201catopic dermatitis\u201d or \u201ceczema\u201d and \u201cvitamin D\u201d or \u201c25-hydroxyvitamin D\u201d, limiting to publication dates of 1 January 2000 to 30 November 2014, with the search also limited to human subjects and English language. Some additional original research papers were also identified through other known articles on related topics.Eczema is generally the first manifestation of allergic disease . The natet al. [et al. [Observational studies have indicated a link between vitamin D status and eczema outcomes, with lower serum vitamin D concentrations associated with increased incidence, especially in children more so than in adults ,15,16. Iet al. and Lee [et al. observedRecently eczema phenotypes have also been found to be associated with multiple vitamin D pathway genes . Thus, vIn vitro work has demonstrated that cathelicidin is induced in keratinocytes in response to vitamin D metabolites, which enhances antimicrobial activity against Staphylococcus aureus [+/CD25+/Foxp3+ T lymphocytes [Vitamin D can influence the regulation of the immune system in a number of ways. Firstly, it is believed to play an important role with regards to susceptibility to cutaneous bacterial and viral infection . Individs aureus . Moreoves aureus . As wellphocytes ,29.In terms of the skin barrier function, vitamin D and the vitamin D receptor have a regulatory role in the control of proliferation in the stratum basale, regulation of proteins in the stratum spinosum and stratum granulosum (filaggrin and loricrin), and synthesis of lipids necessary for the barrier function of the strata corneum . Thus, vHumans naturally derive >90% of their vitamin D requirements by cutaneous synthesis under the influence of sunlight via ultraviolet B (UVB) radiation . VariatiIf vitamin D supplementation is to be used, the ideal dose of vitamin D will depend on individual sun exposure, intake of vitamin D rich foods and current vitamin D status. 
The best indicator of vitamin D status is considered the measurement of serum 25-hydroxyvitamin D (25(OH)D) which is the most abundant and stable vitamin D metabolite . Serum 2Other potentially linked outcomes are eczema, vitamin D status and bone health, especially given the regular use of corticosteroids , chronic inflammation and possible low vitamin D status of eczema affected individuals. A recent large adult study in the USA has observed a higher risk of fractures amongst patients with eczema . Furtherin utero and the risk of eczema development, while others suggest that high levels may be a risk factor [Some observational studies ,37 suppok factor . Howeveret al. [et al. [During pregnancy, fetal vitamin D levels are determined by maternal vitamin D status and the ability of 25(OH)D to cross the placenta . Investiet al. found hiet al. ,42,43,44 [et al. found thin utero. in utero period. The heterogeneous findings with regard to maternal antenatal or cord blood 25(OH)D status and off spring eczema outcomes from observational studies illustrates the need for well-designed RCTs on this topic.Measuring maternal 25(OH)D levels during pregnancy is a better methodological approach to determine the degree of vitamin D exposure to the developing fetus in utero on the risk of allergic disease development in the offspring.To date there has only been one published intervention trial investigating the effect of vitamin D exposure n = 180) RCT conducted by Goldring et al. [et al. [This small and are using higher doses (2400 IU or 4000 IU per day) of maternal vitamin D supplementation during pregnancy. Over the next few years the results of these trials should clarify any effect of vitamin D during pregnancy on early childhood allergic disease.Currently there are several other RCTs also investigating the effect of maternal vitamin D supplementation during pregnancy on allergic disease outcomes (NCT00920621 and NCT00856947). These RCTs are larger D levels of around 25 IU/L from lactating women who have \u201csufficient\u201d maternal 25(OH)D status . Howevern = 123 infants) placing major limitations on the interpretation of these findings. From the 1920s to the present, the use of vitamin D supplementation has been routinely recommended for the prevention of rickets in many countries particularly in the Northern Hemisphere ,57,58. HIn order to definitively determine whether postnatal vitamin D supplementation has an effect on childhood eczema development well-designed adequately powered randomised controlled trials are essential. The study design of any new intervention trial in this field should measure 25(OH)D levels at multiple time points during the intervention and follow-up period, and ideally also measure UVB radiation exposure via the use of individual UV dosimeters. UV radiation phototherapy as a treatment for moderate to severe eczema symptoms in adults has been appreciated for many years . Howevern = 82) or placebo (n = 82) for 6 weeks in lactating mothers of infants who had facial eczema by one month of age. There were no differences between the groups in eczema severity at 3 months or incidence of eczema at the two year of age follow-up assessment. Unfortunately, any interpretation of the findings from this trial are particularly limited as serum 25(OH)D levels were not measured in either the mothers or the infants. 
This was a critical study design flaw given that two studies [Maternal vitamin D supplementation in the postnatal period was studied in a RCT conducte studies ,54 have studies was thatPromising results of improved eczema symptoms have been found using oral vitamin D supplementation (1000 IU vitamin D for 1 month) in children, aged 2\u201317 years, during winter in two double-blinded placebo-controlled RCTs ,63. Bothet al. [et al. [et al. [In adults, mixed results have been found in several randomised, placebo-controlled, double-blind trials investigating vitamin D supplementation as a treatment strategy for atopic dermatitis. Using an oral vitamin D supplement of 1600 IU per day for 60 days, both Amestejani et al. and Java [et al. found si [et al. determinet al. [et al. [Calcipotriene is a topical vitamin D3 analogue approved for the treatment of scalp psoriasis. After a case of eczema flare in response to application of calcipotriene cream in a 2 year old boy, Turner et al. investig [et al. ,69. Hencet al. [et al. [With regard to the possible effects of vitamin D on allergen sensitisation, again we find heterogeneous results from observational studies. Some studies have found higher maternal vitamin D intake from foods during pregnancy or higheet al. found thet al. ,45,47 or [et al. . et al. [Some of these \u201cinconsistent\u201d results may be explained by possible \u201cU-shaped associations\u201d, with an increased risk of allergen sensitisation at both low and high levels of vitamin D. This is well illustrated in a study by Rothers et al. which obet al. . A potenet al. .et al. [In depth genotyping with regard to eczema, IgE synthesis and vitamin D metabolism will likely to be able to shed more light and provide further advances in our understanding of the role of vitamin D on the development of atopic eczema. Liu et al. identifiet al. .p = 0.0004). Furthermore two CYP2R1 haplotypes increased eczema risk whereas one vitamin D receptor haplotype lowered eczema risk. GC rs7041 and CYP2R1 rs7935792 also interacted to modulate total IgE among these Chinese eczema patients. Recently eczema phenotypes have also been found to be associated with multiple vitamin D pathway genes . In a laet al. [BsmI (rs1544410) G allele, ApaI (rs7975232) C allele and TaqI (rs731236) T allele were all over-represented in patients with severe atopic dermatitis compared to healthy controls. Together these findings suggest that the effect of vitamin D status on allergy outcomes may be influenced by the underlying genetic milieu.Heine et al. studied et al. [Along with genetic variability, another explanation for the heterogeneous findings with regard to the effect of vitamin D status and atopic eczema may be the source of the vitamin D. Could there be an effect of sunlight exposure or UV radiation on the skin that is independent of the parallel effect that UV radiation has on vitamin D status? An interesting finding from a recent study by Noh et al. was thatet al. , which iet al. , hence tA lack of well-designed randomised controlled trials investigating the effect of vitamin D on the development of eczema has resulted in limited and conflicting evidence to date on this topic. The sunlight exposure effect alone on the incidence and severity of eczema especially requires in detailed further examinations at various time points in early life. 
The use of devices such as individual UV dosimeters, which are now readily available to directly measure UVB radiation exposure, should be an essential measurement outcome in all future studies. Unlocking the role of individual genetic polymorphisms and their influence on vitamin D status in early life and eczema development will also be essential, and may be of some value in explaining the lack of consistency in findings to date. There remains enormous scope for addressing the role of these diet and lifestyle factors as potential allergy prevention strategies. To date, the current available evidence does not allow firm conclusions to be made on whether vitamin D status affects the development of atopic eczema."} {"text": "Acta Cryst. (2013), E69, m126.Corrigendum to et al. is corrected and a reference added for a previously published report of a closely related structure.An erroneous claim in the paper by Royappa The authors sincerely regret this unintentional oversight.However, the authors were unaware of a previous report (Jakob"} {"text": "Nearly a year ago, UNAIDS launched the ambitious \u201c90\u201390\u201390\u201d targets to help end the AIDS epidemic: by 2020, 90% of people living with HIV will be diagnosed, 90% of those diagnosed will be on sustained antiretroviral therapy (ART) with 90% viral suppression in those on ART . A welcoDespite an impressive 40% reduction in mother-to-child HIV transmission (MTCT) in the last five years, there were still an estimated 220,000 new paediatric infections in 2014 . Due to Infants, children and adolescents continue to have the largest gaps in HIV diagnosis and treatment ,4,8. DesIt is well-known that the treatment gap for children remains vast and substantially larger than that of adults, with less than a third of HIV-infected children <15 years receiving ART in 2014 \u201318, the et al. [In the absence of treatment, perinatally HIV-infected infants experience extraordinarily high mortality, which can be reduced by 75% with immediate ART in children <3 months of age \u201318,21. Tet al. . Howeveret al. \u201327.et al. [et al. [et al. [In older children, a causal modelling study showed small but significantly reduced mortality with universal ART in children aged 5\u201310 years, and studies consistently show better height gain with immediate treatment in children \u201331. Cohoet al. , increas[et al. and Rabi[et al. highligh[et al. ,41.The second and third \u201c90s\u201d, namely retention on ART and achieving viral suppression on first-line therapy, are paramount for children who face lifelong treatment with access to a limited range of alternative drugs. These goals are important to prevent exhausting limited treatment options and to achieve optimal ART outcomes, as well as to prevent transmission of multi-drug resistant viruses when these children grow up with HIV and become sexually active. In addition, sustained virologic suppression, especially from early infancy, is associated with better neurocognitive and growth outcomes as well as reduced viral reservoirs \u201344.While reports from individual research cohorts suggest that good retention and viral suppression are possible, more routine programmatic data reflects a less optimistic picture ,46. In aet al. [Adolescents experience obstacles to accessing health services on their own, including stigma, lack of youth-friendly services and parental consent policies, making this a key group for targeting 90\u201390\u201390 ,13. Whetet al. , and by et al. . Adolescet al. , with HIet al. ,56. 
Theret al. . Achieviet al. .While the 90\u201390\u201390 targets make us think about those already HIV-infected, one of their most important benefits will be in the reduction of new HIV infections . Ending The specific focus on children and adolescents in the UNAIDS 2020 targets, together with alignment of political commitment and financial resources, provides a much-needed opportunity to address previous inequities both in research and service delivery for paediatric HIV. The articles in this issue describe a number of challenges and barriers to achieving the targets, but also important linked strategies for overcoming them, which are represented in the conceptual framework in et al. [et al. [et al. [et al. [One of the major barriers to improving paediatric HIV care is the paucity of paediatric HIV research. There is frequently little or no high-quality evidence on which to base policies and guidelines. Many paediatric HIV care recommendations in WHO and national guidelines therefore remain conditional, rather than strong, with a risk of less commitment to their implementation ,62,63. Tet al. , Cotton [et al. , Penazza[et al. and Rabi[et al. , to the [et al. . While p[et al. , adolesc[et al. and thos[et al. , as well[et al. .The need for information extends beyond academic research to monitoring and evaluation of routine programmes \u2013 we will not know whether we have met the 90\u201390\u201390 targets unless we measure them, and we are unlikely to achieve them unless we monitor our progress (or lack thereof) towards them, using the information to improve programmes. In this respect, the lack of access to routine viral load monitoring in many settings is a major obstacle both to achieving 90% suppression and knowing how close or far off we are.et al. [et al. [et al. [We will not reach 90\u201390\u201390 for children with a \u201cbusiness as usual\u201d approach. Many articles in this issue discuss innovations both within and outside the health system that show promise in improving paediatric and adolescent HIV care. For example, Essajee et al. review fet al. emphasizet al. point ou [et al. argue th[et al. outline Abrams and Strasser emphasizet al. [et al. [et al. [Integration has been a \u201cbuzzword\u201d in adult HIV for several years, with emerging promising practices for children and adolescents. The need for integration is highlighted by a number of articles in this supplement. As described by Chamla et al. , the ratet al. . It aims[et al. identify[et al. emphasiz[et al. , prepareet al. [et al. [There is a huge diversity of role players and stakeholders in paediatric and adolescent HIV, both within and outside the health service. Stakeholders include funding agencies, policy makers, researchers, implementing partners, ministries of health, industry , health care workers, non-profit and community-based organizations, as well as, importantly, children, adolescents and caregivers themselves. Like previous targets, the 90\u201390\u201390 agenda provides an opportunity for these groups to work towards a common goal, facilitating collaboration. For example, Chamla et al. highligh [et al. ) is an i [et al. . Demand [et al. . In conset al. [Intersectoral collaboration needs to extend beyond the health system and its traditional partners. Cluver et al. point ouThere can be no keener revelation of a society's soul than the way it treats its children.\u2013 Nelson MandelaThere are many challenges to reaching the 90\u201390\u201390 targets for children and adolescents. 
They require a range of linked activities by multiple players working together with concerted effort at many levels within and beyond the health system. While targets can be criticized, they drive progress and help to consolidate and renew financial and political commitment to HIV prevention and treatment. These targets therefore offer the global community an opportunity to focus on children, and the very real and remarkable possibility of ending paediatric HIV."} {"text": "Fire has been used for centuries to generate and manage some of the UK's cultural landscapes. Despite its complex role in the ecology of UK peatlands and moorlands, there has been a trend of simplifying the narrative around burning to present it as an only ecologically damaging practice. That fire modifies peatland characteristics at a range of scales is clearly understood. Whether these changes are perceived as positive or negative depends upon how trade-offs are made between ecosystem services and the spatial and temporal scales of concern. Here we explore the complex interactions and trade-offs in peatland fire management, evaluating the benefits and costs of managed fire as they are currently understood. We highlight the need for (i) distinguishing between the impacts of fires occurring with differing severity and frequency, and (ii) improved characterization of ecosystem health that incorporates the response and recovery of peatlands to fire. We also explore how recent research has been contextualized within both scientific publications and the wider media and how this can influence non-specialist perceptions. We emphasize the need for an informed, unbiased debate on fire as an ecological management tool that is separated from other aspects of moorland management and from political and economic opinions.This article is part of the themed issue \u2018The interaction of fire and mankind\u2019. In the uplands, such ecosystems are often collectively referred to as \u2018moorland\u2019. While in some northern and western regions these ecosystems may have a natural origin (e.g. [Lagopus lagopus scoticus Latham 1787) on privately owned shooting estates. The current form of rotational patch burning associated with grouse moor management , in mosin versus continuing an intensive, stereotypical form of traditional rotational heather burning. In reality, existing practices are very spatially heterogeneous and the grouse moor stereotype \u2003Provide robust evidence of the interactions and trade-offs between the various practices associated with peatland management regimes .(2)\u2003Consistently classify the effects of all vegetation fires according to fire severity. At its simplest level this means not confounding severe wildfire effects with those from management burns. Management fires are set in winter or early spring when soil heating is minimal. By contrast, wildfires predominantly occur in spring and summer during dry periods when dee(3)\u2003Develop appropriate guidelines for classifying peatland condition that account for their fire ecology.(4)\u2003Generate informed and unbiased debate regarding peatland fire management that separates ecology from politics and economics.Here, based on recent peer-reviewed literature on the use of managed fire in the UK uplands, and its subsequent presentation in the wider media, we consider there is an urgent need for researchers to:2.(a)Calluna productivity in all situations. Results from the study by MacDonald et al. 
[Calluna can regenerate by \u2018layering\u2019 and the formation of adventitious roots. This led to the recommendations that managers not burn stands which have not experienced fires in the last 40 years and which have well-developed heather layering; avoid burning Calluna in wet, shaded or humid situations where layering is likely; and concentrate burning activity where Calluna forms dense, continuous stands. While these management suggestions may seem like common sense, there remains surprisingly little scientific evidence to suggest what their outcomes would be in terms of patch or landscape-scale ecosystem structure, function and diversity.Any ecological disturbance has benefits and costs depending on the species or ecosystem in question. Where humans plan ecological disturbances for landscape management goals, it is essential to weigh up the trade-offs involved and make decisions that reflect the weighting given to different priorities. Debate over who should get to make such decisions, and how, is an important philosophical and political issue but is beyond the scope of this paper. In many ecosystems, fire is a natural process that plays a vital role in facilitating plant regeneration, improving forage quality and productivity, defining vegetation community composition, controlling landscape-scale variation in habitat structure, and modulating subsequent wildfire behaviour and severity e.g. ,2,26). M,26. M2,2d et al. show thaCalluna via burning creates hydrological and light conditions that favour Sphagnum species over pleurocarpous mosses. Evidence from fire-prone black-spruce forested bogs in North America and mires in Sweden, for example, show that Sphagnum species are replaced by pleurocarpous mosses under dense canopies that can be removed by wildfire [Sphagnum plants can regenerate from deeply buried stems [Sphagnum plants have been observed to vigorously resprout following intense wildfires [Calluna canopy density are likely to be required to restore peat-forming vegetation on many degraded bogs, and fire may be an effective way to achieve this particularly if the Calluna is old and unlikely to resprout [Sphagnum abundance was higher in 10-year burn rotations than in both 20-year rotations and locations that had not been burnt for approximately 90 years [Sphagnum populations following managed fires Hedw, and Sphagnum fuscum (Schimp.) H. Klinggr are particularly resilient to fire, but data are needed on burn effects on other species.It is not our aim here to provide an exhaustive review of the effects of fire on peatland environments or other ecosystems. Instead, we suggest readers refer to holistic reviews of the effect of fire on the environment and specwildfire ,37. Expeed stems , and in resprout . Evidenc90 years . This rees (e.g. ) and thaum plant . Based oet al. [et al. [et al. [Pluvialis apricaria L. 1758) populations were positively affected by prescribed burning, meadow pippits (Anthus pratensis L. 1758) were negatively impacted. Elsewhere in Europe, researchers have shown the benefit of prescribed fire use in preventing the loss of protected, internationally rare moorland ecosystems more generally , and, andet aly production (e.g. ) that maet al. showed tet al. ; (ii) DOg per se ,57, and g per se ,59. Therg per se ,57 thereg per se ) and catg per se ). Furtheg per se . In-streg per se ) will alet al. [Rates of peat accumulation have been noted to be lower in areas burnt by management fires ,65, sugget al. would suet al. ), recentet al. 
.Sphagnum, which are highly sensitive to increased nutrient loadings [et al. [Other impacts of long-term use of prescribed burning on the peatland terrestrial habitat may include a lowered water table and lower pH , changesloadings . In this [et al. , that laSphagnum cover in a single case-study catchment to repeated severe wildfires but ascribes its lack of recovery to managed burning. However, they also acknowledge that subsequent nutrient and acid deposition from air pollution may also have been important. In aerial photographs, the area they studied shows extensive evidence of gullying and it is unclear whether this is related to a transition in the site's hydrological and ecological state following the compounded severe wildfires. When burning, grazing and drainage are carried out indiscriminately, these management practices are likely to be damaging to blanket bogs and may even lead to loss of habitat [Prescribed fire also has the potential for negative interactions with other land-management practices\u2014especially, drainage and grazing (e.g. ). Howeve habitat and C. R(b)Sphagnum cover [et al. [et al. [While the effects of prescribed burning demand that we make trade-offs between different ecosystem services, there is growing evidence and consensus that severe, uncontrolled wildfires can have very serious consequences. Under drought conditions, wildfires can ignite peat layers causing smouldering peat fires and large emissions of C to the atmosphere ,89. Seveum cover . We re-e [et al. ,39 for i [et al. ,25 and C [et al. demonstr(c)sensu [et al. [et al. [Calluna-dominated vegetation and mapped burning within such communities. Taking their estimates of a mean proportion of moorland burnt in the UK during the last 25 years (16.7%), the annual percentage area burned and mean fire rotation can roughly be estimated ; Wieder et al. [The effects of fire vary both temporally and spatially with associated benefits and disbenefits depending on the scale one considers as well as the ecosystem services one is most interested in. Some changes are associated with the immediate aftermath of a fire , while others may only become apparent by taking a longer-term perspective. Fires vary in both their intensity and their severity, which is the result of spatial variation in vegetation/fuel structure and climate, temporal variation in fire weather between and within seasons and, in the case of prescribed burns, the expertise and care with which burns are managed. It is only by understanding the overall character of current and historic fire regimes (sensu ) that onsensu . In thessensu . That dosensu and natisensu , while osensu . Some ofsensu ,25. Cruc [et al. compared [et al. is one od (sensu ). This r [et al. suggests [et al. who show [et al. summarizr et al. documentr et al. . That no(d)et al. [Despite the central role of fire in the ecology of UK peatland and moorland ecosystems, and the promotion of fire use for restoration of similar ecosystems in both southern and nortet al. suggesteet al. . No atteBox 1.With regard to burning, to be in \u2018good condition\u2019 the following conditions must be met in blanket bog habitats:(1)\u2003There should be no observable signs of burning into the moss, liverwort or lichen layer or exposure of peat surface due to burning.Sphagnum, other mosses, liverworts and/or lichens.\u2014\u2003Ground with abundant and/or an almost continuous carpet of 2 or less. 
The unevenness should be the result of Sphagnum hummocks, lawns and hollows, or mixtures of well-developed cotton-grass tussocks and spreading bushes of dwarf shrubs.\u2014\u2003Areas with notably uneven structure, at a spatial scale of around 1 m(2)\u2003There should be no signs of burning or other disturbance (e.g. mowing) in the following sensitive areas:For wet heath habitats to be in good condition, the following conditions must be met:(1)\u2003There should be no observable signs of burning into the moss, liverwort or lichen layer or exposure of peat surface due to burning.Sphagnum, liverworts and/or lichens. This target should also be recorded if any evidence of this is found while walking between sample locations.(2)\u2003There should be no signs of burning and other disturbance inside the boundaries of the \u2018sensitive areas\u2019 which includes ground with abundant, and/or an almost continuous carpet of 3.(a)Prescribed fire provides an array of management benefits and challenges within a UK context that vary depending on the prioritized ecosystem services. Research has a key role in informing scientific, policy and public perceptions and debates on appropriate prescribed fire use. The interaction between research outcomes and society for a large part occurs through the public media. While science communication represents a difficult process of distilling technical research findings and complex messages into simplified media stories, effective and accurate communication is essential if appropriate land and fire management strategies are to be implemented. Unfortunately, the way in which research is presented in the media is not always unbiased, and research can be manipulated or misinterpreted by persons or groups that may have a pre-determined agenda. We emphasize the challenges of such debate through the discussion of recent case studies ,18,99, s(b)et al. [et al. [et al. [et al.'s C-focused discussion there is a reliance on wildfire papers from boreal studies outside the UK . Th. Thet alet al. [The study of Douglas et al. sets a cet al. ,4,106, cet al. and sugget al. themselvet al. point ouet al. [Sphagnum plants [et al. [et al. [et al. [et al. [Sphagnum plants, but the focus of this report is on aquatic ecosystems and catchment hydrology; the authors make no direct observations of fire's impact on Sphagnum itself and this assertion is in conflict with the results of Lee et al. [et al. [et al. [et al. [et al. [Douglas et al. refer toet al. , nutrienet al. , water qet al. , air polet al. , and Sphm plants . However [et al. showed t [et al. reviewed [et al. assessed [et al. is citede et al. . Further [et al. contextu [et al. ,112), he [et al. one aspe [et al. did not [et al. . Unfortu [et al. report that ha [et al. actually species . There i to fire .et al. [et al. [et al. [Later, Brown et al. point toet al. , there iet al. . Where B [et al. suggest [et al. ,25,39. W [et al. are righet al. [et al. [Calluna regeneration at risk. Further data on variation in prescribed burning practice would be welcome. The results of Allen et al. [Finally, Brown et al. rightly [et al. are beinn et al. show thaIn summary, these three case studies create an unbalanced tone in which the outcomes of fire are presented as generally negative. Of course, it is clear that episodic disturbances induce significant changes in a range of environmental parameters, and that variation in disturbance regimes can drive changes in ecological structure and function. 
Whether these changes/differences are positive, negative or of no consequence is likely to depend upon the spatial and temporal scales, and ecosystem services, one chooses to focus on. A key issue with all three case studies is that some of the evidence upon which they base their assertions is limited or incomplete, and following the citation trail often reveals insufficiently critical reliance upon either unpublished reports or a simplistic (mis)interpretation of complex scientific findings.(c)et al. [et al. [The use of fire as a management tool within the media often appears to similarly lack nuance. For example, a recent newspaper article by Monbiot , provocaet al. and Doug [et al. , the sub [et al. , the foc [et al. , and the [et al. , the newet al. [et al. [Unfortunately, as scientists we often have little control over the representation of our research within the media. Others have noted how the characteristics of scientific claims change between scholarly writing and non-specialist audiences , and thiet al. , but in [et al. paper [1 [et al. \u2013135 are [et al. . Individ [et al. . We, the [et al. ), and it [et al. . While t [et al. ,139, it [et al. .Tetrao urogallus Linnaeus, 1758) habitat [There are a wide range of views on issues regarding the socio-economics and ethics of private estate ownership and driven grouse shooting in the UK, both within the research community at large and among the authors here. Effective communication and understanding between different groups currently seems to be minimal. Studies suggesting that some land managers believe they have access to \u2018special\u2019 knowledge regarding moorland that others cannot comprehend are thus habitat . By camp habitat : \u2018I was (d)et al. [et al. [et al.'s own discussion of their findings, and the associated outreach and media coverage gives the impression that the paper focused on the environmental and ecological effects of burning. In reality, the work described the spatial distribution of burning and short-term temporal trends in fire; the results of which have been questioned [To determine how non-specialists' perceptions of fire are influenced by differences in reporting in academic and public media, we distributed one of the following to each of six separate groups of six to seven senior undergraduate and graduate students of restoration ecology at The Ohio State University (USA): the results or discussion sections of Douglas et al. ; an assoet al. ; and subet al. ,125. The [et al. was modiestioned ,25.If we are to debate the use of fire as a management tool, it is essential that authors ensure that the press releases associated with their findings accurately reflect the content of their research as well as the uncertainty associated with ongoing research questions. At the same time, it is also essential that journalists reporting on this clearly contentious topic do not just rely on the content of press releases from campaigning organizations but verify facts by reading the actual paper and consulting with an independent academic expert not involved in the study. Journalists reporting on scientific findings need to decide whether their duty is to report science or further their own or others' agendas. Journalists should preferably adopt a neutral tone and make a clear distinction between research reporting and opinion pieces.4.2 [et al. [et al. [et al. [et al. [et al. 
[Fire as a management tool is carried out at the landscape scale and induces ecological processes that span from minutes to decades following the burn. Most research relies on small plots of 1 to tens of meters and monitoring might, at best, extend for a couple of years following the fire. The only UK site where long-term evidence is available on peatland burning is Moor House in the Pennines. Even these experimental plots are not at a landscape scale (900 m2 ) and the2 ). Altern [et al. ). Coordi [et al. across d [et al. ). Limite [et al. and Davi [et al. ,25. Here [et al. was reve [et al. , as it b [et al. that mor [et al. ,149.Recent reviews (e.g. ) have drSphagnum species [et al. [et al. [Whether or not current land-management priorities, burning regimes and other practices are ecologically sustainable, or morally justifiable, in the context of social and environmental change are questions that still require much further study and debate. There is currently little scientific consensus either way, with often contradictory results on the effects of fire on DOC concentrations in moorland water and gase species ,155 dire [et al. were rig [et al. should b [et al. suggest Calluna dominance. Areas associated with burning tend to have greater Calluna cover but managers do not distribute their effort randomly across landscapes and it is unclear if burning is the result or cause of increased Calluna cover. Time scale is also important. Indeed, not burning vegetation with a substantive Calluna component will increase its dominance at least over a 90-year period, a time range close to the natural historic fire-return interval of 120\u2013200 years [\u2014\u2003That regular burning alone increases 00 years .Sphagnum. We need to quantify species responses to fire and to understand the importance of variation in burn severities and frequencies. Sphagnum species display micro-habitat differences and it is likely that micro-habitats will respond to burning differently given their distinct topography and moisture regimes. We also need to know whether burning limits Sphagnum recovery during peatland restoration and if so, under what fire regimes?\u2014\u2003That fire kills or significantly damages \u2014\u2003That peatlands are particularly sensitive sites with regard to fire. Northern peatlands elsewhere in the world, notably within boreal regions, can show remarkable ecohydrological resilience to burning ,157. Int\u2014\u2003That managed burning helps protect against future wildfires, minimizing fire likelihood and burn severity. How does managed burning affect landscape-scale patterns in flammability; does it reduce the frequency or burn severity of wildfires? How many wildfires actually result from managed burning? In other words, how do wildfire and managed fire regimes interact?\u2014\u2003That fire alone can contribute to peatland degradation. At what frequencies or severities is this true, if at all? How can we separate the confounded effects of drainage, grazing, acidification and nutrient deposition? Unlike wildfires, managed burns appear rarely to leave areas of peat exposed, but might this vary according to fire frequency? Over what spatial and temporal scales should degradation be defined?We argue here that the following important factoids are not verified. 
They require further study and should not be perpetuated in discussions until they are formally addressed:5.Fire is a valued and integral component of the ecosystem manager's tool kit capable of being used as well as abused in a multiplicity of different ways. Throughout Europe, managers, ecologists and conservationists value prescribed burning as a tool to protect and restore globally rare heathland and moorland ecosystems and there is a growing body of scientific literature to inform best practice. Much of this knowledge comes from research in the UK and it is ironic that while the public debate here has shifted strongly against the use of fire, scientists in other countries are using this evidence to promote the reintroduction of burning. Further scientific evidence is urgently needed on the benefits and costs of differing fire regimes for peatland and moorland ecosystem services. Such assessments need to focus on the landscape scale and on elucidating trends over the entire fire rotation rather than just looking at the short-term outcomes of single burns that are a pulse disturbance with obvious negative outcomes for particular metrics. Until integrated evidence is available, all scientists should be concerned when potentially interesting and informative research is used as a forum to propagate what amounts to hearsay or to promote political agendas. The use of press releases to publicize a particular point of view when the actual scientific evidence from a study is incomplete or unrelated should be discouraged.In the absence of sound evidence and consensus, it is vital that managers and scientists adopt an \u2018adaptive\u2019 approach to decision making . Core prRestoring resilient peatland ecosystems that protect existing carbon stocks and function as a carbon sink is a priority for the UK and we welcome initiatives such as Scottish Natural Heritage's Peatland Plan . What is"} {"text": "The shape of plasmonic nanostructures such as silver and gold is vital to their physical and chemical properties and potential applications. Recently, preparation of complex nanostructures with rich function by chemical multistep methods is the hotspot of research. In this review we introduce three typical multistep methods to prepare silver nanostructures with well-controlled shapes, including the double reductant method, etching technique and construction of core-shell nanostructures. The growth mechanism of double the reductant method is that different favorable facets of silver nanocrystals are produced in different reductants, which can be used to prepare complex nanostructures such as nanoflags with ultranarrow resonant band bandwidth or some silver nanostructures which are difficult to prepare using other methods. The etching technique can selectively remove nanoparticles to achieve the aim of shape control and is widely used for the synthesis of nanoflowers and hollow nanostructures. Construction of core-shell nanostructures is another tool to control shape and size. The three methods can not only prepare various silver nanostructures with well-controlled shapes, which exhibit unique optical properties, such as strong surface-enhanced Raman scattering (SERS) signal and localized surface plasmon resonance (LSPR) effect, but also have potential application in many areas. 
In decades past, synthesis of silver nanostructures has been an active research area because of their excellent optical properties such as surface-enhanced Raman scattering (SERS) 1 and [AgNO3]2 stand for the concentration of AgNO3 in the first and second step respectively. In the first step silver nanorod as shown 3 in EG in the presence of PVP, and then was dispersed in DMF as a seed solution. In the second step, seed solution was added into a DMF solution containing AgNO3 and PVP. With the increase of the ratio of [AgNO3]2/[AgNO3]1, different structures were observed and PVP. In the absence of Na3CA, nanospheres cannot be prepared because Ag nucleuses are immediately protected by PVP as soon as they form leading to the anisotropic growth of Ag nanoparticles due to selective adsorption of PVP on the (100) facets. When PVP was absent, the products were also nanospheres with larger size. The results indicate that the role of Na3CA is to promote the reduction of Ag+ ion into nanospheres. On the other hand, PVP can promote nucleation and prevent the aggregation of nanoparticles. In our previous work [4 and Na3CA in the presence of PVP. Silver seeds were prepared firstly, and then exposed under light leading to aggregation and were thus welded together, so the final morphology was decided by the concentration of PVP. Wojtysiak [+ ions by NaBH4. Stable Ag nanoclusters only appeared when citrate was added, otherwise reduction of Ag+ ion by NaBH4 under the conditions with citrate led to a precipitate at the bottom of solution. Citrate is a stabilizer as well as a reductant. In this regard, Yi et al. [4 and Na3CA as a double reductant system to prepare Ag nanoplates via a multistage procedure. In the first stage, Ag nanoplates were prepared as seeds with NaBH4 and Na3CA. Then these as-prepared seeds evolved into larger nanoplates with tunable sizes of 40 nm to 260 nm by adding different volumes of Na3CA solution. Each addition of Na3CA solution and seed solution was named one stage, the size of the nanoplate increased without change of shape, and the SPR peak shifted red. To get triangular nanoplates with satisfactory size distribution and yield, Yang et al. [3 by NaBH4 in the presence of Na3CA. In the second step, sodium dodecyl sulfate (SDS) was added as stabilizer into the as-prepared silver nanoparticle colloid and then citrate-stabilized silver nanospheres were converted into SDS-stabilized silver nanospheres. In the last step, silver nanoplates were formed under the reduction of SDS-stabilized silver nanospheres by citrate and aged in NaCl solution. When SDS-stabilized silver nanospheres aged in NaCl solution for one week, they began to transformed into nanoplates. Three weeks later, high-yield silver nanoplates with larger size were formed. In their further experiments, if only NaBH4 was used as reductant in the presence of SDS, no silver nanoplates appeared, which indicated citrate is essential to reduce SDS-stabilized silver nanospheres in the third step.Aside from EG and DMF, NaBHg et al. exploredous work , we prepojtysiak also stui et al. used NaBg et al. applied et al. [3 by AsA in the presence of cetyltrimethylammonium bromide (CTAB) and SDS. Lou et al. [3 by AsA with Na3CA. Based on these researches of hyperbranched silver nanostructures mentioned above, Wang et al. [3 by AsA in the initial stage of reaction, which are easy to aggregate in AsA [et al. 
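A common thread in the seed-mediated routes above is that the final particle size is set by how much fresh silver is reduced onto a fixed number of seeds. As a hedged illustration, the cube-root mass-balance relation below is the standard seeded-growth argument; it is consistent with the worked numbers quoted below for 50 nm seeds grown to 100–250 nm at total-to-seed silver ratios of 8, 27, 64 and 125, but it is not claimed to be any cited author's exact formula.

def seeded_growth_diameter(d_seed_nm, total_to_seed_ratio):
    """D = d * ((N + n) / n)**(1/3), where (N + n)/n is the ratio of total silver
    (newly added plus seed silver) to seed silver; assumes every seed grows
    uniformly and that no secondary nucleation occurs."""
    return d_seed_nm * total_to_seed_ratio ** (1.0 / 3.0)

for ratio in (8, 27, 64, 125):
    print(ratio, seeded_growth_diameter(50.0, ratio))   # -> ~100, 150, 200, 250 nm

In practice, the assumption of no secondary nucleation is exactly what the choice of reductant and stabilizer is meant to secure, which is why measured sizes can track such a simple estimate.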
[More recently, a method of using AsA to prepare unique silver nanostructures such as flower-like and string forms has been developed . Zheng eet al. got silvu et al. also foug et al. explorede in AsA . After 2e in AsA and PEO-e in AsA also pla [et al. . Their kIn the formula, D is the average size of product, d is the average size of seeds, and N and n are the molar amount of AgCl and seeds respectively. They fixed the amount of AgCl, and used 50 nm silver nanoparticles as seeds, so in theory if they want obtain the products with 100, 150, 200, 250 nm, the (N+n)/n should be 8, 27, 64, and 125 nm respectively. As shown in 4 which are usually used as seeds for further growth of silver nanocrystals, while silver nanoparticles are easy to aggregate together in AsA. Samanta [4 and AsA as double reductants. They prepared Ag nanoparticles in NaBH4 in the first step. When these nanoparticles used as seeds with different amounts were added into AgNO3 solution in AsA, silver nanodiscs and triangular nanoplates formed, respectively. A high yield of triangular nanoplates can be obtained in lower amount of seed solution which provides a new path to synthesize triangular nanoplates.Above all, we can get a conclusion which is that the products tend to be nanospheres of small sizes in NaBH Samanta presente3.Recently, shape and size control as well as novel nanostructures which can increase SERS enhancement have attracted more attention. The optical properties of nanostructures can be tuned by varying their sizes, therefore it is vital to provide a way to control the size. In addition, these nanostructures with sharp horns such as nanoflowers or nanostars can focus the electromagnetic field on their tips leading to significant SERS effect. The etchant technique can not only control shapes and sizes of nanostructures, but also creates novel nanostructures with hot spots or hollow nanostructures.3.1.2O2 is used to etch pits in metals, which plays an important role of dislocations in determining or affecting the mechanical properties of crystalline materials [2O2\u2014water couple is dependent on the pH value of the solution. In acidic solutions:Early in the 1950s, Haterials . The staIn alkaline solutions:Ag+/Ag (E0=0.7996V) [2O2 can be used as an effective etchant to dissolve metallic silver. Very recently, etching techniques are used for controlling the shapes of Ag nanoparticles because some nanostructures are not easily prepared in high yield or mono-dispersed sizes.Because the potentials are higher than that of 0.7996V) , H2O2 ca2O2, the products were nanospheres, which indicated that H2O2 plays a critical role of Ag nanoprism formation. Sequentially, Zhang et al. [2O2 in the same reaction system. They proposed that H2O2 can remove the relatively unstable nanoparticles in the nucleation stage and promote the formation of anisotropic structures, finally all metallic silver particles can be directly transformed into silver nanoplates regardless of size and shape, but if the concentration of H2O2 was too high, the obtained nanoprisms also would be etched and disappear. To further study the mechanism, Tsuji et al. [\u03bdprism < sphere\u03bd and \u03bdprism < rod\u03bd. If either H2O2 or Na3CA is absent, the transformation from spheres to prisms will not happen and Na3CA must be added before H2O2. 
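The half-reactions for the H2O2–water couple referred to above appear to have dropped out of the text. For orientation, the commonly tabulated standard potentials (assumed here to be the ones intended, since both exceed E0(Ag+/Ag) = 0.7996 V as the passage requires) are:

In acidic solution:  H2O2 + 2H+ + 2e− → 2H2O,  E0 ≈ +1.776 V
In alkaline solution:  HO2− + H2O + 2e− → 3OH−,  E0 ≈ +0.878 V

Either way the couple is oxidizing enough to dissolve metallic silver (Ag → Ag+ + e−), which is the basis of the etching chemistry discussed in this section.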
The major difference between the method and Zhang's is that the etching of metallic Ag nanostructures and reduction of Ag+ ion occur at the same time in the latter reaction process, but the former argued that the first step is complete dissolution of Ag nanowire to Ag+ ion and then reduction of Ag+ ion to Ag0. They also successfully prepared Ag nanoprisms from nanocubes and nanobipyramids. This essay made a great contribution to preparing high-yield Ag nanoprisms from different metallic Ag nanostructures. Among these various shapes, Ag nanoprisms and triangulars have attracted intense interest due to their unique optical properties and related applications. Generally, two main methods have been developed to prepare Ag nanoprisms, including the photochemical method and chemical reduction method. Compared with the photochemical method \u201363, the g et al. studied i et al. examinedet al. [2O2 etching to prepare Au nanoboxes. Because the standard reduction potential of AuCl4\u2212/Au is higher than that of Ag+/Ag, Au can be replaced by Ag following Recently, metallic hollow nanostructures such as cages \u201370 and fet al. explored4. By controlling the amount of HAuCl4 aqueous, they can precisely control the thickness of nanoboxes via titrating HAuCl4 aqueous solution as presented in 2O2 etching. 2O2. McEachran et al. [et al. they controlled the thickness of frames by adjusting the amount of deposited gold. Furthermore, they attempted to make similar nanoframes using other silver nanostructures. Results indicated that only nanostructures with (111) facets such as nanorods and icosahedra can be used for the formation of nanoframes. They argued that gold prefers to deposit on the (111) facets, whereas (100) facets are more reactive to cause nanocages or nanoshells. We believe that the etching technique is a facile method to prepare hollow nanostructures with well-controlled shapes and sizes.Based on the chemical equation, silver nanoboxes can be transformed into gold nanoboxes. The synthesis procedure is to form Au-Ag alloy nanoboxes by titrating Ag nanocubes with aqueous HAuCln et al. proposedet al. [2O2 and NH4OH mixtures. From et al. [2O2 and NH4OH mixture resulting in a 104-fold enhancement of SERS compared with that obtained by pre-etching. Some ways have been reported to attach nanoparticles onto nanowires [It is well known that nanostructures with sharp edges or gaps can enhance SERS intensity \u201378, theret al. etched sm et al. increaseanowires \u201389, howe3.2.4OH or Fe(NO3)3. Sometimes NH4OH is used with H2O2, which has been discussed in Section 3.1., therefore we will not reintroduce it here. Lu et al. [4OH or Fe(NO3)3 play an important role in the formation of Au nanoboxes. The process consists two steps as follows: (1) preparation of Au/Ag alloy nanoboxes through depositing Au on the surface of Ag nanocubes; (2) removal of Ag from Au/Ag alloy nanoboxes via etching by NH4OH or Fe(NO3)3. After the formation of the Au nanobox, sequent addition of NH4OH or Fe(NO3)3 causes the transformation into nanocages and nanoframes. Another advantage of this method is that NH4OH or Fe(NO3)3 can selectively remove Ag from the walls of Au/Ag alloy nanoboxes leading to good control in the thickness of nanocages which is difficult to achieve by other methods.There are many other etchants applied to modify the morphology of silver nanoparticles, such as NHu et al. 
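The galvanic replacement step invoked above for converting Ag nanocubes into Au/Ag nanoboxes is usually written as the following overall reaction; the stoichiometry given here is the textbook form rather than a quotation from the cited work. Because E0(AuCl4−/Au) ≈ +1.00 V lies above E0(Ag+/Ag) ≈ +0.80 V, three silver atoms are oxidized for every Au(III) ion reduced:

3Ag(s) + AuCl4−(aq) → Au(s) + 3Ag+(aq) + 4Cl−(aq)

This is why titrating small volumes of HAuCl4 gives such fine control over wall thickness: each added gold ion removes three silver atoms from the template while depositing a single gold atom on its surface.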
found th4.Although Ag nanostructures have received considerable attention because of their excellent optical properties and wide applications in many areas, people still attempt to develop novel nanostructures for higher requirements. The optical properties of nanostructures can also be tailored by controlling their elemental composition, as well as the internal and surface structures. Most recently, special attention has been paid to core-shell nanostructures because they provide a new system with tunable optical properties. In this case, core-shell nanostructures can be achieved through various methods which have been developed, such as chemical reduction method, nanosphere lithography (NSL) ,92, phot4.1.Au@Ag core-shell nanostructures exhibit finely tuned LSPR and SERS properties based on the shape and size, and have potential applications in sensors ,97\u201399 anet al. [3 by AsA on the surface of Au nanorods with sodium citrate used as stabilizer, but the repeatability of results is not good and it is difficult to take TEM pictures due to the high concentration of sodium citrate. In this regard, they chose PVP as stabilizer instead of sodium citrate. However, they found that Ag+ ions cannot be reduced in low pH solution, because the redox potential of AsA varied with pH until it was sufficient to overcome that of Ag. Therefore, when NaOH was added or AsA was replaced by citric acid which is a weak acid, Ag shells formed. The results indicated that with the increased amount of AgNO3 added, the thickness of the Ag shell increased and the blue-shift of the longitudinal plasmon mode of the nanorods was enhanced. Later on, Seo et al. [+ ions were reduced and deposited on the decahedral seeds. Then, Ag nanocrystals grew along the longitudinal direction with slight changes along the lateral direction assigned to the selective adsorption of PVP on the (100) facets. The results correspond to the growth mechanism of Ag nanorods proposed by Xia et al. However, in the research of Xia et al. [3 added to the reaction system. In the case of other Au nanorods as seeds, Sanchez-Iglesias et al. [et al. [The shape and size of the Au core is important for the evolution of Ag on Au nanoparticles. Many groups select Au nanorods as cores to construct the Au@Ag core-shell nanostructures to which excellent optical properties are ascribed. Liu et al. describeo et al. demonstra et al. , the mors et al. also pro [et al. , they co2O2 etching, additional seed-mediated growth method are often used to construct Au@Ag core-shell nanoprisms and triangulars. Xue et al. [3, significantly affecting the SPR spectrum. Recently, Tsuji's group developed a combination of microwave and polyol reduction route to prepare Au@Ag core-shell triangles [4 in EG in the presence of PVP under microwave heating conditions. In the second step, as-prepared Au seeds were added into a DMF solution of AgNO3 to form Ag shells in an oil bath. The method can prepare not only triangular Au@Ag core-shells, but also nanoplates, octahedrons and decahedrons depending on the shapes of the Au seeds. In addition, these Ag shell shapes are significantly different from those prepared in EG by microwave heating from the same Au seeds. The reason is the favorable facets of Ag nanostructures produced in EG and DMF are different, which has been discussed before. Using the microwave-polyol method, their group also prepared Au@Ag core-shell icosahedrons [As with the above-mentioned Ag nanoprisms and triangular synthesized by He et al. reportede et al. . 
Compareriangles as Figurahedrons . In 2012ahedrons . They aret al. [et al. [3 or Au seeds which were close to the calculated results, and the thickness of Ag shell can be controlled precisely from 1.2 to 20 nm because the seeds are isotropic and have uniform size. Moreover, they found that the effect of CTAC as capping agent on the formation of Au@Ag core-shell nanocubes was better than that of CATB.In addition to these Au@Ag core-shell nanostructures above, Ag nanocubes also receive much attention. Fan et al. synthesi [et al. describe4.2.et al. [3 and Cu(OAc)2\u00b72H2O in EG, then they used Ag@Cu alloy to produce new Ag@Cu alloy Cu shells, which was consistent with the rules. Hu et al. [et al. [According to et al. synthesiu et al. proposed [et al. via a ch5.In the past decades, silver nanoparticles have been applied to many areas because of their excellent optical properties. The three methods introduced in this essay can not only control the shapes and sizes of silver nanostructures well, but also their LSPR properties, leading to more potential applications.et al. have synthesized silver nanoflags which combine a nanowires and a triangular nanoplate by a double reductant method as shown in et al. [et al. [2O2 etching technique. In 2O2. Because the scattering and absorption of Au nanocages can be tailored by controlling the size and porosity, Chen et al. [As mentioned in Section 1.1, Tsuji n et al. studied n et al. . The res [et al. . They prn et al. functionet al. [et al. [et al. [Au@Ag core-shell nanostructures have attracted considerable attention because of their improved properties compared with silver nanostructures or gold nanostructures, especially optical properties, which allows them to be used as SERS substrates. Generally, SERS substrates should be sensitive, stable and rapid. Khlebtsov et al. investiget al. . This op [et al. . They pr [et al. found th2+, Cu2+) in a simple, rapid and ultrasensitive way [2+ was 10\u221212 M by using a single-metallic nanostructures as sensor. To further decrease the detection limitation, Du et al. [2+ as shown in 2+ was added onto the chip, the Raman signals of Dpy obviously quenched due to the formation of Hg2+-Dpy complex. The results in \u221214 M Hg2+ can been detected due to the more excellent SERS properties of Au@Ag core-shell nanostructures. In addition, this approach just needs 4 min and 20 \u03bcL of sample. Therefore, Au@Ag core-shell nanostructures can be used for the ultrasensitive detection of trace Hg2+. Similar to Hg2+, Cu2+ has paramagnetic properties which can transfer electrons or energy resulting in photoluminescence (PL) quenching of Au@Ag core-shell nanoparticles. Gui et al. [2+. As the concentration of Cu2+ increased, the PL spectra of Au@Ag core-shell nanoparticles dropped dramatically, so we can analyze Cu2+ according to the intensity of PL spectra. In addition to metallic ions analyses, biomolecules and organic molecules have attracted much attention. Panfilova et al. [et al. [Metal ions do not have Raman signals, which have to be detected by indirect methods leading to inefficiency. To solve the problem, many efforts have been devoted to find ways to detect metal ions (such as Hgtive way \u2013120. Howu et al. applied i et al. first ema et al. first st [et al. reportedet al. [In the research of Zheng et al. , Au@Ag cet al. . Althouget al. 
, they ca6.In this review, we present double reductant methods, etching techniques and construction of core-shell nanostructures routes which can synthesize novel shape-controlled silver nanostructures. Various silver nanostructures have been prepared by using the three methods. The double reductant method takes full advantage of different favorable facets of nanostructures produced in different reductants. In this case, complex nanostructures which are not easily prepared by one-step methods can be prepared in high yield with mono-dispersed sizes. The etching technique can synthesize hollow nanostructures with tunable silver shell thickness which can be used for controlling their LSPR. Moreover, etching technique can also increase the roughness of the surface of silver nanostructures, significantly increasing SERS enhancement and LSPR effect. The construction of core-shell nanostructures can not only control their shapes and sizes well, but also exhibit more excellent optical properties than that of individual metal nanostructures. Finally, we introduce applications of silver nanostructures produced by the three methods. Because of their excellent optical properties, such as LSPR properties and SERS effect, these silver nanostructures have potential applications for the development of plasmonic nanodevices and detection."} {"text": "As lame cows produce less milk and tendto have other health problems, finding and treating lame cows is very importantfor farmers. Sensors that measure behaviors associated with lameness in cowscan help by alerting the farmer of those cows in need of treatment. This reviewgives an overview of sensors for automated lameness detection and discussessome practical considerations for investigating and applying such systems inpractice.Despite the research on opportunities toautomatically measure lameness in cattle, lameness detection systems are notwidely available commercially and are only used on a few dairy farms. However, farmers need to be aware of the lame cows in their herds in order treat themproperly and in a timely fashion. Many papers have focused on the automatedmeasurement of gait or behavioral cow characteristics related to lameness. Inorder for such automated measurements to be used in a detection system, algorithms to distinguish between non-lame and mildly or severely lame cowsneed to be developed and validated. Few studies have reached this latter stageof the development process. Also, comparison between the different approachesis impeded by the wide range of practical settings used to measure the gait or behavioralcharacteristic and by the differentdefinitions of lame cows. In the majority of the publications, mildly lame cowsare included in the non-lame cow group, which limits the possibility of alsodetecting early lameness cases. In this review, studies that used sensortechnology to measure changes in gait or behavior of cows related to lamenessare discussed together with practical considerations when conducting lamenessresearch. In addition, other prerequisites for any lameness detection system onfarms are discussed. To properly tackle the lameness problem, farmers need to be aware of the number of lame cows in their herd and the severity of their lameness. The commonly accepted methodologies to quantify lameness rely on identifying changes in the gait and posture of the cows and are discussed in the first part of this review . 
In pracSince the 1980s, various sensors and technologies have been investigated for their potential to measure health indicators from individual cows . AutomatAutomatic measurement systems could support the dairy farmer and tackle the problem of the visual scoring systems, resulting in more objective measurements. However, the raw data collected from such sensor-based techniques still have to be translated into understandable gait variables that are functionally relevant for cattle gait (and therefore related to gait attributes used in locomotion scoring systems). This review summarizes several approaches to automatically measure lameness and related characteristics in dairy cows. The wide range of practical settings during the experiments, together with the definitions used to distinguish non-lame from lame cows, impede comparison between the different automated approaches. The factors obstructing such comparison and the need for specific characteristics of an automated lameness detection system are discussed.et al. [Several techniques have been proposed for automatic gait analysis, such as force platforms, electromyography, accelerometers and image-based technologies. In their review, Rutten et al. divided et al. [In 2013, 38 publications on lameness were described in the review by Rutten et al. , but onlet al. [i.e., integral of GRF over time), and area under the Fourier transformed curve of GRF [Rajkondawar et al. developeet al. . In 2006e of GRF .\u00ae was introduced into the market in 2008. The model in this system used 5 limb movement variables and calculated the probability of one of the hind limbs being lame (SMX score). The StepMetrix was tested in a field trial by Bicalho et al. [\u00ae. Hence, lame cows were still being misclassified by the model. Liu et al. [et al. [Based on these publications, the automatic lameness detection system StepMetrixo et al. . They reu et al. improved [et al. the sensForces exerted by the hooves on the ground can be measured using force plates\u2014usually installed in the ground\u2014that give information in one or three dimensions . Useful information of the ground reaction force can only be collected when no more than one limb at a time is on the force plate . In cattet al. [et al. [Thorup et al. used the [et al. and showet al. [Pastell et al. introducet al. .et al. [In an earlier study, Pastell et al. started et al. . Pastellet al. built a et al. used CUSet al. . For theet al. [et al. [et al. [et al. [Neveux et al. used a p [et al. ,24 and P [et al. to measu [et al. suggesteChapinal and Tucker used a set al. [et al. [Using a pressure distribution plate (RsScan), van der Tol et al. measured [et al. used thiet al. [et al. [Maertens et al. develope [et al. revealed [et al. . In a fo [et al. . Indeed, [et al. . Moreove [et al. . A test- [et al. .et al. [et al. [et al. [2 = 0.81). In an initial experiment on 15 cows, the measured step overlap seemed to be significant for the distinction between non-lame and mildly lame cows. However, in a second experiment using a simplified scoring system on 104 cows, this distinction was only seen between non-lame and severely lame cows for the minimal step overlap and between mildly and severely lame cows for the maximal step overlap. When combining a camera system with the Gaitwise system, an algorithm to automatically calculate touch and release angles and range of motion was developed [Flower et al. were the [et al. used vid [et al. wrote aneveloped . A decreet al. [et al. 
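Several of the weight- and force-based systems summarized above reduce to detecting a sustained shift in a per-cow gait variable, and the four-balance studies cited here analysed such signals with CUSUM control charts. The sketch below is only an illustration of that general idea in Python; the asymmetry variable, baseline and thresholds are invented for the example and are not taken from any of the cited studies.

# Minimal one-sided CUSUM detector for a daily leg-weight asymmetry signal.
# All parameter values and the example series are illustrative placeholders.
def cusum_alerts(asymmetry, target=0.05, slack=0.02, threshold=0.15):
    """Return the days on which the cumulative upward drift exceeds `threshold`.

    asymmetry : daily asymmetry values, e.g. |left - right| / (left + right) hind-limb weight
    target    : expected asymmetry for a sound cow (baseline)
    slack     : allowance; day-to-day deviations smaller than this are ignored
    threshold : cumulative sum at which an alert is raised
    """
    s = 0.0
    alerts = []
    for day, x in enumerate(asymmetry):
        s = max(0.0, s + (x - target - slack))  # accumulate only upward deviations
        if s >= threshold:
            alerts.append(day)
            s = 0.0  # restart after an alert
    return alerts

# Hypothetical 14-day series for one cow: asymmetry creeps upward from day 8 onward.
daily_asymmetry = [0.04, 0.05, 0.06, 0.04, 0.05, 0.05, 0.06, 0.05,
                   0.09, 0.11, 0.12, 0.14, 0.15, 0.16]
print(cusum_alerts(daily_asymmetry))  # -> [11, 13]: the drift starting on day 8 is flagged

In practice the baseline and allowance would presumably be estimated per cow from her own sound-gait history, which is one of the arguments for continuous, individual monitoring rather than occasional manual scoring.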
[Another promising variable for lameness detection is the back arch. Poursaberi et al. automatiet al. . An algo [et al. , who repet al. [et al. [et al. [et al. [et al. [et al. [The algorithm proposed by Viazzi et al. was not [et al. showed t [et al. , Van Her [et al. and Viaz [et al. ; (1) lac [et al. could no [et al. .et al. [et al. [et al. [et al. [To overcome the difficulties with back posture analysis based on side-view images from 2D cameras, Van Hertem et al. and Viaz [et al. tested t [et al. . In the [et al. , the cor [et al. reported [et al. . The use [et al. .et al. [Driven by the positive results of infrared thermography for lameness detection in horses , this teet al. used infet al. [i.e., average steps / hours) for lame cows ranged from 9 to 68% and almost half of the lame cows showed a reduction of more than 5% during the 7 to 10 days prior to clinical signs. In 92% of the lameness cases, the decrease in activity was above 15%. As such devices are currently used for oestrus detection in cattle, Steensels et al. [et al. [Other studies use accelerometers to measure the activity or gait features of cows and their relation to lameness. In a study by Mazrier et al. the redus et al. also sug [et al. used a cet al. [et al. [Chapinal et al. used fivet al. . On theset al. . Using a [et al. were ablet al. [et al. [et al. [Almost a decade ago, Munksgaard et al. suggeste [et al. were con [et al. , where det al. [et al. [Similar sensors are used to investigate the link between lying behavior of cows and lameness. The results of Yunta et al. suggest [et al. and Cald [et al. were ablet al. [i.e., the daily milk yield 4 days before diagnosis, the slope coefficient of the daily milk yield 4 days before diagnosis, the night time to day time neck activity ratio 6 days before diagnosis, the milk yield week difference ratio 4 days before diagnosis, the milk yield week difference 4 days before diagnosis, the neck activity level during the daytime 7 days before diagnosis, and the ruminating time during night time 6 days before diagnosis. Similarly, Kamphuis et al. [Van Hertem et al. used sens et al. used datet al. [et al. [Activity data in terms of lying behavior, combined with milk yield and feeding data in the form of concentrate left-overs in the milking robot, were used to detect lameness by de Mol et al. with an et al. combined [et al. combinedDiscrepancy in the experimental set-ups during validation. As this review focuses on the development of systems to help farmers in their daily routine, the sensors and technologies summarized were limited to measurements of variables related to \u2018visual\u2019 characteristics of lameness in cows that are used in daily practice whether or not combined with information on feeding/drinking behavior, milking process and others to 5 (severely lame) using different scoring systems. Next, different cut-off values to differentiate between non-lame and lame groups of cows were used . Based on the summary in In addition, there is large variation in Finally, if data and lameness classification results were to be available in real-time, cows in need of treatment could be separated from the herd automatically. Obviously, such a procedure would require barn equipment where the cows can be identified and then guided to a separate area . In three out of four studies, the variables were calculated afterwards, and calculations were not always performed fully automated. 
This means some additional integration work would be needed to obtain a real-time automatic lameness detection system.Need for automated and continuous measurements? Automated measurements can gather data continuously, such that cows can be monitored on a daily basis. In addition, their major advantage is the lack of need for herding the cows. As cows have a stoic nature, guiding them can bias the measurements, because they will try to hide their weakness and pain compared to measurements during normal routine without the presence of a human or predator [predator . IdeallyNeed for \u2018early\u2019 detection? Throughout the past 20 years of research on dairy cow lameness and automatic lameness detection systems, scientists have claimed that early detection of lameness signs is beneficial in treating affected animals before the problem becomes too severe. In doing so, long lasting and costly treatments, production losses and reduced welfare long term can be avoided. From a theoretical point of view this might indeed be true, but implementing the concept of \u201cearly lameness detection\u201d in practice poses reasonable questions: What is early detection? When is it needed or when does a farmer perceive it as an added value? Nowadays, the biggest challenge for any lameness detection system is the detection of early onset. The definition of \u2018early detection\u2019, however, is not so clear, as \u2018early\u2019 can be considered in several ways.et al. [A first approach for early detection is \u2018detection before the visual clinical signs of lameness are present\u2019. Even though this is the most logical interpretation, this might not be relevant for a lameness detection tool on commercial farms. It can be expected that farmers will not start treatment before obvious signs of lameness are present. According to a study of Alawneh et al. , over 65et al. . As seveet al. ,75). TheNeed for custom-made detection systems? Early detection of lameness should be used only if it is relevant to the farmer, which suggests that there may be a need for custom-made detection systems. For a farmer with low herd lameness prevalence and a good general lameness management, early detection of new cases of lameness or mildly lame cows might create an added value. However, such farmers are rare. Most farmers hugely underestimate not only the prevalence of the lame cows in their herds [i.e., those that cause recurring alarms). In the next step, after prevention and treatment programs have successfully resulted in a lower lameness prevalence at the herd, the threshold settings of the detection system could be changed to also detect the mildly lame cows to further decrease the lameness prevalence to a lower level. In general, it is very important to convince the farmer to trust the lameness detection system. Many farmers may be reluctant to rely on the judgement of an automatic system rather than their own. The farmer\u2019s suspicion might be even worse for early lameness detection, especially if he or she cannot diagnose a treatable problem. In general, a lameness detection system should provide the ideal balance between detecting almost every lame cow (high sensitivity) and having as few false detections as possible (high specificity). Farmers might be more reluctant about a high number of false alerts compared to a missed lame cow, as false alerts create unnecessary labor and time to check the cows on the alert list. 
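The balance described here between detecting almost every lame cow and keeping false alerts low can be made concrete with some simple alert-list arithmetic. The herd size, lameness prevalence, sensitivity and specificity below are assumed values chosen only to show how quickly false alerts accumulate as specificity drops; they do not describe any of the cited systems.

# Illustrative alert-list arithmetic for a lameness detection system.
# Herd size, prevalence, sensitivity and specificity are assumed values.
def alert_breakdown(herd_size, prevalence, sensitivity, specificity):
    lame = herd_size * prevalence
    sound = herd_size - lame
    true_alerts = sensitivity * lame          # lame cows correctly flagged
    false_alerts = (1 - specificity) * sound  # sound cows flagged by mistake
    total = true_alerts + false_alerts
    precision = true_alerts / total if total else 0.0
    return true_alerts, false_alerts, precision

for spec in (0.99, 0.95, 0.90):
    tp, fp, prec = alert_breakdown(herd_size=200, prevalence=0.15,
                                   sensitivity=0.80, specificity=spec)
    print(f"specificity {spec:.2f}: {tp:.0f} true alerts, {fp:.0f} false alerts, "
          f"precision {prec:.2f}")

With these assumptions, lowering specificity from 0.99 to 0.90 raises the false alerts from about 2 to 17 per screening round, so the farmer would check roughly 41 cows to find 24 truly lame ones, which is exactly the unnecessary labor the text warns about.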
However, research on the expectations and use of farmers for a lameness detection system might provide more information on the requested sensitivity and specificity of lameness detection systems.ir herds ,63, but ir herds . The majir herds . Severe Need for real-time measurements? If the detection could be performed in real-time, i.e., immediately after the cow was measured by the system, it becomes possible to automatically separate a cow that is identified as lame by the software and needs extra attention from the farmer. Such decisions made on the spot must be done in less than five seconds, requiring technology with high-speed calculations.Of the measurement approaches shown in Available space. In practice, no free space is available in dairy barns to install any lameness detection system. This might create drawbacks for those sensor technologies that need an alley set-up where measurements are performed or where video can be recorded (vision techniques). Therefore, measurement systems that need less space or that can be included in the existing farm infrastructure, like measurements of weight differences in the milking robot or measurements with accelerometers, might be more feasible in practice. If not, additional space could be incorporated in the plans for new barns. However, creating sufficient space in any existing barn is challenging, especially as cows should preferably pass this measurement zone daily\u2014if possible, after milking\u2014and should be identified simultaneously. This requires a free zone inside the barn after the milking parlor, rotary or milking robot or even at the exit to the pasture for measurements during the grazing season. This drawback however, is especially present in the smaller dairy farms. In larger dairies, which are the kind of farms which would benefit hugely from this technology, installing a lameness detection system might be easier. However, the most important thing is to ensure good cow traffic, especially if a system is installed after the milking parlor or milking robot. Cows blocking the area around the measurement zone will disrupt the measurements and create measurement failures and might even affect the milking routine [ routine .In spite of the amount of research available on measurement of gait and behavioral characteristics that are relevant to lameness detection, no efficient automated lameness detection system is available on the market yet. This review focused on the discrepancy between the experimental set-ups used in the studies, the stage of automation of the measurements, and the practical considerations when implementing a lameness detection system on farm. Most research on lameness detection focuses on the detection of severely lame cows, often ignoring mildly lame cows or considering them non-lame. On the other hand, the practical feasibility of also detecting the mildly lame cases should be investigated on farms. This might result in custom-made lameness detection systems that are adjustable depending on the degree and severity of mildly and severely lame cases on that farm and the preferences of the farmer for specific characteristics of the system. In addition, several sensor technologies take up quite some space and will need to be very cost-efficient in order for farmers to decide to buy and create the needed space for the installation."} {"text": "Skeletal muscle mass and function are progressively lost with age, a condition referred to as sarcopenia. 
By the age of 60, many older adults begin to be affected by muscle loss. There is a link between decreased muscle mass and strength and adverse health outcomes such as obesity, diabetes and cardiovascular disease. Data suggest that increasing dietary protein intake at meals may counterbalance muscle loss in older individuals due to the increased availability of amino acids, which stimulate muscle protein synthesis by activating the mammalian target of rapamycin (mTORC1). Increased muscle protein synthesis can lead to increased muscle mass, strength and function over time. This review aims to address the current recommended dietary allowance (RDA) for protein and whether or not this value meets the needs for older adults based upon current scientific evidence. The current RDA for protein is 0.8 g/kg body weight/day. However, literature suggests that consuming protein in amounts greater than the RDA can improve muscle mass, strength and function in older adults. The older adult population in the United States is a segment of unprecedented growth . Longer Muscle mass, strength and function are progressively lost with aging . A loss Additional physical changes with aging may occur as a result of changes in skeletal muscle mass . For exaNutrition is an important modulator of health and function in older adults. Inadequate nutrition can contribute to the development of sarcopenia and obesity ,5. As liSeveral studies identify protein as a key nutrient for older adults recommended by the Food and Nutrition Board of the Institute of Medicine ,20. The r adults .The Food and Nutrition Board recognizes a distinction between the RDA and the level of protein intake needed for optimal health. The recommendation for the ADMRs includesetc., which are common in older adults and have been shown to require protein intake above recommended levels.Experts in the field of protein and aging recommend a protein intake between 1.2 and 2.0 g/kg/day or higher for older adults ,22. CleaIt is not only the quantity of protein intake that is important for optimal health in older adults, it is the quality of the protein . There aEssential amino acids, especially the branched-chain amino acid leucine, are potent stimulators of muscle protein synthesis via the protein kinase mTORC1 ,25,26. Set al. [The majority of published results indicating a potential beneficial effect of increasing protein intake in older adults are either from epidemiological or short-term studies. Two recent publications by Tieland et al. ,34 indicet al. ,34. The et al. . One impversus slow proteins need to be taken into consideration when developing protein recommendations. When young, healthy subjects were provided with either a whey protein meal (30 g) or a casein meal (43 g), both containing the same amount of leucine, and whole body protein anabolism was measured, the subjects consuming the whey (fast) protein meal had high, rapid increase in plasma amino acids, while subjects consuming the casein (slow) protein meal had a prolonged plateau of essential amino acids [In addition, the difference in digestibility and bioavailability of a protein can impact the quantity of protein that needs to be ingested to meet metabolic needs. The speed of protein digestion and absorption of amino acids from the gut can influence whole body protein anabolism . Proteinno acids . In addino acids . These rno acids . When olno acids . These rversus a beefsteak [versus the beefsteak. 
Muscle protein synthesis did not differ between the two meals over a six-hour period [Whether or not the amino acid source is an intact protein or a mixture of free amino acids can also influence the rate muscle protein synthesis . For exaeefsteak . Older mr period . These dThe interaction of protein with other substrates may impact functionality of protein, especially at the molecular level. It is well established that the EAA, especially leucine, are essential for the functional benefits associated with increased protein intake in older adults, such as muscle protein synthesis ,44,45,46Older adults are less responsive to low doses of amino acid intake compared to younger individuals . Howeverversus 13 g of protein at breakfast. However, ingestion of more than 30 g of protein in a test meal does not further stimulate muscle protein synthesis [et al. [et al. [Another issue related to defining the optimal protein recommendations for older adults is defining the optimal timing of protein intake throughout the day. Adults typically consume the majority of their protein (and energy) intake at dinner , 38 g veynthesis . Mamerow [et al. examined [et al. . This is [et al. who exam [et al. .et al. [et al. [et al. [versus protein being evenly distributed over four daily mails in hospitalized older patients for six weeks [et al. [versus older adults). In addition, the study by Mamerow et al. [A recent study published by Kim et al. fails toet al. . Likewis [et al. , conduct [et al. . However [et al. , the womix weeks ,60. Thesix weeks and incr [et al. due to dw et al. only meaThe current RDA for protein may not provide adequate EAA for optimal metabolic roles in adults. It is estimated that 7 to 12 grams of the branched-chain amino acid leucine are necessary per day to see fversus 113 g of protein.The mechanism by which dietary protein affects muscle is through the stimulation of muscle protein synthesis by the absorbed amino acids consumed in the diet ,63. HoweIt is widely accepted that signaling via mTORC1 is involved in the regulation of several anabolic processes including protein synthesis ,65,66. Iet al. [et al. [versus young adults. In addition, after only seven days of bed rest, older adults had a reduced response to EAA ingestion resulting in no increase in muscle protein synthesis, activation of translation initiation factors (4E-BP1 and p70S6K) and no increase in amino acid transporters [Age-related muscle loss may involve a decreased response to EAA due to dysregulation of translation initiation factors. Older adults also have decreased levels of translational proteins related to muscle protein synthesis as compared to young adults ,73. Olde [et al. who foun [et al. . While nsporters ,76. ThesIt is also possible that the beneficial effect of protein intake on body composition is due to the stimulation of IGF-1 (insulin-like growth factor 1) secretion. Aging individuals have lower levels of IGF-1 , which cThere are several risk factors associated with reduced protein intake in older adults . These rIt is important to consider optimizing health care and those factors influencing health outcomes when determining dietary recommendations for dietary intake in older adults . The cosInadequate nutritional intake is common in older adults and may explain the depleted muscle mass. Once in the hospital, physical inactivity combined with inadequate protein intake can result in additional loss of muscle mass which can delay recovery and contribute to higher readmission rates . 
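To put the quantities discussed in this review side by side, the short sketch below converts the intake levels quoted here (the 0.8 g/kg/day RDA, the 1.2 to 2.0 g/kg/day expert recommendation, and the roughly 30 g per-meal level above which little further stimulation of muscle protein synthesis is seen) into daily and per-meal amounts for a hypothetical 75 kg older adult eating four meals a day. The body weight and the four-meal split are arbitrary example choices, not values taken from the cited studies.

# Daily and per-meal protein targets for a hypothetical 75 kg older adult,
# using the intake levels discussed in this review (g per kg body weight per day).
body_weight_kg = 75
levels = {
    "current RDA": 0.8,
    "expert recommendation (low)": 1.2,
    "expert recommendation (high)": 2.0,
}
meals_per_day = 4
per_meal_ceiling_g = 30  # intake above ~30 g in one meal gives little further synthesis stimulus

for label, g_per_kg in levels.items():
    daily_g = g_per_kg * body_weight_kg
    per_meal_g = daily_g / meals_per_day
    note = "above" if per_meal_g > per_meal_ceiling_g else "within"
    print(f"{label}: {daily_g:.0f} g/day, about {per_meal_g:.0f} g per meal "
          f"({note} the ~{per_meal_ceiling_g} g per-meal ceiling)")

Even the upper expert recommendation stays close to the per-meal ceiling when spread evenly over four meals (about 38 g each), whereas the typical pattern of concentrating protein at dinner leaves the remaining meals well below the dose needed to stimulate synthesis maximally.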
The bre[…] There is sufficient evidence that protein intake higher than the current dietary recommendation (0.8 g/kg/day) is beneficial for most older adults. Higher protein intakes are associated with increased muscle protein synthesis, which is correlated with increased muscle mass and function. This, in turn, is linked to improved physical function. The evidence presented in this review supports the need for a higher RDA for protein for older adults in order for them to achieve optimal protein intake. Many older people suffering from chronic disease have reduced appetite and do not consume adequate levels of protein. Therefore, it is essential that they obtain adequate protein from high-quality protein sources."} {"text": "The objective of the present study was to investigate the relation between antioxidant status and the postpartum anestrous (PPA) condition in Murrah buffalo. Jugular blood samples were collected from two groups of Murrah buffaloes, each consisting of 20 animals: Group I comprised PPA buffaloes and Group II cyclic buffaloes. The selected animals were examined to confirm their cyclic or acyclic condition (>120 days after calving) by routine transrectal ultrasonography. Herd records were also used for cross-confirmation. The analysis of antioxidants in plasma and hemolysates revealed that the levels of vitamin E, β-carotene and reduced glutathione in plasma, and of superoxide dismutase (SOD) in hemolysate, were significantly higher in cyclic animals than in PPA animals. The levels of vitamin C, SOD and glutathione peroxidase in plasma did not show any significant difference between the two groups studied. The low antioxidant levels in affected animals may predispose them toward the PPA condition. Stress imposed by pregnancy and lactation affected the reproductive performance of the PPA animals, which might be inherently more susceptible to these stressors than the normal cyclic animals, as all animals were maintained under similar feeding and management practices. As the second largest source of milk in the world, contributing about 56.5% of total milk production in India, buffalo are of premier importance in the dairy industry, along with their significant contribution to foreign exchange earnings through the export of meat . The low[…] Stress responses during heat, pregnancy and milk production lead to the formation of reactive oxygen and nitrogen species (ROS and RNS). These ROS and RNS include hydroxyl radicals, superoxide ion, hydrogen peroxide and nitric oxide radicals, and are involved in free radical chain reactions affecting lipid peroxidation, apoptosis and fertility . The bio[…] So, hypothesizing that the pro-oxidant/antioxidant balance underlies animal reproductive efficiency, the present study was designed to explore the relation between antioxidant status and the PPA condition in Murrah buffalo. The experiments on animals, including all procedures of this study, were approved by the Institutional Animal Ethics Committee. The present investigation was carried out at the animal farms of the Central Institute for Research on Buffaloes, Hisar, and Lala Lajpat Rai University of Veterinary and Animal Science, Hisar. Twenty PPA and twenty normal cyclic Murrah buffaloes were selected on the basis of their reproductive history obtained from farm records.
According to the herd records, buffaloes that had shown anestrous for more than 120 days were selected for the postpartum anestrous (PPA) group, and animals that had come into estrus within 65 days postpartum for more than three consecutive lactations, including the present lactation, were selected for the normal cyclic group. The animals selected in the current lactation had an average postpartum anestrous period of 191.47±13.37 days, and those in the normal cyclic group 60.64±5.38 days. The current status of the reproductive organs of all animals in the study was also examined and verified by per-rectal examination and ultrasonography. The animals were maintained as per the standard feeding and management practices followed at the farms and were fed for body maintenance and according to the level of milk production, so that the green and dry fodder were appropriately supplemented with a concentrate mixture containing mineral mixture. Buffaloes were loose-housed in open and closed paddocks. Approximately 10 ml of jugular blood was collected from each experimental animal into a 15 ml sterile polypropylene centrifuge tube containing ethylenediaminetetraacetic acid as anticoagulant, in the months of November and December. Plasma was separated in a refrigerated centrifuge at 3000 rpm for 15 min and stored in aliquots at −20°C until analysis of vitamins and antioxidants. Following separation of plasma from the blood samples by centrifugation, the white blood cell layer was removed, and the remaining erythrocytes were washed thrice with cold normal saline solution. Distilled water was then added slowly to the erythrocyte pellet, with constant stirring, up to a 1:1 dilution to prepare the hemolysate, which was stored at −20°C for estimation of SOD. All chemicals used in this study were procured from Sigma-Aldrich, USA. Nitric oxide scavenging activity of plasma was measured by the method of Sreejayan and Rao . R-GSH a[…] . Estimat[…] and plas[…]. All data were expressed as mean±standard error values. Statistical analyses were carried out using the GraphPad Prism v6.0 software implementation of Student's t-test. Plasma vitamin E, β-carotene and vitamin C levels were compared between PPA and normal cyclic animals. Vitamin E and β-carotene levels were observed to be significantly (p<0.05) higher in normal cyclic animals than in PPA animals, whereas the vitamin C level did not differ significantly between the two groups . The mean plasma nitric oxide scavenging activity, plasma R-GSH, plasma GPX-3, plasma SOD and SOD in RBC of the PPA and normal cyclic animals, along with their standard errors, are depicted in . The total plasma protein concentration was observed to be significantly (p<0.01) higher in normal cyclic animals . In the present study, vitamin E was significantly (p<0.05) higher in normal cyclic animals than in the PPA group, which corroborates the findings of Kahlon and Singh , Surapan[…] et al. and Dera[…] et al. in cattle . The higher level of plasma GSH concentration in normal cyclic animals also corroborates the experiments of Ahmed et al., Hanafi […] and Ahme[…] , who rep[…] reported […] observed […] . SOD converts ROS generated by the cell to hydrogen peroxide by spontaneous dismutation . In our […] The current study revealed that the plasma GPX-3 level was not significantly different between the two groups.
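The group comparisons reported above are unpaired Student's t-tests on two groups of 20 animals, run in GraphPad Prism. The minimal equivalent analysis is sketched below in Python for readers who want to reproduce this kind of comparison; the numbers are invented placeholders standing in for a measured antioxidant and are not the study's data.

# Unpaired two-sample t-test comparing an antioxidant level (e.g. plasma vitamin E)
# between cyclic and postpartum anestrous (PPA) groups of 20 animals each.
# The simulated values are placeholders, not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cyclic = rng.normal(loc=2.8, scale=0.5, size=20)  # hypothetical plasma vitamin E, ug/mL
ppa = rng.normal(loc=2.3, scale=0.5, size=20)

t, p = stats.ttest_ind(cyclic, ppa)  # Student's t-test, equal variances assumed by default
print(f"cyclic: {cyclic.mean():.2f} +/- {stats.sem(cyclic):.2f} (mean +/- SE)")
print(f"PPA:    {ppa.mean():.2f} +/- {stats.sem(ppa):.2f}")
print(f"t = {t:.2f}, p = {p:.4g}")  # compare p with the 0.05 level used in the paper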
Similar reports for dairy cow are also documented in related experiments .et al. [et al. [Significantly higher protein in normal cyclic animals than PPA buffaloes has been reported by Amanullah et al. and Kuma [et al. . The pre [et al. .The vitamin E, \u03b2-carotene, reduced glutathione and protein investigated in plasma of buffaloes suffering from PPA were significantly (p<0.05) lower than normal cyclic buffaloes. The SOD level in hemolysate was also significantly (p<0.01) lower in PPA group animal than normal cyclic animals. Whereas plasma vitamin C level, SOD, GPX-3, and nitric oxide scavenging activity (%) were non-significantly (p<0.05) different in both groups. This indicates buffaloes under PPA condition are suffering from stress which may be because of production or reproduction. It also indicates that these animals are more susceptible to stress and apart from normal feeding additional supplementation of antioxidant vitamins can be beneficial to them at time of stress to optimize the performance in buffaloes.RK, MG, and AKB have designed the study and planned the research experiment. RK, MG, and SK performed the research experiments. IS, AKB, and MG supervised the research and performed manuscript preparation. All the authors read and approved the final manuscript."} {"text": "Obsessive-compulsive personality disorder (OCPD) traits and obsessive-compulsive disorder (OCD) are commonly associated with patients with Anorexia Nervosa (AN). The aim of this review was to systematically search the literature to examine whether OCPD and OCD are positively associated with excessive exercise in patients with AN.A systematic electronic search of the literature was undertaken to identify relevant publications until May 2012.A total of ten studies met criteria for inclusion in the review. The design of the studies varied from cross-sectional to retrospective and quasi-experimental. Seven out of the ten studies reviewed demonstrated a positive relationship between OCPD and/or OCD in AN patients who exercise excessively, whilst three studies found a lack of relationship, or a negative relationship, between these constructs.There is evidence from the literature to suggest that there is a positive relationship between OCPD and excessive exercise in patients with AN. However, the relationship between OCD and excessive exercise is less clear and further research is required to qualify the strength of such relationships. Future research should utilise the most comprehensive and reliable clinical assessment tools, and address prognostic factors, treatment factors and specific interventions for patients with OCPD and/or OCD and excessive exercise. Anorexia Nervosa (AN) is recognised as one of the most serious chronic mental illnesses, with significant physical and psychosocial consequences . In ordeRothenberg proposedExcessive exercise plays a detrimental role in the pathogenesis and maintenance of AN -21, and The reviewed research suggests that AN is associated with increased OCD symptomatology and a higher prevalence of OCPD traits. Research is warranted to determine personality and psychological variables for excessive exercise, in particular those that may be remedial to interventions . 
The aimThe search strategy was designed to identify all studies of patients with AN, in which OCPD or its traits, or OCD and its features were formally assessed, and in which excessive exercise was formally measured through clinical interview or clinical judgement.The following databases were systematically searched from April-beginning of May 2012: PsycINFO (1806-present), Medline (1950-present) and Web of Knowledge (1864-present). Reference lists from relevant articles were also manually searched for additional studies. The following search terms were used: (anorexia* OR anore* OR eating disorder*) AND AND . Peer-reviewed research articles that focused on the relationship between exercise and obsessive compulsive disorder and/or obsessive compulsive personality traits in patients with anorexia nervosa were included. A total of 443 papers were retrieved from the electronic search. The titles and abstracts were screened to assess the suitability of papers. A second reviewer also screened a proportion of the titles and abstract to reduce selection bias. 79 papers were excluded from their title, and 302 papers were excluded from their abstract. The full text of 62 papers was read, and 54 were excluded. The reference lists of the final full text papers were searched manually, and a further two articles were retrieved. The second reviewer also read full texts of papers meeting the inclusion criteria, and there were no discrepancies in the inclusion of articles, thus a total of 10 studies were included in the review.A detailed map of the search strategy can be seen in Figure\u00a0The final retrieved articles underwent quality assessment utilising an amended version of the original Quality Index by Downs and Black . The QuaA total of ten studies were reviewed: four studies utilised AN participants who were receiving inpatient treatment; three studies used inpatients and outpatients; one study used outpatients only, whilst another study stated that they recruited from four eating disorder services, but did not specify the settings. The final study included patients with AN who were from the multisite international Price Foundation Study of AN, BN and AN Trios studies, and their affected relatives who met lifetime diagnosis of AN interview . These sOther assessment protocol for excessive exercise included questions regarding duration, frequency and intensity of exercise per week ,21,30 anOCD symptomatology was measured using different self-report questionnaires. The OBQ-44 assessed constructs of inflated responsibility/threat estimation, perfectionism/tolerance of uncertainty, and importance/control of thoughts . The ObsOCPD traits were measured through a number of methods. The EATATE interview was inclp\u2009=\u2009.007) than non-excessive exercisers, unaffected by dietary status. Both excessive exercisers and non-excessive exercisers demonstrated a reduction in OCD symptoms between admission and discharge (p\u2009<\u2009.001). There was an interaction trend demonstrating that, after re-feeding, OC symptoms decreased less in the excessive exercisers group. It was also noted that patients who presented with excessive exercise reported a higher number of obsessive compulsive symptoms on the Maudsley Obsessive Compulsive Inventory (at admission and discharge) than a group of patients diagnosed with OCD [Davis and Kaptein demonstrwith OCD .p\u2009<\u2009.01). 
Obsessive-compulsiveness was positively related to level of activity in AN patients (p\u2009<\u2009.01) and exercising participants also demonstrated higher OC symptomatology than non-exercisers (p\u2009=\u2009.02).Similar findings were demonstrated in studies by Davis et al. ,39 in whp\u2009<\u2009.01), obsessive-compulsive behaviours (p\u2009<\u2009.01), as well as higher eating disorder psychopathology (p\u2009<\u2009.01). Specifically, the Checking subscale from the OCI-R contributed uniquely and significantly to the overall model explaining weight control exercise, signifying that these obsessive beliefs and behaviours predict the variance in exercise for purpose of weight control, after controlling for eating disorder psychopathology. Furthermore, Shroff et al. [p\u2009<\u2009.001), and higher frequency of obsessions and compulsions (p\u2009<\u2009.001), when compared with AN patients who completed no or regular exercise.Naylor et al. concludef et al. reportedp\u2009=\u2009.038). Penas-Lledo et al. [p\u2009>\u2009.05). Finally, Holtkamp et al. [p\u2009=\u2009.705) and that obsessive-compulsiveness was not a significant contributor in the regression model predicting physical activity with other factors such as BMI, level of food restriction, depression and anxiety.However, Anderluh et al. found noo et al. found thp et al. concludep\u2009=\u2009.03) than non excessive exercisers, and this was unaffected by dietary status.Davis and Kaptein demonstrp\u2009<\u2009.05) and levels of self-oriented perfectionism (p\u2009<\u2009.05) than non-excessive exercisers. Anderluh et al. [p\u2009<\u2009.005) and cautious (p\u2009<\u2009.02), however they did not find any significant differences in current OCPD comorbidity (p\u2009>\u2009.05).Davis et al. showed th et al. reportedp\u2009<\u2009.05) and historically throughout the eating disorder (p\u2009<\u2009.01). Furthermore, Shroff et al. [p\u2009<\u2009.001), measuring factors such as concern over mistakes, personal standards, organisation and parental criticism.Davis and Claridge demonstrf et al. reportedThe aims of this systematic review were to critically examine evidence as to whether OCPD traits and/or OCD are associated with excessive exercise in patients with AN, and to determine the nature of relationships between these constructs. The results of the systematic review indicated a positive relationship between excessive exercise and obsessive-compulsive personality traits. However, the relationship between OCD and excessive exercise in AN patients is less clear with studies producing varying results.Davis et al. proposedFurthermore, the findings of Naylor et al. are consBewell-Weiss and Carter reportedDavis et al. speculatAnderluh et al. found thThere are a number of limitations that were evident in the reviewed literature. The number of studies examined in the review (10) is small, and results must be interpreted with some caution as some of the studies did not support the association between OCD and excessive exercise. A number of the reviewed studies did not clearly differentiate between OCD and OCPD constructs in their presentation of results. Nine different types of measures were employed across the studies to assess OCPD traits and OCD symptomatology, many of these being self-report questionnaires, which may increase the incidence of socially desirable response styles. Others involved expert clinical assessment in the form of an interview . 
These Denial commonly occurs in anorexia nervosa, and subjective measures may have underestimated the amount of physical activity completed by patients. Such underestimation may have led to significant bias and consequently inaccuracy of data regarding the extent of exercise, and its relationships with coexisting OCD and/or OCPD. This potential room for bias has led to the utilisation of more objective means of assessing for physical activity in patients with AN, for example accelerometers ,78. Yet There are also a number of limitations in regards to participants utilised in the studies. The small sample size of Holtkamp et al. affectedAs the vast majority of the studies employed a cross-sectional design, only relationships between variables could be determined, and there can be no demonstration of the direction of such associations. Acute starvation syndrome and severity of eating disorder psychopathology both have significant impact upon level of obsessive-compulsive symptoms and exercise . There wWhilst this review has focused on the relationships between OCPD traits and/or OCD symptomatology with excessive exercise in patients with AN, it would be remiss not to mention the significant relationships which excessive exercise shares with other psychopathologies, which have important implications for treatment. Studies included in the review demonstrated that AN patients who exercised excessively demonstrated lower minimum BMI and lower novelty seeking, but higher harm avoidance, persistence and cooperativeness . In otheFrom the research reviewed, it appears that there is a positive association between obsessive- compulsive personality traits in patients with AN who excessively exercise, yet the relationship with obsessive-compulsive disorder is less clear. Although it is known that excessive exercise is associated with poor treatment outcome in AN ,29 the eThe authors declare that they have no competing interests.SY undertook the systematic review and with PH prepared this manuscript for journal submission. PH, ST and PR assisted with the editing of the manuscript and data interpretation. All authors read and approved the final manuscript.SY submitted an alternate version of this manuscript as a requirement for the Doctor of Clinical Psychology/Master of Science thesis at University of Sydney. She is supervised by ST, PH and PR. PH, ST and PR are conducting a randomised controlled trial and evaluation of the Loughborough Eating Disorders Activity Programme (LEAP) referenced in this paper."} {"text": "Based on previous research [\u03bcg/day dose of elemental chromium was unable to produce desired results [We read the research of Huang et al. with greresearch , 4 and tresearch . Third,"} {"text": "Setaria tundra (Nematoda: Filarioidea) and an unidentified filarial species in mosquitoes in Germany\u201d Parasites & Vectors 2012, 5:14.Comments concerning interpretation of the PCR xenomonitoring results in the article \u201eMolecular detection of Dear Editor,D. repens DNA detection in mosquitoes using the primers applied in the second round PCRs [et al. [et al. [D. repens sequences deposited in GenBank [In the context of an evaluation of the use of xenomonitoring (sensu ) for detund PCRs \u20135, was d [et al. nor such [et al. (to the [et al. of the p [et al. \u20135 perfec GenBank \u2013 the pr GenBank , 5. We a GenBank .The absence of Dirofilaria spp. 
or other zoonotic filariae in our sample allows the conclusion that the risk of autochthonous infection in Germany is still very low, although dirofilariasis is emerging and spreading in Europe\u201d [The authors interpreted every positive result of the screening real time PCR performed on DNA obtained from mosquito pools as positive result of filaria detection. However, the positive results of real time PCR might have been false positive; there were no positive PCR controls described, there were no negative PCR controls described. The sensitivity and the specificity of the PCRs applied in the study were not reported. Considering the above mentioned deficits, the following statement based on PCR results, seems unsupported: \u201c Europe\u201d . The res Europe\u201d . In the Europe\u201d the auth Europe\u201d .\u00a0Only th Europe\u201d . It woul"} {"text": "Assessment of volume and hydration status is far from easy and therefore technology such as bioelectrical impedance vector analysis (BIVA) may complement our examination techniques. This study highlights the fact that clinical assessment of volume balance and BIVA may correlate, but whether the routine use of BIVA will avoid significant volume overload in the critically ill remains unknown. Further studies are needed but at the moment appear a little way off. Critical Care, Jones et al. [2O) [In the current issue of s et al. investigs et al. . Hydratis et al. . With des et al. . The golal. [2O) . TracersBioelectrical impedance analysis was originally introduced as a tool for assessing body composition and nutritional status, but early studies highlighted some limitations of this technique with variation in electrolyte levels, acute changes in hydration status and problems with some of the standard equations employed creating some disaffection , 7. BIVAIn the present study by Jones et al. , 344 meaIn summary, can BIVA guide fluid management in critically ill patients? As pointed out by Jones et al., this can only be addressed in well-designed interventional studies particularly with regard to the patient population. Given these preliminary results it seems unlikely that BIVA will play a major role in the critically ill level 3 patient with sepsis where rapid fluid shifts are occurring or in the unstable postoperative patient, but the technique may inform in less acute situations such as renal replacement therapy in a step-down unit."} {"text": "Gertig and Hanisch , always on the edge of causing massive destruction when provoked. Nevertheless, microglia are responsible and/or strongly contribute to the maintenance of CNS homeostasis, immune surveillance and synaptic plasticity. Until now, it was believed these complex processes depended on two polarizing stimuli received by microglia: the \u201cbad\u201d ones leading to a classical pro-inflammatory response (M1), and the \u201cgood\u201d ones leading to a typical anti-inflammatory profile (M2). However, under M1 and M2 polarizing conditions, microglia can Hanisch extensiv Hanisch distingu Hanisch show tha Hanisch extensiv Hanisch . The groa et al. review ca et al. propose l et al. show howo et al. comment The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Venous thromboembolism (VTE) is a common complication of malignancies and epidemiological studies suggest that lung cancer belonged to the group of malignancies with the highest incidence rates of VTE. 
Risk factors for VTE in lung cancer patients are adenocarcinoma, NSCLC in comparison with SCLC, advanced disease, pneumonectomy, chemotherapy including antiangiogenic therapy. Other risk factors are pretreatment platelet counts and increased release of TF-positive microparticles. Elevated D-dimer levels do not necessarily indicate an increased risk of VTE but have been shown to be predictive for a worse clinical outcome in lung cancer patients. Mechanisms responsible for the increase in venous thrombosis in patients with lung cancer are not understood.Currently no biomarker is recognized as a predictor for VTE in lung cancer patients.Although several clinical trials have reported the efficacy of antithrombotic prophylaxis in patients with lung cancer who are receiving chemotherapy, further trials are needed to assess the clinical benefit since these patients are at an increased risk of developing a thromboembolism. Thromboembolism is a well recognized complication of malignant disease and it is known that cancer patients have a higher incidence of venous thromboembolism (VTE), including pulmonary embolism (PE) and deep venous thrombosis (DVT), compared to the general population .The incidence of thromboembolic disease in general population is relatively low, about 1\u20133 per 1.000 per year while amIt is estimated that cancer is responsible for 20\u00a0% of all cases of incident VTE .Epidemiological studies suggest that lung cancer belonged to the group of malignancies with the highest incidence rates \u20139 of VTEThe association between VTE and lung cancer has been reported more than 20\u00a0years ago , 11.Lung cancer is the leading cause of cancer death in the United States for both men and women, with over 156,000 deaths in 2011 in the United States . It is aIn recent years, there has been increasing attention to the phenomenon of VTE in lung cancer patients and several studies have been made to better characterize it.It is recognized that patients with lung cancer are at risk for development of VTE. The incidence of VTE in lung cancer patients has been described in several studies.et al. [Chew et al. investgaet al. .In a retrsospective analysis of a lung cancer cohort of 6732 patienst , VTE occurred in 13,9\u00a0% of the lung cancer cohort and 1,4\u00a0% of the control cohort .In another retrospective study of 1940 patients with diagnosis of lung cancer, thromboembolic events were documented in 9.8\u00a0% cases, venous thromboembolic complications in 78\u00a0%, arterial thromboembolic complications in 27\u00a0% cases. About venous thromboembolic complications, it was documented deep venous thrombosis in 55\u00a0% cases and pulmonary embolism in 66\u00a0% cases .et al. [Recently Zhang et al. describeet al. .et al. [The incidence of thrombotic event was evaluated in a group of 950 patients with lung cancer by Kadlec et al. . 91 . Almost all of these unexpected VTE cases were significantly associated with a diagnosis of metastatic-stage cancer [White et al. identifie cancer .In a more recent study, it was observed among 207 subjects with lung cancer that one-third of VTE events was incidentally discovered .et al. [Calleias et al. have evaet al. . Patientet al. .Mechanisms responsible for the increase in venous thrombosis in patients with cancer are not understood. 
Rather than one unifying mechanism, the etiology is likely multifactorial with different factors assuming lesser or greater degrees of importance depending on the clinical setting.Over the years, researchers have investigated intrinsic properties of tumor cells that lead to a prothrombotic state. It is recognized that tumors shed membrane particles that contain procoagulant activity including tissue factor and membrane lipids that propogate the coagulation response , 24.Several studies have focused on the role of Tissue Factor (TF) in the pathophysiology of cancer associated thrombosis.Tissue factor (TF) is the physiologic initiator of coagulation and is commonly expressed in a variety of malignancies , 26. OveIn the past, some studies demonstrated the expression of TF by lung cancer cells , 29.et al. [et al. [Sawada et al. have pre [et al. also demMore recently the expression TF protein and/or mRNA have been documented in malignant cells or tissue in NSCLC .et al. [Sato et al. describeet al. .et al. [Similarly, Del Conde et al. reportedet al. .Cancer increases the risk of thromboembolic events. Risk factors, such as age, gender, bed-rest, venous catheters, surgery, chemotherapy with or without adjuvant hormone therapy, radiotherapy and infections, also increase the risk of thrombosis in cancer patients.Rates of VTE vary substantially between cancer patients, considerably depending on clinical factors, the most important being tumor type and stage.In this review we have focused on studies which investigated factors could be identified as predictors of vascular events in lung cancer and their prognostic significance.Thrombocytosis can result in thrombosis and is frequently observed in patients with malignancies The mechanism underlying development of thrombocytosis in lung cancer patients remains unclear; one of possible mechanisms might be tumor-associated elevation of bone marrow-stimulating cytokines such as interleukin (IL)-6, IL-1 and macrophage colony stimulating factor (M-CSF). These cytokines could exhibit thrombocytosis. Incidence and prognostic significance of thrombocytosis in patients with lung cancer has been investgated in some studies .Pedersen and Milman examinedet al. [p\u2009<\u20090.0001). Therefore preoperative platelet count was a prognostic factor for resectable NSCLC patients [Tomita et al. evaluatepatients .p\u2009<\u20090,001) [These findings are supported by a more recent study that ass<\u20090,001) .et al. [9 was associated with OR for major thromboembolic events was 3.66 (2.25\u20135.96) \u2014 a statistically significant value. Thrombocytosis was found in patients with advance disease, more commonly in men, with no apparent correlation with the clinical stage of the disease [Recently Kadlec et al. observed disease .et al. in a restrospective study of consecutive 281 patients with lung cancer did not observe a significant correlation between vascular events and thrombocytosis [On the contrary, Demirci ocytosis .Plasma levels of D-dimer are elevated in cancer patients. Activation of the extrinsic coagulation system and the fibrinolytic cascade within a tumour is thought to be related with growth, invasion and metastasis. Although high D-dimer was described as a predictor of VTE in earlier works, it is now recognized that it does not necessarily indicate an increased risk of VTE in cancer patients .Several studies have demonstrated that elevated D-dimer levels are associated with a poor prognosis in cancer patients. 
In patients with both non-small cell and small cell lung cancer, increased plasma D-dimer levels were shown to be predictive for a worse clinical outcome .In addition in cancer lung patients it has been shown that D-Dimer plasma levels decrease or increase after response and progressive disease, respectively, and can act as a predictive factor of the evolution of the disease .et al. [p\u2009=\u20090.0001) [Recently Ferroni et al. hypotheset al. . Overall\u20090.0001) . The use\u20090.0001) .With regard to other hemostatic factors, there are different results in the literature.et al. [Gabazza et al. found siet al. [In a prospective study Zecchina et al. investigElevated levels of procoagulant factors, such as lupus anticoagulant, anticardiolipin antibodies to factor VIII, and certain cytokines such as interleukin-6 and tumor necrosis factor, have also been identified as increased risk factors for VTE in patients with lung cancer , 29.et al. in a study population of 950 lung cancer patients did not find significantly higher medians of coagulation parameters [Recently Kadlec nd aPTT) .Over the years several studies investigated the VTE risk in different hystological types of lung cancer.et al. [In the past, autopsy and retrospective studies indicated that various adenocarcinomas are most strongly associated with VTE , 46. Thiet al. investiget al. .In a restrospective analisys of a large cohort of patients with NSCLC and SCLC, was observed that adenocarcinoma histology and advancing cancer stage were significant predictors of developing VTE within 1\u00a0years of NSCLC diagnosis .et al. and Lee [Recently Kadlec and Lee , 48 idenet al. [Tagalakis et al. describeet al. [In a retrospective analysis of lung cancer patients, Alexander et al. observedet al. .et al. [Differently not a significant correlation between vascular events and histological type, or TNM stage was observed in a retrospective study by Demirci et al. .VTE was a significant predictor of death within 2\u00a0years for both NSCLC and SCLC, HR\u2009=\u20092.3, 95\u00a0% CI\u2009=\u20092.2\u20132.4, and HR\u2009=\u20091.5, 95\u00a0% CI\u2009=\u20091.3\u20131.7, respectively .Cancer therapy itself has been shown to increase the risk of VTE, whether it be chemotherapy, antiangiogenic therapy, or hormonal therapy.Combined chemotherapy (in combination with radiotherapy) is the current standard treatment for advanced stage NSCLC as well as in the treatment of SCLC patients and VTE is a well known complication of anticancer therapy , 50.How chemiotherapy contributes to VTE risk has not to our knowledge been elucidated completely but likely involves a combination of direct endothelial damage and down regolation of endogenous anticoagulants .et al. [In an observational study by Blom et al. , the oveet al. .et al. [Khorana et al. analysedet al. .et al. [The results of this study are supported by Hicks et al. that havet al. .et al. [Numico et al. prospectet al. .Recently a retrospective study of lung cancer patients showed that chemotherapy-treated patients experienced thromboembolism more often than patients who did not receive chemotherapy (HR 5.7 95\u00a0% CI 2.2\u201314.8) .Among lung cancer patients receiving chemotherapy, it was described that the majority of VTE events occurring within 6\u00a0months of initiation of chemotherapy . 
The preWith regard to antiangiogenic chemotherapy, researchers have focused on neoangiogenesis inhibition with VEGF inhibitor bevacizumab.While some trials suggested that bevacizumab may be associated with increased risk for VTE , others et al. [p\u2009=\u20090.031) but was not found to be associated with an increased risk of venous thromboembolic events [Scappaticci et al. conductec events .et al. [p\u2009<\u20090.001) compared with controls [These data are in contrast to a recent systematic review and meta analysis by Nalluri et al. ; resultscontrols .Antiangiogenic agents also contribute to thrombosis, perhaps through endothelial cell and platelet activation .Postoperative use of angiogenesis drugs EGFR-TKI application were high risk factors for VTE in lung cancer patients .With regard to metalloproteinase inhibitor prinomastat there are divergent data.et al. [According to Behrendt et al. , the useet al. .In contrast to these data in a more recent study the addition of another metalloproteinase inhibitor BMS-275291 to chemotherapy in patients with advanced NSCLC did not increase the risk .The addition of aprinocarsen, a protein kinase C- alfa antisense oligonucleotide, to standard chemotherapy in patients with advanced stage NSCLC, significantly increased thromboembolism .It is known that patients undergoing surgery for cancer have a higher risk of postoperative DVT, despite thrombosis prophylaxis , than thAmong patients who undergo surgical resection due to oncological processes, VTE is considered a major cause of mortality and may serve as an important predictor of survival.et al. [With regard to lung cancer, Ziomek et al. analyzedet al. .et al. [Lyman et al. establisPneumonectomy has been associated with a higher risk of VTE than lobectomy for stage I and II lung cancer, due to greater activation of coagulation, which appears from the seventh postoperative day .et al. [Mason et al. describeet al. [Yang et al. exploredet al. .et al. [Recently Kadlec et al. reportedet al. [Differently Dentali et al. have obset al. .et al. [et al., this low prevalence may be attributable to the use of thromboprophylaxis protocols, which include the use of anticoagulant drugs and mechanical measures (intermittent compression systems and elastic stockings) and early ambulation [Gomez-Hernandez et al. have fouet al. . Accordibulation .et al. [A recent published review by Christensen et al. analysedet al. .et al. [With respect to mortality, it was described that pulmonary embolism is the second cause of mortality after pneumonectomy for a malignancy and such patients have the highest risk of dying from pulmonary embolism . Weder eet al. in theiret al. .Interestingly, mortality due to VTE has been reported to occur predominantly in patients with squamous cell lung carcinoma following surgery .In cancer patients the risk of VTE is particularly high in association with surgery for cancer and chemotherapy.et al. [P\u2009=\u2009.032) without increased bleeding [Haas et al. investigbleeding .et al. [p\u2009=\u20090.04) and thromboembolic events . Furthermore anticoagulation showed significant improvement in survival at 1\u00a0year and at 2\u00a0years , especially for patients with SCLC and prolonged life expectancy [Zhang et al. have recpectancy . Betweenpectancy .In a retrospective analysis of lung cancer patients, patients receiving chemotherapy experienced thromboembolism more often than patients who did not receive chemotherapy (HR 5.7 95 % CI 2.2\u201314.8) . 
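Several of the chemotherapy studies cited above summarise risk as a hazard ratio with a 95 % confidence interval, for example HR 5.7 (95 % CI 2.2–14.8) for thromboembolism in chemotherapy-treated patients. As a purely illustrative sketch (not part of any cited analysis, and with arbitrary helper names), the standard error of the log hazard ratio can be recovered from such an interval, which is convenient when comparing or pooling estimates across the studies discussed in this review; the numbers below are simply those quoted above.

```python
import math

def se_log_hr(lower, upper, z=1.96):
    """Approximate SE of log(HR) from a reported 95% CI (normal approximation)."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

def ci_from_hr(hr, se, z=1.96):
    """Reconstruct a 95% CI from a hazard ratio and the SE of its log."""
    return (math.exp(math.log(hr) - z * se), math.exp(math.log(hr) + z * se))

# Example using the interval quoted above: HR 5.7, 95% CI 2.2-14.8
se = se_log_hr(2.2, 14.8)
print(f"SE of log(HR) ~ {se:.3f}")
print("Reconstructed CI:", tuple(round(x, 1) for x in ci_from_hr(5.7, se)))
```

The reconstructed interval matches the quoted one because 5.7 lies at the geometric mean of its limits, as expected for an interval computed on the log scale; when that is not the case, the reported interval was probably derived differently and should be interpreted with care.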
PharmacRecently the neutrophil-lymphocyte ratio (NLR) was identified as a potentially useful marker for predicting clinical outcome in patients with anticoagulation for VTE. The study has demonstrated that NLR at the time of VTE diagnosis could be a useful biomarker for predicting the response and prognosis following anticoagulation in patients with lung cancer and VTE .The CARMEN study is an interesting french observational study, in which the researchers evaluated the adhesion to guidelines for treatment of VTE in hospitalized patients .Among cancer patients, 64\u00a0% had metastatic disease. Cancer sites were gastro-intestinal (25\u00a0%), gynecologic (23\u00a0%), pulmonary (21\u00a0%), hematological (14\u00a0%), urologic (10\u00a0%), or other (8\u00a0%) .It was observed that overall adhesion to guidelines was present in 59\u00a0% of patients. During initial treatment, adhesion was high but dropped during the long-term mantenance. Lung and hematological malignancies were significantly associated with the highest and lowest rates of adhesion .et al. [Conversely Alexander et al. made a set al. .Venous thromboembolism, including pulmonary embolism and deep venous thrombosis, is a common occurrence within the cancer population. Lung cancer is one of the most common cancer in western countries with reported high incidence of VTE. VTE in hospitalized lung cancer patients is associated with longer length of stay, higher inpatient mortality rates, increased cost and greater disability upon discharge compared to other inpatient lung cancer patients . Risk faCurrently no biomarker is recognized as a predictor for VTE in lung cancer patients.Although several clinical trials have reported the efficacy of antithrombotic prophylaxis in patients with lung cancer who are receiving chemotherapy, further trials are needed to assess the clinical benefit since these patients are at an increased risk of developing a thromboembolism."} {"text": "Life devoted to The Origins and Early Evolution of RNA, 17 papers explore a remarkably broad range of topics surrounding this hypothesis. I would not go so far as to say the hypothesis is experiencing a mid-life crisis. However it is clear that it has generated a spectrum of viewpoints, from ardent devotees to outright skeptics. I perceive all these vantage points as giving us a richer appreciation of the chemical origins of life.The RNA World is now some four billion years behind us, but only recently turned 50 as a human hypothesis. As early as 1962 Alex Rich suggested that RNA might have a phenotype in addition to its informational role [i.e., RNA enzymes, and thus operating on reversible reactions) generate such products. Martin et al. also briefly consider the role of 2\u00b4,3\u00b4-cyclic phosphates in their paper, which is an in-depth survey of the problems and potentials of RNA ligase, nucleotide synthase, and RNA replicase ribozymes [The Issue was initiated by a cogent argument from Scott and colleagues that the 2\u00b4,3\u00b4-cyclic phosphate version of nucleotides should be considered a viable prebiotic source of monomers for the abiotic polymerization into RNA [et al. revisit the role of formamide as both a reactant and a solvent for activated nucleotide synthesis that obviates much of the water problem [The key problem of how RNA nucleotides, if present prebiotically, could polymerize into oligomers in the face of an uphill battle against hydrolysis in an aqueous medium, was taken up by two papers in the Special Issue. Hashizume details problem . 
Of note problem ,7.The transition from \u201cshort-mers\u201d to \u201clonger-mers\u201d was also investigated by Mungi and Rajamani and KanaOnce RNAs were long enough, the RNA World hypothesis generally turns to how the evolutionary process could speed up. Kun and Szathm\u00e1ry, utilizing a recent influx of experimental data on RNA function, explore the nature of RNA fitness landscapes . A very et al. [et al. [e.g., [Another \u201cnext step\u201d in RNA World discussions is the origin and evolution of the genetic code. This problem was pondered long ago by Jukes , Woese , and othet al. take thiet al. takes th [et al. also tak [et al. also con. [e.g., , and woret al. [Once the RNA World had a foothold, a next logical question is, how did life bring in DNA? In today\u2019s biology, DNA synthesis requires a very challenging enzymatic reaction catalyzed by the ribonucleotide reductase (RNR) protein enzymes, which use RNA as a substrate. Given the complexity, yet apparent antiquity, of this reaction, the evolutionary provenance of these enzymes is of extreme importance. Lundin et al. provide i.e., RNA per se). The same stance is taken by van der Gulik and Speijer, who argue forcefully on catalytic grounds that an RNA world without amino acids could never have existed [et al. [The last group of papers in the Issue deal with the highly probable case where the RNA World was not strictly consisting of RNA, and that peptides or other molecules either initiated life or co-evolved with RNA. The so-called \u201cRNP World\u201d \u2013 for ribonuleoprotein world \u2013 posits that RNA could not have had enough catalytic prowess without the help of peptide cofactors. This is a rather attractive hypothesis and various authors have varying degrees of opinions on its strength. The \u201cStrong RNP World\u201d is advo existed . Smith e [et al. , perhaps [et al. makes pe [et al. .In sum, all of these papers serve to richen the discussion of the RNA World, in all of its various forms. Although definitive confirmation of any of these ideas may require a time machine, I sense that we are at the precipice of a unified theory that accommodates a wide spectrum of RNA-related observations."} {"text": "Lack of clean water sources, starvation, insufficient hygiene, and poverty are some of the greatest barriers to health for the world's growing population. All these features are closely associated with parasitic infections. Approximately one third of the world's population has been infected with parasites at some point in their lives and these infections are often life-threatening. Since most parasitic diseases progress with few to no symptoms, patients do not obtain accurate diagnosis and treatment in a timely manner. Further, there are currently no accessible and effective antiparasitic vaccines despite enormous efforts and monetary investment into the development of vaccines, drugs, and treatments to combat these infections. Today parasitologists are looking for new alternatives for treatment such as immunotherapies, gene manipulation, or transfection in order to improve their fight against these \u201celusive\u201d organisms. 
It is also clear that more studies on various parasites are necessary, even those which have low incidence, so that we are well-prepared during threats of reemerging parasitic infections.In this special issue, we bring together several reviews as well as original reports that are intended to provide a summary of some of the current knowledge regarding the \u201cimmunology and cell biology of parasitic diseases.\u201d These papers include basic, clinical, and epidemiologic studies that we believe are interesting and very important in our field. The first section of this special issue is focused on helminthic diseases ranging from vaccine development to helminth therapy and includes some research on the basic mechanisms of susceptibility, modulation, and protection against these parasites. Schistosoma mansoni Antigen in Cutaneous Leishmaniasis Patients,\u201d describes how helminth-derived molecules from S. mansoni can modulate dendritic cell activities during Leishmania infection. Additionally, V. H. Salazar-Casta\u00f1on et al. have written an interesting review on how different helminth infections can improve or worsen the development of malaria in \u201cHelminth Parasites Alter Protection against Plasmodium Infection.\u201d Data from their epidemiological and experimental studies indicates that helminth infections are a double-edged sword, in the context of malaria.The work by D. M. Lopes et al., \u201cDendritic Cell Profile Induced by T. solium in their original paper \u201cCharacterization of a Thioredoxin-1 Gene from Taenia solium and its Encoding Product.\u201d Thioredoxin-1 is an essential component of the thioredoxin system and it performs functions such as antioxidative, protein-reducing, and signal-transducing ones for development, proliferation, migration, apoptosis, inflammation, and metabolism. It is secreted by T. solium and is able to modify the immune response by driving a Th2 biased response and allowing for the establishment of this parasite. Further, E.-V. Marcela et al. studied how the metacestodes of T. crassiceps \u201ccommunicate\u201d when they grow in vitro in a crowded manner. Their paper is entitled \u201cCrosstalk among Taenia crassiceps (ORF Strain) Cyst Regulates Their Rates of Budding by Ways of Soluble and Contact Signals Exchanged between Them.\u201dThe next series of papers are focused on the biology of helminths. L. Jim\u00e9nez et al. characterize the thioredoxin-1 gene and gene product from Taenia crassiceps without Modifications of Parasite Loads.\u201d Using the same model, M. Khumbatta et al. show the important role that somatostatin has on the immune response and susceptibility to experimental cysticercosis in their paper \u201cSomatostatin Negatively Regulates Parasite Burden and Granulomatous Responses in Cysticercosis.\u201d A couple more papers focused on the immune response elicited by different helminth parasites. A. Prasad et al. report the first advances on immune response to the flat worm Paramphistomum epiclitum in their contribution \u201cEvaluation of Antibody Response to Various Developmental Stage Specific Somatic Antigens of Paramphistomum epiclitum in Goats.\u201d Y. Gu et al. report a promising experimental vaccine against trichinellosis in their paper \u201cProtective Effect of a Prime-Boost Strategy with the Ts87 Vaccine against Trichinella spiralis Infection in Mice.\u201d We have another paper on immunoregulation by helminths in the contribution of Y. 
Ledesma-Soto et al., \u201cExtraintestinal Helminth Infection Limits Pathology and Proinflammatory Cytokine Expression during DSS-Induced Ulcerative Colitis: A Role for Alternatively Activated Macrophages and Prostaglandins.\u201d This paper suggests that helminth infections can modulate intense inflammatory processes such as colitis through different mechanisms and ameliorate signs of illness in mice.Next, we put together a series of papers related to the development and regulation of immune responses to different helminths. K. E. Nava-Castro et al. demonstrated how early exposure to estrogens can imprint the immune response and have a positive or negative outcome during adulthood. Their results can be found in the paper \u201cDiethylstilbestrol Exposure in Neonatal Mice Induces Changes in the Adulthood in the Immune Response toFinally in this helminth section, B. Moguel et al. show in their review \u201cTransfection of Platyhelminthes\u201d how knowledge of the genome of helminths and genetic manipulation can be useful for designing new and more effective antihelminthic drugs. They also explain the molecular crosstalk that occurs between the host and parasite which has been partly inaccessible to experimentation.P = 0.049) and homozygous mutant were significantly associated with amoebic liver abscess when compared with homozygous wild type (QQ).The second section of this special issue deals with protozoan infections which are widely spread around the world. We started this section with an old \u201cfriend\u201d from developing countries, the amoeba, which causes amebiasis and remains as a public health problem in this part of the world. A. Aceves-Cano et al. present an original research \u201cMorphological Findings in Trophozoites during Amoebic Abscess Development in Misoprostol-Treated BALB/c Mice,\u201d where they show how trophozoites are altered with this kind of treatment. At human level A. K. Verma et al. contributed with the paper \u201cThe Trend in Distribution of Q223R Mutation of Leptin Receptor Gene in Amoebic Liver Abscess Patients from North India: A Prospective Study,\u201d they show that heterozygous mutant and E. dispar (a noninvasive strain). Talam\u00e1s-Lara et al. demonstrated a dramatic difference in the ability to phagocyte in these two species in their paper, \u201cErythrophagocytosis in Entamoeba histolytica and Entamoeba dispar: A Comparative Study.\u201d They discovered that E. histolytica displayed a superior erythrophagocytosis activity which possibly contributes to its more pathogenic nature.D. M. Meneses-Ruiz et al. developed a new experimental vaccine focused on the expression of a lectin in their paper \u201cProtection against Amoebic Liver Abscess in Hamster by Intramuscular Immunization with an Trypanosoma, Leishmania, and Plasmodium.Other protozoan parasites included in this special issue are Trypanosoma cruzi Reduce Cross-Reaction with Leishmania spp. in Serological Diagnosis Tests.\u201d The aim of this study was to improve serological tests already standardized for Chagas disease diagnosis, by using a high molecular weight protein fraction from T. cruzi extracts. They developed an easier and cheaper assay which is much needed in poor countries that suffer from this debilitating disease. From the same group, A. Vizca\u00edno-Castillo et al. describe the processes of acute inflammation derived after experimental T. 
cruzi infection in their original paper \u201cExacerbated Skeletal Muscle Inflammation and Calcification in the Acute Phase of Infection by Mexican Trypanosoma cruzi DTUI Strain.\u201d In Leishmaniasis research, we have several interesting papers, such as \u201cCK2 Secreted by Leishmania braziliensis Mediates Macrophage Association Invasion: A Comparative Study between Virulent and Avirulent Promastigotes\u201d by A. M. B. Zylbersztejn et al., where the authors comparatively analyze the effect of the kinase enzyme CK2 of virulent and avirulent L. braziliensis strains on parasite growth and macrophage invasion. They show interesting data that demonstrates that CK2 has a critical influence as a mechanism of invasion used by L. braziliensis. On the clinical side, N. Verma et al. show important cases where immunological changes were observed according to the treatment carried out by the patients, in their work \u201cClinicopathological and Immunological Changes in Indian Post Kala-Azar Dermal Leishmaniasis (PKDL) Cases in relation to Treatment: A Retrospective Study.\u201d In another review by S. A. G. Da-Silva et al. entitled \u201cThe Dialogue of the Host-Parasite Relationship: Leishmania spp. and Trypanosoma cruzi Infection,\u201d they have analyzed the host-parasite interaction in both Leishmania spp. and Trypanosoma cruzi infections. This review is comprehensive and is interesting since it indicates that there are some differences in host invasion strategies between these two major protozoan parasites. Three schematic figures summarizing the main escape mechanisms of Trypanosoma and Leishmania and modulation of host cells are included.A. Y. Cervantes-Land\u00edn et al. contributed their work \u201cHigh Molecular Weight Proteins of P. berghei ANKA,\u201d where the authors highlight the important effect of sexual hormones on the development of Plasmodium infection.Another interesting original study related to malaria is presented by N. A. Mosqueda-Romo et al. in \u201cGonadal Steroids Negatively Modulate Oxidative Stress in CBA/Ca Female Mice Infected withFinally, a review of the basic cell biology of protozoan parasites and their virulence is discussed by C. Mu\u00f1oz et al., in their latest paper, \u201cRole of the Ubiquitin-Proteasome Systems in the Biology and Virulence of Protozoan Parasites.\u201dWe believe that this compilation of original research as well as the latest reviews written by authors from all around the world is a small sample about interesting yet complicated field of immunoparasitology. This research shows us that it is necessary to develop a deeper knowledge about the different mechanisms that parasites employ to invade the host and to avoid antiparasitic immune responses. We also require a better understanding of how they develop resistance to constant exposure to old drugs.Luis I. TerrazasAbhay R. SatoskarMiriam Rodriguez-SosaAbraham Landa-Piedra"} {"text": "Sapajus spp). Specifically, variation in the facial width-to-height ratio (fWHR) was positively correlated with alpha status and a composite measure of assertiveness. This novel finding adds to a growing body of evidence indicating that variation in facial structure reliably maps onto individual differences in dominance-related phenotypes.A recent paper by Lefevre et al. 
in PLoS Research into fWHR was propelled by an anthropological study of human skulls indicating that fWHR was a size-independent sexually dimorphic feature of the human skull that arose around puberty coincident with the rise in pubertal testosterone with aggression , a careful examination of Figure 4 from Lefevre et al. (r(23) = 0.54, p = 0.005], but not alpha monkeys . I decided to perform a re-analysis of Lefevre and colleagues' data which were freely available on the PLoS One website to investigate the extent to which the link between fWHR and assertiveness was driven by low status monkeys. In this model, assertiveness was the dependent variable and I included fWHR and alpha status on Step 1 and the fWHR-\u00d7-alpha status interaction on Step 2. As per Lefevre et al. .In their paper, Lefevre et al. reportede et al. certainle et al. , I also B = \u22121.32, SE = 0.30, p < 0.01) and fWHR and a trend for a fWHR-\u00d7-alpha status interaction . Lefevre et al. (p < 0.05). However, the lack of statistical significance is almost certainly due to a lack of statistical power given the small sample size (n = 43). Because previous work has found that the relationship between fWHR and aggressive behavior is specific to individuals with relatively low social status but not alpha monkeys , sex-\u00d7-fWHR (p = 0.84), or sex-\u00d7-alpha status-\u00d7-fWHR interactions (p = 0.15), suggesting that the relationships between fWHR, alpha status, and assertiveness were not moderated by sex.Results revealed main effects of alpha status . Thus, future work will require a larger sample to verify the extent to which fWHR maps onto dominance-related traits, whether such effects are moderated by social status, and whether relationships between fWHR, social status, and assertiveness hold across males and females.One important finding from Lefevre et al. and the In summary, these findings in brown capuchin monkeys, along with work in humans (Goetz et al., The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Glycation is important in the development of complications of diabetes mellitus and may have a central role in the well-described glycaemic memory effect in developing these complications. Skin fluorescence has emerged over the last decade as a non-invasive method for assessing accumulation of advanced glycation endproducts. Skin fluorescence is independently related to micro- and macrovascular complications in both type 1 and type 2 diabetes mellitus and is associated with mortality in type 2 diabetes. The relation between skin fluorescence and cardiovascular disease also extends to other conditions with increased tissue AGE levels, such as renal failure. Besides cardiovascular complications, skin fluorescence has been associated, more recently, with other prevalent conditions in diabetes, such as brain atrophy and depression. Furthermore, skin fluorescence is related to past long-term glycaemic control and clinical markers of cardiovascular disease. This review will discuss the technique of skin fluorescence, its validation as a marker of tissue AGE accumulation, and its use as a clinical tool for the prediction of long-term complications in diabetes mellitus. Worldwide, the prevalence of diabetes mellitus is increasing at an alarming rate, mainly because of an increasing incidence of type 2 diabetes mellitus (T2DM) . 
The conAGEs are formed in a multistep process by glycation and oxidation of free amino groups in proteins, lipids, and nucleic acids. These AGEs promote tissue dysfunction through cross-linking of long-lived molecules and through binding to the receptor for advanced glycation endproducts (RAGE) . AGEs acet al. [In 1986, Monnier et al. first pret al. .However, assessment of AGEs in skin biopsy is not suitable for clinical use, because of the invasive method and high costs. In 2004, a first report appeared that skin fluorescence, non-invasively assessed, was related to AGE levels in dermal biopsies in diabetes patients and in healthy controls . The derSo far, two devices have been used for assessment of skin fluorescence in clinical studies: the AGE Reader and the SCOUT DS SF spectrometer . Although both devices have been developed to reflect skin AGE accumulation, the measurement techniques differ, and the measurement values are not directly comparable. Therefore, skin fluorescence as measured by the AGE Reader, previously called Autofluorescence Reader (AFR), will be referred to as skin autofluorescence (SAF). Skin fluorescence as measured by the SCOUT device will be referred to as skin intrinsic fluorescence (SIF). When general statements on either device are made, the term \u2018skin fluorescence\u2019 will still be used.SAF is measured on the volar side of the forearm. Care should be taken to perform this measurement in an area with normal skin with minimal sunlight exposure. The AGE Reader contains a UV-A light emitting lamp that emits light with a peak wavelength of 360\u2013370\u00a0nm. Light reflected and emitted in the 300\u2013600\u00a0nm range from the skin is measured in the research version by an inbuilt spectrometer, using a UV glass fibre. In later, simpler versions, the spectrometer is replaced by a set of photodiodes with peak sensitivities for different wavelengths. Before every measurement, dark and white reference readings are performed to correct for background light and to calculate reflectance, respectively. Initially, SAF measurements were not considered for analysis if the reflectance level was less than 10\u00a0%. After introduction of more sophisticated and validated skin colour correction software (version 2.3), this limit was lowered to 8\u00a0%. This adaptation allowed the use of SAF in a broader group of persons with dark skin colour. To correct for differences in light absorption, SAF is calculated as a ratio of excitation light (300\u2013420\u00a0nm) to emitted light (420\u2013600\u00a0nm). Consequently, SAF is expressed in arbitrary units (AU). Intra-observer variation of repeated autofluorescence measurements is 5\u00a0% to 6\u00a0% within one day in different publications [Although the exact molecular structures and the diversity of species contributing to skin fluorescence are not established, tissue fluorescent species have their own specific excitation-emission spectrum. The used wavelength band of the AGE Reader was selected such that the major contribution in fluorescence comes from fluorescent AGEs. The majority of identified AGEs are characterized by fluorescence in the area around an excitation wavelength of 370\u00a0nm and an emission of 440\u00a0nm \u201311. For SCOUT measurements are performed on the inner side of the forearm and are corrected for factors that affect light scattering and absorption as well. SIF is excited with a light-emitting diode (LED) centred at 375\u00a0nm, and the emission is detected over 435\u2013655\u00a0nm . 
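To make the preceding description concrete, the sketch below computes an autofluorescence value from a measured spectrum as the mean intensity in the emission band (420–600 nm) divided by the mean intensity in the excitation band (300–420 nm), multiplied by 100, which is how the AGE Reader output is commonly described in the literature. This is a simplification offered for orientation only: the exact ratio convention, scaling, dark/white referencing, reflectance cut-off and skin-colour correction are device- and software-specific, and the function name, wavelength grid and spectrum used here are hypothetical.

```python
import numpy as np

def autofluorescence_au(wavelengths_nm, intensities,
                        excitation_band=(300, 420), emission_band=(420, 600)):
    """Simplified skin-autofluorescence estimate in arbitrary units (AU).

    Ratio of mean intensity in the emission band (420-600 nm) to mean
    intensity in the excitation band (300-420 nm), x100. Real devices also
    apply dark/white referencing, a reflectance cut-off (~8-10 %) and
    skin-colour corrections, which are omitted here.
    """
    wl = np.asarray(wavelengths_nm, dtype=float)
    inten = np.asarray(intensities, dtype=float)
    exc = inten[(wl >= excitation_band[0]) & (wl < excitation_band[1])]
    emi = inten[(wl >= emission_band[0]) & (wl <= emission_band[1])]
    return 100.0 * emi.mean() / exc.mean()

# Hypothetical 1-nm spectrum: strong reflected excitation, weak fluorescence tail
wl = np.arange(300, 601)
spectrum = np.where(wl < 420, 1000.0, 25.0)
print(f"SAF ~ {autofluorescence_au(wl, spectrum):.2f} AU")  # ~2.5 AU for this toy spectrum
```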
In secoSeveral validation studies have been conducted to determine the performance of SAF as a tool for measuring dermal AGE accumulation. First, SAF was compared with several specific fluorescent and non-fluorescent AGEs in skin biopsies of diabetes patients and control subjects. Later, SAF was validated in a comparable way in renal failure and some other conditions. Moreover, SAF was compared to another classical AGE assay method for measuring AGEs, namely collagen-linked fluorescence , 15, 16.versus control subjects. This study demonstrated that SAGE, a skin derived skin fluorescence parameter, could accurately classify disease in a case-control population [SIF was originally based on a comparison of non-invasive fluorescence spectroscopy and skin collagen AGEs (determined using chromatographic and mass spectrometric methods) in a pig skin collagen model . Furtherpulation . HoweverA remark on the use of skin fluorescence deserves attention; skin fluorescence measurements may be influenced by dark pigmentation, the use of skin products, the fasting state, and lifestyle factors.For the AGE Reader, it has been demonstrated that SAF levels show expected relations with calendar age and presence of diabetes up to Fitzpatrick skin phoSeveral skin products may affect the measurement of SAF, especially sunscreens and skin tanners. However, some conventional skin creams may also make the measurement unreliable. When possible, persons should be asked to avoid use of skin products several days before a SAF measurement. Otherwise, persons should be asked about recent use of skin products .et al. [An increase in SAF was seen 2\u00a0h postprandial in a study performed by Stirban et al. . This maet al. . TherefoN-acetyltransferase 2 polymorphism, (pack-years of) smoking, and coffee consumption. In the T2DM population, 47\u00a0% of the variance in SAF was explained by the same factors except for coffee consumption.As AGEs accumulate over time, SAF increases with aging . In addiversus post-glucose challenge) was comparable to the variation (5.5\u00a0%) for two non-fasting measurements [For the SCOUT device, it was claimed that the skin colour range, in which reliable measurements could be made, was somewhat broader than for the AGE Reader. In subjects at risk for T2DM, it is observed that SIF measurements are not majorly influenced by the fasting status since the variation (5.7\u00a0%) for fasting subjects studies showed an independent relation between SAF and both micro- and macrovascular complications in T2DM \u201355. Ahdiet al. demonstr [et al. confirme [et al. showed tet al. [Regarding diabetic retinopathy, two Japanese studies specifically investigated the relationship between SAF and the severity of the disease in T2DM , 57. BotIn T1DM, four studies reported the relation of SAF with retinopathy, nephropathy, and/or neuropathy in the past 5\u00a0years , 58\u201360. In summary, these recent reports show consistent evidence of an association between SAF and end-organ complications in diabetes. Several studies have now confirmed that SAF is associated with retinopathy, even after correction for age and nephropathy. In addition, a few studies established an independent association between SAF and vascular complications in diabetes patients of Asian descent. However, no relation was found between SAF and vascular complications in diabetes patients with darker skin colour types. 
Unfortunately, none of the recent studies had a prospective design, but from earlier studies, some evidence remains for the predictive value of SAF in the development of diabetes complications in T2DM , 50. Theet al. [et al. [et al. [et al. [Three studies have been performed in which the association between SIF and diabetes complications is investigated \u201363. All et al. reported [et al. showed tet al. [Supporting the association between SAF and cardiovascular complications, SAF has been related to several clinical markers of cardiovascular disease: diastolic function, arterial compliance, markers of endothelial function, and markers of atherosclerosis. A relation between SAF and functional anatomical changes, reflected in diastolic dysfunction, was first shown in a study by Hartog et al. in patieet al. , 65.et al. [et al. [et al. [et al. [et al. [et al. [Several studies have shown that SAF is related to arterial compliance evaluated by arterial pulse wave velocity (aPWV). Ueno et al. and Janu [et al. describe [et al. , de Groo [et al. , and Lla [et al. . In acco [et al. recentlyet al. [et al. [Relations between SAF and markers of microvascular and endothelial function have been described mainly in diabetes and renal failure patients. Araszkiewicz et al. found a [et al. observedIn earlier studies, it had already been shown that AGEs measured in skin samples using collagen fluorescence are associated with coronary calcium score, as a surrogate marker for coronary atherosclerosis. More recently, SAF has been reported to be associated with clinical markers of atherosclerosis, such as intima media thickness and coronary calcification score, by several groups , 73\u201375. Vascular complications are common and receive much attention in diabetes research. More recently, diabetes have also been linked to other disorders, such as impaired cognitive function, Alzheimer\u2019s disease , and depet al. [et al. [Spauwen et al. reportedet al. . Moran e [et al. demonstret al. [Van Dooren et al. showed tDiabetes has been shown to be a risk factor for retinal detachment after cataract surgery . In a piAGE measurements, SAF in particular, have been included in some large cohorts, such as the Maastricht study (enricheIt has been extensively shown that skin fluorescence is associated with a wide variety of long-term complications in diabetes mellitus. This association is supported by the relation between skin fluorescence and AGE accumulation in several tissues with slow turnover, between skin fluorescence and past long-term glycaemic control, and between skin fluorescence and clinical markers of cardiovascular disease. A limited number of studies have also provided evidence for an independent predictive role of skin fluorescence for cardiovascular complications and mortality in diabetes. More prospective studies, with longer follow-up period and larger group size, are being conducted to establish the predictive role and potential benefit of skin fluorescence in disease management of diabetes patients."} {"text": "In recent years, PCR has been become widely applied for the detection of trypanosomes overcoming many of the constraints of parasitological and serological techniques, being highly sensitive and specific for trypanosome detection. Individual species-specific multi-copy trypanosome DNA sequences can be targeted to identify parasites. Highly conserved ribosomal RNA (rRNA) genes are also useful for comparisons between closely related species. 
The internal transcribed spacer regions (ITS) in particular are relatively small, show variability among related species and are flanked by highly conserved segments to which PCR primers can be designed. Individual variations in inter-species length makes the ITS region a useful marker for identification of multiple trypanosome species within a sample.Trypanosoma congolense, Trypanosoma brucei and Trypanosoma vivax and compared to a modified (using eluate extracted using chelex) ITS-PCR reaction.Six hundred blood samples from cattle collected in Uganda on FTA cards were screened using individual species-specific primers for Trypanozoon, a prevalence of 10.5% was observed as compared to 0.2% using ITS PCR (Kappa\u2009=\u20090.03). For Trypanosoma congolense, the species-specific PCR reaction indicated a prevalence of 0% compared to 2.2% using ITS PCR (Kappa\u2009=\u20090). For T. vivax, species-specific PCR detected prevalence of 5.7% compared to 2.8% for ITS PCR (Kappa\u2009=\u20090.29).The comparative analysis showed that the species-specific primer sets showed poor agreement with the ITS primer set. Using species-specific PCR for Trypanozoon (T. b. brucei s.l). While ITS primers are useful for studying the prevalence of trypanosomes causing nagana (in this study the species-specific primers did not detect the presence of T. congolense) there were discrepancies between both the species-specific primers and ITS for the detection of T. vivax.When selecting PCR based tools to apply to epidemiological surveys for generation of prevalence data for animal trypanosomiasis, it is recommended that species-specific primers are used, being the most sensitive diagnostic tool for screening samples to identify members of Routine diagnosis of trypanosomiasis using classical parasitological approaches shows poor sensitivity under field conditions . This isTrypanosoma brucei s.l., Trypanosoma congolense and Trypanosoma vivax) and PCR based methods for their amplification developed.In recent years, PCR has been widely applied for the detection of trypanosomes and has been shown to be highly sensitive and specific in the laboratory . The useTrypanozoon , the most common target is the 177\u00a0bp DNA satellite repeat sequence originally described by Sloof et al. [Trypanozoon genomic DNA, were able to detect 0.1\u00a0pg of parasite DNA, i.e. the amount of DNA equated to that present in a single trypanosome [et al. [Trypanozoon are used to identify T. b. brucei as a first stage in the process of identifying human infective parasites within an animal population. To further discriminate T. b. brucei from T. b. rhodesiense PCR reactions targeting the single copy serum-resistance-associated (SRA) gene [SRA gene as well as the gene for phospholipase C (GPI-PLC) was developed [T. b. rhodesiense, but only GPI-PLC is present in T. b. brucei, so if a sample shows positive amplification for GPI-PLC this indicates that sufficient T. brucei s.l. genomic material is present to detect a single copy gene while the presence or absence of SRA determines whether T. b. rhodesiense is present.For detection of f et al. that exipanosome . Weaker [et al. found thRA) gene have beeRA) gene . To permeveloped . Both ofTrypanosoma congolense (savannah) is a highly pathogenic trypanosome that is the most widespread, in terms of both geographical and host range across Sub Saharan Africa [T. congolense, which target a satellite sequence of 316\u00a0bp have been developed [T. 
congolense savannah was 0.1\u00a0pg of parasite DNA, again the amount of DNA in a single trypanosome and weak bands containing the amplification product could be detected when the DNA content was 0.01\u00a0pg or an estimated 1/10th that of a single parasite [n Africa . Specifieveloped ,14. The parasite .Trypanosoma vivax can be identified using universal primers targeting a fragment of the gene encoding T. vivax specific antigen [T. vivax isolates from diverse geographic locations in Africa and South America. antigen ,16; thisThese species-specific primers show good sensitivity and specificity against their target species. However, to screen large numbers of samples such as are derived from naturally infected hosts to determine prevalence data, multiple PCRs need to be undertaken for each sample which is both time consuming and expensive. It would be desirable to be able to apply a single PCR reaction that could simultaneously differentiate all of the trypanosome species present within a sample, making screening of a large number of samples viable in terms of time and cost.et al. [et al. [et al. [et al. [et al. [et al. [et al. [Trypanozoon and T. congolense but reported more T. vivax positives using the ITS primer sets than were derived using the species-specific primers. The species specific primer set had been developed for West African T. vivax isolates and this could explain the lower levels of positives generated and therefore the disagreement between the ITS PCR tests and species-specific PCR when applied to East African samples. In disagreement with the work of Thumbi et al. [et al. [et al. [Ribosomal RNA (rRNA) genes are highly conserved and have been proved useful for comparisons of closely related species. Eukaryotic rRNA genes occur as tandem repeats of units separated by a non-transcribed spacer (NTS) region and internal transcribed spacer regions (ITS); the ITS regions are relatively small, show variability among related species and are flanked by highly conserved segments to which PCR primers can be designed. Individual variations in inter-species length makes the ITS region a useful marker for species differentiation in trypanosomes and this and their high copy number, of around 200 makes the (ITS) a useful target for species differentiation in trypanosomes and other species ,18. Njiret al. and Cox [et al. is a sin [et al. compared [et al. and Cox i et al. a large i et al. , showed [et al. were siget al. [et al. [Although FTA cards present an efficient method of collection, storage and transportation of field material using one punch to seed the PCR reaction can reduce the chances of a positive PCR results . This iset al. have rec [et al. to incluTrypanozoon[T. congolense[T. vivax[et al. [In this study we have extensively tested a substantive number of cattle samples from Uganda to compare the application of species-specific primer sets targeting ypanozoon,12, T. congolense and T. v[T. vivax,15 with x[et al. .A total of 600 blood samples were available for testing that had been collected from cattle in Uganda in 2006 from baseline sampling for the Stamp out Sleeping Sickness campaign . SamplesAt the national and district levels, the study was conducted with the approval of the Coordinating Office for Control of Trypanosomiasis in Uganda (COCTU) as well as the District Veterinary Officers (DVOs) in each of the study districts.et al. [et al. [et al. [et al. [In order to allow comparison between the species-specific and the ITS-PCR of Cox et al. the prot [et al. . 
Ten dis [et al. . For PCR [et al. for both [et al. . The secTrypanozoon DNA , T. con TCK-PCR ,14) and (TV-PCR ). The ba [et al. are showT. brucei s.l was used. Negative controls consisting of water only instead of eluate and washed blank discs were run with each PCR.One positive control (genomic DNA) for each trypanosome species was used in the corresponding species specific PCR, while for ITS-PCR, a positive control of Amplification products were resolved in 1% (w/v) agarose gels along with 100\u00a0bp molecular weight Superladder . The agarose gel was prepared in 1 \u00d7 TBE stained with 5\u00a0\u03bcM ethidium bromide. The gels were run in 1xTBE, 5\u00a0\u03bcM ethidium bromide for at least 45\u00a0minutes at 100 volts and visualized under an ultra violet transilluminator .When the amplification reaction gave a signal of the expected size according to the set of primers used without any signal in the negative control, infection was considered to be confirmed.http://www.statstodo.com/CohenKappa_Pgm.php) was used to compare the three PCR reactions targeting the Trypanozoon. All other Kappa tests were conducted using WinPepi with a ion 3.15 .Trypanozoon, species-specific PCR using TBR detected 63 \u2009=\u20098.2%-13.2%) parasitic events, GPI-PLC-PCR detected 46 parasitic events while ITS-PCR detected 1 parasitic events parasitic events while ITS-PCR detected 13 . The species-specific reactions and ITS-PCR did not agree positively on any samples with ITS-PCR identifying 13 positives in total. Kappa testing showed poor agreement between the species-specific reactions and ITS-PCR .Comparing the sensitivity parasitic events while ITS-PCR detected 17 parasitic events. TV-PCR and ITS-PCR only agreed positively on eight animals with TV-PCR and ITS-PCR identifying a further 26 and nine positives respectively. Kappa testing showed poor agreement between TV-PCR and ITS-PCR .Comparing the sensitivity infected with T. theileri. A species-specific PCR reaction that targets T. theileri satellite sequence of 500\u00a0bp size was developed by Rodrigues et al. [T. theileri DNA (data not shown).In addition to these pathogenic trypanosome species, ITS-PCR was able to identify non-pathogenic s et al. and was et al. [Trypanozoon it performed much better for the detection of T. congolense when compared to species-specific reactions.The use of ITS-PCR in diagnosis of African trypanosomes allows the identification of several trypanosome species in the same reaction. This is both a saving in time and cost, as less PCR reactions need to be carried out to gain an understanding of the prevalence of trypanosomes in an area. Although FTA cards have simplified the collection and transport of blood samples , often Det al. modifiedTrypanozoon while the species-specific reaction diagnosed 63 parasitic events. The difference between the sensitivity of ITS-PCR and species-specific PCR reactions observed here might relate to the frequency of the target within the parasite genome. The copy number of ITS target in the genome is 100\u2013200 compared to 10,000 for TBR and the higher sensitivity of the latter reaction could be due to the increased copy number of the target DNA. However, good agreement was seen between TBR and GPI-PLC; with the latter targeting a single copy gene [et al. [Trypanozoon DNA while TBR-PCR identified 21% (n\u2009=\u2009245) of samples; these samples were collected on FTA\u00aecards from cattle in Uganda but PCR was conducted directly on the discs after washing. 
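For orientation, the agreement and prevalence figures reported above can be reproduced from paired test results. The published analysis used the StatsToDo Cohen's kappa calculator and WinPepi; the sketch below performs the same calculations directly, using paired calls constructed to match the reported marginal counts for Trypanozoon (63/600 positive by TBR-PCR, 1/600 by ITS-PCR) and assuming, for illustration only, that the single ITS-positive sample was also TBR-positive. The confidence interval is a Wilson score interval and may differ marginally from the intervals quoted in the results.

```python
import math

def cohens_kappa(a, b):
    """Cohen's kappa for two binary (0/1) tests applied to the same samples."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    pa1, pb1 = sum(a) / n, sum(b) / n
    p_exp = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (p_obs - p_exp) / (1 - p_exp)

def prevalence_wilson_ci(positives, n, z=1.96):
    """Prevalence with a Wilson score 95% confidence interval."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

# Hypothetical pairing of the 600 samples; the ordering encodes the assumption
# that the one ITS-PCR positive was among the 63 TBR-PCR positives.
species_specific = [1] * 63 + [0] * 537
its_pcr          = [1] * 1  + [0] * 599
print("kappa =", round(cohens_kappa(species_specific, its_pcr), 3))
print("prevalence, 95% CI =", tuple(round(v, 3) for v in prevalence_wilson_ci(63, 600)))
```

With these assumptions the script returns a kappa of about 0.028 (the study reports 0.03) and a prevalence of 10.5 % with a 95 % CI of roughly 8.3–13.2 %, in line with the figures given above.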
However, more research by de Clare Bronsvoort et al. [et al. [et al. [et al. [et al. [et al. [Trypanozoon (1.9-3.9%).In this study, ITS-PCR identified only one (0.2%) sample to contain opy gene it wouldt et al. had show [et al. showed a 21% n\u2009=\u20095 of samp [et al. reported [et al. extracte [et al. , the speT. congolense savannah type, T. congolense forest type and T. congolense kilifi type using ITS PCR even after prolonged separation time of the bands by electrophoresis. This was attributed to the band sizes of the amplified products of the three types are close to each other . For this reason, any product with a band size above 1400\u00a0bp was classified as T. congolense species. The PCR reactions specific for T. congolense savannah, T. congolense forest and T. congolense kilifi were negative for all the examined samples with Kappa testing suggesting poor agreement between the two diagnostic PCR tests. Previously Cox et al. [T. congolense while only one sample was diagnosed using specific reactions for T. congolense savannah. The inability of the species-specific PCR reactions to identify the T. congolense species diagnosed with ITS-PCR could be explained by the existence of a new T. congolense sub-species which is similar to T. congolense savannah and T. congolense forest in ITS target, while there is no specific reaction to identify these isolates at present. Isoenzyme analysis by Gashumba et al. [T. congolense from Uganda was separated from other T. congolense isolates suggesting that further analysis of T. congolense from Uganda may be required.It was difficult to differentiate between x et al. had compa et al. had prevT. vivax specific antigen were used for the diagnosis of T. vivax species. The universal primer set diagnosed 34\u2009T. vivax infections while ITS-PCR identified 17 positive samples. Another set of primers targeting satellite DNA sequence were used and none of the samples were positive with these primers (data not shown). This was not unexpected since the primers targeting the satellite sequence are not present in some isolates of T. vivax[et al. [T. vivax infections in cattle blood samples using the primer set amplifying the satellite sequence, while ITS-PCR identified six T. vivax infections. Although Cox et al. [T. vivax may not accurately assess the level of infection within wild animal samples from Tanzania [In the present study, universal primers targeting the gene encoding T. vivax. Cox et x[et al. identifix et al. found liT. theileri was identified in 7.6% of samples using the modified nested ITS PCR. The prevalence obtained was similar to that obtained by Cox et al. [Trypanozoon prevalence village (3%) but lower than that seen in a high Trypanozoon prevalence village (47%). As there were no positive results using the species-specific PCR reaction targeting T. theileri it was difficult to gauge the sensitivity of the modified ITS PCR in the current work.x et al. in a lowTrypanozoon within cattle populations, on samples that are collected on to FTA cards (that includes the zoonotic T. b. rhodesiense) species-specific primers are more suitable in terms of sensitivity. However, as species-specific primers did not identify any T. congolense and there were differences in the amplification of T. vivax, the ITS reactions of Cox et al. [et al. [The present study suggests that for estimating prevalence of x et al. and othe [et al. 
) might bThe authors declare they have no competing interests and the sponsors had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.HAA carried out the study, the molecular approaches and participated in preparing the manuscript. KP and SCW designed the study supervised the molecular approaches and participated in preparing and editing the manuscript. ETM carried out the statistical analyses and participated in preparing and editing the manuscript. All authors read and approved the final version of the manuscript."} {"text": "Using ultrasonic devices in endodontics can enhance the antibacterial and tissue dissolving ability of different root canal irrigants such as sodium hypochlorite (NaOCl) which is the most common irrigant with excellent antibacterial and tissue dissolving abilities. However, due to its high surface tension, its penetration into the irregularities of the root canal system is a challenge. The purpose of this paper was to review the different ultrasonic devices, different types of ultrasonic irrigation, the effect(s) of ultrasonic activation on the antibacterial and biofilm-removal abilities of NaOCl as well as the effect of ultrasonic activation on the smear layer removal ability of NaOCl. Accordiet al. , only 40et al. , 4. Enterococcus faecalis can penetrate the dentinal tubules of root canal walls up to 800-1000 \u00b5m deep [\u00b0C [The aim of root canal irrigation is to remove the pulp tissue remnants and microorganisms (in either planktonic or biofilm forms) , eliminadeep [\u00b0C .Active and passive root canal irrigationPassive irrigation is conducted by slow dispensing of the irrigant of choice into a canal through a variety of different gauged needles [ needles . In orde needles . Passive needles .active irrigation initiates dynamics and flow within the fluid and thus improves root canal disinfection. In well-shaped canals, fluid activation has a critical role in cleaning and disinfection of the canal irregularities by facilitating the fluid penetration through all aspects of the root canal system [On the other hand, l system , 11. Physics of ultrasonicUltrasound is a vibration or acoustic wave with similar nature as sound but with a frequency higher than the highest frequency detectable by the human ear (approximately 20000 Hz) . Ultrasomagnetostriction that converts the electromagnetic energy into mechanical energy. The second method works according to the piezoelectric principle and uses a crystal which changes in size by applying electrical charge [There are two basic methods for producing an ultrasonic wave. First is l charge , 15. TheMagnetostrictive units have two major drawbacks for endodontic application. First they have elliptical movement and oscillate in a figure-eight manner and second, they generate heat, so adequate cooling is required. vs. 24 in magnetostrictive devices). 
The other advantage is the piston-like linear movement of tip in piezoelectric units from back to front which is ideal for endodontic treatment [One major advantage of piezoelectric units over magnetostrictive devices are production of more cycles per second \u201d, \u201cUltrasonics Activation AND Sodium Hypochlorite (47 articles)\", \"Ultrasonics AND NaOCl (51 articles)\u201d, \u201cPassive Ultrasonic Activation AND Sodium Hypochlorite (24 articles)\u201d.Effects of ultrasonic irrigation in endodonticsUsing ultrasonic energy in endodontic treatment has improved treatment quality in many aspects, including access to root canal entry holes, cleaning, shaping and filling the canals, eliminating the obstructions and intracanal materials and endodontic surgery . Ultrasonic devices can be utilized in two manners; simultaneous combination of ultrasonic irrigation/instrumentation and passive ultrasonic irrigation (PUI) , 18. BecApplying ultrasound for passive irrigation seems more advantageous , 21. ForEffect of ultrasonic energy on antibacterial activity of NaOClNaOCl is the most common root canal irrigant with excellent antibacterial and tissue dissolving abilities . Irrigatin vitro study by Tardivo et al. [E. faecalis. Huque et al. [et al. [et al. [In an o et al. there wae et al. showed t [et al. and Siqu [et al. have indUltrasonics and bacterial biofilmset al. [E. faecalis biofilms. Harrison et al. [According to Bhuva et al. both conn et al. concludeet al. [E. faecalis biofilm. Neelakantan et al. [E. faecalis biofilm compared to the ultrasonic. Bhardwaj et al. showed tn et al. showed tEffect of ultrasonic on smear layer removalet al. [Ahmad et al. claimed et al. showed tet al. also comet al. -37.et al. [et al. [Researchers who found the cleaning effects of ultrasonic beneficial, used the technique only for the final irrigation of root canal after completion of hand instrumentation , 38, 39.et al. , 40 clai [et al. recommenet al. [Prati et al. also menet al. , 44 showet al. [Baumgartner and Cuenin also obset al. evaluateet al. [et al. [et al. [in vitro model. Findings showed that both activation techniques are important adjuncts in removing the smear layer.Mozo et al. showed t [et al. showed t [et al. comparedCurtis and Sedgley showed tet al. [et al. [2O2. Superiority of ultrasonication of the intra-canal irrigant over the manual technique in removing the smear layer was demonstrated by Ribeiro et al. [Kocani et al. showed t [et al. showed to et al. . et al. [et al. [Blank-Goncalves et al. showed t [et al. ultrasonet al. [et al. [Paque et al. confirme [et al. assessedUltrasonics vs. sonic irrigation Sonic instruments use a lower frequency (1000-6000 Hz) compared to ultrasonic instruments (25000 Hz). In both types of instruments the file is connected at an angle of 60-90 degrees to the long axis of the handpiece. However, the vibration pattern of ultrasonic files is different from that of sonic instruments. Ultrasonically activated files have numerous nodes and antinodes across the length of the instrument, whereas sonic files have a single node near the attachment of the file and one antinode at the tip of the instrument. Sonic instruments produce an elliptic, lateral movement, similar to that of ultrasonic files , 17.\u00b0C (in areas close to the tip of the instrument) and 37\u00b0C (away from the tip) when the irrigant was ultrasonically activated for 30 sec without replenishment. 
A cooling effect from 37\u00b0C to 29\u00b0C was recorded when the irrigant was replenished with a continuous flow of irrigant. The temperature of the irrigant was 25\u00b0C. The external temperature stabilized at 32\u00b0C during a continuous flow of the irrigant and reached a maximum of 40\u00b0C in 30 sec without continuous flow. Ahmad [\u00b0C-rise of temperature during a continuous flow of irrigant. The initial temperature of the irrigant was 20\u00b0C. A rise of temperature within these ranges will not cause pathological temperature rises in the periodontal ligament.Cameron reportedw. Ahmad reported1. Superiority of ultrasonic irrigation with NaOCl over passive irrigation with syringe is still controversial.2. Superiority of ultrasonic activation of NaOCl on endodontic biofilms over other irrigation methods is controversial.3. Superiority of ultrasonic activation of NaOCl on smear layer removal is controversial."} {"text": "Iron is essential for the Central Nervous System development, i.e. mielinization process and cellular differentiation as well as correct functioning of neurotransmitters . Preterm"} {"text": "Aptamers are a class of therapeutic oligonucleotides that form specific three-dimensional structures that are dictated by their sequences. They are typically generated by an iterative screening process of complex nucleic acid libraries employing a process termed Systemic Evolution of Ligands by Exponential Enrichment (SELEX). SELEX has traditionally been performed using purified proteins, and cell surface receptors may be challenging to purify in their properly folded and modified conformations. Therefore, relatively few aptamers have been generated that bind cell surface receptors. However, improvements in recombinant fusion protein technology have increased the availability of receptor extracellular domains as purified protein targets, and the development of cell-based selection techniques has allowed selection against surface proteins in their native configuration on the cell surface. With cell-based selection, a specific protein target is not always chosen, but selection is performed against a target cell type with the goal of letting the aptamer choose the target. Several studies have demonstrated that aptamers that bind cell surface receptors may have functions other than just blocking receptor-ligand interactions. All cell surface proteins cycle intracellularly to some extent, and many surface receptors are actively internalized in response to ligand binding. Therefore, aptamers that bind cell surface receptors have been exploited for the delivery of a variety of cargoes into cells. This review focuses on recent progress and current challenges in the field of aptamer-mediated delivery. In contrast to antisense oligonucleotides and small interfering RNAs (siRNAs) that inhibit translation of proteins by Watson-Crick base-pairing to their respective messenger RNAs, aptamers bind to existing proteins with high affinity and specificity, analogous to monoclonal antibodies. Aptamers are typically generated by an iterative screening process of complex nucleic acid libraries (>10 (SELEX) . The SELO-methyl pyrimidines [RNA and DNA aptamers both have theoretical advantages and proponents, but aptamers of comparable affinity and specificity can be generated from RNA or DNA. Since nuclease resistance is critical for aptamer stability in biological fluids, RNA libraries employed in SELEX are front-loaded with 2'-modified nucleotides, most commonly 2'-fluoro- or 2'-imidines . 
Probablimidines , but modimidines .in vitro transcription for testing in vitro and in certain small scale animal models. However, chemical synthesis is necessary for larger scale applications. Since the efficiency of chemical synthesis decreases with oligonucleotide length, it is often necessary to \u201cminimize\u201d or \u201ctruncate\u201d an aptamer prior to study in vivo. Although the technology continues to improve, aptamers longer than 60 nts are not currently amenable to cost-effective chemical synthesis, and the shorter, the better. Due to the relatively small size (8 kDa to 15 kDa) of truncated aptamers, their circulating half-lives are limited not by plasma stability but by renal clearance, which can be improved by conjugation to high molecular weight groups such as polyethylene glycol (PEG) [Most SELEX libraries have random regions ranging from 20 to 60 nucleotides (nts), flanked by constant regions for amplification and transcription, and therefore total lengths ranging from 70 to greater than 100 nts. Sufficient quantities of any length aptamer can be generated by Over the past two decades, this technology has enabled the generation of aptamers to a myriad of proteins including reverse transcriptases, proteases, cell adhesion molecules, infectious viral particles, and growth factors .SELEX has traditionally been performed using purified proteins, and cell surface receptors may be challenging to purify in their properly folded and modified conformations. Therefore, relatively few aptamers have been generated that bind cell surface receptors. However, improvements in recombinant fusion protein technology have increased the availability of receptor extracellular domains as purified protein targets, and the development of cell-based selection techniques has allowed selection against surface proteins in their native configuration on the cell surface. With cell-based selection, a specific protein target is not always chosen, but selection may also be performed against a target cell type with the goal of letting the aptamer choose the target. The past decade has seen the generation of several aptamers that bind to cell surface receptors ,20,21,22et al. [99mTc. In order to assess the tumor uptake and biodistribution property of the radiolabeled anti-TN-C aptamer it was injected intravenously into mice bearing gliobastoma (U251) and breast cancer (MDA-MB-435) tumor xenografts. Scintigraphic images of the tumors were taken 18 hours after injection and revealed that the aptamer was exclusively localized in tumors. This was achieved due to the combined effect of the efficient uptake of aptamer by the tumor and its rapid clearance from the blood [Tenascin-C (TN-C) is an extracellular matrix protein (ECM) that is implicated in the process of tissue remodeling. It is overexpressed in the tumor stroma where it is thought to enhance angiogenesis and invasion ,41. In oet al. performeet al. . A truncbastoma U1 and breet al. [N-acetylated-\u03b1-linked-acidic-dipeptidase) of PSMA. A truncated version of A10 was created that bound to a PSMA-positive (LNCaP) prostate cancer cell line. The binding was selective, because A10 did not bind the PSMA-negative PC3 prostate cancer cell line. Prostate-specific membrane antigen (PSMA) is a type II membrane-associated metallopeptidase that is overexpressed on the surface of prostate cancer cells. PSMA is also expressed in the vasculature of many other solid tumors . Therefoet al. 
selectedBased on these findings and the previous knowledge that PSMA is internalized via clathrin-coated pits to endosome , three det al. [in vitro application. This was the first demonstration that aptamers could be used to deliver lethal siRNAs to targeted cancerous cells. These groups utilized different strategies to accomplish this goal. Chu et al. used a bet al. [in vivo efficacy of this delivery approach, the authors directly injected the chimera into LNCaP tumors that were established in mice. As a control, the same application was performed on mice bearing PC3 (PSMA-negative) tumors. Mice bearing the LNCaP tumors showed reduction in the tumor volume due to the chimera injection (but not the injection control chimeras), whereas no effect was observed in the PC3 tumors. This study clearly demonstrated the specificity and efficacy of the aptamer mediated siRNA delivery approach. In a recent study done by the same group [McNamara et al. also demme group , the chiet al. [In a subsequent study by Wullner et al. , the autet al. [In situ immunofluorescence microscopic analysis and cytotoxic assay demonstrated that the conjugate was internalized into the cells and resulted in targeted cell killing. In parallel, a series of other studies demonstrated that PSMA aptamers could be exploited in a variety of ways to deliver cargo into cells. Gelonin is a ribosomal toxin that can inhibit the process of protein synthesis and is cytotoxic. However, it is membrane impermeable and needs an usher for its cellular entry. In one of the first examples of aptamer-mediated delivery, Chu et al. realizedco-glycolic acid) PLGA were used to solve this targeting problem [In vitro cytotoxicity assays done with LNCaP cells demonstrated that the A10 conjugated Pt(IV) prodrug-PLGA-PEG-nanoparticle was more potent than either free cisplatin or nontargeted (aptamer non-conjugated) nanoparticles. Farokhzad et al. [Since direct conjugation limits the amount of cargo that each aptamer can deliver, other strategies have incorporated aptamers as targeting moieties for a variety of functional polymers and nanoparticles, each of which can carry many molecules. For example, tumor resistance to cytotoxic chemotherapeutic agents is due in part to insufficient delivery to and uptake by cancer cells. Biodegradable nanoparticles (NP) derived from poly, the causative agent of AIDS (Acquired Immuno Deficiency Syndrome) is an enveloped retrovirus ,49 that et al. selectederaction and neuteraction . Zhou et [et al. of the R [et al. . To one et al. [Nucleolin, first described by Orrick et al. is a preet al. . In addiet al. . Consequet al. . Moreoveet al. . Therefoet al. to have anti-proliferative activity and subsequently found to bind nucleolin. This aptamer was therefore not \u201cselected\u201d in the way that most of the other aptamers described in this review were. It was further established that AS1411 inhibited the pro-survival NF-\u03baB signaling pathway [et al. Anti-nucleolin aptamer was found to inhibit the binding of nucleolin to Bcl-2 mRNA. This resulted in the destabilization of the mRNA with a consequent decrease in the level of anti-apoptotic Bcl-2 protein in the breast cancer cells [et al. [in vivo. Currently, the anti-nucleolin aptamer AS1411 is in phase II clinical trails for acute myeloid leukemia and renal cell carcinoma. A 26-nucleotide guanosine-rich (G-rich) DNA sequence (AS1411) was discovered serendipitously by Bates pathway and thuser cells . Studies [et al. in nude et al. [et al. 
used radio-labeled F3-peptide to deliver \u03b1-particle emitting 213Bi isotope into the nucleus of tumor cells in the mouse intraperitonial xenografts model [In addition to the direct anti-cancer effects of AS1411, the fact that nucleolin shuttles between the cell surface, cytoplasm, and nucleus in rapidly dividing cells means that, in principle, nucleolin can be used to deliver cargo into cancer cells. A precedent for this has already been set. A 34-amino acid peptide (F3) discovered by Christian et al. using a ts model . Reductiet al. who conjugated it with a multimodal nanoparticle (MFR-AS1411) and monitored uptake into C6-rat glioma cells by using fluorescence confocal microscopy. For the in vivo tracking of MFR-AS1411, it was injected systemically into nude mice bearing C6 tumor xenografts subjected to both whole body scintigraphic and Magnetic Resonance (MR) imaging techniques. Using these in vivo and in vitro methods, these authors have demonstrated that the anti-nucleolin aptamer can target nanoparticles to cancer cells expressing nucleolin on their cell surface and can potentially be used as a non-invasive imaging tool for the diagnosis of cancer [et al. [In vitro studies demonstrated that the aptamer-TMPyP4 complex was readily taken up by MCF7 cells expressing the nucleolin and were severely damaged in response to photodynamic therapy (PDT) as compared to the control normal epithelial cells. Given the observation that AS1411 is internalized at much lower concentrations than those necessary for anti-proliferative effects [However, the unusually high serum stability and low immunogenicity of the anti-nucleloin aptamer AS1411 make it f cancer . Recentl [et al. used AS1 effects , AS1411 +3 binds TfR, the transferrin-Fe+3-TfR complex undergoes endocytosis and is transported to the endosomal compartment where the iron is released and the TfR-apotransferrin is recycled back to the plasma membrane [et al. [TfR is a ubiquitously-expressed membrane bound protein that is involved in the process of iron uptake into cells and thus maintains cellular iron homeostasis. After transferrin-Femembrane ,66. To t [et al. selected [et al. . PrompteO-glycosylation. Aberrant and incomplete glycosylation of MUC-1 is often associated with various epithelial cancer cells . These abnormally glycosylated proteins, termed as glycoforms, represent a valuable class of tumor biomarkers because they are expressed only on cancer cells and are distinct from those expressed on normal cells [et al. [e6 was coupled to the 5\u2019 amino group of the MUC-1 aptamers and targeted to cancer cells expressing the aberrant MUC-1 glycoforms. These conjugated MUC-1 aptamers produced cytotoxic singlet oxygen species upon photodynamic therapy (PDT) and displayed greater than 500 times enhanced cellular toxicity as compared to when chlorin e6 was used as a free drug. Interestingly, normal human mammary cells that express the fully glycosylated MUC-1 were not affected by the conjugated MUC-1 aptamers upon PDT thus demonstrating remarkable cancer cell-targeting specificity of these aptamers [et al. [2) and subsequently radio-labeled them with the isotope 99mTc. The 99mTc-MAG2-conjugated aptamers were injected systemically into mice bearing MCF7 (breast cancer cell) xenografts, and the bio-distribution was studied. The aptamer-radionuclide was taken up by tumor cells suggesting these aptamers may also be useful for the diagnosis and staging of breast cancer. 
MUC-1 is a cell surface associated glycoprotein that is extensively modified by al cells . Studiesal cells . Aptamer [et al. selectedaptamers . Pieve e [et al. from theet al. [et al. [et al. [PTK7 is a membrane bound receptor tyrosine kinase-like molecule. It is over-expressed in colon carcinomas and is also known as colon carcinoma kinase-4. Although it contains a catalytically inactive tyrosine kinase domain, it has been suggested to retain a role as a signal transducer in some tumors types . A DNA aet al. . Howeveret al. . Using pet al. . In subset al. ,73,74. I [et al. recentlyet al. [50 of sgc8c-Dox was found to be same as free-Dox. The sgc8c aptamer is therefore a promising candidate for targeted drug delivery. Huang et al. covalentet al. [et al. [More recently, Taghdisi et al. used the [et al. recentlyAnthracycline family-based chemotherapeutic drugs are membrane permeable and are randomly taken up by the cells through the process of passive diffusion. However, conjugating them with the sgc8c aptamer restricts their entry into cells that express PTK7. This \u201csieve\u201d mechanism should curb the non-specific uptake of chemotherapeutic drugs and minimize the toxic effects of chemotherapeutic agents on normal cells.et al. [Immunoglobin heavy mu chain (IGHM) is the large polypeptide subunit of the IgM antibody. IGHM expression level on premature B-cell correlates with the development of Burkitt\u2019s lymphoma ,78. As det al. with theet al. . This pret al. [in vivo stability issues; therefore, these authors have also generated a modified RNA aptamer against EGFR that is currently being tested .EGFR is a transmembrane receptor tyrosine kinase and is considered the \u201cprototype\u201d for receptor-mediated endocytosis. Binding of EGFR to its cognate ligand causes receptor dimerization leading to autophosphorylation, internalization of the receptor, and activation of intra-cellular signal transduction pathways . EGFR ovet al. recentlyAptamers are nucleic acid ligands with several properties that make them attractive as pharmaceutical agents. Aptamers bind their targets with high affinity and specificity and are amenable to large-scale chemical synthesis. The versatility of the aptamer selection process has facilitated the generation of aptamers that bind a wide array of targets, including several cell surface receptors. Aptamers which bind cell surface receptors that are internalized have been exploited to deliver a variety of cargoes into cells. Aptamers therefore may be used to deliver molecules that are not otherwise taken up efficiently by cells or to limit delivery of molecules that are efficiently taken up by cells to cells that express aptamer targets.in vivo SELEX techniques should facilitate the identification of additional cell type-specific targets and aptamers. The perfect target for aptamer-mediated delivery is one that is highly expressed on all target cells, is efficiently internalized, and is not expressed on the surface of non-target cells. Although a perfect target may not exist, several aptamers against excellent targets have been identified. Studies using these aptamers have provided \u201cproof of concept\u201d that aptamers can mediate cell type-specific delivery. Cargoes have included enzymes, toxins, chemotherapeutic agents, imaging agents, and siRNAs. Current challenges are both to optimize the cargo (what to attach and how to attach it) and to identify even better targets (and aptamers that bind them). 
With respect to the former, the fields of nanotechnology and RNA interference are rapidly maturing and will result in even more sophisticated aptamer constructs. With respect to the latter, further refinement of cell-based SELEX and in vitro data, we have only a modest amount of in vivo animal data and\u2014to date\u2014no human data demonstrating that aptamer-mediated delivery is feasible. One factor that has potentially slowed the preclinical development of some aptamer therapeutics is the cost of synthesis, which may be prohibitive for aptamers longer than 40\u201350 nucleotides, particularly if the application requires repeated and/or systemic delivery. However, as methods for the chemical synthesis of olignonucleotides have improved, the yields and costs of synthesis have improved. The synthesis of oligonucleotides is also very \u201cscalable\u201d, as evidenced by the growing number of olignonucleotide therapeutics entering clinical trials. Furthermore, by using aptamers to deliver highly potent cargo, the amount of aptamer required (and therefore synthesis costs) may be significantly less than required for using aptamers as direct inhibitors. Therefore, we anticipate that aptamer-mediated delivery will prove to be feasible in vivo and that translation of this approach to human patients is a realistic goal for the near future. Meanwhile, although we have an abundance of promising"} {"text": "In the literature, there have been conflicting reports on an association between male baldness and the incidence of coronary artery disease (CAD).In 1979, Cooke identifiIn 2001, Rebora et al. performeThese predominantly inconclusive studies from the previous century are counterbalanced by several other positive studies performed in the current century. In 2000, Lotufo et al. showed, Interestingly, in 2005 Mansouri et al. found anp\u2009<\u20090.05). This study implies that early-onset androgenetic alopecia in males is independently associated with CAD, though the mechanisms need to be investigated.Recently, two studies from the same group showed that early onset androgenetic alopecia in males was independently associated with CAD , 9. In 2In the current issue of the Netherlands Heart Journal, Sari et al. investigThe present study by Sari et al. providesAltogether, it goes too far to say that bald is beautiful, but for many men it might be reassuring that male baldness is not significantly associated with CAD."} {"text": "Aim: The purpose of this study was to systematically review clinical studies examining the survival and success rates of implants placed with intraoral onlay autogenous bone grafts to answer the following question: do ridge augmentations procedures with intraoral onlay block bone grafts in conjunction with or prior to implant placement influence implant outcome when compared with a control group ? Material and Method: An electronic data banks and hand searching were used to find relevant articles on vertical and lateral augmentation procedures performed with intraoral onlay block bone grafts for dental implant therapy published up to October 2013. Publications in English, on human subjects, with a controlled study design \u2013involving at least one group with defects treated with intraoral onlay block bone grafts, more than five patients and a minimum follow-up of 12 months after prosthetic loading were included. Two reviewers extracted the data. Results: A total of 6 studies met the inclusion criteria: 4 studies on horizontal augmentation and 2 studies on vertical augmentation. 
Intraoperative complications were not reported. Most common postsurgical complications included mainly mucosal dehiscences (4 studies), bone graft or membrane exposures (3 studies), complete failures of block grafts (2 studies) and neurosensory alterations (4 studies). For lateral augmentation procedures, implant survival rates ranged from 96.9% to 100%, while for vertical augmentation they ranged from 89.5% to 100%. None article studied the soft tissues healing. Conclusions: Survival and success rates of implants placed in horizontally and vertically resorbed edentulous ridges reconstructed with block bone grafts are similar to those of implants placed in native bone, in distracted sites or with guided bone regeneration. More surgical challenges and morbidity arise from vertical augmentations, thus short implants may be a feasible option. Key words:Alveolar ridge augmentation, intraoral bone grafts, onlay grafts, block grafts, dental implants. Localized or generalized bone defects of the alveolar ridge, due to atrophy, periodontal disease and trauma sequelae, may provide insufficient bone volume or unfavorable vertical, transverse, and sagittal inter arch relationship, which may render implant placement impossible or incorrect from a functional and esthetic viewpoint . A varieHowever, despite a relevant number of publications reporting favorable results with these different surgical procedures, considerable controversy still exists as far as the choice of the most reliable technique is concerned; this is frequently due to the lack of comparative studies . Six sysThe aim of this study was to systematically review the following question: In patients with localized alveolar ridge defects, how do clinical and radiographic outcomes obtained with augmentation with intraoral autogenous block bone grafts compare with those of other techniques ? This was done by assessing the complications related to the augmentation procedure, graft success, implant survival, implant success, and radiographic peri-implant marginal bone loss.This systematic review complies with the PRISMA statement .- Inclusion criteria Inclusion and exclusion criteria were established before carrying out the literature search. Inclusion criteria were as follows: publicatExclusion criteria were: case reports, reviews, or technical notes; studies on sinus bone grafting, studies only providing histological data or volumetric measurements ; patients affected by bone defects following ablation due to tumors or osteoradionecrosis; bone defects related to congenital malformations , as the initial clinical situation is very different and not comparable to defects following atrophy, periodontal disease, or trauma; studies including both lateral and vertical augmentation procedures but which did not separate dental implant data according to augmentation procedure and studies with missing data.No restrictions were placed on the year of publication. Authors were contacted for clarification of missing information when necessary.- Outcome measures and follow-up periodThe survival rate was presented (when possible) as a cumulative survival percentage rate indicating that a certain percentage of implants were still present in the mouth at the end of the observation period. Any other definitions of implant survival, as described in individual studies, were also considered. 
Due to the lack of consensus regarding a set of universally accepted success criteria, all definitions of implant success were considered according to the criteria of each individual study. Mean marginal peri-implant bone loss was collected (when possible). All intra or postoperative complications reported in the studies were collected.- Initial literature search The Pubmed (MEDLINE) database of the United States National Library of Medicine was used for a literature search of articles published until October 2013. The following terms were used in different combinations: \u2018bone grafts,\u2019 AND \u2018dental implants\u2019 AND \u2018humans\u2019 AND \u2018augmentation\u2019, NOT \u2018sinus\u2019. This search was combined with the following search terms: \u2018simultaneous\u2019, \u2018delayed\u2019, \u2018intraoral\u2019, \u2018onlay\u2019, \u2018block\u2019, \u2018horizontal\u2019, \u2018vertical\u2019. Duplicates were removed from the search.The search was completed by a review of the references given in each of the studies found in order to identify any additional studies that the initial search might have missed. In addition, a manual search in the private library of MP which included the following journals: British Journal of Oral and Maxillofacial Surgery, Clinical Oral Implants Research, Clinical Implant Dentistry and Related Research, International Journal of Oral and Maxillofacial Implants, International Journal of Oral and Maxillofacial Surgery, Oral Surgery Oral Medicine Oral Pathology Oral Radiology and Endodontology, Journal of Oral and Maxillofacial Surgery, and Medicina Oral Patolog\u00eda Oral y Cirug\u00eda Bucal.- Searching for relevant studies.The comprehensive nature of the search methodology produced a large volume of published studies related to the topic. A three-stage screening process was then performed independently in duplicate to maximize the reliability of the extracted data Fig. . Indepen- Quality and risk of bias assessmentTwo reviewers independently and in duplicate evaluated the quality of the included studies as part of the data extraction process. Four main quality criteria were examined: concealm- Data synthesis and analysis Evidence tables were created with the study data. An initial descriptive analysis (summary) was performed to determine the quantity of data, at the same time assessing variations in study characteristics. The following information was collected from the publications: type of study, type of procedure, number of treated patients (gender and age), number of implants, donor site of the grafts, delayed or simultaneous implant placement, follow-up (months), implant survival and success rates, mean marginal bone loss (mm), number and type of intra/postoperative complications.- Study selection and descriptionThe electronic and hand searches yielded a total of 608 articles. 453 studies were excluded after screening the titles and 92 after reading the abstracts. Overall, 63 full articles were retrieved for more detailed evaluation but only 4 fulfilled the inclusion criteria for data extraction Fig. . The k v- Horizontal augmentationPatient and intervention characteristicsFour studies were identified involving a total of 167 patients and 254 implants. A total of 160 patients with 216 dental implants received lateral ridge augmentation with intraoral block bone grafts: 38 were placed simultaneously and 216 delayed to the bone grafting procedure ,14. In met al. . The samet al. also repet al. ,16. Noneet al. (et al. 
(Implant survival for delayed implant placement varied from 96.9% at one year post-loading to 100% et al. applied (et al. applied (et al. . Success (et al. . The mar (et al. to 0.20\u00b1 (et al. for dela (et al. . So, des- Vertical augmentationPatient and intervention characteristicsTwo studies included 54 patients with 120 dental implants. A total of 28 patients with 64 dental implants underwent vertical ridge augmentation with intraoral onlay block bone grafts, all with delayed placement ,18 (see Outcomeset al. (et al. (With regard to postoperative surgical complications, Chiapasco et al. reported (et al. reportedet al. (et al. (et al. (et al. (et al. (et al. (et al. (Implant survival varied from 95.6% after one year post-loading to 100% et al. obtained (et al. , the suc (et al. . Due to (et al. obtained (et al. obtained (et al. . The dif (et al. placed d (et al. submerge (et al. did not - Quality assessment of trials and risk of biasOne study had a loThe data reported in the literature seem to demonstrate that bone augmentation by means of the positioning of intraoral onlay grafts can be considered a reliable surgical technique for obtaining sufficient bone volume for the placement of dental implants where it would not otherwise be possible . HoweverCommon techniques introduced for horizontal bone augmentation are guided bone regeneration, ridge splitting and expansion, and block grafts . In a syMore challenging is the vertical augmentation procedure; a variety of surgical procedures have been proposed, such as: autogenous bone grafts, vertical guided bone regeneration, and alveolar distraction osteogenesis. As many atrophic alveolar ridges are deficient in height and width, they may require flattening for better graft adaptation . A seconet al. (et al. (et al. (et al. (et al. (et al. (Among the disadvantages of lateral or vertical bone grafting procedures is the resorption of a significant proportion of the graft ,22-24. Aet al. . Maioran (et al. added a (et al. . Regardi (et al. ,25. Rocu (et al. reported (et al. and Chia (et al. studied (et al. found th (et al. and Bren (et al. wound de (et al. ,30.The main limit encountered in this literature review was the lack of studies with a comparative design and randomized controlled studies. Future research must include control groups and standardized criteria for defining implant success or failure for both simultaneous and delayed protocols, in order to obtain rigorous evidence-based results. In this way, the data presented in this review should be considered indicative rather than conclusive.Survival and success rates of implants placed in horizontally and vertically resorbed edentulous ridges reconstructed with block bone grafts are similar to those of implants placed in native bone, in distracted sites or with guided bone regeneration. More surgical challenges and morbidity arise from vertical augmentations, thus short implants may be a feasible option. Our recommendations for future research focus on the performance of large-scale randomized controlled studies with longer follow-ups involving the assessment of esthetic parameters and hard and soft peri-implant tissue stability."} {"text": "To evaluate some forage feeds of ruminants in terms of their carbohydrate (CHO) and protein fractions using Cornell Net Carbohydrate and Protein System (CNCPS).1 - intermediate degrading, CB2 - slow degrading and CC - non-degrading or unavailable) and protein fractions of test feeds.Eleven ruminant feeds were selected for this study. 
Each feed was chemically analyzed for proximate principles , fiber fractions , primary CHO fractions and primary protein fractions . The results were fitted to the equations of CNCPS to arrive at various CHO (CA - fast degrading, CB1 content of all feeds was low but similar. All feeds except cowpea, berseem, and hedge lucerne contained higher CB2 values. Oat among green fodders and hybrid napier among range herbages had lower CC fraction. Feeds such as bajra, cowpea, berseem and the setaria grass contained lower PA fraction. All green fodders had higher PB1 content except maize and cowpea while all range herbages had lower PB1 values except hedge lucerne. Para grass and hybrid napier contained exceptionally low PB2 fraction among all feeds. Low PC contents were reported in oat and berseem fodders.Among green fodders, cowpea and berseem had higher CA content while except hedge lucerne all range herbages had lower CA values. CBBased on our findings, it was concluded that feeds with similar CP and CHO content varied significantly with respect to their CHO and protein fractions. Due to lower CC fraction, oat and hybrid napier were superior feeds in terms of CHO supply to ruminants. Similarly, among all feeds oat and berseem had a lower PC fraction, thus were considered good sources of protein for ruminants. Difference between total CHO and NSC was the indirect measure of structural CHO (SC) content of test feeds. Starch estimation in the feeds was done as per the procedure of Sastry [et al. were useet al. [et al. [1 (globulins mainly), PB2 , PB3 and PC .NDICP, acid detergent insoluble CP (ADICP), non-protein nitrogen (NPN) and soluble protein (SP) content of test feeds were estimated as per Licitra et al. . ADICP f [et al. were useThe results obtained were subjected to statistical analyses using software package SPSS version 16.0 . Means wet al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [Chemical constituents of forage feeds revealedet al. reported [et al. reported [et al. evaluate [et al. evaluate [et al. were in [et al. was high [et al. reported [et al. , it was [et al. were in [et al. , Mahala [et al. , Subhala [et al. and Teka [et al. .et al. [et al. [Total CHO content of all fet al. . The quaet al. regardin [et al. were preet al. [et al. [Primary protein fractions of forage feeds are presented in et al. and Gupt [et al. regardinet al. [et al. [1 was comparable between green fodders and range herbages. Sorghum and bajra among green fodders had lower CB1 content while among range herbages para and guinea grass had lower CB1 content. In typical ruminant diet, the amount of fraction CB2 is very important as this fraction represents the available cell wall portion of ruminant feeds. Obviously legume forages contained lower CB2 fraction as these feeds are high in protein content and low in NDF content. Among other feeds, range herbages contained higher fraction CB2 than green fodders reflecting their high cell wall availability to ruminants. Fraction CC is the lignin bound cell wall content of a feed. Hence this fraction is indigestible both by ruminal microbes and the animal itself. Feeds with low CC fraction will be of superior quality in terms of CHO supply to ruminants and vice-versa. On this aspect, forages like oat and hybrid napier were found to be better feeds. Findings of Trivedi et al. [2 and PB3 are degraded in the rumen to a lesser extent than PA and PB1, thus feeds with high PB2 and PB3 content will have more by-pass protein value. 
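The CNCPS partitioning referred to above is normally derived from the proximate and detergent-fibre analyses. As a point of reference, a minimal sketch of the commonly cited CNCPS equations is given below; the coefficients (for example the factor 2.4 applied to lignin) follow the standard CNCPS and Licitra et al. formulations and are stated here as assumptions rather than as the exact equations applied in this study.

% Hedged sketch of standard CNCPS fractionation (after Sniffen et al.; Licitra et al.).
% Carbohydrate terms are on a dry-matter basis; protein terms are on a crude-protein basis.
\begin{align*}
\mathrm{CHO}  &= 100 - \mathrm{CP} - \mathrm{EE} - \mathrm{Ash}\\
\mathrm{CC}   &= \mathrm{NDF}\times\frac{\text{lignin (\% of NDF)}}{100}\times 2.4 && \text{(unavailable cell wall)}\\
\mathrm{CB_2} &= (\mathrm{NDF}-\mathrm{NDICP})-\mathrm{CC} && \text{(slowly degraded cell wall)}\\
\mathrm{NSC}  &= \mathrm{CHO}-(\mathrm{NDF}-\mathrm{NDICP}) = \mathrm{CA}+\mathrm{CB_1}\\
\mathrm{PA}   &= \mathrm{NPN},\quad \mathrm{PB_1}=\mathrm{SP}-\mathrm{NPN},\quad \mathrm{PB_3}=\mathrm{NDICP}-\mathrm{ADICP},\quad \mathrm{PC}=\mathrm{ADICP}\\
\mathrm{PB_2} &= \mathrm{CP}-(\mathrm{PA}+\mathrm{PB_1}+\mathrm{PB_3}+\mathrm{PC})
\end{align*}

Fractions computed in this way are usually re-expressed as percentages of total CHO or CP before feeds are compared, which is presumably the basis of the values quoted for the test feeds.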
All legume forages had higher PB2 + PB3 content while among non-legume feeds maize fodder, and the setaria grass contained higher PB2 + PB3 content. Very similar observations were made by Kamble et al. [et al. [When CNCPS CHO fractions of forage feeds were intet al. and Gupt [et al. . Para ani et al. regardini et al. . Averagee et al. and Gupt [et al. .Based on the findings of the above study it was concluded that among typical ruminant forage feeds, oat fodder and hybrid napier grass were better feeds of ruminants from CHO supply point of view while oat and berseem fodders were found to be good forage protein sources to ruminants. Maize fodder was evaluated to be a good feed with more by pass protein value. All the above feeds are extensively used in our country as forages for ruminant feeding and their preferential selection as animal forage sources are established by CNCP system. Therefore, this CNCPS model could be successfully implemented for nutritive evaluation of forage feeds of Indian origin in terms of their CHO and protein fractions though sufficient database should be developed before its applicability in dairy ration formulation.SSK and CD designed the plan of the present study. LKD and DK carried out the experimental work. Manuscript preparation along with data analysis was done by LKD. All authors read and approved the final manuscript."} {"text": "Tick-borne encephalitis virus (TBEV) and Crimean-Congo haemorrhagic fever virus (CCHFV) are important tick-borne viruses. Despite their wide geographical distribution and ease of acquisition, the prevalence of both viruses in Malaysia is still unknown. This study was conducted to determine the seroprevalence for TBEV and CCHFV among Malaysian farm workers as a high-risk group within the population.We gave questionnaires to 209 farm workers and invited them to participate in the study. Eighty-five agreed to do so. We then collected and tested sera for the presence of anti-TBEV IgG (immunoglobulin G) and anti-CCHFV IgG using a commercial enzyme-linked immunosorbent assay (ELISA) kit. We also tested seroreactive samples against three other related flaviviruses: dengue virus (DENV), West Nile virus (WNV) and Japanese encephalitis virus (JEV) using the ELISA method.The preliminary results showed the presence of anti-TBEV IgG in 31 (36.5\u00a0%) of 85 sera. However, when testing all the anti-TBEV IgG positive sera against the other three antigenically related flaviviruses to exclude possible cross reactivity, only five (4.2\u00a0%) sera did not show any cross reactivity. Interestingly, most (70.97\u00a0%) seropositives subjects mentioned tick-bite experience. However, there was no seroreactive sample for CCHFV.These viruses migrate to neighbouring countries so they should be considered threats for the future, despite the low seroprevalence for TBEV and no serological evidence for CCHFV in this study. Therefore, further investigation involving a large number of human, animal and tick samples that might reveal the viruses\u2019 true prevalence is highly recommended. Ixodes ricinus is the main vector for TBEV-EU, while the other two subtypes are transmitted mainly by I.persulcatus [Ticks are important and prevalent vectors for several animal and human infectious diseases, carrying harmful pathogens such as Borrelia spp, Rickettsia spp, Babesia spp, and various viruses including TBEV and CCHFV. TBEV is a member of the genus Flavivirus within the Flaviviridae family. 
This etiologic agent of tick-borne encephalitis can cause a potentially fatal neurological infection affecting the human central nervous system. TBEV has three subtypes: European (TBEV-EU), Far Eastern (TBEV-Fe) and Siberian (TBEV-Sib) . Ixodes sulcatus . The vecsulcatus . The virsulcatus . Severalsulcatus , 5.CCHFV is a tick-borne virus belonging to the genus Nairovirus in the Bunyaviridae family and a causative agent for a deadly viral haemorrhagic fever. Despite a history of isolation from many genera and species of ticks, the main vector for this virus is a tick from the Hyalomma genus . SimilarAlthough TBEV and CCHFV are not endemic in Malaysia, circulation of other vector borne viruses are reported in this country, including DENV, WNV, JEV and the Langat virus (LGTV) \u201315. FlavSeroepidemiological studies related to the prevalence of TBEV and CCHFV have been widely performed, especially among high-risk groups including farmers. To date, there is no reported case or outbreak of those viruses in Malaysia, but there was a study by Thayan and colleagues on screening for TBEV antibodies among MaThe study\u2019s protocol was approved by the Ethics Committee Universiti Malaya Medical Centre (MEC Ref. 824.11 and MEC Ref. 944.20). Participation in the study was entirely voluntary. Potential participants were briefed on the project and given sufficient time for consideration. All subjects gave written informed consent for inclusion before they participated in the study. The blood samples were handled with strict anonymity. All participants gave written consent for the samples to be used after anonymisation.Eleven cattle and goat farms in Peninsular Malaysia were identified from information from the Department of Veterinary Services (DVS), Ministry of Agriculture and Agro-based Industry, Malaysia. All were contacted and invited to take part in the project. Eight of the 11 farms agreed to participate in the study. These eight farms were located in different regions of Peninsular Malaysia and VectoCrimean-CHF-IgG kit in accordance with the manufacturers\u2019 instructions. We then tested positive sera for the presence of anti-DENV, anti-WNV and anti-JEV IgG antibodies to exclude cross reactivity with the positive sera for TBEV IgG. Finally, we performed cross reactivity tests using the DENV IgG ELISA kit , the WNV IgG capture DxSelect ELISA kit and the JEV IgG ELISA kit .p\u2009<\u20090.05 is considered significant.All data were analysed using the Fisher exact test for analysis of contingency table. We determined the correlations between seropositivity and different age groups using Spearman non-parametric correlation. All statistical analyses were conducted using GraphPad Prism 6 , where P\u2009=\u20091.0000 by Spearman correlation test). when seropositivity was analysed between different age groups, the outcome showed no significant difference (P\u2009=\u20090.5889). The highest reactivity towards anti-TBEV IgG ELISA was 68.4\u00a0% (13/19) in males aged 51\u201360 years and 50\u00a0% (2/4) in females in the same age group.Seroprevalence data for TBEV and CCHFV are shown in Table\u00a0et al. [et al. [et al. [We have shown the seropositivity for TBEV among a small group of farm workers in Peninsular Malaysia. Although our initial data showed that the prevalence of IgG against TBEV was 36.5\u00a0%, after testing the positive sera we found that only five (4.2\u00a0%) samples could be linked to the presence of TBEV. 
It was not surprising that there was cross reactivity between TBEV, WNV and DENV because they are all flaviviruses that share a complex antigenic relationship . Our finet al. in whichet al. . Anotheret al. . LGTV iset al. . Our dat [et al. showed a [et al. . However [et al. . A possi [et al. . By keep [et al. . Accordi [et al. . However [et al. . Tick ma [et al. . There i [et al. . The out [et al. . Althoug [et al. . This fi [et al. . Youngst [et al. . Certain [et al. . Healthc [et al. . Nabeth [et al. suggeste [et al. . As ment [et al. . In Japa [et al. .Our study showed a low seroprevalence for TBEV (4.2\u00a0%) among animal farm workers, after ruling out possible cross reactivities, and no correlation with age or gender. We also found no serological evidence of CCHFV transmission among Malaysian farm workers. Further studies involving a larger sample size, and also considering LGTV as a Malaysian counterpart of TBEV that might lead to false positivity, are highly recommended to provide sufficient evidence and reflect the true state of TBEV and CCHFV prevalence in Malaysia."} {"text": "The current study explored this decision regarding exclusionary illnesses using the SEID criteria with four distinct data sets involving patients who had been identified as having CFS, as well as healthy controls, community controls, and other illness groups. The findings indicate that many individuals from major depressive disorder illness groups as well as other medical illnesses were categorized as having SEID. The past CFS Fukuda et al. prevalence rate in a community based sample of 0.42 increased by 2.8 times with the new SEID criteria. The consequences for this broadening of the case definition are discussed.The Institute of Medicine recently proposed a new case definition for chronic fatigue syndrome (CFS), as well as a new name, Systemic Exertion Intolerance Disease (SEID). Contrary to the Fukuda In reacfinition , the Canfinition was devefinition were devfinition . Each ofet al. [Recently, the IOM issued aet al. CFS critet al. , and theet al. excludedet al. had a diet al. p. 186)86et al. et al. (p. 4), etc.). Thus, according to the above IOM guidelines, if these illnesses account for the SEID symptoms, then it is another illness and not SEID. Therefore, many illnesses are now considered a comorbid condition with SEID. However, trying to determine whether an illness is exclusionary vs. comorbid is a challenging diagnostic task. The IOM [The problem for diagnosticians in interpreting these guidelines is that the core IOM symptoms are not unique to SEID, as other illnesses have comparable symptoms In addition, Ze-dog pointed In the first study, a CFS screening questionnaire had a combination of existing and new measures including: (1) several demographic related items; (2) The Fatigue Scale ; and (3)et al. [et al. [A total of 60 individuals , and 15 with Lupus) were recruited from the greater Chicago area for the present study. Fifteen of the participants were diagnosed by a physician in Chicago with experience in diagnosing and treating CFS. Each of these participants met the Fukuda et al. definiti [et al. criteriaet al.\u2019s [Fifteen healthy control participants had not been diagnosed with CFS or any other illness that could cause significant fatigue. These participants had also been seen by a physician, and no illnesses that could cause fatigue were found . In addition, fifteen participants with a diagnosis of Multiple Sclerosis (MS) were recruited from self-help groups in the Chicago area. 
Each of these participants met Poser et al.\u2019s criteriaet al.\u2019s . There wTo meet the SEID criteria within tn = 15) of those in the CFS group met the SEID criteria, whereas 47% (n = 7) in the Lupus group, 33% (n = 5) in the MS group, and 0% in the control group met the SEID criteria. In an effort to compare this new SEID case definition to the older Fukuda et al. [As indicated in a et al. criteriaet al. [In the second study, participants were screened by a trained interviewer to determine if they met the inclusion and exclusion criteria for CFS, MDD, or healthy controls . As paret al. case defet al. found thThe Structured Clinical Interview for the DSM-IV (SCID) is a valid and reliable semi-structured interview guide that closely resembles a traditional psychiatric interview . The SCIThe SF-36 is 36-item instrument that is comprised of multi-item scales that assess physical functioning, role limitations, social functioning, bodily pain, general mental health, vitality, and general health perceptions. Higher scores indicate better health, lower disability, or less impact of health on functioning. Reliability and validity studies have demonstrated that the 36-item version of the SF-36 has high reliability and validity in a wide variety of patient populations .et al.\u2019s [et al. [A total of 45 individuals were recruited from the greater Chicago area [et al.\u2019s diagnost [et al. case defFifteen participants with a diagnosis of MDD were solicited from a local chapter of the National Depressive and Manic Depressive support group in Chicago. Participants were required to have been diagnosed with major depression by a licensed psychologist or psychiatrist. All participants were screened with the SCID-IV to ensure that they met criteria for a current (active) case of major depression and did not have any other current psychiatric illnesses. Individuals who had other current psychiatric conditions in addition to major depression were excluded. Individuals who reported having uncontrolled or untreated medical illnesses were also excluded. In the MDD group, all 15 (100%) participants met DSM-IV diagnostic criteria for MDD. None of the participants in the MDD group met criteria for MDD with catatonic, melancholic, psychotic, or atypical features. Participants in the MDD group did not meet criteria for any other Axis I disorders.Finally, fifteen healthy control participants were solicited from the greater Chicago area. Individuals who did not have any medical illnesses or who did not have any uncontrolled or untreated illnesses were allowed to participate. All participants were screened with the SCID-IV to ensure that they did not have any current psychiatric illnesses. Individuals with current psychiatric conditions were excluded. Sociodemographic data were compared across the three groups, and there were no significant differences with respect to gender, race, age, SES, education, marital status, occupation, work status, and additional roles .To meet the SEID criteria within tn = 14) of those in the CFS group, 27% (n = 4) in the MDD group, and 0% in the control group met SEID criteria. In an effort to compare this new SEID case definition to the older Fukuda et al. [As indicated in et al. [The data were derived from a larger community-based study of CFS that was carried out in three stages . Stage 1et al. 
criteriaAccording to the Phase 1 screen, of the 18,675 interviewees, 16,453 (88%) had no prolonged or chronic fatigue, 1435 (7.7%) had prolonged fatigue, and 780 (4.2%) had chronic fatigue (seven cases refused to answer the fatigue questions). Among those 780 respondents with chronic fatigue, at Phase 1; 304 had ICF-like illness , 68 had a CF-explained-like condition, and 408 had CFS-like profiles. All 408 members of the CFS-like group were invited to participate in Phase 2. Of this group of 408 individuals with CFS-like symptoms, the physician review team reviewed data on 166 individuals, who provided data during the Phase 2 evaluation. There were 47 individuals who were evaluated in a control group, and these individuals screened negative for CFS-like illness during Phase 1.et al. [i.e., melancholic depression, bipolar disorders, anorexia nervosa/bulimia nervosa, psychotic disorders, drug or alcohol related disorders, or medical explanations for their fatigue).A team of four physicians and a psychiatrist were responsible for making a final diagnosis with two physicians independently rating each file using the current U.S. case definition of CFS . Where pet al. case defTo meet the SEID criteria within tt). The proportion of screened positives is PI (\u03c0), and the proportion of screened negatives is 1 \u2212 PI (1 \u2212 \u03c0). The proportion of screened positives evaluated in Phase 2 who were diagnosed with SEID (Number of cases with SEID/166) is L1 (\u03bb1), and the proportion of screened negatives evaluated in Phase 2 who were diagnosed with SEID (0/47 = 0.0) is L2 (\u03bb2).Prevalence, which is the number to be estimated, is represented by P (p). The total number of respondents screened in Phase 1 is N (Nn = 24) of those in the CFS group met the SEID criteria, whereas 47% (n = 42) for the CF group, 44% (n = 20) for the ICF group, and 6% (n = 3) for the controls. Within the Chronic Fatigue explained by medical or psychiatric illness (CF), of those 19 with Melancholic Depression, 47% (n = 9) met the SEID criteria. In addition, for those with a medical reason for their fatigue, 48% (n = 16) met SEID criteria. In an effort to compare this new SEID case definition to the older Fukuda et al. [As indicated in et al. [This data set had been previously used to estimate the prevalence of CFS , which wet al. prevalenet al. [etc.). For each of the eight Fukuda et al. [et al. [et al. [et al. [We solicited participants with a diagnosis of MDD and CFS to participate in this study . We admiet al. symptomsa et al. symptoms [et al. symptoms [et al. criteria [et al. .et al. [et al. criteria. We excluded individuals who had other current psychiatric conditions in addition to major depression or who reported having untreated medical illnesses .We recruited 64 individuals, 27 with CFS and 37 with MDD. We obtained our sample of participants with CFS from two sources, local CFS support groups in Chicago and a previous research study conducted at DePaul University. To be included in the study, participants were required to have been diagnosed with CFS, using the Fukuda et al. [For the MDD group, we found participants from three sources, local chapters of the Depression and Bipolar Support Alliance group in Chicago; Craigslist\u2014a free local classifieds ad forum that is community moderated; and online depression support groups. To be included in the study, all participants were required to have been diagnosed with a MDD by a licensed psychologist or psychiatrist. 
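Returning briefly to the prevalence estimate for the community sample described above: with π, λ1 and λ2 defined as in the preceding paragraphs, the two-phase design (screening in Phase 1, clinical evaluation of sampled screen-positives and screen-negatives in Phase 2) leads to a weighted estimate. The expression below is a sketch of the standard estimator for such designs, offered as an illustration rather than as the exact formula used in the original report; λ2 = 0 in this data set because none of the 47 evaluated screen-negatives was diagnosed with SEID.

% Sketch of the two-phase prevalence estimator implied by the screening design above.
\[
\hat{p} \;=\; \pi\,\lambda_{1} \;+\; (1-\pi)\,\lambda_{2},
\qquad \text{which reduces here to } \hat{p}=\pi\,\lambda_{1} \text{ since } \lambda_{2}=0 .
\]

In words, the proportion of Phase 1 respondents who screened positive is multiplied by the SEID rate among the evaluated screen-positives, and added to the corresponding product for the screen-negatives.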
We excluded individuals who had other current psychiatric conditions in addition to a MDD or who reported having untreated medical illnesses were also excluded. We carefully screened participants to ensure that participants from the MDD group did not have CFS as defined by the Fukuda et al. criteriaet al. [To meet the SEID criteria within tet al. , when uset al. .n = 22) of those in the CFS group met the SEID criteria, whereas 24% (n = 9) of those in the MDD group met SEID criteria. In an effort to compare this new SEID case definition to the older Fukuda et al. [As indicated in et al. [et al. [et al. [et al. [Rates of SEID could increase due to the reduction of many exclusionary criteria. Based on study 3, using the Jason [et al. criteria [et al. criteria [et al. , then th [et al. criteria [et al. .et al. [et al. [et al. [The current study suggests that the core SEID symptoms are not unique to SEID, as some patients with other illnesses, such as those evaluated in this study, have comparable symptoms. As a consequence, some patients with illnesses that had previously been exclusionary under past case definitions such as Fukuda et al. will nowet al. , rather [et al. CFS crit [et al. CFS critThe current study suggests that some patients with MDD, who also have chronic fatigue, sleep disturbances, and poor concentration, will be misdiagnosed as having SEID. MDD can occur for anyone with a serious medical illness. Some patients might have been depressed prior to becoming ill with SEID, and probably others as a reaction to this illness . Howeveret al. [et al. [et al. [Mood disorders are the most prevalent psychiatric disorders after anxiety disorders: for major depressive episode, the one-month prevalence is 2.2%, and lifetime prevalence is 5.8% . The erret al. found th [et al. . Fortuna [et al. empiric [et al. has cons [et al. reviewedvs. orthostatic intolerance [There are additional aspects of the IOM case defolerance . We beliolerance .There are a number of limitations in the present study. As we used archival data sets, some of the questions that have been proposed to define SEID were not available. Clearly, the current study needs to be replicated with questions that are now proposed , howeverThe recent IOM report is being"} {"text": "After more than two decades of research, the efforts to translate the concept of RNA based vaccination have reached a critical mass. Several preclinical and clinical projects located in the academic or industrial setting are underway and the coming years will allow us to get broad insight into clinical feasibility, safety, and first efficacy data. It can be anticipated that some RNA based vaccines will be approved within the near future. in vitro transcribed RNA is now viewed as an attractive approach for vaccination therapies, with several features contributing to its favorable characteristics. RNA allows expression of molecularly well-defined proteins and its half-life can be steered through modifications in the RNA backbone. Moreover, unlike DNA, RNA does not need to enter the nucleus during transfection and there is no risk of integration into the genome, assuring safety through transient activity. Rapid design and synthesis in response to demand, accompanied by inexpensive pharmaceutical production, are additional features facilitating its clinical translation.The use of ex vivo transfection of mRNA into autologous dendritic cells (DCs) which was initially described by Boczkowski et al. [The seminal work of Wolff et al. 
which showed that RNA injected directly into skeletal muscle can lead to protein expression opened the era of RNA based therapeutics . This obi et al. . Along wi et al. , severali et al. . In a dii et al. \u201310. Persi et al. , 12 are i et al. \u201315. in vitro analytics such as cytotoxicity or effects of RNA on transcriptome of DCs . Finally, E. Hattinger et al. will also demonstrate, with a different disease focus, the efficacy of prophylactic RNA vaccination against allergy.In this special issue, a number of papers will illustrate and summarize the advances in this emerging field. M. A. McNamara et al. will provide a comprehensive review on RNA based vaccines in cancer immunotherapy, which is further detailed for the use of mutanome engineered RNA by M. Vormehr et al. These will be complemented by a review from K. K. L. Phua describing targeted delivery systems for RNA based nanoparticle tumor vaccines. Other contributions will describe RNA based methods forIn conclusion, this special issue covers many aspects of RNA based vaccines. As RNA based vaccination is not the only application of the RNA technology , we hope to have sparked the readers interest in RNA based therapies in general.Sebastian KreiterSebastian KreiterMustafa DikenMustafa DikenSteve PascoloSteve PascoloSmita K. NairSmita K. NairKris M. ThielemansKris M. ThielemansAndrew GeallAndrew Geall"} {"text": "Since the introduction of engine-driven nickel-titanium (NiTi) instruments, attempts have been made to minimize or eliminate their inherent defects, increase their surface hardness/flexibility and also improve their resistance to cyclic fatigue and cutting efficiency. The various strategies of enhancing instrument surface include ion implantation, thermal nitridation, cryogenic treatment and electropolishing. The purpose of this paper was to review the metallurgy and crystal characteristics of NiTi alloy and to present a general over review of the published articles on surface treatment of NiTi endodontic instruments. In addifracture . NiTi infracture , 5. Thusfracture .3N4) particles [Improvement of the cutting efficiency of NiTi instruments especially through their surface treatment has been the subject of numerous investigations. The implantation of boron ions on the surface of the instruments increases their surface hardness . Also, iarticles . The aimRetrieval of literatureAn English-limited Medline search was performed through the articles published from May 1988 to May 2014. The searched keywords included \u201celectropolishing AND NiTi rotary instruments\u201d, \u201cthermal nitridation\u201d, \u201ccryogenic treatment AND nickel-titanium\u201d, \u201cplasma immersion ion implantation AND nickel-titanium\u201d, and \u201cendodontic treatment AND nickel-titanium\u201d. Then, a hand search was done in the references of result articles to find more matching papers.A total of 176 articles were found which according to their related keywords were \u201c10-electropolishing AND NiTi rotary instruments\u201d, \u201c29-thermal nitridation\u201d, \u201c3-cryogenic treatment AND nickel-titanium\u201d, \u201c135-plasma immersion ion implantation AND nickel-titanium\u201d, and \u201cendodontic treatment AND nickel-titanium\u201d.Metallurgy of NiTiThe NiTi alloys used for manufacturing of the endodontic instruments contain approximately 56% (wt) nickel and 44% (wt) titanium. In some NiTi alloys, a small percentage (<2% wt) of nickel can be substituted by cobalt. 
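A quick arithmetic check shows why this roughly 56/44 weight-percent Ni/Ti composition corresponds to the near-equiatomic (1:1 atomic) ratio noted next; the atomic masses below are standard reference values assumed for the illustration rather than taken from the text.

```python
# Rough check that 56 wt% Ni / 44 wt% Ti is close to a 1:1 atomic ratio.
# Atomic masses (g/mol) are standard reference values, assumed here.
M_NI, M_TI = 58.69, 47.87

ni_moles = 56.0 / M_NI   # mol Ni per 100 g alloy -> ~0.95
ti_moles = 44.0 / M_TI   # mol Ti per 100 g alloy -> ~0.92

print(f"Ni:Ti atomic ratio = {ni_moles / ti_moles:.2f}")  # ~1.04, i.e. near-equiatomic
```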
The resultant combination is a 1:1 atomic ratio (equiatomic) of the major components. The generic term for these alloys is 55-nitinol and as other metallic systems, the alloy can exist in various crystallographic forms. NiTi has inherent ability of shape memory (SM) and super-elasticity (SE) that steThe concept of SM was first described by \u00d6lander during eUnder low tensile loading, the superelastic NiTi alloy shows a normal elastic behavior. At higher tensile loading, the elastic stress reaches plateau at which there is an extended horizontal region of elastic strain. Elastic deformation in SS and NiTi is 3% and 7%, respectively. The atoms in SS can move against each other by a small specific amount before plastic deformation occurs which is called the Hookean elasticity . Crystal characteristics of NiTi alloys) and finish (Mf) temperature. Likewise the temperatures at which the austenite starts and finishes are called the As and Af points, respectively. The event causes alterations in the physical properties of the alloy and allows SM [All the alloys with SM show a change in their lattice structure or atomic arrangement, characterizing a phase change while receiving or releasing thermal energy. The critical deformation and shape recovery are explained according to the changes in the lattice parameters through a transformation between the austenite and martensite phases and the characteristics of the crystal structure . At highllows SM , 12, 13.Surface of NiTi instrument2) and smaller amounts of nickel oxides (NiO and Ni2O3) and metallic nickel (Ni) [The surface of NiTi instrument mainly consists of oxygen, carbon, and titanium oxides (TiOkel (Ni) -19. The kel (Ni) . Ni may kel (Ni) .et al. [et al., the surface of titanium-aluminum-vanadium alloy (Ti6Al4V) had amounts of aluminum similar to the amount of Ni in NiTi, even though the bulk material of Ti6Al4V had only 6% Al and NiTi had 50% Ni. There was no Ni on the surface of SS, but some amounts of Cr and Fe were found.Shabalovskaya found thet al. were sim2. In other words, calcium phosphate formed on an inert oxide layer. This layer was thicker on pure titanium than on titanium alloys (including NiTi), and the Ca:P ratio of the film was close to that of hydroxyapatite. The calcium phosphates formed on NiTi or Ti6Al4V were less similar to hydroxyapatite. The presence of Ni in the surface of NiTi alloy and aluminum in the surface of Ti6Al4V may have caused these results. SS also has a calcium phosphate layer of this kind. However, the formation of this layer is slower and differs from NiTi [Pure Ti and some of its alloys are amongst the most biocompatible materials . Their from NiTi , 21, 22.Surface treatment of NiTiAttempts to enhance the surface of NiTi instruments, minimize or eliminate their inherent defects, increase the surface hardness/flexibility and improve the resistance to cyclic fatigue and cutting efficiency of endodontic instruments have resulted in a variety of strategies . Plasma immersion ion implantationThere have been several attempts to reduce the release of Ni from NiTi, without deteriorating the mechanical properties of the bulk. This has been done by coating technologies either with titanium nitride (TiN) or with polymers . The polet al. [et al. [Plasma immersion ion implantation (PIII) was first introduced in the late 1980s by Conrad et al. and Tend [et al. . This te [et al. 
.During PIII, the specimen is placed in a chamber and immersed in the plasma; then a highly negative pulsating voltage is applied to the sample. PIII is regularly performed to modify the surface of metals and to improve the mechanical properties such as hardness, friction coefficients, and wear/corrosion resistance , 31. Briet al. [et al. [Gavini et al. showed t [et al. evaluate [et al. atoms diffuse into the samples and atmospheric oxygen is stopped by a steel foil consisting of a notable amount of Cr. The modified surface consists of a thin outer layer of TiN and a thicker Ti2Ni layer underneath [TiN belongs to the refractory transition metal family and consderneath , 35.et al. [et al. [et al. [et al. [Shenhar et al. and Huan [et al. showed t [et al. showed t [et al. revealed [et al. involves submersing metal in a super-cooled bath containing liquid N (-196\u02daC/-320\u02daF) , 37 and Two mechanisms can change the properties of metal during CT. First is a more complete martensite transformation from the austenite phase following CT and secoet al. [et al. [et al. [There are few studies on the CT of NiTi rotary instruments. Kim et al. evaluate [et al. showed t [et al. reportedElectropolishingaka electrochemical polishing, electro-lytic polishing and reverse plating, is an electrochemical process for removal of the material layer from a metallic surface. It often acts against electroplating. It may be used in lieu of abrasive fine polishing during microstructural preparation [Electropolishing (EP), paration . 2, which protects the underlying material from further corrosion. Generally, EP removes the native oxide layer and sinters a more homogeneous and stable passive TiO2 layer. In this process, the amount of Ni on the surface decreases [EP is a standard surface treatment process employed as a final finish during manufacturing of NiTi instruments. An electric potential and current are applied, which result in ionic dissolution of the surface. In this process, the surface chemistry and morphology are altered while surface imperfections are removed as dissolved metal ions. Simultaneously, Ti is oxidized to TiOecreases -47.Typically, the instrument is immersed in a temperature-controlled bath of electrolyte and serves as the anode when it is connected to the positive terminal of a direct current power supply, and the negative terminal is attached to the cathode. As the current passes the surface of metal oxidizes and dissolves in the electrolyte. At the cathode, a reduction reaction occurs, which normally produces hydrogen. Electrolytes used for EP are most often concentrated acid solutions with a high viscosity, such as mixtures of sulfuric/phosphoric acid. Other EP electrolytes include mixtures of perchlorates with acetic anhydride and methanolic solutions of sulfuric acid , 46.EP of NiTi instruments is efficient for the elimination of defective surface layers through surface oxidizing. Owing to a gain in total energy caused by the differences in the enthalpy of Ti and Ni oxides forming, the preferential oxidation of Ti on NiTi surface always occurs . Therefoet al. [et al. [et al. [Using scanning electron microscopy (SEM), Herold et al. showed t [et al. EP may h [et al. investiget al. [et al. [et al. [et al. [et al. [et al. [However, the torque at failure and amount of unwinding were decreased. Cheung et al. showed tet al. . Boessle [et al. showed t [et al. demonstr [et al. showed t [et al. revealed [et al. showed t1. 
Argon implantation caused a moderate improvement in the performance of S1 files, whereas nitrogen ion-implanted files performed worse in the fatigue test, although nitrogen ion implantation has also been reported to improve the cyclic fatigue resistance of instruments. 2. Thermal nitridation increases the cutting efficiency and corrosion resistance of NiTi files in contact with NaOCl. 3. Cryogenic treatment improves the cutting efficiency, cyclic fatigue resistance and microhardness of NiTi instruments."} {"text": "Anatomic variation can potentially impact surgical safety. The purpose of this cross-sectional study was to assess the prevalence of ostiomeatal complex variations based on cone beam computed tomography (CBCT) images of patients seeking rhinoplasty. p<0.05 was considered statistically significant. In this cross-sectional study, CBCT images of 281 patients, including 153 female and 128 male, with a mean±SD age of 26.97±7.38 years were retrieved and analyzed for the presence of variations of the ostiomeatal complex and mucosal thickening. All CBCT images were acquired by a NewTom VGi scanner with a 15×15 field of view, as a part of preoperative recording of patients seeking rhinoplasty in an otolaryngology clinic. The chi-square test and odds ratio were used for statistical analysis of the obtained data. Agger nasi cells, which were seen in 93.2% of the cases, were the most common anatomic variation. They were followed by Haller cells (68%), concha bullosa (67.3%), uncinate process variations (54.8%), nasal septal deviation (49.5%) and paradoxical curvature of the middle turbinate (10%). Mucosal thickening was detected in 60.7% of the studied cases. Ostiomeatal complex variations and mucosal thickening are considerably prevalent among patients seeking rhinoplasty. This study also revealed that CBCT evaluation of the paranasal sinuses gives comparable results in delineation of the sinonasal anatomy. Otolaryngologists are interested in radiological assessment of paranasal regional anatomy. Certain aRegardless of the controversy about the role of anatomic variations of the ostiomeatal complex in inducing rhinosinusitis, being aware of the prevalence of these variations might be influential during surgical procedures that involve the paranasal sinuses, such as functional endoscopic sinus surgery (FESS) and rhinoplasty. -11For avoiding dissatisfaction after esthetic rhinoplasty, focus on esthetic improvement of the nasal shape should not sacrifice sinonasal health, and in tComputed tomography (CT) scan is the method of choice for evaluation of the paranasal sinuses, and the coronal plane is the preferred imaging plane that best displays the ostiomeatal complex. Moreover, since the introduction of the first cone beam computed tomography (CBCT) system for dentomaxillofacial imaging in 2001, research has focused on the feasibility of CBCT in several applications, including diagnosis of problems of the nose and paranasal sinuses. Considering the sinus scanning protocol, CBCT systems provide comparable high-contrast resolution and inferior low-contrast resolution relative to those obtained with multidetector CT (MDCT) scanners , with 15×15 field of view, taken as a part of preoperative recording of patients seeking rhinoplasty in an otolaryngology clinic over a 1-year period. 
Coronal cross sections for each patient were reviewed in NNT workstation by authors, for the following features:The incidence of anatomical variations affecting the ostiomeatal complex including the presence of Concha bullosa , Haller cells\u00a0, nasal septum deviation, paradoxical middle turbinate , Agger nasi , as well as variations in the shape direction and attachment of uncinate process.The incidence of mucosal thickening.Any alteration of the paranasal sinus anatomy resulting from previous surgery, benign tumors of sinonasal mucosa and facial trauma were considered as exclusion criteria. Data were statistically analyzed using SPSS Software, Version 15 . Chi- square test was used for statistical comparison of ostiomeatal anatomic variations between the two genders and between the two sides. Odds ratio was used to assess the significance of association between each of the anatomic variations and the presence of mucosal thickening. A total of 281 subjects who met the study criteria including 153 female (54.44%) and 128 male (45.55 %) patients were included in this study. The subjects were 17-52 years old with the Mean\u00b1SD age of 26.97\u00b17.38 years. Various degrees of mucosal thickening were detected in 60.7% of the studied cases. According to p= 0.02).As represented in Prevalence of mucosal thickening was also significantly higher in men than women.There was also significant relation between the presence of Agger nasi cells and absence of mucosal thickening .There was significant relation between the uncinate process variations and presence of mucosal thickening .Regardless of the controversy about the possible role of anatomic variations of paranasal sinus structures in predisposing the patients to recurrent rhinosinusitis, -8, 11-12et al. [et al. which evAccording to et al. [et al. [As displayed in et al. and Math [et al. The variet al. [et al. [et al. [et al., [Agger nasi was the most prevalent among the cases investigated in the present study (93.2%); which is comparable with \u00ad\u00adthe results of Bolger et al. Perez-Pi [et al. Zinreich [et al. Much les [et al. and Wani[et al., The vari[et al., , 29et al. (65%); [et al. (4.5%), [et al. (3%). [The prevalence of uncinate process variations in this study was 54.8% which correlated with the results of the studies performed by Wanamaker (45%), Mamatha . (65%); but high. (65%); Perez-Pi (4.5%), and Zinrl. (3%). et al. (53%), [et al. (67%), [et al. (73%). [et al. (36%), [et al. (15%). [The prevalence of concha bullosa in this study was 67.3% and correlated to Bolger . (53%), Scribano. (67%), Perez-Pi. (73%). It is hi. (73%). Zinreich. (36%), and Mama. (15%). et al. [et al. [It is important to note that the degree of pneumatization could be attributed to racial factors. Badia etet al. reported [et al. evaluate [et al. et al. (10%) [The prevalence of paradoxical middle turbinate in this study was 10% which is in line with the study performed by Perez-Pinas l. (10%) and Lloyl. (10%) , 31et al. [et al. [As shown in et al. were uni [et al. reported et al.; [et al., [et al. [et al. [Nearly all Agger nasi cells detected in the current study were bilateral (97.33% vs. 2.67%). This variation was also more bilateral among the cases investigated by Fadda et al.; whereas [et al. studies iet al. [et al. Bilateral and unilateral occurrence of uncinate process variations was almost similar in our cases (47.40% vs. 52.60%) while in the studies by both Wani et al. [et al. 
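The statistics named in the Methods above — the chi-square test for group comparisons and the odds ratio for the association between each anatomic variation and mucosal thickening — can be reproduced from a 2×2 contingency table. A minimal sketch follows; the counts are placeholders rather than the study's data, and scipy is assumed to be available.

```python
# Minimal sketch of the chi-square test and odds ratio described in the Methods.
# The 2x2 counts below are placeholders, NOT the study's actual data.
import math
from scipy.stats import chi2_contingency

#                     mucosal thickening   no thickening
table = [[60, 40],  # variation present
         [45, 55]]  # variation absent

chi2, p_value, dof, _ = chi2_contingency(table)

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
# 95% confidence interval on the log-odds scale (Woolf method)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

An odds ratio above 1 with a confidence interval excluding 1 indicates that the variation is associated with a higher prevalence of mucosal thickening, which is how the associations reported above should be read.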
[In this study, unilateral paradoxical curvature of middle turbinate 85.71%) were detected to be more than bilateral ones; which was in accordance with the studies by Wani et al. and Faddi et al. and Fadd.71% wereet al. [Picavet et al. performedet al. [In addition, we found mucosal thickening in 60.7% of cases while Picavet et al. found muet al., [Based on Mathew et al., there waet al., , 26, 30 et al., , 8-9 AccConsidering odds ratio, uncinate process variations predisposed the cases to mucosal thickening which is in accordance with the previous studies. , 5, 8 AdOstiomeatal complex variations and mucosal thickening have considerable prevalence among patients seeking rhinoplasty. To reduce the possible complications of the surgery and to achieve optimum satisfactory results, these structural and mucosal alterations could deliberately be evaluated by CBCT with relatively lower radiation exposure."} {"text": "Regional climate modeling using convection\u2010permitting models emerges as a promising framework to provide more reliable climate information on regional to local scales compared to traditionally used large\u2010scale models . CPMs no longer rely on convection parameterization schemes, which had been identified as a major source of errors and uncertainties in LSMs. Moreover, CPMs allow for a more accurate representation of surface and orography fields. The drawback of CPMs is the high demand on computational resources. For this reason, first CPM climate simulations only appeared a decade ago. In this study, we aim to provide a common basis for CPM climate simulations by giving a holistic review of the topic. The most important components in CPMs such as physical parameterizations and dynamical formulations are discussed critically. An overview of weaknesses and an outlook on required future developments is provided. Most importantly, this review presents the consolidated outcome of studies that addressed the added value of CPM climate simulations compared to LSMs. Improvements are evident mostly for climate statistics related to deep convection, mountainous regions, or extreme events. The climate change signals of CPM simulations suggest an increase in flash floods, changes in hail storm characteristics, and reductions in the snowpack over mountains. In conclusion, CPMs are a very promising tool for future climate research. However, coordinated modeling programs are crucially needed to advance parameterizations of unresolved physics and to assess the full potential of CPMs. Convection\u2010permitting climate models reduce errors in large\u2010scale modelsAdded value in convective processes, regional extremes, and over mountainsDiscusses challenges/potentials of convection\u2010permitting climate simulations Although climate mitigation and adaptation measures are evaluated and applied at local to regional level, current state\u2010of\u2010the\u2010art large\u2010scale models (LSM) and regional climate model (RCM)) operate at horizontal grid spacing that operates on the kilometer scale. Several different terms have been used for CPMs throughout the literature. Terminologies such as cloud\u2010resolving, convection\u2010resolving, cloud\u2010permitting, or convection\u2010permitting simulations have been frequently used interchangeably [e.g., Weisman et al., Prein et al., Lauwaet et al., Trusilova et al., In mesoscale atmospheric research, CPMs have been used for decades in studies of idealized cloud systems and real weather events. 
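The "high demand on computational resources" noted above is the central practical constraint, and a rough scaling argument makes the orders of magnitude concrete. The sketch below assumes, as is commonly done, that cost scales with the number of horizontal grid columns times the number of time steps, with the time step tied to the grid spacing through the CFL condition; vertical levels, I/O, and parameterization costs are ignored, so the numbers are indicative only.

```python
# Back-of-the-envelope cost scaling for horizontal grid refinement.
# Assumes cost ~ (number of grid columns) x (number of time steps),
# with the time step shrinking linearly with the grid spacing (CFL constraint).

def relative_cost(dx_coarse_km, dx_fine_km):
    r = dx_coarse_km / dx_fine_km   # horizontal refinement factor
    return r**2 * r                 # r^2 more columns, r more time steps

print(relative_cost(100.0, 1.0))   # ~1e6: a ~1 km global CPM vs. a ~100 km LSM
print(relative_cost(25.0, 2.2))    # ~1.5e3: a 25 km driving RCM vs. a 2.2 km CPM nest
```

The factor of roughly 10^6 between a ~100 km global model and a ~1 km global CPM obtained in this way is consistent with the cost figures quoted below.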
Since the beginning of the 21st century, advances in high\u2010performance computing allowed steady refinement of the numerical grids of climate models well beyond 10\u2009km. At these scales, convection parameterization schemes may eventually be switched off as deep convection starts to be resolved explicitly [e.g., Bernardet et al., Done et al., Schwartz et al., Weusthoff et al., Davis et al., Grell et al., x = 1\u2009km and showed drastic changes in the seasonal average precipitation patterns compared to LSM simulations. More recent studies have performed CPM simulations on time scales longer than 1\u2009year to investigate climatological features in CPM simulations [e.g., Brisson et al., Chan et al., Ban et al., Fosser et al., Kendon et al., Junk et al., T\u00f6lle et al., Prein et al., Rasmussen et al., Ikeda et al., Gensini and Mote, Chan et al., Kendon et al., Knote et al., Rasmussen et al., Four basic modeling approaches used to perform CPM climate simulations are visualized in Figure\u2009Miura et al., Satoh et al., Putman and Suarez, Miyamoto et al., Moore, The second approach is to run global integrations at convection\u2010permitting grid spacing Figure\u2009b. This aGrabowski and Smolarkiewicz, Khairoutdinov and Randall, Li et al., Benedict and Randall, Pritchard et al., Kooperman et al., Goswami et al., Stan et al., Randall et al. [2 to 103 larger than the costs for traditional large\u2010scale model (LSM). However, this technique is still computationally inexpensive compared to the second approach using global CPMs, which is 106 times more expensive than traditional LSM [Randall et al., To reduce the computational challenges of global CPMs, the third approach uses GCM with the so\u2010called superparameterizations Figure\u2009c [Grabowl et al. estimateSchmidt, Staniforth and Mitchell, Skamarock et al., Rauscher et al., Hagos et al., Tang et al., x = 17\u2009km is refined to \u0394x = 4\u2009km over Europe and to \u0394x = 1.4\u2009km over the British Isles. Using this stretched\u2010grid approach, Tang et al. [Skamarock et al., Wan et al., Slingo et al., Cornford et al., The fourth approach uses stretched\u2010grid models [e.g., g et al. showed tOnly the first approach, grid telescoping, is considered in this review, and various forms of this methods will be studied and compared in Since the beginning of the 21st century more and more studies have focused on CPM climate simulations and there is now a strong need to synthesize these studies and to build the foundation and common basis for future advances in climate modeling. Furthermore, impact researchers and stakeholders should be informed of what to expect from CPM climate simulations. This review paper aims to provide this kind of scientific basis by summarizing the knowledge acquired up to now and by highlighting existing challenges and important research questions in this field. In particular, we review the following: (1) What grid spacing is needed for CPM climate simulations ? (2) Wha2Grell et al., Gensini and Mote, In this section, we provide a brief summary of the most important CPM climate simulations reviewed in this paper. Their selection is based on a careful literature review. Figure\u2009Gensini and Mote, Additionally, the parent\u2010grid ratios, i.e., the integer parent\u2010to\u2010nest ratios of the horizontal grid spacing, differ widely among individual experiments. 
Most studies use a parent\u2010grid ratio between 1:3 and 1:9, except one study that used a parent\u2010grid ratio of 1:38 [3Gage, Nastrom and Gage, Wyngaard, Moeng et al., Pielke, The energy spectrum of deep convective clouds is continuous across kilometer scales without an apparent energetic gap indicating a scale separation [Weisman et al. [Deng and Stauffer, Lean et al., Roberts and Lean, Done et al., Weisman et al., Schwartz et al., Prein et al., The upper bound on the horizontal grid spacing of convection\u2010permitting simulations was investigated by n et al. using idBenoit et al., Richard et al., Lean et al., Skamarock and Klemp, Schwartz et al., Weusthoff et al., Baldauf et al., Kendon et al., Ban et al., Fosser et al., Dai et al., Randall et al., Guichard et al., Bechtold et al., Brockhaus et al., Hohenegger et al., Wyngaard, Moeng et al., Indeed, the explicit treatment of convection using models with grid spacings less than 4\u2009km has led to considerable improvements of quantitative precipitation forecasts [Bryan et al., Cullen and Brown, Khairoutdinov et al., Craig and D\u00f6rnbrack [Petch et al., Adlerman and Droegemeier, Bryan et al., Petch, Lang et al., Fiori et al., In the limit of extremely fine horizontal grid spacings, CPMs are thought to converge to large\u2010eddy simulations [\u00f6rnbrack found thLorenz, Zhang et al., Hohenegger and Sch\u00e4r, It is here important to distinguish the purpose and concept of CPM climate simulations from that of weather forecasts or studies of individual cloud systems. At kilometer scales, deterministic predictability is limited to a few hours and small\u2010scale structures are dominated by stochastic processes [Schwartz et al. [Kain et al. [Langhans et al. [This question was addressed by z et al. who analn et al. . Similars et al. of 4.4\u2009km, 2.2\u2009km, 1.1\u2009km, and 0.55\u2009km. Figure\u2009The minimum requirement on the horizontal grid spacing to simulate the bulk heat and moisture tendencies and surface precipitation from an ensemble of convective cells has been addressed by s et al. might modulate locally the dynamics in the boundary layer [Segal and Arritt, Taylor et al., Froidevaux et al., Lauwaet et al., It is of interest to point out that simulations on even finer numerical grids will certainly benefit from better resolved topographic features . Sufficient horizontal grid spacing is of particular importance to simulate the small\u2010scale variability of surface precipitation over complex terrain [Wang et al., Roh and Satoh, Varble et al. [Miyamoto et al. [Certainly, the question emerges of how much deviation from the converged solution is tolerated in CPMs. Considering the weak grid sensitivity reported above from models running with grid spacings below 4\u2009km and the fact that physical parameterizations result in a similar or even larger spread [e et al. or Miyamo et al. ) could o4Dickinson et al., Giorgi and Bates, Anthes et al., Dynamical downscaling is conceptually based on the generation of fine scales with RCM simulations initialized and driven by a coarse\u2010mesh GCM [In this section, we review the rationale behind specific downscaling strategies. Aspects addressed here include the nesting technique including the parent\u2010grid ratio, the effect of two\u2010way nesting, domain size, and nudging.4.1Antic et al., Denis et al., Brisson et al. [x = 80\u2009km). 
They concluded that an intermediate nesting step with \u0394x = 25\u2009km was essential for the correct representation of precipitation in the CPM. Introducing an additional nest with \u0394x = 7\u2009km did not improve the results while both directly nesting the CPM into ERA\u2010Interim and replacing the \u0394x = 25\u2009km with the \u0394x = 7\u2009km nest lead to a strong dry bias. The deterioration in the ERA\u2010Interim to CPM nesting is probably related to the small parent\u2010grid ratio of 1:30 and the small domain size, while the deterioration in the intermediate \u0394x = 7\u2009km experiment is probably related to its grid spacing that is in the gray zone where assumptions of convection parameterizations starts to break down.How to downscale large\u2010scale GCM output to regional and local scales over a limited area is a common challenge of LSM and CPM. Usually, multiple nested limited\u2010area domains at decreasing horizontal grid spacings are applied until the convection\u2010permitting scale is reached. However, there is no common agreement on how many steps are needed and how small the parent\u2010grid ratio between the individual nests could be because they might be too small to allow the RCM to spin\u2010up. To provide a smoother transition between the lateral boundary conditions and the RCM simulation, a boundary relaxation zone with typically exponentially decreasing weights in the outermost \u223c10 grid cells is applied [Davies, Marbaix et al., Regional climate models (RCMs) and CPMs need some spatial spin\u2010up for the generation of fine\u2010scale features because they are forced by lateral boundary conditions which are given on a coarser scale and which are often physically inconsistent with the RCMs physics . The extent of this spatial spin\u2010up depends on the speed of the flow, the parent\u2010grid ratio, hydrodynamic instabilities, nonlinear processes, and surface processes Laprise, . It is ht al.'s, estimateBrisson et al. [x = 3\u2009km simulation. This was partly related to the morphogenesis of hydrometeors which were not considered in the driving model.n et al. to about eight horizontal grid spacings (\u0394x). This reduces the effective resolution of such models implicitly diffuse the prognostic variables at scales ranging from two horizontal grid spacings . This w5.2Pruppacher et al., Adams\u2010Selin et al., Since convection parameterization schemes are not used in convection\u2010permitting models (CPMs), cloud microphysical processes and processes that contribute to the explicit triggering of deep convection on the grid gain in relevance. Microphysical processes in convective clouds are much more complicated than in stratiform clouds [e.g., Van Den Heever and Cotton, Cohen and McCaul, Adams\u2010Selin et al., Van Weverberg et al., Morrison and Milbrandt, Adams\u2010Selin et al. [Brisson et al. [Li et al., Several studies examined the effect of including graupel or hail to the microphysics scheme of convection\u2010permitting models (CPMs). It is not clear if introducing graupel or hail weakens [n et al. found thn et al. in CPM Tao et al., Karydis et al., Koren et al., Fan et al., Ekman et al., Lee et al., The limited knowledge about these processes and their interactions can have far\u2010reaching consequences. In particular, the representation of cloud\u2010radiative feedbacks relies heavily on accurate representations of cloud cover and cloud\u2010radiative properties . 
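Returning briefly to the lateral boundary treatment described earlier in this section: the Davies-type relaxation zone, with weights decaying roughly exponentially over the outermost ~10 grid cells, can be sketched in a few lines. The weight profile and update form below are a generic illustration, not the formulation used by any particular model.

```python
import numpy as np

# Generic sketch of a Davies-type lateral boundary relaxation: prognostic
# fields are nudged toward the driving (coarse-grid) fields with weights that
# decay roughly exponentially over the outermost ~10 grid cells.

def relaxation_weights(n_cells=10, e_folding=2.0):
    """Weight = 1 at the outermost cell, decaying toward the interior."""
    i = np.arange(n_cells)
    return np.exp(-i / e_folding)

def relax_western_boundary(field, driving, weights):
    """Blend the westernmost columns of `field` toward the driving fields."""
    field = field.copy()
    for k, w in enumerate(weights):
        field[:, k] = (1.0 - w) * field[:, k] + w * driving[:, k]
    return field

# Toy example on a 2-D field (ny x nx)
ny, nx = 50, 100
cpm_field = np.random.rand(ny, nx)
coarse_field = np.zeros((ny, nx))
blended = relax_western_boundary(cpm_field, coarse_field, relaxation_weights())
```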
Most importantly, the parameterization of cloud\u2010aerosol interactions remains poorly understood and a key uncertainty in both global models and CPMs [e.g., Tao et al., Rosenfeld et al., Wang et al., Levin and W. R. Cotton [Tao et al. [Because of the widespread effect of aerosols on the climate system, it is important to improve the prescription of aerosols in climate models. Therefore, coupling CPM with aerosol modules that are able to model properties such as particle size, chemical composition, and mixing state [. Cotton and Tao Liu et al. [Not only convective precipitation is sensitive to the representation of cloud microphysics. u et al. remain considerably larger than those due to a mesh refinement at kilometer scales [oncrieff or textb [Straka .Colette et al., Gu et al., Liou et al., Since CPM climate simulations have a more realistic representation of orography, topographic shading, and differential heating in narrow valleys can be simulated. Nonhydrostatic climate models, such as the COSMO\u2010CLM and the Weather Research and Forecasting Model (WRF) offer an option to include orographic shading and slope effects on radiative transfer. Including those effects delays the breakup of valley inversion layers [5.3Stull, Bryan et al., The planetary boundary layer is the part of the atmosphere that is directly influenced by the Earth's surface and where turbulent processes act on time scales shorter than 1\u2009h [e.g., x)) [e.g., Skamarock, Smagorinsky, Lilly, Deardorff, Bryan et al., Basically, two conceptually different approaches for turbulence parameterizations might be considered relevant to CPM simulations. Mesoscale models and GCM solve for the ensemble\u2010averaged Navier\u2010Stokes equations, and the mean flow can be considered laminar. All turbulent kinetic energy and all turbulent fluxes remain unresolved and need to be parameterized. The grid spacing and the effective resolution with the land surface and atmosphere are generally neglected or oversimplified [e.g., Freeze and Harlan [Gochis et al., Maxwell et al., Leung et al., Miguez\u2010Macho et al., Niu et al., Shrestha et al., York et al., Maxwell et al., Shrestha et al., 0\u2009m in the lateral and 10\u22122\u2009m in the vertical direction [Vogel and Ippisch, Based on the early blueprint for integrated hydrologic response models by d Harlan , a numbe5.5Cosgrove et al., Prein et al., Rasmussen et al., Ban et al., A challenging task for CPM climate simulations is the initialization of slow varying fields such as soil moisture and deep soil temperature. While the atmospheric component of climate models reaches an equilibrium state after a few days (spin\u2010up time) it can take several years for deep surface layer properties [e.g., Kantha and Clayson, Carton and Giese, Even longer spin\u2010up times are needed for coupled ocean models. In coupled GCM ocean models, deep ocean layers require hundreds of years to adjust while the upper ocean only requires about 50\u2009years [5.6Loveland et al., Masson et al., Arino et al., Sanchez et al., Sanchez et al., Mironov, Kumar et al., Subin et al., Martynov et al., To explore the full potential of CPM climate simulations, high gridded surface fields are necessary. Land cover information is typically provided on grids between 1\u2009km and 300\u2009m [Hastings and Dunbar, Suwandana et al., Messmer and Bettems, Suwandana et al., Another precondition for CPMs are highly resolved digital elevation models. 
State\u2010of\u2010the\u2010art digital elevation models have sufficiently high grid spacing between 1\u2009km and 30\u2009m [5.7Navarra et al., Smari et al., http://www.top500.org).CPM climate simulations pose a number of high\u2010performance scientific computing challenges. The continuous weather and climate model development that makes CPM climate simulations possible is concurrent to considerable progress of highly scalable supercomputing infrastructure with high\u2010bandwidth and low\u2010latency network connections (interconnects), multicore CPUs, or parallel file systems leading to ever\u2010increasing computational resources [6 grid elements, e.g., for mesoscale river catchments to up to more than 130\u00d7106 for continental domains, CPM climate simulations are computationally very demanding and require specific approaches and solutions that also enable them to efficiently run on next\u2010generation exascale computing systems (>1018\u2009floating point operations per second) [Attig et al., Keyes, Davis et al., Michalakes et al., Primarily, due to their small horizontal grid spacing, which demands small time steps below 60\u2009s in combination with many model levels , and domain sizes that may range from overall 8\u00d710Geimer et al., Carns et al., The speedup and parallel efficiency are measures to evaluate the success of a parallelization effort of an application, for example, in strong scaling studies with a constant problem size during the experiment. Figure\u2009Geer, Michalakes et al., Jin et al., Michalakes et al., Jin et al., Today's high\u2010performance computing systems are massively parallel distributed memory multicore supercomputers with very fast communication networks for data exchange between the individual compute nodes with a shared memory [e.g., Brodtkorb et al., Liu et al., Davis et al., Hwu, x = 2.5\u2009km for a continental U.S. model domain [Meadows, Michalakes and Vachharajani [Lapillonne and Fuhrer, A development in supercomputing that seems especially relevant for highly scalable resource\u2010intensive CPM climate simulations is the evolution of hybrid or heterogeneous high\u2010performance computing architectures where multicore CPUs are combined with accelerators (either graphic processing units (GPUs) or Many Integrated Core chip designs) on a single compute node [Meadows, . Michalaharajani reach anAloisio and Fiore, \u22121, i.e., 204\u2009Tb for a single decadal run. Data input and output operations, handling and transfer, and analysis as well as storage and archival of such data volumes therefore become a grand challenge [Overpeck et al., NAFEMS World Congress, Clyne and Norton, Zhang et al., Childs et al., The small spatial grid spacing of the CPM climate simulations and necessary frequent output intervals combined with increasing ensemble sizes pose a substantial big data challenge [Steed et al., Generally, efficient generic analysis frameworks for big geoscience data are still rare, leading to a disparity between parallel, highly scalable model simulations and file systems versus serial analysis/postprocessing tools and often still serial input and output [5.8Kirchengast et al., Haiden et al., W\u00fcest et al., Golding, Lin and Mitchell, Overeem et al., Mass and Madaus, Smith et al., Many of the above described model developments, the evaluation, and assignment of added value in CPM climate simulations are crucially dependent on high\u2010quality subdaily observational data sets with grid spacings at the kilometer\u2010scale. 
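For reference, the strong-scaling diagnostics mentioned in the computing discussion above are simple to evaluate: speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p for a fixed problem size. The wall-clock timings in the sketch below are hypothetical.

```python
# Strong-scaling diagnostics: speedup S(p) = T(1)/T(p), efficiency E(p) = S(p)/p.
# Wall-clock times below are hypothetical, for illustration only.

timings = {1: 1000.0, 64: 17.0, 256: 4.8, 1024: 1.6}   # cores -> seconds

t1 = timings[1]
for cores, t in sorted(timings.items()):
    speedup = t1 / t
    efficiency = speedup / cores
    print(f"{cores:5d} cores: speedup {speedup:7.1f}, efficiency {efficiency:5.2f}")
```

Declining efficiency at high core counts is the usual signature of communication and I/O overheads.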
The high temporal and spatial resolution is needed to examine the simulation of small\u2010scale extreme events; an area where CPM climate simulations have high potentials to improve LSM simulations see . GloballBougeault et al., Grubi\u0161ic et al., Wulfmeyer et al., Alternatives for model development are observations derived from measurement campaigns like the Mesoscale Alpine Programme [66.1As already discussed in Denis et al., x = 4\u2009km) and with two LSMs is shown in Figure\u2009x). The higher variability at small wavelengths is a precondition for potential added value in CPM climate simulations, but no guaranty. The form of the spectra also indicates that spatial or temporal averaging of the CPM climate simulation's precipitation will smooth out the potential added value. The variance spectra of other atmospheric variables have similar characteristics [e.g., Skamarock, Horvath et al., Prein et al. [A useful tool to investigate the potential added value is the decomposition of the variance of atmospheric fields into contributions from different wavelengths [e.g., n et al. .6.2Frei and Sch\u00e4r, Isotta et al., This section gives an overview of studies that have evaluated precipitation in CPM climate simulations against LSMs and observations. The observations used for validation are fine\u2010gridded precipitation data sets, mostly based on radar and rain gauge measurements. Thus, during the evaluation one should account also for the uncertainties in the observations [6.2.1Langhans et al., Langhans et al., Prein et al., A major added value using CPMs is the improved diurnal cycle of summer precipitation. Figure\u2009Fosser et al. [Baldauf et al., Fosser et al., r et al. . Applyin et al. of the applied CPM. However, Langhans et al. [Ban et al., Fosser et al., CPMs have a tendency for too intense daily heavy precipitation in mountainous regions [s et al. found noBan et al., Fosser et al., Chan et al., Gensini and Mote [The largest differences between LSM and CPM climate simulations occur on short (subdaily) time scale and for summertime high\u2010precipitation intensities. Heavy hourly precipitation is typically underestimated in LSM, while large improvements were found in CPM climate simulations [and Mote and a high vertical resolution in the boundary layer because of the small\u2010scale processes involved. Short\u2010term convection\u2010permitting model (CPM) simulations are able to realistically simulate valley inversions and the related temperature and wind structures [Zhong and Whiteman, Vosper et al., Wei et al., Daly et al., Simulating inversions in CPM climate simulations usually demands subkilometer\u2010scale horizontal grid spacing with the WRF model. They found that decreasing horizontal grid spacing (\u0394x) from 8\u2009km to 1\u2009km leads to a drop in minimum central pressure of 30\u2009hPa attributed to small\u2010scale physical processes, which are important for tropical cyclone intensity. The domain size of the CPM should not be smaller than 500\u2009km to simulate realistic small\u2010 and large\u2010scale features of the storm.Although tropical cyclones are synoptic\u2010scale objects with spatial extents of several hundreds to thousands of kilometers, small\u2010scale features such as deep convection or narrow wind systems are crucial for their formation and characteristics such as maximum wind speed and central pressure. Several studies indicate improvements in the simulation of tropical cyclones when CPMs are used [e.g., Lackmann tested tTaraphdar et al. 
[x = 1.1\u2009km with LSM simulations and found improvements in the observed tracks and intensities in their CPM simulations.r et al. compared6.7Seager et al., The decrease in grid spacing of RCM to the CPM scale has important benefits for the representation of the local city climate. This opens the way for an improved assessment of the climate change impact due to urbanization. It is important that this impact is included in climate projections for the future: For example, for North America, urban\u2010induced climate change is of the same order of magnitude as that due to long\u2010lived greenhouse gases [Gimeno et al., Grossman\u2010Clarke et al., Masson, For numerical weather prediction, the typical convection\u2010permitting model (CPM) spatial scale has already been reached a decade ago, implying that some grid cells are completely urbanized [Trusilova et al. [x = 10\u2009km), indicating a reduced diurnal temperature range in urbanized regions. Zhang et al. [Van Weverberg et al., Hamdi and Vyver, Grossman\u2010Clarke et al., Grawe et al., Bohnenstengel et al., Wouters et al., x > 10\u2009km) [Trusilova et al., Wouters et al., One of the first to assess how urbanization affects climate in Europe was a et al. . Therefore, our understanding of how those processes are acting on climate time scales is very limited.7IPCC, As shown in the previous sections, CPM climate simulations facilitate the understanding of the climate system's behavior at scales most relevant to policy makers and citizens. Additionally, climate models are used to make projections of possible future evolutions of the climate system, typically up to 2100 IPCC, . In thisNakicenovic and Swart, Van Vuuren et al., Rogelj et al., Climate projections are based on possible future evolutions of the emissions of greenhouse gases, aerosols, and precursor gases as well as land use/land cover changes, which are derived based on socioeconomic and technological assumptions. As future anthropogenic choices cannot be predicted, different projections covering a range of possible scenarios are used to make projections for the future climate [IPCC, Cholette et al., Hohenegger et al., These scenarios are used to prescribe atmospheric composition in general circulation models, which in turn, simulate the response of the climate system to these changes. Due to their coarse grid spacings and the7.1Junk et al. [T\u00f6lle et al. [T\u00f6lle et al. [k et al. and T\u00f6lle et al. show thae et al. shows thKnote et al. [e et al. between 60\u2009km and 20\u2009km results shows an increase in the number of intense cyclones [e.g., Bengtsson et al., Knutson et al., x) around 20\u2009km can simulate tropical cyclone with central pressures below 900\u2009hPa [Murakami et al., Braun and Tao, Gentry and Lackmann, The skill in simulating tropical cyclones depends on the horizontal grid spacing of climate models see . There iKanada et al. [Kanada et al. [a et al. .8.1x) for which numerical weather and climate models start to permit deep convection sufficiently to avoid the use of error\u2010prone deep convection parameterization schemes is around 4\u2009km. The physical justification for the application of convection parameterizations starts to break down for horizontal grid spacing (\u0394x) approximately smaller than 10\u2009km such that parameterizations have to be either reformulated or switched off. 
Simulations with horizontal grid spacing (\u0394x) in between the convection\u2010permitting (\u0394x < 4\u2009km) and the convection\u2010parameterized scale (\u0394x\u226510\u2009km), called \u201cgray zone,\u201d should be avoided before suitable scale\u2010aware parameterizations are designed. Several studies report minor sensitivities for grid spacings smaller then 4\u2009km, especially when compared to the sensitivities stemming from physical parameterizations such as microphysics. A more stringent upper bound on the horizontal grid spacing of CPM climate simulations will have to be enforced if small\u2010scale topographic features or land surface heterogeneities need to be resolved.The horizontal grid spacing should be kept larger than 1:12 but also smaller ratios have already been applied with success.x < 10\u2009km. (2) CPM simulations demand higher accuracy and stability of numerical discretization schemes because of steeper slopes (better resolved orography). (3) The effective resolution, which is largely set by the implicit diffusion of discretizations, needs to be high in order to prevent too strong smoothing of the small\u2010scale dynamics and in order to not unnecessarily waste computational resources.Several approximations and simplifications in the numerics of LSMs lose their validity or cause instabilities when convection\u2010permitting grid spacings are approached. (1) CPM simulations demand a nonhydrostatic formulation of the dynamical core since the hydrostatic approximation is no longer valid for \u0394The setup of CPM climate simulations is often adapted from numerical weather prediction models because of the high computational costs of testing the physical settings in CPM climate simulations. Whether these settings are appropriate for simulations on climate time scales is, however, largely unknown.8.2Since in CPM climate simulations deep convection is explicitly modeled on the numerical grid, the initiation of convection as well as the evolution and morphology of individual clouds becomes overall more dependent on parameterizations of microphysical processes. The introduction of additional hydrometeor species, such as graupel and hail, or the simulation of the number concentration of cloud particles using two\u2010moment microphysics schemes can be beneficial for the representation of wintertime orographic precipitation and high cloud cover. However, several physical processes in convective and mixed\u2010phase clouds and their interaction with aerosols are not well understood and somewhat constrain the full potential of two\u2010moment schemes. Sensitivities related to cloud microphysical parameterizations and the related sensitivity of the cloud\u2010radiative feedback remain considerably larger than those due to mesh refinement at kilometer scales.Since soil\u2010atmosphere interactions are important for many atmospheric processes, from energy fluxes at the surface that interact with near\u2010surface fields to the boundary layer dynamics (including the initiation of convection), the introduction of more advanced representations of soil and vegetation processes seems to be crucial in CPM climate simulations.8.3Houze, [Romps, Edman and Romps, There is clear evidence that CPM climate simulations are able to add value to LSMs. However, it is also important to mention that CPM climate simulations are not the cure for all model biases. 
The largest added value can be found on small spatial and temporal scales (<100\u2009km and subdaily), in regions with steep orography, and higher\u2010order statistics . High potential for added value lies in process\u2010based analyses such as analysis of storm dynamics, local wind systems, and the interaction of atmospheric flows with orography representation of extreme precipitation on hourly time scales; (2) timing (onset and peak) of the diurnal cycle of summertime convection; (3) improved structures of precipitation objects ; (4) improved simulation of wet\u2010day frequencies; (5) the added value for precipitation is smaller for winter than for summer except in mountainous regions because of the stronger orographic forcing and reduced role of convection; (6) improved simulations of the buildup and melting of snowpack; (7) improvements in temperature at a height of 2\u2009m related to improved representation of orography; and (8) simulation of the center pressures and small\u2010scale processes in tropical cyclones.Due to the high computational costs, only a small number of groups investigated climate change issues using CPM climate simulations. All these studies show highly relevant differences in the regional to local climate change signals between CPM climate simulations and LSM simulations. In addition, these studies delivered insights into changes of some meteorological processes which would not have been gained with LSM. Those differences are as follows: (1) a significant increase of short\u2010duration extreme precipitation events during summer; (2) hail storms over mountains are likely to produce more hail in the upper parts of clouds in a warmer climate, but the amount of hail reaching the surface is likely reduced to zero due to an average increase of the melting level by 500\u2009m; (3) stronger decrease of central pressures and stronger maximum 10\u2009m wind speed in extreme intense tropical cyclones in the northwestern Pacific Ocean; (4) decreases in the future runoff from the Colorado Headwaters due to increased evapotranspiration which counteracts the predicted increase in runoff due to increased precipitation; (5) decreased snowfall to precipitation fraction reduces the snowpack especially in lower elevated areas and late spring; and (6) stronger increase of the daily minimum/maximum temperature in mountain/valley areas of Western Germany and parts of Luxemburg.Climate\u2010relevant feedback mechanisms not only reveal different magnitudes but some even differ in sign depending on whether a LSM or a CPM was used: (1) in LSMs the soil moisture\u2010precipitation feedback depends on the used convection parameterization, while CPMs are able to simulate this feedback more realistically; (2) a potential increase in future forest cover, as a result from converting agricultural land to bioenergy plantations, could cause a vegetation\u2010atmosphere interaction which cools down regional maximum temperature at a height of 2\u2009m.The inability of LSMs to replicate climate\u2010relevant feedback mechanisms casts doubt on the reliability of regional\u2010scale information from LSM projections of future climates.8.4M\u00f6lg and Kaser [Simulations with LSMs typically have to be statistically downscaled to generate information relevant for impact models . Because of the fine grid spacing of CPM climate simulations, this statistical postprocessing can be avoided as shown in a study by nd Kaser . A direc9Soares et al. [Bogenschutz and Krueger [Moeng et al. 
[Moeng [Turbulent parameterizations have to be developed to accurately represent the planetary boundary layer, shallow convection, subgrid\u2010scale cloud cover, and turbulent fluxes related to deep convective systems at kilometer scales. Recent developments, such as those by s et al. , Bogensc Krueger , Moeng eg et al. , and Moe. [Moeng , show prMicrophysics schemes applicable in CPMs are able to simulate additional hydrometeors such as graupel or hail. While this can be beneficial for the simulation of precipitation, their interaction with radiation is often neglected because of the poorly understood optical properties of those hydrometeors.Two\u2010moment microphysics schemes allow for a more realistic distribution of hydrometeors. However, those schemes are highly tunable, and many key processes such as drop\u2010drop interactions, the formation of ice particles, or the shape of the particle size distributions are not well known. To unfold the full potential of these schemes, further research is necessary to improve the representation of these processes.There is very limited knowledge about cloud\u2010aerosol interactions with far\u2010reaching consequences on the development of deep convection, cloud cover, and precipitation. Cloud\u2010aerosol interactions are a major source of uncertainties in future climate projections. Besides coupling CPM climate simulations with aerosol modules that are able to describe particle sizes, chemical compositions, and mixing states, basic research on cloud\u2010aerosol interactions is needed.x). Reducing the numerical diffusion can therefore save computational resources. Numerical schemes with higher order of accuracy like those suggested by Morinishi et al. [Ogaja and Will [CPM climate simulations demand for high accuracy and stability of the numerical solver to avoid instabilities and numerical diffusion. Numerical diffusion leads to effective model resolutions that are many times greater than horizontal grid spacing . An efficient performance on such architectures demands a restructuring or rewriting of parts of the model code. This may lead to speedups of up to 7X for dedicated parts of the code [e.g., Overpeck et al., Zhang et al., Childs et al., Not only the computational time but also the data amount processed during CPM climate simulations is challenging. Data input/output operations, handling and transfer, analysis as well as storage, and archival of such data volumes become a grand challenge [A frequently limiting factor for the detection of added value and the evaluation of CPM climate simulations is the availability of suitable observational data sets. Since the potential added value is expected at small temporal and spatial scales and for extremes, fine\u2010gridded observational data sets in high temporal resolution are needed. In addition, such measurements are required to cover long temporal and spatial scales in order to be utilized in climate model evaluations.Detecting added value in CPM climate simulations demands suitable evaluation methods. Traditional statistics used in climate research such as climatological mean values, or annual to decadal variabilities are mostly not suitable. The new methods should be able to decompose the spectrum of variability in order to isolate small\u2010scale features, to analyze the tails of climatological distributions, the extreme value statistics, or to investigate joint distributions . 
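Two of the evaluation building blocks just listed — decomposing variance into contributions from different wavelengths and examining the tails of the hourly precipitation distribution — are straightforward to prototype. The sketch below uses synthetic data and a plain FFT; it illustrates the idea only and is not the analysis used in any of the cited studies.

```python
import numpy as np

# Synthetic hourly precipitation transect (mm/h) on a 1 km grid, 512 points.
rng = np.random.default_rng(0)
dx_km = 1.0
precip = rng.gamma(shape=0.2, scale=2.0, size=512)

# (1) Discrete power spectrum of the anomaly field, by wavelength.
anomaly = precip - precip.mean()
power = np.abs(np.fft.rfft(anomaly))**2 / anomaly.size
wavelength_km = dx_km * anomaly.size / np.arange(1, power.size)  # skip k = 0 (mean)

# (2) Tail statistics: high percentiles of the hourly distribution.
p99, p999 = np.percentile(precip, [99.0, 99.9])

print(f"spectral power at wavelengths < 10 km: {power[1:][wavelength_km < 10].sum():.2f}")
print(f"99th / 99.9th percentile: {p99:.1f} / {p999:.1f} mm/h")
```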
Furthermore, process\u2010based analysis methods can reveal deeper insights into the more physically and dynamically consistent atmospheric phenomena in CPM climate simulations.Meehl et al., Giorgi et al., Most of the studies presented in this review are based on results derived from a single model and/or considerably short simulation periods. Therefore, assessing the reliability and uncertainty in the presented results is challenging. More decadal\u2010scale CPM climate simulations are needed to improve our understanding of the climate system and involved feedbacks. A joint effort to address both added value and climate change signals in CPM climate simulations in an organized and coordinated way comparable to other programs like the coupled model intercomparison project [Although CPM climate simulations already have proven to add value to LSM simulations and to provide better insights in regional processes that are highly relevant for society and policy makers, there are several challenges which have to be addressed to exploit their full potential."} {"text": "A minimum duration of 10\u201315 min may result in increased cortisol levels with peak concentrations 20\u201330 min after the cessation of the exercise bout led to an increase in cortisol levels contrasted to a group exercising with moderate intensity (50\u201365% HRmax) , and 45 min of intense strengthening . They limited their findings by suggesting that the diurnal timing of the exercise session may be partially responsible for the observed decrease in cortisol levels. The 36% decrease during the first sampling interval (7:45\u20139:05 a.m.) was significantly higher than the 14% decrease during the longer second interval (9:05\u201311:00 a.m.). Thus, the reported cortisol data in the paper by Tsai et al. (These results confirm the threshold phenomenon for adults also in adolescents of late puberty stages and indicate that the concentration of cortisol after acute bouts of exercise is intensity dependent. In contrast to these results Tsai et al. found thy et al. formulatr et al. used acui et al. lack supi et al. is the di et al. reportedi et al. waited fi et al. . Most ofThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "However, despite their importance to the global climate history they have never been recorded in shallow marine carbonate successions. The Decontra section on the Maiella Platform , however, allows to resolve them for the first time in such a setting during the early to middle Miocene. The present study improves the stratigraphic resolution of parts of the Decontra section via orbital tuning of high\u2010resolution gamma ray (GR) and magnetic susceptibility data to the 405\u2009kyr eccentricity metronome. The tuning allows, within the established biostratigraphic, sequence stratigraphic, and isotope stratigraphic frameworks, a precise correlation of the Decontra section with pelagic records of the Mediterranean region, as well as the global paleoclimatic record and the global sea level curve. Spectral series analyses of GR data further indicate that the 405\u2009kyr orbital cycle is particularly well preserved during the Monterey Event. Since GR is a direct proxy for authigenic uranium precipitation during increased burial of organic carbon in the Decontra section, it follows the same long\u2010term orbital pacing as observed in the carbon isotope records. 
The 405\u2009kyr GR beat is thus correlated with the carbon isotope maxima observed during the Monterey Event. Finally, the Mi\u2010events can now be recognized in the \u03b4 Orbitally tuned shallow marine carbonate succession in the Mediterranean regionCorrelation of Mi\u2010 and CM\u2010events within Miocene shallow marine carbonatesGamma ray and magnetic susceptibility as proxies on isolated carbonate ramps All of which are well recorded in pelagic successions, especially by stable carbon and oxygen isotope records and the resulting global \u03b4Miller et al., Woodruff and Savin, Woodruff and Savin, Miller et al., Westerhold et al., Holbourn et al., John et al., Mourik et al., Mutti et al., Mutti et al., Mutti and Bernoulli, Brandano et al., Reuter et al., Extensive records of both the Miocene oxygen isotope excursions (Mi\u2010events) [Mutti et al., Mutti et al., Reuter et al., For the correlation of shallow marine carbonate records with pelagic reference sections different methods have been developed and applied more or less successfully. Biostratigraphy is still the preferred method to place shallow marine sections into the global chronostratigraphy. Since planktonic marker species are mostly rare in these settings, biostratigraphy is often of only limited use for correlation with the global chronostratigraphy [e.g., Mutti et al., John et al., Brandano et al., Iryu et al., Reuter et al., 13C stratigraphy is still comparably low, limiting environmental interpretation as well as the accuracy of the correlation of shallow marine sections to higher resolution global records. One possible way to achieve an increase in stratigraphic resolution would be the application of astrochronology and orbital tuning in order to refine stratigraphic models established with conventional methods.Carbon isotope stratigraphy is a widely used method for stratigraphic correlations and the paleoclimatic interpretation and timing of environmental changes [Hilgen et al., Westerhold et al., Lirer et al., Mourik et al., Hinnov and Hilgen, Zeeden et al., Hilgen et al., Abels et al., Westerhold et al., Raffi et al., Lirer et al., Mourik et al., Zeeden et al., Zachos et al., Wade and P\u00e4like, P\u00e4like et al., Holbourn et al., Astrochronology became a standard method for the temporal refinement of the geological record [In this study we apply astrochronological principles to two geophysical proxy records (gamma ray and magnetic susceptibility) of an already biostratigraphically and chemostratigraphically dated succession of shallow water carbonates of the Maiella Platform , in order to investigate a possible expression of Mi\u2010 and CM\u2010events in the section. Furthermore, the use of orbital tuning of the section using the 405\u2009kyr eccentricity \u201cmetronome\u201d improved the already established age model.2The studied section is located in the northwest of the Maiella mountain range in eastern Central Italy. The 120\u2009m\u2009thick Decontra section is exposed along a trail near the village Decontra leading into the Orfento river\u2010valley. The base of the lithostratigraphic section is located at the GPS coordinates 42\u00b009\u203243.5\u2033N, 014\u00b002\u203221.6\u2033E at the far end of the trail from the village . Results are reported in total counts (GR) and dimensionless SI units (MS). Measuring distances were 10\u2009cm for GR and 5\u2009cm for MS, based on size restriction of the used devices. Results of the measurements are summarized in Table\u00a0Reuter et al. 
[During the logging of the section high\u2010resolution geophysical measurements, including total gamma radiation (GR) and magnetic susceptibility (MS), were carried out on site [r et al. .18O and \u03b413C) were measured on 89 bulk samples. TOC and carbonate were measured using a LECO CS300 analyzer at the University of Graz. For TOC 0.1\u20130.15\u2009g of powdered bulk sample was decalcified using 2N HCl prior to the LECO analysis. To calculate the carbonate content of the samples the total carbon (TC) was first measured for each bulk sample. Afterward total inorganic carbon (TIC) was calculated by subtracting the TOC from the TC content in each sample. Carbonate content was then calculated as calcite equivalent percentages using TIC with the stoichiometric formula (TIC\u2009\u00d7\u20098.34) [Reuter et al., Total organic carbon (TOC), carbonate content, and stable isotopes . The REDFIT settings used for each data set are shown in Figure\u00a0Schulz and Mudelsee, Since different sedimentation rates within lithological units are likely, all subsequent analyses were conducted separately on each unit. Spectral analyses for both GR and MS were performed using REDFIT and Wavelet spectra . Linear interpolation was performed with double the original sample points in order to reduce aliasing of the data series. Interpolated data sets were used for both wavelet analysis as well as the used Gaussian band\u2010pass filters.The data sets were reinterpolated using Analyseries and magnetic susceptibility (MS) values are relatively low throughout the section, they show clear patterns Figure\u00a0. MS showReuter et al., Stonecipher, N\u00e4hr et al., Calcium carbonate content of the section typically varies between 90% and 95% in the Bryozoan Limestone as well as the lower part of the Cerratina cherty Limestone. Marked decreases (down to ~70%) occur in the upper part of the Cerratina cherty Limestone, which coincide with recorded occurrences of \u201cmarl layers\u201d described in the literature [Reuter et al., Conversely, total organic carbon (TOC) is quite low throughout the section. Overall, the recorded trends of TOC seem to be directly related to lithology within the section. Average values for the Cerratina cherty Limestone are reported above ~0.2%, while the Bryozoan Limestone consistently shows values ranging from ~0.1 to ~0.15% TOC Figure\u00a0. The pos4.2Reuter et al., REDFIT analysis of both GR and MS measurements resulted in significant peaks for both studied units, which are summarized in Table\u00a0Laskar et al. [Calculating the sedimentation rates for both the Bryozoan and Cerratina cherty Limestones in this manner results in a good fit of the selected periodicities with the long (~405\u2009kyr) and short (~100\u2009kyr) eccentricity, as well as obliquity (~41\u2009kyr). Sedimentation rates were then adjusted to improve the fit with the reported periodicities of known orbital cycles, resulting in the present age model Table\u00a0. Supportr et al. performeSignificant spectral peaks that fit orbital cyclicities using the estimated sedimentation rates for the Cerratina cherty Limestone are recorded at periodicities of 0.22\u2009m, 0.54\u2009m, and 2.28\u2009m, respectively, with significances well above, or close to, the 95% AR1 Monte Carlo tested confidence interval are not well reflected in the GR record. 
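As an illustration of the workflow described above, the sketch below uses a Lomb-Scargle periodogram (a simple stand-in for REDFIT, which is a separate program by Schulz and Mudelsee) to pick the dominant periodicity of an unevenly sampled depth series, and converts a peak periodicity in metres into a sedimentation rate by equating it with the 405 kyr eccentricity cycle. The data handling and function names are hypothetical; only the closing arithmetic (a 2.28 m cycle read as the 405 kyr beat gives roughly 0.56 cm/kyr) follows directly from the periodicities quoted above.

import numpy as np
from scipy.signal import lombscargle

def dominant_periodicity_m(depth_m, values, n_periods=2000):
    """Lomb-Scargle periodogram of a (possibly unevenly sampled) depth series;
    returns the periodicity in metres of the strongest spectral peak."""
    depth_m = np.asarray(depth_m, dtype=float)
    values = np.asarray(values, dtype=float)
    values = values - values.mean()
    # trial periods between ~0.2 m and the full section length
    periods = np.linspace(0.2, depth_m.max() - depth_m.min(), n_periods)
    omega = 2.0 * np.pi / periods                   # angular frequencies
    power = lombscargle(depth_m, values, omega, normalize=True)
    return periods[np.argmax(power)]

def sedimentation_rate_cm_per_kyr(peak_period_m, orbital_period_kyr=405.0):
    """If a spectral peak is interpreted as an orbital cycle, the implied
    sedimentation rate is simply thickness divided by duration."""
    return peak_period_m * 100.0 / orbital_period_kyr

# Arithmetic example: the 2.28 m peak matched to 405 kyr implies ~0.56 cm/kyr
print(sedimentation_rate_cm_per_kyr(2.28))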
The absence of higher\u2010frequency peaks can be explained by the lower resolution of the GR measurements compared to that of the MS record.Wavelet spectra calculated using the reinterpolated data sets support the detected periodicities in GR and MS in both Cerratina cherty Limestone and Bryozoan Limestone. Additionally, the resolution of the wavelet spectra appears to be better for higher (shorter) frequencies (periodicities) for the GR signal, which in turn allows a better identification of the 100\u2009kyr and 41\u2009kyr peak in the Bryozoan Limestone Figure\u00a0. NotableFilters of the assumed 405\u2009kyr periodicity were examined for shifts in amplitude in GR and MS for both studied units, respectively. Amplitudes for the MS in the Cerratina cherty Limestone are strong in the lower part of the unit. The middle part is characterized by a slow decrease in amplitude, before an increase can be observed right at the boundary to the Bryozoan Limestone. Amplitudes for GR broadly reflect the trends of MS but are generally lower and less pronounced, reflecting the low\u2010significance 405\u2009kyr peak recorded in the REDFIT power spectrum of GR.Within the Bryozoan Limestone the 405\u2009kyr filter of MS shows constantly strong amplitudes, except for a single peak, that records a marked amplitude minimum in the middle part of the section. This amplitude minimum correlates directly with a planktonic foraminifera\u2010dominated Limestone that occurs in the middle part of the Bryozoan Limestone Figures\u00a0 and 7. T55.1Reuter et al. [McManus et al., Reuter et al., r et al. interpreMcManus et al., McManus et al., Chase et al., Anderson et al., McManus et al., McManus et al., McManus et al., Recent studies underscored the strong relationship between uranium precipitation, organic carbon rain, and prevailing redox conditions [e.g., Klinkhammer and Palmer, Russell et al., F\u00f6llmi, Mutti and Bernoulli, Reuter et al., Veizer, ten Kuile and Erez, Russell et al., Dunk et al., Russell et al., Accepting these hypotheses, it can be assumed that higher\u2010order variations observed in the natural gamma radiation of the Decontra section are likely a direct proxy for burial rates of organic matter. While it is currently unknown in which mineral phase uranium is preserved in the Decontra section, only calcium fluorapatite or calcium carbonate seems likely options considering the known lithology of the section [Veeh et al., Ku et al., Anderson, Zheng et al., Furthermore, dissolved uranium concentrations in the water column are rather unresponsive to changes caused by varying terrigenous input, since uranium is known to have a long residence time in the ocean (200\u2013400\u2009kyr) [5.2Ellwood et al., da Silva et al., Hladil et al., Karlin et al., Sparks et al., Hladil et al., Simmons et al., Kopp and Kirschvink, Magnetic susceptibility in carbonates deposited on isolated carbonate platforms can result from two different processes: (1) Deposition of detrital paramagnetic and ferromagnetic particles by means of aeolian transport processes [e.g., Bar\u2010Or et al., Reuter et al. [For aeolian transport of magnetic particles, variations in glaciation are suspected as major controlling factor. This is based on the assumption that higher surface area glaciation increased dust flux in the atmosphere through stronger surface winds, lowered surface humidity, and soil moisture, as well as increased desertification as a result of falling sea levels and decreases in vegetation [r et al. 
as the mAissaoui et al., McNeill, Maloof et al., Kopp and Kirschvink, Kopp and Kirschvink, Kopp and Kirschvink, Hesse, Lean and McCave, Yamazaki and Kawahata, Kopp and Kirschvink, However, studies on the magnetic properties of platform carbonates in recent and ancient sediments suggest that magnetic susceptibility in carbonates with low\u2010terrigenous input is predominantly caused by the activity of magnetotactic bacteria [Kopp and Kirschvink, However, this relationship is only true when a significant amount of organic carbon is present in the first place to create an extended OATZ. In areas exhibiting habitually low organic carbon concentrations (\u22640.4\u2009wt %) the OATZ will never develop strong enough to facilitate an extensive population of magnetotactic bacteria [Applying these assumptions to the Decontra section where TOC levels are generally low (ranging from 0.06 to 0.31\u2009wt %), variations in magnetic susceptibility are most likely related to an increase in organic carbon burial during times of higher primary productivity that caused the formation of an OATZ conductive to the growth of magnetotactic bacteria. Conversely, during times of lower productivity the generally rather high water energy at the Maiella carbonate ramp\u2014especially in the Bryozoan Limestone\u2014inhibited the establishment of anoxic conditions in the sediment largely preventing the growth of magnetotactic bacteria.5.3Cramer et al., Wade and P\u00e4like, Holbourn et al., Mourik et al., Diester\u2010Haass et al., The available evidence can now be used to propose a positive feedback of primary productivity on both GR and MS data within the Decontra section. It can thus be assumed that the phase relationship of the two records with orbital eccentricity should be the same as other primary productivity proxies. It is further well established that increases in primary productivity are generally associated with eccentricity minima during the Oligocene/Miocene [see The correlation of long\u2010term GR maxima and with pronounced amplitude minima of the eccentricity curve, as well as larger trends in the MS record with the eccentricity amplitude modulation, offer additional support for the assumed phase relationship Figures\u00a0.5.4Mutti et al., Reuter et al., Reuter et al., Praeorbulina, (2) the occurrences of Nephrolepidina praemarginata and N. morgani [Reuter et al., Lithothamnium Limestone [Carnevale et al., Until recently the poor biostratigraphic resolution of the Decontra section was a well\u2010known problem, which was resolved by applying chemostratigraphy as a main correlation tool [e.g., Praeorbulina is of particular importance as it constrains the age of the upper part of the Cerratina cherty Limestone. The currently accepted global first appearance datum (FAD) of Praeorbulina scianus occurs at 16.38\u2009Ma [Wade et al., Iaccarino et al., Turco et al., Praeorbulina lineage is still highly debated. Nevertheless, the appearance of Globigerinoides sicanus with a morphotype showing a near\u2010spherical outline can be dated to 16.177\u2009Ma in the Mediterranean, which significantly postdates the FO of Globigerinoides sicanus (in a broad sense) in the northern Atlantic, which occurs at 16.844\u2009Ma [Iaccarino et al., Praeorbulina lineage in the northern Atlantic significantly predates the global datum, an occurrence of older specimen cannot be excluded in the Mediterranean. 
However, all currently available data support a maximum age of 16.177\u2009Ma for the first occurrence of Praeorbulina in the Mediterranean. Based on the currently available data, this datum also needs to be assumed as correct for the Decontra section of Praeorbulina, the FO of the taxon only offers a broadly constrained tie point by itself. Combining this loose tie point with constraints provided by chemostratigraphy allows an independent confirmation. Particularly, the correlation of the marked carbon isotope excursion correlated with the Monterey event, which lasted from ~16.9 to 13.5\u2009Ma, would be shifted to an age of ~17.7 to ~14.3 Ma, if the maximum reported age (16.844\u2009Ma) of the FO of Praeorbulina is used for the Decontra section. Using the onset of the Monterey event, defined by chemostratigraphic data, as an additional tie point, thus furthermore constrains the FO of Praeorbulina in the section roughly between the currently accepted local Mediterranean FO (16.177\u2009Ma) and the proposed global FAD of Praeorbulina sicana (16.38 Ma) [Reuter et al., 18O and \u03b413C records to the respective global records of Zachos et al. [Reuter et al. [Similarly, the occurrence of s et al. by Reuter et al. .Reuter et al., Reuter et al., Praeorbulina cannot be older than 16.38\u2009Ma. (4) The FO of Praeorbulina in the Decontra section is likely not the FAD, making this horizon likely younger than the maximum 16.177 Ma (16.38 Ma?) datum. (5) Cyclic variations in both the GR and MS record reflect variations in primary productivity that can be directly linked to changes in the orbital parameters. (6) Primary productivity maxima are correlated with minima in the long eccentricity cycle. (7) Sedimentation rates stayed reasonably constant within the lithological units of the section and no significant (recognizable) hiatuses occurred within the considered interval.Based on this biostratigraphic and chemostratigraphic frameworks, we are now able to use the cyclic variations detected in the MS and GR record to further tune the section to the long (405\u2009kyr) eccentricity parameter. For this approach we used assumptions resulting from the preceding discussion: (1) The total age range of the Cerratina cherty Limestone is known to be from middle Aquitanian to late Burdigalian/early Langhian Figure\u00a0 [Reuter Praeorbulina in the upper part of the Cerratina cherty Limestone with the first 405\u2009kyr eccentricity minimum following the 16.177\u2009Ma Mediterranean FO of Praeorbulina . This aFollowing this initial tuning to the 405\u2009kyr filter, it was subsequently possible to correlate the eccentricity minima to smaller peaks in the MS and, to a smaller degree, in the lower resolution GR record. The current tuning thus results from the well\u2010constrained correlation with the 405\u2009kyr eccentricity cycles, especially during the Monterey event, while offering a tentative tuning to the orbital eccentricity on a 100\u2009kyr scale Figure\u00a0. The tunWade and P\u00e4like [Woodruff and Savin, Miller et al., Flower and Kennett, Zachos et al., Since no significant hiatus occurs between the transition from the Cerratina cherty Limestone to the Bryozoan Limestone, we correlated the minima in both filtered MS and GR with the 405\u2009kyr eccentricity maxima of Cycle 37 (37 MI\u2010C5Bnr after the scheme of d P\u00e4like ) at 15.25.5Vincent and Berger, Jacobs et al., John et al., Sprovieri et al., Brandano et al., Mourik et al., Reuter et al., Reuter et al. [The Monterey Excursion [r et al. 
were abl13C record of the Monterey Excursion were first recognized by Woodruff and Savin [Jacobs et al., John et al., Abels et al., Mourik et al., Internal periodic variations in the \u03b4nd Savin , who defHolbourn et al., Mourik et al., More recently, improvements in orbital theory and orbital tuning of Integrated Ocean Drilling Program sites allowed a detailed interpretation of the CMs and to link their occurrences to the 405\u2009kyr eccentricity cycle [Accepting the hypothesis that natural gamma radiation is a proxy for estimating organic carbon burial rates in isolated carbonate settings implies that maxima in GR and the carbon isotope maxima during the Monterey Excursion were caused by changes of the same ecological parameters. This, in turn, would indicate that GR maxima in the Decontra section should align well with the reported CMs of the Monterey Excursion.Weedon, 13C maxima.Unfortunately, the 405\u2009kyr eccentricity signal is not well expressed in the GR signal of the Cerratina cherty Limestone, as result of increased aliasing caused by lower measurement rates [see Testing this hypothesis clearly shows that the observed GR maxima in the Bryozoan Limestone coincide closely with the occurrences of CMs during the time from 15 to 13.5\u2009Ma Figures\u00a0 and 7. F5.6Woodruff and Savin [Miller et al., Wright and Miller, Miller et al., Westerhold et al., John et al., Westerhold et al. [John et al. [Gradstein et al., Analogous to the carbon isotope maxima recorded by nd Savin , similard et al. , John etn et al. , and theUsing the new tuning of the Decontra section, all known oxygen isotope shifts (except Mi4) can now be directly correlated with significant sedimentological features observed in the Decontra section that can be interpreted to reflect changes in local paleoenvironmental conditions during the Mi\u2010events:Gattacceca et al., Brandano et al., Reuter et al., 2 and other nutrients causing higher primary productivity are a well\u2010known effect of volcanic events in the modern ocean [e.g., Uematsu et al., Mi1b can be correlated with the first occurrence of cherty limestone in the Cerratina cherty Limestone, expressed by a marked decrease in carbonate content (<70%) of the section. The generally high content of siliceous fossils in the Cerratina cherty Limestone is a result of extensive marine volcanism during the rotation of the Sardinia\u2010Corsica block between ~22 and 15\u2009Ma [Mutti et al., Reuter et al., The coincidence of a cherty layer containing silica and clinoptilolite with the Mi1b event points toward an increased input of siliceous skeletal material during that time. The silica is derived from siliceous sponges and radiolarians [Similarly, Mi2a seems to be expressed by the occurrence of a second cherty layer in the Cerratina cherty Limestone, followed by a layer containing spiculitic chert nodules. Thus, production of siliceous plankton and sponges again increased during Mi2a at the Maiella Platform. 
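The tuning described above ultimately yields a set of depth-age tie points. The following sketch shows, under stated assumptions, how such tie points translate into an age model by piecewise-linear interpolation; the tie-point values below are placeholders for illustration and do not reproduce the published Decontra age model.

import numpy as np

# Hypothetical tie points (stratigraphic height in metres -> age in Ma): each tie
# links a minimum or maximum of the 405 kyr-filtered GR/MS signal to an
# eccentricity extreme of the La2004 solution. Placeholder values only.
tie_height_m = np.array([10.0, 35.0, 60.0, 85.0, 110.0])
tie_age_ma = np.array([20.0, 18.5, 16.8, 15.2, 13.0])

def height_to_age_ma(height_m):
    """Piecewise-linear age model between orbital tie points
    (constant sedimentation rate within each segment)."""
    return np.interp(height_m, tie_height_m, tie_age_ma)

def segment_sed_rates_cm_per_kyr():
    """Sedimentation rate implied by each pair of neighbouring tie points."""
    thickness_cm = 100.0 * np.diff(tie_height_m)
    duration_kyr = 1000.0 * -np.diff(tie_age_ma)    # ages decrease upsection
    return thickness_cm / duration_kyr

print(height_to_age_ma(72.5), segment_sed_rates_cm_per_kyr())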
The layer containing spiculitic chert nodules possibly points toward increased upwelling\u2010derived primary productivity causing extensive growth of siliceous sponges during the later stage of Mi2a.John et al., F\u00f6llmi, Mutti and Bernoulli, Mi2b [after Thiede, Sautter and Thunell, Curry et al., Little et al., Vecsei and Sanders, Eguchi et al., Flower and Kennett, Mutti and Bernoulli, The onset of Mi3a can be correlated to the first occurrence of a bed of planktonic foraminiferan limestone in the lower quarter of the Bryozoan Limestone are either expressed as plankton\u2010rich , siliceous (Mi1b and Mi2), or phosphatic (Mi2b) horizons in the Decontra section. Siliceous horizons only occur in die Cerratina cherty Limestone deposited during the time of the Sardinia\u2010Corsica block rotation (~22\u201315\u2009Ma.) [Reuter et al., 18O record precluded a more detailed correlation with the global oxygen isotope records.Although the oxygen isotope record of the Decontra section is at least partly altered by diagenesis, general patterns related to changes in local paleoceanographic conditions are still recognizable [18O record of the Decontra section and the global oxygen isotope stack of Zachos et al. [18O excursions (Mi1b to Mi5a) with positive excursions of the Decontra oxygen isotope record. Especially Mi4, which is not expressed in the lithological record, can be correlated with an excursion in the Decontra \u03b418O record . We applied a new integrated stratigraphy using cyclostratigraphy and eventstratigraphy in combination with an already established chemostratigraphic and biostratigraphic framework in order to significantly improve the stratigraphic resolution. Average accuracy of the new stratigraphic framework is estimated to be well below 200\u2009kyr.Reuter et al. [Praeorbulina), carbon isotope stratigraphy, and also the processes involved in the formation of the GR and MS signals allowed the tuning of the section to filtered long (405\u2009kyr) eccentricity using the orbital solution La2004 [Laskar et al., The results of the performed spectral analyses were used to orbitally tune a part of the section in order to significantly improve on the resolution of the biostratigraphy and chemostratigraphy established by r et al. . SedimenThe results of this first tuning were confirmed using event\u2010based (Mi\u2010 and CM\u2010events) stratigraphy. This integrated approach resulted in an age of 21.58 to 15.24\u2009Ma for the Cerratina cherty Limestone. The Bryozoan Limestone can be constrained to a total age of 15.24 to 11.92\u2009Ma. Further fine\u2010tuning allowed a final tentative correlation of the record to the unfiltered La2004 eccentricity solution, resulting in the first tuned shallow marine section spanning the early to middle Miocene (from 21.58 to 11.92\u2009Ma)Woodruff and Savin, Holbourn et al., Prominent maxima in the GR\u2010log and the CM\u2010events [after 18O record of the Decontra section. Trends observed in the Decontra oxygen isotope curve can now be closely correlated to the global isotope record of Zachos et al. [Reuter et al. [The new age model also allowed a reevaluation of the \u03b4s et al. . Even cor et al. , was conHardenbol et al. [Haq et al. [Finally, in addition to the well\u2010preserved orbital cycles in the MS record long\u2010term trends further show a close correlation with third\u2010order sequences of l et al. and the q et al. 
.ReadmeClick here for additional data file.Table S1Click here for additional data file.Figure S1Click here for additional data file.Figure S2Click here for additional data file.Figure S3Click here for additional data file."} {"text": "Fish is a source of several nutrients that are important for healthy foetal development. Guidelines from Australia, Europe and the USA encourage fish consumption during pregnancy. The potential for contamination by heavy metals, as well as risk of listeriosis requires careful consideration of the shaping of dietary messages related to fish intake during pregnancy. This review critically evaluates literature on fish intake in pregnant women, with a focus on the association between neurodevelopmental outcomes in the offspring and maternal fish intake during pregnancy. Peer-reviewed journal articles published between January 2000 and March 2014 were included. Eligible studies included those of healthy pregnant women who had experienced full term births and those that had measured fish or seafood intake and assessed neurodevelopmental outcomes in offspring. Medline, Scopus, Web of Science, ScienceDirect and the Cochrane Library were searched using the search terms: pregnant, neurodevelopment, cognition, fish and seafood. Of 279 papers sourced, eight were included in the final review. Due to heterogeneity in methodology and measured outcomes, a qualitative comparison of study findings was conducted. This review indicates that the benefits of diets providing moderate amounts of fish during pregnancy outweigh potential detrimental effects in regards to offspring neurodevelopment. It is important that the type of fish consumed is low in mercury. This study reported an improved verbal Intelligence Quotient (IQ) in offspring aged nine years in children born to mothers who consumed up to two servings of fish per week compared with children born to mothers who had not consumed any fish during late pregnancy (32 weeks gestation). This association was not significant for fish consumption in early pregnancy (15 weeks gestation) suggesting that fish consumption may be of more benefit during the third trimester [A study by Oken regnancy . Mendez regnancy . Authorsregnancy . Gale etrimester .A report from the Avon Longitudinal Study of Parents and Children (ALSPAC) found that fish consumption during pregnancy of one to three servings per week was shown to provide a modest but significant improvement in developmental scores of the offspring for language and social activity at fifteen to eighteen months of age . A longen = 498) Japanese study did not demonstrate a positive or negative association between maternal fish intake during pregnancy and neurodevelopment as measured by the Neonatal Behavioural Assessment tool in infants at three days of age [n = 25,446) indicated a significant improvement in motor, cognitive and total developmental scores for eighteen month old children who were born to women within the highest three quintiles of fish intake during pregnancy. At six months, this improvement was only significant for children of women in the highest quintile of fish consumption, suggesting that age of testing may be relevant [A smaller . More research in this area is required to draw sound conclusions.Mendez et al. found th [et al. may have [et al. reportedIt is important to note the number methodological limitations in research on diet and infant neurodevelopment that are present in these studies. 
This prevents the conclusion of a definitive relationship without further research, preferably clinical randomised controlled trials, and a proper meta-analysis.et al. 2011 [Measuring dietary intake in cohort studies is problematic due to the difficulty in obtaining detailed information without causing significant subject burden. All identified observational cohort studies in the current review used food frequency questionnaires (FFQs) to assess fish intake. No studies reported adjusting results from the FFQ for energy intake, a recommendation made by Freedman al. 2011 to preveal. 2011 ,21,24 reAssessing cognitive development differences in infancy and childhood is fraught with difficulties due to the nature of childhood development and the accurate measurement of such. Firstly, children develop in \u2018spurts\u2019 rather than in a continuous fashion, which means they may slip in and out of the \u2018normal\u2019 reference ranges, particularly in the earlier years ,28. To cet al. [et al. [et al. adjusted for paternal IQ but not the home environment [et al. considered maternal diet by classifying women as following a \u201cprudent\u201d or \u201cwestern\u201d dietary pattern [et al. [et al. [et al. [et al. [et al. [et al. [et al. [Secondly, there are multiple interrelated factors which impact on neurodevelopment, and not all confounders were accounted for in all analyses. Maternal intelligence, alcohol consumption, smoking and breastfeeding practices were included as covariates in all studies. However, factors including ethnicity, paternal intelligence, the home environment, drug use, dietary patterns, supplement use and maternal responsiveness were not always measured. The ALSPAC study reports by Daniels et al. and Hibb [et al. includedironment ,23. The ironment . An inta [et al. also adj [et al. adjusted [et al. , Mendez [et al. and Hibb [et al. measured [et al. reported [et al. includedet al. [et al. [et al. [There is no universal standard for which neurodevelopmental tests are most appropriate for use in children of varying ages and at what age meaningful differences in neurocognitive development can be detected . Performet al. , Hibbeln [et al. and Oken [et al. used devn-3PUFA supplementation during pregnancy found no clear association between supplement use and infant cognitive outcomes [n-3PUFA [Due to the risks associated with consuming fish and seafood during pregnancy related to food safety and heavy metal contamination, pregnant women may question the necessity of including these foods in their diets, when nutrition supplements are readily accessible in Western countries . A systeoutcomes . This maoutcomes and assooutcomes . Recent [n-3PUFA . BecauseThis review assessed the hypothesis that fish intake during pregnancy improves offspring neurodevelopmental outcomes. A review of the available evidence indicates that intake of fish during pregnancy is associated with positive foetal neurodevelopmental outcomes, as supported by seven of eight articles reviewed, which showed a beneficial impact on foetal neurodevelopment with one or more servings of fish per week compared with no fish intake. Based on the results from these observational studies the current recommendation of two to three servings per week appears appropriate. Randomised clinical trials have been conducted using fish oil supplementation in pregnancy, but not with fish considered as a whole food. Existing evidence is currently insufficient to inform advice regarding fish intake during pregnancy. 
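For readers unfamiliar with energy adjustment of FFQ data (mentioned above as a limitation of the reviewed studies), the residual method is one commonly used approach: nutrient intake is regressed on total energy intake, and the residuals, re-centred at mean energy, replace the crude intakes. The sketch below is illustrative only; the variable names and example values are assumptions, and the method shown is not necessarily the exact procedure recommended by Freedman et al.

import numpy as np

def energy_adjusted_intake(nutrient, energy):
    """Residual-method energy adjustment of an FFQ-derived intake: regress the
    nutrient on total energy, keep the residual, and re-centre it on the intake
    predicted at mean energy."""
    nutrient = np.asarray(nutrient, dtype=float)
    energy = np.asarray(energy, dtype=float)
    slope, intercept = np.polyfit(energy, nutrient, 1)   # simple linear fit
    residuals = nutrient - (intercept + slope * energy)
    expected_at_mean_energy = intercept + slope * energy.mean()
    return residuals + expected_at_mean_energy

# Hypothetical example: weekly fish servings vs. daily energy intake (kJ)
fish_servings = np.array([0.0, 1.0, 2.0, 3.0, 1.5, 2.5])
energy_kj = np.array([7500.0, 8200.0, 9100.0, 10400.0, 8800.0, 9600.0])
print(energy_adjusted_intake(fish_servings, energy_kj))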
Further well designed studies are required to strengthen the evidence base regarding the type and quantity of maternal fish consumption during pregnancy and associated neurodevelopmental outcomes in the offspring, while considering the contribution of mercury from fish-containing diets."} {"text": "Thomas BerletI read with interest the recent meta-analysis by Chavez and colleagues that was published in Respiratory Research, addressing the diagnostic performance characteristics of thoracic ultrasonography (TUS) in the diagnosis of pneumonia in adults. Favourable results for the diagnostic accuracy were reported, with calculated pooled sensitivities and specificities of 94\u00a0% and 96\u00a0%, respectively .et al. enrolled 80 patients; 23 of these were diagnosed with interstitial pneumonia that could not be visualised with TUS [When I reviewed the original data that the meta-analysis by Chavez and colleagues was based upon, I detected a number of issues that I wish to comment on. The authors analysed ten studies \u201311. Six with TUS . Howeverwith TUS . These aChavez and colleagues included four studies that were performed in critically ill patients in intensive care units who suffered from a variety of respiratory conditions \u20135, 8. I Lichtenstein and colleagues studied 32 acute respiratory distress syndrome patients; 22 of these were diagnosed with pneumonia . The incA re-run of the meta-analysis by Chavez and colleagues , followiet al. [et al. [Since Chavez and colleagues performed their meta-analysis, another two clinical studies of the diagnostic accuracy of TUS for pneumonia have been published by Bourcier et al. , and Ber [et al. . One-hun [et al. , and 32 [et al. patientsRe-analysis of the results of studies of the use of diagnostic ultrasonography for pneumonia confirms that TUS is a useful tool for the diagnosis of the inflammatory consolidation of pneumonia. However, further research is required to improve the diagnostic accuracy of TUS in the diagnosis of pneumonia in adults.Miguel A Chavez, Laura E Ellington, Robert H Gilman, William Checkleyet al. [et al. did not include interstitial findings by lung ultrasound within their methods [Over the last decade, there have been a growing number of research studies that evaluated the role of lung ultrasound for the diagnosis of pneumonia in adults . Many ofet al. . However methods , we deci methods .et al. could be revised [et al. found that 7 out of 17 participants who did not have pneumonia had an abnormal ultrasound. Of those, Parlamento et al. were able to rule out infectious causes of consolidation in two participants by examining the air bronchogram characteristics or by the absence of air bronchograms. Although Parlamento et al. did not explicitly discuss specific songraphic findings for the remaining five participants with alveolar-interstitial syndrome, and were implicitly considered as a negative ultrasound for pneumonia [Second, we agree with Berlet that the numbers contributed by the study of Parlamento revised . Specifineumonia . Assuminet al. [et al. [et al. had expert sonographers who performed the ultrasound, and in our own discussion we further emphasize that sonographer expertise is a critical element in assessing diagnostic accuracy.Third, we agree with Berlet that equivocal ultrasound results (1.7\u00a0%) may affect estimates reported by Reissig et al. and we c [et al. had alreet al. 
[Fourth, Berlet raises important concerns regarding studies performed in critically ill patients in intensive care units \u20135, 8 thaet al. , 4, 8 haet al. , 7, 9\u201311Finally, since the publication of our meta-analysis, at least four new studies have been published , 15, 16."} {"text": "Hyponatremia is especially common in older people. Recent evidence highlights that even mild, chronic hyponatremia can lead to cognitive impairment, falls and fractures, the latter being in part due to bone demineralization and reduced bone quality. Hyponatremia is therefore of special significance in frail older people. Management of hyponatremia in elderly individuals is particularly challenging. The underlying cause is often multi-factorial, a clear history may be difficult to obtain and clinical examination is unreliable. Established treatment modalities are often ineffective and carry considerable risks, especially if the diagnosis of underlying causes is incorrect. Nevertheless, there is some evidence that correction of hyponatremia can improve cognitive performance and postural balance, potentially minimizing the risk of falls and fractures. Oral vasopressin receptor antagonists (vaptans) are a promising innovation, but evidence of their safety and effect on important clinical outcomes in frail elderly individuals is limited. Hyponatremia is the commonest electrolyte imbalance encountered in clinical practice . It is aPrevalence of hyponatremia is known to increase in frail patient groups, particularly elderly patients where hyponatremia is observed in almost half of acute geriatric admissions ,10. Oldei.e., serum sodium 130\u2013134 mmol/L developing over >48 h) of the hyponatremic and normonatremic groups were 110 \u00b1 2 mmol/L and 141 \u00b1 1 mmol/L respectively. Following three months in a hyponatremic state, the rats had a 30% reduction in femoral bone mineral density (BMD) in comparison to the normonatremic rats (p < 0.001). Barsony et al. [et al. [+] of 112.7 \u00b1 1.3 mmol/L and 10 rats were used as controls with a mean [sNa+] of 142.7 \u00b1 1.1 mmol/L. A 16% reduction in BMD was seen in the hyponatremic group (p < 0.05). Thus, rodent studies by both Verbalis et al. in the hyponatremic subjects (p < 0.01) whereas no relationship was seen in the normonatremic individuals (p = 0.99). This correlation was further established as the ORs for hyponatremia and osteoporosis at the femoral neck and total hip (defined by BMD with T-score < \u22122.5) were 2.87 and 2.85 (95% CI: 1.41\u20135.81 and 1.03\u20137.86). These results substantiate an association between hyponatremia and osteoporosis. An association between hyponatremia and osteoporosis was also demonstrated in the study by Kinsella et al. [T-scores of \u22122.6 and \u22122.3 (p = 0.03) respectively. Hyponatremic individuals also had a significantly higher prevalence of osteoporosis of 57.6% compared with 44.3% in controls (p = 0.04). Hoorn et al. [p = 0.105 and p = 0.473). Kinsella et al. [T-score, this shows that the increased risk of bone fractures caused by hyponatremia occurs independently of BMD.Verbalis et al. demonstra et al. . They ren et al. did not a et al. establiset al. [Hoorn et al. establiset al. [The work of Renneboog et al. indicateet al. [et al. [et al. [et al. [In summary, human studies have consistently established a relationship between chronic hyponatremia and an increased risk of bone fractures. 
Chronic hyponatremia leads to decreased BMD in young and old rats but the effects of chronic hyponatremia on BMD in humans is less clear as studies have reported conflicting results. Verbalis et al. and Kins [et al. establis [et al. reported [et al. found noClinical management of hyponatremia is based on treating the underlying causes see but accui.e., documentation of volemic status, appropriate investigations). The reported commonest cause of hyponatremia is SIADH [Accurate appreciation of the etiology of hyponatremia is essential, not only for appropriate clinical management but also to prevent development of hyponatremia. At present, there are few quality studies reporting etiology of hyponatremia in older people. Most reports are from retrospective studies that rely on diagnosis made by non-expert clinicians retrospectively reviewing case notes, which frequently lack sufficient detail to allow accurate diagnosis , co-morbidities , fluid overload and volume depletion ,32,33,34et al. report tet al. ,33. HypoFurther research is required to accurately delineate the cause(s) of hyponatremia in older people. Future studies should be prospective and include expert panel review, with completion and full data availability of appropriate investigations. In the meantime establishing a reliable biomarker of hydration for older people is necessary in order to increase accuracy of the diagnostic process. Accurate understanding of etiology of hyponatremia in older people may help prevent and improve clinical management of hyponatremia which could potentially deliver significant health and economic benefits in the form of reduced complications of hyponatremia.Treatment will depend on the underlying cause, although a recent review article highlighted significant differences in expert panel consensus recommendations for treatment of hyponatremia . InvestiJudicious use of medications and optimal drug prescribing is one of the cornerstones of comprehensive geriatric assessment . Almost Since hyponatremia is usually a disorder of fluid balance rather than pure salt depletion, correction of volemic status is a mainstay of treatment. Fluid restriction, typically to about 800 mLs in 24 h, has long been the first line treatment of SIADH. It may be more acceptable in older people than younger patients but is often ineffective or poorly tolerated nevertheless. It can take a long time to be effective and is prone to failure due to hidden liquids in foods and discomfort from thirst. Moreover, in SIADH, there is a downward re-setting of the \u201cthirst osmostat\u201d and a lower plasma osmolality than normal will trigger thirst and increased fluid intake .Intravenous fluid resuscitation is indicated in all forms of hypovolemic hyponatremia. In hypovolemic hyponatremia, isotonic saline suffices and will often lead to very rapid sodium increase, as non-osmotic vasopressin secretion is rapidly suppressed. Most experts recommend using only isotonic saline (0.9% solution of sodium chloride), but boluses of hypertonic solutions are often used in acute and/or severe hyponatremia. No more than 100 mLs of 3% saline are recommended, though these can be repeated as needed . The latThe development of oral vasopressin receptor antagonists (vaptans) represents a major breakthrough in the treatment of SIADH and, potentially, other forms of euvolemic or hypervolemic hyponatremia. Unlike other treatments used for SIADH see , the vapA major concern in treatment of older people with any new agent is the safety profile. 
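As a purely illustrative aside (this formula is not discussed in the text above), the Adrogué-Madias estimate is often used at the bedside to anticipate how much a given infusate will raise serum sodium, which helps explain why small 100 mL boluses of 3% saline are favoured and why over-rapid correction is a concern. The sketch below assumes conventional total-body-water fractions and is not treatment guidance.

def adrogue_madias_delta_na(serum_na, infusate_na, weight_kg, sex="male", elderly=True):
    """Expected change in serum sodium (mmol/L) per 1 L of infusate:
    (infusate Na - serum Na) / (total body water + 1).
    Total-body-water fractions are the commonly quoted conventions,
    not values taken from the article above."""
    if elderly:
        tbw_fraction = 0.5 if sex == "male" else 0.45
    else:
        tbw_fraction = 0.6 if sex == "male" else 0.5
    total_body_water = tbw_fraction * weight_kg
    return (infusate_na - serum_na) / (total_body_water + 1.0)

# Illustrative only: 100 mL of 3% saline (~513 mmol/L Na) in a 70 kg elderly man
per_litre = adrogue_madias_delta_na(serum_na=120.0, infusate_na=513.0, weight_kg=70.0)
print(0.1 * per_litre)   # expected rise from a single 100 mL bolus, in mmol/L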
Preclinical data give some cause for concern as these are such powerful aquaretic agents, e.g., the diuresis observed in early studies of tolvaptan averaged over 5 liters per day . NeverthOverall, the vaptans currently have a role where simple and less expensive measures have failed. Yet, the potential for their use in very elderly people might be under-estimated , becauseHyponatremia is especially common in frail older patient groups. Even mild, chronic hyponatremia is associated with increased cognitive impairment, falls and fractures, possibly via a decrease in bone mineral density and/or bone quality. There is some evidence to show treatment of hyponatremia improves cognitive performance and postural stability. Therefore, correction of hyponatremia seems sensible in groups at high risk of falls, immobility or loss of independence from cognitive impairment. However, accurately diagnosing the causes of hyponatremia in this group is especially challenging, as hyponatremia may be multi-factorial, history hard to obtain due to cognitive or sensory impairment, and examination to determine volemic status is unreliable. Successful treatment also comes with its own challenges, both due to the potential for mis-diagnosis of underlying cause(s) and because all current therapies carry their own risks. Oral vasopressin receptor antagonists may be very helpful in this patient group, but the evidence base for their safety and efficacy on important outcomes in frail elderly groups is limited."} {"text": "Oxidative stress is implicated in the pathophysiology of a wide variety of neurodegenerative disorders and its role in neurogenesis is becoming increasingly acknowledged. This special issue includes 8 articles that emphasize the implications of oxidative stress in neurodegeneration, neurotoxicity, and neurogenesis.\u03b2 plaques and tau neurofibrillary tangles. In addition, the authors examine evidence of epigenetic regulation of A\u03b2 plaque formation in AD neurons and discuss different potential therapeutic approaches. J. S. Cristov\u00e3o et al. concentrate on the role of metal ions in AD and overview different proteins implicated in AD whose metal binding properties may underlie important biochemical and regulatory processes occurring in the brain during the pathophysiological process. Hyperphosphorylation and aggregation of tau in neurons not only are a central feature in AD but are present in other neurodegenerative diseases, termed tauopathies. S. M. A. Naini and N. Soussi-Yanicostas review the relationship between tau pathology and oxidative stress and present arguments in favor of the hypothesis that these are key components of a pathologic vicious circle in tauopathies. In a different direction, X. Hu et al. summarize recent literature describing the contribution of oxidative stress to brain damage after intracerebral hemorrhage. H. J. Olgu\u00edn et al. describe dopamine dysfunction as a consequence of oxidative stress and discuss its implication in disease conditions, such as in Parkinson's disease. In an original article, A. Seguin et al. present interesting data on a drug screen performed using two different models of Friedreich's ataxia, yeast and Drosophila. In the original article by B. P. Carreira et al., the authors examine the role of nitric oxide in neurogenesis in adult hippocampus following seizures. 
They show that although nitric oxide is beneficial in the early stages of production of newborn neural cells, it is detrimental to the survival of newly differentiated neurons due to inflammation. Several original and review articles discuss the role of oxidative stress in neurodegenerative disorders and upon brain injury. S. K. Singh et al. open the issue by presenting a comprehensive review on the pathology of Alzheimer's disease (AD). Two other review articles focus on specific aspects implicated in AD. L. Zuo et al. discuss how oxidative stress is related to AD progression and to the formation of Aβ"} {"text": "Nature Communications 6: Article number: 5862; doi: 10.1038/ncomms6862 (2015); Published: 01/09/2015; Updated: 03/29/2016. Previous work by Tomura et al. reporting the generation and use of Kaede transgenic mice was inadvertently omitted from the reference list of this article and should have been cited at instances where these mice are referred to. For example, in the Results section, Tomura et al. should have been cited as follows: 'Since ILC3s are concentrated within the gut, we sought to test whether an ILC3 bias in the mLN reflected direct trafficking of ILC3s from the intestine to the mLN using transgenic Kaede mice'. The Methods section should have included the following: 'The Kaede mice used in this study were kindly provided and transferred by Dr Miwa in Tsukuba University and Dr Tomura and Dr Kanagawa in RCAI, RIKEN, Japan.' The omitted reference is: Tomura, M. et al. Monitoring cellular movement in vivo with photoconvertible fluorescence protein 'Kaede' transgenic mice. Proc. Natl Acad. Sci. USA 105, 10871–10876 (2008)."} {"text": "Further, each control diet was fortified with either synthetic lysine or LPM to meet the BIS (1992) specified lysine requirement, resulting in the set of 12 test diets. Each of the eighteen diets was offered to quadruplicate groups of 4 post-peak (52 weeks) commercial laying hens each. The trial lasted for 119 days. The results revealed that feed consumption, body weight changes, Roche yolk color, and yolk index were significantly (p ≤ 0.05) different among treatments. However, egg production, feed efficiency, egg weight, egg shape index, Haugh unit score, albumen index, shell thickness, and net returns remained non-significant (p > 0.05) among treatments. Among main factors, protein level (16% and 15% CP) made a significant (p ≤ 0.05) difference in egg production (79.6 and 75.1%) and feed efficiency. Among protein sources, GNE- and SFE-based diet fed groups showed significantly (p < 0.05) higher feed consumption and body weight gain than SBM-based diet fed birds. Yolk color and yolk index differed significantly (p ≤ 0.05) among the protein sources. CP level and protein source interaction effects showed significant differences in albumen index and Haugh unit score. The optimum level of protein (16% CP) and GNE as a source of protein tended to be superior in increasing the performance and egg characteristics of post-peak layers, and supplementation of lysine in either synthetic or LPM form was found to be beneficial in optimizing their performance.
Several commercial guidelines for laying hens ,2 were ret al., obtainedet al.,.Lactobacillus species [Lactobacillus species in layer diets did positively influence hen day egg production, feed efficiency, and egg weight [Supplementation of the low protein diets with crystalline amino acid is becoming relevant in feed formulation to minimize the nitrogen excretion and cost of production . Lysine species in layer species reportedg weight .Keeping in view the aforesaid facts, a comparative study was designed to put on record the effect of lysine producing microbes, a mixed culture of about ten microbes which can produce lysine in the gut of the birds compared with that of synthetic lysine in post-peak commercial layers using soya or groundnut extraction or sunflower extraction protein source based diets at two levels of dietary protein.This research work was carried out as per the guidelines in force at the time of carrying out the experiment as well as in accordance with the International Ethics Committee guidelines to minimize pain or discomfort of the birds. The study was approved by the Institutional Ethics Committee.Bacillus subtilis, Trichoderma reessi, Corynebacterium glutamicum, Cellulomonas uda, Alcaligenes faecalis, Conidiobolus coronatus, Penicilium roquefortii, Aspergillus oryzae, Aspergillus niger, Sachharomyces cereviseae with a total viable count of 9000 million/g producing 1.238 g/day of L-Lysine in situ in bird in a 24 h period when fed at the rate of 1g LPM/bird. Such formulated diets with synthetic lysine or LPM irrespective of protein source provided 726 mg (16% CP) and 626 mg of lysine/kg (15% CP) while SBM, GNE, and SFE based diets without added lysine provided 706, 591, 593 (16% CP) and 646, 553 and 546 mg lysine/kg (15% CP), respectively. The ingredient and nutrient composition of the experimental diets are depicted in A set of 18 experimental diets were ford 726 mg % CP and \u00ae spherometer and by Vernier Caliper, respectively. Yolk index score, the ratio of height to diameter of yolk calculated in similar way. Egg shape index expressed as a relationship of length and breadth of the egg (in cm) obtained using Vernier Calipers. The height of albumin was recorded at two consistent places using Ames Haugh unit meter to obtain the Haugh unit score.The daily required quantity of specified diet was collectively offered to each replicate of four birds in divided dose of about 50% in morning and remaining 50% in the afternoon hours. All the birds were weighed individually at the beginning of the experiment as well as at every 28 days interval to monitor the pattern of body weight changes if any due to dietary regimen. The average body weight in the replicate group was then calculated. Egg produced from each replicate were recorded and weighed twice (Tuesday and Friday) in a week of the experimental period. The feed efficiency was calculated based on the amount of feed consumed to produce one kg egg mass as per replicate. On the terminal day of every 28-day interval, all the eggs produced from different replicate groups were collected and were weighed individually during the experimental period of 84 days. Further, on the immediate next day, each egg was broken and the entire contents were carefully placed on a glass slab for egg quality study. 
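A minimal sketch of the two production calculations described above: hen-day egg production and feed efficiency expressed as kg of feed consumed per kg of egg mass. The replicate numbers used here are hypothetical stand-ins in the same range as the reported values, not data from the trial.

def hen_day_egg_production(total_eggs, hen_days):
    """Hen-day egg production (%) = eggs laid / (number of hens x days) * 100."""
    return 100.0 * total_eggs / hen_days

def feed_efficiency(feed_kg, egg_mass_kg):
    """Feed efficiency as used above: kg feed consumed per kg egg mass produced."""
    return feed_kg / egg_mass_kg

# Hypothetical replicate of 4 hens over a 28-day interval (not trial data):
eggs = 89                       # eggs laid by the replicate
hen_days = 4 * 28               # 4 hens x 28 days
egg_mass_kg = eggs * 0.058      # assuming ~58 g mean egg weight
feed_kg = 4 * 28 * 0.122        # assuming ~122 g feed per hen per day
print(hen_day_egg_production(eggs, hen_days))   # ~79.5 %
print(feed_efficiency(feed_kg, egg_mass_kg))    # ~2.6 kg feed per kg egg mass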
Albumen index score was calculated as the ratio height and diameter of thick albumen (in mm) which were measured by AmesH: is the height of albumen (mm)G: Gravitational force The shell pieces devoid of shell membranes at a broad end, narrow end, and middle band were selected and their thickness was measured using a digital calipers. The average of all the three pieces was represented as shell thickness. The color of the yolk of every broken open egg was scored by matching (contrast) technique using Roche yolk color fan .The data generated during the experiment were statistical analyzed by completely randomized design as well as by factorial design according to the methods described by Snedecor and Cochran .The performance of layers under different treatment is presented in viz., protein level \u00d7 protein source, protein levels \u00d7 supplemented lysine source and protein source \u00d7 lysine source interaction also remained statistically non-significant (p > 0.05) in influencing the feed consumption, body weight gain, egg production, and net returns.Among protein sources, the SBM based diet group of layers recorded significantly (p < 0.05) lower daily feed consumption (118.9 g) as against those of GNE (125.8 g) and SFE (124.1 g) groups. Weight gain was significantly higher in SFE (51.8) with lowest being SBM group (31.1). Egg production was 78.2 per cent in GNE groups while that of SBM (77.7%) and SFE (76.1%) groups which were statistically similar (p \u2264 0.05). Sources of lysine did not show any significant difference in production parameters. Various interaction effects viz., prLevel of protein, source of protein and lysine sources were non-significant in influencing egg weight, shape index, albumen index, Haugh unit score and shell thickness . Howeveret al. [et al. [et al.[et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [Among different treatment, the feed consumption ranged significantly with, GNE-based diet fed group had highest feed consumption followed by that of SFE and SBM groups. Cumulative average egg production, feed efficiency, body weight gain, and net returns remained non-significant among different treatments. However, SFE-based groups showed numerically higher body weight gains (72 g) when compared to SBM based diet fed groups (17.4 g) which may be due to, relatively lower production (due to lower availability of lysine) in SFE group compared to soya groups leading to more body fat accumulation. Level of protein in the diet significantly affected the egg production and feed efficiency this is in contrary to Chaiyapoom et al. were he [et al. also rep [et al. and Parl. [et al. reported.[et al. who in t.[et al. , yet in .[et al. . The egg.[et al. and Proc [et al. . Kurtogl [et al. , Panda e [et al. , and Bon [et al. observed [et al. or the m [et al. and in L [et al. . Yalcin [et al. reported [et al. and Parl [et al. noted be [et al. also rep [et al. and Soha [et al. also repviz. protein level (16 and 15% CP), protein source and lysine source showed non-significant difference on egg weight, shape index, albumen index, Haugh unit score, and shell thickness this was also reported by Figueiredo et al. [Numerically, the data indicated better shape index with supplementation of LPM in low protein groups. The main factors o et al. , who reget al. [et al. [et al. [et al. [et al. [et al. [et al [et al. [Numerically, LPM supplementation was found to increase shape index, yolk color and yolk index. These results are similar to those of Sohail et al. 
who repo [et al. also rep [et al. ,32. Low [et al. findings [et al. also rep [et al. showed s [et al. and Trin. [et al . GNE-bas [et al. , also foIt was concluded that the supplementation of lysine in the form of LPM was as good as synthetic lysine in optimizing the production performance of the birds; however, further studies are required in this direction to justify the above claim. Protein level was of greater significant in influencing the egg production and feed efficiency, indeed level of lysine to a some extent showed its importance but statistically it was not significant.This study was part of MVSc research program of GUM. BSVR designed the research program as mentor and GUM conducted the research work. GG and TMP worked as co \u2013 mentors for the MVSc program, provided technical guidance and also helped in preparing this manuscript. NS and KSG helped in carrying out research trial, laboratory analysis and in manuscript preparation All authors read and approved the final manuscript."} {"text": "Recent phase II and III studies with intravenous immunoglobulin (IVIG) in patients with Alzheimer\u2019s disease (AD) did not find evidence for the slowing of AD progression compared to placebo-treated patients, in contrast to encouraging results in pilot studies. An additional phase III trial is ongoing. If negative results are found, then further AD studies with IVIG are unlikely unless a manufacturer opts for a trial with high-dose IVIG, which would increase its anti-inflammatory effects but also the risk for adverse events. An alternative approach could be an AD-specific IVIG, supplementing IVIG with higher concentrations of selected antibodies purified from it or produced via recombinant polyclonal antibody technology. These antibodies could include those to amyloid-beta fragments containing terminal sialic acid could be added to increase anti-inflammatory effects. While this product might be more effective in slowing AD clinical progression than current IVIG, there are difficulties with this approach. Preclinical studies would be required to determine which of the antibodies of interest for supplementing current IVIG are actually present in IVIG, and the effects of the product in mouse models of AD. An Investigational New Drug application for an AD-specific IVIG would require United States Food and Drug Administration approval. If the drug would be found to benefit AD patients, meeting the increased demand for IVIG would be challenging. Approximately 5.2 million Americans are currently diagnosed with Alzheimer\u2019s disease (AD). The prevalence of this disorder in the United States is 4% for individuals under 65\u00a0years of age, 15% for those between 65 and 74\u00a0years of age, 44% for those between 75 and 84\u00a0years of age, and 38% for individuals 85\u00a0years of age and older [The five drugs approved by the United States Food and Drug Administration for treating AD provide short-term symptomatic benefits to approximately half of the patients who receive them, but are not believed to influence neuropathological progression of the disease. Since the amyloid cascade hypothesis was published in 1991 , AD ther9 [et al. [et al. [Intravenous immunoglobulin (IVIG) is another approach that has been examined for the treatment of AD. IVIG products contain purified plasma immunoglobulins (primarily IgG) from large numbers of healthy donors. These drugs are used to treat many autoimmune, immunodeficiency, and inflammatory disorders. 
The full range of antibodies in the human immune repertoire, estimated at 109 , is thou [et al. and Schw [et al. ). Natura [et al. ,19, are [et al. . Some of [et al. ). IVIG\u2019s [et al. ,22.et al. in 2002 to contain anti-A\u03b2 antibodies [+ AD patients and in patients with mild AD, but the trial was not powered to detect differences between subgroups. A phase III trial with Grifols\u2019 IVIG Flebogamma\u2122 is underway in which AD patients will undergo plasmapheresis and be treated with albumin and Flebogamma [Many natural antibodies are polyreactive, meaning they can bind to more than one antigen . Octaphatibodies , and thetibodies ,26. On ttibodies ,28 whichtibodies which intibodies , there wtibodies , no signebogamma . The effebogamma , thoughtebogamma .et al. [Svetlicky et al. reviewedet al. , anti-phet al. , myastheet al. , pemphiget al. , and smaet al. . In eachet al. . Anotheret al. . It was et al. .et al. [The doses of IVIG employed in the AD trials were those used for antibody replacement in immunodeficiency syndromes, so whether the higher doses which are required for IVIG\u2019s anti-inflammatory effects (1\u00a0g/kg ) might bet al. concludeet al. [+ (hyperphosphorylated tau-containing) hippocampal CA1 neurons [A second option could be an AD-specific IVIG. The effects of this product could initially be compared to those of current IVIG products in a mouse model of AD which develops both plaques and NFTs, such as the 3xTg-AD mouse . Most IVet al. , using aet al. . A secon neurons . Evidencin vitro and in some mouse models of AD [An AD-specific IVIG product could be generated by simply supplementing a current IVIG product with higher levels of its anti-A\u03b2 antibodies, which have been shown to exert neuroprotective effects ls of AD ,54,57-59The few studies that have compared the levels of AD-related antibodies between IVIG products found differences between the products ,26,60. TA potential advantage of IVIG over monoclonal antibodies for AD therapy is that it contains antibodies against multiple proteins that are thought to contribute to AD\u2019s development and progression. However, IVIG\u2019s polyvalent antibodies have a range of antigen-binding affinities . An AD-sSome studies have reported that IVIG\u2019s anti-A\u03b2 antibodies are limited to those that are \u2018conformation-specific\u2019 (they do not recognize linear A\u03b2) , while oTau is an intraneuronal protein located primarily in axons, where it plays an important role in microtubule formation and stabilization through its binding to tubulin . The extet al. [et al. [IVIG products contain anti-tau antibodies , includiet al. showed t [et al. reportedet al., [et al. [There is an extensive literature about increased inflammatory processes in the AD brain . Centraet al., . Anti-cyet al., , interfeet al., , interfeet al., , granuloet al., , BAFF . Onet al., proposed [et al. , in whic [et al. and IL-1 [et al. , both of [et al. .IVIG inhibits cell surface deposition of early complement activation fragments, the opsonins C4b and C3b . It bindAdvanced glycation end products (AGE) form when reducing sugars react with amino groups in proteins, lipids, and amino acids . The recIVIG has additional neuroprotective actions, including antioxidant and antiTreatment with IVIG can raise serum viscosity, increasing the risk for thromboembolic events . Howeveret al. 
[Even if an AD-specific IVIG would be found to exert beneficial effects in AD patients, its relatively short duration of action would require continued treatment, which would consume a great deal of IVIG. The subjects in the IVIG AD trials received IVIG every two or four weeks ,28,30 anet al. ). Furtheet al. .With the exception of IVIG\u2019s anti-A\u03b2 antibodies , the effet al. [et al. subsequently reported the development of paralytic disease in wild-type mice and \u2018tauopathy\u2019 mice that were repeatedly immunized with a mixture of three phosphorylated tau fragments emulsified in CFA and pertussis toxin [The anti-tau antibodies which were reported in IVIG bound toet al. found thet al. , was givis toxin . Howeveris toxin . The ratis toxin -77 foundRegulatory issues would be of concern with regard to United States Food and Drug Administration approval of an AD-specific IVIG. There is a precedent for United States Food and Drug Administration approval of a multi-component plasma product: Factor Eight Inhibitor Bypassing Activity is an activated prothrombin complex concentrate which, similar to IVIG, is produced from pooled human plasma. It contains four coagulation factors, namely factors II, VII, IX, and X ,152. FEIIf studies in mouse AD models suggest increased benefits of AD-specific IVIG compared to standard IVIG, then consideration could be given to producing the antibodies required to supplement current IVIG through recombinant technology, rather than purifying them from IVIG. The technology is available for production of recombinant human polyclonal antibodies on an industrial scale ,155. ThiA final obstacle would be the economic one. The question of how adequate supplies of IVIG could be maintained if there is an increased demand for it in AD patients was discussed previously and willAlthough the multiple antibodies in IVIG should give it an advantage over monoclonal antibody for treatment of AD, its encouraging results in pilot studies have not been replicated in larger trials. Administration of high-dose IVIG would increase its anti-inflammatory effects but would likely be associated with increased adverse events, and many AD patients have risk factors which could preclude their receiving high-dose IVIG. An alternative approach could be to develop an AD-specific IVIG. This would require additional time and expense, but it might be the last and best chance for IVIG treatment of AD to succeed."} {"text": "Curiosity found host rocks of basaltic composition and alteration assemblages containing clay minerals at Yellowknife Bay, Gale Crater. On the basis of the observed host rock and alteration minerals, we present results of equilibrium thermochemical modeling of the Sheepbed mudstones of Yellowknife Bay in order to constrain the formation conditions of its secondary mineral assemblage. Building on conclusions from sedimentary observations by the Mars Science Laboratory team, we assume diagenetic, in situ alteration. The modeling shows that the mineral assemblage formed by the reaction of a CO2-poor and oxidizing, dilute aqueous solution in an open system with the Fe-rich basaltic-composition sedimentary rocks at 10\u201350\u00b0C and water/rock ratio (mass of rock reacted with the starting fluid) of 100\u20131000, pH of \u223d7.5\u201312. 
Model alteration assemblages predominantly contain phyllosilicates , the bulk composition of a mixture of which is close to that of saponite inferred from Chemistry and Mineralogy data and to that of saponite observed in the nakhlite Martian meteorites and terrestrial analogues. To match the observed clay mineral chemistry, inhomogeneous dissolution dominated by the amorphous phase and olivine is required. We therefore deduce a dissolving composition of approximately 70% amorphous material, with 20% olivine, and 10% whole rock component.The Mars Science Laboratory rover Thomson et al., Farley et al., Gale Crater is thought to have formed near the Noachian-Hesperian boundary with an age of about 3.7\u2009Gyr, and although the exact age of the Gale sediments is not certain, crater counting suggests an ancient age [Mars Science Laboratory (MSL) rover Curiosity has identified and analyzed, for the first time on Mars, a set of mudstones. The mudstones record a history of deposition within a fluvio-lacustrine environment followed by low temperature, in situ diagenesis [Grotzinger et al., McLennan et al., Vaniman et al., Vaniman et al., Grotzinger et al. [At the Yellowknife Bay locality of Gale Crater, the Grotzinger et al., Grotzinger et al., Grotzinger et al., McLennan et al., Leveille et al. 2014[. Key textural observations are that the raised ridges postdate the sedimentary layering and sulfate veins postdate the raised ridges. The notably pure Ca-sulfate composition of the late veins was initially established by ChemCam (Laser Induced Breakdown Spectroscopy) and was confirmed by Alpha Proton X-ray Spectrometer (APXS) [McLennan et al., Vaniman et al., The 4.5\u2009m thick Yellowknife Bay formation is subdivided into different members with the lowest one, Sheepbed, being an at least 1.5\u2009m thick mudstone, but its lower contact is not visible; its upper contact to the overlying Gillespie member is sharp. The Sheepbed member is a mudstone of overall basaltic chemical composition with \u223d15% smectite, \u223d50% igneous minerals, and \u223d35% X-ray amorphous material [Grotzinger et al., Williams et al., Bish et al., Morris et al., The Sheepbed mudstone has a sharp contact with the overlying 3\u2009m thick succession of the Gillespie and Glenelg members, which contain fluvial sediments [e.g., Vaniman et al., Ming et al., McLennan et al., Grotzinger et al., McLennan et al. [Nesbitt [Two drilled samples of the mudstone, at locations named John_Klein and Cumberland, took place between Martian solar days (sols) 180 and 292 of the mission and allowed analysis of material beneath the uppermost, reddish oxidized dust coating. The samples were analyzed in the CheMin instrument by X-ray diffraction [Bish et al., Bish et al., Bish et al., CheMin and APXS analyses of the Portage soil were carried out between sols 55 and 102 at the Rocknest locality. This provides a mineralogical control on the country rock in the Gale Crater region [Schmidt et al., McLennan et al., Stolper et al., Schmidt et al., Schmidt et al., Chemically, the APXS analyses of other Gale Crater rocks have established the presence of a range of compositions. These include Fe-rich basaltic sediment as shown by the in situ analyses at Yellowknife Bay and the Portage soil analysis [Bishop et al., Curiosity team. Starting with unaltered rocks and soils found in the area, we aim to calculate a realistic mixture of dissolving minerals within those rocks and soils that reacted to form the secondary, clay-bearing assemblage during diagenesis. 
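The deduced dissolving mixture (approximately 70% amorphous material, 20% olivine and 10% whole-rock component) implies a bulk reacting composition that is simply the mass-weighted average of the component analyses. The sketch below illustrates that bookkeeping only; the oxide values are placeholders, not the APXS or CheMin analyses reported by the mission team.

```python
# Mass-weighted bulk composition of an inhomogeneously dissolving mixture.
# Fractions follow the text; oxide values are illustrative placeholders.
FRACTIONS = {"amorphous": 0.70, "olivine": 0.20, "whole_rock": 0.10}

COMPONENT_OXIDES = {            # wt% oxides (placeholder values)
    "amorphous":  {"SiO2": 45.0, "FeOT": 22.0, "MgO": 6.0,  "Al2O3": 7.0},
    "olivine":    {"SiO2": 38.0, "FeOT": 28.0, "MgO": 32.0, "Al2O3": 0.0},
    "whole_rock": {"SiO2": 47.0, "FeOT": 19.0, "MgO": 9.0,  "Al2O3": 9.5},
}

def mixed_composition(fractions, components):
    """Return the mass-weighted oxide composition of the dissolving mixture."""
    oxides = {ox for comp in components.values() for ox in comp}
    return {ox: sum(fractions[name] * components[name].get(ox, 0.0)
                    for name in fractions)
            for ox in sorted(oxides)}

for oxide, wt in mixed_composition(FRACTIONS, COMPONENT_OXIDES).items():
    print(f"{oxide}: {wt:.1f} wt%")
```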
This will also help to decide whether some of the phases are detrital or authigenic or a mixture of both.By using the sedimentological constraints together with ChemCam and APXS major element analyses of representative basaltic and alkaline compositions of the Gale Crater rocks and soil, and the CheMin and SAM results during the first 300 sols, we establish an equilibrium thermochemical model for the subsurface mineral reactions in the Yellowknife Bay sediments of Gale Crater. This model envisages reaction of a pore water , see Methods) with the enclosing detrital sediment. In our model, we primarily study the production of clay through the inhomogeneous alteration of a Rocknest-type host rock, within which olivine and amorphous material are the predominant alteration phases, because both of which are relatively reactive compared to other phases. We also consider other host rock end-members (see Methods). There is clear evidence from terrestrial analogue environments such as altered Icelandic basalts and tuffs that olivine and glassy material are the most reactive phases [e.g., Reed and Spycher, Reed et al., Reed et al., K\u00fchn [2004, chapter 3[ and for a discussion on databases and the mathematical-theoretical background [see, e.g., Ganguly, Oelkers and Schott, Reed, Reed, DeBraal et al., De Caritat et al., Schwenzer and Kring, Bridges and Schwenzer, Schwenzer et al., Schwenzer and Kring, Filiberto and Schwenzer, For the thermochemical modeling, we use the program CHIM-XPT (previously CHILLER) [Vaniman et al., Grotzinger et al., For host rock compositions in our modeling, we used a variety of rocks observed by Curiosity Table\u2009 and modereacted with the starting fluid). The plotted W/R ratio is thus a progress variable with very limited rock dissolution at the high W/R end and increased rock dissolution at the low W/R end. Note that W/R end represents the amount of rock reacted with the fluid not the total amount of rock present in a given volume of rock on Mars. Original magmatic minerals are observed in the mudstones [Vaniman et al., McLennan et al., 2. The amount of precipitation increases from a few milligram at high W/R to about 1\u2009g at W/R of 1000 and on the order of 10\u2009g at W/R of 100. We model between W/R of 1 and 100000 but only show 10 to 10000 for most of the runs. Higher W/R is unlikely within a sediment, but the lowest W/R would also produce phases with less H2O than phyllosilicates. W/R therefore describes the environment (freshwater inflow at high W/R in contrast to stagnant fluids with no fresh inflow at the low W/R), but at the same time reaction progress, because in a stagnant situation, more host rock will react over time, especially at low temperatures, where reactions are slow.Results of calculated equilibrium mineral assemblages are presented in diagrams of mineral abundance versus W/R ratio . CHIM-XPT can be controlled either by the set of O2-H2O-SO4-H+ or expressed in terms of HS-SO4-H2O-H+. During the reactions, the SO42\u2212/HS\u2212 pair controls redox in the fluid [Reed et al., 2+/Fe3+ ratio of the host rock or soil (see section 2.3.2). Sulfur concentration of the fluid was taken as found in the Deccan Trap fluids [Minissale et al., In order to model a realistic starting fluid representative of water associated with diagenesis in the Yellowknife Bay sediments, we start with adapted water (AW). 
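Returning briefly to the water/rock ratio defined above: because W/R is the mass of starting fluid divided by the mass of rock that has reacted with it, the rock reacted per kilogram of fluid follows directly and is of the same order as the precipitate masses quoted in the text (about 1 g at W/R of 1000 and roughly 10 g at W/R of 100). A minimal sketch of that arithmetic, assuming a 1 kg fluid basis:

```python
# W/R is used as a progress variable: lower W/R means more rock has reacted
# with a given parcel of fluid. Basis of 1 kg of starting fluid assumed.
def rock_reacted_grams(water_mass_kg=1.0, water_rock_ratio=1000.0):
    """Mass of rock (g) reacted with the given mass of water at a given W/R."""
    return 1000.0 * water_mass_kg / water_rock_ratio

for wr in (10_000, 1_000, 100, 10):
    grams = rock_reacted_grams(water_rock_ratio=wr)
    print(f"W/R = {wr:>6}: ~{grams:.3g} g rock reacted per kg water")
```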
This is the fluid used in our previous Mars studies [see 2, depending on kinetics) dominated precipitate were excluded from forming during the runs.For other calculations, we used the varying proportions of individual components, e.g., olivine and amorphous component. In total, over 100 runs with varying composition, temperature, and redox conditions were performed. APXS analyses were used for rock and soil compositions [3+ Table\u2009, thus giZolotov and Mironenko, Hausrath et al., McAdam et al., Gudbrandsson et al., Bridges and Schwenzer, Vaniman et al., Vaniman et al., Bish et al., The bulk rock models provide insights into the expected alteration mineralogy associated with the general chemistry of the rocks encountered at Yellowknife Bay. However, mineral dissolution is inhomogeneous and highly dependent on temperature and fluid chemistry [e.g., 3+ clay and the daphnite end-members of chlorite as the Fe2+ clay. In our plots of mineral abundance versus W/R, we plot the combined end-members of chlorite. Phases that are not known to form at low temperatures were excluded from the runs, these include garnet, amphiboles, and pyroxenes, as well as high-T mica.We use the following sheet silicates in our CHIM-XPT database: talc ; pyrophyllite; from the chlorite group clinochlore, daphnite, Mn-chlorite, Al-free chlorite; from the kaolinite group kaolinite, illite; from the smectite group montmorillonite ; beidellite , nontronite ; and serpentine . We note that there is no other kaolinite group mineral other than kaolinite, e.g., vermiculite, saponite, and hectorite are not in our database. In the interpretation of our models, nontronite serves as the Fe2+/Fe3+ in those assumptions, and the fact that the thermochemical database is necessarily limited relative to the full large range of possible natural mineral assemblages, we have modeled a best chemical match for the observed clays. Thus, in our runs, we take a nontronite\u2009+\u2009chlorite assemblage as a chemical analogue to the clay identified by Vaniman et al. [de Caritat et al., Ryan and Reynolds, Changela and Bridges, Hicks et al., Treiman et al., 2014[.As outlined above, we use a set of assumptions about the redox conditions of the starting fluid and the redox conditions in the dissolving rock. Bearing in mind the uncertainty of the FeVaniman et al., Ming et al., Bridges et al. [2 partial pressure of 185\u2009mbar and thus a possible atmospheric pressure associated with ancient Mars. Therefore, to test the influence of CO2 dissolved in the incoming fluid, Portage soil was exposed to GPW fluid equilibrated with 185\u2009mbar CO2 over almost all of the modeled W/R range; there is over 40% between W/R of 10,000 and \u223d20 .In contrast to Jake_M, Ekwir_brushed Figure\u2009b, precip2 contents of the host rock , and at W/R mass of rock reacted with the incoming fluid of 100\u20131000, produce calculated mineral assemblages that most closely match the saponite-, sulfide-, and Fe oxide-bearing assemblages identified in the Sheepbed mudstone by CheMin [Morris et al., Vaniman et al., Bish et al., 2O. 
In contrast, in the nakhlite meteorites, the amorphous component is the last product of alteration and has a similar composition to the crystalline saponite phase, and crystallization of the amorphous gel is assumed to have been inhibited by kinetic effects [Changela and Bridges, Hicks et al., The CheMin analysis on Rocknest soil and Cumberland and John Klein drill fines returned three different amorphous compositions in the samples [3+ in the Portage amorphous component [see also Morris et al., Figure\u20091.24Fe0.76SiO4), pure forsterite (Mg2SiO4), olivine\u2009+\u2009host rock, olivine\u2009+\u2009amorphous, forsterite or fayalite\u2009+\u2009amorphous, and mixtures of all three components. We show (a) 50% forsterite and 50% Portage amorphous, Fe3+/Fetot\u2009=\u200945%; (b) 50% forsterite and 50% Portage amorphous, Fe3+/Fetot\u2009=\u200945%; (c) 70% olivine, 15% augite, 15% plagioclase, and Fe3+/Fetot\u2009=\u20090%; and (d) 70% Portage amorphous, 20% olivine, and 10% whole rock Fe3+/Fetot\u2009=\u200940% in Figure\u2009T. The bulk W/R of the rock unit is lower than the W/R of the models because of the presence of unreacted minerals in the mudstone, see also section 2 for more details. Pure forsterite runs produce an Al-free brucite dominated assemblage, which does not match the Al-bearing nature of the observed phyllosilicates. In contrast, other olivine-rich runs produce the expected serpentine-Fe oxide-SiO2 assemblage. A mixture of forsterite and amorphous component produces precipitates dominated by serpentine and \u223d10% of nontronite, chlorite, and Fe oxide each into account, we have modeled the reaction of a variety of mixtures leading from that observation: the minerals and mineral mixtures are ranging from pure olivine as observed in Gale . In contrast to the CO2-poor case . Nontronite formation, too, is not possible during the peak siderite formation . This observation alone would not rule out comprehensive alteration, but the minerals accompanying the clays, e.g., SiO2 (at larger quantities) and zeolite, in the model 0.08Ca0.38\u2009Mg1.10Fe1.544O10(OH)2 at W/R of 100 and 0.17Ca0.38\u2009Mg1.08Fe1.334O10(OH)2 at W/R of 10.It is not possible to exactly match the clay mineralogy in our models to the reported CheMin results, so we average the composition of all sheet silicates in the system at W/R 1000, 100, and 10 Figure\u2009 and compHowever, if the olivine preferentially reacts before the other phases and thus dominates the alteration assemblage at an early stage then this is a possible explanation for the Mg-rich ridges described by Leveille et al. [2014[. On Figure\u2009The presence of \u223d20% clay in the Sheepbed mudstone is consistent with our model. The rock mixture dissolves, so 100% of the silica in the alteration assemblage stems from the rock dissolving in situ. For instance, at W/R of 1000 , we have 0.64E-02 mol of Si in the system, of which 0.11E-02 mole are in the resultant fluid. This means that a maximum of 20% of the silica is in the fluid after the reaction and could potentially have left the system. Silica from the \u201cfluid in\u201d component is negligible; it is in the 10E\u20135 range, 3 orders of magnitude smaller. If we look at silica at W/R of 100, then we predict 0.65E-01\u2009mol in the system and 0.37E-02\u2009mol in the fluid. So there is now 6% remaining in the fluid. This means that for silica, between 80 and 94% of the silica from the rock can be deposited as clay. 
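The silica bookkeeping just described can be reproduced directly from the quoted mole numbers. The short sketch below recovers the roughly 17% (quoted as a maximum of 20%) and about 6% of silica remaining in the fluid at W/R of 1000 and 100, and hence the 80-94% that can be redeposited as clay.

```python
# Mole numbers are those quoted in the text for the modelled runs.
cases = {
    1000: {"si_system_mol": 0.64e-2, "si_fluid_mol": 0.11e-2},
    100:  {"si_system_mol": 0.65e-1, "si_fluid_mol": 0.37e-2},
}

for wr, c in cases.items():
    in_fluid = c["si_fluid_mol"] / c["si_system_mol"]
    retained = 1.0 - in_fluid
    print(f"W/R = {wr:>4}: {in_fluid:4.0%} of Si remains in the fluid, "
          f"so up to {retained:4.0%} can be redeposited as clay")
```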
In other words, at W/R of 1000, the redeposition is almost complete and 20% clay alteration product would stem from dissolving the preexisting rock. Therefore a W/R ratio of 100\u20131000 is consistent with the magnitude of clay abundances in the Sheepbed mudstone.3+ or conditions to oxidize iron. We note that in the models shown in this study, the fluid at the start of the alteration is oxidizing (see section 2.1). This is in accordance with the observed presence of oxychlorine species in the sediment [Ming et al., The amount and nature of the Fe oxide precipitated at low to intermediate W/R is dependent on a complex interplay of the overall redox conditions of the system, the assemblage of Fe-bearing phases and their redox states. For nontronite (or ferric saponite) formation, the fluid and the host rock need to provide FeBristow et al. [T runs (not shown)\u2014the amount of magnetite precipitated increases at higher temperatures. At this point, it is important to bear in mind that post formational processes can change the oxidation state of Fe [Cornell and Schwertmann, Leveille et al., Bristow et al. [Vaniman et al., 3+-oxide formation) appears to correlate with the absence or decrease in ferric relative to ferrous clay forms alongside \u223d10% Fe oxide/hydroxide. The assemblage is slightly dependent on W/R but overall similar at all W/R as shown by our Portage, 50% fayalite or forsterite and 50% amorphous runs plotted on Figure\u2009In accordance with the observations in Icelandic low-temperature surface fluids [The second stage is the main clay forming event\u2014as discussed above\u2014and occurs by the reaction of the local pore fluid (GPW) and selective dissolution of the basaltic sediments, in detail, a mixture of 70% amorphous material with 20% olivine and 10% host rock . This stage is pervasive, and olivine dissolution is congruent. This calculation results in a good match of the modeled clays to the CheMin observations\u2014and in fact the nakhlite clays, too. We assume the W/R of the reaction is higher than 10, because at very low W/R talc formation exceeds 10%, but talc is not observed in the CheMin data. However, the bulk W/R, given the preservation of olivine in the rocks, is likely not high. The pH at W/R of 1000 to 10 changes from around 7.5 to 12 as the alteration progresses from that associated with the Mg-rich ridges and initial olivine dissolution to the main diagenetic assemblage of saponite and magnetite.Grotzinger et al., Nachon et al., Schwenzer et al., After this main clay-forming event, the sulfate veins formed [Cabrol et al., Pelkey and Jakorsky, Pelkey et al., Thomson et al., Schwenzer et al., Fair\u00e9n et al., Williams et al., Grotzinger et al., Vaniman et al., 2-rich Martian atmosphere during precipitation of the minerals, because the availability of CO2 in the system would cause not only significant carbonate precipitation but also the formation of zeolites, quartz/SiO2, and kaolinite ) is a dilute aqueous solution derived from the mediation of a brine with the cation and anion contents in equilibrium with rocks of the Gale area.An early stage of diagenesis associated with Mg-rich ridges suggest some initial, localized alteration reactions associated with the early preferential alteration of olivine with GPW.3+/Fetot ratio of 0.3\u20130.5. The resultant relatively ferrous phyllosilicate produced is consistent with the dominance of magnetite rather than ferric oxide at a W/R of 100 and below. 
However, only a minor change in the redox state of the fluid might trigger magnetite formation at higher W/R.Inhomogeneous host rock dissolution of predominantly amorphous phase identified by CheMin, with lesser olivine and minor overall host rock contribution, occurred via reaction with GPW at 10\u201350\u00b0C, with a W/R of 100\u20131000, and pH of 7.5\u201312. This occurred in an open system with fluid flow and led to a clay-Fe oxide assemblage. The bulk compositions of the modeled phyllosilicate assemblages are similar to saponite clays observed at Yellowknife Bay and in the Lafayette Martian meteorite, though more ferrous with Fe2-rich atmosphere was possible. This is predicted by the absence of a significant carbonate abundance and the absence of phases including zeolites that our models predict are likely to be associated with such a CO2-charged fluid.The reactions associated with the clay and magnetite formation did not occur in a setting where exchange with an overlying CO"} {"text": "Bioimpedance analysis is a noninvasive, low cost and a commonly used approach for body composition measurements and assessment of clinical condition. There are a variety of methods applied for interpretation of measured bioimpedance data and a wide range of utilizations of bioimpedance in body composition estimation and evaluation of clinical status. This paper reviews the main concepts of bioimpedance measurement techniques including the frequency based, the allocation based, bioimpedance vector analysis and the real time bioimpedance analysis systems. Commonly used prediction equations for body composition assessment and influence of anthropometric measurements, gender, ethnic groups, postures, measurements protocols and electrode artifacts in estimated values are also discussed. In addition, this paper also contributes to the deliberations of bioimpedance analysis assessment of abnormal loss in lean body mass and unbalanced shift in body fluids and to the summary of diagnostic usage in different kinds of conditions such as cardiac, pulmonary, renal, and neural and infection diseases. The essential fundamentals of bioimpedance measurement in the human body and a variety of methods are used to interpret the obtained information. In addition there is a wide spectrum of utilization of bioimpedance in healthcare facilities such as disease prognosis and monitoring of body vital status. Thus, with such a broad utilization, we feel that this warrants a review of the most fundamental aspects and healthcare applications of bioimpedance analysis.Studies on the electrical properties of biological tissues have been going on since the late 18th century exploredi.e., active and passive response. Active response (bioelectricity) occurs when biological tissue provokes electricity from ionic activities inside cells, as in electrocardiograph (ECG) signals from the heart and electroencephalograph (EEG) signals from the brain. Passive response occurs when biological tissues are simulated through an external electrical current source [The electrical properties of biological tissues are currently categorized based on the source of the electricity, t source . Bioimpet source .et al. [Due to the noninvasiveness, the low cost and the portability of bioimpedance analysis systems, numerous researchers have conducted studies on bioimpedance analysis and its applications in body composition estimation and evaluation of clinical conditions. Recently, Mialich et al. reviewedet al. 
has revi2.c) that is caused by the capacitance of the cell membrane [Impedance (Z), from an electrical point of view, is the obstruction to the flow of an alternating current and, hence, is dependent on the frequency of the applied current, defined in impedance magnitude (|Z|) and phase angle (\u03c6) as shown in membrane :(1)Z=R+c) of an object as shown in Resistance of an object is determined by a shape, that is described as length (L) and surface area (A), and material type, that is described by resistivity (\u03c1), as shown in 0 \u2248 8.854 \u00d7 10\u221212 F\u00b7m\u22121) and the relative dielectric permittivity constant (\u03b5r) that is defined based on the material between the plates , as shown in Capacitance (C) is defined as the ability of the non-conducting object to save electrical charges, that is equal to the ratio between differentiation in voltage across object (dV/dt) and current that is passed through the object (I(t)), as shown in b) through the basic means of resistance measurement. From b) can be obtained by substituting the surface area (A) with the numerator and denominator of the length (L), as in Body composition estimation using bioimpedance measurements is based on determination of body volume (VBody) and fat free mass (FFM), as shown in The human body as a volume is composed generally of fat mass (FM) which is considered as a non-conductor of electric charge and is equal to the difference between body weight that include protein and total body water that consists of extracellular fluid (ECF) and intracellular fluid (ICF) . Figure 2/R) [2/R50 that was introduced by Kyle et al. [et al. [Most of the known prediction methods rely on the relation between water volume and the ratio between square length to resistance (L2/R) , howevere et al. ,14 and H [et al. .Measurement of bioimpedance is obtained from the whole body and body segments separately, using single frequency, multiple frequencies and bioimpedance spectroscopy analysis. In addition to several alternative assessments method such as bioimpedance vector analysis and real time bioimpedance analysis.2.1.Analysis of bioimpedance information obtained at 50 KHz electric current is known as single-frequency bioimpedance analysis (SF-BIA). SF-BIA is the most used and is one of the earliest proposed methods for the estimation of body compartments, It is based on the inverse proportion between assessed impedance and TBW, that represents the conductive path of the electric current .SF-BIA predicts the volume of TBW that is composed of fluctuating percentages of extra cellular fluid (ECF) which is almost equal to 75% of TBW, and ICF that represent the rest . Studies2.2.et al. [et al. [et al. [et al. [Analysis of bioimpedance that is obtained at more than two frequencies is known as multiple-frequency bioimpedance analysis (MF-BIA). MF-BIA is based on the finding that the ECF and TBW can be assessed by exposing it to low and high frequency electric currents, respectively. Thomasset has propet al. stated t [et al. state th [et al. report t [et al. reported [et al. .2.3.0) and resistance at infinity frequency (Rinf) that is then used to predict ECF and TBW, respectively. The use of 100 and 1 kHz, respectively, was earlier proposed by Thomasset [Analysis of bioimpedance data obtained using a broad band of frequencies is known as bioimpedance spectroscopy (BIS). 
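The resistances at zero and infinite frequency just mentioned are usually obtained by fitting a Cole-type model, whose parameters (R0, Rinf, the characteristic frequency fc and the dispersion exponent α) are the Cole parameters referred to in the text. The sketch below evaluates the Cole impedance across frequency and reports the magnitude |Z| and phase angle introduced earlier in this section; the parameter values are illustrative, not measured data.

```python
import numpy as np

def cole_impedance(freq_hz, r0, rinf, fc, alpha):
    """Complex impedance of the Cole model: Z = Rinf + (R0 - Rinf) / (1 + (j f/fc)^alpha)."""
    jf = 1j * np.asarray(freq_hz, dtype=float) / fc
    return rinf + (r0 - rinf) / (1.0 + jf ** alpha)

# Illustrative parameter values (not measured data):
freqs = np.array([1e3, 5e3, 50e3, 100e3, 500e3])
z = cole_impedance(freqs, r0=600.0, rinf=400.0, fc=50e3, alpha=0.7)

for f, zi in zip(freqs, z):
    magnitude = abs(zi)                      # |Z| in ohm
    phase_deg = np.degrees(np.angle(zi))     # phase angle in degrees
    print(f"{f/1e3:6.0f} kHz: |Z| = {magnitude:6.1f} ohm, phase = {phase_deg:6.2f} deg")
```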
The BIS method is based on the determination of resistance at zero frequency , in The determination of Cole module parameters (Rb) and substituting the surface area (A) by body volume (Vb). Ayllon et al. [0, Rinf, \u03b1, Fc) that is obtained by using only resistance achieves slightly better results and there is less standard error based on the Non-Linear Least Squares technique as compared to the capacitive and impedance complex components. Ward et al. [b is the body volume and Kb is a dimensionless shape factor calculated from the length and perimeters of the upper and lower limbs, and the trunk, taken into consideration the body shape composed of the five cylinders.; Van Loan et al. [b) from statistical anatomical measurements in adults to be equal to 4.3.In n et al. reports d et al. concluden et al. calculat2.4.et al. [et al. [et al. [Measurement of total body bioimpedance is the most commonly used method for estimating whole body compartments. Many of the whole body bioimpedance instruments apply three approaches for impedance measurement: hand to foot method ,17, footet al. through et al. . Hand to [et al. by perfo [et al. validate2.5.et al. [Segmental bioimpedance analysis achieves better estimation of skeletal muscle mass (SMM) than whole body bioimpedance analysis, with a reported standard error of 6.1% in reference to MRI measurements among 30 male subjects . Baumgaret al. stated tSegmental bioimpedance analysis detects the fluctuation in ECF due to differences in posture and is more precise than the ankle foot method , and givet al. stated that the trunk represents 50% of the body mass [et al. pointed out that total bioimpedance measurement assesses mainly the upper and lower limb compartments, and shows some limitation to predict water compartments of the trunk [Segmental or perpendicular bioimpedance analysis defines the measurement method of body segments that is mostly treated as five cylinders as in ody mass . Kyle ethe trunk .et al. [et al. [et al. [et al. [Measurement of segmental bioimpedance can be achieved through four types of protocols. The first approach, as suggested by Scheltinga et al. , uses du [et al. , through [et al. , who sug [et al. ,59,60, iet al. [2 = 0.99; furthermore Seward et al. [Limitations of whole body bioimpedance measurement in evaluating body segment compartments have given rise to the demand for segment localized bioimpedance analysis applications. Scharfetter et al. , reported et al. , introduet al. [Studies report that the segmental bioimpedance analysis method shows some limitations in the estimation of FFM ,63, withet al. conclude2.6.et al. [c graph) from bioimpedance measurements. Using 8,022 normal subjects Piccoli et al. [Bioimpedance analysis, as an independent method for the assessment of the human health status from absolute bioimpedance measurements, has triggered a new path of data analysis and interpretation. The bioimpedance vector analysis method (BIVA) is a novel approach established essentially by Piccoli et al. ,65 to esi et al. formulatet al. [Evaluation study of the BIVA method by Cox-Reijven et al. , on 70 det al. .In the BIVAThe BIVA method is also considered as a valid tool for the estimation of dry weight in 24 haemodialysis patients' with reference to the Bilbrey Index based on different allocation of values before and after obtrusion .et al. reported that the BIVA method is affected by differences in biological factors and measurement artifacts [Kyle rtifacts . Ward anrtifacts .et al. 
[2/4\u03c0 and (c) is the circumference in meter of the arm, waist and calf, respectively; L = 1.1 (Ht), where Ht is body height in meters.A specific BIVA method has been proposed by Marini et al. to neutret al. ,74, wheret al. [et al. [Another alternative method for analysis is real time processing of bioimpedance data which is currently introduced as a key feature for body health monitoring applications. A logarithmic analysis carried out between 0.01 and 10 Hz with five frequencies needs 276 s to be completed, this includes the calculation time . Sanchez [et al. introduc [et al. .Use of multi-sine excitation signals in bioimpedance measurements that is proposed in ,79 helpe3.Body composition assessment is considered a key factor for the evaluation of general health status of humans. Several methods use different assumptions to estimate body composition based on the number of compartments. This review considers that the human body is composed of two main compartments, FM and body lean mass or FFM. FFM is composed of bone minerals and body cell mass (BCM) that includes skeletal muscle mass (SMM). BCM contains proteins and TBW that represents 73% of lean mass in normal hydrated subjects. TBW is composed of ICF and ECF as illustrated in 3.1.FM and FFM estimations are considered one of the main objectives of body composition assessment techniques. Variations in FM among the reference population are due to several factors, but are believed to follow aging factors in addition to gradual changes in lifestyle .Anthropometric and skin fold thickness measurements are traditional, simple and inexpensive methods for body fat estimation to assess the size of specific subcutaneous fat depots comparedBioimpedance analysis has been shown in recent studies to be more precise for determining lean or fat mass in humans . In compet al. [2 in reference to DXA method:50) and is resistance and reactance at 50 KHz, and (Wt) is body weight. The developed equation achieved a correlation coefficient (R) that is equal to 0.986, standard error of the estimate (SEE) is equal to 1.72 kg and technical error is 1.74 kg.Kyle et al. developeIn ,85, FFM et al. [Sun et al. , used a R2 = 0.90 and 0.83 and root mean square errors of 3.9 and 2.9 kg for males and females, respectively.The mean FFM prediction equations achieved a correlation coefficient et al. [Deurenberg et al. , used deR2 = 0.93 and standard estimation error (SEE) = 2.63 kg.The FFM prediction equations achieved a correlation coefficient et al. [et al. [Pichard et al. , assesse [et al. , and conHeitmann compared2 = 0.89) and lower standard estimation error (SEE = 3.32 kg) than the multiple regression equations for skin fold or body mass index .The multiple regression Heitmann assessedet al. [ecf and Rtbw represents resistance of extracellular fluids and total body water extracted using the Cole module [Recently, Pichler et al. assessede module . In conce module .3.2.+ and Cl\u2212, and for ICF are K+ and PO\u22124 [Body fluid is the total volume of fluids inside a human body that represents the majority of the FFM volume percentage. TBW includes the fluids inside the cellular mass that is known as ICF; and the fluid located outside the cell body which is composed of plasma and interstitial fluid which is known as ECF. 
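Before turning to body fluids, the single-frequency regression equations cited in the section above can be illustrated in code. Published equations of this kind combine the impedance index Ht²/R50 with weight, reactance and sex; because the coefficients themselves are not reproduced in the text here, the values below are placeholders of plausible magnitude only, and the validated, population-specific coefficients from the cited publications should be substituted before any real use.

```python
def ffm_kg(height_cm, resistance_50khz_ohm, weight_kg, reactance_50khz_ohm,
           male, coeffs=(-4.1, 0.52, 0.23, 0.13, 4.2)):
    """Generic single-frequency FFM regression of the form discussed above:

        FFM = a + b*Ht^2/R50 + c*Wt + d*Xc50 + e*sex   (sex: male=1, female=0)

    Default coefficients are placeholders, not a validated published equation.
    """
    a, b, c, d, e = coeffs
    return (a
            + b * height_cm ** 2 / resistance_50khz_ohm
            + c * weight_kg
            + d * reactance_50khz_ohm
            + e * (1 if male else 0))

# Illustrative subject: 175 cm, 70 kg, R50 = 500 ohm, Xc50 = 55 ohm, male
print(f"FFM ~ {ffm_kg(175, 500, 70, 55, male=True):.1f} kg")
```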
ECF and ICF fluids that are incorporated under TBW, contain several ion types with different concentrations, however the main ions in ECF are Naand PO\u22124 .Body fluids estimation using bioimpedance measurements are based on the inversely proportional between body resistance and the total amount of body water . There aet al. [Sun et al. develope2) and mean square error equal to 0.84 and 3.8 L in men, and 0.79 and 2.6 L in women.The developed equation achieved a correlation coefficient and standard estimation error equal to 0.89 and 1.7 L.After measurements performed using bioimpedance and bromide dilution methods on 40 subjects aged 21\u201381 years, of which 22 were healthy subjects, 12 were affected by chronic heart failure and 6 by chronic renal failure, the best estimation results at 1 KHz achieved a correlation coefficient and standard error of estimate (SEE) equal to 0.95 and 1.73 L using Z100KHz, and 0.95 and 1.74 L using Z50KHz:The prediction equation of TBW achieved a correlation coefficient (R2) and standard error of estimate (SEE) equal to 0.87 and 0.98 L using Z1KHz, and 0.86 and 1.02 L using Z5KHz.The prediction equation of ECF achieved a correlation coefficient is volume fraction of non-conducting tissue. Based on Hanai's mixture method [a) can be calculated using the following Prediction of body fluids using the BIS method in three steps involves firstly determination using the values of Re theory :(27)\u03c1a=e method , tissue ecf) and TBW volume (Vb). The volume fraction of non-conducting tissues at low frequencies calculated as in At low frequencies the current will pass through extracellular fluids only without intracellular fluid due to the high capacitance of cell membranes . In thata) at low frequency represents the extracellular fluid resistivity (\u03c1Aecf), thus the resistance of ECF (Recf) can be recalculated in aecf) from Based on the mixture theory , apparenecf to be equal to 40.3 \u03a9 \u00b7cm for men and 42.3 \u03a9 \u00b7cm for women, which is close to that achieved by saline, and is about 40 \u03a9 \u00b7cm t for the ECF composed of plasma and interstitial water [Hanai , calculaal water .ecf) caused by changes in estimated ECF resistance (Recf), that is achieved by replacing body volume (Vb), that is equal to the ratio between body weight (Wt) in Kg and body density (b)D in Kg/L from To reform the equation to evaluate the variance in ECF volume (Vb), extracellular fluid resistivity (\u03c1aecf) and body density (bD) are constant values that can be included in one factor defined as extracellular fluid factor (Ke) as in ecf) as in Body factor and taking the module of non-conducting tissue factor (c) in ecf) using the same equation as Moissl et al. , suggestFrom , (a) andet al. [inf using the same assumption of mixture theory [Jaffrin et al. suggestee theory , and assa_tbw) from actual total body water resistivity (\u03c1tbw), the parameters in (c) from To determine the apparent resistivity of total body water (\u03c1b) from tbw) and total body water volume (Vtbw) is recalculated by using By replacing the actual resistivity by apparent resistivity for total body water in et al. [tbw to be equal to 104.3 \u03a9 \u00b7cm in men and 100.5 \u03a9 \u00b7cm. 
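The mixture-theory route to extracellular fluid volume outlined above reduces, in the commonly used BIS formulation, to V_ecf = k_ecf (Ht² √Wt / R_ecf)^(2/3), where k_ecf bundles the extracellular resistivity, body density and a shape factor. The sketch below assumes that form; the constant used is a placeholder of plausible magnitude, not one of the published values discussed in the text.

```python
def ecw_litres(height_cm, weight_kg, r_ecf_ohm, k_ecf=0.30):
    """Extracellular water from the mixture-theory form discussed above:

        V_ecf = k_ecf * (Ht^2 * sqrt(Wt) / R_ecf)^(2/3)

    k_ecf is a placeholder constant, not a validated published value.
    """
    return k_ecf * (height_cm ** 2 * weight_kg ** 0.5 / r_ecf_ohm) ** (2.0 / 3.0)

# Illustrative subject: 175 cm, 70 kg, extrapolated zero-frequency resistance 650 ohm
print(f"ECW ~ {ecw_litres(175, 70, 650):.1f} L")
```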
A validation study conducted in 28 dialysed patients [tbw was equal to 108.1 \u03a9\u00b7cm in men and 100.2 \u03a9\u00b7cm, which predicted 91% of mean water loss when compared with 39% for Cole method [tbw and hydration rate values.Considering that total body water is equal to the accumulation of ECF and ICF, Jaffrin et al. calculatpatients , conclude method , but oveet al. [icf) using a new assumption for TBW resistivity (\u03c1tbw), as in For ICF prediction using a BIS method, Matthie et al. introductbw/Recf) is opposite and proportional to (Vtbw/Vecf):In the second version of mixture theory, total body water volume is considered to be equal to the summation of ECF and ICF, for ECF estimation, the relation in et al. [\u03c1icf to be equal to 273.9 \u03a9\u00b7cm and \u03c1ecf = 40.5 \u03a9\u00b7cm in men and 264.9 \u03a9\u00b7cm and 39.0 \u03a9\u00b7cm, respectively in women. De Lorenzo et al. [icf):Moissl et al. calculato et al. suggest et al. [Ri is less accurate than for Re in parallel module because it sums up the errors on Re and Rinf, is not.Jaffrin and Morel claim thet al. , who staet al. [icf), taking into consideration that the non-conducting tissue factor (c) is as given in Moissl et al. introducicf) and intracellular fluid volume (Vicf) are added as in tbw) and total body water volume (Vtbw) is equal to the summation of ECF and ICF volumes as in tbw) using different assumption of (Ktbw) and (\u03c1tbw) from Jaffrin et al. [et al. [40K isotope [Then the recalculated intracellular fluid factor with a correlation coefficient referenced to the total body potassium counting method is equal to 0.91, 0.82 and 0.89, and a standard estimation error equal of 5.6 kg, 6.3 kg and 1.3 kg for FFM, BCM and ECF, respectively.Ward et al. presente4.2.Variations in body composition between male and female were proven in several studies . In bodyet al. [FFM or lean mass studies show that males have greater FFM than females with different ranges. Kyle et al. state thet al. on 1649 et al. [et al. [TBW averaged 73.2% of fat free mass in the healthy population; however several studies show that males have less TBW than females . Sun et et al. , stated [et al. state thDue to the different body composition between males and females, gender considerations have a strong impact in estimating body compartments.4.3.Aging is defined as a multi-factor changing in the physical and biological activities of the human body that leads to differences in body composition among age groups. When the human body becomes older it leads to a gradual increase in fat mass and spontaneous decrease in lean mass. Fat free mass to fat mass ratio increases gradually in response to increase of age, and a noticeable increment in average weight is seen among the elder population compared with adults associated with increment in fat mass . In someet al. [Several studies were conducted using the BIA method on children ,105 adulet al. reported4.4.Body composition varies among different races and ethnic groups due to the environment, nutrition factors, culture and anthropometric measurements that include body conformation . There iet al. [et al. [et al. assessed the segmental lean mass among Koreans [et al. assessed the fat free mass among Germans and compared it to the American and Swiss population [et al. studied the clinical applications of BIVA on Slovaks [et al. had performed a comparative study among two different Indian races [et al. 
obtained specific BIVA reference values for the Italian healthy elderly population in order to construct the specific tolerance ellipses to be used for reference purposes for assessing body composition in gerontological practice and for epidemiological purposes [The majority of bioimpedance measurement studies have been done on Caucasian subjects , Kotler et al. and Sun [et al. have inc Koreans , Schulz pulation . Siv\u00e1kov Slovaks . Nigam ean races , whereaspurposes . Validat4.5.Simplicity and the economic acceptance of bioimpedance analysis method for body composition estimation have increased the need to unify the protocols and procedures of bioimpedance measurements in order to retrieve robust data.For the foot to ankle measurement method, bioimpedance measurements performed in a supine position with abduction of the upper limbs to 30 degrees and lower limbs to 45 degrees for 5 to 10 min. studies show that when the posture changes from a standing to a supine body position, the ECV decreased in the arms by 2.51% and legs by 3.02%, but increased in the trunk by 3.2% . Fastinget al. concluded that the error in total body water prediction range from 1 to 1.5 L figured out after laying at rest for one hour [Electrodes should be placed on the pre-cleaned metacarpal and metatarsal phalangeal joints with a distance in between of at least 5 cm without skin lesions at the location of the electrodes. In some studies skin temperature should be counted ,120. Subone hour .4.6.In bioimpedance analysis, the geometrical structure of electrode has a strong impact on elementary data retrieved during the measurement process. In bioimpedance analysis electrodes are defined as isoelectric materials with a negligible voltage drop along the connectors. The minimum numbers of electrodes required to perform the bioimpedance measurements are two, one for current injection with the assumption of zero potential difference and the other for collecting the voltage drop with a negligible current flow and is more affected by position.The tetrapolar electrode approach become widely used for whole bioimpedance measurements because of the uniformity of current distribution compared to monopolar electrodes , and the2 are the most commonly used shapes [Ag-AgCl electrodes are now used in most bioimpedance measurements because it has a well-defined DC potential with electrolyte gel to minimize the gap impedance between skin and electrodes. Circular and rectangular electrode shapes with a contact area greater than 4 cmd shapes .et al. investigated the impact of electrode discrepancy on BIS measurements and concluded that mismatched potential electrode causes 4% overestimated measurements in resistance at zero and infinite frequency because of an imbalanced electrical field distribution [et al. stated that capacitance between different body segments and earth, and capacitance between the signal ground of the device and earth cause a significant false dispersion in the measured impedance spectra at frequencies >500 kHz [Buend\u00eda ribution . Shiffmaribution addresse>500 kHz .Errors in bioimpedance measurements are caused by many factors such as motion, miss-positioning, connector length and fabrication errors. Moreover, the diversity of the commercially available bioimpedance analyzers cause a wide range of fluctuations in measurements between the devices. 
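One practical way to quantify the device-to-device fluctuation noted above is to measure a known resistor-capacitor test circuit and compare the analyzer reading with its theoretical impedance. The sketch below computes that reference value for an illustrative parallel RC network at 50 kHz; the component values are assumptions for illustration, not a prescribed standard.

```python
import cmath
import math

def parallel_rc_impedance(r_ohm, c_farad, freq_hz):
    """Theoretical impedance of a resistor in parallel with a capacitor."""
    omega = 2.0 * math.pi * freq_hz
    return r_ohm / (1.0 + 1j * omega * r_ohm * c_farad)

# Illustrative test circuit: 500 ohm in parallel with 1 nF, measured at 50 kHz.
z = parallel_rc_impedance(500.0, 1e-9, 50e3)
print(f"expected |Z| = {abs(z):.1f} ohm, phase = {math.degrees(cmath.phase(z)):.1f} deg")
# A reading that deviates appreciably from these values would indicate the need
# to recalibrate the analyzer's source or sensing circuitry.
```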
Thus the calibration of the components inside a bioimpedance analyzer such as signal generator, sensing apparatus, scales of weight and height and electrical interference should be conducted to ensure the reliability of the bioimpedance analyzers .5.Bioimpedance analysis in healthcare practice contributes to the estimation of body compartments to assess the regular change in nutrition status in in-patients and to monitor nutritional risk in out-patients . Most ofObservation of body compartment fluctuations like fat free mass, fat mass and total body water from normal limits are considered as key factors to be used in bioimpedance analysis in healthcare applications. Abnormal loss in lean body mass and unbalanced shifts in body fluids are the most measured parameters to be used to assess the healthiness of the human body. Analysis of bioimpedance parameters has bern used in several studies to estimate and analyze the changes in disorders of different kind of diseases.et al. [et al. [Norman et al. stated t [et al. stated t6.Increasing demands for accurate, cost effective and non-invasive systems for clinical status monitoring and diagnosis of diseases in healthcare, has accelerated the research endeavors to provide new methods and technologies to evaluate the health condition of human body. Body composition assessment tools has been considered a promising approach for the quantitative measurement of tissues characteristic over time, in addition to direct relativity between fluctuations in body composition equivalences and survival rate, clinical condition, illness and quality of life. Bioimpedance analysis is a growing method for body compartments estimation in nutrition studies, sport medicine and evaluation of hydration rate, fat mass and fat free mass between healthy and diseased populations. Fat mass, fat free mass including skeletal muscle mass, bone minerals, and total body water, which is composed of intercellular fluid and extracellular fluid, are compartments that can be predicted and analyzed using suitable bioimpedance measurements techniques, procedures and population, age, ethnic groups or disease-dedicated bioimpedance analysis equations. Further studies are needed to evaluate the correlations between variations in bioimpedance parameters, especially in ECF and ICF, and the deviation from health to disease."} {"text": "It is now well documented that over 400 subglacial lakes exist across the bed of the Antarctic Ice Sheet. They comprise a variety of sizes and volumes (from the approx. 250\u2009km long Lake Vostok to bodies of water less than 1\u2009km in length), relate to a number of discrete topographic settings and exhibit a range of dynamic behaviours . Here we critique recent advances in our understanding of subglacial lakes, in particular since the last inventory in 2012. We show that within 3 years our knowledge of the hydrological processes at the ice-sheet base has advanced considerably. We describe evidence for further \u2018active\u2019 subglacial lakes, based on satellite observation of ice-surface changes, and discuss why detection of many \u2018active\u2019 lakes is not resolved in traditional radio-echo sounding methods. We go on to review evidence for large-scale subglacial water flow in Antarctica, including the discovery of ancient channels developed by former hydrological processes. We end by predicting areas where future discoveries may be possible, including the detection, measurement and significance of groundwater (i.e. 
water held beneath the ice-bed interface). SubglacSubsequent to the discovery of subglacial lakes over 40 years ago, the consensus among glaciologists at the time was that water flowed very slowly at the Antarctic Ice Sheet bed and, therefore, had minimal glacial dynamical impact. As a consequence, little research on subglacial lakes was conducted in the 1980s, except for an unpublished chapter in a PhD thesis . This laInterest in subglacial lakes was renewed in the early 1990s, when high-precision ERS-1 satellite radar altimetry revealed that the 1970s RES data from Lake Vostok coincided with a notably flat ice surface . This brIn the 20 years that followed, our appreciation of Antarctic subglacial lakes developed considerably and allowed a wider appreciation of hydrological processes beneath the ice. We now know that over 400 lakes exist at the ice-sheet bed. The last full inventory of subglacial lakes provided information on 379 features ; this waSome subglacial lakes are prone to sudden discharges of water , which can flow hundreds of kilometres and also connect with other lakes . Such waThe last review on the state of knowledge of subglacial lakes and their impact on ice-sheet hydrology was undertaken by Wright & Siegert . An exce2.Satellite altimetry techniques have been used on numerous occasions to identify the presence of \u2018active\u2019 subglacial lakes, their connectivity and their impact on ice dynamics ,17,25,26et al. [2 surface depression. Between 2009 and 2012, the lake re-filled slowly, causing the ice-sheet surface to slowly rise again [3) yet reported from Antarctica [One of the most remarkable \u2018active\u2019 subglacial lakes identified by Smith et al. is Lake et al. , and ASTet al. . The datse again ,30. The et al. [et al. [Since the publication of Smith et al. , severalet al. ,31,32,33 [et al. and the et al. [To investigate the nature of the subglacial environment directly beneath and around a zone of known ice-surface elevation change, Siegert et al. undertooet al. , as wellet al. [The first thing to note about the resulting dataset is that no obvious RES reflection from a deep-water subglacial lake was identified . Normallet al. .et al. [The second observation is that Institute E2 does coincide, at least in part, with a small minimum in the subglacial hydrological potential, meaning that basal water is expected to accumulate at some level. The location of this potential minimum is downstream of a subglacial hill. The implication is that Institute E2 is an ephemeral, probably shallow, lake that exists because water pools on the lee side of a subglacial hill; the potential minimum being a result of the ice thickness gradient rather than the basal topography. One possible explanation, given the spatially restricted nature of the hydrological minimum, is that its dimensions are far smaller than those proposed by Smith et al. .et al. [et al. [et al. [Clearly, the differences between Institute E2 and Lake Vostok are substantial and they possibly represent end-members of the spectrum of Antarctic subglacial lakes. To see whether other \u2018active\u2019 subglacial lakes conform to observations at Institute E2, Wright et al. undertoo [et al. had noti [et al. , no dire3.et al. [The dynamic behaviour of \u2018active\u2019 lakes demonstrates that there are hydrological pathways and connections beneath the ice sheet. Wright et al. showed tet al. . The natet al. [et al. [et al. [\u22122). 
Using the correct (greater) geothermal heat flux will produce higher meltrates than previously calculated, which may be capable of supporting persistent R-channels.Wingham et al. first ev [et al. provide [et al. ). The qu [et al. . Carter [et al. suggest [et al. suggest et al. [Le Brocq et al. provide et al. ).4.Geomorphological evidence for channels associated with former subglacial flow of water and its connectivity is well documented from the areas of former Northern Hemisphere glaciation (e.g. ), from t3) palaeo-subglacial lake in the upper Wilkes Basin in the Middle Miocene [et al. [\u22121), meaning the channels have clear surface expressions and smaller scale canyon-like structures . Combin [et al. were abl [et al. interpre [et al. , which a [et al. postulatet al. [et al. [While channels at the ice-sheet bed have only been identified across a few locations, it is entirely possible that other regions of the ice-sheet base contain both relic and \u2018active\u2019 subglacial drainage features. By combining satellite and airborne geophysical datasets, using techniques established by Rose et al. and Jord [et al. , the imp5.et al. [Research undertaken in the last 4 years has expanded our knowledge of subglacial hydrology considerably and has also allowed a number of new research questions to be framed. Wright & Siegert and Wriget al. ,14 list et al. also deset al. [The notion of \u2018active\u2019 subglacial lakes has provoked speculation that subglacial lake water at the centre of the ice sheet is able to flow to the margin. While direct evidence for such flow will obviously be difficult to acquire, Wright et al. have shoet al. and, whiAnother unknown is the size and nature of the basal water bodies that are responsible for ice-surface elevation changes. Research to date points to much smaller water bodies than originally depicted, but more work is needed to fully appreciate the hydrological processes responsible for the surface observations.et al. [The traditional view of Antarctic subglacial hydrology is to consider the ice-sheet bed as being impermeable. This is almost certainly not the case and there is every possibility that groundwater exists across large portions of the continent. Indeed ice-sheet modelling has pointed to groundwater as being critical to maintaining the flow of ice streams in the Siple Coast . Furtheret al. postulatet al. . The rea"} {"text": "Manual therapy has long been a component of physical rehabilitation programs, especially to treat those in pain. The mechanisms of manual therapy, however, are not fully understood, and it has been suggested that its pain modulatory effects are of neurophysiological origin and may be mediated by the descending modulatory circuit. Therefore, the purpose of this review is to examine the neurophysiological response to different types of manual therapy, in order to better understand the neurophysiological mechanisms behind each therapy's analgesic effects. It is concluded that different forms of manual therapy elicit analgesic effects via different mechanisms, and nearly all therapies appear to be at least partially mediated by descending modulation. Additionally, future avenues of mechanistic research pertaining to manual therapy are discussed. Manual therapy has been a component of physical rehabilitation programs since as early as 400 BC . Since i\u03b2 fibers that inhibit nociceptive input from A\u03b4 and C afferent fibers [Melzack and Wall were thet fibers , 10. Howt fibers \u201319. 
Whatt fibers and plact fibers \u201327 is al\u03b2-endorphins are EO peptides that have not only been shown to have a comparable analgesic effect to morphine [morphine , but aremorphine . Diffusemorphine and Le Bmorphine found thmorphine , 33, andmorphine . It has morphine .Previous reviews have noted potential descending modulatory mechanisms, an endogenous opioid response, in both physical therapy and physIn order to investigate the neurotransmitters associated with a descending inhibitory response to manual therapy, PubMed was searched using the following query: AND NOT (\u201clabor\u201d OR \u201ccardiac\u201d OR \u201cuterus\u201d OR \u201cmilk\u201d) in November 2015. All relevant studies and reviews that were written in English were included, with the exception of those that were retracted. Animal studies were included, as they may provide further insight into mechanisms that may not be ethical to directly measure in humans; for example, utilizing brain biopsies to observe receptor activity. The general methods and important findings pertaining to descending pain modulatory systems were described.Through the millennia, numerous types of manipulation therapies have been developed and advocated, and have been purported to cure everything from scarlet fever and diphtheria to hearing loss . However\u03b2-endorphin and N-palmitoylethanolamide (PEA), an endogenous analog of arachidonylethanolamide (AEA), or anandamide, an endocannabinoid, were observed 30 minutes after treatment; at 24 hours, similar biomarker changes from baseline were found. Subjects with chronic low back pain presented greater biomarker alterations following OMT than the control (asymptomatic) group. However, because no true control or sham group was utilized, it is not possible to distinguish whether these changes in biomarkers were due to the placebo effect or something greater, as endocannabinoids are implicated in placebo-induced analgesia [Degenhardt et al. recruitenalgesia , though In a blinded, randomized control trial, McPartland et al. investig\u03b2-endorphin levels in the experimental group (n = 9) who, following a 20-minute relaxation period, underwent a procedure intended to mobilize the upper cervical spine through \u201cjoint play maneuvers\u201d [\u03b2-endorphin levels were taken at \u221220, \u221215, +5, +15, and +30 minutes prior to and following the intervention, and effects were determined via an analysis of variance. Neither the sham group (n = 9), which underwent the same joint play manipulation but without the thrusting maneuver , nor the control group (n = 9) experienced such an increase in plasma \u03b2-endorphin concentration. However, two subsequent studies demonstrated findings contradicting those of Vernon et al. [\u03b2-endorphin concentrations in experimental groups with respect to sham and control groups following spinal manipulation. Sanders et al. [\u03b2-endorphin levels across all time points. Christian et al. [\u03b2-endorphin assays. Relevant methodological differences between the investigations of Christian et al. [n = 10) and symptomatic (n = 10) groups that received the experimental spinal manipulative therapy (SMT) protocol and asymptomatic (n = 10) and symptomatic (n = 10) groups that received the sham SMT procedure. Experimental and sham SMT procedures employed by Christian et al. 
[n = 6) from the aforementioned two include the region of the spine considered , the application of light touch to the affected area in the sham group as opposed to joint play of any kind, and the population sampled. Unlike Christian et al. [A number of studies have investigated the pain modulation mechanisms of spinal manipulation, which, as the name implies, is specific only to spinal articulation. The first to do so were Vernon et al. , who founeuvers\u201d , during n et al. : Christin et al. and Sandn et al. both fais et al. drew blos et al. , an analn et al. drew thrn et al. describen et al. and Vernn et al. include n et al. : asympton et al. and Vernn et al. were iden et al. and Sandn et al. , no contn et al. . Methodon et al. study and thoracic manipulations (n = 10) to a control group (n = 10) in a single-blind, randomized study of graduate student subjects who responded to university-placed advertisements. Cervical manipulations consisted of a high-velocity, mid-range, and leftward rotary thrust about the C4 and C5 vertebrae of the supine subject. Thoracic manipulations consisted of a high-velocity, end-range force applied in the anteroposterior plane to T3-4/T4-5 articulations. Blood was collected from the cephalic vein of each subject before, immediately after, and two hours following the intervention. Both cervical and thoracic groups saw decreases in neurotensin and oxytocin, as well as increases in orexin A plasma concentrations following respective interventions.Recently, Plaza-Manzano et al. comparedMultiple reviews have also investigated the pain modulating mechanisms of spinal manipulation , 45 and \u03b3-aminobutyric acid (GABAA) receptors in the spinal cord. Knee manipulations employed consisted of the movement of the tibia on the fixed femur. For a duration of three minutes, the joint was flexed and extended across its full range of motion while the tibia was made to translate in the anteroposterior plane. One minute was allowed for rest between each of the three manipulation sessions. Using a model of capsaicin-induced hyperalgesia and the systematic introduction of GABAA, opioid, \u03b12-adrenergic, 5-HT1/2, 5-HT1A, 5-HT2A, and 5-HT3 receptor inhibitors, the authors determined that the analgesic effects of knee joint manipulation were not impacted by the spinal blockade of opioid or GABAA receptors but were impacted by the blockade of 5-HT1A and \u03b12-adrenergic receptors. It was therefore posited that descending inhibition following knee joint manipulation may be modulated by serotonergic and noradrenergic mechanisms. No attempt has been made to replicate these findings in humans.Skyba et al. investigThere is evidence to support that, in male, Swiss mice, ankle mobilization-induced analgesia is mediated by EO, endocannabinoidergic, and adenosinergic pathways \u201350. All Importantly, Martins et al. noted thFurther research by Martins et al. investigPaungmali and colleagues have studied Mulligan's Mobilization with Movement (MWM) in lateral epicondylalgia , 56. Tweproven to be clinically effective,\u201d at least in the reduction of pain.Neural mobilization (NM) is a type of therapy that purports to relieve adverse neural tension, using methods such as nerve gliding and neural stretching. A systematic review put forth by Ellis and Hing highlighn = 5) and Western blot assays of the PAG, the authors examined the brains of the rats for \u03bc-, \u03b4-, and \u03ba-opioid receptor expression potentially resulting from the neurodynamic intervention. 
The NM protocol adopted by the investigators is as follows: rats in the experimental group were anesthetized and placed on their left sides such that the side affected by the CCI (the right) could be manipulated freely; rats in the sham group were just anaesthetized. With the right knee remaining fully extended throughout the session, the investigators flexed the right hip to 70\u201380\u00b0 (absolute) until the hamstrings produced a light resistance. At this point, the right ankle was dorsiflexed 30\u201345\u00b0 relative to its resting position until a similar resistance (presumably from the gastrocnemius) was detected by the manipulator. Following the establishment of minimal resistance in the manipulated joints, oscillations of the right ankle, wherein the joint was dorsiflexed to 30\u201345\u00b0 repeatedly from resting, were initiated. The oscillations were carried out every other day for two minutes, each at 20 oscillations/min with 25-second pauses between them. A total of 10 sessions were completed. Apart from the experimental group (CCI + NM), four other groups were considered. These included \u201cna\u00efve\u201d control, CCI only, sham, and sham + NM groups. Researchers did not find changes in \u03b4- or \u03bc-opioid receptor expression following the intervention; however, \u03ba-opioid receptor expression underwent a significant, 17% increase. These data indicate that the analgesic effects reported by those treated with neural mobilization may be mediated by EOs that act on \u03ba-opioid receptors, such as dynorphin A and subtypes thereof.Under the hypothesis that the EO system mediates the reversion of neuropathic pain following neurodynamic treatment and an earlier study suggesting that glial cells and neural growth factor may be involved in the analgesia associated with neural mobilization , Santos \u03b2 fibers, which inhibit nociceptive input from A\u03b4 and C afferent fibers [Massage therapy is often sought for both pleasure and therapy. It has been proposed to work through the gate control theory of pain , initialt fibers , 10. Deet fibers . Therefo12 and subcostal strokes, giving rise to \u201csharp, cutting\u201d sensations in the subjects. Following this, 10 minutes was spent treating more local areas of pain specific to each subject using the same stroke length and pressure as applied to the initially treated region. Following the termination of massage treatment and an unspecified storage time on ice , blood samples were centrifuged and processed using an assay designed specifically to detect human \u03b2-endorphin (New England Nuclear \u03b2-endorphin 125I radioimmunoassay). Blood work reportedly indicated a statistical increase in plasma \u03b2-endorphin levels following connective tissue massage, similar to the time course observed in acupuncture and to the magnitude observed during exercise. These results are indicative of a CPM response, which modulates pain through descending inhibition.Connective tissue massage is intended to both decrease pain and increase range of motion . In the or a saline substitute. No indication of exactly how many of the five rats received naloxone versus saline was provided by the authors. Assessments of antinociception consisted of tail-flick latency tests in which the distal portion of the rats' tails was subjected to a current-containing, tungsten wire heated to 75 \u00b1 5\u00b0C. These assessments were performed 10 minutes prior, 10 minutes into, and 20 minutes following the acupressure procedure. Trentini et al. 
[ saline versus those injected with naloxone, indicating possible EO-mediated nociception resulting from acupressure at the acupoint. These results are internally substantiated by the authors' establishment of a statistically greater tail-flick latency in experimental trials as opposed to sham and control trials, which remained at or below baseline for the duration of the assessment.Using naloxone in male and female Sprague-Dawley rats, Trentini et al. suggestei et al. found st\u03b2-endorphin levels were not observed in follow-up research in humans [\u03b2-endorphin levels remain unclear. It is also worth noting that both groups of investigators discussed here applied experimental, sham, and control treatments on the same groups of subjects. It was specified by Fassoulaki et al. [Despite the findings of Trentini et al. , changesn humans followinn humans , Fassouln humans only invi et al. that eaci et al. provided\u03b2-endorphin or \u03b2-lipotropin levels following a 30-minute back massage. The possibility of oxytocinergic mechanisms in massage-like stroking in rats was investigated by Agren et al. [d = 0.89) in plasma oxytocin was observed five minutes after massage, which returned to baseline after 30 minutes. Bello et al. [\u03b2-endorphin in the massage group relative to the control group. Importantly, a larger sample size was incorporated here, relative to the other studies , so the findings of Morhenn et al. [Regular massage, consisting of effleurage and other common techniques, has been well studied, but its effects are still not completely understood. Day et al. were then et al. , who tesn et al. . Particio et al. also obso et al. . Similaro et al. , who comn et al. may be mSince then, two studies have found that massage increases urine concentration of dopamine and serotonin , 74, sugThe longitudinal effects of massage on select catecholamines and neuropeptides have also been investigated. Hart et al. treated Corticotropin releasing factor (CRF) acts on the locus coeruleus and is associated with analgesic responses, possibly due to the role of the locus coeruleus in modulating ascending and descending pain pathways . Therefo or both hands and lower arms. The control group received the same amount of attention but was not touched. Plasma oxytocin levels experienced no statistical changes over the course of therapy in either group; however, a 47.6% decrease in oxytocin in the massage group over the treatment period was observed [In a randomized-controlled trial in women with breast cancer, subjects received ten twenty-minute effleurage massage treatments over three to four weeks. These treatments were applied to either both feet and lower legsobserved .Recently, Tsuji et al. carried and one versus two times per week, for five weeks, on a number of neuroendocrine measures, including arginine vasopressin and oxytocin. In the acute trial, a larger decrease in arginine vasopressin was found relative to the touch group, but no statistical differences were observed for oxytocin [ d effect sizes are presented relative to the touch (control) group. Following the final intervention, the once per week group experienced negligible effects for oxytocin (\u22120.14). The twice per week group, following the final intervention, noted increased levels of oxytocin (0.50). Preceding the final intervention, negligible increases in oxytocin were noted (0.05) for the once per week massage group. 
A large effect was observed in the twice per week group for oxytocin (0.92) following the final intervention [Further longitudinal and acute work has been done by Rapaport et al. and Rapaoxytocin . In the rvention .From the aforementioned studies, it appears clear that oxytocin plays a role in the analgesic response following conventional massage therapy, but the role of other neuropeptides is unclear.Being that the analgesic effects of both human touch and placNearly all types of manual therapy have been shown to elicit a neurophysiological response that is associated with the descending pain modulation circuit; however, it appears that different types of manual therapy work through different mechanisms . For exaFor some therapies, such as manipulation, a minimal amount of force may be required for an analgesic effect , but wheDespite the large popularity and long history of manual therapy, its mechanisms are not truly understood. Understanding these mechanisms may help researchers and clinicians to choose which therapy is most appropriate for each patient or subpopulation and may also lead to more effective therapies in the future."} {"text": "Liver diseases are a worldwide medical problem because the liver is the principal detoxifying organ and maintains metabolic homeostasis. The liver metabolizes various compounds that produce reactive oxygen radicals (ROS). Prooxidants are ROS which can cause tissue liver damage and whose levels may be increased by certain drugs, infection, external exposures, tissue injury, and so forth. Oxidative stress can result from an increase in prooxidant formation or a decrease or deficiency in antioxidants. Molecular redox switches and oxygen sensing by the thiol redox proteome and by NAD/NADP and phosphorylation/dephosphorylation systems are bias involved in signaling, control, and balance redox of a the liver system.\u03b2 (TGF-\u03b2), interleukin-6 (IL-6), and tumor necrosis factor-\u03b1 (TNF-\u03b1).Because of their reactivity, ROS readily interact with all cellular macromolecules. ROS cleave the phosphodiester bonds holding bases in RNA and DNA together, breaking the chain structure of RNA and DNA. Polyunsaturated fatty acids are also a major target for oxidation by ROS, in a process called lipid peroxidation that disrupts normal membrane structure leading to necrosis. In addition, ROS, especially the hydroxyl radical, oxidize the SH group of cysteine residues of proteins to the disulfides or to the sulfoxide or the sulfonic acid; since enzymatic activity depends on cysteine, enzymes are inactivated by ROS. Also oxidative stress contributes to fibrogenesis by increasing harmful cytokines such as transforming grown factor-2O2-dependent oxidation of thiol residues. These transcription factors subsequently activate many genes, some of which code for cellular antioxidants. Thus, low levels of ROS can cover up for high levels of ROS. Endogenous or exogenous antioxidants (mainly from diet) inhibit either formation of ROS or remove/scavenge the generated radicals. 
Due to the central role of oxidative stress on liver disease, this special issue was devoted to its implication in hepatic health and disease.However, ROS are not always the bad guy; important transcription factors such as Nrf2, NF-kB, and AP-1 are activated by H\u03b1-Adrenergic Signaling\u201d) found that the sympathetic nervous system allows oxidative stress to damage the liver, thus suggesting that targeting the hepatic \u03b1-adrenergic signaling may provide a therapeutic approach to fight liver disease. Obesity associated with excessive alcohol consumption produces fatty liver; in this regard. Professor M.-C. Guti\u00e9rrez-Ruiz et al. from Mexico found that the combination of ethanol and cholesterol in vitro produced a potent damage in steatotic hepatocyte. From Italy, Professor A. Galli et al. provided us with a very original review about the evolving concept of oxidative stress in the cellular hepatocyte compartments, highlighting the essential mechanisms of damage caused by free radicals.From Mexico, Dr. J. Camacho et al. reviewed the link of ion channels and oxidative stress in hepatic injury of various etiologies; the main conclusion is that such association may be useful to develop new treatments for the principal liver diseases. The group of Dr. H.-S. Lee et al. from Taiwan . In the review made by Professor R. Hern\u00e1ndez-Mu\u00f1oz et al. from Mexico , the utility of liver enzymes as markers of oxidative stress is challenged. Dr. Li et al., from Taiwan , studied the effect of ulinastatin on liver IRI and graft survival in mice and found that this compound affords significant protection to donor livers from cold IRI, probably by inhibition of proinflammatory cytokine release and modulating apoptosis.The protective effect of dietary curcumin against alcohol induced liver disease and atherosclerosis was reported by the group of Professor M. R. Lakshman et al. from Washington (in \u201cProtective Role of Dietary Curcumin in the Prevention of the Oxidative Stress Induced by Chronic Alcohol with respect to Hepatic Injury and Anti-atherogenic Markers\u201d). Since ischemia/reperfusion (IR) injury is still an unsolved problem in the clinical practice, efforts are being made to prevent it; in this regard, Dr. P. C. P\u00e9rez et al. from Mexico demonstrated that spironolactone reduced liver damage produced by IR by increasing IL-6 production and catalase activity (\u201cSpironolactone Effect in Hepatic Ischemia/Reperfusion Injury in Wistar Rats\u201d). Oligonol, a low molecular weight polyphenol derived from lychee fruit, was reported by Dr. J.-O. Moon et al., from Korea, to ameliorate CClThese authors highlight both the importance of free radicals in the development, establishment and perpetuation of liver disease, and the potential therapeutic effect of compounds that directly or indirectly interfere with the prooxidant process.Hopefully, this publication will provide a benchmark for future investigations evaluating a far greater body of basic and clinical evidence regarding the role of oxidative stress in liver health and disease as well the beneficial effect of antioxidant therapy to prevent or reverse hepatic injury.Pablo MurielPablo MurielKarina R. GordilloKarina R. Gordillo"} {"text": "Chronic exposure to cadmium (Cd), even at low concentrations, has an adverse impact on the skeletal system. 
Histologically, primary and secondary osteons, as basic structural elements of compact bone, can also be affected by several toxicants, leading to changes in bone vascularization and in the mechanical properties of the bone. The current study was designed to investigate the effect of subchronic peroral exposure to Cd on femoral bone structure, including histomorphometry of the osteons, in adult male rats. In our study, 20 one-month-old male Wistar rats were randomly divided into two experimental groups. In the first group, young males received drinking water containing 30 mg of CdCl2/L for 90 days. Ten one-month-old males without Cd intoxication served as a control group. After 90 days of daily peroral exposure, body weight, femoral weight, femoral length, cortical bone thickness and the histological structure of the femora were analysed. We found that subchronic peroral application of Cd had no significant effect on body weight, femoral length or cortical bone thickness in adult rats. On the other hand, femoral weight was significantly increased (P < 0.05) in Cd-intoxicated rats. These rats also displayed a different microstructure in the middle part of the compact bone, where vascular canals expanded into the central area of the substantia compacta and supplied primary and secondary osteons. Additionally, a few resorption lacunae, which are connected with an early stage of osteoporosis, were identified in these individuals. Histomorphometrical evaluations showed that all variables of the primary osteons' vascular canals, Haversian canals and secondary osteons were significantly decreased (P < 0.05) in the Cd group rats. This fact points to alterations in bone vascularization. Subchronic peroral exposure to Cd thus significantly influences femoral weight and the histological structure of compact bone in adult male rats. It induces an early stage of osteoporosis and causes reduced bone vascularization. Histomorphometrical changes of primary and secondary osteons allow for the conclusion that bone mechanical properties could be weakened in the Cd group rats. The current study significantly expands the knowledge of the damaging action of Cd on bone. Cadmium (Cd) is considered one of the most toxic heavy metals, and bone is one of the target organs for Cd toxicity; adverse effects of Cd on bone have been reported by various authors. The damaging action of Cd on the rat skeleton during chronic exposure, and its possible mechanisms, have been extensively studied and reported by other researchers; however, a detailed histological analysis of compact bone, including histomorphometrical evaluations of primary and secondary osteons after subchronic peroral administration of Cd in rats, had not been done prior to our experiment. In general, individual osteon morphology reflects changes in the formation and resorption of bone; thus, histomorphometrical analyses may give important information about the reorganization process in the bone, also in stress situations such as intoxication with various xenobiotics. Therefore, the aim of the present study was to analyse changes in the macroscopical structure of femoral bones, and also in the qualitative and quantitative histological characteristics of compact bone, in rats subchronically exposed to Cd in their drinking water. In the case of potential alterations, the hypothesis of toxic effects of Cd on femoral bone structure, mechanical properties and vascularization of the investigated bones will be described. The animals had access to food and water ad libitum.
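As a rough, back-of-the-envelope check on the dosing regimen described above, the short sketch below converts the 30 mg CdCl2/L drinking-water concentration into an approximate elemental-Cd intake. The molar masses are standard values; the daily water intake and body weight are hypothetical placeholders (the measured consumption data are not reproduced here), so the printed dose is illustrative only.

```python
# Convert a CdCl2 drinking-water concentration into an approximate elemental Cd dose.
M_CD = 112.41                     # g/mol, cadmium
M_CL = 35.45                      # g/mol, chlorine
M_CDCL2 = M_CD + 2 * M_CL         # ~183.3 g/mol for CdCl2

cdcl2_mg_per_l = 30.0                       # concentration used in the study
cd_fraction = M_CD / M_CDCL2                # ~0.61 of the salt mass is cadmium
cd_mg_per_l = cdcl2_mg_per_l * cd_fraction  # ~18.4 mg elemental Cd per litre

# Hypothetical example values, NOT the measured data from this experiment:
water_intake_l_per_day = 0.030    # 30 mL/day, a plausible order of magnitude for a rat
body_weight_kg = 0.25

daily_dose_mg_per_kg = cd_mg_per_l * water_intake_l_per_day / body_weight_kg
print(f"~{cd_mg_per_l:.1f} mg Cd/L, ~{daily_dose_mg_per_kg:.2f} mg Cd/kg/day")
```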
We used cages Tecniplast 2154\u00a0F which comply with the Government Regulation No. 23/2009. The cage disposed dimensions 480\u2009\u00d7\u2009265\u2009\u00d7\u2009210\u00a0mm, with a floor area 940\u00a0cm2. We used wood chips as bedding material with paper rolls which enriched the living-space of animals. The clinically healthy experimental animals were randomly divided into two groups, of ten animals each. In the first group (Cd group), young males were dosed with a daily Cd intake of 30\u00a0mg CdCl2/L in drinking water for 90\u00a0days. The second group without Cd intoxication served as a control. The water consumption was daily monitored during the whole experiment. Since Cd absorption from the gastrointestinal tract in rats is lower than in humans [2/L in drinking water; chosen on the basis of studied literature and our previous experiments with tested dose\u2013response effects) was high enough to reach a toxicity level but also safe enough to prevent animal mortality [Our experiment was conducted on 20 one-month-old male Wistar rats obtained from the accredited experimental laboratory (number SK PC 50004) of the Slovak University of Agriculture in Nitra. Generally, the laboratory rat is the preferred animal for most researchers. Its skeleton has been studied extensively in various experimental protocols leading to bone loss, including hormonal interventions, immobilization, and dietary manipulations. Although there are several limitations to its similarity to the human condition, these can be overcome through detailed knowledge of its specific traits or with certain techniques . We usedn humans . The conal dose) ,20-22. Tal dose) .All procedures were approved by the Animal Experimental Committee of the Slovak Republic.et al. [et al. [At the end of 90\u00a0days, all rats were euthanized, weighed and their femurs were used for macroscopical and microscopical analyses. The right femurs were weighed on analytical scales with an accuracy of 0.01\u00a0g and the femoral length was measured with a sliding instrument. For histological analyses, the right femurs were sectioned at the midshaft of the diaphysis and the segments were fixed in HistoChoice fixative . The segments were then dehydrated in increasing grades (40 to 100%) of ethanol and embedded in Biodur epoxy resin according to the method described by Martiniakov\u00e1 et al. . Transveet al. . The quaet al. and Ricq [et al. , who claT-test was used for establishing statistical significance (P\u2009<\u20090.05) between Cd and control groups of rats.Statistical analysis was performed using SPSS 8.0 software. All data were expressed as mean\u2009\u00b1\u2009standard deviation (SD). The unpaired Student\u2019s P\u2009<\u20090.05) in Cd-intoxicated rats of the thin sections. This type of bone tissue contained cellular lamellae and osteocytes without occurrence of primary and/or secondary osteons. Areas of primary vascular radial bone tissue were also identified in anterior, posterior and lateral views. Some primary and secondary osteons were exceptionally found in anterior and posterior views near endosteal surfaces. The occurrence of the osteons (primary and secondary) was also identified in middle parts of s Figure\u00a0.Figure 1The rats exposed to Cd displayed a similar microarchitecture to that of the control rats, except for the middle part of the compact bone in the medial and lateral views. 
In these views, primary vascular radial bone tissue occurred because vascular canals expanded from endosteal border into central area of the bone and supplanted primary and secondary osteons. Therefore, a smaller number of the osteons was observed in Cd-intoxicated rats. In these rats, we identified a few resorption lacunae near endosteal surfaces which are connected with an early stage of osteoporosis Figure\u00a0.Figure 2P\u2009<\u20090.05) in rats from the Cd group as compared to those of the control one.For the quantitative histological characteristics, 841 vascular canals of primary osteons, 435 Haversian canals and 435 secondary osteons were measured in total. The results are summarized in Table\u00a0et al. [2+ with Cd2+ in many enzymes and transcription factors may induce aberrant gene expression, resulting in the stimulation of cell proliferation. Rat experiments indicate that oral ingestion of Cd through drinking water leads to an accumulation of Cd into the bone as it has also been observed in our study (Cd concentrations in the control and Cd groups were 0.92\u2009\u00b1\u20090.21\u00a0mg/kg dry weight (dw), 1.33\u2009\u00b1\u20090.18\u00a0mg/kg dw, respectively). In accordance with our results, Cd-induced increase in a weight of liver, kidney and spleen has also been mentioned in rats subcutaneously exposed to 2.0\u00a0mg CdCl2/kg (three days/week or daily) for 28 or 21\u00a0days, respectively [et al. [Cadmium is an environmental pollutant that causes serious toxicity in humans and animals . As a reet al. , the repectively ,28. In cWe have found no significant differences for body weight and femoral length between the both groups of rats. Similarly, no demonstrable changes in the body weight ,33 and fet al. [et al. [In general, the thickness of cortical bone is an important parameter in the evaluation of cortical bone quality and strength. According to Garn et al. , cortica [et al. , corticaet al. [Our findings from the qualitative histological analysis of compact bone in rats from the control group correspond with those reported by other researches -38. The et al. for thirsubstantia compacta in rats from the Cd group, where primary vascular radial bone tissue was found. Formation of this type of bone tissue may be explained as an adaptive response of the bone to Cd toxicity in order to protect bone tissue against death of cells and subsequent necrosis. It is known that Cd induces apoptosis in osteoblast-like cells [et al. [On the other hand, prolonged intake of moderate dose of Cd induced changes in the middle part of ke cells -43, osteke cells ,45 and hke cells . Disappe [et al. for ovarin vitro [Additionally, we identified a few resorption lacunae near endosteal surfaces in Cd-exposed rats which signalize an early stage of osteoporosis or a presence of an inflammatory process. However, the inflammatory process is characterized by newly built bone formation on periosteal surface which hain vitro . It alsoin vitro which arin vitro ,56.et al. [et al. [et al. [et al. [et al. [The vascular canal constriction identified in rats from the Cd group could be associated with structural changes of blood vessels due to toxic effect of Cd. Each Haversian canal contains blood vessels and nerves supported by loose connective tissue . Blood vet al. , mechaniet al. -63 showeet al. . On the et al. . The impet al. . Accordi [et al. and Stra [et al. , angioge [et al. found th [et al. . The res [et al. demonstret al. 
All variables of the secondary osteons also had lower values in Cd-intoxicated rats. Generally, it is known that heavy metals (including Cd) are adsorbed and stored within bones. Our study demonstrates that subchronic exposure to Cd had a significant impact on femoral weight and compact bone microstructure in adult male rats. The histomorphometrical changes were identified at the level of the primary osteons' vascular canals, the Haversian canals and the secondary osteons. Due to the increasing trend of environmental Cd contamination and human exposure, our findings are of high importance and could also have practical implications for humans. Our results with a moderate dose of Cd in rats, reflecting possible Cd exposure in humans under specific conditions, give a real indication that the same changes in bone microstructure may also occur in exposed humans. Environmental Cd contamination, as well as improper living habits such as cigarette smoking, can lead to damage and diseases of bone. Identifying environmental factors (such as Cd) and the possible mechanisms that contribute to bone loss, decreased bone vascularization and weakened mechanical bone properties is the first step towards the prevention of skeletal damage in humans. The current study revealed that subchronic peroral administration of 30 mg CdCl2/L in drinking water for 90 days has a significant impact on femoral weight and on both the qualitative and quantitative histological characteristics of compact bone in adult male rats. It induces an early stage of osteoporosis, alterations in bone vascularization and potentially diminished bone mechanical properties. Our study can contribute to creating a comprehensive, novel insight into bone toxicology in experimental animals."} {"text": "Extracorporeal shock wave therapy has been reported as an effective treatment for lower limb ulceration. The aim of this systematic review was to investigate the effectiveness of extracorporeal shock wave therapy for the treatment of lower limb ulceration. Five electronic databases and reference lists from relevant studies were searched in December 2013. All study designs, with the exception of case reports, were eligible for inclusion in this review. Assessment of each study's methodological quality was performed using the Quality Index tool. The effectiveness of studies was measured by calculating effect sizes (Cohen's d) from means and standard deviations. Improvements in wound healing were identified in these studies following extracorporeal shock wave therapy. The majority of wounds assessed were associated with diabetes, and the effectiveness of ESWT as an addition to standard care has only been assessed in one randomised controlled trial. Considering the limited evidence identified, further research is needed to support the use of extracorporeal shock wave therapy in the treatment of lower limb ulceration. The online version of this article (doi:10.1186/s13047-014-0059-0) contains supplementary material, which is available to authorized users. Lower limb ulceration is reported as a common problem world-wide, and is considered a major social and economic burden.
The use of extracorporeal shock waves in medicine was first reported over 30 years ago as a treatment for kidney stones. Despite the reported success of ESWT for the treatment of lower limb ulceration, the quality of evidence investigating the effectiveness of this intervention has not been reviewed in detail. Therefore, the aim of this review was to investigate the effectiveness of ESWT for the treatment of lower limb ulceration. All studies included in this review were obtained from English-language peer-reviewed scientific journals investigating the effectiveness of ESWT for lower limb ulceration. All study designs, with the exception of case reports, were eligible for inclusion in this review. Letters to the editor, opinion pieces and editorials were also excluded. Studies were included if the use of ESWT for the treatment of lower limb ulceration was assessed. The categories of ulcers included in this review were those of neurovascular origin. Studies where the participant's ulcer was associated with pressure sores, burns or surgical complications were excluded. In December 2013 an electronic database search was conducted using Medical Subject Headings (MeSH), followed by a keyword search strategy. Auto-alerts were developed to provide updates on recent publications until the review was finalised (March 2014). The following databases were searched: Ovid MEDLINE (1966 to date), CINAHL (1982 to date), Web of Knowledge, Scopus and Ovid AMED (from inception). The database search strategy is presented in an accompanying table. Upon completion of the search (March 2014), a hand search was performed of the references from the studies identified in the electronic search, and Google Scholar was searched in an attempt to identify any further material. Two reviewers (PAB and TPW) then independently reviewed titles and abstracts according to the pre-determined inclusion criteria. Discrepancies between reviewers regarding eligibility were discussed until consensus was reached. Progression to full-text review was then permitted. A predefined data extraction form was used in the extraction process. Data (including p values) were extracted from studies by two investigators (PAB and TPW), with specific attention to the following variables: study design, participant numbers, mean age, sex, ulcer classification, change in healing and ulcer size, and the ESWT protocol used. The data pertaining to each study were then assigned a numerical value to ensure that the two investigators (PAB and TPW) were blinded to author and publication details during quality assessment. Where disagreements occurred during the quality assessment process, a third assessor (YDP) made the final decision on quality assessment scores. Assessment of each study's methodological quality was performed using the Quality Index tool developed by Downs and Black, which has been reported to have high test-retest reliability (r = 0.88) and good inter-rater reliability (r = 0.75). The Quality Index tool consists of 27 items, and allows for assessment of internal and external validity, reporting and power. Where studies provided sufficient statistical data, effect size (Cohen's d) was calculated from means and standard deviations. Effect sizes were categorized as follows: negligible effect (≥ −0.15 and <0.15); small effect (≥0.15 and <0.40); medium effect (≥0.40 and <0.75); large effect (≥0.75 and <1.10); very large effect (≥1.10 and <1.45); and huge effect (≥1.45).
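As an illustration of the effect-size calculation and categorisation described above, the sketch below computes Cohen's d from group means and standard deviations and assigns it to one of the categories listed in the review. The pooled-standard-deviation form of d is an assumption on our part (the review only states that d was calculated from means and standard deviations), and the numerical inputs are hypothetical rather than data from any included study.

```python
import math

def cohens_d(mean_1, sd_1, n_1, mean_2, sd_2, n_2):
    # Pooled-standard-deviation form of Cohen's d (assumed variant).
    pooled_sd = math.sqrt(((n_1 - 1) * sd_1 ** 2 + (n_2 - 1) * sd_2 ** 2) / (n_1 + n_2 - 2))
    return (mean_1 - mean_2) / pooled_sd

# Category boundaries exactly as listed in the review (lower bound inclusive).
CATEGORIES = [
    (-0.15, 0.15, "negligible"),
    (0.15, 0.40, "small"),
    (0.40, 0.75, "medium"),
    (0.75, 1.10, "large"),
    (1.10, 1.45, "very large"),
    (1.45, float("inf"), "huge"),
]

def categorise(d):
    for lower, upper, label in CATEGORIES:
        if lower <= d < upper:
            return label
    # The review does not define a label for values below -0.15.
    return "outside the categories defined in the review"

# Hypothetical example values (not taken from any of the included studies):
d = cohens_d(mean_1=4.2, sd_1=1.1, n_1=20, mean_2=3.1, sd_2=1.3, n_2=18)
print(round(d, 2), categorise(d))
```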
This toa priori decision was made to remove two items where they did not apply to the respective studies identified. Firstly, Item 25 was removed for non-randomised controlled studies, as it has been shown that case mix adjustment cannot reduce the extent of bias in non-randomised trials [For this systematic review, an d trials . SecondlWe chose to present the quality assessment results as percentage scores, which is typical of previous studies using the Quality Index tool ,23-24. FA total of 555 results were identified through our electronic search .Table\u00a0et al. found thet al. [Saggini et al. treated et al. [p\u2009=\u20090.001. Furthermore, greater than 50% improvement of the ulcer was observed in 89% of participants in the ESWT group and 72% of participants in the HBO group (p\u2009<\u20090.001). In their second study comparing ESWT and HBO, Wang et al. [p\u2009=\u20090.003); \u2265 50% improved ulcers in 32% and 15% (p\u2009=\u20090.071), and unchanged ulcers in 11% and 60% (p\u2009<\u20090.001) respectively.Wang et al. found thg et al. found: cet al. [p\u2009=\u20090.001). Furthermore, arterial insufficiency ulcers did not completely heal in 33% of cases, the second worst healing rate of all ulcer types. The primary outcome assessed in their study was the safety and feasibility of using ESWT on wounds, the authors concluding that ESWT is a safe and effective treatment.Schaden et al. found thThe aim of this systematic review was to investigate the effectiveness of ESWT for the treatment of lower limb ulcers. We evaluated five studies in this review, and identified a trend to suggest that ESWT may be effective in improving wound healing and decreasing wound size. Furthermore, ESWT may also be a safe treatment option with few complications associated with its use, however; we found average study quality for the studies identified. External validity across studies was rated most poorly, due to deficient definitions of the source population and methods of patient selection, and poor identification of confounding factors. It is difficult therefore, to generalise the findings of the studies to the populations from which the study participants were derived. Furthermore, it is unknown whether participants were representative of the population from which they were recruited. As such, all five studies performed poorly on the external validity questions, scoring a mean of only 27% for questions 11 to 13 on the quality Index tool.et al. [et al. [All studies also rated poorly on the internal validity component of the Quality Index (questions 14 to 26). For example, Moretti [et al. , the autet al. [et al. [et al. [et al. [et al. [et al. [et al. [The internal validity of the studies identified may also have been threatened due to a loss of participants during the study. While Moretti et al. and Sagg [et al. suggeste [et al. describe [et al. describe [et al. ,29, grou [et al. made an [et al. were notet al. [2. Although Moretti et al. [et al. [et al. [et al. [et al. [The classification of ulceration varied across the five studies identified in this review. Moretti et al. defined i et al. made an [et al. included [et al. defined [et al. ,29 inclu [et al. ,29, and [et al. , there wet al. [et al. [et al. [et al. [The ESWT protocol varied between studies resulting in study heterogeneity, making comparison difficult. Specifically, there were differences in the duration, frequency and strength of ESWT application identified between studies. 
Furthermore, these differences were also noticeable within studies. For example, a repeat course of treatment was performed in cases with incomplete healing from the first course of treatment in the study by Wang et al. . Moreove [et al. and Scha [et al. was depe [et al. , particiThis systematic review has identified a number of important implications for future research. Firstly, to reduce bias it is essential that when evaluating the effectiveness of ESWT for the treatment of lower limb ulceration, that rigorous randomised controlled trial (RCT) methods are used ,31. SecoThe existing evidence that supports the use of ESWT for treatment of lower limb ulceration therefore needs to be viewed in light of some limitations. Firstly, there were only two studies (one of which was an RCT) that investigated the effect of ESWT versus standard treatment, and there were small participant numbers in the studies identified. Secondly, this review identified significant methodological heterogeneity between studies. For example; one of the studies in this review included smokers and also assessed ulceration associated with multiple comorbidities , whereasThis systematic review identified five studies that reported on the effectiveness of ESWT for the treatment of lower limb ulcers. There is limited evidence to support ESWT as a treatment for lower limb ulceration. Considering this, further research is needed to support the use of ESWT in the treatment of lower limb ulceration."} {"text": "In recent years there have been monumental and exciting advances in the treatment of retinal disease which made a special issue on advances in retinal therapeutics relevant and interesting. Indeed, in this issue there are 11 papers reporting either original basic or clinical research findings or reviewing the literature and reporting recent developments.The papers in this issue are on a wide range of topics. J. He et al. present original basic research in elucidating the effects of VEGF receptor 1 blockade on diabetic retinopathy while M. L. Perepechaeva et al. present original basic research identifying a potential novel target for the treatment of age-related macular degeneration in a rodent animal model of the disease. O. Chrapek et al. present original clinical research identifying occult CNVM and lesion size less than 5 disc diameters as baseline characteristics predicting inactivity after 3 injections of an intravitreous anti-VEGF agent (ranibizumab) for neovascular age-related macular degeneration (nARMD). S. M. Prea et al. review endogenous inhibitors of nARMD as well as the principles of gene therapy that could use such inhibitors to treat nARMD clinically. There are 2 papers discussing advances in retinal laser treatment\u2014one by Y. G. Park et al. discussing developments on laser photocoagulation for diabetic macular edema and another by K. Inagaki et al. discussing micropulse laser for persistent macular edema from branch retinal vein occlusion in eyes which includes eyes with good visual acuity at baseline. Two additional papers address vitreoretinal surgery topics: one by Dr. C. Pournaras et al. that reports on the outcomes of repair of recurrent retinal detachment, including presenting visual outcomes which are not uncommonly omitted in prior publications and the other paper by A. Garc\u00eda-Layana et al. that reviews the current treatment of vitreomacular traction and macular hole. The paper by M. Harrell and P. E. 
Carvounis is an up-to-date evidence-based review of the treatments of toxoplasma retinochoroiditis which takes into account evidence published within the last 2 years when the previous evidence-based review was published. Finally, the paper by M. Cabrera et al. reviews the 3 sustained-release corticosteroids available for the treatment of retinal disease.The 54 authors and coauthors should be commended for the quality of their manuscripts; the science as well as the writing is of high caliber. It is our belief that the readership of the Journal of Ophthalmology will enjoy reading the papers in this special issue as much as we did.Petros E. CarvounisPetros E. CarvounisThomas A. AlbiniThomas A. AlbiniAndrew J. BarkmeierAndrew J. BarkmeierMiltiadis TsilimbarisMiltiadis Tsilimbaris"} {"text": "Sit's name was originally spelled incorrectly, and Dr. Ng's degree was listed incorrectly. The article has since been corrected online.In the article \u201cAcupuncture and Related Therapies for Symptom Management in Palliative Cancer Care: Systematic Review and Meta-Analysis\u201d,"} {"text": "Root canal irrigants play a significant role in elimination of the microorganisms, tissue remnants, and removal of the debris and smear layer. No single solution is able to fulfill all these actions completely; therefore, a combination of irrigants may be required. The aim of this investigation was to review the agonistic and antagonistic interactions between chlorhexidine (CHX) and other irrigants and medicaments. An English-limited Medline search was performed for articles published from 2002 to 2014. The searched keywords included: chlorhexidine AND sodium hypochlorite/ethylenediaminetetraacetic acid/calcium hydroxide/mineral trioxide aggregate. Subsequently, a hand search was carried out on the references of result articles to find more matching papers. Findings showed that the combination of CHX and sodium hypochlorite (NaOCl) causes color changes and the formation of a neutral and insoluble precipitate; CHX forms a salt with ethylenediaminetetraacetic acid (EDTA). In addition, it has been demonstrated that the alkalinity of calcium hydroxide (CH) remained unchanged after mixing with CHX. Furthermore, mixing CHX with CH may enhance its antimicrobial activity; also mixing mineral trioxide aggregate (MTA) powder with CHX increases its antimicrobial activity but this may negatively affect its mechanical properties. Chlorhexidine (CHX) is a synthetic cationic bisguanide consisting of two symmetric 4-cholorophenyl rings and two biguanide groups, connected by a central hexamethylene chain . It is uThis increases the permeability of the cell wall, which allows the CHX molecule to penetrate into bacteria. CHX is a base and is stable in the salt form. The most common oral preparation, CHX gluconate, is water-soluble and at physiologic pH level it readily dissociates and releases the positively charged CHX component . At low Different endodontic agents used as intracanal medicaments or irrigants may interact with CHX. For instance, interaction of sodium hypochlorite (NaOCl) mixed with chlorhexidine (CHX) produces a brown precipitate containing para-chloroaniline (PCA) which is not only toxic but also interferes with canal sealing . RegardiExposure to CHX decreases the push-out bond strength of mineral trioxide aggregate (MTA) to dentin . 
HoweverChelating agents such as ethylenediaminetetraacetic acid (EDTA) can interact with CHX, as it is shown that subsequent use of CHX, NaOCl and EDTA can cause a color change in dentine .The aim of the present critical review is to determine the interaction between CHX and different endodontic agents according to the results of previous studies published from 2002 to 2014.Retrieval of literatureAn English-limited Medline search was performed through the articles published from 2002 to 2014. The searched keywords included \u201cchlorhexidine AND sodium hypochlorite (NaOCl)\u201d, \u201cchlorhexidine AND ethylenediaminetetraacetic acid (EDTA)\u201d, \u201cchlorhexidine AND calcium hydroxide (CH)\u201d, and \u201cchlorhexidine AND mineral trioxide aggregate (MTA)\". Then, a hand search was done in the references of collected articles to find more matching papers.A total of 1095 articles were found which in order of their related keywords are \u201c567-chlorhexidine AND NaOCl\u201d, \u201c255-chlorhexidine AND EDTA\u201d, \u201c252-chlorhexidine AND CH\u201d and \u201c21-chlorhexidine AND MTA\".Combination of irrigants and medicaments with CHXDespite having important useful properties, CHX possesses some drawbacks as well. One of its important drawbacks is lacking tissue solubility, which is one of the most important properties of a standard irrigation solution . Therefoa) irrigation with NaOCl to dissolve the organic components, b) irrigation with EDTA to assist in the elimination of the smear layer and finally, c) irrigation with CHX to increase the anti-microbial spectrum of activity and impart substantivity.A suggested clinical protocol by Zehnder for dentEnterococcus faecalis and Candida albicans are the main causes of treatment failure, CH is ineffective against these two species [Studies have demonstrated that in retreatment cases where microbial species like species . TherefoInteraction between CHX and NaOClThe combined use of NaOCl and CHX has been advocated to enhance their antimicrobial properties. In other words, a final rinse with CHX offers the advantage of substantivity (due to its affinity to dentine hydroxyl apatite) which prolongs the antimicrobial activity of CHX . HoweverA study used electrospray ionization quadrupole time-of-flight mass spectrometry (ESI-QTOF-MS) analyses to investigate the byproducts formed after combination of CHX and NaOCl . Findinget al. [et al. [It seem that the oxidizing activity of NaOCl causes chlorination of the guanidino nitrogens of the CHX . Basraniet al. detectedet al. , 18, 19.et al. . Some coet al. . Basrani [et al. tested c [et al. .et al. [et al. [Studies have been undertaken to elucidate the chemical composition of the flocculate produced by the association of NaOCl with CHX -21. Marcet al. combined [et al. , showed et al. [et al. [et al. [et al. [Krishnamurthy and Sudhakaran detectedet al. showed tet al. . Findinget al. . Using S [et al. assessed [et al. compared [et al. showed t [et al. .In summary, the combination of NaOCl and CHX causes color changes and the formation of a neutral and insoluble precipitate, which may interfere with the seal of the root filling. Therefore, drying the canal with paper points before the final CHX rinse is suggested.Interaction between CHX and chelatorsin vitro study on bovine dentin slices using atomic absorption spectrophotometry, Gonzalez-Lopez et al. [In an z et al. assessedz et al. .et al. [et al. [et al. [et al. [et al. [Akisue et al. showed t [et al. and Rasi [et al. demonstr [et al. 
showed t [et al. showed t [et al. .In summary, CHX forms a salt with EDTA rather than undergoing a chemical reaction.Interaction between CHX and CHThe optimal antimicrobial activity of CHX is achieved within a pH range of 5.5 to 7.0 . TherefoE. faecalis harbored within the dentinal tubules [et al. [E. faecalis from the dentinal tubules; a 1% gel CHX worked slightly better than the other preparations. These findings were corroborated by Gomes et al. [. faecalis, followed by liquid CHX and CH and then CH used alone. Using agar diffusion test, Haenni et al. [et al. [E. faecalis inside human dentinal tubules, followed by a CH mixed with 2% CHX, whilst CH alone was totally ineffective, even after 30 days. The 2% gel CHX was also significantly more effective than the CH and 2% CHX mixture against C. albicans at seven days, although there was no significant difference at 15 and 30 days. CH alone was completely ineffective against C. albicans. In another study on primary teeth, 1% CHX gluconate gel with and without CH, was more effective against E. faecalis than CH alone within a 48-h period \u00a0\u00a0\u00a0\u00a0\u00a0[When used as an intracanal medicament, CHX was more effective than CH in eliminating tubules , 30. In [et al. , all of s et al. in bovins et al. in humani et al. could noi et al. . This mai et al. . Ercan e [et al. showed tiod \u00a0\u00a0\u00a0\u00a0\u00a0.E. faecalis than CH used alone, or a mixture of the two. This was also confirmed by Lin et al. [et al. [et al. [et al. [C. albicans than saturated CH, while CH combined with CHX was more effective than CH used alone.Schafer and Bossmann reportedn et al. . In a st [et al. using bo [et al. reported [et al. reportedIn summary, combined use of CHX and CH in the root canal may generate excessive reactive oxygen species, which may potentially kill various root canal pathogens. In addition, it has been demonstrated that the alkalinity of CH when mixed with CHX remained unchanged. Furthermore, mixing CHX with CH may enhance its antimicrobial activity.Interaction between CHX and MTAMTA is marketed in gray and white colored preparations: both contain 75% Portland cement, 20% bismuth oxide and 5% gypsum by weight. MTA is a hydrophilic powder which requires moisture for setting. Traditionally, MTA powder is mixed with supplied sterile water in a 3:1 powder/liquid ratio. Different liquids have been suggested for mixing with the MTA powder such as lidocaine anesthetic solution, NaOCl and CHX .et al. [et al. [et al. [Stowe et al. determin [et al. . Hernand [et al. comparedet al. [et al. [in vitro. Kogan et al. [et al. [et al. [Cell cycle analysis showed that exposure to MTA/CHX decreased the percentage of fibroblasts and macrophages in S phase (DNA synthesis) as compared with exposure to MTA/water. On the other hand, Sumer et al. examined [et al. found thn et al. found th [et al. found th [et al. evaluateOverall, it can be concluded that mixing MTA powder with CHX increases its antimicrobial activity but may have a negative effect on its mechanical properties."} {"text": "Bioconversion of lignocellulosic biomass to bioethanol has shown environmental, economic and energetic advantages in comparison to bioethanol produced from sugar or starch. However, the pretreatment process for increasing the enzymatic accessibility and improving the digestibility of cellulose is hindered by many physical-chemical, structural and compositional factors, which make these materials difficult to be used as feedstocks for ethanol production. 
A wide range of pretreatment methods has been developed to alter or remove structural and compositional impediments to (enzymatic) hydrolysis over the last few decades; however, only a few of them can be used at commercial scale due to economic feasibility. This paper will give an overview of extrusion pretreatment for bioethanol production with a special focus on twin-screw extruders. An economic assessment of this pretreatment is also discussed to determine its feasibility for future industrial cellulosic ethanol plant designs. Biomassstimated . Lignocestimated .Lignocellulosic biomass can be converted into fermentable sugars for fermentative ethanol production. However, this bioconversion is further complicated due to recalcitrance caused by the association of cellulose, hemicelluloses and lignin in the biomass . CelluloA large number of pretreatment methods have been proposed generally on a wide variety of lignocellulosic biomasses for bioethanol production since different feedstocks have different physical-chemical characteristics. These pretreatment methods are usually divided into physical, chemical, physical-chemical and biological, such as steam explosion , dilute Extrusion is defined as an operation of creating objects of a fixed, cross-sectional profile by forcing them through a die of the desired cross-section. The material will experience an expansion when it exits the die. The extrusion process has been expanded as one of the physical continuous pretreatment methods towards bioethanol production due to its significant improvements of sugar recovery from different biomass feedstocks. Extrusion pretreatment has some advantages over other pretreatments: (1) low cost and provides better process monitoring and control of all variables ; (2) no A screw extruder is based around screw elements, including (1) forward screw elements, which principally transport bulk material with different pitches and lengths with the least degree of mixing and shearing effect; (2) kneading screw elements, which primarily exert a significant mixing and shearing effect with different stagger angles in combination with a weak forward conveying effect; and (3) reverse screw elements, designed with a reverse flight to push the material backward, which carries out extensive mixing and shearing effects . The arret al. [Different types of extruders, such as single-screw extruders and twin-screw extruders, have been widely examined for different lignocellulosic biomass, resulting in subsequently high enzymatic hydrolysis rates. The extrusion pretreatment process can be used as a physical pretreatment for the bioconversion of biomass to ethanol production; it also can be conducted in a large number of systems with or without the addition of chemical solutions. Karunanithy and Muthukumarappan ,30 reporet al. evaluateLignocellulosic biomass can be treated with chemical solutions such as acid and alkali during the extrusion process ,34,35; hi.e., kneading disks [The screw extruder is a well known technology in the production, compounding, and processing of plastics; it also can be used in food processing industries, such as pet food, cereals and bread. The single screw extrusion process consists of an Archimedean screw in a fixed barrel. It can be classified as a smooth barrel, grooved and/or pin barrel screw extruder. Both are employed when melting and pressure build up are required. However, the mixing ability of single screw extruders is limited to distributive mixing and dispersive mixing . 
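To put hydrolysis figures such as the 44.4% cellulose-to-sugar conversion quoted above into context, the sketch below applies standard stoichiometric factors for the cellulose to glucose to ethanol chain. The conversion factors are textbook values; the feed mass, cellulose fraction and fermentation efficiency in the example are hypothetical placeholders rather than data from any of the cited studies.

```python
# Theoretical ethanol yield from the cellulose fraction of a biomass feed.
GLUCOSE_PER_CELLULOSE = 180.16 / 162.14   # ~1.111, water is added on hydrolysis
ETHANOL_PER_GLUCOSE = 92.14 / 180.16      # ~0.511 g ethanol per g glucose (2 EtOH + 2 CO2 per glucose)

def ethanol_from_cellulose(biomass_kg, cellulose_fraction, hydrolysis_conversion, fermentation_efficiency):
    """Return kg of ethanol, counting only the cellulose fraction of the feed."""
    cellulose_kg = biomass_kg * cellulose_fraction
    glucose_kg = cellulose_kg * GLUCOSE_PER_CELLULOSE * hydrolysis_conversion
    return glucose_kg * ETHANOL_PER_GLUCOSE * fermentation_efficiency

# Hypothetical example: 1000 kg of feed, 40% cellulose, the 44.4% hydrolysis
# conversion mentioned in the text, and an assumed 90% fermentation efficiency.
print(round(ethanol_from_cellulose(1000, 0.40, 0.444, 0.90), 1), "kg ethanol")
```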
These cng disks .i.e., co-rotating or counter-rotating for which the screws rotate in either similar or opposite directions, respectively. Twin-screw extruders can be further subdivided into fully, partial or non-intermeshing based on the relative position of the screws [Twin-screw extruders consist of two parallel screws with the same length placed in a stationary barrel section. Twin-Screw extruders can be classified according to their direction of screw rotation, e screws . In conte screws . Therefoe screws .et al. [et al. [et al. [et al. [Screw design strongly influences work done on the material and amount of shear force generated during extrusion processes such as compression ratio, screw speed and barrel temperature. Karunanithy and Muthukumarappan conducteet al. investiget al. evaluate [et al. investig [et al. . Karunan [et al. optimize [et al. also eva [et al. investig [et al. conducteFibrillation of Douglas-fir was performed using water with mechanical kneading forces instead of chemicals for biomass pretreatment in a batch type kneader with twin screw elements. Douglas-fir was milled in a ball milling for 20 min and then kneaded at 40 \u00b0C at 90 rpm in a batch-type kneader by adding water for 30 min. The results showed the surface area of cellulose was increased and the glucose yield from the fibrillated products by enzymatic hydrolysis was 54.2%, much lower than the extrusion process with chemicals .2SO4). Consequently, 38.2% of dry sawdust solids were converted to soluble liquids and 44.4% of cellulose was converted to soluble monomer sugars and oligosaccharides [Extruders can be used as an acid hydrolysis reactor. Acid pretreatment is effective for converting cellulose and hemicelluloses into monomeric sugars. For example, an extruder type reactor was used for dilute acid hydrolysis for municipal solid wastes and the optimal glucose yields reached 60% at temperatures of 230 \u00b0C, pressures of 30\u201332 atm, pH values of 0.50 and reaction times of 8\u201315 s . A twin charides . Later tcharides . The twicharides .2S). Similarly, Zhang et al. [et al. [et al. [et al. [Alkali pretreatment can be performed at a lower temperature and pressure compared to other chemical pretreatment methods. The process in the extruder also does not cause as much sugar degradation . Carr ang et al. also evag et al. also carg et al. evaluate [et al. conducte [et al. performe [et al. employed [et al. .et al. [et al. [et al. [Biomass extrusion can be utilized as a stand-alone pretreatment method, or in combination or sequence with other pretreatment techniques. Lee et al. evaluateet al. . Zheng e [et al. evaluate [et al. investig [et al. . The ext [et al. .et al. [Many pretreatment technologies of lignocellulosic biomass have been studied to improve ethanol yield . Thermo-et al. carried et al. [et al. [Yoo et al. analyzedet al. . In the [et al. report t [et al. .et al., 2010 [et al., 2011 [The underlying assumptions of each economical model have to be carefully evaluated when comparing different pretreatment techniques, as can be illustrated by the different assumptions for the capital costs of a diluted acid process of 376 MM by Kazi l., 2010 and 191 l., 2011 made in l., 2011 . The Purl., 2011 .The main purpose of pretreatment is to remove hemicelluloses and lignin, to increase the accessible surface area for enzymes and to descrystallize cellulose. The advantages of extrusion pretreatment technologies have been listed and discussed above. 
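To make the link between pretreatment performance and process economics concrete, the short sketch below converts an assumed feedstock composition and an assumed post-extrusion enzymatic hydrolysis yield into an ethanol mass. Only the stoichiometric factors (about 1.111 g glucose per g glucan on hydrolysis and about 0.511 g ethanol per g glucose on fermentation) are standard; every other input value is an illustrative placeholder rather than a figure taken from the studies reviewed here.

```python
# Back-of-the-envelope ethanol yield from a pretreated lignocellulosic feedstock.
# All composition and efficiency values are illustrative assumptions, not data
# from the studies reviewed here; only the stoichiometric factors are standard.

GLUCAN_TO_GLUCOSE = 180.0 / 162.0    # ~1.111 g glucose per g glucan (hydration on hydrolysis)
GLUCOSE_TO_ETHANOL = 92.0 / 180.0    # ~0.511 g ethanol per g glucose (fermentation)

def ethanol_yield_kg(dry_biomass_kg, glucan_fraction,
                     hydrolysis_yield, fermentation_yield):
    """Ethanol mass (kg) obtainable from a given mass of dry biomass."""
    glucan_kg = dry_biomass_kg * glucan_fraction
    glucose_kg = glucan_kg * GLUCAN_TO_GLUCOSE * hydrolysis_yield
    return glucose_kg * GLUCOSE_TO_ETHANOL * fermentation_yield

# Example: 1 tonne of dry feedstock with 40% glucan, 75% enzymatic hydrolysis
# after extrusion pretreatment and 90% fermentation efficiency (all assumed).
print(round(ethanol_yield_kg(1000.0, 0.40, 0.75, 0.90), 1), "kg ethanol")
```

Even such a coarse estimate shows how directly the sugar-recovery improvements attributed to extrusion propagate into the product yield on which capital-cost and operating-cost comparisons ultimately rest.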
However, due to the varying types of biomass, efficient and economical methods need to be developed in a feedstock-specific manner; here, a few reports of quantitative economical analyses of data of different pretreatment technologies were discussed. In this review, the feasibility and economical analysis of extrusion pretreatment offers an initial glimpse into future investigations in pretreatment technology. None of the cellulosic ethanol from extrusion pretreatment technology has been commercialized to date, and uncertainties and limitations are unavoidable in the economic analysis and comparison of conversion technologies. Therefore, identifying the economic impact of different pretreatments related to productivity, capital cost, and operating cost, as well as defined assumptions, is important when conducting an economic analysis of bioethanol to aid reliable and creditable cost predictions. Further, improvements in pretreatment, enzymatic hydrolysis, and fermentation should be studied in order to reduce production costs."} {"text": "The increased use of nephron-sparing surgery to treat localized renal cell carcinoma (RCC) lends weight to the question of the value of microscopically positive surgical margins (PSM) in cases with a tumor bed macroscopically free of residual tumor. The aim of this article is to highlight the data available on risk factors for PSM, their clinical relevance, and possible therapeutic consequences. For this purpose, publications on the incidence and relevance of PSM after partial nephrectomy from the last 15\u00a0years were examined and evaluated. We summarize that PSM are generally rare, regardless of the surgical procedure, and are seen more often in connection with an imperative indication for nephron-sparing surgery as well as a central tumor location. Most studies describe that PSM lead to a moderate increase in the rate of local relapses, but no study has thus far been able to demonstrate an association with shorter tumor-specific overall survival. Intraoperative frozen section analysis had no positive influence on the risk of definite PSM in most trials. Therefore, we conclude that PSM should definitely be avoided. However, in cases with a macroscopically tumor-free intraoperative resection bed, they should lead to close surveillance of the affected kidney and not to immediate (re)intervention. In recent years, organ-sparing surgery for renal tumors in terms of partial nephrectomy or tumor enucleation has replaced radical nephrectomy as the standard procedure for treating locally confined renal cell carcinoma (RCC)\u20138. This However, no prospective and/or randomized study has yet been performed to investigate the prognostic significance of histopathologically positive but intraoperative macroscopically tumor-free surgical margins in predicting the risk of local relapse, metachronous metastases, and tumor-specific survival; only one nonsystematic review has been published thus far. MoreoveAccording to various studies, the incidence of PSM at final pathology is between 0 and 7% for open surgery\u201337, 1 toet al.[P\u2009=\u20090.01). Multivariate analysis also revealed a nearly five times higher risk of PSM after classic partial nephrectomy (P\u2009=\u20090.04). Minervini et al.[Data from smaller studies suggest that enucleation along the plane of the tumor pseudocapsule may be superior to classic partial nephrectomy with regard to the incidence of PSM. Verze et al. retrospei et al. 
comparedet al.[P\u2009=\u20090.02).Patients with an imperative indication for nephron-sparing surgery have a higher incidence of larger and more unfavorably located tumors than the total patient population. This explains why an imperative indication could be identified as a risk factor for PSM in nearly all studies, at least by univariate analysis. PSM rates of 9 to 28% are described here, 53\u201355. et al. also ideet al.[et al.[According to a study by Kwon et al. in 770 p[et al. reportedet al.[et al.[et al.[It is still controversial whether tumor size has an impact on the PSM rate. While various research groups were unable to demonstrate a correlation, 43, othet al., for exaet al.. On the [et al. the PSM [et al. have alset al.[P\u2009<\u20090.001).It cannot yet be conclusively clarified whether tumor location within the kidney can influence the PSM rate, since none of the published studies included a reproducible nephrometry scoring system. However, available data suggest that PSM is observed more frequently after resection of centrally located tumors, 56. Benet al. proposed quick-staining cytology as an alternative to FSA in a recent publication and showed a good level of agreement with final histologic examination [Intraoperative frozen section analysis (FSA) to ensure tumor-free surgical margins is performed frequently and may reduce the rate of PSM, at least in some patient subgroups in laparoscopic surgery grades 1 to 2 and 2 tumors). However<0.0001).et al.[et al.[et al.[It has not yet been conclusively clarified whether PSM increase the risk of local relapses after partial nephrectomy, even though the majority of studies suggest that this is probably the case, 56, 66.et al. found 26[et al. showed tet al.[P\u2009=\u20090.11) or tumor-specific overall survival (P\u2009=\u20090.4). Using multivariate analysis, an imperative indication for partial renal resection and a central tumor location proved to be independent risk factors for tumor relapse, but not PSM at final pathology[Bensalah et al. evaluateathology.et al.[P\u2009=\u20090.97). Multivariate analysis revealed that, unlike tumor size, PSM was not a risk factor for local relapse or metachronous metastases .In another large study published by Yossepowitch et al., 77 out et al.[A study by Marszalek et al. with a met al.[P\u2009=\u20090.58). Multivariate analysis also failed to identify PSM status as an independent predictor of cancer-specific survival . Thus, microscopic PSM does not appear to significantly influence tumor-specific survival[et al. reported a HR of 3.45 (95% CI: 1.79 to 6.67) for cancer-specific survival in patients with PSM; however, one limitation of that study was the small number of PSM cases (n\u2009=\u200914) compared to total cohort size (n\u2009=\u20091801)[In a large study recently published by Ani et al., 71 of 6survival, 57, 67.\u2009=\u20091801).PSM should definitely be avoided even if a certain safety margin is no longer required in nephron-sparing surgery for renal tumors, 35. How"} {"text": "We investigated the effect of extra virgin (EV) olive oil and genetically modified (GM) soybean on DNA, cytogenicity and some antioxidant enzymes in rodents. Forty adult male albino rats were used in this study and divided into four groups. The control group of rodents was fed basal ration only. The second group was given basal ration mixed with EV olive oil (30%). 
The third group was fed basal ration mixed with GM soyabean (15%), and the fourth group was maintained on a combination of EV olive oil, GM soyabean and the basal ration for 65 consecutive days. On day 65, blood samples were collected from each rat for antioxidant enzyme analysis. In the group fed basal ration mixed with GM soyabean (15%), there was a significant increase in the serum level of lipid peroxidation, while glutathione transferase decreased significantly. Interestingly, GM soyabean increased not only the percentage of micronucleated polychromatic erythrocytes (MPCE), but also the ratio of polychromatic to normochromatic erythrocytes (PCE/NCE); however, the amount of DNA and the NCE count were significantly decreased. Importantly, the combination of EV olive oil and GM soyabean significantly altered the tested parameters towards normal levels. This may suggest an important protective role for EV olive oil in rodents' organs and warrants further investigation in humans.

Plant food represents a major source of nutrients for humans, as well as performing a protective role against chronic diseases such as coronary heart disease, diabetes mellitus and cancer.

Extra virgin (EV) olive oil is a vegetable oil obtained from olive trees (Olea europaea), a traditional tree crop of the Mediterranean Basin. It has many applications, including cooking, cosmetics, pharmaceutical preparations and soaps. Olive oil is considered a healthy product because of its constituents, which include oleic acid, palmitic acid and other fatty acids, in addition to traces of squalene and sterols. There is considerable evidence that the consumption of olive oil is beneficial to cardiovascular health; specifically, it has a favorable effect on cholesterol regulation and LDL cholesterol oxidation. It has also been shown to have anti-inflammatory, antithrombotic, antihypertensive and vasodilatory effects in both animals and humans.

Soyabean is considered to be one of the most important crops worldwide. It is generally regarded as a protein concentrate, as protein represents 42% of its make-up. Ram et al. found th [et al. reported [et al. reported [et al. ,9.

The primary aim of this study was to elucidate any potential hepato-protective role that may be performed by natural products such as GM soyabean and whole EV olive oil. To accomplish this, we examined their anti-oxidative effects, both individually and in combination, by investigating their effects on DNA damage in rats. Interestingly, we found a possible protective role when animals were fed a mixture of EV olive oil and GM soyabean, and we believe this outcome represents an important opportunity to examine the potential benefits of these valuable natural products further.

Forty adult white albino rats (Sprague Dawley strain) with an average weight of 150 to 180 g were used in this study. They were obtained from the laboratory animal house of the ophthalmic research institute, Giza, Egypt. The rats were acclimatised to laboratory conditions before the experiment began. They were fed according to the experimental design detailed later in the manuscript, and feed was supplied ad libitum. All experiments were conducted according to the University's ethical protocols.

Soyabean: soyabean grind was purchased from a local market in Cairo, Egypt. It was freshly prepared and mixed with basal ration for rats at a concentration of 15%.

Extra virgin olive oil: the oil was purchased from a local market in Cairo, Egypt.
It was freshly prepared and mixed with basal ration for rats, at concentration of 30% . The cheForty adult rats were divided into four groups of ten. The experiment was conducted over 65 consecutive days. The feeding structure for each group is detailed in Blood samples were obtained from each animal group from their orbital plexuses and received into a clean, dry tube. Samples were left to clot at room temperature for 2 h, stored overnight in a refrigerator at 4 \u00b0C and centrifuged at 3000 revolutions per minute (rpm) for 15 min. Serum samples were then drawn in clean, dry capped bottles and kept in deep freeze at 4 \u00b0C for antioxidant enzyme studies. .et al. [et al. [Lipid peroxide was obtained using thiobarbitutric acid reaction outlined by Ohkawa et al. and puri [et al. .et al. [et al. [The isolation of DNA from spleen of rats was performed according to the following protocol: first, the tissue was homogenized in isotonic saline buffered with sodium citrate (pH 7.0); protein was then removed by treatment with a chloroform/amyl alcohol mixture and the DNA precipitated with ethanol according to protocols established by Schwander et al. . Estimat [et al. .et al. [The micronucleus test was performed to detect chromosomal damage associated with treatment. Micronuclei were identified as dark blue staining bodies in the cytoplasm of the polychromatic erythrocytes (PCEs) according to the protocol mentioned by Salamon et al. . The polet al. ,18.Parametric data were interpreted using the analysis of variance (ANOVA) test and comparative of means were performed using the Duncan Multiple Range test using SPThis study was carried out to investigate the effects of EV olive oil and GM soybean on DNA cytogenicity, as well as looking into its antioxidant role. To address these questions male albino rats were fed one concentration of each plant alone, or combined, for 65 days. We examined the antioxidant enzymes analyses and investigated in order to determine the DNA damage in the spleens and its cytogenicity, in rats.Diet effects on antioxidant enzymes: Feeding on ration mixed with GM soyabean at concentration 15% caused significant increase in serum level of lipid peroxidation, while the level of glutathione dehydrogenase was significantly reduced .Ration mixed with EV olive oil and GM soyabean significantly altered the serum level of antioxidant enzymes. This change was in the direction of the normal value when compared with the group fed on soyabean only .Diet effects on DNA: Maintaining rats on EV olive oil at concentration 30% for 65 days revealed a significant increase in the amount of DNA (ng/g of spleen) when compared with the control group. The amount of DNA (ng/g of spleen) significantly decreased when feeding rats GM soyabean (15%), while it increased significantly in rats fed EV olive oil (30%) and GM soyabean (15%) .Diet effects on cytogenicity: Feeding rats GM soyabean only caused a significant increase in micronucleated polychromatic erythrocytes (MPCEs) and the ratio of polychromatic to normochromatic erythrocytes (PCE/NCE). However, maintaining rats on ration mixed with EV olive oil and GM soyabean, instigated significant decrease of MPCEs and the ratio of PCE/NCE, though normochromatic erythrocytes (NCE) increased significantly when compared to the group fed soyabean only .EV olive oil is known to have a wide variety of benefits on many different body systems. This study was carried out to examine the effects of EV olive and G.M. 
soybean oil on DNA, as well as serum antioxidant enzymes.In our study, rats were fed EV olive oil, GM soya bean, or a combination of the two, at concentrations (30%) or (15%). The study was conducted over 65 consecutive days.We report a significant decrease in the level of lipid peroxidation and a significant increase in the level of glutathione dehydrogenase, when comparing the EV olive oil and GM soyabean combined group with GM soyabean fed animals only. These results may be attributed to the presence of antioxidants in EV olive oil, which has shown a number of positive effects on liver regeneration ,22,23.In addition, EV olive oil contains antioxidants and flavonoids that usually improve nutritional status and also has anti-inflammatory effects which may help in liver renewal ,25,26.Our data illustrates that the group of rats fed a mixed ration with EV olive oil for 65 successive days, showed a significant increase in the amount of DNA ng/g of spleen, when compared with the group maintained on soyabean alone.On the other hand, tests on the rats fed GM soyabean alone revealed a significant decrease in the amount of DNA ng/g of spleen; while the group that was fed a mixed ration of EV olive oil and GM soyabean showed a significant increase in the amount of DNA ng/g of spleen; with these changes in parameters appearing to fall into the normal ranges.et al. [et al. [et al. [et al. [The mechanism for the decrease in the amount of DNA was discussed by Zhou et al. who repo [et al. , Amel et [et al. and Vani [et al. showed tet al. [et al. [et al. [et al. [Our results are in agreement with Salvini et al. , Jacomcl [et al. and Quil [et al. who repo [et al. who statet al. [et al. [et al. [et al. [Fabiani et al. suggeste [et al. reported [et al. and Fabi [et al. describeet al. [The mechanism for the increase in DNA was explained by Lopez et al. who illuet al. [et al. [et al. [Jeffery , Jee-Youet al. , Aiad [3et al. , Khataib [et al. and Tudi [et al. showed tet al. [Sekene et al. concludeet al. [Rats fed a mixed ration with GM soybean showed a significant increase in MPCEs and in the ratio of PCE/NCE; while a significant decrease in NCE was observed. Feeding EV olive oil in combination with GM soyabean to rats, instigated a significant decrease in MPCEs and in the ratio of PCE/NCE and this helped to normalise their values. Suzuki et al. describeet al. [Hanan et al. reportedet al. [2+ induced mitochondrial swelling, mitochondrial membrane depolarization and intra-mitochondrial Ca2+ release on mitochondrial permeability transition (MPT). Moreover Sivikova et al. [The mechanism of reducing MPCEs and the ratio of PCE/NCE were explained by Tang et al. who conca et al. describeWe can conclude that adding EV olive oil to the diet of rats appears effective in inhibiting oxidative damage and may act as a protective agent against chronic diseases such as liver fibrosis, hyperlipidemia and diabetes. In addition, EV olive oil may also have a protective function against carcinogenic processes. Further clinical studies are therefore required to determine whether the observations observed in our study translate to human conditions and illnesses."} {"text": "We critically re-examine Fredrickson et al.\u2019s renewed claims concerning the differential relationship between hedonic and eudaimonic forms of well-being and gene expression, namely that people who experience a preponderance of eudaimonic well-being have gene expression profiles that are associated with more favorable health outcomes. 
By means of an extensive reanalysis of their data, we identify several discrepancies between what these authors claimed and what their data support; we further show that their different analysis models produce mutually contradictory results. We then show how Fredrickson et al.\u2019s most recent article on this topic not only fails to adequately address our previously published concerns about their earlier related work, but also introduces significant further problems, including inconsistency in their hypotheses. Additionally, we demonstrate that regardless of which statistical model is used to analyze their data, Fredrickson et al.\u2019s method can be highly sensitive to the inclusion (or exclusion) of data from a single subject. We reiterate our previous conclusions, namely that there is no evidence that Fredrickson et al. have established a reliable empirical distinction between their two delineated forms of well-being, nor that eudaimonic well-being provides any overall health benefits over hedonic well-being. Fredrickson et al. claimed Recently, Fredrickson et al. publisheWe previously noted a numberWe ran essentially the same analyses as before to test t test to non-independent data, namely the collected coefficients from multiple separate regressions of the expression of individual genes on measures of well-being. However, Fredrickson et al. in identifying eudaimonic well-being as the primary source of associations between overall well-being and CTRA gene expression and provide no support for any independent favorable contribution from hedonic well-being. The consistency and robustness of these findings across 3 independent study samples also refutes claims that the initial discovery study findings were somehow spurious or unreplicable. antiviral genes. This implies that the symmetric association of the expression of pro-inflammatory genes with hedonic well-being, identified by Fredrickson et al. in their initial analyses of their discovery sample , should 2 metric) gene expression values\u201d (p. 5) in their discovery and confirmation studies. In other words, this figure represents the results of applying their original regression method from [Fredrickson et al. used thehod from to both hod from and theihod from , and, atFirst, as we have previously demonstrated , 4, Fredy-axis of that figure shows that it was not shown in their Table 3 [used two different versions of the GSE45330 dataset in their article [Second, Fredrickson et al.\u2019s original analysis was greaWe note that Fredrickson et al. have recently issued a correction to theirchange sign, and the remainder to undergo a substantial change in magnitude, whether Fredrickson et al.\u2019s original RR53 regression method or their new mixed effect linear model was used. Specifically, in the case of the RR53 method, the fold difference value for hedonic well-being changes from +3.6% to \u22128.8% when this participant is excluded, while the value for eudaimonic well-being changes from \u22127.3% to +5.3%. In the case of the mixed effect linear model, the b-coefficients change from 0.536 to \u22120.120 (hedonic) and from 0.135 to 0.065 (eudaimonic). Click here for additional data file."} {"text": "Feedback and closed-loop circuits exist in just about every part of the nervous system. It is curious, therefore, that for decades neuroscientists have been probing the nervous system in an open-loop manner to understand it. 
Instead of the linear, reductionistic \u201cstimulate \u2192 record response\u201d approach, a more modern approach is taking hold: closed-loop neuroscience. It respects the inherent \u201cloopiness\u201d of neural circuits, and the fact that the nervous system is embodied, and embedded in an environment. Through active sensing, behaving animals can influence their environment in ways that alter subsequent sensory inputs. Therefore, loops abound not only in the nervous system itself, but through its dynamic interactions with the world. By interposing our own technology in some of these loops, we can achieve unprecedented control over the system being studied and explore the functional consequences. This Research Topic, \u201cClosing the Loop Around Neural Systems,\u201d presents a diverse set of recent methodological, scientific and theoretical advances from neuroscientists and neuroengineers who are pioneering closed-loop neuroscience.As shown here, cutting-edge researchers are taking advantage of real-time or \u201con-line\u201d processing of large streams of neural data. This has become feasible thanks to advances in computer processing power, in electronics such as microprocessors and field-programmable gate arrays (FPGAs), and in specialized and open-source software. These advances have enabled a wide variety of new neuroscience approaches to understanding, modulating, and interfacing with the nervous system\u2014approaches in which the variables being monitored can influence the experiment in progress, just as active sensing can influence an animal's next input.Our call for submissions to this Frontiers in Neural Circuits Research Topic yielded an overwhelming response, indicating that closing the loop around neural systems is an exciting and rapidly expanding field. Perhaps this is because of the diversity of ways in which \u201cclosed-loops\u201d can be interpreted and implemented. This Research Topic presents seven Methods articles, 16 Original Research articles, and seven Reviews, Mini-Reviews, and Perspectives, for a total of 30 accepted papers published in Frontiers in Neural Circuits between April 2012 and October 2013. A map showing the locations of all the contributorsin vitro and in vivo are described.Several articles describe or review new technologies that increase the options for closed-loop neuroscience. Two papers by Bareket-Keren and Hanein and Robiin vitro. Bonifazi et al. (in vitro preparation and its response to focal ischemia. The goal is to develop the closed-loop prostheses of the future. Tessadori et al. (in vitro on an MEA with small tunnels for neurites to grow through. Pimashkin et al. (A number of articles present advances using acute or cultured networks i et al. present i et al. present i et al. reconstrn et al. used an in situ brainstem preparation of a mouse model for Rett syndrome. To help map the brain's feedback loops, Beier et al. (Others studied the nervous systems of intact or semi-intact animals with closed-loop approaches. Nishimura et al. restoredr et al. demonstrr et al. provide r et al. present r et al. review hThe \u201cModel-in-the-loop\u201d paradigm is a powerful approach to understanding complex neural network dynamics. Brookings et al. interfacTheoretical advances are described in several modeling and simulation papers. Witt et al. modeled On the clinical side, Afshar et al. 
describeThe diversity of methods, experiments, tools, and analyses in this Research Topic suggests that many more areas of neuroscience research would benefit from adopting a closed-loop perspective.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "By contrast, no significant benefit emerged for CBT when compared to active control conditions either at end-of-therapy or at follow-up . The meta-analysis, however, contains errors and omissions that when rectified, cast doubt on the reliability of the reported significant effects comparing CBT to Treatment as Usual (TAU) at end-of treatment and at follow-up.In their meta-analysis, Mehl et al. examine d = 0.27 to d = 0.14. Mehl et al. did not, however, report the 95% Confidence Intervals and whether the revised effect size is significant (rather they report the standard error: SE = 0.12)\u2014the required analysis reveals that the revised effect size of 0.14 becomes non-significant (95%CI \u22120.07 to 0.35).First, for the end-of-trial analysis, Mehl et al. demonstrate the presence of significant publication bias. Their funnel plot is asymmetric and a trim-and-fill analysis points to four possible missing unpublished studies . They note that when effect sizes for the four \u201cmissing\u201d studies are included, the original effect size almost halves from d = 0.94) alone is considered an outlier, but not Foster (d = 0.90) or Waller et al. (d = 0.89). The removal of a single outlier here is somewhat opaque and atheoretical.Second, Mehl et al. also highlight the moderate level of heterogeneity in their end-of-trial analysis and attribute this to one outlying study by Kr\u00e5kvik et al. . After rd = 0.94), which might also have been influenced by difficulties in maintaining the blinding procedure.\u201d If we turn to the Kr\u00e5kvik et al paper itself, those authors clearly state that \u201cAll four professionals were trained in the use of assessment measures, but it was not possible to keep them blind to the treatment condition.\u201d Mehl et al. also incorrectly classify Waller et al. (k = 9) elicit a non-significant effect size of d = 0.13 (95% CI \u22120.028 to 0.29) while non-blind trials produce an effect size five-times larger with d = 0.65 (95% CI 0.21 to 1.09). Additionally, it is notable that amongst the nine blind trials, heterogeneity is virtually non-existent I2 = 4.8. In sum, the minority of non-blind trials underpins the inflation of effect sizes and their reported heterogeneity.Third, Mehl et al. give only a fleeting mention to the most well-documented, \u201cgenuine\u201d and significant source of heterogeneity in CBTp trials\u2014whether outcomes are measured blind or not in favor of controls. If we add this new value to replace the apparently erroneous effect size reported by Mehl et al. a random effects model now shows the overall effect size reduces to 0.16 (\u22120.03 to 0.34) and becomes non-significant.Turning to the follow-up analysis, where Mehl et al. claim an overall significant effect of CBT on delusions with n et al. which isn et al. delusionTo summarize, examination of Mehl et al.'s end-of-trial data comparing CBT and TAU shows that if the overall effect size is adjusted for potential publication bias, then it becomes non-significant. Further analysis of the same data also shows that the significant heterogeneity reported by Mehl et al. 
is likely to reflect the inclusion of 4 non-blind trials, which elicit effects sizes five-times larger than in blinded trials. Analysis of nine blind trials revealed no heterogeneity and no CBT efficacy. Turning to the follow-up data, adjusting the effect size for Turkington et al. means thThe author confirms being the sole contributor of this work and approved it for publication.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The reviewer, Maria Semkovska, and handling Editor declared their shared affiliation, and the handling Editor states that the process nevertheless met the standards of a fair and objective review."} {"text": "Structural health monitoring (SHM) has gained a significant number of attentions from the engineering communities in the past two decades, which integrates the knowledge of a variety of disciplines including structural engineering, material science, computer science, signal processing, and data management \u20133. A maiTherefore, in the light of these considerations, this special issue was launched in this journal, an SCI-indexed international journal. The papers in this special issue present the most recent advances, progress, and ideas in the field of the SHM and integrity maintenance and its application. It includes smart, bioinspired, nanometer, wireless, and remote sensing technology, sensor placement and optimization strategies, data compression, cleaning, mining and fusing technology, pattern recognition, feature extraction and damage detection and assessment, design, retrofit, maintenance, renewal and risk management of civil infrastructure, and application of SHM for heritage structures, historical monuments, and old bridges.Structural health monitoring of civil infrastructure using optical fiber sensing technology: a comprehensive review\u201d by X. W. Ye et al. presents a summary of the basic principles of various optical fiber sensors, innovation in sensing and computational methodologies, development of novel optical fiber sensors, and the practical application status of the optical fiber sensing technology in civil infrastructure monitoring. The paper \u201cDynamic responses and vibration control of the transmission tower-line system: a state-of-the-art review\u201d by B. Chen et al. reviews the dynamic responses and vibration control of the transmission tower line system as well as the disaster monitoring and mitigation of the system subjected to dynamic excitations. The paper \u201cRecent research and applications of numerical simulation for dynamic response of long-span bridges subjected to multiple loads\u201d by Z. Chen and B. Chen addresses the key issues involved in dynamic response analysis of long-span multiload bridges based on numerical simulation technologies and the engineering applications of newly developed numerical simulation technologies for safety assessment of long-span bridges. The paper \u201cA review on strengthening steel beams using FRP under fatigue\u201d by M. Kamruzzaman et al. summarizes the existing FRP reinforcing techniques for fatigue damaged structural steel elements.After two-round peer reviewing, totally 55 research articles are received and 28 out of them are finally accepted for publication in this special issue. Among them, four papers are review articles. The paper \u201cDamage identification for large span structure based on multiscale inputs to artificial neural networks\u201d by W. 
Lu et al. proposes a structural damage identification method by combining the measured results from strain sensors and accelerometers in the noisy environment based on the artificial neural network. The paper \u201cDamage assessment of two-way bending RC slabs subjected to blast loadings\u201d by H. Jia et al. investigates the blast response and damage assessment of a two-way bending RC slab subjected to blast loadings. The paper \u201cStructural damage identification based on rough sets and artificial neural network\u201d by C. Liu et al. conducts the research on potential applications of the rough sets theory and the artificial neural network method for structural damage detection. The paper \u201cDamage detection on sudden stiffness reduction based on discrete wavelet transform\u201d by B. Chen et al. presents the damage detection on sudden stiffness reduction of building structures based on the discrete wavelet transform. The paper \u201cDamage detection of structures identified with deterministic-stochastic models using seismic data\u201d by M.-C. Huang et al. addresses a deterministic-stochastic subspace method for damage identification which has been experimentally verified to detect the equivalent single-input-multiple-output system parameters of the discrete time state equation.The following papers address the research work on structural damage detection. The paper \u201cIntegrated system of structural health monitoring and intelligent management for a cable-stayed bridge\u201d by B. Chen et al. describes the integrated system for structural monitoring and intelligent management of the cable-stayed Zhijiang Bridge, China. The paper \u201cFull-scale measurements and system identification on Sutong Cable-stayed Bridge during typhoon Fung-Wong\u201d by H. Wang et al. analyzes the wind data and the structural vibration responses obtained from the SHM system installed on the cable-stayed Sutong Bridge, China, during a typhoon. The paper \u201cStudy on typhoon characteristic based on bridge health monitoring system\u201d by X. Wang et al. investigates the typhoon characteristics by use of the measured data from the bridge health monitoring system instrumented on the Jiubao Bridge, China. The paper \u201cNumerical simulation of monitoring corrosion in reinforced concrete based on ultrasonic guided waves\u201d by Z. Zheng et al. predicts the location of the pitting corrosion in reinforced concrete based on the ultrasonic guided waves. The paper \u201cStudy on dynamic response measurement of the submarine pipeline by full-term FBG sensors\u201d by J. Zhou et al. measures the dynamic responses of the submarine pipeline by use of the FBG sensing technology. The paper \u201cCase study on the maintenance of a construction monitoring using USN-based data acquisition\u201d by S. Kim et al. develops a ubiquitous sensor network for monitoring and maintenance of the building structure. The paper \u201cIn-line ultrasonic monitoring for sediments stuck on inner wall of a polyvinyl chloride pipe\u201d by H. Seo et al. verifies the applicability and effectiveness of the ultrasonic monitoring of sediments stuck on the inner wall of polyvinyl chloride pipes.Other papers focus on the research related to the development of integrated SHM systems and novel sensing technologies. The paper \u201cNumerical simulation on slabs dislocation of Zipingpu concrete faced rockfill dam during the Wenchuan earthquake based on a generalized plasticity model\u201d by B. Xu et al. 
investigates the slab dislocation phenomenon of the Zipingpu concrete faced rockfill dam during earthquake. The paper \u201cAn improved multidimensional MPA procedure for bidirectional earthquake excitations\u201d by F. Wang et al. develops an improved multidimensional modal pushover analysis method for estimating the response demands of structures subjected to bidirectional earthquake excitations. The paper \u201cA methodology for multihazards load combinations of earthquake and heavy trucks for bridges\u201d by D. Sun et al. presents a modified model considering the advantages of Ferry Borges-Castanheta's model and Turkstra's rule in converting the random process into the random variables for earthquake analysis. The paper \u201cExperimental study of the seismic performance of L-shaped columns with 500\u2009MPa steel bars\u201d by T. Wang et al. addresses the experimental results of six L-shaped RC columns with 500\u2009MPa steel bars for seismic performance assessment.The subsequent papers present the investigations on the structural performance under seismic excitations. The paper \u201cEffects of outlets on cracking risk and integral stability of super-high arch dams\u201d by P. Lin et al. presents the outlet cracking in the Goupitan and Xiaowan arch dams by use of the nonlinear finite element method. The paper \u201cReal-time safety risk assessment based on a real-time location system for hydropower construction sites\u201d by H. Jiang et al. proposes a method for real-time safety risk assessment for the hydropower construction site. The paper \u201cAnt colony optimization analysis on overall stability of high arch dam basis of field monitoring\u201d by P. Lin et al. conducts a dam ant colony optimization analysis of the overall stability of the high arch dam on the complicated foundation. The paper \u201cDisplacement back analysis for a high slope of the Dagangshan hydroelectric power station based on BP neural network and particle swarm optimization\u201d by Z. Liang et al. presents a displacement back analysis for the slope using an artificial neural network model and particle swarm optimization model. The paper \u201cUplifting behavior of shallow buried pipe in liquefiable soil by dynamic centrifuge test\u201d by B. Huang et al. carries out the dynamic centrifuge model test to investigate the uplifting behavior of the shallow buried pipeline subjected to seismic vibration in the liquefied site.The remaining papers in this special issue introduce the research outcomes on monitoring of hydraulic and geotechnical structures. The paper \u201c"} {"text": "The purpose of this review was to build upon a recent review by Weigelt et al. which examined visual search strategies and face identification between individuals with autism spectrum disorders (ASD) and typically developing peers. Seven databases, CINAHL Plus, EMBASE, ERIC, Medline, Proquest, PsychInfo and PubMed were used to locate published scientific studies matching our inclusion criteria. A total of 28 articles not included in Weigelt et al. met criteria for inclusion into this systematic review. Of these 28 studies, 16 were available and met criteria at the time of the previous review, but were mistakenly excluded; and twelve were recently published. Weigelt et al. found quantitative, but not qualitative, differences in face identification in individuals with ASD. In contrast, the current systematic review found both qualitative and quantitative differences in face identification between individuals with and without ASD. 
There is a large inconsistency in findings across the eye tracking and neurobiological studies reviewed. Recommendations for future research in face recognition in ASD were discussed. Individuals with autism spectrum disorders (ASD) experience difficulties communicating, which impacts on involvement in social situations . Informa, and what the nature (advantage or disadvantage) and magnitude of any difference might be (quantitative differences). Weigelt et al. [Weigelt et al. recentlyt et al. presentet et al. cited bott et al. includedt et al. review),t et al. , by inveSeven databases were used to locate published scientific studies matching with the inclusion criteria. These databases were CINAHL Plus, EMBASE, ERIC, Medline, Proquest, PsychInfo and PubMed. The main search terms were \u201cface recognition\u201d, \u201cautism\u201d and \u201ceye tracking\u201d. Using Boolean operators, the following search strategy was used: (\u201cface recogni*\u201d OR \u201cfac* perception\u201d OR \u201cfac* processing\u201d OR \u201cface identi*\u201d) AND AND (autis* OR Asperger OR \u201cautism spectrum disorder\u201d)Prior to selection of the final published articles, a screening of the titles and abstracts was conducted. After reviewing full text articles, a manual search was conducted through a search of the reference list of the selected articles.Due to ongoing change of the Diagnostic of Statistical Manual of Mental Disorders criteria of ASD throughout the years, the search was limited from year 2000 to May 2013. The latest diagnosis criteria of ASD can be obtained from the Diagnostic of Statistical Manual of Mental Disorders (DSM-5) , all typThe current review included the different types of ASD specified in the DSM IV, which included individuals with Autistic disorder, Asperger\u2019s syndrome and Pervasive developmental disorder not otherwise specified (PDD-NOS) . A furthAn inter-rater reliability score of 0.94 was achieved after evaluation by two reviewers, the first author and an occupational therapy honours student, based on an inclusion and exclusion criteria of a set 50 randomly selected articles. Any discrepancies were resolved with consensus discussion.S1 Table) [The methodological quality of each article was assessed using Kmet form (1 Table) . This sc1 Table) . Scores S2 Table) were citation, level of evidence, ASD population, comparison group, age group, type of stimuli, intervention , methods, outcomes and results.The extraction of data was based on Cochrane Handbook for Systematic Reviews Section 7.3.a . The heaA narrative review approach was applied. The synthesisation and analysis of data were based on themes of recognition accuracy, reaction times, number of fixations, fixation duration and secondary outcomes of brain activation studies fMRI activation, EEG and MEG measures.Electronic searches located a total of 880 articles. A review of titles and abstracts were conducted and 720 articles were excluded. Evaluation of 131 full text articles was conducted to determine suitability. Based on the exclusion criteria, 106 articles were omitted. Three articles were identified after a manual search of the reference lists resulting in a final inclusion of 28 articles .The majority of the studies included were case-control studies with an exception of one individual case study. A total of 1329 participants were included. Eight studies utilised eye tracker technology for the measurement of visual search strategies. 
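As a brief illustration of the dual-reviewer screening check described in the methods above, the sketch below quantifies agreement between two reviewers over 50 randomly selected records using Cohen's kappa. The review reports an inter-rater reliability of 0.94 without naming the statistic or publishing the underlying screening decisions, so both the choice of kappa and the decision vectors here are purely hypothetical.

```python
# Hypothetical illustration of dual-reviewer screening agreement.
# The decision vectors below are invented; the review does not publish its
# raw screening data, and the exact agreement statistic it used is not stated.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same set of items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label]
                   for label in set(freq_a) | set(freq_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# 50 title/abstract decisions per reviewer: "include" or "exclude".
reviewer_1 = ["include"] * 12 + ["exclude"] * 38
reviewer_2 = ["include"] * 11 + ["exclude"] * 39
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # ~0.94 with these toy data
```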
Ten studies examined differences in patterns of brain activity; EEG was reported in five, fMRI measurements in four, and MEG in one study.rd edition-faces subtest. There is currently no evidence on the reliability and validity of the Benton Facial Recognition test, however the Neuropsychological assessment purportedly demonstrates adequate to high reliability and validity [rd edition is a reliable assessment, there may exist issues regarding validity. Four studies utilised the fine grained face perception method, in which face recognition is achieved through successful discrimination between two or more faces with some changes in features of the recognition stimulus [Sixteen employed simple face perception methodology, with stimuli necessitating discrimination between two or more faces. Seven studies used facial memory methods and assed participant\u2019s ability to remember faces after at least a 30 seconds interval between familiarisation and recognition . Severalvalidity . Kent [1validity states tstimulus .S2 Table). Limitations of the studies included small sample sizes and inadequate descriptions of sampling strategy. The majority of the studies controlled the baseline characteristics for individuals with ASD and the TD individuals. However, the intelligence quotient (IQ) was not controlled in ten studies, which may produce a bias in the obtained results [S2 Table) [The methodological quality of the studies ranged from poor to strong ( results \u201322. Poss2 Table) , 20\u201329.Numerous types of stimuli were used in the selected studies with varying levels of experimental manipulations that Weigelt et al refer toTo determine the absence or presence of quantitative or qualitative differences in face recognition in ASD, the studies were categorised under qualitative if the methodology involved face markers in face recognition, such as face inversion effects, part-whole effect, face space and thatcher . StudiesA total of 25 studies reported recognition accuracy. Zurcher et al. investigIn a fine grained face perception test (faces in the encoding and familiarisation phases had subtle differences), adults with ASD demonstrated greater difficulty (p = 0.003) in recognizing faces compared with the control group . This stSeven studies investigated facial memory recall accuracy and employed measures with or without a standardised assessment. Out of the 28 included studies, three studies used Wechsler memory scale scores for both immediate and delayed memory , 39, 40.\u2018certain\u2019, \u2018somewhat certain\u2019 and \u2018guessing\u2019. It was expected that when an individual is \u2018certain\u2019 the face recognition accuracy improves. This is present among TD children as performed better when they were \u2018certain\u2019 (p<0.01). However, memory awareness accuracy was improved when children with ASD indicated \u2018guessing\u2019 (p<0.05) in comparison to TD children. This indicates a possible delay in their memory awareness. Recognition accuracy was similar between both groups when they were \u2018somewhat certain\u2019. When accuracy was compared with adults with ASD, recall awareness was better in comparison to children with ASD. Due to the younger population used in Chawarska and Volkmar [Benton Facial Recognition Test is a standardised assessment which measures simple face perception and this was used in two studies , 32. The Volkmar , recogniDeruelle et al. 
studied As shown in Majority of the studies achieved statistical differences, which signify that TD individuals performed better in facial recognition tasks in comparison to individuals with ASD. Overall, the quantitative measurements in face recognition were reported in 16 studies. Three of these studies reported mixed results. With the exclusion of the mixed results studies, a total of 11 studies out of the 13 quantitative studies (85%) reported reduced face recognition accuracy among individuals with ASD. Qualitatively, a similar pattern was observed as six out of the eight qualitative studies (75%) reported poorer face recognition in individuals with ASD. Therefore, the studies reviewed reported both qualitative and quantitative differences were observed in face recognition individuals with ASD.Six studies included reports of reaction time. In a Benton Facial Recognition test, Tehrani-Doost et al. recordedAs shown in Measurements of fixation duration were discussed in all seven eye tracking studies. Bradshaw et al. found noChawarska & Shic comparedOverall, fixation duration comparisons in Sterling et al. revealedOutcomes of fixation duration in Falkmer et al. comparedA study by Wilson et al. found noDue to the differences in the classification of the areas of interests across the seven identified studies in fixation duration, studies which classified the individual features of the face, more specifically the \u2018eyes\u2019 and \u2018mouth\u2019, were retrieved for further comparisons. Five studies were reported the results of fixation durations in the \u2018eyes\u2019 and \u2018mouth\u2019. Majority of these studies reported similar fixation durations towards the \u2018eyes\u2019 and \u2018mouth\u2019, as illustrated in Four eye tracking studies measured number of fixations. In a part-whole effect study, Falkmer et al. divided Examination of number of fixations in primary regions of the face revealed that both ASD and TD groups exhibit higher preference for fixating on the eyes . There wInterestingly, Sterling et al. concludeTrepagnier et al. demonstrIn summary, individuals with ASD demonstrated a decreased in the number of fixations towards the \u2018eyes\u2019 or the inner features of the face, i.e., eyes nose and mouth among the group with ASD. However, there is a large inconsistency in findings among the analysis of number of fixations towards the \u2018mouth\u2019. A summary of the results are presented in The areas of interest investigated in Pierce and Redcay were theGrelotti et al. concludefMRI activation was analysed differently in Zurcher et al. as it waIn conclusion, specific differences in fMRI activation were inconclusive, as presented in Previous research has demonstrated that the N170 component is involved in face processing . Bentin In McPartland et al. , individIt has been previously suggested that the ERP, P1 is associated with early visual processing modulated by attention . Both McAdditional discussion of other ERPs like P2, N250 and face-N400 components can be found in Webb et al. . Dawson As demonstrated in Kyll\u00e4inen et al. studied Overall, while the studies using imaging techniques demonstrated some patterns in brain activity were apparently intact in individuals with ASD. However, two specific quantitative differences in face identity perception are described; namely a face-specific reduction in accurate recall and eye-specific discrimination deficit for individuals with ASD. The conclusions of the synthesis of studies not discussed by Weigelt et al. 
[Weigelt et al. concludet et al. and publt et al. . The fint et al. . Howevert et al. , this requantitative and qualitative face recognition processes in ASD. Studies involving different \u201cface markers\u201d, i.e., face inversion effects, face space and Thatcher illusion were classified under qualitative studies [Quantitative studies included studies with (1) a working memory component, (2) simple visual perceptual discrimination between different faces, (3) perceptual discrimination of similar faces with subtle differences in face features, and (4) standardized face recognition assessments [qualitative and quantitative differences can be ambiguous, which could suggest a possible inherent methodological limitation of this systematic review to address face recognition processes. Ideally, an alternative framework should have been developed for this purpose but that would then have made the extension of the previous review impossible.This systematic review inherited the theoretical framework adopted by Weigelt et al. , who pro studies . Quantities with a workinqualitative or quantitative stimuli.The comprehensive review by Weigelt et al. chose toqualitative and qualitative face recognition conclusions in visual search strategies and brain activation processes cannot be derived. However, the conclusion from this systematic review was based on the comparisons of 734 participants . Due to this, definitive . Despite limitations, it can be stated that most studies were scored as having good to strong quality based on Kmet forms. Furthermore, casecontrol study design is indeed the most appropriate design to investigate differences between individuals with ASD and their TD controls.The main limitation in the majority of the studies reported was low sample size. However, this is a common occurrence across most research due to difficulties relating to recruitment. Sampling strategy could be determined in a minority of studies and ten studies revealed possible confounding factors due to insufficient control of IQ at baseline. Possible type II errors were identified in fourteen studies Click here for additional data file.S2 Table(DOCX)Click here for additional data file.S3 Table(PDF)Click here for additional data file."} {"text": "Empiricpost-hoc tests ranged between 0.09 and 0.96, falling well-below the usual threshold of 0.80 for 9/10 comparisons). Moreover, we were asked to use Bonferroni-corrections due to multiple testing. This resulted in a very conservative test strategy . Therefore, we reported descriptive effect sizes that are better indicators of the hypothesized impact of symmetry issues than statistical significance in case of underpowered comparisons whereas for male stimuli the pattern\u2014albeit less pronounced\u2014ran into the reversed direction . Magnitude and opposed directedness of symmetry effects preliminarily corroborate differences across gender and stimulus orientation subsets. These results were not mentioned in Bernard et al.'s . Moreover, the same post-hoc tests as in Bernard et al. . However, both proposed contrasts yield a calculated mean difference of d = 0.29, p = 0.096 and d = 0.35, p = 0.024 further adding to the positive evidence against the SBIH. Taken together with our finding that newly constructed symmetry-matched stimuli also yielded positive evidence against the SBIH using their original stimuli in a counterbalanced design. In summary, hitherto mixed findings are reported based on the original stimuli from Bernard et al. (We agree with Bernard et al. 
that thed et al. , 2015b td et al. that exaThe author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Primary malignant melanoma of the oral cavity is a rare neoplasm, especially on the tongue. We report a case of mucosal melanoma at the base of the tongue, an extremely rare location (only about 30 cases have been reported in literature). The extension study doesn\u00b4t revealed distant metastatic lesions. The patient was treated by subtotal glossectomy and bilateral functional neck dissection. Tongue is one of the most difficult structures to reconstruct, because of their central role in phonation, swallowing and airway protection. The defect was reconstructed with anterolateral thigh free flap. Surgical treatment was supplemented with adjuvant immunotherapy. The post-operative period was uneventful. At present, 24 months after surgery, patient is asymptomatic, there isn\u00b4t evidence of recurrence of melanoma and he hasn\u00b4t any difficulty in swallowing or phonation. Key words:Malignant mucosal melanoma, anterolateral thigh free flap, phonation, swallowing. Primary malignant melanoma in oral cavity is rare neoplasm. It represents approximately 1.7 % of all melanomas and 6.3% of head and neck melanoma (A 51-year-old man with a clinical history of esophageal hiatal hernia, gastroesophageal reflux, hepatitis A in childhood and appendectomy was referred to our department due to an asymptomatic pigmented lesion in the base of the tongue. On examination, a black, pigmented and diffuse mass measuring approximately 3 X 3 cm in size was found on the base of the tongue Fig. . It was et al (et al (According Gutman et al , the preet al or Rapinl (et al series, l (et al . The tonl (et al . Reconstl (et al . After gl (et al . This cal (et al . When thl (et al . Ante-rol (et al . Becausel (et al . Anotherl (et al . However"} {"text": "Plasma sheet pressure increases to balance magnetic flux density increases in the lobes. Here we examine plasma sheet pressure, density, and temperature during substorm growth phases using 9 years of Cluster data . We show that plasma sheet pressure and temperature are higher during growth phases with higher solar wind driving, whereas the density is approximately constant. We also show a weak correlation between plasma sheet temperature before onset and the minimum SuperMAG AL (SML) auroral index in the subsequent substorm. We discuss how energization of the plasma sheet before onset may result from thermodynamically adiabatic processes; how hotter plasma sheets may result in magnetotail instabilities, and how this relates to the onset and size of the subsequent substorm expansion phase.During substorm growth phases, magnetic reconnection at the magnetopause extracts \u223c10 As such, increases in the lobe magnetic pressure during substorm growth phases result in changes in the plasma sheet density and temperature. i et al. and Kistr et al. found tha et al. that shoal. Lui, . Thus, iBoakes et al.,http://nssdcftp.gsfc.nasa.gov/spacecraft_data/omni/).Using 9 years of Cluster data, we statistically examine how the plasma and magnetic field in the magnetotail vary with solar wind energy input during substorm growth phases. Using new advances in the identification of magnetotail regions [ER in the magnetotail during the Northern Hemisphere summer. 
During each orbit, the spacecraft passed through the northern lobes, plasma sheet, and southern lobes, with the orbits sweeping from dawn to dusk each year.Between 2001 and 2009, the Cluster spacecraft orbited the Earth in a near-polar orbit, with an apogee of \u223c19\u2009Reme et al.,Balogh et al.,X\u2009<\u2009\u221210\u2009ER and |Y\u2009|\u2009<\u20095\u2009ER.Using data from the Cluster Ion Spectrometer Composition Distribution Function sensor (CIS-CODIF) [Juusola et al. [Newell and Gjerloev,Gjerloev, t) were calculated between 2001 and 2010 and filtered using a 60\u2009min low-pass filter to remove short period variations. The median positive and negative changes were then calculated. Intervals during which dSML/dt was less (more) than the median negative (positive) change were labeled as expansion (recovery) phase times. The main difference between our method and that of Juusola et al. [Juusola et al. [\u03b5 function [Perreault and Akasofu,Akasofu, Morley et al.,In order to identify growth phase intervals, we isolated those times that were not during substorm expansion or recovery phases using a technique similar to a et al. but appla et al. is that a et al. only conV is the solar wind speed, B is the interplanetary magnetic field (IMF) strength, \u03b8 is the clock angle of the IMF with respect to the Earth's magnetic dipole moment, and \u21130 is a scaling constant equal to 7\u2009ER.For each Cluster data point, we calculate the mean solar wind power input over the preceding 15\u2009min using the 1\u2009min resolution OMNI data and the function XZ and database [Boakes et al.,Dunlop et al.,\u03b2 were determined for the different regions based on statistical relations between \u03b2 and magnetotail currents . These criteria compared well with other methods of region determination [Boakes et al.,XZ plane and Figure\u2009XY GSM plane.Figure\u2009d Figure\u2009b XY GSM ER2 in the XY plane (quartiles of 552 and 2239\u2009ER\u22122) and 1728\u2009ER\u22122 in the XZ plane (quartiles of 728 and 2944\u2009ER\u22122). Figures\u2009Boakes et al.,Cluster provides good coverage of the lobes and plasma sheet, with a median sample density of 1376 data points per +\u2009+\u2009O+) pressure and magnetic pressure, (c) ion temperature, and (d) ion density in the plasma sheet against the 15\u2009min averaged \u03b5 function for growth phase intervals. The overlaid boxes indicate the distribution of data points in deciles of the complete solar wind data set (rather than the subset of solar wind data cotemporaneous with the Cluster data set). The boxes show the medians (blue line), upper and lower quartiles (thick boxes) and upper and lower deciles (thin boxes). Since the binned Cluster data are not always well described by a symmetric normal distribution, we have chosen this method of display to highlight the characteristics of the data points that would be lost if the data were expressed as a mean and standard deviation.Figure\u2009\u03c1) of the entire data sets were 0.35 and 0.21, respectively, although correlating the shown medians gave \u03c1 of 0.94 and 0.96, respectively. This general trend in the data supports the canonical substorm model: increased driving leads to increased magnetopause reconnection and loading of open magnetic flux into the magnetotail lobes, leading to increased flaring and increased pressure in the magnetotail. In a recent study, Liu et al. [\u03b5 function; thus, they are effectively independent. 
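The solar wind power input described above is conventionally evaluated with the Perreault-Akasofu epsilon function, epsilon = (4 pi / mu0) V B^2 sin^4(theta/2) l0^2. The following minimal sketch assumes that conventional SI form (the authors' exact constants are not reproduced here): it computes epsilon from 1 min OMNI-style inputs and averages it over the preceding 15 min as described in the text. The solar wind values in the example are invented.

```python
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability [H/m]
RE = 6.371e6                # Earth radius [m]
L0 = 7.0 * RE               # scaling constant l0 = 7 R_E, as stated in the text

def epsilon_watts(v_kms, b_nt, by_nt, bz_nt):
    """Perreault-Akasofu solar wind coupling function [W].

    v_kms        : solar wind speed [km/s]
    b_nt         : IMF magnitude [nT]
    by_nt, bz_nt : IMF components [nT] used for the clock angle theta = atan2(By, Bz)
    """
    v = v_kms * 1e3
    b = b_nt * 1e-9
    theta = np.arctan2(by_nt, bz_nt)            # IMF clock angle
    return (4.0 * np.pi / MU0) * v * b ** 2 * np.sin(theta / 2.0) ** 4 * L0 ** 2

def mean_epsilon_preceding(eps_1min, idx, window=15):
    """Mean of a 1 min epsilon series over the `window` minutes preceding sample `idx`,
    mirroring the 15 min averaging applied to each Cluster data point."""
    lo = max(0, idx - window)
    return np.nanmean(eps_1min[lo:idx]) if idx > lo else np.nan

# Illustrative (assumed) conditions: 450 km/s solar wind, 6 nT purely southward IMF.
print(f"epsilon ~ {epsilon_watts(450.0, 6.0, 0.0, -6.0):.2e} W")
```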
Pressure variations during the substorm may be related to solar wind driving, while solar wind pressure defines the initial pressure within the system, resulting in a spread of observed pressures.The magnetic pressure in the lobes Figure\u2009a and totu et al. showed tAndre and Cully,Figures\u2009\u03c1 for the entire data set of 0.31 and 0.83 for the shown median values. The median temperature increases by a factor of 2.9 between the lowest and highest driving deciles. The interquartile range also increases, showing that a greater range of temperatures was observed during intervals of high solar wind driving. In contrast, there is little variation in the median or interquartile ranges of density between different levels of driving for the fits to all the data were less than 0.18, indicating that the spread of the data is not dependent on the solar wind driving. For the medians, R2 is higher than 0.95, indicating that the general trends in the data are well described by the model. We note that the cross-sectional area for individual events is dependent on the solar wind ram pressure, which can vary by a factor of 2 between storm and nonstorm intervals [Kistler et al.,where We expand this model to calculated plasma sheet temperature under the assumption of pressure balance in the magnetotail,\u03b2 is given bythus the temperature in the plasma sheet for a given value of \u03b2\u2009=\u20090.35 and for constant density and the medians of the total pressure (dashed line). We note that \u03b2PS can vary with Ti,PS and that a full expansion of this equation will be dependent on the plasma equations of state used.The grey lines in Figure\u2009Figure\u2009min) during the expansion phase, which can be taken as an indication of the intensity of the substorm, increases with temperature of the plasma sheet just prior to onset. The difference between the median SMLmin between subsequent groups is statistically significant, assessed using the Mann-Whitney-Wilcoxon test, beyond the 95% level. The spread of data is large (with interquartile ranges of up to 137% of the median) and the Spearman's correlation coefficient is low (\u03c1\u2009=\u20090.43) but statistically significant beyond the 99% level. These results show a weak link between substorm expansion phase magnitude and the plasma sheet temperature observed by Cluster prior to onset.The minimum SML but this is expected if one considers that the substorm expansion phases heats a preheated plasma sheet.Tsyganenko and Mukai [Morley et al. [nd Mukai developey et al. that is Shue et al.,Kropotkin and Lui,Miyashita et al.,P\u2009=\u2009nkT, pressure varies with density, temperature, or both. Thermodynamically isothermal changes in pressure would result in variations in density at constant temperature, whereas adiabatic changes in pressure would result in temperature changes. Statistical studies examining the thermodynamics of the plasma sheet have concluded that the plasma sheet is predominantly thermodynamically adiabatic [e.g., Baumjohann and Paschmann,Goertz and Baumjohann,Pontius and Wolf,Erickson, Birn et al.,Baumjohann et al.,Huang et al.,\u03b5 function, we find that the increase in temperature with these components is comparable.Given that the magnetosphere's shape is defined by pressure balance between the solar wind and magnetospheric plasmas [e.g., Burlaga and Ogilvie, Richardson and Smith, Borovsky et al. [A relationship between the solar wind velocity and temperature has previously been shown [e.g., y et al. 
also shoNagai et al.,Kistler et al.,Case studies [Lui,Rae et al.,Walsh et al.,Angelopoulos et al.,Nishimura et al.,Sergeev et al.,Rae et al.,Cheng and Zaharia,Pritchett and Coroniti,Lui [Our results can be interpreted as showing that thermal energy can be added to the plasma sheet during the substorm growth phase without the need for reconnection or a rapid reconfiguration of the magnetosphere. While this additional energy may be small , it may be significant, particularly in controlling substorm onset. The physics controlling the onset of the magnetospheric substorm are still the subject of rich debate. Both plasma instabilities [e.g., iti,Lui showed tMorley et al.,Milan et al.,Our results have shown that plasma sheet temperature increases with increased solar wind driving, and that higher plasma sheet temperatures are associated with larger substorms. This link may not be causal but simply a reflection of the correlation between lobe magnetic flux and plasma sheet temperature and the correlation between stored lobe magnetic energy and substorm size [e.g., Using 9 years of Cluster tail observations, we have shown that the growth phase plasma sheet temperature is higher during intervals of higher solar wind driving. This is thermodynamically reasonable: work done on the plasma sheet by the increasing magnetic pressure in the lobes increases the internal energy of the plasma sheet; in an thermodynamically adiabatic magnetotail this energy cannot be easily and quickly extracted from the plasma sheet so its temperature rises. We note that we cannot fully deconvole the links between plasma sheet temperature and the components of the solar wind drivers that may increase the plasma sheet temperature through other mechanisms. Higher temperatures during intervals of high driving may increase the likelihood of the plasma sheet becoming susceptible to a number of instabilities thus increasing the likelihood of substorm onset. While the energy increase of the plasma sheet may be small compared to the total energy budget of a substorm, the thermodynamic processes and energization of the plasma sheet prior to substorm onset may be key determining onset times and substorm intensity."} {"text": "A suite of near-identical magnetite nanodot samples produced by electron-beam lithography have been used to test the thermomagnetic recording fidelity of particles in the 74\u2013333 nm size range; the grain size range most commonly found in rocks. In addition to controlled grain size, the samples had identical particle spacings, meaning that intergrain magnetostatic interactions could be controlled. Their magnetic hysteresis parameters were indicative of particles thought not to be ideal magnetic recorders; however, the samples were found to be excellent thermomagnetic recorders of the magnetic field direction. They were also found to be relatively good recorders of the field intensity in a standard paleointensity experiment. The samples' intensities were all within \u223c15% of the expected answer and the mean of the samples within 3% of the actual field. 
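For readers unfamiliar with how such paleointensity estimates are obtained, the conventional Thellier-type analysis (the double-heating protocol is detailed below) recovers the ancient field from the slope of the Arai plot of NRM remaining against laboratory TRM gained, with B_ancient = |slope| x B_lab. The sketch below is a generic illustration using fabricated demagnetization data, not the authors' analysis code.

```python
import numpy as np

def paleointensity_from_arai(nrm_remaining, trm_gained, b_lab_ut):
    """Best-fit Arai-plot slope (unweighted least squares) and the implied ancient field.

    nrm_remaining : NRM left after each temperature step (any consistent unit)
    trm_gained    : partial TRM acquired in the laboratory field at the same steps
    b_lab_ut      : laboratory field strength in microtesla
    Returns (slope, b_ancient_ut).
    """
    x = np.asarray(trm_gained, dtype=float)
    y = np.asarray(nrm_remaining, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)      # simple linear fit, for illustration only
    return slope, abs(slope) * b_lab_ut

# Fabricated example: an ideal sample treated in a 60 uT laboratory field whose NRM
# was originally acquired in a ~55 uT field (so the Arai slope is about -55/60).
trm = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
nrm = 1.0 - (55.0 / 60.0) * trm
slope, b_anc = paleointensity_from_arai(nrm, trm, b_lab_ut=60.0)
print(f"Arai slope = {slope:.3f}, estimated ancient field = {b_anc:.1f} uT")
```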
These nonideal magnetic systems have been shown to be reliable records of the geomagnetic field in terms of both direction and intensity even though their magnetic hysteresis characteristics indicate less than ideal magnetic grains.Nonideal magnetic systems accurately record field directionWeak-field remanences more stable than strong-field remanences Ozima and Ozima, Coe, Thellier and Thellier, Paterson et al., Dekkers and B\u00f6hnel, Muxworthy and Heslop, Extracting the directional information recorded by natural remanent magnetizations (NRM) of thermal origin has long been shown to be relatively reliable regardless of the magnetic domain state of the particles within a rock [Evans et al., Muxworthy et al., Heider and Bryndzia, King et al., Kr\u00e1sa et al., Kr\u00e1sa et al. [Kr\u00e1sa et al., Muxworthy et al., One practical way to resolve this problem is to experimentally quantify the behavior of PSD TRM, however, the experimental investigation of PSD behavior has its own set of problems: The geometry and size of a magnetic crystal strongly controls its magnetic properties, as does its spatial relationship with respect to other magnetic particles [a et al. reportedKr\u00e1sa et al. [Kr\u00e1sa et al. [The nanofabrication technique used in this study to make the new samples has been described extensively by a et al. . The sama et al. and are Muxworthy and Williams, Muxworthy et al., Muxworthy and Williams, Muxworthy and McClelland, Kr\u00e1sa et al., Kr\u00e1sa et al., Most of the samples are in the middle of the PSD range [The thermoremanence measurements reported in this paper were all conducted at the Kochi Core Center, Kochi University, Japan, using a combination of a Natsurhara-Giken TDS-1 paleomagnetic oven and a single-sample 2G magnetometer. For normalization purposes, a saturation isothermal magnetization (SIRM) was induced in a field of 1 T using a Magnetic Measurements MMPM10 pulse magnetizer.Before making the measurements the samples were vacuum-sealed in quartz-glass capsules. The samples were fixed to the inside of the capsules using Omega CC high-temperature cement. Before vacuum sealing, the samples had been stored in alcohol since last reduced.Kr\u00e1sa et al. [The samples were induced with a SIRM using a field of 1 T, followed by a TRM in a field of 60 \u00b5T on cooling from 650\u00b0C. Due to the shape of the quartz capsules both the SIRM and TRM could only be induced in the plane of the nanodots unlike the experiments reported by a et al. who measThe samples' in-plane TRMs were then stepwise thermally demagnetized . The meaCoe [Riisager and Riisager, Walton, Kr\u00e1sa et al., After the initial TRM experiments, six of the magnetically stronger samples were selected for a synthetic paleointensity study: DK0011, DK0023, DK0024-2, DK0124right, DK0127, and DK0131. The samples were first induced with a TRM in a field of 100 \u00b5T (the \u201cNRM\u201d), and a paleointensity study conducted following the standard double-heating protocol of Coe , with pTKr\u00e1sa et al., Arai plots with corresponding NRM demagnetization plots are shown for all the samples in Leonhardt et al. [q). Five out of the six samples passed the selection criteria; sample DK0131 (k) [Paterson, The results were analyzed with the ThellierTool v. 4.22) software of t et al. (Table2)2 software DK0131 f failed e DK0131 d, and itKr\u00e1sa et al. [Kr\u00e1sa et al. 
[During thermal demagnetization of the samples' thermoremanences, all the samples were found to be reliable recorders of the inducing-field direction , i.e., sa et al. who founa et al. found thRS/MSM ratios over the susceptibility of ARM (\u03c7ARM) , and they are again less likely for the larger particles during TRM acquisition.As the calculation of TRM per unit mass relies on a number of assumptions, we also plot the susceptibility of TRM (\u03c7M (\u03c7ARM) b. The ARa et al. . For theThese differences between the EBL, crushed and hydrothermally grown samples shown in Day et al., Carvallo et al., Kr\u00e1sa et al., The samples displayed wide TRM unblocking spectra and hysteresis behavior that are more akin to MD behavior , howeverThere are a number of factors that question the universality of the laboratory paleointensity study in this paper. First, for example, there is uncertainty in the magnitude of the effect of aligning the TRM field direction with the initial NRM direction in the paleointensity experiment on the final intensity estimate, though it is likely it would improve the accuracy. In future studies, it would be worth repeating these experiments for a range of different angles, unfortunately, that was outside the scope of this study as there was evidence that the samples may have chemically altered during the first paleointensity experiment. Second, the cooling rate for these samples was the same in both the NRM acquisition and TRM acquisition. Future laboratory experiments could investigate this, but it is difficult to generate geologically comparable long cooling times in the laboratory. Certainly, the viscous decay of TRM in such samples should be investigated in the future.Kr\u00e1sa et al. [Ten samples produced by electron-beam lithography with near-identical grains in the pseudo-single domain size range have been induced with thermoremanences, and their thermomagnetic properties examined including their ability to record reliable paleointensity information. They were found to be reliable recorders of both the intensity and direction of the geomagnetic field. On comparison with a et al. it is se"} {"text": "Helical polymers are found throughout biology and account for a substantial fractionof the protein in a cell. These filaments are very attractive for three-dimensionalreconstruction from electron micrographs due to the fact that projections of thesefilaments show many different views of identical subunits in identical environments.However, ambiguities exist in defining the symmetry of a helical filament when onehas limited resolution, and mistakes can be made. Until one reaches a near-atomiclevel of resolution, there are not necessarily reality-checks that can distinguishbetween correct and incorrect solutions. A recent paper in eLife almost certainly imposed an incorrect helical symmetry and this can be seen usingfilament images posted by Xu et al. A comparison between the atomic model proposedand the published three-dimensional reconstruction should have suggested that anincorrect solution was found.DOI:http://dx.doi.org/10.7554/eLife.04969.001 Helical polymers are ubiquitous in biology, and are found extensively in viruses , bacteriThe reconstruction of the MAVS filament presented in Wu et al. had a stated resolutionof 3.6 \u00c5 and had a helical symmetry of a rotation of \u2212101.1\u00b0 (thenegative rotation corresponding to a left-handed helix) and a rise of 5.1 \u00c5 persubunit. In contrast, the reconstruction of Xu et al. 
had a stated resolution of 9.6\u00c5, a C3 point group rotational symmetry, with a rotation of \u221253.6\u00b0 anda rise of 16.8 \u00c5 for every ring of three subunits. One of the differences betweenthe two reconstructions is that in Wu et al. not only are \u03b1-helices clearlyresolved, but bulky aromatic side-chains can be seen that are consistent with not onlythe sequence but a crystal structure of the subunit which can be fit quite well as arigid body into the reconstruction. In the Comment appended to their paper , Xu et aTheir first argument for a different helical symmetry is that a layer line at\u223c1/(17 \u00c5) in their power spectrum has intensity on the meridian, while thecorresponding layer line in Wu et al. does not and is clearly arising from an n =\u22121 Bessel order. Is it possible that the sorting I have done for out-of-plane tilt is completely wrong,since it has involved the application of a helical symmetry, and the power spectrum inThe second argument advanced by Xu et al. in their Comment is that the symmetry used byWu et al. is unstable in the Iterative Helical Real Space Reconstruction (IHRSR)approach when applied to their filaments, and therefore their filaments must have adifferent symmetry. Since I developed the IHRSR method , I have The third and last argument advanced by Xu et al. in their Comment is that theirreconstruction has a hole in the center, while the model of Wu et al. that has beenbuilt into the 3.6 \u00c5 resolution reconstruction would have no such hole. Theycorrectly state that such a hole can be seen in their filaments independently of thesymmetry simply by cylindrically averaging their 3D density map. If one takes thecryo-EM density map deposited by Wu et al. (EMDB-5922) and filters it to 12 \u00c5resolution a hole is seen in the center , arrow. I have thus shown that the three arguments for why their filaments are different fromthose of Wu et al., advanced by Xu et al. in their Comment, are not true. This does notestablish, however, that the filaments are the same. Establishing that would require ahigh resolution reconstruction from the filaments of Xu et al. which I fear would beimpossible. But we can look at their published reconstruction, which gets to the mostimportant point: given all of the potential problems and ambiguities in helicalreconstruction, how does one know that a structure is correct? The great success ofprotein x-ray crystallography lies in the fact that structures are being solved that allhave a known stereochemistry. One has prior knowledge about what an \u03b1-helix and a\u03b2-sheet look like, as well as knowing the amino acid sequence that must be builtinto a model. At lower resolution by EM most of these reality-checks are simply absent,and one needs to generate methods, such as the tilt-pair validation used forsingle-particle cryo-EM that canI show a comparison between their actual reconstruction , and theThis comparison between a map and a model can be made quantitatively, and In conclusion, the arguments of Xu et al. for a different symmetry in their filamentsthan that in Wu et al. collapse completely when one actually examines the publiclydeposited data.http://pdbe.org/empiar) from The micrographs in set04 within the EMPIAR-10014 deposition ( reviewprocess). Similarly, the author response typically shows only responsesto the major concerns raised by the reviewers.eLife posts the editorial decision letter and author response on a selection of thepublished articles . 
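The two competing symmetries can be compared directly by writing out the subunit lattice each one implies. The short sketch below is a generic illustration, not reconstruction code from either study: it lists the axial rise and azimuth of successive subunits for the Wu et al. parameters (-101.1 degree twist, 5.1 A rise per subunit) and for the Xu et al. parameters (C3 rings with a -53.6 degree twist and 16.8 A rise per ring of three).

```python
import numpy as np

def helical_lattice(twist_deg, rise_ang, n_units, cn=1):
    """Return (z, phi) positions [Angstrom, degrees] of subunits generated by a helical
    symmetry of `twist_deg` / `rise_ang` per asymmetric unit, with an optional Cn
    rotation applied to each unit (cn copies per ring)."""
    positions = []
    for k in range(n_units):
        z = k * rise_ang
        phi = (k * twist_deg) % 360.0
        for c in range(cn):
            positions.append((z, (phi + 360.0 * c / cn) % 360.0))
    return positions

# Symmetry reported by Wu et al.: one subunit per asymmetric unit.
wu = helical_lattice(twist_deg=-101.1, rise_ang=5.1, n_units=4)
# Symmetry reported by Xu et al.: a C3 ring of three subunits per asymmetric unit.
xu = helical_lattice(twist_deg=-53.6, rise_ang=16.8, n_units=2, cn=3)

print("Wu et al. lattice (z [A], phi [deg]):", [(round(z, 1), round(p, 1)) for z, p in wu])
print("Xu et al. lattice (z [A], phi [deg]):", [(round(z, 1), round(p, 1)) for z, p in xu])
print("Subunits per turn implied by the Wu et al. twist:", round(360.0 / 101.1, 2))
```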
An edited version of theletter sent to the authors after peer review is shown, indicating the substantiveconcerns or comments; minor concerns are not usually shown. Reviewers have theopportunity to discuss the decision before the letter is sent , Wes Sundquist (Reviewingeditor), and three reviewers, of whom Niko Grigorieff and Carsten Sachse have agreed toreveal their identity. A third reviewer remains anonymous.Thank you for sending your work entitled \u201cAmbiguities in HelicalReconstruction\u201d for consideration at The Reviewing editor and the three reviewers discussed their comments before we reachedthis decision, and the Reviewing editor has assembled the following comments to help youprepare a revised submission.eLife . Thestudy by Xu et al. was published before a related study was published by Wu et al. inMolecular Cell . The situation merits follow-up because the two reported MAVS structures areentirely different. There is no dispute that the Wu et al. study contained high qualityEM data, that their reconstruction was technically correct, and that they successfullygenerated a near-atomic resolution structure of their MAVS filaments. The centralquestion is whether differences in sample preparation generated distinct MAVS filamentswith different physical structures, or whether the differences in the reportedstructures instead reflect methodological problems in the Xu et al study. Dr Jiang has posted a Comment thataccompanies the original eLife manuscript in which he argues that thereported differences in the filament helical symmetries and structures reflect genuinephysical differences in the MAVS filaments generated by the two groups. Here, Egelmananalyzes the central claims of that Comment and also discusses the validity of theoriginal Xu et al. reconstruction.Dr Egelman analyzes the validity of a cryoEM helical reconstruction of the prion-likefilament formed by the CARD domain of the RIG-I adaptor molecule MAVS, as reported by Xuet al. in Egelman downloaded part of the original data deposited by Xu et al. in the EMPIAR database and performed image analyses to test the arguments put forth by Jiang that thefilaments used in the two studies were indeed different. He shows convincingly that thethree central pieces of evidence presented in the Jiang Comment are invalid. He makesseveral important points, including that Xu et al. did not appreciate that theirfilaments had significant out-of-plane tilt and that convergence stability in a symmetrysearch is not sufficient to establish the validity of that symmetry. Importantly, healso shows that the Xu et al. structure does not agree with its own PDB model beyond aresolution of \u223c20 \u00c5, suggesting that the computed EM density is not correct.These arguments convincingly rebut the points raised in Jiang's Comment, revealpotential inconsistencies in the processing procedure used by Xu et al., and raisesubstantial doubts about the validity of the Xu et al. reconstruction. These analysesare therefore an important contribution.The Egelman analysis could be strengthened even further by providing additional evidencethat Xu et al. must have imposed the wrong helical symmetry. The author could do this bycomputing a structure with the symmetry parameters from Wu et al. imposed on the EMPIARdata set from Xu et al. . Such a structure shouldcontain clearer densities and recognizable secondary structure features. [\u2026] The Egelman analysis could be strengthened even further by providingadditional evidence that Xu et al. 
must have imposed the wrong helical symmetry. Theauthor could do this by computing a structure with the symmetry parameters from Wu etal. imposed on the EMPIAR data set from Xu et al. . Such a structure should contain clearer densities and recognizablesecondary structure features.I have been able to improve the resolution that I initially obtained with their imagesby excluding those with a defocus greater than 3.0\u03bc. This reduced the data setfrom \u223c64k segments to \u223c15k segments, but the reconstruction is clearlyimproved. I have expanded"} {"text": "The ribution , 4.et al. conducted a screen using a colon cancer cell line to identify small molecules in a National Cancer Institute (NCI) chemical library that could induce TRAIL on tumor cells and thereby activate TRAIL receptors via an autocrine or paracrine mechanism [http://oncoceutics.com/).To overcome the lack of activity of exogenous TRAIL ligands, Allen echanism . This scechanism . TIC10/Oechanism . With adet al. [et al. synthesized the reported structure to study its mechanism of action, it had no activity [et al. using NMR and X-ray crystallography [The isomeric structure of TIC10/ONC201 is critical to its activity , 7. The et al. , and theactivity . Subsequactivity . Interesactivity . The disDespite the structural mis-assignment of TIC10/ONC201, the preclinical studies support the therapeutic potential of TIC10/ONC201. Elucidation of the structure of TIC10/ONC201 will facilitate its preclinical and clinical development. Indeed, the inactive isomers begin to provide structure/function insight. As this research moves forward, however, the challenge will be to address the hurdles that have hindered development of TRAIL receptor agonists. Predictive biomarkers that identify sensitive and/or resistant cells will need to be developed so that studies can be targeted to cohorts of patients most likely to benefit from TIC10/ONC201 . In addi"} {"text": "Streptococcus, Enterococcus and Lactococcus contain large amounts of the sugar rhamnose, which is incorporated in cell wall-anchored polysaccharides (CWP) that possibly function as homologues of well-studied wall teichoic acids (WTA). The presence and chemical structure of many rhamnose-containing cell wall polysaccharides (RhaCWP) has sometimes been known for decades. In contrast to WTA, insight into the biosynthesis and functional role of RhaCWP has been lacking. Recent studies in human streptococcal and enterococcal pathogens have highlighted critical roles for these complex polysaccharides in bacterial cell wall architecture and pathogenesis. In this review, we provide an overview of the RhaCWP with regards to their biosynthesis, genetics and biological function in species most relevant to human health. We also briefly discuss how increased knowledge in this field can provide interesting leads for new therapeutic compounds and improve biotechnological applications.The composition of the Gram-positive cell wall is typically described as containing peptidoglycan, proteins and essential secondary cell wall structures called teichoic acids, which comprise approximately half of the cell wall mass. The cell walls of many species within the genera This review summarizes new insights into the genetics and function of rhamnose-containing cell wall polysaccharides expressed by lactic acid bacteria, which includes medically important pathogens, and discusses perspectives on possible future therapeutic and biotechnological applications. 
N-acetylglucosamine (GlcNAc) and N-acetylmuramic acid (MurNAc) polysaccharide strands that are cross-linked by short peptides to form a three-dimensional network. Individual species tailor important physical properties of their peptidoglycan such as elasticity and porosity through the composition of the peptide bridge, the amount of crosslinking and chemical modifications of the composing glycan residues that surrounds the cell membrane , such as capsular polysaccharides and wall teichoic acids (WTA), are anchored to peptidoglycan GlcNAc or MurNAc, covering the bacterium with a layer of glycans that is directly exposed to the environment and Streptococcus agalactiae (Group B Streptococcus), have emphasized the critical role of these molecules in cell wall biogenesis and pathogenesis . L-rhamnose is commonly found in bacteria but is not used or produced by humans and Streptococcus lactis (Lancefield Group N). Today, more than 150 different species are known within these three genera, which remain classified within the order Lactobacillales . We focus here on the species relevant to food production and human health and with experimental evidence for the presence of RhaCWP: Lancefield Groups A, B, C, E and G Streptococcus represented by S. pyogenes, S. agalactiae, Streptococcus equi subsp. zooepidemicus, Streptococcus mutans and S. dysgalactiae subsp. equisimilis, respectively, as well as E. faecalis and L. lactis.Although the direct correlation between Lancefield serotyping and species clearly no longer persists, bioinformatic analyses of genome sequences indicates that many of the species within these three genera express RhaCWP versus a 2 side chain in the Group C Carbohydrate (GCC), respectively and G carbohydrate (GGC), rhamnose is the major antigenic determinant acetylation and hydroxylation. Their biosynthesis requires the coordinated action of glycosyltransferases (enzymes that link monosaccharides), transporters and metabolic enzymes for the production of nucleotide-sugar precursors. Often, genes encoding proteins required for glycoconjugate biosynthesis are clustered on the chromosome. Their identification allows subsequent structure\u2013function studies that help to understand the role of glycosylation in bacterial (infection) biology.et\u00a0al. et\u00a0al. et\u00a0al. et\u00a0al. et\u00a0al. S. agalactiae, the availability of the GBC structure and genome sequence enabled a comprehensive and detailed in silico analysis that linked protein-encoding genes to the different glycosidic linkages in the GBC structure , Salmonella enterica (RmlC and RmlD) and Streptococcus suis (RmlB and RmlC), demonstrated that rhamnose biosynthesis enzymes require dimerization or even dimerization of dimers to catalyze the respective enzymatic reactions elongation of the polysaccharide (building block) on the lipid carrier by sequential addition of activated sugar precursors, (3)\u00a0translocation of lipid-linked precursors, either repeating units or the complete glycoconjugate, across the membrane by ABC transporters or \u2018flippases\u2019 and (4) by a dedicated polymerase. Since undecaprenylphosphate serves as a common scaffold to build structurally diverse glycoconjugates including capsules, peptidoglycan, lipopolysaccharides and protein-modifying glycans, the availability of \u2018free\u2019 undecaprenylphosphate is essential for bacterial survival genes in E. 
coli and subsequent disruption of the ABC transporter-encoding genes (rgpCD), the bacteria were unable to produce rhamnan as well as two integral membrane proteins (encoded by gbs1483 and gbs1490) that may act as accessory proteins . However, the presence of an additional glycosyltransferase compared to the GAC gene cluster in S. pyogenes suggests some fine structural variation. Likewise, the GBC has been reported to be expressed by different streptococcal species most notably Streptococcus porcinus, Streptococcus pseudoporcinus, Streptococcus troglodytidis and Streptococcus plurextorum. Correspondingly, the genomes of S. pseudoporcinus and S. porcinus contain a fully syntenous GBC biosynthetic gene cluster except for the lack of a gbs1485 orthologue (unpublished observations). Presumably, the expressed structures only lack one of the monosaccharide rhamnose side-branches present in the GBC repeating unit and this species is non-groupable by Lancefield serotyping assays. Instead of gbs1481 and gbs1485, the S. thoraltensis locus harbors two additional glycosyltransferases absent in the otherwise syntenous locus of S. agalactiae . Absence of a gbs1481 orthologue is likely responsible for abrogated cross-reactivity in the Group B antigen serotyping tests due to the loss of the terminal rhamnose from the dominant trirhamnosyl epitope. This analysis therefore suggests that S. thoraltensis is capable of synthesizing a RhaCWP that is a structural variant of the GBC. Increased availability of genome sequences for streptococci and related species will help identify additional rmlD-linked gene clusters for RhaCWP biosynthesis that are consistent with either known Lancefield serotyping reactions (as exemplified here by S. castoreus) or from which variant or novel RhaCWP structures can be predicted.As mentioned above, the Lancefield typing scheme is unable to discriminate bacteria up to the species level. The availability of genome sequences, as well as knowledge regarding RhaCWP gene clusters, provides an opportunity to gain insight into the distribution and potential structure of RhaCWP among Gram-positive bacteria. For example, nit Fig. of S. agalactiae to initiate functional studies on the GBC. The resulting GBC-negative S. agalactiae strain was devoid of cell wall rhamnose and phosphate and lost expression of the pellicle structure by tunicamycin resulted in depletion of GAC from the cell wall, increased mutanolysin susceptibility and increased chain length as a result of cell separation defects (van Sorge et\u00a0al. S. agalactiae gbcO mutant increases capsule production suggesting a coordinated regulation between the two glycoconjugates (Beaussart et\u00a0al. Since their identification and structural characterization from the 1930s onwards, the biological roles of the Lancefield Group antigens or other RhaCWP have received little attention. It was long thought that Group-specific antigens were only of structural importance McCarty . Indeed,. et\u00a0al. targetedet\u00a0al. et\u00a0al. 2 side chain of the GCC serves as an attachment site for Group C1 bacteriophage in Group C Streptococcus (Krause 2 epitope, was isolated from Group C Streptococcus that survived exposure to Group C1 lytic phages (Krause S. pyogenes, the GAC-specific GlcNAc terminal moiety appears to be involved in both lytic and temperate phage adsorption although additional unidentified cell wall factors are also involved (Fischetti and Zabriskie S. mutans strains (Shibata, Yamashita and van der Ploeg L. 
lactis, selection of phage resistant strains from a random insertional mutagenesis library originally identified the presence and genetic locus of the RhaCWP (Dupont et\u00a0al. et\u00a0al. et\u00a0al. et\u00a0al. E. faecalis Epa through genetic manipulation greatly affect phage sensitivity despite similar adsorption levels (Teng et\u00a0al. et\u00a0al. L. lactis, knowledge regarding the molecular mechanisms of phage adsorption and infection may benefit the food industry.In addition to their significant role in cell wall architecture, it is appreciated that RhaCWP are important phage receptors for many species. This again highlights a parallel with WTA in other Gram-positive bacteria where WTA is critical to phage-mediated horizontal gene transfer (Baptista, Santos and Sao-Jose s Krause . Corresps Krause . Also foet\u00a0al. et\u00a0al. S. pyogenes and E. faecalis now highlight that subtle modifications to the RhaCWP structure, which do not impact bacteria growth, can significantly impact virulence (Xu et\u00a0al. et\u00a0al. et\u00a0al. E. faecalis, disruption of epaB, epaE, epaM and epaN, which may completely eliminate Epa expression or only modify its structure, all caused significant attenuation in a mouse peritonitis model (Xu et\u00a0al. et\u00a0al. et\u00a0al. L. lactis, loss of the pellicle results in 10-fold more efficient uptake by macrophage cell lines compared to wild-type bacteria (Chapot-Chartier et\u00a0al. L. lactis and E. faecalis. For S. pyogenes, structure\u2013function studies focused on the role of the GAC GlcNAc side chain, which was selectively removed through genetic mutation of the glycosyltransferase-encoding gene gacI (van Sorge et\u00a0al. . The gacI mutant bacteria still expressed the rhamnan backbone but did not display apparent cell wall abnormalities. However, bacteria were increasingly susceptible to innate immune clearance by neutrophils and antimicrobial components (van Sorge et\u00a0al. et\u00a0al. et\u00a0al. et\u00a0al. et\u00a0al. et\u00a0al. et\u00a0al. et\u00a0al. et\u00a0al. The localization of RhaCWP at the host\u2013pathogen interface suggests that their biological function might be broader than a role in cell wall biogenesis. Similarly for WTA, evidence is accumulating for its role in virulence by increasing adherence and immune evasion (Carvalho S. pneumoniae capsule as a key virulence factor has been established since the 1928 landmark \u2018Griffiths experiment\u2019 and the capsular polysaccharides are major protective antigens utilized in current vaccine formulations (Geno et\u00a0al. rmlABCD genes (Bentley et\u00a0al. S. pneumoniae population and at least nine of the serotype antigens included in the current 23-valent vaccin are rhamnose-containing polysaccharides (Geno et\u00a0al. Rhamnose is not just incorporated in RhaCWP, but is also present in capsular polysaccharides. We consider this structure distinct from RhaCWP given their localization in the cell wall; RhaCWP are interpolated within the peptidoglycan wall layer, whereas capsular polysaccharides are typically the outermost layers of the cell envelope. Nevertheless, it is worth noting that several clinically relevant Gram-positive bacteria synthesize rhamnose-containing capsules. The significance of the S. pneumoniae, S. agalactiae strains belonging to serotype VIII contain rhamnose within the polysaccharide repeating unit (Kogan et\u00a0al. et\u00a0al. et\u00a0al. et\u00a0al. In addition to et\u00a0al. P. aeruginosa with some marginal activity against M. 
tuberculosis (Alphey et\u00a0al. S. aureus, where inhibition of the UDP-GlcNAc:lipid phosphate transferase TarO is not detrimental to the bacterium, but render the bacterium avirulent (Sewell and Brown S. aureus (Campbell et\u00a0al. Increased knowledge regarding the biosynthesis and function of RhaCWP could aid the development of new antimicrobial agents but may also have applications in metabolic engineering to optimize food production (Chapot-Chartier S. pyogenes and S. agalactiae. Indeed, different strategies are currently explored to develop protective vaccines against these streptococcal species (Dale et\u00a0al. S. agalactiae (Johri et\u00a0al. S. pyogenes (Dale et\u00a0al. et\u00a0al. S. pyogenes vaccine antigen. Conjugate vaccines of either isolated or synthetic GAC protect mice from subsequent infection after active and passive immunization (Sabharwal et\u00a0al. et\u00a0al. et\u00a0al. et\u00a0al. et\u00a0al. in vitro and protected mice from lethal challenge with wild-type S. pyogenes through passive immunization (van Sorge et\u00a0al. RhaCWP are also attractive vaccine candidates due to their conserved and constant expression in species of medical importance, such as et\u00a0al. Finally, dissection of the RhaCWP biosynthesis pathway could benefit food production, most notably the dairy industry that uses lactic acid bacteria for food fermentations (Chapot-Chartier Despite their long history in streptococcal diagnostics, investigations on the biological roles and possible applications of RhaCWP have lagged behind. Genome sequencing has initiated genetic studies to elucidate structure\u2013function relationships of RhaCWP, highlighting their critical importance in proper cell wall architecture and pathogenesis. Their indispensable nature identifies the RhaCWP biosynthesis pathway as an attractive therapeutic target for antimicrobial drug development. Spin offs will likely find applications in the area of metabolic engineering for food production and biomedical purposes.Supplementary DataClick here for additional data file."} {"text": "To evaluate the follicular dynamics, superovulatory response, and embryo recovery following superstimulatory treatment initiated at estradiol-17\u03b2 induced follicular wave emergence and its comparison with conventional superstimulatory protocol in buffaloes.th day of the estrous cycle with same FSH dose regimen and similar timings for PGF2\u03b1 injections. In both groups, half of the buffaloes were treated with luteinizing hormone (LH) 25 mg and other half with 100 ug buserelin; gonadotrophin releasing hormone (GnRH) analog at 12 h after the end of FSH treatment. All buffaloes in both protocols were inseminated twice at 12 and 24 h of LH/GnRH treatment. Daily ultrasonography was performed to record the size and number of follicles and superovulatory response.Six normal cycling pluriparous buffaloes, lactating, 90-180 days post-partum, and weighing between 500 and 660 kg were superstimulated twice with a withdrawal period of 35 days in between two treatments. In superstimulation protocol-1 (estradiol group) buffaloes were administered estradiol-17\u03b2 and eazibreed controlled internal drug release (CIDR) was inserted intravaginally (day=0) at the random stage of the estrous cycle. On the day 4, buffaloes were superstimulated using follicle stimulating hormone (FSH) 400 mg, divided into 10 tapering doses given at 12 hourly intervals. 
A prostaglandin F2α analog (PGF2α) was administered at day 7.5 and day 8, and the CIDR was removed with the second PGF2α injection. In superstimulation protocol-2, buffaloes were superstimulated on the 10th day of the estrous cycle. A significantly higher number of small follicles (<8 mm) was present at the time of initiation of superstimulatory treatment in the estradiol group compared to the conventional group; however, the number of ovulatory size follicles (≥8 mm) did not differ significantly between the respective groups. Total embryos and transferable embryos recovered were non-significantly higher in the estradiol group compared to the conventional group. A significantly higher proportion of transferable embryos was recovered in buffaloes treated with LH compared to GnRH. The average number of ovulatory size follicles (>8 mm), corpora lutea, and transferable embryos was higher in buffaloes superstimulated at estradiol-induced follicular wave emergence compared to the conventional protocol; further, the percentage of transferable embryos was significantly higher in buffaloes administered LH compared to GnRH. Howevern cattle ,3. In bun cattle ,5. Similn cattle reportedth day), to coincide with the approximate time of emergence of the second follicular wave. It has been observed that asynchrony of even 1 day between wave emergence and initiation of superstimulatory treatment may significantly reduce the superovulatory response [et al. [So far, conventional superstimulatory protocols in buffaloes have involved initiation of superstimulatory treatment during mid-cycle or hormonal approaches (reviewed in ). The present study, being part of a larger study for a doctorate thesis, had approval from the Institutional Animal Ethics Committee. The embryo collection was done as per standard procedure without harming the animals. Six normal cycling healthy buffaloes, with body weights of 500-660 kg, 90-180 days post-partum, superstimulated twice with a withdrawal period of 35 days in between the two treatments, were used for the study. Superstimulation was initiated on the day of follicular wave emergence using follicle stimulating hormone (FSH) 400 mg NIH-FSH-P1, given daily intramuscularly in divided, tapering doses over a period of 5 days. All the buffaloes were given two injections of prostaglandin F2α analog (PGF2α) cloprostenol 500 μg at 84 and 96 h after the start of superstimulatory treatment, and the CIDR was removed at the time of the second prostaglandin injection. In Group I (estradiol group), buffaloes were administered estradiol-17β along with insertion of an eazibreed controlled internal drug release (CIDR) device (designated as day 0), and the superstimulatory treatment was initiated on the 4th day of induced estrus using the same dose regimen of FSH as above. Similarly, these buffaloes were given two injections of PGF2α at 84 and 96 h after the start of superstimulatory treatment. In Group II (conventional superstimulation protocol), buffaloes (n=6) were superstimulated as per the conventional protocol by initiating the superstimulatory treatment on the 10th day of the estrous cycle. In both Groups I and II, half of the buffaloes were administered 25 mg luteinizing hormone (LH), and half were administered 100 μg buserelin (a gonadotrophin releasing hormone (GnRH) analog) at 12 h after the end of superstimulatory treatment.
The fixed time insemination was done at 12 and 24 h of LH/GnRH treatment.Ultrasonography was carried out daily using ultrasound machine equipped with B-mode linear array trans-rectal probe (7.5 MHz) from the start of experiment till the day of embryo recovery to record the size and the number of follicles followed with number of ovulations and luteal dynamics. Embryos were collected by flushing the uterus of the donor animals non-surgically on day 5.5 using two-way Worrlien catheter using flushing media - Dulbecoo\u2019s phosphate buffered saline supplemented with 0.4% bovine serum albumin and antibiotics at standard rates. Embryos were searched and evaluated morphologically using zoom stereomicroscope and were graded as per the manual of International Embryo Transfer Society .t-test (two-tailed) was applied to compare mean values of treatments in SPSS-16 Statistical program. A p=0.05 was considered as significant.The data are presented as means and standard errors for all variables. After confirming the normality of data and homogeneity of variance, Student\u2019s All the buffaloes responded to superstimulatory treatment by developing multiple corpora lutea (CLs) on the ovaries. The results are presented in No significant differences were observed in percent ovulation , the number of CLs on day 5 post artificial insemination (AI) , embryo recovery rate (47.95% vs. 45.16%), and total number of recovered embryos in the estradiol group compared to the conventional group, respectively. Further, a mean number of transferable embryos recovered were non-significantly higher in the estradiol group compared to the conventional group .The administration of LH or GnRH did not have any significant effect on ovulation rate and a number of embryos recovered in buffaloes . HoweverIn vivo embryo production could be a viable option in buffaloes on the identification of good donors. However, the detection of estrus is more difficult in buffaloes than cattle which made implementation of the conventional superstimulatory protocols more difficult in buffaloes resulting in poor success rates. The initiation of superstimulatory treatment following exogenous control of follicular wave emergence using estradiol-17\u03b2 could eliminate the need for estrus detection and avoid the unnecessary waiting period until mid-cycle to initiate the conventional protocol. Therefore, the present work was undertaken to study the follicular dynamics, superovulatory response, embryo recovery following superstimulation protocol initiated at the beginning of estradiol-17\u03b2-induced follicular wave emergence and its comparison with the conventional protocol in buffaloes.et al. [et al. [et al. [et al. [et al. [The number of small follicles (\u22648 mm) present at the time of initiation of superstimulatory treatment was significantly higher in the estradiol group compared to the conventional group . Similaret al. and Carv [et al. who show [et al. . However [et al. ,16. Howe [et al. and Camp [et al. reportedThe reason for higher number of follicles reaching ovulatory size (\u22658 mm) in the estradiol group compared to the conventional group in the present study could be emergence of a new follicular wave 4 days later following administration of estradiol-17\u03b2 which leet al. [et al. [et al. [In general, the both groups showed higher ovulation rate (74% and 84%) in the present study. The similar high ovulation rate in superstimulated buffaloes has been reported by Lipinski et al. . The mod [et al. . However [et al. ,21. 
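As a minimal illustration of the statistical comparison described above (two-tailed Student's t-test on treatment means, significance at p = 0.05, results reported as mean ± SE), the following sketch reproduces the same kind of comparison in Python with scipy. The per-animal counts are hypothetical, since only group summaries are reported, and the original analysis was performed in SPSS-16.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Two-tailed Student's t-test between two treatment groups, reported as mean +/- SEM."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    t, p = stats.ttest_ind(a, b, equal_var=True)        # classical Student's t-test
    sem = lambda x: x.std(ddof=1) / np.sqrt(x.size)      # standard error of the mean
    print(f"group 1: {a.mean():.2f} +/- {sem(a):.2f}   group 2: {b.mean():.2f} +/- {sem(b):.2f}")
    verdict = "significant" if p < alpha else "not significant"
    print(f"t = {t:.2f}, p = {p:.3f} -> {verdict} at p = {alpha}")
    return t, p

# Hypothetical per-animal embryo counts (n = 6 per group), for illustration only.
estradiol_group    = [9, 7, 11, 8, 10, 9]
conventional_group = [6, 8, 5, 7, 9, 6]
compare_groups(estradiol_group, conventional_group)
```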
Pate [et al. also repth day post AI in both groups was higher than reported by Misra and Joshi [et al. [The superovulatory response in terms of number of CLs on the 5nd Joshi and Agar [et al. . Misra [ [et al. in a stuth day post AI could be due to the higher number of follicles reaching ovulatory size (\u22658 mm) in the estradiol group compared to the conventional group. The presence of higher number of dominant follicles at the initiation of superstimulatory protocol in the conventional group could be another reason. The decreased superovulatory response due to the presence of dominant follicle has also been reported by others [et al. [The comparatively higher number of CLs in the estradiol group than the conventional group on the 5y others . Li et a [et al. reportedet al. [et al. [The moderate embryo recovery rate of ~45-48 was observed in the present study. The recovery rate achieved in the present study was higher than (~34%) reported by Baruselli et al. and Carv [et al. but lowe [et al. also rep [et al. , failure [et al. , fragile [et al. higher n [et al. ,28, howe [et al. was unsu [et al. . The exaet al. [et al. [et al. [The average number of embryos recovered in the present study was higher in the estradiol group than the conventional group. The two studies in the literature have reported the recovery of similar or higher number of embryos. Misra et al. have rec [et al. reported [et al. reported [et al. and admi [et al. has alsoet al. [et al. [et al. [The average number of transferable embryos recorded in the estradiol group was non-significantly higher than the conventional group. Misra et al. reportedet al. , >4 tranet al. and 7 tret al. has alsoet al. ,20,21. T [et al. , Heleil [et al. and lowe [et al. achievedet al. [Differences in superovulatory response, embryo recovery and a number of transferable embryos were non-significant between buffaloes administered LH or GnRH in the present study . Howeveret al. reportedet al. .et al. [et al. [et al. [et al. [Techakumphu et al. reported [et al. did not [et al. reported [et al. observedSignificantly higher number of growing follicles was present in the estradiol group at the initiation of superstimulatory treatment compared to the conventional group. Superovulatory response in terms of number of CLs, total, and transferable embryos tend to be higher in the estradiol group compared to the conventional protocol indicating that the estradiol based protocol could be successfully used for superstimulation in buffaloes. In addition to this, the administration of LH in buffaloes subjected to fix timed AI following superstimulation produced a significantly higher percentage of transferable embryos compared to administration of GnRH. Therefore, the results of the present study indicated that initiation of superstimulatory treatment subsequent to the synchronization of follicular wave emergence by estradiol-17\u03b2 and fixed timed AI could be successfully used for embryo production in buffaloes.The present study was part of NS\u2019s PhD dissertation. The work was designed by GSD and PSB. The execution of experimentation protocol and performing ultrasound scanning was done by NS, VSM, and MH. The lab work was performed by NS, SS, and DD. Statistical analysis and drafting of the manuscript were done by NS, GSD, and DD. All authors read and approved the final manuscript."} {"text": "Essential hypertension (EH) is a complex, polygenic condition with no single causative agent. 
Despite advances in our understanding of the pathophysiology of EH, hypertension remains one of the world\u2019s leading public health problems. Furthermore, there is increasing evidence that epigenetic modifications are as important as genetic predisposition in the development of EH. Indeed, a complex and interactive genetic and environmental system exists to determine an individual\u2019s risk of EH. Epigenetics refers to all heritable changes to the regulation of gene expression as well as chromatin remodelling, without involvement of nucleotide sequence changes. Epigenetic modification is recognized as an essential process in biology, but is now being investigated for its role in the development of specific pathologic conditions, including EH. Epigenetic research will provide insights into the pathogenesis of blood pressure regulation that cannot be explained by classic Mendelian inheritance. This review concentrates on epigenetic modifications to DNA structure, including the influence of non-coding RNAs on hypertension development. Hypertension (HT) affects more than 1 billion people globally and is a major risk factor for stroke, chronic kidney disease, and myocardial infarction . The pubEpigenetic modification is recognised as essential in biological processes and in recent years there has been studies investigating its role in the development of specific pathologic conditions. Research into epigenetic mechanisms will provide insights into the pathogenesis of blood pressure regulation that cannot be explained by classic Mendelian inheritance. Previous reviews in this field report on the various epigenetic mechanisms impacting CVD ,5,6 or tEpigenetics refers to all heritable changes to the regulation of gene activity, without changes to the DNA sequence itself . In otheThe launch of international initiatives, the Human Epigenome Project , which fS-adenosyl-l-methionine is bound to position 5 of the cytosine ring, forming 5-methyl-cytosine (5mC) .,19.10,19et al. [IGFBP3, KCNK3, PDE3A, PRDM6) and renal function [The most robust data on the involvement of methylation in blood pressure regulation comes from the study by Kato et al. . Kato anet al. . An inveet al. , suggest7, TBX2) . InvestiHSD11B2 gene is hypermethylated [et al. [et al. [HSD11B2 promoter methylation was associated with EH, with parallel disruptions to the THF/THE ratio.Initial DNA methylation research centred on correlating EH with the global level of 5mC , but morthylated ,28. The thylated ,28, in a [et al. . Friso e [et al. also fouPRCP gene encodes a lysosomal prolylcarbopeptidase protein, which is implicated in cleavage of c-terminal amino acids linked to proline in peptides such as angiotensin II and III [PRCP gene was hypomethylated in EH subjects [Atgr1a) were analysed in both spontaneously hypertensive rat (SHR) and its normotensive control, the Wistar Kyoto rat (WKY), expression of Atgr1a was significantly increased by week 20 in the SHR. Bisulfite sequencing revealed that the Atgr1a promoter from endothelial cells in the aorta and mesenteric arteries of the SHR rats became progressively hypomethylated with age compared to their WKY counterparts [Atgr1a expression in the SHR was related to the hypomethylation of the Atgr1a promoter and may have a role in the maintenance of high blood pressure. 
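Promoter methylation levels of the kind discussed above are typically quantified from bisulfite sequencing by counting, at each CpG site, reads in which the cytosine remained unconverted (methylated) versus reads converted to thymine (unmethylated). The sketch below illustrates that arithmetic with made-up read counts; it does not use data from the Atgr1a or HSD11B2 studies cited.

```python
def methylation_levels(cpg_counts):
    """Per-CpG methylation fraction from bisulfite-sequencing read counts.

    After bisulfite conversion, unmethylated cytosines read as T while methylated
    cytosines remain C, so methylation = C / (C + T) at each CpG site.
    `cpg_counts` maps a site label to a (methylated_C, unmethylated_T) tuple.
    """
    levels = {}
    for site, (meth_c, unmeth_t) in cpg_counts.items():
        total = meth_c + unmeth_t
        levels[site] = meth_c / total if total else float("nan")
    return levels

# Hypothetical read counts for a few promoter CpG sites (illustration only).
counts = {"CpG_1": (18, 2), "CpG_2": (9, 11), "CpG_3": (3, 17)}
levels = methylation_levels(counts)
for site, frac in levels.items():
    print(f"{site}: {frac:.0%} methylated")
print(f"promoter mean: {sum(levels.values()) / len(levels):.0%}")
```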
Somatic ACE (sACE) converts angiotensin I to the active form, angiotensin II and is, therefore, a key regulator of blood pressure [Due to the well-known involvement of the RAAS system on arterial pressure regulation, any changes in the activation status of RAAS-regulated genes have a pronounced effect on an individual\u2019s potential to develop HT. This effect has been extensively tested in animal models of HT . Hypomet and III . Methylasubjects . Similarterparts , suggestpressure . Bisulfipressure .+-K+-2Cl\u2212 cotransporter 1 (NKCC1), participate directly with fluid and electrolyte loss and therefore arterial pressure regulation [Sic2a2 gene, mediates the transport of sodium, potassium and chloride into and out of the cells [+, K+, Rb+, and Cl\u2212 in hypertensive vascular smooth muscle cells [Sic2a2 gene promoter in the SHR aorta and heart, result in increased expression of NKCC1 and is positively correlated with HT [Membrane transporters such as Nagulation . NKCC1, he cells . Changesle cells . Hypomet with HT .N-methyltransferase (PNMT), which can act similarly to methyl CpG binding protein 2, has been shown to exacerbate the decrease in norepinephrine uptake, thereby enhancing the local and systemic catecholaminergic effect [Methyl CpG binding protein 2 methylates the norepinephrine transporter gene, silencing its expression . Hypermec effect .DNA is packaged into the dynamic protein structure of chromatin, whose basic unit is the nucleosome. A nucleosome comprises of two copies each of the histone proteins H2A, H2B, H3, and H4 . Post-trEpigenetic histone modification occurs when the N-terminal tail is subjected to a variety of post-translational modifications . Up to 6et al. [et al. [et al. [et al. [Histone modification affecting arterial pressure levels has been documented in a variety of human and animal tissues, including vascular smooth muscle. Vascular oxidative stress can contribute to endothelial dysfunction\u2014a hallmark of HT\u2014and the development of HT. A study by Bhatt et al. document [et al. . Epigene [et al. , possibl [et al. . eNOS ac [et al. found th [et al. observed [et al. .et al. [et al. [et al. [Ace1 promoter regions of the SHR tissues were more enriched with H3Ac and H3K4me3, and concluded that Ace1 is locally up-regulated in SHR tissues via histone code modifications [The study by Riviere et al. noting t [et al. further [et al. also fouications .et al. [et al. [Nkcc1 mRNA and protein in the aortas of Sprague\u2013Dawley (SD) rats were significantly increased after administration of an angiotensin II infusion. Cho et al. [Nkcc1 during HT development.In line with observations by Lee et al. , Cho et [et al. found tho et al. found tho et al. , suggestDisruptor of telomeric silencing-1 (DOT 1), a methyl-transferase, enhances methylation of lysine 79 residue of histone H3 (H3K79) . DOT1-me2AR) leading to cyclic AMP (cAMP) production, and increased activity of renal epithelium sodium channels (ENaC) [et al. [et al. [WNK4 promoter region that contains a negative glucocorticoid-responsive element [et al. found WNK4 expression was inhibited via these pathways leading to overexpression of the sodium chloride co-transporter (NCC) and the onset of HT [Activation of the renal sympathetic nervous system has long been thought to play a crucial role in the development of salt-reactive HT . Overacts (ENaC) . WNK4 sis (ENaC) ,48. Mu e [et al. found th [et al. . Mu et a [et al. also not element . Mu et aet of HT .et al. 
[Non-coding RNAs (ncRNA) are implicated in several epigenetic processes, notably small RNAs that can influence histone modifications and cytosine methylation which are connected with gene expression regulation . NcRNA met al. . Here weMiRNAs are the most commonly studied small ncRNA; currently there is no research available investigating the involvement of the other types of small and mid-sized ncRNAs in EH. There are more than 1800 miRNAs in the genome, each approximately 22 nucleotides in length . miRNAs et al. [AGTR1 mRNA. This site has a Mendelian C (minor) and an A (major) allele. The minor-C allele is associated with EH [AGTR1 mRNA in EH individuals who inherit this allele. Individuals with the C allele did not have reduced AGTR1 mRNA and exhibited a greater pressor effect in response to angiotensin II [AGTR1 expression in umbilical vein endothelial cells [DOT1 gene [In regards to the RAAS-regulated genes, it appears that miRNAs may also have a role modulating ACE mRNA transcription . MiRNA het al. to targe with EH . Has-miRensin II . Has-miRal cells . A recenal cells . Two miRal cells . In refeal cells (mentionOT1 gene . FurtherOT1 gene . AngioteOT1 gene . siRNA tOT1 gene .Long non-coding RNAs (lncRNAs), a heterogeneous group of non-coding transcripts longer than 200 nucleotides, regulate their targets by influencing epigenetic control, mRNA processing, translation or chromatin structure. LncRNAs have been implicated in several biological processes, including transcriptional regulation by epigenetic mechanisms . An exemet al. [ADD3, NPPA, ATP1A1, NPR2, CYP17A1, ACSM3, and SLC14A2 were connected with cis-lncRNA transcripts [NPPA) gene (present in cardiac hypertrophy and heart failure) and, therefore, has potential CVD involvement [Large intergenic non-coding RNAs (lincRNAs) have been identified both as modulators of health development ,65,66 anet al. have deset al. . The proet al. , suggestet al. . LncRNAsnscripts . Of thesolvement .We are entering a new era of understanding how the genome interacts with the environment to affect disease pathogenesis. There is now emerging evidence that epigenetic, as well as genetic, factors are key players in regulating and maintaining blood pressure, and strong evidence for a complex interaction of genetic and environmental factors that influence the risk of HT in each individual . Many ep"} {"text": "Nanotechnology has offered a wide range of opportunities for the development and application of structures, materials, or systems with new properties in the food industry in recent years. The developed nanomaterials could greatly improve not only food quality and safety but also the foods\u2019 health benefits. In this special issue, different nano-sized vehicles are reported as efficient bioactives delivery systems and sensitive detection materials.w/v lecithin). Additionally, the naringenin-loaded nanoliposomes still maintained good stability during 31 days of storage at 4 \u00b0C. Zhou et al. [To resolve the low chemical instability, poor water solubility, and intestinal efflux limitations and challenges of bioactive ingredients, constructing an effective delivery vehicle using food-grade polymers is supposed to be a novel and feasible strategy. For example, Chen et al. encapsulu et al. fabricatu et al. reportedu et al. fabricatu et al. . Two typu et al. systemat2+ in water. This probe exhibited an extremely low limit of detection of 0.22 nM. Meanwhile, a possible fluorescence-quenching mechanism was proposed in this study. 
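Detection limits such as the 0.22 nM value quoted above are commonly estimated from a linear calibration curve with the 3σ/slope convention. The sketch below illustrates that calculation; the calibration concentrations, signals and blank readings are invented for illustration, and the 3σ/slope rule is an assumption about the procedure rather than a detail reported in the cited study.

```python
import numpy as np

# Hypothetical calibration data for a fluorescence-quenching probe:
# signal change (arbitrary units) at known analyte concentrations (nM).
conc_nM = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
signal  = np.array([3.1, 6.0, 12.3, 23.8, 48.1])

# Hypothetical repeated blank measurements (no analyte present).
blank = np.array([0.21, 0.18, 0.25, 0.22, 0.19, 0.23, 0.20])

# Linear calibration: signal = slope * concentration + intercept.
slope, intercept = np.polyfit(conc_nM, signal, 1)

# 3-sigma convention: LOD = 3 * (standard deviation of the blank) / slope.
lod_nM = 3 * blank.std(ddof=1) / slope
print(f"slope = {slope:.2f} a.u./nM, LOD ≈ {lod_nM:.3f} nM")
```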
In another study, to achieve the rapid detection of Listeria monocytogenes, Zhu et al. [3) was used as a photosensitive material, which was modified with gold nanoparticles to immobilize complementary DNA, and amplified the signal by means of the sensitization effect of CdTe quantum dots and the shearing effect of exonuclease I (Exo I) to achieve high-sensitivity detection. This strategy had a detection limit of 45 CFU/mL in the concentration range of 1.3 \u00d7 101\u20131.3 \u00d7 107 CFU/mL, providing a new way to detect Listeria monocytogenes.Another focus of the published articles in this special issue is the sensitive detection of various contaminants ions, pathogenic bacterium) associated with food safety through different nano-techniques. In the study by Feng et al. , a novelu et al. used aptOverall, these articles extend the knowledge on the application of nanomaterials in food nutrition and safety, promoting the development of nanotechnologies in food industry.Finally, we would like to thank all of the authors for their submissions, and all of the referees for their valuable suggestions for improving the manuscripts."} {"text": "Kibria et al. was incorrectly included as reference 28. As a result, all subsequent references are misnumbered. References 29\u201350 should be references 28\u201349. All occurrences of Kibria et al. should refer to Migneco et al. .As a result of the inclusion of the wrong reference there is an error in There is an error in There are errors in the Supporting Information files S2 Table(DOCX)Click here for additional data file.S3 Table(DOCX)Click here for additional data file.S4 Table(DOCX)Click here for additional data file."} {"text": "The authors of \u201cRevision Total Knee Arthroplasty Using Robotic Arm Technology\u201d would liWe have reviewed the article by Steelman et\u00a0al and openMatthew Bullock is a paid presenter for Smith & Nephew; is an unpaid consultant for Osso VR; has stock in Stryker; received educational support from Stryker, Smith & Nephew, Depuy, and Zimmer/Biomet; is a part of Editorial Board Arthroplasty Today; is a board member of AAKHS Patient Education Committee and West Virginia Orthopaedic Society Education Committee. The other authors declare no potential conflicts of interest.https://doi.org/10.1016/j.artd.2022.101091.For full disclosure statements refer to"} {"text": "Journal of Clinical Medicine with great interest. This systematic review and meta-analysis on the effects of assisted reproductive technologies (ARTs) on DNA methylation in human offspring could be of major importance for the health of children conceived through these techniques. Unfortunately, there are many limitations and confusing elements in this study that lead to questioning the veracity of the results.We read the study by Cannarella et al. recentlyIt is first surprising that 50 articles were selected for quantitative synthesis (as shown in the first figure of the paper (page 4)), but only 12 were finally included. The methodology used in this systematic review to select studies and how they were distributed into the different analyses must be clarified.The way the meta-analysis was conducted and its motivations are very confusing. The meta-analysis is greatly similar to the one published by Barberet et al. , but theH19 CTCF3 region in ART newborns compared to controls. However, a duplication error was made in the analysis for buccal smears, and results from Puumala et al. [n = 29, mean = 43.09, SD = 7.82). 
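Because the concerns raised here revolve around how individual cohorts enter a pooled estimate, a minimal sketch of standard inverse-variance pooling of mean differences may help readers follow the argument. All study names and numbers below are hypothetical and are not taken from the meta-analysis or the primary papers under discussion.

```python
import math

# Hypothetical per-study summaries: (n, mean, sd) of a methylation level (%)
# for an exposed group and a control group.
studies = {
    "study_A": ((30, 43.1, 7.8), (35, 46.0, 6.9)),
    "study_B": ((52, 41.5, 8.4), (50, 44.2, 7.5)),
    "study_C": ((18, 44.8, 6.2), (20, 45.1, 6.8)),
}

weights, weighted_effects = [], []
for name, ((n1, m1, sd1), (n2, m2, sd2)) in studies.items():
    md = m1 - m2                                # mean difference (exposed - control)
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)   # standard error of the difference
    w = 1.0 / se**2                             # inverse-variance weight
    weights.append(w)
    weighted_effects.append(w * md)
    print(f"{name}: MD = {md:+.2f}, SE = {se:.2f}")

pooled_md = sum(weighted_effects) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
print(f"pooled MD = {pooled_md:+.2f} ± {1.96 * pooled_se:.2f} (95% CI half-width)")
# Entering the same cohort twice doubles its weight and artificially narrows the
# confidence interval, which is why the duplication noted above is consequential.
```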
It is also hardly understandable that results from Choux et al. [The main result of this study suggests a hypomethylation in the a et al. are wronx et al. are consH19 CTCF6, the sample size is wrong for Sakian et al. [KCNQ1OT1, the sample size for Gomes et al. [KvDMR1 gene\u201d, which is an imprinting region, or when referring to \u201cH19 methylation\u201d, which does not allow us to know if the authors are discussing CTCF3, CTCF6, or H19/IGF2 DMRs regions in each corresponding study.There are also other figures in this study that seem to be wrong and may compromise the overall results for other genes. For n et al. and Sakis et al. is highes et al. : \u201cIn addWe hope that these items can be corrected or justified and will no longer confound the results of Barberet et al. who eval"} {"text": "More than 10 studies have confirmed the association of antibiotic overuse with colorectal cancer. The exact cause is unknown, but most authors hypothesize that disturbance of colon microbiota is the main culprit. In this commentary, an evolutionary explanation is proposed. It is well known that antibiotics can induce antibiotic resistance in bacteria through selection of mutators\u2014DNA mismatch repair deficient (dMMR) strains. Mutators have an increased survival potential due to their high mutagenesis rate. Antibiotics can also cause stress in human cells. Selection of dMMR colon cells may be advantageous under this stress, mimicking selection of bacterial mutators. Concomitantly, mismatch repair deficiency is a common cause of cancer, this may explain the increased cancer risk after multiple cycles of oral antibiotics. This proposed rationale is described in detail, along with supporting evidence from the peer-reviewed literature and suggestions for testing hypothesis validity. Treatment schemes could be re-evaluated, considering toxicity and somatic selection mechanisms.The association of antibiotics with colon cancer is well established but of unknown cause. Under an evolutionary framework, antibiotics may select for stress-resistant cancerous cells that lack mechanisms for DNA mismatch repair (MMR). This mimics the selection of antibiotic resistant \u2018mutators\u2019\u2014MMR-deficient micro-organisms\u2014highly adaptive due to their increased mutagenesis rate. Multiple studies have found a positive association between overuse of antibiotics and risk for colorectal cancer, though the exact cause remains unknown. In this article, a connection with DNA mismatch repair (MMR) genes is speculated, under the view of somatic selection. Below, a detailed description of the most significant studies that found the antibiotics\u2013cancer association is included. Studies are summarized in et al. [et al. [et al. [et al. [P\u2009<\u20090.001). Along the same lines, Armstrong et al. [P\u2009<\u20090.001). Simin et al. [et al. [et al., [et al. [P\u2009<\u20090.001) or duration of antibiotic exposure . The most recent studies were performed by Lu et al. [et al. [et al. [et al. [Kilkkinen et al. , using a [et al. surveyed [et al. studied [et al. found ang et al. found a n et al. in a hug [et al. in a met[et al., , more th [et al. searchedu et al. and Anek [et al. . In the [et al. , 40\u200a545 [et al. was a meet al. [et al. [The association of antibiotics use and cancer is not restricted to colorectal tumors. Indicatively, two studies are referred here. The study of Kilkkinen et al. found an [et al. 
reportedAccording to this vast amount of published data, there can be little doubt that antibiotics overuse is associated with increased colorectal cancer risk, often in a dose- or time-dependent manner. Antibiotic type was usually not found to be a significant parameter of this association; however, some studies agree on penicillin and anti-anaerobic antibiotics. Association of antibiotics with other cancer types probably needs further investigation. Is colorectal cancer\u2013antibiotics a causal relationship? The obvious explanation is the disturbance of colon microbiota. This is the explanation that most authors give for their results. In this perspective article, an alternative evolutionary explanation will be discussed, related with the selection of DNA MMR-deficient cells under antibiotic stress. I would like to state here that I do not neglect the probable significance of microbiota to cancer. DNA MMR deficiency is probably a part of a complicate equation that drives to cancer.DNA MMR is considered one of the most important mechanisms of DNA damage repair and one of the most conserved molecular mechanisms in all living organisms . MMR proMLH1, MLH3, MSH2, MSH3, MSH6, PMS1 and PMS2) have been identified. For some of them, the exact function is not clear. Deficiencies in the MMR pathway are a frequent cause of carcinogenesis.In humans, seven DNA MMR genes/proteins [MLH1-associated tumors is not a gene mutation but hypermethylation of the promoter. Promoter hypermethylation of MLH1 is found in at least nine more cancer sites including gastric cancer (21.6%) [Most cancer cases are associated with somatic mutations in oncogenes and tumor suppressor genes. MMR genes are considered as tumor suppressor genes. Inherited neoplasias represent \u223c 5\u201310% of all cancer cases and usually follow an autosomal dominant model of inheritance. Mutations in the MMR genes are responsible for hereditary nonpolyposis colorectal cancer/Lynch syndrome (HNPCC/LS), and other cancer-predisposing Lynch variant syndromes. The majority of mutations in HNPCC/LS occur in and PMS2 . Additioand PMS2 . Specifi1 (9.8%) , 17. Fre (21.6%) and oral (21.6%) .MSH2 and MLH1 genes [Microsatellite instability (MSI) is considered the classical method for detecting MMR pathway deficiency in colorectal or other tumors. Microsatellites are short tandem repeats (STRs) that are found throughout the genome. The most common ones in the human genome are the dinucleotide repeats, especially (AC)n. In case of a deficient MMR pathway, genetic instability is detected as presence of multiple alleles (instead of two) per each analyzed STR in tumors\u2019 DNA . The NatH1 genes . TreatmeH1 genes , despiteH1 genes . ImmunotH1 genes .MMR gene mutations are observed in monocellular as well as multicellular organisms. In multicellular organisms, these mutations can cause cancer. In monocellular organisms, these mutations can offer an adaptive advantage through the \u2018mutator\u2019 phenomenon. Eucaryotic somatic cells with MMR gene mutations may have also increased fitness under the concept of \u2018mutator\u2019 cells. By virtue of the MMR mutation that may increase their fitness, mutator cells are also potentially cancerous cells.Escherichia coli mutators were among the first that were studied [The term \u2018mutator\u2019 is used for cells that have increased mutagenesis rate, which contributes to their survival under demanding or hostile environments. 
Most of the knowledge we have of this phenomenon comes from antibiotic-resistant bacteria or other drug-resistant microorganisms. Commonly, mutator microorganisms\u2019 strains have a defective MMR pathway , 28. Esc studied . In thesPseudomonas aeruginosa is antibiotic-resistant and has increased virulence [P.aeruginosa lung infections are a life-threatening condition for these patients. Generally, mutator multidrug-resistant bacterial strains are common in chronic infections, like cystic fibrosis or urinary tract infections. Patients in these cases receive multiple antibiotic cycles and bacteria are continuously under positive selection for antibiotic resistance [Salmonella strains have also been identified with mutations in MMR genes [Cryptococcus, Candida and Aspergillus genus, all of which are characterized by increased mutagenesis rates and rapid adaptation to antifungal drugs [There are several examples of mutator strains. Studies show that MMR-deficient irulence . This hasistance , 33. AntMR genes , 34. Funal drugs , 35.et al. [NOTCH1 and TP53 genes, the most frequently mutated ones in esophageal cancer. NOTCH1 mutations in normal esophagus were several times higher than in esophageal cancers. Similar results were published soon after for many other healthy human tissues like endometrial, colorectal and liver [Recent advances in genomic analysis of somatic tissues challenge the standard knowledge that somatic mutations in oncogenes and tumor suppressor genes are always pathogenic. Mutations in oncogenes and tumor suppressor genes can lead to clonal expansions and adaptation in cells harboring these mutated genes. Martincorena et al. found thnd liver . Many \u2018dIntestinal epithelial cells with mutator capabilities have an adaptive advantage under stressful conditions, e.g. anticancer therapy . MMR-defet al. [Under repeated antibiotic courses, MMR-deficient colon mucosa cells can be selected as in the case of bacteria. These evolutionary pressures probably affect colon crypt stem cells, which are small clonal units occupying intestinal spaces referred to as crypts. Mutations in non-stem cells usually do not accumulate since they have limited life spans, while the stem cells are responsible for cell proliferation of the crypt. Despite stem cells being quite resistant to mutagenesis, inevitably mutations appear during ageing . Crypt set al. showed tStudies have reported a gut microbiome imbalance (dysbiosis) in patients with colorectal cancer, showing an increase of the population of \u2018bad\u2019 microbes compared to a decrease of \u2018good\u2019 microbes . In lighThe MMR genes\u2019 hypothesis could be tested by multiple ways. A population-based study would be ideal, by arranging prospective cohorts of patients treated frequently with antibiotics. Steps: (i) Patients undergo once a year colonoscopy examination, checking for any alterations in their colon mucosa, (ii) biopsies must be taken from any abnormal forms of tissue, like polyps or cancer-like malformations, (iii) DNA from those tissues will be tested for MSI, (iv) exome sequencing can be performed in polyp DNA or tumor DNA, looking for mutations in MMR genes or other implicated genes, (v) groups of cancer patients with an already MSI-tested biopsy, can be examined for a previous history of multiple antibiotic treatments, comparing the MSI-positive and the MSI-negative ones.The weakness of testing this hypothesis in humans is the need of colonoscopy. 
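Step (v) of the proposed population-based study reduces to a two-by-two comparison of tumour MSI status against antibiotic exposure history; a minimal sketch of that analysis is given below, before the practical limitations of such testing are considered. The counts are invented purely for illustration, and Fisher's exact test is only one reasonable choice of test.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table for step (v) above:
# rows    = tumour MSI status (MSI-positive, MSI-negative)
# columns = history of multiple antibiotic courses (yes, no)
table = [[34, 26],   # MSI-positive: exposed, unexposed
         [48, 92]]   # MSI-negative: exposed, unexposed

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, Fisher exact p = {p_value:.4f}")
# An odds ratio above 1 would be consistent with antibiotic exposure being more
# common among MSI-positive tumours, as the mutator-selection hypothesis predicts.
```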
Colonoscopy is considered an invasive method, and this may be problematic under a research protocol. Additionally, biopsy testing cannot differentiate between direct and indirect effects of antibiotics. An alternative way to test this hypothesis is the use of animal models. Mice and zebrafish can be used as well. Steps: (i) Antibiotics can be administered in mice or zebrafish for a prolonged time, (ii) After some months (multiple time points can be used), DNA from multiple cell types, including intestinal cells, could be checked for any MMR gene pathogenic mutations, (iii) Results must be compared with antibiotic-free animals of the same age. Experiments can be designed to be more complicated, e.g. by performing comparisons between microbiota-free animals vs normal microbiota animals. Additionally, cancer incidence must be estimated, between treated and non-treated animals.Similar experiments can be performed in cell cultures, preferably colon cell tissue cultures. Cultured cells treated for a prolonged time with antibiotics and antibiotic-free cells can be tested for MMR gene mutations. More advanced technology can be used like organ-on-a-chip models as well. Microfluidic organ-on-a-chip models of human intestine are available . Chip exThe above suggestions can confirm or reject the hypothesis of MMR-deficient mutator cell selection. In addition, it would be important to consider whether extensive use of antibiotics by cancer patients could be risky as MSI-negative tumors can be transformed to MSI-positive after exposure to a harsh micro-environment. These tumors are more aggressive than the previous ones. It is probably wise for cancer patients to carefully consider antibiotic treatments or generally drugs that can increase death resistance of their cells.In conclusion, an evolutionary explanation is proposed for the association of antibiotics with colorectal cancer, which has been revealed in multiple large-scale population-based studies. Testing this hypothesis is feasible, especially in national cancer reference centers, where large cohorts of patients exist. Somatic selection is the key for the understanding of many conditions related with human disease."} {"text": "MINFLUX is purported as the next revolutionary fluorescence microscopy technique claiming a spatial resolution in the range of 1\u20133\u2009nm in fixed and living cells. Though the claim of molecular resolution is attractive, I am concerned whether true 1\u2009nm resolution has been attained. Here, I compare the performance with other super-resolution methods focusing particularly on spatial resolution claims, subjective filtering of localizations, detection versus labelling efficiency and the possible limitations when imaging biological samples containing densely labelled structures. I hope the analysis and evaluation parameters presented here are not only useful for future research directions for single-molecule techniques but also microscope users, developers and core facility managers when deciding on an investment for the next \u2018state-of-the-art\u2019 instrument.This article is part of the Theo Murphy meeting issue \u2018Super-resolution structured illumination microscopy (part 2)\u2019. The significant milestones have been confocal laser scanning microscopy , 2-photoThe major advantage of the direct combination is in resolution enhancement by increasing the information per photon. 
For readers interested in the technical details of these methods, following are some excellent recent reviews on this topic \u201319. MINF 1.New biological insights through molecular resolution 2.A priori structural information and subjective event filtering 3.Multi-colour, three-dimensional, live imaging and the caveats when imaging ideal, well-defined structures 4.Spatial resolution versus localization precision, densityMINFLUX is presently the most photon efficient method to localize molecules and the aim of this article is not to argue otherwise. Here, I evaluate MINFLUX on the following four broad categories:I hope this detailed categorization helps scientists to evaluate whether MINFLUX is the right microscopy technique for their research.. 2et al. [The primary highlight of the paper by Gwosch et al. is the a (a)NPCs are symmetrical structures of eight subunits arranged in an octagonal geometry with an outer diameter of approximately 120\u2009nm (see (b)et al. [et al. [et al. [It is worth noting that no independent validation of molecular copies or subunits of NPCs was done in Gwosch et al. using ei [et al. used the [et al. and single-molecule localization microscopy (SMLM) roughly 50 and 10 years ago, respectively ,22. For (c)et al. [et al. [et al. [Gwosch et al. used Nup [et al. . As a ge [et al. . Moreove (d)et al. [et al. [Gwosch et al. state th [et al. MINFLUX [et al. created [et al. , please . 3For years, in electron and single-molecule localization microscopy, filtering of imaging data has been done to enhance contrast and optimize visualization. In this section, I highlight how event filtering can be used to attain higher localization precision, purport new biological structures not present in the raw data and the need for blind samples to standardize the resolution claims. (a)a priori information to achieve molecular resolution, I use nuclear pore data from [www.ebi.ac.uk/biostudies/BioImages/studies/S-BIAD8. For downstream analysis at individual nuclear pore level, I zoomed-in at an image section with 4831 localizations was chosen to provide close to four copies per subunit. In this regard, we question the choice of pixel and kernel size used in Gwosch et al. [4a and 6e but a kernel of 2\u2009nm and a pixel size of 0.2\u2009nm for figure 4a,\u2009f in Gwosch et al. [A single nuclear pore (yellow box) is further highlighted to show the impact of visualization and image rendering. Note how the individual components of the eight subunits of NPCs cannot be resolved with image zoom but can be resolved with a manually chosen pixel size (0.5\u2009nm) and Gaussian kernel (8\u2009nm). The choice of pixel size and Gaussian kernels were done with h et al. . For exah et al. . (b)a priori structural information. For structures with prior information like NPCs, the manual filtering can lead to additional biases for live imaging, see figure 6b. For structures with little prior information, the reduction in the number of molecules would lead to under-sampled structure (figure 6a). Furthermore, it raises questions on the authenticity of the structures if an independent validation via electron microscopy or other super-resolution methods is not provided.The MINFLUX filtering is done at various levels based on photon counts, targeted coordinate pattern to account for true emission and those considered as background events. For a fair comparison, both raw and filtered data should be presented to demonstrate that any such filtering is not biased by et al. 
[Lastly, it is not clear if the molecular components of NPCs will still be visible in raw/unfiltered data as the increased density of signals will tend to overlap like in the two-colour images . It is wet al. ) leads t (c)DNA origami is commonly used to measure the spatial resolution for different microscope setups. DNA origami has a pre-defined arrangement and blind samples with no knowledge on the prior arrangement are needed for proper calibration. This will make the photon count-based filtering more accountable as origami samples, unlike biological samples, have almost zero background see . As signResults obtained by imaging and analysing DNA origami will likely be a poor predictor of performance for real biological samples where problems of out-of-focus blur, non-specific background, light scattering and other sample aberrations exist. Thus, the resolution claimed based on DNA origami needs to be thoroughly investigated in this context and efforts made not to mislead the researchers interested in real biology.In fact, a new quality metrics for microscopy techniques of the relative deviation between nominal and real life resolution in different biological situations needs to be introduced. After all, conventional fluorescence microscopy rarely achieves its theoretical limit. However, the relative deviation is much smaller than in MINFLUX. (d)For the lateral spatial resolution, so far, MINFLUX imaging has been done on NPCs , microtuFor the biological targets with no underlying structure, an independent validation with another super-resolution method or a relative comparison under identical imaging conditions and length scales is essential. For such studies, a minimum signal density and labelling efficiency should be considered as a minimum requirement.For the biological targets with an underlying organization such as NPCs, the filtering of data can be misused to attain a certain resolution . Further. 4et al. [MINFLUX was originally published in 2017 with nan (a)et al. [et al. [The individual proteins of the eight NPC subunits (roughly 40\u2009nm apart) that can be observed in single-colour images, vanish in two-colour images , puttinget al. and Thev [et al. , respect (b)et al. [et al. [Regarding the three-dimensional imaging of well-defined structures like NPCs, the rendered MINFLUX data appears to be highly clustered, under-sampled and unevenly distributed and Nup96 (SNAP-Alexa Fluor 647) again are barely comparable with standard three-dimensional SMLM images. The ring distribution of WGA is unresolved and instead appears like a random distribution of points. The Nup96 octamer is also hardly visible, both laterally and axially . (c)a (this paper) or figure 3A and Movie S6 (http://movie-usa.glencoesoftware.com/video/10.1073/pnas.2009364117/video-6) of Pape et al. [MINFLUX has been used to probe the sub-mitochondrial localization of the core MICOS proteins . The autfigure 5e et al. . As no iThe microtubules: Historically, FWHM of microtubules (MTs) has been the benchmark for resolution demonstration in STED microscopy. MTs are continuous and have a denser tubular organization (25\u2009nm) when compared to the nuclear pores. Hence, any subjective filtering would lead to an under-sampled image. The hollow tube structure of MTs has been resolved with SMLM and DNA-PAINT setups. For well resolved three-dimensional cylinders of microtubules with SMLM, see Huang et al. [et al. [et al. [g et al. , Li et a [et al. . On the [et al. 
claim 15In summary, MINFLUX performance on continuous structures with a well-defined organization, such as MTs or those for which prior information is not available, remains to be proven. For MICOS, post-synaptic proteins, etc. which do not have an underlying structure, any resolution and shape can be claimed but is it valid? Thick samples where background is high will be another big challenge for MINFLUX. (d)b). The observed uneven distribution of nuclear pores is possibly due to heavy filtering of localizations (greater than MINFLUX imaging of nuclear pores (Nup96\u2013mMaple) in living U2OS cells shows a highly under-sampled configuration with no data on cell viability or photo-bleaching b. The obh & Curd for a deRegarding excitation powers for live imaging, the authors used powers old less . For str. 5MINFLUX nanoscopy claims to provide a \u2018resolution in the range of 1\u20133\u2009nm for structures in fixed and living cells\u2019. It is important to remember that 1\u20133\u2009nm range here refers to localization precision, and more specifically of fluorophore and not that of the target protein . As argu (a)For years, localization precision and spatial resolution have been interchangeably used by physicists given their inherent dependence on photons. Resolution, i.e. the ability to resolve close-by structures or target molecules, primarily depends on how densely a biological structure is labelled while localization precision implies how well these labels are detected, see For single-molecule studies, localization precision follows an inverse square-root relationship with the total number of detected photons, whereas MINFLUX has an inverse quadratic relationship with detected photons under ideal conditions with zero background. For real-life biological applications, MINFLUX is severely limited by the background photons as this compromises the positioning of the donut zero. In general, MINFLUX is likely to achieve better precision numbers than single-molecule imaging but so far, due to multiple reasons stated in this article, the spatial resolution of the method has been much poorer than SMLM methods . (b)et al. [et al. [Gwosch et al. applied The second resolution assessment is made by subtracting the mean localized position from all localizations. This process is similar to the first approach and will be biased by the filtering of the data. The third approach is based on Fourier ring correlation (FRC) and it is not clear whether filtered or raw data was used for FRC analysis. FRC measures correlation between subset of localizations and will again be affected by filtering of the data.Prakash & Curd estimate (c)A resolution cannot be better than the localization precision of the detected fluorophore. As a general rule, a more conservative upper limit for localization precision or full-width at half-maximum should be used to account for the linkage errors. From a biochemistry point of view, how reasonable to have a resolution in the range of 1\u20133\u2009nm when the SNAP-tag itself is about 3\u20133.5\u2009nm in size? Given the resulting uncertainty of relative position of the dye molecule, the stated nanometer resolution can only be claimed for the fluorophore, but not the biological target. In this context, I urge the readers to take careful note when authors state the precision of the fluorophore and that of the target protein. (d)Generally speaking, the resolution is the fundamental ability to determine a structure. 
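To attach rough numbers to the scaling argument of §5a and the labelling-density requirement discussed in this article, the sketch below evaluates the idealised camera-based estimate σ_loc ≈ σ_PSF/√N (the inverse-square-root relationship mentioned above, with background, pixelation and read-noise terms deliberately ignored) together with a Nyquist-style bound on label spacing. The PSF width, photon counts, target resolutions and oversampling factor are illustrative assumptions only.

```python
import math

def smlm_precision_nm(psf_sigma_nm: float, photons: int) -> float:
    """Idealised localization precision: sigma_loc ≈ sigma_PSF / sqrt(N).
    Background, pixelation and read-noise contributions are ignored on purpose."""
    return psf_sigma_nm / math.sqrt(photons)

def max_label_spacing_nm(target_resolution_nm: float, oversampling: float = 2.0) -> float:
    """Nyquist-style sampling: at least `oversampling` labels per resolvable distance.
    A factor of 2 is the strict bound; 2-2.5 is often used in practice."""
    return target_resolution_nm / oversampling

for n_photons in (100, 1_000, 10_000):
    print(f"N = {n_photons:>6} photons -> precision ≈ {smlm_precision_nm(100.0, n_photons):5.1f} nm")

for target in (10.0, 1.0):
    print(f"target resolution {target:4.1f} nm -> label every ≤ {max_label_spacing_nm(target):.1f} nm")
```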
From a physicist perspective, it is satisfying that MINFLUX can achieve 1-nm localization precision under ideal conditions (based on photon statistics), whether it materializes in resolving real life biology or not. From a biologist perspective, for a 1-nm resolution claim, MINFLUX is off by a factor of 5 for synthetic samples like DNA origami and a factor of 40 for biological samples like NPCs.For a true 10 nm structural resolution, a general expectation would be to resolve a well-established 10\u2009nm bead-on-string structure of nucleosomes or a turn of DNA helix which is about 3.4\u2009nm (for 1-nm claim). From Nyquist viewpoint to achieve 10\u2009nm structural resolution, one would need a dye every 4\u2009nm, which for many biological structures is hard to achieve due to steric hindrance or low copy number/binding sites. Alternatively, for structural biology, with true 1-nm resolution claim, I wonder if one can compare MINFLUX with X-ray crystallography, which has a broad range of resolution (anything between 1.5 and 10\u2009\u00c5) depending on the crystal quality.. 6 (a)The spatial resolution in light microscopy has become something of a numbers game where scientists cite an arbitrary FWHM or an FRC resolution or filter localizations to achieve any desired resolution. Given the large number of potential biological reference structures in the range of 1\u2013100\u2009nm size scale, for example, DNA (2\u2009nm), nucleosomes (10\u2009nm), microtubules (20\u2009nm), NPCs (40\u2009nm), synaptonemal complexes (60\u2013150\u2009nm), I recommend using these well-defined structures to support spatial resolution claims. For DNA origami researchers, I encourage usage of blind samples where the origami arrangement or the distances are not known in advance. This will make the photon count-based filtering more accountable. (b)et al. [et al. [Gwosch et al. and Balz [et al. have mad (c)MINFLUX papers have shown several loop-holes in current scientific reporting and standards. A clear distinction needs to be made about experimental precision, spatial resolution, cross-validation of new structures by independent methods, extent of sample variability versus measurement uncertainty, hardware adaptability, image quality/standards , availability of raw data/codes, reproducibility of the analysis pipeline, automated versus manual components of data analysis (like filtering) and proper procedures to report them .From a historical perspective, 4Pi, STED and now MINFLUX have been closed systems with little focus on making the hardware adaptable or raw data/codes available. Several units of highly priced 4Pi microscopes were sold around 2000s as the next generation microscope, however, not many biological findings resulted. This was followed with the launch of STED microscopes with very few novel biological results.Science is about reproducible measurements. Improvement in precision leads to refined measurements. MINFLUX improves the precision with which fluorophores can be localized but by not making the hardware adaptable, raw data and codes available, it is hindering reproducibility, open science efforts and overall progress of the field. (d)One of the important aspects of science is to promote cutting-edge ideas even if they offer few gains in the short run. MINFLUX research provides an important conceptual advancement and leads to overall progress in the field. 
The aim of this article is not to discourage MINFLUX research but to highlight that the technology is still underdeveloped and needs further validation from other independent labs before it is made commercially available, especially considering the high price tag of the commercial system.As research grants mostly come from tight public funds and are a zero-sum game, I hope this critique provides the scientists, microscope users, developers and core facility managers with an alternative viewpoint when deciding about an investment in the next \u2018state-of-the-art\u2019 microscope instrument."} {"text": "There is a growing trend in complementary and alternative medicine (CAM) usage among the population with medical conditions. However, there is hesitancy for medical practitioners to integrate its application with the current treatment modality, despite governance by the authority. Hence, our objective is to systematically evaluate the healthcare perception towards integrating CAM in their practices. We systematically searched three large and renowned databases i.e., Scopus, Web of Science and PubMed, regarding \u201cPerception on Integrating CAM Usage in Patient's Treatment among Healthcare Practitioners\u201d from 2016 until 2020. At least two independent reviewers comprehensively screened and extracted the data from the accepted articles. A total of 15 studies were included in the final qualitative synthesis following a strict and rigorous assessment checked using MMAT 2018 checklist. The studies included providing the richness of information due to the qualitative nature of the study design. There were three main domains extracted i.e. knowledge, attitude, and perspective of the healthcare practitioner towards CAM integration. Limited knowledge of CAM among healthcare providers may be the possible main reason for non-supportive attitude and negative perspective on CAM. However, those who showed an inclination towards CAM were found to be more open and ready to learn about CAM if it provides benefits to the patients. There is a heterogeneity of perception towards CAM integration from healthcare providers' point of view. A proactive and systematic CAM literacy awareness program may help to improve their understanding and possibly gain more trust in its application. The world population is consistently facing health disease burden either from communicable or non-communicable disease. The practice of conventional medicine is the mainstream health system in most countries in treating diseases. However, in this current era, the treatment modalities are accessible within the spectrum of conventional medicine to complementary and alternative medicine (CAM). CAM is a general term referring to a broad field of medical \u201ctherapies\u201d that is different from the conventional medical treatment practice in hospitals. According to the National Centre for Complementary and Integrative Health (NCCIH), there are five main groups of CAM, namely alternative medical system, mind-body interventions, biologically therapies, manipulative and body-based methods as well as energy therapies . The NCCAs reported in a systematic review, there was substantial CAM use (9.8% - 76%) among the general population in 15 countries surveyed . MeanwhiAt the same time, evidence had shown that conventional medicine has been steadily reducing morbidity and mortality as well increase the quality of life for the past decades in managing most of the disease. 
However, CAM practised were perceived to be more effective compared to conventional medicine based on the population survey conducted among elderly patients in Malaysia. Half of the respondents (55.1%) agreed, from a total of 256 respondents in the study . On the At the time being, there is a scarce study on CAM among healthcare practitioners. Therefore, this review aims to determine the perception of integrating CAM in patient's treatment among healthcare practitioners.Search strategy: this systematic review was conducted using three large and renowned databases, i.e. PubMed, Web of Science, and Scopus, regarding the \u201cPerception on Integrating CAM Usage in Patient's Treatment among Healthcare Practitioners\u201d from the year 2016 until 2020. This search was conducted in accordance with the Preferred Reporting Items for a Systematic Review and Meta-Analysis (PRISMA) checklist [Annex 1 and PRISMA Checklist as Annex 2. The keywords used were as below and the search strategies as Annex 3: \u201cComplementary medicine\u201d OR \u201ctraditional medicine\u201d OR \u201calternative medicine\u201d AND \u201cBelief\u201d OR \u201cperception\u201d OR \u201cperspective\u201d OR \u201cattitude\u201d AND \u201cHealthcare practitioner\u201d OR \u201chealthcare worker\u201d OR \u201chealthcare professional\u201d OR healthcare personnel\u201d OR \u201cdoctor\u201d OR \u201cphysician\u201d OR \u201cmedical assistant\u201d OR \u201cnurse\u201d OR \u201cpharmacist\u201d.hecklist . A tableInclusion and exclusion criteria: the target population of this search was any healthcare practitioner. The inclusion criteria from the database searches were (a) Original article , (b) All medical disease or advice, and (c) Availability of full-text article. The exclusion criteria in this search were based on (a) Quantitative type in design, (b) Systematic and narrative review paper article, and (c) non-English article. The articles were then identified through the titles and abstracts screening process according to the eligibility criteria. Full-text articles obtained were subsequently included in the qualitative synthesis. The flow of the article search is described in Data extraction tool: all included articles were extracted by two independent authors and in case of inconsistency, a third author was consulted. The data was customized into (a) Number; (b) Year; (c) Author and Country; (d) Titles (e) Study Design; (f) Type of methods and analysis; (g) Result-themes generated; and (h) Conclusion.Operational definition: the definition stated by NCCIH on CAM were simply as a group of diverse medical and healthcare interventions, practices, products, or disciplines that are not generally considered as part of conventional medicine. The healthcare providers include all practising professionals in the medical field who may have direct or indirect contact with the patient either by giving treatment or medical advice. They may include doctors, physicians, surgeons, nurses, medical assistance, occupational therapist, physiotherapist, pharmacist etc.Annex 4.A total of 511 articles were initially obtained for the title and abstract screening, while only 50 articles were left for full-text screening. The common reason for omitting the other 35 articles upon full-text review was due to the study design, different objectives, not the target population, and not related to CAM. Two reviewers assessed the quality independently using the mixed method assessment tool Mixed Method Assessment Tool (MMAT) version 2018. 
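As a purely illustrative aid, the snippet below reconstructs the Boolean search string from the three keyword groups quoted in the search strategy above; field tags and database-specific syntax, which differ between PubMed, Scopus and Web of Science, are deliberately omitted.

```python
# Rebuild the Boolean query from the three keyword groups listed in the
# search strategy. Database-specific field tags are intentionally left out.

cam_terms = ["Complementary medicine", "traditional medicine", "alternative medicine"]
perception_terms = ["Belief", "perception", "perspective", "attitude"]
provider_terms = ["Healthcare practitioner", "healthcare worker", "healthcare professional",
                  "healthcare personnel", "doctor", "physician", "medical assistant",
                  "nurse", "pharmacist"]

def or_group(terms):
    """Join synonyms into a parenthesised OR block of quoted phrases."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(or_group(group) for group in (cam_terms, perception_terms, provider_terms))
print(query)
```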
Articles were only selected if both reviewers agreed with the quality. Any disagreement between the assigned reviewers will employ a third independent reviewer. All the included studies answered \u201cyes\u201d for all the questions of the respective domains of MMAT checklists which are risk of bias assessment that is present as The distribution of the articles varies with four articles from Asia ; three articles each from the USA and Australia; two articles each from the United Kingdom and Germany; and one paper from Ghana.The findings can be broadly categorized into three main domains which are knowledge, perspective, and attitude towards CAM by the healthcare provider. Due to the richness of the data from qualitative type of research, some of the themes overlap with one another. Therefore, two of our domains which are perspective and attitude were further broken down into more sub-themes and categories. Factors affecting the practitioner's perception in integrating CAM are described knowledge):Domain ( overall, studies that looked into this domain reported the lack of knowledge in CAM among medical practitioners [et al. [et al. [itioners -13. Acco [et al. , almost [et al. were the [et al. suggesteperspectiveDomain: Positive perspective: there are two sub-themes derived in this domain, which are positive perspective and negative perceptive. The integration of CAM with conventional treatment have received two types of fates, either being positively accepted or the idea clearly being denied by the medical practitioner. To put in highlight, the reason for positive acceptance by the healthcare worker was the probable good additional impact that CAM could contribute to the treatment. This is especially true in pain management as most of the practitioners who have a positive perspective on CAM echoing its usage as an additional modality. A study by Sharp et al. [et al. [et al. [p et al. on patie [et al. that uti [et al. claimed [et al. stated t [et al. describeNegative perspective: the opposite sub-theme that emerged was the negative perspective on CAM integration. One of main the reason for the negative perspective towards CAM was the skepticism towards it despite patients claiming of having a good result after using it. According to Christina et al. [et al. [et al. [et al. [a et al. , nurses a et al. found th [et al. also fou [et al. explaine [et al. . Becker [et al. on the oOther than being skeptical, another reason for the negative perspective on CAM was the modality that has some link with cultural and religious belief. For example, Corina, Christine and Klein in theirattitudeDomain: Supportive attitude: the sub-theme of supporting attitude towards CAM can be further subdivided into three categories. One category that supports the integration was based on its true definition of being a complementary or supporting treatment. Anheyer et al. [et al. [et al. [et al. [r et al. found th [et al. also add [et al. showed a [et al. supportiet al. [et al. [The positive attitude towards CAM was also shown in the sub-theme of displaying interest in the intervention. Tagharrobi, Mohammadkhan Kermanshahi and Mohammadi showed tet al. where alet al. ; Christi [et al. showed tet al. [Another reason for the supportive attitude on CAM by medical practitioners was due to the respect of patient's choice. Liem highlighet al. that demet al. 
extendedNon-supportive attitude: on the other hand, the sub-theme of non-supportive attitude towards CAM can be categorized into two; due to concern of implication and from the lack of support in healthcare services. Anheyer et al. [et al. [r et al. raised t [et al. and Liem [et al. highligh [et al. were conet al. [et al. [et al. [Another category, which is lack of support in healthcare services was found to be a non-supportive attitude towards CAM. This reason was mentioned in studies by Kretchy et al. and Beck [et al. . Apart f [et al. , the lacet al. [The interest in CAM has dramatically increased over the past years. In the UK, there was a high prevalence of herbal medicinal products that were bought over the counter, mostly self-prescribed. Based on our findings of the positive perspective subdomain, several articles reported the use of complementary and alternative medicine, such as chiropractic and acupuncture due to the reasoning of having an additional impact on the treatment and whole-person healing ,20. Thiset al. that higet al. .With the above-mentioned positive perspective of benefit, it is not surprising for some medical practitioners to have a supportive attitude towards CAM. A study in India found that more than half of doctors working in tertiary hospital utilized CAM therapies and the most commonly utilized therapy was Homeopathy . The docAnother noted reason for medical practitioners to have a supportive attitude towards CAM was respect for patient's choice. Respecting patient's choice is one of the arts in treating disease. It shows effective doctor-patient communication, especially in giving support to patients to find hopes in curing their diseases. According to Kelak, Cheah and Safii , doctor'Although our subdomain religion showed a negative perspective among medical practitioners towards CAM, many other studies found otherwise. A review by Alrowais and Alyousefi noted reWe also found that medical practitioners have negative a perception towards CAM is because of their skepticism towards it. In a study conducted among the oncologist in Brazil, some of the participants had negative views on CAM due to the limitation of the resources in the healthcare system, thus, they need more evidence-based medicine for practice . On the Our systematic review showed a mixture of perception among healthcare practitioners in the integration of CAM in patients' management that were presented into three main domains and respective subdomains. Many factors were highlighted, ranging from personal experience until the concern of implication if CAM superseded conventional treatments. Nevertheless, knowledge on CAM still remains low among healthcare providers. 
More awareness program, targeting medical professional, is needed, to successfully integrate CAM in patients care.CAM among healthcare practitioners is a new body of knowledge although it is widely practice in some population;Some patients used CAM as a replacement modality to treat their chronic non-communicable diseases;The integration of CAM together with conventional medicine is limited due to lack of safety and efficacy data.The study has highlighted a low level of knowledge about CAM and its limited application among the healthcare practitioners;The heterogeneity of perception regarding integration of CAM modality with conventional treatment hinders its application by the healthcare practitioners;Healthcare practitioners' attitude towards CAM can skewed towards acceptance as a result of environment and patient factors."} {"text": "The shape and load bearing strength of cells are determined by the complex protein network comprising the actin-myosin cytoskeleton. In response to signals received from the external environment, including chemical and mechanical stimuli, the organization of the actin-myosin cytoskeleton may undergo dynamic changes that contribute to the production of physical force necessary for many cellular processes including cell division, endocytosis, intracellular transport and migration. The essential role of the actin-myosin cytoskeleton in so many cellular functions means that aberrant regulation or function can contribute to a variety of human pathological conditions and diseases.Cells includes 11 review articles that present up-to-date perspectives on a range of cytoskeleton-related fields. A prominent theme linking several reviews is the actin\u2013myosin cytoskeleton in neurons. Mikhaylova et al. [This Special Issue of a et al. also foca et al. related a et al. profileda et al. discusseBlaine and Dylewski examinedCells also includes four primary research articles. Garc\u00eda-Bartolom\u00e9 et al. [This Special Issue of \u00e9 et al. reported\u00e9 et al. demonstr\u00e9 et al. conditio\u00e9 et al. examined"} {"text": "We have read the case report entitled \u201cClinical presentation vs endoscopy for an early diagnosis of eosinophilic esophagitis: a case report\u201d by Di Stefano et al. (2022). We wouIn the case report , the auOn the other hand, a systematic review by Shaheen et al. (2018), which The authors declare no conflicts of interest.None."} {"text": "Eichner index is a dental index, which is based on the occlusal contacts between naturally existing teeth in premolar and molar regions. One controversial topic is the association between occlusal status and temporomandibular joint dysfunction (TMD) and its associated degenerative bony changes. Through the use of cone-beam computer tomography (CBCT), the current study sought to ascertain the relationship between the Eichner index and condylar bone alterations in TMD patients. In this retrospective study, the CBCT images of bilateral temporomandibular joints (TMJs) of 107 patients with TMD were evaluated. The patients\u2019 dentition was classified into three groups of A (71%), B (18.7%), and C (10.3%), according to the Eichner index. Radiographic indicators of condylar bone alterations, including as flattening, erosion, osteophytes, marginal sclerosis, subchondral sclerosis, and joint mice, were either present or absent and registered as 1 or 0, respectively. Chi-square test was used to evaluate the link between the condylar bony changes and the Eichner groups.p= 0.00). 
However, no significant relationship was found between sex and condylar bony changes (p= 0.80).There was a significant relationship between the Eichner index and condylar bony changes (p= 0.05). According to the Eichner index, the most prevalent group was group \u201cA\u201d. The most prevalent radiographic finding was \u201cflattening of the condyles\u201d (58%).Condylar bony changes were found to be statistically related to age ( Patients with greater loss of tooth supporting zones have more condylar bony changes. The temporomandibular joint (TMJ), one of the body's most intricate articulations, has a wide range of anatomical and physiological characteristics . DifferTMD describes a group of clinical complaints that affect the stomatognathic system, mainly the muscles of mastication . TMD haDeveloped by Karl Eichner, the Eichner index can be applied for epidemiological studies and it is one of the most widely used dental indices. This index is effective in establishing intermaxillary contacts and extending functional dental invalidity . Using The relationship between Eichner index and bony changes in the condylar region has only been studied in a few studies , 29. IIn accordance with IR.SUM.DENTAL.REC.1399.0908, the present study was approved by the Institutional Research Committee. In this study, 107 patients with clinical signs and symptoms of TMD who needed further CBCT investigation were recruited from the archives of a private clinic and an oral and maxillofacial radiology department. There was a wide age range of 16 to 80 years for the patients (39.57\u00b13.31 years). Exclusion criteria were patients with an established history of temporomandibular surgery, acute trauma, congenital abnormalities, musculoskeletal or neurological diseases, and any systemic diseases potentially affecting joint morphology.Based on the occluding pairs in the posterior teeth (two premolars and two molars), the dentition of each patient was divided into four main occlusal supporting zones. All of the four supporting zones are in contact in class A; one supporting zone is missing in class B, or all of the four supporting zones are absent, but theanterior region remains intact; and class C has no occlusal contact between the remaining teeth .This stA New Tom VGi was used to obtain CBCT images of bilateral TMJs with 110 KVp, 3.05 mA and 3.6 s exposure time in the standard resolution mode (voxel size 0.3). Image fields were 15\u00d715cm. standing upright; the patients were biting their teeth in the maximum intercuspal position. The Frankfort plane was parallel to the floor when their heads were positioned. The NewTom Cone Beam 3D imaging system workstation (NNT Software version 6.2) was employed to prepare the images of TMJ. The reconstructed CBCT scans were assessed using high-resolution monitor (Barco-China) in a dedicated reporting room with appropriate viewing (dimly lit) condition.The raw data were reconstructed primarily for the TMJ. By scrolling the axial images, the system identified the axial view on which the condylar width had the largest mediolateral dimension. The interval and thickness of the image slices were both set at 0.5mm. 
Afterwards, the corrected coronal and sagittal cross sections of each joint were rectified by drawing a perpendicular and parallel line and reconstructing them to the long axis of the condyle.The criteria used to evaluate condylar bony changes included (1) flattening (loss of convexity of condylar head outlines), (2) surface erosion , (3) marginal bony overgrowth or osteophytes , (4) subcortical erosion or Ely cyst , (5) marginal sclerosis, (6) subchondral sclerosis (Increased radiopacity of thecancellous bone), (7) joint mice (osteophytes that break off and lie free within the joint space) .For each patient, the right and left TMJ areas were evaluated for the presence or absence of one or more of these radiographic changes, and were rated accordingly as 1 or 0.We calculated the left and right TMJ bony changes independently by adding up the scores related to any radiographic finding. For example, when a patient had flattening, erosion, and sclerosis in the condylar bones, the total number of changes was considered \"3\u201d. The right and the left scores were then added. Consequently, the association between the condylar bony changes and the Eichner index was examined.The images were evaluated by two dentomaxillofacial radiologists. Each observer evaluated the images independently after a minimum of two weeks. In addition to checking the consistency between the first and the second sets of records produced by each specialist, we also examined the reliability of the inter-examiners for each of the criteria applying \u03ba statistic. Adequate intra-examiner agreement index , as well as strong inter-examiner agreement (\u03ba coefficient: 0.801 to 1) was detected.p= 0.05. A Dunn\u2019s post-hoc test was used to compare the prevalence of overall bone changes between Eichner groups. Statistical calculations were performed using SPSS . To ensure the intra and inter-examiner reliability, \u03ba statistic was used. The chi-square test was used to determine whether there was a correlation between condylar bony changes, age, sex, and the Eichner groups .Significance level was set at Excellent inter-(\u03ba coefficient: 0.801 to 1) and intra-(\u03ba c-oefficient: 0.833 to 1) examiner agreement was observe-d. In the present study, 107 patients wererecruited among which 74 were females with an age range of 17-78 years (mean 39.18+15.20) and 33 were males with the age range of 16-80 years (mean 40.42+17.70). Based on the Eichner index,the most common Eichner group was a (71%) followed by group B (18.7%) and group C (10.3%) .p= 0.00). However, sex did not significantly affect total bony changes (p= 0.80).Bony changes in the condylar region were significantly associated with Eichner index showed radiographic changes in condylar morphology. Flattening was the most prevalent bony changes (58.9%) followed by erosion (40.2%), marginal sclerosis (20.6%), subchondral sclerosis (14.0%), osteophyte (5.6 %), cyst (3.7%) and joint mice (0.9%).The results showed that total bony changes had a favorable relationship with age (er index .Moreoveer index .et al. [ A common cause of orofacial pain that is not related to dental or infectious conditions is TMD . Severaet al. . Many set al. ; canineet al. ; discreet al. , Kennedet al. , 29 . Since posterior teeth are necessary to maintain uniform occlusal force distribution on TMJ, losing them can have a greater impact than occlusion type. Additionally, due to incisors' inclined planes, entire mandible move backward when posterior teeth are lost. 
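Returning briefly to the methods, the per-joint scoring and the chi-square comparison described earlier can be written out compactly, as sketched below; the example patient and the contingency table are invented numbers rather than the study's data, and the table collapses the outcome to the presence or absence of any bony change as a simplification.

```python
from scipy.stats import chi2_contingency

FINDINGS = ("flattening", "erosion", "osteophyte", "subcortical_cyst",
            "marginal_sclerosis", "subchondral_sclerosis", "joint_mice")

def joint_score(present: set) -> int:
    """Each of the seven radiographic findings scores 1 if present, 0 if absent."""
    return sum(1 for f in FINDINGS if f in present)

# Hypothetical patient: flattening + erosion on the right condyle, flattening on the left.
right, left = {"flattening", "erosion"}, {"flattening"}
print("total bony-change score:", joint_score(right) + joint_score(left))  # -> 3

# Hypothetical contingency table: Eichner group (rows A, B, C) versus
# presence of any condylar bony change (columns: yes, no).
table = [[40, 36],   # group A
         [16,  4],   # group B
         [10,  1]]   # group C
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```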
As a result of this movement, the condyles are moved above and behind their normal position in relation to the articular eminence. When a unilateral cause is responsible, only one condyle is affected but when the bite has a closed, both are affected - 28.et al. [ et al. [ et al. [ et al. [ Only two studies have investigated the relationship between Eichner index and condylar bony changes , 29. Aet al. and Jal et al. , despitet al. [ et al. [ et al. [ et al. [ et al. [ The TMJ is characterized by its ability to remodel when loading forces exceed its normal tolerance. In adults, this adaptive response can alter the condylar bone morphology and articular eminence . Flatteet al. , Mathew et al. , Gharge et al. also fo et al. . Unlike et al. and Tak et al. found tet al. [ et al. [ In the published literature, there is still controversy over the correlation between condylar bony changes and age. According to several studies, condylar bony changes and aging are positively correlated - 35, wet al. and Jal et al. found n et al. .et al.'s study [ et al. [ et al. [ et al. [ et al. [ Our study found no significant relationship between sex and total bony changes, which is consistent with Irsan NDH s study . Alzaha et al. , Takaya et al. , and Ja et al. reporte et al. , who foVarious radiographic modalities such as plain film radiography , 29, cThis study aimed to determine whether radiographic bone changes in the condyles are associated with Eichner; however, some factors such as the time between tooth extraction and image time and oral habits were not evaluated in relation to the severity of TMD symptoms and condylar bony changes. To either confirm or refute the results of the present study, further investigation should be conducted with these factors taken into account.The results of this study indicate that condylar bony changes are highly associated with the Eichner index. In addition, CBCT findings were associated with variations in finding condylar bony changes in this study. It is therefore important to consider CBCT as a modality for appropriate diagnosis in clinical practice as well as for patients' therapeutic choices.The authors declare that they have no conflict of interest."} {"text": "Gas sensors play an irreplaceable role in industry and life. Different types of gas sensors, including metal-oxide sensors, are developed for different scenarios. Titanium dioxide is widely used in dyes, photocatalysis, and other fields by virtue of its nontoxic and nonhazardous properties, and excellent performance. Additionally, researchers are continuously exploring applications in other fields, such as gas sensors and batteries. The preparation methods include deposition, magnetron sputtering, and electrostatic spinning. As researchers continue to study sensors with the help of modern computers, microcosm simulations have been implemented, opening up new possibilities for research. The combination of simulation and calculation will help us to better grasp the reaction mechanisms, improve the design of gas sensor materials, and better respond to different gas environments. In this paper, the experimental and computational aspects of Humans cannot live without gas. However, some of these gases are toxic and harmful even though they may be colorless and odorless. When these gases are present in our environment, it is essential to detect them immediately to protect us from their hazards. 
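As an illustration of the agreement and association statistics reported in the study above (kappa coefficients between examiners and chi-square tests against the Eichner groups), a minimal sketch using common Python libraries is given below. The ratings and the contingency table are hypothetical placeholders, not the study's data, and the study itself used SPSS rather than Python.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import chi2_contingency

# Hypothetical presence/absence (1/0) ratings of condylar flattening by two observers
obs1 = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
obs2 = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1])
kappa = cohen_kappa_score(obs1, obs2)
print(f"inter-examiner kappa: {kappa:.2f}")

# Hypothetical contingency table: Eichner group (rows A/B/C) versus
# presence of any condylar bony change (columns: absent, present)
table = np.array([[30, 46],
                  [ 5, 15],
                  [ 1, 10]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```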
Research on gas sensors has long been ongoing and of increasing interest due to their role in a wide range of fields, including laboratory health and safety, gas detection, observation, and environmental investigation , with evnt gases . Even th sensors . The resironment . Further of TiO2 with the of TiO2 , the resectivity . The pho, and NO . The quaatalytic , and gasatalytic and dram explore , and as humidity . It is t testing . In the testing . Sensiti testing . In part testing . In Tablarticles , orange tum dots , light ycrystals , light be phases , and dar2 fibers ) that weeriments . Since 1material . As the i et al. showed ti et al. . It was The MOS gas sensor is designed based on the change in conductivity of the sensor material under the action of reducing or oxidizing gas O2 for an n-type semiconductor can be expressed by Equation (5) , in whiction (6) :(5)S=RaiIn Equation (6), Computer technology is also developing rapidly at a time when the level of experimentation is constantly improving. It can be said that a computer\u2019s help is indispensable in every aspect now, including the experimental part. In the same way, the theory of physics has made astonishing progress. The development of quantum mechanics has promoted the rise and application of the density functional theory (DFT). After deducing the famous Kohn\u2013Sham (K-S) equation by local density approximation (LDA), the first-principle calculation based on DFT has become a powerful material development tool . Using cThere are several experimental methods to improve the performance of onductor , which eI41/amd) , and bro) [pbca) , and the) [pbca) ; these s) [pbca) . In addis 600\u00a0\u00b0C . The ban 2.96\u00a0eV . Howeverfficient . There a bandgap . It is fThe performance of the sensor can be improved by comparing the experimental results, as different experimental methods can obtain different results. In the past, in the manufacturing of gas sensors, the sensing material was uniformly covered on the substrate because the electrical conductivity of metal oxides varies with the adsorption and desorption of gas molecules, but it turns out that this may not be good enough . Common In general, the excellent advantages of Various materials have been obtained through different methods , includinditions b,c is giIn addition, unadulterated nanoparticles are used to help improve the performance of gas sensing. Azhar Ali Haidry et al. used magy\u2019s work that paro et al. preparedhe flame . During he flame used thiElectrostatic spinning is one of the most commonly used methods to produce nanofilaments. Electrostatic spinning technology has also been used in many fields, including in the study of gas-sensitive materials, by being combining with other technologies. Fibers that are -Ah Park . The mixDue to bandwidth gaps formance . On the formance . The maiA d. Zhang then triu et al. were abli et al. doped CuHowever, the results of Abdelilah Lahmar et al. showed ti et al. allowed alehifar preparedL. Aldon et al. doped Snichphant , a similSingle-layer graphene (SLG) has the purest carbon chemistry. The two-dimensional (2D) atomic structure maximizes the surface-to-volume ratio. The SLG\u2019s tiny resistance value is very friendly to gas detection at a low power consumption and usually enables a significant reduction in operating temperature. If this can compensate for the higher operating temperature of g et al. used theSimilarly, semiconductor materials are other common doping materials that are doped into TiO2\u00a0 . 
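The response definitions referred to above as Equations (5) and (6) are garbled in this copy; for an n-type metal-oxide sensor the response is conventionally defined as the resistance ratio between air and the target gas. The sketch below assumes that convention (S = Ra/Rg for a reducing gas and Rg/Ra for an oxidizing gas) rather than reproducing the cited equations exactly, and the resistance values are made up for illustration.

```python
def sensor_response(r_air: float, r_gas: float, gas_type: str = "reducing") -> float:
    """Conventional MOS response for an n-type oxide such as TiO2.

    For a reducing gas the resistance falls in the gas, so S = Ra / Rg;
    for an oxidizing gas the resistance rises, so S = Rg / Ra.
    (Stated as the common convention, not necessarily the exact form of
    Equations (5)-(6) in the work discussed above.)
    """
    return r_air / r_gas if gas_type == "reducing" else r_gas / r_air


# Hypothetical resistances in air and in 100 ppm of a reducing gas
print(sensor_response(r_air=1.2e6, r_gas=3.0e5))  # -> 4.0
```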
The finChunrong Xiong and Kenneth J. Balkus, Jr. used SnOr et al. used ther et al. conducteme ratio .Piotr Nowak et al. aimed toIn the case of Chachuli . In theiThe same is true of Feng Li et al. , who dopn et al. . From thJinniu Zhang et al. doped tii et al. discussei et al. synthesiIn this section, the authors summarized the papers on U [U + J [Obtaining better results and breakthroughs often requires a lot of research time and the persistence of countless researchers to make gas sensors work better, but there is no doubt that it is often more difficult when we try to explore the mechanism of the reaction. In the last decade, the development of quantum chemical computational molecular simulations and density functional theory has allowed us to take a more microscopic view of the reaction process, which has attracted more and more people to participate in the research. First principles have helped us to take our research to a higher level from a physics perspective. First-principle computing has several attractive features. First, it can obtain the characteristics of a system without any experimental parameters , based oU , DTF + UU [U + J and the In the microscopic field of theory, physicists use an equation called the wave functions to describe the states of particles, which is generally expressed as Equation (7): as 1927 . The Hoh as 1927 theorem [H=T+U+V , which c[H=T+U+V equation[H=T+U+V , GGA [87[H=T+U+V , are appFirst-principle calculation can be used to perform many simulations, including some dangerous biological experiments, without any side effects. However, as mentioned earlier, doping is an effective way to improve the performance of semiconductor gas sensors. A doped semiconductor gas sensor has been widely used in first-principle calculation . For exaCompared with STM images, it is found that ang Wang derived formance . The surn et al. . In Lilin et al. work, thJuan Liu and otheaballero showed tr et al. . The restion (8) ,96. It iIt was found by theoretical calculations that the Oleg Lupan et al. preparedBharat Sharma et al. looked aequation :(9)D=0.8ion (10) :(10)\u0394E(Tlated as :(11)Eadsessed as :(12)\u0394R\u221deJoseph Muscat et al. studied . Garcia studied (0\u00a00\u00a01) . This al (0\u00a00\u00a01) calculatTo verify the accuracy of the results, James A. Quirk et al. characteMichele Reticcioli et al. also carBandwidth changes the most quickly after stimulant use because the first principle can show changes before and after doping . The bang et al. used theisa Ohno not onlyU or other hybrid functions is proposed. Youngho Kang et al. [U to calculate the electron\u2013phonon interaction and transmission in anatase U (blue solid line). The lower right corner gives the quadrilateral crystal structure with blue atoms of titanium and red atoms of oxygen. To better describe the structural properties, the on-site Coulomb energy In the early stage of the crystal system, the direct use of function calculation results may deviate from the experimental values. Therefore, a solution of DFT + g et al. used DFTU and Hund\u2019s J corrections with parameters calculated using linear response theory based on an extended first principle, found that this method was able to predict the bandgap more accurately, giving a maximum error of less than Okan K. Orhan and David D. O\u2019Regan , based og et al. researchg et al. 
added thIn addition to the relationship between polarons and electrons, the first-principle calculations can also study the surface or defects of the The surface of anatase ang Wang . The resHarrison explain It is undeniable that two-dimensional (2D) materials are gaining more and more importance in some fields as researchers continue to refine and develop their research on materials . Additio Hao Sun performeion (13) :(13)\u03c3\u221de\u2212g et al. , who intGraphene, with its unique two-dimensional structure and large specific surface area, has gradually become a hot spot for sensor research . Zhang eIn this section, the authors provided a brief overview of the theoretical basis of DFT calculations for In general, the excellent advantages of"} {"text": "This article (Correspondence)\u00a0is in response to the recently published article on the role of Pecto-intercostal Fascial Block for cardiac procedures by Zhang et al. in \u201cBMC Anesthesiology\u201d. I greatly appreciate the authors for publishing this study in which Pecto-intercostal Fascial Block, a novel technique for providing pain relief in open cardiac surgical procedures was evaluated. I wish to present my reflections on this article as well as to add a few more points on this topic. Dear Editor,I read the recently published article that has analyzed the effectiveness of Pecto-intercostal Fascial Block (PIFB) for patients undergoing open cardiac surgery with keeZhang et al. have staZhang et al. have alsThe control group of patients had significantly higher levels of blood glucose despite a significant rise in insulin levels, which was attributed as \u201cInsulin resistance\u201d, by Zhang et al. . Also, tZhang et al. have men"} {"text": "To the editor: We read with interest the article by van Ewijk et al. [k et al. regardink et al. . In factk et al. . If the k et al. ,5. FurthThere is a previously published report suggesting that policies differentially targeting the vaccinated and unvaccinated would alter VE estimates. A study in New York showed that VE estimates declined simultaneously across different time cohorts after lifting mask mandates exclusively for fully vaccinated individuals, which cannot be explained by waning immunity . AlthougWe previously published a similar study adjusting for high-risk behaviours and mask wearing as well as testing behaviour . We alsoWhen estimating VE, we assume a causal relationship between vaccination and infection/disease and we r"} {"text": "Saliva is a precious oral fluid that contributes to oral health and when its quantity is diminished, it hampers the quality of life. Individuals suffering from diabetes have a complaint of reduced salivation due to the consumption of xerogenic drugs and autonomic neuropathy. Our study aims to evaluate the effectiveness of the transcutaneous electric nerve stimulation (TENS) device on the salivary flow rate with respect to age and gender in Jaipur population.p value <0.05 was considered to be significant. A descriptive observational study was carried out on individuals in Jaipur at the Department of Oral Medicine and Radiology at Rajasthan Dental College andHospital during a period of 7 months. The study consisted of 200 individuals who were divided into two groups. Unstimulated and stimulated saliva were collectedfor 5 minutes in a graduated beaker. Stimulated saliva was collected after keeping the TENS unit activated at 50Hz. Kolmogorov-Smirnov, Shapiro-Wilks normalitytests and Mann Whitney U test were done. 
The TENS unit was effective in increasing the quantity of stimulated saliva, and a highly significant difference was seen between the age groups. TENS was also found to be more effective in increasing saliva in diabetic individuals. The mean unstimulated salivary rate was 1.64ml/5min and the mean stimulated salivary rate was 1.914ml/5min for Group I. The mean unstimulated salivary rate was 1.231ml/5min and the mean stimulated salivary rate was 1.547ml/5min for Group II. The p value for Groups I and II for unstimulated saliva was 0.01 and for stimulated saliva was 0.03. It seems that TENS has shown positive results in increasing salivary secretions, and salivary values may diminish with age; therefore, TENS might be used in aged individuals as well as in diabetic patients to increase the quantity of saliva. Saliva is a precious oral fluid that is often taken for granted, but we need to realize that it is very critical for preserving and maintaining oral health. Hyposalivation has been found to be associated with diabetes mellitus. Application of electric impulses to one or more of the three components of the salivary reflex arch should theoretically improve salivary secretion and lessen the various long-term effects of hyposalivation. Transcutaneous electric nerve stimulation (TENS) is a simple, inexpensive, and non-invasive modality that uses electric current to activate nerves for therapeutic reasons. It is a non-pharmacological method of pain management for which it is widely used. Earlier, TENS research was concentrated mainly on pain, and few clinical trials have been conducted to identify the effect of electrical nerve stimulation on salivary flow. This descriptive type of observational study was conducted at the Department of Oral Medicine and Radiology at Rajasthan Dental College and Hospital during a period of 7 months (August 2019 - February 2020). The ethical certificate was obtained from the college. The total sample size of the study was decided to be 200 participants after a discussion with the statistician. Out of the total 200 individuals, 100 healthy individuals within the age group of 20-40 years were included in Group I. Group II consisted of 100 individuals of age >40 years, of which 50 were non-diabetic and the remaining 50 were diabetic individuals. Patients who had any deleterious habits like alcohol or drug consumption and tobacco in any form, usage of any medications, history of salivary gland pathology that may affect salivary flow, pregnancy, history of head and neck radiotherapy, granulomatous diseases, Sjögren's syndrome or any systemic diseases except diabetes were excluded from the study, as most of these lead to hyposalivation and we wanted our study to be exclusive enough to notice changes in salivary values due to diabetes alone. Patients were explained about the procedure and informed consents were taken. For the diabetic individuals, the random blood sugar (RBS) levels were obtained first by using a tabletop glucometer (Dr. Morepen GlucoOne blood glucose monitoring system BG 03), and only those individuals who had been under medication for at least one year were included. The patients were seated comfortably on the dental chair and were asked to refrain from eating or drinking 1 hour prior to saliva collection. The procedure was done during the morning hours between 9 am and 12 noon. Unstimulated saliva was collected for 5 minutes in a graduated beaker by the draining method. A p value of <0.05 was set as the level of statistical significance.
Kolmogorov-Smirnov and Shapiro-Wilk normality tests were used to check whether the variables followed a normal distribution. For variables that did not follow a normal distribution, the Mann-Whitney U test was used to compare means between the groups. The data collected were analyzed with SPSS software, version 20.0. The mean age for Group I was 28.36±3.1 years, for the Group II non-diabetic individuals it was 53.56±7.8 years and for the Group II diabetic individuals it was 52.16±7.4 years. In Group I (20-40 years), 85 out of 100 individuals experienced an increase in stimulated saliva, 9 had no change and 6 had a decrease. In Group II (>40 years), 83 out of 100 patients had an increase in stimulated saliva, 9 had a decrease and 8 had no change in salivary flow. Individuals who did not show any change were the ones who showed no flow initially. Only those individuals who already had an initial salivary flow were the ones who showed a significant increase. For the age group of 20-40 years, the mean unstimulated salivary rate was 1.64ml/5min and the mean stimulated salivary rate was 1.914ml/5min. For age >40 years, the mean unstimulated salivary rate was 1.231ml/5min and the mean stimulated salivary rate was 1.547ml/5min. The Mann-Whitney U test was applied; for the comparison between Group I (20-40 years) and Group II (>40 years), the p value was 0.01 for unstimulated salivary flow and 0.03 for stimulated salivary flow. A significant difference was seen between unstimulated and stimulated salivary flow when comparing the different age groups. For Group I (20-40 years), the mean unstimulated salivary rate was 1.64ml/5min and the mean stimulated salivary rate was 1.914ml/5min. For Group II (>40 years), the mean unstimulated salivary rate was 1.231ml/5min and the mean stimulated salivary rate was 1.547ml/5min. The Mann-Whitney U test was applied and the p values for unstimulated and stimulated salivary production between males and females were 0.198 and 0.083, respectively. There was no significant difference in salivary production with respect to gender when the effect of TENS on salivary production was assessed. The p value for unstimulated salivary production between diabetic and non-diabetic individuals was 0.002 and for stimulated salivary production it was 0.043; a significant difference was seen. For the diabetic individuals, the mean unstimulated salivary rate was 1.07ml/5min and the mean stimulated salivary rate was 1.44ml/5min. A one-sample t-test was applied and the p value for both unstimulated and stimulated salivary production was 0.00; a significant difference was seen. Reduced salivary secretion has been linked to diabetes, and not many studies have been done regarding the use of electro-stimulation as a therapeutic modality in treating hypo-function of the salivary gland. The present study showed that 168 of the study participants had an increase, which may be attributed to the mechanism by which TENS acts on the parotid glands. It causes stimulation of the auriculo-temporal nerve, which controls the secreto-motor drive of the parotid gland. On application of the TENS device, the parasympathetic supply gets stimulated and hence produces watery and copious saliva. A total of 17 individuals had no change in saliva because these were the individuals who showed no flow initially. A total of 15 individuals had reduced saliva, which may be due to the fact that the brain may have perceived the electrical stimulus to be painful.
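The statistical workflow described above (a normality check followed by a non-parametric comparison of salivary flow between groups) can be illustrated with SciPy as follows. The flow-rate arrays are hypothetical placeholders for the per-participant measurements, not the study's data; the study itself used SPSS.

```python
import numpy as np
from scipy.stats import shapiro, mannwhitneyu

# Hypothetical stimulated salivary flow rates (ml/5 min) for the two age groups
group_1 = np.array([1.9, 2.1, 1.7, 2.0, 1.8, 2.2, 1.6, 1.9])   # 20-40 years
group_2 = np.array([1.5, 1.6, 1.4, 1.7, 1.3, 1.6, 1.5, 1.4])   # > 40 years

# Normality check: a small p value suggests the data are not normally distributed
for name, data in (("Group I", group_1), ("Group II", group_2)):
    w, p = shapiro(data)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Non-parametric comparison of the two groups
u, p = mannwhitneyu(group_1, group_2, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")
```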
In addition, for the diabetic individuals our study demonstrated that the mean unstimulated saliva was 1.08ml/5min, which increased to 1.44ml/5min after stimulation. Similar findings were noted in the study conducted by Dhillon et al. where i et al. includeet al. [ et al. [ Similar to our studies, where an increase in stimulated saliva was seen, studies carried out by Bhasin et al. reporte et al. , it waset al. [ et al. [ In the study by Manoj Kumar et al. , 62 out et al. , 65 outet al. [ Bhasin et al. conductet al. [ Hargitai et al. conductet al. [ Aggarwal et al. conductet al. [ et al. [ In the present study, in the age group 20-40 years, the mean unstimulated saliva was 1.64ml/5min, which increased to 1.91ml/5min post-stimulation. This data was consistent with the study conducted by Bhasin et al. and Sin et al. where tet al. [ et al. [ In this study, the individuals of the age range of 20-40 years had an increase in the both the mean unstimulated and stimulated salivary values which were 1.64ml/ 5min and 1.91ml/5min, respectively when compared to the individuals of the age >40 years who had mean unstimulated and stimulated salivary values of 1.23ml/ 5min and 1.54ml/5min, respectively. Our study correlates to findings of the study conducted by Manoj Kumar et al. where m et al. , there et al. [ et al. [ et al. [ et al. [ The data in our study are also consistent with the studies conducted by Dyasnoor et al. , Singla et al. , Aggarw et al. , and Si et al. where net al. [ et al. [ et al. [ et al. [ et al. [ et al. [ et al. [ et al. [ et al. [ et al. [ et al. [ et al. [ Our study employed the draining method for saliva collection in which the saliva was collected passively in the floor mouth and was then allowed to drain into the graduated beaker without any forced self-stimulation. The studies conducted by Dhillon et al. , Aparna et al. and Har et al. used th et al. , Pattip et al. , Aggarw et al. , Singh et al. , Sakshi et al. , Lingam et al. , Manoj et al. , Mimans et al. and Sin et al. used thet al. [ In this study, the mean value of unstimulated saliva for the diabetics was 1.08ml/5min and for stimulated saliva was 1.44ml/5min. In the study conducted by Jagdhari et al. , where et al. [ For the diabetic individuals, our study demonstrated that the mean unstimulated saliva was 1.08ml/5min, which increased to 1.44ml/5min. A significant change in saliva was seen post-stimulation. Dyasnoor et al. conductThe limitation of our study was that the electrode pads were placed on the approximate location on the skin overlying the parotid gland and the exact anatomical measurements were not made. The diabetic staging of the individuals was not done and all individuals who were found to be diabetic were included in the study irrespective of the sugar levels. In addition, the patients could have been re-evaluated after 1 or 2 weeks after the application of the TENS device, to check for its effectiveness.It seems that TENS has shown positive results in increasing salivary secretions in different age groups and in the diabetic individuals. In the future, TENS might beused as an additional treatment modality to manage salivary gland dysfunctions.The authors declare that they have no conflict of interest."} {"text": "Mitochondria are semi-autonomous, membrane-bound organelles present in the cytoplasm of nearly all eukaryotic cells. 
Their \u201cmost classic\u201d role is to provide the cells with the energy needed to sustain the organism\u2019s life; however, they do not only function as cell \u201cpower stations\u201d. Mitochondria regulate complex processes in cell homeostasis\u2014they are a hub for a number of cell signaling cascades and house several metabolic pathways. Not surprisingly, if these processes become dysfunctional, they often impair mitochondria functions and eventually lead to various types of mitochondrial pathological phenotypes.This Special Issue is focused on the link between mitochondrial dysfunctions and central nervous system (CNS) disorders: the contributions presented here, comprising four original research papers and four review articles, cover various topics, ranging from glutamate toxicity to schizophrenia.An old proverb says that \u201cexcess of everything is bad\u201d. Glutamate excitotoxicity is a common culprit of numerous neurodegenerative diseases. Polster et al. developeSome researchers like proteins, while others prefer lipids; cardiolipin (CL) is a major lipid component of the inner mitochondrial membrane (IMM). CL is also found in the outer mitochondrial membrane (OMM), specifically in the contact sites formed between the OMM and IMM. Similar structures are mitochondria-associated membranes (MAMs), which are in contact with the endoplasmic reticulum (ER). Manganelli et al. demonstrThe opening of mitochondrial permeability transition pores (MPTPs) is a major pathophysiological mechanism of ischemic brain pathology. The modulation of MPTPs may thus represent a potential target for neuroprotection during ischemic brain injuries. Therefore, Skemiene et al. studied The last research paper describes \u201cthe importance of being balanced\u201d. Schizophrenia is a complex mental disorder defined by continuous or relapsing episodes of psychosis. Bryll et al. investig\u201cOpportunities neglected can never be recovered\u201d could be the motto of the first review by Gasparotto et al. . Their rGrespi et al. investigMaresca et al. focus onIn the last review, Vezzani et al. describeIn summary, in this concise Special Issue, the readers will find articles that propose new perspectives from which to interpret the contribution of mitochondrial defects in CNS pathological contexts."} {"text": "Mycobacterium bovis remains a dominant cause of bovine and zoonotic TB worldwide. Despite these impediments, mycobacterial researchers continue to deliver exciting and cutting-edge research, as exemplified in this special issue of Microbiology.The last special edition dedicated to mycobacteria was in 2003 and focused on the promise of the post-genomic era to deliver significant advances in mycobacterial research . In the M. tuberculosis affects its host\u2019s metabolism and immunity. Attention is now being redirected to understanding the battle for metal ions during host pathogen interactions. Understanding metal acquisition by mycobacteria and how metals are used by the host as bacteriostatic/bactericidal weapons is the focus of a review by Serafini and a study by Tamuhla et al. [M. smegmatis adapts to low iron stress [Mycobacterial researchers have led the way in the emerging frontiers of pathometabolism and immunometabolism. We now have a much better understanding of the dietary requirements of this pathogen when it is growing within its human host and how a et al. . Serafina et al. . Tamuhlan stress .M. 
tuberculosis so antibiotic resistant that it was not included in its list of priority antibiotic-resistant pathogens (it is a footnote at the bottom of the widely quoted league table), which has proved unfortunate for the profile of this pathogen. M. tuberculosis tragically causes up to 25\u200a% of AMR-associated deaths [Mycobacterium abscessus, are also important human pathogens, which are associated with severe morbidity and mortality and have limited treatment options because of their inherent antibiotic resistance. Consequently, phage therapy is being explored as an alternative to antibiotics to treat TB and non-TB mycobacterial diseases. Joshi et al.\u2019s study [et al. [Increasing antibiotic resistance (AMR) is a significant impediment to the control of TB and non-TB mycobacterial infections. The WHO deemed d deaths and therd deaths . Non-TB \u2019s study provides [et al. are usin [et al. reviews et al. [et al. [et al. [Treatment of tuberculosis requires a cocktail of drugs to target all mycobacterial populations, including those that are refractory to antibiotic killing because of the specific physiological/phenotypic state of the bacteria. This antibiotic tolerance can affect the whole bacterial population, or a small sub-population known as persisters. Antibiotic-tolerant bacteria are more likely to go on to become genetically resistant and therefore directly contribute to AMR, as reviewed by Mandal et al. . This emet al. . Multiplet al. . Biofilm [et al. use comp [et al. . Therapi [et al. .in vitro passage [et al. [et al. review where we are in terms of understanding their function in non-TB mycobacteria [We have so much more to learn about what is undoubtedly one of the most complex bacterial cell walls, and is also an important drug target because it is intrinsically linked to mycobacterial survival and virulence . The rol passage . Di Capu [et al. explore bacteria .et al. [Regulation of events within mycobacterial cells is the focus of several papers . Insightet al. . This woet al. .et al. [Mycobacterium hassiacum, although it rarely infects humans, could be used as an indicator of disinfection success with utility in the hospital environment, and also as a source of thermostable and tractable enzymes for drug design [et al. [Mycobacteria and Burkholderia.There are several papers focused on non-tuberculosis mycobacteria . Davarpaet al. demonstrg design . Mycobac [et al. investigBecause of the challenges posed by mycobacterial research , the knowledge and tools available for this group of bacteria previously trailed behind those for other more easily tractable pathogens. However, the last two decades has seen mycobacterial researchers pioneering exciting methodologies, tools and emerging paradigms in host pathogen interactions. The challenge is to translate this research into impactful solutions for the control and prevention of mycobacterial diseases."} {"text": "The temporal convolutional networks model efficiently estimates energy expenditure under sitting, standing and high levels of exercise intensities. Conclusion: Our results proved the respiratory magnetometer plethysmography system\u2019s effectiveness in estimating energy expenditure for different age populations across various intensities of physical activity.Purpose: Energy expenditure is a key parameter in quantifying physical activity. Traditional methods are limited because they are expensive and cumbersome. 
Additional portable and cheaper devices are developed to estimate energy expenditure to overcome this problem. It is essential to verify the accuracy of these devices. This study aims to validate the accuracy of energy expenditure estimation by a respiratory magnetometer plethysmography system in children, adolescents and adults using a deep learning model. Methods: Twenty-three healthy subjects in three groups (nine adults (A), eight post-pubertal (PP) males and six pubertal (P) females) first sat or stood for six minutes and then performed a maximal graded test on a bicycle ergometer until exhaustion. We measured energy expenditure, oxygen uptake, ventilatory thresholds 1 and 2 and maximal oxygen uptake. The respiratory magnetometer plethysmography system measured four chest and abdomen distances using magnetometers sensors. We trained the models to predict energy expenditure based on the temporal convolutional networks model. Results: The respiratory magnetometer plethysmography system provided accurate energy expenditure estimation in groups A (R Physical activity (PA) is defined as any bodily movement produced by the contraction of skeletal muscles that leads to an increase in energy expenditure (EE) above the resting level of an individual . EE is cE) can also be used as an index of EE estimation. Recent technological advances to estimate E from wearable sensors have encouraged researchers to use this physiological variable to estimate EE ) and the relationship between 2 and EE (EE-IC = 2 (L) \u00d7 4.825), we propose to directly use the changes in the thoracic and abdominal distances to estimate EE. The differences between EE-IC and EE-RMP in adults and for the \u201cRest-VTh1\u201d intensity in group PP (\u22124%) and a higher value of EE-RMP for the \u201cstanding\u201d intensity in group P (+10%). These differences, even if they are significant, seem acceptable in terms of the literature. Indeed, Lopez et al. [Comparing EEexercise . Our resz et al. used the2 = 0.97, RMSE = 0.71 kcal/min). Our results thus seem to agree with those of previous studies, such as those of Farrahi et al. [The overall results of our study also seem to validate the choice of a DL model to estimate EE from the RMP system. In adults, Zhu et al. , using a results in adulti et al. , which si et al. , estimati et al. . Our resi et al. . With DLi et al. . Howeveri et al. indicateConcerning the limitations, enough datasets including various ages and anthropometric characteristics are essential for improving the accuracy of predicting EE. Indeed, it would be interesting to increase the number of subjects in each age group and also increase the age groups by adding, for example, a group of elderly people whose ventilatory responses differ from those of younger people . It woulOur results show that using a DL model and the RMP system seems efficient for estimating EE in children and adolescents under resting and exercise conditions. The findings of this study represent an essential step in the search for measurement methods and DL models for estimating EE in various subject populations."} {"text": "Elderly and sedentary individuals are particularly vulnerable to heat related illness. Short-term heat acclimation (STHA) can decrease both the physical and mental stress imposed on individuals performing tasks in the heat. However, the feasibility and efficacy of STHA protocols in an older population remains unclear despite this population being particularly vulnerable to heat illness. 
The aim of this systematic review was to investigate the feasibility and efficacy of STHA protocols undertaken by participants over fifty years of age.Academic Search Premier, CINAHL Complete, MEDLINE, APA PsycInfo, and SPORTDiscus were searched for peer reviewed articles. The search terms were; (heat* or therm*) N3 (adapt* or acclimati*) AND old* or elder* or senior* or geriatric* or aging or ageing. Only studies using primary empirical data and which included participants \u226550 years of age were eligible. Extracted data includes participant demographics to an environmental chamber while the remaining study used a hot water perfused suit. Eight studies reported a decrease in core temperature following STHA. Five studies demonstrated post-exercise changes in sweat rates and four studies showed decreases in mean skin temperature. The differences reported in physiological markers suggest that STHA is viable in an older population.Twelve eligible studies were included in the systematic review. A total of 179 participants took part in experimentation, 96 of which were over 50 years old. Age ranged from 50 to 76. All twelve of the studies involved exercise on a cycle ergometer. Ten out of twelve protocols used a percentage of There remains limited data on STHA in the elderly. However, the twelve studies examined suggest that STHA is feasible and efficacious in elderly individuals and may provide preventative protection to heat exposures. Current STHA protocols require specialised equipment and do not cater for individuals unable to exercise. Passive HWI may provide a pragmatic and affordable solution, however further information in this area is required. Performing tasks in a hot environment causes physiological stress and impaired performance in non-acclimated individuals . Heat acOlder populations have a greater mortality risk during heat waves, evidenced by an exponential increase in mortality in people over the age of fifty, with most medical complications and deaths during heatwaves originating from increased cardiovascular strain ,3. Olderet al. [Heat acclimation protocols vary in duration, both in terms of session length, number of days of exposure and whether they are consecutive or non-consecutive, the environmental temperature and humidity used, or whether a passive or exercise-based protocol are used, which means HA can be tailored to the needs of a specific population. Given the time constraints of modern weather prediction , short-tet al. found thet al. four dayControlled hyperthermia is one technique of achieving HA that involves repeated sessions of submaximal work in an environmental chamber to increase the participant\u2019s core temperature above 38.5\u00baC. The time that a participant spends above 38.5\u00baC is important for developing physiological adaptations . ConsecuPassive heating can induce numerous health benefits, similar to those induced by sustained exercise . As exerHeatwaves are becoming more extreme and more frequent across the globe , thus thet al. [Millyard et al. identifiet al. . Furtheret al. . This arThe aim of this systematic review was to determine the feasibility and efficacy of short-term heat acclimation (STHA) protocols undertaken by healthy participants over fifty years of age.This review was completed in accordance with the relevant items in the preferred reporting items for systematic reviews and meta-analyses (PRISMA) .th September 2022. 
The search terms were as follows; (heat* or therm*) N3 (adapt* or acclimati*) AND old* or elder* or senior* or geriatric* or aging or ageing. The final search was done on 18th October 2022, no additional papers were found.An initial comprehensive search was performed on 10The search was conducted using EBSCOhost Research Databases using the advanced search function that including the following data bases: Academic Search Premier; CINAHL Complete; MEDLINE; APA PsycInfo; SPORTDiscus with Full Text. Only published peer reviewed articles available in English were included. Only studies using primary empirical data were eligible.The relevant data was then extracted from the articles and entered into two tables. The intervention was any form of short-term heat acclimation that were equal to or longer than four , but no All included studies were assessed for methodologic quality and risk of bias. The included studies were subject to a modified Downs and Black (1998) checklist to assess the overall quality of the papers and rank them accordingly \u201338. The The search returned 4011 studies, of which 328 were duplicates, a further 3501 were deemed not relevant at title level leaving 182 to be assessed at abstract level. A further 170 papers were then excluded for the following reasons; the protocol did not include an acclimation period, the acclimation period was longer than twelve days, the acclimation period was shorter than four-days, the participants were younger than fifty, it was not a primary investigation or the PICO criteria were not fulfilled. Twelve papers were selected for a full text review .et al. [et al. [et al. and Maca [et al. . This fuet al. [-1 while Gerrett et al. [k et al. adopted t et al. asked th[V.O2max ,30 and t[V.O2max \u2014the lattet al. [et al. [Two studies used heated water in some capacity, Gerrett et al. utilised [et al. used HWI [et al. while th [et al. .Reported outcome variables are shown in et al. [et al. . Scores The primary aim of this review was to explore the feasibility and efficacy of the use of STHA protocols for an older population. The systematic review identified twelve articles that were eligible for review. Twelve studies using different HA protocols, demonstrated STHA is feasible in an older population. Despite using different methods, all of the studies presented significant changes in physiological outcome measures associated with adaptations brought about by HA. All of the study protocols involved an exercise component. This systematic review suggests that HA is feasible and efficacious in elderly individuals and may provide protection to heat exposures in this population.et al. [Six of the twelve studies reported a decrease in core temperature from HA during exercise ,25\u201329, fet al. reportedet al. . Increaset al. . Improveet al. ,29,31, met al. ,26, theret al. ,35 and rAll participants completed the HA protocols with no adverse events. However, dropouts were reported in three studies ,28,30. Fet al. [et al. [re during the post-test simulated activities of daily living protocol and improved performance in a 6-minute walk test. Our systematic review therefore suggests HA protocols should include a modifiable exercise intensity for individuals unable to sustain target workload in the heat. 
A recent study suggests exercise in a thermoneutral environment followed by HWI and sauna use after exercise was effective at lowering core temperature, skin temperature and HR in younger participants, despite not showing an improvement in performance [Waldock et al. reported [et al. reportedformance . Therefoet al. [This review identified a number of key considerations when designing a STHA protocol for an older population. It was noted that the elderly participants had difficulty performing and the maintaining workload during HA from four of the studies ,28,30,35V.O2max ,26. Howestimulus . A recen [et al. showed tntensity , howeveret al. [A second consideration would be the length of the acclimation protocol and the consecutive nature of the protocol is performed over. STHA protocols should allow time for core temperature to reach 38.5\u00baC or +1.0\u00b0C from baseline. Nine of the twelve studies from this review performed consecutive day HA. In a recent review, Tyler et al. stated tet al. . Given tet al. who sugget al. [ad lib hydration and implement a minimum consumption and encouragement to drink more water. In addition, to a blunted psychological thirst response, Waldock et al. [Thirdly, three studies from this review demonstrated that older men have been shown to have a slower sweat response, as well as a blunted thirst response when compared to younger men ,27,34. Tet al. . This cok et al. identifiet al. [et al. [There are still unanswered questions in the field of STHA and more research is required to fully explore the effectiveness and practical implications of HWI. Greenfield et al. found th [et al. found th studies ,48. Wate studies and theret al. [et al. [Waldock et al. was the [et al. . Not allThe majority of the participants from this review were male. This is important as hormone imbalances during menstruation impact female core temperature in younger population . HoweverThe twelve studies examined suggest that HA is feasible and efficacious an older population. Current HA protocols require specialised equipment and do not cater for individuals unable to exercise. Further thermoregulatory research should aim to include more female participants to explore differences in effectiveness of HA pre- and post- menopause. There remains very limited data on HWI and how it compares to other HA methods. However, HWI could serve as a viable and practical form of HA and remove some of the issues posed by traditional STHA protocols highlighted in this review. Given that HWI does not require specialist equipment, such as an environmental chamber, HWI could be a cost effective and time saving method of HA for general use, as well as used with populations who struggle with the exercise component of traditional HA methods.S1 Checklist(DOCX)Click here for additional data file."} {"text": "The risThe authors conclude that unilateral antegrade cerebral perfusion (ACP) had a lower mortality (6.6%) and stroke rate (4.8%), whereas bilateral ACP , retrograde and deep hypothermic circulatory arrest without adjunctive perfusion had higher rates of mortality and stroke. However, these conclusions must be tempered with the following considerations. The data are diverse because it is mostly from observational studies, which include multiple procedures , indications for surgery and experience . Importantly, there is no consensus on the criteria for selecting antegrade, bilateral antegrade, retrograde or deep hypothermic circulatory arrest without cerebral perfusion. 
Confounding considerations may have led to the selection of 1 cerebral perfusion technique over another. The lowest temperature and total time of cerebral perfusion for each technique were also incomplete across the studies. Unfortunately, without this level of granularity, it becomes quite difficult to conclusively determine if 1 technique is indeed superior to another. Accordingly, the authors are careful to not directly compare 1 technique to another.et al., Lou et al. and our group [et al. [et al. [Nonetheless, the meta-analysis adds to the literature by correlating the findings of similar studies by Angeloni ur group . We applur group . Unilateur group . Notably [et al. and Prev [et al. and foun"} {"text": "Mitoraj et al., RSC Adv., 2019, 9, 23764\u201323773.Correction for \u2018Structural versatility of the quasi-aromatic M\u00f6bius type zinc( The authors regret that the affiliations of Maria G. Babashkinah and Damir A. Safin were incorrectly shown in the original manuscript. The corrected list of affiliations is as shown herein.The Royal Society of Chemistry apologises for these errors and any consequent inconvenience to authors and readers."} {"text": "Editorial on the Research TopicExploring physical activity and sedentary behaviour in physical disability By Ryan J, Kerr C, Kilbride C, Norris M. (2022) Front. Rehabilit. Sci. 3:1006039. doi: 10.3389/fresc.2022.1006039Increasing physical activity and reducing sedentary behaviour reduces the risk of premature mortality, cardiovascular disease, cancer, depression, and type 2 diabetes. For children and adults with physical disabilities \u20135, benefMorgan et al. (2022), stroke Church et al. (2021), multiple sclerosis Stennett et al. (2021); Lavelle et al. (2022); Fortune et al. (2021) and rare neurological conditions Ramdharry et al. (2021); and in children and young people with physical disabilities Bolster et al. (2021); Sharma et al. (2021); Sansare et al. (2021). Collectively, this research topic provides a snapshot of the breadth and diversity of research in the area and highlights some of the key considerations when developing, implementing and evaluating interventions to increase physical activity for people with disabilities of all ages.The nine papers included in the research topic were authored by teams from the United Kingdom, United States of America, Australia and the Netherlands. They employed a variety of research designs, and considered various facets of physical activity in adults with mobility disability Stennett et al. (2021); Ramdharry et al. (2021); Bolster et al. (2021). Detailed consideration of contextual factors when developing physical activity interventions, potentially before function and disability factors, was highlighted in a number of papers Morgan et al. (2022); Stennett et al. (2021); Ramdharry et al. (2021); Bolster et al. (2021); Sharma et al. (2021). Interestingly, Ramdharry et al noted a mismatch between outcome tools reported in the literature, which focused primarily on activity, compared to the participation-focused outcomes of importance articulated by people with rare neurological conditions Ramdharry et al. (2021). Church et al. noted a similar trend in their paper, demonstrating that although body structure and function outcomes were measured in all 15 empirical studies in their rapid review of high intensity training in people with stroke, participation outcomes were only measured in four Church et al. (2021). 
Taken together, the papers in this research topic use the language of the ICF to articulate the many influences on physical activity. They advocate for theory-driven physical activity interventions that incorporate behaviour change components, take due cognisance of the individual's health status, their environment and their individual goals, and evaluate outcomes of importance to the individual.The International Classification of Functioning, Disability and Health (ICF) was emplLavelle et al. (2022). In addition to the issues this may cause when evaluating effectiveness of physical activity interventions, it also resulted in frustration and distrust amongst wearers, which could potentially negatively impact motivation to be physically active. It appears that there is still a need for development of psychometrically robust, user-friendly methods of objective measurement of physical activity in people with disabilities.Measurement was a strong theme in the papers included in this research topic. As detailed above, an ICF approach was often employed with a strong focus on participation outcomes. However, measurement validity was also addressed. Lavelle et al. demonstrated poor criterion validity of commercially available devices to monitor step-count and activity time in people with multiple sclerosis Sharma et al. (2021). In contrast, Morgan et al. reported outcomes from a long-running community-based exercise programme delivered in an accessible community facility Morgan et al. (2022). Bolster et al. also strongly advocated for consideration of the environment in which physical activity interventions are delivered but acknowledged that provision of physical activity \u201ctherapy\u201d in the everyday environment is logistically difficult and thus costly within current service delivery models Bolster et al. (2021). This suggests that innovation is required at policy and health systems levels to deliver impactful interventions in a cost-effective manner.Sustaining participation in physical activity can be challenging and may be strongly influenced by both personal and environmental factors. Sharma et al. used technology to overcome environmental barriers, demonstrating the feasibility and acceptability of an online physical activity intervention for people aged 12\u201321 years with a physical disability The value of increasing physical activity and reducing sedentary behaviour for everyone is undisputed \u2013 we know \u201cwhy\u201d it is important. For people with disabilities, the \u201cwho\u201d, \u201cwhat\u201d, \u201cwhere\u201d, \u201cwhen\u201d and \u201chow\u201d to optimise physical activity participation are still up for discussion. This research topic demonstrates this complexity but also the innovation and variety in design, methods, implementation and evaluation of physical activity interventions for people with disabilities. It also provides a stimulus for further research in this important area."} {"text": "Nature, Niemann et al. show that brown adipocytes become apoptotic under thermoneutral conditions and release ATP, which in turn is converted extracellularly into inosine. They further present evidence that pharmacological and genetic manipulations that enhance signalling of this purine metabolite stimulates thermogenesis in brown adipocytes and promotes metabolic health.Brown adipocytes react to temperature and nutritional challenges by ramping up their metabolism and generating heat. 
This adaptation to changes in the environment is crucial for defending organismal homeostasis, but is impaired in obesity and during aging. Writing in The global incidence of obesity along with its comorbidies such as type 2 diabetes and fatty liver disease continues to rise . This coSince weight gain results from a combination of increased energy intake and reduced energy expenditure, a way of targeting the latter process could further optimize obesity treatments. Thermogenic adipose tissue has attracted much interest in this regard as its activation increases energy expenditure and promotes weight loss in preclinical studies . While aNature, Niemann et al. [Writing in n et al. performen et al. , it woulet al. [Niemann et al. next asket al. [A and A2B receptors. Consistent with the increase in PKA signalling caused by inosine in brown adipocytes, its thermogenic effects were lost in A2A and A2B receptor knockout mice.Niemann et al. then proet al. [Having established a clear acute thermogenic effect of inosine, Niemann et al. then asket al. [Slc29a1 was the most highly expressed of the two transporters in all adipose tissue depots analysed and was indeed required for the uptake of radioactively labelled inosine by brown adipocytes. In line with the hypothesized role of ENT1 in brown adipocytes, extracellular inosine levels were increased under ENT1 deficiency. Remarkably, oxygen consumption was increased in ENT1-deficient brown adipocytes and brown adipose tissue explants. Furthermore, ENT1-deficient brown and white adipocytes had increased thermogenic gene expression and nutrient consumption. These results clearly demonstrate that ENT1 serves to clear extracellular inosine in brown adipose tissue to put a brake on thermogenesis.At this junction of the manuscript, Niemann et al. were ideet al. , focus wet al. [Slc29a1 mRNA was found to be expressed in platelet-derived growth factor receptor A (PDGFR\u03b1) positive stromal vascular cells in white adipose tissue, raising the possibility that inosine promotes the differentiation of thermogenic precursors as well as transdifferentiation of mature white adipocytes mentioned earlier. Despite the largely overlapping phenotypes of global and adipose tissue-specific ENT1 knockout mice, it should be noted that the former were more protected from obesity suggesting that ENT1 in other peripheral tissues, such as skeletal muscle, promotes a positive energy balance.Since chronic inosine treatment was shown to promote a negative energy balance, it would be reasonable to expect that chronic loss of ENT1 function would have a similar effect. To address this question, Niemann et al. generateet al. [When delivered systemically, inosine could have negative off-target effects, thus an approach of enhancing its signalling more locally could be safer . Niemannet al. thereforet al. [SLC29A1 and thermogenic gene expression in subcutaneous and visceral white adipose tissue in a large human cohort of approxmiately 1,500 individuals. Furthermore, analysis of the Genome Aggregation Database revealed that the single nucleotide polymorphism c.647T>C in the SLC29A1 gene leading to a Ile216Thr substitution in the ENT1 protein occured with high frequency. Functional studies further revealed that this variant reduced inosine uptake in cells by approximately 30%. 
Remarkably, in a self-contained German population of 895 individuals, the Il2216Thr variant signficantly associated with a lower BMI and homo/heterozygous carriers were more likley to be underweight and healthy, although it remains to be determined whether this is through effects on adipose tissue thermogenic capacity [Finally, to translate the preclinical findings to the clinical context, Niemann et al. performecapacity .et al. [et al. [A and/or A2B receptors which would have provided evidence for the role of inosine in mediating this effect. Also, it would be interesting what effects blocking inosine signalling has on adipose tissue under thermoneutral conditions or in aged mice when apoptosis is expected to be high. If dying brown adipocytes do indeed make a purinergic call to arms to neighbouring healthy brown adipocytes, blocking inosine signalling under such conditions should diminish thermogenesis even further. Notably, adipose tissue-specific A2B receptor knockout mice show normal energy expenditure when acutely housed at thermoneutrality [et al. [Since its rediscovery in 2009 , 18, 19,et al. has expa [et al. found th [et al. provides"} {"text": "Locusta migratoria manilensis and anti-aging effect in Caenorhabditis elegans\u2019 by Hui Cao et al., RSC Adv., 2019, 9, 9289\u20139300, https://doi.org.10.1039/C9RA00089E.Retraction of \u2018Structural characterization of peptides from RSC Advances article due to a significant amount of unattributed text overlap with articles by different author groups that were not cited, including articles published in Phytochemistry by Shan Su et al.,1Journal of Functional Foods by Elena M. Vayndorf et al.2 and Natural Product Research by Hui Ai et al.3The Royal Society of Chemistry hereby wholly retracts this Jie Liu agrees to the retraction. The other authors have been informed but have not responded to any correspondence regarding the retraction.RSC AdvancesSigned: Laura Fisher, Executive Editor, Date: 29th March 2022"} {"text": "Fractures are extremely common in children. The fracture risk is 40% in boys and 28% in girls. Although many pediatric fractures are frequently regarded as \u201cinnocent\u201d or \u201cforgiving\u201d, typical complications do occur in this precious population, e.g., premature physeal closure and post-traumatic deformity, which could potentially cause life-long disability.Despite the high incidence of pediatric injuries, there is still much debate on optimal treatment regimes. Although nonoperative and surgical treatment techniques have developed enormously during the past decades, current management is still more eminence-based rather than evidence-based because of the limited scientific evidence. For example, the recently developed comprehensive Dutch clinical practice guideline on diagnosis and treatment of the most common pediatric fractures included almost solely \u201clow\u201d or \u201cvery low\u201d level recommendations, based on the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) criteria. The only exceptions were some forearm fracture recommendations, which received \u201cmoderate\u201d GRADEs. There is a clear lack of data and a need for higher-level science in pediatric trauma.Children was to help fill the gap of undiscovered knowledge and improve the scientific understanding of pediatric fractures and related subjects. 
A great variety of topics were covered in the 14 high-quality original and review papers that have been published so far.

Two studies dealt with general aspects related to acute pediatric trauma: Verdoni et al. studied a contemporary hot topic, and Yang et al. retrospectively analyzed predictors of mortality in pediatric trauma.

In addition, numerous articles investigated specific pediatric fractures, including their diagnosis, treatment, complications and outcomes. In line with the epidemiology of fracture localizations in children, most papers reported on the upper extremity. Many interesting papers were published, from head to toe. Van der Water et al. provided the first of these contributions. Plate fixation for proximal humerus fractures was investigated in a small case series by Freislederer et al. from Switzerland. Supracondylar humerus fractures are very common in children and are sometimes accompanied by brachial artery injuries; Vu et al. described these combined injuries, and Terpstra et al. reviewed the available evidence in order to investigate when to operate on them. Next, Hermans et al. contributed a further investigation. Forearm fractures may malunite, leading to rotational deficits, a problem described by Schröder et al. With the aim of reducing exposure to radiographic radiation, Zhang et al. presented their approach. One lower limb study was included in this Special Issue: quality of life after surgery for recurrent patellar dislocation was prospectively studied by Herdea et al. from Romania.

After a fracture, the healing of bone in the skeletally immature may be influenced by several factors. Bone healing and the use of non-steroidal anti-inflammatory drugs were systematically reviewed by Choo and Nuelle from the USA. In another systematic review, Armstrong et al. investigated a related topic. Finally, two papers focused on children with brittle bones: one was provided by Nijhuis et al. and the other by Ramírez-Vela et al. from Mexico.

In conclusion, this Special Issue contains a great diversity of studies on a broad pediatric population in relation to fractures. Each paper contributes to our knowledge in its own way and helps improve care for our most valuable population. During the writing of this Editorial, more papers are on their way to further fill the current knowledge gaps and identify room for further study."} {"text": "Resveratrol (RSV) is a polyphenol phytoalexin first extracted from Veratrum grandiflorum O. Loes and can be found in various plants and red wine. Owing to the in-depth study of its pharmacological mechanisms, the therapeutic potential of RSV in various diseases such as osteoarthritis, neurodegenerative diseases, cardiovascular diseases and diabetes has attracted the attention of many researchers. RSV has anti-apoptotic, anti-senescent, anti-inflammatory, anti-oxidative, and anabolic activities, which can prevent further degeneration of intervertebral disc cells and enhance their regeneration. With high safety and various biological functions, RSV might be a promising candidate for the treatment of IDD. This review summarizes the biological functions of RSV in the treatment of IDD, with the aim of facilitating further research. Intervertebral disc degeneration (IDD) is a high-incidence disease of the musculoskeletal system that often leads to stenosis, instability, pain and even deformity of the spinal segments. IDD is an important cause of discogenic lower back pain and often places a large economic burden on families and society.
Currently, the treatment of IDD is aimed at alleviating symptoms rather than blocking or reversing pathological progression of the damaged intervertebral disc The intervertebral disc acts as the load\u2010bearing component of the spine and is composed of three closely connected parts: inner nucleus pulposus (NP), peripheral annulus fibrosus (AF) and outer cartilage endplate (CEP). IDD is a complex disease including multiple pathological processes. The pathogenesis of IDD mainly involves degradation of ECM, NP cells senescence, apoptosis, autophagy, inflammatory responses and oxidative stress. NP cells senescence, apoptosis, inflammation and oxidative stress result in the promotion of the ECM degradation. What is more, activation of autophagy can also influence the activity of NP cells, thereby regulating ECM homeostasis. According to the literatures, the RSV showed protective effects on IDD through a variety of mechanisms. Intervertebral disc degeneration (IDD) is a high incidence disease of musculoskeletal system that often leads to stenosis, instability, pain and even deformity of the spinal segments.14H12O3) is a polyphenol phytoalexin with a relative molecular weight of 228. It was first extracted from Veratrum grandiflorum O. Loes by Takaoka in 1939 and can be found in various plants and red wine.Resveratrol was 1319 ng h/mL.via uridine\u20105\u2032\u2010diphosphate\u2010glucuronosyltransferase and sulfotransferase, respectively.In 40 healthy volunteers orally administered with single doses of 0.5, 1, 2.5, or 5 g RSV, the mean maximum plasma concentration (Cmax) at the highest dose was 539 ng/mL, which was achieved within 1.5\u00a0h post dose (Tmax).et al.The toxicity of RSV mainly depends on its dosage. At single dose of <1 g RSV showed no obvious side effects.The intervertebral disc acts as the load\u2010bearing component of the spine and is composed of three closely connected parts: inner NP, peripheral annulus fibrosus (AF) and outer cartilage endplate (CEP).et al.et al.via activating PI3K/Akt signaling pathway, thereby attenuating inflammation induced apoptosis of NP cells. Li et al.et al.Inflammatory response directly participates in the progression of IDD. It can also lead to secondary low back pain and radicular symptoms.et al.et al.et al.et al.,via PI3K/Akt/caspase\u20103 pathway. Wu et al.et al.The NP is a hydrated gel\u2010like tissue, which is the main functional structure of intervertebral disc and can be adjusted according to mechanical stress stimuli.et al.et al.via ERK1/2 pathway. Recent research has confirmed that diabetes is a potential causative factor of IDD.et al.et al.et al.et al.et al.via activating PI3K/AKT/mTOR and PI3K/AKT/GSK\u20103\u03b2 signaling pathways.The occurrence of IDD is accompanied by high rates of apoptosis, leading to decreasing cell numbers in NP tissue, and therefore disturb the homeostasis of ECM.et al.2O2 enhanced intracellular ROS expression and induced mitochondrial dysfunction in human NP cells, which was characterized by down\u2010regulation of ATP and mitochondrial membrane potential levels. However, RSV treatment promoted autophagic flux as well as exerting protective effects on mitochondrial dysfunction and cell apoptosis induced by H2O2. Therefore, they concluded that RSV attenuated oxidative stress induced mitochondrial dysfunction by activation of autophagy.et al.via the PI3K/Akt pathway under oxidative damage. 
Wang et al.via stimulating upstream regulator AMPK in TNF\u2010a treated human NP cells and thereby inhibited the expression of MMP\u20103. They concluded that RSV attenuated the catabolic effect through down\u2010regulating TNF\u2010a induced MMP\u20103 expression via stimulating autophagy mediated by the AMPK/SIRT1 signaling pathway. In addition, Shi et al.Autophagy is a self\u2010protective cellular mechanism that removes damaged or senescent organelles, and considered to be an important cellular metabolic process.et al.et al.et al.et al.NP cells senescence is a common feature during IDD progression and is often demonstrated to be positively correlated with IDD grade.et al.The CEP plays an important role in maintaining the normal shape of the vertebral body, biomechanical stabilization and solute transportation. The calcification or cell apoptosis of CEP hinders nutrient supplement, oxygen transmission and excretion of metabolic wastes from NP, and in depth study of the mechanism of CEP degeneration can provide a novel idea for the prevention and treatment of IDD.in vivo using rodent and rabbit models group, the RSV treatment group had an increased T2 weighted image signal, and the modified Thompson MRI grade was lower than that of the vehicle group. Examination of IDD related gene expression showed aggrecan expression in the RSV group was higher than that of the vehicle group, while the expression level of MMP\u201013 was lower than that of the vehicle group. Hematoxylin\u2013eosin (HE) staining showed RSV could ameliorate the cellular characteristics of IDD caused by annular puncture, such as fibroblast\u2010like cells and severe fibrosis of extracellular components. In another study by Zhang et al.,et al.Researchers investigated the effects of RSV on IDD ls Table\u00a0. Kwon96 et al.et al.Radiculopathic pain is the main symptom of IDD. The von Frey filament test is commonly used to evaluate paw withdrawal threshold, and lower withdrawal thresholds are considered as sign of mechanical hypersensitivity, which is correlated to pain behavior in animal models.et al.The application potential of RSV is limited by its poor water solubility, poor stability, fast metabolism, and difficulty in reaching an effective blood concentration in intervertebral discs.et al.in vivo models and humans, a more clinically relevant animal model should be developed to better mimic the complexity of the human intervertebral disc and IDD progression.At present, rodent trauma models are the main animal models used to investigate the effects of RSV on IDD, which do not reflect the biomechanical characteristics of the natural degeneration of the human body.In conclusion, RSV shows substantial protective roles in the progression IDD. Mechanism researches reveal that RSV could effectively inhibit the apoptosis and senescence as well as promote autophagy of NP cells. It also exerts anabolic and anti\u2010catabolic effects on ECM, which are crucial for the regeneration of damaged intervertebral disc. Multiple signaling pathways, such as PI3K/Akt, NF\u2010\u03baB, AMPK/SIRT1, and ERK1/2, are the common target signaling pathways of RSV in IDD treatment. What is more, inflammatory and oxidative stress could also be suppressed by RSV treatment. However, the published studies are limited to preclinical studies, and evidence of RSV\u2010containing drugs for the therapy of IDD has yet to be investigated in clinical trials to confirm the preliminary results obtained from previous researches. 
Furthermore, the current studies are mainly focused on the NP; the effects of RSV on the AF and CEP also need systematic investigation. In addition, more studies should be performed to verify the synergistic effects of RSV combined with traditional drugs in the treatment of IDD, in order to enhance efficacy and decrease drug resistance and side effects. Moreover, the poor aqueous solubility and rapid metabolism of RSV might restrict its clinical application. As previously reported, RSV released by US-mediated RSV/AbCDH2 NBs is a promising strategy to enhance its bioavailability and pharmacological efficacy in IDD treatment. With this consideration, mechanisms to promote the absorption of RSV are also worthy of investigation.

The concept of the manuscript was devised by Yan-zheng Gao. Ming-yang Liu and Kai-guang Zhang performed the overall literature searches. Ming-yang Liu and Liang Zhang were in charge of writing. The tables and figure were designed by Ming-yang Liu. Hai-jun Li, Wei-dong Zang and Yan-zheng Gao discussed the content of the article and gave suggestions."} {"text": "Brain areas at the parahippocampal gyrus of the temporal–occipital transition region are involved in different functions including processing visual–spatial information and episodic memory. Results of neuroimaging experiments have revealed a differentiated functional parcellation of this region, but its microstructural correlates are less well understood. Here we provide probability maps of four new cytoarchitectonic areas, Ph1, Ph2, Ph3 and CoS1, at the parahippocampal gyrus and collateral sulcus. Areas have been identified based on an observer-independent mapping of serial, cell-body stained histological sections of ten human postmortem brains. They have been registered to two standard reference spaces, and superimposed to capture intersubject variability. The comparison of the maps with functional imaging data illustrates the different involvement of the new areas in a variety of functions. Maps are available as part of the Julich-Brain atlas and can be used as anatomical references for future studies to better understand relationships between structure and function of the caudal parahippocampal cortex. The online version contains supplementary material available at 10.1007/s00429-021-02441-2.

The caudal parahippocampal cortex (PHC) is part of the ventral temporal cortex and its transition to the occipital cortex. It has been activated in studies targeting different tasks of visual–spatial processing as well as in memory. The brains were obtained through the body donor program of the Anatomical Institute of the University of Düsseldorf. An MRI of each brain, acquired by a Siemens 1.5 T scanner, was used. Afterwards, the brains were embedded in paraffin and serially cut in the coronal plane into sections of 20 μm. Postmortem brain number 4 is shown in the corresponding figure. The definition of borders was based on image analysis and statistical tools, to identify significant changes in the laminar pattern, i.e., in cytoarchitecture.
Finally, each border was controlled by visual inspection of the histological images. The volume of each area was computed from the areal surface ΣNi over all sections (in pixels), the thickness T of a histological section (20 μm), the width x as well as the height y of a pixel (both measuring 0.02116 mm), and the shrinkage factor F of each individual brain, using the volume formula (a reconstruction is given below); in this way the volumes were individually corrected for the shrinkage caused by histological processing. The areas were registered to two standard reference spaces, including the nonlinear asymmetric MNI152 2009c template space. In a next step, a maximum probability map (MPM) was created. For this purpose, each voxel was assigned to the area that had the highest probability at this particular position; the data are provided in the corresponding table. The mean volumes of all four new areas Ph1, Ph2, Ph3 and CoS1 varied between 183.7 mm² and 341.7 mm² (p > 0.05). The maps are available in the Julich-Brain Atlas and via the "Interactive Atlas Viewer" of the Human Brain Project and the EBRAINS research infrastructure (https://www.humanbrainproject.eu/en/explore-the-brain/atlases/).

It is difficult to go beyond this point and discuss in more detail the correspondence between different maps, since an important added value of the probabilistic maps is the inclusion of the variability between brains. Any historical brain map, such as that from von Economo and Koskinas, based on a selection of sections of one or a few individual brains, cannot cover this aspect, and it is not possible to compare them with sufficient accuracy in a common reference space. Compared to the representation of the examined region in previous anatomical maps, the functional segregation of this region is much more heterogeneous, and more fine-grained subdivisions have been proposed. Several studies have demonstrated that the PHC participates in visual–spatial tasks including map reading, spatial orientation, navigation and spatial memory.

Supplementary file1: Comparison of cytoarchitectonic maps with positions of functional imaging studies of Aguirre et al. (1996) (yellow), Epstein et al. (1999) (white), Hales et al. (2009) (turquoise), Henke et al. (1999) (grey), Janzen et al. (2007) (green), Kirwan and Stark (2004) (blue), Maguire et al. (1998) (pink), Kveraga et al. (2011) (orange) and Sommer et al. (2005) (red), converted to native MNI coordinates and shown as coloured dots in a sequence of coronal sections within the MNI152 reference space. The corresponding native MNI coordinates are written above the respective image. Coordinates of the named studies are also written in Table 4 (PNG 1815 KB)

Supplementary file2: Comparison of cytoarchitectonic maps with positions of functional imaging studies of Aguirre et al. (1996) (yellow), Epstein et al. (1999) (white), Hales et al. (2009) (turquoise), Henke et al. (1999) (grey), Janzen et al. (2007) (green), Kirwan and Stark (2004) (blue), Maguire et al. (1998) (pink), Kveraga et al. (2011) (orange) and Sommer et al. (2005) (red) with surface reconstructions of the MPM of the four new areas in the MNI152 reference space. The dots indicate the coordinates of different activations. The light blue area represents the most probable position of the PPA as described by Weiner et al. (2018). Ph1 is marked in green, Ph2 in yellow, Ph3 in red and CoS1 in blue. Coordinates of the named studies are written in Table 4 (MP4 9490 KB)

Below is the link to the electronic supplementary material."}
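The volume formula referred to in the passage above did not survive extraction. Working only from the quantities it defines (section thickness T, pixel dimensions x and y, summed areal pixel count ΣNi and shrinkage factor F), a plausible reconstruction is the following; it is offered as an editorial sketch of the computation rather than a verbatim quotation of the published equation:

```latex
% Plausible shrinkage-corrected volume of an area, assuming the standard
% pixel-counting approach: the pixel count summed over all sections is
% converted to an area (x*y), extruded by the section thickness T and
% divided by the individual shrinkage factor F.
V \;=\; \frac{T \cdot x \cdot y \cdot \sum_{i} N_i}{F}
```

If only every k-th section was analysed, an additional spacing factor would enter the numerator; the extracted text does not specify this, so it is omitted here.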
{"text": "The aims of this study were: (1) to evaluate the effectiveness of interventions that focus on body weight, smoking cessation, improving sleeping patterns, and alcohol and illicit substance abuse; and (2) to compare the number of interventions addressing body weight and health risk behaviours in low- and middle-income countries (LMICs) with the number available in high-income countries (HICs).

Intervention studies published up to December 2020 were identified through a structured search in the following databases: OVID MEDLINE (1946–December 2020), EMBASE (1974–December 2020), CINAHL (1975–2020) and APA PsycINFO (1806–2020). Two authors independently selected studies, extracted study characteristics and data, and assessed the risk of bias using the Cochrane risk of bias tool V2. We conducted a narrative synthesis and, in the studies evaluating the effectiveness of interventions to address body weight, we conducted a random-effects meta-analysis of mean differences in weight gain. We also did a systematic search of systematic reviews looking at cardiometabolic and health risk behaviours in people with SMI, and compared the number of available studies from LMICs with those from HICs.

We assessed 15 657 records, of which 9 met the study inclusion criteria. Six focused on healthy weight management, one on sleeping patterns and two tested a physical activity intervention to improve quality of life. Interventions to reduce weight in people with SMI are effective, with a pooled mean difference of −4.2 kg (95% CI −6.25 to −2.18; 9 studies; 459 participants; I² = 37.8%). The quality and sample size of the studies was not optimal: most were small studies, with inadequate power to evaluate the primary outcome, and only two were assessed as high quality. We found 5 reviews assessing the effectiveness of interventions to reduce weight, perform physical activity and address smoking in people with SMI. From the five systematic reviews, we identified 84 unique studies, of which only 6 were performed in LMICs.

Pharmacological and activity-based interventions are effective in maintaining and reducing body weight in people with SMI. There was a very limited number of interventions addressing sleep and physical activity and no interventions addressing smoking, alcohol or harmful drug use. There is a need to test the feasibility and cost-effectiveness of context-appropriate interventions to address health risk behaviours that might help reduce the mortality gap in people with SMI in LMICs.

As a secondary aim, we compared the number of interventions addressing body weight and health risk behaviours in LMICs with the number available in HICs. The review was reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Eligibility was defined in terms of (1) Population (SMI); (2) Type of study; and (3) Outcomes. Further relevant studies were sought by citation searching (forwards and backwards) of the included studies and relevant systematic reviews. The results of the database and citation tracking reference searches were stored and de-duplicated in an EndNote library. Outcomes of interest included the following. Weight gain: reduction in body weight or body mass index (BMI). Tobacco use: self-reported abstinence with biochemical verification, including expired carbon monoxide, and reduction in levels of expired carbon monoxide, cotinine and nicotine.
Alcohol abuse: Alcohol abstinence, frequency of alcohol use, and quantity of alcohol use measured with any standardised and validated questionnaires. Substance use (illicit drug and unprescribed medication): Self-reported abstinence measured as any standardised and validated questionnaires. Biochemically-verified abstinence was recorded where available. Sleep: Sleep time, self-reported prevalence of bad sleep (less than 7\u00a0h or more than 9\u00a0h per day) and improvement in insomnia measured by any standardised questionnaire. Quality of life: measured by any validated scale and adverse events.We included any outcome related to weight or health risk behaviour. For instance, The EndNote library was exported to Covidence and de-duplicated again . Missing data were requested from the study authors. The extracted information included: Study reference , study population, country, setting , study design, intervention aim, number of intervention groups, description of the intervention, comparison intervention(s), duration of the intervention and outcome collection: short term (<6 months), medium term (6\u226412 months), long term (12 months or longer); number of participants, participant demographics , participant diagnoses (including diagnostic criteria according to ICD and DSM) and baseline characteristics, primary outcome measure, secondary outcome measures, overall effect size/relative effect of intervention and funding source.et al., et al., The methodological quality of the included studies was independently assessed by two reviewers with discrepancies resolved by a third reviewer (PM). We used the Cochrane Collaboration risk of bias tool 2.0, .et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., The search strategy identified 15\u00a0657 records. After removing 6773 duplicates, 8884 titles and abstracts were screened for eligibility . We asseet al., et al., et al., et al., et al., et al., et al., et al., As seen in I2\u00a0=\u00a037.8%). The highest effect size was found on the combination of lifestyle and metformin . We found low heterogeneity between the studies I2\u00a0=\u00a037.8%.There were a total of 459 participants in the six studies, all focusing on weight reduction. All of the studies reported body weight, BMI and waist circumference as primary outcomes. As shown in I2\u00a0=\u00a00.0%).The sensitivity analysis excluding studies with a high risk of bias Wu, includedet al., s.d.\u00a0=\u00a00.91) v. 1.70 times (s.d.\u00a0=\u00a00.57) in the control; increase in sleeping time, being 5.7\u00a0h (s.d.\u00a0=\u00a01.6) in the intervention group v. 5.4\u00a0h (s.d.\u00a0=\u00a00.9) in the control group. The use of melatonin was effective as a short-term hypnotic for patients with schizophrenia with insomnia, as participants who received the treatment experienced greater early morning freshness all through the study.Only one study focused on sleeping patterns and sleep were effective while there was a gap in trial evidence regarding smoking cessation and alcohol and illegal substance use.et al., et al., et al., All of the interventions focused on weight reduction and reported additional cardiometabolic risk factors as secondary outcomes. Similarly to our findings, evidence from HICs have shown that lifestyle and pharmacological interventions are effective to maintain and reduce body weight in people with SMI . 
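A brief sketch of how the random-effects pooling reported above is typically computed may help readers reproduce such estimates. The implementation below uses the standard DerSimonian–Laird estimator; the study-level numbers are illustrative placeholders only and are not the data of this review.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling of mean differences (DerSimonian-Laird).

    effects   : per-study mean differences (e.g., weight change in kg)
    variances : per-study variances of those mean differences
    Returns the pooled estimate, its 95% CI, tau^2 and I^2 (%).
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, ci, tau2, i2

# Hypothetical study-level data: mean difference in kg and its variance.
effects = [-3.5, -6.1, -2.0, -5.4, -4.8, -1.9, -7.2, -3.0, -4.4]
variances = [1.2, 2.5, 0.9, 3.1, 1.8, 1.1, 4.0, 1.5, 2.2]
pooled, ci, tau2, i2 = dersimonian_laird(effects, variances)
print(f"Pooled MD = {pooled:.2f} kg, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.1f}%")
```

Substituting the extracted per-study mean differences and variances would yield a pooled estimate of the kind reported in this review (−4.2 kg, I² = 37.8%).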
It is likely that we were able to identify them because of the specific search terms focusing on LMICs in our review and the additional databases we searched.There are a few limitations of our study that require acknowledgement. (1) Most of the studies had small samples, and were categorised as having a high risk of bias. Furthermore, our sensitivity analyses show that these studies significantly bias the results of our meta-analyses. However, even after removing these studies with high risk of bias, the pooled effect on weight reduction remains statistically significant.; (2) There were minor deviations from the protocol. We intended to assess the effectiveness of interventions that focused on diet and physical activity for weight gain, however we also included pharmacological interventions with the same primary outcome (weight reduction). We did sub-group analysis according to the type of interventions, which allows for an assessment of non-pharmacological interventions independently. (3) We did not systematically search for the number of interventions to address weight and health risk behaviours in people with SMI in HICs, but systematically looked for available reviews that have looked in detail into these topics instead. There was an overlap in the RCTs included in the previously published reviews and our review, however, we excluded duplicated studies to avoid over-estimation of studies in any particular region. Despite these limitations, we provide for the first time a summary of the available interventions to address health risk behaviours and weight gain in people with SMI living in LMICs.Pharmacological and behavioural interventions were effective in reducing body weight in people with SMI. There was a limited number of interventions addressing sleep and physical activity and no interventions addressing smoking, alcohol or illicit drugs abuse. We found a disproportionate number of interventions performed in LMICs as compared to HICs. The absence of smoking cessation studies, and the gap between LMICs and HICs was the most surprising, since smoking makes the greatest contribution to poor health and health inequalities for people with SMIs. There is a need to test the feasibility and cost-effectiveness of context-appropriate interventions to address health risk behaviours and weight gain in people with SMI in LMIC, and to generate evidence that might aid in the development of policy and programmes to address these issues that might reduce the mortality gap in people with SMI."} {"text": "In recent years, peptides have received increased interest in pharmaceutical, food, cosmetics, and various other fields. The high potency, specificity, and good safety profile are the main strengths of bioactive peptides as new and promising therapies that may fill the gap between small molecules and protein drugs. Peptides possess favorable tissue penetration and the capability to engage in specific and high-affinity interactions with endogenous receptors. The positive attributes of peptides have driven research in evaluating peptides as versatile tools for drug discovery and delivery. In addition, among bioactive peptides, those released from food protein sources have acquired importance as active components in functional foods and nutraceuticals because they are known to possess regulatory functions that can lead to health benefits.This Special Issue of International Journal of Molecular Sciences represents the third in a series dedicated to peptides. 
This issue includes thirty outstanding papers describing examples of the most recent advances in peptide research and its applicability. Saccharomyces cerevisiae proteome microarrays are employed to discover the direct protein targets of Sub5 from Sub5-protein interactions. Bioinformatics analysis reveals 15 actin-associated proteins as targets of Sub5. The protein\u2013protein interaction network is linked to ribonucleoprotein, transcription and translation, chromosome, histone, and ubiquitin-related DNA repair, and chaperone. Ramos-Martin and D\u2019Amelio [Bombyx mori. Besides its potent activity against several highly virulent and antibiotic resistant bacterial pathogens, cecropin KJ has demonstrated to be one of the few peptides active against esophageal cancer. In the study of Bryzek et al. [\u00ae technology with enzymatic in situ N-acetylation by RimJ to obtain a long-acting version of immunostimulatory peptide thymosin \u03b11 in Escherichia coli at high yield. The findings of this study provide the basis for the therapeutic development of a next generation thymosin \u03b11 with prolonged plasma circulation that promotes its benefits against hepatitis B and C viruses, among other infections. The study of Colvin et al. [Toxoplasma gondii-secreted dense granule protein (GRA9) interact with recognition receptor NLRP3 and inhibit the formation of the NLRP3 inflammasome through the blockade of the binding of apoptotic speck-containing (ASC) to NLRP3 in mitochondria. In an E. coli- or P. aeruginosa-induced sepsis model mice, recombinant GRA9C increase the anti-inflammatory, bactericidal, and anti-septic effects by increasing M2 polarization. These findings define the potential of GRA9 as a new candidate to be a therapeutic agent for sepsis. Kanellopoulos et al. [26Met by hydrophobic amino acids in the sequence of bovine lactoferricin-derived dimeric peptide LfcinB results in a significant enhancement of its cytotoxic activity against breast cancer HTB-132 and MCF-7 cells without affecting non-tumorigenic MCF-12 cells. Moreover, the obtained dimeric peptides promote apoptosis through the intrinsic pathway and do not compromise the integrity of the cytoplasmic membrane. These findings show the promising role of these molecules as basis for the design and development of new drugs against breast cancer. Finally, Wilson et al. [2Tyr and 9Ile) to an alanine . The findings of this study offer new strategies for the design of more efficient peptide-based vaccines to achieve a desired immune response.The Special Issue starts with a group of papers investigating the potential of synthetic peptides as new drug alternatives for controlling and/or managing chronic diseases. It begins with a study by Lee et al. on the eD\u2019Amelio elucidatk et al. , argininn et al. investign et al. on BBI-1n et al. demonstrotein GRA interacts et al. use a ras et al. , peptides et al. , the subAgrobacterium-mediated transformation. Lunasin is not detected in wild-type wheat while it is present in transgenic wheat. Moreover, lunasin enrichment from transgenic wheat displays an increased anti-proliferative activity in HT-29 cells through induction of pro-apoptotic pathways. The review of Tyagi et al. [Moreover, there is a short series of articles dealing with research in food-derived bioactive peptides, including the characterization of chemical structure and elucidation of modes of action of food-derived bioactive peptides. The review of Jahandideh & Wu discussei et al. discussei et al. about thi et al. 
on two bWalterinnesia aegyptia, for bioactive compounds. Walterospermin, a peptide of 57 amino acid residues with high homology to other venom toxins, was identified. Moreover, its ability to activate sperm motility from a variety of species (including humans) through the interaction with the receptor that controls motility function was proven. De Waard et al. [The Special Issue includes some studies on identification and assessment of biological activity of natural peptides produced by animals and microganisms. El Aziz et al. screen td et al. explore d et al. evaluated et al. review m2+, which illustrates how zinc might regulate DNA/RNA. Wojciechowski et al. [Another group of papers explores the effects of human endogenous peptides on body functions. Retinal aging is the result of accumulating molecular and cellular damage with a manifest decline in visual functions. P\u00f6sty\u00e9ni et al. validatei et al. examine i et al. describei et al. review ti et al. confirmeFinally, an article describes newly developed biomaterials based on peptides for biomedical applications. Antifouling polymer layers containing extracellular matrix-derived peptide motifs offer promising new options for biomimetic surface engineering. Sivkova et al. report thttps://www.mdpi.com/journal/ijms/special_issues/peptides_2021, accessed on 20 May 2022).We wish to thank the invited authors for their interesting and insightful contributions, and look forward to a new set of advances in the bioactive peptides field to be included in the following Special Issue, \u201cPeptides for Health Benefits 2021\u201d, ("} {"text": "HIV infection, through various mechanisms causes a derangement in sexual maturation. This study compared the Marshal and Tanner staging of HIV-infected and uninfected males. The aim of the study was to determine the sexual maturation in male children infected with HIV on HAART in Abakaliki.this was a cross-sectional and comparative study involving 80 HIV-infected boys aged 8-17 years and 80 uninfected counterparts matched for age and socio-economic class. Stages of sexual maturation (testicular size and pubic hair) were determined according to the method proposed by Marshall and Tanner. The testicular size was measured using an orchidometer. Data analysis was done with SPSS version 20. Structured questionnaire was used to collect information on socio-demographics.assessment of pubic hair development, showed that 45 (56.2%) of the subjects were in the pre-pubertal stage compared to 27 (33.8%) among the controls, this relationship was statistically significant . The mean testicular volume among subjects was found to be 8.29 \u00b1 8.26mls compared to 11.57 \u00b1 8.26mls found in controls. This relationship was also statistically significant. There were significant statistical relationships between duration on HAART and clinical stages of disease with both pubic hair development and testicular volume of subjects and controls.HIV-infected males had significantly delayed onset and progression of sexual maturation. Routine assessment of the sexual maturation of HIV-infected children as well as addressing the modifiable variables influencing sexual maturity is recommended. Amongst all the developmental changes that occur in adolescence, sexual maturation is the most significant in its influence on the behaviour of boys and girls . 
The phyAssessment of sexual maturation using testicular size and pubic hair development in boys can be done by two methods, namely; the adolescent self-report and the physician assessment . The disThe study was conducted at the Alex Ekwueme Federal University Teaching Hospital and Mile Four Mission Hospital, Abakaliki, Ebonyi State, Nigeria using physician assessment.Study site: the study was carried out in the paediatric HIV clinics of both hospitals. The clinics run once a week in each hospital and has an average attendance of 20 and 15 patients respectively. A total of 200 children were registered in the paediatric HIV clinic, as at August 2018 from both hospitals. The controls were recruited from the Children Outpatient Clinic (CHOP). This outpatient clinic runs daily, except on weekends with an average attendance of 350 patients per week.Study design: this was a cross-sectional and comparative study. HIV-infected males aged 8-17 years who met the inclusion criteria were consecutively enrolled from the HIV clinic until the desired sample size was achieved.Study population: the study subjects were HIV-infected males aged 8-17 years who were born to women with documented HIV-1 infection during pregnancy or at the time of delivery and in whom HIV infection had been diagnosed through detection of viral markers (DNA PCR) or the persistence of HIV-1 antibodies (investigated by means of enzyme linked immunosorbent assay) after 18 months of life. The controls were HIV-uninfected males who were matched for age and social class and without chronic diseases such as sickle cell anemia, chronic kidney disease or asthma attending the children outpatient clinic.Sample size determination: the sample size required was determined using the method as shown:Where r = ratio of control to cases (subjects), 1 for equal number of cases and control.\u03b2= normal standard variate for significant level as identified in previous section. i.e normal standard variate for power of 80% which is equal to 0.84. Z\u03b1/2= normal standard variate for significant level as identified in previous section i.e normal standard variate at 95% confidence level = 1.96. P1-P2 = different in proportions or effect size identified from previous studies. P1 = is proportion in cases (subjects) (0.17) compared to the controls [TS1=7 (8.75%) and PHI=27 (33.75%)] as shown in Examining relationship between sexual maturation and some modifiable factors:et al. [et al. [et al. [HIV infection was noted to negatively affect the total number of males that attained sexual maturation in the index study. More of the subjects were in pre-pubertal stage of testicular development 37 (46.25%) than controls 7 (8.75%). In subsequent testicular development (TS II), (21.25%) of the subjects were in stage II compared with controls (47.5%) on the same stage. Therefore, HIV uninfected controls were observed to develop faster from testicular stage II to stage V more than the HIV-infected subjects. This implies that the subjects were more likely to be in the pre-pubertal stage using testicular size development, which leads to delayed pubertal development in the subjects. This may be due to disruption of the gonadotropin-releasing hormone axis. Ebling FJ and Ojeda SR et al. ,17, obse [et al. found th [et al. .et al. [et al. [However, Kessler M et al. found no [et al. noted thet al. [et al. [et al., Mbwile GR, Pozo J & Argente J and Iloh ON et al. [et al. [et al. 
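The sample-size expression itself is missing from the Methods passage above. Given the quantities defined around it (allocation ratio r, Z_beta = 0.84 for 80% power, Z_alpha/2 = 1.96, and the proportions P1 and P2), the calculation most likely follows the standard two-group comparison-of-proportions formula; the version below is a reconstruction under that assumption rather than a verbatim quotation of the authors' equation:

```latex
% Assumed two-group sample-size formula for comparing proportions
% (n = subjects per case group; the control group size is r*n).
n \;=\; \frac{(r+1)}{r}\,
        \frac{\bar{p}\,(1-\bar{p})\,\left(Z_{\beta}+Z_{\alpha/2}\right)^{2}}
             {\left(P_{1}-P_{2}\right)^{2}},
\qquad
\bar{p} \;=\; \frac{P_{1}+r\,P_{2}}{r+1}
```

With r = 1 (equal numbers of subjects and controls, as stated), the leading factor reduces to 2.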
[Similar findings were found on the pubic hair development indicating that subjects were more likely to remain in the pre/early adolescent period than the controls. The difference in age compared favorably with the findings made by Mbwile GR in Tanzaet al. in the Uet al. . The stu [et al. corroborN et al. ,20-22 foN et al. . Similar [et al. in Afric [et al. in Italyet al. [et al. [et al. [There was significant relationship between CD4 count and development of pubic hair. This suggests that subjects in advanced or in a severe immunologically depressed stage may have delay in attainment of puberty in HIV-infected children. The reason for the observation may be because subjects who have low CD4 count are prone to opportunistic infection which may lead to poor growth. Bellavia A et al. observed [et al. and Buch [et al. who in tet al. [et al. [The logistic regression analysis of the present study showed that clinical stages was significantly related to the sexual maturation. This may be explained by the facts that opportunistic infections may affect the pubertal development and sexual maturation. Furthermore, there was a positive significant relationship between duration on HAART and sexual maturation. This may be explained by the facts that ARV drugs may lead to restoration of immunity and subsequently improvement in sexual maturation. The study by Mbwile GR supporteet al. observed [et al. also supThere was a significant delay in both the age at onset and progression of sexual maturation in HIV-infected subjects compared to their HIV-uninfected counterparts.Limitation: the study was a case-control hospital-based study with only one assessment of physical growth and sexual maturation. A prospective cohort study with multiple interval assessments would have given more information on the age at onset of testicular enlargement and pubic hair development as well as the subsequent pubertal development of HIV-infected males.Adolescents infected vertically by HIV are at higher risk of developmental impairment, growth alteration, wasting, delayed- puberty and impaired neuro-cognitive function;An actual pubertal delay in sexual maturation in adolescent can lead to the development of poor body image and low self-esteem which may result in psychosocial problems like school avoidance, poor academic achievement, isolation, eating disorders, bullying/teasing by peers, depression, and social withdrawal.This study affirms that there is a significant delay in both the age at onset and progression of sexual maturation in HIV-infected subjects compared to their HIV-uninfected counterparts;There was an association between HIV disease severity and delay in sexual maturation."} {"text": "While the benefits of nutrition and physical exercise are commonly studied separately, their concomitant integration has the potential to produce greater benefits in women than strategies focusing only on one or the other . StudyinWith this Special Issue on \u201cNutrient Intake and Physical Exercise as Modulators of Healthy Women\u201d, we are honoured to contribute with important pieces of evidence for an integrational approach\u2014nutrition and physical exercise\u2014as a potential modulator of lifelong. It includes ten studies: eight articles and two narrative reviews. Please let us introduce the articles with a short summary.The differences in substrate oxidation between men and women have been a topic of interest in the past few years. In this line, Nosaka et al. performeIn elderly women, Amirato et al. 
found thIn apparently healthy women, Waliko et al. assessedIn overweight or obese women with hypertension, Dos Santos Fechine et al. found th2 but with an increased body fat percentage. The cut-offs used for body fat percentage have varied depending on the study population, sex, and ethnicity. NWO is a widespread public health issue that may be prevalent in up to one-third of individuals [The term normal weight obesity (NWO) was firstly described by Lorenzo et al. and was ividuals . In NWO ividuals reportedn = 251; 74% white), who perceived greater body size, reported less liking of physical activity as well as less healthy dietary behaviours. Additionally, women whom both liked and engaged in physical activities had a lower body size perception and healthier diet quality.Attention to preferred physical activities encourages participation in physical activity. Participation in physical activity is associated with a better body size perception but not in a consistent way. In this line, Hubert et al. reported2) presented significant changes in the anthropometric parameters, namely decreased body weight, waist and hips circumference, as well as body mass index (BMI) and waist-to-hip ratio (WHR) after three months of endurance or endurance strength training intervention. Additionally, those alterations were associated with a better perception of the current figure and a lower level of concern about body shape, with more significance for the endurance training group compared to the endurance strength training group.In a prospective randomised trial, Bak-Sosnowaska et al. found thIn Rocha-Rodrigues et al. \u2019s reviewThe roles of diet and nutrition on gynaecological disorders were reviewed by Afrin et al. . The evi"} {"text": "Rebekah et al., Nanoscale Adv., 2021, DOI: 10.1039/d1na00135c.Correction for \u2018NiCo The authors regret that the name of the first author (A. Rebekah) was incorrectly given in the original article. The corrected name is given here.The Royal Society of Chemistry apologises for these errors and any consequent inconvenience to authors and readers."} {"text": "In Ferris et\u00a0al.,Michael Kissick is employed by and has ownership interest in Accuray, Inc. John E Bayouth has ownership interest in MR guidance, LLC, which has business activity with a company that utilizes respiratory motion management . The remaining authors have no conflicts of interest to disclose."} {"text": "Membrane processes have demonstrated their enormous potential for water treatment, either by removing organic and mineral contaminants before permeating stream discharge, or by concentrating high added-value compounds in retentate stream. Although the advantages and drawbacks of the various membrane processes are well known, the mechanisms governing their filtration performances are usually not fully understood, and discussion is often still open. For this purpose, researchers have been developing numerical models for decades to describe the transport of species through membranes and predict their performances for specific applications. Numerical modeling can be useful in many aspects of membrane science and can help to solve many scientific issues. 
A numerical approach can be used to model the physical mechanisms governing fluid flow or mass transfer, and then applied to various membrane processes, such as pressure-driven, concentration-driven, electrically driven, or thermally driven processes.This Special Issue on \u201cNumerical Modeling in Membrane Processes\u201d provides examples of original works or reviews of the literature dealing with the use of mathematical models to understand, describe, predict, or optimize the performances of membrane processes. Quezada et al. reviewedKim et al. numericaPark et al. studied Chae et al. formulatChoi et al. analyzedTheir study revealed that exergy destruction in the permeate occurred near the feed inlet, and the effect became less influential closer to the feed outlet. Their analysis of exergy flows also showed that the efficiency loss in the permeate side corresponded to 32.9\u201345.3% of total exergy destruction. Gu et al. implemenXie et al. coupled 2+/Li+ ratio based on free flow ion concentration polarization in a microfluidic system. In their study, the authors numerically showed that this method is able to decrease the Mg2+/Li+ ratio significantly, and has great potential as preprocessing technology for lithium extraction from salt lake brines. Zhang et al. simulateZhu et al. used a sDutourni\u00e9 et al. proposedNagy et al. comparedJokic et al. examinedSkolotneva et al. proposed2 removal rate which was determined in vivo with porcine blood from that determined in vitro with water. This study indicated that the main CO2 transport resistance behaves generally differently in blood and water. The authors concluded their work by mentioning that studies of the CO2 boundary layer should be preferably conducted with blood, whereas water tests should be favored for the determination of total CO2 removal performance of oxygenators. Lukitsch et al. conducteWu et al. performeOsterroth et al. used a CFinally, Nunes et al. studied"} {"text": "Water pollution is a major environmental problem that has a significant impact on human and animal health and the ecosystem. Most pollution is caused by human activities, and pollutants can be categorized into inorganic, organic, biological and radioactive . Thus, t3O4, TiO2, ZnO, Ag, etc.) and nanocatalytic membrane systems are more efficient and less time-consuming than other treatment methods, environmentally friendly and consume a low amount of energy [During the last decade, the field of nanotechnology has extensively developed, contributing to a significant impact on water purification. The use of nanomaterials has led to impressive findings in the field of water remediation, with a high efficiency for the removal of various pollutants, cost effectiveness and reusability. Thus, nanostructured materials, due to their unique physicochemical characteristics, such as catalytic activity; high physical, chemical and thermal stability; large specific surface area; high chemical reactivity; strong electron ability, etc., have gained the attention of many researchers. Different classes of nanomaterials can be used in water purification either as adsorbents, photocatalysts and/or antibacterial agents ,3,4,5. MThis Special Issue focuses on nanostructured materials and their applications in advanced water purification processes. 
This Special Issue contains fifteen articles and one communication that address the synthesis, modification and regeneration of novel nanomaterials for their applications to remediate water contaminated by various pollutants.3O4@C nanoparticles for the decolorization of high concentrations of methylene blue. According to the obtained results, an easy method for producing Fe3O4@C nanoparticles with excellent catalytic reactivity is presented, representing a promising approach for the industrial production of Fe3O4@C nanoparticles used to treat high concentrations of dyes in wastewater. The second paper by Zia et al. [3O4) for the efficient and specific removal of iodine anions from contaminated water. The findings presented in this study offer a novel method for desalinating radioiodine in various aqueous media. The next paper by Narath et al. [Cinnamomum tamala leaves extract. According to the obtained results, the synthesized ZnO NPs exhibit excellent photocatalytic activity against dye molecules, through a green protocol. The fourth paper by Shrestha [Dalbergia sisoo (Sisau) derived from activated carbon (AC) as an adsorbent material for the removal of rhodamine B dye from an aqueous solution. According to the author, this study successfully addressed a local problem of wastewater pollution from garment and textile industrial effluents using the locally available agro-waste of Dalbergia sisoo. Khan et al. [E. coli and S. aureus. The results presented, demonstrated that solvent-free and sustainable mechanochemical synthesis can easily produce semiconductor nanocrystals in a multidisciplinary application. In the paper by Pe\u00f1aranda et al. [E. coli), due to their easy preparation, low-cost synthesis, and disinfection efficiency. Magro et al. [\u221215 M and 10\u22125 M in mineral and river water matrices. Ram\u00edrez-Rodr\u00edguez et al. [3N4) using a solution method for the successful extraction of arsenic (III), pointing out that a ZnO\u2013CuO/g\u2013C3N4 nanocomposite can be a potential candidate for the enhanced removal of arsenic from water reservoirs. Zhang et al. [3PO4 for pollutants degradation. These authors concluded that Ag3PO4\u2013based materials could be reliably used for the degradation of methyl orange (MO) as they mostly retain their photoactivity during the second recycling test. In the final paper by Giannoulia et al. [The first paper published in this Special Issue by Xiang et al. illustraa et al. presentsh et al. proposesShrestha illustran et al. investign et al. found thn et al. , amino-fn et al. reporteda et al. , three da et al. present o et al. successfz et al. proposedz et al. focus ong et al. , prepareg et al. presentea et al. , halloys"} {"text": "For heating, ventilation or air conditioning purposes in massive multistory building constructions, ducts are a common choice for air supply, return, or exhaust. Rapid population expansion, particularly in industrially concentrated areas, has given rise to a tradition of erecting high-rise buildings in which contaminated air is removed by making use of vertical ducts. For satisfying the enormous energy requirements of such structures, high voltage wires are used which are typically positioned near the ventilation ducts. This leads to a consequent motivation of studying the interaction of magnetic field (MF) around such wires with the flow in a duct, caused by vacuum pump or exhaust fan etc. 
Therefore, the objective of this work is to better understand how the established movement in a perpendicular square duct interacts with the MF formed by neighboring current-carrying wires. A constant pressure gradient drives the flow under the condition of uniform heat flux across the unit axial length, with a fixed temperature on the duct periphery. After incorporating the flow assumptions and dimensionless variables, the governing equations are numerically solved by incorporating a finite volume approach. As an exclusive finding of the study, we have noted that MF caused by the wires tends to balance the flow reversal due to high Raleigh number. The MF, in this sense, acts as a balancing agent for the buoyancy effects, in the laminar flow regime Menni et al.2 introduced an analysis of the hydrodynamic and thermal of water, ethylene glycol and water-ethylene glycol as base liquids isolated by aluminum oxide nano-measured dense particles. Tayeb et al.3 evaluated the hydrodynamic and thermal performances of nanofluids (NFs) in a chaotic situation. Phan et al.4 prepared a numerical investigation on concurrent thermodynamic and hydrodynamic instruments of subaquatic explosion. Another presentation of numerical studied on heat transfer (HT) performance of thermally mounting movement inside rectangular microchannels is given by Ma et al.5. Mozaffari et al.6 increased the ability of lattice using the mechanism of hydrodynamic and thermal. Wakif et al.7 checked the stability of thermal radiation and surface roughness effects via the thermo-magneto-hydrodynamic method. Ali et al.8 formulated a new mathematical revision of MF communication with completely established movement in a vertical duct. Rios et al.9 formulated an investigational assessment of the current and hydrodynamic presentation of NFs in a coiled flow inverter. Sabet et al.10 studied the behavior of the current and hydrodynamic of forced convection steamy slide flow in a metal spray.When a flow is fully developed , it is said to have reached steady state. For the given constant heat flux, there will be no temperature variation with respect to time. Gevari et al.11. A case study is introduced for shape memory of NFs by Osorio et al.12 and Zareie, et al.13. Azmi et al.14 studied HT for hybrid nanofluids (HNFs) in a tube with CCWM. Kumar and Sharma15 optimized ferrofluid using CCWM. Numerical and computational investigations are presented using the advantages of CCWM by Khan et al.16 on a constant fluid, Ali et al.8 for MF interaction, Chang et al.17 on magnetic NFs. He et al.18 introduced a computational heat transfer and fluid flow in view of CCWM. Lu et al.19 gave a computational fluid dynamics examination of a dust scrubber with CCWM. Briggs and Mestel20 showed a linear stability of a ferrofluid centered on a CCWM. Dahmaniet al.21 enhanced the HT of ferrofluid movement in a solar absorber tube by a periodic CCWM. Sharma et al.22 analyzed the MF-strength of multiple coiled utilizing the idea of CCWM. Vinogradova et al.23 modeled a system of ferrofluid-based microvalves in the MF shaped by a CCWM. He et al.24 studied the dynamic pull-in for micro\u2013electromechanical scheme with a CCWM.Current-carrying\u00a0wire\u00a0model (CCWM) is used in fluid for many advantages. Its usability appeared in many researches, where a review paper in this direction is given by Liu et al.25 examined the HNFs movement near an adaptable insincere with tantalum and nickel NFs, according to the consequence of MF. 
Talebi et al.26 offered an inspection of mixture-based opaque HNF movement in porous mass media inflated by MF operating mathematical technique. Ayub et al.27 deliberated the MF of nanoscale HT of magnetized 3-D chemically radiative HNF. Mourad et al.28 employed the finite element analysis of HT of Fe3O4-MWCNT/water HNF engaged in curved addition with uniform MF. Manna et al.29 showed a novel multi-banding application of MF to convective transport arrangement employed with porous medium and HNF. Khashi\u2019ie et al.30 examined unsteady hugging movement of Cu-Al2O3/water HNF in a straight channel with MF. Lv et al.31, Khan et al.32 and Alkasasbeh et al.33 distributed a numerical technique near microorganisms HNF movement with the arcade current and MF over a revolving flappy. Roy et al.34 investigated HT of MHD dusty HNFs over a decreasing slide. Khazayinejad and Nourazar35 recycled the fractional calculus to describe 2D-fractional HT examination of HNF alongside a leaky plate together with MF. G\u00fcrdal et al.36 compressed the HNF curving in depressed tube imperiled with the MF. Azad et al.37 presented a study on rapid and sensitive MF sensor based on photonic crystal fiber with magnetic fluid infiltrated nanoholes. Skumiel et al.38 considered the consequence of the MF on the thermal effect in magnetic fluid. Alam et al.39 examined the influence of adjustable MF on viscous fluid between 3-D rotatory perpendicular hugging platters.A magnetic field (MF), which can be thought of as a vector field, governs the magnetic effect on stirring rechargeable tasks, power-driven flows, and magnetic resources. An influencing control in an MF involves a force that is perpendicular to both the control's own velocity and the MF. Zhang et al.40 enhanced the wind turbine equipped with a VDG. Kim et al.41 presented a computational fluid dynamics analysis of buoyancy-aided turbulent mixed convection inside a heated VDG. L\u00f3pez et al.42 designed selection and geometry in OWC wave dynamism converters for performance. Umavathi and B\u00e9g43 introduced a computation of thermo-solutal convection with soret-dufour cross diffusion in a VDG NFs. Oluwade and Glakpe44 computed 3D-Mixed convection in a VDG. Choudhary45 optimized the VDG of 3D printer part cooling fan duct. Li et al.46 studied the effects of VDG on intraglottal pressures in the convergent glottis. Zhao et al.47 investigated of necessary instrument leading to the performance development of VDG. Wojewodka et al.48 considered a numerical study of complex flow physics and coherent structures of the flow through a longwinded VDG. Moayedi and Amanifard49 enhanced the electrohydrodynamic usual HT in a VDG.In multistory, enormous building constructions, vertical duct geometry (VDG) is channels or paths utilized to deliver, reappearance, or use air for reheating, ventilation, or air conditioning. Rapid population expansion, particularly in areas with concentrated industries, has given rise to a culture of building skyscrapers with tens of stories, where vertical ducts are the obvious select for eliminating muted air. Ranjbar et al.50. The divergence theorem is used in the finite volume method to transform volume integrals in a partial differential equation containing a divergence term into surface integrals. The surfaces of each finite volume are then used to evaluate these terms as fluxes. Namdari et al.51 investigated of the effect of the discontinuity direction on fluid flow in porous rock masses on a large-scale using HNFs and streamline utilizing FVM. 
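As a point of reference for the current-carrying wire model discussed above, the two textbook relations that underlie it, namely the Lorentz force on a moving charge and the field of a long straight wire, are recalled below. These are standard electromagnetics results quoted here for orientation; they are not taken from the cited papers, whose specific forcing terms may differ.

```latex
% Lorentz force on a charge q moving with velocity v in field B, and
% magnitude of the field at distance d from a long straight wire
% carrying current I (mu_0 = vacuum permeability).
\mathbf{F} \;=\; q\,\mathbf{v}\times\mathbf{B},
\qquad
B(d) \;=\; \frac{\mu_{0}\, I}{2\pi d}
```

The field lines are concentric circles around the wire and the magnitude decays as 1/d, so a wire placed near a duct imposes a transverse, spatially varying field on the flow, which is the situation exploited in the duct study summarised below.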
Faroux et al.52 studied a coupling non-local rheology and capacity of liquid (VOF) process in view of FVM implementation. Xu et al.53 simulated a system of incompressible curved element hydrodynamics\u2010finite volume technique joining procedure for interface tracking of two\u2010phase fluid movements in view of FVM. Wang et al.54 investigated a coupled optical-thermal-fluid-mechanical analysis of parabolic trough solar receivers employing supercritical CO2 as HT in virtue of FVM. Liu et al.55 studied the consequence of gas compressibility on liquid ground of air\u2010cooled turbo\u2010generator according to FVM. Koulali et al.56 presented a comparative study on effects of thermal gradient direction on heat exchange between a pure fluid and NFs hiring FVM. Ding et al.57 considered a mathematical examination of passive toroidal tuned liquid column dampers for the trembling regulator of monopile wind turbines using FVM and FEM. Makauskas58 indicated a comparison of FDM, FVM with NN for solving the forward problem. Yousefzadeh et al.59 inspected a natural convection of Water/MWCNT NF movement in an inclusion for examination of the first and second laws of thermodynamics in view of FVM.A technique for expressing and analyzing partial differential equations as algebraic equations is known as the finite volume method (FVM)In this work, the complex interaction of thermodynamically as well as hydrodynamically settled current in a perpendicular square channel, with the MF created by neighboring positioned two wires, has been investigated for the first time. One wire is positioned whereas the other one is assumed to be present above the duct. The new aspects of the issue are described through physical explanations. A finite volume based computational approach has been developed to obtain the numerical solution for different values of the governing parameters. The numerical results have been depicted in the graphical form, and are interpreted accordingly.L) based on the exterior pressure gradient. The current is expected to be stable, laminar and incompressible. That is why, the velocity is:We consider the fully settled movement of a standard Newtonian fluid that for the fully developed flow when the velocity distribution over any cross section of the duct does not change along the direction of flow , the pressure gradient is a constant. It is observable that the pressure is the function of It is observable that the pressure is the function of yAn evident significance of Eq.\u00a0 is the dNow, the succeeding dimensionless variables:the Eqs. and 13)14\\documeE, N, S, W etc. are evaluated by means of finite volume method (FVM). The differential equations, in the FVM, are transformed to surface integrals and then solved iteratively. The system of algebraic partial differential equations can be solved with the usual numerical methods. But the unknown conditions such as e.g. initial or boundary conditions cause a trouble in finding the numerical solution. At some stage, the system might be divergence even for precise estimations of missing conditions. Contrarily, solution will be interrupted for the partial differential equations involving the complex eigen values. However, finite volume method is the best choice to tackle such types of problems which might not be fixed easily by the other methods. On the other hand, a better convergence can be obtained with FVM as compared to other numerical methods. Obviously, Eqs. 
… and (16) are treated in the same manner. It is to be pointed out that the control volume is defined by … . Now we integrate and evaluate the integrals over each term in Eq. … as given below. Similarly, the integration over the second term leads to … . Finally, incorporation of Eqs. …–(20) into Eq. … yields the algebraic system which, in light of Eq. …, is assembled and solved iteratively, as shown in the corresponding figure. Our numerical results for the flow velocity in the centre of the channel along a straight line, for the case when there is no wire, compare favorably with the existing literature. The impact of the two nearby current-carrying wires on the momentum and temperature behavior of the flow inside a vertical duct has been numerically investigated. In order to validate our computational technique, the numerical results have been compared with, and found to be in excellent agreement with, those reported in the existing literature. Based on the numerical study, the following conclusions may be drawn: the MF tends to balance the impact of buoyancy in the laminar flow regime; the thermal distribution is significantly reduced over the whole duct as the MF is strengthened; and it may be inferred that flow reversal may be controlled by applying an MF of appropriate power around the channel carrying the flow. Future extensions of the present study include, but are not limited to: numerical experiments with different types of fluids of industrial interest, for example nanofluids, hybrid nanofluids and Casson fluids; replacing the rectangular duct with ducts of other shapes; studying entropy changes for a wide range of combinations of fluid choices and duct shapes; and applying the finite volume method to a variety of other physical and technical challenges in the future."} {"text": "The year 2021 marked the 10th anniversary of the publication of Cells. To celebrate this milestone, a Special Issue entitled “10th Anniversary of Cells—Advances in Cellular Pathology” was launched. The goal of this Special Issue was the collection of impactful research/review articles in the Cellular Pathology field. The final roster of published articles for the Special Issue is an incredible collection of research articles and reviews, covering topics from cancer, diabetes, and ocular manifestations, all of which are truly consequential to the quality of life. One of the most common types of primary liver cancer is hepatocellular carcinoma (HCC). Recurrence in HCC after conventional treatments remains a significant clinical challenge, despite advanced targeted therapies. Asadian et al. reported that rhenium-188 perrhenate (188ReO4) could potentially become a new therapeutic agent against HCC. It appears that the therapy regulates the induction of apoptosis and cell cycle arrest and the inhibition of tumor formation. It is certainly an intriguing concept, and further studies are warranted in order to achieve a selective/personalized HCC therapy. The study reported by Makboul et al. focused on … . Liu et al. reported … . In regard to thyroid cancer, the most prevalent endocrine malignancy, Singh et al. published … . Van Acker et al. reviewed … . McKay et al. provided … . Diabetes mellitus (DM) was also covered in this Special Issue. DM is one of the principal manifestations of metabolic syndrome and its prevalence with modern lifestyle is increasing perpetually. Dewanjee et al. published … . Extracellular vesicles (EVs) are secreted from cell membranes within the circulatory system and body fluids.
Current knowledge about the involvement of EVs in numerous diseases is increasing at an ever-accelerating rate. D\u2019Alessandro et al. investigOuyang et al. focused Finally, Iwahashi et al. reported"} {"text": "In the Indeed, the field is already successfully moving in the direction of examining oxytocin function under more naturalistic contexts and more holistically, aided by advancements in technology ,47 and d. 2Oxytocin, and the closely related arginine-vasopressin, are the result of ancient gene duplication in vertebral evolution \u201355. Whilet al. [et al. [Macaca mulatta) display sexual skin. This modulation of stimulus preference indicates that both oxytocin and testosterone influence reproductive behaviours by possibly increasing the visual salience of sexual features. Similarly, Paletta et al. [In this theme issue, Bakermans-Kranenburg et al. asks how [et al. examineda et al. review h. 3Microtus ochrogaster) model of pair-bonding has been demonstrated to be mediated not just by oxytocin receptors in the nucleus accumbens [et al. [In situ hybridization to visualize and quantify oxytocin receptor mRNA found no differences between groups, suggesting that the difference in receptor expression was possibly the result of local dysregulation in oxytocin receptor protein translation or changes in the endocytosis and recycling rates.Interactions between oxytocin and other neuromodulator or neurotransmitter systems remain broadly underappreciated in the literature. However, links between oxytocin and the dopaminergic system are arguably the most well understood . The claccumbens \u201339,41, bccumbens . Explori [et al. in this The nucleus accumbens is a key site of oxytocin\u2013dopamine interactions, as detailed thoughtfully in a review by Borie and colleagues that expExamining the relationship that oxytocin has with the dopamine and serotonin systems in maternal behaviours, Grieb & Lonstein in this in vitro [Beyond the classical neurotransmitters, oxytocin also interacts with other neuromodulatory systems. One notable example is interactions between oxytocin and the opioid receptor system, observed both in vitro \u201373 and iin vitro . Putnam in vitro detail i. 4et al. [The promise of using oxytocin as a therapeutic intervention remains a tantalizing goal for the field of social neuroscience ,75. Howeet al. in this et al. [In an intervention-focused research article, Daughters et al. compare . 5A holistic understanding of oxytocin extends beyond specific neurotransmitters or brain regions but encompasses underappreciated aspects of neurobiology. A prime example of this is reviewed by Gonzalez and Hammock , who exaSimilarly, a review piece by Carter & Kingsbury offers aAt the neurobiological level, ageing also drastically impacts cognitive capacities. However, the link with neuropeptides is understudied. Polk and colleagues examine Oxytocin interactions also encompass broad life events, as expounded by Bales and Rodgers in a rev. 6et al. [The final paper of our theme issue by Leng et al. takes a"} {"text": "Breast cancer (BC) ranks as the first malignant disease and the second leading cause of death by cancer in women . DespiteLiterature reported that the tumor progression, treatment response, and clinical outcome would be affected by host metabolic abnormalities, including diabetes, obesity, and metabolic syndrome . MeanwhiDong et\u00a0al., metabolic syndrome could affect prevalence, treatment response, progression and survival of breast cancer. 
As for the initiation of breast cancer, Yan et al. studied the association between mammary tumorigenesis and the metabolome in a novel mouse model and showed that adipose-derived MCP-1 could contribute to breast tumorigenesis. By performing a nationwide population-based cohort study, Seol et al. found that an elevated GGT level could be a risk factor for breast cancer, especially in the obese post-menopausal group. In terms of progression and metastasis, Wang et al. illustrated that Lactate Dehydrogenase-A (LDHA) mediated a loop between breast cancer stem cell plasticity and tumor-associated macrophage infiltration, which would be a potential target for combating metastasis. Since both intrinsic and extrinsic factors contributed to metabolic reprogramming phenotypes, Wang et al. comprehensively reviewed the metabolic mechanisms underlying BC metastasis. Several authors focused on the therapeutic response and resistance. Lu et al. found that UCHL1, a deubiquitinating enzyme, could lead to chemoresistance by modulating free fatty acid synthesis. Wang et al. found that pyrotinib and adriamycin had synergistic effects on HER2-positive BC. Qiu et al. reviewed the newly published studies on the correlation between hyperglycemia and chemoresistance, as well as the hyperglycemic microenvironment and glucose metabolism. Li and Li further summarized the recent studies about mitochondrial metabolism and therapeutic resistance in breast cancer. Wang et al. reviewed the mechanisms by which salinomycin protected against breast cancer and discussed its future clinical applications. In addition, He et al. studied the lipid changes during endocrine therapy and found that tamoxifen would improve total cholesterol and low-density lipoprotein levels in premenopausal patients, and that aromatase inhibitors had no adverse effects on lipid profiles. Qin et al. studied the relationship between hormone receptor status and prognosis in medullary breast carcinoma and atypical medullary carcinoma of the breast, and Tong et al. analyzed the correlation between the 21-gene recurrence score (RS) and obesity, indicating that RS varied with obesity status. Several articles studied the molecular mechanisms beyond metabolic changes in breast cancer, including non-coding RNAs, RNA modification, and hypoxia. Furthermore, Du et al. and Ni et al. indicated that the long non-coding RNAs MIR210HG and ADAMTS9-AS2 could modulate the metabolic reprogramming and progression of triple-negative breast cancer (TNBC). Sheng et al. further illustrated the latest progress in DNA N6-methyladenine modification and drug resistance in TNBC. Zhang et al. reviewed the role of hypoxia in breast cancer and discussed the relationship between hypoxia and therapeutic response, as well as the clinical value of hypoxia biomarkers. In conclusion, all these publications in the present Research Topic provide new insights into the role of metabolic abnormalities in the disease development, treatment response, and prognosis of breast cancer. We hope that the findings of the articles in this Research Topic will provide novel treatment strategies to improve the survival of BC patients. All authors contributed equally to this Editorial.
All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "The cell division cycle in eukaryotic cells is a series of highly coordinated molecular interactions that ensure that cell growth, duplication of genetic material, and actual cell division are precisely orchestrated to give rise to two viable progeny cells. Moreover, the cell cycle machinery is responsible for incorporating information about external cues or internal processes that the cell must keep track of to ensure a coordinated, timely progression of all related processes. This is most pronounced in multicellular organisms, but also a cardinal feature in model organisms such as baker's yeast. The complex and integrative behavior is difficult to grasp and requires mathematical modeling to fully understand the quantitative interplay of the single components within the entire system. Here, we present a self-oscillating mathematical model of the yeast cell cycle that comprises all major cyclins and their main regulators. Furthermore, it accounts for the regulation of the cell cycle machinery by a series of external stimuli such as mating pheromones and changes in osmotic pressure or nutrient quality. We demonstrate how the external perturbations modify the dynamics of cell cycle components and how the cell cycle resumes after adaptation to or relief from stress. The yeast cell cycle has a tight control system and responds to external perturbations, which are considered here in a quantitative model. The cell cycle machinery coordinates all processes that are required for a cell to duplicate and ensure faithful inheritance of all its critical components. Therefore, it is per definition deeply entangled with nearly all physiological processes that happen within and around a cell. Yet, the cell cycle machinery itself is already a large and complex network of many interacting partners with regulation spanning many levels, including transcriptional and posttranslational control as well as stoichiometric inhibition or activation of protein function (Enserink and Kolodner 1 and G2) in which the cell primarily grows. Various checkpoints monitor transitions between the phases controlling that the processes of the previous phase have been completed and, otherwise, arrest cell cycle until the requirements are met phase, in which DNA is replicated, and mitosis (M), in which the chromosomes are separated between the progeny cells. The S and M phases are interspaced by two gap phases to inhibit its activity Sic1 during Get al. et al. et al. et al. The final B-type cyclins, Clb1/2, are required for mitotic entry and the isotropic switch. Cdc28-Clb1/2 regulates mitotic spindle elongation cascades are employed for this task. Many interactions between components of signaling pathways and the cell cycle machinery have been described, e.g. the cell wall integrity pathway (CWI) Levin ; the higet al. et al. Growth is the most important determinant of cell cycle progression. 
The growth rate of unicellular organisms is determined by nutrient availability and influences cell size, ribosome content and metabolic efficiency , the well-studied High Osmolarity Glycerol (HOG) pathway mainly coordinates the adaptation to increased osmolarity activity as a mark for the G1/S transition with 111 parameters (Tables S1 and S2). It is implemented in the Systems Biology Markup Language (SBML) and is available in the supplementary data (File S1). The cell cycle part of the model is implemented without events. However, we use events to turn pheromone signaling or osmotic stress on and off. The cell cycle duration of our reference condition, i.e. the standard parameter set (Table S2) without any stress, is \u223c122 minutes. Our implementation results in limit cycle oscillations, as shown in\u00a0Fig.et al. et al. et al. S.cerevisiae\u2014Whole organism (integrated)\u2019 data set from the Protein Abundances Across Organisms database (PaxDB) (Wang et al. All parameters were adjusted with respect to cell cycle phase duration (Skotheim In the following, we describe how the model behaves when exposed to alpha factor treatment, high osmotic stress, and how the cell cycle duration is affected by the availability of nutrients.1/S transition. The arrest caused by the sensing of pheromone in the environment is mediated by the CKI Far1 (\u2018Factor ARrest\u2019, (Chang and Herskowitz et al. 1 is a stabilizing phosphorylation of Far1 at Thr306 by Fus3 (Gartner et al. et al. 1 phase (Fig.\u00a0Pheromone treatment leads to arrest before the Gase Fig.\u00a0\u2013D. This et al. Both Far1 as well as its stabilized form are subject to phosphorylation at the S87 residue by Cdc28-Cln1/2, triggering ubiquitination and proteasomal degradation (Henchoz Cells released from alpha-factor are synchronized in their cell cycle, independent of the time point when the cell was treated with the pheromone (compare\u00a0Fig.1 to S phase (Iyer et al. CLN1/2 and CLB5/6 is downregulated (Adroveret al. et al. 1, where Cln1/2 and Clb5/6 are the prevalent drivers of cell cycle progression (Figs\u00a0et al. et al. et al. et al. Budding yeast has developed adaptation programs to certain stresses in order to arrest the cell cycle, react to the stress appropriately and, upon successful adaptation, resume cell cycle progression. Osmotic stress induces activation of the HOG MAP kinase cascade, which ultimately leads to the activation of Hog1. The response on the level of cell cycle regulation is depicted in\u00a0Fig.2 to M transition requires degradation of the Cdc28 kinase inhibitor Swe1. In the unperturbed cell cycle, Swe1 degradation requires its localization to the bud neck. This recruitment is mediated by Hsl7. Hsl7 forms a complex with Hsl1, which, in turn, is attached to septin. Export from the nucleus and tethering to the Hsl1/Hsl7 complex in the budneck primes Swe1 for phosphorylation by Clb2-Cdc28 and Cdc5 (Howell and Lew 2/M transition is implemented in a simplified manner: In the model, Swe1 phosphorylation is only mediated by Clb2. The stress induced Hog1 activity results in sustained Swe1 levels (Fig.\u00a0The Gels Fig.\u00a0\u2013D and stels Fig.\u00a0 and\u00a0D orels Fig.\u00a0 and\u00a0B.Additionally, Hog1 directly inhibits phosphorylation of Swe1 Fig.\u00a0, thus, tS. cerevisiae changes with the nutritional condition the cells live in (Barford and Hall et al. et al. et al. et al. et al. It is well known that the cell cycle duration of et al. et al. et al. et al. 
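The full 111-parameter SBML implementation is not reproduced here. Purely as an illustration of the kind of self-sustained (limit-cycle) ODE dynamics described above, the sketch below integrates a generic three-variable cyclin/kinase/protease oscillator in the spirit of Goldbeter-type minimal mitotic models, and scales the cyclin synthesis rate by a "nutrient quality" factor to mimic the idea that slower protein production lengthens the cycle. The species, equations and parameter values are illustrative stand-ins, not components or parameters of the model presented in this work, and the precise period and oscillatory range depend on the chosen values.

```python
# Illustrative limit-cycle oscillator (a Goldbeter-style minimal mitotic
# cascade), NOT the 111-parameter SBML model described in the text.
# A "nutrient" factor scales cyclin synthesis to mimic slower protein
# production under poorer nutrition, which should lengthen the period
# (and, below some threshold, arrest the oscillations altogether).
import numpy as np
from scipy.integrate import solve_ivp

def oscillator(t, y, nutrient=1.0):
    C, M, X = y                      # cyclin, active kinase, active protease
    vi, vd, kd, Kd, Kc = 0.025, 0.25, 0.01, 0.02, 0.5   # illustrative values
    VM1, V2, VM3, V4 = 3.0, 1.5, 1.0, 0.5
    K1 = K2 = K3 = K4 = 0.005
    V1 = VM1 * C / (Kc + C)
    V3 = VM3 * M
    dC = nutrient * vi - vd * X * C / (Kd + C) - kd * C
    dM = V1 * (1 - M) / (K1 + 1 - M) - V2 * M / (K2 + M)
    dX = V3 * (1 - X) / (K3 + 1 - X) - V4 * X / (K4 + X)
    return [dC, dM, dX]

for nutrient in (1.0, 0.8):          # "rich" vs. "poorer" medium (illustrative)
    sol = solve_ivp(oscillator, (0, 300), [0.01, 0.01, 0.01],
                    args=(nutrient,), max_step=0.5)
    peaks = int((np.diff(np.sign(np.diff(sol.y[0]))) < 0).sum())
    print(f"nutrient factor {nutrient}: ~{peaks} cyclin peaks in 300 min")
```

Stress inputs such as pheromone or osmotic shock could be mimicked in the same spirit by transiently changing a degradation or synthesis rate at a given time, analogous to the SBML events used in the actual model to switch signaling on and off.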
We present a model of the yeast cell division cycle that incorporates dynamics of the major cyclins, cyclin dependent kinase inhibitors, transcription factors, and other key players. The model structure is largely based on well-studied concepts of the cell cycle network (Barberis We used the model to specifically analyze the response to pheromone treatment. In accordance with experimental findings, the interaction of the pheromone pathway with the cell cycle was implemented via the modification of Far1 activity Fig.\u00a0. This imet al. et al. et al. et al. et al. et al. et al. et al. et al. et al. et al. et al. 1, cells arrest prior to Start, when the stress occurs later, however, cells pass into S phase and continue the cell cycle until reaching the next checkpoint. Interestingly, the model predicts that osmotic stress applied in M phase can lead to early mitotic exit (Radmaneshfar et al. et al. The osmotic shock response is the prime example used to study how single cells and cell populations cope with stress and recover from changes in their environment (Hohmann et al. et al. et al. et al. pk of protein production, should be linked to or completely substituted by the output of the metabolic network extension. This would, however, not cover direct effects from signaling pathways that communicate nutritional information such as PKA. Such signaling must be integrated in appropriate fashion.Growth is a fundamental property of life, which critically depends on the available nutrients. Therefore, we also analyzed the impact of change in the nutritional conditions on cell cycle progression. While the interfaces between the cell cycle and signaling pathways described above are well defined, the implementation of the cell cycle response to nutrient changes was more challenging due to the complexity of interaction. In the end, we settled for the simplest, most straightforward implementation we could think of. We incorporated nutrient quality as a global parameter that modifies the rate of all protein production reactions equally. The model dynamics scaled appropriately with nutrient quality, i.e. poorer nutritional conditions caused slower accumulation of regulatory proteins leading to slower proliferation, while richer nutrition enhanced cell cycle progression Fig.\u00a0. This beThe presented model also has a number of other properties that will make it useful above the case scenarios for which we have analyzed it here. First, the model comprises a system of only ODEs without any additional algebraic, stochastic or Boolean-like equations, thus remains quite manageable and comprehensible. It is formulated in SBML and it complies with current modeling standards. This is an important aspect to mention for it ensures model reusability. Our model can easily be integrated with any SBML-compliant ODE integrator offering ease of use without risking critical behavior. The model represents realistic orders of magnitude for protein amounts instead of frequently used arbitrary values, thus it can be compared to experimental data and can be integrated with other models employing realistic molecule numbers or concentrations.et al. et al. et al. Driving the concept of integrating different cellular networks forward, cellular dynamics upon cell cycle progression, development, external stimulation, feeding or other causes of change are extremely complex since, loosely spoken, everything is connected to everything. 
Given a eukaryotic organism such as yeast with about 6000 genes (Goffeau foac026_Supplemental_FilesClick here for additional data file."} {"text": "Ross et al., Chem. Sci., 2022, https://doi.org/10.1039/d2sc02402k.Correction for \u2018Jahn-Teller distortion and dissociation of CCl The correct email address is The Royal Society of Chemistry apologises for these errors and any consequent inconvenience to authors and readers."} {"text": "We thank Campbell et al. for their comment on our mJournal of Human Hypertension [We strongly agree with Campbell et al. that redrtension . Howeverrtension . Still, rtension . The metrtension cited byrtension reportedrtension . As Camrtension , there irtension ,9,10). Irtension , that arAlthough controversial or \u201cunexpected\u201d data regarding non-classic findings of diet sodium restriction, such as the absence of a relevant curvilinear association between salt consumption and reduction in CVD-associated mortality in some papers , cannot In their recent paper, Cappuccio et al. raise awThe paper of Hogas et al. totally Campbell et al. further state in their comment that theFinally, reducing dietary sodium\u2014even slightly\u2014is beneficial beyond doubt in the general population, while some reverse causality reports are still to be investigated and confirmed in specific populations. We agree with Campbell et al. that a population-wide approach would be beneficial, with salt-reduction strategies resulting in less people suffering from arterial hypertension and its consequences . Reducin"} {"text": "They cause hyperkeratinization of the pilosebaceous follicles and seborrhea. Endocrine diseases characterized by increased levels of androgens often present with acne vulgaris. A correlation between serum androgen levels and acne severity exists, and the assessment of serum androgen levels is therefore essential in women with severe acne vulgaris and treatment resistant acne.the study was conducted in the Dermatology Clinic of the University of Nigeria Teaching Hospital, Ituku Ozalla. Seventy females with acne vulgaris and seventy females without acne vulgaris were recruited as subjects and controls respectively. Blood samples were taken from subjects and controls to measure levels of serum testosterone, dehydroepiandrosterone sulfate (DHEAS) and androstenedione. Acne severity was measured using global acne grading system (GAGS).the median levels of DHEAS and androstenedione (1.20\u00b5g/ml and 1.80ng/ml respectively) were higher in subjects than 1.00\u00b5g/ml and 1.70ng/ml in controls respectively, although these findings were not statistically significant. There was also no significant difference between the levels of serum testosterone in both the subjects and the controls. No correlation existed between levels of serum androgens and acne severity.there was no statistically significant difference in the serum androgen levels between the subjects and the control population, and no relationship between androgen levels and severity of acne vulgaris was demonstrated. Propionibacterium acnes (P. acnes) [Propionibacterium acnes in individuals who have a genetic predisposition to acne [Acne vulgaris (AV) is an inflammatory disorder of the pilosebaceous glands of the skin that is characterized by comedones, papules, pustules, nodules and cysts . It is a. acnes) . Serum a. acnes) . 
They ca to acne .Endocrine diseases such as acromegaly, cushing\u00b4s syndrome and adrenal androgen secreting tumors in which increased levels of androgens occur, are characterized by acne vulgaris. Polycystic ovarian syndrome (PCOS) and congenital adrenal hyperplasia (CAH) also present with acne vulgaris . Acne vuObjective: this study was conducted to investigate the serum androgen levels in females with acne vulgaris and correlate it with its severity.Study design: this study was a prospective, cross-sectional study conducted at the Dermatology Clinic of the University of Nigeria Teaching Hospital (UNTH) in Southeastern Nigeria. It was done between April and October 2016.Sample size: the sample size was calculated using the Kish and Leslie formula [ formula .Where n= minimum sample sizeZ= constant at 95% confidence interval from Z table, p = prevalence of acne vulgaris in Enugu (4.3%) , q = 1 -s= sample size to be selected, n = original calculated sample size, a = anticipated response = 90%. Calculation:Where n=70 (to the nearest whole number). For the purpose of this study, 140 patients (70 cases and 70 controls) were recruited using a consecutive sampling method. Inclusion criteria for the cases were consenting patients with clinically diagnosed active acne vulgaris .Patients with any of the following criteria were excluded: A) Pregnant women and lactating mothers ,17; B) pEthical consideration: ethical clearance was sought and obtained from the Ethics Review Board of UNTH, Ituku Ozalla (NHREC/05/01/2008B-FWA 002458-IRB 00002323). Informed written consent was obtained from all participants recruited into the study or their parents/guardian if the patient was between 16 and 18 years of age before enrollment. For the unlettered participants, the informed consent form was read to them in the local language and their thumb prints were obtained in the presence of a witness.Statistical analysis: data analysis was carried out using the Statistical Package for Social Sciences (SPSS\u2122) version 21.0 . The socio-demographic and clinical characteristics of the participants were presented as frequency distribution tables. Mean and standard deviation were computed for normally distributed continuous variable while median and interquartile ranges were computed for skewed continuous variables. Categorical variables were presented as frequencies and percentages. Continuous variables were compared using student\u00b4s t-test for normally distributed data while Mann-Whitney U test and Kruskal-Wallis test were performed for skewed data. Association between categorical variables was evaluated using Chi-square and Fisher\u00b4s exact test, while the correlation between continuous variables was done using Spearman correlation analysis. A p-value of \u22640.05 was deemed statistically significant.2and 24.55 kg/m2, and there was no statistically significant difference between the two groups (p = 0.844). On grading the severity of the acne vulgaris with the global acne grading scale, we observed that forty-six subjects (65.7%) had moderate acne while thirteen subjects (18.6%) had mild acne. Eleven subjects (15.7%) had severe acne. We looked at the symptoms of hyperandrogenemia, the commonest symptoms of hyperandrogenemia were hirsutism and alopecia, seen in twenty-five subjects and fourteen subjects (35.7% and 20%) respectively. 
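The Kish and Leslie formula itself did not survive extraction in the Methods above. A standard reconstruction, using the stated prevalence p = 4.3% (so q = 0.957), Z = 1.96 for the 95% confidence level, and an assumed margin of error d = 0.05 (d is not stated explicitly and is inferred from the reported result), reproduces the minimum sample size of 70 per group once the anticipated 90% response rate is applied:

```latex
% Kish--Leslie sample-size reconstruction; d = 0.05 is an assumption
n \;=\; \frac{Z^{2}\,p\,q}{d^{2}}
  \;=\; \frac{(1.96)^{2}\,(0.043)(0.957)}{(0.05)^{2}} \;\approx\; 63,
\qquad
n_{\mathrm{adj}} \;=\; \frac{n}{a} \;=\; \frac{63}{0.90} \;\approx\; 70 .
```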
The two groups of participants had similar age distributions, the mean ages being 27 \u00b1 7.28 years and 30.4 \u00b1 9.59 years for the females with acne vulgaris and the controls respectively. There was a statistical significance between the educational levels of the subjects and controls. Also, the mean BMI of the subjects and controls was 24.71 kg/met al. [et al. [This present study aimed to determine the serum androgen levels of females with acne vulgaris and to correlate it with the severity of acne vulgaris. The age range of the study participants was 16 to 50 years. The subjects aged 20-24 years were about a third and this was followed closely by those aged 30-34. This is similar to findings in previous studies ,17,26 anet al. where itet al. . Althoug [et al. in Southet al. [et al. [et al. [et al. [et al. [et al. [This may be explained by the fact their study population were recently admitted undergraduates. Other hospital-based studies in Africa reported their mean ages: 23 years in Togo and 25 y [et al. and Khon [et al. . This ag [et al. reported [et al. found th [et al. reportedet al. [et al. [et al. [et al. [Hirsutism, which usually causes cosmetic concerns and embarrassment for affected females, was the commonest symptom of excessive serum androgens in this study. This is consistent with the findings by Karrer-Voegeli et al. . A possi [et al. alopecia [et al. ,37. Some [et al. ,35,38. I [et al. . Further [et al. found no [et al. . Howeveret al. [et al. [et al. [et al. [et al. [This study did not demonstrate an overall correlation between serum androgen levels and acne severity. Some researchers ,35,42 reet al. , in thei [et al. , titled [et al. . The fac [et al. found th [et al. who also [et al. , concludLimitations: this was a clinic-based study. A population study may have more robust outcome and the findings more generalizable. The assay of other serum androgenic parameters like free testosterone, dihydrotestosterone and sex hormone binding globulin (SHBG) might have given a wider androgenic profile.Recommendations: this study has raised questions that would need further research; larger scale studies will further elucidate the role of androgens among women with acne who have normal androgenic values. Further work comparing the degree of response of anti-androgen treatment between women with abnormal androgenic and normal androgenic parameters may give more and better insight the role of androgens among women with acne. Population based research is desirable as it may give a better representation of the serum androgen levels and the effect of acne on the quality of life of women.Although this study did not show a difference between serum androgen levels in subjects and controls, it does not undermine the importance of hormonal assay, especially in patients with treatment resistant acne, persistent acne and with other clinical signs of hyperandrogenemia. There was no correlation between the androgen levels with the degree of severity of acne in our subjects. However, the small sample size may make it difficult to generalize this finding. 
Large-scale studies in this area may shed more light on the role of androgens in the severity of acne vulgaris.Serum androgens play an important role in the pathogenesis of acne vulgaris;It is essential to assess serum androgen levels in women with severe acne, persistent acne or PCOS.The difference in serum androgen levels were not statistically significant between women with acne vulgaris and women without acne;Acne vulgaris may be dependent on the degree of sensitivity of the sebaceous glands to serum androgens and not the levels of serum androgens."} {"text": "Cancer is a major public health problem worldwide. Studies on oncogenes and tumor-targeted therapies have become an important part of cancer treatment development. In this review, we summarize and systematically introduce the gene enhancer of rudimentary homolog (ERH), which encodes a highly conserved small molecule protein. ERH mainly exists as a protein partner in human cells. It is involved in pyrimidine metabolism and protein complexes, acts as a transcriptional repressor, and participates in cell cycle regulation. Moreover, it is involved in DNA damage repair, mRNA splicing, the process of microRNA hairpins as well as erythroid differentiation. There are many related studies on the role of ERH in cancer cells; however, there are none on tumor-targeted therapeutic drugs or related therapies based on the expression of ERH. This study will provide possible directions for oncologists to further their research studies in this field. Cancer is a major public health problem worldwide and is the second leading cause of death in the United States: 1,918,030 new cancer cases and 609,360 cancer deaths are projected to occur in the United States in 2022 , 5. In tet al. first started to suspect and discover the relationship between ERH and malignant tumors , nematodes (Caenorhabditis elegans), and insects (Aedes aegypti). Lower vertebrates (Zebrafish), mammals (Mus musculus), and humans (Homo sapiens) also have a high degree of sequence conservation. ERH is not found in the fungi, except for the fission yeasts Schizosaccharomyces, S. pombe, S. octosporus, S. cryophilus, and S. japonicus Assay, Pogge et\u00a0al. demonstrn factor . But theAnalyzed by Co-Immunoprecipitation (Co-IP) and mass spectroscopy (MS), Kwak et\u00a0al. found that ERH is associated with SPT5 elongation factor Figure\u00a02Analyzed by YTH screening, Smyk et\u00a0al. demonstret al. was shown to interact with ERH using YTH screening. They demonstrated by fluorescence co-localization assay that when Ciz1 and ERH are co-expressed in HeLa cells, Ciz1 could recruit ERH to the region of DNA replication. They indicated that ERH can block the action of Ciz1, and then reduce the expression of ERH inducted by DNA damage, which facilitates CDK-cyclinE-p21Cip1/Waf1 complex formation and enables the repair of DNA damage.It is inferred that phosphorylation of CKII sites (Thr18 and Ser24) would disrupt the dimerization of ERH and then disrupt its interaction with other proteins . In 2008et al. found anet al. Figure\u00a02Analyzed by co-IP and MS in 2011, ERH was shown to interact with HOTS, a tumor growth inhibitor encoded by H19 antisense transcript Figure\u00a02ERH was shown using stable isotope labeling by amino acids in cell culture MS that it can interact with Sm protein SNRPD-3 Figure\u00a02et al. family member, to have an effect on serine-phosphorylated STAT3, regulating canonical tyrosine phosphorylation and enhancing transcriptional activity in gastric cancers . ERH canet al. 
… cells. Ishikawa et al. discovered … . Funding: … ; Jiangsu Maternal and Child Health Association Project (FYX202026); Jiangsu Province key research and development program; Xuzhou Medical University Excellent Talent Fund Project; Jiangsu Province medical innovation team (CXTDA2017048). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "Lentini et al. present the potential of in situ brain regeneration to address neurodegeneration in the epileptic brain. Drug-refractory forms of neurological diseases could find their next breakthrough therapy in non-pharmacological approaches to brain repair. When millions of patients suffering from drug-refractory forms of neurological diseases face a therapeutic void, can we push the limits of current medicine to heal the brain from within? An elegant study by Lentini et al. provides … . Mesial temporal lobe epilepsy with hippocampal sclerosis (MTLE-HS) is a dreadful epileptic syndrome which usually manifests several years following an early childhood triggering event such as brain injury or febrile seizure. Lentini et al. first … . The study by Lentini et al. provides … and relies on …
SupplemSMN1, Zolgensma\u00ae, Novartis) and biallelic RPE65 mutation-associated retinal dystrophy have been at the forefront of medical advances and their recent approval by regulatory institutions paves the way for the use of gene delivery-based regenerative methodologies in other settings. The targeting of reactive glia for lineage conversion is an attractive strategy because of their potentially harmful role in neurodegeneration. Their elimination by conversion into functional neurons could therefore play a dual role of removing detrimental cells while effectively correcting the loss of neurons. On a larger scale, by combining two promising technologies, in situ cellular regeneration opens new spheres of possibilities for other untreatable disorders such as Parkinson\u2019s, Huntington\u2019s or retinal diseases, which could very well benefit their next breakthrough therapy in lineage conversion strategies [Gene therapies to halt the manifestation and/or progression of devastating diseases such as spinal muscular atrophy (AAV9-rategies \u20136."} {"text": "It is aetiology .Further research into the identification of candidate causal genes will create opportunities to study the molecular mechanisms underlying the disease phenotype and will enable clinicians to make timely and accurate clinical diagnoses . This wide novo mutations have a prevalence rate of 1 in 213 to 1 in 448 live-births and adaptive function among children aged 5\u00a0years or younger. The global prevalence of DD is 1\u20133% among children aged 5\u00a0years or younger . The pree-births .Advances in variant detection technology have evolved in recent years, leading to accelerated causal gene discovery and understanding of genomic lesions in ID/DD cohorts. After preliminary confirmation of the disease through clinical, laboratory and radiological examination, clinicians can order further diagnostic testing, which is influenced by initial findings and availability and access to resources. The range of potential lines of inquiry include but are not limited to: karyotyping to identify gross chromosomal abnormalities; chromosome microarray (CMA) to identify deletions, duplications, loss of heterozygosity, and aneuploidy; and genomic sequencing to identify disease-causing variants .Cerminara et al.; Chen et al.; Deng et al.; Fan et al.; Garc\u00eda-Ortiz et al.; Li et al.; Servetti et al.; Smetana et al.; Tao et al.; Wan et al.; Wang et al.; Xiang et al.; Zarate et al.; Zhang et al.; Ali Alghamdi et al.; Binquet et al.; Ha et al.; Li et al.; Rong et al.; Yue et al.; Zhao et al.).This Research Topic features new causal genes for DD and ID, advances in the mechanistic understanding of previously reported DD and ID genes, and potential therapeutic applications. We received 40 submissions and accepted 21 papers after rigorous peer review . The authors delineate the genotype-phenotype correlation for 12q interstitial deletions and discuss likely causative genes. Wang et al. reported familial translocation t . They describe a range of complex phenotypes associated with these chromosomal abnormalities. This paper highlights the value of studying extended families for intrafamilial variation in genotypes and associated phenotypes. Smetana et al. reported a case of Xq22.3 deletion associated with Alport syndrome with intellectual disability with genotype phenotype correlation . The deletion is larger in size compared with previously reported cases and includes two additional genes. 
This may explain a broader phenotype with additional features in the proband. Fan et al. reported a case with deletion of chromosome 7q35-7q36.3, which causes congenital brain dysplasia, DD and ID . Servetti et al. used an integrated framework to analyse 12 cases of neurodevelopmental disorders with complex phenotypes and suggested that those cases can be explained by multiple mechanisms, including additive effects of multiple CNVs, involving known neurodevelopmental disorder genes and novel candidate genes . One study leveraged a consanguineous family with four affected individuals to identify a likely pathogenic homozygous variant in OSGEP . Detailed phenotyping and proteomic analysis are a valuable part of this study .Four publications are based on chromosomal aberrations associated with DD and ID. Deng et al. reported two new cases of chromosome 12q14 deletions and reviewed the published literature . Xiang et al. used whole exome sequencing to investigate 17 patients with unexplained DD and/or ID . They used the whole exome data for analysis of CNV, SNV and indels. Seven affected individuals carried a single nucleotide variant or an indel that explained the disease, and three cases carried a disease-associated CNV. Zarate et al. reported two cases of SATB2-Associated Syndrome . The clinical symptoms overlapped with mitochondrial disease presentation. The authors recommended considering exome sequencing in suspected cases of mitochondrial disease.Three studies are based on the evaluation of multiple cases. Wan et al. reported six new variants in seven cases and provided phenotype descriptions for MEF2C haploinsufficiency syndrome (MCHS) . This is an indication of new avenues in the field that are moving toward reduced costs of sequencing. Garcia-Ortiz et al. examined methylation in blood samples of patients with autism spectrum disorder (ASD) . In a case report, Cerminara et al. reported an affected individual with a complex phenotype including ASD and its association with a maternally inherited X-linked missense variant in HUWE1 and a de novo stop variant in TPH2 .Binquet et al. reported a study that compares the diagnostic yield of trio sequencing in 1,275 cases with the current strategy for fragile X diagnostics, which involves microarray and panel sequencing for 44 genes . The predominant phenotype was a focal epilepsy. This gene has been implicated previously in autosomal recessive focal seizures. Tao et al. reported a case of complete maternal uniparental disomy of chromosome 2 with a rare UNC80 c.5609-4G>A intronic variant in a Chinese patient with infantile hypotonia, psychomotor retardation and facial dysmorphology . Disomy resulted in a homozygous mutation. These two reports reinforce the paradigm of parental disomy causing homozygous disease in outbred populations . Two additional papers reported compound heterozygous mutations causing recessive disorders in outbred populations .In a study by Chen et al., maternal uniparental disomy resulted in a homozygous variant in Li et al.). They used different techniques including FISH and low pass whole genome sequencing to identify the genetic changes. Zhang et al. identified a pathogenic variant in GRIN1 in a single case and performed further experiments to study the pharmacological impact of the DNA change and rescue mechanisms . Zhao et al. described a nonsense variant in ZNF462 associated with Weiss-Kruszka syndrome-like manifestations . Li et al. 
identified a novel homozygous splice-site variant in a female child with Cohen syndrome . Rong et al. reported a milder phenotype produced by a novel mutation in SEPSECS .Li et al. described a case of mosaic Turner syndrome 45, X [56.5%]/46, X, del(Y) (q12) [43.5%] (In conclusion, this special issue reports on both CNVs and SNVs associated with DD and ID. In the future, more refined cellular and molecular studies are required to understand the disease mechanisms . This co"} {"text": "White matter lesions (WMLs), often due to cerebral small vessel ischemia and several inflammatory demyelinated diseases such as multiple sclerosis (MS) and neuromyelitis optica spectrum disorder (NMOSD), are characterized as focal demyelination with a certain degree of inflammation. Brain-resident microglia and macrophages are highly dynamic cells that rapidly respond to cues in the injury sites and provide a neuroprotective or detrimental microenvironment for myelin maintenance by changing their activation state. Microglia/Macrophage phenotypic heterogeneity and their diverse responses are likely related to the differences in demyelinated pathology in WMLs. The peripheral immune cells, including attracted monocytes and T lymphocytes to the CNS, can also modulate microglia responses. The goal of this Research Topic is to gather contributions to advance research on immune mechanisms in white matter lesions as well as to explore potential interventions for ischemic white matter lesions or inflammatory demyelinated diseases.Guerrero and Sicotte). Based on the recent understanding of microglial function, microglia-targeting therapeutic may represent a potential treatment for WMLs .Microglia are the major immune cells of the central nervous system (CNS). The conventional view holds that the activation of microglia is deleterious in the pathological process, but accumulating evidence indicates that microglial activation may also have neuroprotective effects. The dual role of microglia suggests a diversity of potential functions of microglia . SpecifiIschemic white matter lesions, the main pathological feature of cerebral small vessel disease (CSVD), is a disorder of cerebral microvessels that causes white matter injury, accompanied by inflammatory activation of microglia . Qin et\u00a0Wang et\u00a0al. assessed possible factors associated with WMLs heterogeneity in patients with cognitive impairment in CVSD. Wang et\u00a0al. graded WMLs, enlarged perivascular spaces (ePVS), microbleeds, and lacunes on brain MRI. Cognitive impairment was assessed with Montreal Cognitive Assessment (MoCA) scores. Wang et\u00a0al. defined mismatch as the severity of WMLs do not match the severity of cognitive impairment. Further, they used penetrating artery imaging to clarify this mismatch\u2019s underlying mechanism. Eventually, Wang et\u00a0al. suggested that conventional imaging features and penetrating artery damage may be responsible for the heterogeneity of WMLs in cognitively impaired patients with CSVD, which may be therapeutic targets for early identification and prevention of cognitive impairment Wang et\u00a0al.In this topic, Seals et\u00a0al. 
reviewed B cell biology and the role of B cells in autoimmune inflammatory demyelinating diseases and provided a novel review about the possible regulation of microglial activation by IgE in MS/EAE Seals et\u00a0al.MS is the most common chronic inflammatory demyelinating disease of the CNS, which lead to immune-mediated inflammation, demyelination, and subsequent axonal damage in white matter . TraditiZhang et\u00a0al. delineated the repulsion guide molecule-a (RGMa) is involved in the pathogenesis of MS/EAE by affecting BBB permeability. Next, Zhang et\u00a0al. demonstrated that RGMa causes dysfunction of endothelial cells through BMP2/BMPR II/YAP, leading to disruption of BBB integrity in MS. This provides new insights for MS clinical treatment focusing on maintaining the BBB Zhang et\u00a0al.In the early stages of MS, inflammatory cells infiltrate into the CNS through a compromised blood-brain barrier (BBB). Chen et al. Meanwhile, CD4+CD25+forkhead box P3+ (Foxp3) regulatory T cells (Tregs) play a central role in the immune regulation of NMOSD. Ma et\u00a0al. discovered that Tregs attenuated immune cell infiltration in NMOSD mice, and polarized macrophages/microglia to an anti-inflammatory phenotype, thus mitigating white matter inflammation and involves the optic nerve and spinal cord. Previous studies have proved that microglial and macrophage activation are required for NMOSD pathogenesis ammation .Wang et\u00a0al. detected lymphocyte subsets in whole blood by flow cytometry and explored the changes in circulating lymphocyte subsets before and after immunotherapy for NMOSD and their correlation with clinical outcomes. Tacrolimus (TAC) inhibits the immune inflammatory response by interfering with the differentiation and proliferation of T cells and was used for the maintenance treatment of NMOSD in some studies. Wang et\u00a0al. found that the proportions of some lymphocyte subsets changed obviously before and after TAC treatment. EDSS score may be associated with certain lymphocyte subsets after TAC therapy Wang et\u00a0al..Besides, numerous B cell subsets also play significant roles in the pathogenesis of NMOSD. Wang et\u00a0al. developed an outcome prediction model and validated it in a multicenter validation cohort. Wang et\u00a0al. reported a variety of factors, including demographics, clinical and therapeutic predictors of relapse, etc. to identify factors that predict relapse in AQP4-ab-positive NMOSD patients. These results suggest that early identification of patients at risk of adverse outcomes has significant implications for clinical treatment decisions Wang et\u00a0al.In addition, predictors of disease relapses in AQP4-ab-positive NMOSD patients are critical for individualized therapy. Immune Mechanism in White Matter Lesions: Clinical and Pathophysiological Implications\u201d has collected a variety of valuable research and contributions on WMLs. The articles on this topic highlight the molecular mechanism involved in the pathology of demyelination and the innovative therapeutic approaches for white matter lesions, both of these are of clinical and pathophysiological importance.In summary, the Research Topic of \u201cX-WP drafted the manuscript, CM, WQ, L-JWand D-ST collaborated the topics. All authors contributed to the article and approved the submitted version"} {"text": "Histories of large-scale horizontal and vertical lithosphere motion hold important information on mantle convection. 
Here, we compare continent-scale hiatus maps as a proxy for mantle flow induced dynamic topography and plate motion variations in the Atlantic and Indo-Australian realms since the Upper Jurassic, finding they frequently correlate, except when plate boundary forces may play a significant role. This correlation agrees with descriptions of asthenosphere flow beneath tectonic plates in terms of Poiseuille/Couette flow, as it explicitly relates plate motion changes, induced by evolving basal shear forces, to non-isostatic vertical motion of the lithosphere. Our analysis reveals a timescale, on the order of a geological series, between the occurrence of continent-scale hiatus and plate motion changes. This is consistent with the presence of a weak upper mantle. It also shows a spatial scale for interregional hiatus, on the order of 2000\u20133000\u2009km in diameter, which can be linked by fluid dynamic analysis to active upper mantle flow regions. Our results suggest future studies should pursue large-scale horizontal and vertical lithosphere motion in combination, to track the expressions of past mantle flow. Such studies would provide powerful constraints for adjoint-based geodynamic inverse models of past mantle convection. Figure et al. . Couette [et al. tied to [et al. from pre [et al. for timen et al. . This flnd (e.g. ). We cho\u2009m (e.g. ). \u0394x is c models ,138. The [et al. . It varic models \u2013143. The2\u2009N\u2009m\u22121 ,150137,1Base of Lower Cretaceous, figure 3a, for Africa and South America. This result is a consequence of the large Couette flow inferred at that time for the upper mantle beneath Africa and South America from the assumed plate motion model [et al. [et al. [We close with some implications, starting with the Poiseuille flow dominated area on model . The laton model from preon model for timen et al. . O\u2019Neil [et al. drew atton (e.g. ), which on e.g. ,154). Bu. BuAf coon e.g. ,143) see seeAf coet al. [Next, we recall that Hayek et al. ,46 interet al. for a reet al. ,157), thet al. ). Our re\u2009cm\u2009yr\u22121 as repreFinally, we turn to plate boundary forces. Several studies emphasized their role in the Indian Ocean realm \u2013118 owin. 6We have used continent-scale hiatus maps as a proxy for mantle flow induced dynamic topography and compared them with plate motion variations in the Atlantic and Indo-Australian realms since the Jurassic, building upon earlier work and exploiting growing observational constraints on both. We find that oceanic spreading rate changes and hiatus surfaces frequently correlate, except when plate boundary forces may play a significant role. Our work is geodynamically motivated from the description of asthenosphere flow beneath tectonic plates in terms of Poiseuille/Couette flow. This description explicitly relates plate motion changes, induced by evolving basal shear forces (Poiseuille flow), to non-isostatic vertical motion of the lithosphere. Our analysis reveals a timescale on the order of a geologic series between the occurrence of continent-scale hiatus and plate motion changes. It is best interpreted through dynamic topography response functions of dynamic Earth models, because a weak asthenosphere delays significant surface deflections into the final phase of material upwellings, when buoyant flow enters from the lower into the upper mantle. 
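As a minimal sketch of the channel-flow decomposition invoked above, asthenosphere flow beneath a plate can be written as the sum of a plate-driven (Couette) component and a pressure-driven (Poiseuille) component; the associated basal shear traction is what couples the vertical expression of mantle flow (dynamic topography and hiatus) to changes in horizontal plate motion. The layer thickness, viscosity, plate speed and dynamic pressure gradient used below are illustrative round numbers, not the values adopted in this study, and the signs depend on the chosen convention.

```python
# Couette + Poiseuille flow in an asthenospheric channel of thickness h:
#   u(z) = U * z/h + (G / (2*eta)) * z * (h - z),   G = -dp/dx
# Basal shear traction on the plate (z = h):  tau ~ eta*U/h + (h/2)*G.
# Mean pressure-driven velocity:               u_mean = G * h**2 / (12*eta).
# All numbers are illustrative assumptions, not the study's parameter values.
YEAR = 3.15e7                     # seconds per year

eta = 5.0e19                      # asthenosphere viscosity [Pa s] (assumed)
h = 150e3                         # channel thickness [m] (assumed)
U = 0.05 / YEAR                   # plate speed: 5 cm/yr in m/s (assumed)
G = 5.0                           # dynamic pressure gradient [Pa/m],
                                  # i.e. ~5 MPa over 1000 km (assumed)

tau_couette = eta * U / h             # plate-driven part of the basal shear
tau_poiseuille = 0.5 * h * G          # pressure-driven (Poiseuille) part
u_poiseuille = G * h**2 / (12 * eta)  # mean pressure-driven channel velocity

print(f"Couette traction     ~ {tau_couette:.2e} Pa")
print(f"Poiseuille traction  ~ {tau_poiseuille:.2e} Pa")
print(f"mean Poiseuille flow ~ {u_poiseuille * YEAR * 100:.2f} cm/yr")
```

With these round numbers the two traction contributions are of comparable magnitude (a few times 10^5 Pa) and the pressure-driven component corresponds to flow speeds of order a fraction of a cm/yr, which illustrates, without reproducing the study's analysis, how an evolving Poiseuille component can plausibly modulate plate motion.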
Our analysis suggests that the spatial scale of interregional hiatus, which is on the order of 2000\u20133000\u2009km in diameter, should be interpreted through Poiseuille flow, where it corresponds to regions of active plume-driven upper mantle flow. We use fluid dynamic arguments to show that such active upper mantle flow can induce plate motion changes of"} {"text": "With the increased use of extended-criteria donors, machine perfusion became a beneficial alternative to cold storage in preservation strategy for donor livers with the intent to expand donor pool. Both normothermic and hypothermic approach achieved good results in terms of mid- and long-term outcome in liver transplantation. Many markers and molecules have been proposed for the assessment of liver, but no definitive criteria for graft viability have been validated in large clinical trials and key parameters during perfusion still require optimization.In this review, we address the current literature of viability criteria during normothermic and hypothermic machine perfusion and discuss about future steps and evolution of these technologies. The ongoing discrepancy between organ demand and supply has moved the spotlight toward implementing rescue strategies for extending the pool to deceased donors that were previously considered marginal or even unsuitable for transplantation ,2,3. In In this review, we evaluate the current literature on MP with the aim to provide a concise overview on the adopted viability criteria during NMP and HMP and discuss about the future steps and the evolution of these technologies.A systematic of the published literature, with the goal to investigate viability criteria in machine perfusion for LT was carried out on the 1 August 2022. Inclusion criteria for this review were as follows: A search of the MEDLINE, Scopus and Cochrane Database was conducted using the following terms: for NMP section: (\u201cnormothermic\u201d AND/OR \u201cmachine perfusion\u201d) AND liver transplantation\u201d AND ((\u201c2005/01/01\u201d[Date\u2014Publication]: \u201c2022/08/01\u201d[Date\u2014Publication])), for HMP section: ((hypothermic) AND (machine perfusion)) AND ((\u201c2005/01/01\u201d[Date\u2014Publication]: \u201c2022/08/01\u201d[Date\u2014Publication])). The references of each of the selected articles were also evaluated in order to locate additional studies that were not included in the initial search. Only clinical studies on human graft were considered.The search streategy was performed by the Preferred Reporting Items for Systemic Reviews and Meta-Analysis (PRISMA) guidelines Figure . Relevant articles were extracted independently by four authors who evaluated and excluded duplicates. No specific search dates were used. Consensus for the relevance of an included study were carried out by four senior authors . Given the heterogeneity of the selected studies and paucity of patients identified within the selection criteria, the results are reported as a narrative review.NMP provides a near-physiological environment to the liver, has the potential to evaluate high-risk grafts viability, allow organ therapeutics and improve transplant logistic by prolonging the preservation time up to 24 h .\u00ae device for a median of 9.3 h (3.5\u201318.5h). The study demonstrated that NMP was safe and feasible reporting 100% graft survival at 1 and 6 months in the NMP groups versus 97.5% at the same time points in the static cold storage (SCS) group, respectively. Selzner et al. 
[TM as an alternative to a red blood cell-based perfusate. They reported a 100% 3-months graft survival in the study group and demonstrated the safety of the Steen SolutionTM for ex-situ NMP. Bral et al. [\u00ae device was initiated at the donor center. Nasralla et al. [p = 0.01), evidence of IRI at histology , incidence of IC at 6 and 12 months after LT , larger use of DCD livers . Reiling et al. [The first phase-1 non-randomized prospective clinical trial evaluating NMP in LT was performed in 2016 by Ravikumar et al. in which3 h 3.5\u20131.5h. The a et al. analyseda et al. investiga et al. performea et al. showed bg et al. describeg et al. presente.3 h 3.5\u2013.5h. The g et al. tested tMany metabolic and dynamic parameters are used during ex-situ NMP to evaluate graft quality and viability, but the predictive value of perfusate biomarkers on post-LT outcomes remains to be established. The studies that focused on the liver viability parameters during NMP are summarized in p = 0.02), IC , biliary complications , and 1-year graft survival . Re-transplantation rate was higher in SCS group (18% vs. 0%). Similarly, Guarrera et al. [HMP is gaining increasing widespread acceptance in the preservation of marginal liver grafts ,22,23,24a et al. perfuseda et al. compareda et al. . In anota et al. . Moreovea et al. .NMP is an ex-situ technology that maintains the liver at 37 \u00b0C in a physiological state through the delivery of oxygen and nutrition. Throughout perfusion, hepatic artery (HA) pressure is set to 70 mm Hg and portal pressure to 6\u20138 mm Hg. The flow rates targets are >150 mL/minute in the HA and 600\u20131200 mL/minute in the portal vein (PV). During perfusion, serial arterial perfusate and bile samples are collected and biopsies for serial histological analysis can be obtained. Many metabolic and dynamic parameters are used during ex-situ NMP to evaluate graft quality and viability, but the predictive value of perfusate biomarkers on post-LT outcomes remains to be established. LactatesLactate clearance is the most used viability criteria during NMP evaluation. Lactate is a product of anaerobic glycolysis. The anoxia in the donor is the common cause of lactate elevation. The liver is the major organ responsible for lactate clearance, thus lactate represents a dynamic biomarker to monitor the function of the liver grafts . In the In the first experience of Mergental et al. lactate Ghinolfi et al. ,33 propoQuintini et al. argued tTo date, not only lactate clearance but a combination with other parameters and the normalization of the lactate values per liver size and amount of perfusate are key factors for the decision to transplant a graft. TransaminasesAlthough transaminases value is one of the most used markers of injury during perfusion, its correlation with post-operative outcome is poor and the level during perfusion could be influenced by the \u201cwash-out\u201d phenomenon and the size of the liver . Wash-ouIn the clinical setting, no defined cut-off values were adopted. Only grafts with very high levels of perfusate transaminases are discarded (ranging from >5000 up to 9000 IU/L based on center experience and preferences). Watson et al. reportedp = 0.092; r = 0.560).Nasralla et al. showed tGlucose MetabolismGlucose is also an easy and rapid marker of viability. Initially, glucose in the perfusate is high due to the glycogenolysis activated during SCS. 
One hour after commencing NMP, functioning livers determine the glucose concentration fall due a block of glycogenolysis and trigger of glycogenesis. Low levels of glucose at NMP start are related to PNF. The stimulation test with exogenous glucose was suggested by Watson . In caseSeveral authors considered glucose metabolism as an important viability marker in multiparametric assessment ,19,34. NPhPerfusate pH is usually low at the beginning of NMP due to hypoxia and anaerobic metabolism. During perfusion, pH must maintain within a physiological range. Bicarbonate may be administered at the beginning to correct pH, but several authors reportedPlatelet and Coagulation FactorsProduction of coagulation factors could show the efficient synthetic function of the liver. Eshmuminov et al. in theirMore recently, Weissenbacher et al. tested tBile EvaluationBile storage and analysis are routinely performed during NMP with the aim to find markers predictive of IC or biliary complications, which remain the Achille\u2019s heel of LT ,41,42,43Watson et al. were theSeveral groups proposed to evaluate the bile in relation to other perfusate parameters. Matton et al. showed tNovel biliary biomarkers have been recently reported in literature. Matton et al. investigCold preservation relies on the suppression of the metabolic rate, as most enzymatic reactions slow down with temperature reduction. Therefore, hypothermic technologies are traditionally considered less useful in assessing graft viability. Moreover, in the cold, there is a lack of active secretion of bile, which is used as viability parameter during NMP. Even when the perfusion is through the portal vein only, a certain fluid secretion through the biliary tree has been observed during HOPE, corresponding to perfusate mixed with molecules released from the hepatocytes . HoweverAccording to recent research, the key mechanism of (D-)HOPE seems to be the modification of the mitochondrial metabolism, as reported in mammalian hibernation and suspended animation . The delPressure, Flow, and ResistanceVascular resistance during HMP is an independent predictor of functional recovery and graft survival in the context of kidney transplantation, nevertheless, the predictive accuracy is modest and vascular resistance is discouraged to be used as the sole basis for organ acceptance ,51. A siTransaminase, Lactate Dehydrogenase, Glucose, and LactateIn the first clinical series published by Guarrera et al. transamiFlavin MononucleotideAmong the many functions of the liver and the consequent many confounders, Panconesi et al. have poiThe Zurich group has recently found that FMN, determined by fluorescence spectroscopy in HOPE perfusate, correlated with early graft loss, cumulative complications, and hospital stay after liver transplant ,60. BaseDespite the validation in several cohorts, the establishment of reliable markers during NMP and HMP requires higher caseloads and RCT. Nevertheless, these trials can be hardly planned for many reasons: (1) no clear endpoints and graft risk scores are defined in the field of LT and machine perfusion, (2) the presence of several perfusion devices and protocols make multicentric study difficult to be planned, (3) the prohibitive costs discourage the broad utilization of machines .In this scenario, it is very difficult to validate viability parameters during NMP, and lactates and transaminases, the most used markers, have been recently downgraded in several experiences on liver perfusion due to the poor predictive value. 
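Even with these caveats, the decision to transplant after NMP in the studies summarized above rests on combining several perfusate readouts rather than on any single marker. A minimal sketch of such a multiparametric check is given below; the thresholds (lactate clearance within the first hours of perfusion, perfusate pH, evidence of glucose metabolism and bile production) are illustrative values loosely patterned on the published criteria discussed in this section, not a validated protocol, and the exact cut-offs differ between centres and devices.

```python
from dataclasses import dataclass

@dataclass
class PerfusateSample:
    hours_on_nmp: float    # time since the start of NMP (h)
    lactate: float         # perfusate lactate (mmol/L)
    ph: float              # perfusate pH
    glucose_falling: bool  # glucose metabolised after the initial SCS-related peak
    bile_produced: bool    # any bile output observed

def meets_illustrative_viability_criteria(s: PerfusateSample) -> bool:
    """Illustrative multiparametric viability check (NOT a validated protocol).

    Requires lactate clearance early in perfusion plus at least two
    supporting markers of metabolic function; all thresholds are assumptions
    for illustration only.
    """
    lactate_cleared = s.hours_on_nmp <= 4.0 and s.lactate <= 2.5
    supporting = sum([s.ph >= 7.30, s.glucose_falling, s.bile_produced])
    return lactate_cleared and supporting >= 2

# Example: a graft sampled at 3 h of NMP
sample = PerfusateSample(hours_on_nmp=3.0, lactate=1.8, ph=7.35,
                         glucose_falling=True, bile_produced=True)
print(meets_illustrative_viability_criteria(sample))  # True for this sample
```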
Graft viability testing and selection can also be performed during (D-)HOPE. Different parameters have been explored, but only FMN correlated with graft failure. Besides graft selection, the addition of specific molecules to limit IRI may be considered in future research .The challenge for the future is to find dynamic, very specific, rapidly measurable markers able to predict postoperative complications and long-term outcome. Technical advance in laboratory technique , genomic"} {"text": "The physical situation was modelled using boundary layer analysis, which generates partial differential equations for a variety of essential physical factors (PDEs). Assuming that a spinning disk is what causes the flow; the rheology of the flow is enlarged and calculated in a rotating frame. Before determining the solution, the produced PDEs were transformed into matching ODEs using the second order convergent technique (SOCT) also known as Keller Box method. Due to an increase in the implicated influencing elements, several significant physical effects have been observed and documented. For resembling the resolution of nonlinear system issues come across in rolling fluid and other computational physics fields.The purpose of this research was to estimate the thermal characteristics of tri-HNFs by investigating the impacts of ternary nanoparticles on heat transfer (HT) and fluid flow. The employment of flow-describing equations in the presence of thermal radiation, heat dissipation, and Hall current has been examined. Aluminum oxide (Al An artificial neural network was used by Mandal et al.2 to provide investigative statistics. A study of the HT and rheological properties of HNFs for refrigeration presentations was described by Saha et al.3. In this direction different investigations are serviced and documented by Al-Chlaihawi et al.4, Kursus et al.5, Xiong et al.6, and Muneeshwaran et al.7, while Dubey et al.8 provided a brief study in HNF on mechanical revisions. Syed and Jamshed9 looked at how an MHD tangent HNF might migrate across the boundary layer of a stretched slide. In addition, Qureshi10, Jamshed et al.11 and Parvin et al.12 tested the proof of the extended HT of tangent hyperbolic liquids crossways a nonlinearly wavering transparency containing HNFs. References14 list literature related to recent advancements in fluid flow in light of various fluid models.Hybrid nanofluids (HNFs) have distinctive qualities that make them effective in numerous heat transfer applications (HT). When used in conjunction with the wrong fluid, these resources improved heat behavior and convective heat operator measurement. Years ago, the concept of boundary level flow of HNFs over an expanding surface became more astounding due to its generous requests in engineering and industrial research. The investigators have shown a great deal of attention to rehabilitate HNFs that shatter heat transfer due to their affability to the many uses of HNFs. Truncated transfer charges are present in steady liquids such as ethylene, water, glycol combinations, and some types of oils. A 3D-class of HNF was planned by Said et al.16 investigated the stability of tri- HNFs in water-ethylene glycol combination. Sahu et al.17 presented a steady\u2010state and fleeting hydrothermal examines of single\u2010phase ordinary movement loop utilizing water\u2010based tri\u2010HNFs. Muzaidi et al.18 studied the heat preoccupation possessions of tri-HNFs and its possible upcoming path towards solar thermal applications. 
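A recurring ingredient of the tri-HNF studies surveyed in this section is the set of mixture rules used to estimate the effective thermophysical properties of the particle-laden fluid. The sketch below applies the classical Brinkman viscosity and Maxwell thermal-conductivity correlations sequentially for three particle species dispersed in water; these are generic correlations, and the property values and volume fractions are illustrative assumptions rather than the models or data of any particular study cited here.

```python
# Sequential mixture rules for a ternary hybrid nanofluid (illustrative only).
# Brinkman viscosity and Maxwell conductivity are generic correlations; the
# particle properties and volume fractions below are assumed values.

base = {"rho": 997.0, "k": 0.613, "mu": 8.9e-4}            # water near 25 C
particles = [
    {"name": "Al2O3", "rho": 3970.0,  "k": 40.0,  "phi": 0.01},
    {"name": "CuO",   "rho": 6500.0,  "k": 18.0,  "phi": 0.01},
    {"name": "Ag",    "rho": 10500.0, "k": 429.0, "phi": 0.01},
]

rho, k, mu = base["rho"], base["k"], base["mu"]
for p in particles:
    phi = p["phi"]
    rho = (1.0 - phi) * rho + phi * p["rho"]          # volume-weighted density
    mu = mu / (1.0 - phi) ** 2.5                      # Brinkman viscosity
    # Maxwell model: particle p dispersed in the current effective medium
    k = k * (p["k"] + 2.0 * k - 2.0 * phi * (k - p["k"])) / (
            p["k"] + 2.0 * k + phi * (k - p["k"]))

print(f"effective density      ~ {rho:8.1f} kg/m^3")
print(f"effective viscosity    ~ {mu:8.2e} Pa s")
print(f"effective conductivity ~ {k:8.3f} W/(m K)")
```

Adding the species one after another in this way is a common shortcut in this literature, and it is consistent with the stronger enhancement reported later in the text for the ternary mixture relative to the corresponding binary hybrid and unitary nanofluids.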
Safiei et al.19 patterned the effects of tri-HNFs on surface coarseness and wounding heat in end crushing process. Adun et al.20 introduced a review of tri-HNFs studying the synthesis, stability, thermo physical possessions, HT applications, and ecological properties. Gul and Saeed21 presented a nonlinear assorted convection couple pressure tri-HNFs movement in a Darcy-Forchheimer porous intermediate over a nonlinear widening superficial. Zahan et al.22 used the current presentation of tri-HNFs in water-ethylene glycol mixtures. Hou et al.23 studied the dynamics of tri-HNFs in the rheology of pseudo-plastic fluid with thermal-diffusion and diffusion-thermo properties.Three different kinds of single nanofluids were combined and disseminated in the base fluid to create tri-HNFs. Ramadhan et al.24 improved the effectiveness and PEC of geometric solar gatherer having tri-HNFs utilizing interior helical flippers on rotating discs. Hafeez et al.25 employed the finite element analysis of current energy predisposition founded by tri-HNFs, which influenced by persuaded magnetic field over the rotating discs. Haneef et al.26 presented a arithmetical training on temperature and mass transfer in Maxwell liquid with tri-HNFs using rotating discs. Nazir et al.28 investigated the thermal and mass classes\u2019 conveyance in tri-HNFs with temperature foundation over perpendicular heated cylinder (rotating discs). Alharbi et al.29 gave a computational valuation of Darcy tri-HNFs movement transversely a spreading cylinder with initiation effects.Numerous mechanical devices such as flywheels, gears, brakes, and gas turbine engines, utilize rotating discs. The power needed to effort the circle to overwhelmed frictional drag is determined by shear stresses between the disc and the liquid in which it is rotating, and the local flow field will have an impact on heat transfer. Inappropriately, a lot of variables conspire to thwart any universal analysis; therefore, it is important to consider the flow characteristics as well as the proximity of local geometry. Suliman et al.30. Finite element analysis was used by Mourad et al.31 to study the HT of tri-HNFs engaged in curved addition with uniform MHD. A unique multi-banding application of MHD to a convective transport arrangement using a porous medium and tri-HNF was demonstrated by Manna et al.32. Unsteady hugging movement of tri-HNF in a straight channel with MHD was studied by Khashi'ie et al.33. Using an arcade current and MHD, Lv et al.34, Khan et al.35, and Alkasasbeh et al.36 distributed a numerical technique near microorganisms tri-HNF movement over a rotating flappy. The HT of MHD dusty HNFs along a decreasing slide was examined by Roy et al.37. The fractional calculus was reused by Khazayinejad and Nourazar38 to describe a 2D-fractional HT analysis of HNF in conjunction with a leaky plate and MHD. The tri-HNF curved in a depressed tube endangering the MHD was compressed by G\u00fcrdal et al.39. A study on a quick and sensitive MHD device based on a photonic quartz grit with magnetic liquid penetrated nanoholes was reported by Azad et al.40. The impact of the MHD on the thermal effect in magnetic fluid was taken into consideration by Skumiel et al.41. 
The effect of adjustable MHD on the viscous fluid between 3-D rotatory perpendicular hugging platters was investigated by Alam et al.42.The magnetic effect on stirring rechargeable jobs, power-driven flows, and magnetic resources is controlled by a Magnetohydrodynamics (MHD), which can be conceptualized as a vector field. A force perpendicular to the control's own velocity and the MHD is used as an influencing control in an MHD. The MHD of nanoscale HT of magnetized 3-D chemically radiative HNFs was discussed by Ayub et al.43 considered an examination of the moderately ionized kerosene oil-based tri-HNFs movement over a convectively animated revolving shallow. Wang et al.44 presented a strategy for tri-HNFs combination in ethylene glycol encompassing movable diffusion and current conductivity using non-Fourier\u2019s scheme. Sohail et al.46 suggested a study of tri-HNFs diffusion species and energy transfer in material prejudiced by instigation energy and heat source. Nazir et al.47 presented a significant manufacture of current dynamism in incompletely ionized hyperbolic tangent substantial created by ternary tri-HNFs50. Include recent updates that explore nanofluids with heat and mass transmission in diverse physical circumstances.While an electric field is supplied to an electrode that also contains a MHD, the current continuously exists and is known as a Hall current . Ramzan et al.54 utilized the KBM in different types of tri-HNFs to improve the thermal efficiency, comparative examination, single phase based study, thermal expansion optimization and a numerical frame effort respectively. Shahzad et al.55 studied the movement and HT occurrence for dynamics of tri-HNFs in sheet subject to Lorentz force and debauchery possessions. Alwawi et al.56 used the KBM to optimize the HT by MHD tri-HNFs over a cylinder. Alazwari et al.57 considered the KBM to discuss the entropy optimization of first-grade viscoelastic tri-HNFs movement over a widening slip by consuming classical KBM. Kumaran et al.59 presented numerical studies based on KBM.The Keller-Box method (KBM) is an implicit approach for reducing a set of ODEs to a system of first-order DEs, which is one of the numerical strategies for solving problems. The KBM is used widely by many researchers in HNF generally and tri-HNFs particularly. Jamshed et al.2O3), copper oxide (CuO), silver (Ag), and water (H2O) nanomolecules make up the ternary HNFs under study. The resilient SOCT is used to find numerical solutions once the controlling PDEs system is converted into linear ODEs using the correspondence approach. Numerical results are shown graphically, and comments are built upon. The effects on particle morphologies, the convective slip boundary condition, the thermal radiative flow, and the slippery velocity have all been well addressed.Nobody has looked into the HT of tri-HNFs in excess of a revolving disk using linear energy, Hall current, and heat degeneracies, or the mixing of ternary HNFs in MHD flow. Aluminum oxide electrically steering movement of HNFs across a stretchable spinning disk with current energies and Hall movement in the cylindrical coordinate system mentclass2pt{minim60 were made in order to simplify the issue:The succeeding presumptions2O3, Ag and CuO.There are three different types of nanoparticles in the flow, including AlIt is expected that a adequately enough magnetic field persuades the Hall current. 
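The Hall-current assumption above enters the momentum balance through the generalized Ohm's law referred to in the next passage. In the reduced form commonly used for Hall-MHD boundary-layer problems (a generic textbook statement, neglecting ion slip, the electron pressure gradient and thermoelectric effects, and not necessarily the exact expression adopted in this study), it reads

\mathbf{J} + \frac{\omega_e \tau_e}{B_0}\left(\mathbf{J}\times\mathbf{B}\right) = \sigma\left(\mathbf{E} + \mathbf{V}\times\mathbf{B}\right),

where \mathbf{J} is the current density, \mathbf{B} the applied magnetic field of magnitude B_0, \mathbf{E} the electric field, \mathbf{V} the fluid velocity, \sigma the electrical conductivity, and m = \omega_e\tau_e (the electron cyclotron frequency times the electron collision time) is the Hall parameter multiplying the cross-flow coupling term.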
When an electric field is present, the comprehensive Ohm's law has the subsequent arrangement:2, Al2O3, and Ag with the consideration of water (H2O) as an improper fluid in the current problem.Figure\u00a0s) at z\u2009=\u20090 may be stretched uniformly in the radial direction at a stretching rate of (c). Hence, the governing equations60 of continuity, momentum and energy areThe schematic diagram in cylindrical coordinates and and 6) aWith boundary conditionsAt the nanoscale, skin friction and the Nusselt quantity are very useful for manufacturing purposes. Skin frictions, a modern physical issue, are described as:64). In the dimensionles governing Eqs.\u00a0 known as Keller box method (KBM) (seeing Eqs.\u00a0\u201315) wit wit64). ing Eqs.\u00a0. The doming Eqs.\u00a0a.Figure We start with introducing new independent variables on, Eqs. \u201315) red red\\docuFurthermore, domain discretization in entclass1pt{minimaApplying central difference preparation at midpoint Equations\u00a0\u201335) are are35) aSubstituting the terminologies gotten in \u201335) and and35) aThe linearized scheme then takes the chunk tridiagonal assembly shown below.Now we factorize A as66. In addition, Table Table (M) are displayed in Fig.\u00a02O3, CuO and Ag nano-molecule. In the profiles, the Al2O3\u2009+\u2009CuO\u2009+\u2009Ag/water tri-hybrid nanofluids demonstrated stronger impact on the flow dimensions than the Al2O3\u2009+\u2009CuO/water hybrid nanofluid and unitary Ag/water nanofluid. The gyration and random motion of the nanoparticles encouraged the fluid thermodynamic mechanism, which leads to an enhanced flow characteristic. The metal oxide and metallic nanoparticles carry current and heat energy about the stretching boundary plate to influence the fluid thermal dispersion.The sensitivity of the movement appearances to the rising magnetic term 2O3\u2009+\u2009CuO\u2009+\u2009Ag/water tri-HNFs. The rising effect is significant because of the generated electric potential that is normal to the applied right angle magnetic field and the flowing electric current past a conducting nanomaterial. Therefore, the current carrying nanoparticles stimulate viscous heating that discourages molecular bonding, which leads to the overall rise in the flow characteristics. The effect of nanoparticle fractional volume 2O3\u2009+\u2009CuO\u2009+\u2009Ag/water tri-HNFs, the more the thermal conductivity of the nanoparticles. The particle volume fraction increased the thermal coefficient and the density of the nanoparticles, which correspondingly raised the heat transfer as presented in the plot.In Fig.\u00a02O3\u2009+\u2009CuO\u2009+\u2009Ag/water tri-hybrid nanomaterial to break nanoparticles chemical and molecular bonds. This propels nanoparticle interaction, and boosted the heat conduction and transfer rate. Thus, temperature distribution is raised. Likewise, in Fig.\u00a02O3\u2009+\u2009CuO\u2009+\u2009Ag/water tri-hybrid nanomaterials, which encouraged thermal conductivity.Figure\u00a0The comparison of the numerical outcomes for various fluid dynamical parameters is presented in Tables 2O3\u2009+\u2009CuO\u2009+\u2009Ag/water tri-hybrid nanomaterial is investigated in a base fluid. To investigate the sensitivity of thermodynamic fluid parameters under considered boundary conditions, quantitative numerical results and qualitative graphical results are presented. 
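The similarity reduction and Keller-box discretization described above ultimately amount to solving a nonlinear two-point boundary-value problem for the transformed velocity and temperature profiles. As a simplified, concrete illustration, the sketch below solves the classical von Karman similarity equations for a single rotating disk, the single-phase, non-magnetic analogue of the present configuration, using a standard collocation boundary-value solver rather than a Keller-box implementation; it is only meant to show the type of problem produced by the transformation, not to reproduce the authors' results.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Classical von Karman rotating-disk similarity equations (illustrative analogue):
#   2F + H' = 0,   F'' = F^2 + H F' - G^2,   G'' = H G' + 2 F G,
# with F(0) = 0, G(0) = 1, H(0) = 0 and F, G -> 0 far from the disk.
# State vector y = [F, F', G, G', H].

def rhs(eta, y):
    F, Fp, G, Gp, H = y
    return np.vstack([Fp,
                      F**2 + H * Fp - G**2,
                      Gp,
                      H * Gp + 2.0 * F * G,
                      -2.0 * F])

def bc(ya, yb):
    return np.array([ya[0],        # F(0) = 0, no radial slip at the disk
                     ya[2] - 1.0,  # G(0) = 1, disk rotation
                     ya[4],        # H(0) = 0, impermeable disk
                     yb[0],        # F -> 0 far from the disk
                     yb[2]])       # G -> 0 far from the disk

eta = np.linspace(0.0, 15.0, 300)   # truncated far-field coordinate
y_guess = np.zeros((5, eta.size))
y_guess[2] = np.exp(-eta)           # crude initial guess for the swirl profile

sol = solve_bvp(rhs, bc, eta, y_guess, tol=1e-6)

# F'(0) and -G'(0) set the radial wall shear and the disk torque; the
# classical literature values are roughly 0.510 and 0.616 respectively.
print("F'(0)  =", float(sol.sol(0.0)[1]))
print("-G'(0) =", float(-sol.sol(0.0)[3]))
```

In the ternary-nanofluid version studied here, additional momentum and energy equations, magnetic and Hall-current terms, and the effective nanofluid properties enter the same framework, with the Keller-box scheme taking the place of the collocation solver used in this sketch.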
The following are the study's findings, as summarized below: The base fluid thermal conductivity is strengthened with the combined Al2O3\u2009+\u2009CuO\u2009+\u2009Ag/water tri-HNFs.The volume fraction of nanoparticles had a significant impact on flow characteristics and heat distribution.A rising value of the Hall current term momentously encouraged the flow velocity field and thermal distribution all over the flow regime.The increasing values of the radiation, heat dissipation, and unsteadiness factors increase the HT rate.To investigate the impact of Hall current on the radiative ternary HNF flow over a rotating disc influenced by magnetic field, a second order convergent analysis was performed. The thermal conductivity strength of the flowing AlHence, tri-HNFs will assist in obtaining the desired thermal conductivity strength of an industrial working fluid and improved nanotechnology advancement.To contribute to the increasing demand of nanofluids for industrial and domestic usages, further study is encouraged. As such, this investigation can be extended to various combination of nanofluids under diverse boundary constraints. Also, different base fluids such as engine oil, glycol and others can be considered to appropriately determine the best nanomaterial for the augmentation heat propagation and conduction.73.The SOCT could be applied to a variety of physical and technical challenges in the future"} {"text": "Key messageFasciitis might be a cause contributing to myalgia in patients with MCTD.Dear Editor, MCTD, a concept proposed by Sharp et al. . ANAs were present at a titre of 1:5120 in a speckled pattern. Anti-Sm and anti-dsDNA antibodies were negative. Anti-U1-RNP antibody was positive at 550\u2009U/ml. A CT scan of the lung revealed a small amount of pleural effusion and very mild reticular opacities. Ultrasonic cardiography showed no evidence of pulmonary hypertension and preserved left ventricular function. A diagnosis of MCTD was made according to criteria proposed by the Ministry of Health and Welfare of Japan [e fibres and milde fibres . No eosiThe most remarkable characteristic in this case is that fasciitis was detected histopathologically instead of myositis, although the patient showed severe myalgia. We reported that myalgia in patients with DM and PM was attributable to fasciitis rather than myositis . Clinicaet al. [Matsuda et al. reportedet al. . Taken tWe report a unique case of MCTD with severe myalgia and fasciitis. Fasciitis can lead to severe myalgia. When patients with MCTD show severe myalgia, we should suspect the presence of myositis and fasciitis.Funding: This work was supported by JSPS KAKENHI grant number 22K08551 and Japanese Non-surgical Orthopedics Society \uff08JNOS\uff09 grant number JNOS202101.Disclosure statement: The authors have declared no conflicts of interest."} {"text": "Interface Focus (in two parts) brings together articles on time-keeping and decision-making in living cells\u2014work that uses precise mathematical modelling of underlying molecular regulatory networks to understand important features of cell physiology. Part I focuses on time-keeping: mechanisms and dynamics of biological oscillators and modes of synchronization and entrainment of oscillators, with special attention to circadian clocks.To survive and reproduce, a cell must process information from its environment and its own internal state and respond accordingly, in terms of metabolic activity, gene expression, movement, growth, division and differentiation. 
These signal\u2013response decisions are made by complex networks of interacting genes and proteins, which function as biochemical switches and clocks, and other recognizable information-processing circuitry. This theme issue of As in aytoplasm . Cellulaet al. [Journal of the Royal Society Interface on \u2018biological switches and clocks'. As a follow-up to that issue, Interface Focus is presenting a two-part collection of articles on the theme of \u2018time-keeping and decision-making in living cells'.The theoretical foundation for understanding time-keeping and decision-making in molecular regulatory networks was crafted in the 1960s and early 1970s by the pioneering work of Goodwin ,9, Roesset al. in a supet al. [2+/cyclic AMP (cAMP) oscillations), glycolytic oscillations and inflammatory responses (nuclear factor-\u03baB (NF\u03baB) oscillations). Next, Goldbeter & Yan [et al. [et al. [block transcription , R binding to A may displace A from the promoter (R + A:P \u2192 R:A:P \u2192 R:A + P) and R binding to A may sequester A from P (R:A prevents A from binding to P)\u2014they show that synergistic interactions of the three modes generate ultrasensitive transcriptional responses and robust oscillations.Part I focuses on time-keeping, in particular on mechanisms of biological oscillators, on synchronization of intercommunicating oscillators and on entrainment to external driving rhythms, with particular emphasis on circadian rhythms. Jim\u00e9nez et al. lead offer & Yan present [et al. provide [et al. investigPart II will focus on decision-making in cell differentiation, development and cell cycle progression.. 2Because this theme issue presents only a few snapshots of the immense progress that has been made on biological clocks since the 2008 collection, we review here some other recent developments.Per1, Per2, Cry1, Cry2, Clock, Bmal1, Rev-erb\u03b1 and Ror\u03b1), have been remarkably successful in accounting for the physiological properties of circadian rhythms in wild-type and mutant cell lines [et al. [et al. [et al. [et al. [Cry1 \u2212| Per2 \u2212| Rev-erb\u03b1 \u2212| Cry1) as a dominant source of oscillations in their models [et al. [As might be expected, considerable progress has been made in understanding the molecular mechanisms underlying circadian rhythms. Firstly, some new models of mammalian clock, taking into account interactions among the principal clock genes . By fit [et al. ,41 identr models . Ko et a [et al. have hig [et al. \u201346.et al. [et al. [et al. [et al. [et al. [Intercellular coupling and synchronization of circadian clocks have also been the subject of many publications. Bernard et al. showed t [et al. suggeste [et al. . Studies [et al. ,50 and b [et al. have foc [et al. further Neurospora crassa, Dovzhenok et al. [et al. [et al. [The interplay of circadian clocks and metabolism has received much attention. Using a combination of mathematical modelling and experiments in k et al. found an [et al. introduc [et al. used a c [et al. ,57 devel [et al. used a met al. [Wee1 is upregulated by an E-box, which binds to the master transcription factor BMAL1 : CLOCK. Zamborszky et al. [Wee1 expression, would generate quantized cell cycle times and cell division sizes. Later, Gerard & Goldbeter [et al. [et al. [et al. [We also draw your attention to some additional studies of interactions between the circadian clock and the cell division cycle, based on the unexpected discovery by Matsuo et al. that tray et al. used matoldbeter found en [et al. found mu [et al. 
. And, Ma [et al. , using d [et al. used matFinally, we review recent proposals that the cell division cycle is a \u2018clock-shop' of autonomous (independent) oscillators entrained to one another. It is commonly thought that cell cycle events are controlled by a \u2018master programme' based on fluctuations of cyclin-dependent kinases (Cdks) and their immediate regulators ,67. HoweAltogether, these recent studies and many others have contributed greatly to our understanding of the molecular regulatory mechanisms underlying biological oscillations and of their advantageous properties, such as robustness, tunability, entrainment and temperature compensation. Also, we now have a better appreciation of the\u2014often non-intuitive\u2014dynamics resulting from the interplay between clocks and clock-controlled processes. This progress has revolutionized our interpretation of experimental observations and our vision of future progress in health science and biotechnology.. 32+ waves in fertilized eggs [Joel E. Keizer (1942\u20131999). As a physical chemist, Joel Keizer made fundamental contributions to the theory of non-equilibrium thermodynamics before turning his attention to the application of dynamical systems theory to problems in cell physiology, most notably complex bursting oscillations in pancreatic \u03b2-cells and Cazed eggs ,82.Benno Hess (1922\u20132002). As director of a Max-Planck-Institute in Dortmund, Germany, Benno Hess directed a large group of biochemists and mathematical modellers exploring the molecular mechanisms of glycolytic oscillations in yeast cells .Arthur T. Winfree (1942\u20132002). An engineer by training, Art Winfree turned his creative mind to the dynamics of oscillations and wave propagation in living organisms. His predictions of \u2018phase singularities' in circadian rhythms and cardiac physiology revolutionized our understanding of these fields ,85.Rene Thom (1923\u20132002). The purest of pure mathematicians, Rene Thom was among the first persons to recognize the relevance of bifurcations of vector fields to temporal and spatial organization in living organisms ,87.Ilya Prigogine (1917\u20132003). The 1977 Nobel Prize in Chemistry was awarded to Ilya Prigogine for his fundamental insights on far-from-equilibrium thermodynamics, most notably on \u2018dissipative structures' in living organisms .Lee A. Segel (1932\u20132005). A world-renowned applied mathematician, Lee Segel made major contributions to the theory oscillations, pattern formation and wave propagation in living cells ,90.Reinhart Heinrich (1946\u20132006). A physicist who moved into biochemistry, Reinhart Heinrich combined modelling of specific cellular systems with a search for general principles. As one of the founding intellects behind metabolic control theory, he upset the paradigm of \u2018rate-limiting' steps in biochemistry by showing, in quantitative detail, how flux control is distributed across all enzymes in a metabolic network .Brian Goodwin (1931\u20132009). A Canadian \u00e9migr\u00e9 who got his PhD at the University of Edinburgh under Conrad Waddington, Brian Goodwin had a lifelong interest in development and evolution and was a leading figure in the renaissance of mathematical biology in the 1960s. His pioneering work on biochemical oscillators is reverberating to this day .Christopher Zeeman (1925\u20132016). 
A leading English mathematician and early convert to Thom's theory of singularities of vector fields, Christopher Zeeman applied \u2018catastrophe theory\u2019 in creative ways to a wide variety of phenomena in biology, as well as other fields ,94.Rene Thomas (1928\u20132017). The Belgian biochemist Rene Thomas was an early proponent of \u2018logical modelling' of biochemical reaction networks (i.e. modelling by Boolean functions). Early on he recognized the importance of negative feedback in generating biochemical oscillations and positive feedback in creating biological switches .Gregoire Nicolis (1939\u20132018). A chemical physicist at the Free University of Brussels, Gregoire Nicolis led a team of talented and creative younger scientists in the application of Prigogine's abstract ideas about non-equilibrium thermodynamics and \u2018dissipative structures' to real-world problems in chemistry, physics and biology ,97.George F. Oster (1940\u20132018). After an unusual start in the merchant marines and nuclear physics, George Oster studied biophysics and applied his massive intellect to a theoretical understanding of some of the toughest problems in molecular cell biology, including morphogenesis, pattern formation and molecular motors ,99.Garrett M. Odell (1943\u20132018). Trained in applied mathematics and theoretical mechanics, Garry Odell took an abrupt turn to mathematical biology, where he made many remarkable contributions to our understanding of chemotaxis, embryogenesis, molecular motors, gene regulatory networks and cytoskeletal mechanics .The theoretical foundations of time-keeping and decision-making by the molecular regulatory networks in living cells were laid out in the 1970s and 1980s by a small band of physical chemists, biochemists, chemical engineers and mathematical biologists. Unfortunately, many of these pioneering scientists have passed away and are sorely missed. In conclusion, we would like to recognize their contributions to the field."} {"text": "The association of periodontal disease in people diagnosed with RA is emerging as an important driver of the RA autoimmune response. Screening for and treating periodontal disease might benefit people with RA. We performed a systematic literature review to investigate the effect of periodontal treatment on RA disease activity.Medline/PubMed, Embase and Cochrane databases were searched. Studies investigating the effect of periodontal treatment on various RA disease activity measures were included. The quality of included studies was assessed. Data were grouped and analysed according to RA disease outcome measures, and a narrative synthesis was performed.We identified a total of 21 studies, of which 11 were of non-randomized experimental design trials and 10 were randomized controlled trials. The quality of the studies ranged from low to serious/critical levels of bias. RA DAS-28 was the primary outcome for most studies. A total of 9 out of 17 studies reported a significant intra-group change in DAS-28. Three studies demonstrated a significant intra-group improvement in ACPA level after non-surgical periodontal treatment. Other RA biomarkers showed high levels of variability at baseline and after periodontal treatment.There is some evidence to suggest that periodontal treatment improves RA disease activity in the short term, as measured by DAS-28. Further high-quality studies with longer durations of follow-up are needed. 
The selection of the study population, periodontal interventions, biomarkers and outcome measures should all be considered when designing future studies. There is a need for well-balanced subject groups with prespecified disease characteristics. Key messagesA short course of periodontal treatment can significantly improve RA disease activity.Periodontal treatment might influence serum ACPA levels in people with RA and co-existent periodontitis.Further high-quality intervention studies with longer study durations are needed.RA is an autoimmune inflammatory condition that primarily affects the joints. RA autoimmunity is thought to be initiated at mucosal sites, such as the oral cavity, lung and gastrointestinal tract. At these sites, local inflammation [e.g. periodontitis (PD)] can be driven by genetic or environmental risk factors (e.g. cigarette smoke). The combination of mucosal inflammation and local bacterial dysbiosis might be responsible for triggering the RA autoimmune response, in particular ACPA, the serological hallmark of RA . Of the Periodontitis is an inflammatory condition of the tooth supporting structures initiated by microbial biofilms on the tooth surface and exacerbated by a dysregulated host response. There is also a similar dysregulation of the pro-inflammatory cytokines to that seen in RA . The res+ at-risk individuals without arthritis [Porphyromonas gingivalis, has a peptidylarginine deiminase enzyme that can citrullinate cytoskeletal filaments, which might promote the production of ACPAs. It is therefore hypothesized that changes in the oral microbiome in PD could be a trigger for RA-related autoimmunity. A study investigating microbial composition of subgingival plaque in CCP+ subjects at risk of RA found dysbiotic subgingival microbiome composition in addition to an increased prevalence of P.\u00a0gingivalis compared with healthy controls [There is mounting evidence to support a relationship between RA and PD. It has been found that periodontal disease is more prevalent in people with RA . PD is arthritis . A key pcontrols .Building on the putative association between PD and RA, a logical question is: can periodontal therapy influence disease activity and disease progression in RA? Recent reviews have found improved DASs after non-surgical periodontal treatment . We aim Rheumatology Advances in Practice online. A systematic literature search was designed with input from an expert librarian and informatician at Leeds Teaching Hospitals NHS Trust using a combination of key words and MESH terms. The search was conducted according to a prespecified protocol and the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines . SearcheScreening of search results was undertaken independently by two authors (Z.M. and J.T.). Any disagreement was resolved through discussion and, where necessary, arbitration by a third author (K.M.).For each article, the effect of NSPT on RA disease activity was analysed, and the following themes emerged from the data: the effect of treatment on DAS-28, ACPA, ESR and CRP, RF, swollen and tender joints, morning stiffness, HAQ and other ancillary RA biomarkers.s.e.m. was provided. Data were tabulated for a narrative synthesis and grouped by RA disease outcome measure.Effect measures of NSPT on outcome measure were calculated as the mean difference between baseline and follow-up (intra-group comparison) and/or the mean difference between groups (inter-group comparison). 
Where possible, the Included articles were quality appraised using the Risk Of Bias In Non-randomized Studies\u2014of Interventions (ROBINS-I) and the The systematic review was not registered. There was no deviation from the protocol throughout the review.Rheumatology Advances in Practice online, details the excluded studies. A total of 21 articles were included in the analysis; 10 were of randomized controlled trial design and 11 of non-randomized intervention design. The study characteristics are detailed in The screening results are summarized in the PRISMA flow diagram . SupplemDAS-28 was reported in a total of 17 studies. DAS-28 was calculated using ESR in eight studies, in two studies using CRP and one study reported both formats. Whether CRP or ESR was used was not reported in seven studies. Of the 17 studies, 9 demonstrated a statistically significant improvement in the DAS-28 after NSPT, compared with baseline. Of the 10 studies that reported on inter-group analysis, 6 studies demonstrated a statistically significant difference between the experimental and control arms.et al. [P\u2009=\u20090.013). This represents the longest study duration to show improved DAS-28.A recent paper by Nguyen et al. , compariet al. [+PD+ and 30 RA+PD\u2212 subjects. At 45\u2009days after NSPT, the RA+PD+ group had a reduction of DAS-28 from 4.34 to 3.12, . However, this study has a high risk of bias owing to the randomization process.Moura et al. conducteet al. [P\u2009=\u20090.04) and a decrease of DAS-28 CRP from a median of 3.26 to 2.76 (P\u2009=\u20090.002).In an analysis of a subgroup of 22 participants who had RA, Bia\u0142owas et al. showed tet al. [P\u2009<\u20090.01). However, at the 3- and 6-month re-evaluation there was no significant further change.B\u0131y\u0131ko\u011flu et al. , in theiet al. [P.\u00a0gingivalis counts at both baseline and the 3-month re-evaluation .Cosgarea et al. failed tet al. [Monsarrat et al. failed tet al. [P\u2009=\u20090.002). The mean DAS-28 reduction was 1.05 , whereas in the control group there was no mean reduction.Khare et al. comparedRheumatology Advances in Practice online.The full results are summarized in Rheumatology Advances in Practice online, summarizes the change in serum CRP and ESR values after NSPT.A total of 7 studies evaluated the effect of periodontal intervention on CRP , 32, 33,Rheumatology Advances in Practice online, summarizes the changes in RF after NSPT.Seven studies evaluated RF levels before and after NSPT. Five studies , 32, 33 A total of six studies evaluated serum ACPA. Three studies reported a significant reduction in serum ACPA levels after NSPT.et al. [r\u2009=\u20090.939, P\u2009<\u20090.001).Zhao et al. found a et al. [P\u2009<\u20090.001). Likewise, Nguyen et al. [P\u2009<\u20090.001).Anusha et al. found a n et al. found a et al. [et al. [P.\u00a0gingivalis abundance from periodontal pocket samples. Rheumatology Advances in Practice online, summarizes the change in serum ACPA values after NSPT.Ding et al. failed t [et al. found noRheumatology Advances in Practice online, summarizes the effect of NSPT on ancillary biomarkers.A total of 9 out of 21 studies investigated the effect of NSPT on ancillary biomarkers. Four studies investigated the effect of NSPT on serum TNF-\u03b1. Two of the studies , 31 founA total of four studies included data on the effect of NSPT on swollen and tender joint counts. 
Two of these four studies reported that there was a statistically significant improvement in swollen and tender joint counts after treatment.et al. [P\u2009<\u20090.01 in both cases). They also found a statistically significant reduction in tender joint count in group\u00a0C (P\u2009<\u20090.05) but not in group\u00a0A.Ortiz et al. found a et al. [P\u2009=\u20090.01) and tender joint count (P\u2009=\u20090.04) 6\u2009weeks after NSPT. Neither Al-Katma et al. [et al. [Bia\u0142owas et al. found a a et al. nor Okad [et al. found a Two studies evaluated the duration of early morning stiffness , 27. Neiet al. [There were no adverse events reported in any of the included studies from NSPT in the study populations. Of interest, however, in the study by Monsarrat et al. , two parThis systematic review has identified and included new data providing an important update on this topic. The relationship between periodontal disease and RA, and the potential impact of NSPT on RA continue to be research areas of great interest. This is reflected in the markedly increased frequency of studies exploring these links in the last 5\u2009years . We founStudy design was highly variable, with a moderate to critical risk of bias in most studies. The most common risks for biases were in measurements of outcomes and selection of the reported result for non-randomized study designs assessed using the ROBINS-I tool. For randomized controlled trials, the reporting/design of the randomization process and measurement of outcomes were most frequently areas of concern. Several trials were multiple-arm studies with a range of interventions, disease status , making meaningful inter-group comparisons difficult. Such designs also meant that there was no true control group in many cases .Although systematic reviews with meta-analyses have been published examining the impact of periodontal therapy on RA , we feltCase definitions for both periodontal disease and RA varied significantly between studies. This is important because the severity of periodontal disease and subsequent levels of oral mucosal inflammation might be an important prognostic factors. Levels of baseline RA-related systemic inflammation might have similar importance regarding RA-related outcome measures. Future studies should follow the 2017 World Workshop on the Classification of Periodontal and Peri-Implant Diseases and Conditions to charaDescription of the periodontal interventions frequently lacked sufficient detail, with the experience of the operator , instruments/equipment used, thresholds for plaque control before treatment and any time limits rarely reported. In the five studies that followed subjects up to 6\u2009months, only a single course of NSPT was provided, whereas the accepted standard of care is for \u223c3-monthly supportive therapy . This miMost of the included studies (16 of 21) had a primary endpoint of 3\u2009months or less. Although this may be adequate to demonstrate proof of concept, because both RA and periodontal disease are chronic conditions, the clinical significance of acute changes after a single course of treatment is questionable. Whether the improvements seen in most studies would be sustained with continued treatment or supportive therapy remains to be investigated.Most studies measured several RA-specific and ancillary pro-inflammatory biomarkers . 
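Because DAS-28 underpins most of the comparisons reported above, and because the ESR- and CRP-based formats discussed in the following paragraphs are not interchangeable, the two commonly used formulas are reproduced here for reference (standard published forms; TJC28 and SJC28 are the 28-joint tender and swollen joint counts and GH is the patient global health score on a 0-100 visual analogue scale):

\mathrm{DAS28\text{-}ESR} = 0.56\sqrt{\mathrm{TJC28}} + 0.28\sqrt{\mathrm{SJC28}} + 0.70\,\ln(\mathrm{ESR}) + 0.014\,\mathrm{GH}

\mathrm{DAS28\text{-}CRP} = 0.56\sqrt{\mathrm{TJC28}} + 0.28\sqrt{\mathrm{SJC28}} + 0.36\,\ln(\mathrm{CRP} + 1) + 0.014\,\mathrm{GH} + 0.96

The explicit CRP term is one reason why periodontal inflammation, which can itself raise CRP, complicates the interpretation of DAS-28-CRP changes after periodontal treatment.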
Although identification of candidate biomarkers is an important objective, these markers were highly variable both at baseline and after treatment and did not appear to be suitable as independent measures of disease activity. Although levels of inflammatory markers offer one explanation for a common mechanism of action for the impact of periodontal therapy on both periodontal disease and RA activity, most of these biomarkers do not appear to be useful proxies for more robust clinical measures, such as DAS-28 for RA or bleeding on probing/probing pocket depth for periodontal disease. One promising biomarker as an outcome measure for assessing the impact of NSPT on RA disease activity is ACPA, for which three studies found there to be a significant reduction after treatment , 26, 32.Regarding interpretation of DAS-28 reduction, future studies should take care to report consistent DAS-28 formats. This review found inconsistency in the use of either DAS-28-ESR or DAS-28-CRP, and occasionally, the format was not reported. This is particularly important because CRP has been shown to be elevated in individuals with periodontal disease and no systemic disease . TherefoReporting of additional outcomes was limited for most studies. Adverse events were rarely reported, although in the studies that did report them these were rare, predictable and known consequences of periodontal therapy (e.g. dentine hypersensitivity). Only two studies , 27 repoBox 1. We make these recommendations for future researchWhen reported, the duration of RA in the study populations was at least several years. The participants also had persistent active disease, despite pharmacotherapy. Despite such recalcitrant disease characteristics, most studies demonstrated a clinically meaningful reduction in disease activity, underscoring the potential for management of periodontal disease as a potential adjunctive measure. One study that recruited refractory RA participants but did not meet the inclusion criteria was the study by M\u00f6ller et al. [This systematic review had several strengths. New data have been captured by our search, thus providing an important update on this topic. We specified open inclusion criteria to include all potentially relevant studies, including subgroups of individual study arms where relevant. We used a comprehensive risk of bias/quality assessment for all study designs. We elected to avoid meta-analyses owing to the heterogeneity of study design, high risk of bias present in most studies and clinical variability between study samples. Based on the limitations of currently available studies, we have suggested considerations for the design of future research to evaluate the impact of NSPT on RA . A qualir et al. . They co+ individuals who have not yet progressed to RA compared with controls [Although more research is needed, we found clear support for the use of NSPT in people who have both PD and RA. Recent studies have shown that there is a higher incidence of periodontal disease in ACPAcontrols . We recoIt has been suggested that periodontal inflammation might precede joint inflammation and therefore that the periodontium could be the site of RA disease initiation in some individuals. Early detection/treatment of PD in the pre-RA phase could therefore delay/prevent RA . Recent Funding: There was no funding associated with this research.Disclosure statement: The authors have declared no conflicts of interest.Registration: The systematic review was not registered. 
There was no deviation from the protocol throughout the review.rkac061_Supplementary_DataClick here for additional data file."} {"text": "Sir,We thank Kessler et al. for their comments on our study.We carried out a population study with the aim to evaluate the impact of introduction of STAN in Norway on the following outcome measures: occurrence of fetal and neonatal deaths, Apgar scores <7, intrapartum cesarean sections and instrumental vaginal deliveries.Kessler et al.Kessler et al.According to the authors, \u201cthe number of theoretically preventable neonatal deaths is substantially lower than its total count\u201d.It is difficult to estimate exactly how many intrapartum stillbirths could have been prevented. A reasonable estimate is that three to four intrapartum stillbirths could have been prevented yearly by STAN. This is a low number, in particular since there are about 55\u2009000 deliveries each year in Norway. Also note that the regression coefficient is small Table\u00a0. During Kessler et al.Note that the starting point for our graph in Figure\u00a0Kessler et al.Kessler et al.Kessler et al.In their Letter to the Editor, Kessler et al. conclude that \u201cthe data presented in the study do not support its key message\u201d. The authors put forward several arguments to justify their conclusion. Below, we present their arguments, and give our response.Figure S1Click here for additional data file."} {"text": "Philosophical Transactions published G. I. Taylor\u2019s seminal paper on the stability of what we now call Taylor\u2013Couette flow. In the century since the paper was published, Taylor\u2019s ground-breaking linear stability analysis of fluid flow between two rotating cylinders has had an enormous impact on the field of fluid mechanics. The paper\u2019s influence has extended to general rotating flows, geophysical flows and astrophysical flows, not to mention its significance in firmly establishing several foundational concepts in fluid mechanics that are now broadly accepted. This two-part issue includes review articles and research articles spanning a broad range of contemporary research areas, all rooted in Taylor\u2019s landmark paper.In 1923, the Philosophical Transactions paper (part 1)\u2019.This article is part of the theme issue \u2018Taylor\u2013Couette and related flows on the centennial of Taylor\u2019s seminal All the solutions are for the same equations, only with different values of . We have no reason to think that there are any terms missing from these equationsrprise.\u2019 .. 2Philosophical Transactions.Part 1 of this theme issue is a combination of review articles and research articles having a root in Taylor\u2019s 1923 paper and often connecting directly to Taylor\u2019s vision of future work on the problem. The authors who have contributed to this theme issue are leading researchers in the field of Taylor\u2013Couette and related flows. They represent an international community of scientists and engineers with research interests that span a broad range of flow physics and applications, all of which can trace their heritage back to Taylor\u2019s 1923 paper in the et al. investigate \u2018a dynamical skeleton of turbulence\u2019 experimentally and numerically. Wiswell et al. conduct experiments on end-effects in low aspect ratio Taylor\u2013Couette flow. Jeganathan et al. present numerical results on the origin of turbulent Taylor rolls. 
Oishi & Baxter use a generalized quasi-linear approximation to study non-normality in spiral turbulence.For the classical Taylor\u2013Couette problem with a Newtonian fluid, the focus of modern research has largely moved beyond the linear onset of instability studied by Taylor to consider instead the highly supercritical, turbulent regime. Crowley et al. and Meyer et al. each present numerical results obtained by imposing radial temperature gradients in the underlying cylindrical geometry. Further extensions of Taylor\u2013Couette flow include multi-phase flows, where Baroudi et al. review Taylor\u2013Couette flow of suspensions, followed by experimental studies by Alam & Ghosh, Yi et al. on emulsions, and Blaauw et al. on bubbly drag reduction by switching from fresh to salt water.Fundamentally new areas of research are opened up by combining Taylor\u2013Couette flows with convection. Kang et al. review experiments involving elasto-inertial transitions, and Song et al. review turbulent flows of dilute polymeric solutions.Three further extensions of the classical Taylor\u2013Couette problem are magnetohydrodynamic, ferrofluidic and viscoelastic flows. At first glance, these might seem very different, but they modify the original problem in somewhat similar ways, by providing new ways of coupling fluid parcels via magnetic tension in magnetohydrodynamics, magnetic forces in ferrofluids, or polymeric elasticity in viscoelastic flows. Guseva & Tobias present numerical studies and theoretical analysis of transition to chaos and modal structure in magnetohydrodynamic flows. Altmeyer numerically explores ferrofluidic wavy Taylor vortices under alternating magnetic fields. For the viscoelastic problem, Boulafentis et al. review routes to turbulence in rotating disc boundary layers and cavities, demonstrating that ideas and methods similar to those used in the Taylor\u2013Couette geometry can often be applied in other flow systems as well.Finally, there are a variety of systems and geometries that are not strictly Taylor\u2013Couette flows as such, but are nevertheless closely related. Martinand . 3Philosophical Transactions builds on the ongoing interest in Taylor\u2013Couette flow and its many important derivatives in terms of current research, perspectives on the influence of Taylor\u2019s seminal paper, and its future impact on many related fields.Although Taylor\u2013Couette flow has been studied for a century , it continues to provide a basis for a broad range of research. This two-part issue of the"} {"text": "Appropriate prenatal care (PNC) is essential for improving maternal and infant health; nevertheless, millions of women in low- and middle-income countries (LMICs) do not receive it properly. The objective of this review is to identify and summarize the qualitative studies that report on health system-related barriers in PNC management in LMICs.This systematic review was conducted in 2022. A range of electronic databases including PubMed, Web of Knowledge, CINHAL, SCOPUS, Embase, and Science Direct were searched for qualitative studies conducted in LMICs. The reference lists of eligible studies also were hand searched. The studies that reported health system-related barrier of PNC management from the perspectives of PNC stakeholders were considered for inclusion. Study quality assessment was performed applying the Critical Appraisal Skills Programme (CASP) checklist, and thematic analyses performed.Of the 32 included studies, 25 (78%) were published either in or after 2013. 
The total population sample included 1677 participants: 629 pregnant women, 122 mothers, 240 healthcare providers, 54 key informants, 164 women of childbearing age, 380 community members, and 88 participants from other groups. Of the 32 studies meeting the inclusion criteria, four major themes emerged: (1) healthcare provider-related issues; (2) service delivery issues; (3) inaccessible PNC; and (4) poor PNC infrastructure. This systematic review provided essential findings regarding PNC barriers in LMICs to help inform the development of effective PNC strategies and public policy programs. As a qualitative evidence synthesis method, we applied thematic synthesis. Twenty-five of the included studies (78%) were published either in or after 2013. The studies took place in 21 countries across four continents. Of the included studies, 59% discussed countries or regions in Africa, with Tanzania and Malawi being the most common of these; 25% discussed Asian countries or regions; only one study (3.1%) discussed barriers in South America, and one (3.1%) in Papua New Guinea, from Oceania. Numbers of participants varied from five to 295, with most studies including between 20 and 80 participants. The total population sample included 1677 participants: 629 pregnant women, 122 mothers, 240 healthcare providers, 54 key informants, 164 women of childbearing age, 380 community members, and 88 participants from other groups. The overall quality assessment of the studies was conducted by rating the CASP items (Table). We categorized the review findings into four main themes: healthcare provider-related issues, service delivery issues, inaccessible PNC, and poor PNC infrastructure. There are one to five subthemes under each theme, as presented in the Figure. Concerns about the negative impact of healthcare providers' issues on PNC emerged as a prominent theme with five subthemes: (1) human resource shortage; (2) lack of female PNC providers; (3) insufficient PNC providers' knowledge; (4) poor relationship with PNC clients; and (5) lack of motivation. Participants in many of the included studies expressed concerns over insufficient human resources (Baffour-Awuah et al. and others). Inaccessible PNC emerged as a theme with three subthemes: (1) long distance; (2) unaffordable PNC; and (3) long waiting times. Many participants believed that pregnant women cannot afford the cost of PNC, reporting the high cost of care, laboratory tests, and medications (Mathole et al. and others). Waiting time was another accessibility area in which frustration was expressed: participants believed that long waiting times discourage pregnant women from seeking PNC services (Larsen et al. and others). According to participants' perspectives, geographical access to PNC appears inadequate.
They mentioned that PNC seekers\u2019 access to care is restricted by long distance (Larsen et\u00a0al., et al., et\u00a0al., et\u00a0al., et\u00a0al., et\u00a0al., et al., et al., et\u00a0al., et\u00a0al., et al., et al., et al., et\u00a0al., et\u00a0al., et al., et al., et\u00a0al., et\u00a0al., et al., et\u00a0al., et\u00a0al., et al., et\u00a0al., et\u00a0al., et\u00a0al., et al., et\u00a0al., et al., et al., et al., et\u00a0al., We found that many of participants complained that poor PNC clinic facilities hindered PNC provision or utilization (Larsen PNC is an essential component of improving maternal and infant health during pregnancy and birth, by treating and monitoring potential complications. This review set out to summarize the qualitative literature concerning the healthcare system-related barriers in PNC management in LMICs. Included studies came from a variety of countries and help understand the range of different potential difficulties in PNC management from several continents. Findings of this systematic review suggest that PNC in LMICs can be challenged by a number of barriers at different levels of healthcare systems, including human resources aspects, service delivery issues, PNC accessibility, and PNC infrastructures.In addition to a wide range of countries with low- and middle-income settings, the included studies encompassed a wide range data from different types of PNC stakeholders such as healthcare providers, pregnant women, male partners, and community members. This indicates that PNC stakeholders, in any role, are aware that PNC is provided in a context lead by the healthcare system.et al., et\u00a0al., It is notable that the majority of barriers identified within the evidence emerged within the human resources and service delivery themes. This stakeholder perception is supported by other systematic reviews investigating LMICs barriers in other maternal health contexts such as midwifery care (Filby et\u00a0al., et al., et al., et al., Many of the emerged barriers in this review of qualitative studies also match those observed in earlier quantitative studies. For example, one of them highlighted insufficient geographical accessibility (Kuupiel This review provides a comprehensive approach to qualitative studies of healthcare system-related barriers to PNC in LMICs. Exploring pregnant women, PNC providers, and general population accounts also provided a rounded understanding of PNC barriers from multiple perspectives.There are several important limitations to note when interpreting the results of this review. One limitation is that it we only included articles published in English, which may suggest that the potentially relevant studies from cultural contexts where English is not the norm may be missed. In addition, limited time and resources prevented a more thorough and comprehensive search of the gray literature, a body of evidence that may have had more to offer PNC clients\u2019 experiences and perspectives.Despite all of the works that has been conducted in the area of PNC barriers, the current review noted a significant gap in the evidence base related to PNC and healthcare systems. This important gap is the perspectives of women who are underrepresented in the data: pregnant women who did not make it to PNC. 
Because of health system-centric nature of the majority of related literature, there is much more information about pregnant women who stayed in care than about those who never attend PNC.This review contributes to the current debate on the knowledge of key barriers to PNC in LMICs contexts. Findings of this systematic review suggest that PNC in LMICs can be challenged by a number of barriers at different levels of healthcare systems, including human resources aspects, service delivery issues, PNC accessibility, and PNC infrastructures. Healthcare policymakers in LMICs, when planning and managing the PNC, should consider the lessons learnt from previous reports as synthesized in this review and should carefully develop strategies to prevent and mitigate common barriers to successful PNC."} {"text": "S. Najm et al., RSC Adv., 2022, 12, 29613\u201329626, https://doi.org/10.1039/D2RA04790J.Correction for \u2018Mechanism and principle of doping: realizing of silver incorporation in CdS thin film The authors regret that the list of affiliations was incorrectly shown in the original manuscript. The corrected list of affiliations is as shown below.The Royal Society of Chemistry apologises for these errors and any consequent inconvenience to authors and readers."} {"text": "Micromixers are important components of lab-on-a-chip systems, and also have many biological and chemical applications . MicromiHejazian et al. reviewedBanos et al. examinedKoike and Takayama introducOevreeide et al. proposed2O3 nanoparticles affects the pressure drop and thermal mixing for a Re range from 0.1 to 25 was tested. The results revealed that higher nanofluid concentration and stronger chaotic advection improved the hydrodynamic and thermal performances over the whole Re range remarkably. Mahammedi et al. [Tayeb et al. numericai et al. also numi et al. reportedI appreciate all the authors who contributed to this Special Issue. Additionally, thanks also go to the reviewers for the submitted papers and the editorial staff who conducted the review process. Further contribution to this topic can be made to the Topical Collection \u201cMicromixers: Analysis, Design and Fabrication\u201d of this journal."} {"text": "Heliconius butterflies or heterostyly in Primula\u2014have been studied since the Modern Synthesis, we still understand very little about how they evolve and persist in nature. The genetic architecture of supergenes is a critical factor affecting their evolutionary fate, as it can change key parameters such as recombination rate and effective population size, potentially redirecting molecular evolution of the supergene in addition to the surrounding genomic region. To understand supergene evolution, we must link genomic architecture with evolutionary patterns and processes. This is now becoming possible with recent advances in sequencing technology and powerful forward computer simulations. The present theme issue brings together theoretical and empirical papers, as well as opinion and synthesis papers, which showcase the architectural diversity of supergenes and connect this to critical processes in supergene evolution, such as polymorphism maintenance and mutation accumulation. Here, we summarize those insights to highlight new ideas and methods that illuminate the path forward for the study of supergenes in nature.Supergenes are tightly linked sets of loci that are inherited together and control complex phenotypes. 
While classical supergenes\u2014governing traits such as wing patterns in This article is part of the theme issue \u2018Genomic architecture of supergenes: causes and evolutionary consequences\u2019. As a New genomic, theoretical and bioinformatic methods now allow us to begin to understand the complex evolutionary history and diversity of supergenes ,3,18,19.The present collection of papers integrates (i) theoretical (including simulation) studies that generate predictions about supergene evolution, (ii) empirical studies that dissect genomic variation associated with supergenes, and (iii) synthetic opinion and review papers that tie together these diverse aspects of supergene evolution. Papers in this theme issue showcase the stunning architectural and taxonomic diversity of supergenes and cover a broad range of subjects relevant to our understanding of supergenes including local adaptation versus gene flow; parallel evolution; mutation accumulation and balanced lethality; sexually antagonistic selection; sex chromosome evolution; speciation; chromatin architecture and gene expression; and selfish genetic elements. In this introduction, we review the advances made by these papers, summarize the genomic architecture they uncover (\u00a72) and detail how it connects to three major questions in supergene research: the role of supergenes in adaptation (\u00a73), maintenance of supergene polymorphism (\u00a74), and mutation accumulation in supergenes (\u00a75). We highlight methodologies that can move the field forward and focus on outstanding questions in supergene evolution.. 2Characterizing the genomic architecture of supergenes is the first step towards understanding their evolution. A major focal point is to comprehend the underlying genetic mechanisms of recombination suppression between supergene haplotypes. Certain genetic mechanisms can generate unbalanced gametes in supergene heterozygotes , which has the two-pronged effect of generating underdominance as well as reducing effective recombination. For example, improper segregation of chromosomes in translocation heterozygotes can lead to the creation of aneuploid gametes at a rate of 18\u201380% ,25. The et al. [Danaus chrysippus. They find three alleles that differ dramatically in sequence, each containing multiple duplicated regions and inversions. Comparing BC supergene structure across the Danaus phylogeny, the authors reconstruct this supergene's evolutionary history and infer that it initially arose when a large genomic region was repeatedly duplicated approximately 7.5 Myr, with several inversions arising subsequently within this region of duplicated segments. Recombination suppression across the supergene probably spread through the fixation of these inversions, in addition to the genetic divergence of duplicated segments followed by differential loss of some of these duplicated regions. This second process of divergence and loss of duplicates may be a common contributor to supergene evolution.Kim et al. use trioet al. [Papilio butterflies that controls female-limited mimicry [Papilio polytes, no inversion is present in H in Papilio memnon [et al. [P. memnon and P. polytes. Using knock-down and expression studies, the authors are able to directly link genes within the supergene to different aspects of the female wing colour patterns. The authors show that both systems exhibit strong linkage disequilibrium in the supergene region as well as accumulation of repetitive sequences, a hallmark of low recombination regions [P. 
polytes seems to have generated a new gene, U3X, that regulates the expression of other loci within the supergene. Overall, this suggests that selection for the inversion may be acting on the breakpoints (i.e. direct selection on the inversion itself), as opposed to other characteristics such as reduced recombination.Komata et al. also use mimicry ,29. Two o memnon ,30,31. T [et al. put toge regions \u201336. Surpet al. [et al. [et al. [Not all supergenes show suppressed recombination between haplotypes. Dufresnes et al. investiget al. . In factet al. . Amphibi [et al. review t [et al. . Dufresn [et al. highlighDrosophila pseudoobscura to a map of topologically associated domains (TADs). TADs, which represent a form of higher order chromatin interactions, are self-interacting regions of the genome hypothesized to regulate gene expression [In addition to affecting linkage between loci, supergenes may also go beyond DNA changes to alter the epigenome, i.e. chromatin and methylation patterns. The ensuing consequences of supergenes on epigenomic modifications have been little explored up to now. In their paper, Wright & Schaeffer explore pression \u201345. Wrigpression examine t-haplotypes in mice [Sp haplotype in the Alpine silver ant, Formica selysi [Segregation distorter in Drosophila melanogaster [et al. [Mimulus guttatus is a supergene, segregating in multiple Mimulus populations in the Pacific northwest. They show that in several populations, D is a large (10\u201312 Mb) non-recombining region that leads to female meiotic drive and contributes to pollen inviability when homozygous . D shows evidence of prior selective sweeps (possibly separate sweeps in two populations), but appears to be maintained within populations as a balanced polymorphism. Transcriptomes of individuals with and without the D allele indicate that many genes located within the supergene show reduction of expression in developing and reproductive tissues, suggesting many of the effects of D are cis-acting, similar to findings for other non-driving supergenes [et al. [SIM3), further contributing to our knowledge of selfish supergenes.Supergenes may also be selfish genetic elements, containing alleles that are specialized in over-representing themselves in the next generation through segregation distortion ,47. Ther in mice , the Sp a selysi and Segrnogaster . Finseth [et al. provide pergenes \u201353. Fins [et al. identify. 3et al. [et al. [Supergenes provide a mechanism that allows multiple favourable trait combinations to be maintained in the face of recombination. While the role of supergenes in adaptation has been well established ,18,19,54et al. use simu [et al. examine et al. [Schaal et al. address et al. [et al. [Stenl\u00f8kk et al. use empi [et al. .et al. [et al. [A second critical question regarding the role of supergenes in adaptation is whether supergenes repeatedly arise. Observations of the same supergene are common in isolated populations of the same species or in closely related species and could result from several different evolutionary scenarios: multiple independent origins of the supergene, the presence of the supergene in a common ancestor or adaptive introgression of supergenes. Kay et al. and West [et al. use bothet al. [Kay et al. examine et al. . Thus, bet al. [The mechanisms behind the role of supergenes in parallel adaptation with gene flow is reviewed and modelled by Westram et al. . The autet al. [Heliconius numata. 
Within the supergene, which is itself composed of three chromosomal inversions, the authors identify several independent, putatively adaptive loci that are associated with different aspects of wing patterning. The results of Jay et al. [et al. [A third challenge in studying how supergenes facilitate adaptation is identifying adaptive variants in large regions of linkage disequilibrium generated by recombination suppression. Jay et al. use a muy et al. are cons [et al. ).et al. [CRHR1, KANLS1 and MAPT) that might underlie these phenotypes. The second inversion studied, 8p23.1, is also associated with several related phenotypes and gene expression differences; however, the complex breakpoint structure, and the apparent lack of genetic divergence within the inverted region away from the breakpoints, renders understanding the effects of this inversion challenging. Studying the properties of supergenes in humans is fundamentally important, not only for gaining evolutionary insights, but also for an improved understanding of how supergenes such as inversions impact human health and disease.Campoy et al. review a. 4et al. [et al. [et al. [Many known supergenes are over 1 Myr old , begginget al. and Berd [et al. . Dagilis [et al. take a met al. [F. selysi. This supergene has two haplotypes, Sm and Sp, that control colony structure in this haplodiploid species. In monogyne colonies, all queens and workers are Sm/Sm and males are Sm. In polygyne colonies queens and workers are Sm/Sp or Sp/Sp. However, the males produced in these colonies are only Sp, indicating the transmission ratio distortion caused by the Sp haplotype via a maternal killing effect [et al. [Multiple forms of balancing selection are probably needed to protect polymorphism over long time scales, but we understand little about which combinations of selective pressures can achieve this. Tafreshi et al. examine g effect . Tafresh [et al. find thaet al. [et al. [One of the major challenges of supergene polymorphism maintenance is that the allelic content of the supergene haplotypes can shift over time. In particular, supergenes may accumulate deleterious mutations that lessen the selective edge of certain haplotype combinations . Using set al. examine et al. ). AOD deet al. ,72. Berd [et al. find thaet al. [et al. [Gasterosteus nipponicus) [Another form of balancing selection that may maintain a supergene polymorphism within a population is sexually antagonistic selection, in which an allele that increases the fitness of males decreases the fitness of females, or vice versa. Dagilis et al. investiget al. \u201375, but [et al. take advponicus) , which f. 5et al. [et al. [The reduction in effective recombination between supergene haplotypes can be viewed as a double-edged sword: it can facilitate adaptive processes when beneficial alleles are brought together, but might also speed up the accumulation of deleterious mutations . This iset al. use simu [et al. look foret al. use simulations in SLiM [Berdan in SLiM ,78 to ex in SLiM . The aut in SLiM ,80.et al. [Stenl\u00f8kk et al. directly. 6By bringing together diverse empirical and theoretical studies of supergenes, this theme issue provides new insights into supergene evolution. 
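A toy forward simulation helps convey the intuition behind the degeneration results discussed above. The sketch below is a minimal Wright-Fisher model, not the SLiM pipelines used by the contributors; the population size, mutation rate and selection coefficient are arbitrary assumptions. It illustrates the ratchet-like accumulation of deleterious mutations expected in a non-recombining supergene haplotype: once the least-loaded class of chromosomes is lost by drift, it cannot be recovered without recombination or back-mutation.

```python
import numpy as np

rng = np.random.default_rng(1)

def mullers_ratchet(N=1000, U=0.05, s=0.01, generations=2000):
    """Haploid Wright-Fisher population of non-recombining chromosomes.

    Each chromosome carries k deleterious mutations with multiplicative
    fitness (1 - s)**k; U is the deleterious mutation rate per chromosome
    per generation. Returns the minimum load through time.
    """
    k = np.zeros(N, dtype=int)                     # mutation counts per chromosome
    min_load = np.empty(generations, dtype=int)
    for t in range(generations):
        w = (1.0 - s) ** k                         # relative fitness
        p = w / w.sum()
        parents = rng.choice(N, size=N, p=p)       # selection + drift
        k = k[parents] + rng.poisson(U, size=N)    # new mutations, no recombination
        min_load[t] = k.min()
    return min_load

trajectory = mullers_ratchet()
clicks = np.flatnonzero(np.diff(trajectory) > 0) + 1
print("least-loaded class after 2000 generations:", trajectory[-1])
print("generations at which the ratchet clicked:", clicks[:10], "...")
```

Comparing runs with larger N, smaller U/s, or an added recombination step gives a feel for why the fate of a supergene arrangement depends so strongly on its effective population size and residual gene flux, the parameters at the centre of the simulation studies in this issue.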
While supergenes are a classical subject in evolutionary biology, investigating the evolution of supergenes has only really taken off within the twenty-first century ,18,19,54"} {"text": "That neOur efforts in publications \u20135 have b (i)a single, fixed material reference configuration from which to measure the deformation of a two-dimensional elastic body; (ii)a suitable class of competitors and related variations when formulating a solid mechanics problem within the calculus of variations. We sketch the basis for our line of argument here, relegating details and further clarifying discussion to subsequent \u00a73a\u20133c. In \u00a72, we briefly appraise the claims of van der Heijden & Starostin [et al. [The major issues with the strategy of Starostin & van der Heidjen , reprisetarostin concerni [et al. .fixed pair Equation (2) from the comment , which dct, from , their ewhere in . Then, ilined in is not bsame fixed rectangular reference configuration, say In formulating their variational problem, Starostin & van der Heijden thus ovently, in Starosti Heijden is an isIn their variational problem, Starostin & van der Heidjen envisionsible in is to bem 1) of will suf Heidjen and furt of willm (1) of . HoweverAn objective justification of the above comments is provided in \u00a73. In the next \u00a72, we briefly address the criticisms of our publications that van. 2planar development connected to their complete class of rectifying developable parameterizations.Chen & Fried do not eet al. [et al. [et al. [et al. [Chen et al. focus onet al. are alsoet al. are cite [et al. to the s [et al. elaborat [et al. . The obs [et al. have no [et al. .et al. [Chen et al. obtain aet al. [et al. [Chen et al. focus onet al. , a work [et al. demonstret al. [shape of the rectangle is fixed but not the position of its internal material points, relative to which the deformation of a placement (1) is to be measured.The comments contained in van der Heijden & Starostin do not pet al. \u20135. By inet al. do exposet al. and doestarostin introduc. 3Here, we present the basis upon which we have earlier discussed the work of Starostin & van der Heijden . In \u00a73a, (a)The fundamental problem of determining the equilibrium configuration of a free-standing M\u00f6bius band that is formed from the deformation of a flat unstretchable rectangular material strip by joining its two ends together after a Sadowsky ,8 and WuSadowsky ,10, and 1 with the aim of solving this problem, considered the variational problem of minimizing the Wunderlich [Most recently, Starostin & van der Heijden ,1 with tnderlich ,10 funct2 The isometric mapping Being developable, The Wunderlich ,10 functrization , then H=ius band , a midli (b)unstretchable M\u00f6bius band in the absence of tractions or couple tractions on its edge. Analogous to the various descriptors of Starostin & van der Heijden apply a fined in , and repgiven in with \u03b7 in to the m3isometric deformation of the fixed, undistorted material reference configuration deformation of the fixed, undistorted material reference configuration is not an isometric deformation. 
The consequence is that the set of rectifying developable M\u00f6bius bands as given in in and the n one material object, and that their intention was to study the material length-preserving deformation of a given undistorted rectangular material strip into a free-standing isometry relation between each M\u00f6bius band and a flat rectangle in fixed rectangular material reference configuration relative to which a deformation to each M\u00f6bius band is measured.Starostin & van der Heijden set up tduced in . A fixedduced in could befixed rectangular material reference configuration, Starostin & van der Heijden [By restricting the class of variational competitors to be the set of rectifying developable M\u00f6bius bands, and by neglecting to introduce a Heijden , amended Heijden , did notrated in , their vion R of , relativ used in to define aid of and (3.9ly, from and (3.9 through and (3.8Granted, Starostin & van der Heijden , amendednderlich ,10 functThe Wunderlich ,10 functed as in , then it Heijden but incl Heijden , and as Heijden recognizIn closing, we emphasize that any mechanics-based study of the isometric deformation a given fixed rectangular material reference strip into a rectifying developable surface is of distinctly limited applicability. For example, the only way to maintain the parameterization of a rectifying developable and to isometrically deform a flat rectangular shape onto the surface of a right circular cylinder is to wrap it so that two of its parallel edges are coincident with the generators of the cylinder\u2014but there are other ways, helical in form, that do not involve maintaining the parameterization of a rectifying developable, to achieve an isometric wrapping. Moreover, as Chen, Fosdick & Fried , \u00a74.3 shClick here for additional data file."} {"text": "Despite reports of an elevated risk of breast cancer associated with antipsychotic use in women, existing evidence remains inconclusive. We aimed to examine existing observational data in the literature and determine this hypothesised association.We searched Embase, PubMed and Web of Science\u2122 databases on 27 January 2022 for articles reporting relevant cohort or case-control studies published since inception, supplemented with hand searches of the reference lists of the included articles. Quality of studies was assessed using the Newcastle-Ottawa Scale. We generated the pooled odds ratio (OR) and pooled hazard ratio (HR) using a random-effects model to quantify the association. This study was registered with PROSPERO (CRD42022307913).N = 2 031 380) and seven for meta-analysis (N = 1 557 013). All included studies were rated as high-quality (seven to nine stars). Six studies reported a significant association of antipsychotic use with breast cancer, and a stronger association was reported when a greater extent of antipsychotic use, e.g. longer duration, was operationalised as the exposure. Pooled estimates of HRs extracted from cohort studies and ORs from case-control studies were 1.39 [95% confidence interval (CI) 1.11\u20131.73] and 1.37 (95% CI 0.90\u20132.09), suggesting a moderate association of antipsychotic use with breast cancer.Nine observational studies, including five cohort and four case-control studies, were eventually included for review (Antipsychotic use is moderately associated with breast cancer, possibly mediated by prolactin-elevating properties of certain medications. This risk should be weighed against the potential treatment effects for a balanced prescription decision. 
Studies were excluded if they were not published in English, had a study design that was neither cohort nor case-control, included participants who developed breast cancer prior to antipsychotic exposure or did not compare antipsychotic use to non-use, such as comparing between different classes of antipsychotics.All published cohort and case-control observational studies that investigated and quantified the association of antipsychotic use .The methodological quality of each included study was assessed using the Newcastle-Ottawa Scale (NOS). Like the data extraction procedure, the quality assessment was conducted independently by JCNL and DWYN. Study quality was indicated by numbers of stars, with nine representing the highest possible methodological rigour. See online Supplementary eTable 3 for details of the quality assessment procedures. Cohen's kappa was not calculated for the quality assessment decisions, as nine studies were included and there were only a few discrepancies, which were resolved through in-depth discussions.I2 statistic was used to examine the heterogeneity of the estimates across studies. Upon a sufficient number of included studies, the Egger's regression test was conducted to detect any publication bias in the pooled estimates. The pooled estimates and test for heterogeneity were implemented using Cochrane Collaboration Review Manager (Version 5.4.1).Upon satisfactory assessment result with regards to multivariable adjustment according to the NOS, meta-analyses of the estimates of the association, i.e. odds ratios (ORs) and hazard ratios (HRs), were conducted. Stratified by study design, i.e. cohort and case-control studies, the estimates of the association of antipsychotic use and breast cancer were pooled using a random effects model. The exposure was binarily operationalised as any antipsychotic use compared with non-use. In cases where this operationalisation was not possible, the longest-term exposure category, or the category representing the farthest extent of antipsychotic use, were used in comparison with non-use in the pooled estimates. The inverse variance weighting method was used to determine the relative importance between studies while the N\u00a0=\u00a0 2\u00a0031\u00a0380) were included for a qualitative synthesis and quality assessment. Cohen's kappa for title and abstract screening and full-text selection suggest moderate and substantial agreement respectively. Two studies were excluded from the meta-analysis provided adequate data for a pooled estimate of the hypothesised association. Study characteristics and results, as well as quality assessment scores are tabulated in As shown in et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., The included studies have been conducted in five countries/jurisdictions: three studies in the United States and substance misuse. Five of the nine studies had made such adjustments for 1\u20134 years of antipsychotic use and 1.74 (95% CI 1.38\u20132.21) for at least 5 years of antipsychotic use , whereas breast cancer risk amongst antipsychotic users of less than 6 years were reported to be non-significant. In contrast, the dose\u2013response relationship was not observed in the atypical antipsychotic subgroup of Chou et al., where an apparent association was observed with lower exposure instead of increased exposure. 
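For readers who wish to reproduce this kind of pooling, the snippet below sketches a DerSimonian-Laird random-effects meta-analysis of hazard ratios on the log scale. The numbers in example_hrs are placeholders, not the estimates extracted from the included studies, and this is not the RevMan implementation used in the review; it simply illustrates the inverse-variance weighting, tau-squared and I-squared calculations described above.

```python
import numpy as np

def pool_random_effects(hr, lo, hi, z=1.96):
    """DerSimonian-Laird random-effects pooling of ratio estimates (HR or OR).

    hr, lo, hi: point estimates and 95% CI bounds for each study.
    Returns the pooled estimate, its 95% CI, tau^2 and I^2 (%).
    """
    y = np.log(np.asarray(hr, dtype=float))                  # log effect sizes
    se = (np.log(hi) - np.log(lo)) / (2.0 * z)               # SE recovered from the 95% CI
    w = 1.0 / se**2                                          # fixed-effect (inverse-variance) weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)                          # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    i2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0
    w_re = 1.0 / (se**2 + tau2)                              # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    ci = np.exp([y_re - z * se_re, y_re + z * se_re])
    return np.exp(y_re), ci, tau2, i2

# Purely illustrative inputs (NOT the values reported by the included cohort studies).
example_hrs = [(1.20, 0.95, 1.52), (1.45, 1.10, 1.91), (1.30, 1.05, 1.61), (1.60, 1.15, 2.23)]
hr, lo, hi = zip(*example_hrs)
pooled, ci, tau2, i2 = pool_random_effects(hr, lo, hi)
print(f"pooled HR = {pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), tau^2 = {tau2:.3f}, I^2 = {i2:.0f}%")
```

The same routine applies unchanged to odds ratios from the case-control studies, since both are pooled on the log scale; only the interpretation of the back-transformed estimate differs.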
They reported HRs 2.49 (95% CI 1.69\u20133.66) and 1.05 (95% CI 0.58\u20131.87) for mean antipsychotic exposure of less than 28 and greater than 245\u00a0g/year, respectively.Despite a null association with the exposure defined as any antipsychotic use, long-term use was found to have a small association with breast cancer development in Potteg\u00e5rd et al. . An incret al., et al., et al., et al., et al. with breast cancer with a >30% increased risk observed, although the pooled OR did not reach statistical significance . As only three and four studies were included in the pooled estimates of the OR and HR, we did not conduct the Egger's regression test for publication bias.Using a random effects model, we pooled the HRs and ORs of breast cancer between antipsychotic users and non-users from four cohort studies , this action increases prolactin secretion (Besnard et al., et al., et al., et al., et al., With an increasingly prevalent use of antipsychotic medications worldwide, the risk of adverse events associated with it should be investigated in more breadth and depth to inform clinical practice. This study on the potentially elevated risk of breast cancer adds to the current knowledge of adverse events associated with antipsychotic use, such as stroke and myocardial infarction were investigated previously (Douglas and Smeeth, et al., The increased use of routine electronic health records in pharmacovigilance studies have contributed to the existing literature significantly, as shown in the included studies in this review. While providing a typically large sample size with realistic real-world clinical data, there are intrinsic limitations to these records. Specifically, the lack of lifestyle and other important factors might introduce bias to the estimated association. Primary data collection may provide much more detailed information but with a much-limited sample size. Therefore, both types of research are much warranted, and the evidence needs to be considered in the context of a variety of study designs with various strengths and weaknesses for a balanced overall assessment. With the benefits of record-linkage techniques with prescription registries, antipsychotic prescription practices such as antipsychotic polypharmacy in comparison with monotherapy can be addressed in future studies. One review suggested that aripiprazole use in combination with another antipsychotic was associated with better lipid profile outcomes than the use of other antipsychotic polypharmacy or monotherapy, although the quality of evidence was lacking (Ijaz et al. through the use of anticonvulsants and lithium as comparator drugs, which are also prescribed to patients with psychiatric disorders such as anxiety, depression and bipolar disorder, but with no known risk of hyperprolactinaemia (Ajmal et al., In spite of the important clinical implications, there are several limitations. First, the reviewed evidence is all generated from observational research without randomisation. There is likely unmeasured confounding effects and causal inferences need to be made with great caution. Specifically, the comparators selected for some included studies may not be entirely suitable and could be subject to potential selection bias. One example of mitigating this bias is demonstrated in Rahman et al., et al., There are also limitations specific to this review as well. First, although the meta-analysis generated consistent results across study designs, i.e. 
cohort and case-control, the association could not be appropriately pooled across designs to increase the precision of the estimate. Second, the number of studies is too small to provide a more precise estimate of the hypothesised association and the presence of publication bias could not be tested as a result. Third, significant heterogeneity was observed between studies even within the same design, probably due to different populations, research practice and availability of data, further studies with more accrued data should investigate factors that contribute to this heterogeneity. Recent studies reported higher basal epigenetic changes in African American women (Joshi In conclusion, we found a moderate association between the use of antipsychotics and breast cancer with a more evident association observed with prolactin-elevating medications and greater extent of antipsychotic exposure. This risk, together with other known associated adverse events, should be weighed against the anticipated treatment outcomes for a balanced clinical management decision."} {"text": "Polycystic ovary syndrome (PCOS) is the most common endocrine disorder in women of reproductive age with a reported prevalence ranging from 6% to 20% . In 2004PCOS has long been accepted as the major cause of anovulatory infertility and hirsutism, with an increased risk of developing metabolic abnormalities, type 2 diabetes mellitus (T2DM), obstetrical complications, mood disorders, cardiovascular and cerebrovascular events, venous thromboembolism, and endometrial and ovarian cancer . The conWe asked the authors who participated in this Research Topic to consider the following questions: How to dive deep into the pathophysiology of PCOS? What are the new insights into metabolic dysfunctions, such as basal/glucose\u00ad stimulated hyperinsulinemia and insulin resistance (IR), obesity, hyperandrogenism, and subfertility in PCOS? Among the repertoire of clinical management of PCOS, is there any exciting progress in pharmacological and lifestyle interventions? Is it possible to make a clinical diagnosis of PCOS through specific biomarkers? Can we effectively personalize assisted reproductive technology (ART) procedures for PCOS patients? How does PCOS influence maternal and offspring health? Why is it always associated with obstetrical complications including preeclampsia, very preterm birth (defined as <32 weeks of gestation), and gestational diabetes mellitus (GDM)? For the offspring of patients with PCOS, what symptoms are they inclined to display? What is the mechanistic role of the microbiome in PCOS? The result of this call is a relatively comprehensive collection of 33 articles regarding such aspects.Ding et\u00a0al. give a comprehensive review of the mutual role of IR and HA on PCOS development. Song et\u00a0al. provide an experimental study in a rat model which demonstrates that androgen excess could damage mitochondrial ultrastructure by depressing the expression of NDUFB8 and ATP5j and thereby influence the function of granulosa cells (GCs) in PCOS. Furthermore, the risk of epilepsy and antiseizure medications on PCOS through the HPO axis is reviewed by Li et\u00a0al.In reports of pioneering studies, the most consistent feature of PCOS is an elevated level of testosterone and/or androstenedione in serum . Ding etJiang et\u00a0al. reveal that a higher expression of ANGPTL4 in GCs might be associated with glucose and lipid metabolic disorders in PCOS. 
Differently expressed elements of store-operated Ca2+ entry (SOCE) are analyzed in the work by Song et\u00a0al. and are proved to contribute to the dysfunction of ovarian GCs and hormonal changes. Deng et\u00a0al. conduct a whole genome transcriptomic sequencing and find DLGAP5 as a candidate gene for PCOS.Liu et\u00a0al. show that IL-15 affects the inflammation state, steroidogenesis, and survival of GCs. Ding et\u00a0al. indicate that adipose tissue-derived extracellular vesicles-miR-26b promote GCs apoptosis. To supplement, Zhou et\u00a0al. review the seminal features and therapeutic potential of extracellular vesicles in PCOS. Finally, Gu et\u00a0al. have an extensive discussion on the relationship between the microbiome and sexual hormones, immune homeostasis as well as insulin resistance.Immune balance and immune microenvironment may play a significant role in the infertility of PCOS patients. He et\u00a0al. provide an update that increased apolipoprotein B/A1 ratio is associated with worse metabolic syndrome components, hyperandrogenemia, IR, and elevated liver enzymes. The new aspect of excessive visceral adipose tissue mass is analyzed in the contribution by Zhang et al., demonstrating it is this characteristic but not other fat compartments that can exacerbate the risk of hyperuricemia in PCOS. Yang et\u00a0al. describe the positive correlation between neck circumference and serum uric acid levels. The increased risk factor of PCOS is examined in the contribution of Chen et\u00a0al., who defined IL-17, SDF1a, SCGFb, and IL-4 as potential biomarkers for PCOS. With the rapid development of artificial intelligence, Lv et\u00a0al. propose an automated deep learning algorithm for exploring the potential of scleral changes in PCOS detection.Since the expense of the diagnostic evaluation accounted for only a minor part of the total costs, more liberal screening for PCOS appears to be a cost-effective strategy. Given the probable multifactorial cause of PCOS, a specific plan for early risk prediction of diagnosis is not yet possible. Hence, further studies of the early biomarkers of PCOS and early intervention of at-risk adolescents are sorely needed. Patients with PCOS are at risk of experiencing obstetrical complications including pre\u00adeclampsia, very preterm birth, and GDM . Their oMai et\u00a0al. demonstrate that PCOS exhibited higher CLBR and better ovarian reserve and response. When it comes to the independent variables for determining CLBR of aged patients with PCOS, this view is counter-argued by Guan et\u00a0al., with a retrospective cohort study presenting significantly decreased CLBR for females of advanced reproductive age up to 37. Compared to regular menstruation or oligomenorrhea, a higher overall incidence of adverse pregnancy outcomes in PCOS patients with amenorrhea is presented by Yu et\u00a0al. In addition, Du et\u00a0al. point out that preterm birth in PCOS is associated with a BMI\u226524 kg/m2 plus serum AMH>6.45 ng/ml. According to Jiang et\u00a0al., advanced age, obesity, total cholesterol, triglycerides, and insulin resistance (IR) are all independent risk factors for a lower chance of achieving a live birth.The issue of cumulative live birth rate (CLBR) in PCOS is dealt with by two articles. Zhang et\u00a0al. and Jiang et\u00a0al. respectively, focus on the health of PCOS offspring and broaden our understanding of PCOS offsprings\u2019 cardiometabolic status and autistic traits. Xie et\u00a0al. 
have generated letrozole-induced PCOS-IR rat models and treated them with metformin. In this article, they interpret that metformin might improve obesity, hyperinsulinemia, and IR in female offspring.Two articles, by Gu et\u00a0al. review the current evidence of the role of lifestyle modifications in PCOS, including diet modifications, exercise modifications, sleep modifications, mood modifications, and weight modifications. A meta-analysis by Shang et\u00a0al., involving 20 RCTs with 1113 participants, depicts that diet intervention significantly improves fertility outcomes, reproductive endocrine, and clinical hyperandrogenism. Shen et\u00a0al. submit another meta-analysis, which regards tea supplements as adjuvant therapy for improvement in body weight, fasting blood glucose, and insulin. Through remodeling gut microbiota, Wang et\u00a0al. show that a high-fiber diet could alleviate chronic metabolic inflammation, reproductive function, and brain-gut peptides secretion in their study.As the primary treatment of metabolic dysfunction in PCOS, lifestyle interventions prevent progression to T2DM and lower cardiovascular risk as well as improve ovulation in 40\u201350% of patients with PCOS , 9. Gu eYao et\u00a0al., which underscores the role of metabolome changes for BAT transplantation in improving reproductive and metabolic phenotypes in PCOS. Furthermore, Ye et\u00a0al. pinpoint that cold treatment could also improve ovulation and hormone disorders via activating endogenous BAT.Functional abnormalities of adipose tissue do exist in PCOS patients, mainly manifesting as IR and inflammation . As antiLin et\u00a0al. provide a novel perspective on the regulation of microbial taxa and SCFA content after SG, which might explain the mechanism of the amelioration of PCOS-related reproductive and metabolic disorders.Sleeve gastrectomy (SG) is a popular bariatric surgical procedure. However, its suitability and potential mechanisms in PCOS remain ambiguous. Chen et\u00a0al. compare the implantation rate, clinical pregnancy rate, and live birth rate between progestin-primed ovarian stimulation and the short protocol. The former shows significantly more positive results.Due to the poor quality of oocytes retrieved from patients with PCOS, IVF treatment is always accompanied by lower-quality embryos with a low implantation rate . Chen etAlthough a substantial fraction of information has been added to the existing pool of knowledge of PCOS in the past decades, much remains to be elucidated. We sincerely thank all contributors and reviewers for their support in putting this timely collection of articles together and hope that the readers will find useful answers to their questions.YiW drafted this editorial. 
PL, RL, YaW and HH revised and approved the final submitted version.This work is supported by the National Key Research and Development Program of China (2021YFC2700701), the National Natural Science Foundation of China (82088102), CAMS Innovation Fund for Medical Sciences (2019-I2M-5-064), Collaborative Innovation Program of Shanghai Municipal Health Commission (2020CXJQ01), Clinical Research Plan of SHDC (SHDC2020CR1008A) and Shanghai Frontiers Science Research Base of Reproduction and Development.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "This paper reviews the flow behavior and mathematical modeling of various metals and alloys at a wide range of temperatures and strain rates. Furthermore, it discusses the effects of strain rate and temperature on flow behavior. Johnson\u2013Cook is a strong phenomenological model that has been used extensively for predictions of the flow behaviors of metals and alloys. It has been implemented in finite element software packages to optimize strain, strain rate, and temperature as well as to simulate real behaviors in severe conditions. Thus, this work will discuss and critically review the well-proven Johnson\u2013Cook and modified Johnson\u2013Cook-based models. The latest model modifications, along with their strengths and limitations, are introduced and compared. The coupling effect between flow parameters is also presented and discussed. The various methods and techniques used for the determination of model constants are highlighted and discussed. Finally, future research directions for the mathematical modeling of flow behavior are provided. Hot deformation is one of the most well-known ways to enhance the mechanical properties of metals and alloys ,3,4,5,6 The constitutive models for the prediction of flow stress behavior at different strain rates and temperatures can be categorized as physical-based models ,34,35,36The high complexity of the non-linear behavior of flow stress at elevated temperatures and different strain rates for some alloys causes the JC model to fail to reach precise predictions from time to time ,62,63. TIn this article, the effect of strain rate and temperature on the flow stress behavior of metals and alloys is outlined. Subsequently, the constitutive modeling and accuracy of predictions at a wide range of temperatures and strain rates using original JC and over thirty modified JC-based models are critically reviewed and presented. Furthermore, the implemented methods and approaches that are used to determine the constants that constitute the models are explained. Finally, a summary based on the three terms of the original JC and modified JC-based models is presented. The future potential of this research area is also considered.A common flow stress curve under hot deformation is shown in \u22121 are shown in The flow behaviors of the B07 and GH51As a result of DRX, dislocation is formed due to the release of stored elastic energy in the hardening region, which results in homogeneous sub-grains. 
As the strain increases, the misorientations between sub-grains increase, and the sub-grains are turned into fine grains ,121,122.\u22121), fine grains due to full DRX are developed. It also shows the formation of a few column grains in addition to the equiaxed grains as the strain rate increases (0.1 s\u22121 and 5 s\u22121). The effect of strain rate on the hot deformation of other alloys has been reported with the same findings, such as magnesium [Shi et al. studied agnesium , CoCrFeMagnesium , nickel agnesium ,140, aluagnesium , iron [1agnesium , and titagnesium alloys.\u22121 and 0.1 s\u22121, respectively. As can be seen, the peak stress decreases as the temperature increases. Similar findings can be found in [found in ,147,148.found in ,150, in found in ,152.Song et al. studied In this section, the well-known JC and modified JC-based models are presented and carefully reviewed. In addition, the associated methods to determine the models\u2019 constants are also considered.Johnson and Cook introducTo determine the JC constants, reference values for both the strain rate and temperature must be chosen at the beginning. In both reference values, Equation (1) reduces toTaking the logarithm after performing some rearrangements, Equation (2) can be linearly expressed asThe value of constant After performing some rearrangements, Equation (4) can be linearly expressed asConstant than 0.1 , since tBy taking the logarithm after performing some rearrangements, at reference strain rate, Equation (1) can be written asConstant Li et al. determin2742 cf. a. Consta0.08 cf. b, while 5847 cf. c. The ab5847 cf. d.\u22121 and 875 \u00b0C. The FES was also helpful in checking the behavior of the superplastic-forming of the tested alloy . This modified JC model was applied to OFHC copper, 7075-T6 aluminum, wrought iron, and high-strength steel. The modified JC model that was introduced by Rule and Jones [Rule and Jones presentend Jones can be end Jones modifiedThe predicted stresses obtained using the modified JC model that was presented by Rule and Jones are compKang et al. modifiedodel see . At the \u22121 [Couque et al. presente.001 s\u22121 . Constanodel see . At the Constants e et al. for the A similar modification for the strain rate term is presented by Johnson et al. , in whicLin et al. introducAt the reference strain rate and reference temperature, Equation (14) reduces torain cf. a. At theAfter performing some rearrangements, Equation (16) can be written asStrain rate constant n\u03b5\u00b7* cf. b.To obtain the value of the two constants, By taking the logarithm of both sides, Equation (18) can be written asBy plotting he slope d.Experimental stresses are compared to predicted stresses that were obtained by the modified JC model that was introduced by Lin et al. for typiHou Q. Y. et al. modifiedodel see . At diffBy plotting A good agreement between experimental stresses and predicted stresses obtained by the modified JC that was presented by Hou Q. Y. et al. for Mg\u20131Perez et al. used theg et al. modifiedg et al. .Shin and Kim modified and Kim can be eIn another published article, Shin and Kim studied Maheshwari et al. modifiedodel see . Taking By plotting By plotting Compared with the original JC model, the modified JC that was introduced by Maheshwari et al. gave accThe predictability of the flow behavior using the modified JC that is presented by Maheshwari et al. , along wWang et al. modifiedBy plotting ined cf. a. Hence,lope cf. 
b.At different strain rate values, after performing some rearrangements, Equation (26) can be written asTaking the logarithm of both sides and introducing a new parameter, A comparison between experimental stresses and predicted stresses obtained using the modified JC model that was presented by Wang et al. for 30CrLin et al. modified\u22121\u00b7K\u22121. In this modification, and in order to overcome the difficulty of finding yield stress, the yield stress is replaced by beak stress and defined using an Arrhenius-type equation as per :(30)\u03c30=1ned; see for moreBy plotting At different values for both the temperature and strain rate, different values of p\u03b5\u00b7 see .Lin et al. fitted BA comparison between experimental stresses and predicted stresses obtained using the modified JC model that was introduced by Lin et al. for the Lin et al. presenteLin et al. determinA comparison between the experimental stresses and predicted stresses obtained using the modified JC model that was presented by Lin et al. for the Li et al. introducTaking logarithms for both sides after performing some rearrangements, Equation (35) can be written asThe slope of the equation is By taking logarithms for both sides, Equation (37) can be introduced asDifferent values for the slope, i et al. determinGood predictability for the prediction of the flow stress of T24 steel using the modified JC model that was presented by Li et al. is obtaiSong et al. modifiedTaking logarithms for both sides, after performing some rearrangements, Equation (40) can be introduced asBy plotting the left side vs. ined cf. .The modified JC model that was introduced by Song et al. providedWang et al. modifiedodel see . Constantion cf. a asC\u03b5\u00b7,Good agreement between experimental stresses and predicted stresses using the modified model for the flow behavior of Inconel 718 is thus obtained. However, this method does not provide accurate or precise predictions b.Another sine wave approximation of constant u et al. to prediTan et al. modifiedodel see . The cond \u03b5\u00b7 cf. in the oConstant n et al. as:(44)CCompared to the predictions of the original JC model cf. a, the moChen et al. modifiedMaterial constants n et al. to minimThe predicted stresses obtained using the modified JC model that was presented by Chen et al. were fouWang et al. modifiedMaterial constants nd Jones .The modified JC model that was introduced by Wang et al. providedShokry introducions see . Shokry stresses .3Fe3Cr2Ti2 alloy at high temperatures and different strain rates [A comparison between experimental stresses and predicted stresses obtained by the original JC and the modified JC that was presented by Shokry is shownin rates when comZhao et al. used the\u03c3=A+B1\u03b5+BGood agreement between experimental stresses and predicted stresses is obtained using the modified JC model that was introduced by Zhao et al. , with R:Iturbe et al. introducth T cf. a, and exrate cf. b.The modified JC model that was presented by Iturbe et al. providedTao et al. presenteo et al. can be eture cf. a. Thus, n\u00a0T* cf. b, and, fn\u00a0T* cf. c.Good agreement can be seen between the experimental stresses and predicted stresses for the prediction of the flow behavior of the Ti-6Al-4V alloy obtained using the modified JC model that was introduced by Tao et al. cf. Figa. The moHe et al. introducTo obtain constants lues cf. a. Finalld b3 cf. b. 
ConstaComparisons between the experimental stresses and predicted stresses for the 10%Cr steel alloy obtained using both the JC model and the modified JC model that was introduced by He et al. are showHou X. et al. modifiedAfter performing some rearrangements, Equation (54) can be written asTaking logarithms for both sides and plotting odel see .A comparison between the experimental stresses and predicted stresses for the Ti-6Al-4V alloy under hot deformation using both the original JC model cf. a and thePromoppatum et al. studied Zhang et al. modifiedBy taking logarithms for both sides, lues cf. a.A comparison between experimental stresses and predicted stresses obtained using the modified JC model that was introduced by Zhang et al. for the Niu et al. introducAt the reference strain rate and reference temperature, Equation (58) lowers toBy plotting Consequently, constants Accordingly, constants Very accurate predictions of the flow stress for the tested alloy are obtained when comparing predicted stresses obtained using the modified JC model that was introduced by Niu et al. , with exChakrabarty et al. presenteodel see . At the Chakrabarty et al. determinthod cf. a. The mothod cf. b.Li et al. modifiedBy plotting TLAB cf. a. At theThe modified JC model that was introduced by Li et al. succeedeQian et al. presenten et al. implemeny et al. in theirConstants odel see . At the Constants Constant The modified JC model that was introduced by Qian et al. improvedLiu et al. modifiedAt the reference strain rate and reference temperature, Equation (73) lowers toBy plotting The modified JC model that was presented by Liu et al. showed gYu et al. proposedBy introducing a new parameter, ting cf. a.Compared with the original JC model, the modified JC model that was presented by Yu et al. predictiWang et al. modifiedg et al. presenteComparisons between the experimental stresses and predicted stresses obtained by the modified JC model that was introduced by Wang et al. for the Shokry et al. introducThe four constants of After expansion, Equation (81) extends to four terms with four constants that are constituted with strain and are determined by utilizing regression analysis. At the reference temperature, after performing some rearrangements, Equation (80) is simplified toNine constants are constituted with the strain and strain rate and are determined utilizing regression analysis. At different strain rate values, after performing some rearrangements, Equation (80) is expressed asThe right term provides 27 constants of Precise predictions of the flow behavior of nickel-based (U720LI) and aluminum-based (AA7020) alloys using the improved generic modification of the original JC that was presented by Shokry et al. are achiPriest et al. introducodel see . At the Using linear regression, at different temperature and strain rate values, different values for the slope of Equation (85), d cc cf. a. At theAt different temperatures, different values for the slope of Equation (86), d cm cf. b.A comparison between the experimental stresses and predicted stresses for the flow behavior of C45 steel alloy under hot deformations obtained using both the original JC model and the modified JC model that was introduced by Priest et al. is shownThe flow behavior of metals and alloys is highly affected by hot working conditions, strain, strain rate, and temperature. As temperature decreases and strain rate increases, strain hardening increases. 
This is due to the emergence of crystal defects, mainly dislocations, strain-induced stages, or twin boundaries through plastic deformation ,191,192.Modeling the flow behavior of metals and alloys at a wide range of temperatures and strain rates is essential to optimize hot working conditions as well as predict the mechanical behavior of metals and alloys with applications under severe conditions such as dynamic loadings and elevated temperatures. The Johnson\u2013Cook model is one oThe JC model starts by predicting flow stress using the strain-hardening term, followed by multiplying strain rate and softening terms. The JC model employs the Ludwik equation to repres nickel , iron [3s nickel , aluminus nickel ,52, and s nickel alloys. s nickel , replaces nickel equationg et al. for 30Cred as in ,81 for Ted as in ,85,96 foThe strain rate term has received a lot of modifications in the original JC model, which can be implemented when studying hot deformation as well as dynamic loadings. Rule and Jones modifiedg et al. , in whic4V tubes and ball4V tubes . Anothertionship ,84 for ttionship to preditionship ,93,94,96tionship obtainedmodel in ,85,88 foThe thermal-softening term in the original JC model has also received a lot of modifications. The early modified softening term was introduced by Meyers et al. , in whicn et al. introducoV steel , HastelloV steel , 40CrNi oV steel , and 229oV steel . It is aoV steel , with 30oV steel 10%Cr stlized in . Anotherlized in to overcz et al. so that n et al. replacedloy 800H and CuCrloy 800H , providio et al. for the g et al. for the duced in , is replduced in for the duced in for the In this review article, the flow behavior of metals and alloys at a wide range of strain rates and different temperatures is studied. Thus, the constitutive model of the original JC model, as well as more than thirty modified JC-based models, are critically reviewed and commented on. In addition, the methods and techniques that are used to determine model constants are presented and explained. Finally, a summary of modifications based on the three terms of the original JC model is presented.The Johnson\u2013Cook model has been widely used to predict the flow behavior of metals and alloys at a wide range of temperatures and strain rates. In this regard, modified JC-based models were introduced for accurate predictions of the flow behavior for metals and alloys with complex non-linear behavior in which the JC model fails to precisely predict the flow behavior.Lin et al. providedThe improved generic modification for the JC model that was introduced by Shokry et al. can be cComparing predicted stresses using the original JC model and the modified JC-based models for the same types of alloys might be a future direction to precisely assess and evaluate the predictability of the JC model and modified JC-based models.Another future direction is considering the inverse analysis methods and the techniques that are based on non-linear least squares methods to minimize the mean square error between experimental and predicted values for the accurate determination of model constants.Coupling the original JC model and the modified JC-based models with other models such as the Zerilli\u2013Armstrong and Arrhenius models might be another future direction.The combination of multiple factors and their interactions, which may affect the flow stress response, is a much more efficient research methodology for empirical modeling. 
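For reference, the three-term structure just summarised (a Ludwik-type strain-hardening term multiplied by a logarithmic strain-rate term and a power-law thermal-softening term) can be written out explicitly. The sketch below evaluates the original JC form; the constants are illustrative placeholders rather than calibrated values for any alloy reviewed here.

```python
import numpy as np

def johnson_cook(strain, strain_rate, temperature,
                 A=350.0, B=275.0, n=0.36, C=0.022, m=1.0,
                 eps_dot_ref=1.0, T_ref=293.0, T_melt=1793.0):
    """Original Johnson-Cook flow stress (MPa):
    sigma = (A + B*eps**n) * (1 + C*ln(eps_dot/eps_dot_ref)) * (1 - T_star**m),
    with homologous temperature T_star = (T - T_ref) / (T_melt - T_ref).
    All constants are illustrative placeholders."""
    hardening = A + B * strain**n                           # Ludwik-type term
    rate = 1.0 + C * np.log(strain_rate / eps_dot_ref)      # strain-rate term
    t_star = np.clip((temperature - T_ref) / (T_melt - T_ref), 0.0, 1.0)
    softening = 1.0 - t_star**m                             # thermal-softening term
    return hardening * rate * softening

# Example: flow stress at 20 % strain, 10 1/s and 873 K with the placeholder constants.
print(f"{johnson_cook(strain=0.2, strain_rate=10.0, temperature=873.0):.1f} MPa")
```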
The Johnson\u2013Cook model is a strong phenomenological model that has been extensively used for predictions of the flow behavior of metals and alloys. It has been implemented in finite element software packages, which enhances its importance in performing simulation analysis of hot working processes; optimizing strain, strain rate, and temperature; and simulating real applications in severe conditions. The findings of this review can be summarized as follows:"} {"text": "Plant roots perform multiple essential functions defining plant ecological success and ecosystem functioning. For instance, roots are vital for plant nutrient and water uptake, thus regulating net primary production and nutrient cycling . In the Yang et\u00a0al. put forward that plant characteristics such as plant transpiration and root length were the main determinants of HR magnitude whereas soil factors such as water table depth or soil texture were also important yet indirect drivers.Water relations are key to understanding the ecology of terrestrial plant communities, and one determining component of water balance is the process of hydraulic redistribution HR; . The terAn et\u00a0al. and de la Riva et\u00a0al. demonstrate that there is a main trend of variation in the multidimensional root space in line with expectations of the root economics spectrum in over 300 species in China and Spain, whereas Jiang et\u00a0al. found weak or no correlation between fine-root traits in 48 species from a single semiarid ecosystem. Further, Wang et\u00a0al. provide insights into the existence of a trade-off between the number and the size of fine roots of temperate tree species. Collectively these results suggest belowground trait covariation and trade-offs are strongly driven by environmental gradients . The majority of research on phenotypic variation to date has focused aboveground , March-Salas et\u00a0al. proved that precipitation predictability promotes intra- and trans-generational plasticity in root traits, observing differential root trait responses between ancestors and descendants. Moreover, Xu et\u00a0al. found that, for lianas and vines in tropical ecosystems, phenotypic variation in root diameter in root tips is strongly linked to changes in cortex thickness and cortex cell size rather than on stele diameter variation. These studies widen the characterisation of trait phenotypic variability and decipher the complex and context dependent interactions between root traits and the environment.Phenotypic variation is an important driver of plant performance under different environmental conditions , and longer root hair length in Norway spruce; and contrasting distributions with distance from rhizoplane suggesting differential contributions of root vs. microbial enzymatic activity. In a field study, Borden et\u00a0al. found specific root respiration covaried with morphological and chemical root traits, and while microbial abundance in the rhizosphere coordinated with root trait variation, this was not the case in bulk soil.Root-soil interactions occur at multiple spatial and temporal scales and are driven by complex processes occurring between roots, microbes, and the soil environment. Discerning relationships between root traits and root-soil interactions can improve our understanding of plant responses to, and their effects on, the environment . Song etWang et\u00a0al. 
stated in this special issue: \u201cthe importance of root traits in ecosystem-level functioning, is increasingly recognized but still not well-understood\u201d. This special issue showcases studies observing the role of root traits as key drivers of ecosystem services, such as phytoremediation, crop productivity and carbon cycle. Fratte et\u00a0al. demonstrate that mulching application increases below-ground biomass, which may favour the proliferation of microbes devoted to soil organic contaminants\u2019 degradation. In another study with cropped lentil plants in arid nutrient-poor environments, El-hady et\u00a0al. showed that the application of root activator and phosphorus enhanced plant growth and productivity by invigorating root traits. While Borden et\u00a0al. identify connections between root trait variation with carbon dioxide emissions from soil. These studies improve our understanding of the direct benefits of plant root systems in delivery of ecosystem services via root trait-function relationships.Theoretical and empirical evidence predicts that root traits are directly linked with soil structure, nutrient cycling, production and, consequently, ecosystem functioning . Thus, rThis special issue brings together research from multiple fields of root ecology that are unified in their trait based approach. Taken together, this special issue gives a complex but realistic picture of the multidimensional and dynamic roles roots play belowground, defining plant resource uptake strategies and performance, and driving ecosystem functioning and services. However, the ability to scale up fine root trait variation to community and ecosystem functioning level requires more critical investigations that make empirical connections between anatomical, morphological, and physiological characteristics of plant roots with ecosystem scale processes.All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication."} {"text": "Two types of sonoelastography (SE) are commonly explored: strain and shear wave. Sonoelastography can be used in multiple medical subspecialties to assess pathological tissular changes by obtaining mechanical properties, shear wave speed, and strain ratio data. Although there are various radiological imaging methods, such as MRI or CT scan, to assess musculoskeletal structures , SE is more accessible since this approach is of low cost and does not involve radiation. As of 2018, SE has garnered promising data in multiple studies. Preliminary clinico-radiological correlations have been established to bridge tissue biomechanical findings with their respective clinical pathologies. Specifically, concerning the shoulder complex, recent findings have described mechanical tissue changes in shoulder capsulitis. The long head of the biceps and supraspinatus SE were among the recently studied structures with conditions regarding impingement, tendinosis, and tears. Since ultrasonography has established itself as an important tool in shoulder evaluation, it completes the history and physical examination skills of the clinicians. This study will provide an update on the most recent findings on SE of shoulder structures.Sonoelastography is a relatively new non-invasive imaging tool to assess the Sonoelastography is a relatively new and non-invasive ultrasound (US) technique that provides information about mechanical properties of tissues, such as stiffness, based on the palpation method . 
There aA thorough search of the most recent literature was conducted. A systematic search of MEDLINE (PubMed), EMBASE (Ovid), and Web of Science was performed to identify relevant publications from January 2018 to May 2021. The following text words, Medical Subject Headings (MeSH) terms, and Boolean operators were used: \u201celastography or sonoelastography or elastosonography\u201d and \u201cshoulder,\u201d \u201cshoulder joint,\u201d \u201cshoulder pain\u201d \u201cbiceps,\u201d \u201ccapsulitis,\u201d \u201clabrum,\u201d \u201cdeltoid,\u201d \u201cinfraspinatus,\u201d \u201csupraspinatus,\u201d \u201crotator cuff,\u201d \u201cteres major or minor,\u201d \u201cacromioclavicular joint,\u201d \u201cscapula,\u201d \u201ccoracohumeral,\u201d \u201ccoracoacromial,\u201d \u201ccoracoclavicular,\u201d \u201cquadrangular space,\u201d \u201ctriangular space or interval,\u201d and \u201cspinoglenoid.\u201d We excluded all articles not related to shoulder region or elastography or SE. The search was conducted with English and French language restriction. A further triage was performed using the COVIDENCE sorting platform to retain the final articles reviewed in this article.In recent years, several articles were published for sonoelastographic evaluation of different shoulder pathologies. A systematic review by Chiu et al. analyzed 11 studies covering the use of SE in pathologies such as adhesive capsulitis, rotator cuff tendinopathy, and tear (targeting mainly the supraspinatus tendon as well as the infraspinatus tendon and deltoid muscle) . They foDemirel et al. explored the diagnostic value of supraspinatus muscle SE in supraspinatus impingement syndrome in a cross-sectional study . MeasureMackintosh et al. used 2D shear wave SE to predict fatty infiltration of the supraspinatus muscle as a prognostic factor in determining supraspinatus tendon repair failure . They evBrage et al. explored the discriminative validity of ultrasound SE for patients with painful supraspinatus tendinopathy . They coBrage et al. assessed the intra-rater and inter-rater reliability of SE in the supraspinatus tendon evaluation . They inItoigawa et al. investigated changes in the supraspinatus muscle and tendon stiffness after arthroscopic rotator cuff (RC) repair . They coLin et al. aimed to determine whether SWE can detect biomechanical changes in the supraspinatus muscle concerning supraspinatus tendon abnormality before US grayscale changes . They evVasishta et al. assessed the relationship between tendon stiffness on SE and supraspinatus tendinopathy grading on MRI in 25 patients . They reYoo et al. investigated the value of SWE for the estimation of the supraspinatus tendon tear chronicity . They evFontenelle et al. compared the mechanical properties of the supraspinatus tendon in two different age groups using SWE . They evHackett et al. evaluated the reliability of SWE to evaluate the stiffness of normal and tendinopathic supraspinatus tendons . They asZhou et al. explored the value of SWE in the treatment efficacy and prognostic evaluation of supraspinatus tendinopathy . SupraspNocera et al. determined the healing response after RC repairs using a multimodality imaging approach with MRI, power Doppler and SWE . They evItoigawa et al. determined the feasibility of SWE to evaluate the RC muscle stiffness before arthroscopic RC repair with the intention to explore the surgical procedure's difficulty and compare SWE with the Goutallier stage on MRI . They inKim et al. 
evaluated the activity of the deltoid, supraspinatus, and infraspinatus muscles using SWE compared with the isokinetic dynamometry and surface electromyography methods on 12 volunteers . They foYun et al. compared the elasticity of the supraspinatus and infraspinatus tendon in idiopathic adhesive capsulitis patients with a control group to evaluate the tendon elasticity relationship . They obSahan et al. investigated SE and SWE characteristics of the long head of the biceps tendon (LHBT) tendinosis compared with MRI findings . They evWada et al. used SWE to evaluate the stiffness of the capsule, RC tendons and muscles, the coracohumeral ligament (CHL), and LHBT in patients with frozen shoulder . They coHsu et al. used SE to investigate whether corticosteroid injections influenced the elasticity of tendons . ConsidePossible barriers to the SE technique include operator dependency and limitations such as artifacts and reliability. During SE evaluation, it is important to mention the age, gender, muscle segment, shoulder position, and tension applied to the tendon or muscle of subject. In both SE and SWE methods, the biggest shortcoming seems to be choosing the region of interest (ROI) to be calculated with US machines by the Young modulus. For shear wave SE evaluation, there is controversy in previously published articles with regard to terminology. The shear wave SE does not directly measure stiffness, which is the resistance of a material to elastic deformity . It measConcerning recently published shoulder SE articles, shoulder capsulitis showed increased stiffness in supraspinatus and infraspinatus tendons and CHL on SWE evaluation . For fatAB-Gh, AB-Gr, and C-EM wrote the manuscript. DL, AB-G, and SS amended and finalized the final proof of the manuscript. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Background. Human Bocavirus (HBoV), which is an ssDNA virus of the family Parvoviridae, is responsible for 21.5\u200a% of childhood respiratory tract infections (RTIs) annually. Among the four genotypes currently known, HBoV-1 has been associated with acute RTI. Although there have been studies on HBoV in some countries, there is limited information on this virus in sub-Saharan Africa where there is the highest burden of RTI. This study aimed to characterize the circulating strains of HBoV in Ibadan, Nigeria.Methods. Nasopharyngeal and oropharyngeal swab samples were collected from 333 children \u22645\u2009years old presenting with RTI attending hospitals in Ibadan, whose parents assented, from 2014 to 2015. Twenty-three HBoV isolates were sequenced after a nested PCR and phylogenetic analysis was carried out using mega 6 software. A total of 27 children tested positive for the HBoV-1 genotype by PCR and 23 of the 27 isolates were successfully sequenced. The 23 HBoV-1 isolates from this study have been assigned GenBank accession numbers KY701984\u2013KY702006. Phylogram analysis indicated that the isolates belong to the same clades. Six isolates aligned closely to the reference strains ST1 and ST2, while 17 isolates showed a high level of divergence to the reference isolates. This study highlights the contribution of HBoV to RTIs in Nigeria and that HBoV-1 strains are associated with the infection. 
Similar sequences to the HBoV sequences in GenBank were checked using the blast program in the National Center for Biotechnology Information (NCBI) database. WHO HBoV reference sequences were downloaded from NCBI and a multiple alignment with the sequences generated was carried out using mega 6 software. Phylogenetic analysis was performed by neighbour-joining using the cluster w package in mega 6 and maximum-likelihood trees were generated after correcting for multiple substitutions, complete removal of positions that contained gaps and estimating reliability based on 1000 bootstraps. Multiple alignment and phylogenetic analysis of the sample sequences with previous sequences from Nigeria and worldwide was also done.All samples that were positive for HBoV (showing the expected 575 bp) at the end of the second round of the nested PCR were purified and sequenced. The query DNA sequence generated from the 23 HBoV isolates was extracted in Fasta format and manually edited using mega 6 software. The alignment of amino acid sequences of the 23 HBoV isolates was compared with those of the WHO reference strains, Nigerian and other similar strains from around the world. Maximum-likelihood trees were generated after correcting for multiple substitutions, complete removal of positions that contained gaps and estimated reliability based on 1000 bootstraps.The nucleic acid sequences generated from the samples was translated into amino acid sequences from the ORFs within each sequence using Data entry, cleaning and data analysis were performed using SPSS statistical software, and descriptive statistics were presented using tables, graphs and charts.A total of 333 children aged 5\u2009years and below presenting with cough, wheeze, breathlessness, fever, nasal congestion, catarrh, vomiting, tonsillitis, otitis media, rash, diarrhoea, lack of appetite, bronchiolitis and bronchopneumonia were recruited from September 2014 to August 2015 from hospitals and health centres in Ibadan, Oyo State, Nigeria. In total, 170 of the children were male and 155\u2009females. Twenty-seven (8.1\u200a%) of the 333 children tested were positive for HBoV infection, 16 (59.3\u200a%) of whom were female. The age group \u02c31\u20132\u2009years showed the highest prevalence. Twenty-three of the total HBoV-positive isolates were sequenced and characterized . All themega6 [The phylogenetic tree reconstructed with the sequenced isolates and WHO reference strains using mega6 showed tet al. [et al. [et al [et al. [et al. [et al. [et al. [et al. [et al. [et al. (2011) [et al. (2016) [et al. [In this study, HBoV was detected in 8.1\u200a% of the 333 children recruited. The detection rate observed in this study is higher than the 6.8\u200a% reported by Moreno et al. in Argen [et al. in Vietn. [et al in Seneg. [et al among Ke [et al. and in S [et al. , but dif [et al. among Ke [et al. among ch [et al. , whose s [et al. , Korner . 2011) , Halise 11 , Hali [et al. . This suet al. (2014) [et al. [et al., [Of the 27 HBoV isolates detected in this study, 23 were successfully sequenced. The phylogram generated from the nucleotide sequence of the 23 HBoV isolates in this study and that of HBoV reference strains showed all the isolates clustering with the reference strain of HBoV-1 . This is. (2014) , Abdel-M [et al. and Prinet al. (2017) [et al. (2013) [et al. [et al. [et al. 
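The tree-building step described above (a distance-based neighbour-joining analysis of the aligned isolate and reference sequences, performed here in MEGA 6) can also be scripted. The sketch below shows an equivalent workflow in Biopython; the input file name is hypothetical and the snippet is given for illustration only, not as the procedure actually used in the study.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# "hbov_isolates_aligned.fasta" is a hypothetical file holding the aligned
# study isolates together with the reference strains.
alignment = AlignIO.read("hbov_isolates_aligned.fasta", "fasta")

# Pairwise identity-based distance matrix and neighbour-joining tree.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)
nj_tree = DistanceTreeConstructor().nj(distance_matrix)

Phylo.draw_ascii(nj_tree)                           # quick text rendering
Phylo.write(nj_tree, "hbov_nj_tree.nwk", "newick")  # save for later inspection
```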
[Six of the 23 isolates in this study showed a very high level of similarity 40\u201355\u200a%) with ST1 and ST2, which are the WHO reference strains for the HBoV-1 genotype [5\u200a% with . (2017) and Vasi. (2013) , whose s [et al. and Abde [et al. , which set al. [et al. [et al. [et al. [et al. [et al. (2010) [There was low level of diversity in the nucleotide and amino acid differences within the HBoV-1 isolates of this study . This shows the high level of genetic homogeneity exhibited by the HBoV1 isolates of this study, and supports the findings of Allander et al. , Arthur [et al. , Cheng e [et al. and Ghie [et al. . When co [et al. and Salm. (2010) , which s. (2010) .Finally, the 8.1\u200a% prevalence of HBoV-1 that was found in this study showed that this virus strain is involved in childhood respiratory tract illnesses in Ibadan and is therefore of public health importance in Nigeria. Although the greatest number of HBoV-1 isolates (11) was obtained in January , the shoRTIs are implicated in a large number of childhood deaths globally, and HBoV-1 has been shown to be involved in the respiratory illness of children \u22645\u2009years old. The findings of this study show that HBoV-1 is endemic in Ibadan and might be actively circulating among children in Nigeria. Also, there is a likelihood that the movement of people fromone country to another plays an active role in the transmission and evolution of the virus. This study therefore shows the need for continuous surveillance of HBoV and laboratory investigation of the role it plays in the disease process. It is also important to promote good hygienic practices in Nigeria."} {"text": "Correction: BMC Public Health 21, 664 (2021)https://doi.org/10.1186/s12889-022-12954-yThe original publication of this article containeIncorrect[31], but not to other team members. One stakeholder was known to [removed for review] prior to recruitment. The remaining interviews were conducted by Maltzahn & Cox et al. [1].As they were community members, some of the bingo playing participants were known to Maltzahn & Thompson et al. CorrectMaltzahn prior to recruitment. The remaining interviews were conducted by Maltzahn & Cox et al.As they were community members, some of the bingo playing participants were known to Maltzahn & Thompson et al., but not to other team members. One stakeholder was known to"} {"text": "Recent observations indicate that the Universe is not transparent but partially opaque due to absorption of light by ambient cosmic dust. This implies that the Friedmann equations valid for the transparent universe must be modified for the opaque universe. This paper studies a scenario in which\u00a0the opacity rises with redshift. In this case, the light\u2013matter interactions become important, because cosmic opacity produces radiation pressure that counterbalances gravitational forces. The presented theoretical model assumes the Universe\u00a0is expanding according to the standard FLRW metric but with the scale factor Dust grains absorb and scatter the starlight and reemit the absorbed energy at infrared, far-infrared and microwave wavelengths \u20136. Sinceet al. correlatAlternatively, the cosmic opacity can be estimated from the hydrogen column densities of Lyman ic space \u201323. Sinc\u22122\u2009mag\u22121 ,25, we gr galaxy . From ob\u2009h\u2009Mpc\u22121 , the chaThe cosmic opacity is very low in the local Universe ,17, but of dust ,20. It h for z<7 \u201333, see et al. 
[Another independent indication of dust at high redshifts is a weak or no evolution of metallicity with redshift. For example, observations of the e at z>6 \u201337. In az>6 [z>7 ,39 and d7 [z=5\u20137 . Zavala [et al. measuredtic wind ,42 and rtic wind , the coset al. [Since dust is traced mostly by reddening of galaxies and quasars at high redshifts, it is difficult to distinguish which portion of reddening is caused by dust present in a galaxy and by cosmic dust along the line of sight. Xie et al. ,44 studithe past ,29, see The fact that the Universe is not transparent but partially opaque might have fundamental cosmological consequences, because the commonly accepted cosmological model was developed for the transparent universe. Neglecting cosmic opacity produced by intergalactic dust may lead to distorting the observed evolution of the luminosity density and the global stellar mass density with redshift . For exatenuated . Figure Non-zero cosmic opacity may partly or fully invalidate the interpretation of the Type Ia supernova (SNe Ia) dimming as a result of dark energy and the accelerating expansion of the Universe ,42,61,62nd (CMB) \u201366. For If cosmic opacity and light\u2013matter interactions are considered, the Friedmann equations in the current form are inadequate and must be modified. The radiation pressure, which is caused by absorption of photons by dust grains and acts against gravitational forces, must be incorporated. In this paper, I demonstrate that the radiation pressure due to light absorption is negligible at the present epoch, but it could be significantly stronger in the past epochs. Surprisingly, its rise with redshift could be so steep that it could even balance the gravitational forces at high redshifts and cause the expansion of the Universe. Based on numerical modelling and observations of basic cosmological parameters, I show that the modified Friedmann equations avoid the initial singularity and lead to a cyclic model of the Universe with expansion/contraction epochs within a limited range of scale factors. I estimate the maximum redshift of the Universe achieved in the past and the maximum scale factor of the Universe in\u00a0the future.. 2 (a)The standard Friedmann equations for the pressureless fluid read ,682.1(a (b)The basic drawback of the Let us consider light emitted by a point source with mass (c)The generalized Poisson equation implies quations and (2.2The light\u2013matter interaction will be characterized by density (d)Assuming that equation , the Hub (e)The radiation\u2013absorption term equation is redsh applied ,29, see The luminosity density comprises energy radiated by galaxies into the intergalactic space and thermal radiation of intergalactic dust. All these sources produce cosmic background radiation in the Universe being the sum of the cosmic X-ray background (CXB), the EBL and the cosmic microwave background (CMB). The cosmic background radiation as any radiation in the expanded universe depends on redshift asequation depends smaller ,69, see quations and (2.3quations ranges fSince the coefficient equation depend o (f)In order to get simple closed-form formulae, we assume in the next that the mean spectral index equation , which yequation describequations and (2.2quations and equaquations \u20132.29), , \u03b3 chara. 
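The radiation-absorption equations of Section 2 are not legible in this text, but the elementary bookkeeping that links a line-of-sight opacity to the extra dimming of a distant source can be sketched numerically. In the sketch below the background cosmology, the local attenuation coefficient and its (1+z)^3 scaling (which would correspond to a constant comoving dust density) are illustrative assumptions, not values adopted by the paper.

```python
import numpy as np
from scipy.integrate import quad

H0 = 70.0            # Hubble constant, km/s/Mpc (illustrative)
c = 299792.458       # speed of light, km/s
Om, Ol = 0.3, 0.7    # flat background used only to convert z into path length

def E(z):
    return np.sqrt(Om * (1.0 + z)**3 + Ol)

def optical_depth(z_source, kappa0=1.0e-5, evolution=3.0):
    """Line-of-sight optical depth to a source at z_source for an assumed
    proper attenuation coefficient kappa(z) = kappa0 * (1+z)**evolution (1/Mpc),
    integrated over the proper path element dl = c dz / ((1+z) H(z))."""
    integrand = lambda z: kappa0 * (1.0 + z)**evolution * c / ((1.0 + z) * E(z) * H0)
    tau, _ = quad(integrand, 0.0, z_source)
    return tau

for z in (0.5, 1.0, 3.0):
    tau = optical_depth(z)
    print(f"z = {z}: tau = {tau:.3f}, extra dimming = {1.086 * tau:.3f} mag")
```

The factor 1.086 is simply 2.5 log10(e), which converts an optical depth into magnitudes of dimming.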
3To calculate the expansion history and cosmic dynamics of the Universe, we need observations of the mass opacity of intergalactic dust grains, the galaxy luminosity density, the mean mass density, and the expansion rate and curvature of the Universe at the present time. (a)When estimating the mass opacity of dust, th q=3.5 ,70, but d\u22480.1\u2009\u03bcm . The graectively ,73. The ectively ,74. Cons or less \u201377. Cons or less predict (b)et al. [et al. [et al. [The EBL covers a wide range of wavelengths from 0.1 to r & Dwek , Lagache [et al. ,\u00a0and Coo [et al. . The dir [et al. ,82 and b [et al. \u201386. The [et al. . Despitem\u22122\u2009sr\u22121 ,87\u201390.The galaxy luminosity density is determined from the Schechter function . It has DSS data and et al. [et al. [The simplest and most straightforward method to estimate the matter density is based on galaxy surveys and computation of the mass from the observed galaxy luminosity and from the mass-to-light ratio et al. [et al. [et al. [et al. [The Hubble constant h effect \u2013108 or gh effect ,110, grah effect \u2013113 or a [et al. , and the Jackson ). These detected ,116. Thes et al. using ths et al. . Anothern et al. using thet al. [Assuming the Universe . This me [et al. is based [et al. ,122 or t [et al. . The cos [et al. and usin [et al. . The aut. 4et al. [Estimating the required cosmological parameters from observations, the upper and lower limits of the volume of the Universe and the evolution of the Hubble parameter with time can be calculated using equations \u20132.29). . 2.29). n et al. . The masequation is multiequation ranges fAs seen in a is controlled by b is controlled by The history of the Hubble parameter equation is shownDM model , which iThe distance-redshift relation of the proposed cyclic model of the Universe is quite different from the standard DM model . In both. 5The cyclic cosmological model of the opaque universe successfully removes some tensions of the standard \u2014The model does not limit the age of stars in the Universe. For example, observations of a nearby star HD 140283 with ageBig Bang . \u2014et al. [et al. [et al. [et al. [et al. [The model predicts the existence of very old mature galaxies at high redshifts. The existence of mature galaxies in the early Universe was confirmed, for example, by Watson et al. who anale et al. analyseds et al. for a quh et al. and a sis et al. . Note th rapidly \u2013132. \u2014et al. [et al. [Assuming 2\u20133 times higher cosmic opacity than its current estimates, the model is capable of explaining the SNe Ia dimming discovered by Riess et al. and Perl [et al. without [et al. , which i [et al. . Moreove [et al. ,137, but \u2014The model avoids a puzzle\u00a0of how the CMB as relic radiation could survive the whole history of the Universe without any distortion , and why \u2014The temperature of the CMB as thermal radiation of cosmic dust is predicted with the accuracy of 2%, see Vavry\u010duk . The CMB \u2014The model explains satisfactorily: (1) the observed bolometric intensity of the EBL with a value of Vavry\u010duk , (2) theVavry\u010duk (fig. 11Vavry\u010duk (fig. 12Note that the prediction of a close connection between the CMB anisotropies and the large-scale structures is common to both the standard model and the opaque universe model. The arguments are, however, reversed. The Big Bang theory assumes that the large-scale structures are a consequence of the CMB fluctuations originating at redshifts tic dust \u2013146.. 
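The cyclic-model curves themselves require the modified equations (2.28)-(2.29), which are not reproduced legibly here. For orientation, the standard flat LCDM Hubble history and distance-redshift relation against which the model is compared can be computed numerically as below; the parameter values are chosen only for illustration.

```python
import numpy as np
from scipy.integrate import quad

H0 = 70.0          # km/s/Mpc (illustrative)
c = 299792.458     # km/s
Om, Ol = 0.3, 0.7  # flat LCDM baseline (illustrative)

def hubble(z):
    return H0 * np.sqrt(Om * (1.0 + z)**3 + Ol)

def luminosity_distance(z):
    """D_L = (1+z) * comoving distance for a spatially flat model (Mpc)."""
    dc, _ = quad(lambda zz: c / hubble(zz), 0.0, z)
    return (1.0 + z) * dc

def distance_modulus(z):
    return 5.0 * np.log10(luminosity_distance(z) * 1.0e5)   # D_L given in Mpc

for z in (0.1, 0.5, 1.0, 2.0):
    print(f"z = {z}: H = {hubble(z):7.1f} km/s/Mpc, "
          f"D_L = {luminosity_distance(z):8.1f} Mpc, mu = {distance_modulus(z):.2f}")
```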
6The standard Friedmann equations were derived for the transparent universe and assume no light\u2013matter interaction. The equations contain densities The radiation pressure as a cosmological force acting against the gravity has not been proposed yet, even though its role is well known in the stellar dynamics . The radHence, the expansion/contraction evolution of the Universe might be a result of imbalance of gravitational forces and radiation pressure. Since the comoving global stellar and dust masses are basically independent of time with minor fluctuations only , the evoObviously, a role of recycling processes is much more important in the cyclic cosmological model than in the Big Bang theory. The processes of formation/destruction of galaxies and their interaction with the circumgalactic medium through galactic winds and outflows \u2013156 shouIn summary, the opaque universe model and the Big Bang theory are completely different concepts of the Universe. Both theories successfully predict basic astronomical observations such as the Universe expansion, the luminosity density evolution with redshift, the global stellar mass history, the SNe Ia measurements and the CMB observations. However, the Big Bang theory needs the existence of dark matter and dark energy, which are supported by no firm evidence. Moreover, they contradict small-scale observations in galaxies \u2013162 and"} {"text": "Brain Sciences has grown to include a staff of 28 academic editors with expertise related to clinical psychiatry, in addition to its supporting staff of managing and English-language editors. Our mission is to gather \u201cthe best available scientific evidence to guide clinical practice in the treatment of psychiatric illnesses\u201d. With this goal in mind, an average of approximately 10 articles per month are currently published within the Section, last year totaling 56 articles. Here are examples of our most popular articles from 2021.Since its inception in May 2021, the Psychiatric Diseases Section of Acevedo et al. provide Martinotti et al. also proSalazar de Pablo et al. provide The COVID-19 pandemic\u2019s impact on mental health has been a major subject of study. Menculini et al. summarizThese are a few selected articles from our first year in the Psychiatric Diseases Section. Our success continues this year, with nearly twice as many articles expected by year-end 2022, compared with 2021. We look forward to our next report on these works, next year."} {"text": "Natural products remain important repositories of promising therapeutic candidates due to their rich chemical and biological diversity. In particular, the development and application of bio-functional natural products from edible resources for the prevention of human diseases, health maintenance, and beauty are attractive for practical research.Molecules, includes twelve articles, including ten original and two review articles.The Special Issue \u201cBio-functional Natural Products in Edible Resources for Human Health and Beauty\u201d, published in the journal Ipomoea batatas L.) cultivars\u2019 by Guo Liang Li et al. [Vaccinium vitis-ideaea L.)\u2019 by Gabriele Vilkickyte et al. [x-, IL-1\u03b2-, TNF-\u03b1-, and IL-6-inhibiting effects and trypanocidal activity of banana (Musa acuminata) bracts and flowers: UPLC-HRESI-MS detection of phenylpropanoid sucrose esters\u2019 by Louis P. Sandjo et al. [Lavandula stoechas L.) 
essential oil: emerging potential for therapy against inflammation and cancer\u2019 by Mohamed Nadjib Boukhatem et al. [The original papers include \u2018Anthocyanin accumulation in the leaves of the purple sweet potato (i et al. ; \u2018Compose et al. ; \u2018NOx-, o et al. ; A new em et al. ; \u2018Naturam et al. ; \u2018Sea bum et al. ; \u2018Lycopem et al. ; \u2018Consumm et al. ; and \u2018Stm et al. .In addition, two review papers are included, namely \u2018Natural ingredients from medicine food homology as chemopreventive reagents against type 2 diabetes mellitus by modulating gut microbiota homoeostasis\u2019 by Xiaoyan Xia and Jiao Xiao and \u2018PhyMolecules, will be of use to many researchers. I would like to acknowledge all the authors for their valuable contributions and the reviewers for their constructive remarks. Special thanks to the publishing staff of Molecules at MDPI for their professional support in all aspects of this Special Issue.As the guest editor of this Special Issue, I hope \u201cBio-functional Natural Products in Edible Plant for Human Health and Beauty\u201d, appearing in the Natural Products Chemistry section of"} {"text": "To the Editor:et al. for hours, generating physiological stress and lactic acid in an already critically ill patient. Rapid cooling reduces this stressful time to minutes.et al., Five years ago this journal published a meta-analysis of 4700 postcardiac arrest patients treated with TH, which found that those treated with rapid TH had better outcomes compared with those treated with slower cooling .Kaneko et al. , in a 46et al., et al., \u201cTTM\u201d (targeted temperature management) includes both TH and controlled maintenance of normothermia. The TTM (Nielsen et al. supports our conclusions that the use of TH target temperatures >34\u00b0C, cooling at rates <3\u00b0C/hour, or reaching target >3.5 hours after ischemic insult do not provide the full benefits of TH and fail to consistently improve recoveries. Rapid TH should be further considered to improve outcomes.The analysis by Granfeldt"} {"text": "Migration has always been a feature of human populations, with people migrating and crisscrossing the globe for a wide range of reasons. During the 21st century [International Journal of Environmental Research and Public Health (IJERPH) entitled \u201cMigration, Resilience, Vulnerability and Migrants\u2019 Health\u201d was opened, and a dedicated team of scholars managed the editorial work as guest editors to facilitate the timely peer-review and publication of relevant manuscripts from multiple studies [As part of this exploration, up to 15 April 2022, a special topic of the A.\u00a0Health literacy and communication\u2014For example, Klingberg et al. [g et al. identifig et al. synthesiB.\u00a0Mental health and resilience\u2014For example, Hynek et al. [k et al. developek et al. discusseC.\u00a0Sexual and reproductive health services\u2014For example, Loganathan et al. [n et al. exploredD.\u00a0Identity and belongingness\u2014For example, Mude et al. [e et al. exploredE.\u00a0Policy for disability among migrants in Europe\u2014For example, Martin-Cano et al. [o et al. criticalIn the Special Issue, a number of thematic areas were discussed including, but not limited to: Conclusion: It is evident from these research activities that migrants, being internal or international and migrating for opportunities or as forced migrants (refugees), face a number of challenges, but opportunities do exist as well. 
Despite their vulnerability, especially for those migrating with a refugee background, through their resilience and adaptation to whatever adversity they face, they do survive and continue to contribute to their new place of residence."} {"text": "Brucella abortus. Diagnosis is mainly based on bacterial culture and serology. However, these methods often lack sensitivity and specificity. A range of molecular diagnostic methods has been developed to address these challenges. Therefore, this study aims to investigate the diagnostic accuracy of molecular tools, in comparison to gold standard bacterial isolation and serological assays for the diagnosis of bovine brucellosis.Bovine brucellosis is a disease of global socio-economic importance caused by B. abortus infections in animals according to Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines. The quality of included journal articles was assessed using the quality assessment of diagnostic-accuracy studies assessment tool and meta-analysis was carried out using Review Manager.The systematic review and meta-analysis were conducted based on analyses of peer-reviewed journal articles published between January 1, 1990, and June 6, 2020, in the PubMed, Science Direct, Scopus, and Springer Link databases. Data were extracted from studies reporting the use of molecular diagnostic methods for the detection of From a total of 177 studies, only 26 articles met the inclusion criteria based on PRISMA guidelines. Data from 35 complete studies were included in the meta-analysis and used to construct 2 \u00d7 2 contingency tables. Improved diagnostic performance was observed when tissue 82.0\u201398.0%]) and serum samples (sensitivity 91.3% [95% CI 86.0\u201395.0%]) were used, while the BruAb2_0168 locus was the gene of preference for optimal assay performance (sensitivity 92.3% [95% CI 87.0\u201396.0%] and specificity 99.3% [95% CI 98.0\u2013100.0%]). Loop-mediated isothermal amplification (LAMP) had a higher diagnostic accuracy than polymerase chain reaction (PCR) and real-time quantitative PCR with sensitivity of 92.0% (95% CI 78.0\u201398.0%) and specificity of 100.0% (95% CI 97.0\u2013100.0%).B. abortus to LAMP. However, due to limitations associated with decreased specificity and a limited number of published articles on LAMP, the alternative use of PCR-based assays including those reported in literature is recommended while the use of LAMP for the detection of bovine brucellosis gains traction and should be evaluated more comprehensively in future.The findings of this study assign superior diagnostic performance in the detection of Brucella bas bas16] bB. abortus genome including the BCSP31 gene (n = 3), BruAb2_0168 locus (n = 5), IS711 genetic element (23), and other genes (n = 4) (The index tests assessed in the included studies were PCR n = 23), real-time PCR n = 10), and LAMP (n = 2). The studies were validated on extracted DNA from whole blood (n = 1), serum samples (n = 3), milk (n = 8), tissue samples (n = 4), vaginal exudates or semen (n = 3) as well as mixed samples n = 7). The assays used in these studies targeted specific regions of the (n = 4) [1, 3, 50, and LA, real-ti. The assThe methodological qualities of included studies were assessed using the QUADAS-2 assessment tool and the results are presented in et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [B. abortus infections could be detected using tissue samples compared to serum samples. 
The diagnostic specificity for these sample choices was, however, lower , compared to those of other samples.The 26 eligible citations that were included in this study represent a total of 35 separate studies with their complete 2 \u00d7 2 contingency tables. The sensitivities reported by these studies range from 7.0% to 100.0% while the specificities reported were in the range of 0.0\u2013100.0%, these together with the diagnostic accuracy measures of the respective studies are reported 3, 5, 7, 18\u201335. [et al. and Narn [et al. and Nard [et al. both rep [et al. , to inform decision-making and encourage the adoption of these techniques as alternatives to conventional methods of diagnosis.The culture and isolation of bacterial disease agents remain the gold standard tool for the diagnosis of bovine brucellosis. However, due to a myriad of drawbacks, including decreased diagnostic efficiency associated with requirements for extended culture periods, stringent laboratory conditions, highly skilled personnel, serology remains an integral part of the brucellosis testing regimen , despite testing . MoleculBrucella , 37. Theet al. [To assess the diagnostic accuracy, we first considered the methodological qualities of the studies that were selected for inclusion in this meta-analysis. The analysis found that the overall risk of bias was unclear while the concern for applicability was low. In some studies, however , 27, 38,et al. , many suet al. [B. abortus in small ruminants using real-time PCR was far more superior (100% detection rate) compared with serological methods which were able to detect B. abortus infections in only 40% of the samples collected from abortion events while bacteriological methods failed to isolate Brucella spp. from any of the samples that were brought in for testing. Taking this into consideration, we can assume that some cases that were identified as negatives in some of the studies included in this review (particularly serum and tissue-based studies) are possibly false negatives and in fact may be true B. abortus cases that could not be picked up by gold standard tests owing to the low diagnostic sensitivity of these assays.The pooled sensitivity and specificity estimates showed that both the diagnostic sensitivity and specificity across index tests were high for molecular diagnostic assays, suggesting superiority over traditional bacterial culture and serological assays. than agnostic sensitivity (72.8\u201391.7%) was lower compared to specificity (92.8\u201399.5%). These higher diagnostic specificities can possibly be attributed to the improved diagnostic specificity of molecular tests compared to traditional bacterial culture and serological assays which were the reference standards in this study. Limitations in the accuracy of a gold standard test in detecting a specific target condition can result in the lower diagnostic sensitivity of the index test if the gold standard test is used as a benchmarking tool to judge the performance of the index test. For instance, in 2015 Wareth et al. reportedet al. [B. abortus in any given sample is significantly reduced in the early stages of infection where bacterial numbers are low and in instances where a sample may be heavily contaminated [B. abortus may take up to 2 weeks to rise to detectable levels after infection [B. abortus infection.In a similar systematic review on molecular tools for diagnosing visceral leishmaniasis, de Ruiter et al. , also ciaminated . Similarnfection , 40. Thenfection . Furthernfection . 
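Each eligible study enters the meta-analysis through its 2 x 2 contingency table. As a minimal illustration of how per-study sensitivity, specificity and their confidence intervals are obtained from such a table (the review itself performed the pooling in Review Manager), the sketch below uses hypothetical counts rather than figures from any included study.

```python
from statsmodels.stats.proportion import proportion_confint

def diagnostic_accuracy(tp, fp, fn, tn, alpha=0.05):
    """Sensitivity, specificity, Wilson confidence intervals and the
    diagnostic odds ratio for one 2x2 table (index test vs. reference)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    sens_ci = proportion_confint(tp, tp + fn, alpha=alpha, method="wilson")
    spec_ci = proportion_confint(tn, tn + fp, alpha=alpha, method="wilson")
    dor = (tp * tn) / (fp * fn) if fp > 0 and fn > 0 else float("inf")
    return sensitivity, sens_ci, specificity, spec_ci, dor

# Hypothetical counts for a single study, not values taken from the review.
sens, sens_ci, spec, spec_ci, dor = diagnostic_accuracy(tp=46, fp=2, fn=4, tn=148)
print(f"sensitivity {sens:.3f} (95% CI {sens_ci[0]:.3f}-{sens_ci[1]:.3f}), "
      f"specificity {spec:.3f} (95% CI {spec_ci[0]:.3f}-{spec_ci[1]:.3f}), DOR {dor:.0f}")
```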
These fet al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [The results presented herein suggest that LAMP, having higher pooled diagnostic accuracy estimates than PCR and qPCR, has a superior diagnostic performance to the two. Among the LAMP assays included in the meta-analysis the assay reported in Karthik et al. , was fou [et al. , Arellan [et al. , Narnawa [et al. , and Nar [et al. , had hig [et al. and Narn [et al. reported [et al. and Nard [et al. . The idea that the target gene plays an important role in the diagnostic accuracy of an assay is supported by the findings of Mugasa et al. [We further investigated the role of target genes in the accuracy of molecular diagnostic assays for detection of strains , 28, 29.a et al. , who demThe consumption of unpasteurized milk and milk products is the primary mode of animal-to-human transmission for brucellosis . Even thB. abortus infections than traditional methods. LAMP, specifically the LAMP assay reported by Karthik et al. [B. abortus infections where LAMP is the desired assay. Where PCR and qPCR are preferred alternatives and where limitations associated with LAMP cannot be overcome, the PCR assay reported by Alamian et al. [et al. [et al. [The findings of this study indicate that choice of index test, clinical sample type, and gene target play a role in the overall performance of a diagnostic test and therefore, need to be collectively evaluated when a new diagnostic assay is developed. In addition, these findings indicate that DNA-based molecular methods are more effective in the diagnosis of k et al. , had a hn et al. and the [et al. and Bark [et al. can be rWhile LAMP appears to be the diagnostically superior assay based on this meta-analysis, we cannot ignore its tendency for false positive amplification which may be attributed among other things to the type of clinical sample used, thus decreasing the diagnostic specificity of the assay. The true superiority of LAMP over PCR-based technology can only be ascertained more precisely with the availability of more studies. Therefore, we recommend that the meta-analysis be revisited in the future to gain better insight into whether the full potential of LAMP as a rapid and robust diagnostic tool can be harnessed and applied as an alternative tool for the accurate diagnosis of bovine brucellosis. In the interim, we recommend the use of PCR-based technologies like those mentioned above for the rapid and precise diagnosis of bovine brucellosis. However, we are also mindful that the methodological qualities of these studies may be compromised; therefore, studies of higher quality are necessary for future evaluation.The datasets used and/or analyzed during the current study are included in this published article. Any additional data will be made available on reasonable request from the corresponding author.ES and OT: Conceived the idea and reviewed the manuscript. LM: Designed the study, retrieved the study articles, extracted and analyzed the data, and wrote the manuscript. TEO: Extracted and analyzed the data and reviewed the manuscript. All authors have read and approved the final manuscript."} {"text": "Gynaecological cancer impacts approximately three million women globally. The problem is much more intense in resource-limited countries. 
Sexual health is a critical aspect of gynaecological cancer treatment and an important component of quality of life (QoL).This study aimed to assess the determinants of sexual function among survivors of gynaecological cancer.This was a cross-sectional study. The simple random sampling technique was used to recruit survivors of gynaecological cancers aged 18 years and above on follow-up in a tertiary hospital in Kenya.The study used the socio-demographic survey, Body Image Scale, Multidimensional Perceived Social Support Scale and Female Sexual Function Index.p = 0.005), cancer stage 3 and social support were independent predictors of sexual dysfunction.Cervical cancer was the most common gynaecological malignancy among respondents (51%). The mean total score of the Female Sexual Function Index was significantly low at 10.0 (cut off = 26.5). The majority (85%) of respondents had sexual dysfunction. The most commonly affected sexual domain was lubrication at a mean value of 0.91 (SD = 1.58). Age (aOR = 0.05, 95% CI: 0.003\u20130.16, The prevalence of sexual dysfunction among gynaecological cancer survivors remains significantly high. Having cervical cancer was the most significant predictor of sexual dysfunction in this study population.There is a need for further studies to improve the sexual life and hence the QoL among survivors of gynaecological malignancies. Cancer is the most significant worldwide pathological health problem with wide geographical variation in incidence, and it has additionally become an important item in each country\u2019s health agenda . GynaecoThe burden of gynaecological cancer in developing countries appears huge. In these countries, gynaecological cancers account for 25% of all new cancers diagnosed among women aged up to 65 years compared to 16% in the developed world. According to a recent report, developing countries accounted for 820,265 cases .All women are at risk of gynaecologic cancer. Gynaecological cancers represent the second most common malignancies affecting women in Kenya . On the Addressing survivorship and QoL issues remains a challenge , 10, 13.Gynaecological cancer and its treatments can affect one or more phases of the sexual response cycle through alterations of sexual function. The high curability of cervical cancer, when detected early, combined with the latest scientific advances in medical treatment, has contributed to the greater survival of patients. However, treatment of this neoplasm can, on the other hand, lead to late adverse effects, primarily related to radiotherapy, caused by its action on healthy tissue and organs adjacent to the tumour , 17, 19.Approximately 50% of gynaecological cancer survivors present with sexual dysfunction \u201324. MostSexual dysfunction is one of the most distressful symptoms among cervical cancer survivors. Cancer treatment, including radiotherapy, results in a high degree of vaginal morbidity and persistent sexual dysfunction. The vaginal symptoms reported after cervical cancer treatment include sore membranes, reduced lubrication and genital swelling, which severely affect women\u2019s sexual health , 21, 27.The illness process and various treatment modalities negatively affect the emotional, psychological, physiological, body image, QoL and sociocultural well-being, severely affecting the affected women and their spouses , 39\u201341. 
Despite substantial studies recording the detrimental outcomes of malignancies on sexual functioning and gratification, there is a paucity of studies on successful management programmes for sexual dysfunction within oncology care settings , 23, 49. The majority of patients who experience sexual difficulties rarely discuss these problems freely with their healthcare providers, because sex is an embarrassing topic to discuss in public and is considered a taboo in many parts of the world, especially in African and Asian communities , 50, 51. Discussion of sexuality after gynaecological cancer treatment is often tackled from a biomedical point of view, with the centre of attention being the physiological aspect of sexual functioning , 3, 33.

The study was carried out to assess the determinants of sexual function among survivors of gynaecological cancers in a tertiary teaching and referral hospital. Specific objectives were (1) to assess the association between clinical characteristics, socio-demographic characteristics and sexual function among survivors of gynaecological cancers; (2) to assess the relationship between psychosocial determinants and sexual function among survivors of gynaecological cancers; and (3) to determine the predictors of sexual function among survivors of gynaecological cancers.

A cross-sectional study design was utilised. The study was conducted in a tertiary hospital in Kenya, the largest public referral hospital, which offers comprehensive cancer care services at subsidised rates. It has an inpatient bed capacity of over 2,000 and provides outpatient services to approximately 27,000 cancer patients annually. Data were collected from March 2021 to April 2021.

The sample size was determined using the Fischer et al. formula, with z = 1.96, p = 0.5 and d = 0.05. The sample size obtained was corrected considering a target population of 150 female patients attending the outpatient clinic, aged 18 years and above, with a confirmed diagnosis of gynaecological cancer, on treatment and regular follow-up in the tertiary hospital – Cancer Treatment Centre. A total of 108 participants were recruited into the study.

The simple random sampling method was used to recruit participants: yes/no papers were written, placed in a box, mixed well and then drawn out one at a time, and those who picked a yes paper were eligible to participate. To ensure that a participant was not recruited more than once, the researcher and research assistants sensitised the participants about the study and the sampling process through a health talk at their respective clinics. Once a participant was selected, her file was marked with a red sticker for easier identification on the days she attended her routine clinics; for those who had participated and were not selected, their files were marked with a yellow sticker, and they were not involved in the sampling process again. Simple random sampling was repeated daily from Monday to Friday for 1 month, and on average 5 participants were enrolled per day until the targeted sample size of 108 was reached.

Women were recruited to participate in the study if they were currently on treatment or had undergone therapy within the previous 3 years.
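The sample-size figures reported above are mutually consistent: with z = 1.96, p = 0.5 and d = 0.05 the formula gives an initial estimate of about 384, and correcting for the accessible population of 150 patients brings this down to approximately 108. The finite-population correction shown below is an assumed reading of how that correction was applied; it reproduces the reported 108 but is not spelled out in the text.

```python
# Worked check of the reported sample size (all inputs are the values quoted above).
z, p, d = 1.96, 0.5, 0.05
n0 = z**2 * p * (1 - p) / d**2        # initial estimate for a large population
N = 150                               # accessible population of eligible patients
n = n0 / (1 + (n0 - 1) / N)           # finite-population correction (assumed step)
print(round(n0), round(n))            # -> 384 108
```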
To have a more homogeneous sample, the inclusion criteria were (i) adult female patients aged 18 years and above; (ii) currently in a relationship with a spouse or long-term sexual partner; (iii) with a history of having actively engaged in sexual activity over 1 month prior gynaecological cancers diagnosis and treatment; and (iv) able to read, write and speak English or Kiswahili. Women with the following conditions that potentially would influence their sexual life were excluded: the presence of other malignant diseases, impaired mental function, previous history of sexual dysfunction, history of female genital organ surgery before the diagnosis of gynaecological cancer, end-stage renal disease or a dependent functional status.After establishing a rapport with the patients, the first author approached eligible women at an outpatient clinic of a tertiary hospital \u2013 Cancer Treatment Centre. The aims and procedures of the study were explained. Written informed consent was obtained from the patients who agreed to participate in the study. An interviewer-administered questionnaire was used to collect data from participants who expressed difficulty in reading questionnaires but were willing to respond.Demographic data, such as age, marital status, level of education and employment status, were collected with a survey questionnaire. Clinical information, such as cancer staging and type of gynaecological cancer, treatment modalities and treatment duration, were extracted from medical records. The instrument used to evaluate the sexual function, body image and perceived social support for patients with gynaecological cancer survivors was collected with self-report instruments described below.et al [A 19-item Female Sexual Function Index was developed by Rosen et al to measuet al [A 10-item Body Image Scale was developed by Hopwood et al , collaboet al [A 12-item Multidimensional Scale of Perceived Social Support was developed by Zimet et al to measup < 0.05.Data were analysed using Statistical Package for the Social Sciences (SPSS) version 25. Descriptive statistics, such as mean, standard deviation, frequencies and percentages, were used to summarise study variables. Chi-squared tests were used to test the association between socio-demographic, clinical characteristics and FSFI. Pearson\u2019s correlation test was used to investigate the relationship between psychological, social and sexual functions. A binary logistic regression model was used to identify the predictors of sexual function. Significance was accepted at The university\u2019s Ethics Review Committee, the National Commission for Science, Technology, Innovation, and the Hospital Administration approved the study. The informed consent form was signed before data collection. Confidentiality and privacy were assured throughout the study by maintaining the anonymity of the participants and storing the data in password-protected files.n = 63) of the respondents had secondary education as their highest level of education; 77.8% (n = 81) were unemployed; and 69.4% (n = 75) were married. The majority of respondents had cancer of the cervix, and 55.6% (n =60) had stage 3 cancer. Chemotherapy was the commonest mode of treatment (The average age of the respondents was 53.4 (SD \u00b1 12) years; 58.3% (reatment .M = 1.66 and SD = 0.96. The highest rating was feeling self-conscious about their appearance with an average score of M = 2.06 and SD = 1.0. 
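The regression step described above (a binary logistic model whose exponentiated coefficients give the adjusted odds ratios reported later) can be sketched as follows. The data frame, variable names and coding are hypothetical stand-ins, with dysfunction defined by the FSFI cut-off of 26.5; the sketch shows the mechanics only and does not reproduce the study's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 108  # number of participants reported in the study

# Hypothetical per-respondent data; values are random stand-ins.
df = pd.DataFrame({
    "dysfunction":    rng.binomial(1, 0.85, n),   # 1 if FSFI total < 26.5
    "age_50_plus":    rng.binomial(1, 0.60, n),
    "cervical_ca":    rng.binomial(1, 0.51, n),
    "stage_3_or_4":   rng.binomial(1, 0.56, n),
    "social_support": rng.normal(3.2, 1.0, n),    # mean MSPSS-type score
})

X = sm.add_constant(df[["age_50_plus", "cervical_ca", "stage_3_or_4", "social_support"]])
fit = sm.Logit(df["dysfunction"], X).fit(disp=0)

# Adjusted odds ratios and 95% confidence limits from the fitted coefficients.
odds_ratios = pd.DataFrame({
    "aOR":   np.exp(fit.params),
    "2.5%":  np.exp(fit.conf_int()[0]),
    "97.5%": np.exp(fit.conf_int()[1]),
})
print(odds_ratios.round(2))
```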
Family support was the most significant source of support for the patients, with an average score of M = 3.44 (SD = 1.16). The majority of respondents (n = 92; 85%) had sexual dysfunction. Over half of the respondents (55.6%; n = 60) reported that their physicians had discussed sexual functioning issues with them, but the majority were not referred to a sexual specialist counsellor or to social support after treatment. Lubrication was the most prevalent problem.
Age of the patient (p = 0.001), level of education (p = 0.002), employment status (p < 0.0001), lifestyle adaptation activities (p = 0.047), type of cancer (p = 0.015) and cancer stage (p = 0.024) were significantly associated with sexual function. A significant positive relationship existed between body image and sexual function, and there was a significant negative relationship between social support and sexual function (p < 0.0001).
The findings revealed that the respondents' age, employment status, type of gynaecological cancer and cancer stage were significant predictors of sexual dysfunction. Patients aged less than 50 years were less likely to suffer from sexual dysfunction than those aged 50 years and above (OR = 0.142, 95% CI 0.038-0.533, p = 0.004). Employed participants were 0.3 times as likely, and self-employed participants 0.1 times as likely, to have sexual dysfunction as unemployed respondents. Respondents who had cancer of the cervix were seven times more likely to have sexual dysfunction, patients who had cancer of the endometrium were five times more likely, and respondents with stage 4 cancer were five times more likely to have sexual dysfunction. In the multivariable analysis, patients who had stage 3 cancer were ten times more likely to have sexual dysfunction than those with stage 1 cancer (p = 0.005), and a decrease in the social support score was associated with a 29% increase in the likelihood of sexual dysfunction among patients with gynaecological cancers.
To the best of our knowledge, this is the first comprehensive quantitative study to address specific sexual functioning and activities among survivors of gynaecological cancer in Kenya. The present study found that the mean total FSFI score was 10.0, with poor sexual function in all domains. The majority of respondents (85%) reported sexual dysfunction. Furthermore, most of the respondents (69%) were not referred to a sexual specialist counsellor or to social support after treatment. Cervical cancer (51%) was the commonest diagnosis among respondents, and 55% were at stage 3. Poor sexual function was linked with older age, type of gynaecological cancer, cancer stage, employment status and level of education. Additionally, sexual dysfunction showed a substantial positive association with a higher level of psychological distress due to body image alteration and was related to decreased social support. The study's findings showed that the mean Female Sexual Function Index score was 10 among patients with gynaecological cancer.
These findings are comparable to those reported by Rai in Nepal and by Liu et al. The present study found no significant differences in the distribution of the FSFI domain scores; however, lubrication was the most commonly affected domain, similar to the findings of Correia et al and Rai.
The study's findings revealed that the age of the patient was statistically significantly associated with sexual function. This is consistent with the findings of Kaviani et al and Rai, among others. This study also found an association between the level of education and sexual function, which agrees with previous reports by Rai, although some other authors found no such association. This study further revealed that employment status was statistically significantly associated with sexual function; these results are similar to those of studies conducted by Sekse et al and Zhou et al, although other work has reported different patterns. Our study found an association between lifestyle adaptation and sexual function, with a good number of the respondents not engaging in any lifestyle adaptation activities; these findings are consistent with the reports of Bifulco et al and Rai. The study demonstrated that the type of gynaecological cancer and cancer stage were associated with sexual function, similar to studies carried out by Correia et al, Fischer et al and others.
The present study found a positive, statistically significant relationship between the psychological domain (body image changes) and sexual function. These findings are comparable to studies showing that body image changes during cancer treatment often result in sexual dysfunction, and that patients may or may not recover gradually even after 5 years of treatment, as reported by Li et al, Ussher et al and others. This study showed a positive, statistically significant relationship between social support and sexual function, and found low social support scores among respondents compared with published normative data. These findings agree with those of Golbasi and Erenel and of Izycki et al.
From the regression analysis, our study's findings revealed that the respondents' age, employment status, type of gynaecological cancer and cancer stage were significant predictors of sexual function. Respondents aged less than 50 years were less likely to suffer from sexual dysfunction than those aged 50 years and above, in line with the findings of Sekse et al and Pinar et al; Kew et al, Boa and colleagues and others have reported comparable age-related observations. This study revealed that unemployed respondents were more likely to suffer from sexual dysfunction than employed and self-employed women.
These findings were consistent with those of Najjar et al, Pinar et al and Zhou et al. This study revealed that respondents who had cancer of the cervix were more likely to have sexual dysfunction than respondents with other gynaecological cancers. Furthermore, our study demonstrated that respondents who had stage 4 cancer were more likely to have sexual dysfunction than respondents in stages I, II and III. This is the first study in our country to indicate that cervical cancer survivors manifest more sexual dysfunction than survivors of other gynaecological malignancies. One possible explanation is that over 50% of our study sample consisted of respondents with cervical cancer, probably at advanced tumour stages. Muliira et al and Boa et al reported related observations, and Williams et al, Frimer et al and Pinar et al have also examined this question, not always with the same conclusions. The study found that respondents who had decreased social support had an increased chance of sexual dysfunction; these findings mirror those reported by Izycki et al and others.
This research study was quantitative by design, and the majority of participants were survivors of cervical cancer, limiting insight into the subjective, individual experiences of women with other gynaecological cancers. Mixed-method designs with a larger sample size comprising an equal number of survivors of different types of gynaecological cancers are recommended in future studies. In addition, the cross-sectional design of this study may not allow us to infer a causal relationship between treatment and sexual dysfunction, as we could not objectively assess pre-treatment sexual function in the participants.
The FSFI score among respondents with gynaecological cancer was low, with an 85% prevalence of sexual dysfunction. Based on the study's findings, the age of the patient, level of education, employment status and lifestyle adaptation activities were the socio-demographic factors associated with sexual function among respondents. The type of cancer and cancer stage were the clinical characteristics associated with sexual function. Psychological (body image) and social factors were related to sexual function. Age, cancer stage 3 and social factors were independent predictors of sexual dysfunction.
Conceptualisation: M.O., with L.O. and J.O.O.'s supervision; Literature search: M.O., with L.O.'s supervision; Study design: M.O., with L.O. and J.O.O.'s supervision; Data collection: M.O., with L.O. and J.O.O.'s supervision; Data analysis: M.O., with L.O. and J.O.O.'s supervision; Manuscript preparation: M.O., with L.O.'s supervision; Writing and editing of manuscript: M.O., with L.O. and J.O.O.'s supervision. All authors read and approved the final version of the manuscript. The authors declare that there is no conflict of interest. This study was self-funded; there was no external funding."} {"text": "To assess the association of sleep factors and combined sleep behaviours with the risk of clinically relevant depression (CRD), a total of 17 859 participants aged 20-79 years from the National Health and Nutrition Examination Survey (NHANES) 2007-2014 waves were included. Sleep duration, trouble sleeping and sleep disorder were ascertained in the home by trained interviewers using the Computer-Assisted Personal Interviewing (CAPI) system.
The combined sleep behaviours were referred to as 'sleep patterns', with a 'healthy sleep pattern' defined as sleeping 7-9 h per night with no self-reported trouble sleeping or sleep disorders; intermediate and poor sleep patterns indicated one and two to three sleep problems, respectively. Weighted logistic regression was performed to evaluate the association of sleep factors and sleep patterns with the risk of depressive symptoms.
The total prevalence of CRD was 9.5% among the 17 859 participants analysed, with the prevalence in females being almost twice that in males. Compared to normal sleep duration (7-9 h), both short and long sleep duration were linked with a higher risk of CRD. Self-reported sleep complaints, whether trouble sleeping or sleep disorder, were significantly related to CRD. Furthermore, the associations appeared to be stronger for individuals with a poor sleep pattern. In this nationally representative survey, a dose-response relationship between sleep patterns and CRD was observed.
Response categories for the nine-item depression instrument were 'not at all', 'several days', 'more than half the days' and 'nearly every day', with item scores ranging from 0 to 3. Sleep duration was classified as short (<7 h per night), normal (7-9 h per night) and long (>9 h per night). Covariates were recorded via the CAPI system as follows: age in years at the exam, gender, ethnicity, education level and marital status. Smoking status was measured by the question 'Have you smoked at least 100 cigarettes in your entire life?' and classified as 'yes' or 'no' based on the replies. The physical activity questionnaire is based on the Global Physical Activity Questionnaire and includes questions related to daily activities, leisure-time activities and sedentary activities. The suggested metabolic equivalent (MET) scores of 8 points for vigorous work-related/leisure-time activity and 4 points for moderate work-related/leisure-time activity and for walking or bicycling for transportation were used to compute metabolic equivalents. Sedentary time refers to the duration spent sitting in a typical day, excluding sleeping. Body measurements were obtained by qualified health technicians in the Mobile Examination Centre. Body mass index (BMI) was calculated as weight in kilograms divided by height in metres squared, and then rounded to one decimal place. We utilised the total nutrient intakes (DR1TOT) consumed during the 24-h period prior to the interview on the first day to calculate the 13 components of the Healthy Eating Index (HEI) 2015 score, as detailed in a previous study.
Weighted logistic regression models were used to examine the association of individual sleep factors, or sleep patterns, with the risk of CRD (an illustrative sketch of this modelling step is given below). Model 1 adjusted for age and gender. Model 2 further adjusted for race, marital status, education level, smoking status and alcohol intake. Model 3 further included the HEI-2015 index, physical activity, sedentary time, BMI and comorbidity index. STATA version 14.0 was applied for data analysis, and R software 3.5.3 was used to create the forest graphs. A p value < 0.05 was regarded as statistically significant.
Characteristics of the participants according to CRD status are presented in the corresponding table. Participants with a poor sleep pattern tended to be of middle or older age, to have a higher comorbidity index and a tendency towards obesity, and to have a lower education level; they were also more likely to be female, to live alone, to be physically inactive, to report more sedentary time, to be heavy smokers and to have poorer diet quality.
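The paper reports fitting these weighted models in STATA; the Python code below is only an illustrative sketch (not the authors' code), with hypothetical variable names, and it applies the NHANES examination weights as frequency-type weights in a binomial GLM. This approximates the weighted point estimates but not the design-based (stratum/PSU) standard errors that a full survey analysis would produce.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical merged NHANES analysis file (one row per participant)
nhanes = pd.read_csv("nhanes_sleep_depression.csv")

# Outcome: clinically relevant depression (1 = yes, 0 = no)
y = nhanes["crd"]

# Model 1-style design matrix: sleep pattern (healthy/intermediate/poor), age, gender
X = pd.get_dummies(nhanes[["sleep_pattern", "age", "gender"]],
                   columns=["sleep_pattern", "gender"], drop_first=True)
X = sm.add_constant(X.astype(float))

# Survey examination weight used as a frequency-type weight (simplification)
fit = sm.GLM(y, X, family=sm.families.Binomial(),
             freq_weights=nhanes["exam_weight"]).fit()

print(np.exp(fit.params))      # odds ratios relative to the reference categories
print(np.exp(fit.conf_int()))  # 95% confidence intervals (not design-adjusted)

Models 2 and 3 would simply extend the design matrix with the additional covariates listed above.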
The prevalence of CRD across participant characteristics, the associations of individual sleep factors with CRD and the relationship of combined sleep factors (sleep patterns) with depression are shown in the corresponding tables and forest plots. To the best of our knowledge, this is the first study conducted on the relationship between sleep behaviours and depression in a large, nationally representative sample. We observed that both short and long sleep duration, as well as sleep complaints (trouble sleeping and sleep disorder), were strongly related to CRD. We then measured the combined associations of sleep duration, trouble sleeping and sleep disorders with the risk of CRD, and participants with a poor sleep pattern had a greater risk of depression.
Among the 17 859 participants, CRD was almost twice as common in females (12.3%) as in males (6.7%). This was consistent with earlier findings and was most likely related to individual differences in susceptibility and to environmental factors (Seedat et al.; Hu et al.). Sleep quantity and sleep quality are the basic dimensions of good sleep. A recent meta-analysis of six prospective studies from the United States and one from Japan revealed a U-shaped relationship between sleep duration and depression risk (Zhai et al.). To date, multiple genome-wide association studies have confirmed genetic correlations between sleep duration, insomnia symptoms, excessive daytime sleepiness and depressive symptoms (Hammerschlag et al.).
Our research was performed using data from a large, nationally representative population sample. NHANES's sampling approach ensures that the sample was selected at random and was representative of the whole American population. There were some limitations to the current study. First, as cross-sectional research, we cannot rule out reverse causality due to the nature of the design. Second, the type of sleep disorder is not clearly defined. Finally, all sleep factors were self-reported, which may introduce recall bias and is less objective than sleep monitoring.
Overall, this study emphasises the independent and combined relationships between sleep-related issues and the risk of depression. Further prospective studies should be conducted to investigate causal or bidirectional relationships between sleep complaints and depression risk. Moreover, it is critical to investigate the genetic associations and potential mechanisms linking sleep complaints and depression, for effective depression prevention and management."} {"text": "We were very pleased to read the excellent umbrella review (UR) by van der Burg et al., who highlighted the clinical relevance of this topic. It is essential to be aware that the vast majority of studies have focused on genuine catatonic symptoms and ACS. Treatment of ACS (stopping or switching antipsychotic treatment) must be prompt to avoid further deterioration in the sense of malignant catatonia or NMS. Despite the outstanding clinical relevance, evidence regarding susceptibility genes for ACS is very scarce. A simple search in PubMed on March 11th, 2021 using the terms "catatonia" and "genes" OR "genetic" identified a total of 211 results. For instance, two earlier studies by Stöber et al. reported genetic findings in catatonia and postulated candidate susceptibility loci. A parallel search using the terms "neuroleptic malignant syndrome" and "genes" OR "genetic" identified a total of 100 results. An earlier study by Suzuki et al. examined the TaqI A polymorphism of the dopamine D2 receptor (DRD2).
Later on, Kishida et al. [TaqI A [NMS is a rare and life-threatening manifestation of DRMD that requires an efficient and timely therapy . The scii et al. postulata et al. found th [TaqI A or Ser31In daily clinical practice, catatonic syndromes and ACS are primarily treated with benzodiazepines and electroconvulsive therapy (ECT) . AccordiThe conclusion that can be drawn from the data is that both ACS and NMS have been clearly underappreciated by genetic research. Both ACS and NMS are clinically rather rare conditions, so that finding candidate genes that are associated with these disorders will require large groups of patients. Given the clinical relevance of these syndromes, we strongly endorse population-based cohort studies that could decisively contribute to the specification of risk (or protective) factors (including relevant gene variants) and predictors for the occurrence of ACS and NMS . In line"} {"text": "International Journal of Environmental Research and Public Health is dedicated to increasing the scientific information available about the long-term effects of exposure to the 2001 World Trade Center disaster. We address emerging health problems, chronic diseases including cancer and mortality, as well as research methods and intervention strategies. The following are summaries of the sixteen articles included in this issue.This Special Issue of the The James Zadroga 9/11 Health and Compensation Act of 2010 was signed into law on January 2011 and the WTC Health Program (WTCHP) was established to provide medical monitoring and treatment of covered health conditions for responders and survivors of the 9/11 attacks. The WTCHP also maintains a research program aimed to inform the clinical care of program members. In their review, Santiago-Col\u00f3n et al. [There is growing evidence suggesting that 9/11 exposures may increase the risk of mild cognitive impairment (MCI) occurrence. In their article, Daniels et al. provide To evaluate how mental health comorbidity is associated with cognitive impairment, Alper et al. investigIn a follow-up study, Singh et al. examinedPollari et al. have stuKwon et al. proposedColbeth et al. studied Asher et al. reviewedGarrey et al. studied Alper et al. studied Madrigano et al. evaluateAn elevated risk for multiple cancers has been shown for the WTC affected community members residing in South Manhattan on 9/11 . Dr. Shao et al. describeA comprehensive description of 2561 cancer cases detected at the EHC was provided by Durmus et al. A total Characteristics of lung cancer among survivors, in particular, were examined by Durmus et al. and coauthors . AlthougAnother study on the WTC Survivors examined the persistence of lower respiratory symptoms (LRS) of uncontrolled asthma affecting individuals residing in lower Manhattan with previous WTC exposure to airborne particles. Reibman et al. reported1), wheeze, and dyspnea were independently associated with prolonged ICS/LABA treatment. This study shows that high risk for treatment was identifiable from routine monitoring exam results years before treatment initiation.A two-year longitudinal assessment was conducted on 8530 WTC-exposed firefighters with lung injury by Putman et al. A total The World Trade Center disaster of 2001 continues to impact exposed populations in diverse ways. This is increasingly documented in the scientific literature, as demonstrated in this update issue as we approach the disaster\u2019s 20-year anniversary. 
We need to focus on interventions to reduce preventable morbidity and mortality in these populations, and future research is needed on topics pointed to in this Issue."} {"text": "Results of a recent study from our laboratory, along with Hungarian scientists suggest a large overlap between the occurrence of substance and non-substance addictions and behaviors and underlies the importance of investigating the possible common psychological, genetic and neural pathways. These data further support concepts such as the Reward Deficiency Syndrome and anti- reward symptomatology and the component model of addictions that propose a common phenomenological and etiological background of different addictive and related behaviors. Alcoholism is a very complex trait with epigenetic impact. However, we have argued that a number of candidate genes that interact in the brain circuitry involving the established Brain Reward Cascade provides an acceptable predictive blue -print as generated by the Genetic Addiction Risk System (GARS) consisting of multi- polygenic loci. Importantly, alcohol use disorder (AUD) represents a major and ongoing public health concern with 12-month prevalence estimates of 5.6% in the United States. Quantitative genetic studies suggest a heritability of approximately 50% for AUD, and as a result, significant efforts have been made to identify specific variation within the genome related to the etiology of AUD especially in our young population. Recent results and known psychological deficits based even on a few dopaminergic genetic polymorphisms persuasively suggest that the classic view of D1-D2 functional antagonism does not hold true for all dimensions of reward-related RDS behaviors, and that D2 neurons play a more prominent pro-motivation role as we argued previously. Moreover, NIAAA extensive investigation revealed the importance of complex signaling pathways in identifying factors responsible for complex traits such as alcohol consumption. We the authors hereby emphasize the importance of genetic antecedents of alcohol consumption that loads onto many psychological deficits. The take home message is that the brain physiological impairment does not actually display a specific alcohol disorder and this is further based on no known particular alcohol receptor per se. However, understanding the gene X environment interplay in terms of brain reward processing seems most prudent especially in the face of the stressful COVID 19 pandemic. Precision translational therapeutics derived from these doctrines, may help victims of RDS to dig themselves out of a \u201chypodopaminergic ditch\u201d.Our previous work along wiDopamine is a major component in the mechanisms involving brain function . The dop2 < 0.002), contribute to the genetic etiology of AUD [The roles of specific candidate genes have been the subject of much debate and to date there is no consensus regarding a unique gene panel for alcohol addiction. There are many candidate genes representing the neurochemical mechanisms involved in reward dependence behaviors linked to mesolimbic circuitry . Importay of AUD . While ty of AUD \u201322.\u00ae test. The presence of hypodopaminergia is a complicated but determining condition of the GARS\u00ae test results. The search for studies that report low-dopamine function associated with specific SNPs of reward genes formed the cornerstone of the development of the GARS\u00ae test. While there are many possible addiction-related genes; as pointed out by Li et al. 
[\u00ae genetic panel and Variable Number Tandem Repeats (VNTRs) connected to the promotion of a genetically induced hypodopaminergia met the final selection for the GARSi et al. . dopaminic panel .et al. [et al. [http://karg.cbi.pku.edu.cn), the first molecular database for addiction-related genes with extensive annotations and a Web interface. They connected the common pathways into a hypothetical common molecular network for addiction including alcoholism. Interestingly, two final pathways emerged were glutamatergic dopaminergic. Moreover, research from a Brain Storm consortium quantified the genetic sharing of [Hodgkinson et al. develope [et al. who intearing of brain diaring of . These rThis is a timely and clinically relevant endeavor. For instance, alcoholism pharmacotherapy may target the brain\u2019s reward mechanisms via the \u201cdeprivation-amplification relapse therapy\u201d (DART) Previouset al. [et al. [RDS can be manifested in relatively mild or severe forms that follow as a consequence of an individual\u2019s biochemical inability to derive reward from ordinary, everyday activities. DRD2 A1 Allele is indeed a very important variant as initially discovered by Blum et al. to assocet al. . In supp [et al. , demonst [et al. .An important question relates to the importance of a multi-locus approach rather than the old adage OGOD and a GABAergic gene (GABRB3) and found hypodopaminergic functioning predicted drug use in males; however, in females, a deleterious environment was the salient predictor. This preliminary study suggests that it is possible to identify children at risk for problematic alcohol use prior to the onset of drug dependence supporting a multi locus approach but selecting appropriate alleles for each gene is required for successful identification of alcohol seeking risk behavior. Understanding that there are 30,000 genes in the human genome the exact allelic prediction seemed unattainable in the late 80s but the first clue was initiated by the seminal work of Blum & Noble [et al. [TaqIA A1 allele, DRD2-141C Ins/Ins genotype, DRD4 7-repeat or longer allele, DAT1 9-repeat allele, and the val/met COMT genotype, and with a greater number of these genotypes per a multi-locus composite, show less responsivity of reward regions that primarily rely on DA signaling. Their findings underscore the need for polygenic testing rather than single gene approaches. Specifically, the multi-locus composite score revealed that those with a greater number of these genotypes showed less activation in reward regions, including the putamen, caudate, and insula, in response to monetary reward. The results suggest that the multi-locus genetic composite is a more sensitive index of vulnerability for low reward region responsivity than individual genotypes including the TaqIA A1 allele. Similarly work from the National Institutes of Alcoholism by Tabakoff et al. [There have been a number of studies utilizing many gene polymorphisms to access association of aberrant seeking behavior including the work of Conner et al., who anal & Noble and subs & Noble . Stice e [et al. tested tf et al. utilizinet al. [hypodopaminergic ditch\u201dGenomic testing such as GARS, can improve clinical interactions and decision-making especially related to the complex trait of alcoholism ,38. Knowet al. , albeit,et al. principl"} {"text": "Plant viruses cause many of the most important diseases threatening crops worldwide. 
Over the last quarter of a century, an increasing number of plant viruses have emerged in various parts of the world, especially in the tropics and subtropics. As is generally observed for plant viruses, most of the emerging viruses are transmitted horizontally by biological vectors, mainly insects. Reverse genetics using infectious clones\u2014available for many plant viruses\u2014have been used for the identification of viral determinants involved in virus\u2013host and virus\u2013vector interactions. Although many studies have identified a number of factors involved in disease development and transmission, the precise mechanisms are unknown for most of the virus\u2013plant\u2013vector combinations. In most cases, the diverse outcomes resulting from virus\u2013virus interactions are poorly understood. Although significant advances have been made towards understanding the mechanisms involved in plant resistance to viruses, we are far from being able to apply this knowledge to protect cultivated plants from all viral threats.The aim of this Special Issue was to provide a platform for researchers interested in plant viruses to share their recent results related to the various aspects of plant virology: ecology, virus\u2013plant host interactions, virus\u2013vector interactions, virus\u2013virus interactions, and control strategies. A total of 15 papers have been contributed by 96 authors from 18 countries to the issue, comprising ten research articles, one short communication, and four reviews .Nicotiana clevelandii to Chenopodium foetidum, can be boosted by adaptation to a bridge plant species, Arabidopsis thaliana, using mutants of the potyvirus plum pox virus. Kim et al. [Arabidopsis thaliana genotypes infected by the cucumovirus cucumber mosaic virus that were linked to virus seed transmission. On a more applied aspect, Nancarrow et al. [Plant virus ecology looks at virus\u2013host-environment interactions and, in a broad sense, includes studies on biodiversity and evolution. Several papers in this Special Issue focused on this topic. Mart\u00ednez-Turi\u00f1o et al. describem et al. also anam et al. studied m et al. , using aw et al. estimateHigh-throughput sequencing (HTS) technologies have become indispensable tools to characterize plant virus diversity thanks to their ability to detect virtually any virus without prior sequence knowledge. Three papers published in this Special Issue deal with HTS. Kutnjak et al. present The study of the precise interactions established between viruses and plant hosts to cause a productive infection is a fundamental subject that has received the attention of three research groups contributing to this Special Issue. Teixeira et al. reviewedBemisia tabaci complex, have been mostly conducted with tomato yellow leaf curl virus. In this Special Issue, Chi et al. [Deciphering virus\u2013vector interactions is an emerging research subject in plant virology. Thus far, studies concerning begomoviruses, an important group of emerging pathogens transmitted by the whiteflies of the i et al. showed tMixed-infections with two or more plant viruses are frequent in the field, with viruses being able to interact in multiple and intricate ways. These interactions are generally categorized as synergistic, antagonistic, or neutral. In this Special Issue, Elvira Gonz\u00e1lez et al. 
experimeAlthough a number of biotechnological approaches have been developed to produce virus-resistant plants in recent years, the use of classical genetic resistance remains the strategy of choice for practical control of plant viruses. In this Special Issue, Yan et al. reviewedOverall, the papers in this Special Issue reveal different perspectives of current research on plant viruses, from applied field studies to investigations into the intricate mechanisms involved in the tripartite interactions between viruses, plants, and vectors."} {"text": "Liver dysfunction in dengue varies from mild injury with elevation of transaminases to severe hepatocyte injury. The aim of our study was to assess the prevalence of hepatic dysfunction in patients with dengue and to correlate between the severity of the disease with the extent of hepatic dysfunction.retrospective cross-sectional observational study including 120 patients with confirmed dengue serology admitted in Medicine Department of Father Muller Medical College during November 2018-December 2019. Patient demographics, presenting symptoms, clinical signs, laboratory parameters such as complete blood count, serum glutamic-oxaloacetic transaminase (SGOT), serum glutamic-pyruvic transaminase (SGPT), alkaline Phosphatase (ALP), total and direct bilirubin; serum albumin and globulin levels were collected. Patients were categorized based on the modified WHO classification of 2009 into dengue with or without the warning signs and severe dengue. Comparison of multiple means across disease severity was performed using One Way-ANOVA with post hoc analysis using least significant difference. Pearson's correlation coefficient test was used to calculate the correlation between transaminases and platelet count. P-value <0.05 and CI 95% were considered in all analyses.serum glutamic-oxaloacetic transaminase was elevated in 66.7%, 78.6% and 91.7% patients of dengue without warning signs, warning signs and severe dengue respectively. Serum glutamic-pyruvic transaminase was elevated in 42.4%, 52.4% and 91.7% patients of dengue without warning signs, warning signs and severe dengue respectively. Patients with elevated SGOT (93.8%) and SGPT (81.2%) had a higher incidence of bleeding manifestations. Hypoalbuminemia (50.8%) and A: G ratio reversal (27.3%) was significantly more in severe dengue (p<0.0001). Serum glutamic-oxaloacetic transaminase and serum glutamic-pyruvic transaminase levels negatively correlated with platelet count (p<0.0001).liver involvement in the form of elevated transaminases was found in 74.2% dengue patients. Serum glutamic-oxaloacetic transaminase and serum glutamic-pyruvic transaminase level increases with increase in dengue severity which is indicated by fall in platelet count as they are negatively correlated with each other. Liver damage is one of the common complications of dengue and transaminitis, hypoalbuminemia and reversal of A: G ratio should be used as biochemical markers in dengue patients to detect and monitor hepatic dysfunction. Aedes family . Mean hematocrit was significantly elevated in patients with elevated liver enzymes as compared to patients with normal levels of liver enzymes . Mean platelet count was significantly low in patients with elevated liver enzymes as compared to patients with normal levels of liver enzymes Patients with elevated SGOT (93.8%) and SGPT (81.2%) had a higher incidence of bleeding manifestations. 
Serum glutamic-oxaloacetic transaminase and serum glutamic-pyruvic transaminase levels increase with increasing dengue severity, which is indicated by a fall in platelet count, as the two are negatively correlated with each other (p ≤ 0.0001), as shown in the corresponding table. Other investigators in Mangalore reported a similar mean age among dengue patients (34.30 ± 15.0 years). The broad spectrum of hepatic dysfunction in dengue varies from asymptomatic elevation of the transaminases to fatal fulminant hepatic failure. In our study, 74.2% of patients had elevated transaminases; SGOT was elevated in 73.3% and SGPT in 50.8% of the patients, which is similar to the findings of various other studies. We observed significant differences in mean values of liver function tests when compared across disease severity. Mean SGOT, SGPT, ALP, total bilirubin and direct bilirubin were significantly higher, and albumin levels significantly lower, among severe dengue patients when compared with dengue with and without warning signs. Thus it can be concluded that elevation of the transaminases, total and direct bilirubin and ALP levels, hypoalbuminemia and A:G reversal are all markers of a more severe form of the disease. Similar conclusions have been drawn by Bandyopadhyay et al and Soni et al, among others.
Multiple mechanisms are responsible for liver injury in dengue, such as direct viral cytopathic effects, immune-mediated injury and hypoperfusion. Microvesicular steatosis, hepatocellular necrosis, Kupffer cell hyperplasia and destruction, Councilman bodies and inflammatory cell infiltrates are some of the changes observed in human post-mortem studies. Immunohistochemistry studies have also shown infiltration of the hepatic acini with CD4+ and CD8+ T cells along with a higher expression of IFN-γ, thereby implicating Th1 cells. Dengue is also known to cause microcirculatory dysfunction due to venular or sinusoidal endothelial injury, which can cause hepatocyte ischaemia irrespective of the presence of hypotension. Hypoalbuminemia as a part of the spectrum of hepatic dysfunction in dengue is not well studied; Saha et al and Wong et al have reported on it. Jaundice is an ominous sign of poor prognosis in dengue and is multifactorial; Itha et al and others have reported on jaundice in dengue patients.
In our study, mean SGOT, SGPT and ALP levels were significantly elevated in patients with shock as compared to patients without shock. Though it has been postulated that hepatic dysfunction may be seen even in the absence of hypotension, owing to microcirculatory dysfunction, the damage appears to be even greater in the presence of shock. In the study sample, we encountered two deaths, both in female patients with severe dengue. Therefore, mortality from dengue does appear to be important, and it can only be minimized by increasing awareness of dengue-specific signs and symptoms among people. Strengths of the study: 1) it is the first study conducted in an adult population to estimate the prevalence of hepatic involvement in dengue patients based on the 2009 modified WHO categorization; 2) the study has addressed the knowledge gap in the prevalence of hypoalbuminemia among dengue patients and in the values of liver enzymes among dengue patients with/without shock and bleeding manifestations.
3) Our study has addressed the relationship between the signs and symptoms of dengue and hepatic dysfunctionLimitations of the study: 1) the study sample is small, may be statistically less accurate compared to studies with a larger population and our study was a retrospective study. 2) The patients were selected from a tertiary care center which usually tends to see a clustering of more severe cases as the less severe ones may be treated on out-patient basis. Hence the results of the study may not be an accurate representation of the entire population. 3) Liver biopsy which is a definitive diagnostic test of dengue hepatitis was not done due to financial and ethical reasons.Liver involvement in the form of elevated transaminases was found in 74.2% dengue patients. Serum glutamic-oxaloacetic transaminase and serum glutamic-pyruvic transaminase level increases with increase in dengue severity which is indicated by fall in platelet count as they are negatively correlated with each other. Serum glutamic-oxaloacetic transaminase was elevated more than serum glutamic-pyruvic transaminase in dengue patients. Liver damage is one of the common complications of dengue and transaminitis, hypoalbuminemia and reversal of A: G ratio should be used as biochemical markers in dengue patients to detect and monitor hepatic dysfunction.Hepatic dysfunction is commonly seen in dengue and is related to the disease severity;It mainly occurs in the form of elevation of the transaminases.The importance of identifying specific dengue warning signs that are predictive of hepatic dysfunction;While SGOT and SGPT are commonly evaluated in dengue patients, serum albumin levels and A: G ratio are usually neglected. Our study shows that hypoalbuminemia and A: G ratio <1 are markers of disease severity. Hence, they need to be included in the initial workup of dengue patients for prognostication."} {"text": "Glioblastoma (GBM) is the most common primary central nervous system tumor in adults, accounting for approximately 80% of all brain-related malignancies . It is aThe seven review articles focus on topics of great interest. O\u2019Rawe et al. highlighDi Nunno et al. review pAs current treatment strategies have not delivered significant improvements in GBM patient survival, some emerging therapeutics have redirected their efforts towards reprogramming the patient\u2019s immune system to generate an anti-tumor response. The review by Chokshi et al. focuses One of the hallmarks of cancer cell biology is an altered cell metabolism, with metabolic reprogramming occurring in cancer cells to facilitate an increase in cell proliferation, maintain self-renewal and develop treatment resistance. The review by Ghannad-Zadeh et al. focuses Ho et al. discuss Bozzato et al. highlighThe manuscript by Mozhei et al. reviews The five original articles in this Special Issue outline the innovative approaches and methodologies that research groups are utilizing to uncover novel treatment strategies for the treatment of GBM. Shapovalov et al. used a gAs angiogenesis and apoptosis play key roles in the development of GBM, the study by Scuderi et al. focused Integrin \u03b1\u03bd\u03b23 receptors are overexpressed in a number of different cancers, including GBM, especially at the tumor margins (invasive regions) and blood vessels within the tumor, facilitating tumor cell motility and invasion through interactions with the extracellular matrix. 
It is known that the extracellular domain of integrin \u03b1\u03bd\u03b23 contains a novel small-molecule binding site, and the authors in the study by Godugu et al. synthesiSchmitt et al. undertooFinally, as there has been evidence linking ion channels in cancer cells to a pro-invasive phenotype, and also that invadopodia as cancer-cell-based structures function to degrade the ECM and facilitate the invasive capacity of the cells, Dinevska et al. performeIn summary, this Special Issue contains a set of multidisciplinary contributions that utilize various techniques and methodologies to investigate novel treatment strategies for GBM. As Guest Editor, I wish to thank all of the authors for their involvement in the Special Issue, but more importantly for tackling this challenging disease with the intention of potentially providing alternative therapeutic strategies for GBM patients in the future, which could significantly improve patient outcome."} {"text": "A significant proportion of adults who are admitted to psychiatric hospitals are homeless, yet little is known about their outcomes after a psychiatric hospitalisation discharge. The aim of this study was to assess the impact of being homeless at the time of psychiatric hospitalisation discharge on psychiatric hospital readmission, mental health-related emergency department (ED) visits and physician-based outpatient care.N = 91 028) were included and categorised as homeless or non-homeless at the time of discharge. Psychiatric hospitalisation readmission rates, mental health-related ED visits and physician-based outpatient care were measured within 30 days following hospital discharge.This was a population-based cohort study using health administrative databases. All patients discharged from a psychiatric hospitalisation in Ontario, Canada, between 1 April 2011 and 31 March 2014 ) and to have an ED visit ). Homeless individuals were also over 50% less likely to have a psychiatrist visit (aHR = 0.46 (95% CI 0.40\u20130.53)).There were 2052 (2.3%) adults identified as homeless at discharge. Homeless individuals at discharge were significantly more likely to have a readmission within 30 days following discharge (17.1 Homeless adults are at higher risk of readmission and ED visits following discharge. They are also much less likely to receive post-discharge physician care. Efforts to improve access to services for this vulnerable population are required to reduce acute care service use and improve care continuity. Since our study included individuals who have been discharged after a psychiatric hospitalisation, physicians are a necessary part of planned follow-up to ensure continuity of medication initiated or modified during the psychiatric hospitalisation and to provide medical assessment of stability and response to treatment. Indeed, receipt of a physician visit within 7 days of a hospitalisation discharge is a standard mental health system performance indicator routinely reported by Health Quality Ontario , an independent, non-profit research organisation that holds population-level data, including administrative data. A unique, encrypted identifier is used to anonymously link the databases described below for each individual in the cohort. The databases used in this research can all be accessed through ICES. The Registered Persons Database (RPDB) contains information on age, gender and postal code (region of residence). 
Implemented in 2005, the Ontario Mental Health Reporting System (OMHRS) includes information on all admissions that occur in psychiatric inpatient beds for adults aged 18 and older in Ontario, which includes approximately 5000 psychiatric inpatient beds.et al., The data in the OMHRS are gathered using the Resident Assessment Instrument \u2013 Mental Health (RAI-MH) , living situation at admission, as well as income quintile. Clinical variables included the Depression Rating Scale (DRS), Positive Symptom Scale (PSS) and Mania Symptom Scale (MSS), all captured in the RAI were assessed using bles see . Second,\u03c72 tests . Finally\u03c72 tests , using CThere were 95\u00a0230 patients discharged from a psychiatric inpatient unit during the study period. After applying our exclusion criteria, 91\u00a0028 patients remained in the study cohort. Of those, 2052 patients (2.3%) were identified as homeless at discharge. Baseline characteristics by homeless status at discharge are found in v. 11.6%, 95% CI; aHR\u00a0\u00a0=\u00a01.87 (95% CI 1.68\u20132.08)).The risk of a psychiatric readmission at 30 days was 17.1% for homeless patients, in comparison with 9.8% for non-homeless patients (aHR\u00a0\u00a0=\u00a01.43 (95% CI 1.26\u20131.63)) see . Finallyv. 28.4%, p\u00a0<\u00a00.0001) ), a psychiatrist ) or both a family physician and a psychiatrist ). When only mental health-related visits post-discharge were considered, 56.5% of homeless patients had no outpatient care in the same period . Homeless adults were also less likely to have had a mental health-related family physician visit , an outpatient psychiatry visit or both .In terms of follow-up care after discharge for any reasons , 46.3% of the homeless population had no outpatient care in the 30 days that followed discharge (46.3 001) see . Homeleset al., et al., et al., To our knowledge, this is the first study to evaluate outcomes following discharge in a large comprehensive population dataset of homeless adults with mental illness. This is the first study to date to examine the rate of readmission within specific periods of time following discharge from a psychiatric hospital, and by far the largest study of outpatient visits after psychiatric discharge. At the time of discharge from a psychiatric hospitalisation, more than one out of every 50 adult patients was identified as homeless. The reality of psychiatric discharge to the street or to shelter has been rarely explored (Forchuk et al., et al., et al., et al., et al., et al., Compared with their housed counterparts, homeless people at discharge tended to be men, younger and to have higher illness severity. In the year prior to their psychiatric admission, individuals discharged as homeless were less likely to have had family physician visits or outpatient psychiatry visits, while at the same time more likely to have used acute care services: psychiatric hospitalisation and ED visit. They were also less likely to have physician visits either from a primary care physician or from a psychiatrist within 30 days following discharge, suggesting that homelessness is associated with paradoxically poor access to care following discharge despite higher need for care continuity based on illness severity. During that same period of time, homeless adults were also 43% more likely to be readmitted to a psychiatric unit. These findings point to the importance of optimizing the transition to outpatient care following discharge from hospital. 
For example, the Critical Time Intervention model has been shown to decrease homelessness (Herman et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., Previous studies have reported high readmission rates for homeless adults after a hospital discharge (Appleby and Desai, et al., et al., et al. (et al., et al., et al., et al., et al., et al., et al., Continuity of care following a psychiatric discharge is known to be low Boyer, . A study, et al. showed tet al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., One limitation of this research is that the indicator of homelessness at discharge found in the OMHRS dataset has not been validated, hence some participants might have been assigned the wrong housing status (Tsai et al., Homeless has become a crisis worldwide and people facing it have significant unmet mental health needs (Gaetz"} {"text": "Rep.7, 11046 (doi:10.1038/s41598-017-10669-4)) by using the same ecologically valid paradigm in an independent sample with similar age ranges. We were able to replicate the previously observed results of a preservation or even enhancement in socio-affective processes, but a decline in socio-cognitive processes for older adults. Our findings add to the understanding of how social affect and cognition change across the adult lifespan and may suggest targets for intervention studies aiming to foster successful social interactions and well-being until advanced old age.Anticipating population ageing to reach a historically unprecedented level in this century and considering the public goal of promoting well-being until old age, research in many fields has started to focus on processes and factors that contribute to healthy ageing. Since human interactions have a tremendous impact on our mental and physical well-being, scientists are increasingly investigating the basic processes that enable successful social interactions such as social affect and social cognition (Theory of Mind). However, regarding the replication crisis in psychological science it is crucial to probe the reproducibility of findings revealed by each specific method. To this end, we aimed to replicate the effect of age on empathy, compassion and Theory of Mind observed in Reiter and colleagues' study (Reiter Thus, met al. \u201314. Reseet al. ,16.with somebody [for somebody that is strongly linked to the motivation to help the other person [Socio-affective processes include empathy and compassion. The definition of empathy ranges from a multifaceted construct\u2014comprising affect sharing and empathic concern as an emotional and mentalizing or perspective taking as a cognitive component of empathy \u201320\u2014to a somebody . More prsomebody ,22. Compr person ,24. Somer person . In the et al. [Socio-affective and socio-cognitive processes are not just conceptualized distinct, there is also evidence from behavioural, neuroimaging and (sub-)clinical studies that supports their independence ,26. Inveet al. found noet al. \u201330 mostlet al. , while aet al. ,32,33 moet al. . Furtheret al. \u201339. Wheret al. .et al. [either the socio-emotional or the socio-cognitive route [et al. [et al. [In view of the conceptual and neural differentiation of social affect and social cognition as well as the findings from (sub-)clinical populations, it is likely that these two subcomponents of social understanding might also be differentially affected by age. 
Over the last 30 years, a vast body of research emerged on this topic . One of et al. . The autet al. . Howeverve route ,59\u201361, w [et al. . The aut [et al. ) that en [et al. observedHowever, with regard to the replication crisis in science in general and in psychological science in particular ,64, it iet al. [et al. [The reproducibility of the results of Reiter et al. on the e [et al. by using. 2This study was part of a larger project consisting of three testing sessions. Here, we focus on the behavioural measures of the third session in which participants performed the EmpaToM task ,28,67 an. 2.1N = 45, age range = 18\u201330; OA: N = 56, age range = 65\u201378). All participants were recruited via flyers and newspaper announcements in the greater Dresden city region. The OA were additionally recruited from sport, language and university courses as well as choirs for OA and the database of the Lifespan Developmental Neuroscience Lab at Technische Universit\u00e4t Dresden. All participants spoke and understood German fluently, had normal or corrected to normal vision, and were right-handed as well as MRI-suitable individuals . Additionally, participants were not allowed to have participated in the study of Reiter et al. [et al.\u2019s sample [a priori exclusion criteria for the present sample: self-reported psychological or neurological disease currently or within the last 12 month, consumption of more than five cups of coffee a day , smoking of more than five cigarettes per day, drinking of more than 12 g (women)/24 g (men) pure alcohol per day, consumption of illegal drugs more often than two times per month or having a history of drug abuse or addiction, as well as having a prescribed hearing aid device, attested tinnitus or colour blindness. The OA were additionally screened for cognitive impairment (in the first session of the project) via the official German version of the Montreal Cognitive Assessment , problems with hearing or sight during the task (N = 4), termination of the session by the participant or due to technical problems (N = 2) or neurological abnormalities in the brain (N = 4). In contrast to Reiter et al. [N = 2) that performed below chance level (less than 0.33). This did, however, not affect the results in the present study .One hundred and one individuals participated in the third session . With a [et al. , who use [et al. . Hence, r et al. , we did M = 24.00, s.d. = 3.20) and 44 older individuals. Between the two age groups, the gender distribution was comparable. YA and OA did also not differ significantly in their years of education, relationship status or residence which verified that the final sample size was nevertheless appropriate to detect the effects with a power of 1-beta = 0.95.The final sample comprised 42 younger and performed the battery of cognitive functioning tasks afterwards, or vice versa. At the end of the third session, they received 8.50\u20ac per hour for participation. The Technische Universit\u00e4t Dresden ethics committee granted ethical approval in accordance with the Helsinki declaration for the whole project (EK 486 112 015).et al. [et al. [The EmpaToM is a videt al. , we allo [et al. , we did [et al. nor repl [et al. affect tet al.\u2019s sample [et al.\u2019s sample [et al.\u2019s sample [et al.\u2019s sample [et al. [In addition, participants performed a battery of cognitive functioning tasks. These comprised the Trail Making Test A and B , the Ids sample . Regardis sample as well s sample . 
Howevers sample and finds sample , which as sample and the s sample ,76, OA w [et al. with com. 2.3t-tests or \u03c72-tests using the R package stats [t-test were violated, a non-parametric test (Mann\u2013Whitney U-test) was performed using the R package stats [Age differences in sample characteristics and cognitive measures were examined with independent ge stats . If the ge stats . Regardiet al. [Analogous to Reiter et al. measureset al. [t-test using the R package stats [U-Test using the R package stats [Coherent with Reiter et al. , the folet al. . If the et al. . To analge stats . Since tge stats as well.et al. [z-scores of the TMT A (RT) and B (RT), IDP (mean of RT and accuracy), and DSb ) as well as a unit-weighted composite score reflecting a proxy of verbal intelligence (based on the z-scores of the SAW (RT) and SAW (accuracy)) to test for potential effects of these cognitive abilities on the observed age effects. We performed the repeated measures analysis of covariance separately for the proxy of fluid intelligence and for the proxy of verbal intelligence with the R package rstatix [Also consistent with Reiter et al. , we furt rstatix and the rstatix . All dat rstatix and R 3. rstatix .. 3. 3.1F1,81 = 675.143, p < 0.001, \u03b7p2 = 0.893), suggesting that emotionally negative videos elicited a stronger empathic response than neutral videos in our participants and no significant interaction effect in our sample. Thus, YA and OA did not differ significantly in their empathic response to emotionally negative and neutral videos. Since the assumption of normality was violated for the valence ratings of the neutral videos in YA and emotional videos in OA, we additionally performed a robust mixed ANOVA that validated the former analysis and replicated the findings of Reiter et al. [The mixed ANOVA of the valence ratings revealed a significant main effect of emotionality of the video , with OA reporting more compassion than YA across all conditions , with higher accuracies for ToM questions , and a significant main effect of age group , with YA answering more accurately than OA . We also found a significant interaction between question type and age group that qualified the main effects and is in line with the findings of Reiter et al. [post hoc tests showed that both YA and OA performed significantly better in the ToM condition than in the factual reasoning condition, but the effect was stronger for OA than YA . Thus, tF1,84 = 0.570, p = 0.452, \u03b7p2 = 0.007) nor of age group , suggesting that ToM and factual reasoning questions were answered similarly fast by YA and OA in our sample and replicates Reiter et al.\u2019s results [F1,84 = 0.906, p = 0.344, \u03b7p2 = 0.011) ; however, robust post hoc tests on the difference scores did not reveal a significant difference in RT as a function of age group .The mixed ANOVA for RTs showed neither a significant main effect of question type ( results . The int= 0.011) . This fi results , who fou. 3.3Fs1,83 \u2265 18.123, all ps < 0.001, all \u03b7p2 \u2265 0.179) and compassion . The separate adjustment for the proxy of fluid intelligence or the proxy of verbal intelligence did not alter the absence of an age effect on empathy either . Since the assumption of homogeneity of variances was violated for empathy as well as for compassion, we additionally conducted robust ANCOVAs that validated our former analysis. 
Thus, adding a proxy of fluid intelligence or verbal intelligence as covariate in the analyses did not alter the effects of age on ToM/factual reasoning accuracy and compassion or the absence of an effect of age on ToM/factual RT and empathy, which is in line with Reiter et al. [To test for potential effects of fluid intelligence and verbal intelligence on the observed age effects on ToM/factual reasoning accuracy and compassion as well as the absence of an age effect on empathy, we conducted separate mixed ANCOVAs, with either the proxy of fluid intelligence or the proxy of verbal intelligence as covariate. Controlling for these covariates separately, yielded the same age effects as observed above on ToM/factual reasoning accuracy and compassion (as concern for the narrator). For empathy we found a significant main effect of video type on the valence ratings, showing that emotionally negative videos are experienced as more negative than neutral videos in both YA and OA . The maet al. , indicat. 4.2et al.\u2019s findings [Post hoct-tests on the interaction effect differed slightly from Reiter et al.\u2019s results [et al.\u2019s sample [et al. [et al. [We assessed socio-cognitive processes after the participants had watched a short video of a protagonist talking about an emotionally negative or neutral life-event by measuring ToM (as the reasoning about a narrator's thought). Factual reasoning (as the reasoning about facts included in the story) was used as control condition. Regarding the accuracy for ToM and factual reasoning, we found a significant main effect of age, revealing that YA outperform OA in both conditions. We also found a significant main effect of question type, showing that both YA and OA perform better for ToM versus factual reasoning questions. The main effects as well as a significant interaction of question type and age that qualified the main effects are in line with Reiter findings . Post ho results . OA perfs sample . However [et al. , in the [et al. . Neverthet al.\u2019s results [et al. [et al. [et al. [et al. [Regarding the response time for the ToM and factual reasoning questions, we found neither a significant main effect of age nor of question type, revealing that both age groups respond equally fast, and both question types are answered equally quickly. This is again in line with Reiter results . However [et al. had repo [et al. , OA perf [et al. had part [et al. , which m. 4.3et al.\u2019s results [To control for potential effects of diverging cognitive abilities between the age groups, we conducted separate ANCOVA analyses for empathy, compassion and ToM with either a proxy of fluid intelligence or a proxy of verbal intelligence as covariate. These control analyses did not qualitatively change the observed results which are in line with Reiter results . This in results ) as well results ,82). As results ,84. The . 4.4et al. [et al.\u2019s sample [We were able to replicate the effect of age on empathy and compassion reported by Reiter et al. \u2014revealins sample ). This ss sample ,86 that et al. [et al. [Nevertheless, the current study has three main limitations that would be important to address in future research. First, testing high-functioning individuals in our sample with equal levels of education in YA and OA, can be regarded as both a strength and a limitation. The latter particularly relates to limitations with respect to external validity or the question whether the findings can be generalized towards other populations. 
In a similar vein, we note that the current sample is, as in most previous research, a \u2018western, educated, industrial, rich and democratic\u2019 (WEIRD) sample , which iet al. , as well [et al. as well [et al. ,99. ThusDespite these limitations the current study lends more evidence to the view that socio-affective and socio-cognitive processes age differentially across the adult lifespan. Further, in view of the replication crisis in psychological science, reproducing the results of a study with an independent sample in a different setting (e.g. here MRI versus test cabins) has great value not only for science itself, but also for the reliability of the results and thus new ideas, theories or interventions that can be derived from these results. Overall, our findings support the assumption that the socio-affective processes are preserved or even enhanced while socio-cognitive processes decline with age, which is derived from the socio-emotional selective theory in the aClick here for additional data file."} {"text": "Methane is an important greenhouse gas, emissions of which have vital consequences for global climate change. Understanding and quantifying the sources (and sinks) of atmospheric methane is integral for climate change mitigation and emission reduction strategies, such as those outlined in the 2015 UN Paris Agreement on Climate Change. There are ongoing international efforts to constrain the global methane budget, using a wide variety of measurement platforms across a range of spatial and temporal scales. The advancements in unmanned aerial vehicle (UAV) technology over the past decade have opened up a new avenue for methane emission quantification. UAVs can be uniquely equipped to monitor natural and anthropogenic emissions at local scales, displaying clear advantages in versatility and manoeuvrability relative to other platforms. Their use is not without challenge, however: further miniaturization of high-performance methane instrumentation is needed to fully use the benefits UAVs afford. Developments in the models used to simulate atmospheric transport and dispersion across small, local scales are also crucial to improved flux accuracy and precision. This paper aims to provide an overview of currently available UAV-based technologies and sampling methodologies which can be used to quantify methane emission fluxes at local scales.This article is part of a discussion meeting issue 'Rising methane: is warming feeding warming? (part 1)'. Both emissions of CH4 to atmosphere and the absolute concentration of CH4 in the atmosphere have increased over the past decade )in situ atmospheric measurements or remote sensing of methane concentrations at the spatial scale of local sources (less than 1\u2009km) and small facilities. UAVs can be equipped to dynamically monitor the lower atmosphere and planetary boundary layer (e.g. )Aside from classifications based on size and payload weight, UAVs can be broadly divided into fixed-wing and rotary-wing aircraft. Fixed-wing UAVs resemble traditional airplanes and the largest of these can ben. 3Accurate wind speed and wind direction is typically required for flux quantification. These can be measured (or inferred) from a suitable nearby monitoring station, equipped with anemometer instrumentation, but doing so may introduce some measure of uncertainty . Preferaet al. )et al. tested a ) ) ([CH4]) . U\u22a5 is tThe mass balance method generally requires that the wind field is constant between upwind and downwind measurements, i.e. 
that the wind speed and wind direction does not change. However, wind field variability can be implicitly accounted for through robust uncertainty propagation . When uset al. [4 and CO2 to quantify a methane flux from landfill. They used two UAV platforms: a fixed-wing UAV equipped with a high-precision CO2 instrument and a rotary-wing UAV tethered to an off-axis Integrated Cavity Output Spectrometer Ultraportable Greenhouse Gas Analyzer [2, while the fixed-wing UAV was used for horizontal measurements of CO2, from which the horizontal distribution of methane was inferred using emission ratios characteristic of landfills (\u22121) with the uncertainties mainly influenced by the variability in the methane background, and the variability in the wind. Their reported uncertainties were comparable to uncertainties derived from other non-UAV-based landfill flux estimation methods (e.g. [2 can be a poor proxy for landfill methane emissions in the presence of interfering (off-site) sources [Allen et al. used theAnalyzer ). The roandfills . Methaneds (e.g. ). Howeve sources and the sources .Figure et al. [\u22121. This was larger than the fluxes quantified by two additional, non-UAV-based methods, which yielded methane emission rates of 5.8\u2009g\u2009s\u22121. Nathan et al. [Nathan et al. used theet al. ,93) was n et al. noted thn et al. .A variant of the mass balance method was used to detect and quantify natural gas leaks using a UAV equipped with a backscatter tunable diode laser absorption spectrometer . This inistances . The metThe mass balance approach, incorporated alongside algorithms to both detect and localize leaks, was used to quantify natural gas emissions using a UAV-based remote sensing measurement platform . Methane (b)F is the emission flux, U\u22a5 is the perpendicular wind speed, H is the height of the emission plume source above the ground and y\u03c3 and z\u03c3 are Gaussian dispersion parameters of the plume in the y and z directions, respectively. It should be noted that for the Gaussian plume method, y is conventionally used to refer to the cross-plume coordinate, whereas x is used to denote the same quantity for the mass balance box method.Gaussian plume models can be used to model enhancements in methane concentration downwind of a point source using Gaussian statistics . This meet al. [\u22121, larger than both the mass balance derived flux (14\u2009\u00b1\u20098\u2009g\u2009s\u22121), and the two independent non-UAV-derived fluxes (both 5.8\u2009g\u2009s\u22121). Ali et al. [Nathan et al. used thei et al. used theThe Gaussian plume method usually requires a large amount of time averaging in order for the instantaneous plumes to resolve into an observed Gaussian plume morphology suited to Gaussian inversion. When sampling in close proximity to an emission source (less than approximately 100\u2009m), the time scale of measurements is unlikely to allow for adequate time averaging, and a non-Gaussian distribution of methane may be observed. The turbulent fluctuation of the wind field combined with the short flight duration of many UAVs may result in the measurement of an instantaneous plume and, therefore, a poorly defined plume morphology. However, repeated flights (or longer duration sampling) in less variable wind conditions, combined with carefully designed flight patterns can mitigate this. 
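To make the two flux-quantification approaches introduced above more concrete, two brief Python sketches follow. The first illustrates the mass balance method: the flux is commonly evaluated as a discretised integral over a downwind sampling plane, F ≈ Σ U⊥(y, z) · Δc(y, z) · Δy · Δz, where Δc is the methane enhancement above background expressed as a mass concentration. The gridding, background removal and meteorological values below are illustrative assumptions, not the procedures of the studies cited in this section.

# Minimal sketch of a downwind-plane mass-balance flux estimate.
# Enhancements are converted from ppb (mole fraction) to kg m^-3 using the
# ideal gas law; temperature and pressure are assumed example values.
import numpy as np

def mass_balance_flux(enhancement_ppb, u_perp, dy, dz,
                      T_kelvin=288.0, p_pa=101325.0):
    """Return an emission flux in kg s^-1 from a gridded downwind plane.

    enhancement_ppb : 2-D array of CH4 enhancements above background (ppb)
    u_perp          : 2-D array of wind speed perpendicular to the plane (m s^-1)
    dy, dz          : grid spacing in the cross-plume and vertical directions (m)
    """
    R = 8.314            # J mol^-1 K^-1
    M_CH4 = 16.04e-3     # kg mol^-1
    air_molar_density = p_pa / (R * T_kelvin)            # mol m^-3 (ideal gas)
    delta_c = enhancement_ppb * 1e-9 * air_molar_density * M_CH4   # kg m^-3
    return float(np.sum(delta_c * u_perp) * dy * dz)

# Toy example: 20 x 10 grid, uniform 5 ppb enhancement, 3 m s^-1 perpendicular wind
enh = np.full((10, 20), 5.0)
wind = np.full((10, 20), 3.0)
print(f"{mass_balance_flux(enh, wind, dy=5.0, dz=5.0):.3e} kg s^-1")

The second sketch concerns the Gaussian plume method. In its textbook reflected form the relationship is C(y, z) = F / (2π U σy σz) · exp(-y^2 / 2σy^2) · [exp(-(z - H)^2 / 2σz^2) + exp(-(z + H)^2 / 2σz^2)], with the symbols as defined above; the code implements this forward model and recovers F from synthetic observations by a one-parameter least-squares fit. It is a simplified illustration rather than the NGI or any of the cited implementations, and the wind speed, source height and dispersion parameters are assumed values.

# Textbook Gaussian plume forward model plus a simple inversion for the
# source strength F. All meteorological and dispersion constants are assumed.
import numpy as np
from scipy.optimize import curve_fit

U, H, SIGMA_Y, SIGMA_Z = 3.0, 10.0, 8.0, 5.0   # assumed wind (m/s), source height (m), dispersion (m)

def gaussian_plume(yz, F):
    """Concentration enhancement at cross-plume distance y and height z for source strength F."""
    y, z = yz
    norm = F / (2.0 * np.pi * U * SIGMA_Y * SIGMA_Z)
    cross = np.exp(-y**2 / (2.0 * SIGMA_Y**2))
    vert = (np.exp(-(z - H)**2 / (2.0 * SIGMA_Z**2))
            + np.exp(-(z + H)**2 / (2.0 * SIGMA_Z**2)))   # ground-reflection term
    return norm * cross * vert

# Synthetic "observations" on a downwind sampling plane, then a one-parameter fit for F
y_obs = np.tile(np.linspace(-30.0, 30.0, 31), 5)
z_obs = np.repeat(np.linspace(2.0, 20.0, 5), 31)
true_field = gaussian_plume((y_obs, z_obs), 0.02)                    # 0.02 kg/s source
observed = true_field + np.random.normal(0.0, 0.02 * true_field.max(), true_field.size)

F_fit, _ = curve_fit(gaussian_plume, (y_obs, z_obs), observed, p0=[0.01])
print(f"Recovered source strength: {F_fit[0] * 1000.0:.1f} g/s")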
Individual case studies would need to be evaluated for the conditions specific to the environment at the time of sampling, by comparing data with fitted Gaussian plume assumptions.y\u03c3 and z\u03c3 increase linearly with distance from the source location . This method requires adequate spatial sampling density in both the y and z directions to resolve a flux [A recently developed adaptation of the Gaussian plume flux inversion, referred to as the near-field Gaussian plume inversion (NGI) technique, was adapted for downwind sampling of turbulent plumes close to the emission source . In the e a flux .et al. [et al. [Shah et al. tested t [et al. conclude\u22121, in good agreement with fluxes derived from nearby ground-based monitoring [The NGI method was used to quantity the methane emission flux resulting from unintended cold-venting during flowback operations at a hydraulic fracturing facility in the UK figure . In thisnitoring . The rannitoring .Figure et al. [z direction (z\u03c3) was accounted for by using an open-path methane sensor measuring column-integrated methane, and dispersion in the y direction (y\u03c3) was assumed to be small, due to the close proximity to the source. Quantified fluxes were shown to be in agreement with the mass balance approach used in the same study [Another approach related to the Gaussian inversion method was used by Golston et al. to quantme study . (c)A methane emission flux could feasibly be quantified using UAVs in conjunction with the tracer release method, and a controlled release of a reference tracer gas at a known rate from the source origin e.g. ,101\u2013103)\u2013103101\u20131Alternative atmospheric inversion approaches could also be applied to UAV-based measurement data to quantify methane fluxes. For example, Lagrangian particle dispersion models simulate the paths of many massless particles as they travel with the local wind field (e.g. ) and havet al. [et al. [Ravikumar et al. compared [et al. conclude. 6The literature reviewed here demonstrates the clear capability of small UAV-based measurement platforms for quantifying methane fluxes from a range of sources at local scales (less than 1\u2009km). Such capability has only become available over the past decade, as a result of advances in both UAV technology and the miniaturization of instrumentation for the measurement of methane. Lightweight and high-performance instrumentation, with the capacity to measure methane with a precision on the order of a few ppb, are now becoming commercially available (at the time of writing), and developments continue to be made at pace. Such instrumentation will enable more accurate flux quantification, particularly of smaller emission sources which may not be captured by current methodology. Further developments in UAV technology would also be highly beneficial. Enhanced flight duration, as a result of improved battery capacity, would be extremely advantageous in terms of plume mapping, allowing for extended temporal ranges and refined spatial resolution of methane measurements. A summary of advantages and disadvantages of different UAV platform types, sampling methodologies, and flux quantification techniques is presented in UAVs provide a highly versatile platform, uniquely suited for high-density spatial mapping of methane plumes, and quantifying methane emissions at local scales. 
Their use could easily be incorporated into policy and regulation concerning monitoring and quantification of methane emissions from polluting industries and also"} {"text": "Adipose tissue senescence is implicated as a major player in obesity- and ageing-related disorders. There is a growing body of research studying relevant mechanisms in age-related diseases, as well as the use of adipose-derived stem cells in regenerative medicine. The cell banking of tissue by utilising cryopreservation would allow for much greater flexibility of use. Dimethyl sulfoxide (DMSO) is the most commonly used cryopreservative agent but is toxic to cells. Trehalose is a sugar synthesised by lower organisms to withstand extreme cold and drought that has been trialled as a cryopreservative agent. To examine the efficacy of trehalose in the cryopreservation of human adipose tissue, we conducted a systematic review of studies that used trehalose for the cryopreservation of human adipose tissues and adipose-derived stem cells. Thirteen articles, including fourteen studies, were included in the final review. All seven studies that examined DMSO and trehalose showed that they could be combined effectively to cryopreserve adipocytes. Although studies that compared nonpermeable trehalose with DMSO found trehalose to be inferior, studies that devised methods to deliver nonpermeable trehalose into the cell found it comparable to DMSO. Trehalose is only comparable to DMSO when methods are devised to introduce it into the cell. There is some evidence to support using trehalose instead of using no cryopreservative agent. Ageing results in adipose tissue dysfunction, resulting in systemic effects such as peripheral insulin resistance and inflammation. Cellular senescence and progenitor cell dysfunction are also seen in ageing adipose tissues, and these may represent potential therapeutic targets in age-related disease. Adipose tissue comprises many cell types, generally divided into two fractions: the adipocyte fraction (AF), which contains primarily mature adipocytes, and the stromovascular fraction (SVF), which comprises progenitor cells , pericytes, and fibroblasts. Keeping these cells in long-term culture for clinical use or research has inherent limitations, and cell banking of tissue by utilising cryopreservation would allow for much greater flexibility of use .ADSCs also show great promise in regenerative medicine as these cells can differentiate into several different cell types, which can be used for tissue regeneration. For example, ADSCs can differentiate into a large number of cell types, which include chondrogenic, myogenic, osteogenic, angiogenic, and neuronal lineages . With reVarious methods of stromal vascular fraction cryopreservation have been demonstrated , but theDimethyl sulfoxide (DMSO) is the most commonly used CPA in the cryopreservation of ADSCs. DMSO is a small amphipathic molecule with two non-polar methyl groups and a polar sulfoxide group. The solvent activity of DMSO allows it to readily cross cell membranes. DMSO helps prevent the formation of intracellular ice, thereby protecting the tissue from freezing injuries and helping achieve maximum tissue survival during the freezing and thawing process. DMSO is toxic to cells at room temperature and washing and centrifugation are needed after thawing to remove it from the tissue . This isOwing to their low toxicity, sugars have been used as extracellular cryopreservation agents. 
Trehalose (\u03b1-d-glucopyranosyl \u03b1-d-glucopyranoside) is a disaccharide of glucose with low toxicity in contrast to DMSO. It is synthesised by species of bacteria, fungi, yeast, insects and plants that are prone to dehydration but is absent from vertebrates . Such orIn an update on the cryopreservation of adipose tissue and ADSCs, Shu et al. highlighA systematic review was performed using the following databases: PubMed, CINAHL, Web of Science, MEDLINE, Cochrane Library and EMBASE. We included the following keywords and search terms: adipose stem cells, cryopreservation, trehalose, lipoaspirate, autologous fat grafting and cryopreservation. Those keywords were searched in that order in each database. We used a snowballing technique with an arbitrary cut-off of greater than two hundred articles for each database. An initial reviewer performed the search before being checked by a second reviewer. Any disputes about inclusion were resolved by a third reviewer.The following inclusion criteria were applied: (1) interventional studies using trehalose in the cryopreservation of human adipose-derived stem cells, including case series, case\u2013control, cohort studies and randomised controlled trials; (2) studies translated into the English language; and (3) full article accessible.Exclusion criteria: (1) unpublished literature; (2) literature published before 2000; (3) articles not available for free viewing; and (4) studies using animal adipose-derived stem cells.A total of 430 articles were retrieved. A total of thirteen articles with fourteen suitable studies were relevant for this systematic literature review. The studies included and excluded are summarised in Eight studies were included and are described in Pu et al. tested tIn a later study, Pu et al. injectedCui et al. comparedPu et al. found inFour Level 3 studies were available for analysis . Two stuRao et al. examinedP < 0.05).Roato et al. comparedFour studies were included for analysis . Study pP < 0.5). Rao et al. [Cui et al. examinedo et al. used nano et al. and Cui o et al. reportedThere are several limitations to this review. Firstly, there is an English language bias in our search strategy as only articles published in English were reviewed. However, a study by Juni et al. found thThe results from eight studies in our systematic review showed that DMSO and trehalose could be combined effectively to cryopreserve adipocytes. The advantage of this combination is that it reduces the dose of DMSO required. All four studies comparing trehalose and DMSO to simple cryotherapy with liquid nitrogen alone found a significant improvement in cryopreservation ,23,25. PP > 0.05). Wolter et al. [To maximise cell viability, a balance is required between reducing intracellular ice formation (by cooling slowly) and cooling quickly enough to minimise the solution effects. Mazur proposed a two-factor hypothesis attributing a lower cell survival rate when the cooling rate is slower than optimal . A faster cooling rate than optimal also decreases cell survival due to intracellular ice formation ,34,35,36r et al. comparedStorage temperature is thought to significantly influence the viability of cryopreserved adipose tissue. Subzero temperatures result in ice formation damage, but this can be mitigated by the storage of tissue at less than \u221270 \u00b0C. Rapid thawing can improve cell viability by reducing the risk of intracellular ice formation, and subsequent ice crystal growth. Son et al. 
performeLong-term cryopreserved ADSCs demonstrate a low risk of tumorigenicity and therefore offer great potential in regenerative medicine . DevelopFinally, there remains some variation in the cryopreservation of tissues and further studies are required to determine optimal protocols. Choudhery et al. demonstrated that adipose tissue could be successfully cryopreserved without compromising cell morphology, as well as subsequent proliferation and differentiation potential , making"} {"text": "In the beginning was\u2026 the genome.Nature offices in Little Essex Street and handed in these giant Figures together with envelopes containing the manuscript and the other Figures, which I had brought down from Manchester. Nature then lost the copies of Figure 1 but, by 27th March 1992, the paper was accepted and appeared in the journal on the 7th of May 1992 to the physical distance (in kb) showed a\u00a0>\u00a040-fold\u00a0range for different intervals along the chromosome, being smallest close to the centromere and greatest half-way down each chromosome arm that did not appear until the following year. The regular issue of Nature that appeared at the same time carried a \u2018News & Views\u2019 piece on the yeast genome from Craig Venter and his colleagues RNAs , it was immediately obvious that the normal course of genetics would need to be turned on its head. Instead of isolating mutants with defective or altered phenotypes and using classical genetic analyses to define functional genes and map their relative locations on chromosomes, now genes would be both identified and mapped by genome sequencing, and it would be necessary to work in the opposite direction to discover their functions (Oliver https://cordis.europa.eu/project/id/BIO4950080), comprising some 145 laboratories from 14 countries, was established at the beginning of 1996 with the aim of determining the role of 1000 S. cerevisiae genes of unknown function that had been identified from the genome sequence diploids so that all protein-encoding genes should be interrogated. Studies were carried out both in batch that the ancestor of this and related species underwent during their evolution. This major evolutionary event was first inferred by Ken Wolfe and Dennis Shields waltii by carrying out what have come to be known as \u2018multi-omic\u2019 experiments interactions by using the synthetic genetic array (SGA) methodology developed by Charlie Boone, Brenda Andrews, and colleagues in Toronto and UNICELLSYS . YSBN had a coordinating role in systems biology research using yeast as a model organism and aimed to employ both mathematical analyses and computational tools to integrate experimental data; it set out to develop common resources and standards. UNICELLSYS had the more focussed aim of achieving a quantitative understanding of how cell growth and proliferation are controlled and coordinated by both extracellular and internal signals. These two networks operated in a very synergistic manner.As with Functional Genomics, the advent of Systems Biology saw the establishment of networks of European yeast researchers to pursue major programmes of work\u2014the Yeast Systems Biology Network , perhaps because the transcriptome analyses were not carried out using RNAseq technology . This model (Yeast1) was rapidly updated even during the course of YSBN (Nookaew et al. et al. et al. S. cerevisiae metabolic network is the subject of an excellent recent review (Yu et al. 
A major contribution of YSBN was to establish a consensus stoichiometric model of the yeast metabolic network (Herrg\u00e5rd et al. (S. cerevisiae genome sequence (Goffeau et al. et al. Other systems of modelling yeast metabolism have been developed. Reiser et al. construcet al. et al. et al. kcat\u00a0or km values (Kroll et al. et al. et al. et al. (et al. (E.coli (Edwards and Palsson The most important developments of the genome-scale stoichiometric model of yeast metabolism have involved the exploitation of transcriptomic (Lee . et al. used mac (et al. . In thiset al. et al. et al. et al. et al. et al. et al. Much of the work described in this article is the result of open-ended research programmes that aimed to produce useful data. In other words, they were hypothesis generating, rather than hypothesis testing. Science has always needed both these kinds of research (Kell and Oliver"} {"text": "R. irregularis genes during co-existence of two R. irregularis isolates in cassava. (3) An adjusted probability of <0.1 led to a substantial number of false positives. (4) There was no evidence that two R. irregularis isolates showed divergence in a putative MAT-locus indicating mating compatibility. We consider that there are serious errors in the assumptions and rationale used for the re-analyses, a mis-interpretation of the original data, as well as the discarding of important data without any justification.The commentary by Malar et al. raised fAlthough Malar et al. \u201cdo not exclude the possibility that the genes identified by Mateus et al. are involved in mating,\u201d they qualify the homology inference between genes differentially expressed in the co-inoculation treatment and genes involved in mating in other fungal species as \u201cspurious evolutionary relationships\u201d or \u201cnot the best ortholog\u201d. Those statements imply that they attach no importance to the demonstrated sequence homology relationships identified in Mateus et al. Orthology does not necessarily imply conservation of gene function and genes with equivalent functions are not necessarily orthologs . TherefoMalar et al. have not considered, or have misunderstood, the experimental evidence on gene expression in interpreting their homology search. It is not surprising that their \u201cbest homologs\u201d were not upregulated, because we already saw that those genes were not upregulated in the original dataset. Our approach comprised performing an experiment to identify genes that were specifically upregulated when two isolates coexisted in planta. We then identified their putative function by homology. We did not look at whether the genes were the closest orthologs. However, we discussed the limitations of an homology approach to identify gene function . To our R. irregularis). In fact, Malar et al. compared the 18 genes against \u201call protein gene catalogs of fungal species from the JGI fungal genomic resource\u201d comprising 1318 taxa. The interpretation of the number of hits on a such large dataset is misleading because if a gene is highly conserved across the fungal kingdom, we would expect hundreds of hits in this database. In contrast, if an R. irregularis gene is highly specific to the Glomeromycotina taxa, we would expect very few hits (because there are less Glomeromycotina genomes in the database). Consequently, the number of hits in Table 1\u00a0from Malar et al. reflect the size of the database used and how conserved a given gene is, rather than whether a gene is from a large gene family. 
Malar et al. identified the so-called \u201cclosest ortholog\u201d in R. irregularis of fungal mating genes from other fungal species by showing the \u201cbest hit\u201d using OrthoMCL. However, differentiating paralogs from orthologs is a complicated task, in very distant species, especially if the organisms are highly paralogous. A more cautious analysis for each gene, including a confirmation that they are located in similar genomic locations, would lend more certitude that a given gene could be an ortholog. Consequently, the evaluation of RNA expression of their \u201cbest hit\u201d remains incomplete in terms of the effort to find the best orthologs.Malar et al. claim the identification of hundreds of hits of the 18 genes differentially expressed in Mateus et al. \u201cagainst the high-quality protein databases from the JGI Mycocosm Rhiir2\u201d , show that they indeed performed a two-way comparison of DAOM197198 vs. co-inoculation with DEseq2 but then a comparison with the three treatments B1, co-inoculation, and DAOM197198 (raw data from GitHub repository: madhubioinfo). The consequence of the three treatments as input in the DEseq2 analysis is that the software compares the first and the third treatments. In their reanalysis, Malar et al. actually compared B1 vs. DAOM197198, which completely invalidates their claims.Malar et al. claim to have found specific upregulation in only two putative orthologous genes in their reanalysis of the RNA-seq data. But they have discarded data without giving any scientific justification. In Supplementary Table 2, Malar et al. appear to have manually edited several cases with: \u201cN.A\u2009=\u2009cannot be calculated based on mapped reads\u201d even though the analysis should always yield a value, and did in our analysis. There is no explanation given about why they considered expression values could not be calculated, especially for only one comparison of a given gene (DAOM197198 vs. Co-inoculation) but not the other comparison of the same gene (B1 vs. co-inoculation). Consequently, the omission of those values that ensured their reanalysis would provide a different conclusion from that of Mateus et al.As clearly stated in Mateus et al., we used an adjusted probability of <0.1 to maximize the identification of interesting genes. However, the change of probability threshold does not change the main finding that a large number of genes upregulated are homologs of genes involved in fungal mating. Eleven, of the 18 highlighted genes, displayed an adjusted probability <0.05 in both comparisons, and the other 7, an adjusted probability below 0.05 in one comparison and between 0.05 and 0.1 in the other comparison (Supplementary Table 6 ). FurtheThe authors questioned the possibility of mating compatibility between isolates B1 and DAOM197198, because no sequence information at the putative MAT-locus was provTo conclude, Malar et al. asked \u201cWhere is the evidence?\u201d To respond to their question, the evidence of specific gene upregulation during AM fungal co-existence is where Malar et al. chose not to see it."} {"text": "Sheaffer blue ink is an effective method to stain arbuscular mycorrhizal (AM) fungi in a variety of plant species. It has, however, received criticism for its potential rapid degradation and short-term viability. The long and medium term storage and viability of stained samples has not, to date, been described for this particular staining method. 
This short communication reports on the viability of 730 samples stained with Sheaffer blue ink stored for the duration of 4 years in microscope slide boxes out of direct sunlight. There was no significant difference in micrograph image quality and presence of stain between years as indicated by the number of AM fungal structures quantified. In conclusion Sheaffer blue ink stain does not deteriorate in the medium term. The present communication evaluates the medium-term viability of Sheaffer blue ink staining for arbuscular mycorrhizal fungi. To date, Sheaffer blue ink stained root materials have not been subject to a further quantification and assessed for the longevity of the original stain during prolonged storage. The samples evaluated in the present communication show that root tissues retain the original stain and maintain the capacity to identify AM fungal structures for a minimum of 4 years.Arbuscular mycorrhizal (AM) fungi are symbiotic biotrophs that are34H28N6O14S4) or lactophenol cotton blue as a method of detecting mycorrhizal structures such as vesicles, arbuscules and intra-radiating hyphae dissolved in ethyl acetate [C4H8O2]) and viewed under a light microscope at \u00d7100 magnification. Samples were stored at room temperature in microscope slide boxes avoiding direct sunlight. Images of samples stained with Sheaffer blue ink n=30 accord samples were takt-tests of equal variance compared the baseline year (the year that the root samples were stained) with observations of root arbuscules for the same samples in the year 2021 . Statistical significance was determined by P values \u22640.05.Statistical analyses were conducted using the R commander software package. Arbuscular mycorrhizal fungal root arbuscules were quantified both visually and using CamLabLite version 1.0.8942.20170412 . The mean and standard error was calculated for each set of sample data. A single factor ANOVA tested for differences in winter wheat root mycorrhizal structures between years before further statistical testing was performed. Paired two-tail P=0.76, df: 4, 256, F value: 51.90, F critical: 2.64, Single factor ANOVA). A paired two-tail t-test of equal variance revealed no significant difference .Single factor ANOVA testing revealed no statistical difference between AM fungal root structures between the years quantified (P=0.76,fference between et al. [et al. [The present communication evaluates the medium-term viability of Sheaffer blue ink staining for AM fungi structures in winter wheat roots when stored under appropriate conditions. It contests the short-term viability of Sheaffer blue ink staining reported by Vierheilig et al. and cite [et al. .et al. [et al. [et al. [et al. [Modifications to the ink staining approach, implemented by Wilkes et al. , who use [et al. . Recent [et al. identify [et al. . Additio [et al. staininget al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [Vierheilig et al. did not [et al. . Kowel e [et al. also use [et al. demonstr [et al. The impa [et al. fail to [et al. and Wilk [et al. to those [et al. when comThe present communication has been able to demonstrate that Sheaffer blue ink, in the staining of AM fungal root components, has medium term storage potential when root tissues are fixed in FAA solution. The avoidance of boiling in KOH and using a fixative afterwards preserves AM structures, improves staining clarity and longevity. 
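The year-on-year comparison described above (a single-factor ANOVA across scoring years followed by paired two-tailed t-tests against the baseline year) was run in the R Commander package. A Python sketch of the same two steps is shown here for illustration only, with a hypothetical input file and the simplifying assumption that the earliest scoring year serves as the baseline.

# Illustrative Python equivalent of the year-on-year analysis described above.
# The input file, its columns and the baseline convention are assumptions.
import pandas as pd
from scipy import stats

# Long-format table: one row per sample and scoring occasion
counts = pd.read_csv("arbuscule_counts.csv")   # columns: sample_id, year_scored, arbuscules

# Single-factor ANOVA: arbuscule counts compared between scoring years
groups = [g["arbuscules"].to_numpy() for _, g in counts.groupby("year_scored")]
print(stats.f_oneway(*groups))

# Paired two-tailed t-test: baseline scoring versus the 2021 re-scoring of the same samples
wide = counts.pivot(index="sample_id", columns="year_scored", values="arbuscules")
baseline_year = wide.columns.min()              # assumes the earliest year is the baseline
paired = wide[[baseline_year, 2021]].dropna()
print(stats.ttest_rel(paired[baseline_year], paired[2021]))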
The continued monitoring of sample clarity and an evaluation of the impact of storage procedure, for example the effects of temperature and sunlight, on the persistence of ink as a mycorrhizal stain will be undertaken as future work."} {"text": "Tritrichomonas foetus is a venereal trichomonad parasite which causes reproductive issues in cattle. No other trichomonads are known to be urogenital pathogens in cattle, but there are several reports of Tetratrichomonas and Pentatrichomonas isolates of unclear origin from the cattle urogenital tract (UGT) in the Americas. This study reports the first case of a non-T. foetus cattle urogenital trichomonad isolate in Europe. Molecular analysis of the internal transcribed spacer (ITS) 1-5.8S ribosomal RNA-ITS 2 and 18S ribosomal RNA loci suggest that the isolate is a Tetratrichomonas species from a lineage containing other previously described bull preputial isolates. We identified close sequence similarity between published urogenital and gastrointestinal Tetratrichomonas spp., and this is reviewed alongside further evidence regarding the gastrointestinal origin of non-T. foetus isolates. Routine screening for T. foetus is based on culture and identification by microscopy, and so considering other trichomonad parasites of the bovine UGT is important to avoid misdiagnosis. Tritrichomonas foetus is an important bovine venereal parasite which causes reproductive issues including spontaneous abortion, pyometra and infertility of animals including invertebrates, fish, mammals, reptiles, amphibians and birds , respectively , 5\u00a0g\u00a0L\u22121 neutralized liver digest (Oxoid) and 20\u00a0g\u00a0L\u22121 glucose, pH 7.4], supplemented with 200\u00a0units\u00a0mL\u22121 penicillin, 200\u00a0\u03bcg\u00a0mL\u22121 streptomycin and 1000\u00a0units\u00a0mL\u22121 polymixin B, overlaid on a solid medium slope prepared by heating 7\u00a0mL horse serum at 75\u00b0C for 2\u00a0h. Cultures were incubated at 37\u00b0C for 7 days, and were examined microscopically on days 4 and 7 for motile protozoa. Positive cultures were passaged in fresh media, and 10\u00a0mL culture was harvested by centrifugation at 300\u00a0\u00d7\u00a0g for 10\u00a0min, fixed in 15\u00a0mL 100% ethanol and stored at \u221220\u00b0C. No cryopreserved stock of the isolate was generated.Samples were collected from bulls in the UK during routine screening for parasites, in accordance with the principles defined in the European Convention for the Protection of Vertebrate Animals used for Experimental and Other Scientific Purposes. To collect samples, the preputial cavity was washed with 30\u00a0mL pre-warmed phosphate buffered saline (PBS) (pH 7.2). Washings were pelleted by centrifugation at 300\u00a0\u00d7\u00a0Genomic DNA was extracted from ethanol-fixed parasite isolates using the DNeasy ultraclean microbial kit (Qiagen) according to the manufacturer's instructions. Loci for molecular sequence typing were amplified by polymerase chain reaction (PCR) using generic Taq polymerase (NEB). A region containing the 5.8S ribosomal RNA (rRNA) and flanking internal transcribed spacers (ITS) 1 and 2 was amplified using the trichomonad-specific TFR1 and TFR2 primers, and the 18S rRNA gene was amplified using the generic eukaryotic primers Euk 1700 and Euk 1900. Resulting products were cloned into pCR4TOPO using the TOPO TA cloning kit (ThermoFisher Scientific) according to the manufacturer's instructions. 
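Downstream of the cloning and Sanger sequencing steps described here, one routine check is the pairwise identity between clone sequences; the Results below report 99.5-99.7% identity between the 18S rRNA clones. The sketch gives a deliberately naive version of such a check. The FASTA file name is hypothetical and the sequences are assumed to be pre-aligned and of comparable length, which real Sanger reads generally are not without prior alignment.

# Naive pairwise-identity check between cloned amplicon sequences.
# File name is a placeholder; sequences are assumed to be pre-aligned.
from itertools import combinations
from Bio import SeqIO

clones = list(SeqIO.parse("18S_clones.fasta", "fasta"))   # hypothetical clone sequences

def percent_identity(rec_a, rec_b):
    """Identity over the shared length of two pre-aligned sequences."""
    a, b = str(rec_a.seq), str(rec_b.seq)
    n = min(len(a), len(b))
    matches = sum(x == y for x, y in zip(a[:n], b[:n]))
    return 100.0 * matches / n

for c1, c2 in combinations(clones, 2):
    print(f"{c1.id} vs {c2.id}: {percent_identity(c1, c2):.1f}% identity")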
Sequences were generated for the inserts from five independent clones by Sanger sequencing (Eurofins) on both strands using the T7 and T3 promoter primers, and additional internal sequencing primers were designed to cover the full length of the 18S rRNA gene for both strands. All primer sequences are listed in Supplementary Table S1.et al., et al., et al., et al., et al., et al., et al., A reference collection of ITS1-5.8S rRNA-ITS2 and 18S rRNA trichomonad sequences were selected from the NCBI RefSeq database .In order to identify the isolate, sequences were generated for five clones for the 18S rRNA and ITS1-5.8S rRNA-ITS2 loci. For the 18S rRNA locus, three unique clonal sequences were generated which shared 99.5\u201399.7% identity, suggesting a single-species infection. ML analysis of the 18S rRNA locus placed all clone sequences together within Tetratrichomonas with moderate bootstrap support . The isolates were also placed in a separate lineage from urogenital Tetratrichomonas isolates from cattle originating from a previous study (accession AF342742).For the ITS1-5.8S rRNA-ITS2 locus, there were three unique sequences which shared 95.2\u201399% identity, suggesting some diversity amongst the parasites present. In agreement with the 18S rRNA locus, ML analysis also placed all the isolates together within Tetratrichomonas lineage of the isolates in more detail, ML analysis was performed on the 18S rRNA sequence from a greater taxonomic sampling of Tetratrichomonas spp., including several bull urogenital isolates from previous studies amongst Tetratrichomonas 18S rRNA sequences. For the published urogenital Tetratrichomonas isolate 2004\u20135000 (Dufernez et al., Tetratrichomonas sequence generated in this study. Representative BLAST results for a single sample are shown in Supplementary Table S2, and an alignment between metatranscriptome sequences and urogenital Tetratrichomonas 18S rRNA sequences are shown in Supplementary file S4.In order to investigate the potential gastrointestinal origin of urogenital Tetratrichomonas and Pentatrichomonas spp., which appear to have been isolated in roughly equal proportions. In contrast, there has only been a single report of a Pseudotrichomonas isolate.In addition to the isolates identified in this study, there have been numerous reports of trichomonads isolated from the UGT of cattle, summarized in Tt. foetus trichomonad isolated from the bovine UGT in Europe. In agreement with previous reports of cattle urogenital trichomonads (Walker et al., et al., et al., Tetratrichomonas sp. The phylogenies presented here are largely in agreement with previous studies regarding the interrelationship between trichomonads and the reported lineages of Tetratrichomonas spp. (Cepicka et al., et al., et al., Tetratrichomonas genus.This study reports the first case of a non-Tetratrichomonas lineage 4 as defined by Cepicka et al. (Tetratrichomonas isolates defined by Dufernez et al. (et al., Tetratrichomonas lineage 10 (Cepicka et al., et al. (et al., Tetratrichomonas isolates.Our results support the placement of the new isolates in a et al. , which cz et al. . The new et al., , corresp, et al. , althoug et al., . These oTetratrichomonas lineage 4 was described as mostly limited to the tortoise GIT (Cepicka et al., et al., et al., Tetratrichomonas lineage 10 (Cepicka et al., Te. buttreyi (Dufernez et al., Tetratrichomonas lineage 10 also appears to be mostly restricted to tortoises (Cepicka et al., Tetratrichomonas lineages, P. 
hominis, and a Pseudotrichomonas sp. have been isolated from the cattle UGT (Walker et al., et al., Tt. foetus, which shows a remarkable degree of genetic homogeneity suggestive of a recent founder event (Kleina et al., et al., et al., et al., Tetratrichomonas 18S rRNA (Dufernez et al., et al., Tt. foetus trichomonads in a backdrop of frequent Tt. foetus monitoring (Yao, Tt. foetus is also possible.At least two ing Yao, suggestsTetratrichomonas spp. (Zhang et al., Te. buttreyi (Dufernez et al., P. hominis is known to occupy the GIT of a very wide range of animals (Li et al., et al., Pseudotrichomonas keilini (Dufernez et al., Monoceromonas ruminantium is also relatively closely related (Dufernez et al., et al., P. keilini is an exception or mislabelled as free-living. The presence of gastrointestinal-associated bacteria at the same site as urogenital trichomonad isolation provides further evidence for their gut origin (Cobo et al., et al., There are plausible sources for the isolated urogenital trichomonads from the bovine GIT; Tetratrichomonas preputial isolates in the cattle UGT, results have ranged from no persistence (Cobo et al., et al., et al., Pentatrichomonas hominis derived from the cattle gut also failed to persist in the UGT of heifers (Cobo et al., Tetratrichomonas to regularly establish colonization during experimental infection (Cobo et al., Tt. foetus trichomonads (Cobo et al., During experimental inoculation of Tt. foetus trichomonad isolates obtained from the cattle UGT do not represent an emerging urogenital inhabitant but rather a sporadic transmission from another source, most likely the gut. However, transmission of parasites between these mucosal sites may increase the possibility of new pathogens emerging, as has been documented for other trichomonads (Maritz et al., Together, the evidence supports the hypothesis that non-Tt. foetus trichomonads in the cattle UGT is most likely misdiagnosis through confusion with Tt. foetus, which could cause unnecessary culling of suspected infected animals (Campero et al., et al., Tt. foetus monitoring is culture-based isolation and morphological identification by light microscopy (Parker et al., et al., et al., et al., et al., Tt. foetus detection (Oyhenart et al., Tetratrichomonas sp. are short-lived in the bull UGT (Cobo et al., et al., et al., The most significant concern associated with non-"} {"text": "Most research exploring the link between traumatic events and psychotic experiences has focused on either Australia, Europe or North America. In this study, we expand the existing knowledge to Thailand and investigate the impact of the type and the number of traumatic events on psychotic experiences in Thailand.We used data from the nationally representative 2013 Thai National Mental Health Survey (TNMHS), including questions on traumatic events and psychotic experiences. We regressed the lifetime experience of hallucinations or delusions against the following independent variables: the experience of any traumatic event during lifetime ; the experience of either no traumatic event, one interpersonal, one unintentional or both interpersonal and unintentional traumatic events and the number of traumatic events experienced during lifetime . We adjusted the regression models for sociodemographic indicators and psychiatric disorders, and considered survey weights.About 6% of the respondents stated that they had either hallucinatory or delusional experiences during their lifetime. 
The risk of reporting such experiences was more than twice as high among respondents who had experienced at least one traumatic event during their lifetime as among those who had not yet experienced one, with higher risks for interpersonal or multiple traumatic events. Our results further indicated an increase in the risk of psychotic experiences as the number of traumatic events increased, with up to an eight-fold higher risk for people exposed to five or more traumatic events in their lifetime, compared to those with no traumatic events. Individuals reporting interpersonal or multiple traumatic events face a much higher risk of psychotic experiences. Effective and widely accessible secondary prevention programmes for people having experienced interpersonal or multiple traumatic events constitute a key intervention option. The objective of the current study was to address this research gap using data from the 2013 Thai National Mental Health Survey (TNMHS). According to the existing literature, we investigated several pre-specified hypotheses. Data from the TNMHS in 2013 were used, a sub-nationally representative cross-sectional survey that adopted the methodology of the World Mental Health Survey Initiative and employed a fully standardised interview on mental disorders in accordance with the ICD-10. Psychotic experiences were assessed as hallucinations, for example auditory hallucinations (‘Have you ever heard any voices that other people said did not exist?’), or delusions, such as thought insertion and/or withdrawal, mind control and/or passivity, ideas of reference, and plots to harm and/or being followed, during the lifetime, after excluding the possibility of dreaming or the influence of substances. Furthermore, respondents were asked whether they had ever experienced one or more traumatic events. Survey weights were applied in all regression analyses. The regression models were adjusted stepwise for sociodemographic indicators, socioeconomic indicators and finally psychiatric disorders; additionally, an unadjusted model was used (model 0). All analyses were carried out using Stata 15.1. Respondents with missing information on either the dependent, independent or control variables were excluded from the analyses; the remaining respondents constituted the analytic sample (<1% missing). An overview of the sociodemographic characteristics of the sample, the prevalence of psychotic experiences, the prevalence of reporting at least one traumatic event and the weighted percentage of persons fulfilling diagnostic criteria for mental disorders is displayed in the sample characteristics table. A total of 4727 adults completed the interview (response rate: 74.3%), of which 4715 were included in the analytic sample; they were aged 44.1 years on average (s.d. = 19.5 years), and almost two-fifths reported having been under the age of 18 at the time of their first traumatic event. Almost 1% of the respondents met the criteria for PTSD at any time during their lifetime. About 6% (95% CI: 4.9–7.0) of the respondents reported a psychotic experience during their lifetime, with hallucinatory experiences being more prevalent than delusional experiences. One in two respondents reported at least one traumatic event during their lifetime, with 12.8% (95% CI: 11.5–14.2) reporting two, 5.3% (95% CI: 4.5–6.4) three, 1.9% (95% CI: 1.5–2.5) four and 1.7% (95% CI: 1.3–2.3) five traumatic events. Unintentional, non-interpersonal traumatic events were reported more often than interpersonal traumatic events. On average, the first traumatic event was reported at the age of 26.1 years. Results of the Poisson regression models conducted for hypothesis testing are displayed in the regression table; respondents reporting a single traumatic event did not show a significantly elevated risk of psychotic experiences (p = 0.103).
Beginning with two traumatic events, the risk of psychotic experiences increases with each additional event, with risk ratios of 2.5 (95% CI: 1.5\u20134.2), 3.7 (95% CI: 2.0\u20136.7) and 5.9 (95% CI: 3.3\u201310.7) for two, three and four traumatic events, respectively. Compared with respondents without traumatic event, those who reported five or more traumatic events had 7.7 (95% CI: 4.0\u201314.8) times higher risk of having at least one psychotic experience during their lifetime.The risk of reporting psychotic experiences as a function of the number of traumatic events is shown in et al., et al., et al., et al., et al., et al., et al., To the best of our knowledge, this is the first study investigating the link between traumatic events and psychotic experiences in the Thai population. In line with previous research, we found an increased risk for psychotic experiences in individuals who have suffered from traumatic events (Scott et al., et al., et al., et al., et al., The risk of psychotic experiences increased with the number of traumatic events, pointing to a dose\u2013response relationship which is in line with evidence from previous research (Scott et al., et al., Some limitations have to be taken into account. First, the cross-sectional design of our study does not allow for conclusions about causality. Second, since the TNMHS was conducted in 2013, more research is needed using more recent databases, including other countries in this region, and taking into account longitudinal study designs. Third, psychotic experiences constitute only one symptom of schizophrenia spectrum and other psychotic disorders and may not in all cases lead to clinically meaningful impairment of functioning. Additionally, the TNMHS did not include questions on family history of psychotic experiences, which would have been a key variable given the large genetic component in schizophrenia and other psychotic disorders (e.g. Chou et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., Based on the strong link between traumatic events and psychotic experiences, some researchers call for a distinction to be made between trauma-related psychotic experiences and such experiences in people who were not exposed to trauma (Hardy, et al., et al., et al., et al., et al., et al., Our findings emphasise the close link between traumatic events and psychotic experiences and extend the existing knowledge to the region of Southeast Asia. Experiencing either an interpersonal trauma or from multiple traumatic events were particularly associated with reporting psychotic experiences during lifetime. In Thailand, almost one in five individuals reported having experienced either an interpersonal trauma or multiple traumatic events in their lifetime, and two in five were minors at the time of first trauma exposure. With a total population of 69.8 million inhabitants in Thailand in 2020 (United Nations, Department of Economic and Social Affairs, Population Division, This study was the first to investigate the association between traumatic events and psychotic experiences in a Thai population. We found the experience of interpersonal or multiple traumatic events to be associated with a substantially elevated likelihood of reporting hallucinations or delusions during lifetime. Within our sample, a total of 6% reported having had psychotic experiences at least once in their lives, and one in five respondents stated having experienced interpersonal or multiple traumatic events. 
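The modelling strategy underlying these results (Poisson regression with survey weights, exponentiated coefficients reported as risk ratios with 95% confidence intervals, and stepwise adjustment for covariates) was carried out in Stata 15.1. The Python sketch below shows one way a model of this kind could be specified; the file, variable names, weight column and trauma categorisation are hypothetical placeholders, not the TNMHS variables.

# Sketch of a survey-weighted Poisson regression with robust standard errors.
# Column names and the analytic file are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("tnmhs_2013.csv")   # hypothetical analytic file, one row per respondent

model = smf.glm(
    "psychotic_experience ~ C(trauma_count_cat) + age + C(sex) + C(any_mental_disorder)",
    data=df,
    family=sm.families.Poisson(),
    freq_weights=np.asarray(df["survey_weight"]),
)
res = model.fit(cov_type="HC1")      # robust (sandwich) standard errors

# Exponentiated coefficients give risk ratios with 95% confidence intervals
rr = pd.concat([np.exp(res.params), np.exp(res.conf_int())], axis=1)
rr.columns = ["risk_ratio", "ci_low", "ci_high"]
print(rr.round(2))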
The high lifetime prevalence of traumatic and psychotic experiences in the Thai population highlights the need for effective and widely accessible secondary prevention programmes to reduce post-traumatic stress in affected people."} {"text": "Hereditary deficiency of plasma prekallikrein (PPK) is a rare autosomal recessive disease. The affected patients are often asymptomatic and diagnosed incidentally during preoperative investigations or during hospitalization by isolated prolongation of activated partial thromboplastin time (aPTT). In this article, we report, a 46-year-old woman who was candidate for two invasive procedures (thyroid FNA and hysterectomy) and underwent preoperative evaluation. Due to prolonged aPTT with normal PT she was referred to the IBTO reference coagulation laboratory for specific coagulation assays. Ultimately, the examinations revealed severe PPK deficiency (<1%) with partial deficiency of factor XII level (25%). It was called as \"Fletcher factor\". In 1973, Wuepper et al. further investigated this unknown factor and they called it as \u201cPrekallikrein\u201d (PK). The gene related to PK is KLKB1, which is located on the q34-q35 region of the long arm of chromosome 4 for all the patients who have prolonged aPTT but normal intrinsic coagulation factors and without evidence of coagulation inhibitors. During this period of time this patient was the second case of PPK deficiency that was diagnosed.6/UL (3.3-5.8), WBC: 6.6 \u00d7 103/UL (3.5-11), platelet: 323 \u00d7 103/UL, aPTT: 206 seconds, and PT: 10 seconds (see A 46-year-old woman with a history of hypothyroidism has been a candidate for thyroid FNA (fine needle aspiration) to evaluate thyroid cold nodules and hysterectomy for uterine fibroids. During preoperative work up, an isolated prolongation of aPTT was noted and the patient was referred for further evaluation to the special coagulation laboratory of IBTO with suspicion to Lupus Anticoagulant (LA). The patient had a history of easy bruising and also bleeding after one tooth extraction . She also had two previous surgeries (appendectomy and cesarean section) without any abnormal bleeding. According to the ISTH-SSC Bleeding Assessment Tool the total bleeding score was 5 . The patient\u2019s parents did not have any consanguinity. She had two sisters and three brothers, as well as a 16-year-old girl who were rejected for any history of bleeding or thrombotic events. She had a history of long-term hypothyroidism and resistant to routine doses of thyroid hormone. Initial laboratory tests in IBTO showed: HGB: 12.7 g/dL .et al. the deficiency of contact factors is the second cause of unknown prolonged aPTT after antiphospholipid antibodies (et al. in 2017 (Prekallikrein is a glycoprotein mainly synthetized in the liver and secreted into the plasma as a single-chain peptide with a molecular weight of 88,000 Daltons, and normal concentration of 40 mg/mL. About 75-90% of this coagulation factor is combined with HMWK. The activation of prekallikrein results in production of kallikrein, and activation of factor XI and factor XII. Patients with PPK deficiency, which is a rare autosomal recessive disorder , 6, typi in 2017 .et al. (et al. (et al. (The occurrence of bleeding events in the patients with PPK deficiency is not a common phenomenon and the reported studies about bleeding complications after surgical procedures show a wide range of bleeding tendency from no bleeding to major bleeding events -7, 11. Ret al. in their (et al. in 2019 et al. (et al. 
(Thrombotic events among patients with PPK deficiency were reported by several studies , 4, 6. B (et al. . Our caset al. (et al. (The first case of concurrent PPK and factor XII deficiencies was reported in 1981 by Man-Chiu Poon et al. . They de (et al. one quarThere are some sporadic reports about PPK deficiency with some comorbidities e.g. autoimmune diseases including Graves' disease and systemic lupus erythematous. In the adults with multiple comorbidities or systemic inflammation, the effect of PPK deficiency remains controversial and further researches are needed to confirm the causative association between them -16. Our The data on the preoperative management of patients with PPK deficiency are also limited. Unal and his colleagues in their literature review on the surgical management of PPK deficient patients observed that most of these patients needed no FFP transfusion and FFP transfusion may be used only in patients with PPK deficiency undergoing invasive procedures . Our patIn this study, we presented a case of severe PPK deficiency with concurrent partial factor FXII deficiency that experienced occasional non-significant hemorrhages without any thrombotic events. It seems that PPK deficiency cases are under-diagnosed due to the lack of significant hemorrhagic or thrombotic symptoms in general and low availability of its specific laboratory test in our country. The most significant presentation of these cases is incidental finding of isolated prolongation of aPTT especially in preoperative hemostatic evaluations. Physicians should be aware of this rare congenital deficiency by conducting special coagulation assays and excluding other causes of prolongation of aPTT in order to prevent unnecessary interventions like transfusion of blood products in these patients."} {"text": "To investigate the patients transferred by helicopters, as well as an emergent medical services that were performed for them.In this retrospective cross-sectional study, all patients who were transferred by Fars province of Helicopter Emergency Medical Services (HEMS) to Shiraz hospitals, southern Iran (March 2017-March 2019) were investigated. Patients\u2019 information was collected and analyzed includes age, gender, dispatch reason, trauma mechanisms, take hold of emergent medical services, as well as the air transportation time, time between dispatch from the origin hospital and starting the procedures, and patients\u2019 outcome.p<0.0001). Mental status deterioration (25.3%) was the most dispatched indications. The mortality rate was 13.25% totally (11.11% in traumatic vs. 10% in non-traumatic). The mean\u00b1SD of air transportation time was significantly lower than ground transportation in both traumatic (p=0.0013) and non-traumatic (p<0.0001) patients. Also, the mean\u00b1SD of time between dispatch from the origin hospital and starting the procedures was statistically lower in air transportation in both traumatic (p=0.0028) and non-traumatic (p=0.0017) patients.Eighty-three patients were enrolled with the mean\u00b1SD age of 36.9\u00b119.47 years that 75.9% had trauma ( Most of the patients transferred by HEMS were traumatic. The air transportation time as well as the time between dispatches from the origin hospital to the starting of the procedures were significantly lower in HEMS in comparison with ground transportation for both traumatic and non-traumatic patients. 
Helicopter Emergency Medical Services' (HEM) purpose is to provide specialized services for patient\u2019s triage, treatment, and rapid transfer directly to the trauma center for providing definitive treatment , 2. An aWhen deciding about HEMS using, many factors are important include access to higher-level pre-hospital care that HEMS staff provide, faster access to the main trauma center , simultaneous extract of several injured patients need, and access of local communities to ground emergency medical services (GEMS) centers. In many systems, HEMS staff receive Advanced Life Support (ALS) training that may not be available in rural EMS systems .et al., [4] compared the HEMS with other conventional means of emergency transportations. They reported that it\u2019s an affordable method of transportation in cases which helicopter transmission increases the survival rate. Ringburg et al., [It is proved that HEMS is economical for various clinical conditions if administered carefully. Gearhart et al., showed t et al., .The introduction of HEMS impacts was controversial on the health outcomes of trauma patients . BesidesAccording to our knowledge, few studies investigated the HEMS in Iran -16 and tStudy Design and PopulationThe current retrospective cross-sectional study (Aug-Dec 2019) was conducted on all patients\u2019 medical records who were transferred by Fars province HEMS to Shiraz hospitals , the center of Fars province, southern Iran, from March 2017 to March 2019. Patients who died at the time of arrival to the destination hospitals were excluded.Study ProtocolThe patients\u2019 names and their initial data such as demographic variables , the city and the reason of dispatch (traumatic or non-traumatic), the mechanisms of trauma, the indication of dispatch, and the taken pre-hospital and in-hospital emergent medical services less than six hours were collected from Fars province HEMS center. Then, the started time of the procedures as well as the patients\u2019 outcome (discharge or death) were extracted from the patients\u2019 medical files in the destination hospitals and recorded in a data collection form.To compare the duration of helicopter and ground transportations, the land distance information between the dispatched cities (patients\u2019 locations) and the destination hospitals as well as land estimated times dispatches were obtained from the https://www.google.com/maps website.A variable was defined: \u201cthe estimated time interval of dispatch to the start of the procedure, if the patient was dispatched by ground emergency services\u201d in comparing the air and ground emergency services. To calculate this variable, \u201cair transfer to the hospital duration\u201d was reduced from \u201ctotal duration of dispatch to start of procedures\u201d. This is the duration of patients\u2019 entrance to the emergency department of the hospital and initiating the procedures. Then, it was summed up by the \u201cground transfer time of the patient to the destination hospital\u201d. Therefore, if patients were transferred by ground emergency vehicles, the time interval between dispatch and starting of procedures was estimated. 
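The estimation just described amounts to simple arithmetic on three durations per patient; the sketch below shows one way it could be implemented. This is a hedged illustration rather than the authors' method: the function and variable names are invented, the per-patient numbers are made up, the fixed 30-minute allowance for ambulance boarding and disembarking (mentioned in the note that follows) is assumed to be added to the ground estimate, and scipy is used only as a convenient substitute for the SPSS/MedCalc tests described in the Statistical Analysis section.

```python
# Illustrative sketch only (not the authors' code): one way to compute the
# "estimated time interval of dispatch to the start of the procedure, if the
# patient was dispatched by ground emergency services" described above.
# All names and the example numbers below are hypothetical.
from scipy import stats

GROUND_HANDLING_MIN = 30  # ambulance boarding/disembarking allowance (see the note that follows)

def estimated_ground_interval(total_air_interval_min, air_transfer_min,
                              ground_transfer_min, handling_min=GROUND_HANDLING_MIN):
    """Estimate dispatch-to-procedure time had the patient travelled by ground.

    Following the text: subtract the helicopter transfer time from the observed
    dispatch-to-procedure interval (leaving the in-hospital delay), then add the
    road travel time estimated from map distances, plus the fixed allowance for
    ambulance boarding and disembarking.
    """
    in_hospital_delay = total_air_interval_min - air_transfer_min
    return in_hospital_delay + ground_transfer_min + handling_min

# Invented per-patient values (minutes), purely for illustration.
observed_air_interval = [150, 95, 180]   # dispatch -> start of procedure, as transported by HEMS
air_transfer_time = [45, 30, 60]         # helicopter transfer time
ground_transfer_time = [160, 110, 200]   # estimated road travel time (e.g. from Google Maps)

estimated_ground = [estimated_ground_interval(t, a, g)
                    for t, a, g in zip(observed_air_interval, air_transfer_time, ground_transfer_time)]

# The study compared such intervals with an independent t-test in SPSS/MedCalc;
# scipy's two-sided independent t-test is used here as a stand-in.
res = stats.ttest_ind(observed_air_interval, estimated_ground)
print(f"air vs. estimated ground transport: t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```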
It should be noted that 30 minutes was allowed for boarding and disembarking patients from the ambulance in ground transportation, whereas this time is not separately accounted for in helicopter transportation.

Statistical Analysis
The statistical package for social sciences version 25 and Medcalc software for Windows were used for statistical analysis through descriptive and analytical tests such as the independent t-test, Chi-square test, and nonparametric tests. Results are presented as mean±standard deviation (SD) for continuous variables and summarized as number (percentage) for categorical variables. Two-sided p-values less than 0.05 with 95% confidence intervals (CI) were considered statistically significant.

Ethical Considerations
The current study was supported by Shiraz University of Medical Sciences (grant No. 17434) and was approved by the vice-chancellor of research and technology as well as the local ethics committee (IR.sums.med.rec.1397.344) of Shiraz University of Medical Sciences.

Results
Eighty-three patients with a mean±SD age of 36.9±19.47 years were enrolled (p=0.550), of whom 56 (67.5%) were men (p<0.0001). Sixty-three (75.9%) patients had trauma (p<0.0001), which was significantly more frequent in men. The overall mortality rate was 13.25%. It should be noted that there were no air accidents during the study period and none of the patients died en route. Car-car accident (CCA) and car turn-over (CTO) were the most frequent mechanisms of trauma (43.9% vs. 39.02%) (p=0.009). In total, 130 indications for dispatch were recorded in traumatic patients, of which mental status deterioration and open long-bone fracture were the most frequent (33.33% and 31.75%, respectively). At least one emergent medical service was performed for each patient, blood transfusion being the most frequent (44.44%), and 85.71% of the procedures were performed within 202.09 minutes; in traumatic patients, the dispatch-to-procedure interval was significantly shorter with air transportation (p=0.0028). The mean±SD air transportation time was also significantly shorter than the estimated ground transportation time (p<0.0001). The mean±SD of the time between dispatch from the origin hospital and starting the procedures was 131.03±80.60 minutes in air transportation, which was statistically lower than the estimated time in ground transportation (251.40±65.03 minutes) (p=0.0017).

In the current study, all patients transferred by the HEMS of Fars province to hospitals in Shiraz from 2017 to 2019, and the emergent medical services performed for them, were investigated. As mentioned before, most of the patients were men, which is consistent with previous studies [17]. In the current study, detailed investigation of trauma causes revealed that CCA had the highest frequency, followed by CTO and motor-car accidents; comparable data on trauma mechanisms have been reported by Andruszkow et al. and other groups. In the present study, the overall mortality rate was 13.25%, and 11.11% for traumatic patients, which was lower than the rates reported by Andruszkow et al. and others. Comparison of the traumatic patients' data in the current study with reference studies used for evaluating the quality of trauma services showed that, although the mortality rate of traumatic patients dispatched by helicopter was slightly higher in the current study, this difference was not statistically significant. In the current study, the dispatch indications of non-traumatic patients were ACS, cardiac arrest, abortion, and respiratory arrest.
In line with the current study, McQueen et al., showed t[et al., reportedet al., [et al., [Some studies compared the efficiency of HEMS and GEMS. For example, Davis et al., investig[et al., in a proet al., [et al., [Schiller et al., study onet al., . In anotet al., . Moreove[et al., showed tet al., [et al., [Brown et al., found th[et al., reported[et al., . AccordiOne of the strengths of the current study was that Fars province has one of the best equipped and busiest HEMS in the country due to its vastness and mountainous climate. Therefore, investigating the patients who transferred in this province will provided valuable information about characteristics of the patients and performed emergent medical services. On the other hand, the study also had limitations including a low number of patients who were transferred during the study period. Air Emergency Center reported 109 patients but only 83 patients were analyzed due to the incompleteness of data. To obtain comprehensive results, future studies should collect the number of HEMS all around the country, therefore, such services can be used more effectively.According to the results, most of the patients transferred by HEMS in Fars province were traumatic patients and the main causes of traumatic injuries were CCA, CTO, and motor-car accidents. The overall mortality rate was 13.25% (11.11% for traumatic and 20% for non-traumatic patients). The time between dispatches from the origin hospital to the starting of the procedures was significantly lower in HEMS in comparison with GEMS for both traumatic and non-traumatic patients. Regarding that most of the patients were traumatic, it is necessary to increase the quality of the triage system in HEMS. However, prospective studies with larger sample sizes are necessary."} {"text": "During Earth's 4.6-billion-year history, its surface has experienced environmental changes that drastically impacted habitability. The changes have been mostly attributed to near-surface processes or astronomical events with little consideration of Earth's deep interior. Recent progresses in high-pressure geochemistry and geophysics, however, indicated that the deep Earth processes may have played a dominant role in the surface habitability . This Sp2O is the key compound that acts as a strong oxidant and oxygen transporter. In this Special Topic, Katsura and Fei [2O in the asthenosphere, and Walter [2O transport all the way down to the CMB. It has been established that when H2O meets Fe at the CMB, Fe can be oxidized to produce superoxide FeO2 with pyrite structure. At the middle of the lower mantle , Hu et al. [2O can also react with ferropericlase O, a major lower mantle mineral, to form pyrite-structured superoxide. At shallower depth of 1000 km (40 GPa), Liu et al. [2O induced oxidation of ferropericlase to form Fe2O3+\u03b4 with a new hexagonal structure. Both reactions result in further oxygen enrichment beyond the already highly oxidized slab materials in the deep mantle environment. The subducting process may compile substantial oxygen-rich materials to form oxygen reservoirs at the CMB.Oxygen, the most abundant element in Earth, is the key element that enables lifeforms to thrive and creates the unique living planet as we know. Recent discoveries on the pressure-altered chemistry revealed that the oxygen cycles through Earth's deep interior could also be the origin of the dynamic mantle processes that dictate the disruptive evolution of the life on surface. 
The Earth shows a vertical gradient of oxygen fugacity over 10 orders of magnitude from the highly oxidized crust at ambient conditions to the highly reduced core\u2013mantle boundary (CMB) at \u223c2900 km depth and 130 GPa. The dynamic process of subducting slab, on the other hand, injects the oxidized crust down into the reduced deep lower mantle. Here, Hu et al. show thainsitu at high pressure\u2013temperature conditions of the deep interior. Mao et al. [Recognition of novel materials in deep Earth relies on seismological observations of their characteristic elastic signatures that must be determined o et al. present o et al. uses lono et al. that couo et al. highligh"} {"text": "The solar atmosphere is full of complicated transients manifesting the reconfiguration of the solar magnetic field and plasma. Solar jets represent collimated, beam-like plasma ejections; they are ubiquitous in the solar atmosphere and important for our understanding of solar activities at different scales, the magnetic reconnection process, particle acceleration, coronal heating, solar wind acceleration, as well as other related phenomena. Recent high-spatio-temporal-resolution, wide-temperature coverage and spectroscopic and stereoscopic observations taken by ground-based and space-borne solar telescopes have revealed many valuable new clues to restrict the development of theoretical models. This review aims at providing the reader with the main observational characteristics of solar jets, physical interpretations and models, as well as unsolved outstanding questions in future studies. Although these jet activities are observed at different scales and temperature ranges, they can be viewed as the same type of solar jets owing to their similar observational characteristics and generation mechanism, i.e. magnetic reconnection-dominated jet-like activities with an inverted-Y structure. For smaller, lower-energy jet-like activities such as spicules and dynamic fibrils, their generation mechanisms are still open questions. Previous studies have suggested that spicules and dynamic fibrils are possibly launched by upward-propagating shocked pressure-driven waves leaking from the photosphere ,11. Howe\u03b1 telescopes and a few low-resolution space instruments such as Skylab (1991\u20132001) and SMM (1980\u20131989). Solar jets have been studied intensively since the 1990s owing to the launch of a series of space telescopes, including the Yohkoh satellite [Solar and Heliospheric Observatory [SOHO; 1995 to now), the Transition Region and Coronal Explorer [TRACE; 1998\u20132010), the Reuven Ramaty High Energy Solar Spectroscopic Imager [RHESSI; 2002\u20132018), the Hinode [Solar Terrestrial Relations Observatory [STEREO; 2006 to now), the Solar Dynamics Observatory [SDO; 2010 to now) and the Interface Region Imaging Spectrograph [IRIS; 2013 to now). Also, more and more ground-based large-aperture solar telescopes have been put into routine observation; for example, the Swedish Solar Telescope [The observation of solar jets dates back to the 1940s; they were dubbed surges in history . At the atellite , the Solervatory \u03b1 jets, extreme ultraviolet (EUV) jets and X-ray jets, depending on different observing wavebands. 
Nevertheless, since solar jets are often observed simultaneously at different wavebands covering a wide temperature range, and they can occur in all types of solar regions, it seems that the above classification methods are not very reasonable if one considers the physical properties and morphologies.Solar jets are generally described as collimated, beam-like ejecting plasma flows along straight or slightly oblique magnetic field lines. Because of the huge improvement in observing capabilities, solar jets can be imaged in a wide temperature range from the photosphere to the outer corona. According to different classification methods, solar jets were classified into different types in history. Firstly, solar jets were divided into photospheric jets, chromospheric jets (or surges), transition region jets, coronal jets and white-light jets, based on the temperature of the atmosphere in which they occur. Secondly, they were classified as coronal hole jets, active region jets and quiet-Sun region jets, based on regions where they occur. Thirdly, they were classified as Het al. [\u03bb types, in which an inverted-Y (\u03bb)-type jet was commonly interpreted as a small-scale magnetic bipole reconnecting with the ambient open coronal magnetic fields around the bipole top (footpoint). Hence, the different shapes could possibly be used to distinguish the reconnection sites in solar jets [Based on morphology, Shibata et al. classifiet al. \u201349. Receet al. , which cet al. \u201360. A falar jets .Figure et al. [et al. [et al. [et al. [et al. [et al. [Moore et al. classifi [et al. found th [et al. . Sterlin [et al. studied [et al. further [et al. to deter [et al. \u201368 they [et al. ,69\u201384. R [et al. ,49,85,86 [et al. .Figure (b)It have been demonstrated that solar jets are launched from various pre-eruption structures, including satellite sunspots , mini-filaments, coronal bright points and mini-sigmoids. Observationally, these structures can be regarded as the progenitor of solar jets, and studying them can contribute to the prediction of the occurrence and evolution characteristics of solar jets.In the photosphere, satellite sunspots can be considered as a conspicuous progenitor for surges or many active region coronal jets. Rust reportedet al. [Mini-filaments in the chromosphere and the corona can be recognized as an important progenitor for producing coronal jets. Mini-filaments were found to be eruptive in nature, and their eruption characteristics are similar to those evidenced in large-scale filament eruptions . Severalet al. ,69 studiet al. \u201384,97,98et al. ,65,99.et al. [Solar jets are frequently observed to be ejected from coronal bright points and micro-sigmoids. Coronal bright points represent a set of small-scale low-corona loops with enhanced emission in the EUV and X-ray spectra ,101, andet al. found thHinode X-ray observations, Raouafi et al. [et al. [A coronal sigmoid consists of many differently oriented loops that all together form two opposite J-shaped bundles or an overall S-shaped structure , which ii et al. identifi [et al. reported [et al. .(c)(i)\u03b1 images as a surge and in EUV or soft X-ray images as a coronal jet [et al. [\u03b1 bright jet-like features. Jiang et al. [\u03b1 component had a smaller size than the larger, hot one, and the former moved along the edges of the latter.Sometimes, cool and hot plasma flows can be identified simultaneously in a single jet. 
The co-existing cool and hot components can be observed in EUV images, or separately appear in Honal jet ,115\u2013117.onal jet \u2013120. Theonal jet , althougonal jet . Neverthonal jet ,124. Par [et al. reportedet al. [Yohkoh X-ray data. Since solar jets can occur repeatedly from the same source region, and their lifetimes are usually of 40\u2009min or less according to high-resolution observations, the 2\u2009h time interval seems too long for a single jet. Hence, the cool surge and the hot X-ray jet in Zhang et al. [The cool component in a jet often appears a few minutes after the corresponding hot one ,125\u2013127.a et al. observeda et al. . This abg et al. were verd5), several works have provided evidence that the cool component of jets is directly formed by the erupting mini-filaments confined within the jet base [et al. [The different spatial relationships can possibly be reconciled by considering the projection effect. In principle, observational results suggested that the two components are along different magnetic field lines and dynamically connected. Therefore, their spatial relationship should not be co-spatial but adjacent to each other. The co-spatial case can be expected when the two components overlap each other along the line of sight. For the delayed appearance of the cool component, one can understand it based on the formation mechanism of solar jets. Previously, the delayed appearance of the cool component was explained as the cooling of the earlier, hotter one ,117,125,figure 2jet base ,109,131. [et al. ,63,69, a [et al. ,109,131.(ii)et al. [H filtergram of the Hinode/Solar Optical Telescope (SOT), whose typical lifetime, size and intensity enhancement relative to the background are about 20\u201360 seconds, 0.3\u20131.5\u2009Mm and 30%, respectively. Plasmoids have a multi-thermal (1.4\u20133.4\u2009MK) structure with a number density in the range of IRIS, whose average size is about 0.57\u2009Mm, and their ejecting speed ranges from 10 to 220\u2005km\u2005s\u22121 [Supported by various observational indications , it has et al. observed010\u2009cm\u22123 ,133,134.0\u2005km\u2005s\u22121 . Besides0\u2005km\u2005s\u22121 ,136\u2013139,0\u2005km\u2005s\u22121 instability. According to this study, the observed discrete high-density features in solar jets could be either plasmoids or vortex structures at different wavebands. In 3D simulations, plasmoids were evidenced as twisted flux ropes resembling the shape of solenoids, and they are most likely formed as a result of the resistive tearing-mode instabilities in the current sheets located between closed and open fields [et al. [Numerical experiments also revealed the appearance of plasmoids in solar jets. Yokoyama & Shibata ,140 perfn fields ,145. Wyp [et al. pointed (iii)KH instability is a basic physical process that occurs when there is velocity shear in a single continuous fluid, or when there is a velocity difference across the interface between two fluids . Recent IRIS observations, Li et al. [\u03b1 observations taken by the NVST, Yuan et al. [Vortex structures caused by KH instability can be regarded as a basic fine structure of solar jets, which were frequently observed within or at the outer edges of solar jets ,155. Usii et al. reportedn et al. studied et al. [Theoretical and numerical works were performed to study the KH instability in rotating solar jets. Zaqarashvili et al. found th(d)(i)Yohkoh soft X-ray observations, Shimojo et al. 
[5 the distribution of the lifetimes is a power law with an index of about 1.2; (v) most active region jets are observed to the west of the active regions; (vi) 76% of jets show constant or converging spires whose widths get narrower from the photosphere to the corona, and their intensity distribution often shows an exponential decrease with distance from the footpoints. In a subsequent paper [9\u2005cm\u22123 with an average value of Hinode [\u22121, 5\u2009\u00d7\u2009104\u2005km, 8\u2009\u00d7\u2009103\u2005km and 10\u2009min, respectively. In addition, the velocities of the transverse motions perpendicular to the jet axis ranged from Based on o et al. describent paper , they fue Hinode . The autH broadband filter observations taken by the Hinode/SOT, Nishizuka et al. [Using Ca II a et al. made a sa et al. . Differea et al. ,16 and da et al. , chromoset al. [STEREO observations; they found that the appearance of EUV jets is always correlated with small-scale chromospheric bright points. The typical lifetimes of the studied EUV jets are 20 (30)\u2009min at 171 (304)\u2009\u00c5, while those of the white-light jets observed in coronagraphs peak at around 70\u201380\u2009min. It was found that the speeds were 400 and 270\u2005km\u2005s\u22121 for the hot 171\u2009\u00c5 and cool 304\u2009\u00c5 components, respectively. The speeds measured from 171\u2009\u00c5 observations are comparable to those derived from coronagraph observations (390\u2005km\u2005s\u22121). Mulay et al. [SDO; their results indicated that the lifetimes and velocities are in the ranges of 5\u201339\u2009min and \u22121, respectively. Typically, all the studied jets were co-temporally associated with H\u03b1 jets and non-thermal type III radio bursts, and 50% (30%) of the events in their samples originated in the regions of flux cancellation (emergence). Other similar statistical studies based on STEREO and SDO observations can also be found in the literature [SOHO, the speeds of jets at the solar minimum of activity are in the range of 400\u20131100\u2005km\u2005s\u22121 for the leading edge and 250\u2005km\u2005s\u22121 for the bulk of their material, while the typical speeds at the maximum of activity are around 600\u2005km\u2005s\u22121 [et al. [\u22121, respectively.Nistic\u00f2 et al. performey et al. studied terature ,163. In 0\u2005km\u2005s\u22121 ,165. In [et al. carried (ii)et al. [\u03b1 spectral observations. With the improvements in the quality of imaging and spectral observations, rotating motion was widely observed in H\u03b1 surges [Rotating motion is a typical dynamic characteristic of solar jets. Earlier observations demonstrated the appearance of rotating motion in prominences like a tornado \u2013169, andet al. detected\u03b1 surges ,178,179,\u03b1 surges ,180,181 \u03b1 surges ,183. Som\u03b1 surges ,174,184,\u03b1 surges . In such\u03b1 surges ,88,179.STEREO imaged the fine helical structure of the rotating jets, which exhibited different morphologies when they were observed from different viewing angles [STEREO observations indicated that at least half of EUV jets exhibited a helical magnetic field structure [SDO data, Shen et al. [et al. [Stereoscopic observations taken by g angles . A statitructure . Using tn et al. studied [et al. ,186 foun [et al. \u2013190, and [et al. ,191,192.et al. [\u22121, respectively. Cheung et al. [\u22121, and the magnetic twists needed for the helical jets were found to be supplied by emerging current-carrying magnetic fields et al. [et al. 
[\u22121) and fast (20\u2005km\u2005s\u22121) lateral expansions in an X-ray jet, in which the slow expanding stage was explained as the loop escaping from the anti-parallel magnetic field, while the fast stage corresponded to the whip-like motion of the reconnected field lines. By contrast, Chandrashekhar et al. [\u22121) and fast (135\u2005km\u2005s\u22121) expanding motions of loop systems were also observed in small chromospheric anemone jets [Lateral expansion is a typical characteristic of solar jets, which manifests as the whip-like upward motion of the newly formed field lines ,129. Theet al. found th [et al. reportedr et al. proposedone jets , in whicet al. [\u22121), fast (25\u2005km\u2005s\u22121) and constant stages. Both the slow and fast expansion stages lasted for about 12\u2009min, and the jet kept a constant width of about 4\u2009\u00d7\u2009104\u2005km during the constant stage (a\u2013f ). The fast transition from the slow to the fast expansion stage was explained as the sudden acceleration of the magnetic reconnection between the emerging arch and the ambient open field. In other words, the slow expansion stage corresponded to the emerging period of the arch, during which its reconnection with the ambient open field was slow, while the fast expansion stage manifested as the impulsive reconnection between the two magnetic systems. The constant stage indicated the full opening of the closed arch and the end of the twist transfer into the open fields, and its width corresponded to the distance between the footpoints of the open field line and the remote footpoint of the closed arch. In a statistical study [Shen et al. reportednt stage a\u2013f . Theal study , the autg). Cirtain et al. [et al. [et al. [Transverse oscillation is another distinct characteristic of solar jets g. Cirtain et al. proposedn et al. , Morton [et al. estimater et al. estimate00\u2013536\u2009s ,201\u2013203,00\u2013536\u2009s ,88.. 3(a)Coronal plumes are thin ray-like structures that are pervasive within polar and equatorial coronal holes, as well as quiet-Sun regions \u2013206; theet al. [et al. [Lites et al. observedet al. . Raouafi [et al. studied [et al. further (b)Solar jets are tightly associated with filaments. On the one hand, as has been discussed in \u00a7\u00a7et al. [et al. [et al. [Luna et al. reportedet al. . A simil [et al. , in whic [et al. reported [et al. ; it was [et al. ,219. It et al. [et al. [et al. [et al. [Solar jets not only supply sufficient mass for filament formation but also cause the instability and eruption of large-scale filaments. Zirin reportedet al. reported [et al. studied [et al. ,222. All [et al. reported [et al. found th(c)et al. [et al. [et al. [Solar jets are closely related to magnetohydrodynamic (MHD) waves. Observational studies indicated that solar jets can act as a driver to excite torsion Alfv\u00e9n waves in themselves \u00b0 or less are typically associated with solar jets [a), but also standard broad CMEs with a typical three-part structure [b).CMEs represent large-scale plasma and magnetic fields being released from the Sun into the interplanetary space \u2013237. CMElar jets . Observalar jets 3He/4He ratios and high ionization states and tightly correlated with type III radio bursts [Solar energetic particles (SEPs) carry important information about the particle energization inside the solar corona, as well as the properties of the acceleration volume. SEP events are divided into \u2018gradual\u2019 and \u2018impulsive\u2019 types. 
Gradual SEP events are long-lasting, intense, more closely correlated with CMEs and characterized by the abundances and charge states of the solar wind. Therefore, they are thought to be accelerated by CME-driven coronal/interplanetary shock waves. By contrast, impulsive SEP events are short lived, less intense, closely related to flaring active regions, characterized by high o bursts .et al. [3He-rich events and found that their sources lie close to coronal holes and are characterized by jet-like ejections along Earth-directed open field lines. Some studies suggested that impulsive SEP events are associated with narrow jet-like CMEs [et al. [3He-rich SEPs were also observed to be associated with helical jets [Flaring regions accompanied by solar jets are found to be the most possible candidate solar source for producing impulsive SEP events , since tet al. investigike CMEs \u2013266. Nit [et al. found thcal jets , and thecal jets ,266,268,cal jets and sunscal jets ,271.ne is the electron number density) by the Langmuir waves generated by the electron beam instabilities [et al. [A type III radio burst is an important diagnostic tool for SEPs. It is a signature of propagating non-thermal electron beams in a wide range of heights of the solar atmosphere (from the low corona to the interplanetary space), and is excited at the fundamental and second harmonic of the local electron plasma frequency \u22121 for slow and fast solar winds [The problems of coronal heating and the acceleration of solar wind are two highly controversial topics in solar physics. Since energy must come from the solar interior, it is hard to understand why the coronal temperature is far hotter than the solar surface. The problem is primarily concerned with how energy is continuously transported up into the corona through non-thermal processes from the solar interior and then converted into heat within a few solar radii. In the last half-century, many coronal heating theories have been proposed, but two theories have remained as the most likely candidates: wave heating and magnetic reconnection . The solar winds . Previouar winds . Recent ar winds .\u22121, whose energy and mass can satisfy the power (6\u2009\u00d7\u20091027\u2005erg\u2005s\u22121) and mass flux (2\u2009\u00d7\u20091012g~s\u22121) requirements of the corona and solar wind if one assumes a birth rate of 24 events per second over the whole Sun [et al. [et al. [Ultraviolet spectrum observations have revealed prevalent high-energy jets in the corona at an average speed of 400\u2005km\u2005sscenario . The onescenario ; this sha et al. found tha et al. ,153.Figet al. [\u22121) and the other is near the speed of sound (approx. 200\u2005km\u2005s\u22121). The authors claimed that a large number of X-ray jets with high velocities may contribute to high-speed solar wind. McIntosh et al. [et al. [et al. [et al. [IRIS observations also reveled the prevalence of small-scale jets from the networks of the solar transition region and chromosphere [\u22121. They were thought to be an intermittent but persistent source of mass and energy for solar wind.Alfv\u00e9n waves, which propagate along magnetic field lines over large distances and transport magneto-convective energy from near the photosphere into the corona, have been invoked as a possible candidate to heat coronal plasma to millions of degrees and to accelerate the solar wind to hundreds of kilometres per second. 
Transverse oscillations of spicular jets were regarded as the presence or passage of Alfv\u00e9n waves; the energy carried by these Alfv\u00e9n waves was found to be enough to accelerate solar wind and to heat the quiet corona \u2013288. Ciret al. detectedh et al. found th [et al. found tw [et al. and Thre [et al. ). Recent\u22121 at heliocentric distances of a few solar radii. These observational facts may imply that the moving jets have been incorporated into the ambient solar wind [et al. [\u22121; they found a ubiquitous presence in polar coronal regions at about 100-fold mass and energy greater than the coronal response itself. This suggests that the primary acceleration of the solar wind should induce the dissipation of high-speed solar jets [For big jets that often reach up to a few solar radii and can be observed as white-light jets or jet-like CMEs, their contribution to solar wind often exhibits as microstreams or speed enhancements ,293\u2013295.lar wind ,241. Yu [et al. found thlar jets .. 4With the unceasing improvement of solar telescopes and numerical modelling, the physical interpretation of solar jets has achieved significant progress in recent years. In theoretical studies, the mechanism of flux emergence and the onset of instability or loss of equilibrium were investigated in detail, in which the slingshot effect, untwisting and chromospheric evaporation were considered as the possible acceleration mechanisms . As more(a)et al. [et al. [\u03b1 surges are generated by magnetic reconnection between emerging fluxes and ambient pre-existing magnetic fields et al. [et al. [Pariat et al. proposedet al. , the helet al. ,63,180 aet al. ,199. It et al. . In such [et al. found th [et al. , plasma [et al. , gravity [et al. and the [et al. . In addi [et al. and the [et al. , microst [et al. were als(c)The magnetic breakout model was originally proposed to interpret the initiation of large-scale CMEs, in which magnetic reconnection between the unsheared field and neighbouring flux systems decreases the amount of the overlying field and, thereby, allows the low-lying sheared flux to break out . So far,et al. [Recently, high-resolution observational and statistical studies suggested that all coronal jets are probably driven by mini-filament eruptions, and they share many common characteristics with large-scale eruptions. Therefore, coronal jets are proposed to be the miniature version of large-scale eruptions ,186,247.et al. performeet al. , and theet al. \u2013321.Fig(d)To obtain realistic numerical results that are more comparable to real observations, some works managed to use multi-wavelength observations in tandem with MHD simulations to investigate the formation and evolution of solar jets. Such simulations are known as data-driven models, which use continuously time-varying solar observations as the input to reproduce solar jets. By contrast, if one use only an instantaneous cadence of observation as the input, it should be called data-constrained modelling.et al. [Jiang et al. simulateet al. ,301. In et al. [Using an extrapolated non-force-free magnetic field as the initial condition, Nayak et al. performeet al. and the et al. .et al. [et al. [Cheung et al. presenteet al. to carry [et al. presente(e)Most previous simulations were performed within small numerical domains in Cartesian geometry to study the generation mechanism and evolution process. 
So far, only a few publications have considered a large simulation domain extension to the interplanetary space using spherical geometry to investigate the interplanetary effects caused by solar jets.et al. [et al. [T\u00f6r\u00f6k et al. and Lion [et al. performen models . A whiteet al. [in situ in the solar wind. Using another code that employs Alfv\u00e9n wave dissipation to produce a realistic solar wind background, Szente et al. [et al. [\u00b0 in latitude and out to To investigate the influence of solar jets on the solar wind, Karpen et al. extendedet al. by inclue et al. studied [et al. was gene results ,298, the. 5High-spatio-temporal-resolution imaging and spectroscopic and stereoscopic observations covering a wide temperature range over the last several decades have significantly improved our understanding of solar jets, including various aspects such as their triggering, formation, evolution, fine structure, relationships with other solar eruptive activities, and their possible contribution to the coronal heating and acceleration of solar wind. Nowadays, we recognize that the basic energy release mechanism in solar jets is magnetic reconnection; they are triggered by photospheric magnetic activities exhibiting as flux\u2013flux cancellations and shearing motions of opposite polarities, and are accelerated alone or in combination by possible mechanisms of untwisting, chromospheric evaporation and slingshot effects. Observationally, solar jets can be divided into eruptive jets and confined jets, or straight anemone jets and two-sided-loop jets; they can evolve from different progenitors including satellite sunspots , mini-filaments, coronal bright points and mini-sigmoids; they can exhibit various fine structures including cool and hot components, plasmoids and KH vortex structures; and they can show interesting rotating and transverse oscillation motions. Solar jets not only provide the necessary mass and energy to the corona and solar wind, triggering other eruptive phenomena such as EUV waves, filament and loop oscillations and CMEs, but also significantly affect the interplanetary space through launching CMEs and energetic particles. Among the new knowledge that we have gained in recent years is that solar jets are often driven by mini-filament eruptions in association with photospheric magnetic flux cancellations; in addition to narrow white-light jets, broad CMEs with typical three-part structures and simultaneous paired narrow and broad CMEs are found to be dynamically associated with solar jets. These findings lead to an important conclusion that solar jets may represent the miniature version of large-scale solar eruptions, and they probably hint at a scale invariance of solar eruptions. In this sense, investigating solar jets can provide important clues to understanding complicated large-scale solar eruptions (e.g. CMEs) and currently indistinguishable small-scale transients (e.g. spicules).Numerical modelling of solar jets has also achieved many significant advances in recent years. MHD models of solar jets have been developed from one to three dimensions with different scenarios such as the emerging-reconnection and onset of instability mechanisms, which can be applied to interpret the formation, evolution, morphology and plasma properties of standard and blowout jets in coronal holes and active regions. 
Recently, some numerical works have further considered the effects of heat conduction, radiative losses and background heating, and more realistic data-constrained and data-driven MHD simulations are being developed to understand solar jets. These great efforts make the obtained numerical results more morphologically and quantifiably comparable to real observations. In addition, numerical works that consider a large domain extension to the interplanetary space using spherical geometry are also being developed to aid our understanding of the interplanetary disturbances resulting from solar jets.Observations have shown that solar jets are tightly associated with magnetic flux cancellation, especially in mini-filament-driven jets. Nevertheless, what kind of physical process takes place during the triggering stage is still unclear. Physically, flux cancellation represents three possible processes: emergence of U-shaped loops, submergence of \u03a9-shaped loops and reconnection in the magnetogram layer . TherefoObservational studies have indicated that solar jets not only cause narrow white-light jets in the outer corona, but they can also result in broad CMEs with a typical three-part structure. Sometimes, a single mini-filament-driven jet can cause a pair of simultaneous narrow and broad CMEs . Narrow Although more and more observational studies have shown the similarity between small-scale solar jets and large-scale filament/CME eruptions, the possible scale invariance of solar eruptions should be further tested theoretically and observationally. It should be made a priority to check whether the current jet models can be applied to small-scale explosions such as spicules and nano-flares, which are believed to be important for coronal heating. On the other hand, it is also important to check if the current jet models are suitable for explaining complicated large-scale solar eruptions and astrophysical jets.The contribution of solar jets to coronal heating and to the formation and acceleration of solar wind as well as the jet-associated acceleration mechanism of SEPs should be investigated in depth. There are too many assumptions and uncertainties in the existing studies on these topics.Most of the current MHD simulations only deal with idealized boundary and initial conditions using a relatively small numerical domain. Future investigations should consider more realistic data-constrained and data-driven MHD simulations, using a large simulation domain so that one can study the interplanetary disturbances caused by solar jets.Despite the great advances achieved in previous observational and numerical studies, there are still many aspects of solar jets that deserve further investigation. The following is a list of some outstanding questions.Parker Solar Probe , launch Orbiter , launche Orbiter , which iy (ASO-S ), which Click here for additional data file."} {"text": "The WHO has identified vaccine hesitancy as one of the 10 threats to global health. One of the biggest causes of vaccine hesitancy is an inaccurate understanding of infectious diseases and vaccines. Currently, a new strain of coronavirus has spread around the world. People have a great fear of new coronavirus infections. A protective vaccine against coronavirus would be widely well received, even if there are inevitable severe or unknown adverse reactions. 
Awareness of various kinds of infectious diseases, preventive behavior against infectious diseases including vaccination, and various kinds of bias towards the recognition of infectious diseases and vaccines might be a potential focus.Dr. Foteini Malli et al. presenteDr. Adam Palanica et al. examinedDr. Abanoub Riad et al. conducteDr. Georgios Marinos et al. conducteDr. Elham Kateeb et al. assessedDr. Abanoub Riad et al. investigDr. Yen-Ju Lin et al. comparedThe editor believes that these findings will provide a rationale for more effective dissemination of vaccines against COVID-19 and hopes that this Special Issue \u201cHuman Consciousness and Behavior towards Infectious Diseases and Vaccines 2.0\u201d will contribute to the improvement of health of people around the world."} {"text": "Microsurgical anatomy is not only the backbone for neurosurgical operations, but also for technological innovations, novel surgical techniques, a better understanding of the etiopathogenesis of pathologies, and translational medicine from neuroscience to daily clinical practice.The overarching goal of this Special Issue was to build a bridge between research and patient care studies and to bring neuroscientists and clinicians together.This Special Issue contains 16 individual papers that are categorized into research laboratory investigations; clinical case series, including those of the brain and spine regarding updated intraoperative technologies; novel neurosurgical tenets; and molecular anatomy.Wysiadecki et al. contribuThe article published by Poblete et al. , simplifAnterior petrosectomy provides excellent exposure to the petroclival region, but its related morbidity is non-negligible. The paper published by Flores-Justa et al. notes saThe article published by \u00c7evik et al. examinesG\u00f6ksu et al. publisheThe paper published by Kim et al. is a cliRevascularization procedures are considered to be the most effective treatment modality that can be implemented to diminish the risk of intracranial hemorrhages in hemorrhagic moyamoya disease. On the other hand, delayed anastomotic occlusion is a common long-term complication after direct vascularization procedures. In the article published by Chen et al. , the autExtreme lateral interbody fusion (XLIF) has become a standard technique for the fusion of the thoracic and lumbar spine . It is aThe article published by Pojski\u0107 et al. is a proThe paper \u201cDo Orthopedic Surgeons or Neurosurgeons Detect More Hip Disorders in Patients with Hip-Spine Syndrome? A Nationwide Database Study\u201d by Yin et al. pointed Soldozy et al. systemic\u201cSpontaneous Resolution of Late-Onset, Symptomatic Fluid Collection Localized in the Meningioma Resection Cavity: A Case Report and Suggestion of Possible Pathogenesis\u201d is the contribution by Kim et al. . The autTranscranial MR-guided focused ultrasound has been adopted as a noninvasive ablative procedure for the treatment of movement disorders and psychiatric disorders . Additio\u201cCommon Challenges and Solutions Associated with the Preparation of Silicone-Injected Human Head and Neck Vessels for Anatomical Study\u201d, the article in this Special Issue published by \u00c7\u0131rak et al. , reportsSince the beginning of this century, the portion of the central nervous system that is associated with lymphatics has gained popularity after the proven existence and functionality of the lymphatic structures for cleaning waste products from cerebrospinal fluid in animal studies . Animal"} {"text": "That O. 
setnai was recovered as the sister group of all other ricefishes with high branch support in a molecular phylogenetic analysis is therefore not surprising and possibly reflects a long-branch attraction artefact. Yamahira et al. offered an explanation for the exceptionally long branch of O. setnai in their dataset. †Lithopoecilus brouweri, a fossil of Miocene age from Sulawesi described by de Beaufort, has been discussed in relation to Oryzias and the Sulawesi endemic Adrianichthys, including by Rosen and by Parenti. Yamahira et al. used †Lithopoecilus to calibrate the internal node between Oryzias sarasinorum and Oryzias eversi, citing Horoiwa et al., thereby taking †Lithopoecilus to represent the last common ancestor of these two recent species without any supporting evidence; its use for calibration of this internal node is unfounded. In total, the authors employed three fossil calibrations, including †Lithopoecilus. In conclusion, the 'out-of-India' dispersal hypothesis to explain modern ricefish biogeography is unsupported, and vicariance, the fragmentation of a widely distributed coastal ancestral species by tectonic and climatological events, is a better explanation for the historical biogeography of ricefishes."} {"text": "Chlorinated paraffins (CPs) have been applied as additives in a wide range of consumer products, including polyvinyl chloride (PVC) products, mining conveyor belts, paints, sealants and adhesives, and as flame retardants. Consequently, CPs have been found in many matrices. Of all the CP groups, short-chain chlorinated paraffins (SCCPs) have raised alarming concern globally due to their toxicity, persistence and long-range transport in the environment. As a result, SCCPs were listed in the Stockholm Convention on Persistent Organic Pollutants (POPs) in May 2017. Additionally, a limit for the presence of SCCPs in other CP mixtures was set at 1% by weight. CPs can be released into the environment throughout their life cycle; therefore, it is crucial to assess their effects in different matrices. Although about 199 studies on SCCP concentrations in different matrices have been published on other continents, studies on SCCP concentrations in Africa are scarce, particularly for consumer products, landfill leachates and sediment samples. So far, published studies on SCCP concentrations on the continent cover egg samples, an e-waste recycling area and indoor dust in Ghana and South Africa, despite the absence of any SCCP production in Africa. A huge research gap on SCCPs therefore remains in Africa, and there is a need to develop robust SCCP inventories, for which the Stockholm Convention has already developed a guidance document. This review, therefore, examines the state of knowledge on the levels and trends of these contaminants in Africa and identifies research gaps that need to be addressed in order to better understand the global scale of the contamination. Since all CP mixtures detected had an SCCP content significantly above 1%, the mixtures used would also be classified as POPs. In seven of these, the SCCP content was below the limit of quantification, meeting the European Union and Stockholm Convention requirement of an SCCP content of less than 1% in MCCPs.
However, in 3 cases, the SCCP content was 2.8%, 5.0% and 14.4% of the total CP content were used, and an average migration efficiency of 12% and 1.5% for SCCPs and MCCPs were reported, respectively. SCCPs (C10) and lower chlorinated (C6 and C7) showed the highest migration. With these findings, it is important to highlight that since there is lack of CP regulations on uses in most African countries, SCCP oils might enter sensitive uses like lubricants in food production sector leading to a larger population exposed to their toxicological effects.Considering the use of SCCPs as additives in diverse consumer applications, continuous human exposure is likely to occur from this route. The major routes of exposure to SCCPs in human include air inhalation, dust ingestion and dermal absorption EFSA . Despite\u22121 day\u22121 after the 100th week, and that of rats at 312 mg kg\u22121 day\u22121 after the 90th week, which were significantly less than those of respective controls. Another study by Wang et al. for 28 days caused immune cell infiltration in the liver, which indicated a potential of liver damage although the liver weight was not affected. The International Agency for Research on Cancer (IARC 10-SCCPs (60% chlorination) were found to be carcinogenic to experimental animals and possibly carcinogenic to humans. Wang et al. (9-13-CPs (vehicle corn oil) altered the expression of cancer-related genes, implying that these cancer-related genes may be involved in SCCP-induced carcinogenicity. Furthermore, SCCPs are known to be endocrine disruptors. Treatment of male rats with chlorowax 500C and cerclor 56\u00a0L at 1\u00a0g kg\u22121 day\u22121 (vehicle corn oil) by gavage for 14 days significantly reduced the levels of plasma thyroid hormone (TH), 32% and 39% lower for free thyroxine (FT4) and 26% and 35% lower for total T4 (TT4), respectively for 28 days showed decreased levels of plasma FT4, free triiodothyronine (FT3) and hepatic T4 and increased levels of plasma TSH and hepatic T3.The toxicity of SCCPs has raised an alarming public concern globally leading to a number of research studies reviewed by Wang et al. . Accordig et al. demonstrcer IARC reportedg et al. also demDifferent studies have indicated the ubiquity of SCCPs in diverse environments and matrices, including industrialized regions as well as remote areas . Compared with previous atmospheric SCCP concentration, only Huang et al. were below MDL and if \u2211SCCP concentration above MDL, ranged from 11 (RU5) to 170 (RU1) ng/sampler with \u2211SCCP concentration of 61 ng/sampler. The \u2211SCCP concentrations were 28-fold higher than those of PCBs measured with the same sampler at the same site in 2012 , which was attributed to the use pattern of commercial CP formulations in China. Thus, Shanghai aquatic system is mainly contaminated by MCCPs than SCCPs. Another study by Pan et al. which were in the same order of magnitude with those reported by Saborido Basconcillo et al. from three natural protected areas in Spain . SCCP concentration determined in egg samples of Larus michahellis ranged from 1.78 to 3.70 ng g\u22121 ww, which were notably lower than those that were determined in Larus audouinii ranging from 4.40 to 5.08 ng g\u22121 ww. Morales et al. (Larus michahellis and Larus audouinii) from the Ebro delta Natural Park, and SCCP concentrations were detected at 4536 \u00b1 40 pgg\u22121 ww in Larus michahellis and 6364 \u00b1 20 pg g\u22121 ww in Larus audouinii.In Africa, Adu-Kumi-Jonathan et al. analysedg et al. 
determinez Prats analyseds et al. also con\u22121\u201410% of the product weight . Furthermore, a number of studies conducted on landfills in China have focused on SCCP concentration that are determined in soil and sediment samples of e-waste dismantling areas than in landfill leachate and sediment samples. Different research studies have shown that concentrations of SCCPs in e-waste dismantling areas are significantly higher than in other sampling areas (Kalinowska et al. \u22121 dw. Xu et al. (5 ng g\u22121 and 32.5 to 1.29 \u00d7 104 ng g\u22121, respectively. Concentrations of SCCPs in soil samples were higher than in sediment samples, which were attributed to the distance between sampling points around the e-waste dismantling centres. Concentrations of SCCPs in soil and sediment samples may vary based on factors such as land-use type, proximity to sources and soil profile (Kalinowska et al. Guida et al. stated tu et al. determin\u22121 and 145 to 28.000 ng g\u22121 were determined in Kingtom and Agbogbloshie e-waste waste sites, respectively. Compared to literature data, the lowest SCCP concentration determined in the Agbogbloshie and Kingtom samples was similar to slightly higher than concentrations determined in ambient soils in China, UK and Norway (Xu et al. According to M\u00f6ckel et al. , it can According to Xia et al. , there i\u22121 ww in breast milk samples from China. Compared with other studies, the concentrations were of the same magnitude with those reported by Yang et al. (\u22121 lw and 65.6 to 2310 ng g\u22121 lw in 2007 and 2011, respectively. The obtained results varied by about three orders of magnitude. Cao et al. (\u22121 lw. In the UK, Thomas et al. (\u22121 lw.Recently, Liu et al. reportedg et al. . Xia et g et al. also deto et al. also dets et al. determin\u22121 and 70.0 ng mL\u22121, respectively, which were in the same range with those reported by Aamir et al. (\u22121 ww, which were comparable with those previously reported in blood samples (Qiao et al. \u22121 were determined by Zhou et al. (\u22121 ww. The concentrations obtained were lower than those obtained in human, maternal and cord serum samples from China (Zhou et al. Liu et al. reportedr et al. and Qiaor et al. . Ding etr et al. also detu et al. , which wu et al. and humau et al. . In Austu et al. showed S2, including over 54 countries and a population of about 1.17 billion people (UNDP Africa is the second largest continent in the world covering over 30 million kmple UNDP . Apart fple UNDP , it remaple UNDP . Consideple UNDP .In comparing SCCP studies done in Africa so far with those published in other continents, it is clear that there remains a huge gap in CP studies. There are limited published studies on the determination of SCCP concentration in consumer products and landfills as well as in other different matrices in the continent, and as such, further research studies are necessary in order to tackle those research gaps. Additionally, very few studies have so far reported on SCCP concentration in landfill leachate and sediment globally. Nearly all imports of CPs and CP-containing waste end up in dump sites or landfills in Africa. The lack of resources on intensive analytical methods and elaborate sampling techniques required may be the reason for fewer SCCP studies in Africa; therefore, attention should be given to this area. Temperature is one of the key meteorological parameters that can severely influence the global distribution of SCCPs in the environment. 
Due to the hot climate experienced in Africa, SCCPs can evaporate, which promotes their re-emission and distribution to other regions. In view of the global effects of SCCPs, their use needs to be legally restricted or banned and should not continue under the many applications currently exempted by the Stockholm Convention. Robust inventories of SCCPs, and of other CP products containing SCCPs, are overdue in Africa and therefore require attention. It is, therefore, important to gather comparable data on SCCP concentrations in consumer products and other matrices on all continents in order to obtain a global overview of the state of SCCPs. It is also crucial to study SCCPs in Africa in order to understand the impact of the Stockholm Convention on reducing POPs in Africa, to assess their distribution, and to evaluate the contribution of other continents to the environmental burden of SCCPs in Africa, and vice versa."} {"text": "Liver diseases are currently diagnosed through liver biopsy. Its invasiveness, costs, and relatively low diagnostic accuracy require new techniques to be sought. Analysis of volatile organic compounds (VOCs) in human bio-matrices has received a lot of attention. It is known that a musty odour characterises liver impairment, which has led to the elucidation of volatile chemicals in the breath and other body fluids such as urine and stool that may serve as biomarkers of disease. Aims: This study aims to review all the studies found in the literature regarding VOCs in liver diseases, and to summarise all the identified compounds that could be used as diagnostic or prognostic biomarkers. The literature search was conducted on ScienceDirect and PubMed, and each eligible publication was qualitatively assessed by two independent evaluators using the SANRA critical appraisal tool. Results: In the search, 58 publications were found, and 28 were kept for inclusion: 23 were about VOCs in the breath, one in the bile, three in urine, and one in faeces. Each publication was graded from zero to ten. A graphical summary of the metabolic pathways showcasing the known liver disease-related VOCs and suggestions on how VOC analysis of liver impairment could be applied in clinical practice are given. Fetor hepaticus, a musty breath aroma, has been among the most prominent signs of liver insufficiency available to clinicians, and it was in the 1970s that Chen et al. began to investigate its chemical basis. However, a wide variety of viral, immune-mediated, cholestatic, and toxic conditions may cause chronic liver tissue inflammation. In response to this, the liver accumulates extracellular matrix components, leading to fibrous tissue and scarring [7].
In prIn the past few decades, several noninvasive biomarkers have entered the liver research field, some of which have already been used in clinical trials, and the most widely used are the enhanced liver fibrosis score (ELF) , the FibHelicobacter pylori infection via the C13 urea breath test ) OR ((Diagnosis/Broad[filter]) AND (\u201cLiver Diseases\u201d[Mesh])))) AND ((volatile organic compounds) OR \u201cVolatile Organic Compounds\u201d[Mesh])) AND OR \u201cBreath Tests\u201d [Mesh]).The search terms for faeces were:(((((\u201cLiver Diseases\u201d[Mesh]) OR liver disease) OR ((Diagnosis/Broad[filter]) AND (\u201cLiver Diseases\u201d[Mesh])))) AND ((volatile organic compounds) OR \u201cVolatile Organic Compounds\u201d[Mesh])) AND OR faecal analysis) OR \u201cFeces\u201d [Mesh]).Replacing the word \u201cDiagnosis\u201d with \u201cPrognosis\u201d or \u201cMonitoring\u201d yielded the same results for both biological matrices. Additional studies cited by the initially identified research papers were also included and discussed in this review. These additional studies examined liver diseases related to VOCs in the breath and faeces and other body fluids such as urine, blood, and bile. The number of the latter was minimal; therefore, it was decided to discuss these as well. Only articles published in English, reporting original research in humans, and focused on different VOC patterns between healthy and diseased liver subjects were included. Engineering or technical studies were excluded since they fall outside the scope of this review. Finally, no year of publication criterion was imposed as an exclusion criterion. An overview of the literature search and the exact numbers of the publications found and used herein can be seen in the Results section in Two independent evaluators assessed the eligible studies using the Scale for the Assessment of Narrative Review Articles (SANRA) . SANRA iThe literature search performed in both PubMed and ScienceDirect resulted in 58 hits in total, of which 1 was not accessible, 16 were either engineering or technical, and 13 were reviews. Thus, the final number of papers to be discussed here was 28. From these 28 articles, 23 found VOCs in the breath, one in the bile, three in urine, and one in faeces.Pauling et al. pioneered breath testing with their unprecedented study published in 1971 . Since tVan den Velde et al. and DadaIn 2015, Del Rio et al. also comPijls et al. stratifiDe Vincentis et al. also comIn 2015, Eng et al. conducteThe most significant compounds, and the ones that the aforementioned literature seems toIn 2013, Alkhouri et al. assessedHepatic encephalopathy (HE) was investigated by Khalid et al. . They saIn 2016, O\u2019Hara et al. , a folloQin et al. comparedFerrandino et al. followedIn 2020, another broader scale HCC study was reported by Miller-Atkins et al. . They saArasaradnam et al. investigBreath analysis has also been implemented to examine obesity-related liver diseases. Solga et al. comparedIn 2013, Verdam et al. investigAlkhouri et al. examinedMillonig et al. demonstrIn 2020, Sinha et al. were theLetteron et al. conducteHanouneh et al. publisheThe key compounds and their metabolic pathways discussed in the aforementioned literature can be sRaman et al. sampled In 2015, Navaneethan et al. publisheNavaneethan et al. publisheArasaradnam et al. publisheFinally, Bannaga et al. 
publisheVOC analysis might greatly benefit liver disease diagnosis and prognosis; however, it is apparent from the literature findings that implementation of the VOC analysis in clinical liver practices is not ready yet for routine applications since much more research is needed. All conducted studies are either proof-of-concept studies or of a small sample size. Furthermore, many of the studies presented here did not perform any internal or external validation of their findings. The correction of possible confounding factors was also not considered, and this might have affected their results. Nevertheless, some key concept can be kept from the present review that may point towards the eventual implementation of the VOC analysis in clinical liver practices. Several VOCs have been found in several studies, and as indicated in"} {"text": "This new alternative combines an improved quantification of intracellular fungal components with a lower hazard risk at a lower cost.Arbuscular mycorrhizal (AM) fungi are one of the most common fungal organisms to exist in symbiosis with terrestrial plants, facilitating the growth and maintenance of arable crops. Wheat has been studied extensively for AM fungal symbiosis using the carcinogen trypan blue as the identifying stain for fungal components, namely arbuscles, vesicles and hyphal structures. The present study uses Sheaffer blue ink with a lower risk as an alternative to this carcinogenic stain. Justification for this is determined by stained wheat root sections ( Arbuscular mycorrhizal (AM) fungi are the most common fungal organisms to exist symbiotically with the root structures of vascular plants . It is c37H27N3Na2O9S3) is one of several stains that has been widely utilized for many years [34H28N6O14S4), originally developed by Philips and Hayman (1970) [Current staining procedures target arbuscules, vesicles and hyphal structures within the root cortex. Staining of target structures is performed for rapid, simple and cost-effective assessment of fungal symbiosis. Using light microscopy, the required skill sets are lower and the procedure can be performed with ease. Lactophenol cotton blue (Cn (1970) . With thet al. [Many widely used fungal root stains, including trypan blue, are known carcinogens . As a coet al. investig3COOH). The constituents of commercially available fountain pen ink were reported by [et al. [Phaseolus vulgaris), barley (Hordeum vulgare), cucumber (Cucumis sativus) and wheat (Triticum aestivum). Vierheilig et al. [et al.\u2019s work [Rhizoctonia cerealis inoculation were stated as being observable on wheat roots, but the authors do not present the data in any further detail. Another potential factor that may explain the lack of more wide-scale adoption is that in subsequent years, focus shifted toward the immunological identification of characteristic intracellular root cortical fungal structures [The use of an ink\u2013vinegar stain has been proposed as a safer alternative. The chemical composition of the ink\u2013vinegar stain is not stated by , althougorted by as being [et al. . They ing et al. reported.\u2019s work . The reaructures .et al. [Immunohistochemical (IHC) methodologies are advantageous due to their higher specificity and the reduced damage caused to histological architecture . When pret al. has remaThe present study compares the efficacy of trypan blue and Sheaffer blue ink as stains of intracellular AM fungal components in the root sections of two varieties of winter wheat (Zulu and Siskin). 
The focus is on image clarity, quantifiable structures (arbuscules and vesicles) and the potential for the substitution of a hazardous staining material with a safer alternative.et al. [Winter wheat (variety: Siskin), 98\u200a% germination rate, with no chemical pretreatment was supplied by KWS UK Ltd. A second winter wheat variety (Zulu) was sourced from a farm in central Hertfordshire as farm-saved seed. The percentage of organic matter of the adjusted soil was confirmed via modified loss on ignition (LOI) methodologies obtained from Myrbo et al. , using 5Individual seeds were introduced into 300\u2009g of adjusted, pre-purchased, top soils (J Arthur Bowers) and kept in controlled growth room conditions at 25 \u00b0C, 1770 lm and a humidity of 35\u200a%.n=60) and Siskin (n=60) wheat varieties in controlled growth conditions. The samples were viewed initially at a total magnification of 40\u00d7 using a Vickers compound microscope. The counting of stained root vesicles and arbuscules was performed at a total magnification of 100\u00d7, and fungal components were counted and recorded with the focus on arbuscule and vesicle quantity. Images of samples were taken with a Bresser HD microscope camera.Intracellular root arbuscules and vesicles were examined after the roots were left fully submerged in a formaldehyde, acetic acid, alcohol (FAA) and deionized water solution, 10\u200a:\u200a5\u200a:\u200a50\u200a:\u200a35, respectively, for 24\u2009h. Roots were removed and rinsed with deionized water prior to autoclaving. Root systems, containing small quantities of soils, were subject to sonication at 42\u2009kHz for 10\u2009min and rinsed in deionized water. If small amounts of soils still adhered to root systems, a soft fine paint brush was used to remove debris. Root systems were submerged in 5\u200a% hydrochloric acid for 30\u2009min and incubated at 60\u2009\u00b0C in a water bath. After cooling to room temperature, root material was sectioned into 1\u2009cm pieces, with adjacent sections subjected to different stains. Five 1\u2009cm root sections were each allowed to stain in 0.4\u200a% trypan blue in phosphate-buffered saline (PBS) (Fisher Scientific) and 10\u200a% Sheaffer blue ink in 25\u200a% glacial acetic acid for 3\u2009mit-tests were employed for null hypothesis testing of differences between trypan blue and Sheaffer blue staining. Statistical significance was determined by P values \u22640.05.Standard errors and means were calculated from raw data for each week of sample collection. Paired The difference in clarity of the stained root sample between the trypan blue and Sheaffer blue ink approaches can be seen in n=120), staining with trypan blue did not produce sufficient clarity in comparison to staining with Sheaffer blue for the two varieties of winter wheat investigated. Quantification of fungal spores is possible from the images presented in From the samples examined (t-test for Zulu (degrees of freedom (df)=5, t value=\u22124.5, P=0.003) and Siskin (sem) for those samples stained with trypan blue varietiepan blue reflectsThe present study has identified a statistically significant difference between staining techniques with respect to observable intracellular root cortical fungal components. The employment of Sheaffer blue ink over trypan blue is more favourable in terms of both improved image clarity and reduction of the user\u2019s exposure risk to chemicals with potential long-term health hazards. As an azo stain, trypan blue is a known carcinogen . Sheaffeet al. 
[Rhizoctonia cerealis, but did not present the data or images to substantiate the comment in any further detail. Vierheilig et al. [Although subject to scrutiny in previous studies, the use of ink\u2013vinegar as a staining method has not been adopted widely. In 1998, Vierheilig et al. investigg et al. also useet al. [et al. [et al. [et al. [Cottet et al. compared [et al. studied [et al. , and tho [et al. , who anaet al. [et al. [et al. [The clearing of soil materials from roots is an important first step in the preparation of stained samples due to the desired components being potentially obscured by debris. The present study used adjacent root sections. This negates any differences from root clearing. In most cases, roots are cleared with the use of 10\u200a%\u2009w/v potassium hydroxide . Whilst et al. , the emp [et al. , Kobae [ [et al. and CottAn area of limitation within the method used arises from the manual counting of stained fungal components and the time input required. Although potentially faster approaches exist for the quantification of fungal structures, employing image analysis software, the programs are only able to scan the field of view for objects that are different to the background image. There is a high risk of misidentification and classification of structures and hence this was not considered to be a sufficiently reliable approach for the purpose of the current study.et al. [Ford and Becker producedet al. investigIn conclusion, the employment of Sheaffer blue staining allows for the effective, safe and low-cost quantification of fungal components in commercially important plant species such as wheat. The manual handling of slides, whether old or new, comes with reduced long-term health risks when stained with Sheaffer blue as opposed to the carcinogenic azo dye trypan blue. Further, the number of fungal components that obstruct viewing are limited, resulting in a more reliable quantification of the established AM fungal infection and symbiosis within the roots of wheat plants."} {"text": "Mycobacterium tuberculosis enable the bacterium to acquire lipids from the host cells. Asthana et al. present the first structural insights into the potential assembly of Mce1 and Mce4, advancing our understanding of lipid transport by the human pathogen that causes tuberculosis.The mammalian-cell-entry (Mce) proteins of Mycobacterium tuberculosis, the bacterium that causes tuberculosis , the \u2018white plague\u2019 or consumption, tuberculosis appears as a common theme in art, music and literature, and has shaped many elements of human social history multiprotein complexes that are proposed to play crucial roles in trans\u00adlocating various lipid molecules across the cell envelope , and importing cholesterol from the host cells (Mce4) that act as substrate-binding proteins (SBPs) domain, the MCE domain, a helical domain and a tail domain of variable size. They also showed that all these individual domains, except for the MCE domain, require detergents for solubility and stability. Interestingly, the full-length and individual domains of M.\u00a0tuberculosis Mce1A and Mce4A are predominantly present as monomers in solution. This was further confirmed by the crystal structure of the single MCE domain present in Mce4A (Mce4A39\u2013140), indicating that this domain could not form homo-hexamers due to steric clashes between monomers. This is a notable difference from the previously reported hexameric SBPs observed in E. 
coli (Ekiert et al., 2017et al., 2020et al., 2020et al., 2020A. baumannii (Kamischke et al., 2019et al., 2020The results presented by Asthana al. 2021 reveal sM. tuberculosis (Asthana et al., 2021Ec-Pqi (Ekiert et al., 2017M. tuberculosis.These results have consequently led to a proposed model on the likely assembly of the Mce proteins in et al. (2021M. tuberculosis. Such endeavours may also facilitate the development of specific compounds to target cholesterol import as a therapeutic intervention, particularly restricting M. tuberculosis growth and survival during persistence.The proposed model by Asthana al. 2021 establis"} {"text": "Clustered regularly interspaced short palindromic repeats (CRISPR)-mediated genome engineering and related technologies have revolutionized biotechnology over the last decade by enhancing the efficiency of sophisticated biological systems. Cas12a (Cpf1) is an RNA-guided endonuclease associated to the CRISPR adaptive immune system found in many prokaryotes. Contrary to its more prominent counterpart Cas9, Cas12a recognizes A/T rich DNA sequences and is able to process its corresponding guide RNA directly, rendering it a versatile tool for multiplex genome editing efforts and other applications in biotechnology. While Cas12a has been extensively used in eukaryotic cell systems, microbial applications are still limited. In this review, we highlight the mechanistic and functional differences between Cas12a and Cas9 and focus on recent advances of applications using Cas12a in bacterial hosts. Furthermore, we discuss advantages as well as current challenges and give a future outlook for this promising alternative CRISPR-Cas system for bacterial genome editing and beyond.Cas12a is a powerful tool for genome engineering and transcriptional perturbation\u2022 Cas12a causes less toxic side effects in bacteria than Cas9\u2022 Self-processing of crRNA arrays facilitates multiplexing approaches\u2022 Streptococcus pyogenes is still by far the most studied and well characterized. Based on the architecture of the genomic loci, it is classified as a Class 2 type II-A CRISPR system for its single, large effector Cas9 protein , Acidaminococcus sp. BV3L6 (AsCas12a), Lachnospiraceae bacterium ND2006 (LbCas12a), and Moraxella bovoculi AAX11_00205 (MbCas12a). Thus far, the Cas12a orthologs are shown to be able to mediate genome editing in human cells variants Fig. . For thiE. coli, Bacillus subtilis, Streptomyces coelicolor, and Paenibacillus polymyxa (Zhang et al. . A clear strand bias of the repression efficiency by dCas12a was observed, especially when aiming for interference during transcription elongation. Different studies reported that efficiency of transcriptional perturbation significantly increases when the template strand is targeted (Zhang et al. S. coelicolor, repression efficiency of dFnCas12a targeting the template strand can achieve up to 88% whereas it was much less effective when targeting the non-template strand (Li et al. Compared to gene editing, dCas12a-mediated gene regulation in bacteria was reported less frequently although some studies demonstrated its high efficiency for gene interference (Table B. subtilis and P. polymyxa (Schilling et al. . These studies demonstrated that linking dCas12a to transcription activation domain like RemA or SoxS resulted in higher expression levels of the target genes. Contrary to eukaryotic organisms, for which CRISPRa is primarily based on chromatin rearrangements (Gilbert et al. 
Besides CRISPRi, dCas12a can also be employed for activation of gene expression by linking it to a transcription activator domain. Upon dCas12a binding to the target region, the activator domain facilitates recruitment of RNA polymerase leading to higher expression levels of the gene of interest. Gene activation facilitated by the dCas12a has been well explored in mammalian cells (Campa et al. While Cas12a is of importance for bacterial strains in which Cas9 expression shows toxic effects, its simplicity for multiplex targeting remains the most attractive property of Cas12a. To realize multiplex targeting, the spacers-containing crRNAs can either be delivered individually in separate plasmids or in form of a crRNAs array. Nonetheless, it has been reported that supplying the crRNAs in one array is as efficient as supplying them individually (Ao et al. E. coli, B. subtilis, Clostridium difficile, S. coelicolor, and P. polymyxa (Zhang et al. . Nevertheless, the studies demonstrated the functionality of Cas12a multiplexing with reasonably high efficiency. In bacteria, the highest degree of multiplexing that has been investigated thus far was regulation of four genes in E. coli and P. polymyxa (Zhang et al. . While efficiency of transcriptional perturbation is usually not heavily influenced by an increasing number of targets, efficacy of genome editing via homology-directed repair can decrease (Zhang et al. C. difficile resulted in an efficiency of 25% which was significantly lower than the efficiency of targeted single gene deletion (Hong et al. E. coli, where single-site chromosomal integration showed an efficiency close to 100%, it dropped to 40% and 20% when two and three loci were targeted for simultaneous integrations (Ao et al. S. coelicolor (Li et al. .Despite the great potential, Cas12a-based multiplexing has only been investigated in few bacteria: There are different strategies that can be employed to achieve higher activity of Cas12a in the desired bacterial host. An important aspect is to ensure adequate expression of Cas12a. Since each organism has distinct codon usage preference (Quax et al. Various studies also investigated different possibilities to enhance the activity of Cas12a. It is reported that engineered AsCas12a variant with E174R/S542R/K548R mutations has twofold higher editing efficiency in human cells than the wild-type variant (Kleinstiver et al. First characterized in 2015, Cas12a has emerged as a promising genetic tool and many studies have exploited its potential since then. With the rapidly growing research, there will be several improvements that we can anticipate in the upcoming years which will boost the use of Cas12a for bacterial genome engineering.acr genes. For example, Listeria monocytogenes encodes acr for Cas9. Consequently, it severely inhibits commonly used SpCas9 (Marino et al. As often seen in biological systems, there exist antagonistic mechanisms to keep the balance of the natural condition. Recently, it was described that some proteins can act as natural inhibitor of Cas nucleases (Pawluk et al. To broaden Cas12a application, it will also be interesting to analyze its utilization as a highly efficient base editing tool in bacteria. As described for Cas9, fusing the dead or nickase variant with a cytidine deaminase protein could direct the conversion of cytosine to thymidine within a particular editing window (Komor et al. . 
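As a rough illustration of the targeting logic discussed above, the short sketch below scans a sequence for canonical Cas12a (TTTV) PAM sites, extracts the downstream protospacer and flags cytosines lying inside a base-editing window. The 23-nt protospacer length reflects values commonly used for AsCas12a/LbCas12a, but the window coordinates and the demo sequence are purely illustrative assumptions, not parameters taken from the studies cited here.

```python
# Illustrative sketch only: locate TTTV PAM sites on the forward strand and
# list cytosines that fall inside a hypothetical base-editing window.
# The TTTV PAM and 23-nt protospacer reflect commonly used AsCas12a/LbCas12a
# parameters; the window below is a placeholder, not a measured value.
import re

PAM = re.compile(r"TTT[ACG]")          # canonical Cas12a PAM (TTTV)
PROTOSPACER_LEN = 23                   # typical Cas12a protospacer length
EDIT_WINDOW = range(3, 8)              # hypothetical window: positions 4-8 (1-based)

def candidate_sites(seq: str):
    """Yield (pam_start, protospacer, editable_cytosine_indices) tuples."""
    seq = seq.upper()
    for m in PAM.finditer(seq):
        start = m.end()                            # protospacer begins 3' of the PAM
        proto = seq[start:start + PROTOSPACER_LEN]
        if len(proto) < PROTOSPACER_LEN:
            continue                               # too close to the sequence end
        cs = [i for i in EDIT_WINDOW if proto[i] == "C"]
        yield m.start(), proto, cs

if __name__ == "__main__":
    demo = "ATGTTTACCGATCCGTTAGCCATCGGATTTGAAC"   # made-up sequence
    for pam_pos, proto, cs in candidate_sites(demo):
        print(f"PAM at {pam_pos}: protospacer {proto}, editable C indices {cs}")
```

A complete design tool would, of course, also scan the reverse strand and apply empirically determined window boundaries for the particular deaminase fusion used.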
This variant will certainly be a beneficial add-on for extended applications of Cas12a.Finally, we also anticipate the development of other Cas12a variants including the nickase which only induces ssDNA breaks, while still triggering the repair mechanism. The mutated variants will particularly be of interest for applications in bacteria which are deficient of the dsDNA break repair mechanism (Song et al."} {"text": "Radiomics is a quantitative and high-throughput radiological method that can aid in clinical decision-making, like treatment modality selection, and treatment plan optimization. By extracting plentiful of parameters from standard images, plenty of information that cannot be discovered by human naked eyes can be explored. Based on the hypothesis that these extra data provide additional information related to gene, protein and tumor phenotype, radiomics has gained increasing attention in cancer research. Meanwhile, because of the rich amount of data obtained in radiomics, sophisticated image analysis tools are required to analyze it. Many image-based signatures have been constructed by computer algorithms. Herein, this Research Topic recruited studies that exploring the usage of radiomics and artificial intelligence assisting clinical decision-making of tumors.We are very glad to see that many excellent works were submitted to our Research Topic. In the end, a total of 36 papers were published, among which 34 were original studies and two were reviews. The researches were carried out in different countries, including China, USA, UK and France, and most of them were retrospective studies. They used various methods to explore the role of imaging in clinical decision-making. The methods used to select high-throughput imaging parameters can be divided into three levels, including the mathematical formulas level, Machine Learning level and Deep Learning level which belongs to Machine Learning but is more automatic. These kinds of analysis methods are constantly evolving to mimic the thinking patterns of the human brain, gaining the ability to analyze increasingly complicated data. However, for the lack of open platform of images and non-uniform manual feature extraction, there is still a long way to go till a standard or a series of standardized radiomics signatures can be constructed.Xu X. et\u00a0al.; Xu H. et\u00a0al.; Gao et\u00a0al.; Hu et\u00a0al.; Wang J. et\u00a0al.; Zhou X. et\u00a0al.; Zhang Y. et\u00a0al.; Zhong et\u00a0al.; Wu J. et\u00a0al.; Zhang P. et\u00a0al.; Chen W. et\u00a0al.; Dong Y. et\u00a0al.; Mai et\u00a0al.; Li et\u00a0al.). For instance, Zhang P. et\u00a0al. differentiated seminomas and nonseminomas by MRI radiomics. Features were selected by comparing their heterogeneity among different groups and by assessing their relevance and redundancy. Then, Least Absolute Shrinkage and Selection Operator (LASSO), a regression analysis method, was used to select features to improve the mode prediction accuracy and interpretability . Mai et\u00a0al. focused on the differentiation of phyllodes tumors and fibroadenoma with breast MRI texture analysis. 
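The feature-screening-then-LASSO strategy described above can be written in a few lines with scikit-learn. The sketch below filters features with a simple univariate test, applies an L1-penalised logistic regression as the LASSO-type selection step and feeds the surviving features to an SVM classifier; the data, thresholds and hyperparameters are placeholders and this is a generic sketch of the approach, not a reproduction of any of the cited pipelines.

```python
# Minimal sketch of the feature-selection-plus-classifier workflow described
# above, assuming a feature matrix X (samples x radiomic features) and binary
# labels y. All settings here are placeholders, not values from any cited study.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 100))          # placeholder radiomic features
y = rng.integers(0, 2, size=60)         # placeholder binary outcome

# Step 1: univariate filter keeps the most heterogeneous features.
keep = [j for j in range(X.shape[1])
        if mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue < 0.10]

# Step 2: L1 (LASSO-type) penalised logistic regression shrinks the remaining
# features, and the surviving ones feed a classifier (here an SVM).
selector = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
selector.fit(StandardScaler().fit_transform(X[:, keep]), y)
selected = [keep[j] for j in np.flatnonzero(selector.coef_[0])]
selected = selected or keep             # fall back if the penalty removed everything

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X[:, selected], y, cv=5)
print(f"{len(selected)} features retained; CV accuracy {scores.mean():.2f}")
```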
They used a combination of a linear discriminant analysis and the K-Nearest Neighbor classifier to construct differentiative models .As for the first level, using mathematical formulas, some studies generally extract parameters from images, then use statistical methods, like Mann\u2013Whitney U-test, Spearman\u2019s rank correlation test, etc., to compare the internal and external differences of parameters, and then select the most heterogeneous data in different groups. After selecting the appropriate features, Machine Learning algorithms will be used to build models. This kind of studies included in the Research Topic used textures extracted from various images, like Positron Emission Tomography\u2013Computed Tomography (PET/CT), CT, Magnetic Resonance Imaging (MRI), etc., to improve the accuracy of disease differentiation or prognosis prediction . Machine Learning algorithms build prediction models based on patterns in the training data and make predictions by comparing new instances to previous similar events, and they can be divided into supervised, unsupervised and semi-supervised learning algorithms, based on whether the data are labeled. Some studies selected one kind of algorithms to process data. Fei Wang et\u00a0al. used LASSO to select features and Support Vector Machine (SVM) algorithm to constructed a predictive model and drew a nomogram to improve the preoperative T category accuracy . Huang et\u00a0al. used LASSO regression model to select features and a multivariable logistic regression to develop predicting models. In addition, a nomogram was drawn by radiomics and clinical features to evaluate peritoneal metastasis status in gastric cancer . Yi et\u00a0al. predicted treatment response to neoadjuvant chemoradiotherapy in patients with locally advanced rectal cancer. Three aspects of the treatment response: not only partial clinical remission and good response, but also down-staging were evaluated. They used SVM rather than LASSO or Random Forest (RF) to regress features into a two-dimensional plane .At the second analysis level, studies mainly used Machine Learning algorithms to select and classify radiomics features -based models represented better performances than SVM-based models and Logistic Regression (LR)-based models . Similar results were found in other studies that compared different combinations . For example, Yang Zhang et\u00a0al. used five selection methods and nine classifiers. The combination of LASSO and LDA represented the best comprehensive performance . LDA is a linear classifier whose decision boundary is a plane or a line, while SVM is a non-linear classifier with a decision boundary of a surface or a curved line. Although the above studies showed that LDA was superior to SVM, other studies uncovered the opposite results . Zhang Y. et\u00a0al. differentiated anaplastic oligodendroglioma from atypical low-grade oligodendroglioma. The best-performed combinations were various according to different image parameters. The combination of LASSO and RF classifier was the best for T1 images, while the combination of GBDT and RF classifier was the best for the fluid attenuated inversion recovery images . In addition, Payabvash et\u00a0al. differentiated posterior fossa tumors by using different Machine Learning classifiers, and also found RF models achieved greater accuracy. Delzell et\u00a0al. 
used three types of classifying methods, including linear, nonlinear, and ensemble predictive classifying models, and found Elastic Net and SVM performed the best, while RF and Bagged Trees were the worst. It is impossible to draw firm conclusions about which method is the best, because there are too many influencing factors, such as sample size, parameter acquisition, extraction method, etc. However, at the very least, all the relevant articles show that Machine Learning methods are superior to manual methods, so more research on Machine Learning is necessary.Some other studies used multiple methods for feature selection and classification, as there are many kinds of Machine Learning methods with different advantages and drawbacks. They found the choice of classification methods accounted more than selection methods. Moawad et\u00a0al. explored the feasibility of volumetric assessment of pre- and post- Transhepatic Arterial Chemotherapy And Embolization hepatocellular carcinoma using fully automated segmentation that based on a Convolutional Neural Network (CNN) approach (U-Net). For automated segmentation, attenuation of adjacent organs and the small size of lesions were the main challenges. According to the assessment of response evaluation criteria in solid tumors, automated segmentation was a good substitute for manual segmentation . Sun et\u00a0al. compared the deep CNN model based on breast ultrasound parameters with the radiomics model. Radiomics can be regarded as an accurate phenotypic analysis of medical images in which the imaging features are carefully defined in advance according to expert opinion. However, Deep Learning uses the raw data and analyzes the pixels and the voxel values by themselves. With convolution techniques, imaging features are automatically defined in the network. Thus, Deep Learning is the most artificial intelligent tool among these three analyzing levels. It is closest to human mode of thinking and can extract features and analyze them automatically.Deep Learning is a subclass of Machine Learning that extends Deep Neural Networks to create complex neural architectures to solve difficult problems which would be impossible with traditional programming based on mathematical logic. Lacroix et\u00a0al. optimized MR images before process with N4ITK bias field correction and normalizing voxel intensities with fat as a reference region. The results showed that correction of magnetic field heterogeneity and normalization of voxel values can promote the usage of radiomic features . Wu W. et\u00a0al. decomposed data by a non-linear kernelization method, Kernel Principal Component Analysis (KPCA), to find a new set of candidates and maximize the use of data. Lu et\u00a0al. used concordance correlation coefficients to measure the fidelity in repeated experiments. A lot of features with good repeatability were found and their repeatability can be improved by using specific lesion-drawing methods. Zormpas-Petridis et\u00a0al. proposed a novel multi-resolution hierarchical framework (SuperCRF) which can introduce the spatial context of a cell as additional information and improve the single cell classification algorithms. In other researches, topics related to radiomics and Machine Learning were discussed. Dong J. et\u00a0al. and Ge et\u00a0al. 
reviewed the usage of radiomics and Machine Learning in the management of cancers, and summarized computer-aided clinical decision-making as a promising solution.Moreover, there are some included studies focusing on optimizing the original features to promote the analysis results. http://www.predictcancer.org need to be built. Secondly, based on open image sources, algorithms of lesion delineation, feature extraction and signature construction require more standard reference to increase the generalization of the results. Thirdly, more studies that based on uniform data and algorithms and comparing the efficiency of computer-aid and conventional clinical decision-making, are required to better promote the usage of Artificial Intelligence in clinic.In conclusion, the combination of radiomics and Machine Learning can provide clinical practice convenience, as long as some obstacles can be solved. The limitations of Machine Learning-based radiological decision-making mainly lie in the following aspects: Firstly, the data quality is uneven, and thus open data-platforms like This Research Topic involved many studies, which used the combination of radiomics and Machine Learning in tumor management. We appreciate all the reviewers and authors for their contributions to this Research Topic. We hope this Research Topic can arouse more attention in the related fields.HX and LD wrote the first draft of the manuscript. RT and XM contributed to manuscript revision. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Archer et al., Chem. Sci., 2019, 10, 3502\u20133513.Correction for \u2018A dinuclear ruthenium( The authors regret that incorrect details were given for ref. 41 in the original article. The correct details for ref. 41 are given below as The Royal Society of Chemistry apologises for these errors and any consequent inconvenience to authors and readers."} {"text": "It is erroneously stated that the European Synchrotron Radiation Facility developed the FlexED8 sample-changer. However, it has been pointed out that it was the EMBL-Grenoble Instrumentation team who actually developed the FlexED8 sample-changer, as shown in the paper by Papp et al. (2017In the paper by al. 2017.et al., 2017et al., 2006et al., 2017The correct sentence should read: \u2018Across the pond, Diamond Light Source achieved one of the fastest sample exchange times with BART, a six-axis robotic arm system (O\u2019Hea"} {"text": "Estimating the coronavirus disease-2019 (COVID-19) infection fatality rate (IFR) has proven to be particularly challenging \u2013and rather controversial\u2013 due to the fact that both the data on deaths and the data on the number of individuals infected are subject to many different biases. We consider a Bayesian evidence synthesis approach which, while simple enough for researchers to understand and use, accounts for many important sources of uncertainty inherent in both the seroprevalence and mortality data. With the understanding that the results of one's evidence synthesis analysis may be largely driven by which studies are included and which are excluded, we conduct two separate parallel analyses based on two lists of eligible studies obtained from two different research teams. The results from both analyses are rather similar. 
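The practical effect of propagating both sources of uncertainty can be illustrated with a deliberately simplified Monte-Carlo calculation for a single hypothetical population: seroprevalence uncertainty is carried through a Beta posterior and death-count uncertainty through a Poisson draw (a crude stand-in for a binomial deaths-given-infections term). All inputs below are invented, and the sketch is not the authors' hierarchical model.

```python
# Toy illustration (not the authors' model): propagate uncertainty from both a
# seroprevalence survey and the recorded death count into an IFR interval.
# All inputs are made-up numbers for demonstration only.
import numpy as np

rng = np.random.default_rng(1)
population = 1_000_000
sero_pos, sero_n = 45, 1_000        # hypothetical seroprevalence survey
deaths = 280                        # hypothetical cumulative deaths

draws = 100_000
# Beta posterior for the infection rate under a uniform prior.
infection_rate = rng.beta(sero_pos + 1, sero_n - sero_pos + 1, size=draws)
infections = infection_rate * population
# Sampling uncertainty in the recorded death count (Poisson stand-in).
death_draws = rng.poisson(deaths, size=draws)

ifr = death_draws / infections
lo, med, hi = np.percentile(ifr, [2.5, 50, 97.5]) * 100
print(f"IFR ~ {med:.2f}% (95% interval {lo:.2f}%-{hi:.2f}%)")
```

Dropping either term collapses part of this uncertainty and narrows the resulting interval accordingly.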
With the first analysis, we estimate the COVID-19 IFR to be 0.31% [95% credible interval (CrI) of ] for a typical community-dwelling population where 9% of the population is aged over 65 years and where the gross-domestic-product at purchasing-power-parity (GDP at PPP) per capita is $17.8k (the approximate worldwide average). With the second analysis, we obtain 0.32% [95% CrI of ]. Our results suggest that, as one might expect, lower IFRs are associated with younger populations . For a typical community-dwelling population with the age and wealth of the United States we obtain IFR estimates of 0.43% and 0.41%; and with the age and wealth of the European Union, we obtain IFR estimates of 0.67% and 0.51%. Above all, what's needed is humility in the face of an intricately evolving body of evidence. The pandemic could well drift or shift into something that defies our best efforts to model and characterise it.The New YorkerSiddhartha Mukherjee, 22 February 2021The infection fatality ratio (IFR), defined as the proportion of individuals infected who will go on to die as a result of their infection, is a crucial statistic for understanding severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and the ongoing coronavirus disease-2019 (COVID-19) pandemic. Estimating the COVID-19 IFR has proven to be particularly challenging \u2013and rather controversial\u2013 due to the fact that both the data on deaths and the data on the number of individuals infected are subject to many different biases.et al. . In the published version of their article , an alternative interval \u201caccounting for uncertainty in the number of recorded deaths\u201d is provided. This alternative interval, which essentially takes into account the D|C binomial distribution, is substantially wider: . In a very similar way, Levin et al. per capita (GDPk), respectively.Having established simple binomial distributions for the study-specific IRs and IFRs, we define a simple random-effects model such that, for et al. . The spread of COVID-19 is substantially different in LTC facilities than in the general population and residents of LTC facilities are particularly vulnerable to severe illness and death from infection; see Danis et al. . With th ; see \u2018World\u2019, \u2018USA\u2019 and \u2018EU\u2019 rows in et al.-based analysis, an across-population average IFR estimate of 0.31%, with a 95% CrI of . With the Serotracker-based analysis, we obtain a similar estimate of 0.32%, with a 95% CrI of . For 65yo\u00a0=\u00a016% and GDP\u00a0=\u00a0$ 65\u00a0298, the USA values, we obtain across-population average IFR estimates of 0.43%, with a 95% CrI of and of 0.41%, with a 95% CrI of . Finally, for 65yo\u00a0=\u00a020% and GDP\u00a0=\u00a0$ 47\u00a0828, the EU values, we obtain across-population average IFR estimates 0.67%, with a 95% CrI of and of 0.51%, with a 95% CrI of . Note that for the \u2018World\u2019 predictions, the Serotracker-based analysis has the more precise estimates, while the Chen et al.-based estimates are more precise for the \u2018USA\u2019 predictions. This is likely due to the fact that the Serotracker-based analysis considers several younger and less wealthy populations, whereas the Chen et al.-based analysis considers fewer outliers.We can infer , the moet al. [et al. [et al. [et al. [et al. [et al. [et al. [Our estimates are somewhat similar to those obtained in other analyses. Brazeau et al. , using d [et al. obtain a [et al. , using d [et al. , who res [et al. estimate [et al. estimate [et al. 
and the [et al. . On the other hand, those individuals who have reason to believe they may have been infected, may be more likely to volunteer to participate in a seroprevalence study . It is aet al. [et al. [The need to improve the quality and reporting of seroprevalence studies cannot be overemphasised.et al. review tet al. . Novel m [et al. proposedOutside of biased data, perhaps the foremost challenge in evidence synthesis using observational data is that necessarily one is forced to make an array of design choices around inclusion/exclusion criteria, statistical modelling approaches and prior specifications . With thet al. [Reducing the uncertainty around the severity of COVID-19 was of great importance to policy makers and the public during the early stages of the pandemic \u201393 and iet al. ). And yeet al. , identifet al. , 95.We prioritised simplicity in our modelling so as to promote transparency in our findings and to facilitate adaptations to similar, but not identical, data structures. While \u2018simple\u2019 is a relative term, note that the entire dataset used for our analyses fits on a single page in and 4 anet al. [Including age-stratification in the model could represent a substantial improvement given that infection in some populations is far from homogeneous (e.g. about 95% of Singapore's COVID-19 infections were among young migrant workers (as of September 2020), which explains the incredibly low case fatality rate ). If a fet al. for estiFinally, we must emphasise that the IFR is a moving target. As the pandemic changes, so to does the IFR. Our estimates are based on data from 2020, most of which were obtained more than a year ago. It is likely that, with continual viral mutation of SARS-CoV-2, advances in treatment and the availability of vaccines, the current IFR in many places is now markedly different than it was earlier in 2020 , 99, 100The COVID-19 IFR is estimated to be about 0.32% for a typical community-dwelling population where 9% of individuals are over 65 years old and where the GDP per capita is $17.8k (the approximate worldwide averages). For a typical community-dwelling population with the age and wealth of the United States we estimate the IFR to be approximately 0.42%.Any estimation of the COVID-19 IFR should take into account the various uncertainties and potential biases in both the mortality data and the seroprevalence data.Bayesian methods with interval censoring are well suited for complex evidence synthesis tasks such as estimating the COVID-19 IFR."} {"text": "P-wave velocity perturbations (dVp) [et al. [et al. [Recent studies have shown the extent and nature of the South China Sea (SCS) at the end of spreading by unfolding the Manila slab, which is the subducted part of the SCS, and by identifying the nature of the crust-lithosphere from mid-slab ns (dVp) ,2. The ons (dVp) ,4) until [et al. introduc [et al. and cons [et al. ,6), but [et al. ).The Manila subduction zone along the eastern boundary of the present-day SCS has undoubtedly played an important role in the evolutionary history of the SCS; however, the details are unclear because the eastern part of the SCS has already subducted. What did the SCS look like at the end of seafloor spreading (i.e. between 15.5 and 20\u00a0Ma) ,9? What dVp values within the mapped slab, slightly negative areas (in green) roughly correspond to thinned continental crust and positive areas (in blue) to oceanic crust. 
Thus, S1 corresponds to the northeastern portion of the northern COB within the slab, S3 is the portion of the northern COB defined by Liu et al. [Figure u et al. and sligu et al. .et al.\u2019s [et al. [et al. [Based on spreading flow-lines with NNE\u2013SSW trending (30\u00b0\u201345\u00b0) small rift basins ,15. The The first episode of the SCS rifting (56.0\u201340.0\u00a0Ma) corresponds to the first phase of the PSP seafloor spreading (~53.2\u201342.5\u00a0Ma). The second episode of the SCS rifting (40.0\u201333.0\u00a0Ma) corresponds to the second phase of the PSP spreading (~40.4\u201333.7\u00a0Ma). Therefore, during Eocene, similar-age tensional tectonic phases are observed on the SCS continental margin and in the oceanic PSP basin. We propose a simple sketch for the 56- to 33-Ma period, in which the two tensional domains of the SCS continental margins and of the PSP oceanic basin are connected by a left-lateral shear fault. We suggest that this fault is a large shear-plate boundary, which was active from 56\u00a0Ma up to now and locaIf paleo-latitude variations give the correct magnitude of the PSP northward component of the motion through time, the variations of paleo-magnetic declinations (synthesized in ) give thet al. [et al. [Since 56\u00a0Ma, a straightforward link between the SCS and PSP is not easily established given their 2200-km separation and the smaller size of the PSP relative to the present. Our interpreted large shear-plate boundary does not necessarily follow the same geological features through time. For example, magnetic lineations that show different trends across either side of the Gagua ridge indicate this boundary was not only a fracture zone (e.g. ), but a et al. and Desc [et al. confirm [et al. . Though An early Cretaceous age was proposed for the HB based on radiometric dating of gabbros belonging to the 115- to 125-Ma oceanic crust uplifted by the Miocene Luzon arc and the 115-Ma age of radiolarians deposited on this oceanic crust , suggestet al. [This paper only considers a fairly narrow class of allowable solutions amongst the many plate models. For example, our proposed reconstruction model of Fig. et al. and Hallet al. . The lefet al. .Despite ample evidence the HB was formed during the early Cretaceous ,7,20,21,"} {"text": "Mammalian target of rapamycin (mTOR) is a conserved Ser/Thr kinase that includes mTOR complex (mTORC) 1 and mTORC2. The mTOR pathway is activated in viral hepatitis, including hepatitis B virus (HBV) infection-induced hepatitis. Currently, chronic HBV infection remains one of the most serious public health issues worldwide. The unavailability of effective therapeutic strategies for HBV suggests that clarification of the pathogenesis of HBV infection is urgently required. Increasing evidence has shown that HBV infection can activate the mTOR pathway, indicating that HBV utilizes or hijacks the mTOR pathway to benefit its own replication. Therefore, the mTOR signaling pathway might be a crucial target for controlling HBV infection. Here, we summarize and discuss the latest findings from model biology research regarding the interaction between the mTOR signaling pathway and HBV replication. Patients with chronic HBV infection are prone to cirrhosis, liver failure, and hepatocellular carcinoma (HCC) signaling pathway -related kinase (PIKK) family. mTOR is the catalytic subunit of two complexes, mTORC1 and mTORC2 . 
Under nutrient-rich conditions or insulin stimulation, mTOR activation leads to the phosphorylation and activation of SREBP1, thereby enhancing lipogenesis , another major metabolic sensor. AMPK is activated in response to the increased AMP/ATP ratio, for example, after starvation of glucose. Importantly, AMPK directly or indirectly inhibits mTOR activity Semenza and promet al.et al.et al.Akt, serum and glucocorticoid-induced protein kinase 1 (SGK1) and protein kinase C-\u03b1 (PKC-\u03b1). mTORC2 directly activates Akt by phosphorylating the hydrophobic motif (Ser473) of Akt, which is required for the maximum activation of Akt .et al.et al.et al.et al.et al.et al.et al.et al.et al.The HBx protein, the product of the smallest of the four overlapping open reading frames of the HBV genome, plays a vital role in HBV replication and the pathogenesis of HBV-associated HCC. HBx transfection in hepatoma cells increases the expression of mTOR as a functional receptor for HBV, the role of HBV in cellular metabolism has been placed at the forefront. Yan etal. showed that HBx acts as a positive regulator of gluconeogenesis or siRNA that targets the PtdIns3K or ATG7 gene to disrupt the initiation of autophagy and late phase of autophagy, HBV assembly and release through enhanced autophagy are still beneficial for the production HBV particles despite the degradation by lysosomes. Recent findings also showed that HBV replication can interfere with the lysosomal functions and thereby evade the autophagic degradation process increased the transcription of 3.5-kb and 2.4-kb viral RNA as well as the replication of HBV DNA (Guo et al.et al.et al.invitro or invivo (Raney et al.et al.et al.et al.et al.et al.Different concentrations of PI3K, Akt and mTOR inhibitors have different effects on HBV transcription and replication. Guo et al.et al.et al.et al.etal. reported that na\u00efve T cells fail to differentiate into Th1, and Th17 cells in mTOR or Rheb gene knockout mice (Delgoffe et al.+ T cell responses (Pearce et al. + T cell differentiation (Araki et al.Recently, increasing evidence has indicated that the mTOR signaling pathway plays multiple roles in immunity (Weichhart Despite recent advances in drug discovery and development, effective antiviral drugs against HBV are still very limited, especially for the cure of chronic HBV infection. Thus, the development of effective, well-tolerated and affordable antiviral treatments is necessary and urgent. The HBV-induced abnormalities in the expression of PI3K/Akt/mTOR and its downstream regulators and alteration in host metabolism make mTOR a potential target for drug development."} {"text": "Due to the variability of an individual\u2019s prognosis and the variety of treatment options available to breast cancer (BC) patients with brain metastases (BM), establishing the proper therapy is challenging. Since 1997, several prognostic tools for BC patients with BM have been proposed with variable precision in determining the overall survival. The majority of prognostic tools include the performance status, the age at BM diagnosis, the number of BM, the primary tumor phenotype/genotype and the extracranial metastases status as an outcome of systemic therapy efficacy. It is necessary to update the prognostic indices used by physicians as advances in local and systemic treatments develop and change the parameters of survival. Free access to prognostic tools online may increase their routine adoption in clinical practice. 
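Most of the tools reviewed below combine the factors just listed (performance status, age, burden of intracranial disease, tumour phenotype and extracranial disease control) into a simple points total that maps to a prognostic class. The sketch below shows only this generic structure; every cut-off and weight in it is invented for illustration, and it should not be read as the RPA, GPA, breast-GPA or any other published index.

```python
# Generic structure of a points-based prognostic index. The factors mirror
# those listed above, but every cut-off and weight here is invented for
# illustration only; this is NOT the RPA, (breast-)GPA or any published tool.
def toy_prognostic_score(kps: int, age: int, n_brain_mets: int,
                         her2_positive: bool, extracranial_controlled: bool) -> int:
    score = 0
    score += 2 if kps >= 80 else (1 if kps >= 60 else 0)    # performance status
    score += 1 if age < 65 else 0                           # age at BM diagnosis
    score += 1 if n_brain_mets <= 3 else 0                  # intracranial burden
    score += 1 if her2_positive else 0                      # tumour phenotype
    score += 1 if extracranial_controlled else 0            # systemic disease control
    return score                                            # 0 (worst) to 6 (best)

def risk_class(score: int) -> str:
    return "favourable" if score >= 5 else ("intermediate" if score >= 3 else "poor")

print(risk_class(toy_prognostic_score(90, 58, 2, True, True)))      # favourable
print(risk_class(toy_prognostic_score(50, 72, 8, False, False)))    # poor
```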
Clinical trials on BC patients with BM remains a broad field for the application of prognostic tools.Background: Determining the proper therapy is challenging in breast cancer (BC) patients with brain metastases (BM) due to the variability of an individual\u2019s prognosis and the variety of treatment options available. Several prognostic tools for BC patients with BM have been proposed. Our review summarizes the current knowledge on this topic. Methods: We searched PubMed for prognostic tools concerning BC patients with BM, published from January 1997 (since the Radiation Therapy Oncology Group developed) to December 2021. Our criteria were limited to adults with newly diagnosed BM regardless of the presence or absence of any leptomeningeal metastases. Results: 31 prognostic tools were selected: 13 analyzed mixed cohorts with some BC cases and 18 exclusively analyzed BC prognostic tools. The majority of prognostic tools in BC patients with BM included: the performance status, the age at BM diagnosis, the number of BM (rarely the volume), the primary tumor phenotype/genotype and the extracranial metastasis status as a result of systemic therapy. The prognostic tools differed in their specific cut-off values. Conclusion: Prognostic tools have variable precision in determining the survival of BC patients with BM. Advances in local and systemic treatment significantly affect survival, therefore, it is necessary to update the survival indices used depending on the type and period of treatment. PIK3CA alterations [Brain metastases (BM) are a serious consequence of breast cancer (BC) progression. They occur in up to a half of patients with metastatic human epidermal growth factor receptor 2 (HER2) positive or triple-negative BC and frequently occur in luminal phenotypes with erations ,2,3,4. TCurrently, the treatment of BM in BC patients includes neurosurgery, stereotactic radiotherapy (SRS), whole-brain radiotherapy (WBRT) and systemic therapy .Due to the variability of an individual\u2019s prognosis and the variety of treatment options available to BC patients with BM, establishing the proper therapy is challenging. A patient\u2019s stratification into prognostic classes includes factors related to the patient and his or her disease(s). Beginning in the 1990s, there was a growing interest in developing prognostic tools to provide accurate prognoses, to guide treatment decision making, provide appropriate and cost-effective care and direct clinical trial design. Several prognostic tools in BC patients with BM have been proposed. Our review summarizes the current knowledge on this topic.The present review compares different prognostic tools based on a systematic literature search on PubMed/MEDLINE. It is limited to adult patients with newly diagnosed BM regardless of the presence or absence of leptomeningeal metastases. The keywords used were \u201cbrain metastases/brain metastasis\u201d with \u201cprognostic index\u201d with \u201cbreast cancer\u201d or \u201cnomogram\u201d or \u201cprognostic score\u201d and \u201cvalidation studies\u201d for studies published between January 1997 and December 2021.31 prognostic tools were selected: 13 analyzed mixed cohorts with BC cases and 18 were specific to BC. In 1997, the Radiation Therapy Oncology Group (RTOG) developed the first prognostic score for BM patients by using a Recursive Partitioning Analysis (RPA) strategy . The stu3) and the number of lesions (\u22653 vs. 2 vs. 1). 
The median OS within the 3 SIR groups was 2.9, 7.0 and 31 months.In 2000, Weltman et al. developeLorenzoni et al. proposedTo predict the survival of BM patients treated with WBRT, Rades et al. created Sperduto et al. analyzedRades et al. updated Barnholtz-Sloan et al. developeIn 2012, Yamamoto et al. proposedLee et al. establisIn 2021, Sato et al. developeEGRF and ALK in lung cancer), the LDH level (<200 vs. >300 U/L), and the cumulative tumor volume (<3.5 vs. \u22653.5 cm3). Three risk groups were selected with a significant difference in OS. In the total group of 356 patients, only 38 (10.7%) were BC patients.Zhou et al. presenteFan et al. construcIn 2005, Claude et al. retrospeLe Scodan et al. analyzedIn 2009, Park et al. retrospeNieder et al. retrospeIn 2012, Sperduto et al. developeIn a prospective study, Niwi\u0144ska et al. publisheAhn et al. publisheIn turn, Marko et al. proposedLe Scodan et al. developeRades et al. construcIn 2014, Ahluwalia et al. validateYang et al. developeSubbiach et al. validateHuang et al. proposedXiong et al. establisIn 2020, Sperduto et al. publisheIn 2020, Weykamp et al. developeLiu et al. proposedThree separate groups of scientists ,37,38 cop < 0.001) for the OS according to the prognostic category. Pairwise comparisons of each prognostic index revealed statistically significant differences in survival between prognostic classes, except for the breast-GPA classes I vs. II, BS-BM scores 1 vs. 2 and Le Scodan\u2019s scores I vs. II. There were no significant differences between all prognostic indices concerning the survival predicting ability. Only minor differences were seen using Harrell\u2019s c-index (range 0.60\u20130.68). The authors concluded that RPA seemed to be the most useful score. RPA performed better than new prognostic indices because it was the most accurate in identifying patients with long (>12 months) and short (<3 months) survival.Ahn et al. in 2012 Tabouret et al. in 2014 In 2015, Subbiah et al. validateCastaneda et al. in 2015 Further, Laakmann et al. comparedHuang et al. in 2018 Znidaric et al. in 2019 Similarly, in 2019 Zhuang et al. undertooLee et al. in 2020 Weycamp et al. in 2020 Since the first Recursive Partitioning Analysis published in 1997, a series of prognostic tools have been developed in BC patients with newly diagnosed BM to facilitate clinical decision-making and appropriate stratification to local and systemic therapy. This review of prognostic tools illustrates how over time, progress in the understanding of the biology of BC and new effective systemic treatments have influenced the prognosis of patients with BM ,33,34,35The majority of prognostic tools in BC patients with BM include the performance status, the age at BM diagnosis, the number of BM (rarely volume), the primary tumor phenotype/genotype and the ECM status as an outcome of the systemic therapy efficacy 19,20,2,33,34,35On the other hand, the BC phenotype or genotype is a main predictive factor of systemic therapy efficacy and ECM control. However, studies assessing ECM status as a prognostic factor used various definitions. Typically, it was the presence of distant metastases outside the brain or their absence ,30,34,35Advanced TNBC and HER2-positive BC have a higher risk of BM . It is aPredicting the survival of BC patients with BM is difficult; therefore, prognostic tools are crucial in stratifying different patients\u2019 outcomes. 
However, they are limited by their retrospective nature and may underestimate survival in the modern era with the growing number of effective systemic agents. Furthermore, the prognostic tool developed for single institution cohorts might be biased by institutional practice patterns; therefore, external validation for new prognostic tools are the gold standard and should be obtained whenever possible ,56. Morewww.clinicaltrials.gov; accessed on 25 January 2022).International multidisciplinary recommendations, EANO-ESMO and NCCN, do not recommend prognostic tools in BC patients with BM ,49. The In clinical practice, the decision on the treatment sequence of BC patients with newly diagnosed BM should be made by a multidisciplinary team including a medical oncologist, radiotherapist and neurosurgeon, according to the accepted standard operating procedures (SOP). In the management algorithm, it is crucial to provide supportive care to patients with a poor prognosis and an expected survival of less than 3 months; for example, patients with uncontrolled ECM and/or more than 10 BM where systemic pharmacotherapy and WBRT are mainly used. In turn, for the minority of patients, with small and few lesions (1 to 10) depending on the volume <15 mL), who may experience long-term survival or even cure, several approaches are used in combination mL, who .The universal, ideal prognostic tool should be simple and easily usable. Electronic access to such indices improves their usefulness in clinical practice, e.g., for the modified updated Breast GPA index . Furthermore, new prognostic tools in BC patients with BM should be used more in clinical trials.Prognostic tools have variable precision in determining the survival of BC patients with BM. Progress in local and systemic treatment significantly affects the parameters of survival. Hence, it is necessary to update the prognostic indices used, depending on the period of treatment. Free access to prognostic tools online may increase the frequency of their use in clinical practice. Clinical trials in BC patients with BM remain a broad field for the application of prognostic tools."} {"text": "Thereaet al. , Carde\u00f1o [etal. ). In a rn etal. contribuet al.'s[2r = 0.48) than its elasmobranchcatch as reported to FAO (2r = 0.20) underlinesthis erroneous assumption. An example of the dissonance caused by excluding fishing activityis northwestern Australia, where Van Houtan et al. [We fundamentally disagree with the central assumption of the paper that there is a directlink between species distribution and shark fin origin. This assumption relies on fisheriescatch being equal through the distribution of a species, which we know is not true. Fishingeffort that catches sharks is spatially heterogeneous because et al.'s estimaten et al. indicaten et al. . Such diet al. [Squalus acanthias\u2014do not occur in the fin trade[The paper's use of DNA data from some markets may be misleading since it assumed that allmarkets contributed equally to the global fin trade. For example, Feitosa et al. collecteet al. , and having serious inaccuracies. In all flawed cases,the SDMs indicate species occurrence well outside their established geographicaldistributions known from decades of fishery and research data, which are reported in widelyavailable species guides (e.g. 
[Isurus oxyrinchus) primarily occur inthe open ocean rather than coastal environments, grey reef sharks (Carcharhinus amblyrhynchos) do not occur in the Atlantic Ocean, smalleyehammerhead (Sphyrna tudes) occurs only in eastern SouthAmerica and great hammerheads (Sphyrnamokarran) are mostly coastal and not present to the latitudinal extent as suggestedby the SDMs. This lack of a validity check against knowndistributions results in the allocation of species to exclusive economic zones (EEZs) inwhich they do not occur and hence erroneous probabilities of contributions to the fin trade.With more than 30% of SDMs having major flaws, the errors introduced to the estimation ofthe probability of fin origin are large. Many of the SDMs used by Van Houtan es e.g. ,11). The. Theet aet al. [et al. [\u22121 [Alopias pelagicus) and scalloped hammerhead sharks (Sphyrna lewini) and Indo-Pacific origins for silky sharks [The flaws in the methods used by Van Houtan et al. mean tha [et al. ). For ex al. [\u22121 , a level al. [\u22121 . Their f al. [\u22121 , and geniformis) ,4. ThesePrionace glauca, bigeye thresherAlopias supercilious and shortfin mako shark, Isurus oxyrinchus, which together account for most (more than 50%)shark fins found in the market samples used to populate SDMs. These are all pelagic speciesand open ocean fisheries should have higher dominance in the origin probabilities. However,this result was not apparent because inaccurate SDMs and omission of relevant fisheries datacreated an unrealistic scenario of global shark fisheries. For example, consideringavailable data on these three species, less than 1000 unprocessed t yr\u22121 are caught in Australia and the USA [\u22121 in Brazil [If the authors had considered known locations of global fishing activity, the open oceanwould appear a more likely origin for fins ,15. This the USA , and lesn Brazil . If the n Brazil .The misinterpretations and methodological issues of the paper have resulted ininappropriate management recommendations for nations that are examples of best practiceshark fisheries . Their We do not question the occurrence of coastal shark species in the global fin trade, northat opportunities exist to improve shark conservation within EEZs of numerous countries.However, prioritization of shark conservation efforts across countries and the high seasmust consider the realities of the present distribution of species and fisheries activity,and existing national and international management arrangements."} {"text": "Galba truncatula eDNA. Fourteen potential G. truncatula habitats on two farms were surveyed over a 9-month period, with eDNA detected using a filter capture, extraction and PCR protocol with data analysed using a generalized estimation equation. The probability of detecting G. truncatula eDNA increased in habitats where snails were visually detected, as temperature increased, and as water pH decreased (P < 0.05). Rainfall was positively associated with eDNA detection in watercourse habitats on farm A, but negatively associated with eDNA detection in watercourse habitats on farm B (P < 0.001), which may be explained by differences in watercourse gradient. This study is the first to identify factors associated with trematode intermediate snail host eDNA detection. These factors should be considered in standardized protocols to evaluate the results of future eDNA surveys.Environmental DNA (eDNA) surveying has potential to become a powerful tool for sustainable parasite control. 
As trematode parasites require an intermediate snail host that is often aquatic or amphibious to fulfil their lifecycle, water-based eDNA analyses can be used to screen habitats for the presence of snail hosts and identify trematode infection risk areas. The aim of this study was to identify climatic and environmental factors associated with the detection of Both farms had a history of F. hepatica and/or C. daubneyi infections in livestock. Habitats on both farms were selected for the study based on their physical suitability to harbour G. truncatula snails, with all habitats containing standing or flowing water for periods during the summer months, bare mud surfaces and Juncaceae spp. which are indicator plant species for G. truncatula habitats whilst temperature data were measured by a LogTag Trix-16 Data Logger . These instruments were located within 1.5\u00a0km of each study habitat. Water pH was measured for each sample using an 8100\u00a0pH and Temperature Meter Kit .The number of \u03bcm micro-glass fibre filters using an electrical suction pump, B\u00fcchner flask and a funnel, as described by Jones et al. . All filters, including blank controls, were homogenized using a sterile pipette tip with each whole filter subjected to DNA extraction via the PowerSoil\u00ae kit protocol. PCR targeting a 288\u00a0bp strand of the G. truncatula ITS2 gene and gel electrophoresis were used to identify G. truncatula eDNA in extracts as previously described by Jones et al. model, created in SPSS v.27, where the dependent variable was the presence or absence of G. truncatula eDNA in each sample collected. GEE models account for potential correlations within subject, which occurred in this data set as multiple samples collected across sequential time points were nested within the subject (habitat). GEE can also account for missing data points, which also occurred within this data set as not all habitats had water to sample at all sampling points. The working correlation matrix for data inputted into each GEE model was set as AR1, as the lowest Quasi Likelihood under Independence Model Criterion (QIC) values were associated with models created when this working correlation matrix was specified were sequentially removed. Main effect variables offered during model creation included habitat type (pasture or watercourse), snail presence, water pH, mean temperature (\u00b0C) and rainfall (mm) on sampling day, mean daily temperature and rainfall 3 days prior to sampling, sampling month and proportion of weeks in study period where sampling water was present, as were biologically plausible interaction effect variables. Farm identity was also included as a fixed factor in each model to account for potential differences between G. truncatula snail populations on each farm. The final models were selected via their QIC values .In total, 221 samples were collected across 14 habitats during the study, 48% of which were positive for n Farm B . Of the G. truncatula eDNA presence can be seen in G. truncatula eDNA was increased in samples collected when temperatures were higher (P\u00a0<\u00a00.001) and when sample water pH was lower (P\u00a0=\u00a00.047). Samples from habitats where the presence of G. truncatula had been confirmed visually during the study period were significantly associated with G. truncatula eDNA presence (P\u00a0=\u00a00.006). A significant interaction effect on the number of positive replicates between farm, habitat type and mean 3-day rainfall prior to sampling was also observed . 
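To make the modelling step concrete, the following is a minimal sketch of a comparable binary GEE with an AR(1) working correlation written in Python's statsmodels rather than the SPSS v.27 used by the authors; the file name, column names and variable coding are illustrative assumptions, not the study's actual dataset, and the QIC call assumes a recent statsmodels release.

```python
# Hedged sketch of a binary GEE with an AR(1) working correlation, mirroring the
# analysis described above but in Python statsmodels instead of SPSS v.27.
# The CSV file and all column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.genmod.families import Binomial
from statsmodels.genmod.cov_struct import Autoregressive

df = pd.read_csv("edna_samples.csv")    # hypothetical: one row per water sample

model = smf.gee(
    "edna_present ~ snail_seen + temp + ph + rain3d * C(habitat_type) * C(farm)",
    groups=df["habitat_id"],            # repeated samples nested within habitat
    data=df,
    time=df["visit_number"],            # order of sequential sampling visits
    family=Binomial(),                  # presence/absence outcome
    cov_struct=Autoregressive(),        # AR(1) working correlation, as in the study
)
result = model.fit()
print(result.summary())
print("QIC:", result.qic())             # lower QIC used for model selection
```

The three-way term rain3d * C(habitat_type) * C(farm) in the formula is the analogue of the farm-by-habitat-by-rainfall interaction reported next.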
In this interaction effect, mean 3-day rainfall pre-sampling was significantly positively associated with G. truncatula eDNA presence in watercourse habitats on Farm A (P\u00a0=\u00a00.001). Mean 3-day rainfall prior to sampling was significantly negatively associated with G. truncatula eDNA presence on Farm B (P\u00a0<\u00a00.001). There was no significant association (P\u00a0=\u00a00.651) between mean 3-day rainfall prior to sampling and G. truncatula eDNA presence in pasture habitats on Farm A.Results of binary GEE analysis of factors associated with et al., G. truncatula eDNA was detected in 12, although eDNA detection was not consistent across all timepoints. eDNA is present in very small quantities in the environment, especially the eDNA of species that are likely to expel low volumes of DNA and that are present in low densities and highly acidic conditions, known to have a significant effect on DNA degradation (Strickler et al., G. truncatula snail presence and environment pH, although contradictory findings towards the snail's preference for either a slightly acidic or neutral pH are present in the literature (Urquhart et al., et al., et al., G. truncatula due to their humid conditions. A positive relationship was observed between temperature and eDNA detection in this study, which contradicts the known effect of warmer temperatures in accelerating eDNA degradation (Strickler et al., et al., G. truncatula, it has been shown that general activity, feeding, growth, reproduction rate and lifespan were all significantly greater at 16\u201322\u00b0C compared to 5\u00b0C (Hodasi, The probability of detecting eDNA may also be impacted by a habitat's exposure to a range of environmental and climatic conditions. Unlike many other studies, where pH was positively associated with eDNA presence and concentration (Strickler Hodasi, .et al., et al. (et al. (et al., et al., et al., Galba truncatula snails are able to survive these dry conditions across the summer months before remerging to potentially shed infective cercariae (Dreyfuss et al., The association between pre-sampling rainfall and eDNA detection was variable between habitat type and location in this study. On Farm A, rainfall was positively associated with eDNA detection in watercourse habitats, whilst it was negatively associated with eDNA detection on Farm B in watercourse and pasture habitats. One of the main differences between watercourse habitats on Farm A and Farm B was their gradient , which w, et al. , a heavy (et al. demonstr et al., . This woet al., et al. (et al., et al., G. truncatula watercourse habitats, which are commonly small slow-moving streams and drainage ditches, eDNA transportation down-stream should be limited. Watercourses are also potential movement corridors for G. truncatula snails, with snails capable of either migrating up stream over 100\u00a0m a month in some instances (Rondelaud, et al., G. truncatula migration.Considering the complex nature of the relationship between eDNA detection and rainfall seen in this study, the reliability of eDNA detection as an accurate indication of species presence at a particular sampling site may be questioned, especially in watercourse habitats where eDNA can be transported vast distances with the flow of water (Pont , et al. demonstret al. (et al. (Schistosoma intermediate snail host B. truncatus. However, increasing replicate numbers and sampled water volume can be challenging. 
Each replicate adds cost to the protocol (Sengupta et al., et al., Two replicate water samples of 500\u00a0mL were collected at each sampling point in this study and increasing number of replicates and the sample volume would likely have increased detection rates in this study. According to Schmidt et al. , the pro (et al. found thGalba truncatula eDNA was significantly more likely to be detected in habitats where G. truncatula snails were surveyed during the study period; however, the density of snails surveyed at each visit was not a significant factor associated with eDNA detection, although this could be explained by the comparatively unsensitive and variable nature of traditional G. truncatula surveys which may fail to identify snails present in mud, water or thick vegetation. This may also explain why G. truncatula eDNA was detected in habitats where these snails were not discovered via traditional surveying methods during the study period, a similar finding to that of Jones et al. (G. truncatula. Detecting the presence of intermediate snail hosts, even in low-density populations may be vital considering that a small number of snails are capable of shedding thousands of infective trematode stages into the environment (Dreyfuss et al., et al., et al., et al., et al., G. truncatula can inhabit permanent wetlands, spring heads, ponds, streams, ditches and poached soils on pastures in both temperate and arid climates (Dreyfuss et al., s et al. . This woG. truncatula, in small water body habitats over time. These factors should be considered when devising standardized eDNA sampling protocols and data analysis to ensure eDNA surveying goals are robustly met. Future research should build on knowledge gained in this study to use and test site occupancy models for identifying G. truncatula and other intermediate snail hosts in small water body habitats.In summary, this study identifies environmental and climatic factors associated with the detection of the trematode intermediate snail host,"} {"text": "The demand for high data rate transfer and large capacities of traffic is continuously growing as the world witnesses the development of the fifth generation 5G) of wireless communications with the fastest broadband speed yet and low latency . WidesprG of wireThis Special Issue covers various aspects of novel antenna designs for 5G and beyond applications. The featured topic articles in this issue highlight recent advances in designing antenna systems for smartphones, small cells, platform stations, massive MIMO, MM-Wave, front-end, metamaterials, and metasurface applications. In addition, other relevant subjects such as spectral efficiency improvement, K-user MIMO, and multi-mode band-pass filter design are discussed. This Special Issue is a collection of fifteen papers that are briefly explained in the following.Seyyedesfahlan et al. report oOjaroudi Parchin et al. propose Ojaroudi Parchin et al. demonstrSharaf et al. develope3.Dixit et al. introducSong et al. present Hamid et al. focused Kozlov et al. present Ferreira-Gomes et al. demonstrRam\u00edrez Arroyave et al. developeGaya et al. proposedIqbal et al. have devLuo et al. present In their work, Dicandia and Genovesi focused Yerrapragada and Kelley propose Sensors journal. 
We hope that readers will discover new and useful information on antenna design techniques for 5G and beyond applications.We would like to take this opportunity to appreciate and thank all authors for their excellent contributions and the reviewers for their fruitful comments and feedback. Special appreciation is also extended to the editorial board and editorial office of MDPI"} {"text": "The decision-making process regarding management after severe acute brain injury is based on clinical evaluation and depends on the injury etiology as well as radiological and neurophysiological data ,2. GrowiBrain Sciences, entitled \u201cFrontiers in biomarkers of brain injury,\u201d Metallinou et al. [The examination of the specific biomarkers of brain injury is not limited to adult patients. In this Special Issue of u et al. report ou et al. . Perivenu et al. . Activinu et al. . Metalliu et al. showed tBrain Sciences, Bagnato et al. [Moreover, growing evidence suggests that acute brain injury is followed by chronic neurodegeneration . Accordio et al. report oo et al. , and chaBiomarkers may also play a role in the evaluation of cognitive impairments after the occurrence of brain injury. Specifically, traumatic brain injury triggers neuroinflammation, which may affect patients\u2019 long-term outcomes . Malik eBrain Sciences highlight the breadth of the fields in which biomarkers have potential applications regarding brain injury, ranging from infants to adult patients and from traumatic to hypoxic-ischemic and acute to chronic injuries. There are several factors that still limit the routine use of several biomarkers in clinical practice, including high costs and the need for specialized skills and specific equipment. It is expected that these obstacles will be overcome in the next few years for several biomarkers, allowing their introduction into routine clinical practice.These examples from the papers published in this Special Issue of"} {"text": "Bacillus subtilis has served as a model microorganism for many decades. There are a few important reasons why this Gram-positive bacterium became a model organism to study basic cell processes. Firstly, B. subtilis has proven highly amenable to genetic manipulation and has become widely adopted as a model organism for laboratory studies. Secondly, it is considered as the Gram-positive equivalent of Escherichia coli, an extensively studied Gram-negative bacterium, and both microorganisms have served as examples for bacterial cell division studies in the last decades. Thirdly, unlike E. coli, it can form endospores and the sporulation represents the simplest cellular differentiation process. In addition, this bacterium became an exceptional specimen to study motility, chromosome segregation, competence, host system for bacteriophages, transcription regulation, biofilm formation, and B. subtilis is also widely used to produce various enzymes, such as proteases, amylase, etc.Bacillus subtilis as a Model Organism to Study Basic Cell Processes\u201d was to provide a platform to researchers for sharing their new studies on advances in basic cell processes studies in the model organism B. subtilis. The Special Issue includes papers from several leading scientists in the field studying sporulation, biofilm formation, bacteriophages, and transcription in this bacterium and its close relatives. The present Special Issue comprises nine research articles and two reviews.The objective of this Special Issue on \u201cB. subtilis. 
Transcription of specific genes and production of physiologically relevant proteins is crucial for the adaptation of bacteria to changing environmental conditions. Sudzinov\u00e1 et al. [B. subtilis sigma factor \u03c3B as a subunit of RNA polymerase. They concentrated on \u03c3B-dependent genes expressed specifically during germination and the outgrowth of spores. They determined the binding motif of a subset of \u03c3B regulated genes during these parts of B. subtilis life cycle. Importantly, they also experimentally verified sixteen selected SigB dependant promoters.Two articles were focused on studies of transcription and gene expression regulation in \u00e1 et al. determin\u00e1 et al. used an B. subtilis. They concentrated on G4 DNA and hairpin-forming motifs and they showed that these non-B DNA-forming structures promote genetic instability and may have spatial and temporal mutagenic effects. Fa\u00dfhauer et al. [B. subtilis cold shock proteins, CspB and CspD proteins. These proteins belong to the most abundant proteins in the cell and thus it suggesting their possible crucial function. Deletion of either of the genes has no clear effect on phenotype. However, the simultaneous loss of both proteins results in severe growth effects and the appearance of suppressor mutations. Interestingly, the global RNA profile of the double mutant suggests that these proteins are important for transcription elongation and termination.Ermi et al. studied B. subtilis and related bacteria belong to bacteria that can form a surface-associated multicellular assemblage, so-called biofilms. Dergham et al. [B. subtilis NDmed strain. By using different 15 mutant strains they identified genes important for all biofilm phenotypes. \u0160pacapan et al. [Bacillus velezensis strains and analyzed its impacts on biofilm formation and stable colonization on different plant surfaces which finally enables its activity as an elicitor of induced systemic resistance.The next three articles are focused on different aspects of biofilm formation. m et al. concentrn et al. studied n et al. studied B. subtilis has been especially well known for many decades of studies on sporulation, an important mechanism by which bacteria can survive harsh environmental conditions, and it also represents the simplest cell differentiation process. The spore resistance arises from several protective layers that surround the spore core. In addition to the cortex, a peptidoglycan layer, the coat is the main defense system against the challenges of the environment. More than 80 coat proteins are localized into four distinct morphological layers. Kraj\u010d\u00edkov\u00e1 et al. [Bacillus subtilis A163 in comparison with laboratory B. subtilis, PY79 strain. They showed extremely high resistance to wet heat compared to spores of laboratory strain. They determined the proteome of vegetative and sporulating cells of both strains A163 and PY79 and the results revealed proteomic differences of the two strains what should help to explain the high heat resistance of B. subtilis A163 spores.\u00e1 et al. analyzedB. subtilis and Listeria monocytogenes. The authors focused on several ABC transporters, which are not only required to inactivate or export drugs but are essential for drug sensing, and on ABC transporters, which affect cell wall biosynthesis and remodeling. The second review by \u0141ubkowska et al. [Bacillus group\u2019 bacteria. 
This review of bacteriophages of thermophiles is an important overview not only of their previously described characteristics but also their roles in many biogeochemical, ecological processes, and biotechnology applications, including emerging bionanotechnology.This Special Issue also includes two review articles. Rismondo and Schulz provideda et al. provides"} {"text": "In 2009, rapamycin was reported to increase the lifespan of mice when implemented later in life. This observation resulted in a sea-change in how researchers viewed aging. This was the first evidence that a pharmacological agent could have an impact on aging when administered later in life, i.e., an intervention that did not have to be implemented early in life before the negative impact of aging. Over the past decade, there has been an explosion in the number of reports studying the effect of rapamycin on various diseases, physiological functions, and biochemical processes in mice. In this review, we focus on those areas in which there is strong evidence for rapamycin\u2019s effect on aging and age-related diseases in mice, e.g., lifespan, cardiac disease/function, central nervous system, immune system, and cell senescence. We conclude that it is time that pre-clinical studies be focused on taking rapamycin to the clinic, e.g., as a potential treatment for Alzheimer\u2019s disease. Streptomyces hygroscopicus, which was isolated from soil samples collected from Easter Island by Georges Nogrady in the late 1960s [Streptomyces hygroscopicus produced a compound that would kill fungi, which they named rapamycin after the name of Easter Island, Rapa Nui. The initial interest in rapamycin focused on its antifungal properties. When it was found that rapamycin inhibited the growth of eukaryote cells, research on rapamycin turned to rapamycin\u2019s immunosuppressive and anticancer properties. Rapamycin was approved by the FDA in 1994 to prevent organ rejection in liver transplant patients. In addition to being used as an antirejection drug, rapamycin or its rapalogs are being used today to prevent restenosis after coronary angioplasty, and they are being tested in many clinical trials as antitumor agents, e.g., FDA approved the use of rapamycin in treatment of pancreatic cancer patients in 2011.Rapamycin is a macrocyclic lactone produced by Science, selected this study as one of the major scientific break-throughs in 2009 , the first discovery in aging to be selected by Science as a break-through. Over the past decade, there has been an explosion in the number of reports studying the effect of rapamycin on aging and age-related diseases, and there have been several reviews describing various aspects of rapamycin on aging [Research in the late 1980s turned to identifying the mechanism by which rapamycin blocked the growth of eukaryote cells. Heitman et al. discoveron aging . In thisCaenorhabditiselegans, and other groups showed that mutations in TOR increased the lifespan of yeast [Drosophila [The first data suggesting that rapamycin might affect longevity came from studies with invertebrates. In 2003, Vellai et al. showed tof yeast and Drososophila . Subsequosophila . Based oosophila reportedosophila , 9\u00a0monthosophila , or 19\u00a0mosophila of age. Three additional points of interest with respect to rapamycin\u2019s longevity affect can be seen from Table Lmna\u2212/\u2212) that mimics Hutchinson-Gilford progeria, Ramos et al. [Lmna\u2212/\u2212 mice. Khapre et al. 
[Bmal1\u2212/\u2212 mice because knocking out Bmal1 (a transcription factor that is key to the circadian clock) increased mTORC1 activity and reduced lifespan and disrupted circadian rhythm. They showed that rapamycin increased the lifespan of the Bmal1\u2212/\u2212 mice leading Khapre et al. [KI/KI mice, which have a nuclear mutation in the mitochondrial nucleotide salvage enzyme thymidine kinase resulting in reduced replication of mtDNA and mtDNA instability. Rapamycin dramatically increased the lifespan of the TK2KI/KI without having any detectable improvement in mitochondrial dysfunction. The authors concluded that rapamycin enhanced longevity in the TK2KI/KI mice through alternative energy reserves and/or triggering indirect signaling events. Johnson et al. [Ndufs4\u2212/\u2212 mice, which lack a subunit in mitochondria complex I and is a mouse model of Leigh syndrome. Johnson et al. [Ndufs4\u2212/\u2212 mice, especially at very high doses of rapamycin, which were 28-fold higher than the dose of rapamycin initially showed to increase the lifespan of mice by Harrison et al. [Leprdb mouse. They found that rapamycin doubled the lifespan of female mice but had no effect of the lifespan of male mice. Rapamycin improved both kidney and cardiac functions in the female BKS-Leprdb mice.Table s et al. showed te et al. studied e et al. to propoe et al. studied n et al. initialln et al. subsequen et al. . Reifsnyn et al. studied leprdb/db mice, 13% in males and 15% in females. The reduced lifespan of the db/db mice by rapamycin was associated with an increase in suppurative inflammation, which was the primary cause of death in the db/db mice. Ferrara-Romeo et al. [Terc\u2212/\u2212) 16% compared with over a 50% increase in the lifespan of the G2-Terc+/+ mice. Fang et al. [Although the overwhelming majority of studies on the effect of rapamycin on longevity in mice have shown a significant increase in lifespan, there are five studies that have reported either no effect or reduced lifespan when treated with rapamycin. Two studies using transgenic mouse models of amyotrophic lateral sclerosis (G93A and H46R/H48Q) reported no increase in lifespan when given rapamycin , 27. Sato et al. reportedg et al. found thRapamycin would be predicted to reduce the progression of cancer because it has been shown to inhibit cell growth and proliferation. In addition, mTOR is frequently hyperactivated in cancer, and mTORC1 has often been observed to be deregulated in a wide variety of human cancers . Data geAPCMin/+ (ApcD716) mice, which are a model of human colorectal cancer. Most human colorectal cancers have somatic mutations in the adenomatous polyposis coli (APC) tumor suppressor gene, and APCMin/+ mice develop multiple intestinal neoplasia. The APCMin/+ mice are relatively short lived, living a maximum of ~\u2009200\u00a0days compared with 800 to 900\u00a0days for normal laboratory mice. Three groups [APCMin/+ mice with rapamycin or everolimus reduced intestinal neoplasia (polyp number and size) in the APCMin/+ mice. In addition, these studies showed that rapamycin or everolimus dramatically increased the lifespan of the APCMin/+ mice. For example, Hasty et al. [APCMin/+ mice.The data in Table e groups \u201335 showep53+/\u2212 mice. Komarova et al. [p53+/\u2212 mice; however, Christy et al. [p53+/\u2212 mice treated with rapamycin. Comas et al. [p53\u2212/\u2212 mice; however, Christy et al. 
[p53\u2212/\u2212 mice.Three groups have studied the effect of rapamycin on mice with deletions in the p53 gene, a transcription factor with broad biological functions, including as a tumor suppressor in humans . Komarova et al. reportedy et al. did not s et al. reportedy et al. did not Her-2/neu. HER2 is a member of the human epidermal growth factor receptor family and amplification/overexpression of this oncogene has been shown to play a role in certain types of breast cancer. Rapamycin treatment resulted in a modest, but significant increase in the lifespan of Her-2/neu transgenic mice [Her-2/neu transgenic mice. Hernando et al. [Ptet\u2212/\u2212 mice, a model of leiomyosarcomas. The Ptet\u2212/\u2212 mice develop widespread smooth muscle cell hyperplasia and abdominal leiomyosarcomas, and everolimus significantly reduced the growth rate of these tumors. Livi et al. [Rb1+/\u2212 mice. The retinoblastoma gene (Rb1) was the first tumor suppressor gene identified in humans and prevents excessive cell growth by inhibiting cell cycle progression. Rapamycin increased the lifespan of the Rb1+/\u2212 mice and reduced the incidence of thyroid C cell carcinomas as well as delaying the appearance and reducing the size of pituitary tumors. Hurez et al. [Rag2\u2212/\u2212, and IFN-\u03b3\u2212/\u2212 mice. Cancer immune surveillance is reduced in these two mouse models and rapamycin increased the lifespan of both Rag2\u2212/\u2212and IFN-\u03b3\u2212/\u2212 mice; however, no data were presented on the effect of rapamycin on the incidence of tumors in these mice.Two reports have described the effect of rapamycin on transgenic mice overexpressing nic mice , 41 and o et al. reportedi et al. studied z et al. studied The first indication that rapamycin might be important for the heart was the discovery that coronary stents coated with rapamycin prevented restenosis and stent thrombosis compared with non-coated or other drug-eluting stents , which lThe effect of rapamycin and rapalogs on the cardiovascular system initially was not clear, especially in humans. In clinical studies with transplant patients, rapalogs induced a negative plasma cardiovascular risk profile, e.g., an increase in LDL cholesterol and triglyceride concentrations in plasma . RapamycApoE\u2212/\u2212 or LDLR\u2212/\u2212 mice fed a high-fat diet to induce atherosclerotic plaque formation. All four studies showed that rapamycin reduced aortic atheromas, and Jahrling et al. [LDLR\u2212/\u2212 mice fed a high-fat diet. Because treating transplant patients with rapalogs has been shown to increase blood levels of cholesterol and triglycerides, the four groups also measured blood levels of cholesterol in their mouse models of atherosclerosis. Three found that rapamycin treatment had no effect on blood levels of cholesterol or triglycerides groups [Table g et al. found ths groups \u201353. Muels groups reporteds groups observedA large number of studies have evaluated the effect of rapamycin on cardiomyopathy and hypertrophy induced by physical, pharmacological, or genetic engineering in mice and rats. All nine studies show that rapamycin prevents or attenuates cardiomyopathy or hypertrophy in both mice and rats. Two studies examined the effect everolimus in rats or rapamycin in mice on myocardial infraction , 57. RapThe three studies on the effect of heart function in old mice are the most relevant to this review. 
The studies by Simon Melov\u2019s group at the Buck Institute and PetePerhaps the most unanticipated aspect of rapamycin\u2019s biological effects, besides its anti-aging actions, is its impact on the central nervous system in mice. The limited number of early studies suggested that rapamycin might have negative effects on memory because of its effect on protein synthesis . HoweverRapamycin has also been shown to affect mouse models related to Alzheimer\u2019s disease. The apolipoprotein E\u03b54 allele (APOE4) is the major genetic risk factor for Alzheimer\u2019s disease in humans; individuals with one or two copies of this allele have a fourfold to eightfold increased risk in developing Alzheimer\u2019s disease . Using t+) neurons in the substantia nigra pars compacta [Drosophila and a mouse model (HD-N171-N82Q) of Huntington\u2019s disease, they showed that rapamycin attenuated the polyglutamine (polyQ) toxicity in yw;gmr-Q120 Drosophila and enhanced various tests of motor performance in HD-N171-N82Q mice. Berger et al. [Drosophila expressing polyQ proteins and showed that rapamycin was protective against tau protein in Drosophila. Sarkar et al. (2008) showed that lithium and rapamycin exert an additive protective effect against neurodegeneration in a Drosophila model of Huntington\u2019s disease.As shown in Table compacta , 72 and compacta . Ravikumcompacta showed tr et al. and Kingr et al. showed tr et al. found siIn addition to its positive effect on neurodegeneration, rapamycin also has a neuroprotective effective effect on neurovascular disease, brain injury, and neurodevelopmental disorders Table . Of partAn interesting observation came from the study by Halloran et al. when theBecause rapamycin was first developed as part of a cocktail to prevent rejection in transplant patients, it is generally assumed that rapamycin is an immunosuppressant. Therefore, when it was observed that rapamycin increased the longevity of mice, there were questions about the translatability of using rapamycin to delay aging in humans because of its potential negative effect on the immune system. However, it is now recognized that rapamycin is best identified as an immunomodulator rather than an immunosuppressant , 87. BecTable Streptococcus pneumonia. Old male C57BL/6 mice (24\u00a0months) were treated with rapamycin for 4 or 20\u00a0months. Both groups of mice showed improved survival to pneumococcal pneumonia and reduced lung pathology; however, the increased survival was not statistically significant for the mice given rapamycin for 20\u00a0months. On the other hand, Goldberg et al. [One of the limitations in the current studies evaluating the effect of rapamycin on infectious agents is that almost all of the studies have used young mice (under 3\u00a0months of age). Two groups studied the effect of a same dose and formulation of rapamycin (14\u00a0ppm) used by Harrison et al. on resisg et al. found thg et al. who compIn 1961, Leonard Hayflick described the phenomenon of cell senescence when he showed that human fibroblasts did not grow indefinitely in culture but underwent irreversible growth arrest . It was 2O2, bleomycin), and oncogene activation. Cao et al. [The studies listed in Table o et al. also stuIn addition to suppressing markers of senescence, such as p16 and p21 expressions and SA-\u03b2-gal-positive cells, rapamycin reduced/prevented the SASP phenotype, i.e., the expression and secretion of proinflammatory cytokines by senescent cells. 
Two groups independently and simultaneously reported in 2015 that rapamycin reduced SASP produced by senescent human fibroblasts. Campisi\u2019s group at the Buck Institute reported that rapamycin suppressed the secretion of proinflammatory cytokines produced by a variety of human cells isolated from different tissues . BecauseNrasG12V mice. NrasG12V expression induced senescence in liver, and the SASPs produced by the senescent cells trigger an immune response in these mice. Rapamycin treatment (1\u00a0mg/kg by gavage once every 3\u00a0days) reduced SASP production in the NrasG12V mice. In studying the effect of rapamycin on various aspects of cardiac function in old mice, Lesniewski et al. [INK4A expression) in the skin that was accompanied by an improvement in the clinical appearance of the skin.Of particular interest to this review are the three studies showing that rapamycin suppressed cell senescence in vivo in mice. Castilho et al. studied i et al. found thi et al. studied i et al. showed tIn the 10\u00a0years since the initial report that rapamycin increased the lifespan of mice, there has been an explosion in the number of reports studying the effect of rapamycin on various parameters related to aging in mice. These studies have focused on determining the overall impact of rapamycin on aging processes and identifying potential mechanisms responsible for rapamycin\u2019s pro-longevity effect. As a result of the data generated, it is now clear that there is a consensus in many areas as to the impact of rapamycin on mice, and the research reports in these areas have been described in this review. The first and most important outcome of these studies has been the demonstration that rapamycin has a robust effect on the lifespan of mice. Thirty studies have been conducted since 2009 showing rapamycin increases the lifespan of various strains and genetic models of mice and the non-human primate, the common marmoset . Salmon\u2019s group recently reported that 9\u00a0months of rapamycin treatment had minor effects on clinical laboratory markers in middle-aged male or female marmosets . TherefoDetermine whether rapamycin\u2019s effect is sex dependent in transgenic mouse models of Alzheimer\u2019s disease: Almost all of the current studies did not identify the sex of the mice used, suggesting they used both sexes. Currently, there is no study specifically comparing the effect of rapamycin on male and female mice for any transgenic AD mouse model. As described above, rapamycin has a sex effect on longevity; the lifespan of female mice is increased more than male mice. In addition, it is well documented that gender plays an important role in Alzheimer\u2019s disease: women are at a greater risk [ter risk . TherefoDefine timing of rapamycin administration on cognition and pathology in transgenic AD mouse models: Eight of the ten studies showing that rapamycin treatment attenuated Alzheimer\u2019s disease were conducted early in the life of the mice; mice were treated with rapamycin before a significant cognitive deficit or amyloid burden occurred. While these studies show that rapamycin can prevent the development and progression of Alzheimer\u2019s disease in mice, there is only limited information on whether rapamycin can reverse Alzheimer\u2019s disease. Galvan\u2019s group [\u2019s group studied \u2019s group studied \u2019s group . When th\u2019s group have not\u2019s group . 
They st\u2019s group and reve\u2019s group , 59, 113\u2019s group , and cog\u2019s group , 79 in mEffect of higher levels of rapamycin on Alzheimer\u2019s disease. All of the previous studies on Alzheimer\u2019s disease and cognition used either 14\u00a0ppm or a similar, relatively low dose of rapamycin. It is now apparent that mice not only tolerate higher doses of rapamycin but that higher doses of rapamycin result in improved lifespan [lifespan , 35, 131lifespan to see aStudy the effect of rapamycin on other animal models: Because many interventions that work in mice do not translate to humans, it is important to determine if the positive effects of rapamycin are seen in other animal models. For example, it would be relatively straight forward to studying the effect of various levels of rapamycin at early and late stages of Alzheimer\u2019s disease using the transgenic AD rat models [t models , 133. Ast models have poit models . In addit models , 137.So where do we go from here? We believe one of the first areas that should be seriously considered is taking rapamycin to the clinic as a potential treatment of Alzheimer\u2019s disease, as has been proposed by Kaeberlein and Galvan . Current"} {"text": "Tachyoryctes macrocephalus, also known as giant mole rat) is a fossorial rodent endemic to the afro-alpine grasslands of the Bale Mountains in Ethiopia. The species is an important ecosystem engineer with the majority of the global population found within 1000\u2009km2. Here, we present the first complete mitochondrial genome of the giant root-rat and the genus Tachyoryctes, recovered using shotgun sequencing and iterative mapping. A phylogenetic analysis including 15 other representatives of the family Spalacidae placed Tachyoryctes as sister genus to Rhizomys with high support. This position is in accordance with a recent study revealing the topology of the Spalacidae family. The full mitochondrial genome of the giant root-rat presents an important resource for further population genetic studies.The endangered giant root-rat ( Tachyoryctes macrocephalus, R\u00fcppell, 1842), also known as giant mole rat and big-headed African mole rat, is a fossorial rodent and ecosystem engineer endemic to the Bale Mountains of south-east Ethiopia. The species is naturally confined to afro-alpine grasslands, where it constructs large underground burrow systems and three nuclear genes and DNA extracts at the GLOBE Institute, University of Copenhagen . The DNA was extracted using the Qiagen DNeasy\u00ae Blood and Tissue Kit, and sheared to approximately 400\u2009bp using the Covaris M220 Focused-ultrasonicator. DNA fragments were built into an Illumina library following the protocol from Car\u00f8e et al. , under permits from the Ethiopian Wildlife Conservation Association. The voucher tissue sample of e et al. , and seqe et al. , and reme et al. . For thee et al. with defe et al. and wereNannospalax galili as outgroup. The analysis was conducted using RaxML-HPC2 on XSEDE v8.2.12 and conducted an annotation using MITOS \u2013 with high support, in accordance with a phylogenetic analysis based on mitochondrial and nuclear genes (Our phylogenetic tree places the giant root-rat as sister clade to the genus ar genes (\u0160umberaar genes . Furtherar genes . The ful"} {"text": "The sTibaldi et al.\u2019s reevaluation included both immunohistochemical and morphological examinations of the rodent lesions. 
The premise underlying immunohistochemical analysis is that all cells in a malignant lesion are expected to be immunohistochemically identical - i.e., monoclonal \u2013 because they are all the direct descendants of a single transformed cell , 7. By cTibaldi et al confirmed that 72 of the 78 lesions originally diagnosed by RI as malignant were, in fact, monoclonal. Tibaldi et al. thus confirmed the original RI diagnoses of malignancy in 92.3% of the aspartame-exposed rodents [Roberts\u2019 critique is based on his review of the histological slides presented by Tibaldi et al. and on the unsubstantiated and previously refuted , 9 claimp = 0.006), including both lymphomas (p = 0.032) and leukemias (p = 0.031) [Roberts also fails to acknowledge, rebut or offer any alternative explanation for the dose-response relationship between aspartame dose and cancer incidence that was clearly evident in the RI studies and was confirmed by Tibaldi et al. Increasing aspartame exposures were associated with statistically significant increases in incidence of all hematolymphatic malignancies and takeFacile dismissals of the carcinogenicity of aspartame put forth by vested interests can no longer be sustained. Such unsubstantiated claims can impede public health interventions and can lead to unnecessary cancers, including childhood cancers and premature deaths."} {"text": "Questions were asked on demographics, protocol for root canal treatment (RCT), materials employed in different stages. Opinions were also sought on satisfaction with their practice and training needs in endodontics. Data were analyzed with SPSS version 20.0 and presented as tables and charts. Significance level was set at p\u22640.05.ninety dentists undergoing postgraduate training with mean age of 34.81 \u00b1 5.9 years participated in the study. Root canal treatment was mostly done in multiple visits in both single and multi-rooted teeth (p=0.01), only about 15% performed the procedure on multi rooted teeth. Sixty-five (72.2%) never used Rubber dam, stainless steel files were being used by 69%, step down technique of preparation by 53.9% and Sodium hypochlorite was the major irrigant (80%) used. Obturation was majorly with Cold lateral compaction technique (94%), 57.2% delayed definitive restoration for maximum of 1 week and amalgam was still the major material used for posterior teeth as reported by 62.9% of the participants. The majority (55.6%) were not satisfied with their current knowledge and practice and most were those that did not have good undergraduate training (p = 0.05).the practice of endodontics needs standardization across the nation as it is being advocated in other countries. There is need for hands on-training on endodontics to encourage adoption of new advances in technology, as well as improve the training of postgraduate dentists in endodontics. Also, emphasis should be placed on use of rubber dam in order to minimize the spread of infection and protect the patients from aspiration of small instruments involved in endodontic procedure. Endodontics is widely practiced across the globe to alleviate pulpal pain and pathologies in order to maintain the affected tooth as a functional unit of the dental arch . 
Many inSeveral studies -11 have This was a cross sectional study that employed self-administered questionnaire completed by resident doctors during the update course of West African College of Surgeons and residents that attended a conference of the Nigerian society for restorative dentistry (NISORD) in the same year. The update course usually draws dentists undergoing postgraduate training in institutions across the nation. To avoid duplication, the postgraduate doctors were asked to indicate if they had already completed the same questionnaire previously. The modified questionnaire was distNinety respondents filled and completed the questionnaire. The mean age of participants was 34.81 \u00b1 5.9 years. A high proportion of the participants graduated between 2001 and 2010 and the majority (86.7%) work with the government . Table 2et al. [et al. [The participants in this study were postgraduate resident doctors undergoing training in different institutions across six geopolitical zones of Nigeria, unlike the study by Udoye et al. on endodet al. , where met al. and Iranet al. where 53et al. where th [et al. report i [et al. written et al. [et al. [et al. [et al. [et al. [Autoclaving as claimed by 82.2% is the major means of sterilizing instruments and files. Though the use is higher in this study, it is also a major mode of sterilization in a study by Gupta et al. where 48et al. . This stet al. . This fiet al. , Mehta e [et al. and K\u00fc\u00e7\u00fc [et al. , that re [et al. ,9-16. Wh [et al. and Iqba [et al. respecti [et al. reported [et al. and Flem [et al. , the peret al. [et al. [et al. [Though new modifications are available to overcome the possible disadvantage (in application) of the old and conventional rubber dam kit, yet majority still do not use it. Reason for no use may include extra cost of rubber dam, lack of training, unavailability and unacceptability as observed by Gupta and Rai, and Udoyet al. . The majet al. . Howeveret al. . This st [et al. who obse [et al. ,17 have [et al. in their [et al. . The extet al. [et al. [Also, with excessive use of hand files, separation of these instruments within the canal is experienced often. As observed in this study, over 40% of the participants experienced instrument separation and this could be due to over-use. The technique of preparation of canal that involves the early coronal flaring, for example, crown-down technique has been found to produce a better shape, prevention of transportation of debris and micro-organisms to the apex, and enhanced penetration of irrigating solution. However, this study like many others -9, obseret al. which ha [et al. . Possibiet al. [et al. [The irrigant of choice in this study was majorly sodium hypochlorite which is similar to what has been reported by several studies ,3,12-17. [et al. and its [et al. .et al. [Cold lateral compaction technique has been reported as the most frequently employed technique of obturation by several studies ,9,17-20 et al. this mayet al. [Almost half (48.9%) of the participants in this study restore the access cavity a week after obturation. It is recommended by the American Association of Endodontists (AAE) that reset al. . This iset al. ,3,12 alset al. . RelativThis study showed that some practices are in line with the recommendations for endodontic practice. However, the practice of endodontics in the country needs to be standardized across the nation as it is being advocated in some other countries, for effective management of the patients. 
There is also a need for continuing medical education (CME) and hands-on training in endodontics for dentists, to encourage the adoption of new advances in technology, as well as to improve postgraduate training in endodontics. Also, emphasis should be placed on the use of rubber dam in order to minimize the spread of infection and protect patients from aspiration of the small instruments involved in endodontic procedures. Studies have reported that most general dental practitioners have low adherence to international guidelines for endodontics; although better among the studied group than what is known of general dental practitioners, use of the latest techniques and materials in endodontics is poor; and use of rubber dam is not commonly practiced, both in the studied group and internationally. This study has shown that undergraduate training in endodontics may be insufficient, as many of the respondents were not satisfied with it; thus, there is a need to intensify endodontic training at the undergraduate level. The study observed the non-use of rubber dam for endodontics among the studied group; therefore, there is a need to lay emphasis on the use of rubber dam as a standard practice in endodontic treatment. The study also showed the need to optimize endodontic training at the postgraduate level with hands-on sessions for better knowledge of endodontics."} {"text": "The current COVID-19 pandemic threatens human life, health, and productivity. AI plays an essential role in COVID-19 case classification, as machine learning models can be applied to COVID-19 case data to predict infectious cases and recovery rates using chest x-rays. Accessing patients' private data violates patient privacy, and a traditional machine learning model requires accessing or transferring the whole dataset to train the model. In recent years, there has been increasing interest in federated machine learning, as it provides an effective solution for data privacy, centralized computation, and high computation power. In this paper, we studied the efficacy of federated learning versus traditional learning by developing two machine learning models using Keras and TensorFlow Federated; we used a descriptive dataset and chest x-ray (CXR) images from COVID-19 patients. 
During the model training stage, we tried to identify which factors affect model prediction accuracy and loss, such as the activation function, model optimizer, learning rate, number of rounds, and data size. We kept recording and plotting the model loss and prediction accuracy for each training round to identify which factors affect model performance. We found that the softmax activation function and the SGD optimizer give better prediction accuracy and loss; changing the number of rounds and the learning rate has a slight effect on prediction accuracy and loss, but increasing the data size did not have any effect on prediction accuracy or loss. Finally, we built a comparison between the proposed models' loss, accuracy, and performance speed; the results demonstrate that the federated machine learning model has better prediction accuracy and loss but a higher performance time than the traditional machine learning model. The current COVID-19 pandemic, caused by SARS-CoV-2, threatens human life, health, and productivity . AI and The concept of federated learning was proposed by Google in 2016 as a new machine learning paradigm. The objective of federated learning is to build a machine learning model based on distributed datasets without sharing raw data, while preserving data privacy , 5.
In federated machine learning, each client has its own dataset and its own local machine learning model. There is a centralized global server in the federated environment that holds a centralized (global) machine learning model, which aggregates the distributed clients' model parameters (model gradients). Each client trains its local model on its own dataset and shares only the model parameters or weights with the global model. The global model runs iterative rounds to collect the distributed clients' model updates without sharing raw data , 5, as shown.
Why federated machine learning should be used:
A decentralized model removes the need to transfer all the data to one server to train the model, as training occurs locally on each node; traditional machine learning requires moving all the data to a centralized server to build and train the model.
There is no data privacy violation, as methodologies such as differential privacy, homomorphic encryption and secure multiparty computation are applied, unlike in traditional machine learning.
A third party can be part of the training process as long as there is no data privacy violation and the data are secured; with traditional machine learning, a third party may not be an option, for example in military organizations.
Less computation power is needed, as model training is performed on each client and the centralized model's primary role is to aggregate the distributed models' gradient updates; in traditional machine learning, one centralized server contains all the data and requires high computational power for model training.
Decentralized algorithms may provide better or the same performance as centralized algorithms .
It is highly recommended to use federated machine learning rather than traditional machine learning in environments where data privacy is highly required. 
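The round structure described above reduces to a weighted average of client weights. The sketch below is a framework-free illustration of one federated averaging (FedAvg) round in plain NumPy, not the TensorFlow Federated implementation used in this paper; every function and variable name is invented for the example, and the local training step is left as a stub.

```python
# Minimal, framework-free sketch of one Federated Averaging (FedAvg) round.
# Clients train locally; only weight arrays (never raw data) reach the server.
import numpy as np

def client_update(global_weights, local_data):
    """Hypothetical local training: returns (updated weights, number of local samples)."""
    weights = [w.copy() for w in global_weights]
    # ... local gradient steps on local_data would go here ...
    return weights, len(local_data)

def fedavg(client_results):
    """Average client weights, weighted by each client's local sample count."""
    total = sum(n for _, n in client_results)
    averaged = [np.zeros_like(w) for w in client_results[0][0]]
    for weights, n in client_results:
        for i, w in enumerate(weights):
            averaged[i] += (n / total) * w
    return averaged

# One communication round over three simulated clients with different data sizes.
global_weights = [np.zeros((4, 2)), np.zeros(2)]                 # toy model: one dense layer
clients = [np.random.rand(30, 4), np.random.rand(50, 4), np.random.rand(20, 4)]
results = [client_update(global_weights, data) for data in clients]
global_weights = fedavg(results)                                  # server aggregation step
```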
Federated learning can be applied in many disciplines like Federated machine learning enables us to overcome the obstacles faced by the traditional machine learning model as:Traditional machine learning occurs by moving all data source to a centralized server to train and build the model, but this may violate the rules of military organizations especially when third-party is used to create, train and maintain the model.To train the model, the third-party should prepare, clean, and restructure the data to be suitable for model training, however, this may violate data privacy when the data are handled to create the model.Traditional machine learning models also take much time to build the model with acceptable accuracy, which may cause a delay for organizations, especially recently opened ones.Traditional machine learning also requires the existence of a massive amount of historical data to train the model to give acceptable accuracy (Cold Start) .There is a need for a secure distributed machine learning methodology that trains clients\u2019 data on their servers without violating data privacy, saves computational power, and overcomes the cold start problem, enabling clients to get immediate results.Federated learning has the potential to solve these issues, as it enables soiled data servers to train their models locally and to share their model\u2019s gradients without violating patient privacy .The principal objective of this paper is, to build a comparison between a federated machine learning model and a non-federated machine learning model, by applying them to the same datasets and build the comparison between the model\u2019s prediction loss, prediction accuracy, and training time.Boyi Liu et al. [%, 91.26%), second, the COVID-Net and MobileNet-v2 had the same loss value as COVID-Net and Mobile Net. Non-federated learning was conducted on the same data and it was found that the loss convergence rate caused by using federated learning decreased slightly.u et al. proposedJunjie Pang et al. [g et al. proposedWeishan Zhang et al. [g et al. proposedThey defined a max waiting time for each client to participate during the server round which was defined by the platform owner. They applied four models using this architecture. GhostNet, ResNet50, and ResNet101 were used on COVID-19 datasets and they found that the proposed approach introduce better accuracy than the default setting one and can reduce communication overhead and the training time for ResNet50 and ResNet101, however, these results did not apply to GhostNet.Parnian Afshar et al. [r et al. proposedChaoyang He et al. [%accuracy in only a few hours compared to 77.78% for FedAvg.e et al. proposedAmir Ahmad et al. [d et al. proposedNikos Tsiknakis et al. [s et al. introducMwaffaq Otoom et al. [% prediction accuracy.m et al. proposedThanh Thi Nguyen et al. [n et al. proposedFatima M Salman et al. [% prediction accuracy.n et al. ProposedN Narayan Das et al. [s et al. proposedAKMB Haque et al. [e et al. proposedHimadri Mukherjee et al. [e et al. proposedIke FIBRIANI et al. [I et al. proposedHarsh Panwar et al. [r et al. proposedShashank Vaid, et al. [, et al. proposedRodrigo M Carrillo-Larco et al. [o et al. proposedFadoua Khmaissia et al. [a et al. proposedAkib Mohi Ud Din Khanday et al. [y et al. proposedR Manavalan et al. [n et al. proposedSina F Ardabili et al. [i et al. proposedSara Hosseinzadeh Kassan et al. [n et al. proposedIwendi, Celestine, et al. [, et al. proposedJaved, Abdul Rehman, et al. [, et al. 
presented ... . Bhattacharya, Sweta, et al. presented ... . Manoj, Mk, et al. proposed ... . Reddy, G. Thippa, et al. proposed ... . Anwaar Ulhaq et al. introduced ... . This section addresses the tools and methodology applied for both the federated and the traditional model, which were used to predict recovery based on the features of the patient. TensorFlow with the Keras API was used to build the federated and traditional models, and the following steps were used to build them. Algorithm 1, the federated learning model. Input: the COVID-19 dataset as a CSV file. Output: model prediction accuracy and loss. Initialization: data loading (the data were loaded using the pandas package, which returned a data frame object); drop unique-value columns; replace null values; label encoding; data repetition (the data were repeated to simulate the number of clients); data shuffling (the data were shuffled to avoid obtaining the same results); data batching (the data were grouped into batches to enhance performance); data mapping (the ndarray dataset was flattened to a 1-D array dataset); data prefetching (the data were cached in memory for better performance); create the deep learning model; create the federated learning model (the Keras deep learning model is wrapped with from_keras_model to build the federated learning model); model initialization and training; model evaluation; return the model's prediction accuracy and loss for each round. Algorithm 2, the traditional machine learning model. Input: the COVID-19 dataset as a CSV file. Output: model prediction accuracy and loss. Initialization: data loading (the data were loaded using the pandas package, which returned a data frame object); drop unique-value columns; replace null values; label encoding; create the deep learning model; model evaluation; return the model's prediction accuracy and loss for each round. As shown in the corresponding figure, the chest X-ray image pipeline begins with data loading: the CV2 package was used to read the chest X-ray images from the dataset download directory and load them into a memory object; a minimal sketch of this loading step and of the data preparation described below is given after this paragraph.
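The sketch below is a minimal, hypothetical illustration of this loading step and of the preparation steps enumerated in the next paragraph (resizing, normalization, flattening, tensor slicing, repetition, shuffling, batching, and prefetching). The directory names, file extension, class labels, batch size, and small dense network are assumptions made for illustration, not the authors' exact configuration.

```python
import glob
import cv2
import numpy as np
import tensorflow as tf

IMG_SIZE = 244  # images are resized to 244 x 244 x 3, as in the paper

def load_images(image_dir, label):
    """Read one class directory with OpenCV, resize, normalize, and flatten."""
    samples = []
    for path in glob.glob(f"{image_dir}/*.jpeg"):
        img = cv2.imread(path, cv2.IMREAD_COLOR)       # load as a color image
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))     # 244 x 244 x 3
        img = img.astype("float32") / 255.0             # normalize to [0, 1]
        samples.append({"pixels": img.reshape(-1),      # flatten to a 1-D array
                        "label": label})
    return samples

# Hypothetical directory layout; the Kaggle archive may use different folder names.
samples = (load_images("data/COVID19", 0)
           + load_images("data/PNEUMONIA", 1)
           + load_images("data/NORMAL", 2))

features = np.stack([s["pixels"] for s in samples])
labels = np.array([s["label"] for s in samples], dtype="int32")

dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .repeat(2)                     # data repetition (simulated clients)
           .shuffle(buffer_size=1024)     # avoid identical batch order
           .batch(32)                     # group samples into batches
           .prefetch(tf.data.AUTOTUNE))   # cache upcoming batches in memory

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu",
                          input_shape=(IMG_SIZE * IMG_SIZE * 3,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.02),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# In the federated setting this same model is instead wrapped and trained per client.
model.fit(dataset, epochs=1)
```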
The images were resized to 244 × 244 × 3 as color images. Data normalizing: the image data were divided by 255 to normalize them between 0 and 1. Data reshaping: each image object is a multi-dimensional array and should be flattened into a list. Creating the sample data dictionary: after flattening, a dictionary instance was created for each image sample to represent the image data (features) and its label. Creating the sample and label Keras tensor objects: to build the Keras dataset, one Keras tensor object should be built for the features and another for the labels. Creating the Keras tensor dataset: the Keras dataset was created using the from_tensor_slices API. Data repetition: the data were repeated to simulate the number of clients. Data shuffling: the data were shuffled to avoid obtaining the same results. Data batching: the data were grouped into batches to enhance performance. Data mapping: the ndarray dataset was flattened to a 1-D array dataset. Data prefetching: the data were cached in memory for better performance. Creating the Keras deep learning model: a Sequential deep learning model was built using the Keras API. Creating the federated learning model: the deep learning model was wrapped using from_keras_model. Creating a federated averaging process using the Keras API: the local models' gradients and updates are collected and sent to the global model. Model initialization and training: the iterative process was initiated and training started. Model evaluation: the model performance was evaluated by printing the evaluation metrics. As shown in the corresponding figure, before training this federated model on the patient's descriptive data, some steps were removed and others added. Steps removed: data normalization (the data are not all of the same type) and data reshaping. Steps added: one-hot encoding of the features, converting categorical feature values into binary vectors. We modified the proposed model in this way before training it on the patient's descriptive data. As shown in the corresponding figure, the traditional model for the chest X-ray data was built as follows. Data loading: the CV2 package was used to read the chest X-ray images from the dataset download directory and load them into a memory object; the images were resized to 244 × 244 × 3 as color images. Data normalizing: the image data were divided by 255 to normalize them between 0 and 1. Creating the sample data dictionary: a dictionary instance was created for each image sample to represent the image data (features) and its label. Creating the sample and label list objects: list objects were built for the features and the labels. Data reshaping: each image object was a multi-dimensional array and was flattened into a list. Label encoding: a matrix of binary vectors was built to represent the categorical label values. Creating the Keras deep learning model: a Sequential deep learning model was created using the Keras API. Model initialization and training: the iterative process was initialized and training started. Model evaluation: the model performance was evaluated by printing the evaluation metrics. As shown in the corresponding figure, for the patient's descriptive data the steps removed were again data normalization and data reshaping, and the step added was one-hot encoding of the features. The patient's descriptive COVID-19 dataset contained COVID-19 case information, and after training, the two proposed models were used to predict the patient recovery rate. We found that the proposed federated model had a higher prediction accuracy than the proposed traditional model, a lower prediction loss than the proposed traditional model, and a higher training time than the proposed traditional model, as shown in the corresponding figures. After training, the federated and traditional models were also used to predict the outcome for a patient based on the chest X-ray image.
We found that the proposed federated model with the SGD algorithm had a higher prediction accuracy than the proposed traditional model, that it had a lower prediction loss than the proposed traditional model, and that the proposed federated model had a higher training time than the proposed traditional model, as shown in the corresponding figures. After training, the federated and traditional models were also used to predict the patient status based on the chest X-ray image. We found that the proposed federated model with the SGD algorithm had a higher prediction accuracy than the proposed traditional model, that it had a lower prediction loss than the proposed traditional model, and that its training time was equal to or slightly greater than that of the proposed traditional model, as shown in the corresponding figures. Our experiments were conducted on the machine described in Table 4. In this work, two types of COVID-19 datasets were used. The first is a chest X-ray radiography (CXR) image dataset with COVID-19, pneumonia, and normal cases, obtained from https://www.kaggle.com/prashant268/chest-xray-covid19-pneumonia; this dataset contains 5,144 images categorized as follows: 3,418 images of pneumonia cases, 1,266 images of normal cases, and 460 images of COVID-19 cases. The second is a patient-level descriptive COVID-19 dataset containing information on infected cases reported by the WHO in Wuhan City, Hubei Province of China from 31 December 2019, obtained from https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset. Modifications were required before feeding the machine learning model with the descriptive data: remove unique columns, replace null values, normalize columns such as the age column so that values fall within a common range rather than their raw scale, and remove string columns. A minimal sketch of these preprocessing steps is given after this paragraph.
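The following is a minimal sketch of these tabular preprocessing steps in pandas, using the label-encoding variant from Algorithm 1 rather than dropping the string columns outright. The file name, the age column, and the imputation choices are hypothetical placeholders, not the authors' exact code.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler

# Hypothetical file and column names; the Kaggle case-information dataset uses its own headers.
df = pd.read_csv("novel_corona_2019_cases.csv")

# Drop identifier-like columns in which every value is unique (e.g. case IDs).
unique_cols = [c for c in df.columns if df[c].nunique(dropna=True) == len(df)]
df = df.drop(columns=unique_cols)

# Replace null values: most frequent value for text columns, median for numeric columns.
for col in df.columns:
    fill = df[col].mode().iloc[0] if df[col].dtype == "object" else df[col].median()
    df[col] = df[col].fillna(fill)

# Scale numeric columns such as age into a common 0-1 range instead of their raw scale.
df[["age"]] = MinMaxScaler().fit_transform(df[["age"]])

# Label-encode the remaining string columns so the model receives only numeric input.
for col in df.select_dtypes(include="object").columns:
    df[col] = LabelEncoder().fit_transform(df[col].astype(str))
```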
The model parameters were modified multiple times to achieve maximum accuracy and minimum loss. These modifications included the activation function, the model optimizer, the learning rate, the number of rounds, and the data size. Activation function: the sigmoid activation function was more accurate than the ReLU activation function. Model optimizer: using SGD provided better model accuracy and loss than Adam, as shown in the corresponding figure. Learning rate: it was found that changing the learning rate had only a slight effect on model accuracy and loss; when changing the learning rate from 0.02 to 0.01, the model loss changed from 34.02 to 34.04, as shown in the corresponding figure. Number of rounds: increasing the number of rounds affects the loss but not the model accuracy, as shown in the corresponding figures. Data size: increasing the data size did not affect the model loss or accuracy, as shown in the corresponding figures. We applied the proposed federated learning model to COVID-19 datasets and found that the proposed federated learning model gives better prediction accuracy than the traditional deep learning model, gives a lower loss than the traditional machine learning model, and takes a higher training time than the traditional machine learning model. Activation function: the softmax activation function was more accurate than the ReLU and sigmoid activation functions when applied to the chest X-ray (CXR) image dataset and the patient's descriptive data. Model optimizer: SGD provided better model accuracy and loss than Adam when applied to the patient's descriptive data and the patient's chest X-ray (CXR) image dataset. Learning rate: changing the learning rate had a slight effect on model accuracy and loss when applied to the patient's descriptive data and the chest X-ray (CXR) image dataset. Number of rounds: increasing the number of rounds helped reduce the loss but had no impact on the model's accuracy when applied to the patient metadata and the chest X-ray (CXR) image dataset. Data size: increasing the data size did not affect the model loss or accuracy when applied to the patient's descriptive data and the chest X-ray (CXR) image dataset. Swarm intelligence algorithms will be used in the future to optimize the proposed federated model for global optimization and to reduce the communication overhead. The hybrid model should also be tested on chest X-ray radiography (CXR) and chest computed tomography datasets."} {"text": "Global food security and sustainability in the time of pandemics (COVID-19) and a growing world population are important challenges that will require optimized crop productivity under the anticipated effects of climate change. Agricultural sustainability in the time of a growing world population will be one of the major challenges of the next 50 years. Zinc (Zn) is one of the most important essential mineral nutrients required for metabolic processes, so a shortage of Zn constrains crop yield and quality worldwide. Zinc efficiency, that is, maintaining higher growth and yield under low Zn supply, makes it a promising and sustainable approach for developing cultivars that are zinc efficient. Future crop plants need to be more Zn efficient, with sustainable food yields under sub-optimal Zn conditions. Therefore, there is substantial value in biological research aimed at understanding how plants take up and utilize Zn. Plants" that provide an overview of current developments and trends in the times of high-throughput genomics and phenomics data analysis.
Furthermore, this Special Issue presents research findings in various experimental models and areas ranging from maize to Medicago , flax, and sorghum.A total of 11 articles are included in this Special Issue of \u201cHacisalihoglu [alihoglu outlinesMallikarjuna et al. [a et al. describeAnisimov et al. [v et al. have useCardini et al. [i et al. have useDesta et al. [a et al. focused Lozano-Gonzales et al. [s et al. have useGrujcic et al. [c et al. have useReynolds-Marzal et al. [l et al. have usePetschinger et al. [r et al. have useHacisalihoglu and Armstrong [rmstrong have useOverall, the contributions to this Special Issue topic spans the full spectrum of Zn in plants and soils, cellular mechanisms, gene expressions, and biofortification. This Special Issue is an excellent summary of current progress with future outlooks that illustrates our increased knowledge on Zn and provides the foundation for further future research on the improvement of Zn nutrition in plants.Finally, we encourage the readers to visit the articles published in this Special Issue of \u201cUnraveling the Mechanisms of Zn Efficiency in Crop Plants: From Lab to Field Applications\u201d."} {"text": "This paper aimed to review the databases on non-displaced femoral neck fractures in elderly patients. We also discussed the surgical and non-surgical treatments and selection of implants.Reviewed was the literature on non-displaced femoral neck fractures in elderly patients. Four major medical databases and a combination of the search terms of \u201cfemoral neck fractures\u201d, \u201cnondisplaced\u201d, \u201cundisplaced\u201d, \u201cnon-displaced\u201d, \u201cun-displaced\u201d, \u201caged\u201d, \u201cthe elderly\u201d, and \u201cgeriatric\u201d were used to search the literature relevant to the topic of the review.Patients who were unable to tolerate the operation and anesthesia could be treated conservatively. Otherwise, surgical treatment was a better choice. Specific surgical strategies and implant selection were important for the patient\u2019s functional recovery.The non-displaced femoral neck fractures are relatively stable but carry a risk of secondary displacement. Surgical treatments may be a better option because the implants provide additional stability and allow early exercise and ambulation. Hemiarthroplasty is also an alternative for old patients with higher risks of displacement and avascular necrosis of the femoral head. The femoral neck fracture is one of the most common fractures in the elderly, which seriously threatens and affects the patients\u2019 health and quality of life , 2. CurrThe femoral neck fractures are classified into Garden type I and II (NDFNFs) and Garden type III and IV Table . The NDFThis paper aimed to review the databases on NDFNFs in elderly patients. We also discussed their surgical and non-surgical treatments and selection of implants.We searched PubMed, ScienceDirect, Scopus, and Embase by using the terms \u201cfemoral neck fractures\u201d, \u201cnondisplaced\u201d, \u201cundisplaced\u201d, \u201cnon-displaced\u201d, \u201cun-displaced\u201d, \u201caged\u201d, \u201cthe elderly\u201d, and \u201cgeriatric\u201d. All relevant titles and abstracts were reviewed. We read the full articles in the scope of the stated purposes, and the information supporting this review article was extracted. 
The flow chart depicting the strategy for selecting the relevant research is presented in Fig.\u00a0Our inclusion criteria included: (1) clinical research; (2) patients with femoral neck fracture; (3) patients aged above 65\u00a0years old; (4) type I or II femoral neck fractures against the Garden classification; (5) clinical interventions including conservative treatments, internal fixation, and joint replacement. The exclusion criteria were: (1) duplicate publications; (2) patients with a failed internal fixation or revision operation after the initial joint replacement; (3) preoperative heart failure or mental disorder; (4) old or pathological fractures and rheumatoid arthritis; (5) the inconsistent outcome indicators.A total of 149 articles were included in the retrieval process, and the records after duplicate removal were 87. Furthermore, after excluding the case reports, case series, review articles, conference abstracts, expert opinions and incomplete data sets, a total of 23 full-text articles were ultimately included in the final review.et al [et al [Both Garden type I and II fractures are NDFNFs. Garden type I fractures can be treated conservatively, but misdiagnosis and osteoporosis may lead to secondary displacement . The comet al revealedl [et al also revl [et al \u201312.etc, which increase the total cost, morbidity, and mortality [Compared to the surgical treatments, conservative treatments are associated with much more mobility restrictions, bed-related complications, family care, ortality , 9, 13.Surgical treatments mainly include internal fixation and joint replacement , which are systematically reviewed and summarized in the following sections.et al [Kim et al studied et al [et al [Many implants are used for treating NDFNFs, including cannulated screws, cancellous bone screws, dynamic hip screws, targon system, emerging dynamic locking plates, and full-thread headless compression screws. The posterior retroversion angle is closely related to the prognosis of the patients with NDFNFs. The retroversion of femoral head and frequently associated comminution of the posterolateral wall compromise blood supply of the femoral head , 16. Palet al reviewedl [et al also suget al [Cannulated screw is one of the ideal choices for the fixation of NDFNFs in elderly. Chen et al treated et al , 21. Moret al .et al [et al [et al [Fixation with the cancellous bone screw is also one of safe and effective surgical approaches. Lee et al conductel [et al found thl [et al also shoet al [et al [Dynamic hip screws effectively treat NDFNFs with few complications and a low re-operation rate. Compared to the cannulated screws, patients treated with dynamic hip screws had a higher Harris score one year after surgery and a lower re-operation rate , 26. Watet al conductel [et al studied et al [Recent studies have revealed that those systems were associated with a low incidence of postoperative complications in NDFNFs . Compareet al used emeet al [Most patients experienced the postoperative femoral neck shortening, resulting in the varus hip and poor gait function. Chiang et al treated et al [In NDFNFs, nonunion and late collapse of the femoral head are the two major complications. Patients aged 69 or younger have a high risk of ONFH after percutaneous screw fixation . Moreoveet al examinedIn recent years, hemiarthroplasty is increasingly performed in elderly NDFNFs. 
According to the statistics of the Norwegian fractures data center, NDFNF patients treated with hemiarthroplasty rose from 2.1% in 2005 to 9.7% in 2014 .et al [et al [vs. 21.4%). Only the surgical methods had a significant impact on the occurrence of re-operation as shown by a Cox proportional hazard regression model. There were no significant differences between the two treatments in survival time and mean Harris score 5\u00a0years after surgery. However, hemiarthroplasty resulted in a significantly higher excellent-to-good rate. In addition, given nearly one-third of the elderly patients have combined dementia, following the postoperative rehabilitation instructions is difficult. Therefore, internal fixation may be a better choice in these patients [Although different types of internal fixation have been widely acknowledged in the treatment of NDFNFs, Lin et al found thl [et al conductepatients .et al [Regarding this, Olofsson et al conducteet al \u201344. Curr et al [vs. $25,356) [ et al [Currently, the optimal treatment for NDFNFs is still controversial because few RCTs with a high level of evidence were conducted. The major considerations include postoperative function, complications, reoperation rate, and total cost , 45\u201347. et al indicate$25,356) . However$25,356) \u201350. Dola [ et al conducte [ et al \u201356. 3D pThe NDFNFs are relatively stable but carry a risk of secondary displacement. Surgical treatments may be a better option because the implants provide additional stability and allow early exercise and ambulation. Hemiarthroplasty is an alternative treatment for elderly patients with higher risks of displacement and avascular necrosis of femoral head."} {"text": "Cancers, \u201cThe Biological and Clinical Aspects of Merkel Cell Carcinoma\u201d, walks the avid reader through the interesting and sometimes even mysterious facets of Merkel cell carcinoma (MCC), starting at its carcinogenesis to also cover innovative treatment options.The Special Issue in The groundworks for MCC and its causative agent Merkel cell polyomavirus (MCPyV) are laid in an exhaustive review by Pietropaolo et al. . They prTo date, the effective treatment options for advanced MCC are still limited. In this issue, an interesting article by Sarma et al. tested tTurning to clinical patient care, Sahi et al. , portrayRare cancers pose a major challenge to the medical and scientific community . Due to Note added in Proof:It is often thought that the rarity of a specific cancer, such as MCC, causes patients to being \u201cunder-diagnosed\u201d and to receive \u201cunder-treatment\u201d, which is both unfortunately true. Published data on rare cancer are frequently based on a few patient cases or minor series with inadequate reporting , resultsAfter preparation of this editorial, two additional manuscripts were accepted. Horny et al. revealed"} {"text": "Marital Stability and Quality of Couple Relationships after Acquired Brain Injury: A Two-Year Follow-Up Clinical Study\u2019. The authors identify several demographic and clinical factors that are related to the quality of couple relationships following acquired brain injury (ABI). More specifically, they employ regression analyses to determine which factors predict (1) the functioning of couples as measured by the Dyadic Adjustment Scale and (2)ale (DAS ). 
The article's focus is highly relevant given the often drastic effects of ABI on romantic relationships. However, given the relatively small sample size (n = 35) and the rather large number of predictors (11) in the regression analyses, the study's analyses may have been underpowered. A calculation using the G*Power software, with α set at 0.05, showed that a sample size of 123 would be needed to detect medium effects (f² = 0.15). Consequently, with the current approach, one might incorrectly assume that (some of the) factors included in the study are unrelated to relationship quality, while in fact they may have failed to reach statistical significance only because of the lack of power. Nevertheless, although the sample size was small and more research is needed, their results indicated that social cognition impairments may indeed play an important role in couple relationships following ABI. In addition, it is conceivable that social cognition impairments may function as a confounding variable in some of the relations investigated by Laratta et al. Although it was probably not an option for Laratta et al., given ..., as they ... and ... . In conclusion, Laratta et al. address"} {"text": "The system of partial differential equations governing the unsteady hydromagnetic boundary-layer flow along an electrically conducting cone embedded in a porous medium in the presence of thermal buoyancy, magnetic field, and heat source and sink effects is formulated. These equations are solved numerically by using an implicit finite-difference method. The effects of the various parameters, namely the source/sink parameter, porous medium parameter, Prandtl number, mixed convection parameter, and magnetic Prandtl number, on the velocity and temperature profiles and the transverse magnetic field are predicted. The effects of the heat source and sink parameter on the time-mean value as well as on the transient skin friction, heat transfer, and current density rate are delineated in each plot. The extensive results reveal the existence of periodicity and show that periodicity becomes more distinctive for source and sink in the case of the electrically conducting cone. As the source and sink contrast increases, the periodic convective motion is invigorated, as reflected in the amplitude and phase angle in each plot. The dimensionless form of the set of partial differential equations is transformed into primitive form by using the primitive variable formulation and then solved numerically by a finite-difference scheme that is frequently reported in the literature. Physical interpretations of the overall flow and heat transfer, along with the current density, are highlighted in detail in the results and discussion section. The main novelty of the obtained numerical results is that we first obtain numerical results for the steady part and then use them in the unsteady part to obtain the transient skin friction, rate of heat transfer, and current density. The intensity of the velocity profile increases for increasing values of the porosity parameter Ω, while the temperature and mass concentration intensities are reduced due to heat source effects. Several studies have been conducted on heat transport through a porous medium due to its industrial applications and numerous technical processes. Flows through porous media are of particular interest because they are so common in nature and daily life.
Water is saturated in porous materials such as sand and underground crushed rocks, which allows the fluid to move and be transported through the material under the influence of local pressure gradients. This fascination stems from the various practical applications that can be modeled or approximated, through porous medium such as packed sphere beds, high-performance building insulation, grain storage, chemical catalytic reactors, sensible heat storage beds and heat exchange between soil and atmosphere. Moreover, soil salt leaching, solar power collectors, electrochemical processes, filtering devices, insulation of nuclear reactors, regenerative heat exchangers, geothermal energy systems and many other areas. Convective heat transfer in porous medium is of much interest of research community engaged in different applied and engineering disciplines.et al. [Kamel proposedet al. investiget al. . Cortellet al. encounteet al. .et al. [et al. [et al. [et al. [et al. [Sharma and Singh predicteet al. . Pal andet al. . Analyze [et al. conducte [et al. . Osman e [et al. computed [et al. discusse [et al. .et al. [et al. [The combined effects of a magnetic field and convective diffusion of species through a non-Darcy porous medium over a vertical non-linear stretching sheet in the existence of Ohmic dissipation and a non-uniform heat source/sink have been depicted by Pal and Mondal . For traet al. . The com [et al. . The mag [et al. . Pal and [et al. discusseet al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [Chaudhary et al. briefly et al. examined [et al. . They no [et al. . Hayat e [et al. investig [et al. . Suneeth [et al. gave an [et al. examined [et al. investig [et al. . By exer [et al. . Unstead [et al. . Later, [et al. \u201337 discu [et al. \u201341.In keeping view the above literature, we interact with the phenomenon for unsteady mixed convective flow across the surface of electrically conducting cone embedded in porous medium in the presence of heat source and sink. The time dependent dimensionless equations which illustrate the hydromagnetic laminar flow along the surface of sphere are formulated. Further, we consider the motion along sphere within plume region-I with main stream velocity. By using Stokes condition we separate steady and unsteady part from the modeled partial differential equation. Later, the unsteady part is further splited in to real and imaginary part. First we secure numerical solutions for steady part and then used in unsteady part to calculate periodic skin friction, heat transfer, and current density along the surface of the cone embedded in porous medium. We also calculate fluid velocity, magnetic field and temperature profiles, to ensure the correctness of numerical results by satisfying the prescribed boundary conditions.x is measured along the surface and y is measured normally on the cone surface. The velocities u and v along the -direction, Hx represent the component of magnetic field at the surface of cone, Hy component is taking normal to the surface of cone and external fluid velocity of the cone is U. Moreover, magnetic field intensity proceeds exact at the surface of the cone. The form of governing dimensioned continuity, momentum, magnetic, and energy equations, as well as the boundary conditions are given as belowConsider a two-dimensional periodic mixed convection boundary-layer fluid flow along the surface of thermally and electrically conducting cone embedded in porous medium. 
The scheme of coordinates is shown in The dimensioned form of boundary conditions areThe following are the dimensionless system of coupled nonlinear partial differential equations:The dimensionless boundary conditions are:\u03bb, represents porosity number and mixed convection dimensionless number, \u03b3 is Prandtl magnetic number, Prandtl parameter is Pr, Ho is the exact strength of the magnetic field at the surface and \u03b4 represents the heat generation absorption parameter.The third and fourth term on right hand side of Eq which ar\u03c9 is takes the term U(\u03c4) = 1+\u03f5ei\u03c9\u03c4. The velocity of the fluid, magnetic field and temperature components u, v, hx, hy and \u03b8 are defined as the sum of steady and unsteady equations.The stream velocity under |\u03b5|<<1, where \u03b5 presents a small magnitude periodic component and the frequency parameter O(\u03b50) and O(\u03b5ei\u03c9\u03c4), using these orders from Eqs and and19) au, v, \u03b8, hx and hy respectively, where A, B, C and D are coefficient matrices for above mentioned unknown variables. Later, the coefficient matrices are solved by using Gaussian elimination technique. Convergence of the solutions was declared at each step by following.Here, \u0393 represent the field variable n represent the nth iteration.Where \u03c4s, heat transfer \u03c4t and current density \u03c4m along the electrically conducting cone, where As, At and Am are amplitudes while, \u03b1s, \u03b1t and \u03b1m are phase angles .\u03c4w= sign indicates source and (\u2013) sign indicates the sink), respectively. In these figures, it is shown that the velocity component U and the temperature variable (\u03b8) is increased as the values of the source parameter (+\u03b4) is increased and these components are reduced for the sink parameter (\u2212\u03b4). It is also shown the transverse component of magnetic field (\u03d5) is reduced for the source parameter (+\u03b4) but on the other hand it is slightly increased for the increasing values of sink parameter (\u2212\u03b4). It is pertinent to point out that the solid lines in each plot is seen the effects of source parameter while the dashed lines highlights the effects of sink parameter. The phenomena in U, temperature variable (\u03b8) and transverse magnetic field variable (\u03d5) in the simultaneous presence of source and sink parameter (\u03b4 = \u00b10.6). It is concluded that the increase in Prandtl number Pr is reduced the velocity component U and the temperature variable (\u03b8) in both cases that is source and sink while the transverse component of magnetic (\u03d5) is increased in both source and sink regions. The effects of the mixed convection parameter \u03bb on the velocity component, temperature variable and transverse magnetic field are reflected in \u03b4 = \u00b10.7 respectively. In these plots, it can be seen for increasing values of mixed convection parameter \u03bb the velocity component U is increased and variable temperature and transverse magnetic field are reduced with the same trend simultaneously both for source and sink parameter (\u03b4 = \u00b10.7). U, temperature variable (\u03b8) and transverse magnetic field (\u03d5) for \u03b4 = \u00b10.9 respectively. 
It is observed that imposition of porous medium parameter increases the velocity component and decreases the temperature variable (\u03b8) for source parameter and the transverse magnetic field is reduced for sink parameter (\u2212\u03b4) and no changes are observed for the case of source parameter (+\u03b4).\u03b4 on the periodical skin friction, heat transfer and current density is presented in Qo is increase and kinetic motion of the fluid particle is decreased. In these plots it is very clear that amplitude of periodic skin friction is increased for increasing values of source parameter \u03b4 and is reduced in the sink ranges. In the case of periodical heat transfer this trend is reversed, the amplitude of periodic heat transfer is increased in the sink range and reduced in the source range. It is important to point out that the periodical current density is uniform in both rages. The behavior of periodical skin friction, heat transfer and current density is highlighted in \u03b3 in source and sink range respectively. From these figures it is depicted that the amplitude of periodic skin friction is increased for source range and is reduced for sink range, but the periodic heat transfer is increased in sink region and is uniformly distributed in each range. In \u03b3 and particularly uniformly distributed in source and sink range. The physical reasoning depicted in \u03b3 the magnetic diffusion rate is dominant over viscous diffusion rate. In \u03bb the amount of periodic skin friction is increased in the heat source range while the reverse trend is noted in the case of periodic heat transfer, mean to say the heat sink is significantly dominated. It is important to define that the mixed convection parameter is the ratio of the buoyancy force to flow shear force. The mechanism predicted in \u03bb that buoyancy force is dominant over flow hear force.The influence of the source and sink parameter \u00b1In this paper, the occurrence of periodic/oscillatory convection flow along the surface of electrically conducting cone has been investigated numerically. For this purpose, comprehensive numerical solutions have been obtained to delineate the effect of different parameters involved in the flow model on velocity, temperature field and transverse magnetic field along with periodic skin friction, heat transfer and current density. Characteristics of the transient periodic/oscillatory convection appear largely in skin friction and heat transfer and in some cases of periodic current density. The main objective of this study is to investigate the impact of heat source and sink parameter periodic/oscillatory skin friction, heat transfer and current density.In this study we conclude that intensity of heat and fluid flow is effective in the case of heat source and the heat sink effects are dominated by heat source. Moreover, heat source and sink are not prominent in the case of current density. It is noted that due to domination of the buoyancy force which acts like a pressure gradient the velocity profile, periodic skin friction is increased on the other hand temperature, transverse magnetic field, periodic heat transfer and current density are reduced. Due to the increase of porous medium parameter the avoid space accessible from the surface is increased thus a very significant reduction in periodic skin friction and heat transfer is noted. 
It is claimed that because of the prevailing attitude of the magnetic diffusion over viscous diffusion the velocity profile, temperature distribution, transverse magnetic field, periodic skin friction, periodic heat transfer, periodic current density are appreciably affected."} {"text": "Persisting evidence suggests significant socioeconomic and sociodemographic inequalities in access to medical treatment in the UK. Consequently, a systematic review was undertaken to examine these access inequalities in relation to hip replacement surgery. Database searches were performed using MEDLINE, PubMed and Web of Science. Studies with a focus on surgical need, access, provision and outcome were of interest. Inequalities were explored in the context of sociodemographic characteristics, socioeconomic status (SES), geographical location and hospital-related variables. Only studies in the context of the UK were included. Screening of search and extraction of data were performed and 482 articles were identified in the database search, of which 16 were eligible. Eligible studies consisted of eight cross-sectional studies, seven ecological studies and one longitudinal study. Although socioeconomic inequality has somewhat decreased, lower SES patients and ethnic minority patients demonstrate increased surgical needs, reduced access and poor outcomes. Lower SES and Black minority patients were younger and had more comorbidities. Surgical need increased with age. Women had greater surgical need and provision than men. Geographical inequality had reduced in Scotland, but a north-south divide persists in England. Rural areas received greater provision relative to need, despite increased travel for care. In all, access inequalities remain widespread and policy change driven by research is needed. A key tenet of the United Kingdom\u2019s National Health Service (NHS) is that access to healthcare should be fair and equal for all . Whilst A systematic search of published literature was performed on 4th February 2021. The search strategy followed the Population, Phenomena of Interest and Context (PICo) framework , such as shoulder arthroplasty, and studies relating to patients\u2019 postoperative return to work (n\u2009=\u20096). The 16 remaining studies were included in this systematic review.Over the three databases searched , 482 articles were identified, of which 382 were removed in the deduplication process. With duplicates removed, 120 articles were screened against the inclusion criteria. Sixty-six papers published before December 2005 and those without UK-based cohorts were removed. The remaining 54 articles were screened against the exclusion criteria, using full-text copies, resulting in the removal of a further 38 papers. Reasons for removal are shown Figure The 16 studies included in this review are of varying characteristics and demographics. Table The risk of bias checklist for assessing the quality of the included studies is shown in Table\u00a0p-value from p\u2009<\u20090.001 to p\u2009=\u20090.02. Neuburger et al. [Table\u00a0r et al. found thr et al. found thp\u2009<\u20090.001), socioeconomic inequality did not change significantly. Judge et al. [Table\u00a0e et al. reportedp\u2009<\u20090.001). Despite this, by 2007, they reported almost uniformly distributed waiting times across the deprivation quintiles. Cooper et al. [In terms of waiting times, Laudicella et al. showed tr et al. was the r et al. also repr et al. reportedr et al. 
reportedTable\u00a0p\u2009<\u20090.001; SF-36 mental: p\u2009<\u20090.001). Neuburger et al. [p\u2009=\u20090.04). However, no evidence was found between SES and postoperative morbidity. Clement et al. [p\u2009<\u20090.001). No significant difference between patients\u2019 SES and BMI was found by Clement et al. [p\u2009=\u20090.05 for no association hypothesis) and Jenkins et al. [p\u2009=\u20090.68). Jenkins et al. [Table\u00a0r et al. identifir et al. reportedr et al. reportedr et al. reportedr et al. , who shot et al. also fout et al. , and a wider socioeconomic gap in provision, measured using SIMD. While provision inequity between socioeconomic groups is still apparent in the UK, evidence shows the gap has fallen over time in England [Scottish geographical inequality in access to hip replacement surgery declined from 1998 to 2008 , however England , 31, 38. England , researcp\u2009<\u20090.001) [p-value was <\u20090.001. One included study [Increasing numbers of CCGs in the UK have begun implementing rationing measures for smokers and obese patients . Concern<\u20090.001) . Despite<\u20090.001) , 39 inve<\u20090.001) , are not<\u20090.001) . Neverth<\u20090.001) from the<\u20090.001) investig<\u20090.001) ; however<\u20090.001) . Of the <\u20090.001) , 33, 39 <\u20090.001) , 29 failed study used theed study change. Only one study had Welsh data and no studies had Northern Irish data. Excluding large samples of the UK population introduces selection bias, as the missing population data may have changed the pattern of inequalities described. Consequently, a narrower approach individually focussing on England or Scotland may have been more suitable. While a lack of research may be responsible for the lack of Welsh and Northern Irish data, it is also possible that geographical search criteria may have been imprecise. A custom UK geographical filter was used for the MEDLINE database search ; it was This review summarises the available literature on access inequalities in hip replacement surgery for the UK. While the heterogeneity of study outcomes and methodology made drawing conclusive evidence challenging, it is clear that access inequality is a major issue in the UK. Potential inequalities in pre-surgical patient consultation were not explored in the included studies. Patient diagnosis and referral to surgery may be impacted by implicit biases present in practitioners, such as an ethnic bias in pain evaluation for Black patients . Despite"} {"text": "Vacuum-assisted breast biopsy (VABB) is a minimally invasive procedure and has become an important treatment method. Although VABB is a minimally invasive procedure, it might cause complications, particularly those associated with blood vessels. In this article, we aimed to describe a 35-year-old female who experienced pseudoaneurysm post-VABB and was successfully treated with embolization. She presented to the hospital with a suspected left breast tumor. The pathology report after biopsy confirmed fibroadenoma, and the patient underwent VABB to remove the tumor. One hour after VABB, the patient described pain and swelling at the location of the removed tumor. Breast ultrasound revealed a hematoma and pseudoaneurysm. The bleeding did not stop following the application of manual compression. Breast hemorrhage was controlled by endovascular embolization. Pseudoaneurysm is an uncommon complication of VABB, and embolization represents an effective method for the management of breast pseudoaneurysm. 
The introduction of breast cancer screening programs has resulted in the increased detection of benign breast tumors . The defA 35-year-old female underwent a medical check-up due to a painless mass in the left breast, which appeared 2 years prior, without thickening or dimpling of the breast skin, pulling in of the nipple, or bloody nipple discharge. Breast ultrasound revealed a mass at the 3 o\u00b4clock position in the left breast, measuring 20mm x 12mm . This maet al. [et al. [et al. [The VABB technique, which was developed in late 1995, uses ultrasound guidance to remove samples of breast tissue through a single, small skin incision . Becauseet al. reported [et al. reported [et al. . Followi [et al. . Multipl [et al. .Patients with more than two nodules or nodules with a maximum diameter of 25 mm or larger were found to be associated with a significantly increased risk of hematoma after VABB . HematomThis case represents an incident of failure to stop VABB-induced bleeding using focused compression. Although VABB is a safe procedure, associated with a low rate of typically mild to moderate complications, breast vessel injury is always a risk. Patients who present with increasing pain and rapid growth at the tumor site after biopsy must be evaluated for possible pseudoaneurysm. This patient received a prompt diagnosis and was treated successfully with intravascular embolization."} {"text": "Du and colleagues have conFirst, Du et al note thaall individuals within the EU regardless of citizenship, including refugees and tourists). In other words, the GDPR does not rule out data transfer in principle, but sets high standards [Du et al are parttandards . This betandards . ArticleSecond, Du et al offer a More importantly, even if all legal barriers are addressed when adopting such an interface on a global scale, making a contact tracing app mandatory is too bold a proposal. This is not (only) a legal concern, but (even more so) an ethical concern. Morley and colleagues have synSo, I agree with Du et al on the n"} {"text": "Prostate cancer is the second most common cancer and the fifth leading cause of cancer-related death among men worldwide . In receChang et al. reportedWu et al. evaluateWu et al. comparedChoi et al. evaluateMilonas et al. assessedKimura et al. performeAbufaraj et al. performeBRCA1, BRCA2, and ATM germline pathogenic variants are at an increased risk of aggressive disease. They concluded that these rare genetic variants could be incorporated into risk prediction models to improve their precision in identifying males at a higher risk of aggressive prostate cancer and those with newly diagnosed prostate cancer who require urgent treatment.Nguyen-Dumont et al. performeAndo et al. reportedIn conclusion, this Special Issue provides updated information on the prognostic factors for non-metastatic prostate cancer after curative therapy, the functional outcomes of local therapy for radiation-recurrent disease, the impact of rare genetic variants on aggressive prostate cancer, and prognostic factors in patients with castration-resistant prostate cancer."} {"text": "Health Economics Review. 2020; 10 (1), 1\u20139. It explains the effects of health expenditure on infant mortality in sub-Saharan Africa using a panel data analysis (i.e. random effects) over the year 2000\u20132015 extracted from the World Bank Development Indicators. 
The paper is well written and deserve careful attention.This commentary assesses critically the published article in the The main reasons for inaccurate estimates observed in this paper are due to endogeneity issue with random effects panel estimators. It occurs when two or more variables simultaneously affect/cause each other. In this paper, the presence of endogeneity bias and its omitted variable bias leads to inaccurate estimates and conclusion. Random effects model require strict exogeneity of regressors. Moreover, frequentist/classic estimation (i.e. random effects) relies on sampling size and likelihood of the data in a specified model without considering other kinds of uncertainty.This comment argues future studies on health expenditures versus health outcomes to use either dynamic panel to control endogeneity issues among health , GDP per capita, education and health expenditures variables or adopting Bayesian framework to adjust uncertainty within a range of probability distribution. There is a growing concern of the importance of population health and its contribution to the national economy, but the issue of infant mortality remains a major concern in most of the developing economies including Sub-Sahara Africa , 2. One Health Economics Review, 10 (5) by Kiross et al. [P-values arising from hypothesis tests with nos et al. , among oview, 10 by Kiross et al. represens et al. estimatis et al. findingsConversely, Kiross et al. used heaThe\u00a0endogeneity bias observed in Kiross et al. paper caIt is also known that countries with similar level of development like sub-Saharan Africa, public health spending differs significantly in health outcomes measures. For example, between 2010 and 2014, the average public health expenditure as percentage of GDP in Tanzania and Zambia was 2.5% and 2.5% respectively . SimilarTo address the aforementioned weakness encountered in Kiross et al. , the useAs a take home message for the readers\u00a0and reviewers is that, random effects models require strict exogeneity of regressors and in the presence of endogeneity of variables, it leads to inaccurate estimates and misleading conclusion. Further, the Bayesian framework allows the\u00a0authors to make use of prior knowledge or beliefs about the specific question being studied, as well as the new evidence collected specifically for the study . It alsoFuture studies examining the effects of health expenditures (i.e. public or private) on health outcomes should either use dynamic system Generalized Methods of Moments (GMM) to control endogeneity and its omitted variables bias or adopting a Bayesian framework that provides a clear picture of parameter uncertainty adjusting for confounding, endogeneity and measurement error within a range of probability distribution ."} {"text": "N\u2009=\u2009560), but there was high heterogeneity (96%). Meta\u2010analysis of two studies found a statistically significant increased number of stools per day in the probiotic group compared to the placebo group at 1 month of age , with moderate heterogeneity (69%). Meta\u2010analysis of two studies showed no statistical difference in body weight between the two groups with minimal heterogeneity 23%. Probiotic therapy appears promising for infant regurgitation with some evidence of benefit, but most studies are small and there was relatively high heterogeneity. The use of probiotics could potentially be a noninvasive, safe, cost effective, and preventative positive health strategy for both women and their babies. 
Further robust, well controlled RCTs examining the effect of probiotics for infant regurgitation are warranted.Infant regurgitation is common during infancy and can cause substantial parental distress. Regurgitation can lead to parental perception that their infant is in pain. Parents often present in general practitioner surgeries, community baby clinics and accident and emergency departments which can lead to financial burden on parents and the health care system. Probiotics are increasingly reported to have therapeutic effects for preventing and treating infant regurgitation. The objective of this systematic review and meta\u2010analysis was to evaluate the efficacy of probiotic supplementation for the prevention and treatment of infant regurgitation. Literature searches were conducted using MEDLINE, CINAHL, and the Cochrane Central Register of Controlled trials. Only randomised controlled trials (RCTs) were included. A meta\u2010analysis was performed using the Cochrane Collaboration methodology where possible. Six RCTs examined the prevention or treatment with probiotics on infant regurgitation. A meta\u2010analysis of three studies showed a statistically significant reduction in regurgitation episodes for the probiotic group compared to the placebo group (mean difference [MD]: \u22121.79 episodes/day: 95% confidence interval [CI]: \u22123.30 to \u22120.27, Infant regurgitation is common during infancy and can cause substantial parental distress.The currently available evidence does not support or refute the efficacy of probiotics for the prevention and treatment of infant regurgitation but data from the individual trials and subset meta\u2010analysis of studies are promising.There are no indications from the available data that probiotics have any adverse effects.Further well\u2010controlled RCTs are warranted to investigate the efficacy of various strains and species, dosage, and combinations of probiotics. The secondary outcomes were the effect of probiotics on gastric emptying time, number of stools, growth rate , admissions to hospital related to infant regurgitation, loss of parent working days related to infant regurgitation, number of admissions of mother to hospital due to anxiety/depression, number of visits to any health professional, and adverse events related to probiotic supplementation (mother and infant).2.2https://www.who.int/clinical-trials-registry-platform/the-ictrp-search-portal; US National Library of Medicine: clinicaltrials.gov; ISRCTN Registry https://www.isrctn.com/). All potentially relevant titles and abstracts were identified and retrieved during the search. Independent hand searches were undertaken, and the bibliographies of each article were assessed for additional relevant titles.Eligible studies were sought from the Cochrane Central Register of Controlled Trials in the Cochrane Library; MEDLINE via PubMed (1966 to 9 April 2021); Embase (1980 to 9 April 2021); and CINAHL (1982 to 9 April 2021) using the following subject MeSH headings and text word terms: \u2018neonate(s)\u2019, \u2018newborn(s)\u2019, \u2018infant(s)\u2019, AND \u2018regurgitation\u2019 OR \u2018infant regurgitation\u2019 OR \u2018infantile reflux\u2019 OR \u2018reflux\u2019 AND \u2018probiotic\u2019. Language restrictions were not applied. 
We searched clinical trials registries for ongoing or recently completed trials : 2.3We included randomised controlled trials (RCTs) that compared probiotics (any dose or composition) to placebo, control, or other forms of treatment in mothers during the antenatal period, and term and preterm infants in the postnatal period (from birth and up to 12 months) for the prevention (mother/infant) and treatment (infant) of infant regurgitation. Articles in any language were considered if there was an abstract in English.We used the data extraction form available within Review Manager software (RevMan) to extract data on the participants, interventions and control(s), and outcomes of each included trial. Two review authors (JF and KP) screened the title and abstract of all identified studies. The titles were also checked by third author (SF). We reassessed the full text of any potentially eligible reports and excluded the studies that did not meet all the inclusion criteria. Two review authors (JF and KP) independently extracted data from each study without blinding to authorship or journal publication. In case of any disagreement, the three review authors resolved them by discussion until reaching a consensus. One review author (JF) entered data into RevMan, and two review authors (KP and SF) verified them were used to assess the methodological quality of included trial were enroled across the six studies.L. paracasei DSM 24733, L. plantarum DSM 24730, L. acidophilus DSM 24735, and L. delbrueckii subsp. bulgaricus DSM 24734, three strains of bifidobacteria , and one strain of Streptococcus thermophilus DSM 24731. The probiotics in this study were given to mothers 4 weeks before the expected delivery date (36th week of pregnancy) until 4 weeks after delivery from baseline to Day 28 of the study period, but did not provide any summary data. At the end of the second week the difference with the placebo group was significant (p\u2009=\u20090.05) and there was a trend to a significant result reported at the end of the third week of treatment (p\u2009=\u20090.06). Indrio et al. (p\u2009<\u20090.001). Baldassarre et al. .We were unable to include the remaining three studies in the meta\u2010analysis due to the method of reporting. Garofoli et al. reportedo et al. found ine et al. reportedN\u2009=\u2009468, p\u2009=\u20090.00001).Indrio et al. was the 3.2p\u2009<\u20090.001). Indrio et al. . However, the I2 statistic of equal to 69% indicates moderate heterogeneity . Garofoli et al. .Only one study reported no. of stools/day after 3 months' administration of the probiotic/placebo. Indrio et al. and foun3.4I2\u2009=\u200923%, N\u2009=\u2009112, p\u2009=\u20090.28] .Four studies reported on total body weight after 1 months' administration of the probiotic/placebo. Two studies were able to be included in the meta\u2010analysis and control group , p\u2009=\u20090.087 at 4 weeks after commencing treatment. However, when calculating summary data, we found a significant different between the two groups and this was checked and supported by our statistician.Garofoli et al. reportedF\u2009=\u2009118.95, p\u2009<\u20090.001; treatment effect: F\u2009=\u20090.01, p\u2009=\u20090.92; interaction effect: F\u2009=\u20091.43, p\u2009=\u20090.24).Baldassarre et al. reportedSD: 1.21) and placebo groups , p\u2009=\u20090.108 at 4 weeks.Garofoli et al. reported3.5N\u2009=\u2009468, p\u2009=\u20090.00001). Indrio et al. . Indrio et al. 
.Only one study , however, we repeated this analysis and found a significant difference between probiotics and the control group (p\u2009=\u20090.0009). Garofoli et al. (p\u2009=\u20090.087), however, we calculated the summary data and found there to be a significant difference between the probiotic and control groups (p\u2009=\u20090.003). All studies reported on adverse events.The remaining four studies were rated unclear risk of reporting bias. We were not able to locate a trial registration record for Indrio et al. ,\u00a02008, ao et al. reportedi et al. reported3.8.7We considered the five studies that were supported by the manufacturer of the intervention to be at high risk of bias (Baldassarre et al.,\u00a04To our knowledge, we report the first systematic review to investigate the efficacy of probiotic supplementation for the prevention and treatment of infant regurgitation. It involved a rigorous review process with adherence to internationally recognised Cochrane and Preferred Reporting Items for Systematic Reviews and Meta\u2010analyses guidelines. We found six RCTs on the use of probiotics for the prevention or treatment of infant regurgitation for inclusion in the systematic review.L. reuteri DSM 17938 (Indrio et al.,\u00a0L. reuteri ATCC 55730 (Indrio et al.,\u00a0Meta\u2010analysis of three of the six trials showed a statistically significant reduction in regurgitation in the infants receiving L. reuteri is also found to exhibit antimicrobial activity, producing reuterin, a broad\u2010spectrum antibacterial substance (Axelsson et al.,\u00a0L. reuteri strains act through diverse mechanisms.The remaining individual studies also reported a statistical reduction in episodes of regurgitation with the use of probiotics (Garofoli et al.,\u00a0L. reuteri ATCC 55730 and L. reuteri DSM 17938 found a statistically significant increase in the number of stool evacuations and it has reported that probiotics could play a crucial role in the modulation of intestinal inflammation that may contribute to infant regurgitation (Indrio et al.,\u00a0Individual studies found that gastric emptying was statistically significantly faster in the infants receiving probiotics compared with placebo. It has been reported that probiotics improve gut motility and gastric emptying time and thus reduces gastric distension and visceral pain (Garofoli et al.,\u00a0The impact of infant immaturity, disturbance of the microbiome through caesarean section and maternal mental health has been recently considered. A mixed methods study examined greater than 1 million admissions of infants in NSW, Australia to hospitals in the first year following birth (Dahlen et al.,\u00a0None of the included studies in this systematic review reported on admissions to hospital due to maternal anxiety/depression. The study by Dahlen et al. also fouL. reuteri DSM 17938 or its original strain, L. reuteri ATCC 55730 (Garofoli et al.,\u00a0L. reuteri administration: increased gastric emptying and reduction in crying time, regurgitation episodes, constipation, and fasting antral area. The authors based their findings on changes in intestinal microbiota, improved mucosal barrier, anti\u2010inflammation, improved motility of the whole intestine and neuroimmune interaction. However, other strains of probiotics could also perform such actions.Of the six studies included in the review, five administered 4.1We thoroughly reviewed the studies for results and assessed their risks of bias. 
There was an overall low risk of bias in the trials for random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment and incomplete outcome data. There was an overall high risk or unclear risk of selective reporting bias and five of the six trials reported receiving financial support from the manufacturer or the makers of the probiotic used Figure\u00a0.5The currently available evidence does not support or refute the efficacy of probiotics for the prevention and treatment of infant regurgitation. However, data from the individual trials and subset meta\u2010analysis of studies measuring the effect of probiotics are promising. There are no indications from the available data that probiotics have any adverse effects. The use of probiotics could potentially be a noninvasive, safe, cost effective and preventative positive health management strategy for both women and their babies. Further well\u2010controlled RCTs are warranted to investigate the efficacy of various strains and species, dosage, and combinations of probiotics to determine the most effective for preventing and treating infant regurgitation. In addition, further research is required to determine the effectiveness of administering probiotics to women antenatally and/or postnatally to prevent regurgitation in breastfed infants and their effect on maternal mental health.This project was supported by the School of Nursing and Midwifery, Western Sydney University.The authors declare no conflict of interests.Jann P. Foster, Charlene Thornton, and Hannah G. Dahlen conceived the research project and coordinated the contributors. Jann P. Foster and Kim Psaila developed the search strategy and searched the databases. Jann P. Foster, Kim Psaila, and Hannah G. Dahlen participated in the study selection. Jann P. Foster, Kim Psaila, and Hannah G. Dahlen performed the data extraction. Jann P. Foster, Kim Psaila, Hannah G. Dahlen, and Sabina Fijan interpreted the findings. Jann P. Foster, Hannah G. Dahlen, Kim Psaila, and Sabina Fijan wrote the first draft of the results and Nadia Badawi, Virginia Schmied, Charlene Thornton, and Caroline Smith revised subsequent drafts. Jann P. Foster, Hannah G. Dahlen, Kim Psaila, and Sabina Fijan prepared the manuscript and revised every version of the manuscript. Nadia Badawi, Virginia Schmied, Charlene Thornton, and Caroline Smith contributed to the revision of the manuscript."} {"text": "It is now more than 90 years since Irving Langmuir used the technical term \u201cplasma\u201d to describe an ionized gas . Plasma International Journal of Molecular Sciences, includes 14 original articles and 3 reviews providing new insights into the application and molecular mechanisms of plasma biology.On the basis of the background outlined above, this Special Issue entitled, \u201cPlasma Biology\u201d, of The solutions treated with plasma are referred to as \u201cplasma-activated medium (PAM)\u201d , plasma-N-acetylcysteine was found to prevent PAM/TRAIL-induced cancer cell apoptosis, suggesting that ROS is related to the induction of PAM/TRAIL-mediated apoptosis.Hwang et al. examinedTerefinko et al. demonstrDickeya solani and Pectobacterium atrosepticum, which are important plant pathogens. Moreover, the mechanism of action of PAL was possibly due to the presence of ROS and RNS.Dzimitrowicz et al. reported2O2). 
The model can be used to quantify the selective and synergistic anticancer effect of PTL in both susceptible and resistant cells.Bengtson and Bogaerts developeIn a report by Bekeschus et al. , human mZimmermann et al. demonstrCandida albicans with a sublethal dose of CAP. Subsequent analysis of the CAP treated yeast identified six single-nucleotide variants, six insertions, and five deletions, as well as the decreased or increased activity of the corresponding enzymes.Tyczkowska-Siero\u0144 et al. treated Haralambiev et al. examinedEscherichia coli, with or without a plasmid that includes an ampicillin resistance gene and chloramphenicol acetyltransferase (CAT) gene, were treated with a dielectric barrier discharge (DBD) plasma torch. The plasma treatment was found to degrade the lipopolysaccharide (LPS) and DNA of the bacteria as well as CAT. Furthermore, the plasma treatment was equally effective against antibiotic-resistant and non-resistant bacteria. This finding suggests that plasma treatment is effective against bacterial strains resistant to conventional antibiotic therapy.Sakudo et al. investigKwon et al. studied the attachment of human mesenchymal stem cells (hMSCs) to the surface of titanium modified by plasma treatment generated using either nitrogen (N-P), air (A-P), or humidified ammonia (NA-P) . N-P, A-Przekora et al. producedHan et al. performeAdhikari et al. examinedSun et al. used higA review by Zubor et al. focuses A review by Chokradjaroen et al. providesA review by Bran\u00fd et al. gives anInternational Journal of Molecular Sciences, which highlights the research of eminent scientists in the field of plasma biology. The Editors would like to thank all the contributors to this Special Issue for their commitment and enthusiasm during the compilation of the respective articles. The Editors also wish to thank Kaitlyn Wu and other members of the editorial staff at Multidisciplinary Digital Publishing Institute (MDPI) for their professionalism and dedication. Hopefully, readers will enjoy this Special Issue and be inspired with new ideas for future research.Finally, the Editors are delighted to have had the honor of organizing this Special Issue for"} {"text": "Mycerobas carnipes was sequenced in this study and the total length is 16,806\u2009bp containing 13 protein-coding genes (PCGs), 22 transfer RNA genes (tRNAs), two ribosomal RNA genes (rRNAs), and one control region. The phylogenetic analysis based on 13 PCGs of five grosbeaks and other Fringillidae birds demonstrated that Mycerobas, Coccothraustes, and Eophona had close phylogenetic relationships for clustering as three sister branches, and supported that Eophona originated earlier in phylogeny.The complete mitochondrial genome of Mycerobas carnipes is a grosbeak in the family Fringillidae, whose ecological habitats often occur in forest and shrubland and which distributes in northeastern Iran, the Himalayas to the western Ten-zan, central and southwest China perform differently using the Bayesian inference (BI), the maximum-likelihood (ML) criteria, and different types of datasets. This study presents the complete mitochondrial genome of M. carnipes and constructs a phylogenetic tree based on 13 PCGs of five grosbeaks and other Fringillidae birds for better understanding relationships of grosbeak genera.t et\u00a0al. and the t et\u00a0al. based ont et\u00a0al. and Yangt et\u00a0al. based ona et\u00a0al. based ona et\u00a0al. based ona et\u00a0al. , the topM. 
carnipes, which died of airport protection facility for bird strikes in the Ganzi Gesser Airport, Sichuan Province, China . The specimen was stored in the Natural Museum of Sichuan University with a voucher number of QZKK091. The complete mitochondrial genome of M. carnipes was sequenced by Chain Termination Method and the genome sequence has been deposited in the GenBank with the accession MW 304000. The assembly of mitochondrial genome was finished via SeqMan software (version 7.1.0), and the annotation was generated by MITOS first (Bernt et\u00a0al. The total mitochondrial DNA was extracted from the muscle tissue of M. carnipes and other 11 Fringillidae species were used for phylogenetic analysis by BI and ML method (Carpodacus, Loxia, and Chloris; and the lineage formed three branches with Eophona basal to them. The phylogenetic result supported that Eophona had earlier origin than other grosbeaks which was different from previous studies with one or a minority of genes (Arnaiz-Villena et\u00a0al. The mitochondrial genome sequence of 13 PCGs of L method . The ML L method . A discrL method with simL method accordin"} {"text": "We estimated life expectancy from detailed zoo records for 133 818 individuals across 244 parrot species. Using a principled Bayesian approach that addresses data uncertainty and imputation of missing values, we found a consistent correlation between relative brain size and life expectancy in parrots. This correlation was best explained by a direct effect of relative brain size. Notably, we found no effects of developmental time, clutch size or age at first reproduction. Our results suggest that selection for enhanced cognitive abilities in parrots has in turn promoted longer lifespans.Previous studies have demonstrated a correlation between longevity and brain size in a variety of taxa. Little research has been devoted to understanding this link in parrots; yet parrots are well-known for both their exceptionally long lives and cognitive complexity. We employed a large-scale comparative analysis that investigated the influence of brain size and life-history variables on longevity in parrots. Specifically, we addressed two hypotheses for evolutionary drivers of longevity: the Howevcognitive buffer hypothesis posits that increased cognitive flexibility enabled by a relatively larger brain allows species to solve problems that would otherwise increase their extrinsic mortality, hence allowing for increased longevity [expensive brain hypothesis argues that there is an indirect association between brains and longevity, with an investment in expensive brain tissue slowing down the pace of life through increased developmental time and increased parental investment per offspring [delayed benefits hypothesis extends the expensive brain hypothesis and reverses the directionality of its argument, positing that a shift to a higher quality, skill-based diet lowered adult mortality rates and supported a longer juvenile period that facilitated inter-generational skill transmission. This extended development in turn allows for investment in brain growth that further promotes skill-based foraging niches. In other words, long-lived, extractive foraging species evolve larger brains because they can benefit most from learning [et al. [et al. [Three non-mutually exclusive hypotheses have been proposed to explain the correlated evolution of larger brains and longer lifespans. First, the ongevity . Second,ffspring . Third, learning . Previoulearning ,16. For [et al. showed t [et al. 
showed bPsittaciformes) are famous for both their long lives and complex cognition [Micropsitta keiensis, 12 g) to kakapo [et al. [et al. [Parrots . In the 3000 g) used max [et al. , finding [et al. found thet al. [et al. [One of the greatest challenges for comparative life-history studies is sourcing good quality data . For inset al. ) and som [et al. ). Maximu [et al. . For spe [et al. . It calc [et al. ,33. Base [et al. . We addicognitive buffer hypothesis predicts a direct effect of relative brain size on life expectancy, with larger brained species living longer [expensive brain hypothesis predicts that the effect of brain size on life expectancy is indirect, emerging from increased developmental time and parental investment per offspring [delayed benefits hypothesis would also predict a direct relationship between relative brain size and longevity [Psittaciformes to date and contributes to a broader understanding of this understudied group.Here, we present a phylogenetic comparative analysis focused on brain size and its effects on longevity in parrots. First, we estimate life expectancy from Species360's zoological information management system (ZIMS) with records of 133 818 individuals across 244 parrot species. We then test for a correlation between life expectancy and relative brain size after removing the effect of covariates. Third, we used a DAG to distinguish between two possible pathways for this correlation. The g longer , while tongevity , it woul. 2 (a)We obtained data on birth and death dates from Species360's ZIMS. After cleaning we included records for 133 818 individuals across 244 species. To estimate life expectancy, we implemented Bayesian survival trajectory analysis , which a0 and a1), age independent (adult) mortality (c) as well as senescent mortality . Cumulative survival can be calculated aswhere Life expectancy at birth is calculated asWe used the Gelman\u2013Rubin statistic to dete (b)et al. [et al. [et al. [We collected body mass data from ZIMS. Additional body mass measurements were included from the literature if no captive records were available for a species . We thenet al. , from Sc [et al. and from [et al. , and sim [et al. , insular [et al. , maximum [et al. , clutch [et al. , develop [et al. . Diet, iet al. [We used a DAG to decidet al. .Figure (c)et al. [To test for a correlation between life expectancy and relative brain size, we first constructed a Bayesian structural equation model (model 1) with life expectancy as the main variable to be explained by relative brain size and four other potential covariates. We included a total of 360 species for which at least one variable was known. The structure of this first model was as follows: LE \u223c I + BO + RB + LA + D, where: LE, standardized log life expectancy; I, insularity (binary); BO, standardized log body mass; RB, relative brain size; LA, standardized maximum latitude range and D, protein content diet . Relative brain size was calculated as: BR \u2013 pBR, where: BR, standardized log brain mass and pBR, predicted brain mass from a second model that ran simultaneously: pBR\u223cBO. Relative brain size has been shown to correlate with innovation rates in birds , and we et al. 
, using tTo test whether any correlation between relative brain size and longevity could be indirectly caused by developmental time, delayed juvenile periods and/or parental investment, we ran a second model (model 2) where developmental time (incubation period plus fledging period in model 2) and clutch size were included as additional covariates. Both variables were log transformed and standardized. Because data on AFR were only available for 89 species and the available data were biased towards later AFR , we did not attempt to impute this variable but tested its effect in a third model (model 3) limited to cases where AFR was known.cognitive buffer hypothesis), we would expect the coefficient of the brain size effect to be positive and similar in all three models. If an increase in relative brain size only causes an increase in developmental time (expensive brain hypothesis), we would expect the coefficient of the brain size effect to be positive in model 1 and zero in models 2 and 3. We would also expect an effect of developmental variables in models 2 and 3.To assess which hypothesis was best supported by the data, we compared the effect of relative brain size in the three models. If an increase in relative brain size directly causes an increase in life expectancy ; however, covariance was generally low between species that belonged to different genera (c).Overall, we were able to estimate life expectancy for 217 species of 244 species for which we had data . This included representatives of all eight major genera (i.e. those with at least 10 species) and over half of the extant parrot species. The shortest-lived genera were the small-bodied t genera c.Figurec for model 2; electronic supplementary material, results for models 1 and 3). Relative brain size also had a small, but consistently positive, effect on life expectancy . In particular, model 2 showed no effect of developmental time or clutch size on longevity, and there was no clear effect of AFR on longevity in model 3 . However, it should be noted that these models were designed to test the effect of relative brain size, so other parameter estimates should be interpreted with caution [Model 1 as well as models 2 and 3 had similar estimates for the direct effect of relative brain size. As expected, body size was strongly and positively correlated with life expectancy uncertainty about variable estimates; (ii) imputation of missing values; (iii) a principled representation of relative brain size; and (iv) phylogenetic signal. We believe this method has some major advantages. Most notably, we could estimate both life expectancy and its uncertainty in each species. This allowed us to fully exploit the fact that we have a hundred-fold more data for some species, instead of relying on a single-point estimate of maximum longevity as in previous studies of longevity in parrots ,27. We aPsittaculirostris and Charmosyna which have been historically difficult to manage in captivity. We dealt with this by excluding potentially problematic species from the initial life expectancy estimations, and instead imputed values in the final model . We can still not be completely sure that the patterns observed in the data are all representative of the evolutionary processes that shaped them, but it is highly unlikely that the clear positive correlation between relative brain size and life expectancy is owing to captivity. 
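As a concrete reference for the mortality model used to estimate life expectancy above (declining juvenile mortality with parameters a0 and a1, a constant adult term c, and an exponentially increasing senescent term), the sketch below gives cumulative survival and life expectancy at birth under the standard Siler (Gompertz bathtub) parameterisation used in BaSTA. The senescent parameters are named b0 and b1 here as an assumption, and all numeric values are illustrative rather than estimates from the parrot data.

```python
# Minimal sketch of a Siler ("bathtub") mortality model, assuming the
# standard Gompertz-bathtub parameterisation used in BaSTA. The senescent
# parameter names (b0, b1) and every numeric value are illustrative
# assumptions, not estimates from the parrot dataset.
import math

def hazard(x, a0, a1, c, b0, b1):
    """Age-specific mortality rate mu(x): declining juvenile term +
    constant adult term + exponentially increasing senescent term."""
    return a0 * math.exp(-a1 * x) + c + b0 * math.exp(b1 * x)

def survival(x, a0, a1, c, b0, b1):
    """Cumulative survival S(x) = exp(-integral_0^x mu(t) dt),
    which has a closed form under the Siler model."""
    return math.exp(-(a0 / a1) * (1.0 - math.exp(-a1 * x))
                    - c * x
                    - (b0 / b1) * (math.exp(b1 * x) - 1.0))

def life_expectancy_at_birth(a0, a1, c, b0, b1, max_age=200.0, step=0.01):
    """e0 = integral of S(x) over age, evaluated with the trapezoidal rule."""
    ages = [i * step for i in range(int(max_age / step) + 1)]
    s = [survival(x, a0, a1, c, b0, b1) for x in ages]
    return sum(0.5 * (s[i] + s[i + 1]) * step for i in range(len(s) - 1))

# Hypothetical parameter values for a mid-sized parrot.
e0 = life_expectancy_at_birth(a0=0.5, a1=1.0, c=0.02, b0=0.002, b1=0.15)
print(f"life expectancy at birth ~ {e0:.1f} years")
```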
It could even be expected that large-brained species live shorter in captivity, because of the higher metabolic rates required to keep the large brain supplied with glucose. This has been shown to be the case within species in captive guppies [Our study also departs from most previous studies of longevity by using data from captivity on life expectancy ,55\u201357. T guppies . Since s. 5cognitive buffer hypothesis, suggesting that relatively large brains may have buffered parrots against environmental variability and/or predation threats reducing sources of extrinsic mortality and allowing longer lifespans. This result is consistent with previous studies in other birds, suggesting that common processes may explain longevity in altricial birds. In addition to their longevity, parrots are famous for their complex cognition. It remains largely unknown what evolutionary processes have driven cognitive evolution in parrots, but given the results of our study, in addition to those of Munshi-South & Wilkinson [Overall, our results are consistent with the ilkinson , future ilkinson , showingClick here for additional data file."} {"text": "Optical humidity sensors have evolved through decades of research and development, constantly adapting to new demands and challenges. The continuous growth is supported by the emergence of a variety of optical fibers and functional materials, in addition to the adaptation of different sensing mechanisms and optical techniques. This review attempts to cover the majority of optical humidity sensors reported to date, highlight trends in design and performance, and discuss the challenges of different applications. The measurement of relative humidity (RH) is an important part of heating, ventilation, air conditioning, and refrigeration (HVACR), which provides quality control in numerous aspects of daily life. The full range of applications include: (a) manufacturing ; (b) refrigeration ; (c) packaging ; (d) transportation ; (e) building conditioning ; and (f) healthcare ; (g) agriculture/forestry ; (h) touchless control systems ; (i) weather stations ; hence, an accurate and reliable means to monitor RH supports the development and operation of numerous industries, potentially connected together under the internet of things (IoT).Owing to different environments and requirements, a wide variety of humidity-sensing technologies have been researched and developed in the literature, including optical/photonic/optoelectronic ,2,3, quaOptical humidity sensors include colorimetric indicators, control systems, point sensors, and distributed sensors. Among point sensors, the most common types involve gratings and absorption loss , with eaRelative humidity (RH) indicates the percentage of water vapor in a water-air mixture relative to the saturation level at a given temperature. Hence, at a lower temperature, 100%RH can be achieved with fewer water molecules. This effect can be observed with the common phenomenon of condensation on cooler surfaces. For simple testing of a humidity sensor, the amount of water molecules is varied instead of the temperature. 
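To make this temperature dependence concrete, the short sketch below converts a fixed water-vapour pressure into RH at two temperatures using the Magnus approximation for saturation vapour pressure; the coefficients are commonly quoted values and are an assumption here, not figures taken from the cited literature.

```python
# Small illustration of how relative humidity depends on temperature for a
# fixed amount of water vapour, using the Magnus approximation for the
# saturation vapour pressure of water. The coefficients are commonly quoted
# values and are an assumption here, not figures from this review.
import math

def saturation_vapour_pressure_hpa(temp_c):
    """Magnus approximation: saturation vapour pressure (hPa) over water."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def relative_humidity(vapour_pressure_hpa, temp_c):
    """RH (%) = actual vapour pressure / saturation vapour pressure."""
    return 100.0 * vapour_pressure_hpa / saturation_vapour_pressure_hpa(temp_c)

# The same absolute water-vapour pressure read at two temperatures.
e = 12.0  # hPa of water vapour in the air
for t in (10.0, 25.0):
    print(f"{t:4.1f} C -> RH = {relative_humidity(e, t):5.1f}%")
```

The same absolute vapour content therefore reads as a much higher RH at the lower temperature, which is why bench characterisation varies the water-vapour content at a fixed temperature.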
This is because changing the temperature could lead to unwanted temperature-related effects, including thermo-optic, stress-optic, and thermal expansion.A humidity sensor is defined as a device that provides a measure of the RH and either presents the information directly to the user or serves as an actuator to drive the next stage in a larger system, as illustrated in Low-cost functional strips providing a means to assess the approximate level of RH by eye are useful for electronics, greenhouses, and patient monitoring. Typically, accuracy and response time are not prioritized, as the checks are performed manually. On the contrary, color uniformity and dynamic range are considered important for the end-users. Two or more colorimetric strips of different color maps can be used in conjunction to improve the accuracy of readings.The demand for a readily accessible humidity indicator that is vibration resistant likely peaked during World War II, when people became concerned about the poor condition of weapons and ammunition. High levels of humidity combined with poor packaging methods led to corrosion and moisture damage. In the beginning, the concept for the first color-change humidity indicator involved a simple go/no-go method of measuring humidity. In the late 1940s, RHs in the range of 30%RH\u201335%RH were considered important because it is the onset for corrosion . For theAmong the earliest humidity-sensitive colorimetric indicators was one reported by Tian et al. in 2008 Figure . The tea2-based photonic crystal material. A 40 nm spectral shift was demonstrated by varying the RH, and a 1%RH was claimed to be visually distinguishable. The colorimetric indicator was fabricated on silica glass using glancing angle deposition (GLAD). The color mechanism is also based on Bragg resonance. Hong Chi et al. [Matthew M. Hawkeye et al. developei et al. used a dAndrew Mills et al. reportedConventional touchless control inputs are either based on voice , motion , or capacitance . Such controls enable the non-contact operation of gadgets, machinery, or infrastructure, which has the benefits of convenience and hygiene. While these technologies work fine most of the time, false inputs are a possibility. With voice input, background noise can sometimes interfere and become misinterpreted as valid commands. Motion commands can be erroneously entered if the foreground contains multiple moving objects or people. Even with capacitance-based controls, static charge and liquid droplets can hinder the ability of the system to distinguish user inputs from random perturbations. On the other hand, humidity-based control can be more reliable, though possibly limited in variety. The humidity signal can be generated by human breath or finger proximity, and changes are much faster than that of the ambient environment, which lends itself to easy recognition.2, nanosheets that respond to humidity with changes in potentially spatially resolvable color and electrical resistance. A variation of 0\u2013320 k\u03a9 was observed for the full RH range. The response time is 30 s, which limits the applications to those with slow-moving inputs. The VS2 nanosheet coating was first prepared in solution with a modified liquid-exfoliation method and then transferred to a polyethylene terephthalate (PET) substrate with a higher surface energy for suitable adhesion force. 
The operating principle involves moisture interaction with the interval structures of the nanosheet, as well as the dynamics of exposed metal atoms, which influences its electrical resistance.In 2011, Jinish Mathew et al. demonstr3Sb3P2O14 and the proton exchanged phosphatoantimonic acid H3Sb3P2O14 were synthesized by a conventional solid-state reaction followed by ion exchange with HNO3. Dropcasting the resulting suspension onto a quartz substrate leads to randomly overlapping, c-oriented nanosheets. The coating absorbs water vapor and swells in thickness, resulting in a change in resonance condition and thus a shift in Bragg wavelength . Li Yu et al. [Katalin Szendrei et al. Figure also devu et al. developePoint humidity sensors provide a single spatial reading of RH and are the most common type due to their simplicity, high sensitivity, and relatively low cost. Some humidity sensors feature a single-ended detection method, where the sensor head is a probe, which is useful for measuring confined environments. Point humidity sensors come in a huge variety of forms and sensing mechanisms, which makes comparing the sensitivities meaningless when the units are different. Instead, the limit of detection (LOD) is compared, which also takes into account the electro-optic devices used for signal conversion.The earliest optical-based point humidity sensors were reported in the 1980s. Hermann Posch et al. showed t304-based thin-film colorimetric indicator that can be interrogated by electro-optics. The response time is 1 min, and the sensitivity is 4.9 \u00d7 10\u22125 dB/%RH (400 nm wavelength) between 10%RH and 90%RH. Co304 films were fabricated by pyrolysis of a thin layer of cobalt 2-ethylhexanoate, which was then spin-coated on a glass substrate from a mixed solution of toluene and 1-butanol. The film reacts to different RH due to the water-induced absorption of the probe light.Yoshihiko Sadaoka et al. demonstrA fiber-optic humidity sensor was developed by Francisco J. Arregui et al. Figure , which c\u22126 a.u./%RH between 10%RH and 90%RH. The polyimide coating is hydrophilic and thus swells in humid environments as the water molecules migrate into the polymer. The overlap between the evanescent waves of light and the water molecules within the polymer gives rise to attenuation of light, providing an indication of the RH level.Francisco Arregui et al. used an 2. The sensitivity with respect to RH is very irregular between 3%RH and 90%RH, while the response time is in the order of seconds. Prior to coating, the optical fiber was stripped of its cladding and annealed to conform to a U-bend probe. The sensing section was then functionalized with the aqueous solution of PVA and CoCl2 before being left to dry. The coating swells with increasing RH and bends the fiber, thus attenuating the light through bend-induced optical leakage. Ainhoa Gaston et al. [A plastic optical fiber (POF)-based humidity sensor was developed by Rajeev Jindal et al. , with thn et al. reportedShinzo Muto et al. Figure exploite2 film overlay on side-polished optical fibers was presented by Alberto Alvarez-Herrero et al. [2 film was deposited via electron-beam evaporation on a side-polished fiber held in a groove. Water molecules in the film change the coupling conditions between light propagating in the fiber and the substrate. The sensor works by monitoring the shift in resonant wavelength as a function of RH. Lina Xu et al. [A highly sensitive humidity sensor using a TiOo et al. . The LODu et al. 
and PSS coating was demonstrated by C. R. Zamarre\u00f1o et al. [2/PSS onto an MMF, facilitating a porous structure with a thickness comparable to that of the wavelength of the probe light. Upon absorbing water molecules, the change in the coating thickness and its refractive index leads to a change in the lossy resonance conditions, which shifts the resonance peaks to longer wavelengths. Matthew M. Hawkeye et al. [2 fabricated by using glancing angle deposition (GLAD). The LOD can be as low as 1%H with a dynamic range of 3\u201390%RH. The GLAD process enabled the fabrication of complex photonic crystal devices through a single step and was used in this work to coat a silicon or glass substrate with TiO2. As RH increases, water vapor penetrates the porous structure and condenses within the photonic crystal. As a result, the refractive index and optical path length increase. Both underlying effects are modified as a result, which is the Bragg resonance , and the FPI resonances are shifted to longer wavelengths.Lossy mode resonance (LMR) with titanium dioxide methacrylate (PEGMA), and cross-linker poly(ethylene glycol) diacrylate (PEGDA). The precuring mixture was sandwiched between a cover glass and a fluorinated glass slide, positioned above a NdFeB magnet and below a strong UV light source for photo-polymerization.Ming-Yue Fu et al. reportedWei Chang Wong et al. reportedA fiber-bend-based approach was taken by Jinesh Mathew et al. Figure , where aLi Han Chen et al. applied 2 nanoparticles were immobilized into a nanostructured sol-gel matrix and incorporated onto a cladding-removed optical fiber via dip coating. Absorption spectroscopy reveals the correlation between output power and RH.A Boehmite nanorod and gold nanoparticle nanocomposite film was used by Priyank Mohan et al. for locaSandra F. H. Correia et al. Figure reportedTao Li et al. developeYanjuan Liu et al. introducGinu Rajan et al. etched aJinesh Mathew et al. studied A hybrid union of FBG and reflection-mode PCF interferometer was proposed and implemented by Jinesh Mathew et al. for simuLourdes Alwis et al. evaluateA no-core fiber structure was adopted by Li Xia et al. , with a P. Sanchez et al. Figure explored2O3+) and poly-sodium 4-styrenesulfonate (PSS\u2212), which is used for selective adsorption. Yinping Miao et al. [A combination of interior-coated PCF with LPFG was reported by Shijie Zheng et al. . The seno et al. exploiteYun Cheng et al. demonstrM. Batumalay et al. applied 3 gel was deposited on an isosceles prism and annealed to form the functional coating. The coating responds to moisture in the air by changing its refractive index with the intake of water molecules. The reflected optical power was collected to provide a readout of the RH. Branislav Korenko et al. [Iron titanium oxide was tested for humidity-sensitive properties by Nidhi Verma et al. . The dyno et al. created 2O3 film. The LOD is 0.65%RH, and the response time is 18 min. The sensitivity is 310 pm/%RH between 20%RH and 90%RH, and the dynamic range is 20%RH\u201390%RH. Porous anodic alumina (PAA) was fabricated starting from high purity aluminum, which was electrochemically polished and anodized before removal in phosphoric acid. The PAA was attached to the tip of an optical fiber using adhesive. The micro-pores readily absorb water molecules due to the capillary condensation effect, albeit they are easily saturated at higher RH. The change in the effective refractive index in the etalon produces a shift in the resonance spectrum as a function of RH. 
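The resonance-tracking readout of such tip etalons can be illustrated with a short sketch. A low-finesse Fabry-Perot cavity has resonances at lambda_m = 2nL/m, so a humidity-induced change in the effective index n (or in the thickness L) shifts each resonance by roughly lambda(dn/n + dL/L); all values below are hypothetical and not taken from any of the cited sensors.

```python
# Minimal sketch of why a humidity-induced refractive-index change shifts
# the resonance wavelengths of a fiber-tip Fabry-Perot etalon.
# The m-th resonance satisfies lambda_m = 2*n*L/m, so a small change in the
# effective index n (or cavity length L) moves each resonance by roughly
# lambda * (dn/n + dL/L). All numbers are hypothetical.

def resonance_wavelengths(n, length_nm, lambda_min=1500.0, lambda_max=1600.0):
    """Resonance wavelengths (nm) of an etalon within a wavelength window."""
    opl = 2.0 * n * length_nm                      # round-trip optical path length
    m_max = int(opl / lambda_min)
    m_min = max(1, int(opl / lambda_max) + 1)
    return [opl / m for m in range(m_min, m_max + 1)]

n_dry, n_humid = 1.30, 1.31       # effective index of a porous film, dry vs humid
cavity_nm = 20_000.0              # 20-micrometre-thick film

dry = resonance_wavelengths(n_dry, cavity_nm)

# Follow the same resonance order near 1550 nm before and after the index change.
target = min(dry, key=lambda w: abs(w - 1550.0))
order = round(2.0 * n_dry * cavity_nm / target)
shifted = 2.0 * n_humid * cavity_nm / order
print(f"resonance near 1550 nm shifts by {shifted - target:.2f} nm "
      f"for a {n_humid - n_dry:.3f} index change")
```

Tracking a single resonance in this way converts a small index change in the porous film into a readily resolved wavelength shift.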
Z. Harith et al. [Chujia Huang et al. developeh et al. comparedAn intensity-based humidity sensor was conceived by Zhi Feng Zhang et al. Figure , which uAitor Urrutia et al. performeGetinet Woyessa et al. reported2 produces the LMR, whose effective refractive index is sensitive to RH. A microfiber knot resonator with a PVA overlay was developed by Jong Cheol Shin et al. [2 substrate, the PVA overlay was deposited using the spin-coating technique. The microfiber allows a significant portion of the evanescent field to interact with the PVA. The PVA coating changes its refractive index with varying RH and thus alters the effective refractive index seen by the guided mode(s). Such changes affect the resonant wavelength, which can be tracked as a function of RH.J. Ascorbe et al. Figure worked on et al. . The sen2 to create a differential humidity sensor based on two different polymer waveguides. The response time is 700 ms. The sensitivity is 0.47 dB/%RH between 40%RH and 90%RH, and the dynamic range is 35%RH\u201398%RH. The GO was fabricated using a variant of Hummer\u2019s method, which used deionized water as the solvent. TiO2 nanoparticles were synthesized in ethanol. Two SU8 polymer waveguides were fabricated on a SiO2 substrate, each with two layers being undercladding and core layer . Fiber arrays were aligned to the channels for integration. The two different coatings were drop-casted onto the two waveguides and UV cured. In terms of operation, increasing RH leads to an increase in coating refractive index of the TiO2-coated waveguide, which reduces optical confinement and increases water-absorption-induced optical loss at the interface. The opposite effects occur for the graphene-oxide-coated waveguide due to increasing interlayer distance with increasing RH. The output signal is taken from the difference between the waveguide transmissions to obtain the maximum possible sensitivity.Lu Shili et al. reportedA GO PVA composite film was developed by Youqing Wang et al. , which i2 alcohol suspension is treated by ultrasonication before being dropped into a basin containing the side-polished fiber. As the RH increases, more water molecules are adsorbed on the WS2 layer with moderate adsorption energies and a moderate degree of charge transfer. Larger amounts of electrons are transferred from WS2 to H2O, which reduces the conductivity and light absorption. As a result, the transmission of the fiber is proportional to the RH. George Y. Chen et al. [Yunhan Luo et al. side-poln et al. exploiteA lithium chloride film coated on the end-face of an optical fiber was used by Bao-Kai Zhang et al. to quantTapered fiber coated with multiwalled carbon nanotubes slurry was explored by Habibah Mohamed et al. Figure . The LODJia Shi et al. incorpor2 and TiO2 nanoparticles were deposited from stable colloidal precursor suspensions to create the first multilayer. Subsequently, a suspension of H3Sb3P2O14 exfoliated nanosheets humidity-responsive optical cavity was spin-casted onto it to form the optical cavity. Then, a monolayer of light-emitting polystyrene nanospheres was deposited and covered by a new layer of H3Sb3P2O14. Swelling of the nanosheet layer by increasing the RH shifts the spatial and spectral positions of the optical cavity resonances relative to those of the emission band of the dyes. The result is either a fluorescent turn-off or a turn-on effect with changing RH. Weijia Wang et al. [A fluorescent resonator humidity sensor was developed by Katalin Szendrei et al. based ong et al. 
introducYung-Da Chiu et al. Figure investigA cascaded peanut-shaped structure coated with PVA was proposed by Zhaowei Wang et al. . The aimBo Wang et al. construcAn HCF with a polyimide film was used as a humidity sensor by Ce Bian et al. . The resHamid E Limodehi et al. created Kasun Prabuddha Wasantha Dissanayake et al. presenteAg-decorated ZnO nanorods have been exploited by Shweta Jagtap et al. with a fPoonam D. Mahapure et al. studied Yu Shao et al. tested aA two-mode microfiber knot resonator coated with reduced GO was designed and fabricated by Le . The LODChunhua Tan et al. exploredAnother Fabry-Perot-based humidity sensor was developed by Jinze Li et al. . It invo2Cl2 with ultraviolet laser irradiation. The irradiation power was optimized to produce the best morphological results on the TPPS thin film attached to the fiber end-face. Reflection spectroscopy was applied to analyze the local environment. Both the refractive index and absorption coefficient depend on the environmental humidity. The thin film also exhibits low-quality factor random Fabry-P\u00e9rot interference patterns. Miguel Hernaez et al. [2 film to facilitate LMR absorption bands. Poly(ethyleneimine) (PEI) solution in deionized water and GO powder were used for the layer-by-layer assembly of the sensitive layer on top of the SnO2 underlayer. The refractive index of the PEI/GO coating is sensitive to changes in RH, which results in wavelength shifts of the LMR band.Mahboubeh Dehghani Sanij et al. exploredz et al. reportedA nanocomposite film was presented by Mingyu Chen et al. Figure , which oRang Chu et al. reportedAneez Syuhada et al. fabricatDeyhaa A. Resen et al. proposedHumidity-dependent plasmonic coupling within a 2D network of gold nanoparticles was exploited by Marco A. Squillaci et al. . The res2) or tungsten disulfide (WS2). Only the Ti/MoS2 showed a response when changing the RH from 58% to 88%. Hence, MoS2 exhibits higher sensitivity on Ti with 36-nm thickness compared with that of WS2.N. Kaur Sidhu et al. carried Susana Novais et al. dip-coatAnand M. Shrivastav et al. construc2 coating. Increasing RH decreases the RI of the SnO2 thin film, which reduces the reflectance of the fiber tip end-face and thus decreases the visibility of the interference fringes. The sensitivity is 39.49 \u00d7 10\u22123%/RH between 40%RH and 90%RH, and the dynamic range is 40%RH\u201390%RH.Ce Bian et al. publisheNiobium disulfide (NbS2) was explored by Enze Zhang et al. Figure with theHsin-Yi Wen et al. used a UHye Jin Kim et al. Figure used PVDJinze Li et al. developeLin Zhao et al. developeMuhammad Quisar Lokman et al. investig2 on a planar optical waveguide to introduce humidity-dependent attenuation. A silicon wafer with a thermal oxide layer was used as the substrate and undercladding layer for the optical waveguide. Germanium and boron codoped silica layers were introduced into the substrate to form the core layer. The core layer was patterned using photolithography and etched using inductively coupled plasma (ICP) etching followed by spin coating of ZPU13 polymer to form the overcladding. The ZPU13 polymer coating was etched by using oxygen plasma down to the top surface of the waveguide core to enable evanescent wave interactions. The sensitivity is \u22122 dB/%RH between 56%RH and 90%RH, the response time is 2.5 s, and the dynamic range is 56%RH\u201390%RH. Pasquale Di Palma et al. [Nur Abdillah Siddiq et al. exploitea et al. exploredZhihai Liu et al. Figure investigCatherine Grogan et al. conducteCheng Zhou et al. 
reported2), molybdenum diselenide (MoSe2), and composition of graphene and graphene oxide (G/GO). The team analyzed the relative differentiation of attenuation as a function of RH. The conclusion was that G/GO was the most reliable coating among the three types. The tested dynamic range is 20%RH\u201390%RH. Husna Mardiyah Burhanuddin et al. [Erfan Owji et al. chemicaln et al. created Jia-Kai Wang et al. exploredFor practical application, such as operation safety of transmission lines, Jie Zhang et al. investigMao-qing Chen et al. Figure applied Sarah Kadhim Al-Hayali et al. proposedSeyed Reza Hosseini Largani et al. exploredXin Cheng et al. Figure employed2 was applied for humidity sensing. The sensing mechanism is evanescent wave attenuation. The response time is 25 s. The sensitivity is 5.35 \u00b5W/%RH between 15%RH and 50%RH and 1.94 \u00b5W/%RH between 50%RH and 95%RH. The dynamic range is 15%RH\u201395%RH. Yu Ying et al. [Xixi Huang et al. depositeg et al. and big data for smart cities of the near future.2-cladded optical fiber. The LOD is, on average, ~5%RH between 20%RH and 95%RH. The response time is 45 s. The sensing range depends on the number of sensing points. The underlying mechanics involve water absorption by the porous cladding, which scatters light and attenuates the output power. W. C. Michie et al. [K. Ogawa et al. were amoe et al. combinedA. Kharaz et al. Figure presenteA. Sascha Liehr et al. were amoPeter J. Thomas et al. also repGeorge Y. Chen et al. Figure created Humidity-induced strain measurement was investigated by Pavol Stajanca et al. using poThere have been numerous functional materials applied to optical fibers as a coating to serve as a transducer to the target measurand, as shown in A large number of functional materials are composite and synthetic, which are either expensive or non-degradable. In addition, the production of sensitized composites is usually accompanied by by-products that cause environmental pollution . It is nThere are environments where water vapor is not the only component in addition to regular air, such as methane or carbon monoxide. To distinguish between a humid environment and one that is something else requires the functional material to be engineered in such a way that it only reacts to water vapor . Otherwise, false readings will render the humidity sensor useless. As for miniaturization, the reduction in the dimensions of sensors allows for the less-intrusive deployment or installation in space-limited areas. The challenge is to downscale components while maintaining precision and robustness. The ability to readily integrate with electronic systems enables multiplexing as well as the adoption of the IOT, which will play an increasingly bigger role in smart cities of the future. Seamless integration relies on the use of electronic components within a reasonable package size to perform the necessary conversions and amplifications to match the impedance and connector of the rest of the system.Cost reduction and lower maintenance add to the feasibility of employing such sensors and keeping them operational for long-term usage. One of the main limiters to the widespread use of specific sensors is the cost of upscaling and difficulty of maintenance. The hurdle to overcome includes mass production of the necessary components to lower the cost, as well as a simple design with long-lasting materials to minimize the need for regular maintenance. 
Protective membranes or components may be required to protect humidity sensors from dust and other forms of contamination, which will degrade the reliability of data over long.In terms of tradeoffs, thin functional coatings or thin optical fibers tend to offer a faster response at the cost of a lower sensitivity . Thick cThere are challenges associated with particular applications, ranging from respiratory monitoring to nuclear power plants. This section discusses the specific needs and difficulties to overcome for achieving a viable sensing solution.High levels of humidity can cause respiratory irritation , especiaConsidering the wearable nature of the application, the humidity sensor construction materials need to be able to cope with frequent or constant large strains and bending effects . FurtherSome clinical applications, such as laryngectomy, result in a bypass of the upper respiratory tract. As a result, the microclimate conditions of the nasal cavity and pharynx experience changes in the local temperature and RH . This caMildew is a critical problem in grain storage . If unchThe operational safety of transmission lines of power systems is subject to various environmental factors , includiNuclear power plants tend to employ condensate flow monitoring, sewage pool monitoring, and inspection of the main coolant inventory balance, allowing the total leakage to be measured quantitatively . HoweverThis review covered a wide range of humidity sensors reported in the literature to date. The motivation behind developing optical humidity sensors is clear, as humidity is an important parameter in a number of industries for quality control. The design choices and various challenges for different applications are also discussed. Fiber gratings, absorption loss, and micro-resonators lead the way in terms of design popularity. Some trends were observed between sensitivity, LOD, and response time, though not definitive due to the absence of a large set of reported parameters of interest. Some of the main difficulties in designing a practical humidity sensor for harsh environments include biocompatibility and contamination of the sensor head by the environment. The different functional materials used to sensitize the sensor heads are listed to offer an overview of the variety and to highlight the importance of material engineering in the field of sensors.From the trajectory of existing developments, future advances in humidity sensing are anticipated to strive toward higher specificity, better miniaturization, more compatible system/environment integration, lower costs, and lower maintenance. Better limit of detection is meaningless without mass demand for such accuracy, and thus existing specifications can meet the majority of requirements in this area. New materials, particularly 1D and 2D structures and nanoscale-precision engineering techniques, will drive the research and development of novel humidity sensors that can adapt to the demand of industries and also pave the way for new applications. Reliability and reproducibility are likely to be better with the steadily improving automated manufacturing tools elevated by Industry 4.0."} {"text": "People with multiple sclerosis (MS) face challenges adhering to disease-modifying drug (DMD) treatment. Poor adherence to treatment reduces its clinical effectiveness which can adversely impact disease progression, MS-related hospitalisation, and mortality rates. 
Understanding the barriers to adherence is essential to addressing these issues in clinical practice and a consolidation of the literature had not yet been carried out. A systematic search was carried out using the electronic databases PsycINFO, and PubMed (Medline) using the search terms treatment compliance or treatment adherence and multiple sclerosis or MS. Studies included adults, with a diagnosis of relapsing\u2013remitting MS (RRMS) (sample\u2009>\u200980% RRMS), taking a DMD. The studies used an adequate measurement of treatment adherence and analysed possible factors associated with adherence. A total of 349 studies were retrieved, of which 24 were considered eligible for inclusion. Overall adherence rates of the included studies ranged from 52 to 92.8%. Narrative synthesis revealed the most prevalent factors associated with adherence were age, gender, depression, cognition, treatment satisfaction, injection-site reactions, and injection anxiety. There was contradictory evidence for disability in association with treatment adherence. The findings should be used to inform the development of targeted patient support programs which improve treatment compliance. The review also highlights the opportunities for advancing research into treatment adherence in MS.The online version contains supplementary material available at 10.1007/s00415-021-10850-w. Adherence to long-term treatment can be challenging for those suffering from a chronic illness, such as multiple sclerosis. In a widely cited report, the World Health Organisation (WHO) stated that only 50% of patients adhere to treatment recommendations . It is tThere have been remarkable advancements in the last 20\u00a0years in developing MS drug treatments known as disease-modifying drugs (DMDs) which slow the progression of the disease and reduce the rate of relapse . CurrentMedication compliance or adherence is defined as \u201cthe extent to which a patient acts in accordance with the prescribed interval and dose of a dosing regimen\u201d . This isAdherence to drug treatment in MS cannot be quantified using biological markers, therefore measuring adherence is often largely reliant on patients\u2019 self-report . PatientAdherence to treatment in multiple sclerosis is crucial to optimising patient care and managing the long-term prognosis of people with MS. Developing a strong evidence base for the key factors associated with adherence will help inform the development of targeted interventions such as patient support programs which are focused on treatment compliance. The effectiveness of these interventions could be subsequently evaluated by measuring adherence rates and health-related outcomes specific to this population. The aim of the current review was to provide a synthesis of the factors associated with adherence to disease-modifying drugs in the treatment of multiple sclerosis.A comprehensive literature search was conducted according to the \u2018PRISMA\u2019 statement . StudiesThe studies included in existing systematic reviews related to drug adherence in multiple sclerosis were screened for eligibility for this review , 17. RefStudies were limited to those written in English and published in peer-reviewed journals. Studies were included if they assessed factor/s related to the adherence of disease-modifying drug treatments for multiple sclerosis and reported quantitative data. 
Studies were only included if participants had a confirmed diagnosis of relapse-remitting multiple sclerosis, or included a sub-group of whom\u2009>\u200980% had the relapse-remitting form, a criteria used in a recent review . StudiesOne reviewer (FW) extracted data from the studies directly into a table made specifically for the current review and this was examined and verified by a second reviewer (DL). Study characteristics which were extracted included: participant eligibility criteria, sample size and prescribed DMD, adherence measurement, adherence rate and the key findings. It was not possible to carry out a meta-analysis of the included studies due to methodological diversity, therefore a narrative synthesis of the key findings was conducted.The quality of each study and risk of bias was assessed using an adapted AXIS Critical Appraisal tool which included 12 items . For eacA total of 24 studies were included in the current review. Eight studies did not specify the participants\u2019 MS subtype. These papers recorded the disease-modifying drugs that the participants were prescribed, which were only licensed for patients with relapsing\u2013remitting MS . TherefoThe relevant data was extracted from the 24 included studies and can be found in the Supplementary Information, Table 1.n\u2009=\u200914). All included studies used the appropriate design for their research aims. No studies were excluded from the review following the quality assessment.Of the 24 studies, four adequately met all the 12 evaluated items and no studies were given a minus sign for more than three of the items . There were seven studies which recruited participants through an MS registry or health databases. One study also recruited participants through the US National MS Society and the media [The sample size of the studies varied from 53 to 17,599 participants and the majority of participants were recruited through out-patient clinics or neurology treatment centres . Of theMost of the included studies used questionnaires or scales to quantify potential predictors of adherence, alongside capturing participants\u2019 sociodemographic information and the clinical characteristics of their MS. Several studies also gave participants surveys and provided qualitative comments or answered multiple-choice questions. This information was then used to carry out descriptive analysis of their perception of the contributing factors to non-adherence.The adherence rates of the studies range from 52 to 92.8%. The overall mean rates of adherence were pooled together based on the adherence measurement that was used by the study. It is important to note that the studies measured adherence across different time periods and used varying cut-offs. In addition, it was not possible to provide weighted means of adherence to the different disease-modifying drugs. Baseline adherence rates were used in the studies which had a longitudinal design. For the six studies which used pharmacy-based claims to measure adherence, either through the MPR or POC calculation, the mean rate of adherence was approximately 76.9%. The four studies which used an objective adherence measurement had a mean adherence rate of 80.55%. Finally, the mean rate of adherence of the self-reported studies was 74.0%.Several factors were significantly associated with adherence rates or were identified through descriptive analysis. The factors were systematically coded and used to generate descriptive themes.p\u2009=\u20090.0005). Lahdenper\u00e4 et al. 
[p\u2009=\u20090.0112).All of the included studies analysed how sociodemographic characteristics related to adherence rates, and gender and age were the most consistently related. The review found four studies which showed that men had better treatment adherence than women and given that MS is more prevalent amongst females, this is of particular importance. Higuera et al. used mul\u00e4 et al. also foup\u2009=\u20090.006) but older age was found to be a more consistent predictor of adherence. Thach et al. [p\u2009=\u20090.011). Zecca et al. [p\u2009=\u20090.008) and Higuera et al. [p\u2009<\u20090.01). Older age was also associated with adherence in Lahdenper\u00e4 et al. [p\u2009<\u20090.0001). There is a lack of theoretical understanding in the literature about why age may predict adherence, particularly as confounding variables such as symptom stability and disability are often controlled for in studies.Six studies found older age to be positively associated with adherence, suggesting age is a strong predictor. Paolicelli et al. reportedh et al. found ada et al. also foua et al. , using ta et al. found thp\u2009<\u20090.05). Koskderelioglu et al. [p\u2009=\u20090.007). Consistent with this, Devonshire et al. [p\u2009=\u20090.02).Erbay et al. found thu et al. found the et al. found thp\u2009<\u20090.001). Furthermore, adherent participants had also been taking their current disease-modifying drug for a shorter period of time (median\u2009=\u200930.0\u00a0months) than non-adherent participants . Similarly, McKay et al. [Devonshire et al. found thy et al. found thArroyo et al. used a qp\u2009=\u20090.035). Erbay et al. [p\u2009=\u20090.0003) and Treadaway et al. [p\u2009<\u20090.001). Koltuniuk and Rosinczuk [p\u2009=\u20090.007 and p\u2009=\u20090.020, respectively). Similarly, the duration of these care activities were also positively associated with adherence (p\u2009<\u20090.05).In the de Seze et al. study ady et al. reportedy et al. also fouosinczuk used theosinczuk examinedp\u2009<\u20090.0001). Paolicelli et a. [p\u2009=\u20090.015). In addition, Li et al. [p\u2009<\u20090.05). However, Zecca et al. [p\u2009=\u20090.008). Mckay et al. [Six studies assessed participants\u2019 disability using the Expanded Disability Status Scale (EDSS) with mixed findings. Four studies found that higher disability was associated with non-adherence, however, two studies found that those with lower disability were more likely to be non-adherent. Koskderelioglu et al. found thli et a. demonstra et al. found thy et al. found a Psychological and behavioural factors.not important at all, 16.98% said it was a little important, 20.75% found it was moderately important and 18.86% found it was extremely important. In total, more than half the sample (56.59%) stated memory problems as important. Mckay et al. [A total of nine studies found that self-reported memory problems or forgetfulness were associated with poorer adherence or missing doses. Descriptive analysis carried out by Arroyo et al. revealedy et al. performey et al. study.r\u2009=\u2009\u2212\u00a00.28, p\u2009<\u20090.05) and poorer delayed list recall . Between-group analyses of adherence revealed that poor adherers demonstrated poorer performance on a test of prospective memory compared to adequate adherers. Poor adherers also recalled fewer words after a delay (mean\u2009=\u20098.29 t(53)\u2009=\u20092.09, p\u2009<\u20090.05). Devonshire et al. [p\u2009<\u20090.0001). 
Inflammatory demyelination can result in cognitive deficit in up to 75% of people with MS and can be present in the very early stages of the condition [Two studies used quantitative scales to measure cognition; Bruce et al. found the et al. used theondition . Therefop\u2009<\u20090.001) and one objective measure (p\u2009=\u20090.001)). Worse scores for retrospective self-reported adherence were also associated with increased anxiety symptoms (p\u2009<\u20090.01). There were four studies which found that depression or depressive symptoms had an association with adherence rates. Koskderelioglu et al. [p\u2009=\u20090.006). The same scale was used by Treadaway et al. [p\u2009=\u20090.0009). Similarly, Higuera et al. [p\u2009<\u20090.0001). Neuropsychiatric comorbidities are prevalent amongst people with MS, a recent meta-analysis demonstrated consistent evidence for high prevalence rates of depression (31%) and anxiety (22%) in MS patients [A diagnosis of depression, symptoms of depression or at least one psychiatric disorder were associated with poorer adherence across five studies. Bruce et al. found thu et al. found thy et al. , who alsa et al. also foua et al. also foupatients .p\u2009<\u20090.0001), sentimental and sexual scales (p\u2009=\u20090.0068) and activities of daily living (p\u2009=\u20090.0021). Treadaway et al. [p\u2009<\u20090.0001), emotional well-being (p\u2009=\u20090.0012), social function (p\u2009=\u20090.0227), overall quality of life perception (p\u2009=\u20090.0001) and mental health composite scores (p\u2009<\u20090.0001). Similarly, Hao et al. [p\u2009=\u20090.05). In relation to psychological support, Siegel et al. [Devonshire et al. analysedy et al. also looo et al. reportedl et al. conductep\u2009=\u20090.008) and up to 14 times more likely to miss multiple doses (p\u2009=\u20090.008), than those who did not drink. Similarly, Mckay et al. [Tremlett et al. found thy et al. multivarn\u2009=\u20093) and injection-related reactions (n\u2009=\u20094) were commonly reported. Arroyo et al. [t(88)\u2009=\u20092.65, p\u2009<\u20090.01).Several studies captured medication-specific reasons for participant\u2019s non-adherence to treatment, and injection anxiety . Studies did not provide separate analyses for different disease-modifying drugs prescribed, although 6 studies did adjust for the different frequencies of administration. It is possible that significant drivers of adherence within sub-groups have been masked by the grouped data.The study selection process highlighted the inconsistent operational definitions of adherence and persistence used in MS research. It is important for researchers to provide a clearer justification and understanding of these concepts and to use an adequate measure of adherence. The studies which adequately measured adherence which were included in the review have considerable methodological and clinical heterogeneity. The significant variability in adherence rates across studies may reflect this heterogeneity, therefore there is a need to develop a reliable, standardised measure of adherence. In the absence of this, studies should use more than one measure of adherence to increase ecological validity. In addition, the consistent use of the same outcome measures to quantify specific determining factors of adherence would improve the validity of findings. These methods of standardization would enable researchers to carry out a meta-analysis of the findings to draw more accurate and reliable conclusions. 
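Among the adherence measures discussed above, pharmacy-claims metrics such as the medication possession ratio (MPR) are the most straightforward to standardise. The sketch below shows how an MPR is typically computed from refill records; the refill history is an illustrative assumption, and the 80% cut-off, although a widely used convention, is not a threshold taken from the included studies.

```python
# Minimal sketch of a medication possession ratio (MPR) calculation from
# pharmacy refill records. Dates, supplies and the 80% cut-off are
# illustrative assumptions, not data from the included studies.
from datetime import date

def medication_possession_ratio(fills, period_start, period_end):
    """MPR = total days' supply dispensed / days in the observation period,
    capped at 1.0 so stockpiling does not inflate adherence."""
    period_days = (period_end - period_start).days
    supplied = sum(days_supply for fill_date, days_supply in fills
                   if period_start <= fill_date <= period_end)
    return min(1.0, supplied / period_days)

# Hypothetical refill history: (dispensing date, days of DMD supplied).
fills = [
    (date(2021, 1, 5), 30),
    (date(2021, 2, 9), 30),
    (date(2021, 3, 20), 30),
    (date(2021, 5, 2), 30),
]
mpr = medication_possession_ratio(fills, date(2021, 1, 1), date(2021, 6, 30))
print(f"MPR = {mpr:.2f}; adherent at the common 80% threshold: {mpr >= 0.8}")
```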
It would be useful to conduct longitudinal research across different time points to capture a better understanding of adherence across the course of the disease. Studies should also recruit participants who are prescribed second and third-line treatments so these can be separately systematically reviewed and then compared with first-line treatments. These findings would strengthen the evidence base for developing large-scale patient support programs and tailored interventions designed to improve compliance.Poor adherence to disease-modifying drug treatment in multiple sclerosis remains a challenge in clinical practice and has an adverse impact on prognosis. The current review has substantiated research into the factors associated with adherence, which includes gender, age, depression, cognition, treatment satisfaction and medication-specific issues. Priorities for future research include addressing the methodological and conceptual limitations of previous studies which will enable researchers to carry out meta-analyses. In an ageing population, the management of multiple sclerosis is likely to further contribute to a growing burden on healthcare services due to age-related comorbidities and further complications to disease management. Therefore, addressing issues related to adherence, and delivering interventions that improve compliance to treatment, is critical to minimising the personal and economic impact of the disease.Supplementary file1 (DOCX 28 kb)Below is the link to the electronic supplementary material."} {"text": "Premna odorata\u2019 by Abeer H. Elmaidomy et al., RSC Adv., 2020, 10, 10584\u201310598. DOI: 10.1039/D0RA01697GCorrection for \u2018Triple-negative breast cancer suppressive activities, antioxidants and pharmacophore model of new acylated rhamnopyranoses from The authors regret that the name of the author Asmaa I. Owis was incorrectly spelled as Asmsaa I. Owis in the original article. The correct author list is as displayed in this document.The Royal Society of Chemistry apologises for these errors and any consequent inconvenience to authors and readers."} {"text": "Liver lesions are difficult to diagnose and to differentiate primary from metastatic carcinoma, while Biopsy has its limitations. Cell block technology is easily accessible with high diagnostic accuracy. Our aim is 1) To find the role of cell block technology as an alternative to biopsy in identifying liver lesions; 2) To find the efficacy of cell block along with immunohistochemistry (IHC) and ancillary studies in differentiating primary from metastatic lesions; 3) To identify the site of origin of metastatic lesions. This is a descriptive study undertaken in two tertiary care hospitals over a period of three years.Retrospective review of adequate samples from fine needle aspirations from liver lesions under radiological coverage, converted into cell block was done. IHC was applied as needed. Usefulness of cell block preparation was evaluated, and the final diagnosis correlated with the biopsy results.Analysis of 323 cases found sensitivity of 98.75% and positive predictive value of 99% for all lesions. Sensitivity for metastatic carcinomas was slightly more than hepatocellular carcinoma. However, accuracy of cell block results for individual metastatic lesions and site of origin was less. IHC and morphological pattern worked as an important adjunct in the final diagnosis. On the other hand, contribution of viral markers as a supplement in the final work up was ambiguous. 
High precision of validity results of cell block technology in comparison with biopsy highlights its pivotal role in conjunction with supportive tests for diagnosing and differentiating liver lesions as well as identifying primary sites in liver metastasis. Carcinoma of liver has a prevalence of 2-8% worldwide . MetastaDiagnostic sensitivity of FNAC of liver varies from 67-100% and specificity 93-100% , 5. ThisThe Aim of this study was to evaluate the scope and accuracy of cell block following FNAC with or without immunohistochemistry along with ancillary studies for diagnosing various liver lesions . Also, we aimed at evaluating the role of cell block for differentiating primary hepatic malignancy from metastatic lesions of the liver along with the use of cell block as an adjunct to FNA in sub typing the various metastatic carcinomas and identifying the source or the origin of the malignancy.This is a retrospective descriptive study carried out at both KPC medical college, Kolkata and Medical College, Patna over a period of 3 years. As it was a retrospective study no ethical issue or patient consent was needed. A detailed previous history of any other preexisting liver disease and record of serological viral marker, where available, were collected from the surgery department. FNAC was carried out either blindly or with USG/CT guidance in the radiology department. Direct air-dried smear were stained with MGG. Some smears were immediately fixed in 95% alcohol and stained with Pap. The remaining material in the syringe was allowed to clot to form cell block, where aspiration was adequate for cell block formation .Results were analyzed by two independent senior pathologists and a final conclusion of the diagnosis was derived after discussions with a third senior faculty. All the procedures were performed following the standard operating procedures with routine and con-sistent checks to identify and address various types of errors and omissions, ensuring data integrity, correct-ness and completeness of all the available records. The quality control checks included accurate patient identification, proper fixation time, adequate processing measures, appropriate embedding techniques, precision in microtome sectioning, unacceptable artifacts and regular inspection of controls used in IHC and special stains to determine the correctness in our method.Statistical analysis was done using Chi-square to compare various parameters. The P-value was calcula-ted using the sampling distribution of the test statistics under the null hypothesis and our sample data as in a two-sided test. In our analysis, an alpha of 0.05 was used as the cut off for significance. When the P-value was less than 0.05, we rejected the null hypothesis that there is no difference between the means; thus, we concluded that a significant difference exists. So, in our study, P-value below 0.05 was taken as significant and over 0.05 as not significant. Fischer\u2019s exact test was also done to compare various parameters in the patients.Out of 416 cases who underwent guided FNAC from liver, 15 cases were considered inconclusive for reporting due to very scanty cellularity or blood only aspirate. Among the adequate aspirations which were 401 in number, the aspirate was enough to make cell block in 349 cases. Others were reported on FNAC as benign or malignant and were not included in our study.Age range varied from 42 to 84 years, with a mean age of 65.5 years. 
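The comparisons described above (chi-square and Fisher's exact tests with a 0.05 significance cut-off) and the validity figures quoted for cell block against biopsy can be reproduced from a simple 2 x 2 cross-tabulation. The sketch below is illustrative only: it assumes Python with scipy, and the counts are invented rather than taken from the study data.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Illustrative 2x2 cross-tabulation of cell block result against biopsy
# (the reference standard). The counts below are invented for the example
# and are NOT the study data.
#            biopsy malignant   biopsy benign
table = [[296, 3],   # cell block reported malignant
         [4,  20]]   # cell block reported benign

# Significance testing as described in the methods (alpha = 0.05).
chi2, p_chi2, dof, _ = chi2_contingency(table)
_, p_fisher = fisher_exact(table)
alpha = 0.05
print(f"chi-square p = {p_chi2:.5f} "
      f"({'significant' if p_chi2 < alpha else 'not significant'})")
print(f"Fisher's exact p = {p_fisher:.5f}")

# Validity indices of the kind reported for cell block against biopsy.
tp, fp = table[0]
fn, tn = table[1]
sensitivity = tp / (tp + fn)     # biopsy-proven lesions picked up on cell block
ppv = tp / (tp + fp)             # positive predictive value
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(f"sensitivity = {sensitivity:.2%}, PPV = {ppv:.2%}, accuracy = {accuracy:.2%}")
```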
Hepatocellular carcinoma was in the range of 48-84 years with a mean of 67.2 years while metastatic age range was 42-81 years with a mean of 58.4 years. Highest amount of inadequate and inconclusive smears was when the lesion size was <1 cm. Male to Female ratio was 6:4. In 251 out of 349 cases, immunohistochemical study could be done on cell block preparation. Among the rest 98 cases, 54 did not require IHC due to clear morphology on H&E staining for a final diagnosis. Of these cases, 23 were non-compliant for IHC study, mostly due to economic reasons and decided to go for direct incision biopsy as it is the gold standard. Of the patients, 18 opted for further investigation and treatment in an oncology center, while 3 were lost in follow up after H&E reporting on cell block. So, the total biopsy results were obtained for 323 cases which remained our study sample . On cell block, with or without immunohistoche-mistry, 43 cases (13.31%) were positive for hepato-cellular carcinoma, 254 cases (78.63%) were positive for metastatic lesions, 7 cases (2.1%) were suspicious of malignancy and 19 cases (5.8%) were designated as benign lesions .Individual comparison of cell block results with that of biopsy, which is the final diagnostic tool, showed a few discrepancies in interpretation of individual lesions. In biopsy, 52 cases (16.09%) were primary hepato-cellular carcinoma, 253 cases (78.32%) were metastatic lesions while 15 cases (4.64%) were actually benign and 3 cases (0.9%) were regenerative nodules . A detaiThere was occasional variance between both the results of cell block and biopsy in almost all lesions, however the disparity was obvious in undifferentiated carcinoma with eight false positive cases. Hepatocellular carcinoma was diagnosed when polygonal cells with eosinophilic cytoplasm, large vesicular nucleus with prominent nucleoli were seen in the smears. When smears showed malignant cells arranged in loose clusters or sheets of pleomorphic cells with moderate to abundant cytoplasm, they were diagnosed as metastatic adenocarcinoma. Adenocarcinoma metastasis from GIT, ovary, and pancreas with metastatic adenocarcinoma from gall bladder was differentiated with IHC and other ancillary studies like radiological imaging, history along with clinical examination of the patient. Similarly, for undifferentiated metastatic carcinoma, the site of origin of primary focus was determined by considering all the above parameters. Round cell tumor had tight clusters of monomorphic cells with nuclear molding and scanty cytoplasm. Sarcoma metastasis showed oval to spindle cells with indistinct cytoplasm. Regenerative nodules had hyperplastic hepatocytes with no distinctive cyto-architectural features and were mistaken as suspicious for malignancy on cytology. Hematological diagnosis was also missed in cell block technique. Due to aspiration from necrotic area, a few cases of metastatic adenocarcinoma were missed. Immunohistochemistry was utilized to arrive at the final diagnosis, as and when essential. Morphology was observed from the smears obtained with MGG, PAP and H&E routinely from the cell block preparation. Special stain was PAS (to look for mucin) and reticulin (to look for trabecular strand) was also performed on cell block preparation. 
In morphology, P=0.0001) (b) hepatocytic appearance (P=0.0000) C) Intracellular bile (P=0.005);1) Cytological features used were a) trabecular pattern (2) Gland formation-well-formed cluster of glands mostly seen in moderately to well differentiated adenocarcinoma.3) Special stain- a) Reticulin stain was used to see the trabecular strands thickness which was usually present in hepatocellular carcinoma and some benign lesions whereas it was absent in metastatic carcinoma.b) PAS to look for mucin was present in some varieties of moderately differentiated to well differentiated metastatic carcinoma (mucin secreting adenocarcinoma) whereas it was universally absent in hepatocellular carcinoma and benign lesions.Immunohistochemistry was done with CK7 & CK20, Hep Par-1 and p CEA staining. All cases of hepato-cellular carcinoma were positive to Hep Par-1 and negative for CK7 and CK20. pCEA was equivocal. All cases of moderately to well differentiates adenocarci-nomas were strongly positive for CK7 and weakly positive for CK20 and pCEA. Hep Par-1 was uniformly negative. Poorly differentiated metastatic carcinoma was positive for CK7 and negative for Hep Par-1. CK20 and pCEA were equivocal and were not helpful. Benign lesions of the liver were Hep Par-1 positive except for the abscess (2 cases). \u03b1 feto protein was highly positive for hepatocellular carcinoma but poorly differentiated lesions also showed focal positivity in certain cases. Neuroendocrine markers were positive for round cell tumors with chromogranin displaying stronger positivity than synaptophysin. A few equivocal results were also discerned in metastatic lesions. SMA was positive in sarcomatous lesions which along with morphology helped in diagnosis. All the other markers were negative.Serological studies of viral markers were documented from the patient\u2019s history recorded in the surgical department and were available for 16 cases of metastatic carcinoma and 39 cases of HCC. Serum viral markers including HbsAg and anti HCV antibody checked. Viral assay for both Hepatitis B (titer of Hep B DNA) and Hepatitis C (titer of Hep C RNA) were done. While either hepatitis B or Hepatitis C were present, in cases of hepatocellular carcinoma (39/323) consistently more often than in both poorly differentiated and moderately to well differentiated metastatic adenocarcinoma, it is not helpful to differentiate between benign liver disease and hepatocellular carcinoma. The number (16/323) of viral markers done in the metastatic group was very less, however an increased percentage was found to be positive in those tested as it was done only in cases showing liver damage (obtained by history and elevated liver enzymes) .A detailed statistical analysis showed sensitivity of all the lesions diagnosed through cell block method to be 98.75% with positive predictive value of 99% and P-value highly significant at <0.00001. Diagnosing metastatic carcinoma was also very accurate with positive predictive value of 99.2%. Primary lesion like hepatocellular carcinoma with 100% positive predictive value, 91.5% sensitivity and significant P-value had very precise results on cell block. However, differentiating the various types of metastatic lesions on cell block was less on target with accuracy ranging from 66.66% to 100% for various carcinomas . FNAC from liver has proven to be a better diagnostic tool than core needle biopsy or open biopsy in terms of cost, procedure and early diagnosis . Liver a et al. and Willems et al. 
bigger than 5 cm had better successful aspiration and greater accuracy than tumor <1 cm. Similar results depending on tumor size is detected by Voits et al. , 11. For similar to Tao et al. whose st et al. , 14. Int et al. . et al. ( et al. ( et al. and Fan et al. ( et al. and Edoo et al. (Immunohistochemistry helps in classification and prognostication of hepatocellular tumors which is shown in the study by Cheuk-lam Lo et al. . Careful( et al. . CK7 andn et al. , 19. Theo et al. , 21 foun et al. and Nguyen et al. ( et al. ( et al. and Murugavel et al. ( et al. ( et al. ( et al. (pCEA is a useful contributor to the diagnosis of small liver tumor still amenable by surgery. Wang et al. , 23 show( et al. stated tl et al. , 26 corr( et al. and Veen( et al. found ch( et al. showed t et al. ( et al. (et al., Di Bisceglie et al. and Yuen et al. ( et al. ( Noh et al. found ou( et al. believedn et al. -35 also ( et al. found ou et al. (et al. and Iyer et al. ( et al. (Mathew et al. discover et al. , 35. Ourr et al. , 38 thour et al. . Our stur et al. . In a fe( et al. found poCell block converts a suspicious report into a definitive diagnosis. We have to ask ourselves, \u201cDo we really need to do core biopsy?\u201d Because in resource limited areas cell block is a poor man\u2019s core needle biopsy and can be used as an adjunct to histopathology.In cell block, architecture of tumor is maintained at places whereas core biopsy can have crush artifact. Even in higher centers, in certain cases, cell block is better than core biopsy, which is formalin fixed, as studies show that formalin can hinder in DNA extraction, especially in molecular studies. However, in pediatric age group, FNAC with cell block can be used in certain cases though core biopsy remains the gold standard in most pediatric tumors. Some believe that biopsy tract seedling using unsheathed needle is probably more common than fine needle aspiration spilling, though there is no proven data. A satisfactory FNAC sample with cell block is a very useful diagnostic tool for evaluation of various liver lesions with high degree of diagnostic accuracy. Also, it reduces the timing, the economic burden and morbidity of the patient.In cases where diagnosis by FNAC is equivocal, it is recommended to perform FNA with cell block preparation and IHC studies as a part of routine laboratory practice to improve diagnostic precision. Because of its high sensitivity, Cell Block technique is a useful adjunct to routine FNA smear because multiple sections can be cut from a cell block and IHC and special stains can be applied. Viral markers, if available, can be correlated to arrive at the final diagnosis. The combination of cell block with all these adjunct techniques is of immense help in identifying primary carcinoma and differentiating it from metastatic deposits in the liver without any invasive procedure. The source of the primary site in metastatic deposits can be detected which can guide the treatment protocol and even helps in predicting the prognosis."} {"text": "Cardiovascular Diabetology, Sinha et al. :66) reported that prediabetes (defined as a fasting plasma glucose concentration of 100\u2013125\u00a0mg/dL) was associated with a higher lifetime risk of heart failure in middle-aged White adults and Black women, with the association attenuating in older Black women. 
This study provides important evidence that the risk of heart failure is increased in people with a fasting plasma glucose concentration as low as 100\u00a0mg/dL, supporting the definition of prediabetes according to the American Diabetes Association guideline. The study also strongly supports the notion that prediabetes should be regarded not only as a high-risk state for the development of diabetes but also as a risk factor for cardiovascular morbidity.In a recently published paper in Prediabetes, also termed \u201cintermediate hyperglycemia,\u201d is an intermediate metabolic state between type 2 diabetes mellitus (T2DM) and normoglycemia and includes patients with impaired fasting glucose (IFG) and impaired glucose tolerance (IGT) , 2. IFG Cardiovascular Diabetology, Sinha et al. [In a recently published paper in a et al. analyzeda et al. . This sta et al. , Cai et a et al. publishea et al. . The resa et al. . TogetheNotably, individuals with prediabetes are more likely to progress to T2DM. Therefore, the risk of heart failure may be confounded by progression to T2DM. Unfortunately, these studies did not adjust for such confounders. In future studies, a series of blood glucose measurements is needed in individuals with prediabetes to determine the blood glucose trajectory and the association of diabetic complications, including heart failure. Furthermore, it is known that lifestyle interventions in people with prediabetes can reduce the risk of cardiovascular events and all-cause mortality . It is a"} {"text": "Periprosthetic joint infection (PJI) is the most devastating complication of joint replacement that seriously affects the quality of life and causes a heavy burden to the families and society. Due to shorter hospital stays, lower costs, improved joint function and less morbidity, a process of debridement, antibiotics and implant retention (DAIR) is recommended as the preferred treatment for acute periprosthetic joint infection. However, the factors that impact the success rate of DAIR remain controversial. This article evaluates the influential factors of DAIR and provides insights for orthopaedics surgeons to make optimal decisions to improve the success rate of DAIR.Staphylococcus aureus (MRSA) infections may be associated with lower DAIR success rate. To the contrary, early surgery, radical debridement, exchange of removable components, washing with iodine and vacuum sealing drainage (VSD) may improve the success rate of DAIR. A sinus tract may not be absolutely contraindicated, but surgeons should treat it with caution. As there is no consensus on many issues, more high-quality research is required.The poor general condition of patients, high preoperative C-reactive protein (CRP) level, repeated joint surgeries, and Methicillin-resistant PJI is the most serious complication that contributes to more than a quarter of revision surgeries after joint arthroplasty , 2. As pThe treatment of PJI aims to eradicate infection, relieve pain and improve function. The therapeutic regimens include simple antibiotic treatment, DAIR, one-stage revision or two-stage revision, arthrodesis and amputation , 6. Due DAIR is currently considered the preferred treatment for acute PJI due to its low trauma, satisfactory cost-effectiveness and less impairment on joint function , 9, 10. et al [DAIR is only suitable for those PJI patients with good conditions of bone and soft tissue in whom no sinus tract exists between the joint prosthesis and the skin . 
Qasim eet al showed tWith the development of a surgical technique for DAIR and the application of postoperative antibiotics, some studies showed that the existence of a sinus tract was no longer an absolute contraindication for DAIR. A retrospective study reported 14 PJI cases of a sinus tract treated with radical debridement, removal of dead tissue, shaving of pseudo-membrane, and intraoperative irrigation with a large volume of physiological saline, hydrogen peroxide and iodine. A satisfactory outcome was achieved in all patients as there was no recurrence of infection after 1\u20135\u2009years follow-up [et al [et al [et al [Preoperative CRP level is also an influential factor contributing to a failed outcome of DAIR . Vilchezet al believedl [et al believedl [et al reportedet al [ et al [Previous joint surgery also affects the prognosis of DAIR. Abrman et al reported [ et al showed aIn addition to the above factors, age, immunity and nutritional conditions, the American Society of Anesthesiologists (ASA) score, and complications such as hypertension and diabetes also affect the success rate of DAIR .There is no consensus on age. Although most scholars suggest increased age as a risk factor for failure, recently a high-quality meta-analysis reportedStaphylococcus infection has been reported [Staphylococcus infection [et al [Staphylococcus infection. Wouthuyzen et al [Staphylococcus should be reconsidered. A recent study found that the success rate in patients treated with DAIR for Staphylococcus infection was 76.2% [Although a 100% success rate of DAIR for reported , most stnfection , 24\u201326. n [et al reporteden et al showed tStreptococcus is relatively high [et al [Streptococcus infection was 84%. However, in 2017, a large multicenter retrospective study showed that the success rate of DAIR for Streptococcus infection was only 57.9% [The success rate of DAIR for treating periprosthetic infection caused by ely high , 28. Lamh [et al reportedly 57.9% .Periprosthetic infections caused by gram-negative bacteria have been infrequently reported. A large multicenter retrospective observational study reportedStaphylococcus, Acinetobacter, Propionibacterium species, and Corynebacterium species which were more common in seronegative infections. This study suggested that low virulence infections may contribute to a relatively high success rate. However, the sample size was small and further data are needed.McArthur et al reportedet al [et al [Fehring et al showed al [et al suggesteet al [et al [Sendi et al reportedl [et al reportedl [et al reportedThe presence of biofilm on the surface of an implant is the most important reason for the transformation of an early infection of an artificial joint into a chronic infection . Three wet al [et al [Liu et al reportedl [et al showed tl [et al suggestel [et al , 40. We Thorough debridement is essential to the success of DAIR. All infected tissues, synovial membrane, necrotic tissues, scar tissue and foreign materials must be completely removed. The success rate of DAIR in PJI patients is very low if debridement is not performed .et al [et al [The prosthesis is retained when DAIR is performed to treat PJI, but removable components must be exchanged whenever possible , 35, 42.et al [et al [High volume povidone-iodine and low-pressure irrigation should be used intraoperatively with a lavage volume at least 9\u2009L. 
Li et al reportedl [et al VSD helps to continuously drain necrotic tissue and exudate, eliminate edema, stimulate growth of the granulation tissue, improve local blood supply and enhance local immunity. Since its development, the VSD technique has been applied to the treatment of PJI with increasing frequency.et al [The negative pressure value of VSD also had a significant influence on the effect of drainage. When high negative pressure drainage was performed, the total drainage and infection control rate were significantly better compared to low negative pressure drainage . In 2014et al reportedWe suggest VSD be used for selected patients because of limitations such as requirement for frequent exchange of materials which increases hospitalization expenses and hospital stays. et al [et al [et al [et al [et al [vs. 71%). Moreover, Jacobs et al [A voluminous literature discussed the relationship between single or multiple debridement and the success rate of DAIR. Some scholars reported that multiple debridement was related to the increased failure rate of DAIR , 47\u201350. l [et al also repl [et al reportedl [et al reportedbs et al found thWhen PJI patients were treated by DAIR without postoperative antibiotics, the failure rate after 3 years follow-up was as high as 86% .For PJI patients with positive bacterial culture and drug sensitivity test results, antibiotics sensitive to pathogenic bacteria should be used after DAIR. The selected antibiotics also should be biofilm- penetrative to kill the pathogenic bacteria remaining in the biofilm after DAIR.et al [et al [et al [For PJI patients with negative bacteria culture or no drug sensitivity results, empirical antibiotic therapy usually is required. Selected antibiotics must have an adequately broad antibacterial spectrum to cover common pathogenic microorganisms. Sousa et al suggestel [et al reportedl [et al in 2013 As to the duration of antibiotic treatment after DAIR, the Infectious Diseases Society of America (IDSA) advises initial intravenous therapy for 2\u20136\u2009weeks followed by oral antibiotics for 3 months after hip surgery or 6 months after knee surgery. Several studies showed no differences in infection control rate when comparing shorter duration with longer duration postoperative antibiotics. Other studies indicate that prolonged parenteral antibiotic therapy increases the economic burden and the risk of drug resistance , 56.DAIR is the preferred treatment for acute PJI with lower trauma and cost compared to revision surgery. A poor general condition of the patient, high preoperative CRP level, repeated joint surgery, MRSA infections may be associated with reduced DAIR success rate. Early surgery, radical debridement, the exchange of removable components, washing with iodine and VSD may improve the success rate of DAIR. A sinus tract may not absolutely contraindicate DAIR, but surgeons should treat it with caution. Orthopedic surgeons should take precautions to achieve a better outcome in PJI patients. As studies about DAIR are mostly retrospective and involve small samples, there is still no consensus on many aspects of DAIR effectiveness. More high-quality researches are required to improve the success rate of DAIR for PJI."} {"text": "Appl. Mech.86, 011002 (doi:10.1115/1.4041352)) is used to infer power-law creep parameters, the creep exponent and the associated pre-exponential factor, from noise-free as well as noise-contaminated indentation data. 
A database for the Bayesian-type analysis is created using finite-element calculations for a coarse set of parameter values with interpolation used to create the refined database used for parameter identification. Uniaxial creep and stress relaxation responses using the identified creep parameters provide a very good approximation to those of the \u2018experimental\u2019 materials with stress exponents of 1.15 and 3.59. The sensitivity to noise increases with increasing stress exponent. The uniaxial creep response is more sensitive to the accuracy of the predictions than the uniaxial stress relaxation response. Good agreement with the indentation response does not guarantee good agreement with the uniaxial response. If the noise level is sufficiently small, the model of Bower et al. provides a good fit to the \u2018experimental\u2019 data for all values of creep stress exponent considered, while the model of Ginder et al. provides a good fit for a creep stress exponent of 1.15.Load and hold conical indentation responses calculated for materials having creep stress exponents of 1.15, 3.59 and 6.60 are regarded as input \u2018experimental\u2019 responses. A Bayesian-type statistical approach (Zhang Instrumented indentation is attractive for identifying creep properties as it is non-destructive, requires a relatively small specimen, and has been used for the identification of mechanical properties of a broad range of materials. However, indentation involves a complex deformation field, and extracting material properties from experimentally measured indentation quantities can be complex and non-unique.et al. )et al. and from [et al. .et al. [Here, the Bayesian statistics-based approach of Zhang et al. is used (i)Can very different power-law creep parameters give nearly the same responses in load and hold indentation creep? There are sets of rate-independent plastic material parameters that have indistinguishable force versus depth responses in conical indentation but very different uniaxial responses \u201317. (ii)Does using the residual surface profile in addition to or instead of the indentation depth versus time data improve the quality of the prediction? (iii)How sensitive is the predicted creep response to noise in the \u2018experimental\u2019 indentation data? (iv)et al. [et al. [How do the power-law creep properties obtained using the analytical steady-state creep results of Bower et al. and Gind [et al. compare The questions addressed include:. 2Indentation into an isotropic elastic power-law creep solid by a conical indenter is modelled as sketched in Calculations are carried out for an indenter angle on depth . The indm=htan\u2061\u03b3 .The calculations are carried out using a quasi-static Lagrangian implementation in the commercial finite-element program ABAQUS standard (a)The magnitude of the indentation force in the As described in the ABAQUS manual, The coefficient of friction is taken to be equation are termWith (b)The elastic-creep constitutive relation of ABAQUS standardThe creep part of the rate of deformation tensor, . 3The equations of the Bayesian-type statistical approach used to infer the creep parameters given in .The \u2018experimental\u2019 indentation data consist of: (i) a vector characterizing the residual surface profile, Finite-element solutions for a normalized residual surface profile, denoted by Treating the indentation depth versus time data and the surface profile data as being independent, the posterior probability The constants et al. 
[The likelihood functions, which measure the difference between the \u2018experimental\u2019 data and the predicted responses in the database, are 3.4p amorphous selenium (Se) at For Se, the values of ken from , and the is from . For CsHble 1 of . The val 1(a) of . For Sn,ken from and the 2(b) of . The vale 45 GPa and the ken from .. 5 (a)The imposed loading history models a constant load and hold indentation creep test, with the magnitude of the applied force on the indenter, equation , prescriet al. [For power-law creep with elastic strains neglected, equation , Bower eet al. derived rea) see , and \u03b2 iFor equation with resFor an elastic solid, the relation between indentation depth Sneddon :5.5helaet al. [As exploited by Su et al. , the inde length . Attentie length .The values of et al. [In their experiments Su et al. found thequation were car (b)The reference finite-element mesh for the configuration in n ABAQUS standarde ABAQUS indentate ABAQUS .Convergence was investigated using a refined mesh with (c)a shows the computed normalized indentation depth, et al. [Figure 2n et al. and showb shows c shows a b. Note that the value of a\u2013c, and are not used for identifying the power-law creep parameters.Figure 2In the early stages of indentation, the plot of indentation depth trast to , the find shows the normalized surface profiles near the indenter after unloading for the three materials. The residual surface profile of Figure 2a\u2013c, respectively. For each of the three materials, the state of deformation shown is at the maximum indentation depth equation . The sizThe extent, in terms of aterials . The cred\u2013f shows the contours of the corresponding mean normal stress Figure 3 (d)The creep exponent equation define tFor each of the three \u2018experimental\u2019 materials in quations , and of quations , the \u2018exIn all three databases, the creep exponent As in ,26,27, dThe accuracy of the interpolation was checked by carrying out a few finite-element calculations using interpolated values of material parameters. The agreement between calculated and interpolated responses was best for larger values of the creep stress exponent . 6Values of the creep material parameters For uniaxial creep loading the prescribed stress For an imposed For uniaxial stress relaxation loading, the displacement rate is prescribed so that For an imposed quations and (6.4A significant difference between the indentation depth versus time response in equation and the (a)For the three \u2018experimental\u2019 materials in equation .Once the initial database is constructed, the computations for the interpolation and for the statistical analysis are very light and are quickly carried out on a personal computer . (i)For each database, the posterior probability distribution is calculated from: (i) indentation depth versus time data (HT); (ii) residual surface profile data (S); and (iii) both indentation depth versus time data and residual surface profile data (HTS). The values of For Se, the predicted values of The predicted parameter values The predicted parameter values of a. 
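As a schematic illustration of the grid-based Bayesian update described above, in which likelihood functions compare database predictions with the 'experimental' indentation data under a prior over the coarse parameter grid, the following sketch computes a posterior over (n, log10 A) with numpy. The closed-form forward model, the parameter ranges and the noise level are placeholders standing in for the interpolated finite-element database of the paper, not values taken from it.

```python
import numpy as np

rng = np.random.default_rng(0)

def predicted_depth(n, logA, t):
    """Toy stand-in for the interpolated finite-element database that maps
    creep parameters (stress exponent n, pre-exponential factor A) to an
    indentation depth history h(t). The closed form is illustrative only."""
    A = 10.0 ** logA
    return (A * t) ** (1.0 / (n + 1.0))

# Synthetic 'experimental' depth versus time data during the hold period,
# contaminated with Gaussian noise.
t = np.linspace(0.1, 100.0, 200)
h_exp = predicted_depth(3.59, -4.0, t)
sigma = 0.01 * h_exp.max()                 # assumed measurement noise level
h_exp = h_exp + rng.normal(0.0, sigma, t.size)

# Coarse parameter grid (refined by interpolation in the paper).
n_grid = np.linspace(1.0, 7.0, 61)
logA_grid = np.linspace(-6.0, -2.0, 81)

# Gaussian likelihood with a flat prior over the grid gives the posterior.
log_post = np.empty((n_grid.size, logA_grid.size))
for i, n in enumerate(n_grid):
    for j, logA in enumerate(logA_grid):
        resid = h_exp - predicted_depth(n, logA, t)
        log_post[i, j] = -0.5 * np.sum((resid / sigma) ** 2)

post = np.exp(log_post - log_post.max())
post /= post.sum()                          # normalised posterior over the grid
i_map, j_map = np.unravel_index(np.argmax(post), post.shape)
print(f"MAP estimate: n = {n_grid[i_map]:.2f}, log10(A) = {logA_grid[j_map]:.2f}")
```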
The corresponding stress relaxation responses using equation With the noise-free \u2018experimental\u2019 responses denoted by e Matlab functionThe standard deviation As in , calculaThe material parameters and associated posterior probability obtained based on indentation depth versus time data (HT), residual surface profile data (S) and on both indentation depth versus time data and residual surface profile data (HTS) are given in The predicted uniaxial creep and stress relaxation responses for Se obtained from one-element finite-element calculations (giving homogeneous stress and strain fields) using the creep properties in For The creep parameters and associated posterior probability values obtained for g. 10 of , with inb. On the other hand, the creep response in figure 14a shows a large difference between the uniaxial creep response of the \u2018experimental\u2019 material and the prediction based on the The comparison of \u2018experimental\u2019 and noise-contaminated HTS data predicted indentation responses for Sn in a, the creep responses predicted based on both the The noise-contaminated uniaxial creep and stress relaxation predictions for Sn in figure 16For all three materials, values of The results here show an increasing sensitivity to noise with increasing creep stress exponent (b)c are taken to be et al. [et al. [The aim of the analytical power-law creep models is to provide explicit expressions for relating measured indentation responses to the constitutive parameters equation . The firequation , the sloet al. [et al. [Using expressions derived by Bower et al. and idenequation with the (et al. :6.7\u03b1BFN angle \u03b3 . The valand 2 of .et al. [The closed-form algebraic expression for r et al. based onFor the noise-contaminated predictions of the analytical models, noise is added to the power-law regime indentation depth versus time data using the Matlab functionequation and the equation . Note thquations and (6.8The values of equation is givenFor Se (For equation while thequation differ fFor Sn (equation are bothequation also proequation being obThe accuracy of the predictions becomes more sensitive to noise for larger values of the stress exponent equation and equaequation , respect. 7et al. [et al. [et al. [ 1. \u2014For Se ( \u2014For Sn (The Bayesian-type statistical approach provides the values of power-law creep parameters that provide a good fit to the indentation responses of all the materials considered when based on noise-free data and for sufficiently small noise amplitudes. The sensitivity to noise increases with increasing creep stress exponent 2.Can very different power-law creep parameters give nearly the same responses in load and hold indentation creep? In the circumstances analysed, different values of the power law creep parameters did give reasonably good fits to the \u2018experimental\u2019 indentation data, particularly for noisy data, but no cases were found where very different values of both power-law creep parameters gave nearly the same indentation response. 3.Does using the residual surface profile in addition to or instead of the indentation depth versus time data improve the quality of the prediction? Using both indentation depth versus time data and residual surface profile data generally leads to an improved prediction of the uniaxial creep and stress relaxation responses. For Se ( 4.How sensitive is the predicted creep response to noise in the \u2018experimental\u2019 indentation data? 
The uniaxial creep response is more sensitive to the accuracy of the predicted values of the power-law creep parameters, and therefore to noise, than is the uniaxial stress relaxation response. 5.et al. [et al. [et al. [et al. [et al. [How do the power-law creep properties obtained using the analytical steady-state creep results of Bower et al. and Gind [et al. compare [et al. are in vThe Bayesian-type statistical approach of Zhang et al. has beenr et al. and Gind [et al. .1. The"} {"text": "To investigate clinical outcomes and unmet needs in individuals at Clinical High Risk for Psychosis presenting with Brief and Limited Intermittent Psychotic Symptoms (BLIPS).Prospective naturalistic long-term (up to 9 years) cohort study in individuals meeting BLIPS criteria at the Outreach And Support In South-London (OASIS) up to April 2016. Baseline sociodemographic and clinical characteristics, specific BLIPS features, preventive treatments received and clinical outcomes (psychotic and non-psychotic) were measured. Analyses included Kaplan Meier survival estimates and Cox regression methods.One hundred and two BLIPS individuals were followed up to 9 years. Across BLIPS cases, 35% had an abrupt onset; 32% were associated with acute stress, 45% with lifetime trauma and 20% with concurrent illicit substance use. The vast majority (80%) of BLIPS individuals, despite being systematically offered cognitive behavioural therapy for psychosis, did not fully engage with it and did not receive the minimum effective dose. Only 3% of BLIPS individuals received the appropriate dose of cognitive behavioural therapy. At 4-year follow-up, 52% of the BLIPS individuals developed a psychotic disorder, 34% were admitted to hospital and 16% received a compulsory admission. At 3-year follow-up, 52% of them received an antipsychotic treatment; at 4-year follow-up, 26% of them received an antidepressant treatment. The presence of seriously disorganising and dangerous features was a strong poor prognostic factor.BLIPS individuals display severe clinical outcomes beyond their very high risk of developing psychosis and show poor compliance with preventive cognitive behavioural therapy. BLIPS individuals have severe needs for treatment that are not met by current preventive strategies. The BL, et al. d). For e et al., ), it ext, et al. a). Despi et al., a. A simiThis study aims at unravelling the broader clinical outcomes in individuals experiencing a BLIPS and highlighting their potential unmet needs. Firstly, we will describe the uptake of NICE-recommended treatment as well as other types of treatments in this population. Secondly, we will describe the risk of developing poor clinical outcomes such as the risk for the first admission to mental health hospital, of receiving a first compulsory treatment, of developing a psychotic disorder and of receiving a first antipsychotic or antidepressant treatment. Thirdly, we will describe the association between candidate prognostic predictors and clinical outcomes.et al., et al., et al., et al., We included all CHR-P subjects referred for suspicion of psychosis risk to the Outreach and Support in South London (OASIS) service, South London and Maudsley (SLaM) NHS Foundation Trust cohort study in CHR-P subjects who met BLIPS criteria. 
The clinical assessment and follow-ups were done as part of the standard clinical routine of OASIS.et al., et al., et al., et al., et al., et al., The details of the psychopathological CHR-P assessment conducted at OASIS have been described previously , type of BLIPS , the presence of seriously disorganising and dangerous features was investigated plotting Kaplan Meier .Sociodemographic and clinical characteristics of the sample, as well as treatments uptake over follow-up, were described with mean and s.d.\u00a0=\u00a05.81), 79% of them were single and 34% unemployed. The proportion of white (43%) and black (46%) ethnicities was similar and reflected by a HONOS score of 10 (s.d.\u00a0=\u00a06.36). Most BLIPS subjects (60%) did not meet additional CHR-P subgroups. About one-third (30%) of them displayed seriously disorganising or dangerous features. The BLIPS episodes lasted on average 7 days . In about one-third of the BLIPS (35%), the onset was abrupt (within 48\u00a0h). Acute stress was present in about one-third of BLPS (32%), while significant life events were noted in half of the cases (49%). A minority of the BLIPS episodes (20%) were induced by illicit substances, but the proportion of lifetime use of illicit substances was higher (55%). About 38% of individuals with BLIPS presented a positive familial history for psychotic or non-psychotic mental disorders. Lifetime trauma was recorded in about 45% of BLIPS, while the proportion of comorbid emotionally unstable personality disorder was 24%.As shown in The vast majority of BLIPS individuals (80%), despite being systematically offered CBT for psychosis, did not fully engage with it and did not receive the minimum effective dose . Only thThe cumulative risk of a first episode of psychosis was 0.194 (95% CI 0.126\u20130.290) at 1 year , 0.299 (95% CI 0.211\u20130.415) at 2 years , 0.428 (95% CI 0.314\u20130.562) at 3 years , 0.483 (95% CI 0.359\u20130.624) at 4 years , 0.519 (95% CI 0.368\u20130.667) at 5 years a.Fig. 1The cumulative risk of a first hospitalisation b was 0.1The cumulative risk of a first MHA c was 0.0The cumulative risk of a first antidepressant d was 0.1The cumulative risk of a first antipsychotic e was 0.3The only prognostic factor which was significantly associated with the risk of psychosis onset, risk of first hospitalisation and risk of first MHA section was the presence of seriously disorganising and dangerous features . This fan\u00a0=\u00a0102) with the longest follow-up (up to 9 years). They were mostly young males of black and white ethnicities with low baseline functioning. The onset of the BLIP was abrupt and associated with acute stress in one-third of the cases, and only in a minority of the cases with illicit substance use. The vast majority (80%) of BLIPS individuals, despite being systematically offered CBT for psychosis, did not engage with it and did not receive the minimum effective dose. Only 3% of BLIPS received the appropriate dose of CBT. At 4-year follow-up, 52% of the BLIPS developed a psychotic disorder, 34% were admitted to hospital and 16% received a compulsory admission. At 3-year follow-up, 52% of them received an antipsychotic treatment and 26% an antidepressant treatment. The presence of seriously disorganising and dangerous features was confirmed to be a significant prognostic factor. 
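The survival analyses reported above (Kaplan Meier estimates of cumulative risk and Cox regression for candidate prognostic factors) can be sketched as follows, assuming the Python lifelines package; the toy data frame and column names are hypothetical and do not come from the OASIS cohort.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical toy data, one row per individual: follow-up time in years,
# whether the outcome (e.g. transition to psychosis) was observed, and a
# baseline binary prognostic factor. Column names are illustrative only.
df = pd.DataFrame({
    "years":         [0.8, 2.1, 4.0, 5.5, 1.2, 3.3, 6.0, 2.7, 4.8, 0.5],
    "psychosis":     [1,   0,   1,   0,   1,   0,   0,   1,   1,   0],
    "disorg_danger": [1,   0,   1,   0,   0,   1,   0,   1,   0,   1],
})

# Kaplan-Meier estimate of the cumulative risk of the outcome over follow-up.
kmf = KaplanMeierFitter()
kmf.fit(durations=df["years"], event_observed=df["psychosis"])
cumulative_risk = 1.0 - kmf.survival_function_   # risk = 1 - survival
print(cumulative_risk.tail())

# Cox proportional-hazards regression for the candidate prognostic factor.
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="psychosis")
cph.print_summary()   # hazard ratio and 95% CI for 'disorg_danger'
```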
These findings indicate that BLIPS individuals have severe unmet needs for treatment that are not currently targeted by preventive interventions.To our best knowledge, this is the largest cohort of BLIPS individuals . The combination of real-world ecological outcomes and psychometric interviews (e.g. CAARMS-based definition of psychosis onset) in a longitudinal design is rare in the CHR-P literature (Webb et al., et al., et al., et al., et al., et al., Firstly, this study described the uptake of preventive treatments that are recommended by current clinical guidelines. As noted above, CBT is the first-line treatment recommended not only by the NICE NICE, but alsoet al., et al., et al., et al., et al., et al., et al., Secondly, this study described the risk of developing poor clinical outcomes. The very high risk of developing psychosis of BLIPS individuals is not a novel finding (Fusar-Poli et al., Thirdly, this study described the association between candidate prognostic predictors and clinical outcomes in BLIPS. Our group has previously demonstrated that the presence of seriously disorganising or dangerous features was a robust prognostic factor when controlling for several confounders including age, HONOS, SOFAS, CAARMS P1-P4 total score, gender, borough, ethnicity, marital status, employment status, BLIPS subgroup and recurrence of BLIPS (Fusar-Poli a priori predictors that were selected based on a priori clinical knowledge, in line with methodological recommendations (Fusar-Poli et al., There are also some limitations to this study. Although this is the largest cohort study in BLIPS individuals ever conducted, it was still representing a relatively small sample size. As such, the current study may have been underpowered to estimate the impact of substance-induced BLIPS on the first hospitalisation, first MHA section and first prescription of antidepressant. Because of these inherent limitations, we did not seek to develop a prognostic model and to validate it but rather provided descriptive analyses on the potential association of prognostic factors and clinical outcomes. To further contain the risk of fishing expeditions and type I false positives, we restricted the Cox regression analyses to a subset of BLIPS individuals display severe clinical outcomes beyond their very high risk of developing psychosis. There is poor uptake of preventive CBT in BLIPS individuals. These individuals have severe needs for treatment that are not met by current preventive strategies."} {"text": "The authors dissect three possible enzyme mechanisms of TIM that have arisen in the decades since the first X-ray crystal structure of this enzyme was published in 1975.Insights are offered on the study by Kelp\u0161as Knowles & Albery Fig. 1 was diffbery 1977 also rem al. 2021, reporteet al., 1975et al., 1981et al. of two complexes of the Leishmania mexicana triosephosphate isomerase. These complexes comprise reaction intermediate mimics, which shed light on the proton shuttling steps of the enzyme mechanism. Triosephosphate isomerase is yet another example of the case of an enzyme mechanism where controversy develops between competing models of proton movements and neutron crystallography is invoked.By 2010 there were \u2018at least 111 crystal structures of triose\u00adphosphate isomerase in the PDB\u2019 reviewed by Wierenga al. 2010, a co-auet al. 
(2021Leishmania mexicana amino acid sequence numbering): \u2018there are (i) the so-called classical mechanism, where His95 donates a proton to the enediolate oxygen and then abstracts a proton from the other hydroxyl group of the enediol, (ii) the criss-cross mechanism where the protonated Glu167 first reprotonates the charged enediolate oxygen, followed by another proton abstraction from the other hydroxyl group of the resulting enediol. In this criss-cross mechanism the role of His95 is solely to stabilize the negative charge through strong hydrogen bonds. (iii) Another possibility, called the shuffle mechanism, is where the classical mechanism is performed in only one step, with two protons being transferred concurrently. This would avoid the formation of an intermediate where His95 would have a negative charge.\u2019. The authors have three conclusions: \u2018(i) the general base is (shown to be) definitely Glu167, (ii) there is no indication of any low-barrier hydrogen bonds and (iii) that the three suggested mechanisms are all energetically possible\u2019. This latter point relied on the cross validation of the experimental results and QM calculations.Kelp\u0161as al. 2021 combinedet al. (2021et al., 2019The paper of Kelp\u0161as al. 2021 rather met al. (2021In this whole story of commitment to this enzyme, its structure and its mechanism of action, there was also the detailed X-ray crystallographic study at atomic resolution by Alahuhta & Wierenga 2010, also of al. 2021. This inet al. (2021Overall, Kelp\u0161as al. 2021 confirms"} {"text": "The degree of anisotropy (DA) on radiographs is related to bone structure, we present a new index to assess DA.In a region of interest from calcaneus radiographs, we applied a Fast Fourier Transform (FFT). All the FFT spectra involve the horizontal and vertical components corresponding respectively to longitudinal and transversal trabeculae. By visual inspection, we measured the spreading angles: Dispersion Longitudinal Index (DLI) and Dispersion Transverse Index (DTI) and calculated DA = 180/(DLI+DTI). To test the reliability of DA assessment, we synthesized images simulating radiological projections of periodic structures with elements more or less disoriented.Firstly, we tested synthetic images which comprised a large variety of structures from highly anisotropic structure to the almost isotropic, DA was ranging from 1.3 to 3.8 respectively. The analysis of the FFT spectra was performed by two observers, the Coefficients of Variation were 1.5% and 3.1 % for intra-and inter-observer reproducibility, respectively. In 22 post-menopausal women with osteoporotic fracture cases and 44 age-matched controls, DA values were respectively 1.87 \u00b1 0.15 versus 1.72 \u00b1 0.18 (p = 0.001). From the ROC analysis, the Area Under Curve (AUC) were respectively 0.65, 0.62, 0.64, 0.77 for lumbar spine, femoral neck, total femoral BMD and DA.The highest DA values in fracture cases suggest that the structure is more anisotropic in osteoporosis due to preferential deletion of trabeculae in some directions. Sugita et al. observedet al. and perm [et al. concludeThe determination of collagen and crystal orientation in connective tissues at molecular scale has been studied, for instance by diffraction -6. Amorpet al. [et al. [et al. [et al. [in vitro sample and 14 simulation models derived from this sample. The authors reported a very good correlation (r = 0.99) between anisotropy values assessed on the 3D trabecular structure and the 2D projection images. 
Luo et al. [Different methods are available to characterize the structural anisotropy on bone radiographs. In 1970, Singh [et al. develope [et al. introduc [et al. or proje [et al. , and an [et al. . A MIL t [et al. by fitti [et al. . The MIL [et al. ,16 or Ma [et al. . Therefo [et al. to compao et al. insistedet al. [et al. [et al. [et al. demonstrated the contribution of normalized Bone Mineral Density (BMD), structural features and patient age to bone mechanical properties. In 1993, Oxnard reported slight visible microarchitectural changes from the Fourier transform on bone radiographic images but no parameter calculation was performed [et al. considered the properties of Fast Fourier Transform (FFT) to evaluate trabecular bone structure: the directional pattern of frequencies with high-magnitude can identify the orientation of trabeculae. He defined three indices including spectral trabecular index, longitudinal and transversal trabecular indices [et al. concluded that this quantification detects structural changes occurring with age and may be useful in osteoporosis studies [et al. [et al. concluded that this texture analysis might lead to a better prediction of the osteoporotic fracture risk [Few methods evaluating the trabecular structure have been developed on radiographs. Caldwell et al. develope [et al. develope [et al. . Jiang e [et al. developeerformed . The samerformed . Wigdero indices . These i studies . Caligiu [et al. also useure risk . Recentlure risk .et al. have shown significant differences [Our research group has previously experienced fractal analysis on bone radiographs -29. The ferences but the The osteoporotic fracture risk prediction is important especially in postmenopausal women. BMD is currently measured in clinical practice and textural parameters can be assessed on trabecular bone radiographs ,24,25,31We describe here a new quantitative method based on the FFT to obtain anisotropy indices on bone radiographs of the calcaneus and a validation on synthetic images. The intra and inter-observer reproducibility and a pilot clinical study comparing osteoporotic fracture cases to control cases are presented.Postmenopausal women were recruited from a cross-sectional unicenter case control study. The protocol screened 400 women: 349 were enrolled. The inclusion and exclusion criteria have been previously described in details . For thi\u00ae 4500 device) at lumbar spine and femoral neck. The mean lumbar spine BMD was respectively 0.814 \u00b1 0.09 g.cm-2 and 0.895 \u00b1 0.16 g.cm-2 (p < 0.05) for fracture cases and control cases. The mean femoral neck BMD was respectively 0.650 \u00b1 0.09 g.cm-2 and 0.686 \u00b1 0.12 g.cm-2 (ns) for the fracture cases and control cases.The BMD was measured by dual energy x-ray absorptiometry . The ROI of 256 \u00d7 256 pixels with 256 gray-levels. Calcaneus is known to be a heterogeneous site of trabecular bone . For thiThe low frequency noise of an image corresponds to the gray-value variations over large distances, due to the radiological artifacts and to the fat tissues projections on the radiograph. In order to remove the low frequency noise and to take into account only trabecular components of the image, we used a convolution filter previously described by Geraets . A kernexOy) plane and can be expressed as a two-dimensional function f. The Fourier transform is expressed by a function F with the two variables \u03bc and \u03c5 corresponding to spatial frequencies in (\u03bc O\u03c5) plane. 
The 2D Fourier transform spectrum of an image is expressed by the following formula:The Fourier transform represents a signal in spatial frequency space. An image can be considered as a repartition of bright intensities in a spectrum are spread over an angle corresponding to the deviation of the structure in the original image. By analogy, we hypothesized that the periodic structure is represented by trabeculae projection, and the degree of disorientation by anisotropy.The magnitude of the transform corresponds to:Re = real part of the FFT and Im = the imaginary part of FFT.where The FFT was calculated on the gray level filtered images of the trabecular bone using the Visilog 5.1 software (Noesis). Then the magnitudes of the frequency images were divided to the total magnitude of the transform to normalize the contrast in the images according to the following formula:Trabecular bone was assimilated to an oriented structure with two main directions, the ROI including longitudinal and transversal trabeculae Figure . All theAn index relative to the trabecular fabric or degree of anisotropy was derived from the measured parameters DLI and DTI. The Degree of Anisotropy (DA) was defined as:For a perfectly isotropic structure, the FFT spectrum has a disc shape and DA is equal to unity. More isotropic trabecular bone structures will have DA values closer to unity.Figures To test the reliability of DA assessment, we tested this method on projected volume with known disorientations. We synthesized 6 series of 8 images composed of beam like structures more or less aligned following two directions in order to obtain structures close to trabecular bone of calcaneus. For the less anisotropic structure, horizontal and vertical disorientations varied from 0 to 80\u00b0 Figure , and forThe two measured parameters DLI and DTI and the derived parameter DA were determined to calculate the intra and inter-observer reproducibility. To determine the intra-observer reproducibility, a single observer performed two sets of measurements on the FFT spectra of 20 subjects with a one day interval between each set. To determine the inter-observer reproducibility, two sets of measurements were performed on the FFT spectra of 20 subjects by two observers. The observers were blinded for each sets of measurements.n subjects with the root mean square RMS average according to the following formula [The intra-observer and inter-observer reproducibilities were calculated for formula :SDj is the standard deviation for the subject j and is the average of the measurements for the subject j.where t-test for comparisons of the means after checking Gaussian distribution. The area under ROC curves was calculated for BMD measurements and DA.Results in these two groups were compared using Student's The results of synthetic images are represented on Table Table Table corresponding to the 95% confidence interval of the measurements [In control cases DA was closer to 1 due to a large spreading of frequencies in the FFT spectra while DA was higher in osteoporotic cases in relation to a narrower frequency spreading. Comparing the DA values from Table urements .Differences in DA determined by spectral analysis (p < 0.01) and lumbar spine BMD p < 0.05) were significant in control cases than in osteoporosis cases with vertebral fractures. The larger range of orientations in controls corresponds to a lesser anisotropic structure than in osteoporosis. 
DTI was the best discriminant parameter between fracture patients and control cases but also the less reproducible. There are only few transversal trabeculae comparatively to longitudinal ones and they disappear first with osteoporosis due to a less contribution in bone strength. The difference between fracture cases and controls was close to errors attributed to the intra or inter-observer reproducibility. The intra-observer reproducibility is close to the 1 to 2% of reproducibility found in usual bone densitometry method as dual x-ray absorptiometry . The pooet al. [et al. was performed with the MIL method on 3D magnetic resonance images at the radius. The MIL method is widely used to determine the trabecular 3D structure anisotropy but some authors [The higher anisotropy found in osteoporosis cases is in accordance with the findings of Newitt et al. who repo authors ,40 discuThis concept of transversal and longitudinal systems of trabeculae must be cautiously interpreted in our study. Indeed the ROI on the calcaneus radiographic images is tilted around 45 degrees Figure . The traet al. [The anisotropy of trabecular bone is different according to the skeletal sites: in a study comparing the properties of calcaneus, distal femur, proximal femur and vertebrae on human specimens, Majumdar et al. found thet al. [et al. [et al. [et al. [Our results corroborate the studies of Geraets et al. on hip a [et al. on wrist [et al. on verte [et al. on femoret al. showed in a study of the distal radius that the Line Fraction Deviation values decreased along the transversal direction with age whereas the orientation along the axial direction remains stable during the entire life [et al. [et al. in vertebrae [et al. [et al. characterized iliac trabecular bone by micro-QCT and showed that trabeculae thinning led to a more isotropic structure in the first postmenopausal years whereas the structure became more anisotropic in the later years [The lower values of the Line Fraction Deviation index found in osteoporotic subjects were consistent with the early loss of the secondary compressive trabecular group of the hip . Moreoveire life . Wigdero [et al. found usertebrae showed tertebrae . Ciarell [et al. also fouer years . They hyet al. [2 in each calcaneus. They found a spatial heterogeneity in the posterior region of 40 %. In our study the ROI was 2.7 \u00d7 2.7 cm2 and represented a much larger area; it was accurately defined by anatomic marks, this point avoiding large variation in fractal analysis [It has been well established that the calcaneus structure is heterogeneous ,46. Lin et al. have stuanalysis . FurtherThis study has shown that the DA can be determined on plain radiographs using spectral analysis. The reproducibility of the DA values may be improved by automating the method. 
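As a rough illustration of how the spectral computation described above might be automated, the following minimal Python sketch derives a degree of anisotropy from a filtered region of interest. The input array is hypothetical, and the dispersion indices are approximated here by magnitude-weighted frequency spreads along the two axes of the centred spectrum, a simplification of the procedure described above rather than a reimplementation of it.

# Minimal sketch of an automated spectral-anisotropy computation on a calcaneus ROI.
# Assumptions (not the authors' exact procedure): the ROI is a pre-filtered 256x256
# grey-level array, and DLI/DTI are approximated by the spread of normalized FFT
# magnitude along the vertical and horizontal axes of the centred spectrum.
import numpy as np

def degree_of_anisotropy(roi):
    """Return (DLI, DTI, DA) for a 2-D grey-level region of interest."""
    spectrum = np.fft.fftshift(np.fft.fft2(roi))   # centred 2-D FFT
    magnitude = np.abs(spectrum)                   # sqrt(Re^2 + Im^2)
    magnitude /= magnitude.sum()                   # normalize to total magnitude
    h, w = magnitude.shape
    cy, cx = h // 2, w // 2
    rows = np.arange(h) - cy                       # frequency offsets from the centre
    cols = np.arange(w) - cx
    # Crude directional dispersion indices: magnitude-weighted frequency spread
    # along the longitudinal (vertical) and transversal (horizontal) directions.
    dli = np.sqrt((magnitude.sum(axis=1) * rows**2).sum())
    dti = np.sqrt((magnitude.sum(axis=0) * cols**2).sum())
    return dli, dti, dli / dti                     # DA tends to 1 for isotropic spectra

# Example with a synthetic, anisotropic beam-like texture:
y, x = np.mgrid[0:256, 0:256]
roi = np.sin(2 * np.pi * x / 8.0) + 0.2 * np.sin(2 * np.pi * y / 32.0)
print(degree_of_anisotropy(roi))

Delineating the longitudinal and transversal sectors of the spectrum explicitly, rather than using fixed axes as in this sketch, would bring an automated implementation closer to the method described above.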
The distinction between fracture cases and control cases is very promising, but further studies are necessary to know if the DA evaluation could improve the osteoporotic fracture risk determination when combined with BMD and other textural parameters such as fractal analysis.The author(s) declare that they have no competing interests.BBI participated in the design of the study, carried out the measurements and the manuscript preparation, G L carried out measurements of reproducibility and synthetic images, CC performed statistical analysis and participated to the manuscript preparation, R H supervised the application of the Fourier Transform and CLB carried out the design of the study and supervised the manuscript preparation.The pre-publication history for this paper can be accessed here:"} {"text": "Fibromatosis-like metaplastic carcinoma is a newly described metaplastic breast tumor, literature on which is still evolving.A 77-year-old lady presented with a 2 \u00d7 2 cm mass with irregular margins in the upper and outer quadrant of left breast. Fine needle aspiration cytology (FNAC) from the lump was inconclusive. A lumpectomy was performed and sent for frozen section, which revealed presence of spindle cells showing mild atypia in a sclerotic stroma. The tumor cells revealed prominent infiltration into the adjacent fat. A differential diagnosis of a low-grade sarcoma vs. a metaplastic carcinoma, favoring the former, was offered. Final histology sections revealed an infiltrating tumor with predominant spindle cells in a collagenous background, simulating a fibromatosis. Adjacent to the tumor were foci of benign ductal hyperplasia and a micropapilloma. Immunohistochemistry (IHC) showed diffuse co-expression of epithelial markers i.e. cytokeratins and EMA along with a mesenchymal marker i.e. vimentin in the tumor cells. Myoepithelial markers (SMA and p63) showed focal positivity. A diagnosis of a low-grade fibromatosis-like carcinoma breast associated with a micropapilloma was formed.Fibromatosis-like carcinoma is a rare form of a metaplastic breast tumor. This diagnosis requires an index of suspicion while dealing with spindle cell breast tumors. The importance of making this diagnosis to facilitate an intra operative surgical planning is marred by diagnostic difficulties. In such cases, IHC is imperative in forming an objective diagnosis. Metaplastic breast tumors exhibit a wide morphologic spectrum, ranging from tumors with clearly visualized epithelial elements to heterologous tumors with non-epithelial elements like spindle cells, cartilage and bone -4. With A 77-year-old lady presented with the complaints of a left-sided breast lump of 1-month duration. She had been a heart patient and had been on treatment for the last 4 years. On clinical examination a 3 \u00d7 2 cm firm, mobile, non-tender lump was identified in the outer quadrant of her left breast. The overlying skin of the breast along with nipple and areola were unremarkable. There was no significant axillary or cervical lymphadenopathy. The other breast was normal. She underwent a mammographic examination, followed by fine needle aspiration cytology (FNAC) that was essentially inconclusive. Subsequently, she underwent a frozen section for a primary diagnosis.On mammography, a 2 \u00d7 2 cm ill-defined mass with irregular margins was identified in the left upper outer quadrant. No micro-calcifications were seen. The right-sided breast was normal. 
Figure .The lumpectomy specimen on cut surface revealed a firm, grey-white, fibrous, un-encapsulated nodular tumor measuring 2 \u00d7 1.2 \u00d7 0.8 cm with infiltrative borders. No area of calcification was identified. The closest margin was the base and was found to be 0.5 cm away from the tumor.Frozen sections revealed a tumor with predominant spindle cells showing mild atypia, amidst a sclerotic stroma and conspicuously infiltrated the adjacent fat. A diagnosis of a low-grade sarcoma was favored over a metaplastic carcinoma. Therefore, a sentinel lymph node biopsy and/or an axillary node dissection (ALND) were not conducted at the time of surgery.in-situ (DCIS) was seen in any of the sections. The two closest differential diagnoses considered were fibromatosis and a \"fibromatosis like\" metaplastic carcinoma. A wide panel of IHC antibody markers was performed . Thereafter, she has been on a regular 2 monthly follow-up; including her metastatic work-up with Positron emission tomography (PET-CT) of the body and bone scan. Due to a high cardiac risk, a second surgery for an ALND was not performed. Nevertheless, till 1 year and 4 months of her follow-up she has not been identified with any lymphadenopathy, recurrent lesion or metastatic lesions in her body.Spindle cell metaplastic breast tumors display a range of elements, including low-grade tumors to those with areas of high-grade sarcomas like fibrosarcomas and malignant fibrous histiocytomas -4. AmongThe present case observed in an elderly female who underwent a lumpectomy for a painless breast lump, exemplifies the diagnostic and management issues related to this tumor. The relatively bland nature of cells has prompted some authors to label it as a 'tumor' than a carcinoma .et al [et al [et al [This tumor has predilection for older women. In a classical premier series of 30 exclusive cases by Gobbi et al and 24 cl [et al , the aveet al [et al [et al [On gross findings, they observed an average size of 2.7 cms with these tumors; mostly associated with unencapsulation and irregular borders . In the et al , who notl [et al . Among tl [et al ,7. As inl [et al demonstret al [et al [Positive expression for myoepithelial markers in this case is further suggestive that these tumors might constitute as myoepithelial subtypes of a metaplastic carcinoma. A positive p63 expression, like in our case, has been lately identified in 86.3% cases of myoepithelial carcinomas in a study by Koker et al . Ultrastet al [Micropapillomas and papillomas have been described to be associated with an increased risk for subsequent development of a breast carcinoma and may form its antecedent lesion . Associaet al . Howeveret al .et al [et al [The optimal treatment options for this tumor remain unclear. However, most authors insist on a complete local treatment like lumpectomy with wide margins and adjuvant RT . Rarely,et al and Kinkl [et al . Though An objective identification of this uncommon tumor with the help of a panel of IHC, along with the presence of a micropapilloma makes this case unique. Further, an uneventful follow-up for more than a year, despite avoidance of an axillary lymph node dissection suggests a relatively lesser metastatic potential of this tumor. A certain degree of index of suspicion in the pathologists' mind while dealing with spindle cell tumors of breast is helpful in making this diagnosis. 
Documentation of more of such cases with follow-up details would bring new light to the management and biological behavior of this unusual breast tumor.The author(s) declare that they have no competing interests.BR: Involved in the diagnosis of the case; design, preparation and drafting of the manuscript.TMS: Involved in the diagnosis; preparation of the manuscript and in the ultrastructural analysis.RDB: Treating breast oncosurgeon, provided the clinical and follow-up details.RFC: Overall supervision and has given the final approval of the manuscript."} {"text": "While cI regret the inadvertent misquotation. My own data, my conclusions, and my admiration for Okura et al. are unchanged."} {"text": "Bos indicus) are reported to be comparatively less affected than exotic and crossbred cattle. However, genetic basis of resistance in indigenous cattle is not well documented. The association studies of few of the genes associated with various diseases, namely, solute carrier family 11 member 1, Toll-like receptors 1, with TB; Caspase associated recruitment domain 15, SP110 with JD; CACNA2D1, CD14 with mastitis and interferon gamma, BoLA\u00ad-DRB3.2 alleles with TTBDs, etc., are presented. Breeding for genetic resistance is one of the promising ways to control the infectious diseases. High host resistance is the most important method for controlling such diseases, but till today no breed is total immune. Therefore, work may be undertaken under the hypothesis that the different susceptibility to these diseases are exhibited by indigenous and crossbred cattle is due to breed-specific differences in the dealing of infected cells with other immune cells, which ultimately influence the immune response responded against infections. Achieving maximum resistance to these diseases is the ultimate goal, is technically possible to achieve, and is permanent. Progress could be enhanced through introgression of resistance genes to breeds with low resistance. The quest for knowledge of the genetic basis for infectious diseases in indigenous livestock is strongly warranted.Huge livestock population of India is under threat by a large number of endemic infectious diseases. These diseases are associated with high rates of morbidity and mortality, particularly in exotic and crossbred cattle. Beside morbidity and mortality, economic losses by these diseases occur through reduced fertility, production losses, etc. Some of the major infectious diseases which have great economic impact on Indian dairy industries are tuberculosis (TB), Johne\u2019s disease (JD), mastitis, tick and tick-borne diseases (TTBDs), foot and mouth disease, etc. The development of effective strategies for the assessment and control of infectious diseases requires a better understanding of pathogen biology, host immune response, and diseases pathogenesis as well as the identification of the associated biomarkers. Indigenous cattle ( The livestock sector plays an important role in supporting the livelihood of livestock keepers, consumers, traders, and laborers worldwide. The livestock is an important segment of expanding and diverse agricultural sector of Indian financial system. Approximately, 70% people of this country are either directly or indirectly involved in the occupation related to agriculture and livestock rearing . India\u2019s2), so on and so forth. 
Genome-wide association studies have attempted to confirm associations found and identify new genes involved in pathogenesis and susceptibility.Susceptibility to the mentioned infectious diseases suggested to have a genetic component. These ailments are good candidates for genetic selection as effective vaccine is either not available or if available, either costly or currently are at trial stage. Further, these diseases are difficult to cure and cause significant economic losses. Selective breeding, to increase resistance against infectious disease, will prove to be a low cost and sustainable practice. In livestock, a number of candidate genes have been studied and selected on the basis of their association to resistance or susceptibility in certain other diseases and their known role in disease pathogenesis. These genes include solute carrier family 11 member 1 (SLC11A1), interferon gamma (IFN-\u03b3), peptidoglycan recognition protein 1 (PGLYRP-1), Toll-like receptors (TLRs), Caspase associated recruitment domain 15 (CARD15), mannose binding lectin-1 (MBL-1), nitric oxide synthase is a chronic infectious disease caused by esponses . The hosIngestion of the bacillus is followed by fusion of lysosomes with the phagosome to form phagolysosomes and it is there that the phagocytes attempt to destroy the bacillus . HoweverMycobacterium avium subspecies paratuberculosis (MAP) is highly pathogenic mycobacteria affecting dairy cattle and other domestic ruminants globally [globally . The inf1 and TH2) activate different host immune responses. Mycobacterium paratuberculosis infection appears to follow patterns similar to that of M. tuberculosis. These patterns entail an initial TH1 response (referred to as \u201ctuberculoid\u201d) that is characterized by a tissue infiltrate distinguished primarily by lymphocytes with few if any detectable organisms [1 response is also characterized by the production of the cytokine IFN-\u03b3, one of the earliest detectable reactions to M. paratuberculosis infection, in addition to IL-2 and TNF-\u03b1. These cytokines are assumed to have a significant role in the CMI functions necessary to contain such an intracellular infection. During the early, subclinical stage of M. paratuberculosis infection, the TH1 T-cell activity appears to predominate. This subclinical phase of infection can last for months to years, as the bacilli are contained within macrophages and microscopic granulomas.The bacterium after crossing lumen of intestine is taken up by epithelioid macrophages which, once activated, elicit T-cell activation and clonal expansion . Two T-hrganisms . The TH1via a variety of mechanisms [The disease caused by FMD virus (FMDV), a member of the family Picornaviridae, is characterized by fever, profuse salivation, vesicles in the mouth and on the feet, and a drastic reduction in milk production. Sudden death in young stock may occur . In cattchanisms . Evidencchanisms .Rhipicephalus and Hyalomma - are most widely distributed in India [Ticks are among the most competent and versatile vectors of pathogens and are only second to mosquitoes as vectors of a number of human pathogens, such as viruses, bacteria, rickettsia, and spirochetes, and the most important vector of pathogens affecting cattle worldwide . The twoin India . In geneTheileria parva and Theileria annulata infected cattle, direct evidence for the existence of CTL activity against other tick-borne pathogens is still missing. 
Recently, it is observed by many that CD4+ T cells play a key role in the development of protective immunity against the TTBD. IFN-c produced by CD4+ T cells and NK cells activates macrophages for an enhanced phagocytosis and cytokine and nitric oxide production. These molecules either kill or inhibit the growth of the parasites. It is also advocated that memory T-helper cells provide help for the synthesis of opsonizing anti-parasite immunoglobulin (Ig) G2 antibodies. On the other hand, the overproduction of cytokines, particularly tumor necrosis factor alpha, leads to an exaggeration of the S52 (types of IgE) clinical symptoms and pathological reactions associated with TTBD [Evidence suggests the role of both innate and adaptive immune mechanisms against TTBD. While major histocompatibility complex (MHC-I) restricted cytotoxic T-lymphocyte (CTL) responses have been observed in ith TTBD .Theilerioses caused by Theileria species, where sporozoites injected by the ticks, which then enter lymphocytes and develop into schizonts in the lymph node draining the area of attachment of the tick, usually the parotid node. Infected lymphocytes are transformed to lymphoblasts which continue to divide synchronously with the schizonts so that each daughter cell is also infected [In India, tropical theileriosis and babesiosis are the major tick-borne infections. In babesioses, sporozoites are injected into the host and directly infect erythrocytes, where they develop into piroplasms resulting in two or sometimes four daughter cells that leave the host cell to infect other erythrocytes that leainfected . EventuaEscherichia coli, Staphylococcus aureus, and Streptococcus agalactiae [In dairy industry, intramammary infections are among the most important diseases of cows that cause great economic losses . Mastitialactiae .Following entry of bacteria into the mammary gland, large numbers of neutrophils migrate from the blood into the milk to control infection. This relies on the appropriate expression of genes, such as receptors present on cell surface as CD14 and TLRs, triggering a diverse array of responses including the early release of immune effector molecules and chemokines, facilitating the appropriate influx of cells following infection .The genes involved in the pathogenesis of these diseases may be exposed for possible variation which may further be associated with disease susceptibly or resistance and may be explored for variations, which can be exploited for selection for resistance/susceptibility. Since the unraveling of the genetic material and the rapid development of molecular genetics tools, the last 50 years has witnessed a tremendous growth in the knowledge base and utilization of genetic information in tackling human and animal diseases. The genetic material in livestock animal species harbors a huge collection of genetic variations, these variations are usually in the form of deletions/insertions of nucleotides, single nucleotide polymorphisms (SNPs), gene duplications, copy number polymorphisms, e.g., variable number of tandem repeats (VNTRs) and microsatellites .A genetic marker is a gene or DNA sequence with a known location on a chromosome and associated with a particular gene or trait. It is a variation, which may arise due to mutation or alteration in the genomic loci that can be observed . MarkersThe first approach in application of molecular markers has been the use of candidate genes. 
It is assumed that a gene involved in a certain trait could show a mutation causing variation in that trait, and any variations in the DNA sequences, that are found, are tested for association with variation in the phenotypic trait . Till noThe RFLP is defined by the existence of alternative alleles associated with restriction fragments that differ in size from each other. RFLP is the nucleotide base substitutions, insertions, deletions, duplications, and inversions within the whole genome, can remove or create new restriction sites . RFLP anet al. [The SSCP technique, discovered by Orita et al. , has beeSSR loci, referred as VNTRs and simple sequence length polymorphisms, are found throughout the nuclear genomes of most eukaryotes and to a lesser extent in prokaryotes . MicrosaIn 1996, Lander proposed a new molecular marker technology named SNP . When a Advances in molecular genetics are taking place leap and bound, and with the advent of new technology, detection of variation in markers is becoming easier with high precision. Among the more recent technology of detection of polymorphism is the real-time polymerase chain reaction (RT-PCR) and its use in establishing association with comparison of m-RNA expression in different pathological conditions.et al. [et al. [et al. [et al. [et al. [et al. [Deb et al. from the [et al. where he [et al. with the [et al. reported [et al. with PCR [et al. observed [et al. .et al. [HinfI enzyme, of CD14 gene (Chromosome 7) by Selvan et al. [et al. [Liu et al. using PCet al. . PCR-RFLn et al. in Karan [et al. reportedet al. [et al. [Asaf et al. in their [et al. with coret al. [et al. [In microsatellite study, Guo et al. analyzed [et al. analyzed [et al. detected [et al. based onet al. [et al. [2 (chromosome 18) in cattle plays an important role in TB resistance. Wang et al. [M. bovis. Further, Qin et al. [StyI enzyme cut site in CARD15 gene and obtained SNP was found to be significantly associated with the susceptibility to TB of Yunnan plateau dairy cows. Cheng et al. [2 gene (chromosome 19), g.19958101T>G polymorphism by PCR-RFLP and concluded that this gene may contribute to the susceptibility of TB in cattle.In bovine SLC11A1 gene (chromosome 2), Cheng et al. with the [et al. detectedg et al. with then et al. investigg et al. observedet al. [et al. [Sun et al. performe [et al. detected [et al. .et al. [et al. [et al. [Among the microsatellite Markers, as per the observation of Zanotti et al. the geno [et al. from the [et al. reportedet al. [et al. [et al. [et al. [et al. [et al. [IFN-\u03b3 gene in Holstein, Jersey and Brahman-Angus crosses.Ruiz et al. performe [et al. in their [et al. again us [et al. reported [et al. found th [et al. reported [et al. , with thet al. [The microsatellite analysis done by Pinedo et al. revealedet al. [et al. [et al. [Singh et al. in theiret al. . Similar [et al. reported [et al. with seqet al. [et al. [et al. [et al. [T. annulata infection in crossbreds as compared to indigenous cattle. Recently in a study of global gene expression profile in PBMCs with the help of Microarray, NFKBID, BoLA-DQB, HOXA13, PAK1, TGFBR2, NFKBIA genes showed breed specific differences associated with T. annulata infection [Recent years have witnessed an emergence of various dreaded TTBDs and lack of its effective control measures has compelled to look for certain markers to be included in selection programs for controlling the menace of TTBDs. IFN-\u03b3 is one such gene (chromosome 5) in which Maryam et al. with seq [et al. suggeste [et al. 
reported [et al. on the bnfection .et al. [One of the major questions which trouble all of us is what to do with large number of sporadic SNPs studies with small number of samples? Are these studies baseless or they can still be made useful? Recently, Schaub et al. using enThe evidence presented above indicates a detectable genomic basis of the various infectious diseases of economic importance, and there is a strong case for the inclusion of genetic elements within disease control strategies, particularly in the light of constraints to the sustainability of other classical methods. Disease resistance traits are among the most difficult to include in the classical \u201cBiometrical\u201d farm animal genetic programs. A further complication is that disease resistance is most often an \u201call or none\u201d trait. Although there may well be quantitative polygenic variation in resistance status, the observations are often limited to \u201cSick or healthy.\u201d This, too, reduces heritability compared with other polygenic traits, such as milk production. There are many documented instances of breed and individual differences in genetic disease resistance among farm animals. The current studies in cattle for selection against mastitis and JD in India have shown some encouraging results and have motivated the scientist and workers working in this area to look for more and more variations and associate these with disease resistance or susceptibility. Thus, selection for genetic disease resistance provides a potential avenue for improving the health status of farm animals, increasing productivity and reducing the need for pharmaceutical intervention, thereby reducing cost and delaying the appearance of resistant pathogens.Breeding for genetic resistance is one of the promising ways to control the infectious diseases. High host resistance is the most important method for controlling such diseases, but no breed is totally resistant. Total resistance to these diseases is the ultimate goal, is technically possible to achieve, and is permanent. Progress could be enhanced through introgression of resistance genes to breeds with low resistance. The quest for knowledge of the genetic basis for infectious diseases in livestock is strongly warranted, given that 70 % of people are either directly or indirectly earning their livelihood in India by livestock based occupation.BMP prepared the initial version of the manuscript. GAP and JDC assisted in literature collection. JPG and DPP conceived the idea, revised the manuscript and made final critical scientific corrections. All authors read and approved the final manuscript."} {"text": "Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems of these fields. We examine applications of deep learning to a variety of biomedical problems\u2014patient classification, fundamental biological processes and treatment of patients\u2014and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. 
Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine. A recent comparison of genomics with social media, online videos and other data-intensive disciplines suggests that genomics alone will equal or surpass other fields in data generation and analysis within the next decade . The voldeep learning has come to refer to a collection of new techniques that, together, have demonstrated breakthrough gains over existing best-in-class machine learning algorithms across several fields. For example, over the past 5 years, these methods have revolutionized image classification and speech recognition due to their flexibility and high accuracy [The term accuracy . More reaccuracy , computaaccuracy , dermatoaccuracy . Across Deep learning approaches grew from research on artificial neurons, which were first proposed in 1943 as a modsupervised applications\u2014where the goal is to accurately predict one or more labels or outcomes associated with each data point\u2014in the place of regression approaches, as well as in unsupervised, or \u2018exploratory\u2019 applications\u2014where the goal is to summarize, explain or identify interesting patterns in a dataset\u2014as a form of clustering. Deep learning methods may, in fact, combine both of these steps. When sufficient data are available and labelled, these methods construct features tuned to a specific problem and combine those features into a predictor. In fact, if the dataset is \u2018labelled\u2019 with binary classes, a simple neural network with no hidden layers and no cycles between units is equivalent to logistic regression if the output layer is a sigmoid (logistic) function of the input layer. Similarly, for continuous outcomes, linear regression can be seen as a single-layer neural network. Thus, in some ways, supervised deep learning approaches can be seen as an extension of regression models that allow for greater flexibility and are especially well suited for modelling nonlinear relationships among the input features. Recently, hardware improvements and very large training datasets have allowed these deep learning techniques to surpass other machine learning algorithms for many problems. In a famous and early example, scientists from Google demonstrated that a neural network \u2018discovered\u2019 that cats, faces and pedestrians were important components of online videos [Deep learning does many of the same things as more familiar machine learning approaches. 
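To make the regression analogy above concrete, the short sketch below fits a network with no hidden layer and a single sigmoid output by gradient descent on the cross-entropy loss; this is exactly logistic regression. The data are simulated and purely illustrative.

# Minimal sketch: a neural network with no hidden layer and a sigmoid output unit
# is logistic regression. The data and settings below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # 200 "patients", 5 input features
true_w = np.array([1.5, -2.0, 0.0, 0.5, 1.0])
y = (1 / (1 + np.exp(-(X @ true_w))) > 0.5).astype(float)   # binary labels

w, b, lr = np.zeros(5), 0.0, 0.1
for _ in range(2000):                            # plain gradient descent on cross-entropy
    p = 1 / (1 + np.exp(-(X @ w + b)))           # sigmoid "output layer"
    grad_w = X.T @ (p - y) / len(y)              # same gradient as logistic regression
    grad_b = (p - y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

print("learned weights:", np.round(w, 2))        # recovers the direction of true_w

Hidden layers and nonlinear activation functions are what take such a model beyond this regression baseline.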
In particular, deep learning approaches can be used both in e videos without Several important advances make the current surge of work done in this area possible. Easy-to-use software packages have brought the techniques of the field out of the specialist's toolkit to a broad community of computational scientists. Additionally, new techniques for fast training have enabled their application to larger datasets . Dropoutet al. [et al. [This review discusses recent work in the biomedical domain, and most successful applications select neural network architectures that are well suited to the problem at hand. We sketch out a few simple example architectures in et al. covers n [et al. provide While deep learning shows increased flexibility over other machine learning approaches, as seen in the remainder of this review, it requires large training sets in order to fit the hidden layers, as well as accurate labels for the supervised learning applications. For these reasons, deep learning has recently become popular in some areas of biology and medicine, while having lower adoption in other areas. At the same time, this highlights the potentially even larger role that it may play in future research, given the increases in data in all biomedical fields. It is also important to see it as a branch of machine learning and acknowledge that it has the same limitations as other approaches in that field. In particular, the results are still dependent on the underlying study design and the usual caveats of correlation versus causation still apply\u2014a more precise answer is only better than a less precise one if it answers the correct question.1.1.With this review, we ask the question: what is needed for deep learning to transform how we categorize, study and treat individuals to maintain or restore health? We choose a high bar for \u2018transform\u2019. Grove , the forThere are already a number of reviews focused on applications of deep learning in biology \u201317, heal1.1.1.A key challenge in biomedicine is the accurate classification of diseases and disease subtypes. In oncology, current \u2018gold standard\u2019 approaches include histology, which requires interpretation by experts, or assessment of molecular markers such as cell surface receptors or gene expression. One example is the PAM50 approach to classifying breast cancer where the expression of 50 marker genes divides breast cancer patients into four subtypes. Substantial heterogeneity still remains within these four subtypes ,25. Give1.1.2.Deep learning can be applied to answer more fundamental biological questions; it is especially suited to leveraging large amounts of data from high-throughput \u2018omics\u2019 studies. One classic biological problem where machine learning, and now deep learning, has been extensively applied is molecular target prediction. For example, deep recurrent neural networks (RNNs) have been used to predict gene targets of microRNAs (miRNAs) , and CNN1.1.3.Although the application of deep learning to patient treatment is just beginning, we expect new methods to recommend patient treatments, predict treatment outcomes and guide the development of new therapies. One type of effort in this area aims to identify drug targets and interactions or predict drug response. Another uses deep learning on protein structures to predict drug interactions and drug bioactivity . 
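As a toy illustration of the bioactivity prediction task just mentioned, the sketch below trains a small fully connected network on binary molecular fingerprints. The fingerprint matrix and activity labels are randomly generated placeholders, and the architecture is a generic example rather than any of the published models cited here.

# Toy sketch of bioactivity prediction from molecular fingerprints with a small
# feed-forward network. The fingerprint matrix is random and purely illustrative;
# it stands in for e.g. 2048-bit structural fingerprints of candidate compounds.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 2048)).astype("float32")   # hypothetical fingerprints
y = rng.integers(0, 2, size=(500,)).astype("float32")        # hypothetical active/inactive labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(2048,)),
    tf.keras.layers.Dropout(0.5),                            # dropout regularization
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)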
Drug re2.ad hoc, potentially impeding the identification of underlying biological mechanisms and their corresponding treatment interventions.In healthcare, individuals are diagnosed with a disease or condition based on symptoms, the results of certain diagnostic tests, or other factors. Once diagnosed with a disease, an individual might be assigned a stage based on another set of human-defined rules. While these rules are refined over time, the process is evolutionary and ad hoc historical definitions of disease. Perhaps deep neural networks, by reevaluating data without the context of our assumptions, can reveal novel classes of treatable conditions.Deep learning methods applied to a large corpus of patient phenotypes may provide a meaningful and more data-driven approach to patient categorization. For example, they may identify new shared mechanisms that would otherwise be obscured due to In spite of such optimism, the ability of deep learning models to indiscriminately extract predictive signals must also be assessed and operationalized with care. Imagine a deep neural network is provided with clinical test results gleaned from EHRs. Because physicians may order certain tests based on their suspected diagnosis, a deep neural network may learn to \u2018diagnose\u2019 patients simply based on the tests that are ordered. For some objective functions, such as predicting an International Classification of Diseases (ICD) code, this may offer good performance even though it does not provide insight into the underlying disease beyond physician activity. This challenge is not unique to deep learning approaches; however, it is important for practitioners to be aware of these challenges and the possibility in this domain of constructing highly predictive classifiers of questionable utility.Our goal in this section is to assess the extent to which deep learning is already contributing to the discovery of novel categories. Where it is not, we focus on barriers to achieving these goals. We also highlight approaches that researchers are taking to address challenges within the field, particularly with regards to data availability and labelling.2.1.Deep learning methods have transformed the analysis of natural images and video, and similar examples are beginning to emerge with medical images. Deep learning has been used to classify lesions and nodules; localize organs, regions, landmarks and lesions; segment organs, organ substructures and lesions; retrieve images based on content; generate and enhance images; and combine images with clinical reports ,40.Though there are many commonalities with the analysis of natural images, there are also key differences. In all cases that we examined, fewer than one million images were available for training, and datasets are often many orders of magnitude smaller than collections of natural images. Researchers have developed subtask-specific strategies to address this challenge.Data augmentation provides an effective strategy for working with small training sets. The practice is exemplified by a series of papers that analyse images from mammographies \u201345. To eet al. [A second strategy repurposes features extracted from natural images by deep learning models, such as ImageNet , for newet al. repurposet al. ,52 and net al. ,54 as weet al. . Pre-traet al. . Reusinget al. \u201359. A deet al. . Howeveret al. [The technique of reusing features from a different task falls into the broader area of transfer learning (see Discussion). 
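A minimal sketch of the two strategies just described, on-the-fly augmentation combined with reuse of ImageNet features, is given below. The backbone network, image size and single-output head are generic choices, the training data are assumed to exist as arrays or a tf.data pipeline, and inputs are assumed to be already scaled appropriately for the pretrained weights.

# Minimal sketch: simple augmentation layers plus a frozen ImageNet-pretrained
# backbone, with only a small classifier trained on the (hypothetical) medical images.
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False                               # keep the natural-image features fixed

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    tf.keras.layers.RandomFlip("horizontal"),        # lightweight data augmentation
    tf.keras.layers.RandomRotation(0.1),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. lesion vs. no lesion
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=...)  # hypothetical training data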
Though we have mentioned numerous successes for the transfer of natural image features to new tasks, we expect that a lower proportion of negative results have been published. The analysis of magnetic resonance images is also faced with the challenge of small training sets. In this domain, Amit et al. investiget al. [Another way of dealing with limited training data is to divide rich data\u2014e.g. 3D images\u2014into numerous reduced projections. Shin et al. comparedet al. [et al. [Roth et al. compared [et al. showed tet al. [et al. [Predictions from deep neural networks can be evaluated for use in workflows that also incorporate human experts. In a large dataset of mammography images, Kooi et al. demonstr [et al. estimateet al. [et al. [Systems to aid in the analysis of histology slides are also promising use cases for deep learning . Ciresanet al. develope [et al. analysed [et al. . In thiset al. [One source of training examples with rich phenotypical annotations is the EHR. Billing information in the form of ICD codes are simple annotations but phenotypic algorithms can combine laboratory tests, medication prescriptions and patient notes to generate more reliable phenotypes. Recently, Lee et al. developeet al. , but we et al. [weak labels. Such labels are automatically generated and not verified by humans, so they may be noisy or incomplete. In this case, they applied a series of natural language processing (NLP) techniques to the associated chest X-ray radiological reports. They first extracted all diseases mentioned in the reports using a state-of-the-art NLP tool, then applied a new method, NegBio [1 score, which balances precision and recall [text-mined (weakly labelled) pathology categories or \u2018no finding\u2019 otherwise. Further, Wang et al. [Rich clinical information is stored in EHRs. However, manually annotating a large set requires experts and is time-consuming. For chest X-ray studies, a radiologist usually spends a few minutes per example. Generating the number of examples needed for deep learning is infeasibly expensive. Instead, researchers may benefit from using text mining to generate annotations , even ifet al. proposed, NegBio , to filtd recall ). The red recall consisteg et al. used thiAnother example of semi-automated label generation for hand radiograph segmentation employed positive mining, an iterative procedure that combines manual labelling with automatic processing . First, With the exception of natural image-like problems (e.g. melanoma detection), biomedical imaging poses a number of challenges for deep learning. Datasets are typically small, annotations can be sparse, and images are often high-dimensional, multimodal and multi-channel. Techniques like transfer learning, heavy dataset augmentation and the use of multi-view and multi-stream architectures are more common than in the natural image domain. Furthermore, high model sensitivity and specificity can translate directly into clinical value. Thus, prediction evaluation, uncertainty estimation and model interpretation methods are also of great importance in this domain (see Discussion). Finally, there is a need for better pathologist\u2013computer interaction techniques that will allow combining the power of deep learning methods with human expertise and lead to better-informed decisions for patient treatment and care.2.2.Owing to the rapid growth of scholarly publications and EHRs, biomedical text mining has become increasingly important in recent years. 
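As a toy illustration of the text-derived weak labels discussed above, and of the simplest building blocks that the text-mining methods considered below improve upon, the following sketch matches a small disease vocabulary in a report and applies a crude negation check. It is a deliberate simplification of dedicated tools such as NegBio, not a reimplementation, and the vocabulary and report are invented.

# Toy weak labelling of radiology report text: match a small disease vocabulary and
# apply a crude negation check. A deliberate simplification, not a real NLP pipeline.
import re

DISEASES = ["pneumonia", "effusion", "cardiomegaly", "nodule"]
NEGATIONS = re.compile(r"\b(no|without|negative for|free of)\b[^.]*$", re.IGNORECASE)

def weak_labels(report: str) -> dict:
    labels = {}
    for sentence in report.split("."):
        for disease in DISEASES:
            if disease in sentence.lower():
                prefix = sentence.lower().split(disease)[0]
                labels[disease] = not bool(NEGATIONS.search(prefix))
    return labels

print(weak_labels("Heart size is normal. There is no pleural effusion. "
                  "Right lower lobe pneumonia is present."))
# -> {'effusion': False, 'pneumonia': True}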
The main tasks in biological and clinical text mining include, but are not limited to, named entity recognition (NER), relation/event extraction and information retrieval . Deep leet al. [et al. [et al. [et al. [NER is a task of identifying text spans that refer to a biological concept of a specific class, such as disease or chemical, in a controlled vocabulary or ontology. NER is often needed as a first step in many complex text mining systems. The current state-of-the-art methods typically reformulate the task as a sequence labelling problem and use conditional random fields \u201377. In ret al. studied [et al. investig [et al. examined [et al. exploiteet al. [et al. [et al. [et al. [et al. [Relation extraction involves detecting and classifying semantic relationships between entities from the literature. At present, kernel methods or feature-based approaches are commonly applied \u201384. Deepet al. proposed [et al. employed [et al. used a C [et al. experime [et al. proposedet al. [et al. [et al. [f-score. Among various deep learning approaches, CNNs stand out as the most popular model both in terms of computational complexity and performance, while RNNs have achieved continuous progress.For biotopes event extraction, Li et al. employed [et al. used lon [et al. applied [et al. ,99. Takeet al. [Information retrieval is a task of finding relevant text that satisfies an information need from within a large document collection. While deep learning has not yet achieved the same level of success in this area as seen in others, the recent surge of interest and work suggest that this may be quickly changing. For example, Mohan et al. describeet al. .To summarize, deep learning has shown promising results in many biomedical text mining tasks and applications. However, to realize its full potential in this domain, either large amounts of labelled data or technical advancements in current methods coping with limited labelled data are required.2.3.et al. evaluated the extent to which deep learning methods could be applied on top of generic features for domain-specific concept extraction [EHR data include substantial amounts of free text, which remains challenging to approach . Often, traction . They fotraction . This raet al. [In recent work, Yoon et al. analysedet al. [et al. [Several authors have created reusable feature sets for medical terminologies using NLP and neural embedding models, as popularized by word2vec . Minarroet al. applied et al. , ICD andet al. ,109 and et al. . Methodset al. , but theet al. further [et al. investiget al. [et al. [et al. [et al. [et al. [et al. [Identifying consistent subgroups of individuals and individual health trajectories from clinical tests is also an active area of research. Approaches inspired by deep learning have been used for both unsupervised feature construction and supervised prediction. Early work by Lasko et al. , combineet al. . In addi [et al. used a d [et al. attempte [et al. built up [et al. took a d [et al. used a set al. [Still, recent work has also revealed domains in which deep networks have proven superior to traditional methods. Survival analysis models the time leading to an event of interest from a shared starting point, and in the context of EHR data, often associates these events to subject covariates. 
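For orientation, the classical Cox proportional hazards partial likelihood, which the deep survival models discussed in this section extend by replacing the linear predictor \(\beta^{\top}x\) with a learned function \(f_\theta(x)\), can be written in its standard textbook form (given here as background rather than quoted from any of the cited studies) as

\[ L(\beta) \;=\; \prod_{i:\,\delta_i = 1} \frac{\exp\!\big(\beta^{\top} x_i\big)}{\sum_{j:\, t_j \ge t_i} \exp\!\big(\beta^{\top} x_j\big)}, \]

where \(\delta_i\) indicates an observed event at time \(t_i\) and the denominator runs over the subjects still at risk at that time; the deep variants minimize the negative logarithm of this quantity.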
Exploring this relationship is difficult, however, given that EHR data types are often heterogeneous, covariates are often missing and conventional approaches require the covariate\u2013event relationship be linear and aligned to a specific starting point . Early aet al. in turn et al. . The resThere is a computational cost for these methods, however, when compared to traditional, non-neural network approaches. For the exponential family models, despite their scalability , an impo2.4.2.4.1.A dearth of true labels is perhaps among the biggest obstacles for EHR-based analyses that employ machine learning. Popular deep learning (and other machine learning) methods are often used to tackle classification tasks and thus require ground-truth labels for training. For EHRs, this can mean that researchers must hire multiple clinicians to manually read and annotate individual patients' records through a process called chart review. This allows researchers to assign \u2018true\u2019 labels, i.e. those that match our best available knowledge. Depending on the application, sometimes the features constructed by algorithms also need to be manually validated and interpreted by clinicians. This can be time-consuming and expensive . Becauseet al. [Successful approaches to date in this domain have sidestepped this challenge by making methodological choices that either reduce the need for labelled examples or use transformations to training data to increase the number of times it can be used before overfitting occurs. For example, the unsupervised and semi-supervised methods that we have discussed reduce the need for labelled examples . The ancet al. trained et al. . In dataet al. [New New Oil\u2019. In this framing, data are abundant and not a scarce resource. Instead, new approaches to solving problems arise when labelled training data become sufficient to enable them. Based on our review of research on deep learning methods to categorize disease, the latter framing rings true.Numerous commentators have described data as the new oil ,132. Theet al. describeWe expect improved methods for domains with limited data to play an important role if deep learning is going to transform how we categorize states of human health. We do not expect that deep learning methods will replace expert review. We expect them to complement expert review by allowing more efficient use of the costly practice of manual annotation.2.4.2.To construct the types of very large datasets that deep learning methods thrive on, we need robust sharing of large collections of data. This is, in part, a cultural challenge. We touch on this challenge in the Discussion section. Beyond the cultural hurdles around data sharing, there are also technological and legal hurdles related to sharing individual health records or deep models built from such records. This subsection deals primarily with these challenges.EHRs are designed chiefly for clinical, administrative and financial purposes, such as patient care, insurance and billing . ScienceEven within the same healthcare system, EHRs can be used differently ,137. IndIn the wider picture, standards for EHRs are numerous and evolving. Proprietary systems, indifferent and scattered use of health information standards, and controlled terminologies makes combining and comparison of data across systems challenging . Furtheret al. [Combining or replicating studies across systems thus requires controlling for both the above biases and dealing with mismatching standards. 
This has the practical effect of reducing cohort size, limiting statistical significance, preventing the detection of weak effects , and reset al. showed tFinally, even if data were perfectly consistent and compatible across systems, attempts to share and combine EHR data face considerable legal and ethical barriers. Patient privacy can severely restrict the sharing and use of EHR data . Here agSeveral technological solutions have been proposed in this direction, allowing access to sensitive data satisfying privacy and legal concerns. Software like DataShield and ViPAet al. [et al. [et al. [Even without sharing data, algorithms trained on confidential patient data may present security risks or accidentally allow for the exposure of individual-level patient data. Tramer et al. showed tet al. demonstret al. , in this [et al. develope [et al. show theet al. [et al. [et al. [et al. and Esteban et al. train models on synthetic data generated under differential privacy and observe performance from a transfer learning evaluation that is only slightly below models trained on the original, real data. Taken together, these results suggest that differentially private GANs may be an attractive way to generate sharable datasets for downstream reanalysis.These attacks also present a potential hazard for approaches that aim to generate data. Choi et al. propose [et al. showed t [et al. demonstrFederated learning and secuWhile none of these problems are insurmountable or restricted to deep learning, they present challenges that cannot be ignored. Technical evolution in EHRs and data standards will doubtless ease\u2014although not solve\u2014the problems of data sharing and merging. More problematic are the privacy issues. Those applying deep learning to the domain should consider the potential of inadvertently disclosing the participants' identities. Techniques that enable training on data without sharing the raw data may have a part to play. Training within a differential privacy framework may often be warranted.2.4.3.In April 2016, the European Union adopted new rules regarding the use of personal information, the General Data Protection Regulation . A compoAs datasets become larger and more complex, we may begin to identify relationships in data that are important for human health but difficult to understand. The algorithms described in this review and others like them may become highly accurate and useful for various purposes, including within medical practice. However, to discover and avoid discriminatory applications it will be important to consider interpretability alongside accuracy. A number of properties of genomic and healthcare data will make this difficult.et al.'s retraction [First, research samples are frequently non-representative of the general population of interest; they tend to be disproportionately sick , male 1 and Eurotraction ). In somtraction ). When wtraction or lead traction .There is a small but growing literature on the prevention and mitigation of data leakage , as well2.4.4.et al. [The longitudinal analysis follows a population across time, for example, prospectively from birth or from the onset of particular conditions. In large patient populations, longitudinal analyses such as the Framingham Heart Study and the et al. used autet al. and the et al. . 
This ma3.The study of cellular structure and core biological processes\u2014transcription, translation, signalling, metabolism, etc.\u2014in humans and model organisms will greatly impact our understanding of human disease over the long horizon . PredictProgress has been rapid in genomics and imaging, fields where important tasks are readily adapted to well-established deep learning paradigms. One-dimensional CNNs and RNNs are well suited for tasks related to DNA- and RNA-binding proteins, epigenomics and RNA splicing. Two-dimensional CNNs are ideal for segmentation, feature extraction and classification in fluorescence microscopy images . Other a3.1.Gene expression technologies characterize the abundance of many thousands of RNA transcripts within a given organism, tissue or cell. This characterization can represent the underlying state of the given system and can be used to study heterogeneity across samples as well as how the system reacts to perturbation. While gene expression measurements were traditionally made by quantitative polymerase chain reaction, low-throughput fluorescence-based methods and microarray technologies, the field has shifted in recent years to primarily performing RNA sequencing (RNA-seq) to catalogue whole transcriptomes. As RNA-seq continues to fall in price and rise in throughput, sample sizes will increase and training deep models to study gene expression will become even more useful.Pseudomonas aeruginosa experiments [Already several deep learning approaches have been applied to gene expression data with varying aims. For instance, many researchers have applied unsupervised deep learning models to extract meaningful representations of gene modules or sample clusters. Denoising autoencoders have been used to cluster yeast expression microarrays into known modules representing cell cycle processes and to seriments ,182 and eriments . These uDeep learning approaches are also being applied to gene expression prediction tasks. For example, a deep neural network with three hidden layers outperformed linear regression in inferring the expression of over 20 000 target genes based on a representative, well-connected set of about 1000 landmark genes . Howeveret al. [Epigenomic data, combined with deep learning, may have sufficient explanatory power to infer gene expression. For instance, the DeepChrome CNN improvedet al. combinedDeep learning applied to gene expression data is still in its infancy, but the future is bright. Many previously untestable hypotheses can now be interrogated as deep learning enables analysis of increasing amounts of data generated by new technologies. For example, the effects of cellular heterogeneity on basic biology and disease aetiology can now be explored by single-cell RNA-seq and high-throughput fluorescence-based imaging, techniques we discuss below that will benefit immensely from deep learning approaches.3.2.LMNA) gene can lead to specific variants of dilated cardiomyopathy and limb-girdle muscular dystrophy [Pre-mRNA transcripts can be spliced into different isoforms by retaining or skipping subsets of exons or including parts of introns, creating enormous spatio-temporal flexibility to generate multiple distinct proteins from a single gene. This remarkable complexity can lend itself to defects that underlie many diseases. 
For instance, splicing mutations in the lamin A [in vivo binding maps of TFs [TFs are proteins that bind regulatory DNA in a sequence-specific manner to modulate the activation and repression of gene transcription. High-throughput leotides provide around) . TFs cans of TFs . Large rs of TFs . Owing tin vitro and in vivo TF binding datasets that associate collections of synthetic DNA sequences or genomic DNA sequences to binary labels (bound/unbound) or continuous measures of binding. The most common class of TF binding models in the literature are those that only model the DNA sequence affinity of TFs from in vitro and in vivo binding data. The earliest models were based on deriving simple, compact, interpretable sequence motif representations such as position weight matrices (PWMs) and other biophysically inspired models [Several machine learning approaches have been developed to learn generative and discriminative models of TF binding from d models \u2013200. Thed models ,202.et al. [in vitro and in vivo assays against random DNA sequences matched for dinucleotide sequence composition. The convolutional layers learn pattern detectors reminiscent of PWMs from a one-hot encoding of the raw input DNA sequences. DeepBind outperformed several state-of-the-art methods from the DREAM5 in vitro TF-DNA motif recognition challenge [In 2015, Alipanahi et al. developehallenge . Althoughallenge and accuhallenge \u2013208. Spehallenge ,210.in vivo multiple TFs compete or cooperate to occupy DNA binding sites, resulting in complex combinatorial co-binding landscapes. To take advantage of this shared structure in in vivo TF binding data, multi-task neural network architectures have been developed that explicitly share parameters across models for multiple TFs [While most of these methods learn independent models for different TFs, iple TFs ,211,212.in vivo TF binding landscapes in new cell types not used during training. One approach for generalizing TF binding predictions to new cell types is to learn models that integrate DNA sequence inputs with other cell-type-specific data modalities that modulate in vivo TF binding such as surrogate measures of TF concentration (e.g. TF gene expression) and chromatin state. Arvey et al. [in vivo TF binding prediction within and across cell types. Several \u2018footprinting\u2019-based methods have also been developed that learn to discriminate bound from unbound instances of known canonical motifs of a target TF based on high-resolution footprint patterns of chromatin accessibility that are specific to the target TF [The above-mentioned TF binding prediction models that use only DNA sequences as inputs have a fundamental limitation. Because the DNA sequence of a genome is the same across different cell types and states, a sequence-only model of TF binding cannot predict different y et al. showed target TF . Howeverin vivo TF Binding Site Prediction Challenge\u2019 was introduced to systematically evaluate the genome-wide performance of methods that can predict TF binding across cell states by integrating DNA sequence and in vitro DNA shape with cell-type-specific chromatin accessibility and gene expression [Recently, a community challenge known as the \u2018ENCODE-DREAM pression . A deep pression . FactorNpression . This toet al. [Singh et al. developeet al. ,220. Thein vivo TF binding prediction methods leverage the strong correlation between combinatorial binding landscapes of multiple TFs. 
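Setting the cell-type-specific extensions aside for a moment, a minimal sketch of the sequence-only convolutional model that approaches such as DeepBind build on is given below: one-hot encoded DNA, a one-dimensional convolution acting as a set of learnable motif detectors, global max pooling and a sigmoid output. The sequences, labels and hyperparameters are placeholders, and the sketch is in the spirit of, not identical to, the published architectures.

# Minimal sequence-only CNN for TF binding classification: one-hot DNA -> Conv1D
# (learnable motif detectors) -> global max pooling -> sigmoid. Placeholder data.
import numpy as np
import tensorflow as tf

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    x = np.zeros((len(seq), 4), dtype="float32")
    for i, b in enumerate(seq):
        x[i, BASES.index(b)] = 1.0
    return x

rng = np.random.default_rng(2)
seqs = ["".join(rng.choice(list(BASES), size=101)) for _ in range(256)]  # hypothetical 101-bp windows
X = np.stack([one_hot(s) for s in seqs])                 # shape (256, 101, 4)
y = rng.integers(0, 2, size=(256,)).astype("float32")    # hypothetical bound/unbound labels

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, kernel_size=12, activation="relu", input_shape=(101, 4)),
    tf.keras.layers.GlobalMaxPooling1D(),                # strongest motif match per filter
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)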
Given a partially complete panel of binding profiles of multiple TFs in multiple cell types, a deep learning method called TFImpute learns to predict the missing binding profile of a target TF in some target cell type in the panel based on the binding profiles of other TFs in the target cell type and the binding profile of the target TF in other cell types in the panel [Another class of imputation-based cross cell type he panel . HoweverIt is worth noting that TF binding prediction methods in the literature based on neural networks and other machine learning approaches choose to sample the set of bound and unbound sequences in a variety of different ways. These choices and the choice of performance evaluation measures significantly confound systematic comparison of model performance (see Discussion).et al. [in silico mutation maps for identifying important predictive nucleotides in input DNA sequences by exhaustively forward propagating perturbations to individual nucleotides to record the corresponding change in output prediction. Shrikumar et al. [et al. [Several methods have also been developed to interpret neural network models of TF binding. Alipanahi et al. visualizr et al. proposed [et al. develope3.4.3.4.1.Multiple TFs act in concert to coordinate changes in gene regulation at the genomic regions known as promoters and enhancers. Each gene has an upstream promoter, essential for initiating that gene's transcription. The gene may also interact with multiple enhancers, which can amplify transcription in particular cellular contexts. These contexts include different cell types in development or environmental stresses.Promoters and enhancers provide a nexus where clusters of TFs and binding sites mediate downstream gene regulation, starting with transcription. The gold standard to identify an active promoter or enhancer requires demonstrating its ability to affect transcription or other downstream gene products. Even extensive biochemical TF binding data has thus far proven insufficient on its own to accurately and comprehensively locate promoters and enhancers. We lack sufficient understanding of these elements to derive a mechanistic \u2018promoter code\u2019 or \u2018enhancer code\u2019. But extensive labelled data on promoters and enhancers lends itself to probabilistic classification. The complex interplay of TFs and chromatin leading to the emergent properties of promoter and enhancer activity seems particularly apt for representation by deep neural networks.3.4.2.Despite decades of work, computational identification of promoters remains a stubborn problem . Researc3.4.3.Recognizing enhancers presents additional challenges. Enhancers may be up to 1 000 000 bp away from the affected promoter, and even within introns of other genes . EnhanceSeveral neural network approaches yielded promising results in enhancer prediction. Both Basset and DeepComparing the performance of enhancer prediction methods illustrates the problems in using metrics created with different benchmarking procedures. Both the Basset and DeepEnhancer studies include comparisons to a baseline SVM approach, gkm-SVM . The Bas3.4.4.In addition to the location of enhancers, identifying enhancer\u2013promoter interactions in three-dimensional space will provide critical knowledge for understanding transcriptional regulation. SPEID used a CNN to predict these interactions with only sequence and the location of putative enhancers and promoters along a one-dimensional chromosome . 
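The in silico mutation maps mentioned above can be computed for any sequence model by exhaustively substituting each nucleotide and forward-propagating the perturbed sequence. The helper below assumes the placeholder `model` and `one_hot_dna` encoder from the earlier TF-binding sketch; it is an illustration of the general idea rather than the procedure used by the cited authors.

```python
import numpy as np
import torch

BASES = "ACGT"

def mutation_map(model, seq, one_hot_fn):
    """Return an (L x 4) matrix of output changes for every single-base substitution."""
    model.eval()
    with torch.no_grad():
        reference = model(torch.from_numpy(one_hot_fn(seq)).unsqueeze(0)).item()
        scores = np.zeros((len(seq), 4))
        for i in range(len(seq)):
            for j, base in enumerate(BASES):
                if seq[i] == base:
                    continue                    # reference base: no change to record
                mutant = seq[:i] + base + seq[i + 1:]
                out = model(torch.from_numpy(one_hot_fn(mutant)).unsqueeze(0)).item()
                scores[i, j] = out - reference  # effect of this substitution on the prediction
    return scores
```

Each entry of the returned matrix requires one forward pass, which is exactly the computational cost issue raised later in the interpretability discussion.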
It comp3.5.Prediction of miRNAs and miRNA targets is of great interest, as they are critical components of gene regulatory networks and are often conserved across great evolutionary distance ,234. WhiAs in other applications, deep learning promises to achieve equal or better performance in predictive tasks by automatically engineering complex features to minimize an objective function. Two recently published tools use different recurrent neural network-based architectures to perform miRNA and target prediction with solely sequence data as input ,237. Tho3.6.Proteins play fundamental roles in almost all biological processes, and understanding their structure is critical for basic biology and drug development. UniProt currently has about 94 million protein sequences, yet fewer than 100 000 proteins across all species have experimentally solved structures in Protein Data Bank (PDB). As a result, computational structure prediction is essential for a majority of proteins. However, this is very challenging, especially when similar solved structures, called templates, are not available in PDB. Over the past several decades, many computational methods have been developed to predict aspects of protein structure such as secondary structure, torsion angles, solvent accessibility, inter-residue contact maps, disorder regions and side-chain packing. In recent years, multiple deep learning architectures have been applied, including DBNs, LSTMs, CNNs and deep convolutional neural fields ,238.Here, we focus on deep learning methods for two representative sub-problems: secondary structure prediction and contact map prediction. Secondary structure refers to local conformation of a sequence segment, while a contact map contains information on all residue\u2013residue contacts. Secondary structure prediction is a basic problem and an almost essential module of any protein structure prediction package. Contact prediction is much more challenging than secondary structure prediction, but it has a much larger impact on tertiary structure prediction. In recent years, the accuracy of contact prediction has greatly improved ,239\u2013241.et al. developed a DeepCNF model that improved Q3 and Q8 accuracy as well as prediction of solvent accessibility and disorder regions [One can represent protein secondary structure with three different states or eight finer-grained states. The accuracy of a three-state prediction is called Q3, and accuracy of an eight-state prediction is called Q8. Several groups ,242,243 regions ,238. Deeab initio folding of proteins without good templates in PDB. Coevolution analysis is effective for proteins with a very large number (more than 1000) of sequence homologues [Protein contact prediction and contact-assisted folding (i.e. folding proteins using predicted contacts as restraints) represent a promising new direction for mologues , but farmologues and Coinmologues have shomologues , DNCON [mologues and PConmologues . Howevermologues .et al. [1 score on free-modelling targets as well as the whole set of targets. In CAMEO (which can be interpreted as a fully automated CASP) [Recently, Wang et al. proposedet al. ), Raptored CASP) , its preed CASP) . RaptorXab initio folding is becoming much easier with the advent of direct evolutionary coupling analysis and deep learning techniques. We expect further improvements in contact prediction for proteins with fewer than 1000 homologues by studying new deep network architectures. 
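Contact prediction is conventionally evaluated by the precision of the top-ranked predicted residue pairs, for example the top L/5 long-range contacts for a protein of length L. A minimal version of that evaluation is sketched below; the sequence-separation threshold and the synthetic data are assumptions for illustration, not values taken from the cited benchmarks.

```python
import numpy as np

def top_l_over_5_precision(pred, true, min_separation=24):
    """Precision of the top L/5 predicted contacts among long-range residue pairs.

    pred: (L, L) matrix of contact probabilities; true: (L, L) binary contact map.
    """
    L = pred.shape[0]
    pairs = [(i, j) for i in range(L) for j in range(i + min_separation, L)]
    pairs.sort(key=lambda ij: pred[ij], reverse=True)    # highest-confidence pairs first
    top = pairs[: max(1, L // 5)]
    return float(np.mean([true[ij] for ij in top]))

rng = np.random.default_rng(0)
L = 120
true = (rng.random((L, L)) < 0.05).astype(int)
true = np.triu(true, 1)
true = true + true.T                                     # symmetric binary contact map
pred = 0.7 * true + 0.3 * rng.random((L, L))             # noisy synthetic "predictions"
print(top_l_over_5_precision(pred, true))
```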
The deep learning methods summarized above also apply to interfacial contact prediction for protein complexes but may be less effective because on average protein complexes have fewer sequence homologues. Beyond secondary structure and contact maps, we anticipate increased attention to predicting 3D protein structure directly from amino acid sequence and single residue evolutionary information [Taken together, ormation .3.7.Complementing computational prediction approaches, cryo-electron microscopy (cryo-EM) allows near-atomic resolution determination of protein models by comparing individual electron micrographs . DetaileSome components of cryo-EM image processing remain difficult to automate. For instance, in particle picking, micrographs are scanned to identify individual molecular images that will be used in structure refinement. In typical applications, hundreds of thousands of particles are necessary to determine a structure to near-atomic resolution, making manual selection impractical . TypicalDownstream of particle picking, deep learning is being applied to other aspects of cryo-EM image processing. Statistical manifold learning has been implemented in the software package ROME to classify selected particles and elucidate the different conformations of the subject molecule necessary for accurate 3D structures . These r3.8.Protein\u2013protein interactions (PPIs) are highly specific and non-accidental physical contacts between proteins, which occur for purposes other than generic protein production or degradation . AbundanMany machine learning approaches to PPI have focused on text mining the literature ,263, butet al. [et al. [One of the key difficulties in applying deep learning techniques to binding prediction is the task of representing peptide and protein sequences in a meaningful way. DeepPPI made PPIet al. applied [et al. used deeet al. [Beyond predicting whether or not two proteins interact, Du et al. employedBecause many studies used predefined higher-level features, one of the benefits of deep learning\u2014automatic feature extraction\u2014is not fully leveraged. More work is needed to determine the best ways to represent raw protein sequence information so that the full benefits of deep learning as an automatic feature extractor can be realized.3.9.An important type of PPI involves the immune system's ability to recognize the body's own cells. The major histocompatibility complex (MHC) plays a key role in regulating this process by binding antigens and displaying them on the cell surface to be recognized by T cells. Owing to its importance in immunity and immune response, peptide\u2013MHC binding prediction is a useful problem in computational biology, and one that must account for the allelic diversity in MHC-encoding gene region.et al. [Shallow, feed-forward neural networks are competitive methods and have made progress towards pan-allele and pan-length peptide representations. Sequence alignment techniques are useful for representing variable-length peptides as uniform-length features ,269. Foret al. developeet al. [et al. [et al. found that the top methods\u2014NetMHC, NetMHCpan, MHCflurry and MHCnuggets\u2014showed comparable performance, but large differences in speed. Convolutional neural networks showed comparatively poor performance, while shallow networks and RNNs performed the best. 
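One recurring preprocessing step in these peptide–MHC models is converting variable-length peptides (typically 8–15 residues) into a fixed-size numeric input. The sketch below uses simple one-hot encoding with an explicit padding channel; published tools use more sophisticated alignment- or learning-based representations, so this is only an assumed minimal scheme.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode_peptides(peptides, max_len=15):
    """One-hot encode peptides into a fixed (max_len x 21) frame.

    Columns 0-19 mark the amino acid at each position; column 20 marks padding,
    so the network always sees a uniform-length input.
    """
    out = np.zeros((len(peptides), max_len, 21), dtype=np.float32)
    for n, pep in enumerate(peptides):
        for i in range(max_len):
            if i < len(pep):
                out[n, i, AA_INDEX[pep[i]]] = 1.0
            else:
                out[n, i, 20] = 1.0          # padding indicator
    return out

batch = encode_peptides(["SIINFEKL", "GILGFVFTL", "NLVPMVATV"])
print(batch.shape)   # (3, 15, 21)
```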
They found that MHCnuggets\u2014the recurrent neural network\u2014was by far the fastest-training among the top performing methods.Deep learning's unique flexibility was recently leveraged by Bhattacharya et al. , who use [et al. develope3.10.k-core size and graph density, this work showed that deep learning can effectively reduce graph dimensionality while retaining much of its structural information.Because interacting proteins are more likely to share a similar function, the connectivity of a PPI network itself can be a valuable information source for the prediction of protein function . To incoet al. [An important challenge in PPI network prediction is the task of combining different networks and types of networks. Gligorijevic et al. developeet al. [Hamilton et al. addresseet al. optimize3.11.A field poised for dramatic revolution by deep learning is bioimage analysis. Thus far, the primary use of deep learning for biological images has been for segmentation\u2014that is, for the identification of biologically relevant structures in images such as nuclei, infected cells or vasculature\u2014in fluorescence or even brightfield channels . Once thet al. [We anticipate an additional paradigm shift in bioimaging that will be brought about by deep learning: what if images of biological samples, from simple cell cultures to three-dimensional organoids and tissue samples, could be mined for much more extensive biologically meaningful information than is currently standard? For example, a recent study demonstrated the ability to predict lineage fate in haematopoietic cells up to three generations in advance of differentiation . In biomet al. demonstrThe impact of further improvements on biomedicine could be enormous. Comparing cell population morphologies using conventional methods of segmentation and feature extraction has already proven useful for functionally annotating genes and alleles, identifying the cellular target of small molecules, and identifying disease-specific phenotypes suitable for drug screening \u2013290. Dee3.12.in situ hybridization or the heterogeneity of epigenomic patterns with single-cell Hi-C or ATAC-seq [Single-cell methods are generating excitement as biologists characterize the vast heterogeneity within unicellular species and between cells of the same tissue type in the same organism . For insATAC-seq ,294. JoiATAC-seq .et al. [However, large challenges exist in studying single cells. Relatively few cells can be assayed at once using current droplet, imaging or microwell technologies, and low-abundance molecules or modifications may not be detected by chance due to a phenomenon known as dropout, not to be confused with the dropout layer of deep learning. To solve this problem, Angermueller et al. trained et al. ,297. Deeet al. .et al. [Examining populations of single cells can reveal biologically meaningful subsets of cells as well as their underlying gene regulatory networks . Unfortuet al. classifiet al. [Neural networks can also learn low-dimensional representations of single-cell gene expression data for visualization, clustering and other tasks. Both scvis and scVIet al. developeet al. , they deet al. and relaThe sheer quantity of omic information that can be obtained from each cell, as well as the number of cells in each dataset, uniquely position single-cell data to benefit from deep learning. 
In the future, lineage tracing could be revolutionized by using autoencoders to reduce the feature space of transcriptomic or variant data followed by algorithms to learn optimal cell differentiation trajectories or by fe3.13.Metagenomics, which refers to the study of genetic material\u201416S rRNA or whole-genome shotgun DNA\u2014from microbial communities, has revolutionized the study of micro-scale ecosystems within and around us. In recent years, machine learning has proved to be a powerful tool for metagenomic analysis. 16S rRNA has long been used to deconvolve mixtures of microbial genomes, yet this ignores more than 99% of the genomic content. Subsequent tools aimed to classify 300\u20133000 bp reads from complex mixtures of microbial genomes based on tetranucleotide frequencies, which differ across organisms , using sMost neural networks are used for phylogenetic classification or functional annotation from sequence data where there is ample data for training. Neural networks have been applied successfully to gene annotation was able to classify wound severity from microbial species present in the wound . Recentlet al. associatet al. .Challenges remain in applying deep neural networks to metagenomics problems. They are not yet ideal for phenotype classification because most studies contain tens of samples and hundreds or thousands of features (species). Such underdetermined, or ill-conditioned, problems are still a challenge for deep neural networks that require many training examples. Also, due to convergence issues , taxonomHowever, because RNNs have been applied to base calls for the Oxford Nanopore long-read sequencer with some success with high specificity and sensitivity and improving the accuracy of new types of data such as nanopore sequencing. These two tasks are critical for studying rare variation, allele-specific transcription and translation, and splice site mutations. In the clinical realm, sequencing of rare tumour clones and other genetic diseases will require the accurate calling of SNPs and indels.Current methods achieve relatively high (greater than 99%) precision at 90% recall for SNPs and indel calls from Illumina short-read data , yet thiVariant calling will benefit more from optimizing neural network architectures than from developing features by hand. An interesting and informative next step would be to rigorously test if encoding raw sequence and quality data as an image, tensor or some other mixed format produces the best variant calls. Because many of the latest neural network architectures are already optimized for and pre-trained on generic, large-scale image datasets , encodinet al. [E. coli sequence from MinION nanopore electric current data with higher per-base accuracy than the proprietary hidden Markov model-based algorithm Metrichor. Unfortunately, training any neural network requires a large amount of data, which is often not available for new sequencing technologies. To circumvent this, one very preliminary study simulated mutations and spiked them into somatic and germline RNA-seq data, then trained and tested a neural network on simulated paired RNA-seq and exome sequencing data [In limited experiments, DeepVariant was robust to sequencing depth, read length and even species . Howeveret al. used biding data . 
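The idea of encoding raw reads and quality scores as an image-like tensor, raised above for variant calling, can be illustrated with a toy pileup encoder. The channel layout, window size and quality scaling below are assumptions for illustration and are not DeepVariant's actual featurization.

```python
import numpy as np

BASE_CHANNEL = {"A": 0, "C": 1, "G": 2, "T": 3}

def pileup_tensor(reads, qualities, window=15):
    """Encode a read pileup at a candidate site as a (reads x window x 5) tensor.

    Channels 0-3 one-hot encode the base; channel 4 stores base quality scaled to [0, 1].
    """
    tensor = np.zeros((len(reads), window, 5), dtype=np.float32)
    for r, (read, qual) in enumerate(zip(reads, qualities)):
        for i in range(min(window, len(read))):
            if read[i] in BASE_CHANNEL:            # skip gaps and unknown bases
                tensor[r, i, BASE_CHANNEL[read[i]]] = 1.0
            tensor[r, i, 4] = qual[i] / 40.0        # phred-like quality, capped at 40
    return tensor

reads = ["ACGTACGTACGTACG", "ACGTACGAACGTACG"]      # second read carries a candidate variant
quals = [[30] * 15, [35] * 15]
print(pileup_tensor(reads, quals).shape)            # (2, 15, 5)
```

A stack of such tensors can then be passed to an off-the-shelf convolutional classifier, which is the design choice that lets variant callers reuse architectures developed for generic image recognition.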
HoweverMethod development for interpreting new types of sequencing data has historically taken two steps: first, easily implemented hard cutoffs that prioritize specificity over sensitivity, then expert development of probabilistic models with hand-developed inputs . We anti3.15.Artificial neural networks were originally conceived as a model for computation in the brain . AlthougCNNs were originally conceived as faithful models of visual information processing in the primate visual system, and are still considered so . The actEven when they are not directly modelling biological neurons, deep networks have been a useful computational tool in neuroscience. They have been developed as statistical time-series models of neural activity in the brain. And in contrast to the encoding models described earlier, these models are used for decoding neural activity, for instance, in brain\u2013machine interfaces . They haIt is an exciting time for neuroscience. Recent rapid progress in deep networks continues to inspire new machine learning-based models of brain computation . And neu4.Given the need to make better, faster interventions at the point of care\u2014incorporating the complex calculus of a patient's symptoms, diagnostics and life history\u2014there have been many attempts to apply deep learning to patient treatment. Success in this area could help to enable personalized healthcare or precision medicine ,354. Ear4.1.In 1996, Tu comparedWhile further progress has been made in using deep learning for clinical decision-making, it is hindered by a challenge common to many deep learning applications: it is much easier to predict an outcome than to suggest an action to change the outcome. Several attempts ,123 at ret al. [et al. [A critical challenge in providing treatment recommendations is identifying a causal relationship for each recommendation. Causal inference is often framed in terms of the counterfactual question . Johansset al. use deep [et al. first crA common challenge for deep learning is the interpretability of the models and their predictions. The task of clinical decision-making is necessarily risk-averse, so model interpretability is key. Without clear reasoning, it is difficult to establish trust in a model. As described above, there has been some work to directly assign treatment plans without interpretability; however, the removal of human experts from the decision-making loop make the models difficult to integrate with clinical practice. To alleviate this challenge, several studies have attempted to create more interpretable deep models, either specifically for healthcare or as a general procedure for deep learning (see Discussion).4.1.1.et al. [A common application for deep learning in this domain is the temporal structure of healthcare records. Many studies \u2013365 haveet al. used deeet al. have relet al. , but a g4.1.2.et al. [et al. [A clinical deep learning task that has been more successful is the assignment of patients to clinical trials. Ithapu et al. used a r [et al. applied 4.2.Drug repositioning (or repurposing) is an attractive option for delivering new drugs to the market because of the high costs and failure rates associated with more traditional drug discovery approaches ,371. A det al. [et al. [For example, Menden et al. used a s [et al. used gen [et al. to trainet al. [et al. [et al. [Drug repositioning can also be approached by attempting to predict novel drug\u2013target interactions and then repurposing the drug for the associated indication ,382. Wanet al. 
devised [et al. extendedDrug repositioning appears an obvious candidate for deep learning both because of the large amount of high-dimensional data available and the complexity of the question being asked. However, perhaps the most promising piece of work in this space is more 4.3.4.3.1.High-throughput chemical screening in biomedical research aims to improve therapeutic options over a long-term horizon . The objComputational work in this domain aims to identify sufficient candidate active compounds without exhaustively screening libraries of hundreds of thousands or millions of chemicals. Predicting chemical activity computationally is known as virtual screening. An ideal algorithm will rank a sufficient number of active compounds before the inactives, but the rankings of actives relative to other actives and inactives are less important . Computaet al. [et al. [Ligand-based approaches train on chemicals' features without modelling target features (e.g. protein structure). Neural networks have a long history in this domain ,23, and et al. ) exploreet al. ,394, wit [et al. , a deep The nuanced Tox21 performance may be more reflective of the practical challenges encountered in ligand-based chemical screening than the extreme enthusiasm generated by the Merck competition. A study of 22 ADMET tasks demonstrated that there are limitations to multi-task transfer learning that are in part a consequence of the degree to which tasks are related . Some of4.3.2.Much of the recent excitement in this domain has come from what could be considered a creative experimentation phase, in which deep learning has offered novel possibilities for feature representation and modelling of chemical compounds. A molecular graph, where atoms are labelled nodes and bonds are labelled edges, is a natural way to represent a chemical structure. Chemical features can be represented as a list of molecular descriptors such as molecular weight, atom counts, functional groups, charge representations, summaries of atom\u2013atom relationships in the molecular graph, and more sophisticated derived properties . Traditiet al. [et al. [Virtual screening and chemical property prediction have emerged as one of the major applications areas for graph-based neural networks. Duvenaud et al. generali [et al. applied [et al. \u2013407. Mor [et al. ,409 addret al. [one task-specific active compound and one inactive compound, the model is able to generalize reasonably well because it has learned an informative embedding function from the related tasks. Random forests, which cannot take advantage of the related training tasks, trained in the same setting are only slightly better than a random classifier. Despite the success on Tox21, performance on MUV datasets, which contains assays designed to be challenging for chemical informatics algorithms, is considerably worse. The authors also demonstrate the limitations of transfer learning as embeddings learned from the Tox21 assays have little utility for a drug adverse reaction dataset.Advances in chemical representation learning have also enabled new strategies for learning chemical\u2013chemical similarity functions. Altae-Tran et al. developeThese novel learned chemical feature representations may prove to be essential for accurately predicting why some compounds with similar structures yield similar target effects and others produce drastically different results. Currently, these methods are enticing but do not necessarily outperform classic approaches by a large margin. 
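A minimal sketch of the multi-task, ligand-based setup discussed above is given here: a shared trunk over pre-computed fingerprint bit vectors with one output head per assay, and a masked loss so that compounds lacking labels in some assays still contribute to the others. The layer sizes, bit length and random placeholder data are assumptions; in practice the fingerprints would come from a cheminformatics toolkit rather than random numbers.

```python
import torch
import torch.nn as nn

n_bits, n_tasks = 2048, 12

# Shared trunk with one logit per assay (active / inactive).
model = nn.Sequential(
    nn.Linear(n_bits, 512), nn.ReLU(), nn.Dropout(0.25),
    nn.Linear(512, 128), nn.ReLU(),
    nn.Linear(128, n_tasks),
)

fingerprints = torch.randint(0, 2, (32, n_bits)).float()   # placeholder compound batch
labels = torch.randint(0, 2, (32, n_tasks)).float()
mask = torch.randint(0, 2, (32, n_tasks)).float()           # not every compound is tested in every assay

logits = model(fingerprints)
loss_per_entry = nn.functional.binary_cross_entropy_with_logits(logits, labels, reduction="none")
loss = (loss_per_entry * mask).sum() / mask.sum()            # ignore missing labels
loss.backward()
```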
The neural fingerprints were narWe remain optimistic about the potential of deep learning and specifically representation learning in drug discovery. Rigorous benchmarking on broad and diverse prediction tasks will be as important as novel neural network architectures to advance the state of the art and convincingly demonstrate superiority over traditional cheminformatics techniques. Fortunately, there has recently been much progress in this direction. The DeepChem software ,412 and One open question in ligand-based screening pertains to the benefits and limitations of transfer learning. Multi-task neural networks have shown the advantages of jointly modelling many targets ,394. Oth4.3.3.When protein structure is available, virtual screening has traditionally relied on docking programs to predict how a compound best fits in the target's binding site and score the predicted ligand\u2013target complex . Recentlet al. [Structure-based deep learning methods differ in whether they use experimentally derived or predicted ligand\u2013target complexes and how they represent the 3D structure. The Atomic CNN and Topoet al. use a doet al. .et al. [There are two established options for representing a protein\u2013compound complex. One option, a 3D grid, can featurize the input complex ,419. Eacet al. demonstr4.3.4.60 synthesizable organic molecules with drug-like properties without explicit enumeration [De novo drug design attempts to model the typical design\u2013synthesize\u2013test cycle of drug discovery ,421. It meration . To testmeration .et al. [As deep learning models that directly output (molecular) graphs remain under-explored, generative neural networks for drug design typically represent chemicals with the simplified molecular-input line-entry system (SMILES), a standard string-based representation with characters that represent atoms, bonds and rings . This alet al. designedet al. . A drawbet al. .Another approach to de novo design is to train character-based RNNs on large collections of molecules, for example, ChEMBL , to firs5.Despite the disparate types of data and scientific goals in the learning tasks covered above, several challenges are broadly important for deep learning in the biomedical domain. Here, we examine these factors that may impede further progress, ask what steps have already been taken to overcome them, and suggest future research directions.5.1.Some of the challenges in applying deep learning are shared with other machine learning methods. In particular, many problem-specific optimizations described in this review reflect a recurring universal trade-off\u2014controlling the flexibility of a model in order to maximize predictivity. Methods for adjusting the flexibility of deep learning models include dropout, reduced data projections and transfer learning (described below). One way of understanding such model optimizations is that they incorporate external information to limit model flexibility and thereby improve predictions. This balance is formally described as a trade-off between \u2018bias and variance\u2019 .Although the bias-variance trade-off is common to all machine learning applications, recent empirical and theoretical observations suggest that deep learning models may have uniquely advantageous generalization properties ,429. 
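The evaluation issues taken up in the next subsection, namely extreme class imbalance and the gap between AUROC and precision-oriented metrics, can be made concrete with a toy example. The prevalence, score distributions and thresholds below are illustrative assumptions, not a reproduction of any number quoted in this review.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# Why AUROC can look excellent when positives are rare (here about 0.5%).
rng = np.random.default_rng(1)
n = 200_000
y = (rng.random(n) < 0.005).astype(int)                  # roughly 1,000 positives in 200,000
scores = rng.normal(0, 1, n) + 2.0 * y                    # positives score higher on average

print("AUROC:", round(roc_auc_score(y, scores), 3))       # typically around 0.92
print("AUPR :", round(average_precision_score(y, scores), 3))   # substantially lower

# Precision among the 100 highest-scoring examples: what a targeted validation
# experiment with a limited budget would actually see.
top = np.argsort(scores)[::-1][:100]
print("Precision@100:", y[top].mean())
```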
Nev5.1.1.Making predictions in the presence of high-class imbalance and differences between training and generalization data are a common feature of many large biomedical datasets, including deep learning models of genomic features, patient classification, disease detection and virtual screening. Prediction of TF binding sites exemplifies the difficulties with learning from highly imbalanced data. The human genome has three billion base pairs, and only a small fraction of them are implicated in specific biochemical activities. Less than 1% of the genome can be confidently labelled as bound for most TFs.Estimating the false discovery rate (FDR) is a standard method of evaluation in genomics that can also be applied to deep learning model predictions of genomic features. Using deep learning predictions for targeted validation experiments of specific biochemical activities necessitates a more stringent FDR . However, when predicted biochemical activities are used as features in other models, such as gene expression models, a low FDR may not be necessary.What is the correspondence between FDR metrics and commonly used classification metrics such as AUPR and AUROC? AUPR evaluates the average precision, or equivalently, the average FDR across all recall thresholds. This metric provides an overall estimate of performance across all possible use cases, which can be misleading for targeted validation experiments. For example, classification of TF binding sites can exhibit a recall of 0% at 10% FDR and AUPR greater than 0.6. In this case, the AUPR may be competitive, but the predictions are ill-suited for targeted validation that can only examine a few of the highest-confidence predictions. Likewise, AUROC evaluates the average recall across all false positive rate (FPR) thresholds, which is often a highly misleading metric in class-imbalanced domains ,430. Con5.1.2.Genome-wide continuous signals are commonly formulated into classification labels through signal peak detection. ChIP-seq peaks are used to identify locations of TF binding and histone modifications. Such procedures rely on thresholding criteria to define what constitutes a peak in the signal. This inevitably results in a set of signal peaks that are close to the threshold, not sufficient to constitute a positive label but too similar to positively labelled examples to constitute a negative label. To avoid an arbitrary label for these examples, they may be labelled as \u2018ambiguous\u2019. Ambiguously labelled examples can then be ignored during model training and evaluation of recall and FDR. The correlation between model predictions on these examples and their signal values can be used to evaluate if the model correctly ranks these examples between positive and negative examples.5.1.3.In assessing the upper bound on the predictive performance of a deep learning model, it is necessary to incorporate inherent between-study variation inherent to biomedical research . Study-l5.2.Deep learning-based solutions for biomedical applications could substantially benefit from guarantees on the reliability of predictions and a quantification of uncertainty. Owing to biological variability and precision limits of equipment, biomedical data do not consist of precise measurements but of estimates with noise. Hence, it is crucial to obtain uncertainty measures that capture how noise in input values propagates through deep neural networks. 
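One widely used and easily implemented way to obtain such uncertainty estimates is Monte Carlo dropout: dropout is left active at prediction time and the spread of repeated stochastic forward passes is treated as an approximate predictive uncertainty, one practical approximation to the Bayesian treatment discussed below. The sketch is a generic illustration on a throwaway regression network, not the method of any particular study cited here.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def predict_with_uncertainty(model, x, n_samples=100):
    model.train()                        # .train() keeps the dropout layers stochastic
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(dim=0), draws.std(dim=0)

x = torch.randn(5, 10)
mean, std = predict_with_uncertainty(model, x)
print(mean.squeeze(), std.squeeze())     # per-example predictive mean and spread
```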
Such measures can be used for reliability assessment of automated decisions in clinical and public health applications, and for guarding against model vulnerabilities in the face of rare or adversarial cases . Moreoveet al. [et al. [et al. [In classification tasks, confidence calibration is the problem of using classifier scores to predict class membership probabilities that match the true membership likelihoods. These membership probabilities can be used to assess the uncertainty associated with assigning the example to each of the classes. Guo et al. observedet al. . In addi [et al. that des [et al. discover [et al. used temAn alternative approach for obtaining principled uncertainty estimates from deep learning models is to use Bayesian neural networks. Deep learning models are usually trained to obtain the most likely parameters given the data. However, choosing the single most likely set of parameters ignores the uncertainty about which set of parameters (among the possible models that explain the given dataset) should be used. This sometimes leads to uncertainty in predictions when the chosen likely parameters produce high-confidence but incorrect results. On the other hand, the parameters of Bayesian neural networks are modelled as full probability distributions. This Bayesian approach comes with a whole host of benefits, including better calibrated confidence estimates and moreet al. [et al. [Several other techniques have been proposed for effectively estimating predictive uncertainty as uncertainty quantification for neural networks continues to be an active research area. Recently, McClure & Kriegeskorte observedet al. introduc [et al. proposed [et al. .Despite the success and popularity of deep learning, some deep learning models can be surprisingly brittle. Researchers are actively working on modifications to deep learning frameworks to enable them to handle probability and embrace uncertainty. Most notably, Bayesian modelling and deep learning are being integrated with renewed enthusiasm. As a result, several opportunities for innovation arise: understanding the causes of model uncertainty can lead to novel optimization and regularization techniques, assessing the utility of uncertainty estimation techniques on various model architectures and structures can be very useful to practitioners, and extending Bayesian deep learning to unsupervised settings can be a significant breakthrough . Unfortu5.3.As deep learning models achieve state-of-the-art performance in a variety of domains, there is a growing need to make the models more interpretable. Interpretability matters for two main reasons. First, a model that achieves breakthrough performance may have identified patterns in the data that practitioners in the field would like to understand. However, this would not be possible if the model is a black box. Second, interpretability is important for trust. If a model is making medical diagnoses, it is important to ensure the model is making decisions for reliable reasons and is not focusing on an artefact of the data. A motivating example of this can be found in Ba & Caruana , where aAs the concept of interpretability is quite broad, many methods described as improving the interpretability of deep learning models take disparate and often complementary approaches.5.3.1.Several approaches ascribe importance on an example-specific basis to the parts of the input that are responsible for a particular output. 
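The simplest example-specific attribution of this kind is a gradient saliency map: the derivative of the output with respect to the input, obtained with a single backward pass. The sketch below assumes a differentiable model and, as an example input, the (1, 4, L) one-hot DNA tensor from the earlier TF-binding sketch; it is generic and not tied to any specific method discussed here.

```python
import torch

def saliency(model, x):
    """Gradient of the model output with respect to a single input tensor.

    Large absolute gradients mark the input positions the prediction is most
    sensitive to; this is the most basic backpropagation-based attribution.
    """
    model.eval()
    x = x.clone().requires_grad_(True)
    output = model(x).sum()              # scalar output for a batch of one
    output.backward()
    return x.grad.detach()

# Example with the TF-binding CNN sketched earlier (assumed to be in scope):
# grads = saliency(model, x)
# per_position = grads.abs().sum(dim=1)  # collapse the four nucleotide channels
```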
These can be broadly divided into perturbation- and backpropagation-based approaches.et al. [et al. [et al. [et al. [Perturbation-based approaches change parts of the input and observe the impact on the output of the network. Alipanahi et al. and Zhouet al. scored g [et al. used a s [et al. inserted [et al. introduc [et al. applied [et al. .et al. [A common drawback to perturbation-based approaches is computational efficiency: each perturbed version of an input requires a separate forward propagation through the network to compute the output. As noted by Shrikumar et al. , such meet al. solve anet al. [Backpropagation-based methods, in which the signal from a target output neuron is propagated backwards to the input layer, are another way to interpret deep networks that sidestep inefficiencies of the perturbation-based methods. A classic example of this is calculating the gradients of the output with respect to the input to compuet al. proposedet al. ,456. Netet al. ,457. Bacet al. , and newet al. ,459,460.et al. noted thet al. , which h5.3.2.Another approach to understanding the network's predictions is to find artificial inputs that produce similar hidden representations to a chosen example. This can elucidate the features that the network uses for prediction and drop the features that the network is insensitive to. In the context of natural images, Mahendran & Vedaldi introducet al. [A related idea is \u2018caricaturization\u2019, where an initial image is altered to exaggerate patterns that the network searches for . This iset al. leverage5.3.3.et al. [et al. [Activation maximization can reveal patterns detected by an individual neuron in the network by generating images which maximally activate that neuron, subject to some regularizing constraints. This technique was first introduced in Ehran et al. and applet al. ,466,468. [et al. applied 5.3.4.et al. [et al. [et al. [et al. [Several interpretation methods are specifically tailored to recurrent neural network architectures. The most common form of interpretability provided by RNNs is through attention mechanisms, which have been used in diverse problems such as image captioning and machine translation to select portions of the input to focus on generating a particular output ,470. Demet al. applied [et al. used a h [et al. leverage [et al. later exet al. [Visualizing the activation patterns of the hidden state of a recurrent neural network can also be instructive. Early work by Ghosh & Karamcheti used cluet al. showed tet al. allows iet al. [et al. [Another strategy, adopted by Lanchatin et al. looks at [et al. , illustrMurdoch & Szlam showed t5.3.5.Interpretation of embedded or latent space features learned through generative unsupervised models can reveal underlying patterns otherwise masked in the original input. Embedded feature interpretation has been emphasized mostly in image- and text-based applications ,478, butFor example, Way & Greene trained a VAE on gene expression from The Cancer Genome Atlas (TCGA) and use et al. [et al. [Other approaches have used interpolation through latent space embeddings learned by GANs to interpret unobserved intermediate states. For example, Osokin et al. trained [et al. trained 5.3.6.et al. [It can often be informative to understand how the training data affects model learning. Towards this end, Koh & Liang used infet al. used graet al. [Finally, it is sometimes possible to train the model to provide justifications for its predictions. Lei et al. 
used a g5.3.7.While deep learning lags behind most Bayesian models in terms of interpretability, the interpretability of deep learning is comparable to or exceeds that of many other widely used machine learning methods such as random forests or SVMs. While it is possible to obtain importance scores for different inputs in a random forest, the same is true for deep learning. Similarly, SVMs trained with a nonlinear kernel are not easily interpretable because the use of the kernel means that one does not obtain an explicit weight matrix. Finally, it is worth noting that some simple machine learning methods are less interpretable in practice than one might expect. A linear model trained on heavily engineered features might be difficult to interpret as the input features themselves are difficult to interpret. Similarly, a decision tree with many nodes and branches may also be difficult for a human to make sense of.There are several directions that might benefit the development of interpretability techniques. The first is the introduction of gold standard benchmarks that different interpretability approaches could be compared against, similar in spirit to how the ImageNet and CIFA5.4.et al. [A lack of large-scale, high-quality, correctly labelled training data have impacted deep learning in nearly all applications we have discussed. The challenges of training complex, high-parameter neural networks from few examples are obvious, but uncertainty in the labels of those examples can be just as problematic. In genomics, labelled data may be derived from an experimental assay with known and unknown technical artefacts, biases and error profiles. It is possible to weight training examples or construct Bayesian models to account for uncertainty or non-independence in the data, as described in the TF binding example above. As another example, Park et al. estimateFor some types of data, especially images, it is straightforward to augment training datasets by splitting a single labelled example into multiple examples. For example, an image can easily be rotated, flipped or translated and retain its label . 3D MRI Simulated or semi-synthetic training data have been employed in multiple biomedical domains, though many of these ideas are not specific to deep learning. Training and evaluating on simulated data, for instance, generating synthetic TF binding sites with PWMs or RNA-sData can be simulated to create negative examples when only positive training instances are available. DANN adopts tIn settings where the experimental observations are biased towards positive instances, such as MHC protein and peptide ligand binding affinity , or the Multimodal, multi-task and transfer learning, discussed in detail below, can also combat data limitations to some degree. There are also emerging network architectures, such as Diet Networks for high-dimensional SNP data . These u5.5.Efficiently scaling deep learning is challenging, and there is a high computational cost associated with training neural networks and using them to make predictions. This is one of the reasons why neural networks have only recently found widespread use .et al. [Many have sought to curb these costs, with methods ranging from the very applied to et al. inferredet al. or the tet al. . Some haet al. .While steady improvements in GPU hardware may alleviate this issue, it is unclear whether advances will occur quickly enough to keep pace with the growing biological datasets and increasingly complex neural networks. 
Much has been done to minimize the memory requirements of neural networks ,509,510,Distributed computing is a general solution to intense computational requirements and has enabled many large-scale deep learning efforts. Some types of distributed computation ,514 are Cloud computing, which has already seen wide adoption in genomics , could f5.6.A robust culture of data, code and model sharing would speed advances in this domain. The cultural barriers to data sharing, in particular, are perhaps best captured by the use of the term \u2018research parasite\u2019 to describe scientists who use data from other researchers . A fieldet al. [The sharing of high-quality, labelled datasets will be especially valuable. In addition, researchers who invest time to preprocess datasets to be suitable for deep learning can make the preprocessing code when jointly learned with other modalities (audio or text) . Deep gret al. [et al. [et al. [k-means.Jha et al. showed t [et al. trained [et al. used DBN [et al. . This apMultimodal learning with CNNs is usually implemented as a collection of individual networks in which each learns representations from the single data type. These individual representations are further concatenated before or within fully connected layers. FIDDLE is an exet al. [et al. [Multi-task learning is an approach related to transfer learning. In a multi-task learning framework, a model learns a number of tasks simultaneously such that features are shared across them. DeepSEA implemenet al. demonstret al. ,394 and et al. ,543. Kea [et al. systematet al. [Multi-task learning is complementary to multimodal and transfer learning. All three techniques can be used together in the same model. For example, Zhang et al. combinedDespite demonstrated improvements, transfer learning approaches pose challenges. There are no theoretically sound principles for pre-training and fine-tuning. Best practice recommendations are heuristic and must account for additional hyper-parameters that depend on specific deep architectures, sizes of the pre-training and target datasets, and similarity of domains. However, the similarity of datasets and domains in transfer learning and relatedness of tasks in multi-task learning are difficult to access. Most studies address these limitations by empirical evaluation of the model. Unfortunately, negative results are typically not reported. A deep CNN trained on natural images boosts performance in radiographic images . HoweverIn the medical domain, multimodal, multi-task and transfer learning strategies not only inherit most methodological issues from natural image, text and audio domains, but also pose domain-specific challenges. There is a compelling need for the development of privacy-preserving transfer learning algorithms, such as Private Aggregation of Teacher Ensembles . We sugg6.Deep learning-based methods now match or surpass the previous state of the art in a diverse array of tasks in patient and disease categorization, fundamental biological study, genomics and treatment development. Returning to our central question: given this rapid progress, has deep learning transformed the study of human disease? Though the answer is highly dependent on the specific domain and problem being addressed, we conclude that deep learning has not yet realized its transformative potential or induced a strategic inflection point. 
Despite its dominance over competing machine learning approaches in many of the areas reviewed here and quantitative improvements in predictive performance, deep learning has not yet definitively \u2018solved\u2019 these problems.As an analogy, consider recent progress in conversational speech recognition. Since 2009, there have been drastic performance improvements with error rates dropping from more than 20% to less than 6% and finaSome of the areas we have discussed are closer to surpassing this lofty bar than others, generally, those that are more similar to the non-biomedical tasks that are now monopolized by deep learning. In medical imaging, diabetic retinopathy , diabetiIn other domains, perfect accuracy will not be required because deep learning will primarily prioritize experiments and assist discovery. For example, in chemical screening for drug discovery, a deep learning system that successfully identifies dozens or hundreds of target-specific, active small molecules from a massive search space would have immense practical value even if its overall precision is modest. In medical imaging, deep learning can point an expert to the most challenging cases that require manual review , though Conversely, the most challenging tasks may be those in which predictions are used directly for downstream modelling or decision-making, especially in the clinic. As an example, errors in sequence variant calling will be amplified if they are used directly for genome-wide association studies. In addition, the stochasticity and complexity of biological systems imply that for some problems, for instance, predicting gene regulation in disease, perfect accuracy will be unattainable.We are witnessing deep learning models achieving human-level performance across a number of biomedical domains. However, machine learning algorithms, including deep neural networks, are also prone to mistakes that humans are much less likely to make, such as misclassification of adversarial examples ,549, a rWe are optimistic about the future of deep learning in biology and medicine. It is by no means inevitable that deep learning will revolutionize these domains, but given how rapidly the field is evolving, we are confident that its full potential in biomedicine has not been explored. We have highlighted numerous challenges beyond improving training and predictive accuracies, such as preserving patient privacy and interpreting models. Ongoing research has begun to address these problems and shown that they are not insurmountable. Deep learning offers the flexibility to model data in its most natural form, for example, longer DNA sequences instead of k-mers for TF binding prediction and molecular graphs instead of pre-computed bit vectors for drug discovery. These flexible input feature representations have spurred creative modelling approaches that would be infeasible with other machine learning techniques. Unsupervised methods are currently less developed than their supervised counterparts, but they may have the most potential because of how expensive and time-consuming it is to label large amounts of biomedical data. If future deep learning algorithms can summarize very large collections of input data into interpretable models that spur scientists to ask questions that they did not know how to ask, it will be clear that deep learning has transformed biology and medicine.7.7.1.We recognized that deep learning in precision medicine is a rapidly developing area. 
Hence, diverse expertise was required to provide a forward-looking perspective. Accordingly, we collaboratively wrote this review in the open, enabling anyone with expertise to contribute. We wrote the manuscript in markdown and tracked changes using git. Contributions were handled through GitHub, with individuals submitting \u2018pull requests\u2019 to suggest additions to the manuscript.To facilitate citation, we defined a markdown citation syntax. We supported citations to the following identifier types (in order of preference): DOIs, PubMed Central IDs, PubMed IDs, arXiv IDs and URLs. References were automatically generated from citation metadata by querying APIs to generate Citation Style Language JSON items for each reference. Pandoc and pandoc-citeproc converted the markdown to HTML and PDF, while rendering the formatted citations and references. In total, referenced works consisted of 372 DOIs, six PubMed Central records, 129 arXiv manuscripts and 48 URLs (webpages as well as manuscripts lacking standardized identifiers).https://greenelab.github.io/deep-review. To ensure a consistent software environment, we defined a versioned conda environment of the software dependencies.We implemented continuous analysis so the manuscript was automatically regenerated whenever the source changed . We confIn addition, we instructed the Travis CI deployment script to perform blockchain timestamping ,553. Usi"} {"text": "Benjamin Xu should not have an Bsc. Xiangyi Kong's degree should have appeared as an MD. Richard Xu should not have an Bsc. Lishun Liu should have an Bsc instead of an MS. Ziyi Zhou should have an Bsc instead of an MS.In the article, \u201cHomocysteine and all-cause mortality in hypertensive adults without pre-existing cardiovascular conditions: Effect modification by MTHFR C677T polymorphism\u201d,"} {"text": "Let X and Y are said to be negatively quadrant dependent , if Two random variables mentclass2pt{minimI, the r.v.s. A and B of I, and any coordinatewise nondecreasing function G and H with A much stronger concept than LNQD was introduced by Joag-Dev and Proschan : for a fet al. [et al. [et al. [et al. [et al. [et al. [Some applications for LNQD sequence have been found. For example, Newman establiset al. studied [et al. obtained [et al. establis [et al. discusse [et al. obtained [et al. obtained [et al. establis [et al. establis [et al. proved tThe main purpose of this paper is to discuss the limit theory for LNQD sequence. In Section\u00a0C denotes a positive constant, which may take different values whenever it appears in different expressions. We have Throughout the paper, f and g on X and Y satisfying : For any absolutely continuous functions Now we state the law of iterated logarithm for LNQD sequence.We will need the following property. Letbe a strictly stationary LNQD sequence withandPutThenOur theorem extends the corresponding results of Corollary 1.2 in Choi . Choi esThe proof of Theorem (Lehmann )Let random variables X and Y be NQD, theniffandgare both nondecreasing (or both nonincreasing) functions, thenandare NQD.Letbe an LNQD sequence of random variables with mean zero and finite second moments. LetandThen, for allandwe knowIn particular, we haveBy Lemma Letbe an LNQD sequence of random variables withandDefineandThen, for anywhere \u03a6 is the standard normal distribution function.X be a standard normal random variable and definef be the unique bounded solution of the Stein equationf is given byWe will apply the Stein method. 
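The displayed formulas defining the Stein equation and its bounded solution did not survive text extraction. For reference, the classical statement for the standard normal law is reproduced below in its textbook form; the notation may differ slightly from the original manuscript.

```latex
% Classical Stein equation for the standard normal law (standard textbook form,
% supplied because the displayed equations were lost in extraction).
\[
  f'(w) - w f(w) = \mathbf{1}_{\{w \le z\}} - \Phi(z),
\]
% whose unique bounded solution is
\[
  f_z(w) = e^{w^{2}/2} \int_{-\infty}^{w}
           \bigl[\mathbf{1}_{\{t \le z\}} - \Phi(z)\bigr] e^{-t^{2}/2}\,dt .
\]
```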
Let ee Stein )2.6\\docx, by the definition of LNQD and Lemma For fixed (H1) and ,\\documeows from that\\dom be an integer such thatn sufficiently large,n is sufficiently large.It suffices to show that for e Acosta , it is ei. By Lemma n. By using the standard subsequence method, (By the definition of LNQD and Lemma a=2man), and (2.1a=2man), , we geta.s. Now follows a.s. Now and , 2.18)\\documenther with and for linear processes generated by LNQD sequence with finite variance.e.g., Hannan [et al. [The linear processes are of special importance in time series analysis and they arise in wide variety of concepts (see, , Hannan , Chapter, Hannan establis, Hannan proved a, Hannan obtained, Hannan establis, Hannan discusse [et al. providedLetbe a strictly stationary LNQD sequence withandbe a sequence of real numbers withDefine the linear processesThenThe proof of Theorem Letbe a strictly sequence of random variables, be a monotone decreasing sequence of nonnegative real numbers. ThenLet Letbe a strictly stationary LNQD sequence of random variables withThenm, Let to prove , it is snt as in .B will be given later. Noting the choice of Finally, in order to prove , it remant as of , there ei. By Lemma B sufficiently large such that By the definition of LNQD and Lemma then by and (3.1 then by , observi>1. Thus holds byBy a Beveridge and Nelson decomposition for a linear process, for In this paper, using the Kolmogorov type maximal inequality and Stein\u2019s method, the law of the iterated logarithm for LNQD sequence is established with less restriction of moment conditions, this improves the results of Choi from \\do"} {"text": "This study was conducted to know the status of bovine herpesvirus-1 (BHV-1) antibodies in the bovines of the selected area of Uttarakhand.A total of 489 serum samples, 392 of cattle and 97 of buffaloes were randomly collected from the unvaccinated bovine population of five districts viz., Dehradun, Haridwar, Nainital, Pithoragarh, and Udham Singh Nagar and were tested by avidin biotin enzyme-linked immunosorbent assay for BHV-1 antibodies.The overall prevalence was observed to be 29.03%. At district level, the highest prevalence was recorded in Pithoragarh district (40.00%) while it was lowest in district Udham Singh Nagar (16.00%). The prevalence of BHV-1 antibodies was found to be higher in unorganized dairy units (31.02%) compared to organized farms (26.51%) in Uttarakhand. Buffaloes were found to have greater prevalence (38.14%) than cattle (26.78%) while on sex-wise basis; it was found that more females (30.08%) were harboring antibodies to the virus than males (16.21%).The study revealed that the population in the area under study has been exposed to BHV-1 and hence prevention and control strategies must be implemented. Bovine herpesvirus 1 (BHV-1), which infects domestic and range cattle, is associated with several clinical conditions including infectious bovine rhinotracheitis (IBR), infectious pustular vulvovaginitis, balanoposthitis, conjunctivitis, and generalized disease in newborn calves. BHV-1 has been classified into subtypes 1.1, 1.2a, and 1.2b using restriction enzyme analysis. Subtype 1.1 is associated with respiratory disease while subtypes 1.2a and 1.2b are implicated in infectious pustular vulvovaginitis/infectious balanoposthitis syndrome and cause mild respiratory disease [The virus is transmitted primarily through aerosol or genital contact. 
Clinical signs are generally mild and the virus does not cause high mortality; however, infection results in latency and lifelong carriage. Reactivation of latent BHV-1 infections can occur following corticosteroid treatment or stress caused by transportation, overcrowding or adverse weather conditions. The productivity and reproductive performance of affected animals are greatly reduced as an outcome of the disease.

Various workers have published reports on the prevalence of this disease from different parts of India. Enzyme-linked immunosorbent assay (ELISA) is a sensitive and specific test for detecting low levels of antibody against several viral diseases and has been used extensively in recent years by many workers to monitor the seroprevalence of IBR in cattle populations. This investigation was carried out to determine the prevalence of BHV-1 antibodies in bovines of Uttarakhand using avidin biotin ELISA (AB-ELISA) and to assess the significance of risk factors such as management, species, and sex associated with BHV-1 prevalence.

As per CPCSEA guidelines, a study involving clinical samples does not require the approval of the Institute Animal Ethics Committee. Serum samples were randomly collected from the unvaccinated bovine population of five districts of Uttarakhand, viz., Dehradun, Haridwar, Nainital, Pithoragarh, and Udham Singh Nagar.

The BHV-1 antigen-coated plate was brought to room temperature. 100 µl of diluted control and test sera were added in duplicate to the wells. The plate was incubated on a shaker for 1 h at 37°C and thereafter washed 3 times with washing buffer (50 µl of Tween 20 in 100 ml of 1× PBS). Then, 100 µl of biotinylated anti-immunoglobulin G conjugate diluted 1:30,000 in blocking buffer was added to all the wells. The plate was again incubated on a shaker for 1 h at 37°C and washed 3 times with washing buffer. 100 µl of avidin-horseradish peroxidase conjugate diluted 1:20,000 in blocking buffer was added to each well. The plate was again incubated on a shaker for 20 min at 37°C and then washed 3 times with washing buffer. Then, 100 µl of chromogen/substrate was added to all the wells, and the plate was incubated at room temperature for 8-10 min. Finally, 50 µl of stopping solution (1 N sulfuric acid) was added to all the wells. The plate was read in an ELISA plate reader at 492 nm. Results of test sera were expressed as percent positivity (PP) values calculated as shown below:

PP = (Average OD of sample/Average OD of strong serum positive) × 100

Test sera with PP values ≥45 were considered positive for IBR antibodies. Statistical analysis of the data was performed as per the method described by Snedecor and Cochran.

Out of 489 serum samples screened, 142 (29.03%) were found positive for BHV-1 antibodies by AB-ELISA. At the district level, prevalence was highest in Pithoragarh (40.00%) and lowest in Udham Singh Nagar (16.00%). Seroprevalence was higher in unorganized dairy units (31.02%) compared to organized farms (26.51%) in Uttarakhand. At the species level also, higher prevalence was observed in both cattle and buffaloes from the unorganized sector compared to those from the organized sector. However, the dependency of prevalence on the management system followed was nonsignificant at p≤0.05. Species-wise, seroprevalence was significantly higher in buffaloes (38.14%) than in cattle (26.78%) at p≤0.05. Overall, sex-wise seroprevalence indicated that more females (30.08%) were affected by BHV-1 than males (16.21%). The dependency of prevalence on sex was nonsignificant at p≤0.05. At the species level also, the prevalence percentage in both cattle and buffaloes was higher in females than in males.
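The species-wise significance statement above can be illustrated with a standard chi-square test of independence on a 2×2 contingency table. The counts below are reconstructed from the reported sample sizes and percentages (392 cattle at 26.78% positive, 97 buffaloes at 38.14% positive); the authors cite Snedecor and Cochran for their statistical methods, so this snippet is an illustrative analogue rather than their exact procedure.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Species-wise 2x2 table reconstructed from the reported figures.
table = np.array([
    [105, 392 - 105],   # cattle: positive, negative
    [37,  97 - 37],     # buffaloes: positive, negative
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")   # p < 0.05: prevalence depends on species
```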
[In Uttarakhand, in earlier studies, Jain et al. and Nand [et al. have rep [et al. have als [et al. . The difet al. [viz., Udham Singh Nagar, Dehradun, and Pithoragarh. Although the prevalence of different districts differed significantly, however, there is a need to study in detail the other factors such as migration of animal, breeding practices, geographical location, and climatic conditions and their effect on virus spread before associating the occurrence on antibodies in particular district. Much higher prevalence rates from different parts of India and world were reported by various workers [et al. [et al. [et al. [In agreement with this study, Thakur et al. also rep workers -20. Wher [et al. , Das et [et al. , Singh a [et al. , and Sin [et al. from difet al. [et al. [et al. [et al. [et al. [et al. [Similar to our observation Rajesh et al. in Keralet al. observed [et al. also com [et al. and Sath [et al. also rep [et al. . Gonzale [et al. indicateet al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [Similar to our observations, on species wise prevalence, other researchers have reported greater prevalence in buffaloes than cattle. In Uttarakhand, Thakur et al. have als [et al. who also [et al. , Nandi e [et al. , and Tra [et al. in India [et al. and Jain [et al. in indep [et al. in Gujar [et al. in Uttar [et al. in Uttaret al. [et al. [et al. [et al. [et al. [et al. [et al. [Observations of this study in context to sex wise prevalence were in agreement to study of Thakur et al. who have [et al. in Uttar [et al. also obs [et al. ; Krishna [et al. in south [et al. in Bihar [et al. in Uttar [et al. . Contrar [et al. had obseThis study revealed that the BHV-1 is strengthening its roots in different parts of Uttarakhand. The present findings warrant that a suitable action plan should be implemented to control the spread of virus in the state.A collection of serum samples, screening of samples by ELISA was done by VT. The entire work was done under supervision of MK. Data were analyzed by VT. The manuscript was prepared by VT, and corrections/modifications were done by MK and RRL. All authors read and approved the manuscript."} {"text": "The amount of boron waste increases year by year. There is an urgent demand to manage it in order to reduce the environmental impact. In this paper, boron waste was reused as an additive in road base material. Lime and cement were employed to stabilize the waste mixture. Mechanical performances of stabilized mixture were evaluated by experimental methods. A compaction test, an unconfined compressive test, an indirect tensile test, a modulus test, a drying shrinkage test, and a frost resistance test were carried out. Results indicated that mechanical strengths of lime-stabilized boron waste mixture (LSB) satisfy the requirements of road base when lime content is greater than 8%. LSB can only be applied in non-frozen regions as a result of its poor frost resistance. The lime\u2013cement-stabilized mixture can be used in frozen regions when lime and cement contents are 8% and 5%, respectively. Aggregate reduces the drying shrinkage coefficient effectively. Thus, aggregate is suggested for mixture stabilization properly. This work provides a proposal for the management of boron waste. In China, its reserve has reached 35 million tons, and emission still grows at a rate of 1.6 million tons per year [Boron waste is discharged in the production of borax pentahydrate, borax decahydrate, anhydrous borax, per year . 
Its envIt can be seen from et al. [et al. [et al. [Ozdemir et al. added bo [et al. studied [et al. investiget al. [et al. [et al. [et al. [et al. [For the purposes of sustainable development and industrial ecology, boron waste has been gradually used in the concrete and brick. Top\u00e7u et al. investig [et al. prepared [et al. also poi [et al. demonstr [et al. utilized [et al. .et al. [It can be seen that reuse of boron waste in brick and concrete increases the managing efficiency. For concrete, its content should be less than 15%. For brick and lightweight aggregate, its amount should be less than 30%. Remarkably, the strength of concrete and brick obviously decreases when the content exceeds the limited value. This means that boron waste cannot be massively applied in high strength material. Therefore, further research is needed to explore other reuse methods. Fortunately, Modarres et al. proposedet al. [et al. [et al. [In order to find a good stabilized method, many studies have been performed in recent years. Qian et al. investig [et al. used cem [et al. studied Based on the above analysis, we can find that boron waste has a significant impact on the environment. Its reuse is helpful for protecting the environment. In order to increase the reused amount as much as possible, it was reused in the road base material in this study. Lime and cement were used to stabilize this waste. The stabilized waste mixture was prepared in a laboratory. Specimens were then made according to the standards JTG E51-2009 , and theBoron waste, which was discharged in the production of borax, was used in this study. Its color is light brown, as shown in uC) and coefficient of curvature (cC) are also given in Basic properties of boron waste are given in The content of effective calcium oxide and magnesium oxide in hydrated lime is 60.4%. Ordinary Portland cement with a level of 42.5 MPa was used in this study. The properties of the cement are listed in 3. The properties of the soil were tested according to the Test Methods of Materials Stabilized with Inorganic Binders for Highway Engineering of China (JTG E51-2009) [Limestone aggregate and soil were selected for experiments. The apparent density of aggregate was 2.677 g/cm51-2009) , as giveHydrated lime and cement have been widely used to stabilize soil and granular material ,17,18. TUnconfined compressive strength (UCS) is an important index of geo-materials ,21,22. AP is the ultimate load, N; H is the height of specimen, mm; and d is the diameter of specimen, mm. a is the width of loading strip, mm. \u03b1 is the center angle corresponding to half width of the strip, rad. E is resilient modulus, MPa.An indirect tensile strength (ITS) can be used to evaluate the anti-cracking performance of geo-materials . Therefoet al. [l1 was recorded when the specimen was loaded for 1 min. Then, the pressure was unloaded. The value l2 of the gauge was recorded when the specimen was unloaded for 0.5 min. Then, the next pressure was subsequently applied. This process was repeated step by step until all stages were finished. The resilient modulus (E) can be computed by the following equations.P is the maximum pressure, MPa; H is the height of specimen, mm; and l, Baghini et al. indicateet al. , a maximet al. [Pozo-Antonio and Idiaet al. indicateet al. . 
Two glaim, im\u22121 are the masses of specimen on the ith day and the (i \u2212 1)th day, respectively, g; pm is the dry mass of specimen, g; \u03b4i is the drying shrinkage deformation on the ith day, mm; i,jX and iX\u22121j, are the values of the jth gauge on the ith day and (i \u2212 1)th day, mm; \u03b5i is the drying shrinkage strain on the ith day, %; l is the initial length of specimen before test, mm; \u03b1di is the shrinkage coefficient on the ith day, %.In this study, the drying shrinkage coefficient was employed to evaluate the drying shrinkage performance. It can be calculated by the following equations:et al. [et al. [FRI is the indicator that represents the frost resistance\u2014a high value means a good frost resistance. FTR is the unconfined compressive strength of specimen after five freeze\u2013thaw cycles, MPa; and cR is the unconfined compressive strength of control specimen, MPa.Kelestemur et al. and Jafa [et al. have ind [et al. . These set al. [Palomo et al. indicateet al. [In order to save the construction cost, lime was firstly used to stabilize the boron waste alone. The feasibility of the lime-stabilized boron waste (LSB) for the road base was investigated. The optimum moisture content (OMC) and maximum dry density (MDD) for every proportion were determined. Kweon et al. indicateet al. [It can be seen from et al. . The uncet al. . For theAs shown in et al. [et al. [et al. [Balen et al. proposed [et al. pointed [et al. proposed [et al. . The comet al. [et al. [et al. [et al. [et al. [et al. [As listed in et al. and Moda [et al. . Edeh et [et al. proposed [et al. also tho [et al. thought [et al. also ind [et al. . In otheUnconfined compressive strength (UCS) is a common mechanical property, which has been used to determine the proportion of stabilized mixture. Therefore, the UCS\u2019s of the stabilized mixtures were investigated. Three curing periods at 7, 28, and 90 days were selected for the experiment. Results are shown in et al. [et al. [It can be seen from et al. and Taha [et al. and is a [et al. . Therefoet al. [As shown in et al. pointed et al. [et al. [For a cementitious composite, Idiart et al. indicate [et al. proposedet al. [E) was obtained by a uniaxial compression test. The indirect tensile strength (ITS) and ultimate tensile strain (UTS) were calculated by Equations (1) and (2), respectively. Results are listed in As shown in et al. indicateet al. . TherefoE) of LCBS increased with the increase in boron waste. For LCBA, the resilient modulus decreased with the increase in boron waste. Their trends are different. This is caused by the difference in the elastic properties of aggregate, boron waste, and soil. ITS and UTS decrease with the increase in boron waste. This means that boron waste has an adverse effect on the tensile properties. Besides, ITS of LCBA-1 is the highest among all of the mixtures. The ultimate tensile strain (UTS) of LCBS-1 is the highest. UTS of LCBA-1 decreases by 17.0% compared with that of LCBS-1. It seems that the decline in UTS is adverse to drying shrinkage, but the shrinkage coefficient of LCBA-1 is less than that of LCBS-1. Therefore, LCBA-1 may be a good proportion on the whole.It can be found from FTIR spectra of the lime-stabilized matrix and the lime\u2013cement-stabilized matrix were recorded with a Nexus 6700 spectrometer in order to reveal their chemical reactions. Results of the FTIR test are shown in \u22121 indicates the formation of MgCO3. 
The bonds around 1000 cm\u22121 correspond to Si\u2013O and Al\u2013O tension bonds. They are the characteristic bonds of the alkaline polymer. Bonds around 780 correspond to Si\u2013O\u2013Si bonds. Bonds around 3600 cm\u22121 indicate the presence of CaO or Ca(OH)2. SiO2 and Al2O3 in boron waste will be activated by CaO (or Ca(OH)2), and C\u2013S\u2013H and CaO (or Ca(OH)2) will form the C\u2013S\u2013H gel structure. The reaction equations can be written as follows:As shown in et al. [However, Palomo et al. proposed1.The unconfined compressive strength (UCS) of lime-stabilized boron waste (LSB) meets the strength requirements of road base when lime content is greater than 8%. Its frost resistance is very poor. Thus, lime-stabilized boron waste can only be used in non-frozen regions.2.A lime\u2013cement-stabilized boron waste mixture has higher compressive and tensile strengths than those of a lime-stabilized one. Drying shrinkage coefficients of lime\u2013cement-stabilized boron waste mixtures are all less than those of lime-stabilized boron waste. Fillers such as soil and aggregate improve drying shrinkage. It is suggested that soil and aggregate are properly added in order to reduce the drying shrinkage coefficient. Frost resistance indexes of lime\u2013cement-stabilized boron waste mixtures are all greater than 70%. Therefore, a lime\u2013cement-stabilized boron waste mixture is suggested for frozen regions. According to the results of this study, the proportion of LCBA-1 is the most suitable for road base.3.2 and Al2O3 in boron waste is activated by CaO (or Ca(OH)2). Boron was not activated by lime and cement. Therefore, boron waste can be reused as filler in road base in order to solve its effect on the environment.The hydration process of lime and cement formed the strength of a stabilized mixture. SiOBoron waste was reused in road base in this study. The performances of lime and lime\u2013cement-stabilized boron waste mixtures were investigated by various experimental methods. Some conclusions can be drawn based on the above analysis. They are as follows:In summary, boron waste can be used to construct the road base directly. Cement should be used to stabilize the mixture in order to enhance its mechanical strength and frost resistance. The use of boron waste in road base will effectively reduce its impact on the environment. In the future, a trial section of road base should be constructed to verify its serviceability for different regions."} {"text": "Supracondylar fractures are common in children and are associated with significant morbidity.The purpose of our study was to assess and compare the clinical and radiological outcome of management of supracondylar fractures by both wire configurations, along with identifying factors that predispose to complications.We retrospectively reviewed all paediatric cases admitted with a supracondylar fracture over a five year period. We reviewed case notes, theatre records and radiographs to determine the age of the patient, classification of fracture, treatment method, delay to theatre, duration of surgery, wire configuration, Baumann\u00b4s angle, radiocapitellar alignment, anterior humeral alignment and complications.During the five year period we admitted 132 patients and complete notes were available for 123 patients for analyses. For all the patients managed with wire stabilisation 23% developed complications, including 13% with significant complications including nerve injuries and fracture displacements. 
All five nerve injuries had crossed wires, whereas all four fracture displacements had lateral wires. Baumann's angle was 76.7 degrees in the group with no complication and 72.2 degrees in the significant complication group (p=0.02). Radiocapitellar line and anterior humeral line were not satisfactory in 5% and 15% of the group with no complications, and 17% and 33% of the group with significant complications. We found more complications in lateral pinning configurations, although all nerve injuries were in patients with crossed wire configurations. The factors we believe are associated with a higher likelihood of complications are inadequate post-operative radiological appearance. Supracondylar fractures are commonly classified based on the Gartland system of classification, where they are divided into three types; Type I being non-displaced, Type II being displaced but with an intact posterior cortex and Type III being displaced and without any cortical contact, althoug1). Along with a posterior fat pad sign in Type I fractures, three radiographic parameters used to evaluate a supracondylar fracture are the Baumann's angle, anterior humeral line and radiocapitellar line. The Bau2). Both have advantages and disadvantages. The crossed wire configuration is biomechanically more stable, especially when resisting axial forces. Percutaneous K-wiring is the most widely advocated method to stabilise displaced supracondylar fractures after reduction. There is no clear consensus on the configuration of K-wiring. Commonly used configurations include a crossed configuration with a medial and a lateral K-wire, and lateral configuration with two lateral K-wires l forces -20. Brau [et al. conducte [et al. , 21. Bra [et al. also rep [et al. . The purpose of our study was to assess and compare the clinical and radiological outcome of management of supracondylar fractures by both wire configurations, along with identifying factors that predispose to complications. We retrospectively reviewed all supracondylar fractures in children between the ages of 2-15 years old that were admitted to our unit between November 2009 and November 2014. We carried out a review of case notes and theatre records to determine the age of the patient, time to operation theatre and duration of surgery. We carried out a review of patients' radiographs to determine the type of Gartland fracture, Baumann's angle, radiocapitellar alignment and anterior humeral line post-intervention/surgery. Radiographs were also used to determine the type of wire configuration used. We performed Student's t-test for statistical analyses and a p value of <0.05 was considered significant. There were 132 patients with Gartland type II and type III fractures admitted to our unit over the five year period and complete notes were available for 123 patients. Out of 123 patients, 12 were managed nonoperatively, and 13 were managed with a manipulation under anaesthesia. None of these patients had any complications. All the remaining 98 patients were treated with K-wiring, either crossed or lateral. They had a mean age of 6.1 years (SD 2.6 years). These included 61 type II and 37 type III fractures. Fifty-nine patients were managed with crossed K-wires and 39 were managed with lateral K-wires.
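The statistical approach stated in the methods above (Student's t-test with significance at p < 0.05) can be illustrated with a short sketch. The angle values below are hypothetical and only show how a between-group comparison such as the Baumann's angle analysis reported below would be run; they are not the study data.

```python
# Minimal sketch of the kind of two-sample comparison described in the methods
# (Student's t-test, significance at p < 0.05). The angle values below are
# hypothetical and only illustrate the procedure, not the study data.
from scipy import stats

baumann_no_complication = [78.1, 75.4, 77.9, 76.0, 77.3, 75.8, 78.5, 76.2]   # degrees, hypothetical
baumann_significant_complication = [71.5, 73.0, 70.8, 72.9, 73.6, 71.4]      # degrees, hypothetical

t_stat, p_value = stats.ttest_ind(baumann_no_complication,
                                  baumann_significant_complication)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Difference in mean Baumann's angle is statistically significant.")
```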
The ages and fracture types were not significantly different between the two wire configuration groups.Out of these patients managed with wire stabilisation 23% (22 patients) developed complications, including 13% (13 patients) with significant complications including nerve injuries (five patients) and fracture displacements (four patients). Out of the five nerve palsies, two were ulnar nerve palsies, two were radial nerve palsies, and one was a median nerve palsy. The mean age, classification, time to theatre and duration of surgery were not significantly different between the patients with and without complications (p > 0.05). The rate of complications was not different between the two groups; 33% in lateral wire configuration compared with 26% in those treated with crossed wires. Five of the significant complication patients had lateral wire configuration whereas the other eight had crossed wires. All five nerve injuries had crossed wire configuration, whereas all four fracture displacements had lateral wire confirmation. The mean Baumann\u2019s angle was 76.7 degrees in the group with no complication and 72.2 degrees in the significant complication group (p=0.02). The radiocapitellar line and anterior humeral line were not satisfactory in 5% and 15% of the group with no complications, and 17% and 33% of the group with significant complications.4et al. [et al. [et al. [A systematic review in 2012 et al. patients [et al. . The aut [et al. looking et al. [et al. [et al. made an incision over the medial epicondyle. Gatson et al.. treated four crossovers as intention-to-treat whereas Kocher et al. excluded crossovers. Both studies looked at ulnar nerve injuries and changes in Baumann\u2019s angle and humerocapitellar angle. Only Gatson et al. looked at range of movement and loss of carrying angle. Neither study found a significant difference in the clinical or radiological parameters between the two wire configurations. Although the definition for loss of reduction varied between the two studies, neither identified a difference between the wire configurations in relation to loss of reduction. Although Kocher et al. did not report any nerve injuries, Gaston et al. reported two cases with the crossed configuration. They report one case of \u2018tenting of the nerve\u2019 with incomplete recovery at three months follow-up, and one case of \u2018pin indenting the nerve\u2019 at 90 degrees of elbow flexion with complete recovery at three months.Two further randomised controlled trials were performed by Gaston et al. and Koch [et al. looking Our results suggest that both wire configuration patters are valid but the complication profile varies. In our study fracture displacement was seen only with lateral wiring, and nerve injuries only seen with crossed wires. Our results support those of biomechanical studies , 21, 28 Our study has limitations. It is a retrospective study where the procedure as carried out by a number of different surgeons. We did not look at the three lateral wire configuration or K-wire sizes. There is some evidence that three lateral wires produce a more stable construct than two lateral wires , 28, andWe found more complications in lateral pinning configurations, although all nerve injuries were in patients with crossed wire configurations. The factors we believe are associated with a higher likelihood of complications are inadequate post-operative radiological appearance. 
We suggest that future randomised controlled trials are sufficiently powered with larger patient numbers to detect significant differences in clinical and radiological outcomes."} {"text": "PLOS One reviewing process. The reality is that Pilling et al. (PLoS One 8:e77193, PLOS One and within the published EFSA Evaluation of Thiamethoxam.The published Commentary by Hoppe et al. (Environ Sci Eur 27\u201328, PLOS One insisted that they be combined into a single manuscript. In the end, the final Pilling et al.\u2019s [http://registerofquestions.efsa.europa.eu/roqFrontend/outputLoader?output=ON-3067). A detailed authors\u2019 response to each of the points raised by Hoppe et al. [In response to the increasing call for industry to be more transparent, Syngenta took the decision to publish the pivotal honeybee field studies that supported the honey bee safety of thiamethoxam. These field studies included 12 separate pollen and nectar field residue trials and five long-term (4 consecutive years) field effects studies on honeybees carried out in four geographically widespread locations in France. The primary purpose of these field trials was to investigate and test the bee safety of thiamethoxam. As such these studies have to be carefully designed to avoid as far as is possible any other potential confounding factors, e.g. exposure to other insecticides. In response to the criticism of Hoppe et al. 2105) that the \u201cdesign and adherence to the protocol was described inadequately\u201d and that it was \u201cdoubtful whether the study was implemented in a traceable way\u201d, it should be noted that these studies also have to comply with the strict, legally enforceable, quality control requirements of Good Laboratory Practice to ensure a viable crop.This quoted figure of 70\u00a0% total colonies lost in Hoppe et al. is complHoppe et al. state thAs can be seen from Table\u00a0Hoppe et al. compare Hoppe et al. criticisPLOS One\u2019s decision to publish Pilling et al.\u2019s [PLOS One carried out a second additional review of this paper, by a member of the PLOS One editorial board. Once again this paper was accepted as stands (see link to detailed comments from this second review http://www.plosone.org/annotation/listThread.action?root=82356). One of the comments made by the editor during this second review was \u201cThe effort was comprehensive and seems honestly described.\u201dHoppe et al. questionet al.\u2019s paper, bet al.\u2019s paper waWe contend that the alleged deficiencies claimed by Hoppe et al. to under"} {"text": "Phaseolus vulgaris L.) is a favored food legume with a small sequenced genome (514 Mb) and n = 11 chromosomes. The goal of this study was to describe R and LD in the common bean genome using a 768-marker array of single nucleotide polymorphisms (SNP) based on Trans-legume Orthologous Group (TOG) genes along with an advanced-generation Recombinant Inbred Line reference mapping population and an internationally available diversity panel. A whole genome genetic map was created that covered all eleven linkage groups (LG). The LGs were linked to the physical map by sequence data of the TOGs compared to each chromosome sequence of common bean. The genetic map length in total was smaller than for previous maps reflecting the precision of allele calling and mapping with SNP technology as well as the use of gene-based markers. A total of 91.4% of TOG markers had singleton hits with annotated Pv genes and all mapped outside of regions of resistance gene clusters. 
LD levels were found to be stronger within the Mesoamerican genepool and decay more rapidly within the Andean genepool. The recombination rate across the genome was 2.13 cM / Mb but R was found to be highly repressed around centromeres and frequent outside peri-centromeric regions. These results have important implications for association and genetic mapping or crop improvement in common bean.Recombination (R) rate and linkage disequilibrium (LD) analyses are the basis for plant breeding. These vary by breeding system, by generation of inbreeding or outcrossing and by region in the chromosome. Common bean ( Common bean is an important food legume with interesting genetics that is also a good protein and micronutrient source for many consumers around the world . The croLinkage disequilibrium (LD) analysis is related to population genetics and has been the basis for genetic and association mapping done in common beans and other inbreeding crops, especially with advanced generation populations . LD is tLD is influenced, among other factors, by the rate of chromosomal recombination (R) within a species across multiple generations of inbreeding or cross-breeding and depends on the mating system of the plant and the location within the genome . Thus, rApart from these issues, epistatic interactions may create non-random associations among unlinked loci, and genomic differentiation between subspecies can limit LD decay . Since RSingle nucleotide polymorphism (SNP) markers have been developed over the past six years for common beans. The first SNPs to be developed for the crop were designed based on amplicon re-sequencing of mapping parents ,16 conseet al. [et al. [et al. [et al. [et al. [et al. [LD analysis and evaluations of R can be organized based on a single chromosomal region, various genomic regions or the entire genome, especially when working with SNP markers. For example, SNP markers have been useful in locus specific studies ,23\u201326 oret al. were sho [et al. . However [et al. or Song [et al. , have a [et al. used a R [et al. used an [et al. has not The goals of this study were 1) to provide the genetic and physical map locations for the gene-based SNP assays developed by Blair et al. which aret al. [et al. [et al. [et al. [et al. [et al. [2 to a fine powder, which was mixed with extraction buffer and incubated at 65 C in a 15 mL Falcon tube. Protein removal was accomplished using two chloroform\u2013isoamyl alchohol extractions at 1:1 ratio, which were shaken with the tissue homogenate and centrifuged at 10,000 rpm, removing the upper aqueous layer for DNA precipitation. The resulting purified DNA was quantified on a Hoefer DyNA Quant 2000 fluorometer and diluted to a standard concentration (200 ng/\u03bcl) for use in SNP marker evaluation.The plant material used in this study were 1) a recombinant inbred lines (RIL) of the inter-genepool, Andean x Mesoamerican genetic mapping population BAT93 x Jalo EEP558, described as a core mapping population by Freyre et al. and used [et al. and the [et al. and 2) a [et al. . These w [et al. that rep [et al. . The pla [et al. , which iet al. [http://dnatech.genomecenter.ucdavis.edu/). SNP genotyping calls were made with Bead-studio software package .The SNP markers used for this study were from the legume trans-legume orthologous gene (TOG) series made for common bean from the sequencing of BAT93 and Jalo EEP558 parental genotypes as described in Blair et al. . The maret al. [et al. 
[11).Alleles for the population were then used for genetic mapping carried out in Mapdisto v. 2.0 assuminget al. and simp [et al. were plablast.ncbi.nlm.nih.gov) search using default parameters for significance. The query sequence included the 120 bp of flanking sequences, or 60 bp on both left and right sides, of each of the common bean SNP markers compared to the chromosomal sequences available for Phaseolus vulgaris [phytozome.jgi.doe.gov). The most homologous physical position of each SNP amplicon sequence was estimated based on the lowest E-value hit found by the similarity search and recorded in Mega base pairs (Mb). Multiple matches were not considered for the TOG markers. The physical positions of the SNP markers were used in the construction of a customized comparative map for all 11 chromosomes carried out with the software R showing physical (Mb) and genetic (cM) distances. Chromosomal identity and the orientation per chromosome were based on the physical map. After collecting mapping information on both scales, scatter plots were created with R software to analyze the relationship between linkage map distance and physical distance for each chromosome. Polynomial line-fitting was used to determine the points of inflection and flattening in the curves fit to each of the chromosomal plots; with these indicating suppressed recombination typical of the centromeres and peri-centromeric (pCENR) regions as described in Bhakta et al. [et al. [The genetic map from the methodology described above was predominantly made of TOG markers and was aligned with the physical map through sequence comparisons through a nucleotide BLASTn markers, which are similar to the COS (conserved orthologous sequence) markers that have been useful in Solanaceous plant species and advocated by Lee et al. [Phaseolus vulgaris G19833 genome assembly, Phytozome v1.0. The overlap (intersection) between markers and genes was calculated using the bedtools \u201cintersect\u201d function [As a core set of SNPs, the highly conserved markers from this study are useful because of their association with orthologous loci across Expressed Sequence Tags from the transcriptomes of e et al. . To furtfunction .a priori criteria of stratification using STRUCTURE 2.3.2 [et al. [Genepools based on population structure were determined with non RE 2.3.2 , a burn- [et al. . Polymor2) between all pairs of markers with the software package TASSEL 2.1 [2 capture different aspects of the gametic associations [2 estimate were obtained with a two-sided Fisher\u2019s exact test as done in the same program.The overall LD was estimated by calculating the square value of correlation coefficient , in several contexts. The TOG-based map is viewable using CMap v. 1.01 [Phaseolus vulgaris v1.0 GBrowse genome viewer, at https://legumeinfo.org/genomes/gbrowse/Pv1.0. Underlying data files are at the LIS database: https://legumeinfo.org/data/public/Phaseolus_vulgaris/mixed.map1.7PMp/.The genotypes used are in v. 1.01 with maret al. [et al. [https://legumeinfo.org/data/public/Phaseolus_vulgaris/mixed.map1.7PMp/.The SNP based BAT93 x JaloEEP558 genetic map was constructed with all the single copy TOG markers from Blair et al. and was et al. , as wellet al. , resulti [et al. . Groups et al. [Of the 812 genetic markers in the final map, all 768 SNP markers were tested for physical mapping to the common bean genome sequence from Schmutz et al. . In totaThe cM / Mb scatter plots for all common bean chromosomes and for Phaseolus vulgaris genome. 
Meanwhile 66 markers had possible multiple hits under the parameter of difference of e-10 from the best hit to next best hit and an E-value threshold of 1e-30. At the 1e-40 level, 497 TOG markers were exclusive hits to one gene; while in a total number of cases two TOG markers corresponded to the same gene. No difficulties were found searching for the BAT93/Jalo EEP558 derived TOG markers against the genome assembly for G19833, which was the genotype used in the sequencing project of Schmutz et al. [-30. The results of gene correspondences is given in Of the 768 new TOG markers, the majority, a total of 702 sequences had highly significant singleton hits to predicted or actual genes in the I on linkage group b02d, at the end of the short arm of chromosome Pv02 was flanked at 0.9 cM by three new SNP markers, namely TOG961744_119, TOG906764_834 and TOG906764_376 in this highly recombinogenic and evolutionarily active region that has been amply characterized for the understanding of the necrotic response to BCMNV strains of bean common mosaic virus [Since the TOG markers were well distributed, it was important to note that some of the new SNPs could be useful in substituting phenotypic or legacy molecular markers. In the case of phenotypically useful markers the virus resistance gene called dominant ic virus .Phs) which influences seed size and where TOG897715_56 and TOG897715_587 were genetically linked at 1.7 cM. Several examples of SNPs linked to isozymes mapped by Freyre et al. [Aco2), chalcone synthase (ChS), chitanase (Chi) and diaphorase (Diap); while the substitution of RFLP markers by SNPs is self-evident.Another well-characterized gene with new flanking SNP markers was the locus for phaseolin protein 2. Distinction of linkage groups by linkage disequilibrium blocks was only achievable in the analysis carried out within the Andean genepool. On the other hand, inter-chromosomal linkage disequilibrium was more prevalent within the Mesoamerican genepool likely due to its extensive race substructure. When genepool structure was not accounted for, genome-wide linkage disequilibrium was notoriously widespread and did not decay with genetic distance. To study the relationship of R and LD, respectively, with gene density (2 (%) and recombination rate (cM/bp) value in that window. The relationship was significant in both cases with P = 0.031 and P<0.0001, respectively. Therefore, we can conclude that where the gene density was higher, the recombination rate and r2 were higher. This indicated that LD decay was higher in windows that were gene-rich compared to those windows that were gene-poor.Overall LD measured as romosomes and at eomosomes . As D\u2032 i density , we condThe TOG based SNPs were useful for creating a saturated genetic map for the BAT93 x Jalo EEP558 population, which has been widely used in previous studies as a reference population . The SNPet al. [et al. [In both the work of Blair et al. and Hyte [et al. , the dis [et al. , but the [et al. ,44,45. GPhaseolus vulgaris was useful for linkage group to chromosome identification and therefore marker orientation. For example the isozyme or phenotypic markers for BCMV resistance (I gene) and enzyme Chs mapped to the correct locations on Pv2 as did the loci for isozymes Aco2 and Diap on Pv5 and the locus for Chi and the seed protein phaseolin (Phs) on Pv7 according to original mapping [et al. 
[In our study, observations from the refined genetic map showed the TOG markers were well distributed across and within all the linkage groups with even genetic and physical distances between most markers with the exception of those at chromosome ends. Another observation was that the genetic map was smaller overall than previous maps. This could be explained by the fact that some of the SNP markers were grouped in blocks, making for a more condensed genetic map. Meanwhile, the full genome sequence of mapping , for thi [et al. for the \u03a72 = P\u22650.05). We analyzed the SNP distribution further in two ways: 1) by comparing linkage groups and 2) by comparing regions within the physical map for each chromosome.In the final genetic mapping the number of SNP markers varied between linkage groups. The number of total markers per linkage group ranged up to 145 for Pv2 (49.0 Mb) with more than 110 markers on Pv1 (52.2 Mb in length) and Pv3 (52.3 Mb). Meanwhile, linkage groups Pv4 (45.8 Mb), Pv5 (40.7 Mb) and Pv10 (42.2 Mb) were low in SNP marker saturation with 22, 40 and 27 TOGs, respectively. The remaining linkage groups had similar numbers of markers ranging from 61 on Pv11 (50.2 Mb) to 88 on Pv7 (51.7 Mb). A chi-square test (significance P\u22640.05) showed that the distribution was not equal between linkage groups for the SNP markers. In contrast the RFLP and SSR markers were more evenly distributed as anchors across all linkage groups was telocentric, as predicted by Bhakta et al. due to tet al. [et al. [et al. [et al. [Our genetic mapping results compared favorably with those of the common bean genome sequencing and re-sequencing study of Schmutz et al. or the n [et al. . In thos [et al. was corr [et al. but to a [et al. , perhaps [et al. was modeet al. [et al. [I gene for virus resistance on Pv02.In summary regarding the linkage analysis, we presented a saturated genetic map with well characterized SNPs on a reference population for common bean, derived from the cross BAT93 x Jalo EEP558, and we used this map for estimating recombination rates and LD decay across the genome. This genetic map has been useful in determining recombination rates among many types of markers. As a core mapping population, the BAT93 x JaloEEP 558 set had many comparable markers with other highly saturated genetic maps such as those of C\u00f3rdoba et al. and Gale [et al. . As concet al. [LD estimates were made for the same SNP markers that were genetically mapped above and for both Andean and Mesoamerican genotypes from Blair et al. , which aet al. ,11. The et al. [et al. [A further result of our study was that LD was stronger and decayed slower within the Mesoamerican genepool, likely due to its more extensive race substructure. This is interesting because a bottleneck for the Andean beans, as has been speculated by arguing a Mesoamerican origin of the common bean , would iet al. and by Ket al. . The Braet al. found si [et al. . The abi [et al. , sub-spe [et al. . Domesti [et al. .A final point for our study was that the rate of LD decay was low when marker associations were studied within linkage groups and for each genepool separately. More focused studies of LD at certain loci have shown that LD decay was moderate and variable around drought tolerance genes \u201326 acrosWithin Andean LD analysis is likely to show higher recombination than within Mesoamerican genepool mapping due to less population structure and potentially higher polymorphism. 
LD rates have been shown to affect the potential for QTL and association mapping of traits in common bean. Given the higher within-Andean LD, initial successes with genome wide association have been for Andean studies ,53,54. MS1 Table(DOCX)Click here for additional data file.S2 Table(XLSX)Click here for additional data file.S3 Table(XLSX)Click here for additional data file.S1 Fig(PDF)Click here for additional data file.S2 Fig(PDF)Click here for additional data file.S3 Fig(PDF)Click here for additional data file.S4 Fig(PDF)Click here for additional data file.S5 Fig(PDF)Click here for additional data file.S6 Fig(PDF)Click here for additional data file.S7 Fig(PDF)Click here for additional data file."} {"text": "Although medication is generally avoided wherever possible during pregnancy, pharmacotherapy is required for the treatment of pregnancy associated hypertension, which remains a leading cause of maternal and fetal morbidity and mortality. The long-term effects to the child of in-utero exposure to antihypertensive agents remains largely unknown.The aim of this study was to systematically review published studies on adverse outcomes to the child associated with in-utero exposure to antihypertensive medications.OVID, Scopus, EBSCO Collections, the Cochrane Library, and Web of Science databases were searched for relevant publications published between January 1950 and October 2016 and a total of 688 potentially eligible studies were identified.Following review, 47 primary studies were eligible for inclusion. The Critical Appraisal Skills Programme checklist was used to assess study quality. Five studies were of excellent quality; the remainder were either mediocre or poor. Increased risk of low birth weight, low size for gestational age, preterm birth, and congenital defects following in-utero exposure to all antihypertensive agents were identified. Two studies reported an increased risk of attention deficit hyperactivity disorder following exposure to labetalol, and an increased risk of sleep disorders following exposure to methyldopa and clonidine.The current systematic review demonstrates a paucity of relevant published high-quality studies. A small number of studies suggest possible increased risk of adverse child health outcomes; however, most published studies have methodological weaknesses and/or lacked statistical power thus preventing any firm conclusions being drawn. Hypertensive disorders of pregnancy have been identified as the leading cause of maternal death in industrialized countries, complicating approximately 7% of pregnancies ,2.Prior to pregnancy, approximately 3% of women have an existing diagnosis of hypertension of whom a quarter will go on to develop preeclampsia ,3. A furConsequently, there has been a move to treat hypertensive women of childbearing age more aggressively in an attempt to normalize blood pressure (BP) prior to and during pregnancy and reduce the occurrence of preeclampsia . 
DespiteAlthough in-utero exposure to any medication during the first trimester is associated with the highest risk of teratogenic malformations, exposure during the second and third trimesters has also been linked to functional and behavioural abnormalities, which may not be immediately apparent \u20139.Although the fetogenic and teratogenic risks associated with angiotensin-converting enzyme inhibitors (ACEi) and AT1 blockers (ARB) use during pregnancy are well recognized the potein utero to one or more antihypertensive medications compared with unexposed children.To determine whether in-utero exposure to antihypertensive medication is associated with adverse child outcomes, we reviewed primary literature reporting child outcomes following exposure A systematic review protocol was designed as per standard Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines . The datThe reference lists from eligible papers were scanned for other relevant studies. The search was limited to English and German language articles. Narrative and systematic reviews (with no synthesis of data), studies published only as abstracts, letters, or conference proceedings, discussion papers, animal studies, and editorials were excluded.Initial screening of titles was carried out to identify potentially relevant studies, followed by screening of abstracts and then by full paper review. All titles and abstracts were independently evaluated by two reviewers (M.S. and J.S.M.) for consistency of inclusion/exclusion.Quality assessments were conducted by two independent reviewers (C.F. and M.S.) using a modified version of the Critical Appraisal Skills Programme quality assessment tool for randomized controlled trials , case\u2013cohttp://www.crd.york.ac.uk/prospero/[http://www.prisma-statement.org/.The current review was registered with the International Prospective Register of Systematic Reviews: prospero/. The PRIThe database searches retrieved 816 citations. Removal of duplicates followed by review of titles and abstracts yielded 47 relevant studies. The PRISMA flowchart illustrates the number of titles, abstracts, and full papers excluded.In total, 22 studies were conducted in Europe; 16 in North America, three in Asia, one in central America, and one in Africa. Four of the 47 studies were multinational, three of which were across continents . Of thesN.Descriptive tables Tables \u20136 showinStudies were categorized according to study design and quality and bias assessed accordingly . No study achieved the maximum score for quality, five were graded as excellent ,31,33,51For clarity and ease of interpretation, the results of the eligible studies have been reported, according to the drug of exposure, as early (birth to 1 month) and long-term (>1 month) outcomes.Five case\u2013control studies \u201328,30,31et al.[et al.[In the cohort study of 1418 women, rated as excellent, Lennest\u00e5l et al. reportedl.[et al. also repet al.[In the case\u2013control study rated as excellent, studying major congenital anomalies , and being small for gestational age , Nakhai-Pour et al. reportedet al.[P\u200a<\u200a0.0001) and increased odds of low birth weight [Banhidy et al. reported1.8\u20132.7) . However1.8\u20132.7) .et al.[Caton et al. reportedet al.[In a second case\u2013control study rated as mediocre, by Caton et al., 5021 caet al.[et al.[Van Gelder et al. reportedl.[et al. reportedAny hypertensive summary. 
In summary, although seven studies reported increased odds of preterm birth and low birth weight in treated mothers, particularly those with chronic hypertension and four studies, a possible increase in the incidence of congenital malformations study designs did not allow assessment of the relative importance of hypertension vs. exposure to antihypertensive patients. The ORs for perinatal mortality, preterm birth, and congenital cardiovascular defects are reported as Forest plots in Supplementary Figs. 2\u20134.One case\u2013control study , 16 cohoet al.[In a cohort study of 1063\u200a238 pregnant women, rated as excellent, Lennest\u00e5l et al. reportedet al. also repet al.[In a cohort study of 100\u200a029 women, also rated as excellent, Orbach et al. reportedet al. also repHowever, there was also increased odds of preterm birth , being small for gestational age , and intrauterine growth restriction , for offspring exposed to untreated maternal hypertension in-utero compared with the normotensive untreated reference group. The authors did not et al.[In a mediocre cohort of 911\u200a685 women, Meidahl Petersen et al., reporteet al. also repet al.[Ray et al., in a stet al.[P\u200a<\u200a0.001), low birth weight , and being small for gestational age following atenolol exposure compared with an untreated hypertensive comparison group, with the largest risk of being small for gestational age following exposure to atenolol during early pregnancy .In a mediocre cohort study of 312 women, Lydakis et al. reportedet al.[In a cohort study of 1223 women rated as poor, Xie et al. reportedet al.[P\u200a=\u200a0.25), being small for gestational age , or being born preterm , in 109 offspring, 55 of whom were exposed in utero to labetalol [P\u200a=\u200a0.02) associated with labetalol. In a poor cohort study of 491 women, Bayliss et al.[P\u200a<\u200a0.01), and lower ponderal index following in-utero exposure to atenolol in the first trimester of pregnancy, compared with untreated hypertensive women.In a cohort study rated as poor, investigating the possible effects of labetalol for the treatment of severe preeclampsia, Heida et al. reportedabetalol . The autss et al. reportedet al.[P\u200a<\u200a0.001) in a cohort study, rated as poor, of 398 women. The authors [3\u200a\u00d7\u200a104; P\u200a=\u200a0.001).Lip et al. reportedet al.[et al.[Caton et al., in a mel.[et al. reportedet al.[et al.[P\u200a<\u200a0.01). In a poor cohort study investigating the effects of labetalol on 129 offspring, Munshi et al.[P\u200a<\u200a0.01); however, there was no significant difference in the incidence of intrauterine growth retardation or birth asphyxia.In a mediocre RCT of 100 women assessing fetal head circumference and placental weight, Fidler et al. observedl.[et al. reportedhi et al. reportedet al.[N\u200a=\u200a11), compared with a comparison group of unexposed, normotensive women (N\u200a=\u200a11). There was no difference in heart rate (HR), respiratory rate or palmar sweating between the two treatment groups; however, mean SBP in infants was significantly different at 2-h post birth (P\u200a<\u200a0.05). This difference disappeared after 72\u200ah.In a poor cohort study of 22 women, Macpherson et al. reportedet al.[et al.[P\u200a<\u200a0.05).In an RCT of 176 women rated as mediocre, Plouin et al. reportedl.[et al. reportedet al.[P\u200a=\u200a0.02).Lieberman et al. reportedet al.[et al.[P\u200a=\u200a0.008) and hypotension . 
No differences in birthweight, intrauterine growth restriction, hypoglycemia, respiratory distress syndrome, 1 and 5\u200amin Apgar scores, or NICU admissions were reported between the two treatment groups.In a mediocre RCT of 263 women with mild-to-moderate chronic hypertension, Sibai et al. reportedl.[et al. reportedet al.[In a poor cohort study to investigate functional development of the fetal central nervous system, involving 117 women, 21 of whom were treated with nifedipine or labetalol for pregnancy-induced hypertension, Gazzolo et al. reportedBeta-blocker summary: In summary, exposure to beta-blockers in the setting of hypertension, was associated with an increased risk of preterm birth, being small for gestational age, increased perinatal mortality and low birth weight; however, there was limited evidence to suggest that this increased risk was due to the medication rather than the underlying hypertensive disease, with only six of 13 studies reporting increased risk controlling for underlying hypertension. There was also some evidence of an increased risk of cardiovascular malformations; however, relevant studies did not include an untreated comparison group [on group ,29. Threon group ,44,53, wThree RCTS ,62,63, oet al.[In the one study rated as excellent, Bateman et al. reportedin utero to nifedipine and offspring without in-utero antihypertensive exposure.Gruppo , in an Ret al.[P\u200a=\u200a0.003) compared with normotensive untreated women in a poor cohort study of 156 women. The authors [P\u200a=\u200a0.08), or congenital malformations among offspring with in-utero exposure to calcium channel blockers.Magee et al. reported authors reportedet al.[P\u200a=\u200a0.007), miscarriages associated with in-utero exposure to calcium channel blockers compared with an untreated normotensive group. The authors [P\u200a=\u200a0.10) with in-utero exposure to calcium channel blockers compared with an untreated normotensive group.Weber-Schoendorfer et al., in a co authors reportedet al.[P\u200a=\u200a0.03).Hall et al. reportedet al.[In a mediocre case\u2013control study of 54\u200a016 women, involving 22\u200a865 cases taken from the Hungarian Congenital Abnormality Registry and 38\u200a151 controls without malformations, Sorensen et al. reportedet al.[In an RCT of 49 hypertensive women rated as mediocre, Fenakel et al. reportedet al.[et al.[In a mediocre cohort study of 87\u200a407 pregnant women, Davis et al. reportedl.[et al. also repCalcium channel blocker summary. In summary, the results of the calcium channel blocker studies are mixed, with evidence of increased perinatal mortality from two RCTs, increased odds of preterm birth and perinatal mortality from the cohort studies, and evidence of increased malformations from the case\u2013control study. None of the reviewed studies assessed the effect of untreated hypertension on outcomes, thus limiting the interpretation of the data and the relative importance of in-utero exposure rather than underlying disease. The ORs for perinatal mortality, preterm birth, and congenital cardiovascular defects are reported as Forest plots in Supplementary Figs. 2\u20134.One case\u2013control study and seveet al.[In a cohort study involving 465\u200a754 offspring, rated as excellent, Li et al. reportedet al. reportedet al.[In a cohort study involving 1063\u200a238 women, rated as excellent, Lennest\u00e5l et al. 
reportedet al.[P\u200a<\u200a0.001), low birth weight , lower gestational age , and miscarriage associated with in-utero exposure to an ACEi or ARB, in comparison with a nonteratogen-exposed group. Diav-Citrin et al.[P\u200a=\u200a0.838) or congenital malformations .In contrast, in a cohort study rated as poor, comparing ACEi and/or ARB (252 women) therapy vs. other antihypertensive therapy (256 women) or nonteratogenic exposure (495 women), Diav-Citrin et al. reportedin et al. did not et al.[Tabacova et al. reportedet al. also repet al.[P\u200a<\u200a0.001), lower birth weight (P\u200a<\u200a0.001), and lower gestation ages (P\u200a<\u200a0.001) in those exposed to ACEi or ARBs compared with those exposed to other antihypertensive medications and untreated normotensive mothers, in a cohort study of 388 women, rated as poor. The authors [P\u200a=\u200a0.99).Moretti et al. reported authors reportedet al.[et al.[et al.[Caton et al. reportedl.[et al. reportedl.[et al. reportedACEi and ARB summary. In summary, ACEi and ARBs were associated with an increased risk of preterm delivery and miscarriage. The results relating to congenital abnormalities were conflicting with four studies reporting no increased risk and five reporting increased risk. The results, as a whole, are further confounded by the lack of inclusion of an untreated hypertensive group. The ORs for congenital cardiovascular defects are reported as a Forest plot in Supplementary Fig. 4.Two cohort studies ,65 and tet al.[In the only study rated as excellent, Orbach et al. reportedet al.[et al.[P\u200a<\u200a0.05). However, there was no difference in IQ or behavior.In a cohort study of 1223 women rated as poor, Xie et al. reportedl.[et al. reportedet al.[et al.[Fidler et al. observedl.[et al. reportedMethyldopa summary. In summary, methyldopa exposure was associated with an increased risk of preterm birth, perinatal mortality, and low birth weight. The RCTs reported no difference among methyldopa exposed offspring in other outcomes such as BP, IQ, behavior and placental weight. An increased risk of intrauterine growth restriction and low Apgar scores was reported by one cohort study. However, the study designs used did not permit any conclusions to be drawn as to the relative importance of in-utero methyldopa exposure vs. the underlying effects of hypertension on the offspring. The ORs for perinatal mortality and preterm birth are reported as Forest plots in Supplementary Figs. 2 and 3.One linkage cohort study involvinin utero on later childhood development. The RCT [One RCT and thre The RCT and two The RCT ,13 asses The RCT assessed The RCT ,13,49 an The RCT quality.et al.[Cockburn et al., in a meet al.[Chan et al., investiet al.[In a mediocre cohort study of 202 patients, Pasker-De Jong et al. investiget al.[in utero, suggesting a possible dose\u2013response relationship.In a poor cohort study of 44 patients, Huisjes et al. reportedThe current study reviewed 47 published studies reporting the effects of in-utero exposure to antihypertensive medication on the fetus and child. 
Thirty-two of these studies were of poor or mediocre quality, with small study populations, and incomplete adjustment for confounding, and lack of quality.Although there is a widely held view that antihypertensive patients, such as beta-blockers, may be associated with a variety of detrimental fetal outcomes, such as low birth weight or congenital malformations, these beliefs are not based on robust data from appropriately designed and powered studies to conclusively confirm any associations. Furthermore, few studies have investigated the possible long-term outcomes following in-utero exposure to antihypertensive agents. The four studies which have done so have had small study populations, lacked statistical power and reported conflicting results. Although no IQ or developmental differences were reported for methyldopa and labetalol in two studies ,49, sleeThe five studies graded as excellent ,33,51,57et al.[One of the excellent studies assessing any antihypertensive medications by Lennest\u00e5l et al. reportedThe results of their study seem to indicate a beta-blocker driven association for congenital abnormalities; however, the increased frequency of such abnormalities may have been due to the underlying chronic hypertension. The authors fail to mention the severity of hypertension present in patients, which would impact on treatments used and outcomes of pregnancy. Furthermore, as the authors used only one point of reference to establish drug exposure, it is possible that as pregnancy progressed women may have experienced altered exposure to different antihypertensive patients or other potentially teratogenic agents, possibly confounding the results further. Lastly, because BP was not reported in the first midwife visit at which medication use was established it is not possible to determine whether patients in the untreated comparison group were truly normotensive. Therefore, the reported increase in the prevalence of congenital abnormalities and birth outcomes reported in this study may be related to underlying, or inadequately controlled, hypertension rather than antihypertensive therapy.et al.[The second excellent study to assess any antihypertensive exposure during pregnancy by Nakhai-Pour et al., reporteThe one study graded as excellent in quality assessing ACEi exposure during pregnancy, reported an increased risk of birth defects when compared with an untreated normotensive group, but no difference when compared with an untreated hypertensive control group . HoweverThe only study to assess calcium channel blocker exposure during the last 30 days of pregnancy rated as excellent, reported no increased risk of seizures compared with an unexposed normotensive comparison group . This stet al.[The fourth study deemed excellent was conducted by Orbach et al., who invTwelve of the 47 studies included in this review were RCTs; 11 of which were reported between 1982 and 2000. The gold standard for study design is an RCT, however, the need for randomization may conflict with the ethical treatment of hypertensive pregnant women and so introduce selection bias in the control groups.et al.[Women included in these RCTs were often hospitalized for severe preeclampsia as reported by Heida et al., which iThe study population size in the RCTs reviewed in this paper was between 25 and 300 patients. 
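Most of the associations summarized above are reported as odds ratios with 95% confidence intervals, and the width of those intervals depends strongly on sample size. The sketch below uses entirely hypothetical 2x2 counts and the standard Woolf (log) interval to illustrate why trials of the size reviewed here rarely yield conclusive estimates; it does not reproduce any study's data.

```python
# Illustrative sketch: odds ratio with a Woolf (log-based) 95% confidence
# interval from a 2x2 table, computed at two hypothetical sample sizes to show
# how small trials give very wide, inconclusive intervals.
import math

def odds_ratio_ci(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases, z=1.96):
    or_ = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
    se_log_or = math.sqrt(1 / exposed_cases + 1 / exposed_noncases
                          + 1 / unexposed_cases + 1 / unexposed_noncases)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# hypothetical data: 10% adverse-outcome rate in unexposed, 15% in exposed
for n_per_arm in (100, 10000):
    cases_exp = round(0.15 * n_per_arm)
    cases_unexp = round(0.10 * n_per_arm)
    or_, lo, hi = odds_ratio_ci(cases_exp, n_per_arm - cases_exp,
                                cases_unexp, n_per_arm - cases_unexp)
    print(f"n = {n_per_arm} per arm: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```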
As the studies were small scale, they may have lacked statistical power.The majority of reviewed studies were case\u2013control or cohort studies in design, which inherently introduces the issue of confounding from unknown or unmeasured confounders, underlying or poorly controlled hypertension, and the issue of medication adherence. Studies using data from poison centers or teratology centers ,24 may iIt is unclear whether the magnitude of the increased risks, such as birth defects ,57, repoThe results are further confounded by the lack of information describing the types of hypertension treated in women included in the exposed groups. It is likely that the risk profiles and outcomes associated with chronic hypertension, gestational hypertension, and preeclampsia are different; therefore, a knowledge of the underlying condition is important and should be considered and reported. Only 16 of 47 studies reported in this review provided such information, and the majority of these studies assessed beta-blocker exposure.Although it is recognized that the results of cohort studies should be reported as RR, ten of the cohort studies ,51,57,65Treatment of hypertension during pregnancy and assessing the potential risks to the offspring is further complicated by the underlying disease. Untreated hypertension results in preterm birth, low birth weight and increased mortality. It would therefore be reasonable to suggest that poorly controlled hypertension may carry the same risks as untreated hypertension. Case\u2013control and cohort studies reviewed in this paper have failed to determine whether hypertension was appropriately treated and controlled, and failed to assess adherence. Several studies reviewed ,38,45,57Limitations to this review include exclusion of conference abstracts, unpublished studies, studies reported in any language other than English or German, which may have resulted in possible publication bias. However, when the literature search was performed, only two articles were excluded due to language. A further limitation was the wide date-range over which the included studies were published, the majority before 2000. As a result attempted contact with authors was either impractical or unsuccessful.In conclusion, adverse child outcomes such as preterm birth, perinatal mortality, low birth weight, risk of congenital abnormalities, or other detrimental outcomes, following in-utero antihypertensive exposure have been reported in the literature. However, most published studies have had methodological weaknesses and/or lacked statistical power thus preventing any firm conclusions being drawn.Further research in this area is required to ensure that health professionals have sufficient data to treat hypertension during pregnancy to firmly demonstrate a lack of detrimental outcomes.We acknowledge the support from the Farr Institute at Scotland. 
The Farr Institute at Scotland is supported by a 10-funder consortium: Arthritis Research UK, the British Heart Foundation, Cancer Research UK, the Economic and Social Research Council, the Engineering and Physical Sciences Research Council, the Medical Research Council, the National Institute of Health Research, the National Institute for Social Care and Health Research (Welsh Assembly Government), the Chief Scientist Office, the Wellcome Trust (MRC Grant No: MR/K007017/1).There are no conflicts of interest."} {"text": "To monitor the effect of nutrition and pregnancy on the oxidative status of animals under the arid conditions of South Sinai.Blood samples were taken from two groups of animals: the first group was retained on the farm and fed concentrate (high diet), and the other group grazed natural forage (low diet). Each group was subdivided into pregnant and non-pregnant animals. Blood samples were assayed for their content of malondialdehyde (MDA), total antioxidant capacity (TAC), catalase (CAT), and superoxide dismutase (SOD) enzymes.MDA level significantly increased in pregnant animals fed either concentrate or grazing low-quality forage and was accompanied by a low level of TAC in pregnant grazing animals fed low-quality forage. The activity of CAT decreased in pregnant animals fed either concentrate or grazing, and SOD significantly decreased in the pregnant grazing group. These data suggested that the animals might have experienced some degree of oxidative stress and lipid peroxidation, indicating that redox homeostasis was impaired in pregnant animals, especially those fed on forage rations.Pregnancy constituted the greatest oxidative stress facing both grazing and concentrate-fed sheep and goats under the arid and saline conditions of Southern Sinai, Egypt. On the other hand, antioxidant enzyme SOD activities were determined in erythrocytes of non-pregnant and late pregnant animals.et al. [et al. [et al. [Lipid peroxidation product as MDA was assayed using the commercially available kit according to Ohkawa et al. and expr [et al. , Aebi [1 [et al. , and Nis [et al. , respectData for the various parameters were analyzed using the SPSS version 17.0 software package with one-way analysis of variance. Comparison of means was carried out by the least significant difference (LSD) test. Differences were considered to be significant at p<0.05.The lipid peroxidation, non-enzymatic antioxidant levels, and enzymatic antioxidant activities in pregnant and non-pregnant animals fed on concentrate or grazing low-quality natural forage are presented in In goats, MDA level significantly increased in pregnant animals fed either concentrate or grazing compared with that found in non-pregnant animals fed on concentrate (p<0.01) or grazing (p<0.05). TAC level was lower in grazing pregnant goats than in the other groups and significantly lower (p<0.01) when compared with non-pregnant animals fed concentrate. CAT activity significantly decreased in pregnant animals fed either concentrate or grazing compared with non-pregnant animals fed on concentrate. Furthermore, CAT activity in grazing pregnant animals was significantly lower when compared with non-pregnant animals fed concentrate (p<0.001) and pregnant animals fed concentrate (p<0.05).
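A minimal sketch of the group comparison just described (one-way ANOVA followed by LSD-style pairwise comparisons at p<0.05) is given below. This is not the authors' SPSS procedure, and the group values are hypothetical placeholders rather than study data.

```python
# Hedged sketch: one-way ANOVA across the four feeding/pregnancy groups,
# followed by LSD-style (unadjusted) pairwise t-tests only if the overall
# test is significant. Values are illustrative, not the study's measurements.
from itertools import combinations
from scipy import stats

groups = {
    "non-pregnant, concentrate": [2.1, 2.3, 2.0, 2.4],   # hypothetical MDA readings
    "non-pregnant, grazing":     [2.4, 2.6, 2.5, 2.7],
    "pregnant, concentrate":     [3.1, 3.4, 3.3, 3.0],
    "pregnant, grazing":         [3.8, 4.1, 3.9, 4.2],
}

f_stat, p_overall = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_overall:.4f}")

if p_overall < 0.05:
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        t_stat, p_pair = stats.ttest_ind(a, b)
        marker = "*" if p_pair < 0.05 else ""
        print(f"{name_a} vs {name_b}: p = {p_pair:.4f} {marker}")
```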
SOD activity in grazing pregnant animals was significantly lower than in pregnant animals fed concentrate (p<0.05) and non-pregnant animals fed either concentrate (p<0.01) or grazing (p<0.05).In sheep, MDA level in pregnant animals fed either concentrate or grazing showed a higher level than in non-pregnant animals, and MDA level in grazing pregnant animals was significantly higher (p<0.01) than that found in pregnant animals fed on concentrate and non-pregnant animals fed on either concentrate or grazing. TAC level in grazing pregnant sheep was the lowest of all groups and was significantly lower than in non-pregnant animals fed concentrate (p<0.01) and pregnant animals fed concentrate (p<0.05). CAT activity showed a significant decrease (p<0.05) in pregnant animals fed either concentrate or grazing compared with that found in non-pregnant animals fed on concentrate. SOD value in grazing pregnant animals was significantly lower than in non-pregnant animals fed concentrate (p<0.01) and grazing non-pregnant animals (p<0.05).The present study recorded that pregnant goats and sheep were exposed to an increased risk of oxidative stress during pregnancy compared with non-pregnant groups, as shown by the observed increase in MDA value and decrease in TAC level and CAT and SOD activities, and oxidative stress was more evident in grazing pregnant animals than in pregnant sheep and goats fed concentrate. Although nutrition is one source of oxidative stress, pregnancy constituted the greatest oxidative stress facing the animals, since the oxidative stress index (MDA) of both pregnant groups was increased while TAC level and CAT and SOD activities were decreased.Pregnancy is known to be stressful for organisms, accelerating the production of reactive oxygen species and oxidative stress . The reaIt is well known that high ROS concentrations may lead to oxidative stress and be the cause of many diseases in animals . HoweverLipid peroxidation produces a wide variety of aldehydes, which can be formed as secondary products such as MDA . MDA is Specific biomarkers of lipid peroxidation such as MDA and an increased level of MDA are an indication of lipid peroxidation. In our study, MDA level significantly increased in pregnant animals fed either concentrate or grazing. A similar result was obtained in pregnant sheep fed on medium to low-quality forages and pregOther studies suggested that MDA concentration increased during late pregnancy . On the et al. [The increase in placental progesterone (P4) is accompanied by an increase in blood circulating lipids and MDA, which is a marker of oxidative stress . Mohebbiet al. reportedet al. , due to et al. [TAC level in our study revealed a lower value in pregnant grazing animals, and this result was in agreement with Amer et al. , who monet al. [et al. [et al. [Our results indicated a decrease in CAT activity in pregnant animals fed either concentrate or grazing, and this result was in agreement with Erisir et al. , who fou [et al. reported [et al. recordedet al. [et al. [SOD is well known to be a superoxide radicals\u2019 scavenger which is an essential factor in the protection against free radical damage and is considered the first defense against pro-oxidants . In our et al. , who rec [et al. found thOur results indicated that changes in the nutritional level of the diet had little effect on animals compared with the effect of pregnancy. In this respect, many findings are consistent with the claim that pregnancy is a state of oxidative stress in ruminants ,6,29-31.et al. [et al. [et al. 
It is essential to consider that oxidative stress indicators are affected by nutrition, and modifications of metabolic substrate may affect their biosynthesis and turnover at the tissue level. It has been shown that feeding dairy cows with high levels of starch increases oxidative stress, possibly due to cellular changes related to oxidative phosphorylation . In sheeet al. recorded [et al. refer th [et al. reportedPregnancy constituted an oxidative stress on sheep and goats fed concentrate or grazing. The nutritional level of the diet had very little effect on blood oxidant and antioxidant status.KGMM and MFN designed the experimental study, ARA and ASAS performed the oxidant and antioxidant marker assays, whereas KGMM and ARA completed data analysis, revision, and writing of the article. All authors read and approved the final manuscript."} {"text": "The udder configuration and teat/udder morphology were recorded before milking. Milk samples (100 ml/cow) were collected aseptically. Milk somatic cell counts (SCC) and milk differential leukocyte counts were performed microscopically. Milk leukocytes were isolated from milk samples by density gradient centrifugation. The phagocytic index (PI) of milk neutrophils and macrophages was evaluated by a colorimetric nitro blue tetrazolium assay. Lymphocyte proliferation response was estimated by MTT assay and expressed as stimulation index.In vitro PI of milk neutrophils was found to be significantly (p<0.01) lower in flat teats. In vitro PI of milk macrophages was found to be significantly (p<0.01) lower in the round and flat teats compared to pointed and cylindrical teats.There was a significant (p<0.01) positive correlation of milk SCC with mid teat diameter and teat base diameter, and a significant (p<0.05) negative correlation between milk SCC and the height of the teat from the ground. Milk SCC was found to be significantly (p<0.01) lower in bowl-shaped udders and higher (p<0.01) in the pendulous type. Milk macrophage percentage was positively (p<0.01) correlated with udder circumference. PI of milk neutrophils was negatively (p<0.01) correlated with teat base diameter, and PI of milk macrophages was found to be positively (p<0.01) correlated with teat apex diameter. The PI of both milk neutrophils and macrophages was found to be significantly (p<0.01) lower in animals having flat and round teats and a pendulous type of udder. Udder risk factors such as teat shape and size, teat to floor distance, udder shape, and size may decrease the in vitro activity of milk leukocytes, hence facilitating the incidence of intramammary infections.The cr health -5.Association of udder morphology with the occurrence of mastitis has already been established worldwide ,7. Howev Therefore, this study was undertaken to shed some light on the association between udder morphology and in vitro milk leukocyte activity.The experiments on animals including all procedures of this study were approved by the Institutional Animal Ethics Committee (Registration number: 763/03/a/CPCSEA).A total of 48 high yielding crossbred cows of 2nd to 4th parity and mid lactation (150-200 days of lactation cycle) were selected from the herd of the Eastern Regional Station of ICAR-National Dairy Research Institute, Kalyani, Nadia, West Bengal, India. All the experimental animals were kept in a loose housing system under the routine managemental practices followed in the institute\u2019s herd. All the animals were screened weekly for clinical mastitis throughout the study.et al. [et al. 
[The mammary glands of all the animals were analyzed for the presence of any gross lesions. The udder configuration and teat/udder morphology were recorded following Bhutto et al. before t [et al. . The folTeat length (cm): Average of all four teats.Teat apex diameter (cm): Average of all four teats.Mid teat diameter (cm): Average of all four teats.Teat base diameter (cm): Average of all four teats.Teat shape: Pointed, cylindrical, round, flat.Udder shape: Cup, round, bowl.Udder position: Pendulous, non-pendulous.Composite milk samples (100 ml/cow) from all four quarters were collected into sterile tubes through hand milking for consecutive 3 times with 1 month apart. Teat dipping before the collection was done with an effective teat dip solution (0.5% iodine or 4% hypochlorite) for at least 20-30 s before milking. Then, the teats were carefully scrubbed with a cotton cloth or gauze pad moistened with 70-80% ethyl alcohol.SCC and DLC of milk samples were measured microscopically ,4.Isolation of milk leukocytes - viz., neutrophils, lymphocytes, and macrophages - was performed by density gradient centrifugation -5.In vitro PI of milk neutrophils and macrophages was evaluated by colorimetric nitro blue tetrazolium reductive assay [ve assay .Mitogen-induced milk lymphocyte proliferation response was measured by colorimetric MTT (tetrazolium) assay and exprin vitro activity of milk leukocytes) was analyzed by Spearman rank order correlation. Effect of teat shape, udder shape, and udder position were analyzed by one-way ANOVA.All analysis was performed using SYSTAT software package. The correlation between udder morphological parameters and udder immunological parameters positive correlation between milk SCC with mid teat diameter, teat base diameter and significant (p<0.05) negative correlation between milk SCC and the height of the teat from the ground. There was also a positive correlation between milk SCC and teat length and negative correlation between milk SCC and teat apex diameter though it was nonsignificant.There was no significant correlation between milk DLC and teat morphology. However, milk macrophage percentage was positively (p<0.01) correlated with udder circumference.There was no significant correlation between SI of milk lymphocytes with teat morphology.PI of milk neutrophil was found to be positively correlated with teat apex diameter and height of teat from ground. However, there was a significant (p<0.01) negative correlation between teat base diameter and PI of neutrophils.in vitro PI of milk macrophages with teat apex diameter.PI of milk macrophages was nonsignificantly positively correlated with teat length, mid teat diameter, teat base diameter and height of teat from the ground and negatively correlated with udder circumference. However, there was a significant (p<0.01) positive correlation between in vitro milk leukocytic activity in high yielding crossbred cows have been presented in Effect of teat shape on milk SCC, DLC and In vitro PI of milk neutrophils was found to be significantly (p<0.01) lower in flat teat. In vitro PI of milk macrophages was found to be significantly (p<0.01) lower in the round and flat teats compared to pointed and cylindrical teats.Concanavalin A (Con A) induced milk lymphocyte blastogenic response was found to be unaltered with different teat shape. 
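As a rough illustration of the Spearman rank-order correlation analysis described above (the authors used the SYSTAT package, not Python), the sketch below uses scipy; the measurement vectors are hypothetical placeholders, not the study's data.

```python
# Hedged sketch: Spearman rank-order correlation between udder/teat
# measurements and milk leukocyte readouts. Numbers are illustrative only.
import numpy as np
from scipy.stats import spearmanr

teat_base_diameter = np.array([2.8, 3.1, 3.5, 2.6, 3.9, 3.3])  # cm, hypothetical
log_milk_scc       = np.array([5.1, 5.4, 5.8, 5.0, 6.1, 5.6])  # log10 cells/ml, hypothetical
neutrophil_pi      = np.array([1.9, 1.7, 1.4, 2.0, 1.2, 1.5])  # phagocytic index, hypothetical

rho, p = spearmanr(teat_base_diameter, log_milk_scc)
print(f"teat base diameter vs milk SCC: rho = {rho:.2f}, p = {p:.3f}")

rho, p = spearmanr(teat_base_diameter, neutrophil_pi)
print(f"teat base diameter vs neutrophil PI: rho = {rho:.2f}, p = {p:.3f}")
```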
in vitro milk leukocytic activity in high yielding crossbred cows have been presented in Effect of udder shape on milk SCC, DLC and in vitro milk lymphocyte blastogenic responses were found to be unaltered with different udder shapes. In vitro PI of milk, neutrophil was found to be higher in round shaped udder compared to cup and bowl-shaped udder though the difference was nonsignificant. In vitro PI of milk macrophages was significantly (p<0.01) lower in bowl-shaped udder compared to round and cup-shaped udder.Con A induced in vitro milk leukocytic activity in high yielding crossbred cows have been presented in Effect of udder position on milk SCC, DLC and In vitro PI of milk neutrophil did not differ significantly between different udder positions. However, it was lower in the pendulous type udder. The in vitro PI of milk macrophages was significantly p<0.01 lower in the pendulous type of udder compared to non pendulous type of udder.There were no significant difference in ConA induced milk lymphocyte blastogenic response under different udder positions. et al. [et al. [et al. [et al. [The udder and teats are the first line of defense against intra-mammary infection, and the association between mastitis resistance and several udder type traits have been reviewed by many workers ,8,11. Inet al. stated t [et al. reported [et al. and Sing [et al. also repin vitro activity of milk leukocytes. Here, we found PI of milk neutrophil and macrophages was positively correlated with teat apex diameter and height of teat from ground which could be explained by the fact that higher activity of milk neutrophils and macrophages in the case of higher teat to ground distance makes the mammary gland more resistance to intramammary infections. Con A induced milk lymphocyte blastogenic response could not be compared as no literature available in this regard.To the best of our knowledge, this is the pioneer study correlating the udder morphology and in vitro milk lymphocyte blastogenic response and PI of milk macrophages were negatively correlated. The earlier reports on the relationship between teat length and mastitis were also contradictory. Some authors suggested longer teats were positively correlated with mastitis incidence [et al. [et al. [et al. [The probability of mastitis occurring varies considerably between different teat shapes, sizes, teat placement, and the morphology of the teat tip . In any ncidence , whereas [et al. could no [et al. reported [et al. stated a [et al. . During [et al. .in vitro activity of milk leukocytes under different teat shape. In vitro phagocytic activity of milk neutrophils and macrophages were lower in flat type teat compared to round and cylindrical teats. These findings were in accordance with the reports of Slettbakk et al. [et al. [These results suggest that the chances of mastitis are higher if the teat length is shorter and if the teat diameter is greater. There was a significant alteration in the k et al. and Bhut [et al. reported [et al. . However [et al. have alset al. [The udder and teats are the first line of defense against intramammary infection. Udder morphology is very heritable and could serve as a marker trait for selection to reduce mastitis in dairy cattle ,22. Uddeet al. stated tet al. [et al. [In this present investigation, we found higher activity of milk leukocytes and lower milk SCC in round shaped udder as reported by Saloniemi et al. indicate [et al. 
reported The study identified possible udder risk factors for the incidence of mastitis, such as teat shape and size, teat to floor distance, and udder shape and size, and found that these factors reduced the in vitro activity of milk leukocytes, hence facilitating the incidence of intramammary infections. JM planned the study. TS recorded the data. PRG and DB provided technical support and helped in data analysis. PKD analyzed the data. JM drafted and revised the manuscript. All authors read and approved the final manuscript."} {"text": "With the emergence of high-throughput \u201comics\u201d data, network theory is being increasingly used to analyze biomolecular systems. In particular, proteins rarely act alone, which prompted the arrival of proteomics. \u201cNetwork proteomics\u201d is defined as the research area that applies network theory to investigate biological networks ranging from protein structure networks to protein-protein interaction networks. In addition, the application of network proteomics in biomedical fields has increased significantly. This special issue has collected contributions, not only focusing on the state of the art of methodology in \u201cnetwork proteomics\u201d itself but also focusing on the current status and future direction of its applications in translational medical informatics. This issue starts with discussing the role of protein structure in interactions. N. Raethong et al. developed a strategy to annotate Aspergillus oryzae, which would enhance its metabolic network reconstruction. By using molecular dynamics simulation and principal component analysis, H. Wan et al. investigated the interaction between the last half repeat in TAL effectors and its binding DNA. This work would give a deeper understanding of the recognition mechanism of protein-DNA interactions.Protein complexes offer detailed structural characteristics of protein-protein interactions. H. Zhang et al. proposed a method combining the high-frequency modes of the Gaussian network model and Gaussian Naive Bayes to identify hot spots, which are residues that contribute largely to protein-protein interaction energy. G. Hu et al. performed a comparative study of the elastic network model and the protein contact network for protein complexes in the case of hemoglobin.Protein-protein interactions are not only limited to binary associations but also employ special topological structures to perform their biological functions. N. Bernab\u00f2 et al. approached the comparison of two network models obtained from two different text mining tools to suggest that actin dynamics affect spermatozoa postejaculatory life. J. Xu et al. also compared the topologies of network models of drug-kinase interactions between established targets and those of clinical trial ones.Both protein structures and protein-protein interactions are the subject of a growing number of pharmacological studies. F. Ye et al. identified compounds DC_C11 and DC_C66 as two small-molecule inhibitors against protein-protein interactions between CARM1 and its substrates, while Y. Wang et al. identified CID_70128824, CID_70127147, and CID_70126881 as three potential inhibitors for targeting the androgen receptor to treat prostate cancer. Finally, a review paper completes the issue. X. Li et al. 
introduced recent advances in the development of network models of identifying synergistic drug combinations.Altogether, we wish this issue has given a wider development of structural biology and systems biology with the advantage of biological network analysis as well as prospecting the future of this area towards translational bioinformatics and systems pharmacology.Guang HuLuisa Di PaolaFilippo PullaraZhongjie LiangIntawat Nookaew"} {"text": "Childhood adversity predicts adolescent suicidal ideation but there are few studiesexamining whether the risk of childhood adversity extends to suicidal ideation inmidlife. We hypothesized that childhood adversity predicts midlife suicidal ideation andthis is partially mediated by adolescent internalizing disorders, externalizingdisorders and adult exposure to life events and interpersonal difficulties.At 45 years, 9377 women and men from the UK 1958 British Birth Cohort Studyparticipated in a clinical survey. Childhood adversity was prospectively assessed at theages of 7, 11 and 16 years. Suicidal ideation at midlife was assessed by the depressiveideas subscale of the Revised Clinical Interview Schedule. Internalizing andexternalizing disorders were measured by the Rutter scales at 16 years. Life events,periods of unemployment, partnership separations and alcohol dependence were measuredthrough adulthood.Illness in the household, paternal absence, institutional care, parental divorce andretrospective reports of parental physical and sexual abuse predicted suicidal ideationat 45 years. Three or more childhood adversities were associated with suicidal ideationat 45 years .Psychological distress at 16 years partially mediated the associations of physical abuse, sexual abuse with suicidalideation. Adult life events partially mediated the association of parental divorce and physical and sexual abuse with suicidal ideation at 45 years.Adversity in childhood predicts suicidal ideation in midlife, partially mediated byadolescent internalizing and externalizing disorders, adult life events andinterpersonal difficulties. Understanding the pathways from adversity to suicidalideation can inform suicide prevention and the targeting of preventiveinterventions. We hypothesized that: (1) childhoodadversity predicts suicidal ideation at midlife; and (2) the association of childhoodadversity and suicidal ideation is partially mediated through (n\u00a0=\u00a017\u00a0416). Analyses werebased on 9377 participants in a clinical survey at 45 years. The response rate for theclinical survey was 78% of those invited, representing 54% of the surviving population ;Sexual abuse by a parent .Childhood adversity was defined as exposure to traumatic events or chronic stressors Study from 16 to 42 years wascalculated from the activity histories available from 1974 to 2009 .et al.n\u00a0=\u00a09377). A total of 30 datasets of the imputation were run andanalyses indicated that the measures were stable across the imputations. Parameterestimates from the 30 imputations were estimated using the MIM function in STATA.Imputed analyses are presented with unimputed prevalence figures. Multiple imputation, under the \u2018missing at random\u2019 assumption, was used to address theissue of missing data, using the ICE program in STATA. The measures described above wereincluded in the imputation equations. 
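The imputation strategy described above (30 chained-equations imputations generated with ICE and combined with MIM in STATA) can be approximated in Python; the sketch below is not the authors' code, and the data frame, variable names and missingness pattern are hypothetical.

```python
# Hedged sketch: multiple imputation by chained equations followed by a
# pooled logistic regression, loosely mirroring the ICE/MIM workflow above.
# All data below are simulated placeholders, not cohort data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "ideation":   rng.integers(0, 2, n).astype(float),  # suicidal ideation at 45 y (0/1)
    "adversity":  rng.integers(0, 4, n).astype(float),  # number of childhood adversities
    "female":     rng.integers(0, 2, n).astype(float),
    "distress16": rng.normal(size=n),                   # adolescent distress score
})
df.loc[rng.random(n) < 0.2, "distress16"] = np.nan      # introduce missingness to impute

imp_data = mice.MICEData(df)                            # chained-equations imputation
model = mice.MICE("ideation ~ adversity + female + distress16",
                  sm.GLM, imp_data,
                  init_kwds={"family": sm.families.Binomial()})
results = model.fit(n_burnin=10, n_imputations=30)      # pooled across 30 imputations
print(results.summary())
```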
Employment status at 45 years, and social class at7 and 42 years were also included as they were significantly associated with attrition internalizing and externalizingdisorders at 16 years and (b) the adulthood life events andinterpersonal difficulties.Further logistic regression analyses assessed the associations of: (i) interpersonaldifficulties ; and (ii) adultdrinking problems (at 33 or 42 years) with midlife suicidal ideation, adjusting forgender, social class at 42 years and qualifications at 33 years. Finally, theassociations between childhood adversities and suicidal ideation were re-run,additionally adjusting for internalizing disorders; (ii) externalizingdisorders; or (iii) interpersonal difficulties and problem drinking was investigatedusing the p\u00a0<\u00a00.05) and adjusted for gender where not significant(p\u00a0\u2a7e\u00a00.05).All analyses tested for interactions between childhood adversity and gender: analyseswere stratified by gender if the interaction was significant.We carried out analyses to examine how much this adult adversity score might mediate theassociation between childhood adversity and suicidal ideation. The indirect percentage ofthe total mediation effect was between 3.28 and 8.63 for feeling that life was not worthliving and between 7.86 and 8.67 for depressive ideas. The percentage of mediationexplained was of little greater magnitude than the individual life events andinterpersonal difficulties (see Supplementary Table).The aims of our study were to assess whether the effects of childhood adversity on suicidalideation extended beyond adolescence to midlife and whether this was partially mediated byadolescent internalizing disorders, externalizing disorders and adult exposure to lifeevents and interpersonal difficulties. We confirmed that specific childhood adversitieswhich included illness in the household, paternal absence and divorce prospectively predictsuicidal ideation at 45 years even after adjustment for confounding and mediating factors.Retrospectively recalled parental sexual and physical abuse also show strong associationswith suicidal ideation at 45 years. Childhood adversity predicts adult life events,supporting continuity of exposure to adversity across the lifecourse. Adulthoodinterpersonal difficulties predicted suicidal ideation.et al.In terms of investigation of mediation, psychological ill-health in childhood partiallymediated the association of illness in the household, paternal absence and physical andsexual abuse. Adult life events partially mediated the association of paternal absence,divorce and physical and sexual abuse, while partnerships in adulthood and problem drinkingpartially mediated the association of sexual and physical abuse. We have confirmed ourhypotheses that some childhood adversities are associated with midlife suicidal ideation andthat this is partly mediated by subsequent life events, internalizing and externalizingdisorders, although the variance explained in mediation is small. Our findings supportKendler's aetiological framework for affective disorders (Kendler et al.et al.et al.et al.et al.et al.etal.et al.et al.Studies of youth suicide have demonstrated the importance of parent-related events in thefamily such as parental hospital admission with mental illness, parental divorce and maritaldisruption (Fergusson et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.et al.In terms of other types of childhood adversity sexual (Fergusson et al.et al.et al. 
(et al.Lack of maternal and paternal care has been linked to adult suicidal ideation (Enns.et al. state th. et al., althouget al.et al.et al.et al.The magnitude of the effects of the mediating factors of internalizing, externalizing andinterpersonal disorders may have been small due to methodological limitations: risk factorexposure misclassification, lack of key variables at different stages of the lifecourse andlack of coverage of all relevant risk factors within the three groups of mediators.Additionally we needed to dichotomize the mediating variables, rather than use continuousscores because of the skewed nature of their distribution. Thus we may have underestimatedthe effect of the mediators. Moreover, as the models are fully adjusted we only expected tosee small levels of mediation as the other covariates in the model also contribute to theoutcome. Looking at individual possible mediators on top of the effects of other covariatesmeans that individual mediators are unlikely to explain a large percentage of theassociations. However, in our study we may also be missing some key variables that areshaped by childhood adversity and transmit the risk of suicidal ideation across thelifecourse. One of these may be how the experience of adversity influences feelings ofself-worth, mastery and the ability to develop positive and trusting relationships. In turnthis may influence coping capacities and the ability to ask for help in a crisis (Gunnellet al.et al.et al.et al.et al.A common theme across studies of risk factors for suicidal ideation and suicide is lowsocial support and social isolation (Heikkinen et al.It is a strength that we had prospective measurements of childhood adversities andmediating factors at different life stages. To our knowledge this is the first study toexamine the role of mediators of prospectively measured childhood adversity and itsassociation with midlife suicidal ideation. It is a limitation that childhood sexual andphysical abuse were reported retrospectively (Colman et al.et al.et al.Suicidal ideation and completed suicide are not equivalent, and less than 1 in 200 of thosewith suicidal ideation proceed to suicide (Gunnell Some elements of childhood adversity have an impact on midlife suicidal ideation afteradjustment for mediating factors. This is in keeping with effects of adversity in criticalperiods in childhood having long-term consequences for mental health that may reflecteffects on the developing brain and neuroendocrine responses to stress.et al.Understanding the pathways from adversity to suicidal ideation can inform suicideprevention (Gunnell"} {"text": "The present special volume \u201cEcotoxicology in Tropical Regions\u201d contains 19 research articles that were selected from presentations at two conference sessions at the SETAC Europe 24th and 25th meetings in Basel 2014 and in Barcelona 2015. The papers address major ecotoxicological issues in several tropical and sub-tropical countries including Brazil, Mexico, Costa Rica, Columbia, Nicaragua, Costa Rica, Ethiopia, Thailand, Vietnam, and Iran.In 2017, more than 16,000 scientists from 184 countries published a \u201cWarning to humanity letter\u201d . Similar results were reported in the paper by Sobrino-Figueroa Daphnia magna, commonly used to assess water quality. On the other hand, Khatikarn et al. In Moura et al. Macrobrachium and Cardina sp.) were more sensitive to pesticides than temperate crustaceans such as Daphnia magna. 
They discuss possible reasons for these differences and recommend using native species for first tier prospective ecological risk assessment (ERA).In the paper by Daam and Rico Crassostrea rhizophorae was evaluated for chemical biomonitoring. The authors found that this oyster, commonly associated with mangrove trees, is a suitable species for biomonitoring in Caribbean coastal systems. Six papers are dealing with pesticide environmental contamination in Costa Rica. Costa Rica has one of the richest biodiversity on Earth. It also has an extensive agriculture, with banana and pineapple as main export crops, and has the highest pesticide use per capita in Central America. Pesticide spray drift and runoff from plantations pose toxicity risks to aquatic organisms downstream these crops and to humans. The paper of M\u00e9ndez et al. Ca\u0148o Negro wetland and watershed located in northern Costa Rica. Arias-Andr\u00e9s et al. In the paper by Aguirre-Rub\u00ed et al. Three papers present studies from pesticide toxicity in Vietnam. Tam et al. A paper by Teklu et al. This Special Issue presents a unique collection of papers, describing various actual ecotoxicological issues in ten tropical and sub-tropical countries. As anthropogenic pressures and environmental problems that these and other tropical countries are currently facing will increase during the coming decades, it is essential that the scientific community carries out more research in tropical regions. We hope that the results and methodologies presented here will help policymakers to motivate stronger environmental protection measures to limit contaminant emissions and habitat destruction in tropical regions.Finally, we would like to thank all the reviewers who helped improve the quality of the research articles included in this Special Issue."} {"text": "The mechanical properties of these films were also improved when compared with films produced without CO. Conversely, the water barrier properties of composite films decreased (p < 0.05) when the concentration of gelatin in composite films increased. Comparing with pure gelatin films, water and oxygen barrier properties of gelatin films decreased when manufactured with the inclusion of CO.The objectives of this study were to develop composite films using various gelatin sources with corn oil (CO) incorporation (55.18%) and to investigate the mechanical and physical properties of these films as potential packaging films. There were increases ( Biopolymers, based on carbohydrates and proteins, have been extensively studied to develop edible/biodegradable films with more versatile properties . HoweverGenerally, protein-based films offer better mechanical and barrier properties than those manufactured from polysaccharides . This iset al. [In addition to lipid incorporation, altering the pH of film-forming solutions may also be one of the many approaches required to improve functional properties of biopolymer films ,22,23,24et al. , higher et al. [et al. [In our previous study , films met al. ), and co [et al. .Beef skin gelatin (Bloom 220), pork skin gelatin (Bloom 225) and fish skin gelatin from warm water Tilapia (Bloom 240) were purchased from Healan Ingredients Ltd. . Corn oil and glycerol were obtained from Sigma Aldrich Co. . For pH adjustment, NaOH was obtained in pellet form and lactic acid was obtained from Merck and May & Baker Ltd. , respectively.et al. [et al. [Gelatin powders were solubilised in distilled water at concentrations between 4%\u20138% (w/v). 
Glycerol was added as a plasticizer at a constant glycerol/gelatin powder ratio of 2:5 (w/w). Corn oil (CO) was introduced into all gelatin solutions at a set concentration of 55.18% (w/w), in accordance with those recommendations proposed by Wang et al. . Dispers [et al. . All solScanning electron microscopy (SEM) was used to investigate the morphology of the films using JSM-5510 at 5.0 kV. Film samples were cut into appropriate-sized samples and mounted on stubs using double-sided adhesive tape. Prior to analysis, films were coated with gold to make the samples conductive. Subsequently, samples were observed at 1000\u00d7 magnification.Film thickness was measured using a hand-held digital micrometer . Measurements were carried out at different film locations and the mean thickness value was used to calculate the permeability of the films.L = 97.79, a = 5.18, b = 7.91) and Hunter L, a and b colour values were measured. Colour coordinates ranged from L = 0 (black) to L = 100 (white), \u2212a (greeness) to +a (redness), and \u2212b (blueness) to +b (yellowness).The surface colour of gelatin films was measured using a Minolta chromameter . Gelatin films were placed on the surface of a white standard plate , set at a wavelength of 500 nm. Four film specimens were taken from each film sample and cut into rectangular pieces (45 \u00d7 10 mm). The sample was placed on the inner side of a transparent plastic 10 mm cuvette and the absorbance measured.et al. [The opacities of films were calculated by the following equation according to the method described by Gontard et al. :opacityThe mechanical properties of films were evaluated and included; tensile strength (TS), elongation at break (EAB) and puncture strength (PS) using an Imperial 2500 Mecmesin force and torque tester according to the ASTM-D882 . TS of aWater vapour permeability (WVP) was calculated from the following equation:et al. [et al. [WVP of gelatin films was measured according to the WVP correction method of McHugh et al. which iset al. for dete [et al. . Distillet al. [3\u00b7\u03bcm/m2\u00b7day\u00b7kPa); S = slope indicating the transmission rate of oxygen; \u03b2 = permeability coefficient (4.776), a constant value as all treatments were conducted at the same conditions; A = surface area of the film (m2); T = thickness of the test film samples (\u03bcm); V = volume of the chamber (mL).Oxygen permeability was conducted according to the method developed by Papkovsky and Wanget al. . An oxygThe film solubility was determined according to the method of Gontard and Guilbert by trimm\u22121 with automatic signal gain collected in 32 scans with a resolution of 4 cm\u22121 that were rationed against a background spectrum. Four replicate spectra were obtained and the average spectrum was taken for analysis. The IR spectra for gelatin films were determined using a Varian 600-IR Series FTIR equipped with horizontal attenuated total reflectance (ATR) ZnSe cell at room temperature. The detector used was a Deuterated Tri-Glycine Sulfate (DTGS-KBR). Before film analysis, a background spectrum using a clean crystal cell was recorded. Films were placed onto the crystal cell and the cell was clamped into position on the FTIR spectrometer. FTIR spectra were recorded in the range of 500\u20134000 cmp < 0.05 were considered to be significant. 
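The opacity and permeability equations referred to above did not survive extraction; only the variable definitions remain. As a hedged illustration only, the forms commonly used with these methods (absorbance at 500 nm divided by film thickness for opacity, and the water vapour transmission rate scaled by thickness over the partial-pressure gradient for WVP) are sketched below with hypothetical numbers; they may not match the authors' exact expressions.

```python
# Hedged worked example with placeholder values -- not the paper's data and
# possibly not its exact equations, which were lost during text extraction.

abs_500 = 0.42              # film absorbance at 500 nm (hypothetical)
thickness_mm = 0.065        # film thickness in mm (hypothetical)
opacity = abs_500 / thickness_mm                 # absorbance units per mm
print(f"opacity ~ {opacity:.2f} A.U./mm")

wvtr = 95.0                 # water vapour transmission rate, g/(m2*day) (hypothetical)
thickness_m = 65e-6         # film thickness in m
delta_p = 1753.0            # water vapour partial-pressure gradient, Pa (hypothetical)
wvp = wvtr * thickness_m / delta_p               # g*m/(m2*day*Pa)
print(f"WVP ~ {wvp:.2e} g*m/(m2*day*Pa)")
```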
Differences between means were compared using Duncan\u2019s multiple range test method.Statistical analyses were performed using one-way analysis of the variance (ANOVA) and the LSD test (least significant difference) which showed the statistically different values. Statgraphics Centurion XV software programme was used with differences at The surface micrographs obtained by SEM are shown in p < 0.05) when gelatin concentrations of 6% and 8% were utilized the L values, with films derived from pork gelatin sources possessing the highest L value. Results also indicated that increasing the concentration of gelatin decreased (p < 0.05) the L value of composite films manufactured from gelatin derived from beef sources. Composite films derived from beef gelatin sources also had lower (p < 0.05) a values compared to those derived from pork and fish gelatin sources.The colour attributes of gelatin composite films derived from beef, pork and fish sources are presented in b values. However, composite films derived from beef gelatin sources possessed higher (p < 0.05) b values. This was most probably due to the base colour of beef gelatin powder which possessed a yellowish hue compared to the other gelatin powder sources. The results obtained also indicated a decrease in L and a values for all composite films when compared with gelatin films manufactured without the use of CO [b values for all gelatin composite films were higher compared to those gelatin films manufactured by Nur Hanani et al. [Additionally, composite films manufactured using pork- and fish-derived gelatins did not significantly show differences in se of CO , regardli et al. which dip < 0.05) opaqueness for all gelatin-based composite films affected by gelatin source, particularly at concentrations of 6% and 8%, for example, composite films manufactured from beef gelatin at 6% and 8% possessed the lowest TS values of all composite films manufactured. The increased concentration of gelatin also increased (p < 0.05) the TS values of composite films derived from pork and fish gelatin sources with fish gelatin employed at an 8% concentration showing the highest TS value (11.14 MPa). This trend was also observed for all composite films when the EAB test was conducted. Differences in EAB values for all composite films were significantly (p < 0.05) dependent on the concentration of gelatin used, especially for those derived from beef gelatin. With respect to PS, composite films manufactured from gelatin derived from beef sources had the lowest PS value, regardless of the gelatin concentration used. The results also showed that no significant differences in PS values were found between composite films manufactured from gelatin derived from pork and fish sources. Statistical analysis also showed that increasing the concentration of gelatin increased (p < 0.05) the PS values of all gelatin composite films. The mechanical properties of gelatin composite films are shown in When the composite films produced in this study were compared with pure gelatin films , resultsp < 0.05) WVP values than composite films derived from fish gelatin sources. The results also demonstrated that increasing gelatin concentration from 4% to 8% increased (p < 0.05) the WVP values of all gelatin-based composite films. According to Nur Hanani et al. [et al. [et al. [The WVP of gelatin-based composite films incorporating CO is shown in i et al. and Hoqu [et al. , some ge [et al. , it was [et al. .et al. [According to Bertan et al. , adding et al. ,6. 
The lp < 0.05) higher compared to films derived from beef and pork gelatin sources. Overall, increasing the concentration of gelatin in films increased (p < 0.05) OP. When the composite films manufactured in this study were compared with pure gelatin films [Oxygen permeability of gelatin composite films is presented in in films it was oet al. [The increased OP observed in composite films may in fact be related to the significant presence of CO (highest concentration of oil use in such films to date) and the hydrophobic nature of this lipid component, consequently facilitating oxygen transfer ,6,37,38.et al. , in spitp < 0.05) in film water solubility values for composite films derived from fish gelatin. However, these films had the lowest solubility values when compared to composite films derived from beef and pork gelatin sources, regardless of concentration used. At a gelatin concentration of 4%, no significant differences in the water solubility of composite films were determined between those formed from both beef and pork gelatin. When compared with the pure gelatin films [p < 0.05) in the water solubility of the films particularly for composite films derived from pork and fish gelatin sources. According to the scientific literature, Gontard et al. [et al. [et al. [The solubility of gelatin-based composite films is presented in in films , the addd et al. and Bert [et al. obtained [et al. observedHowever, contrary results were obtained for composite films manufactured from gelatin derived from beef sources. At a 6% gelatin concentration, films derived from beef gelatin sources had the highest water solubility. This was most likely due to the presence of lactic acid usage which was added during the preparation of the solution to control the pH at 10.54. During the preparation of film forming solutions, lactic acid had been used in the solutions of beef gelatin sources at 6% concentration to adjust the pH value. Lactic acid is hygroscopic, therefore it had attracted and held water molecules and encourage solubility of film in water.\u22121 correspond to amide-A and water molecules, amide I, amide II, and amide III, respectively [2 groups of glycine [FTIR analysis was carried out on all composite films . The absectively ,42,43. A glycine ,42,43.\u22121 is the most useful peak for infrared analysis of secondary protein structures like gelatin [et al. [et al. [\u22121 is characteristic for the coil structure of gelatin. Increasing gelatin concentrations, particularly for films manufactured from gelatin derived from pork and fish sources, shifted the amide I peak to a higher wavenumber. The change in amide I band for gelatin films suggested that CO might affect the helix coil structure of gelatin films as suggested by Jongjarenrak et al. [Among these absorption bands, the amide I band between 1600 and 1700 cm gelatin ,45. Jong [et al. and Yaki [et al. reportedk et al. . These a\u22121. The peaks around 2925 and 2854 cm\u20131 are related with the symmetric and asymmetric stretching vibration of the aliphatic group (CH2) [\u22121 also appeared and indicated strong C = O absorption from the oil. Composite films manufactured from gelatin derived from beef sources with 6% concentration displayed a higher wavenumber of 1116 cm\u22121 compared with other gelatin films. The peak shift is most likely due to the presence of lactic acid which was used during the preparation of the solution to standardize pH solution values. 
This spectrum supports previously described results, for example, the increased solubility of composite beef gelatin derived films due to the presence of lactic acid.The FTIR spectra of the films also showed some interactions occurred between gelatin and CO as indicated by the presence of a high peak in the frequency range between 2500 and 3100 cmup (CH2) ,47. Meanet al. [Experimental studies were conducted and based on the proposed model for optimum gelatin-based composite film manufacture as outlined by Wang et al. . This mo"} {"text": "A substantial proportion of the burden of depression arises from its recurrent nature. The risk of relapse after antidepressant medication (ADM) discontinuation is high but not uniform. Predictors of individual relapse risk after antidepressant discontinuation could help to guide treatment and mitigate the long-term course of depression. We conducted a systematic literature search in PubMed to identify relapse predictors using the search terms \u2018(depress* OR MDD*) AND (relapse* OR recurren*) AND (predict* OR risk) AND (discontinu* OR withdraw* OR maintenance OR maintain or continu*) AND (antidepress* OR medication OR drug)\u2019 for published studies until November 2014. Studies investigating predictors of relapse in patients aged between 18 and 65 years with a main diagnosis of major depressive disorder (MDD), who remitted from a depressive episode while treated with ADM and were followed up for at least 6 months to assess relapse after part of the sample discontinued their ADM, were included in the review. Although relevant information is present in many studies, only 13 studies based on nine separate samples investigated predictors for relapse after ADM discontinuation. There are multiple promising predictors, including markers of true treatment response and the number of prior episodes. However, the existing evidence is weak and there are no established, validated markers of individual relapse risk after antidepressant cessation. There is little evidence to guide discontinuation decisions in an individualized manner beyond overall recurrence risk. Thus, there is a pressing need to investigate neurobiological markers of individual relapse risk, focusing on treatment discontinuation. Notably, although such high-risk patients are typically prime targets for treatment, they would derive as little advantage from the medication as those who will not relapse either way.Hence, it is critical to establish predictors of the relapse risk for individual patients. In terms of clinical guidance, this raises two related questions. The first asks what the individual's relapse risk would be after discontinuation. Patients who have a very low risk of relapse after discontinuation have less scope for benefiting from further treatment AND (relapse* OR recurren*) AND (predict* OR risk) AND (discontinu* OR withdraw* OR maintenance OR maintain or continu*) AND (antidepress* OR medication OR drug)\u2019 for published studies until November 2014. The search resulted in 899 retrieved studies.a);f) and unclear cases discussed jointly. Authors of individual studies were not contacted.I.M.B. and Q.J.M.H. first screened all titles. 
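The PubMed search reported above can be reproduced programmatically. A minimal sketch using Biopython's Entrez utilities is shown below; the e-mail address is a placeholder required by NCBI, the date window is an assumption matching the stated November 2014 cut-off, and this is not the authors' retrieval script.

```python
# Hedged sketch: running the review's PubMed query via NCBI E-utilities.
from Bio import Entrez

Entrez.email = "reviewer@example.org"   # placeholder address, required by NCBI
query = ("(depress* OR MDD*) AND (relapse* OR recurren*) AND (predict* OR risk) "
         "AND (discontinu* OR withdraw* OR maintenance OR maintain or continu*) "
         "AND (antidepress* OR medication OR drug)")

handle = Entrez.esearch(db="pubmed", term=query,
                        datetype="pdat", mindate="1900/01/01", maxdate="2014/11/30",
                        retmax=1000)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "records found")   # the review reports 899 retrieved studies
```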
Abstracts of all titles judged potentially relevant by either author were then judged on inclusion criteria et al.Of note, the natural course of depressive episodes suggests that re-emergence of symptoms within 6\u20139 months might be due to relapses into the index episode while thereafter they indicate a new episode failed to reach significance in three studies , there was no discontinuation effect in the minority groups. However, power in these groups was low .One study investigated the effect of race and ethnicity . Although the drug\u2013placebo difference in prophylactic efficacy was higher for patients with recurrent depression (i.e. 31% v. 11%), the treatment\u00a0\u00d7\u00a0prior episode events interaction analysis did not significantly predict time to re-emergence (p\u00a0=\u00a00.25), possibly due to power issues .Of note, in the study by Keller et al. the risket al.et al.et al.Only two studies report examining an interaction of severity at onset with treatment, both failing to reach significance . Though these results are suggestive, neither study explicitly tested for interactions. McGrath et al. after milnacipram discontinuation. Analyses in a further study investigating specific residual symptoms, namely phobic anxiety, also failed to yield significant results and symptoms emerging with treatment . This effect was absent in patients with a non-specific response to treatment and in those with atypical vegetative symptoms. The interaction of treatment and neurovegetative symptoms alone was not reported by McGrath et al. . However, no interaction was reported and the effect did not survive when controlling for neurovegetative pattern , even though melancholic and neurovegetative patterns were uncorrelated. Melancholic subtype did not interact significantly with treatment in the study by McGrath et al. , but not in the continuation group, though no interaction was reported. Relapse rates were 28.5% v. 27.2% and 53.3% v. 40.7% for continuation and discontinuation groups with high and low anxiety, respectively. Fava et al. .Somatic pain ratings were found to interact significant with treatment in one study on duloxetine (Fava Other variables investigated only once and not found to interact with treatment are presented in et al.et al.et al.et al.et al.Many of the factors reported on here appear to have robust properties as predictors of relapse or recurrence risk independent of medication discontinuation: number of previous episodes (Berlanga et al.However, the relevant question for the individual patient really is about the differential impact of medication: will this particular person benefit from continuing treatment? This requires understanding and predicting not just overall relapse risk, but the specific consequences of medication discontinuation, i.e. effects within the separate arms. Although people with a high risk of relapse have more scope to benefit from maintenance treatment, they may still not respond and therefore not benefit. In addition, discontinuation itself is likely to have an effect on relapse, indicated by the increased risk in the early months following discontinuation, which is independent of length of prior treatment (Viguera Strikingly, our systematic review identified only 13 studies examining the latter, based on only nine datasets. 
This is thus very poorly understood even though data relevant to this question are routinely available in studies examining continuation and maintenance treatment of depression.et al.et al.et al.et al.et al.et al.et al.et al. (et al.Overall, the state of the field is insufficient to draw either positive or negative conclusions. Nevertheless, a few findings are noteworthy. First, guidelines typically recommend continuation treatment for around 4\u20139 months (Bauer .et al. found to. et al. found anet al. (et al. (et al. (Second, the reviewed studies individually failed to show clear effects of prior episode number. Two meta-analyses have addressed this, with discrepant results. While Viguera et al. found th (et al. came to (et al. found noet al.et al.et al.et al.et al.et al.Third, the studies have also not provided clear support for the influence of residual symptoms. This is again surprising. Residual symptom load affects relapse risk overall (Nierenberg et al. (et al. (et al.The studies by Stewart et al. and Nier (et al. both sho. et al., this rapost-hoc comparisons between subgroups correcting for multiple comparisons to identify subgroup difference that gave rise to the significant interaction term. Few studies followed this approach, typically either not reporting the interaction term, or, if the interaction term is reported to be significant, not doing sufficient post-hoc comparisons to establish what gave rise to the significant interaction term. Furthermore, corrections for multiple comparisons were not consistently reported.Methodologically, to establish the effect of risk reduction after discontinuation of a certain predictor and whether a subgroup identified by the predictor would not benefit from continuous ADM treatment, one would ideally first compute the significance level of the interaction of treatment and predictor and, only if significance is reached, do et al.et al.et al.et al.a priori to benefit from continuation treatment. If treatments are encapsulated, blinding may be partially broken if capsules are intentionally or unintentionally opened and the true drug/placebo identified. This may artificially increase drug\u2013placebo relapse rates and thereby reduce the impact of moderators. Andrews et al. (One possibility for the lack of significant findings is that antidepressant discontinuation has a far stronger effect on relapse rates than any other variable, and that only very large studies or meta/mega-analyses could identify the smaller moderating factors. The strength of the effects for instance of number of prior episodes reported in two meta-analyses (Viguera s et al. identifiet al.et al. (The present systematic review has some limitations. First, since we only used one database for our search and the database had not access to the full text of all relevant studies for the search, it is possible that we missed eligible studies. There may also be a positive bias due to reporting biases, as positive findings were more likely to be cited in the reviews of which we examined, and as they are more likely to form part of the title of the paper which we based our initial search on. We did not formally evaluate the quality of the included studies as the results were overall weak. A second drawback of the review is the fact that we excluded studies that investigated the effect of psychotherapy on relapse risk after antidepressant discontinuation as this would have confounded the findings given that psychotherapy is known to be effective in preventing relapses (e.g. 
Hollon .et al. . In factet al.et al.v. discontinuation. On the one hand, this is clearly due to the scarcity of studies that have attended to the problem. On the other hand, the strong effects on relapse rates of both antidepressants and predictors would have rendered strong interaction effects a distinct possibility. It is hence critical to revisit the existing datasets to re-examine this problem. In doing so, the field can now avail itself of novel techniques both from machine learning and computational psychiatry (Huys et al.et al.et al.et al.Maintenance treatment after a remission from depression, particularly after multiple episodes, is the standard of care. However, it is not a panacea. Patients discontinue for a variety of reasons including side effects; and there are indications that tachyphylaxis (Rothschild et al. (et al. (Finally, the list of predictors evaluated so far includes no neurobiological assessments. This should be addressed as such measurements hold great promise in predicting individual outcomes. Farb et al. , for ins (et al. found th"} {"text": "Have we not uncovered any new findings that would make exercise prescription more efficient and overcome many of the purported barriers to participation? Is there no evidence to help health professionals to adequately choose and design exercise program for specific outcomes?We are pleased to present this special issue. As we noted in our call for papers for this special issue,Physical inactivity is considered one of the most important public health problems of the 21st century and incrIt is important to recognize that the positive effects of exercise are null if people do not engage with it and if the programs engaged with do not produce improvements in the desired outcomes , 9. In tContributions from A. Wittke et al., W. D. N. Santos et al., and J. Steele et al. have considered applications of resistance training exercise in both healthy and diseased populations (breast cancer survivors). They have provided insight into the applications of resistance training (application of progressive high effort), its effects in combination with supplementation (protein), and both the positive outcomes and risk of adverse effects in a clinical population (breast cancer survivors).Work from C. Ranucci et al., T. Dalager et al., and L. Fox et al. have also offered insights into \u201creal world\u201d multidisciplinary approaches to exercise. C. Ranucci et al. report the positive effects of a family-based multidisciplinary approach to improving health status, nutrition habits, and physical performance in overweight and obese children or adolescents. T. Dalager et al. showed the implementation of \u201cIntelligent Physical Exercise Training\u201d compared with moderate intensity physical activity on a workplace setting upon musculoskeletal health. Further, L. Fox et al. provide important \u201creal world evidence\u201d on quantitative and qualitative data feedback from men with prostate cancer who had undergone a structured exercise intervention.Lastly, S. C. E. Schmidt et al. report on the results of an important 18-year longitudinal study examining the effects of physical activity types, fitness, and health in adults. 
They report key findings regarding the role of type of physical activity upon fitness and health, as well as the impact of confounding sociodemographic factors.We hope that the contributions from authors in this special issue serve to aid in enhancing specific exercise prescription in a range of populations and that they also stimulate further interest and work in advancing our understanding of exercise in both health and disease.Paulo GentilFabr\u00edcio Boscolo Del VecchioJames Steele"} {"text": "In this paper, we study the complete convergence and complete moment convergence for weighted sums of extended negatively dependent (END) random variables under sub-linear expectations space with the condition of Additivity has been generally regarded as a fairly natural assumption, so the classical probability theorems have always been considered under additive probabilities and the linear expectations. However, many uncertain phenomena do not satisfy this assumption. So Peng \u20135 introdSub-linear expectations generate lots of interesting properties which are unlike those in linear expectations, and the issues in sub-linear expectations are more challenging, so lots of scholars have attached importance to them. Numbers of results have been established, for example, Peng \u20135 gainedby Zhang \u20139. He alby Zhang also resby Zhang studied et al. [et al. [et al. [et al. [The complete convergence has a relatively complete development in probability limit theory. The notion of complete convergence was raised by Hsu and Robbins , and Choet al. , Wang et [et al. and Wu a [et al. , and so [et al. gained g [et al. research [et al. under suIn the next section, we generally introduce some basic notations and concepts, related properties under sub-linear expectations and preliminary lemmas that are useful to prove the main theorems. In Section\u00a0The study of this paper uses the framework and notations which are established by Peng \u20135. So, wn-dimensional random vectors defined severally in the sub-linear expectation space Assume that a space (ii)(Extended negatively dependent) A sequence of random variables \u03b5\u0302.It is distinct that if C is on behalf of a generic positive constant which may differ from one place to another. Let n, n.In the following, let The following three lemmas are needed in the proofs of our theorems.is a slow varying function if and only ifwhereandSupposeandis a slow varying function. (i)Then, for(ii)Ifthen for anyand(i) By Lemma equality , and \\do(ii) By the proof of (i), we can imply that for any . By , we can that by , \\documeCase 2: By , we can n large enough,By and \\docBy Definition n))2. By , we have that by \\documen}I32. By , we can This finishes the proof of Theorem Without loss of generality, we can assume that Case 1: n, there exists k such that Note and 4.44.4, whicproof of . Next, w thus by , 4.13),,4.13\\doc that by , \\documeCase 2: n large enough,By We use the same notations as those in Theorem her with yields t"} {"text": "A multi-model X-ray imaging approach was implemented to probe the interaction of nanomaterials with a mammalian cell in three dimensions. With further developments, this approach could have an impact on nanomedicine and nanotoxicology. The scope of their application has also broadened dramatically. With this comes an increase in the possibility of human exposure to such materials and several of their properties give cause for concern 22 nanoparticles, a promising antitumour agent. 
Dual-energy scanning transmission X-ray microscopy (STXM) was used to image a whole macrophage at different sample orientations , they acquired two tomographic tilt series by rotating the sample around a single axis. Each tilt series consists of 46 images with a tilt range of \u00b179.4\u00b0.In this issue of et al., 2005i.e. the electron density of the sample cannot be negative) and a loose support . In reciprocal space, the measured data are used as the constraint in each iteration. After several hundreds of iterations, the algorithm converges to a global solution that is consistent with the measured data and physical constraints. It has been experimentally demonstrated that EST not only produces superior results to other traditional tomographic algorithms, but also is broadly applicable to three-dimensional imaging of biological and physical samples, ranging from breast tumors and cellular structures to single atoms in materials (Miao a) shows the three-dimensional volume rendering of the reconstructed macrophage with the Gd@C82(OH)22 nanoparticles in dark red, the nucleus in brown and different types of lysosomes in yellow. The three-dimensional reconstruction provides more faithful cellular structure information than the two-dimensional projection image . Furthermore, by subtracting the two reconstructions measured above and below the Gd absorption edge, the three-dimensional distribution of the nanoparticles inside the macrophage was obtained . They found that nanoparticles were mainly aggregated in lysosomes and no nanoparticles were distributed in the nucleus. These observations were further corroborated by the X-ray fluorescence microscopy image of the same macrophage .By combining dual-energy STXM and EST, Jiang and collaborators reconstructed the macrophage structure with a three-dimensional resolution of 75\u201380\u2005nm. Fig. 1ge Fig. 1b. Furthed Fig. 1c. They ge Fig. 1d.et al., 1999et al., 2007et al., 2008et al., 2017et al., 1995et al., 2015et al., 2017et al., 2017et al., 2017et al., 2015Looking forward, several additional developments could make this multi-model imaging approach widely applicable in nanomedicine and nanotoxicology. First, coherent diffractive imaging (CDI) has to be incorporated to improve the spatial resolution (Miao"} {"text": "Even though mental disorders represent a major public health problem for women and respective children, there remains a lack of epidemiological longitudinal studies to assess the psychological status of women throughout pregnancy and later in life. This epidemiological cohort study assessed the relationship between mental disorders of 409 Brazilian women in pregnancy and 5\u20138 years after delivery.per capita income, family size, work, marital status and body mass index.The women were followed from 1997 to 2000 at 17 health services, and subsequently from 2004 to 2006 at their homes. Mental disorders were investigated by the Perceived Stress Scale-PSS, General Health Questionnaire-GHQ and State-Trait Anxiety Inventories-STAI. The relationship between scores of the PSS, GHQ and STAI 5\u20138 years after delivery and in pregnancy was assessed by multivariate linear regression analysis, controlling for the following confounders: maternal age, education, per capita income (adj. R2 varied from 0.15 to 0.37). PSS, GHQ and STAI scores in the 3rd trimester of pregnancy were positively associated with scores of the PSS, GHQ and STAI in the 1st and 2nd trimesters of pregnancy (adj. 
R2 varied from 0.31 to 0.65).Scores of the PSS, GHQ and STAI 5\u20138 years after delivery were positively associated with scores of the PSS, GHQ and STAI in the three trimesters of pregnancy, and inversely associated with maternal age and The results of this study reinforce the urgency to integrate mental health screening into routine primary care for pregnant and postpartum women. Women with chronic infectious diseases, metabolic diseases, cardiopathy, mental diseases, hypertension/pre-eclampsia/eclampsia, vaginal bleeding and multiple deliveries were excluded from the study. Details of the cohort have been published previously , with a precision of 100\u00a0g. Height was measured with a SECA\u00ae wall-mounted stadiometer , with a precision of 0.1\u00a0cm. The anthropometric measurements were performed according to the recommendations of Jelliffe & Jelliffe and classified according to WHO recommendations WHO, . The womJelliffe .et al. and 5\u20138 years after delivery, using versions of the Perceived Stress Scale (PSS), the General Health Questionnaire (GHQ) and the State-Trait Anxiety Inventories (STAI) validated in Brazil, respectively, by Luft et al. , Mari & et al. and Biag was used for storage and statistical analysis of the data. The relationship between scores of the PSS, GHQ and STAI 5\u20138 years postpartum (dependent variables) and in the three trimesters of pregnancy (independent variables) was assessed by multivariate linear regression analysis, using the backward stepwise selection method. The relationship between scores of the PSS, GHQ and STAI in the 3rd trimester of pregnancy (dependent variables) and in the 1st and 2nd trimesters (independent variables) was also assessed. The following confounding factors were included in the models: maternal age, education, n\u00a0=\u00a0745) in the first phase of the study, and the ones who participated in the original cohort (n\u00a0=\u00a0865). Comparison of the characteristics between the mothers included in the cohort and those who did not conclude the study showed no significant differences.There were losses of 33.6% and 42.7% of the sample, respectively, considering all the mothers who were located (per capita income (adj. R2\u00a0=\u00a00.21). Scores of the GHQ 5\u20138 years after delivery were positively associated with scores of the GHQ in the three trimesters of pregnancy, and inversely associated with maternal age and per capita income (adj. R2\u00a0=\u00a00.18). Scores of the SAI 5\u20138 years after delivery were positively associated with scores of the SAI in the 2nd and 3rd trimesters of pregnancy, and inversely associated with per capita income (adj. R2\u00a0=\u00a00.15). Scores of the TAI 5\u20138 years after delivery were positively associated with scores of the TAI in the 1st and 2nd trimesters of pregnancy, and inversely associated with maternal age and per capita income (adj. R2\u00a0=\u00a00.37).R2 varied from 0.31 to 0.65).per capita income and mental disorders 5\u20138 years after delivery. The other socioeconomic and demographic factors investigated did not show statistically significant results. BMI, an indicator of the nutritional status of the women, was not associated with mental disorders; though Nagl et al. in the three trimesters of pregnancy, lower age and l et al. 
and Moly 5\u20138 years after delivery compared with the mean scores in pregnancy, while the GHQ, SAI and TAI scores maintained the mean values in the four different periods investigated, indicating higher levels of perceived stress 5\u20138 years after delivery than in pregnancy. Schmied et al. (et al. (et al. (Comparing the mean scores of the women in the highest PSS tertile with the mean scores of the whole population, and the mean scores of the populations studied by Cohen et al. , we concd et al. observed (et al. did not (et al. found thet al.et al.per capita income (Abdollahi et al.et al.et al.et al. (et al. (et al. (Similar to other studies in the literature, lower age (Bottino .et al. , materna (et al. referred (et al. . In fact (et al. showed tet al. (et al. (et al. (Abdollahi et al. identifi (et al. conclude (et al. , using aet al. (A limitation of this study was the lack of assessment of mental disorders in the immediate post-partum period. Even though mental disorders had not been investigated in the immediate postpartum period, it was probably a problem for those pregnant women considering the associations of the PSS, GHQ and STAI scores between the 3rd and 1st and 2nd trimesters of pregnancy. Witt et al. referredper capita income are associated with mental disorders 5\u20138 years after delivery.Mental disorders in the three trimesters of pregnancy, lower age and Referral of young, low-income pregnant women with mental disorders, particularly anxiety and depression, should be encouraged to psychological or psychiatric treatment. Finally, we reinforce the urgency to integrate mental health screening into routine primary care for pregnant and postpartum women."} {"text": "Magnetic nanoparticles are a highly valuable substrate for the attachment of homogeneous inorganic and organic containing catalysts. This review deals with the very recent main advances in the development of various nanocatalytic systems by the immobilisation of homogeneous catalysts onto magnetic nanoparticles. We discuss magnetic core shell nanostructures as substrates for catalyst immobilisation. Then we consider magnetic nanoparticles bound to inorganic catalytic mesoporous structures as well as metal organic frameworks. Binding of catalytically active small organic molecules and polymers are also reviewed. After that we briefly deliberate on the binding of enzymes to magnetic nanocomposites and the corresponding enzymatic catalysis. Finally, we draw conclusions and present a future outlook for the further development of new catalytic systems which are immobilised onto magnetic nanoparticles. Catalysts play a very important role in modern science and technology as they improve reaction yields, reduce temperatures of chemical processes and promote specific enantioselectivity in asymmetric synthesis. There are two main types of catalysis, heterogeneous, where the catalyst is in the solid phase with the reaction occurring on the surface and homogeneous, where the catalyst is in the same phase as the reactants . Both prThe bridge between heterogeneous and homogeneous catalysts can be achieved through the use of nanoparticles . NanoparOne particularly useful and important group of nanoparticles is magnetic nanoparticles (MNPs). 
These nanoparticles may be composed of a series of materials such as metals like cobalt and nickel, alloys like iron/platinum and metal oxides like iron oxides , and ferIn addition to all the above, magnetic nanoparticles can serve as a highly useful catalyst support enabling immobilization and magnetic recovery of the catalyst . In the 2 [Several types of magnetic nanostructures have developed for the use of catalysis ,18, incl2 as well 2 . These n2 and phot2 . In addi2 .sec-alcohols with synthetically useful selectivity under ambient temperature, low catalyst loading, and using acetic anhydride as the acylating agent. The use of this catalyst enabled the isolation of resolved alcohols with good to excellent enantiomeric excess. Most importantly, the interaction between the nanoparticle core and the organocatalyst unit results in a chiral heterogeneous catalyst which is recyclable to an unprecedented extent\u201432 times consecutive cycles\u2014while retaining high activity and enantiomeric selectivity profiles [Another type of catalyst which is of interest for organic synthesis involves the use of organic molecules. These molecules show a large degree of specificity for their reactions and may allow a more successful reaction than conventional chemistry. There are two types which may potentially be used for the purposes of organic reactions, there are metal complexes which may be used to generate chemical reactions through oxidation state changes in the metal centre as well as organocatalysts which contain no metal centre and utilise conformational changes to initiate the reaction. While the metal centres may be in certain instances toxic the organocatalysts are not, but may be synthetically complex to produce, and as a result in both cases it is also desirable to retain these catalysts after use for recycling. For example, it was reported that \u201chypernucleophilic\u201d 4-N,N-dimethylaminopyridine (DMAP) catalyst immobilised on magnetite nanoparticles was very active and recyclable in acyl transfer processes, Stieglitz rearrangements and hydroalkoxylation reactions at loadings between 1% and 10% and could be recovered and recycled over 30 times without any discernible loss of activity . In anotprofiles .Overall, the binding of catalysts to magnetic nanoparticles allows the retention of these materials after the end of the reaction for reuse. This will prevent the necessity of additional purification techniques to remove the catalyst from the waste stream making it a \u201cgreener\u201d catalyst comparing to previous approaches. This area has deserved a lot of attention and there have been several good reviews on the subject of catalysts immobilised onto magnetic nanoparticles ,27,28,30et al. [3O4 nanoparticles can be reused up to five times with good yield. Peric\u00e0s et al. [et al. [Over the last few years significant progress was made in the development of new inorganic catalytic systems which are immobilized onto magnetic nano-carriers. Normally, to protect magnetic core and retain magnetic properties the core is coated with some non-magnetic relatively inert shell such as silica. The silica shell is very easy to be functionalized and good for binding of various catalytic species including transition metal complexes. As reported by Astruc et al. ruthenius et al. have als [et al. reportedet al. [Another catalyst was reported by Garc\u00eda-Garrido et al. which inet al. [Esmaeilpour et al. coated met al. utilizedet al. [Zolfigol et al. developeet al. [Kim et al. publisheet al. 
reported the immobilization of a palladium species to the surface of cobalt ferrite nanoparticles for the use in Suzuki coupling [et al. [3O4 nanoparticle and then binding to a palladium metal centre. This catalyst was used for the conversion of an alkyne to an alkene with both high yield and selectivity (both >90%).Phan coupling and for coupling . It was [et al. involvinet al. [Nazifi et al. functionet al. [It was reported by Nakagaki et al. that maget al. [It was also demonstrated by Naeimi et al. that broet al. [A silica shell was also utilised by Yoon-Sik Lee et al. for the et al. [et al. [et al. [3PW12O40 to the surface of silica coated magnetic nanoparticles providing a new catalyst system which has shown good activity in the esterification of free acids in methanol.However, it is not necessary for the silica shell to be inert and this might contribute to the catalysis. For example, it was demonstrated that with limited chemical modification a silica shell can also act as a mesoporous catalyst. Yeung et al. produced [et al. reported [et al. bound thet al. [et al. [3O4 nanoparticles which had been stabilized by poly(N-vinylimidazole). The addition of copper (II) to this polymer iron oxide nanocomposite produced a species that exhibited oxidative catalysis of 2,6-dimethylphenol in water. These species were extracted from the reaction medium with a recovery rate of over 95% and conversion rates of as high as 79.2% with supplementary catalysis.The coating of nanoparticles with polymer shells was also explored in recent years. For example, an interesting catalytic system was reported by Heinze et al. . They us [et al. reportedet al. [A very interesting approach was reported by Stark et al. . The reset al. [Commercially available polymer encapsulated magnetic beads can also be used to produce new catalytic systems. For example, Kanoh et al. reportedet al. [3O4@ZIF-8) which were used as catalytic particles for the Knoevenagel reaction in a capillary flow reactor.Remarkable zeolitic framework coated magnetic nanocomposites have been used as catalysts for a flow reaction system. Yeung et al. , designeet al. [Graphitic carbon coated magnetic nanoparticles are also very good materials for further functionalisation in catalysis applications. A highly innovative approach involving the binding of pyrene-tagged dendritic systems to carboet al. ,55. The The same group has reported the immobilization of hybrid palladium species on a magnetic carbon coated Co nanoparticles using ROM polymerization. This approach has resulted in generation of a magnetically recyclable palladium catalyst for Suzuki-Miyaura cross-coupling reactions . FinallyAs we mentioned, there is a considerable interest in the immobilization of organic catalyst molecules onto heterogeneous substrates. In this case magnetic nanoparticles offer a number of advantages as a substrate enabling the retrieval and recycling of organic catalysts which are normally impossible to remove from the reaction mixture.et al. [3O4 nanoparticles using a triethoxysilane derivative as a linker. The nanocatalysts were applied for the synthesis of biologically active pyrazolophthalazinyl spirooxindoles in high yield . For exaet al. bound megh yield . The autet al. [et al. [Ponti et al. bound an [et al. the chiret al. [3O4 nanoparticles with an organosilicon linker. 
This nanocomposite effectively catalyzed the production of \u03b2-phosphomalonates from diethyl phosphite and \u03b1, \u03b2-unsaturated malonates with the added benefit of catalytic action without the danger of noxious fumes from pyridine. The catalyst could also be easily recovered using a magnet and reused at least 10 times without substantial degradation in the activity.Parizi et al. reportedet al. [In another work Moradi et al. reportedet al. [An organosilicon linker was also used by Karimi et al. to bind et al. [Kiasat et al. used a cet al. [Functional organic polymer coating is also good way to protect magnetic core and functionalise nanoparticles with organic catalytic species. Pericas et al. describeet al. .et al. [Very interesting magnetic catalytic exchange resins were prepared by Reiser et al. using poet al. .et al. [Another method for polymer coating of magnetic nanoparticles was reported by Seidi et al. who usedet al. [There are also reports on direct binding of organic catalytic species to the surface of magnetic iron oxide nanoparticles. For example, functionalised with 4-piperadinecarboxylic acid, magnetite nanoparticles were produced directly via base catalysed precipitation as reporet al. . These net al. [et al. [Initially, Reiser et al. developeet al. . More re [et al. also repet al. [sec-alcohols, the conjugate addition of dimethyl malonate to a nitro-olefin and the desymmetrisation of meso-anhydrides. Despite no physical deterioration of the heterogeneous catalysts being detected on analysis after multiple recycles, in the cases of both the conjugate addition to nitroolefins and the desymmetrisation of meso anhydrides, significant levels of background catalysis by the nanoparticles in the absence of the organocatalyst was detected. This unexpected result clearly demonstrates that magnetite nanoparticle substrates can actually participate in catalytic processes [However, as it was shown by Connon et al. , the uncrocesses .Since nanoparticles are close in size to biomolecules it is likely that they may exhibit some size related properties close to that of the molecules themselves. Enzymes are very active biomolecules, which can serve as highly specific and efficient catalysts. Many enzyme nanomimics which have been produced from nanoparticles showed specific catalytic activities . Enzymeset al. [Candida rugosa stabilised with surfactant. The resulting nanocomposites were then used for the multi-step synthesis of ethyl-isovalerate. Conversion rates of near 80% were reported. It was observed that across a number of catalytic cycles the bound enzyme retained greater catalytic activity than the free enzyme.In one recent work magnetic nanoparticles were produced via precipitation in the presence of gum Arabic by Huizhou et al. . To theset al. [Chitosan stabilised magnetic nanoparticles were used by Kow-Jen Duan et al. for the et al. [Vali et al. developeet al. [In similar work by Kim et al. epoxide et al. [et al. [Yarrowia lipolytica to carbon nanotubes further functionalised with iron oxide nanoparticles. These nanocomposites were shown to be highly effective for the chiral resolution of -1-Phenyl ethanol in heptane. They reported the resistance of the nanocomposites to sonication for up to 30 min.Nanocomposites of carbon nanotubes, amyloglucosidase, and magnetic iron oxides were reported by Goh et al. that wer [et al. which inAs we can see from the review above, there are some interesting new developments in catalytic systems immobilised on magnetic nanoparticles. 
Still there is a clear domination of the standard approach involving silica coating of a magnetic iron oxide core followed by functionalisation using appropriate alkoxysilane derivatives. However, very important steps have also been made in the development of various functional polymeric coatings. In addition to mesoporous silica shells, significant progress has also been achieved in the preparation of new mesoporous zeolite-like and MOF Another interesting development is the immobilization of polydentate dendritic ligands ,55 on maThe report on the background catalytic activity of uncoated magnetite nanoparticles serves aAnother important aspect is the development of new magnetically immobilised enzymatic catalysts. Despite the number of challenges and difficulties in this field, there is great potential, particularly for biopharmaceutical applications of enzyme-based catalysts.Finally, we think that the hyperthermic capabilities of magnetic substrates should be explored in the near future. This field is still very poorly developed despite the great potential opportunities to combine catalysis with selective local heating of reagents at the substrate. This should provide significant cost and energy savings, particularly for high-temperature catalytic reactions."} {"text": "Sub-Saharan Africa and other resource-limited settings (RLS) bear the greatest burden of the HIV epidemic globally. Advantageously, the expanding access to antiretroviral therapy (ART) has resulted in increased survival of HIV-infected individuals in the last 2 decades. Data from resource-rich settings provide evidence of increased risk of comorbid conditions such as osteoporosis and fragility fractures among HIV-infected populations. We provide the first review of published and presented data synthesizing the current state of knowledge on bone health and HIV in RLS.With few exceptions, we found a high prevalence of low bone mineral density (BMD) and hypovitaminosis D among HIV-infected populations in both RLS and resource-rich settings. Although most recognized risk factors for bone loss are similar across settings, in certain RLS there is a high prevalence of both non-HIV-specific risk factors and HIV-specific risk factors, including advanced HIV disease and widespread use of ART, including tenofovir disoproxil fumarate, a non-BMD-sparing ART. Of great concern, we found neither published data on the effect of tenofovir disoproxil fumarate initiation on BMD nor any data on the incidence and prevalence of fractures among HIV-infected populations in RLS.To date, the prevalence and sequelae of metabolic bone diseases in RLS are poorly described. This review highlights important gaps in our knowledge about HIV-associated bone health comorbidities in RLS. This creates an urgent need for targeted research that can inform HIV care and management guidelines in RLS.http://links.lww.com/COHA/A9. Resource-limited settings (RLS) that constitute low- and middle-income countries continueet al. involving 884 HIV-infected individuals and 654 controls estimated the prevalence of low BMD among HIV-infected individuals to be as high as 67%, 15% of whom had osteoporosis. The magnitude of low BMD was 6.4 times greater and that of osteoporosis 3.7 times greater than in HIV-uninfected controls [The World Health Organization has categorized low BMD into osteopenia and osteoporosis.
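For readers unfamiliar with densitometric scoring, the WHO thresholds cited next are expressed as standardized distances from a reference mean. The following is the standard formulation of these scores rather than an equation given in the review itself, and the worked numbers are purely hypothetical:

\[
T\text{-score} = \frac{\mathrm{BMD}_{\text{patient}} - \overline{\mathrm{BMD}}_{\text{young adult}}}{\mathrm{SD}_{\text{young adult}}},
\qquad
Z\text{-score} = \frac{\mathrm{BMD}_{\text{patient}} - \overline{\mathrm{BMD}}_{\text{age/sex matched}}}{\mathrm{SD}_{\text{age/sex matched}}}
\]

For example, a measured femoral-neck BMD of 0.70 g/cm² against a hypothetical young-adult reference of 0.94 g/cm² with an SD of 0.12 g/cm² gives T = (0.70 - 0.94)/0.12 = -2.0, which falls in the osteopenic range defined below.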
In postmenopausal women and men 50 years and above, osteoporosis is defined as a T-score of 2.5 SD or more below the young adult mean value (T-score ≤ −2.5), whereas osteopenia is defined as a T-score between 1 and 2.5 SD below the young adult mean value (−2.5 < T-score < −1). Premenopausal women, men below 50 years or children who have a BMD Z-score of 2.0 SD or more below the mean of the age- and sex-matched population (Z-score ≤ −2.0) are classified as having low bone mass. In the controls –25. Owincontrols –35. Amoncontrols –39. In acontrols . Among icontrols , regardlcontrols .et al.[In RLS with a disproportionately high burden of HIV and background nutritional deficiencies , known ret al. from Souet al.. Anotheret al. which thet al.. Similaret al.▪,57–60 wet al.–63.There are very limited data in any RLS regarding BMD longitudinal changes among HIV-infected persons. In a 48-week, multisite, second-line trial in South Africa, India, Thailand, Malaysia, and Argentina , HIV-infet al.[A strong body of evidence from longitudinal data in RRS shows that among the different antiretroviral drugs, the potential effect of TDF on bone health is particularly concerning –70. In Aet al. observedet al.. More inet al.. Though et al.–87. TDF et al. and posset al.,80,86–94et al.[P < 0.001).In RLS, the two WHO-recommended first-line ART regimens for adults and children above 15 years contain TDF: TDF, lamivudine (3TC) and EFV, or TDF, emtricitabine (FTC) and EFV, which exposes many HIV-infected individuals to the negative impact of TDF on bone health TC and EF. Converset al. reportedet al.[et al.[et al.[Worldwide, it is estimated that more than one billion people are characterized as having vitamin D deficiency (<20 ng/ml) or insufficiency (<30 ng/ml), regardless of the economic setting. According to a recent review by Mansueto et al. the prevet al.,97–102 aet al.,103–109 et al. and the et al., Spain [0 ng/ml, et al., and Isret al. to skin  ng/ml reet al.,110–113.et al. which reet al.,114. In et al.. In viewet al.,105. In l.[et al. found thl.[et al.. Though l.[et al. among HIet al.[In RLS, the HIV burden among young adult women is high . Women aet al. showed aWith the scale-up of ART, more HIV-infected children are surviving into adolescence. In 2012, an estimated 2.1 million adolescents (10–19 years) were living with HIV in RLS , constitIn 2015, 11 out of the 16 million people receiving ART globally were in the WHO Africa region alone . However cost-effective and feasible strategies to prevent osteoporosis for both HIV-infected and noninfected populations; identification of simple, low-cost tools to detect early osteopenia; strategies to minimize or avoid ARV-associated bone loss such as ART choice, dose optimization, and ARV switching; and research among HIV-infected populations focusing on women of reproductive age and special populations such as perinatally infected children and adolescents. With more people starting ART and liviTo successfully conduct research addressing the above-mentioned gaps in bone health comorbidities in RLS, there is a need to work through several existing research networks, either regionally or globally. This will ensure effective design and quality implementation approaches are employed. Importantly, involving key policy makers both domestically and regionally upfront will make the future policy implementation more successful. The review reveals overlapping prevalence of low BMD in RLS and RRS, with a generally higher prevalence of low BMD in RLS overall compared to RRS.
We highlight important gaps in our knowledge about HIV-associated bone health comorbidities in RLS. In particular, data on bone health are scarce and derive mainly from cross-sectional studies, underscoring the urgent need for research that can inform management guidelines for metabolic bone complications in RLS.F.K.M. would like to thank Professors Todd T Brown and Mary Glenn Fowler, Dr Francis Kiweewa, MU-JHU Research Collaboration, Consortium for Advanced Research and Training in Africa, Makerere University School of Public Health and University of the Witwatersrand.F.K.M. has received an R01 grant from the National Institute of Allergy and Infectious Diseases of the National Institutes of Health (NIH) under Award Number R01AI118332 for bone health-related work as the Principal Investigator, and support as a site investigator on NIH-funded microbicide trials network protocols. K.R. has received support as a Senior Research Scholar from the Thailand Research Fund (TRF) for his work.There are no conflicts of interest.Papers of particular interest, published within the annual period of review, have been highlighted as: ▪ of special interest; ▪▪ of outstanding interest."} {"text": "Hearing loss is considered the most common sensory disorder in the human population, occurring at all ages worldwide, and sensorineural hearing loss (SNHL) is the most common type of hearing loss. Various insults can induce SNHL, including acoustic trauma, ear and brain tumors, aging, noise exposure, or ototoxic medications or chemicals. SNHL is caused by irreversible loss of sensory hair cells and/or degeneration of spiral ganglion neurons. SNHL is not yet curable due to the lack of spontaneous regeneration of hair cells and spiral ganglion neurons in the cochlea. In recent years, exciting animal studies on signaling pathway manipulation, gene therapy, and stem cell transplantation as well as pharmaceutical agents have demonstrated that hair cells and spiral ganglion neurons could be triggered to regenerate, suggesting that hearing loss might eventually be curable. Neural plasticity is the key feature for hair cells and spiral ganglion neurons, and it is especially important for newly regenerated hair cells and spiral ganglion neurons to be functionally integrated into auditory pathways. In this special issue on neural plasticity of hair cells and spiral ganglion neurons, we are pleased to present a series of articles that represent the latest advances in hair cell development, protection and regeneration, spiral ganglion neuron development and protection, and inherited hearing loss.Hair Cell Development. J. Hang et al. report that the onset time of hearing may require the expression of prestin and is determined by the functional maturation of outer hair cells. H. Nie et al. (“Plasma Membrane Targeting of Protocadherin 15 Is Regulated by the Golgi-Associated Chaperone Protein PIST”) report that PIST regulates the intracellular trafficking and membrane targeting of the tip-link proteins CDH23 and PCDH15 in hair cells. Hair Cell Damage and Hair Cell Protection. X. Liu et al. report that enhanced cell-cell adhesion and activation of the β-catenin-related canonical Wnt signaling pathway may play a role in the protection of the cochlea from further damage. M. Fu et al. (“The Effects of Urethane on Rat Outer Hair Cells”) report that urethane anesthesia is expected to decrease the responses of outer hair cells, whereas the frequency selectivity of the cochlea remains unchanged. X.
Fu et al. (\u201cLoss of Myh14 Increases Susceptibility to Noise-Induced Hearing loss in CBA/CaJ Mice\u201d) report that Myh14 may play a beneficial role in the protection of the cochlea after acoustic overstimulation in CBA/CaJ mice. L. Shi et al. (\u201cCochlear Synaptopathy and Noise-Induced Hidden Hearing Loss\u201d) provide a brief review to address several critical issues related to NIHHL: mechanisms of noise-induced synaptic damage, reversibility of synaptic damage, functional deficits in NIHHL animal models, evidence of NIHHL in human subjects, and peripheral and central contributions of NIHHL. Hair Cell Regeneration. Y. Shu et al. report that adenovirus vectors are capable of efficiently and specifically transfecting different cell types in the mammalian cochlea and therefore provide useful tools to study inner ear gene functions and evaluate gene therapies for treating hearing loss and vestibular dysfunction. X. Lu et al. review recent research progress in hair cell regeneration, synaptic plasticity, and reinnervation of new regenerated hair cells in the mammalian cochlea. C. Wang et al. report the capability of a behavioral assay in noninvasively evaluating hair cell functions of fish larvae and its potential as a high-throughput screening tool for auditory-related gene and drug discovery. Spiral Ganglion Neuron Development and Protection. P. Chen et al. report that NLRP3 may have specific functions in spiral ganglion neurons that are altered in both syndromic and nonsyndromic sensorineural deafness. X. Bai et al. investigated the toxicity of glutamate in spiral ganglion neurons and they found that the protection of edaravone is related to the PI3K pathway and Bcl-2 protein family. Inherited Hearing Loss. X. Gu et al. identified a missense mutation in the LCCL domain of COCH that is absent in 100 normal hearing controls and cosegregated with impaired hearing. J. Chen et al. (\u201cIdentification of a Novel ENU-Induced Mutation in Mouse Tbx1 Linked to Human DiGeorge Syndrome\u201d) confirm the pathogenic basis of Tbx1 in DGS, point out the crucial role of DNA binding activity of Tbx1 for the ear function, and provide additional animal model for studying mechanisms underlying the DGS disease. Y. Guo et al. report that, for Mandarin Chinese, a tonal language, the temporal E cues of Frequency Region 1 (80\u2013502\u2009Hz) and Region 3 contribute more to the intelligence of sentence recognition than other regions, particularly the region of 80\u2013502\u2009Hz, which contains fundamental frequency (F0) information. L. He et al. report that mutations in POU4F3 are a relatively common cause (3/16) for ADNSHL in Chinese Hans, which should be routinely screened in such cases during genetic testing. C. Zhang et al. report the first nonsense mutation of POU4F3 associated with progressive hearing loss and explored the possible underlying mechanism.The articles in this special issue provide valuable insights into development, protection, and regeneration of hair cells and spiral ganglion neurons. By highlighting findings in these articles, we hope this special issue will provide not only new perspectives for future directions in hearing research but also potential therapeutic strategies for treating hearing loss. Renjie Chai Geng-Lin Li Jian Wang Jing Zou"} {"text": "Burkholderia pseudomallei have been previously reported. Therefore the statement \u201cAlthough melioidosis involves most organs, parotid involvement is rare. 
To the best of our knowledge, confirmed melioidosis parotitis was not previously reported except for a case that occurred after systemic melioidosis [6]\u201d is inaccurate. It should read :\u201cPediatric suppurative parotitis caused by B. pseudomallei has already been reported in Thailand by Dance et al. [B. pseudomallei parotitis in Hainan, China.\u201dSince the publication of our article , we havee et al. , who dese et al. , who obs"} {"text": "Plants suffer osmotic and ionic stress under high salinity due to the salts accumulated at the outside of roots and those accumulated at the inside of the plant cells, respectively. Mechanisms of salinity tolerance in plants have been extensively studied and in the recent years these studies focus on the function of key enzymes and plant morphological traits. Here, we provide an updated overview of salt tolerant mechanisms in glycophytes with a particular interest in rice (Oryza sativa) plants. Protective mechanisms that prevent water loss due to the increased osmotic pressure, the development of Na+ toxicity on essential cellular metabolisms, and the movement of ions via the apoplastic pathway (i.e. apoplastic barriers) are described here in detail.Elevated NaThe online version of this article (doi:10.1186/1939-8433-5-11) contains supplementary material, which is available to authorized users. Salini5) (IRRI ). SalineCl . MoreovCl (IRRI ).+ accumulate in excess in plants particularly in leaves over the threshold, which leads to an increase in leaf mortality with chlorosis and necrosis, and a decrease in the activity of essential cellular metabolisms including photosynthesis and alschroeder ) to plan+ uptake into shoots mediated by the intrusive apoplastic ion transport is considerably high under salinity stress . In rico et al. ; Yadav eo et al. ; Ochiai o et al. ). TherefL p). In case of water uptake in root cells, \u0394\u03a8 is the difference between \u03a8 of extracellular solution and intracellular sap solution. Under non-stress condition, intracellular \u03a8 is generally more negative than that of the soil solution, resulting in water influx into roots according to the water potential gradient. The water potential (\u03a8) is approximately consistent with the sum of the pressure potential (\u03a8p) and the osmotic potential .Salinity-induced osmotic stress reduces water uptake into plant roots. Plants regulate water transport under salinity stress because a sufficient amount of water is indispensable for the cells to maintain their growth and vital cellular functions such as photosynthesis and metabolisms. In the long distance water transport from roots to shoots, evaporation is one of the main motive forces for the water movement, especially in the apoplastic pathway. Salinity/osmotic stress directly or indiosm, salinity stress immediately reduces \u0394\u03a8 thus water influx. If the water potential gradient is reversed due to severe salinity/osmotic stress , water efflux from roots (dehydration) can occur. To minimize the influence of a reduction in water influx or dehydration upon the growth under salinity/osmotic stress, plants set independent strategies in motion by regulating the root L p (L pr) and attempting to restore \u0394\u03a8 . However, time is required to accumulate enough solutes inside the cell to get a decrease in intracellular \u03a8osm (osmotic adjustments). Signal transduction and changes of related-gene expression, in contrast, are a relatively quick response . In rica et al. 
), but thZea mays) or wheat (Triticum aestivum) was reported to be in a range of 0.5-0.7\u2009MPa seedlings, no significant change in the L pr has been reported when plants were treated with 100\u2009mM NaCl for 4\u2009hrs in a sharp contrast to the severe L pr repression by 200\u2009mM NaCl treatments for 4\u2009hrs . Such L pr reductions should be effective to prevent dehydration under stress conditions more severe than -0.5\u2009MPa . Severe salinity stress markedly decreases \u03a8 of the soil solution, which can reverse the osmotic gradient between the inside and the outside of root cell, which generates water efflux (dehydration). In these conditions, shutdown of the water transport attributed to the L pr reduction should be essential to minimize water loss at the initial phase of severe salt stress for survival Plants shut down L pr even upon moderate salinity stress conditions to get ready for more severe stress in advance because such a sequence occurs in nature ; and (ii) L pr reductions could be a sign of conversion of the growth status of plant cells from the rapid growth mode with high water absorption to the protect/tolerant one with less water uptake as a strategy for the survival under salinity stress . Consis Steudle ; Peyrano Steudle ; Carvaja Steudle ; Mart\u00edne Steudle ; Mart\u00edne Steudle ; Boursia Steudle ). In rooe et al. ). Howevem et al. ; Hachez m et al. ; Horie em et al. ). Howevee et al. ).L pr was observed in the plants of japonica rice cultivar Nipponbare under salinity stress of 100\u2009mM NaCl within 24\u2009hrs by the pressure chamber method . The result suggests that at least Nipponbare rice plants might not be able to promote an immediate repression of the L pr in response to the osmotic stress phase. Similar exceptions of no influence of salinity stress on L pr have been reported using tobacco and reln et al. ). Whetheassioura ) and barassioura ) is alsoArabidopsis plants have indicated that PIP aquaporins mediate water uptake by roots and are a predominant component of the L pr family called aquaporins . Furthel et al. ; Mart\u00ednel et al. ). These Arabidopsis and maize , which is stimulated by salinity stress . Salinil et al. ), suggese et al. ). The ime et al. ; Boursiae et al. ; Maurel e et al. ; Horie ee et al. ). In Arac et al. ). More rc et al. ). TogethPIPs were identified . Recenti et al. ; Sakuraii et al. ). A ubiqi et al. ). These o et al. ; Li et ao et al. ). These o et al. ). On theo et al. ) and ricosm against an osmotic gradient between root cells and outside saline solution, which eventually restore the water uptake into roots during salinity stress (Greenway and Munns ;;Arabidop and Zhu ). They a and Zhu ). The wou et al. ; Quinteru et al. ) . For moa et al. ). Taken 2+ accumulation identified a candidate locus named HvNax4, which happened to have a considerable influence on the shoot Na+ accumulation whether the SOS1-type Na+/H+ exchanger exists and plays a primary function in salt tolerant mechanism in barley as in Arabidopsis?, (iii) if so, whether the SOS-like mechanism is a component constituting the high-affinity K+ uptake-coupled Na+ extrusion system in barley roots?, and (iv) whether rice roots exhibit activity of the high-affinity K+ uptake-coupled Na+ extrusion as found in other cereals, in which the OsSOS1-3 protein are involved?A recent genetic study using barley on Znn et al. ). The Hvi et al. ). It hasu et al. ). Furthea et al. ). The rea et al. ). 
Togeth+ level while keeping the high level of K+, resulting in a high cytosolic K+/Na+ ratio that is preferable for vital cellular metabolisms . It hasn et al. ). The seBlumwald ). Furthe+/H+ antiport activity at the tonoplast membrane was reported in sugar beet . The moe et al. ; Gaxiolae et al. ) indicato et al. ; Rodr\u00edguo et al. ; Yamaguco et al. ). Transgo et al. ), suppore et al. ).Arabidopsis, six AtNHX genes were identified, whose gene products can be divided into two classes, class I (AtNHX1-4) and class II (AtNHX5 and 6) other than SOS3/CBL4. Unlike vacuolar-localized class I transporters, AtNHX5 and 6 class II transporters were postulated to function in endosomal vesicles and loss of functions of both transporters rendered the atnhx5 atnhx6 double mutant plants more salt sensitive . All clo et al. ). The SOu et al. ). Tonoplu et al. ), suggeso et al. ). A recel et al. ). Moreovl et al. ). These l et al. ). For mol et al. ; Rodr\u00edgul et al. ; Yamagucl et al. .+/H+ exchange activity is driven by the vacuolar proton gradient established by two independent proton pumps, vacuolar H+-ATPase (V-ATPase) and vacuolar H+-translocating pyrophosphatase (V-PPase) . These Plantago maritima plants maintains a greater salt-induced Na+/H+ antiport activity on the tonoplast than that of salt-sensitive Plantago media plants, suggesting that the innate difference in the ability for the Na+ sequestration into vacuoles could be the cause of the difference in salt sensitivity . Theref+ ions that reach the xylem by passing through barrier mechanisms in roots under salinity stress are transported to shoots. In addition to independent barrier components introduced above, plants retain a different protection mechanism at the cell-xylem apoplast interface. It has been shown that Na+ reabsorption occurs from the xylem stream by surrounding tissues, and as a result, reduces the net Na+ flow into shoots .AtHKT1;1 cDNA that encodes a Na+ transporter has been isolated from Arabidopsis plants as a homolog of the TaHKT2;1 gene, which is a Na+/K+ co-transporter in wheat (Triticum aestivum) . The dir et al. ). Severau et al. ; Sunarpiu et al. ; Davenpou et al. ; Horie eu et al. ; M\u00f8ller u et al. ). Particu et al. ) indicatl Figure . Supportr et al. ). Recente et al. ). Furthei et al. ). Togethi et al. ) . This ms Figure , which ie et al. ; Hauser e et al. ).+ reabsorption mechanisms have been found in cereals such as rice and wheat based on genetic QTL analyses. In rice, the OsHKT1;5 gene has been identified, based on the influence of the shoot K+ content (SKC1) locus on the K+ accumulation in xylem sap and shoots during salinity stress, as one of the primary genes causing a difference in salt tolerance between a tolerant indica cultivar Nona Bokra and a susceptible japonica cultivar . The lon et al. ). Given n et al. ). The Ose et al. ; Hauser e et al. ). In whem et al. ). A mores et al. ; James es et al. ; Byrt ets et al. ). Interes et al. ; James es et al. ). The Nag et al. ), which g et al. ; Hauser g et al. ). Furtheg et al. ; Horie eg et al. ; Hauser g et al. ). Togeth+ loading, the SOS1 transporter in Arabidopsis plants has been suggested to play a crucial role in Na+ retrieval from xylem vessels depending on the degree of salinity stress . 
This fd Tester ).+ content of barley plants under salinity stress, it has been suggested that salt tolerance is highly associated with the ability to restrict Na+ accumulation in shoots that mediates K+ loading to the xylem apoplastic space under salinity stress is yet to be determined. However, K+ efflux activity mediated by the K+ outward-rectifying channel (KORC) and/or the nonselective outward-rectifying channel (NORC) (Wegner and Raschke ). + uptake under salinity stress among different rice cultivars have been performed . In facy et al. ). They hy et al. ) have aly et al. ). They hy et al. ; Ranathuy et al. ). A simichreiber ).Arabidopsis root . With ro et al. ) has sheSoil salinity is a serious problem in the world agriculture. Owing to efforts of investigators and elevated levels of technologies, our knowledge on the mechanisms of plant salinity tolerance is dramatically expanding these days. However, individual plant species exhibits distinct salt sensitivity due to morphological differences and the difference in the ability of protection components that the plant has evolved to depend on. Therefore, challenges to elucidate the roles/significances of protection mechanisms in plant salt tolerance including morphological barriers at molecular, cellular and whole plant levels in further depth will be crucial to develop high-yielding salt-tolerant cultivars."} {"text": "Corrigendum:After publication, we noted terminological issues with the use of rRNA genes and OTUs in the article in addition to typographical issues in the Methods section, Reference list and Acknowledgements. In the \u201cDNA extraction and library preparation\u201d section of the Methods, we have removed \u201c-htp\u201d from the first sentence. In the same paragraph, we have replaced the lowercase u in \u201cul\u201d with the Greek letter mu. Terminology issues with rDNA genes and rDNA:https://microbiomejournal.biomedcentral.com/submission-guidelines/preparing-your-manuscript/research-articlefor some recommended guidelines about this we were not aware of. We therefore have changed all usage to \u201c16S rRNA genes\u201d. None of these affect the meaning in any way.The terminology we used to describe surveys of 16S rRNA genes was sometimes inappropriate and also now recommended against, even though it is technically correct (16S rDNA). Some of this was due to simple editing errors (16S rDNA genes) and some of it was using terminology we have previously used but which is now recommended against (16S rDNA). SeeTerminology regarding OTUs vs. species:Additionally, our study made use of common methods to classify sequences into what are known as \u201cOTUs\u201d or operational taxonomic units. We stated that these are used as a proxy for species but did not intend to say \u201cspecies\u201d in parts of the paper when we meant OTUs. In the Methods and Results and Discussion sections, we wrote that OTUs were a proxy for species for people who were not familiar with OTUs. It was pointed out to us by multiple people that this just confuses the matter and therefore we have therefore removed all uses of \u201cspecies\u201d when we mean \u201cOTUs\u201d. We have also removed the \u201cproxy\u201d lines. Some of these changes may affect how people interpret our text (if they thought we were truly quantifying species) but do not in any way impact the conclusions of our work. Reference issues:There a number of issues in the references. We have updated the references to add a DOI for Aas et al. (2005); Arndt et al. 
(2012); Belzer & De Vos (2012); Cogen, Nizet & Gallo (2008); Grice et al. (2009); Knief et al. (2010); Krych et al. (2014); Langille et al. (2014); Linton & Hinton (1988); Liu, Tan & Exley (2015); Murphy et al. (2007); Ribeiro et al. (2011); Schwarzberg et al. (2014); Seifried, Wichels & Gerdts (2015); Steyn et al. (1998); Willems (2014); Young et al. (2007) and Zaura et al. (2009). We have also corrected the titles of Brooks et al. (2015), Maule et al. (2009), McDonald et al. (2011), Novikova et al. (2006) and Venkateswaran et al. (2014). Ichijo et al. (2016) was published with the incorrect issue name (“April”); this has been removed. McMurdie & Susan (2013) has been corrected to McMurdie & Holmes (2013). The author list of Menninger et al. (2013) has been corrected to Dunn et al. (2013); we have also corrected the title. The author list of Caporaso (2012) has been corrected to Caporaso et al. (2012); we have also corrected the title and journal name in addition to adding the volume and pagination information. In-text citations to McMurdie & Susan (2013), Menninger et al. (2013) and Caporaso (2012) have been updated to reflect the correct author list. We have also added a new reference to Wickham (2009) to the reference list and replaced the reference to Wilkinson (2011) on page 7 with this new reference.Acknowledgement Section:There is a spelling error for a name in the Acknowledgements and the ORCID link/ID is incorrect. We have corrected the name to read Steven Kembel and the ORCID ID to be 0000-0001-5224-0952."} {"text": "Acute proximal humeral fractures in the elderly are generally treated non-operatively if alignment is acceptable and the fracture pattern is stable. When operative treatment is indicated, surgical fixation is often difficult or impossible to obtain. Hemiarthroplasty has long been the standard of care. However, with its reliance on tuberosity healing, functional outcomes and patient satisfaction are often poor. Reverse shoulder arthroplasty has emerged as a new technology for treating proximal humeral fractures, but the indications for its use remain uncertain. While not conclusive, the evidence suggests that reverse shoulder arthroplasty yields more consistent results, with improved forward elevation and higher functional outcome scores. The primary advantages of hemiarthroplasty are improved shoulder rotation and shorter operative time. Complication rates do not vary significantly between the two options. Although higher-quality trials are needed to further define the role of reverse shoulder arthroplasty, current evidence suggests that this is a reasonable option for surgeons who are highly familiar with its use. Dependi Fractures of the proximal humerus are very common in the elderly population, resulting in approximately 6% of all fractures in adults . While u In 1970, Neer described the classification that is most prevalent today . This cl High-demand and physiologically young patients may have increased tolerance for repeat surgery, and it may be reasonable to attempt a reconstruction even in higher-grade proximal humerus fractures in this population. Patients with low functional demand may be better served with a replacement, and those with pre-morbid symptoms of rotator cuff pathology or evidence of rotator cuff arthrosis may be better served with RSA. RSA is Based on radiographic criteria, primary arthroplasty may be indicated if healing is unlikely or if there is vascular compromise of the humeral head.
Fractur4Both HHR and RSA have been extensively described in the literature and the operative techniques are not within the scope of this article . With ra5In one systematic review of patients treated with HHR for proximal humeral fractures, 41% of patients reported unsatisfactory outcomes . However6et al. and Namdari et al. found increased complication rates with RSA as compared to HHR, while Mata-Fink et al. reported better outcomes overall with RSA compared to HHR based on the Constant score, ASES and OSS [No long term outcome studies have been published on RSA in the fracture setting. This has led many authors to question its durability and the wisdom of using this implant in physiologically younger patients for fracture . Current and OSS -11.et al. demonstrated superior forward elevation with RSA with mildly decreased external rotation [et al. similarly found superior forward elevation without a significant decrease in external rotation with RSA [et al. did not observe any significant range of motion differences between the two prostheses [et al. and Namdari et al. [et al. did not find a significant difference in either revision or complication rates [Range of motion (ROM) comparison between the two procedures by Mata-Fink rotation . Ferrellwith RSA . Howeverostheses , and witi et al. , 10. Maton rates . It is iThe results of these three systematic reviews did not demonstrate clear superiority of one prosthetic option over the other. Both appear to be viable options; further prospective studies are needed to further elicit differences in functional outcomes and to further define optimal indications.et al. conducted the only randomized trial in the literature to date comparing RSA with HHR for acute proximal humeral fractures [1. All patients underwent CT scan imaging. There was a minimum 2 year follow-up. A single modular system was used and the post-operative rehabilitation program was standardized across both groups. Functional outcome measures including the Constant, DASH, UCLA scores, active range of motion and tuberosity healing, were significantly higher in patients treated with RSA. The revision rate was lower with RSA. Functional outcomes were poorer with revision of HHR to RSA compared to cases treated with RSA primarily. Successful outcomes in the HHR group were dependent on tuberosity healing. The presence of an irreparable rotator cuff was a strong predictor of failure in HHR [Sebastia-Forcada ractures , Table 1e in HHR .et al. [Baudi et al. reportedet al. .Cuff and Pupello comparedet al. [In another retrospective study that compared HHR with RSA, Gallinet et al. observedet al. .et al. [vs 58 RSA). The Oxford Shoulder Score was higher in the RSA group than the HHR group at 5 years. Revision rates were not significantly different between groups [In 2013, Boyle et al. reportedn groups .et al. [In a much smaller study, Young et al. did not et al. [1) treated with HHR or RSA with followup averaging 3.6 years. Patients with RSA had significantly better functional outcomes scores and satisfaction. The RSA patients were older, with a mean age of 80 years compared to 69 years in the HA group; RSA patients had better forward elevation, and higher functional scores as measured by ASES, University of Pennsylvania score and Single Assessment Numeric Evaluation. Quality of life measures and rotation were not significantly different between groups [Garrigues et al. . reporten groups .et al. [In a cost-effectiveness analysis, Chalmers et al. 
observedIn summary, both HHR and RSA appear to offer good pain relief with no difference in DASH scores, a measure of disability in daily life, in studies that used this outcome measure. However, functional outcomes in HHR are significantly lower when the tuberosities do not heal, a factor which does not appear to affect the functional outcomes in RSA with the exception of rotation. Survivorship continues to remain a concern with RSA, although revision rates appear to be higher with HHR. The cost of hemiarthroplasty prosthesis is considerably lower than RSA implants; however data suggests that HHR is more expensive when the higher rehabilitation costs are considered. While both HHR and RSA are reasonable implant choices for elderly patients with acute proximal humerus fractures, RSA appears to carry certain advantages, particularly in elderly and low-demand patients, because a successful outcome is much less contingent on tuberosity healing."} {"text": "Plants, being sessile in nature, are constantly exposed to environmental challenges resulting in substantial yield loss. To cope with harsh environments, plants have developed a wide range of adaptation strategies involving morpho-anatomical, physiological, and biochemical traits. In recent years, there has been phenomenal progress in the understanding of plant responses to environmental cues at the protein level. This progress has been fueled by the advancement in mass spectrometry techniques, complemented with genome-sequence data and modern bioinformatics analysis with improved sample preparation and fractionation strategies. As proteins ultimately regulate cellular functions, it is perhaps of greater importance to understand the changes that occur at the protein-abundance level, rather than the modulation of mRNA expression. This special issue on \u201cPlant Proteomic Research\u201d brings together a selection of insightful papers that address some of these issues related to applications of proteomic techniques in elucidating master regulator proteins and the pathways associated with plant development and stress responses. This issue includes four reviews and 13 original articles primarily on environmental proteomic studies.The first review by Hossain et al. summarizPanicum virgatum) using the iTRAQ labeling method followed by nano-scale liquid chromatography mass spectrometry analysis. Li et al. [Hylocereus polyrhizus fruits at the posttranscriptional level. Li et al. [Camellia sinensis L.), highlighting the molecular mechanism involved in secondary metabolite production. Zhao et al. [Paeonia lactiflora Pall.) in response to Paclobutrazol, a triazole compound inhibiting growth of lateral branching. By using gel-free proteomics, Alqurashi et al. [Arabidopsis thaliana proteome composition implicating the role of cAMP in biotic and abiotic stress responses by inducing complex changes in cellular energy homeostasis. Rasool et al. [Rhynchophorus ferrugineus Oliv.) using two-dimensional gel electrophoresis (2-DE) and MALDI-TOF mass spectrometry. Liu et al. [Ginkgo biloba L. by exploiting 2-DE coupled with MALDI-TOF/TOF mass spectrometry. Yu et al. [Hosta \u201cGold Standard\u201d leaves from various regions at different development stages and under excess nitrogen fertilization using 2-DE coupled MALDI-TOF/TOF MS. Findings provide new insights towards understanding the mechanisms of leaf color regulation in variegated leaves. Zhang et al. [Populus deltoids) plants using the 2-DE technique followed by MALDI-TOF-TOF mass spectrometry analysis. 
Wang et al. [Zea mays), resulting in increased seed protein and lysine contents. The zein, non-zein, and total protein extracts of the seeds of transgenic plants are analyzed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE).Among the 13 original articles, six articles highlight iTRAQ-based proteomic approaches. Ye et al. present i et al. provide i et al. present i et al. emphasizi et al. present o et al. perform i et al. analyze l et al. present u et al. present g et al. unravel g et al. demonstrg et al. provide This special issue on \u201cPlant Proteomic Research\u201d is an attempt to provide researchers with a glimpse of advanced mass spectrometry techniques with a special emphasis on candidate proteins and pathways associated with plant development and stress responses. We believe that this special issue reflects the current perspective and state of the art of plant proteomics, which would not only enrich us in understanding the plant\u2019s response to environmental clues but would further help us in designing better crops with the desired phenotypes. The articles in this issue will be of general interest to proteomic researchers, plant biologists, and environmental scientists.We would like to express our gratitude to all authors for their high quality contributions and numerous peer reviewers for their critical evaluation and valuable suggestions. Moreover, we render our heartiest thanks to the Managing Editor Yong Ren and Section Managing Editor Yue Chen for giving us the opportunity to serve \u201cPlant Proteomic Research\u201d as Guest Editors and Editorial Office, a special mention goes to Sophie Suo for her untiring efforts in coordinating with authors and keeping us updated about the manuscript submission and review process, which helped us in completing the surmount task on time. Finally, we extend our sincere thanks to those professionals whose expertise in proofreading and formatting greatly improved the quality of this special issue."} {"text": "Sensory pain as assessed with the Pain Perception Scale did not show any significant change. Patients suffering from chronic pain benefited from the multimodal pain treatment up to twelve months after completion of the treatment.Chronic pain has high prevalence rates and is one of the top causes of years lived with disability. The aim of the present study was to evaluate the long-term effects of a multimodal day-clinic treatment for chronic pain. The sample included 183 chronic pain patients who participated in a four-week multimodal day-clinic treatment for chronic pain. The patients' average current pain intensity (NRS), sensory and affective pain , and depression and anxiety (HADS) were assessed at pre- and posttreatment, as well as at three follow-ups . Multilevel models for discontinuous change were performed to evaluate the change of the outcome variables. Improvements from pretreatment to posttreatment and from pretreatment to all follow-ups emerged for pain intensity (NRS; 0.54\u2009\u2264\u2009 Chronic pain is a major health care problem. A recent review and meta-analysis including 86 studies found an average prevalence estimate of 31% , as wellefficacy in RCTs under controlled conditions and their effectiveness under clinically representative conditions and P\u00f6hlmann et al. [25] (both\u2009\u2009d=0.7). Compared with Sch\u00fctze et al. [d=0.10\u20130.20) and Ruscheweyh et al. [d=0.26), the improvement at posttreatment regarding symptoms of anxiety represents a slightly higher effect (d=0.40). 
Compared to Borys et al. [d=0.55), however, the effect on anxiety at discharge was lower. At 12-month follow-up, anxiety improved (d=0.34) slightly more than that in Borys et al. [d=0.22). Anxiety at 12-month follow-up was not different from baseline in the study by Ruscheweyh et al. [The reduction of depressive symptoms at the end of the treatment s et al. (d=0.77)i et al. (d=0.7),n et al. (d=0.80)s et al. , Hampel s et al. , and Russ et al. who repoe et al. s et al. (d=0.55)s et al. (d=0.22)s et al. h et al. . Corresph et al. , depressThe data of the current study were collected in a naturalistic setting. This enhances the external validity of the results. But as the study was neither randomized nor controlled, the internal validity of the results is limited. Therefore, we cannot exclude confounders like time effects as possible causes for the found improvements. Yet, a 4-year follow-up study on the course of chronic pain in the community reported that chronic pain shows low recovery rates . NeverthAs chronic pain is most probably caused by an interaction of biopsychosocial factors, multimodal pain treatment programs seem to provide the most effective therapy. The current study supports the notion that chronic pain patients benefit from multimodal treatments under the conditions of routine care in the long term."} {"text": "Bacteroidetes, characterized by a distinct gliding motility, occurs in a broad variety of ecosystems, habitats, life styles, and physiologies. Accordingly, taxonomic classification of the phylum, based on a limited number of features, proved difficult and controversial in the past, for example, when decisions were based on unresolved phylogenetic trees of the 16S rRNA gene sequence. Here we use a large collection of type-strain genomes from Bacteroidetes and closely related phyla for assessing their taxonomy based on the principles of phylogenetic classification and trees inferred from genome-scale data. No significant conflict between 16S rRNA gene and whole-genome phylogenetic analysis is found, whereas many but not all of the involved taxa are supported as monophyletic groups, particularly in the genome-scale trees. Phenotypic and phylogenomic features support the separation of Balneolaceae as new phylum Balneolaeota from Rhodothermaeota and of Saprospiraceae as new class Saprospiria from Chitinophagia. Epilithonimonas is nested within the older genus Chryseobacterium and without significant phenotypic differences; thus merging the two genera is proposed. Similarly, Vitellibacter is proposed to be included in Aequorivita. Flexibacter is confirmed as being heterogeneous and dissected, yielding six distinct genera. Hallella seregens is a later heterotypic synonym of Prevotella dentalis. Compared to values directly calculated from genome sequences, the G+C content mentioned in many species descriptions is too imprecise; moreover, corrected G+C content values have a significantly better fit to the phylogeny. Corresponding emendations of species descriptions are provided where necessary. 
Whereas most observed conflict with the current classification of Bacteroidetes is already visible in 16S rRNA gene trees, as expected whole-genome phylogenies are much better resolved.The bacterial phylum Bacteroidetes comprise bacteria widespread in the biosphere and isolated from many distinct habitats, including temperate, tropical and polar ecosystems saline environments only , whereas Cryomorphaceae mostly live in cold, marine environments and gene cluster (polysaccharide utilization loci) transfer between environmental and gut Bacteroidetes from the Bacteroidetes as the novel phylum Rhodothermaeota. Nevertheless, many unsatisfactory aspects of Bacteroidetes classification might still persist.g Woese, . Before chenbach . The anaIndeed, only monophyletic taxa can be accepted in a taxonomic classification because its purpose is to summarize the phylogeny of the classified organisms Hennig, , and genBacteroidetes, it is strongly recommended to include the G+C content especially when describing every species of Flavobacteriaceae pilot phase as well as the One Thousand Microbial Genomes phase 1 (KMG-1) projects What is the relationship between the phylogenomic trees and the proposed taxonomic classifications or the 16S rRNA gene phylogenies? (ii) Which taxa need to be revised because they are evidently non-monophyletic? (iii) Which taxon descriptions that lack G+C values should be augmented with information from genome sequences? (iv) Which taxon descriptions deviate from G+C content values calculated from genome sequences and should now be emended accordingly? (v) Does the correction of G+C values improve their fit to the phylogeny?The Genomic Encyclopedia of Bacteroidetes (ingroup), Chlorobi, Planctomycetes, and Verrucomicrobia (outgroup) type-strain genomes originating from the GEBA pilot phase and the KMG-1 number of identities within BLAST hits (high-scoring segment pairs) to the overall length of these hits and thus is unaffected by incomplete genome sequencing, and subjected to 100 pseudo-bootstrap replicates calculation of the G+C content from genome sequences was done as in a previous study , based on the priority of the respective genus names. A comparable situation has been solved recently for the Ignavibacteriaceae differ from Bacteroidetes by a considerable number of phenotypic characters. The large amount of phospholipids found in Rhodothermus (and its relatives) is unusual for aerobic Bacteroidetes rather than ester-linked polar lipids . Three branches with weak to moderate support, respectively, would need to be wrong to obtain monophyletic Chitinophagia and Chitinophagales. The UCT and CCT show a distinct picture with a monophyletic Chitinophagia and Chitinophagales with 94\u201399% support under ML and <60\u201379% support under MP. Thus, regarding the monophyly criterion, it might or might not be adequate to place Saprospiraceae in Chitinophagales according to our analyses. The monophyly of the family itself, including the type genus Saprospira, is supported by the 16S rRNA gene trees with 95\u2013100%. Thus, given its uncertain position relative to the remaining Chitinophagia and Chitinophagales, the taxonomic placement of Saprospiraceae should be reconsidered.With the y Figure with theSaprospiraceae has been proposed first in Bergey's manual that form long filaments (up to 500 \u03bcm) and do not possess sphingophospholipids. Moreover, the 16S rRNA gene trees indicate that the group comprising both Saprospiraceae and Chitinophagaceae . 
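The two genome-based quantities used in this analysis are simple ratios: the intergenomic similarity described above is the number of identities within BLAST high-scoring segment pairs divided by the overall length of those hits, and the genome-derived G+C content is the fraction of G and C bases in the assembled sequence. The following Python sketch illustrates both calculations; the function names and the toy inputs are illustrative assumptions and do not reproduce the study's actual GBDP pipeline.

```python
# Minimal sketch (not the study's actual GBDP pipeline) of two ratios
# described in the text: genome-derived G+C content and the
# identities-to-HSP-length similarity used for intergenomic comparisons.

def gc_content(sequence):
    """Return the fraction of G and C among unambiguous bases (A, C, G, T)."""
    seq = sequence.upper()
    counted = [base for base in seq if base in "ACGT"]
    if not counted:
        raise ValueError("sequence contains no unambiguous bases")
    gc = sum(1 for base in counted if base in "GC")
    return gc / len(counted)

def identity_fraction(identities, hsp_lengths):
    """Sum of identities within BLAST hits divided by the overall length of those hits."""
    total_length = sum(hsp_lengths)
    if total_length == 0:
        raise ValueError("no high-scoring segment pairs supplied")
    return sum(identities) / total_length

if __name__ == "__main__":
    contig = "ATGCGCGTTAACGGCTAGCTAGGCCN"   # toy sequence, illustration only
    print(f"G+C content: {100 * gc_content(contig):.1f} mol%")
    # Toy HSPs: 950 identities over 1000 bp and 480 identities over 500 bp.
    print(f"identity fraction: {identity_fraction([950, 480], [1000, 500]):.3f}")
```

On real data the G+C content would be computed over all contigs of the assembly, which is why genome-derived values are more precise than the rounded figures found in many older species descriptions.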
Lewinella, Saprospira and Aureispira exhibit gliding motility, but only Saprospira and Aureispira form helical filaments tolerates up to 6% NaCl and 70\u00b0C thrive in salt lakes and crystallizer ponds and grow at maximum temperatures of 50\u00b0C in medium with at least 5% NaCl and up to saturation contain PG, but no glycolipids with 99% support in the GBDP analysis. Flexithrix and Rapidithrix formed a clade with reasonable bootstrap support in the UCT and CCT. Marivirga and Roseivirga did not form a clade either, with a conflicting branch supported by 94% in the GBDP tree. Because of their overall lower resolution, the 16S rRNA gene trees do not indicate in which of these three distinct groups of Flammeovirgaceae its type genus, Flammeovirga, is placed. For this reason, further revisions of Flammeovirgaceae have to be postponed until more genome sequences are available. Based on phylogenetic results , and Thermonema lapsum (Flammeovirgaceae) was used as an outgroup for the 16S rRNA gene sequence-based phylogeny. A similar set of strains was investigated in the description of the two other Marivirga species, M. lumbricoides , hence no taxonomic consequences will be undertaken in our study.Within s Figure , as all Butyricimonas and Odoribacter from Porphyromonadaceae to place them in the new family Odoribacteraceae appeared paraphyletic because Pseudopedobacter saltans was placed as sister group of Pedobacter glucosidilyticus with 100% support and Pedobacter oryzae as sister group of Arcticibacter svalbardensis and Mucilaginibacter paludis with <50% support appeared paraphyletic with high confidence in the phylogenomic tree because the clade comprising F. litoralis and F. roseolus (separated by long branches) occurred in clades with 77 and 99% support together with other genera, to the exclusion of F. elegans, which formed the sister group of Microscilla marina with maximum support , which was placed in an uncertain position. Apart from tree topology, the branch lengths in the whole-genome and 16S rRNA gene tree indicated that the Flexibacter species are too divergent to be placed in a single genus. Additionally, Microscilla marina and F. elegans appear to be too divergent to be placed in a single genus , but it was emphasized that Flexibacter is still heterogeneous and should be subdivided based on additional molecular taxonomic data to obtain monophyletic groups. The solution to merge the two genera is thus more conservative. Moreover, the overall divergence of the clade comprising the two genera is lower than the divergence of other clades comprising only a single genus appeared paraphyletic in the phylogenomic tree because Croceitalea dokdonensis was placed as sister group of M. lutaonensis. Support was weak (<60%), however, and in the UCT and CCT Croceitalea and Muricauda appear as separate groups with moderate support. Thus, even though in the description of the moderate thermophile M. lutaonens the genus Croceitalea was not considered also appeared paraphyletic in the phylogenomic tree because two Vitellibacter species were placed as sister group of Aequorivita sublithincola with 94% support. In the UCT and CCT the clade comprising both genera is maximally supported, but within the clade the two genera are intermixed without much support. 
The genomic divergence of the clade comprising both genera appears as lower than the divergences of many clades that contain only a single genus and Chlostridia are considered as halotolerant or halophilic Bacteria or halotolerant or halophilic .Our phylogenetic analysis revealed much agreement between genome-scale data and the current classification of ia Oren, . The recRhodothermaeota and Balneolaeota, but also of other groups reclassified here, it is likely that ecological studies using metagenomic binning or similar techniques will benefit from the further improved classification. Studies that do not distinguish between Rhodothermaeota, Balneolaeota, and Bacteroidetes may insufficiently describe microbial compositions of environmental habitats and perhaps make misleading assumptions on environmental conditions , and thus environmental studies using this probe are not influenced by the mentioned reclassifications.In other areas such as medicine and industry, the risk classification of a prokaryotic species to cause infectious diseases is based on its taxonomic classification ABAS, . Given tBacteroidetes species descriptions needs to be corrected. The according emendations proposed below are numerous but by no means excessive. Indeed, we do not propose to correct all G+C content values whose precision or accuracy could be improved but only those that are too imprecise or inaccurate given that the within-species deviation in G+C content is at most 1% superphylum,\u201d considering the recent proposal to include the phylum rank in taxonomy .Bal.ne.o.lae.o'ta order of the phylum is The description is the same as given by Munoz et al. with theBalneolia from the phylum Rhodothermaeota.This change was necessary due to the removal of Balneolia is part of the phylum Balneolaeota and has been additionally circumscribed on the basis of whole-genome phylogenetic analysis.The description is the same as given by Munoz et al. with theBalneolia from the phylum Rhodothermaeota. The description is the same as for the order Balneolales.This change was necessary due to the removal of The description is the same as given by Nakagawa with theBalneola from Cytophagia.This change was necessary due to the removal of Saprospira, type genus of the type order of the class, -ia ending to denote a class; N.L. fem. pl. n. Saprospiria, the class of the order Saprospirales).Sa.pros.pi'ri.a .Sa.pro.spi.ra'les .Le.wi.nel.la'ce.ae . Seawater is required for growth. Flexirubin-type pigments are not produced. Carotenoid-type pigments are produced. The respiratory quinone is MK-7. The genomic G+C content is around 45\u201353%.Cells are ensheathed, Gram-stain negative flexible rods (up to 3 \u03bcm) that form long filaments (up to 150 \u03bcm) and are motile by gliding. Typical fatty acids are iso-CBacteroidetes, order Saprospirales ord. nov., class Saprospiria class. nov., and currently comprises only the type genus, Lewinella.This family belongs to the phylum Haliscomenobacter type genus of the family; -aceae ending to denote a family; N.L. fem. pl. n. Haliscomenobacteraceae the Haliscomenobacter family).Ha.lis.co.me.no.bac.te.ra'ce.ae . Typical fatty acids are iso-C15:0, summed feature 3 (either C16:1 \u03c97c/C16:1 \u03c96c or C16:1 \u03c97c/iso-C15:0 2-OH) and either iso-C17:0 3-OH or iso-C15:0 3-OH. Flexirubin-type pigments are not produced. Carotenoid-type pigments are produced. The respiratory quinone is MK-7. Cells are oxidase-positive. 
The genomic G+C content is around 47\u201354%.Cells are Gram-stain negative, non-motile long rods (up to 5 \u03bcm) that form long needle-like filaments (up to 300 \u03bcm). Some are enclosed by a narrow, hardly visible hyaline sheath , Phaeodactylibacter and Portibacter.This family belongs to the phylum Microscilla type genus of the family; -aceae ending to denote a family; N.L. fem. pl. n. Microscillaceae the Microscilla family).Mi.cros.cil.la'ce.ae and Eisenibacter gen. nov.This family belongs to the phylum Bernardetia type genus of the family; -aceae ending to denote a family; N.L. fem. pl. n. Bernardetiaceae the Bernardetia family).Ber.nar.de.ti.a'ce.ae , Hugenholtzia, and tentatively also Garritya.This family belongs to the phylum The description is the same as given by Munoz et al. with the15:0 and other non-hydroxy branched-chain fatty acids. Major polar lipids are diphosphatidylglycerol, phosphatidylethanolamine and either phosphatidylglycerol or phosphatidylcholine . The major menaquinone is MK-7. The genomic G+C content varies around 45%.Cells are non-motile or motile by means of flagella. The dominant fatty acids are iso-CThe description is the same as given by Munoz et al. with the15:0, C18:1 \u03c97c, summed feature 3 (C16:1 \u03c96c and/or C16:1 \u03c97c).Major polar lipids are diphosphatidylglycerol (cardiolipid) and diphosphatidylcholin. Some species possess halocapnines. The major fatty acids are iso-CThe description is the same as given by Ludwig et al. with the16:0, iso-C17:0, anteiso-C17:0. The genomic G+C content varies around 65%.Major polar lipids are diphosphatidylglycerol, phosphatidylethanolamine and phosphatidylglycerol. The major fatty acids are iso-CThe description is the same as given by Munoz et al. with the15:0, with a low ratio of anteiso-C15:0 to iso-C15:0. The genomic G+C content varies around 40\u201350%.Cells are non-motile. Metabolism fermentative, major end products are diverse organic acids. Major menaquinones are MK-9 and MK-10. Major fatty acid iso-CThe description is as given by Krieg et al. , with thCells are long rods (up to 3.5 \u03bcm) that form long helical filaments (up to 500 \u03bcm) and are motile by gliding. NaCl is required for growth and some can be tolerate NaCl at a concentration of up to 9% (w/v). Cytochrome oxidase and catalase activities are variable. Flexirubin-type pigments are not produced. Carotenoid-type pigments are produced. The respiratory quinone is MK-7. The genomic G+C content is around 33\u201348%.Bacteroidetes, order Saprospirales ord. nov., class Saprospiria class. nov., and currently comprises the genera Saprospira (the type genus) and Aureispira.This family belongs to the phylum Bernardet named after Jean-Fran\u00e7ois Bernardet, researcher at INRA Research Center, Jouy-en-Josas, France, and chairman of ICSP subcommittee on the taxonomy of aerobic Bacteroidetes; N.L. fem. n. Bernardetia a genus named after Jean-Fran\u00e7ois Bernardet).Ber.nar.de'ti.a .B. li.to.ra'lis .Gar.ri'ty.a species of the genus is Garritya polymorpha.On the basis of 16S rRNA gene sequence and phylogenomic analysis, the genus represents a separate branch within the order polymorpha, variable in form).G. po.ly.mor'pha .Hu.gen.hol'tzi.a .H. ro.se'o.la species of the genus is Thermoflexibacter ruber.On the basis of 16S rRNA gene sequence analysis as well as previously published physiological and morphological data, the genus represents a branch of uncertain affiliation within the order rubra, red).T. 
ru'bra .Ei.se.ni.bac'ter species of the genus is Eisenibacter elegans.On the basis of 16S rRNA gene sequence analysis as well as previously published physiological and morphological data, the genus represents a separate branch within the order E. e'le.gans .Flexibacter elegans Reichenbach 1989b, 2067ALBasonym: Flexibacter elegans .A. aes.tu.a'ri.i .A. e.chi.no.i.de.o'rum .A. ni.o.nen'sis A. soe.sok.ka.ken'sis Vitellibacter vladivostokensis Nedashkovskaya et al. 2003Basonym: Vitellibacter vladivostokensis .C. gin.sen.gi.ter'rae Basonym: Epilithonimonas ginsengisoli .C. hal.per'ni.ae Basonym: Epilithonimonas lactis .C. psy.chro.to'le.rans. .C. te'nax .C. xi.xi.so'li (N.L. n. Epilithonimonas xixisoli Feng et al. 2014Basonym: Epilithonimonas xixisoli (Feng et al., The description is the same as for Flexibacter flexilis, as its known features, while scarce, already differentiate at the genus level.The description is the same as for the species Cytophagales, and must be separated from all other species formerly classified in Flexibacter. The description of the genus must accordingly be restricted.On the basis of 16S rRNA gene sequence analysis as well as previously published physiological and morphological data, the type species of the genus represents a separate branch within the order The description is as given by Derrien et al. with theThe description is as given by Nedashkovskaya et al. with theThe description is as given by Yoon et al. with theThe description is as given by Rautio et al. with theThe description is as given by Song et al. with theThe description is as given by Zhilina et al. with theThe description is as given by Downes et al. with theThe description is as given by Prasad et al. with theThe description is as given by Nedashkovskaya et al. with theThe description is as given by Johnson et al. with theThe description is as given by Robert et al. with theThe description is as given by Hayashi et al. with theThe description is as given by Whitehead et al. with theThe genome sequence-derived G+C content was reported earlier (Land et al., The description is as given by Bakir et al. with theThe description is as given by Holdeman and Moore with theThe description is as given by Castellani and Chalmers with theThe description is as given by Nishiyama et al. with theThe description is as given by Benno et al. with theThe genome sequence-derived G+C content was reported earlier (Pati et al., The description is as given by Bakir et al. with theThe description is as given by Fenner et al. with theThe description is as given by Eggerth and Gagnon with theThe description is as given by Ueki et al. with theThe description is as given by Castellani and Chalmers with theThe description is as given by Eggerth and Gagnon with theThe description is as given by Eggerth and Gagnon with theThe description is as given by Urios et al. with theThe description is as given by Morotomi et al. with theThe description is as given by Brettar et al. with theThe description is as given by Vandamme et al. with theThe description is as given by Schlesner et al. with theThe description is as given by Sakamoto et al. with theThe description is as given by Sakamoto et al. with theThe description is as given by London et al. with theThe description is as given by Leadbetter et al. with theThe description is as given by Bowman with theThe genome sequence-derived G+C content was reported earlier (Abt et al., The description is as given by Johansen et al. 
with theThe genome sequence-derived G+C content was reported earlier (Pati et al., The description is as given by K\u00e4mpfer et al. with theThe description is as given by Sangkhobol and Skerman with theThe genome sequence-derived G+C content was reported earlier (Glavina Del Rio et al., The description is as given by K\u00e4mpfer et al. with theThe description is as given by Imhoff with theThe description is as given by Imhoff with theThe description is as given by Gibson et al. with theThe description is as given by K\u00e4mpfer et al. with theThe description is as given by Kim et al. with theThe description is as given by Quan et al. with theThe description is as given by Young et al. with theThe description is as given by K\u00e4mpfer et al. with theThe description is as given by Montero-Calasanz et al. with theThe description is as given by Loveland-Curtze et al. with theThe description is as given by K\u00e4mpfer et al. with theThe description is as given by K\u00e4mpfer et al. with theThe description is as given by Sang et al. with theThe description is as given by Montero-Calasanz et al. with theThe description is as given by Pires et al. with theThe description is as given by Strahan et al. with theThe description is as given by Weon et al. with theThe description is as given by Benmalek et al. with theThe description is as given by Lee et al. with theThe description is as given by Nedashkovskaya et al. with theThe description is as given by Raj and Maloy with theThe description is as given by Shivaji et al. with theThe description is as given by Reichenbach with theThe description is as given by Winogradsky with theThe description is as given by Dong et al. with theThe description is as given by Reddy and Garcia-Pichel with theThe description is as given by Chelius and Triplett with theThe genome sequence-derived G+C content was reported earlier (Lang et al., The description is as given by Liu et al. with theThe description is as given by Hofstad et al. with theThe description is as given by Lawson et al. with theThe description is as given by Nedashkovskaya et al. with theThe description is as given by Nedashkovskaya et al. with theThe description is as given by K\u00e4mpfer et al. with theThe description is as given by Zhang et al. with theThe description is as given by Saha and Chakrabarti with theThe description is as given by Zhang et al. with theThe description is as given by Lee et al. with theThe description is as given by Yoon and Im with theThe description is as given by Yoon and Im with theThe description is as given by Yi et al. with theThe description is as given by Sheu et al. with theThe description is as given by Dong et al. with theet al. (Van Trappen et al., The description is as given by Van Trappen The description is as given by Bernardet et al. with theThe description is as given by Dong et al. with theThe description is as given by Dong et al. with theThe description is as given by Fujii et al. with theThe description is as given by Larkin et al. with theThe description is as given by Lewin with theThe description is as given by Nedashkovskaya et al. with theThe description is as given by Maci\u00e1n et al. with theThe description is as given by Van Trappen et al. with theThe genome sequence-derived G+C content was reported earlier (Riedel et al., The description is as given by van Veen et al. 
with theThe genome sequence-derived G+C content was reported earlier (Daligault et al., The description is as given by Moore and Moore with theThe description is as given by Buczolits et al. with theThe description is as given by Buczolits et al. with theThe description is as given Hirsch et al. with theThe description is as given by Surendra et al. with theThe description is as given by Kumar et al. with theThe description is as given by Quan et al. with theThe genome sequence-derived G+C content was reported earlier (Stackebrandt et al., The description is as given by Weon et al. with theThe genome sequence-derived G+C content was reported earlier (Abt et al., The description is as given by Pinhassi et al. with theThe description is as given by Khan et al. with theThe description is as given by Bhumika et al. with theThe description is as given by Lewin with theThe description is as given by Pankratov et al. with theThe description is as given by Arun et al. with theThe description is as given by Vancanneyt et al. with theThe description is as given by Kim et al. with theThe description is as given by Kwon et al. with theThe description is as given by Nagai et al. with theThe description is as given by Hardham et al. with theThe genome sequence-derived G+C content was reported earlier (G\u00f6ker et al., The description is as given by Chin et al. with theThe description is as given by Zhou et al. with theThe genome sequence-derived G+C content was reported earlier (Riedel et al., The description is as given by Sakamoto and Benno with theThe description is as given by Sakamoto et al. with theThe description is as given by Sakamoto and Benno with theThe description is as given by Zhou et al. with theThe description is as given by Yang et al. with theThe description is as given by Schlesner and Hirsch with theThe genome sequence-derived G+C content was reported earlier (Clum et al., The description is as given by Gosink et al. with theThe description is as given by Kim et al. with theThe description is as given by Nedashkovskaya et al. with theThe description is as given by Shah and Collins with theThe description is as given by Summanen et al. with theThe description is as given by Willems and Collins with theThe description is as given by Shah and Collins with theThe description is as given by Shah and Collins with theThe description is as given by Fournier et al. with theThe description is as given by Shah et al. with theThe description is as given by Summanen et al. with theThe description is as given by Avgustin et al. with theThe description is as given by Lawson et al. with theThe description is as given by Downes et al. with theThe description is as given by Avgustin et al. with theThe description is as given by Avgustin et al. with theThe description is as given by Shah and Collins with theThe description is as given by Hayashi et al. with theThe description is as given by Shah and Collins with theThe description is as given by Willems and Collins with theThe description is as given by Wu et al. with theThe description is as given by Downes et al. with theThe description is as given by Shah and Collins with theThe description is as given by Wu et al. with theThe description is as given by Downes et al. with theThe description is as given by Wu et al. with theThe description is as given by Sakamoto et al. with theThe genome sequence-derived G+C content was reported earlier (Pati et al., The description is as given by Alauzet et al. 
with theThe description is as given by Shah and Gharbia with theThe description is as given by Shah and Collins with theThe description is as given by Shah and Collins with theThe description is as given by K\u00f6n\u00f6nen et al. with theThe description is as given by Ueki et al. with theThe description is as given by Glazunova et al. with theThe description is as given by Imhoff with theThe description is as given by Chen and Dong with theThe description is as given by Cao et al. with theThe description is as given by Vaz-Moreira et al. with theThe description is as given by Bowman et al. with theThe description is as given by Bowman et al. with theThe description is as given by Donachie et al. with theThe description is as given by Bowman et al. with theThe description is as given by Schmidt et al. with theThe description is as given by Collins et al. with theThe description is as given by Cho and Giovannoni with theThe description is as given by Nedashkovskaya et al. with theThe description is as given by Nedashkovskaya et al. with theThe description is as given by Lau et al. with theThe description is as given by Ryu et al. with theThe description is as given by Larkin and Williams with theThe genome sequence-derived G+C content was reported earlier (Copeland et al., The description is as given by Chelius et al. with theThe description is as given by Ivanova et al. with theThe description is as given by Chen et al. with theThe description is as given by An et al. with theThe description is as given by Yabuuchi et al. with theThe description is as given by Takeuchi and Yokota with theThe description is as given by Migula with theThe genome sequence-derived G+C content was reported earlier (Lail et al., The description is as given by Ten et al. with theThe description is as given by Finster et al. with theThe description is as given by Stanier with theThe description is as given by Romanenko et al. with theThe description is as given by Qiu et al. with theThe description is as given by Nobre et al. (Tenreiro et al., The description is as given by Zhang et al. with theThe genome sequence-derived G+C content was reported earlier (Lang et al., The description is as given by Begum et al. with theHK prepared genomic DNA. TW sequenced the genomes. MH, NI, NK, and SM annotated the genomes. JM and MG phylogenetically analyzed the data. MGL and MG collected the G+C content information. RH, MGL, and MG collected the phenotypic information. RH, JM, MGL, and MG interpreted the results. All authors read and approved the final manuscript.This work was performed under the auspices of the US Department of Energy's Office of Science, Biological and Environmental Research Program, and by the University of California, Lawrence Berkeley National Laboratory under contract No. DE-AC02-05CH11231. RH was supported by the German Bundesministerium f\u00fcr Ern\u00e4hrung und Landwirtschaft, grant No. 22016812 for Brian J. 
Tindall.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Dry socket is one of the most common complications that develops after the extraction of a permanent tooth, and its prevention is more effective than its treatment.Analyze the efficacy of different methods used in preventing dry socket in order to decrease its incidence after tooth extraction.A Cochrane and PubMed-MEDLINE database search was conducted with the search terms \u201cdry socket\u201d, \u201cprevention\u201d, \u201crisk factors\u201d, \u201calveolar osteitis\u201d and \u201cfibrynolitic alveolitis\u201d, both individually and using the Boolean operator \u201cAND\u201d. The inclusion criteria were: clinical studies including at least 30 patients, articles published from 2005 to 2015 and written in English. The exclusion criteria were case reports and nonhuman studies.30 publications were selected from a total of 250. Six of the 30 were excluded after reading the full text. The final review included 24 articles: 9 prospective studies, 2 retrospective studies and 13 clinical trials. They were stratified according to their level of scientific evidence using SIGN criteria (Scottish Intercollegiate Guidelines Network).All treatments included in the review were aimed at decreasing the incidence of dry socket. Locally administering chlorhexidine or applying platelet-rich plasma reduces the likelihood of developing this complication. Antibiotic prescription does not avoid postoperative complications after lower third molar surgery. With regard to risk factors, all of the articles selected suggest that patient age, history of previous infection and the difficulty of the extraction are the most common predisposing factors for developing dry socket. There is no consensus that smoking, gender or menstrual cycles are risk factors.Taking the scientific quality of the articles evaluated into account, a level B recommendation has been given for the proposed-procedures in the prevention of dry socket. Key words:Dry socket, prevention, alveolar osteitis, risk factors. Dry socket is the most common complication following tooth extraction and one Its incidence is approximately 3% for all routine extractions and can exceed 30% for impacted mandibular third molars , and manIt has been suggested that increased local fibrinolytic activity is the main etiological factor in developing dry socket. Increased in fibrinolytic activity could result in the premature loss of the intraalveolar blood clot after extraction . The fibThe treatment of alveolitis depends on each professional\u2019s clinical experience primarilThe Cochrane Collaboration published a review on local procedures for managing dry socket, and concluded there was no evidence to support any of the procedures should be included in its treatment .The aim of this systematic review is to analyze the different methods used for preventing dry socket. The following question emerged: what is the most effective method for preventing dry socket and reducing its incidence? In addition, would identifying the risk factors for dry socket reduce its incidence?A Cochrane and PubMed-MEDLINE databases search of articles was conducted between May 2015 and December 2015. The key words \u201cdry socket\u201d, \u201crisk factors\u201d, \u201calveolar osteitis\u201d and \u201cfibrynolitic alveolitis\u201d were used. 
After that, the terms were combined using the Boolean operator \u201cAND\u201d, in order to obtain the articles that included two or more of the words used in the search.The inclusion criteria were clinical studies that included at least 30 patients published from 2005 to 2015 and written in English. The exclusion criteria were case reports and nonhuman studies.Articles were selected by one of the authors first reading the titles and abstracts and then reading the full text of the articles that met the inclusion criteria Fig. . The PRIThe complete texts of 30 articles were analyzed out of the 250 studies initially obtained from the search. Six of these 30 articles were excluded because they had no direct relationship with the subject and 24 relevant articles were finally selected to be included in our systematic review: 9 prospective studies, 2 retrospective studies and 13 clinical trials Table , Table 2The articles we reviewed analyzed three different methods for preventing dry socket: chlorhexidine -23, antiThe concentration and formulation of chlorhexidine in preventing dry socket used 0.12%, 0.2% and 1% gel formulation -21,23 oret al. one of the most studied medications, in 500 mg or 2 g doses. Bortoluzzi et al. studied et al. , tetracyet al. and topiet al. . It must with another preventive method like eugenol (Alvogyl\u00ae) , the latet al. (With respect to antibiotic prescription, there is a consensus between the eight articles included in the review. Seven of them conclude that the prophylactic regimen is unnecessary, since it does not prevent dry socket. Even so, Halpern and Dodson do descret al. concludeet al. (Platelet-rich plasma may also have a preventive effect as well as being efficient in dry socket management . The twoet al. points oIn relation to the other methods described for dry socket prevention, the results were diverse. Both warm saline mouth rinse and abso- Risk factorset al. (Regarding dry socket risk factors, Chuang et al. makes a et al. (These authors point out that the only modifiable factor is the pre- surgical infection site that could be inoculated with microorganisms from the external environment in the newly exposed socket after extraction . Partharet al. goes on et al. (Another modifiable factor may be tobacco use, even though there is no clear data indicating a higher predisposition in smoking patients. Three articles ,43,45 deet al. found siet al. , althougOnly Eshghpour and Nejat describeet al. (et al. (With respect to the menstrual cycles and oral contraceptive use, Eshghpour et al. found a (et al. did not (et al. .et al. (et al. (Abu Younis et al. believeset al. -45 suppoet al. . Parhasa (et al. also poi (et al. .et al. (Professional experience was only analyzed in one article , showinget al. were theOnly Oginni states tet al. (Finally, with reference to age and gender as risk factors, all of the authors except Eshghpour and Nejat describeet al. found a Some limitations encountered during the review process were the lack of consensus in the preventive methods used in the articles included in the review, being the formulation and dosage of the method studied different in each article, and therefore their comparison was quite challenging. Also, the diversity of the risk factors considered in the articles included made it difficult to compare all the studies with accuracy.Chlorhexidine administration or platelet rich plasma reduce dry socket development. 
Antibiotic prescriptions do not have a preventive effect on postoperative inflammatory complications.Age, history of previous infection and difficulty of extraction are risk factors for developing dry socket and should therefore be taken into account by the clinician when carrying out the procedure. There is no consensus that tobacco use and menstrual cycles play a role in the development of dry socket.After the article\u2019s analysis and according to their scientific quality, a level C recommendation is given to all the therapeutic procedures proposed for preventing dry socket."} {"text": "The 2015 World Health Organization (WHO) Guidelines on when to start antiretroviral (ARV) therapy and on pre-exposure prophylaxis (PrEP) for HIV have shoOn 8 June 2016, the UN General Assembly signed the political declaration on HIV and AIDS: on the fast-track to accelerate the fight against HIV and to end the AIDS epidemic by 2030 . To fulfDespite recent progress and these aspirational goals, actual roll-out at a global scale is just beginning, and considerable challenges remain unmet. The planning and organization of demonstration studies beyond MSM in the United States and the UK has been slow, and implementation-relevant information for both the general population and specific key populations is still lacking. While concerns about adherence and effectiveness (particularly among MSM) have abated, some other issues have emerged, including 1) the preferred ARV agent, although tenofovir\u2013emtricitabine is the only licensed agent at present; 2) potential of less frequent dosing, cost and sustainability (as much in lower-middle income as in higher-income countries); and 3) a long list of implementation questions that vary by setting and target population. The latter may make monitoring of impact and outcome more complex.NEMUS, in collaboration with UNAIDS, has followed its first supplement on PrEP beyond clinical trials with thiet al. [A paper by C\u00e1ceres et al. discusseet al. [The contribution by McGillen et al. examineset al. [Also focusing on sub-Saharan Africa, Cowan et al. state thet al. [A paper by Zablotska-Manos et al. discusseet al. [Ravasi et al. discuss et al. [\u00ae is a major barrier to PrEP implementation, together with inadequate health systems and a weakening civil society.The final regional paper by McCormack et al. focuses et al. [This special issue includes three papers focused on specific populations. One contribution by Sevelius et al. focuses et al. [\u00ae as daily PrEP in adolescents. A number of general and unique challenges have arisen in this age group. Prime among these is adherence to daily medication, which is particularly challenging among younger populations, but other individual level barriers and structural challenges are also described.Hosek et al. focus onIn turn, Coleman and McLean provide Finally, Cairns and Race present Given the need presented in this series and the promise and potential of the clinical trials and recommendations from WHO, it is hoped that we will see an impressive expansion of combination prevention including PrEP in the next 2\u20135 years. With just 14 short years before the UNAIDS target to end AIDS by 2030, unless we urgently, actively and extensively deploy all of the effective interventions at our disposal, this goal will slip away from our grasp. No one should be left behind ."} {"text": "Pediatric femoral shaft fractures account for less than 2% of all fractures in children. 
However, these are the most common pediatric fractures necessitating hospitalization and are associated with prolonged hospital stay, prolonged immobilization and impose a significant burden on the healthcare system as well as caregivers. In this paper, the authors present a comprehensive review of epidemiology, aetiology, classification and managemement options of pediatric femoral shaft fractures. Pediatric femoral shaft fractures are uncommon, constituting less than 2% of all fractures in children; yet they are a significant burden on healthcare systems and families as they are the most common fractures requiring hospitalization in children , 2. Theset al. ..1) [44].External fixator application is associated with complications manifold -48. The Fixator removal is usually done after 3-4 months when bridging callus is noted in at least 3 of the 4 cortices on AP and lateral views. An alternative strategy (\u201cportable traction\u201d) is to remove fixator at around 6-8 weeks when early callus is noted and to apply a walking spica. This method minimises stress shielding and allows pin tracts to ossify with the spica acting as a protective splint .ii)Developed by the Nancy group in France , 59 in t2) [et al. [The preferred technique for insertion of these nails is a retrograde technique with 2 small incisions just above the distal femoral physis . Antegra2) . The dia2) . During 2) , 60, 63.2) , 60, 63. [et al. observedComplications include excessive shortening which leads to nail protrusion and limb length discrepancy. The most common complication is pain or skin irritation at the nail insertion site caused by a prominent nail end , 66. HigAntegrade vs retrograde insertion of flexible nails\u2022 et al. [Frick et al. conducteet al. .et al. [et al. demonstrated that given satisfactory cortical starting points in the distal fragment, retrograde insertion provides greater stability [Mehlman et al. conductetability .Steel vs titanium flexible nails\u2022 et al. [et al. [Wall et al. compared [et al. it was n [et al. .Flexible interlocked nailing\u2022 et al. [Linhart and Roposch describeet al. reportedet al. .et al. [Ellis et al. conducteet al. .3), which has reduced the chances of osteonecrosis, the use of rigid intramedullary nails in adolescents is on the rise again [Rigid intramedullary nails initially fell out of favour compared to flexible intramedullary nails as these nails had a piriformis entry and hence were associated with avascular necrosis (AVN) of the femur head and with injuries to the growth plate leading to growth arrest . Howeverse again , 87.Current literature suggests that rigid intramedullary nail with a trochanteric entry point is the preferred mode of fixation of shaft femur fractures in adolescents . Howeveriii)Pediatric trauma surgeons have largely moved away from the traditional open reduction and compression plating to the more modern submuscular bridge plating which offers stability without disturbing the vascularity of the fracture fragments hence leading to early union.4) [Submuscular bridge plating provides excellent stability; it is especially useful in the management of proximal/distal fractures that are not suitable for IM nailing/external fixation , 94. Thi4) -100.Femoral shaft fractures in children are amongst the commonest fractures necessitating hospitalization. The major determinant of treatment modality is age of the child. Fractures in children below 6 years of age can be managed non-operatively with excellent outcomes. 
Elastic stable intramedullary nails are preferred for children < 11 years of age or those with body weight < 50 kg with a length stable transverse or short oblique fracture. Length unstable fractures and fractures at the proximal ends of femur may be managed by submuscular plating or external fixation. For children above 11 years of age or those with body weight > 50 kg, rigid intramedullary nailing or submuscular plating is preferred. Piriformis fossa entry nails should be avoided to prevent the complication of avascular necrosis of femoral head.The authors confirm that this article content has no conflicts of interest."} {"text": "The aim of this study was to compare the shear bond strength of a self-adhering flowable composite (SAFC) and resin-modified glass ionomer (RMGI) to mineral trioxide aggregate (MTA) and calcium-enriched mixture (CEM) cement. A total of 72 acrylic blocks with a central hole (4 mm in diameter and 2 mm in depth) were prepared. The holes were filled with MTA (sub group A) and CEM cement. The specimens of both restorative materials were divided into 6 groups; overall there were 12 groups. In groups 1 and 4, SAFC was used without bonding while in groups 2 and 5 SAFC was used with bonding agent. In all these groups the material was placed into the plastic mold and light cured. In groups 3 and 6, after surface conditioning with poly acrylic acid and rinsing, RMGI was placed into the mold and photo polymerized. After 24 h, the shear bond strength values were measured and fracture patterns were examined by a stereomicroscope. Data were analyzed using the two-way ANOVA and student\u2019s t-test.P=0.008 and 0.00, respectively). In both materials, RMGI had the lowest shear bond strength values (2.25 Mpa in MTA and 1.32 Mpa in CEM). The mean shear bond strength were significantly higher in MTA specimen than CEM cement (P=0.003). There was a significant differences between fracture patterns among groups (P=0.001). Most failures were adhesive/mix in MTA specimen but in CEM cement groups the cohesive failures were observed in most of the samples. The use of bonding agent significantly increased the shear bond strength of FC to MTA and CEM cement (The bond strength of self-adhering flowable composite resin to MTA and CEM cement was higher than RMGI which was improved after the additional application of adhesive. Following traumatic injuries or restorative procedure, normal dental pulp exposure may occur inadvertently. In this situation, vital pulp therapy (VPT) is performed by placing the direct pulp capping biocompatible materials to maintain the health and vitality of dental pulp . It is oCalcium-enriched mixture (CEM) cement is suggested biocompatible pulp capping material produced to overcome the drawbacks of MTA . These bet al. [et al. [et al. [After VPT a definite leakage free restoration should be used. Different studies showed that acid etching before composite build up and nature of solvent in the adhesives may influence the mechanical properties and bond strength of pulp covering agent to composite resin , 9. Hashet al. also demet al. . Oskoee [et al. and Ajam [et al. showed tet al. [Vertise Flow is a self-adhering flowable composite which combines an all-in-one adhesive system and a flowable composite for a step-less system . The preet al. revealedThere is no information on the adhesion of SAFC to MTA or CEM cement. 
Therefore, the purposes of this study was to determine the shear bond strength of SAFC alone and in conjunction with a self-etch adhesive to MTA and CEM cement and also compare it with RMGI cement. The null hypothesis was that there would be no differences between the bond strength values in different study groups.n=36). RootMTA and CEM were mixed according to their manufacturers\u2019 instructions. The pastes were poured into the holes in the center of acrylic blocks, flattened with a spatula, covered with a moist cotton pellet and temporary filling materials . Then, the specimens were stored for 72 h at 37\u00b0C temperature and 100% humidity. After storage, the temporary materials and cotton pellets were removed without interfering with the surfaces of the pulp capping materials. Then, the specimens of each material were divided into the six groups (n=12). In this study, 72 acrylic blocks were with a central hole measuring 4 mm in diameter and 2 mm in depth. The samples were divided into two groups for 40 sec. In groups 1 and 4, after air drying of the specimen, the SAFC was actively applied directly over MTA or CEM with no adhesive. The SAFC was placed into the plastic mold with 3 mm diameter and 2 mm height in one increment. Then the SAFC was light cured with light density 600 mw/cm In groups 2 and 5, after air drying of the specimen, Opti-Bond all-in-one adhesive was actively applied on the surfaces with a brush and after air thinning, was light cured for 20sec. Then SAFC was subsequently applied to the conditioned surfaces, similar to the previous groups.In groups 3 and 6, the surfaces of pulp capping agents were conditioned with 10% poly acrylic acid for 20 sec, then rinsed for 30 sec and air dried. After that, the powder and liquid of RMGI was mixed according to the manufacturer\u2019s instructions; the paste was placed into the plastic mold and light cured for 40 sec similar to above mentioned groups.\u00b0C temperature for 24 h. After that, the plastic molds were carefully removed from the specimen before the shear bond strength test.The prepared specimens were kept in 100% relative humidity at 37The specimens were mounted in the universal testing machine and shear forces were applied at the crosshead speed of 0.5 mm/min until the fracture occurred. The maximum loads at failure were recorded in Newton and were then converted into the Mps.The fractured specimens were observed under a stereomicroscope under 40\u00d7 magnification to determine the failure mode. The failure types were categorize as adhesive , cohesive (any deficiency in the pulp capping agent surface) and mixed (combination the adhesive residue and deficiency in the pulp capping surfaces).The two-way ANOVA analysis was applied to determine the interaction effect between the experimental groups and if it was applicable, the post hoc comparison and t-test was used to compare the shear bond strength results among groups. Also, the chi-square and Fissure\u2019s exact test were used to compare the fracture surface pattern between groups. The level of significance was set at 0.05.P=0.008, 0.000, respectively). In both pulp capping agents, RMGI had the lowest shear bond strength values (P=0.05). For all intermediate materials in this study, the mean shear bond strength values were significantly higher in MTA than CEM; however, these difference was not obvious in RMGI than other intermediate materials (P=0.003). There were a significant difference between fracture patterns between groups (P=0.001). 
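The failure loads described above were recorded in newtons and then converted to shear bond strength in MPa. Below is a minimal sketch of that conversion, assuming the bonded area is the circular cross-section of the 3-mm-diameter composite mold; the 20 N load in the example is hypothetical and is not a value from the study.

```python
import math

def shear_bond_strength_mpa(load_n: float, bond_diameter_mm: float = 3.0) -> float:
    """Convert a failure load (N) to shear bond strength (MPa).

    Assumes the bonded interface is the circular cross-section of the
    composite mold (3 mm diameter here), so area = pi * (d/2)^2 in mm^2
    and 1 N/mm^2 = 1 MPa.
    """
    area_mm2 = math.pi * (bond_diameter_mm / 2.0) ** 2  # ~7.07 mm^2 for d = 3 mm
    return load_n / area_mm2

# Hypothetical example: a 20 N failure load over a 3 mm bond site is roughly 2.8 MPa.
print(round(shear_bond_strength_mpa(20.0), 2))
```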
Most failures in MTA specimens were adhesive/mix, but in CEM cement specimens, the cohesive failure was the predominant mode of fracture except for RMGI materials which adhesive failures were also observed. All groups showed significant differences with CEM/SAFC with no bonding ( bonding .The results of our study showed that SAFC (with or without application of adhesive) had superior bond strength compared to RMGI, either over MTA or CEM cement. The null hypothesis was rejected, because the bond strength changed in relation to adhesive application and filling materials. More researches have shown that, both MTA and CEM cement can be used effectively as pulp capping agents because they have the ability to stimulate the differentiation of dental pulp stem cells to odontoblast like cells and ultimately initiate the formation of dentinal bridge which is thicker, less porous along with less pulp inflammation than the calcium hydroxide material , 16, 17.In this study, we used SAFC as an intermediate material before permanent composite restoration over Root MTA or CEM cement. The SAFC used in this experiments a novel flowable resin based composite that eliminates the etching, priming and bonding steps in order to simplify the adhesive procedures to dentin and enamel . et al. [et al. [et al. [vs. self-etch) did not have a significant effect on the shear bond strength of composite resin to pulp capping biomaterials [et al. [The results of our study demonstrated that surface treatment with an all-in-one adhesive before SAFC significantly increased the shear bond strength of SAFC to Root MTA and CEM cement. It was in consistent with the results of the study by Tuloglu et al. . Also, t [et al. showed t [et al. . However [et al. indicateaterials , 22. Kay [et al. , evaluat [et al. , 23. In [et al. . In our [et al. . Thus, iet al. [As mentioned previously, the additional application of self-etching bonding would enhance the shear bond strength of SAFC to MTA and CEM cement. However, the results were in argue with Yesilyurk et al. who concet al. . In our et al. . This faet al. .et al. [et al. [In this study, SAFC had significantly higher mean of shear bond strength values than RMGI for both MTA and CEM cement. This results were contrary to the results reported by Ajami et al. that sho [et al. revealedet al. [et al. [According to the manufacturers, SAFC used in this study consists of phosphate functional monomer (GPDM) which may interact with the calcium ions . Both MTet al. revealed [et al. reported [et al. , 31. In [et al. . Bond st [et al. . Because [et al. . Furtherin vitro study, it was found that the application of a self-etch adhesive improves the bond quality of Vertise Flow to MTA and CEM cement. Moreover, the bond strength of Vertise Flow was significantly greater than resin-modified-glass ionomer in both materials. All materials had significantly higher bond strength in MTA group than CEM cement after 72 h setting.Within the limitation of this"} {"text": "The microbial biofilm is an important factor for human infection. Finding effective antimicrobial strategies should be considered for decreasing antimicrobial resistance and controlling the infectious diseases. Treatment of infected canal systems may not be able to remove all bacteria and so bacterial persistence after treatment may occur. Application of antibacterial nanoparticles may be a potential strategy to improve the elimination of bacteria from the canal. 
Furthermore, mechanism of action and applications of photodynamic therapy and Photon-induced photoacoustic streaming (PIPS) and GentleWave system was reviewed. Different anatomy and complexities of the canal, in addition to dentin composition, are key challenges for effective disinfection in endodontics . AntimicIn order to overcome the limitations of ordinary root canal irrigants and medicaments, using nanoparticles to disinfect the canal system has been proposed.Antibacterial nanoparticles (NPs) Nanomaterial denotes a natural or manufactured material containing unbound particles in which half or more of the particles in number and size is in the size range of 1-100 nm . These mThe electrostatic interaction between negatively charged bacterial cells and positively charged NPs, and also accumulation of increased number of NPs on the cell membrane of the bacteria have been associated with the loss of membrane permeability and unsuitable membrane function . Enterococcus faecalis biofilm comparing camphorated phenol and chlorhexidine (CHX) gluconate [in vitro study showed that nanosilver gel is not efficient enough against Enterococcus faecalis; however, triple antibiotic paste and CHX gel showed better antibacterial activity than calcium hydroxide (CH) and so can be used as an alternative medicaments in endodontic treatment [et al. [Enterococcus faecalis and showed that silver NPs with CH has a significant inhibitory effect on the biofilm of Enterococcus faecalis. Antibacterial NPs show a broad spectrum of antimicrobial activity. According to Vier and Figueiredo , 10 metaluconate . An in vreatment . Zhang e [et al. assessedet al. [et al. [in vitro study showed that adding silver NPs to MTA and CEM increased their antibacterial activity [et al. [Enterococcus faecalis and concluded that it may exhibit strong antibacterial activity against planktonic Enterococcus faecalis and better residual inhibition effects against Enterococcus faecalis growth on dentin than CH.Barreras et al. indicate [et al. showed tactivity . Fan et [et al. investigAntimicrobial photodynamic therapy (APDT) APDT is a two-step procedure that involves the application of a photosensitizer, followed by light illumination of the sensitized tissues, which would generate a toxic photochemistry on target cells, leading to killing of microorganisms -20. Nowaet al. [et al. [et al. [Enterococcus faecalis with the optical fiber is better than when the laser light is applied directed at the access cavity. APDT may be combined with the usual mechanical instrumentation and chemical antimicrobials , 22. Garet al. compared [et al. evaluate [et al. also shoet al. [Enterococcus faecalis. They concluded that NaOCl was the most effective in Enterococcus faecalis elimination, while Er:YAG laser also resulted in great decrease in viable counts. The use of both commercial APDT systems resulted in a weak reduction in the number of bacteria. The worth option was Nd:YAG irradiation.Meire et al. comparedEnterococcus faecalis. The volume of damage on these targets is influenced by the photosensitizer solvent used during APDT. Soukos et al. [Enterococcus faecalis (53%). William et al. [Peptostreptococcus micros, Streptococcus intermedius, Fusobacterium nucleatum and Prevotella intermedia, and concluded that PAD killed these bacteria at statistically significant levels compared to controls.According to George and Kishen , 28, APDs et al. conductem et al. measuredet al. [Effect of PAD on bacterial endotoxins has also been studied. 
Endotoxin, a part of the cell wall of gram-negative bacteria, is composed of lipids, polysaccharides, and proteins and is referred to as lipopolysaccharide -33. Shreet al. evaluatePhoton-induced photoacoustic streaming (PIPS)PIPS is based on the radial firing stripped tip with laser impulses of subablative energies of 20 mJ at 15 Hz for an average power of 0.3W at 50 \u03bcs impulses. These impulses induce interaction of water molecules with peak powers of 400W. This creates successive shock waves leading to formation of a powerful streaming of the antibacterial fluid located inside the canal, with no temperature rising , 36.Unlike the conventional laser applications, the unique tapered PIPS tip is not mandatory to be placed inside the canal itself but rather in the pulp chamber only. This can reduce the need for using larger instruments to create larger canals so that irrigation solutions used during treatment can effectively reach to the apical part of the canal and also canal ramifications. This procedure can effectively remove both vital and nonvital tissues, kill bacteria, and disinfect dentin tubules , 38.et al. [et al. [via PIPS and 6% NaOCl has great effect in inhibiting Enterococcus faecalis. Peters et al. showed t [et al. concludeet al. [in vitro biofilm and showed an improved cleaning of the infected dentin on PIPS groups when compared to the PUI group. The extraordinary result from this study was the fact PIPS tip was placed 22 mm away from the target area, while sonic, ultrasonic, and passive irrigation were made at the exact target area. Jaramillo et al. [versus 100% disinfection on PIPS, with a total of 1 min of irrigation with the same solution. Alshahrani et al. [Ordinola et al. evaluateo et al. showed 8i et al. also shoin vitro study, Zhu et al. [et al. [In an u et al. compared [et al. showed tGentlewave irrigation Gentlewave (GW) system aims to clean the root canal through generation of different physiochemical mechanisms including a broad spectrum of sound waves. Multisonic waves are initiated at the tip of GentleWave\u2122 handpiece, which is positioned inside the pulp chamber . It deliet al. [et al. [et al. [According to Haapasalo et al. the GW Set al. . Accordi [et al. , the GW [et al. . In a mu [et al. reportedRecent advances in root canal disinfection using new technology and on the basis of recent studies may improve the ability to disinfect the root canal system. However, conventional methods are still helpful for obtaining good prognosis."} {"text": "In addition, Parkinson\u2019s disease comprised only 3.2% of our cohort of 1363 post mortem brains [et al. [Substantia nigra, despite not being directly sampled. If the mtDNA variants we detected were contributing to the pathogenesis of neurodegeneration, we would have expected to see a difference between diseases cases and controls. If mtDNA mutations are contributing to cell loss, then one might even expect to see a reduction in the mutation burden in affected tissue - but this was not the case in our study (including DLB-PD, Fig. 3 & Supplementary Fig. 12 in Wei et al. [et al. [As Simon ur study was not m brains . However [et al. , and are [et al. 
As Simon et al. point out, they have shown that neurons containing mtDNA mutations are present in early and pre-clinical Lewy body disorders, thus suggesting that the accumulation of mtDNA mutations arises early in the Substantia nigra and is not just an innocent bystander in a toxic cellular environment at a particular phase of the disorder."}
The selected mycorrhizal inoculum of Glomus versiforme consisted of 15 isolated spores per milliliter and was provided by Qingdao Agricultural University, Shandong Province, China. It was derived from pot culture prepared with Trifolium repens L. grown in 1:9 sterilized soil-sand and contained colonized pieces of root, soil and spores.Experimental plants were one-year-old Red Fuji apple seedlings (3 in volume), each filled with 25 kg of brawn soil sterilized at 121\u00b0C for 2\u2030hours and its chemical properties were as followings: organic matter 7.64\u2030g kg\u22121, available nitrogen 24.36\u2030mg kg\u22121, available phosphorus 3.17 mg kg\u22121, available potassium 61.24\u2030mg kg\u22121, salinity 0.093\u2030 and pH 7.2 . In the transplantation process, in order to assure a rapid colonization the root systems of AM treatment were uniformly sprinkled with 100 ml Glomus versiforme inoculum. Counterparts of non-AM treatment were received volumetric sterilized soil-sand free of spores. After apple seedlings survived, the pots were watered with 1000 ml saline solution every 15 days throughout the experiment, fresh water was added as necessary to make up for losses by evaporation or transpiration. Determination was conducted at 30, 60 and 90 days after salt treatment.The experimental design consisted of four treatments with four salinity levels , each treatment involved in 10 non-AM inoculated and 10 AM inoculated plants and replicated six times. Apple seedlings were transplanted in plastic pots . Leaf pet al. and proet al. . Catalaet al. respecttion (Lu ). The ostion (Lu ). Leaf rtion (Lu ).LeafrelLSD (least significant difference) test at the 0.05 probability level.All experiments were repeated as indicated. Values presented are means. The effects of the treatments were tested by one-way analysis of variance (ANOVA). Means were compared between the treatments using the In non-mycorrhizal plants root length colonization rate was always zero, however, mycorrhizal plants presented a trend in increase in percent root colonization at zero, low (2\u2030), moderate (4\u2030) and high (6\u2030) salinity levels. Table Under no saline condition, leaf relative turgidities in non-mycorrhizal and mycorrhizal plants remained a comparative steady-state level from 53.75% to 54.56% throughout the experiment Figure A. As salThere were similar trends between non-mycorrhizal and mycorrhizal plants in increase in ascorbate peroxidase and catalase activities under different salt stress levels, nevertheless the increase rate was higher in mycorrhizal plants as compared to non-mycorrhizal plants Figure A, B. The- and Na+ in plant root, regardless of non-mycorrhizal and mycorrhizal plants, but the extent of which they raised was diverse. When apple trees were stressed under zero salt level, the Cl- and Na+ contents in non-mycorrhhizal and mycorrhizal roots ranged from about 0.08 to 0.09 g kg-1 at 30 and 60 days, but they increased to 0.1272 and 0.093 g kg-1 at 90 days, respectively. With an increasing salinity stress level and exposure time period to salinity stress, the increase rates of Cl- and Na+ concentration in non-mycorrhhizal plants were apparently higher than in mycorrhizal plants. Compared to non-mycorrhizal plants, K+ concentrations in mycorrhizal roots slightly increased under zero salt level and were obviously higher under moderate and high salinity stress. 
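Returning to the statistical analysis described above (one-way ANOVA followed by the LSD test at the 0.05 probability level), the sketch below shows one way such an analysis could be run. The turgidity values, group sizes and software choice are illustrative assumptions, not data or methods taken from the study, and LSD comparisons would normally be interpreted only when the overall F test is significant.

```python
import numpy as np
from scipy import stats

# Hypothetical leaf relative turgidity values (%) for the four salinity levels.
groups = {
    "0":  np.array([54.1, 53.8, 54.6, 53.9, 54.3, 54.0]),
    "2":  np.array([50.2, 49.8, 51.0, 50.5, 49.9, 50.7]),
    "4":  np.array([46.3, 45.9, 47.1, 46.6, 45.8, 46.9]),
    "6":  np.array([41.5, 40.9, 42.2, 41.8, 41.1, 42.0]),
}

# 1) Overall one-way ANOVA across the salinity levels.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# 2) Fisher's LSD: pooled within-group variance (MSE), then a critical
#    difference for each pair of equally sized groups.
data = list(groups.values())
n_per_group = len(data[0])
df_error = sum(len(g) for g in data) - len(data)
mse = sum(((g - g.mean()) ** 2).sum() for g in data) / df_error
lsd = stats.t.ppf(0.975, df_error) * np.sqrt(mse * 2.0 / n_per_group)
print(f"LSD (alpha = 0.05) = {lsd:.2f}")

names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        diff = abs(groups[names[i]].mean() - groups[names[j]].mean())
        flag = "significant" if diff > lsd else "n.s."
        print(f"{names[i]} vs {names[j]} per mille: |mean diff| = {diff:.2f} -> {flag}")
```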
Due to lower Na+ concentration and higher K+ concentration in mycorrhizal roots in comparison to non-mycorrhhizal roots, therefore ratios of K+ to Na+ ions in roots were higher than in non-mycorrhizal plants.Table Results in Figure The finding that AM fungi inoculation with apple seedlings could improve salt tolerance of host plants under stress condition is very important information for saline areas in China as well as in the world. These results evidence that, to reduce the unfavorable effects of salinity on apple plant growth, the use of AM inoculation ought to be considered as biological method to alleviate salinity stress.et al. [et al. [et al. [et al. [Root length colonization rate is an important indicator suggesting plants become mycorrhizal. In the present study root colonization rate was found to negatively correlate with salinity stress. This conclusion is consistent with previous report that soil salinity can affect AM fungi by slowing down root colonization, spore germination (Juniper and Abbott ). Howeveet al. ; Yamato [et al. ; Peng et [et al. ). The di [et al. ).+ and Cl- ions bind water that is needed to be mobilized by the plants . In they et al. ; Kothari [et al. ). In our [et al. ). The fu [et al. ).et al. [There is accumulating evidence that production of reactive oxygen species is a major damaging factor in plants exposed to different environmental stresses, including salinity . Result- and Na+, because NaCl was used to develop a salinity gradient. Root Cl- and Na+ concentrations were lower in mycorrhizal than in non-mycorrhizal plants under given salinity conditions, resulting from dilution effects due to growth enhancement by AM fungi colonization . In thii et al. ). The nui et al. ). The redhalter Maintenn et al. ). The coThe data in survived apple seedlings verifies that 4\u2030 and 6\u2030 salt concentration were the limits of salinity tolerance in non-mycorrhizal and mycorrhizal apple seedlings, respectively. This further confirmed that AM fungi increased salinity tolerance of mycorrhizal apple seedlings as compared to non-mycorrhizal counterparts exposed to salinity stress.+/Na+ ratio, the number of survival and induced antioxidant enzymes, but the extents of which they responded to salinity stress were not the completely same. 2\u2030 and 4\u2030 salt concentrations may be the upper thresholds of salinity tolerance in non-mycorrhizal and mycorrhizal apple plants, respectively.In conclusion, non-mycorrhizal and mycorrhizal apple seedlings existed similar performances with respect to salinity stress such as reduced leaf turgidity, osmotic potential, K"} {"text": "Nanoporous anodic alumina (NAA) has become one of the most promising nanomaterials in optical biosensing as a result of its unique physical and chemical properties. Many studies have demonstrated the outstanding capabilities of NAA for developing optical biosensors in combination with different optical techniques. These results reveal that NAA is a promising alternative to other widely explored nanoporous platforms, such as porous silicon. This review is aimed at reporting on the recent advances and current stage of development of NAA-based optical biosensing devices. The different optical detection techniques, principles and concepts are described in detail along with relevant examples of optical biosensing devices using NAA sensing platforms. Furthermore, we summarise the performance of these devices and provide a future perspective on this promising research field. 
Recently, the combination of different optical techniques with sensing platforms based on nanomaterials has received notorious attention. These nanostructures can confine, guide, transmit and enhance optical signals when they interact with light, as well as working as containers capable of accommodating and immobilising analyte molecules in a selective manner. This has enabled the design and development of ultra-sensitive optical biosensing systems for a broad range of analytes and applications . Amid th) [etc.) . Further) [etc.) .In this scenario, this review is aimed at compiling and summarising the most relevant advances and developments of optical biosensors based on NAA. First, we will introduce a brief description of the fabrication process of NAA and the different electrochemical approaches used to develop optical platforms based on NAA. Second, we will describe the most relevant and recent developments of optical biosensing systems based on NAA, providing a detailed description about their sensing principles, performances and practical applications. Finally, we will provide a prospective outlook on this field.etc.). Since the first decades of the 20th century, this electrochemical process has been intensively used for a broad range of industrial applications, including surface finishing, automobile engineering, machinery, corrosion protection, and so on [2O3)) featuring close-packed arrays of hexagonally arranged cells containing a cylindrical nanopore at the centre, which grows perpendicularly to the underlying aluminium substrate (2SO4), oxalic (H2C2O4) or phosphoric acids (H3PO4), in which both electrodes, the anode ) and cathode ), are immersed and dissolution throughout the anodisation process. First, aluminium oxide grows at the aluminium-alumina interface, due to the countermigration of ionic species through the oxide barrier layer. Second, Al2O3 is dissolved at the alumina-electrolyte interface. This electrochemical process can be described by the following reduction-oxidation equations:(i)Formation of alumina )(ii)Dissolution of alumina )(iii)Diffusion of aluminium cations (within oxide barrier layer (anode))(iv)Hydrogen evolution (electrolyte-cathode interface (cathode))NAA is produced by electrochemical anodisation of aluminium in different acid electrolytes evolve throughout the anodisation process. For this reason, the experimental anodic current efficiency is always lower than 100%.i.e., from 3 to 8 \u00b5m\u00b7h\u22121) and exponentially decreasing and fast in the HA regime . Pore geometry in NAA can be defined by such structural parameters as pore diameter (dp), pore length (Lp) and interpore distance (dint) or hard anodisation (HA) regimes ,12,13,14e (dint) a. These decades . In thati.e., Al3+ and O2\u2212) across the aluminium-alumina interface, the alumina barrier layer and the alumina-electrolyte interface [i.e., HO\u2212 and O2\u2212) from the bulk electrolyte to the reaction interface [i.e., the longer the pore length, the slower the mass transport of oxygen-containing anionic species from the bulk electrolyte to the reaction interface) . This phterface) b.2O3) contaminated with impurities incorporated into the NAA structure from the acid electrolyte during the anodisation process. The latter, however, is basically composed of pure Al2O3. Nevertheless, it is worth stressing that, as other studies have pointed out, the real chemical structure of NAA is composed of more than two chemical layers. For instance, Yamamoto et al. 
[i.e., close to the central pore) to the inner one .As for its chemical structure, NAA presents an onion-like structure distributed in a layered fashion . Previouo et al. analysedo et al. . We founi.e., pore diameter, pore length and interpore distance) and its chemical composition . Therefore, the fabrication conditions and the structural design of the NAA structure become crucial factors to produce optical sensing platforms based on NAA with enhanced capabilities for biosensing applications.The optical properties of NAA are directly related with its pore nanoporous structure [3PO4 0.4 M at 110 V and 10 \u00b0C for 15 min. Next, the acid electrolyte was replaced by an aqueous solution of H2C2O4 0.015 M, and the anodisation voltage was set to 137 V. The electrolyte temperature was kept at 0.5 \u00b0C during this anodisation step to prevent NAA from burning. The time length of this step was 2 min. Highly uniform periodic modulations of the pore diameter in NAA were obtained by repeating this procedure. Notice that, in this process, the interpore distance in both layers remains constant due to the anodisation voltages being set to yield the same interpore distance in the MA and HA conditions. This prevented NAA from pore branching. The pore diameter of each segment was established by the acid electrolyte and the anodisation voltage, while the length of each segment was controlled by the anodisation time, making it possible to modulate the pore diameter in depth. This work was the origin of a flood of studies about different electrochemical approaches to generate pore diameter modulations in NAA [Electrochemical approaches aimed at producing pore diameter modulations in NAA have been intensively researched due to the interesting optical properties of the resulting NAA structures. The most widespread method used to generate pore diameter modulations in NAA is to switch the anodisation regime from MA and HA in an alternating fashion a. Pore d latter) . In thiss in NAA ,21,22.etc.) in order to engineer the effective medium of NAA in depth [et al. [et al. [et al. [et al. [et al. [Multilayered NAA structures consist of stacks of layers of NAA featuring different levels of porosity ,31,32,33in depth ,31,32,33 [et al. reported [et al. demonstri.e., an integrated current passed through the system), the pore diameter in that segment is engineered by the pore widening time. This enables the structural engineering of the nanoporous structure of NAA in depth and the replication of nanostructures based on different materials with optical properties. For example, He et al. [et al. [et al. [et al. [et al. [i.e., linear cones, whorl-embedded cones, funnels, pencils, parabolas and trumpets) by the fabrication parameters within distances of several hundreds of nanometres. Therefore, any change in the effective medium inside the NAA matrix can be quantified. As a result of its outstanding capabilities, SPR is one of the most widespread analytical methods used to detect biological events . This technique is capable of monitoring any dynamic change in the effective medium of NAA in real time and in situ. Furthermore, a suitable design of the nanoporous structure of NAA makes it possible to confine and guide light and to minimise scattering losses, which are key parameters to develop optical systems with enhanced sensitivity [et al. [et al. [et al. [et al. [et al. 
[SPR is an optical technique based on the generation of surface plasmons by an evanescent electromagnetic wave, which takes place when light is shined on the surface of a prism coated with metal. Typically, this system is implemented in a Kretschmann configuration, where an NAA layer is grown by anodising a thin aluminium film grown on the prism . The plasitivity . Figure [et al. combined [et al. , who cha [et al. . In this [et al. used a S [et al. engineer [et al. deposite6 of the Raman signal from molecules adsorbed onto the surface of a metal substrate with nanometric roughness, known as \u201chot spots\u201d or \u201chot junctions\u201d. SERS is an ultra-sensitive technique able to detect, identify and quantify trace amounts of analyte molecules. NAA has been demonstrated to be an excellent platform to develop SERS substrates, due to its organised nanostructure and cost-competitive and scalable fabrication process. Typically, SERS-NAA platforms are fabricated by sputtering or evaporating metals, basically gold and silver, on the top or bottom surface of NAA substrates, although other options, such as embedded nanowires, decorative nanoparticles and metallic membranes, have been explored, too. As a result of its versatility in terms of geometry, SERS-NAA platforms can be broadly engineered by the anodisation parameters and the metal deposition conditions to provide optimised SERS signals for specific sensing applications. SERS-NAA platforms featuring gold or silver nanoparticles on their surface have been demonstrated to provide strong Raman signal enhancements of up to the order of 106.Surface-enhanced Raman scattering (SERS) is a surface-sensitive optical technique that enhances Raman scattering by molecules adsorbed on rough metal surfaces or nanostructures. The most likely mechanism that explains the enhancement effect of SERS is associated with the excitation of localised surface plasmons. Nevertheless, the real mechanism of SERS is still a matter of debate. The SERS effect is translated into an enhancement of the order of 10et al. [et al. [3)2+. The size and density of silver nanoparticles was tuned to optimise SERS signals. The sensing performance of this SERS-NAA platform was assessed by detecting p-aminothiophenol. A point-by-point SERS mapping of this analyte on the NAA platform was acquired to characterise the homogeneity of the adsorbed molecules. Ji et al. [et al. [i.e., the distance between adjacent nanowires in the NAA matrix) was assessed when detecting 4-aminobenzenethiol. They found that the SERS intensity signal can be increased over 200-fold when the interwire gap distance is reduced from 35 to 10 nm. Another type of SERS-NAA platform was developed by Valleman et al. [et al. [For instance, Ko et al. decorate [et al. used sili et al. used a s [et al. grew siln et al. , who car [et al. decorate+ centres associated with ionised oxygen vacancies in the amorphous structure of NAA [et al. [Although the photoluminescence (PL) properties of NAA were reported several decades ago and extensive studies have been carried out since then, their origin is still in doubt ,70,71. Ie of NAA ,73,74. P [et al. used PL-et al. [et al. [i.e., by changing the effective medium). These PL-NAA platforms have been tested with success for detecting biological substances, such as organic dyes, enzymes and glucose ; where neff is the effective refractive index of NAA; Lp is the pore length; and m is the order of the RIfS fringe; the maximum of which is located at the wavelength, \u03bb. 
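The relation that ties these quantities together appears to have been lost from the passage above, which retains only the variable definitions. The standard Fabry-Perot condition used in RIfS on porous thin films, written under the usual assumption of normal light incidence, is

$$ m\,\lambda = 2\,n_{\mathrm{eff}}\,L_{p} $$

so the product $2\,n_{\mathrm{eff}}\,L_{p}$ is the effective optical thickness (EOT) of the film, and any change of the medium filling the pores shifts the positions of the interference fringes accordingly.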
These RIfS peaks are the base of label-free optical biosensors capable of monitoring in real time and in situ binding events of biomolecules. These can be detected through changes in the effective optical thickness of NAA, which are translated into shifts in the RIfS peak positions [et al. [et al. [et al. [Reflectometric interference spectroscopy (RIfS) is a highly sensitive optical sensing technique based on the interaction of white light with thin films . In the ositions ,79. Thesositions ,82. For [et al. demonstr [et al. reported [et al. develope [et al. . This de [et al. .et al. [et al. [et al. [et al. [et al. [i.e., refractive index). The obtained results were validated by a theoretical model , demonstrating that the control over the nanoporous structure makes it possible to optimise optical signals in RIfS for biosensing applications.Dronov et al. develope [et al. , who com [et al. have pre [et al. demonstr [et al. demonstret al. [in vitro drug release techniques performed under static conditions.Notice that RIfS-NAA platforms not only can be used to monitor binding events between analyte molecules and surface functional groups, but also to study the release of molecules from NAA platforms. Kumeria et al. used thietc.) make them an outstanding and promising alternative to widely used porous silicon platforms. NAA is a highly attractive nanomaterial for developing a broad range of optical biosensing systems for clinical, industrial and environmental analyses. NAA can be structurally engineered to provide complex nanostructures with optimised optical properties. Furthermore, this material can be modified with desired functionalities to endow optical biosensors with high selectivity towards target analytes. It is expected that further development and progress in structural and chemical modifications of NAA will enable the development of innovative optical sensing systems based on NAA, featuring up-to-the-minute capabilities. Finally, this review has demonstrated that there is still an excellent opportunity for further advances and developments of optical sensing systems based on NAA, in particular through further miniaturisation and integration into lab-on-chip systems for real-life applications.This review has summarised recent progress in the use of NAA platforms to develop optical biosensing systems. We have presented in detail the fundamental aspects of each optical technique used in combination with these nanoporous platforms and provided relevant concepts and examples of devices based on different optical sensing principles. As these studies have demonstrated, the key features of NAA platforms (e.g., cost-competitive and scalable fabrication process, well-defined and versatile nanoporous structure, chemical and physical stability, optical activity,"} {"text": "Copy number variation (CNV) is a phenomenon in which sections of the genome, ranging from one kilo base pair (Kb) to several million base pairs (Mb), are repeated and the number of repeats vary between the individuals in a population. It is an important source of genetic variation in an individual which is now being utilized rather than single nucleotide polymorphisms (SNPs), as it covers the more genomic region. CNVs alter the gene expression and change the phenotype of an individual due to deletion and duplication of genes in the copy number variation regions (CNVRs). Earlier, researchers extensively utilized SNPs as the main source of genetic variation. 
But now, the focus is on identification of CNVs associated with complex traits. With the recent advances and reduction in the cost of sequencing, arrays are developed for genotyping which cover the maximum number of SNPs at a time that can be used for detection of CNVRs and underlying quantitative trait loci (QTL) for the complex traits to accelerate genetic improvement. CNV studies are also being carried out to understand the evolutionary mechanism in the domestication of livestock and their adaptation to the different environmental conditions. The main aim of the study is to review the available data on CNV and its role in genetic variation among the livestock. The copy number variants (CNVs) are a structural variation in the genome of an individual in the form of losses or gains of DNA fragments. CNV is an important source of genetic and phenotypic variation . Union oNon-allelic homologous recombination (NAHR), non-homologous end-joining (NHEJ), fork stalling and template switching (FoSTeS), and L1-mediated retro transposition are some of the mechanisms which generate rearrangements in the genome and possibly account for the majority of CNV formation ,6. NAHR SNP arrays are being used normally for CNV detection and analysis in humans because of its availability and economic feasibility . In geneIt is a Hidden Markov Model (HMM) algorithm which integrates multiple parameters such as LRR, BAF, the population frequency of the B allele (PFB) of SNPs, the distance between neighboring SNPs and the allele frequency of SNPs -27. It iCnvPartition is based on a different proprietary sliding window approach which detects CNVs by processing LRR and BAF. Only those homozygous deletion events segregating in different animals were reported by this algorithm due to concern quality calls .The cn.MOPS (Mixture of PoissonS) algorithm is based on the Bayesian approach for the detection of CNV in multiple samples for NGS data. It decomposes read variations across multiple samples into integer copy numbers and noise by its mixture components and Poisson distributions, respectively. The advantages of using this method are- it identifies overlapping sequences and estimates allele-specific copy numbers .QuantiSNP uses different HMMs unlike PennCNV. QuantiSNP uses both LRR and BAF frequency independently whereas in pennCNV treat them as combined. It uses a fixed rate of heterozygosity for each SNP .It is a python package for CNV detection on whole exome sequencing data from amplicon-based enrichment technologies. This program uses SDe termed as experimental variability, in the LRR distribution .et al. [taurine cattle is higher than the overlap between taurine and indicine cattle. Largest CNV diversity was reported among the zebu cattle [Upadhyay et al. using thet al. ,35,36. Iu cattle .et al. [indicine and African taurine cattle breeds than in European taurine using Vst for population differentiation which indicates the breed divergence and population history. Pezer et al. [et al. [Recent studies reported that CNVs evolved 2.5 folds faster than SNPs and helped to promote a better adaptation in different environments . Liu et et al. reportedr et al. suggeste [et al. in their [et al. . Differeet al. [indicine than in African groups and taurine breeds. This observation may suggest the independent domestication events of cattle in Europe, Africa, and Southeast Asia [et al. [et al. [et al. [Hou et al. reportedast Asia ,41. Hou [et al. using ar [et al. identifi [et al. . Bae et [et al. also ide [et al. 
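Referring back to the SNP-array signals on which the CNV-calling algorithms above operate (LRR and BAF), the sketch below illustrates how these two quantities are derived and how they behave under different copy numbers. The intensities, thresholds and per-probe decision rule are illustrative assumptions only; real callers such as PennCNV or QuantiSNP combine many probes in a hidden Markov model rather than classifying single probes.

```python
import numpy as np

def log_r_ratio(r_observed: np.ndarray, r_expected: np.ndarray) -> np.ndarray:
    """LRR = log2(observed total intensity / intensity expected for two copies)."""
    return np.log2(r_observed / r_expected)

def b_allele_freq(b_intensity: np.ndarray, a_intensity: np.ndarray) -> np.ndarray:
    """BAF approximated as the fraction of total signal coming from the B allele."""
    return b_intensity / (a_intensity + b_intensity)

def naive_cn_call(lrr: float, baf: float) -> str:
    """Toy per-probe interpretation of LRR/BAF patterns.

    Expected patterns: deletion -> low LRR with BAF near 0 or 1 (no heterozygotes);
    duplication -> raised LRR with BAF bands near 0, 1/3, 2/3 and 1.
    """
    if lrr < -0.5 and (baf < 0.15 or baf > 0.85):
        return "candidate deletion (CN = 1)"
    if lrr > 0.3 and min(abs(baf - 1 / 3), abs(baf - 2 / 3)) < 0.1:
        return "candidate duplication (CN = 3)"
    return "normal (CN = 2)"

# Illustrative probe: total intensity at half the expected level, homozygous BAF.
lrr = log_r_ratio(np.array([0.5]), np.array([1.0]))[0]
baf = b_allele_freq(np.array([0.02]), np.array([0.98]))[0]
print(naive_cn_call(lrr, baf))
```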
-37,42-46 [et al. ,16,47-52et al. [BTG3, PTGS1, and PSPH) which are involved in the fetal muscle development, prostaglandin (PG) synthesis, and bone color. Ma et al. [et al. [Yang et al. identifia et al. identifi [et al. using a [et al. .et al. [Fontanesi et al. identifiet al. . Differeet al. .et al. [Copy number variants account for about 1-3% of the horse genome and mostly of intragenic than those located in intergenic regions . Ghosh eet al. using 40et al. [Chicken has a unique genome arrangement due to the presence of micro- and macro-chromosome . Griffinet al. first stet al. ,58. Receet al. , 59-66.et al. [CATHL4) in the Nellore cattle sample, but these genes were found only in a single copy in human and mice. CNV overlapping with KIT gene was found to be associated with color-sidedness in English Longhorn cattle [et al. [MTHFSD and GTF2I in the CNVRs. Yoshida et al. [et al. [CIITA gene that showed nematode resistance in Angus cattle. MTHFSD gene covering the CNVR1206 found to be associated with milk protein yield in Spanish HF cattle [et al. [MSH4 gene was found to be associated with impaired gamete formations in laboratory mice and recombination rate in cattle [et al. [Bickhart et al. reportedn cattle . Severaln cattle . Upadhya [et al. found gea et al. found th [et al. found thF cattle . Reyer e [et al. found GTn cattle , 70, 71. [et al. suggesteASIP) gene, the itchy E3 ubiquitin protein ligase homolog (mouse) (ITCH), and the adenosylhomocysteinase (AHCY) loci [et al. [ORAOV1 gene in Rhodesian and Thai Ridgeback dogs which is responsible for characteristic dorsal hair ridge. A different pattern of white coat color was reported due to the duplication of the KIT gene in pig and in some cattle breeds [The dominant white coat in sheep is associated with duplication of 190 kb genomic fragment which encompasses three genes viz. the agouti signaling protein (CY) loci . Hillber [et al. identifie breeds .et al. [et al. [BTG3, PTGS1, and PSPH) in diverse sheep breeds which were involved in fetal muscle development, PG synthesis, and bone color.GO analysis and functional studies in sheep reported that many CNVRs are associated with genes related to environmental response and biological functions . Liu et et al. indicateet al. . Yang et [et al. found seAKR1C gene may be a possible cause of disorders of sexual development such as male-pseudohermaphroditism due to its role in testicular androgen production and sexual development [et al. [CSMD1 gene which encodes for a transmembrane and a candidate tumor suppressor protein [A homozygous deletion in the elopment . GO analelopment . Ghosh e [et al. confirme protein .SOX5 transcription factor interferes with SOX5 expression, and the regulation of gene expression is critical during cell differentiation for the development of the comb and wattles [et al. [PRLR and SPEF2 genes [et al. [SOX6 gene in chicken also has a similar function as reported in many species for the proliferation and differentiation of skeletal muscle cells. SOX6 gene expression is positively correlated with the number of CNV for CNP13 region in the chicken genome.Duplication of segment of DNA at intron 1 non-coding region of the wattles . Luo et [et al. suggesteF2 genes . Lin et [et al. suggesteRecent studies for CNV detection have enabled the construction of CNV map which in turn helps in identification of CNVs associated with economically important traits. 
With the advancement in the techniques and reduced cost of sequencing, researchers are now focusing on the CNV study for detecting the genetic variations, as CNV shows more inclusions and complex genetic variants than SNP sites. Current research for the identification of the CNV regions (CNVRs) throughout the genome in domestic species will change the concept of breeding for genetic improvement. Development of robust and convenient CNV detection techniques could further facilitate unveiling of genetic secrets for molecular breeding of poultry and other farm animals.MP conceptualized and designed the manuscript. BV, SC, and MP prepared manuscript draft and reviewed. CR contributed in literature collection. MP, DRP, and KA edited and made critical comments on the manuscript. MP and BV made critical comments on the revised manuscript and edited for final submission. All authors read and approved the final version."} {"text": "Genetic studies in autism have pinpointed a heterogeneous group of loci and genes. Further, environment may be an additional factor conferring susceptibility to autism. Transcriptome studies investigate quantitative differences in gene expression between patient-derived tissues and control. These studies may pinpoint genes relevant to pathophysiology yet circumvent the need to understand genetic architecture or gene-by-environment interactions leading to disease.We conducted alternate gene set enrichment analyses using differentially expressed genes from a previously published RNA-seq study of post-mortem autism cerebral cortex. We used three previously published microarray datasets for validation and one of the microarray datasets for additional differential expression analysis. The RNA-seq study used 26 autism and 33 control brains in differential gene expression analysis, and the largest microarray dataset contained 15 autism and 16 control post-mortem brains.While performing a gene set enrichment analysis of genes differentially expressed in the RNA-seq study, we discovered that genes associated with mitochondrial function were downregulated in autism cerebral cortex, as compared to control. These genes were correlated with genes related to synaptic function. We validated these findings across the multiple microarray datasets. We also did separate differential expression and gene set enrichment analyses to confirm the importance of the mitochondrial pathway among downregulated genes in post-mortem autism cerebral cortex.We found that genes related to mitochondrial function were differentially expressed in autism cerebral cortex and correlated with genes related to synaptic transmission. Our principal findings replicate across all datasets investigated. Further, these findings may potentially replicate in other diseases, such as in schizophrenia.The online version of this article (10.1186/s11689-018-9237-x) contains supplementary material, which is available to authorized users. Autism spectrum disorders (ASDs) constitute a heterogeneous group of neurodevelopmental disorders characterized by impaired social interaction, disrupted development of communication skills, and repetitive behaviors . Over anTranscriptome studies in autism have investigated quantitative differences in gene expression between the mRNA samples extracted from post-mortem tissue from patient brains as compared to control brains \u201319. 
One We have conducted an alternative analysis of the transcriptome data using differentially expressed genes from an RNA-seq dataset and thren\u2009=\u200929 control samples, n\u2009=\u200927 autism samples). The Voineagu et al. dataset also contained samples from the cerebellum, which were not used in our study. The two other microarray datasets were from Chow et al. [n\u2009=\u20096 control samples, n\u2009=\u20096 autism samples) [We analyzed gene expression in autism and control cerebral cortex using genes from Parikshak et al., an RNA-seq study , and micsamples) . We downThe Parikshak et al. dataset samples came from the National Institute of Child Health and Human Development-funded University of Maryland Brain and Tissue Bank and the Autism Tissue Program . The Voip\u2009<\u20090.05) in at least half of the autism or control samples, rather than half of the samples overall. All three studies for the microarray datasets used log2 transformation and quantile normalization. Voineagu et al. showed that all samples met quality control parameters, specifically, if the interarray Pearson correlation was not greater than 0.85 and if the array was an outlier in hierarchical clustering [Each microarray dataset was downloaded or received in its normalized form, except that we renormalized the Voineagu et al. dataset to include more probes. Our renormalized version of the Voineagu et al. dataset was the same except that we eliminated probes that did not have significant expression using alg method . For DAV package , with EuWe set out to discover other biological processes that were not previously reported to be differentially regulated in the Parikshak et al. or VoineWe performed separate DAVID functional annotation clustering analyses for the up- and downregulated genes from Parikshak et al. For the upregulated genes, few gene sets were significantly enriched after Benjamini-Hochberg adjustment (Additiot test p\u2009=\u20090.001), Chow et al. dataset (p\u2009=\u20090.039), and Garbett et al. dataset (p\u2009=\u20090.076) . It was also downregulated in a similar multivariate analysis after limiting the dataset only to frontal cortex (p\u2009=\u20090.041) or temporal cortex (p\u2009=\u20090.057). While the mitochondria pathway was associated with autism, it was not associated with seizures, speech delay, motor delay, or global functioning in the Voineagu et al. dataset (p\u2009>\u20090.3 for each comparison).To exclude the possibility that the mitochondria pathway\u2019s downregulation was unique to the Parikshak et al. study, we next validated this downregulation in other genomic datasets. We used three microarray studies for validation Fig.\u00a0. To ensup values were not significant , likely because of reduced sample sizes .Because these studies\u2019 subjects overlapped, we did a separate validation analysis of the Chow et al. and Garbett et al. datasets after removing all but the subjects unique to these studies. The mitochondria pathway was still downregulated in these analyses, although the GABRA1, which codes for a gamma-aminobutyric acid (GABA) receptor subunit, and ATP5A1, which codes for an ATP synthase subunit, were strongly correlated (correlation\u2009=\u20090.876).In our reanalysis of the Parikshak et al. genes, the synapse-related gene sets were the most strongly enriched in those downregulated in autism, so we next determined the relationship between those synapse-related gene sets and the mitochondria pathway. 
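A minimal sketch of one way such a pathway-level relationship can be quantified is shown below: each gene set is summarised by its mean expression per sample and the two summaries are correlated with Pearson's r. The expression matrix and gene lists are placeholders, and the original analysis may have used eigengenes or gene-by-gene correlations instead of set means.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Placeholder expression matrix: rows = genes, columns = post-mortem samples.
rng = np.random.default_rng(0)
genes = ["GABRA1", "SYN1", "SNAP25", "ATP5A1", "NDUFS1", "COX5A"]
expr = pd.DataFrame(rng.normal(8, 1, size=(len(genes), 10)),
                    index=genes,
                    columns=[f"sample_{i}" for i in range(10)])

# Illustrative gene-set membership for the two pathways of interest.
synapse_genes = ["GABRA1", "SYN1", "SNAP25"]
mito_genes = ["ATP5A1", "NDUFS1", "COX5A"]

# Summarise each pathway as its mean expression per sample, then correlate.
synapse_score = expr.loc[synapse_genes].mean(axis=0)
mito_score = expr.loc[mito_genes].mean(axis=0)
r, p = stats.pearsonr(synapse_score, mito_score)
print(f"Pathway-level Pearson r = {r:.3f} (p = {p:.3g})")

# The same call on two individual genes gives a gene-level correlation,
# analogous to the GABRA1-ATP5A1 comparison mentioned in the text.
r_gene, _ = stats.pearsonr(expr.loc["GABRA1"], expr.loc["ATP5A1"])
print(f"GABRA1 vs ATP5A1: r = {r_gene:.3f}")
```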
Across all three microarray datasets, these two pathways had strong Pearson correlation Fig.\u00a0. To exclGiven that the mitochondria pathway was among the most enriched in the Parikshak et al. downregulated genes, we did a separate analysis to confirm the importance of this pathway in autism cerebral cortex. Using the Voineagu et al. dataset, we performed a differential gene expression analysis using limma between We next performed DAVID functional annotation clustering of the original up- and downregulated genes. The upregulated genes were enriched in only one gene set Additional file 2:Table S1. Properties of the Parikshak, Voineagu, Chow, and Garbett datasets. Table depicting the properties of the datasets analyzed in this study. Properties listed include sample size, number of features in the dataset, brain region, age range, gender, PMI range, RIN cutoff, and the source of the cases. (XLSX 12 kb)Additional file 3:Table S2. DAVID functional annotation clustering analysis of Parikshak et al. upregulated genes. Using the genes upregulated in autism cerebral cortex from Parikshak et al., DAVID functional annotation clustering was performed to generate groups of enriched gene sets. (XLSX 118 kb)Additional file 4:Table S3. DAVID functional annotation clustering analysis of Parikshak et al. downregulated genes. Using the genes downregulated in autism cerebral cortex from Parikshak et al., DAVID functional annotation clustering was performed to generate groups of enriched gene sets. (XLSX 134 kb)Additional file 5:Table S4. Synapse pathway and mitochondria pathway genes. The mitochondria pathway genes were downregulated in autism cerebral cortex in Parikshak et al. and were members of the GO \u201cMitochondrion\u201d term. The synapse pathway genes were also downregulated in Parikshak et al. and were members of the UniProt \u201cSynapse\u201d term. All genes in the synapse pathway and in the related \u201cM12\u201d module from Voineagu et al. [u et al. were excAdditional file 6:Figure S2. Heatmaps of mitochondrial genes in the Chow et al. and Garbett et al. microarray datasets. The rows are genes and the columns are subjects; the top vertical bar shows whether a subject was from autism (blue) or control (red). Generally, lower gene expression (blue in heatmap) maps onto the autism participants . Intensity of color is determined by a Z-score normalized by gene. Shown below the heatmap is the overlap of each sample with other study datasets, using the first letter of each study. (PDF 577 kb)Additional file 7:Table S5. Differential expression analysis between autism and control cerebral cortex in the Voineagu et al. dataset. Table of differential expression analysis between autism and control cerebral cortex in the Voineagu et al. dataset, adjusted for RIN, PMI, age, sex, and cortical location . The analysis was performed using the Bioconductor package limma and adjusted for multiple testing using the Benjamini-Hochberg method. Because several individuals were included twice in this analysis, the analysis was also redone limiting samples only to frontal or temporal cortex. Note that the p values in the cortex-specific analyses are not adjusted for multiple testing. (XLSX 265 kb)Additional file 8:Table S6. DAVID functional annotation clustering analysis of Voineagu et al. upregulated genes. Using the genes upregulated in autism cerebral cortex from Voineagu et al., DAVID functional annotation clustering was performed to generate groups of enriched gene sets. 
(XLSX 48 kb)Additional file 9:Table S7. DAVID functional annotation clustering analysis of Voineagu et al. downregulated genes. Using the genes downregulated in autism cerebral cortex from Voineagu et al., DAVID functional annotation clustering was performed to generate groups of enriched gene sets. The top 5 clusters had several gene sets related to mitochondrial function. (XLSX 49 kb)"} {"text": "The replication crisis addresses a fundamental problem in psychological research. Reported associations are systematically inflated and many published results do not replicate, suggesting that the scientific psychological literature is replete with false-positive findings . On the individual study-level, some researchers use selective outcome reporting to illegitimately present findings in an opportunistic way. Outcome reporting bias is very prevalent in clinical science and indicates that authors omit or change primary outcomes on basis of the results in order to avoid undesired findings . According to the authors this finding cannot be explained by heterogeneity, publication bias or allegiance effects . Using a continuous measure of study quality ranging from 0 to 8 points in a meta-regression showed that each additional point increase in study quality reduced the average effect size by \u22120.07 points . The impact of low-quality study bias was very recently replicated by Cristea et al. (Study quality is an important determinant of treatment efficacy in clinical science, but unfortunately, most published psychotherapy trials use poor methods such as small sample sizes, inadequate concealment of allocation, no intent-to-treat analyses, and unblinded outcome assessors (e.g., Newby et al., a et al. in a meta et al. , evidenca et al. . Due to a et al. . Furthera et al. or even a et al. in real-a et al. . That isa et al. . ReplicaAs in other psychological specialties (see Bakker et al., The author confirms being the sole contributor of this work and approved it for publication.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "In the article titled \u201cIncorporating Family Function into Chronic Pain Disability: The Role of Catastrophizing\u201d , an acknThe authors would like to acknowledge Dr. Majtaba Habibi, a co-adviser of Dr. Akbari's thesis, for his role in the project and revising an earlier version of the manuscript.This follows a decision by Shahid Beheshti University about an authorship claim by Dr. Habibi."} {"text": "Judo competition is characterized structurally by weight category, which raises the importance of physiological control training in judo. The aim of the present review was to examine scientific papers on the physiological profile of the judokas, maintenance or loss of weight, framing issues, such as anthropometric parameters (body fat percentage), heart rate responses to training and combat, maximal oxygen uptake, hematological, biological and hormones indicators. The values shown in this review should be used as a reference for the evaluation of physical fitness and the effectiveness of training programs. Hence, this information is expected to contribute to the development of optimal training interventions aiming to achieve maximum athletic performance and to maintain the health of judokas. Judo became an official sport event in the Olympic Games of Munich in 1972 for men and in the 1992 Games (Barcelona) for women . 
In the 1964 Games, judo was invited in Tokyo for men, whereas the same happened for women in the 1988 Games (Seoul). In 1961, there were world championships in Paris. The first competition with weight categories took place in the 1964 Games (Tokyo), including three categories and an open, while in 1965 (Rio de Janeiro), five categories and an open were included in the program.Competitive judo demands high-intensity intermittent actions, in which optimal physical attributes are necessary in order to achieve technical-tactical development and success in combat ,2. ActuaJudo trainers and judokas should have a solid knowledge about physiological responses to competitions and physiological adaptations to training in order to design an adequate training session and season . A combiThere are many authors who have written about the variation of physical and physiological profile of judokas during the training season ,15, as wJudo is a combat sport with strict competitive weight categories. In 1964, when judo became an official Olympic event, a competitive weight class system was adopted. Judokas should optimally select the weight class that is appropriate to their height and physique. However, many of them often undergo severe weight reduction through calorie restriction in order to select a lower-weight class and, in this way, to gain an advantage over other judokas in a particular weight class. In order to achieve the weight that would allow them to participate in a specific class, many contestants undergo drastic food restriction, especially during the week preceding the competition. The amount of weight loss is subsequently regained as the athletes compensate for the sustained energy drain by excess food intake during the post-competition period . This raBody mass and body fat percentage are essential measurements that could change largely according to sex, age, weight category and training. With regards to sex, it has been established that body fat percentage for elite female judokas is about 10% higher than for male judokas . A studyet al. has been extensively studied in judo, both in real competition, simulations of competition and training. [La\u2212] values have ranged 7\u201310 mmol\u00b7L\u22121 after combat simulations [\u2212] have not been found, either between competition levels [et al. [\u2212] of 9.1 \u00b1 1.1 mmol\u00b7L\u22121 after a specific training judo . In addition, Franchini et al. [\u2212] after combats, at least in simulated combat. In these studies, it was found that peak [La\u2212] was higher after the first combat compared to the second, third and fourth combat, about 2 mmol\u00b7L\u22121 [et al. [\u2212] during competition in regional-level male judokas.The concentration of lactate in blood [Laulations ,55,58,60n levels or sexes [et al. reportedi et al. found sii et al. , there wmmol\u00b7L\u22121 ,62. Takemmol\u00b7L\u22121 . Serranoet al. [12 L\u22121 and 5.8 \u00d7 109 L\u22121, respectively. These authors reported values of Hb of 15.5 g\u00b7L\u22121 and Hct of 43% for the same group of judokas. In female high level judokas, Hb was 12.5 g\u00b7dL\u22121 and 38% for Hct [\u22121 and 42%, respectively) than in adult judokas [Hematological evaluation in judokas has focused on concentrations of total erythrocytes and leukocytes, as well as quantification of hemoglobin (Hb) and hematocrit (Hct). Malczewska et al. reported for Hct . In the judokas .et al. [\u22121) and Htc (from 42% down to 39%) after a five-week training period in young judokas (16 years old). 
This finding was in agreement with that of Umeda et al. [2). This study concluded that energy restriction seemed to exacerbate alterations in immune markers, such as immunoglobulin, and complements induced by vigorous exercise at seven days after a competition. Although the changed values were still within normal limits, the authors hypothesized that the potential cumulative effect of these changes over many competitions in one year might well induce abnormal levels with a possibly harmful clinical effect on judokas. In contrast, the study of Kowatari et al. [Hb and Hct did not change significantly during a 20-day rapid weight loss period before competition ,29. A stet al. found a a et al. , who shoi et al. failed t3/\u00b5L and 2.98 \u00b1 0.82\u20137.95 \u00b11.8 \u00d7 103/\u00b5L, respectively, in 18 male judokas [On the other hand, white blood cells and neutrophils in the blood values showed a significant increase after training judo, \u00b11.2\u201310.7 \u00b1 1.8 \u00d7 10 judokas . Another judokas .et al. [et al. [et al. [et al. [Ohta et al. and Umed [et al. controll [et al. showed a [et al. only was [et al. ,29.et al. [Kim et al. pointed Among sports where athletes are categorized by weight, judo is characterized by relatively short duration, high intensity and intermittent exercise with a combat lasting ~7.18 min in males. It has been shown that a single judo combat is able to induce mobilization of both protein and lipid metabolism .et al. [Finaud et al. observedet al. ,30.et al. [\u22121).On the contrary, Filaire et al. observed\u22121). Therefore, the authors hypothesize that since the judoka study was carried out with the misleading use of lipid metabolism (lipolysis), bringing with it the increase in free fatty acids and glycerol after the effort, characterized by an intake of a carbohydrate diet rather than that recommended, this can be the case. A study about thet al. [Stress management before a competition for athletes is very important, especially for those who reduce their weight, such as judokas. In general, adrenocortical hormone is well known as a stress marker, and cortisol (C) is representative of it. Nowadays, the ratio of testosterone (T) to C (T/C) is a good marker of overtraining diagnosis . In a stet al. and Obmiet al. , suggestet al. [Salvador et al. comparedvs. interregional) by comparing physiological responses and psychological responses prior to judo competitions [The relationships between psychophysiological variables have been investigated at two levels (regional etitions . C levelCorrelations between T levels and fighting in male participants in judo contests have been studied . A positet al. [Toda et al. investiget al. [et al. [Degoutte et al. examinedet al. , in line [et al. , showed This study has a something\u2019s limitations. It discussed a compendium of studies related to the functional and physiological characteristics judo athlete. However, it is not a systematic review of these aspects. Coaches can consider directions and guidelines; and can serve as support for their workouts. Therefore, data must be taken with caution.The values shown in this review are vital for training control. It collaborates with the planning of training and maintaining the health of judokas."} {"text": "In Da Cunha et al. 
, we provhttp://ecogenomics.github.io/CheckM/)) and Anvi\u2019o also revealed a high contamination index (between 45% and 57%), necessarily underestimated because quality analyses are limited to defined sets of markers [On top of the very high heterogeneity detected in Loki\u2019s genome 78.21%), our quality analyses with CheckM . No otheWe suspected that patches of contamination could also be present in Heimdall LC3 RpoA since this subunit is encoded by a single gene in LC3 (like in Thaumarchaeota and the related Bathyarchaea and Aigarchaea), whereas all other Asgards have a dimeric version . Unlike Spang et al. , we neveMethanopyrus kandleri in the 2D tree of Spang et al. [Spang et al. argued tg et al. \u20138. This g et al. .Spang et al. argued tWe never claimed that the Asgard/Eukarya affiliation could not be obtained without EF2. We ourselves obtained it when we removed EF2 from the concatenation of the eocyte proteins . ZarembFinally, the confirmed presence of many Eukaryotic Signature Proteins (ESPs) in the genomes of the additional Asgards cannot bSpang et al. stated t"} {"text": "Inflammatory bowel disease (IBD) is an autoimmune disease of unknown etiology and can lead to inflammation and cancer. Whey proteins contain many bioactive peptides with potential health benefits against IBD. We investigated the effect of low-temperature-processed whey protein concentrate (LWPC) on the suppression of IBD by using a dextran sodium sulfate (DSS)-induced colitis model in BALB/c mice. Oral intake of LWPC resulted in improved recovery of body weight in mice. Histological analysis showed that the epithelium cells of LWPC-treated mice were healthier and that lymphocyte infiltration was reduced. The increase in mucin due to the LWPC also reflected reduced inflammation in the colon. Transcriptome analysis of the colon by DNA microarrays revealed marked downregulation of genes related to immune responses in LWPC-fed mice. In particular, the expression of interferon gamma receptor 2 (Ifngr2) and guanylate-binding proteins (GBPs) was increased by DSS treatment and decreased in LWPC-fed mice. These findings suggest that LWPCs suppress DSS-induced inflammation in the colon by suppressing the signaling of these cytokines. Our findings suggest that LWPCs would be an effective food resource for suppressing IBD symptoms. Inflammatory bowel disease (IBD), including Crohn\u2019s disease (CD) and ulcerative colitis (UC), is a chronic autoimmune disease condition, the pathogenesis of which is unknown . IBD is et al. [Crassostrea gigas) has been shown to reduce inflammation through immunomodulation [Recent research reveals that food-derived peptides from various sources, including seafood, milk and plants, contain bioactive functional properties, especially in the suppression of colitis. Young et al. showed tdulation . Milk isdulation . Bioactidulation , or indidulation , improvedulation ,12. Bothdulation ,14,15. Wdulation , showingdulation . These set al. [et al. [The functional properties of cheese whey protein are known to differ from the properties of raw milk based on the sterilization method applied during processing. The milk pasteurization process is known to cause a low degree of protein denaturation; thus, whey generated by pasteurized milk processing has been reported to have more bioactive functions when compared to whey processed at higher temperatures. Li et al. showed t [et al. 
found thTherefore, the present study was conducted to evaluate the effects of low-temperature-treated whey proteins in relation to their suppression of colon inflammation in the dextran sulfate sodium (DSS) mouse model of experimental colitis. The possible mechanisms by which whey protein may exert its action were studied via DNA microarrays followed by a comparison of the gene expression levels.The low-temperature-processed whey concentrate (LWPC) powder was a commercial product kindly gifted by Asama Chemical Co. Ltd. The LWPC was dissolved in distilled water and heated at 70 \u00b0C for 2 h, and this solution was concentrated by freeze-drying. The resulting powder is named high-temperature-processed whey protein concentrate (HWPC). The protein profiles of HWPC and LWPC were analyzed by SDS-polyacrylamide gel electrophoresis with 5%\u2013The two treatment diets were prepared based on the AIN-76 diet . Colitis was induced in Groups 1 to 3 through the administration of 2.5% dextran sulfate sodium (DSS) in the drinking water [Female BALB/c mice (4 weeks old) were obtained from CREA Japan Inc. and housed in isolated cages at 20 \u00b0C under a 12 h light/dark cycle. After 10 days of acclimatization with the AIN-76 diet and water provided ng water , and eacet al. [Five colon tissue samples were obtained from each treatment group representing all 5 mice. Each tissue sample was taken from a different site so as to represent the whole column within a treatment group. The colon tissues were fixed overnight in 4% paraformaldehyde in phosphate-buffered saline, embedded in paraffin wax and sectioned (10 \u03bcm) using a microtome, followed by staining with hematoxylin and eosin (HE). Twenty-five slide spots from each treatment group were used for microscopical examination with 5 spots representing each tissue section. The slides were viewed under a light microscope. The slides were further evaluated with regard to histological damage to the colon using a semi-quantitative scoring system with minet al. , with a The mouse colon was homogenized, and proteins were extracted with general SDS-PAGE sample buffer. The protein concentration was assessed using the Bio-Rad protein assay method. Equal amounts of each extract were then subjected to protein separation on SDS-PAGE (15% gel) and then electro-transferred onto a nitrocellulose membrane. The membrane was blocked with Tris-buffered saline and 6% milk protein and then incubated overnight at 4 \u00b0C with primary rabbit antibodies for mucin 2 , followed by incubation with a secondary antibody for rabbit IgG labeled with horse radish peroxidase. Detection was performed using the ECL system . Colon tissues were ground in liquid nitrogen, followed by RNA extraction using the SV Total RNA isolation system according to the manufacturer\u2019s protocol. The extracted RNA was checked by spectrophotometer analysis and electrophoresis and then stored at \u221280 \u00b0C for further use.RNA samples from each treatment replicate were pooled, and 500 ng of RNA from each treatment condition were used for transcriptome analysis of the mouse colon using a GeneChip 430A 2.0 mouse array . Experiments were performed according to the manufacturer\u2019s technical manual (Affymetrix). Data obtained from scanning the arrays were normalized and analyzed using Arraystar software . 
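As a purely illustrative sketch of how normalized array intensities such as these can be screened for two-fold expression changes (the threshold applied in the categorization step that follows), the fragment below assumes a hypothetical expression table and sample layout that are not those of the original study:

```python
# Purely illustrative: flag probes changing >= 2-fold between DSS and control
# groups in a normalized expression matrix (probes x samples). The file name
# and sample columns are hypothetical.
import numpy as np
import pandas as pd

expr = pd.read_csv("normalized_expression.csv", index_col=0)  # hypothetical input
control_cols = ["ctrl_1", "ctrl_2", "ctrl_3"]                 # hypothetical samples
dss_cols = ["dss_1", "dss_2", "dss_3"]

# Assumes intensities are on a linear scale; compare group means
log2_fc = np.log2(expr[dss_cols].mean(axis=1) / expr[control_cols].mean(axis=1))

up = expr.index[log2_fc >= 1.0]     # >= 2-fold up-regulated vs control
down = expr.index[log2_fc <= -1.0]  # >= 2-fold down-regulated
print(f"{len(up)} probes up-regulated and {len(down)} down-regulated (>= 2-fold)")
```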
Genes that were up- or down-regulated by \u22652-fold in comparison with the DSS treatment were categorized using the Database for Annotation, Visualization and Integrated Database (DAVID) version 6.7 ,26 and tt-test.To validate the changes in the expression of the selected genes, we performed quantitative real time RT-PCR (QPCR). We selected guanylate binding protein 1 (Gbp1:NM_010259), Gbp2 (NM_010260) and interferon gamma receptor 2 (IFNgr2: NM_008338) for QPCR analysis. cDNA was synthesized using Superscript II reverse transcriptase and oligo-dT primers. Primers for the genes were selected from PrimerBank ,29 as deet al. [et al. [et al. [et al. [The protein profiles of LWPC and HWPC were compared by SDS-PAGE . LWPC shet al. and Mori [et al. . In the [et al. . It is c [et al. obtained [et al. also obt [et al. .et al. [DSS-induced intestinal injury serves as an experimental model for human ulcerative colitis and is a well-established method for the chemical induction of intestinal injury . The chaet al. found thUnder inflammatory conditions, excess infiltration of neutrophils can be observed clearly. Microscopic images of colon sections from mice subjected to different treatments are shown in et al. [et al. [et al. [in vivo and in vitro.In the colon, mucins are produced by goblet cells for lubrication . The avaet al. reported [et al. and Hong [et al. also fouTo investigate the mechanism underlying the effect of LWPC, we evaluated the transcriptome of the mouse colon by DNA microarray analysis. Compared with the control group, 677 out of 22,626 probed genes were up- or down-regulated by over two-fold in the DSS treatment group , Table 3The 110 genes regulated by both DSS and LWPC were further classified into functional groups using the DAVID functional annotation program according to the GO biological process in the GO ontology, as shown in et al. [Quantitative real-time RT-PCR was performed to confirm the expression levels of Gbp1 and Gbp2 A,B. Bothet al. found thet al. . Thus, tet al. . Accordiet al. . Accordiet al. , both Ilet al. . LWPC diet al. [et al. [et al. [Whey protein isolates subjected to high hydrostatic pressure have been shown to affect the suppression of tumor necrosis factor-alpha, interleukin 8 and interleukin 18 in Caco-2 colon cancer cells . Rusu etet al. showed t [et al. reported [et al. . Lactofe [et al. reported [et al. . HoweverDSS-induced colitis was prevented by oral intake of LWPC, which also enhanced the recovery from colitis. Comprehensive analysis of gene expression in the colon by DNA microarray analysis showed that GBPs were downregulated by LWPC intake. Expression of GBPs is known to be regulated by interferon gamma. IFNgr2, a receptor for interferon gamma, was also downregulated by LWPC, which suggests that oral intake of LWPC results in the suppression of IFN gamma-mediated pathways and leads to the suppression of inflammation."} {"text": "Polycyclic aromatic hydrocarbons (PAHs) are widespread in marine ecosystems and originate from natural sources and anthropogenic activities. PAHs enter the marine environment in two main ways, corresponding to chronic pollution or acute pollution by oil spills. The global PAH fluxes in marine environments are controlled by the microbial degradation and the biological pump, which plays a role in particle settling and in sequestration through bioaccumulation. Due to their low water solubility and hydrophobic nature, PAHs tightly adhere to sediments leading to accumulation in coastal and deep sediments. 
Microbial assemblages play an important role in determining the fate of PAHs in water and sediments, supporting the functioning of biogeochemical cycles and the microbial loop. This review summarises the knowledge recently acquired in terms of both chronic and acute PAH pollution. The importance of the microbial ecology in PAH-polluted marine ecosystems is highlighted as well as the importance of gaining further in-depth knowledge of the environmental services provided by microorganisms. This review highlights the sources and fate of polycyclic aromatic hydrocarbons (PAHs) in the marine environment with particular emphasis on the microbial ecology and the biological pump controlling global PAH fluxes. Polycyclic aromatic hydrocarbons (PAHs) have attracted the interest of many scientists from different disciplines beyond biology, physics and chemistry. They are fascinating compounds due to their universality, their presence in interstellar space Tielens and theiet\u00a0al.et\u00a0al.et\u00a0al.PAHs are natural compounds synthesised by organisms, produced by combustion, and derived from fossil fuels and transformation processes Neff . In the et\u00a0al.et\u00a0al.et\u00a0al.et\u00a0al.et\u00a0al. by indigenous oil-degrading bacteria and Gram-positive groups (Bacillus and Microbacterium) as sentinels for PAH biodegradation in Gulf beach sands. In coastal wetlands, polluted by weathered oil containing complex PAHs . They enter the marine environment by deposition on surface waters. Atmospheric PAH depositions have toxic effects on the oceanic phytoplankton communities thus perturbing the first level of marine food webs organisms, ensuring the microbial loop through top-down regulation and ecosystem services by metabolic networks. These interactions, characterized by the coexistence of different species involved in positive and/or negative interactions, are an integral part of the functioning of ecosystems and govern microbial processes involved in the hydrocarbons biodegradation , genes involved in the first step of aerobic PAH biodegradation (Bordenave et\u00a0al.et\u00a0al.et\u00a0al.et\u00a0al.et\u00a0al.et\u00a0al.et\u00a0al. (rhd genes was correlated with bioavailable PAH contents. The diversity of rhd genes was higher in deposited sediments than in suspended sediments and overlaying water (Xia et\u00a0al.rhd genes in environmental studies has been further demonstrated by targeting their transcripts, which were shown to be induced just after the addition of fresh bioavailable PAHs contained in heavy crude oil (Paiss\u00e9 et\u00a0al.l.et\u00a0al. demonstret\u00a0al.et\u00a0al.et\u00a0al.et\u00a0al.In the absence of functional evidence, the contribution of PAHs to structuring microbial communities could be difficult to evaluate because of the presence of multicontaminants. To overcome these difficulties, the high-throughput sequencing technology associated with appropriate statistical analyses, such as co-occurrence network analyses, offers the possibility of identifying pollutant\u2013degrader interactions (Yergeau Because of the sediment complexity, the environmental observations alone are insufficient to fully understand the mechanisms controlling the microbial assemblages in the presence of PAHs. 
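To make the co-occurrence network approach mentioned above more concrete, a minimal sketch is given below; it assumes a hypothetical taxon abundance table and arbitrary correlation and significance cut-offs, and is not taken from any of the cited studies.

```python
# Purely illustrative co-occurrence sketch: pairwise Spearman correlations
# between taxon abundance profiles across samples; strong, significant pairs
# are kept as candidate network edges. Input table and thresholds are assumed.
from itertools import combinations

import pandas as pd
from scipy.stats import spearmanr

abund = pd.read_csv("taxa_abundance_table.csv", index_col=0)  # rows = samples, columns = taxa

edges = []
for a, b in combinations(abund.columns, 2):
    rho, p = spearmanr(abund[a], abund[b])
    if abs(rho) >= 0.7 and p < 0.01:  # arbitrary example cut-offs
        edges.append((a, b, round(rho, 2)))

for a, b, rho in edges:
    print(f"{a} -- {b}: Spearman rho = {rho}")
```

Edges retained in this way are only candidate associations; as noted above, such networks need to be interpreted together with functional evidence before pollutant–degrader interactions can be inferred.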
Laboratory studies, including single-cell physiology as well as appropriate microcosm and mesocosm experiments, are methods that facilitate understanding complex microbial assemblages (Cravo-Laureau and Duran et\u00a0al.et\u00a0al.et\u00a0al.et\u00a0al.et\u00a0al.et\u00a0al.et\u00a0al.et\u00a0al.et\u00a0al. (Oxygen and redox oscillations are major fluctuations in estuarine and tidal sediments (Cravo-Laureau and Duran l.et\u00a0al. .et\u00a0al.et\u00a0al.et\u00a0al. (et\u00a0al.The influence of PAHs on the interactions between microbial communities and meiofauna has been revealed by microcosms for studies into bioremediation strategies (Louati l.et\u00a0al. demonstrl. et\u00a0al., highliget\u00a0al.Although the issues related to oil pollution, particularly the concerns regarding to PAHs, are increasingly taken into consideration with the introduction of specific countermeasures, PAHs are still threatening environmental health. The marine environment is the major sink and receptacle of PAHs, which enter through different forms whose bioavailability varies according to their origin and their weathering processes. Microorganisms are the main actors in controlling and determining their fate in all marine ecosystems. The biological pump plays a pivotal role in the surface water and water-column trapping of PAHs, originating from both oil spills and atmospheric deposition, and then transporting them to the bottom. The utilisation of this spectacular phenomenon, by promoting it with fertilisers, for cleaning the open oceans and for carbon sequestration is currently under debate (De La Rocha The exploitation of oil reservoirs in more and more extreme conditions, such as deep sea and cold areas, in the context of climate change complicates the challenge. The study of oil reservoirs in more extreme conditions is required because these colder environments present different environmental parameters, including hydrostatic pressure, low temperature and water acidification, that may shape microbial processes. Further knowledge of the ecology of the adapted bacterial populations described in deep-sea sediments and of the PAH degraders found in cold Arctic Ocean sediments, both responsible for low PAH-degradation rates, is required to complete the global model of the fate of PAHs in the marine environment.Additionally, it is of paramount importance to gain information on how the modification of the resulting environmental parameters may affect the microbial assemblages, the biological pump and the global carbon cycle in a climate change scenario."} {"text": "The aim of this study was to evaluate the effect of intra-canal calcium hydroxide (CH) remnants after ultrasonic irrigation and hand file removal on the push out bond strength of AH-26 and EndoSequence Bioceramic sealer . n=34) based on the CH removal technique; i.e. either with ultrasonic or with #35 hand file. Then specimens were divided into two subgroups according to the sealer used for root canal obturation: AH-26 or BC Sealer. After 7 days, 1 mm-thick disks were prepared from the middle portion of the specimens. The push out bond strength and failure mode were evaluated. Data were analyzed by the two-way ANOVA and Tukey\u2019s post hoc tests. A total of 102 single-rooted extracted human teeth were used in this study. After root canal preparation up to 35/0.04 Mtwo rotary file, all the specimens received CH dressing except for 34 specimens in the control group. After 1 week, the specimens with CH were divided into 2 groups (P<0.05). 
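For illustration only, a minimal sketch of the kind of two-way ANOVA (sealer × CH-removal technique) with Tukey post hoc comparisons described above; the data below are synthetic placeholders, not the measurements of this study.

```python
# Purely illustrative: two-way ANOVA (sealer x CH-removal technique) on
# push-out bond strength with Tukey HSD, using synthetic placeholder data.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = [("AH26", "ultrasonic"), ("AH26", "hand_file"),
          ("BC", "ultrasonic"), ("BC", "hand_file")]
rows = [{"sealer": s, "removal": r, "mpa": rng.normal(5.0, 1.0)}
        for s, r in groups for _ in range(17)]  # 17 specimens per subgroup, illustrative
df = pd.DataFrame(rows)

model = ols("mpa ~ C(sealer) * C(removal)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and sealer x removal interaction

df["group"] = df["sealer"] + "_" + df["removal"]
print(pairwise_tukeyhsd(df["mpa"], df["group"], alpha=0.05))
```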
The dominant mode of failure in all subgroups was of mixed type except for the BC Sealer specimens undergoing CH removal with hand file which dominantly exhibited adhesive mode of failure. The push out bond strength of both sealers was lower in specimens receiving CH. These values were significantly higher when CH was removed by ultrasonic (CH remnants had a negative effect on the push out bond strength of AH-26 and BC Sealer. Ultrasonic irrigation was more effective in removing CH. Amin et al. [These findings were consistent with the results of the studies of Guiotti et al. . Howeveret al. , 28-30. et al. [et al. [When comparing the control specimens, the push out bond strength of AH-26 was higher than BC Sealer but the difference was not statistically significant. However, in specimens receiving CH, the push out bond strength of AH-26 was significantly higher than BC Sealer regardless of the CH removal technique. Al-Haddad et al. confirmeet al. , 11. Sho [et al. attributIt should be noted that the clinical significance of this decrease in push out bond values and its effect on the outcome of endodontic treatments are not clear yet.et al. [et al. [et al. [The dominant mode of failure in all experimental groups was mixed with an exception of BC Sealer specimens which undergone CH removal by hand files. In the latter, the dominant mode of failure was adhesive indicating that higher levels of residual CH in BC Sealer specimens affectively reduced the bond strength. Mixed mode of failure may also be due to uninform CH removal from root canal walls that must be determined with further research. Akcay et al. and Gokt [et al. reported [et al. used gut [et al. . FurtherResidual CH on dentinal walls of the root canal negatively affects the push out bond strength of AH-26 and Endosequence BC Sealer."} {"text": "Epidemiological studies have probed the correlation between telomere length and the risk of lung cancer, but their findings are inconsistent in this regard. The present meta-analysis study has been carried out to demonstrate the association between relative telomere length in peripheral blood leukocytes and the risk of lung cancer using an established Q-PCR technique.A systematic search was carried out using PubMed, EMBASE, and ISI before 2015. A total of 2925 cases of lung cancer and 2931 controls from 9 studies were employed to probe the relationship between lung cancer and telomere length. ORs were used at 95% CI. Random-effects models were used to investigate this relationship based on the heterogeneity test. Heterogeneity among studies was analyzed employing subgroup analysis based on type studies and the year of publication.2=93%) compared to patients with squamous cell lung cancer, which was 1.78 . The meta-regression revealed that the effect of telomere length shortening, decreased and increased with the year of publication and the age of risks to lung cancer, was clearly related to short telomeres lengths.Random-effects meta-analysis revealed that patients with lung cancer were expected to have shorter telomere length than the control . The summary of the pooled ORs of telomere length in adenocarcinoma lung cancer patients was 1 (95%CI=0.68-1.47, ILung cancer risks clearly related with short telomeres lengths. In patients with breathing problems, lung cancer risk can be predicted by telomere length adjustment with age, sex, and smoking. Lung cancer is a leading cause of death worldwide. 
In 2008, more than 1.6 million new cases of lung cancer were diagnosed, which accounts for 13% of all new cancer cases. About 1.4 million people have lost their lives due to lung cancer that is estimated to be responsible for 18% of all deaths from lung cancer compared to 1.78 for SCC lung cancer. The pooled ORs showed that patients with SCC lung cancer were expected to have shorter telomere length than the control; however, it was found that there was no relationship between telomere length and the risk of lung cancer in patients with adenocarcinoma lung cancer. Overall, pooled ORs for both subtype was 1.40 , which indicated shorter telomere length in both lung cancer subtype.A subgroup analyses were performed in order to discover the source of heterogeneity and further explore the effects of the histologic subtype of lung cancer on telomere length as well as evaluate any differences between trials by ruling out the confounding effect of the histologic subtype of lung cancer. The pooled summary of ORs of telomere length in adenocarcinoma lung cancer patients was 1 , I2 87 %, P=0.1], and females and age 1.32 .2 of sensitivity and specificity were 84.9% (P=0.001), 71.7% (P=0.001), 67.9% (P=0.001), 59.1% (P=0.008), and 81.1% (P=0.001), respectively. There was also little difference in the sensitivity analysis due to inaccuracies in the data presented or incorrect information in the articles.Sensitivity of these nine studies was demonstrated by forest plots as shown in et al.[To investigate the relationship between lung cancer and telomere length shortening in this meta-analysis, 2925 lung cancer cases and 2931 controls of 9 different articles were used (OR=1.13). The measurement method for the telomere length in all included studies, except Sun et al. was realet al.[et al.[et al.[P=0.003)[The greater number of studies showed significant associations between shorter telomere lengths in peripheral blood leukocytes and the risks of different subtype of lung carcinomas. Wu et al. and Jangl.[et al. found sil.[et al. indicate[P=0.003). Longer [P=0.003). The res[P=0.003),7. In ad[P=0.003). Both an[P=0.003). Consequ[P=0.003). When th[P=0.003).et al.[et al.[Various case control studies have reported shorter telomeres to be associated with the risk of lung cancer, whereas results from Seow et al. showed tet al.. They deet al.. Sanchezl.[et al. found a l.[et al.. Adenocaet al.[et al.[et al.[Telomere shortening has also been associated with smoking status. Our metet al. demonstret al.. Smokinget al.,34. The et al.. Jang etl.[et al. reportedl.[et al.. Shen etl.[et al. observedl.[et al..In conclusion, a positive association was found between short telomere length and the risk of lung cancer. In patients with breathing problems, the risk of lung cancer can be predicted by telomere length adjusted with age, sex, and smoking. The molecular mechanisms that induce a decrease in telomere length may be implicated in the expansion of histologic subtypes of lung cancer. The several remarkable advantages of the current study include the fact that search strategy was employed extensively with reviewing of titles and abstracts manually. The search period spanned for a long time so that studies are not excluded based on the years of publication. Finally, further large studies will be required to authenticate these results and evaluate the effect of genetic and environmental risk factors before and after cancer diagnosis."} {"text": "L. monocytogenes, together with the severity of human listeriosis infections, make L. 
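As a hedged illustration of how pooled odds ratios and an I² heterogeneity estimate of this kind are typically obtained, the following DerSimonian–Laird random-effects sketch uses invented study-level odds ratios and confidence intervals, not the values of the nine included studies.

```python
# Purely illustrative DerSimonian-Laird random-effects pooling of odds ratios
# with an I^2 heterogeneity estimate. Per-study ORs and 95% CIs are placeholders.
import numpy as np

studies = [  # (OR, lower 95% CI, upper 95% CI) -- invented example values
    (1.4, 1.1, 1.8), (2.0, 1.3, 3.1), (1.1, 0.8, 1.5),
    (1.9, 1.2, 3.0), (1.3, 0.9, 1.9),
]

y = np.log([or_ for or_, _, _ in studies])                  # log odds ratios
se = np.array([(np.log(u) - np.log(l)) / (2 * 1.96) for _, l, u in studies])
w = 1.0 / se**2                                             # fixed-effect weights

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)
df = len(y) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1.0 / (se**2 + tau2)                               # random-effects weights
y_re = np.sum(w_star * y) / np.sum(w_star)
se_re = np.sqrt(1.0 / np.sum(w_star))
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled OR = {np.exp(y_re):.2f} "
      f"(95% CI {np.exp(y_re - 1.96*se_re):.2f}-{np.exp(y_re + 1.96*se_re):.2f}), "
      f"I^2 = {i2:.0f}%")
```

The same inverse-variance weighting idea, applied within subgroups, is what underlies the histology-specific pooled estimates reported above.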
monocytogenes of particular concern for manufacturers of cold-stored \u201cready to eat\u201d (RTE) foods. L. monocytogenes has been isolated from a wide variety of RTE foods and is responsible for several outbreaks associated with the consumption of RTE meat, poultry, dairy, fish and vegetable products. Although L. monocytogenes is among the most frequently-detected pathogens in dry fermented sausages, these products could be included in the category of RTE products in which the growth of L. monocytogenes is not favored and have rarely been implicated in listeriosis outbreaks. However, L. monocytogenes is highly difficult to control in fermented sausage processing environments due to its high tolerance to low pH and high salt concentration. In many Mediterranean-style dry fermented sausages, an empirical application of the hurdle technology often occurs and the frequent detection of L. monocytogenes in these products at the end of ripening highlights the need for food business operators to properly apply hurdle technology and to control the contamination routes of L. monocytogenes in the processing plants. In the following, through an up-to-date review of published data, the main aspects of the presence of L. monocytogenes in Mediterranean-style dry fermented sausages will be discussed.The morphological, physiological and epidemiological features of Peritrichous flagella give them a typical tumbling motility, occurring at 20\u201325 \u00b0C. Based on somatic (O) and flagellar (H) antigens, 13 serotypes of L. monocytogenes have been recognized. These are identified alphanumerically: 1/2a, 1/2b, 1/2c, 3a, 3b, 3c, 4a, 4ab, 4b, 4c, 4d, 4e and 7 and is able to multiply in food matrices at water activity (aw) values of 0.92 and in NaCl concentrations of 12%, generally lethal to other microorganisms. L. monocytogenes is a ubiquitous organism, widely distributed in the environment: the principal reservoirs are soil, forage and water . In thel., 2002 ).L. monocytogenes is the etiologic agent of listeriosis. Human cases of listeriosis are almost exclusively caused by L. monocytogenes. Very rare cases of infections are attributed to L. ivanovii and L. seeligeri. The difference in the pathogenic potential of L. monocytogenes strains has been demonstrated by means of in vivo bioassay and in vitro cell assay ; Lineage II, strains isolated from sporadic cases of listeriosis ; Lineage III, strains rarely associated with cases of listeriosis (serotypes 4a and 4c) . The dil., 2008 ; Robertsl., 2009 ). Rasmusnn, 2002 ). The linn, 2002 ). The mal., 2005 ), to lifl., 2007 ).L. monocytogenes, together with the severity of human listeriosis infections, make L. monocytogenes of particular concern for manufacturers of cold-stored \u201cready to eat\u201d (RTE) foods , L. monocytogenes must not be present at levels exceeding 100 CFU/g during the shelf-life. In RTE foods able to support its growth, L. monocytogenes must be absent in 25 g at the time of leaving the production plant. However, if the producer can demonstrate that the product will not exceed the limit of 100 CFU/g throughout its shelf-life, this criterion does not apply and in l., 2002 ; Van Coil., 2004 ; Shen etl., 2006 ). L. monl., 2000 ; Gillespl., 2006 ; Public l., 2006 ; U.S. Del., 2006 ; Todd anl., 2006 ). The EUl., 2006 ) lays doon, 2005 ). In 201L. monocytogenes has been previously found in every stage along the pork processing industry , includl., 1997 ; Korsak l., 1998 ). The sol., 2002 ). L. 
monl., 2000 ).et al., 2003 [et al., 2002 [et al., 1994 [et al., 1995 [et al., 1996 [et al., 1995 [et al., 2005 [et al., 2008 [et al., 2002 [et al., 2013 [et al., 1991 [et al., 2002 [et al., 2002 [et al., 2003 [et al., 2005 [The prevalence in feces is generally comprised between 0% and 50% . This wl., 2002 ). The rol., 2002 ; Nesbakkl., 1994 ; Saide-Al., 1995 ; Borch el., 1996 ). Contaml., 1995 ), and rel., 2005 ; L\u00f3pez el., 2008 ), highlil., 2002 ; Meloni l., 2013 ). Several., 1991 ; Ripamonl., 2002 ; Autio el., 2003 ; Fabbi el., 2005 ) correlaet al., 2005 [et al., 2013 [The most frequent serotypes found in carcasses and slaughterhouse environments are 1/2a, 1/2c , while L. monocytogenes contamination tends to increase along the pork supply chain . Raw mel., 2002 ).L. monocytogenes compared to the muscles of freshly slaughtered pigs (0%\u20132%). Raw meat represents the primary source of contamination of final products by l., 1999 ; Katharil., 1999 ; Kanuganl., 2002 ; Th\u00e9venol., 2005 ). In turay, 1996 ; Chasseil., 2002 ). The lel., 1996 ; Th\u00e9venol., 2005 ). Pork ml., 2005 ; Mureddul., 2014 ).L. monocytogenes can persist over time in the processing environment capacity and weak or moderate ability to form biofilm after 24 to 40 h of incubation. Isolates from serotypes 1/2a, 1/2b and 4b presented higher adherence when compared to isolates from serotype 1/2c showed a reduction of L. monocytogenes growth after 24, 48 and 72 h of incubation in isolates from processing environments and finished products.Once introduced into the plants, l., 2008 ), forminl., 1990 ; Gandhi l., 2012 ). In recl., 2012 ; Meloni l., 2014 ; Mureddul., 2014 ), the evl., 2014 ). Harborl., 2014 ). This cl., 2011 ). Decontl., 2004 ). Withouos, 1999 ; Meloni l., 2014 ). Previol., 2012 ). In a rl., 2014 ), the inL. monocytogenes isolated from meat-processing environments belong mainly to serotypes 1/2c and 1/2a .Mediterranean-style dry fermented sausages are characterized by their relatively longer shelf-life and the exceptional hygienic background, which is brought about by the production of lactic acid in the fermentation process (pH < 4.5\u20135) and low water activity (<0.90) of the final product . In genet al., 2005 [The meat used depends on eating habits, customs and the preferences prevailing in the geographical region where the fermented sausage is produced . This isl., 2005 ; Ord\u00f3\u00f1ezl., 2005 ). The saet al., 2005 [et al., 2009 [After filling and the first warming up at 20\u201322 \u00b0C for 4\u20136 h, the fermentation stage for the manufacture of a standard dry fermented sausage can be summarized as follows: one to two days at 18\u201324 \u00b0C and 60% relative humidity (RH) and five days at 15 \u00b0C and 70% RH . After l., 2005 ; Ord\u00f3\u00f1ezl., 2005 ). Howevel., 2005 ; Jofr\u00e9 el., 2009 ). At thel., 2009 ).L. monocytogenes is inhibited in fermented sausages by sequential steps: the \u201churdle technology\u201d concept includes several sequential hurdles, essential at different stages of the fermentation or ripening process . Due tori, 2002 ). These ri, 2002 ). In theri, 2002 ). Other l., 1994 ; Greco el., 1999 ; Mazzettl., 1999 ). This, l., 1999 ). This cl., 1993 ; Torrianl., 1994 ; Grazia l., 1998 ; Leroy al., 1998 ; Hebert l., 2000 ; Lucke, l., 2000 ). These l., 2005 ). This il., 2005 ). Only tl., 2005 ). This sl., 2005 ).L. monocytogenes at several stages. 
The raw materials may be contaminated from the slaughterhouse environment, during the production process or by contact with contaminated unprocessed raw materials, unclean surfaces or people in the l., 2007 ). L. monsausages , reachinl., 1989 ; Cordanol., 1989 ; Levine l., 2001 ; Th\u00e9venol., 2005 ; De Cesal., 2007 ; Meloni l., 2009 ; Meloni l., 2012 ; Mureddul., 2014 ; Meloni l., 2014 ; Dom\u00e9necl., 2015 ).L. monocytogenes have rarely been implicated in critical listeriosis outbreaks , and the global number of annual cases per 100,000 people is only 0.0000055 , representing a major public health concern . Many Ml., 2012 ). Mostlyl., 2012 ). Insuffl., 2014 ), and L.in, 1991 ), becausl., 2004 ; Th\u00e9venol., 2006 ).et al., 2006 [et al., 2014 [et al., 2014 [et al., 1991 [et al., 1991 [et al., 2009 [et al., 2012 [et al., 2007 [et al., 2009 [As already noted for raw meat and meat-processing environments, also in the Mediterranean-style sausages at the end of ripening, serotypes 1/2c, 1/2a and 1/2b are more often detected , while l., 1991 ). In Ital., 2009 ; Pontelll., 2012 ). Despitl., 2007 ) were rel., 2009 ).L. monocytogenes from various sources, including raw meat, slaughterhouse environments, production processes and post-processing conditions. In order to prevent these contamination sources, good manufacturing practices, correct sampling schemes, adequate cleaning and disinfection procedures and HACCP principles have to be applied. The use of starter cultures and the correct drying to lower the water activity can minimize the potential for growth of L. monocytogenes in Mediterranean-style fermented sausages. However, the frequent detection of L. monocytogenes at the end of ripening of these products highlights the need for food business operators to apply hurdle technology properly and to control the contamination routes of L. monocytogenes in meat processing plants.The outcome of the previous paragraphs can be summarized from a safety standpoint as follows: Mediterranean-style fermented sausages may be contaminated with"} {"text": "Mindfulness based interventions (MBIs) are increasingly used to help patients cope with physical and mental long-term conditions (LTCs). Epilepsy is associated with a range of mental and physical comorbidities that have a detrimental effect on quality of life (QOL), but it is not clear whether MBIs can help. We systematically reviewed the literature to determine the effectiveness of MBIs in people with epilepsy.Medline, Cochrane Central Register of Controlled Trials, EMBASE, CINAHL, Allied and Complimentary Medicine Database, and PsychInfo were searched in March 2016. These databases were searched using a combination of subject headings where available and keywords in the title and abstracts. We also searched the reference lists of related reviews. Study quality was assessed using the Cochrane Collaboration risk of bias tool.n\u2009=\u2009171) and China (Hong Kong) (n\u2009=\u200960). Significant improvements were reported in depression symptoms, quality of life, anxiety, and depression knowledge and skills. Two of the included studies were assessed as being at unclear/high risk of bias - with randomisation and allocation procedures, as well as adverse events and reasons for drop-outs poorly reported. There was no reporting on intervention costs/benefits or how they affected health service utilisation.Three randomised controlled trials (RCTs) with a total of 231 participants were included. 
The interventions were tested in the USA contains supplementary material, which is available to authorized users. The prevalence of stress, anxiety, and depression is higher among people with epilepsy when compared with the general population, and suicide rates are similarly elevated \u20135. The pStress is widely recognised as a risk factor for developing both anxiety and depression, and may also be a trigger for seizures in people with epilepsy . For exa\u2026psychotherapy can improve depression and anxiety in patients with epilepsy\u201d. In this context, Tang et al. ). In tern et al. found th1,106\u2009=\u20090.35, p\u2009=\u20090.555). No significant difference was found between groups on the DCSES in Thompson et al. [1,37\u2009=\u20092.14, p\u2009=\u20090.152) and Thompson et al. [equally effective regardless of anti-depressant medication or psychotherapy\u201d (however study was not powered to detect this).The intervention group in Thompson et al. were alsn et al. . In addition, a greater decrease in depressive symptoms was found among the intervention group than the TAU group which approached significance for the PHQ-9 , and when the analysis was limited to those who provided both baseline and interim data the results were significant .Thompson et al. reportedp\u2009=\u20090.125 both groups). This implies that the improvement in BDI-II scores was of no clinical significance.In Tang et al. there wap\u2009<\u20090.001, hp2\u2009=\u20090.288, 95% CI \u22126.44, \u22121.76). Using McNemar tests, it was shown that there was a clinically significant reduction in the MT group (p\u2009=\u20090.012) but not the SS group (p\u2009=\u20090.065) between pre-and post-intervention.Only Tang et al. measured1,106\u2009=\u20098.02, p\u2009=\u20090.006), associated with number of sessions attended, while the improvement in Thompson et al. [1,37\u2009=\u20093.029, p\u2009=\u20090.090). In both studies physical and mental health QOL measures improved , with the improvement in the intervention group being clinically important in more participants and statistically significantly better (\u03c72 (1)\u2009=\u20094.356, p\u2009=\u20090.037, \u03c6\u2009=\u20090.269). Other aspects of QOL were also found to have improved including energy, mood, medication effect and seizure worry.An epilepsy specific QOL measure was used in Tang et al. - Qualitn et al. approachn et al. ), but wen et al. found stThompson et al. also mea1,37\u2009=\u20094.75, p\u2009=\u20090.036 [1,106\u2009=\u20096.01, p\u2009=\u20090.016 [2,35\u2009=\u20090.47, p\u2009=\u20090.63).Change in knowledge and skills relating to depression were assessed in Thompson et al. and Thom\u2009=\u20090.036 ; F1,106\u2009\u2009=\u20090.016 ) with im\u2009=\u20090.016 noted thsignificantly negatively correlated with change in BDI score \u201d. In addition, mBDI scores were found to be significantly associated with changes in knowledge and skills scores in Thompson et al. [According to Thompson et al. , a changn et al. . This stCognitive functioning was assessed using a number of measures in Tang et al. . These ineither the participants nor the project staff were blinded to the group assignment\u201d. The study was also judged as being at high risk of attrition bias because of missing data not being described and further due to the study\u2019s repeated measures design, only participants who completed interim assessments were included in analyses. The Thompson et al. [Using the Cochrane Collaboration \u2018Risk of Bias\u2019 tool Table\u00a0, studiesn et al. 
study waThis review identified three studies which reported the findings of RCTs utilising MBIs in the treatment of people with epilepsy. The primary outcomes assessed in the three included papers were depression , 35 and \u201csmaller differences seen in self-efficacy and self-compassion, or changes seen at follow-up\u201d. In addition, all studies noted their results and generalisability were limited by taking place in just one site [This review included only three articles for final data extraction which limits the applicability of the findings. Despite positive findings, their generalisability and reliability are limited by several factors. As previously noted two of the studies were at unclear/high risk of bias - randomisation and allocation procedures, adverse events and reasons for drop-outs were poorly reported. Furthermore, two of the papers were described as being underpowered - Thompson et al. highlighone site , 36, or one site . All stuDemographic and epilepsy characteristics of participants were poorly reported in Thompson et al. and Thoma possible beneficial effect on seizure frequency\u201d while \u201cno reliable conclusions\u201d could be reached on the effects of psychological treatments on psychological outcomes and QOL [To our knowledge no other systematic reviews have focused on the use of MBIs alone for people with epilepsy. However other systematic reviews have included a wider range of psychological treatments, some involving aspects of mindfulness. A review of psychological treatments for epilepsy suggested there may be \u201c and QOL (has bee and QOL examined and QOL ) for peoAs the scope of this review was restricted to MBIs, interventions that included only some aspects of mindfulness were excluded. For example, Acceptance and Commitment Therapy (ACT) and Yoga for patients with drug-refractory epilepsy have been found to improve seizure frequency and QOL in RCTs , 43. YogThe findings of this review were limited due to the small number and poor quality of studies included. It was not possible to conduct a meta-analysis and results were therefore presented in a narrative format. In addition, we were unable to locate the full paper for one study at the full paper screening stage \u2013 the paper was therefore excluded. Furthermore, for pragmatic reasons our search strategy did not include an extensive grey literature search. However we did conduct a basic (as opposed to advanced) search of Google and searched electronic databases that index grey literature items. A strength is that as previously noted, we are not aware of another review focusing on the use of MBIs for people with epilepsy and it has therefore been valuable in determining the extent of existing research in this area.Further research is required before conclusions can be reached on the effectiveness of mindfulness as a therapeutic intervention for people with epilepsy. In order to establish longer-term outcomes of participation in MBIs, larger RCTs with longer follow-up periods and active control groups are required along with more detailed information relating to participants\u2019 epilepsy."} {"text": "Using reports of forest losses caused directly by large scale windstorms from the European forest institute database (comprising 276 PD reports from 1951\u20132010), total growing stock (TGS) statistics of European forests and the daily North Atlantic Oscillation (NAO) index, we identify a statistically significant change in storm intensity in Western, Central and Northern Europe (17 countries). 
Using the validated set of storms, we found that the year 1990 represents a change-point at which the average intensity of the most destructive storms indicated by PD/TGS\u2009>\u20090.08% increased by more than a factor of three. A likelihood ratio test provides strong evidence that the change-point represents a real shift in the statistical behaviour of the time series. All but one of the seven catastrophic storms (PD/TGS\u2009>\u20090.2%) occurred since 1990. Additionally, we detected a related decrease in September\u2013November PD/TGS and an increase in December\u2013February PD/TGS. Our analyses point to the possibility that the impact of climate change on the North Atlantic storms hitting Europe has started during the last two and half decades. As pointed out by Feser et al.Studies of historical large scale storms have mostly focused on the meteorological parameters 134et al.et al.et al.et al.20\u22121 or more. On the other hand, Seidl et al.Forests and forested area are growing quickly in Europe1213http://www.efiatlantic.efi.int/portal/databases/forestorms/). All of the storms included in the analysis have caused PD/TGS of at least 0.012%, and they were validated to be of large scale (>500\u2009km in diameter) with the reanalysed weather datasets provided by Wetterzentrale and NCAR/NCEPIn this paper, we present a systematic approach for combining forest growth12et al. (2010) totals around 960\u2009Mm3 PD in 60 years. Forest losses have been greatest in Central Europe, where they have totaled approximately 340\u2009Mm3 in 1951\u20132010. In Northern and Western Europe, the losses have been around 260\u2009Mm3 and 290\u2009Mm3, respectively. The most heavily affected countries have been France (\u2248260\u2009Mm3), Germany (\u2248240\u2009Mm3) and Sweden (\u2248220\u2009Mm3). Largest individual damages have resulted from the storms Vivian and Wiebke , Lothar and Martin , Gudrun , Kyrill ; and Klaus . The bias caused by increased TGS values was removed from our analyses by using PD/TGS, rather than PD, as the proxy variable that indicates storm intensity. Additionally, in our analyses, we only focused on a validated set of large scale storms indicated by PD/TGS\u2009\u2265\u20090.012%. The decadal PD/TGS in Europe from 1951 to 2010 for the validated set of storms is shown in TGS has increased in Western, Central and Northern Europe from 1951 to 2010 by nearlet al.\u22121 . It has recently been shown\u22121. Spruce trees that can be uprooted account roughly for 30\u201340% of the TGS in Europe. Thus, when the gust speeds exceed 42\u2009ms\u22121, the potential for forest damages increases considerably. This can be seen from www.europeanwindstorms.org ((c) Copyright Met Office, University of Reading and University of Exeter. Licensed under Creative Commons CC BY 4.0 International Licence: http://creativecommons.org/licenses/by/4.0/deed.en_GB). The blue and red areas representing uprooting and tree breakage regimes, respectively, are meant to be illustrative rather than quantitative.Usbeck et al.et al.Hanna 2-value is, in fact, very close to zero). This is broadly in agreement with the findings of Allan et al.2 value of the correlation is 0.65; if the last decade is left out, the value is 0.75. 
While this could be a result of a sampling error in a data set of just six points, Hanna et al.et al.It is clear from To answer the question whether the apparent change around 1990 is real or not, we utilize the generalized likelihood ratio testIn addition to the aforementioned procedure, we also test the hypothesis of a change-point at 1990 in a model that explicitly takes into account the effects of NAO index on PD/TGS. The model in question has the form of a Generalized Linear Model (GLM), where the covariates are the yearly-averaged NAO indices and the dependent variables are the PD/TGS values. The shape parameter of GPD is modeled as We made a systematic climatological study of 56 large scale windstorms based on data of significant forest damages (PD/TGS\u2009\u2265\u20090.012%) from 1951 to 2010 in 17 countries representing Western, Central and Northern Europe. The value of the study arises from the fact that the data is independent of meteorological observations \u2013 which, of course, do not provide complete coverage of the area \u2013 and of meteorological models.et al.et al.et al.\u22121 (see www.europeanwindstorms.org)Our results confirm that in the past three decades (1981\u20132010), PD/TGS in European forests caused by severe extratropical storms have become 3\u20134 times as large, per decade, as they were in the 1950\u2019s, 1960\u2019s and 1970\u2019s. During the past 60 years, also the TGS has almost doubled in Europe. Unlike Schelhaas et al.\u22121, that there has been a decline in European windstorms during the present century. At first sight, this appears contradictory to our finding of the change-point in 1990. However, one should note that Dawkins et al. only examined the period 1979 onward, and thus the decline is relative to the two last decades of the 20th century that were dominated by the catastrophic storms of 1990 and 1999. Our data, on the other hand, suggests that the storminess in the first decade of this century was still at a clearly higher level than it was prior to 1990.Dawkins et al.In accordance with earlier studies , the dates of occurrence of primary damages and the reanalysed weather datasets provided by Wetterzentrale and NCAR/NCEPStorm intensity was defined by PD/TGS/storm count. Storm intensity and storm count were investigated relative to the monthly NAO-index. Climatic variability of the severe large scale storms was assessed for seasons DJF and SON. The climatic assessment was carried out using annual seasonal averages but the results are presented on decadal scale to concentrate on climatic variability and change.How to cite this article: Gregow, H. et al. Increasing large scale windstorm damage in Western, Central and Northern European forests, 1951-2010. Sci. Rep.7, 46397; doi: 10.1038/srep46397 (2017).Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations."} {"text": "As the root canal system considered to be complex and unpredictable, using root canal irrigants and medicaments are essential in order to enhance the disinfection of the canal. Sodium hypochlorite is the most common irrigant in endodontics. 
Despite its excellent antimicrobial activity and tissue solubility, sodium hypochlorite lacks some important properties such as substantivity and smear layer removing ability.The aim of this review was to address benefits and drawbacks of combining sodium hypochlorite with other root canal irrigants and medicaments.According to the reviewed articles, NaOCl is the most common irrigation solution in endodontics. However, it has some drawbacks such as inability to remove smear layer. One of the drawbacks of NaOCl is its inability to remove the smear layer and lack of substantivity.The adjunctive use of other materials has been suggested to improve NaOCl efficacy. Nevertheless, further studies are required in this field. However, due to the fact that 35 to 50% of the surface area of the canal system remains untouched by endodontic instruments using an appropriate root canal irrigantis necessary in order to improve disinfection of the root canal system [In ideal form, irrigants should have antimicrobial action, tissue-dissolution activity, demineralization, lubrication, and ability to removing smear layer and debris .2As NaOCl has excellent antimicrobial action and also tissue solubility, it has been used in endodontics as the most common irrigation solution. NaOCl is available as 1-15% aqueous solutions with alkaline pH (around 11) .One important drawback of NaOCl is its inability forsmear layer removal . Therefoi.e. dentin) and are released gradually. Chlorhexidine (CHX) is a cationic bis-guanide with residual antibacterial activity. Combination of NaOCl and CHX has been proposed to increase their antibacterial action [One of the other drawbacks of NaOCl is the lack of substantivity . Substanl action .The antibacterial effect of NaOCl can be achieved in two different ways. A high concentration of chloride ions produces high cytotoxicity, which explains the excellent antibacterial effect. But at a lower pH, the high proportion of hypo chloric acid is explanation of the antibacterial effect. The fact that the proportion of chloride ions decreases in the solution does not mean that the solution will lose its antibacterial effect -4.3Adding EDTA to NaOCl can decrease the pH of NaOCl in a time-dependent manner. This may affect free chlorines in solution and increase in chlorine gas and hypochlorous acid, subsequently decreases the hypochlorite ion , 10.Equal proportions of 1-2% NaOCl with 17% EDTA may result in pH of 8.0 from an initial value of 10 after 48 hours. However, when mixed in a 1:3 ratio, pH was stable during 48-hour time, possibly because of immediate interaction between solutions . Baumgaret al. [et al. [Zehnder et al. conclude [et al. are in aet al. [et al. [Girard et al. evaluate [et al. conclude [et al. .et al. [et al. [Saquy et al. evaluateet al. , 15. Gra4The combination of CHX and NaOCl has the ability to increase their antibacterial characteristics. Furthermore, CHX has a unique property named substantivity . The assOne study using electrospray ionization concluded that 2% CHX may rapidly produce orange-brown precipitate in combination with 1% and 5.25% NaOCl, and orange-white precipitate in combination with 0.16% NaOCl . On the et al. [Some studies have been done to show the chemical composition of the flocculate created by association of CHX with NaOCl -22. Marcet al. combinedet al. [ex vivo. They showed a decrease in count of patent dentinal tubules in coronal and middle thirds of the canal between irrigation with irrigation with 5.25% NaOCl and 5.25% NaOCl/2% CHX [et al. 
[et al. [et al. [Using nuclear magnetic resonance, Krishnamurthy and Sudhakaran detectedet al. showed t [et al. studied [et al. conclude [et al. also sho [et al. .In conclusion, combination of CHX and NaOCl may cause some color changes and also the formation of an insoluble precipitate may interfere with canal seal. The canal system can be dried using sterile paper points before the final rinse by CHX.5et al. [et al. [ALX is a disinfectant with greater affinity for major virulence factors than CHX . Kim et et al. showed t [et al. in 2017 [et al. showed g6It has been shown that good results may be obtained by the association of NaOCl to CA in permanent teeth -36. Thiset al. [Zehnder et al. concludeet al. showed t7MA is used as a conditioner in dental adhesives . For smeet al. [There is only one study on the interaction between NaOCl and MA. Chandak et al. showed t8in vitro [High surface tension is one of the major drawbacks of NaOCl. Adding surfactants increase the ability of NaOCl to penetrate the main canal and canal irregularities in vitro , 46. Comet al. [et al. [Stojicic et al. concludeet al. . William [et al. revealedet al. [Another issue is the effect of surfactants on the stability of NaOCl. Adding surfactants modifies the stability of NaOCl , 50. Ethet al. evaluateet al. [et al. [Cameron et al. showed tet al. . Howeveret al. . In addi [et al. showed t9Octenisept is an antimicrobial/antibiofilm agent can be potentially combined with NaOCl during root canal treatment. A recent study showed that the whitish precipitate formed with the NaOCl-OCT mixture was identified as phenoxyethanol, a compound already present in OCT, and it may occlude dentinal tubules .Bukhary and Balto showed g10et al. [Chlor-XTRA is a new NaOCl-based irrigation solution composed of 5.85% NaOCl and a detergent to reduce surface tension. Its appearance is clear light yellow green. It is completely soluble in water with a chlorine-like odour. It is 2.6 times more digestive than regular NaOCl. Furthermore, its wetting ability is 2.5 times greater than regular NaOCl . Recentlet al. demonstret al. [et al. [Jungbluth et al. showed t [et al. showed t11et al. [A new modified NaOCl solution (Hypoclean) has been introduced by Giardino. This solution is a detergents-based irrigant composed of 5.25% NaOCl and 2 detergents . Recentlet al. showed t122 are samples of such usage that utilize chemicals with the aim of facilitating the tissue remnant removal [Root canals cannot be cleaned by physical instruments alone. Chemicals can help to supplement this procedure. Irrigation with NaOCl and intracanal placement of Ca(OH) removal .2 and NaOCl. Hasselgren et al. [2 paste has the ability of tissue dissolving after 12 days. They also showed an increase in tissue dissolving of NaOCl after pretreatment with Ca(OH)2 for 30 minutes up to 7 days. In another study, Metzler and Montgomery [2 paste for 7 days followed by irrigation with NaOCl can clean canal isthmuses better than hand mechanical preparation alone. Yang et al. [2 partially dissolved pulp tissue. The anaerobic environment could not alter the tissue dissolving property. Wadachi et al. [2 for seven days or NaOCl for >30 seconds. Combination of NaOCl and Ca(OH)2 was more effective than separate protocols. On the other hand, some researches have shown that Ca(OH)2 may be ineffective for pulpal tissue dissolving. Morgan et al. [2 for tissue dissolving. 
In summary, it seems that pretreatment with Ca(OH)2 medicament may increase the tissue dissolving effect of NaOCl.Some controversies have been reported regarding the synergistic effects of Ca(OH)n et al. showed tntgomery concludeg et al. showed ti et al. also shon et al. also sho13MTA has been produced as gray and white MTA; both with the base of Portland cement. Hydrophilic powder needs some moisture for optimal setting. Traditionally, MTA powder is mixed with supplied sterile water in a 3:1 powder/liquid ratio. Different liquids have been suggested for mixing with the MTA powder -66.et al. [Using differential scanning calorimetry, Zapf et al. concludeet al. , immersiet al. [Ballester-Palacios et al. showed tet al [et al. [Al-Anezi et al . showed [et al. assessedAccording to this study and other documents -77, it cDue to the excellent antimicrobial activity and tissue solubility, NaOCl is the most common irrigation solution in endodontics. However, it has some drawbacks such as inability to remove smear layer. One of the drawbacks of NaOCl is its inability to remove the smear layer and lack of substantivity. Therefore, the adjunctive use of other materials has been suggested to improve its efficacy.Combining NaOCl with EDTA decreases free available chlorine dramatically. However, using EDTA or CA was suggested removing the smear layer associated with mechanical instrumentation.Combination of CHX and NaOCl has been suggested to enhance their antibacterial activity and induce substantivity.Combining NaOCl with MA and CA reduces free available chlorine.Regarding adding surfactants to NaOCl, studies regarding the effect of surfactants on the tissue solubility of NaOCl are controversial. However, surfactants reduce free available chlorine in NaOCl solutions.2 may enhance the ability of tissue dissolving of NaOCl.Pretreatment with Ca(OH)NaOCl is detrimental to MTA reaction product formation. Furthermore, immersion of white MTA in NaOCl may result in the formation of brown discoloration. In addition, NaOCl significantly lowers the surface roughness of MTA, decreases its setting time, improves its handling properties and increases its cytotoxicity.Combining NaOCl with some other irrigating solutions such as MTAD, oxygen peroxide, and strong acids (like phosphoric acid and nitric acid) should be studied in the future."} {"text": "They concluded that the animals scratched \u2018primarily because of an immune/stimulus itch, possibly triggered by ectoparasite bites/movements\u2019 [acute if they have rapid onset and/or short course or chronic if they extend over a prolonged period of time. The response to different types of stressors also reflects this difference [In their interesting article, Duboscq et al. used behvements\u2019 , pp. 1\u20132fference .et al. [et al. [The authors put forth different hypotheses on self-directed behaviours related to the effect that parasitological (presence of ectoparasites), environmental and social factors have in affecting the rates of self-directed behaviours, including scratching. Such hypotheses are presented as alternative in the first part of the Introduction , p. 2 anet al. also expet al. , p. 2 oret al. , p. 11, et al. ,5 whereaet al. . In humaet al. . Whateve [et al. . In maca [et al. . In prim [et al. \u201315. Hencet al. [if but when a certain factor can be more effective than another in producing scratching. 
Stressors can be additive but they operate in the short- or long-term depending on whether they are transient (acute) or prolonged (chronic) [Once established that the different factors are not alternative and that their relationship with scratching has been demonstrated, it is worth focusing on the role that each factor can have in relation to the time scale. We will particularly focus on scratching. The framework of Duboscq et al. could bechronic) . Thus, iet al. study group, the scratching due to long-lasting ectoparasite load can be the primary component of the overall baseline scratching levels. However, this does not exclude the possibility that scratching can significantly increase\u2014compared with baseline\u2014in response to a transient stressor, such as a conflict. In the short period around the event, the aggression can be primarily responsible for scratching variation but only a sequential, temporal analysis can unveil this aspect [t0) and the tight time window (t0\u2009\u2212\u2009t1) immediately following the aggressive event in order to detect the variation of scratching levels [t0 and t1 cannot be linked to parasite load if the load is not significantly different between t0 and t1. There is no reason to believe that, in the absence of any other additional perturbing factor, the ectoparasite load varies significantly in the minutes immediately preceding and following the stressful event. Of course, an extra control measure can be taken in this respect if deemed necessary. It may be questioned that in the short term a change in the parasite activity , andow t0\u2009\u2212\u2009t immediatet al. [In conclusion, the relative weight of the different factors affecting scratching can change depending on the time scale and timing selected. The article by Duboscq et al. is valua"} {"text": "The concept of mental toughness (MT) has garnered substantial interest over the past two decades. Scholars have published several narrative reviews of this literature (e.g., Connaughton et al., r = 0.67 at p < 0.05; Cowden et al., r = 0.171 at p < 0.05; Cowden et al., Well-executed systematic reviews offer many advantages for summarizing, synthesizing and integrating findings across studies when compared with non-systematic evaluations (e.g., clear and accountable methods; see Gough et al., Given the predominance of self-reported MT among the primary studies of Lin et al.'s systematLin et al.'s findingsAs the first systematic review of the quantitative literature on the associations between MT and key correlates, I commend Lin et al. for theiThe author confirms being the sole contributor of this work and approved it for publication.The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Selegiline is used to treat Parkinsonian patients. Other indications of its use have recently been discovered.Scouting special and beneficial side effects of selegiline treatment.Two-year old male Wistar rats were daily treated with 0.25 mg/kg of selegiline s.c. (subcutaneous injection). The rats were sacrificed following a four-weeks\u2019 treatment.Mass of testes, number of sperms, progressive motility of sperms, and their viability definitely increased.Selegiline can successfully be used to stop/counterbalance certain symptoms of aging. 
Pharmacet al .Peculiar pharmacological characteristics of selegiline were discovered by Knoll and Magyar , was proet al [The original indication of selegiline has had its renaissan. Gordon et al carried et al , 9.et al [Knoll and Knolet al experimein vivo experiments on rats to show individual differences in sexual activity and also how it can be improved by treatments with selegiline [et al [tuberculum olfactorium, corpus striatum and substantia nigra in the state of resting [Preliminary pharmacological experiments aimed at the use of selegiline in the possible improvement of human reproduction power. The initial process of reproduction in some mammalian organisms contain two steps. One of them is when a male approaches a female, they copulate so that the sperms of the male enters the sexual organ of the female. Dall\u00f3 , Dall\u00f3 alegiline -23. Dalle [et al carried e [et al . Knoll ce [et al , 24 dedu resting . SelegilKnoll, Dall\u00f3 and co. -24 classAnother essentially important factor of reproduction is the entire quality of sperms that is the number of sperms, their progressive motility and viability (ratio of live versus dead sperms).et al [et al [Clinical observations by Urry et al , 26 showl [et al found thIn vitro effects of caffeine and theophylline on human semen quality were tested by Dougherty et al [-2 through 10-6 M concentrations without any change in sperm motility and viability of sperms.ty et al from 10-et al [Recent publications by Mihalik et al -31 treatet al gave the first direct proof of penetration of selegiline into the testes of rats [Tekes of rats . Followi of rats .Similar results were found when using a rabbit model . The selet al [Zieher et al determinet al ,Contrary to the effect of selegiline on the physical/copulatory activity of rats and on the brain\u2019s dopamine concentration, the concentration of deprenyl in the testes did not show any definite increase following a four-weeks\u2019 treatment using selegiline in aged rats.This paper is meant to show how selegiline treatment influences the \u201cquality\u201d of sperms following chronic treatments with their usual therapeutic dose.22.11) was the kind gift of Chinoin Chemical and Pharmaceutical Works(Budapest); its present name is Sanofi-Aventis/Chinoin .Selegiline .2.32.3.1Ten rats (five treated and five controls) were o.c. treated with a 0.25 mg/kg daily dose of selegiline for four weeks. Treatments were performed according to the experimental protocol approved by ethical committee of ANTSZ , permission number: 1810/03/2004. The experimental conditions conformed to 86/509 EEC (European Economic Community) regulation. The control group received only the physiological salt solution.2.3.2The testes and epididymes were removed and the cauda epididymes were ligated at autopsy and kept in PBS until semen evaluation. Following the separation of the complete epididymis, the testicles were weighed. The epididymes were placed in Petri dishes containing 0.5 mL PBS with 10 mg/mL of BSA (Bovine Serum Albumin). Cauda epididymes were minced with iridectomy scissors, by five deep cuts. The Petri dishes were covered and incubated at 37 \u00b0C for 10 minutes. Prior to the assessment of sperm motility, 10 \u00b5L of supernatant was dropped at the prewarmed slides and covered with a lid. The sample was held at 37 \u00b0C while the velocity parameters, including the progressive motility of minimum 100 sperms at 10 fields, were observed. 
Progressive sperm motility was evaluated using the standard method of Bearden and Fuquay . The meaThe total sperm number was determined by using a hemocytometer. Approximately, 10 \u00b5L of the diluted semen was added to 390 \u00b5L of Hank\u2019s Balanced Formaldehyde Solution. Ten microliters of semen, fixed in formaldehyde were transferred to the counting chamber of the hemocytometer and the cells were counted with the help of a light microscope (200x). The sperm cell number of ten large squares multiplied by the dilution rate gave the original sperm cell count.Sperm cell morphology was evaluated by stain of Eosin-Nigrosine and Spermac Stain at a 400x magnification as it is prescribed .2.3.3Small pieces of each testicle and adrenal gland were precisely weighed by analytical scales. The pieces were homogenized with PBS by a tissue homogenization (Janke&Kunkel Ika-Werk Ultra Turrax). The homogenates were centrifuged at 4000 rpm for 20 minutes. The supernatant was stored at - 20 \u00b0C. The stock-dilution of tissue was 1g/10ml at the homogenization which was 1:5 diluted individually for the determination of tissue hormone concentrations. Serum and tissue cortisol and testosterone concentrations were measured by ELISA . The intra-assay variability (CV) of cortisol ELISA was 3.3%; the inter-assay CV was 6.1%. The intra-assay CV of testosterone ELISA was 3.21%; the inter-assay CV was 4.74%.et al [et al [The testes were halved, homogenized in a solution of 0.1 M trifluoroacetic acid, centrifuged and their dopamine contents were measured by HPLC according to Rao et al . The anal [et al containi31).Important characteristics of sperms show definite improvement in their progressive mobility (Table 2).The average mass of the testes, number of sperms and viability (live/dead sperms) were also increased relative to these characteristics of rats in the control group. Cortisol content in the serum increased, while that in the adrenal gland significantly decreased. Selegiline treatment definitely affected the testosterone level in the sera, the adrenal glands and testes, whose levels decreased (Table 4H\u00e1rsing and Vizi did not et al [et al [Zieher et al raised tl [et al suggesteet al [et al [et al [in vitro. Ramirez et al [Our results do not correspond to those of Urry et al , 26, andl [et al as low tl [et al and Bavil [et al used catez et al publishe"} {"text": "Multiple primary lung cancer may present in synchronous or metachronous form. Synchronous multiple primary lung cancer is defined as multiple lung lesions that develop at the same time, whereas metachronous multiple primary lung cancer describes multiple lung lesions that develop at different times, typically following treatment of the primary lung cancer. Patients with previously treated lung cancer are at risk for developing metachronous lung cancer, but with the success of computed tomography and positron emission tomography, the ability to detect both synchronous and metachronous lung cancer has increased.We present a case of a 63-year-old Hispanic man who came to our hospital for evaluation of chest pain, dry cough, and weight loss. He had recently been diagnosed with adenocarcinoma in the right upper lobe, with a poorly differentiated carcinoma favoring squamous cell cancer based on bronchoalveolar lavage of the right lower lobe for which treatment was started. Later, bronchoscopy incidentally revealed the patient to have an endobronchial lesion that turned out to be mixed small and large cell neuroendocrine lung cancer. 
Our patient had triple synchronous primary lung cancers that histologically were variant primary cancers.Triple synchronous primary lung cancer management continues to be a challenge. Our patient\u2019s case suggests that multiple primary lung cancers may still occur at a greater rate than can be detected by high-resolution computed tomography. In both men and women, lung cancer is the leading cause of cancer-related death. Risk factors associated with lung cancer include environmental risk factors, genetic factors, and tobacco smoke exposure. Patients with lung cancer are at an increased risk of developing secondary lung cancers at the same time as the first (synchronous) or later in life (metachronous). In these cases, it is critical to determine whether the secondary tumor is an independent primary tumor or a recurrence or metastasis of the first primary tumor, because this determination influences how the disease is staged and managed as well as the patient prognosis.et al. [In 1924, the first case of two distinct primary lung cancers was published by Beyreuther et al. , whereaset al. .We identified a unique case of triple SLC, including adenocarcinoma, squamous cell carcinoma, and mixed small and large cell neuroendocrine carcinoma, which has poor prognostic implications. Biopsy of the other lesions helped us differentiate metastatic disease from primary lung cancer. The fact that our patient had three different primary cancers made treatment challenging.A 63-year-old homeless Hispanic man diagnosed with lung cancer 5\u00a0months previously in our hospital presented with ongoing sharp chest pain of 2\u00a0weeks\u2019 duration, shortness of breath, chronic dry cough, and weight loss of 20 pounds within 2\u00a0months. The patient also had experienced physical activity limitations for the previous 2\u00a0months. He denied experiencing diarrhea, nausea, vomiting, or abdominal pain, and he did not have anorexia, fever, chills, or phlegm production.The patient\u2019s past medical history included a cerebrovascular accident with residual left-sided weakness, bronchial asthma, lung adenocarcinoma (stage IIIB) diagnosed by computed tomography (CT)-guided right upper lobe biopsy 5\u00a0months previously. He was receiving adjuvant cisplatin-based chemotherapy. His social history included 55 pack-years of smoking and a family history of cancer; his mother had colon cancer, and his brother had an unknown type of cancer. He had no environmental exposures. He was taking aspirin and atorvastatin and was using an albuterol inhaler and a fluticasone inhaler.The patient was cachectic and afebrile on physical examination, with a blood pressure of 146/84\u00a0mmHg, and his oxygen saturation was 94% with administration of 2\u00a0L of oxygen by nasal cannula. He had bilateral grade 3 clubbing and a bronchial breath sound in the right upper lobe. Cardiovascular examination revealed normal heart sounds, and the patient\u2019s abdomen was soft with no organomegaly. In addition, laboratory analyses of the patient\u2019s peripheral blood and urine were normal, although his sodium levels were low.A chest x-ray showed a persistent right upper lobe mass Fig.\u00a0. This wa1)/forced vital capacity ratio, moderately decreased FEV1, and moderate obstruction.In view of the patient\u2019s new symptoms of acute chest pain and shortness of breath, repeat chest CT was done, which ruled out pulmonary embolism but showed a persistent right upper lobe mass with new mediastinal lymphadenopathy. 
Positron emission tomography (PET) was performed, which revealed hypermetabolic activity in the 6-cm mass in the right upper lobe , right hilar nodes (SUV 2.8), and 1.5-cm left hilar lymph nodes (SUV 4.9) Fig.\u00a0. No hypeThe patient underwent bronchoscopy for further evaluation prior to the planned mediastinoscopy. An endobronchial lesion in the left main stem Fig.\u00a0 was an iWe identified a rare case of triple SLC, including adenocarcinoma, squamous cell carcinoma, and mixed small and large cell neuroendocrine carcinoma.et al. [MPLC can be synchronous (occurring simultaneously) or metachronous (occurring at different times) . The criet al. , is a seet al. .et al. [Antakli et al. made modet al. . They adet al. .field cancerization, suggests that carcinogenic exposure or genetic factors affect tissues or organs, potentiating many cells in the same area to become transformed [There are multiple challenges associated with diagnosing a second primary lung cancer. First, if the secondary tumor arises within 2\u00a0years of the primary tumor, it is challenging to determine whether that tumor is a new primary tumor or a result of a residual metastasis from the original primary tumor. Second, if the secondary tumor is of the same histological type, it complicates the diagnosis of a new cancer because it is plausible that a secondary tumor of the same histological type could be an extension of the original cancer. Third, if the secondary tumor arises in an area impacted by radiotherapy (given as a treatment for the primary tumor), there can be further uncertainty whether the secondary tumor is a result of metastasis from the original tumor rather than a new cancer. Additionally, a synchronous second primary cancer judged to have associated metastatic disease would classify the patient as stage IV, which would likely contraindicate a potentially curative resection. The erroneous designation of a metachronous second primary cancer as a local recurrence might have similar therapeutic implications. A steadily growing hypothesis, known as nsformed .p53 tumor suppressor gene (chromosome 17) occur frequently in lung carcinoma, with rates up to 70% in small cell lung cancer and 50% in non-small cell lung cancer. Studies have shown that p53 mutational analysis is good diagnostic tool for diagnosing multiple SLC in 35 to 66% of cases [Analysis of the clonal origin of tumors can help determine whether multiple lung tumors arise from the same clone and therefore the same tumor. Mutations in the of cases . In addiof cases . Analysiof cases . This anof cases .et al., who showed that surgical intervention is safe and effective for patients with resectable metachronous lung cancer and good pulmonary reserve [stereotactic ablative radiotherapy), or percutaneous image-guided tumor ablation [Multiple synchronous primary lung cancers should be treated as two separate and distinct tumors, including their staging and treatment . Managem reserve . Howeverablation , 17.et al. [et al. [et al. [et al. [et al. [Multiple retrospective analyses have illustrated the prognostic implications of MPLC. Lee et al. demonstr [et al. demonstr [et al. reported [et al. illustra [et al. showed tet al. [et al. [et al. [There are four histological lung tumor types: adenocarcinoma, squamous cell, and mixed large and small cell carcinoma. Usually, lung cancers have a single histological type; however, there are reports of patients with MPLC with different histology, including our patient. Jung-Legg et al. reported [et al. 
describe [et al. reportedMPLC can be defined as synchronous or metachronous, and the management of patients with MPLC continues to be a challenge. Although the use of high-resolution CT has increased the ability to diagnose MPLC, our patient\u2019s case suggests that MPLCs may still occur at a greater rate than can be detected."} {"text": "B\u00f6ttiger et al. present n\u2009=\u200949 to n\u2009=\u200995,072. Given that the total number of analysed cases is n\u2009=\u2009126,829, the study by Yasunaga et al. -physicians typically provide more optimal post-return of spontaneous circulation treatments, including therapeutic hypothermia and percutaneous coronary intervention\u2019 . While tn\u2009=\u200918,462). Hagihara et al. [The same limitations apply to the second largest study is a retrospective analysis of two previous publications, independently describing survival after OHCA in the UK (paramedic-based EMS) and in Germany (physician-based EMS). While survival in Germany was significantly higher, ambulance response times were also shorter in Germany. No information is available on important prognostic factors such as age of patients or percentage of cases with shockable rhythm.The third-largest study by Fischer et al. (n\u2009=\u2009429n\u2009=\u20094144) is a conference abstract presenting limited information. The authors again used the national Japanese OHCA database and the period of data collection overlaps with both Japanese studies described earlier.The work by Kojima et al. of patients suffering from OHCA. The pooled sample size with n\u2009=\u2009126,829 was dominated by two Japanese studies [They correctly mention significant heterogeneity among the study sizes (ranging from studies , making Second, von Vopelius-Feldt and Benger point out that the study by Fischer et al. did not Third, it is correct that the cited publication by Kojima et al. is a conA major limitation in this whole scientific field \u2013 and as discussed in our publication \u2013 is tha"} {"text": "To study the effect of dietary supplementation of lecithin and carnitine on growth performance and nutrient digestibility in pigs fed high-fat diet.A total of 30 weaned female large white Yorkshire piglets of 2 months of age were selected and randomly divided into three groups allotted to three dietary treatments, T1 - Control ration as per the National Research Council nutrient requirement, T2 - Control ration plus 5% fat, and T3 - T2 plus 0.5% lecithin plus 150 mg/kg carnitine. The total dry matter (DM) intake, fortnightly body weight of each individual animal was recorded. Digestibility trial was conducted toward the end of the experiment to determine the digestibility coefficient of various nutrients.There was a significant improvement (p<0.01) observed for pigs under supplementary groups T2 and T3 than that of control group (T1) with regards to growth parameters studied such as total DM intake, average final body weight and total weight gain whereas among supplementary groups, pigs reared on T3 group had better intake (p<0.01) when compared to T2 group. 
Statistical analysis of data revealed that no differences were observed (p>0.05) among the three treatments on average daily gain, feed conversion efficiency, and nutrient digestibility during the overall period.It was concluded that the dietary inclusion of animal fat at 5% level or animal fat along with lecithin (0.5%) and carnitine (150 mg/kg) improved the growth performance in pigs than non-supplemented group and from the economic point of view, dietary incorporation of animal fat at 5% would be beneficial for improving growth in pigs without dietary modifiers. Swine industry has a major economic impact on agriculture throughout the world. Compared to other livestock species, pig rearing is considered to be more advantageous due to its low investment for farming, quick returns, better feed conversion efficiency, higher fecundity, short generation interval, and significance in improving socioeconomic status of weaker section of the society ,2. Pork Feed cost plays a major role in determining profitability of swine production and feed itself contributes 65-75% of total cost of production . Cereal In swine diet, fat utilization can be improved by feed additives such as lecithin and L-carnitine. For proper utilization of fat by animals, they should be digested and absorbed well in gastrointestinal tract ,13. BecaAddition of higher quantities of fat in the diet for pigs may cause oxidative stress and to prevent this stress antioxidant such as vitamin E (\u03b1-tocopheryl acetate) is to be included in the diet. Antioxidants will protect fat from oxidation and thereby prevent the production of rancidity substances ,13. ConsThe experimental design and plan of this study strictly followed the norms of the Institutional Animal Ethics Committee of Kerala Veterinary and Animal Sciences University (KVASU), Pookode, Kerala. Requisite permission for the selection of animals and laboratory analysis was granted by Academic Council and Institutional Animal Ethics Committee of KVASU, Pookode, Kerala.The study was conducted at Centre for Pig Production and Research, College of Veterinary and Animal Sciences, Mannuthy, Thrissur.A total of 32 weaned female large white Yorkshire piglets of 2 months of age were selected and randomly divided into three dietary groups as uniformly as possible with regard to weight. Each group had four replicates with two piglets per replicate in control group (T1) and three piglets per replicate in experimental groups (T2 and T3). All piglets were maintained under uniform management conditions throughout the experimental period of 98 days. Each piglet was fed with standard grower and finisher ration containing 18% and 16% of crude protein (CP) and 3265 kcal of metabolizable energy per kg of feed. The three groups of piglets were allotted to three dietary treatments as follows: T1 intake. The piglets were weighed at the beginning of the experiment and subsequently at fortnightly intervals to estimate total weight gain, average daily gain, and feed conversion efficiency, respectively. A digestibility trial was conducted toward the end of the experiment to determine the digestibility coefficient of the nutrients and availability of minerals like calcium and phosphorus by total collection method.Chemical compositions of feed and fecal sample were analyzed as per methods described in Association of Official Analytical Chemists .Data collected on various parameters were statistically analyzed by completely randomized design method as described by Snedecor and Cochran . 
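As an illustration of how data from a completely randomized design with three dietary treatments might be analyzed, the sketch below runs a one-way ANOVA on hypothetical weight-gain values. The treatment labels (T1, T2, T3) follow the study, but the numbers, the choice of Python/SciPy, and the follow-up comparison mentioned in the comments are assumptions for demonstration only, not the authors' actual analysis.

```python
# Hedged sketch: one-way ANOVA for a completely randomized design (CRD)
# with three dietary treatments (T1, T2, T3). The weight-gain values below
# are hypothetical placeholders, not data from the study.
from scipy import stats

# Hypothetical total weight gain (kg) per animal in each treatment group
t1 = [38.2, 40.1, 37.5, 39.0, 41.3, 38.8, 40.6, 39.4]
t2 = [43.5, 45.2, 44.1, 46.0, 42.8, 44.7, 45.5, 43.9]
t3 = [44.0, 46.3, 45.1, 44.8, 47.2, 45.9, 46.5, 44.4]

# In a CRD each experimental unit is assigned to a treatment at random,
# so the standard analysis is a one-way ANOVA across the treatment groups.
f_stat, p_value = stats.f_oneway(t1, t2, t3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A p-value below 0.05 would indicate that at least one treatment mean differs;
# pairwise follow-up tests (e.g., Tukey's HSD) would then locate the difference.
```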
Means wst, 2nd and 12th week whereas supplementary groups (T2 and T3) showed similar and better intake than control group in all weekly intervals except 5th, 6th and 9th week where combination group (T3) showed higher intake than T1 and T2 group. This is in agreement with the findings of Overland and Sundstol [et al. [et al. [Data on weekly average feed intake of pigs given the three experimental rations T1, T2 and T3 are presented in Sundstol and Piao [et al. who had [et al. observedL-carnitine serves as a co-substrate for the enzyme carnitine acyltransferase for reversible acetylation of coenzyme A and thereby acts as a carrier for transport of long-chain fatty acid from the cytosol into the inner mitochondrial membrane for undergoing \u03b2-oxidation of fatty acids ,23. Carnet al. [et al. [Danek et al. suggesteet al. ,21,29 re [et al. revealed [et al. . For ass [et al. ,32.et al. [et al. [et al. [et al. [et al. [The data with regard to fortnightly average body weight of pigs are presented in et al. and Brumet al. who had [et al. observed [et al. had repo [et al. reported [et al. reported [et al. reportedet al. [et al. [et al. [et al. [et al. [et al. [Statistical analysis of data presented in et al. who had [et al. had obse [et al. had repo [et al. inferred [et al. observed [et al. by supplet al. [et al. [The apparent digestibility of nutrients and availability of minerals in the experimental rations estimated from digestibility trial in pigs belonging to three dietary treatments are represented graphically in Figures-et al. reported [et al. noticed The cost of feed per kg for three rations for the overall period was Rs. 24.38, 25.41 and 27.56, respectively. The cost of ingredients used for this study was as per the rate contract fixed for the supply of various feed ingredients to the farm for the year 2014-2015. The cost of feed per kg body weight gain of pigs maintained on the three dietary treatments was Rs. 77.37, 75.03 and 80.00 for the overall period and the values were statistically similar.The dietary incorporation of animal fat at 5% level or animal fat along with lecithin (0.5%) and carnitine (150 mg/kg) had significantly improved the total DM intake, average final body weight, and total weight gain in pigs than non-supplemented group but no differences were observed among them for average daily gain, feed conversion efficiency, and nutrient digestibility during the overall period. From the economic point of view, dietary supplementation of animal fat at five per cent level (T2) would be beneficial for improving the growth in weaned Large White Yorkshire pigs without dietary modifiers.KA was involved in the design of the study. AS carried out the experiment, collection, and analysis of the data and prepared the first draft of the manuscript under the guidance of KA. KA, PG revised the manuscript. PSB drafted and edited the manuscript."} {"text": "Gallus gallus, at least 8 maternal lineages have been identified. While breeds distributed westward from the Indian subcontinent usually share haplotypes from 1 to 2 haplogroups, Southeast Asian breeds exhibit all the haplogroups. The Vietnamese Ha Giang (HG) chicken has been shown to exhibit a very high nuclear diversity but also important rates of admixture with wild relatives. 
Its geographical position, within one of the chicken domestication centres ranging from Thailand to the Chinese Yunnan province, increases the probability of observing a very high genetic diversity for maternal lineages, and in a way, improving our understanding of the chicken domestication process.Chickens represent an important animal genetic resource and the conservation of local breeds is an issue for the preservation of this resource. The genetic diversity of a breed is mainly evaluated through its nuclear diversity. However, nuclear genetic diversity does not provide the same information as mitochondrial genetic diversity. For the species A total of 106 sequences from Vietnamese HG chickens were first compared to the sequences of published Chinese breeds. The 25 haplotypes observed in the Vietnamese HG population belonged to six previously published haplogroups which are: A, B, C, D, F and G. On average, breeds from the Chinese Yunnan province carried haplotypes from 4.3 haplogroups. For the HG population, haplogroup diversity is found at both the province and the village level (0.69).G. gallus spadiceus. However, there was no geographical evidence of gene flow between wild and domestic populations as observed when microsatellites were used.The AMOVA results show that genetic diversity occurred within the breeds rather than between breeds or provinces. Regarding the global structure of the mtDNA diversity per population, a characteristic of the HG population was the occurrence of similar pattern distribution as compared to In contrast to other chicken populations, the HG chicken population showed very high genetic diversity at both the nuclear and mitochondrial levels. Due to its past and recent history, this population accumulates a specific and rich gene pool highlighting its interest and the need for conservation. Gallus gallus gallus is the primary maternal ancestor of the domestic chicken and one in the Indian subcontinent. Furthermore, in a recent study, Errikson et al. [G. sonneratii).Chickens represent an important protein source for humans, as shown by a strong increase of poultry production around the world . Local populations contribute more specifically, to family poultry production, which is quite important for low income farmers from Africa, Asia, Latin America and the South Pacific. These local populations that are easy to raise, are resilient to harsh environmental conditions and may harbour original features of disease resistance . Within esticus; ). Liu et [et al. showed tn et al. highlighet al. [Liu et al. performeet al. ,9,10. Soet al. ,11. In Let al. ,13. In Aet al. . The hetOn the basis of microsatellite information, African and Asian local chicken populations showed high genetic diversity ,5,4,11. In a previous analysis, we demonstrated that the local population of Vietnamese chickens, namely the H'mong chicken, showed a high genetic diversity and could not be subdivided into subpopulations . This poet al. [et al. [G. g. gallus and one population of G. g. spadiceus from Thailand (Gg1). Thus, a total of 30 new sequences of Red junglefowl were added.Vietnamese chickens originate from the northern Ha Giang province bordering the Yunnan Chinese province. The local chicken population from this province was previously considered to belong to the H'mong Black skin chicken. However previous genetic data demonstrated that no breed differentiation and no congruence between the black phenotype and the genetic structure occurred in the province . Therefoet al. 
, chicken [et al. : two popet al. [et al. [Details on chicken blood collection and DNA extraction were described previously in Berthouly et al. . Laws an [et al. . PCR proGallus gallus, involving 4 subspecies and 437 Gallus gallus domesticus (40 Chinese breeds) already analysed by Liu et al. [HM462082-HM462217).Vietnamese samples were compared with 93 wild u et al. . The seqet al. [h) diversity, nucleotide (\u03c0) diversity, pairwise differences, and a Minimum Spanning Tree (MST) from Kimura-2 parameter distances betweenes , analyset al. [et al. [G. gallus murghi, with sequences from Sri Lanka chickens published by Silva et al. [et al. [In order to have a more general view of the distribution of chicken haplogroups, we combined our data with haplotypes published by Oka et al. on Japan [et al. on Indiaa et al. and with [et al. . We used [et al. ) to esti [et al. . Applyin [et al. and the et al. [Gallus gallus from Hano\u00ef, H and E [For the HG chicken population, we obtained a segment of 506 bp of mtDNA HVS-I sequence. Among the 106 sequences, 33 variables sites were found, involving 36 mutational events, which defined 25 haplotypes. All but 6 mutations were transitions. The 25 haplotypes observed belonged to six haplogroups previously found by Liu et al. which ar Africa) ,11-13. TG. g. gallus Gg3 population from Vietnam, 2 haplotypes, G6 and G25 were observed. This corroborates the result from Liu et al. [G. gallus gallus population (i.e. Gg2). Haplotypes from clade A were not observed in such a population in previous published studies. In both populations from Thailand, Gg1 and Gg2, we found three new haplotypes from clade I. Until now, this clade was only represented by 2 haplotypes in samples originating from Vietnam [From our u et al. , who onl Vietnam . Thus, c(h) and nucleotide diversity (\u03c0) per commune ranged from 0.75 to 1 (mean h = 0.86 \u00b1 0.088) and from 0.007 to 0.019 (mean \u03c0 = 0.013 \u00b1 0.004) respectively (Table \u03c0 = 0.013 \u00b1 0.005).In the HG population, haplotype diversity ly Table . The higet al. [As aforementioned, six haplogroups where found in the HG province. Only the Chinese Lv'erwu breed exhibited a similar rate. On average, breeds from the Yunnan province carried haplotypes from 4.3 haplogroups. With the HG population, this haplogroup diversity was not only found at the province level, but also at the village level. For villages where at least three animals were sampled, haplotype diversity averaged 0.86 and haplogroup diversity 0.69. This means that within a village, nearly two chickens out of three carried sequences from two different matriarchal haplogroups. This is extremely high and implies that conserving chickens from only one village from the HG province would make it possible to maintain more matriarchal lineages than would the conservation of African, Chilean or Indian local chickens ,13,14. Tet al. . This poet al. . Poolinget al. . These rK2P distance. This analysis excluded the four breeds that carried only one haplotype. The K2P plot . When the breeds were grouped according to geographic location (provinces), the major part of diversity was present within breeds 80.2%), while only a fraction was diagnostic of the provinces (4.8%). The remaining variation was present between breeds within each of the provinces (14%). Multidimensional scaling was constructed using %, while 2 test; P = 0.251). In other words, no geographic distribution pattern of haplogroups was observed in the Ha Giang province. 
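The haplotype diversity values quoted above can be recomputed from haplotype frequencies using the standard unbiased estimator h = n(1 − Σp\u1d62²)/(n − 1) (Nei). The sketch below is illustrative only: the haplotype counts in the example are hypothetical, and the use of Python is an assumption for demonstration rather than part of the original analysis.

```python
# Hedged sketch: Nei's unbiased haplotype (gene) diversity,
#   h = n * (1 - sum(p_i^2)) / (n - 1),
# where p_i is the frequency of haplotype i and n the number of sequences sampled.
def haplotype_diversity(counts):
    n = sum(counts)
    if n < 2:
        raise ValueError("need at least two sequences")
    freqs = [c / n for c in counts]
    return n * (1.0 - sum(p * p for p in freqs)) / (n - 1)

# Hypothetical example: 10 sequences from one village spread over 4 haplotypes.
# These counts are placeholders, not the actual Ha Giang data.
counts = [4, 3, 2, 1]
print(f"h = {haplotype_diversity(counts):.2f}")  # ~0.78 for this made-up sample
```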
This result, in agreement with nuclear data [Commune average pairwise differences were only significant for three pairwise commune comparisons: communes 65 and 40; communes 65 and 20; and communes 88 and 7. Two major haplotypes B1 and A1 occurred at frequencies 33% and 11% respectively overall in the HG population ..G. gallu2 test; P = 0.451) between admixed vs. non-admixed were observed. Even if some farmers admit catching eggs in the forest of these admixed communes, the occurrence of such a practice may be lower than hybridisation between wild cocks and domestic hens. This implies that hybrid offspring of crosses between domestic cocks and junglefowl hens are uncommon or rarely back-crossed into the domestic population while the hybrid offspring of domestic females and wild males could be more easily integrated into the domestic population. Thus, such communes can not be discriminated using mtDNA but they can be discriminated with the use of microsatellites which take into account male as well as female mediated gene flows. Examination of wild sympatric populations from the Ha Giang province would be necessary to ascertain the presence or absence of mtDNA gene flow from wild to domestic populations.In a previous analysis of nuclear diversity , four coet al. and Silva et al. this haplogroup belongs to an ancestral population from the Indian subcontinent that is now extinct. Furthermore, Haplogroup C has mainly been found in Japanese breeds that originate from commercial exchanges with Korea, Taiwan or the eastern provinces of China [We combined all sequences published by Oka et al. (; Japanes[et al. (; Indian et al. (; Sri Lan([et al. . We foun [et al. , we founs Figure . An impou et al. . Moreoveu et al. , 45% of [et al. . Haplogr [et al. . Assumin. murghi , but onl. gallus . Therefoof China . This haThe Vietnamese HG chicken showed high genetic diversity at both the nuclear and the mitochondrial level when compared to other local breeds. This genetic diversity is the result of many factors. One of the most important is the geographical position within a domestication centre area. Furthermore, we should not underestimate the impact of human and livestock migrations. This genetic diversity has been moreover supplied by the continuous current gene flow between domestic and wild population as shown with nuclear DNA . Hence, et al. [et al. [However, the widespread occurrence of free-ranging livestock is raising fear that introgressive hybridisation with wild populations would lead to a loss of biodiversity in the wild populations that could decrease their fitness. Chazara et al. suggeste [et al. showed tAll authors read and approved the final manuscript. NVT carried out sample collection; CB carried out sample collection, sequencing, the computational analysis and prepared the manuscript; GM created the distribution maps, XR and JRM participated in the computational analysis and preparation of the manuscript; BB and NB participated in the sequencing, MTB contributed to the revision of the manuscript; EV participated in the design of the study and the revision of the manuscript; VCC participated in the coordination of the study; JCM participated in the design, coordination of the study, and revision of the manuscript.DOC Samples information. 
Summary of breeds used, their origin, number of individuals and GenBank accession numbers.Click here for file"} {"text": "Littoral cell angioma is a rare primary vascular neoplasm of the spleen, composed of littoral cells that line the splenic sinuses of the red pulp. It was thought to be a benign, incidental lesion. However, many recent reports have described it to be a malignant lesion with congenital and immunological associations. The definitive diagnosis can only be made after histology and immunohistochemistry studies. She had been diagnosed with a thyroid adenoma with cystic degeneration 20 years ago and had persistent complaints of fatigue for the last 2 years. On physical examination, she was found to have hepatosplenomegaly. Blood examination showed moderate anemia; liver and renal function tests were normal. CT scan and MRI were performed. No splenic mass was seen on nonenhanced CT; however,A 47-year-old Chinese male was found to have multiple splenic lesions. On B-mode USG, hemangioma was suspected and CT scan showed multiple lesions of the spleen . The patet al.[13et al.[34LCA is usually asymptomatic >55%) and is only discovered incidentally. It has no predilection for any particular age-group or sex; cases have been reported over the age range of 1\u201377 years (median age: 50 years) and in both sexes 5. Howeve5% and iset al. females et al.1 Recent ret al.134 Associet al.13 Bi et al3et al.348On unenhanced CT scan, LCA masses are only rarely visible ; however49In conclusion, LCA, is a rare vascular benign tumor of the spleen that may portend malignancy\u20134 and ma"} {"text": "Human enteric viruses are causative agents in both developed and developing countries of many non-bacterial gastrointestinal tract infections, respiratory tract infections, conjunctivitis, hepatitis and other more serious infections with high morbidity and mortality in immunocompromised individuals such as meningitis, encephalitis and paralysis. Human enteric viruses infect and replicate in the gastrointestinal tract of their hosts and are released in large quantities in the stools of infected individuals. The discharge of inadequately treated sewage effluents is the most common source of enteric viral pathogens in aquatic environments. Due to the lack of correlation between the inactivation rates of bacterial indicators and viral pathogens, human adenoviruses have been proposed as a suitable index for the effective indication of viral contaminants in aquatic environments. This paper reviews the major genera of pathogenic human enteric viruses, their pathogenicity and epidemiology, as well as the role of wastewater effluents in their transmission. Patients suffering from viral gastroenteritis or viral hepatitis may excrete about 10of stool . Consequof stool .Human enteric viruses are causative agents of many non-bacterial gastrointestinal tract infections, respiratory infections, conjunctivitis, hepatitis and other serious infections such as meningitis, encephalitis and paralysis. These are common in immunocompromised individuals with high morbidity and mortality attributable to these infections in both developed and developing countries. 
Most cases of enteric virus infections have particularly been observed to originate from contaminated drinking water sources, recreational waters and foods contaminated by sewage and sewage effluents waters .Wastewater treatment processes such as the activated sludge process, oxidation ponds, activated carbon treatment, filtration, and lime coagulation and chlorination only eliminate between 50% and 90% of viruses present in wastewater , the use2.A diverse range of enteric virus genera and species colonize the gastrointestinal tracts of humans producing a range of clinical manifestations and varying epidemiological features. From a public health perspective, the most important of these are the rotaviruses, adenoviruses, noroviruses, enteroviruses as well as Hepatitis A and E viruses.2.1.Rotaviruses are large 70 nm nonenveloped icosahedral viruses that belong to the family Reoviridae . A rotavThere are seven species of rotaviruses, designated A to G, of which groups A\u2013C infect humans . At leasN-acetylneuraminic acid residues on the cell surface membrane of the host cell, followed by VP5* or directly by VP5* without the involvement of sialic acid residues. In both cases, the identity of the receptors has remained unclear although, they are thought to be part of lipid micro domains [Rotaviruses infect mature enterocytes in the mid and upper villous epithelium of the host\u2019s small intestines . During domains . As a re domains . After c domains .+ ions, results in the transit of undigested mono- and disaccharides, fats, and proteins into the colon. The undigested bolus is osmotically active, resulting in impairment water absorption by the colon which leads to an osmotic diarrhea [The pathology of rotavirus infections have been based on a few studies of the jejunal mucosa of infected infants which have revealed shortening and atrophy of villi, distended endoplasmic reticulum, mononuclear cell infiltration, mitochondrial swelling and denudation of microvilli . Rotavirdiarrhea . The cladiarrhea . Severe diarrhea .Rotaviruses have been recognized as the leading cause of severe diarrhea in children below 5 years of age, with an estimated 140 million cases and about 800,000 deaths and about 25% of all diarrheal hospital admissions in developing countries each year . Group A2.2.Human enteroviruses are members of the family Picornaviridae, which consist of nonenveloped virus particles containing a 7,500-nucleotide single-stranded positive sense RNA genome protected by an icosahedral capsid . The gen\u03b12\u03b21, \u03b1v\u03b23, and \u03b1v\u03b26), decay-accelerating factor , the coxsackievirus-adenovirus receptor (CAR), and intracellular adhesion molecule 1 (ICAM-1) [The pathogenicity of enteroviruses is mediated by an arginine-glycine-aspartic acid (RGD) motif found on the viral capsid proteins of the picornavirus family . About s(ICAM-1) . Typical(ICAM-1) .Most enterovirus infections are asymptomatic or result in only mild illnesses, such as non-specific febrile illness or mild upper respiratory tract infections. However, enteroviruses can also cause a wide variety of clinical illnesses including acute haemorrhagic conjunctivitis, aseptic meningitis, undifferentiated rash, acute flaccid paralysis, myocarditis and neonatal sepsis-like disease . EnterovOne of the most distinctive enterovirus diseases is poliomyelitis. It is almost invariably caused by one of the three poliovirus serotypes. Polioviruses may also cause aseptic meningitis or nonspecific minor illness . 
The nor2.3.Adenoviruses are nonenveloped viruses, about 90 nm in diameter with a linear, double-stranded DNA genome of 34\u201348 kb and an icosahedral capsid . On the v\u03b23 or \u03b1v\u03b25 integrins leading to endocytosis [Adenovirus infection of host epithelial cells is mediated by the fibre and penton base capsid proteins. In the case of adenovirus subgroups A and C\u2013F, the attachment to cells is mediated by a high affinity binding of the fiber protein to a 46 kDa membrane protein known as the coxsackie adenovirus receptor (CAR), a member of the immunoglobulin receptor super family serving as a cell to cell adhesion molecule in tight junctions . Subgrouocytosis .The major receptor for adenoviruses, CAR is not normally accessible from the apical surfaces. As a result, the initial adenovirus infection is presumed to occur through transient breaks in the epithelium allowing the luminal virus to reach its receptor or during the repair of injured epithelium when CAR might be accessible . FollowiAdenovirus infections occur worldwide throughout the year . The ser2.4.Noroviruses are members of the family Caliciviridae . NorovirThe VP1 subunit consists of a shell (S) and a protruding (P) domain that is made up of a middle P1 and a distal P2 subdomains . While tNoroviruses are a major cause of acute viral gastroenteritis, affecting people of all age groups worldwide . Outbrea3.Municipal wastewater is a mixture of human excreta (sewage), suspended solids, debris and a variety of chemicals that originate from residential, commercial and industrial activities . Raw sewThe assessment of the microbiological quality of wastewater effluents has traditionally depended on indicator organisms, such as coliforms or enterococci, which however do not always reflect the risk of other microbial pathogens such as viruses, stressed bacterial pathogens and protozoa . In partViral pathogens have frequently been detected in waters that comply with bacterial standards ,62. Huma4.Enteric viruses in wastewater treatment plants are removed by a combination of irreversible adsorption as well as inactivation by disinfectants . ProcessThe inactivation of viruses by disinfection is a process affected by suspended particles. Disinfection relies on the ability of either chemical disinfectant molecules or high-energy photons (in the case of UV disinfection) coming into contact with the viruses . Chemica5.et al. [Escherichia coli and Enterococcus faecalis were rapidly inactivated by chlorine with inactivation levels of (>5 log10 units) while there was poor inactivation (0.2 to 1.0 log10 unit) of F+-specific RNA (FRNA) bacteriophage (MS2) at doses of 8, 16, and 30 mg/liter of free chlorine. Armon et al. [The study of the inactivation of enteric viruses following wastewater disinfection is complicated by the low and variable levels of enteric viruses frequently seen in effluents . Researcet al. , observen et al. also shon et al. .Ct values, defined as disinfectant concentration (C) multiplied by the contact time (t) between the disinfectant and microorganism. The recommendations direct that public utilities must ensure a 4-log (Ct 99.99%) inactivation of viruses [In the United States, the Environmental Protection Agency (EPA) recommends the use of an additional criterion for the evaluation of water disinfection based on viral inactivation. The standard makes use of viruses .6.\u22123 to 1.0 \u00d7 102 liter\u22121 depending on the level of treatment [et al. 
[The inability of wastewater treatment systems to ensure a complete inactivation of viruses in wastewater effluents has serious implications on public health. Virus levels in treated wastewater, measured by cell culture assay, range from 1.0 \u00d7 10reatment . Human e [et al. detected [et al. . Even hi [et al. .7.The discharge of inadequately treated sewage water has a direct impact on the microbiological quality of surface waters and consequently the potable water derived from it. The inherent resistance of enteric viruses to water disinfection processes means that they may likely be present in drinking water exposing consumers to the likelihood of infection. In one study, Human adenoviruses were detected in about 22% of river water samples and about 6% of treated water samples in South Africa . In anotEnteric viruses are the most likely human pathogens to contaminate groundwater. Their extremely small size, allows them to infiltrate soils from contamination sources such as broken sewage pipes and septic tanks, eventually reaching aquifers. Viruses can move considerable distances in the subsurface environment with penetration as great as 67 m and horizontal migration as far as 408 m . In a stet al. [3 virus particles\u00b7liter\u22121 in recreational beaches in America. Moc\u00e9-Llivina et al. [Another important human exposure pathway is through recreational waters. Human enteric viruses have frequently been detected in coastal waters receiving treated wastewater effluents. Xagoraraki et al. reporteda et al. detecteda et al. .et al. [et al. [Numerous outbreaks of enteric virus associated diarrhea have been linked to the consumption of water contaminated with viruses. Kukkula et al. , showed [et al. reported8.et al. [et al. [Viral contaminants may persist on food surfaces or within foods for extended periods . Pre-haret al. showed a [et al. using seet al. [Post-harvest contamination of raw food may occur as a result of human handling by workers and consumers, contaminated harvesting equipment, transport containers, contaminated aerosols, wash and rinse water or cross contamination during transportation and storage . Recontaet al. using seet al. .et al. [Probably one of the most recognized food borne transmission of enteric virus infections is through the consumption of shellfish grown in sewage polluted marine environments. Shellfish, which includes molluscs such as oysters, mussels, cockles, clams and crustaceans such as crabs, shrimps, and prawns , are filet al. report met al. \u2013100 was et al. . Althouget al. .9.Current safety standards for determining food and water quality typically do not specify what level of viruses should be considered acceptable. This is in spite of the fact that viruses are generally more stable than common bacterial indicators in the environment. While there has been a significant amount of research on the impacts of inadequately treated wastewater effluents in developed countries, the same can not be said of developing countries which coincidentally are faced with a huge burden of infectious diseases emanating from pollution of water bodies with wastewater effluent discharges (von Sperling and Chernicharo most of et al. [The challenge in ensuring safe water with regards to viral pathogens is that the detection of putative indicators of viral pathogens such as bacteriophages does not always correlate with that of other viruses particularly pathogenic enteric viruses . Human aet al. , adenoviet al. . Adenoviet al. , suggestet al. . To thiset al. 
, polyomaet al. which shet al. . The impet al. , and thaet al. ."} {"text": "Although there has been more than 100 years since Miller first postulated about the caries etiopathogenesis, this disease remains the most prevalent noncontagious infectious disease in humans. It is clear that the current approaches to decrease the prevalence of caries in human populations, including water fluoridation and school-based programs, are not enough to protect everyone. The US National Institutes of Health Consensus Development Program released a statement in 2001 entitled \u201cDiagnosis and Management of Dental Caries Throughout Life: : 1\u201324) and listed six major clinical caries research directions. The \u201cepidemiology of primary and secondary caries\u201d needs to be systematically studied with population cohort studies that collect information on natural history, treatment, and outcomes across the age spectrum.Research into \u201cdiagnostic methods\u201d, including established and new devices and techniques, is needed. Development of standardized methods of calibrating examiners is also needed.\u201cClinical trials\u201d of established and new treatment methods are needed. These should conform to contemporary standards of design, implementation, analysis, and reporting. They should include trials of efficacy.Systematic research on caries \u201crisk assessment\u201d is needed using population-based cohort techniques.Studies of \u201cclinical practice\u201d including effectiveness, quality of care, outcomes, health-related quality of life, and appropriateness of care are needed.\u201cGenetic\u201d studies are necessary to identify genes and genetic markers of diagnostic, prognostic, and therapeutic value.This Special Issue is a sample of the current research efforts addressing a subset of the topics described above. I would like to thank the authors for their excellent contributions, in addition to many colleagues who assisted me in the peer-review process. Finally, I would like to thank the support provided by my Guest Editors, Dr. Marilia Buzalaf, from the University of S\u00e3o Paulo, Brazil, and Dr. Figen Seymen, from the Istanbul University, Turkey.On the topic of \u201cepidemiology and caries risk assessment\u201d F. J. S. Pieralisi et al., K. K. M\u00e4kinen, J. Vanobbergen et al., M. Okada et al., M. Ferraro and A. K. Vieira, A. P. Dasanayake et al., and J. D. Ruby et al. provide different perspectives to the problem which are good examples of how difficult is to study this multifaceted disease in a more comprehensive fashion.There is currently great interest in \u201cdiagnostic methods,\u201d and the use of technology to differentiate diseased and healthy tissue, as well as to provide care. Different aspects of diagnostic and treatment approaches are addressed by S. Umemori et al., M. Aspiras et al., J. Wu et al., L. Karlsson, M. Yildirim et al., and A. Huminicki et al.The biggest challenge continues to be designing rigorous clinical trials that can provide conclusive answers related to approaches that more effectively control caries at the segments of the population with higher risk for developing the disease.I invite you to read, evaluate, and share this collection of 13 papers that comprise this special issue. 
Furthermore, I hope the readers will be interested in participating more actively in this debate of what approaches are more efficient to revert the current figures of caries prevalence and what aspects of this disease should be the focus of research in the coming years.Alexandre R. VieiraAlexandre R. Vieira"} {"text": "Sir,et al. with great interest.[et al. concluded that this observation was due to neonatal adrenal hemorrhage.[I read the recent publication on neonatal adrenal hemorrhage by Qureshi interest. The authinterest. Qureshi morrhage. There mimorrhage."} {"text": "Although irritable bowel syndrome (IBS) is a common gastrointestinal disorder, its prevalence is unknown, especially in the urban population of Bangladesh. This community-based study aimed to find out the prevalence of IBS and healthcare-seeking patterns using the Rome-II definition.P \u2264 0.05.A population-based cross-sectional survey of 1503 persons aged 15 years and above was carried out in an urban community of Bangladesh. The subjects were interviewed using a valid questionnaire based on Rome-II criteria in a home setting. Statistical analysis was performed with Statistical Package for Social Science (SPSS) Programmers and the level of significance was set at n = 116) with a male to female ratio of 1:1.36 (49 vs. 67). \u201cDiarrhoea-predominant IBS\u201d was the predominant IBS subgroup. Symptoms of abdominal pain associated with a change in stool frequency (100%) and consistency (88.8%) were quite common. All IBS symptoms were more prevalent among women (P < 0.000). In the past one year, 65.5% (n = 76) IBS subjects had consulted a physician with a slightly higher rate of women consulters (68.6 vs. 61.2%). The main predictor for healthcare-seeking was the presence of multiple dyspeptic symptoms.A response rate of 97.2% yielded 1503 questionnaires for analysis. The prevalence of IBS was found to be 7.7% (The prevalence of IBS in the urban community was found to be similar to that in rural communities. A higher rate of consultation was found among urban IBS subjects than in the rural subjects, with sex not seen to be a discriminator to seek consultation. The overall prevalence rates of irritable bowel syndrome (IBS) are similar 10-20%) in most industrialized countries.0-20% in Limited Only 10-56% of adults with symptoms of IBS present for medical evaluation.1017 This10The present study aimed to find out the overall prevalence of IBS and healthcare-seeking patterns in an urban community of Bangladesh using the Rome-II definition. One of the purposes of this study was to arrive at a diagnosis without the help of investigations.This observational study was conducted from the months of November 2004 to March 2005. A valid questionnaire was administered using home-based personal interviews to 1503 subjects aged 15 years and above in a defined area of Dhaka city of Bangladesh. Door-to-door surveys were carried out by physicians assisted by community healthcare workers.The questionnaire was based on a previously published study conducteNo laboratory tests or endoscopic examinations were done in the study due to lack of feasibility.For this study, \u201cRome-II\u201d definition of IBS required abdominal pain and at least two or more of the three abdominal pain symptoms occurrinP \u2264 0.05.The data were processed for handling, and statistical analysis was performed with the computer-based Statistical Package for Social Science (SPSS) Programmers. 
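A minimal sketch of the Rome-II style case definition applied by the questionnaire is given below. Because the list of associated symptoms is truncated in the text above, the three features used here (pain relieved by defecation, onset associated with a change in stool frequency, onset associated with a change in stool form) are the standard Rome-II items and should be read as assumptions; the function and variable names are illustrative only.

```python
def meets_rome_ii(abdominal_pain: bool,
                  relieved_by_defecation: bool,
                  onset_with_change_in_stool_frequency: bool,
                  onset_with_change_in_stool_form: bool) -> bool:
    """Rome-II style screening: abdominal pain/discomfort plus at least
    two of the three associated features (assumed standard items)."""
    features = [relieved_by_defecation,
                onset_with_change_in_stool_frequency,
                onset_with_change_in_stool_form]
    return abdominal_pain and sum(features) >= 2

# Hypothetical respondent who would be classified as an IBS case.
print(meets_rome_ii(True, True, True, False))   # True
```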
Significance values were assessed during comparisons using the Chi-squared test with Yates' correction whenever necessary, and the level of significance was set at n = 116) using the Rome-II study definition. The gender-specific prevalence was 8.6 cases per 100 (n = 67) in women, and 6.7 cases per 100 (n = 49) in men. IBS was 1.36 times more prevalent in females than in males (P > 0.05). The majority of the IBS cases were in the 15\u201344 years' age group [The mean age of the study population was 32.18\u00b112.98 years with an age range of 15 to 97 years. The crude age- and gender-specific prevalence estimates and the prevalence of IBS subgroups have been described in ge group . No otheIBS subgroups summarized in n = 116) and consistency was quite common in the urban community. Colonic pain tended to be spastic in nature, i.e., the pain was relieved by defecation in 37.0% (n = 43) of the IBS cases. All IBS symptoms were more prevalent in women than in men and these differences were statistically significant (P < 0.000).The symptom responses of subjects summarized in P < 0.000).The majority of the IBS subjects reported having dyspeptic symptoms with heartburn in 84, of which were 30.17% were male and 42.24% were female. Frequent belching was seen in 78 , nocturnal abdominal pain was seen in 38 (32.7%), and vomiting was seen in nine (7.7%) cases. All the dyspeptic symptoms like heartburn and frequent belching were found in higher frequency in females; these differences were statistically significant (P >0.05). The consultation rate was found to increase with increasing age (P = 0.002) with heartburn and frequent belching being the two most common causes for consultation [P = 0.031). The main predictor for seeking healthcare was the presence of multiple dyspeptic symptoms. Only nine (7.7%) IBS subjects consulted healthcare professionals for the severity of their symptoms.Approximately 76 (65.51%) IBS subjects consulted a physician in the past one year with a higher rate of women consulters cases in this survey [et al.[et al.[et al.[Abdominal pain associated with a change in stool frequency and/or consistency was present in all the IBS was the predominant IBS subgroup (50%) in the urban survey, only 0.8% cases had diarrhea-predominant IBS in the rural survey.[et al, reported almost equal prevalence of diarrhea-predominant and constipation-predominant IBS in a Canadian population according to Rome-II criteria.[et al.[et al.[et al.[Whereas \u201cdiarrhea-predominant IBS\u201d n = 5 was the criteria. All the a.[et al. reportedl.[et al.n = 76) had consulted a physician in the past one year, and there was a slightly higher rate of women consulters (68.6 vs. 61.2%). These findings are consistent with those of Heaton et al.[et al.[et al.[et al.[et al.[et al.[n = 62; 81.57%) of IBS subjects were found to consult for multiple symptoms in this study. Masud et al.[et al.[et al.[et al.[et al.[The majority of IBS subjects (on et al. and Jonel.[et al. Talley el.[et al. however,l.[et al. but not l.[et al. The majoud et al. the datal.[et al. No signil.[et al.; the nexl.[et al. reportedl.[et al. reportedThis study demonstrated that the prevalence of IBS in the urban population of Bangladesh was similar to that reported by most other recent population-based studies done in other countries and also to that of the rural population when compared on the basis of strict Rome-II criteria. 
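As a concrete illustration of the prevalence estimate and the sex comparison described above, the sketch below applies the Chi-squared test with Yates' correction (SciPy's `correction=True`). The overall counts (116 cases of 1503, 67 women and 49 men) are the reported figures; the sex-specific denominators are only back-calculated approximations from the reported rates of 8.6 and 6.7 per 100, not numbers taken from the paper.

```python
from scipy.stats import chi2_contingency

ibs_women, ibs_men = 67, 49
women_total, men_total = 779, 731        # approximate assumptions only

table = [
    [ibs_women, women_total - ibs_women],  # women: IBS, no IBS
    [ibs_men, men_total - ibs_men],        # men:   IBS, no IBS
]
chi2, p, dof, expected = chi2_contingency(table, correction=True)  # Yates'

prevalence = 116 / 1503                    # overall prevalence, 7.7%
print(f"prevalence = {prevalence:.3f}, chi2 = {chi2:.2f}, p = {p:.3f}")
```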
From the findings of the present study, we can conclude that IBS prevalence does not seem to be confounded by the factors which are likely to differ between western and non-western lifestyles, or between urban and rural lifestyles. A good number of IBS subjects seek healthcare for their dyspeptic symptoms with increasing age, and the numbers of dyspeptic symptoms were found to be important discriminators to seek consultation. We still have a poor understanding of the triggers for consultation among these patients. More research is required and preferably with psycho-social assessment and some appropriate investigations to find out the exact prevalence of IBS in urban people and to clarify patients' reasons for seeking medical advice."} {"text": "Sir,et al. studied on eight cases with mumps-induced orchitis and concluded that \u201clong-term follow-up is recommended for all patients with abnormal semen analysis, particularly those with bilateral testicular involvement, since they may develop oligoasthenospermia several years after the infection or improve with item\u201d.[Infection is accepted as a possible underlying cause of male infertility. Some infections are confirmed for the negative effect on sperm quality, for example, genital tract infection. In additth item\u201d. However,et al. reported that a febrile episode could have marked effects on semen parameters and sperm DNA integrity and this might be related to future infertility.[et al. studied the characteristics of human sperm chromatin structure following an episode of influenza and high fever and found that influenza could have latent effects on sperm chromatin structure and might result in transient release of abnormal sperm.[et al. studied in a mice model and reported that human influenza virus could induce chromosome aberration of spermatozoa.[The influenza is a respiratory virus infection. It mainly affects the respiratory tract. However, there are some reports on the effect of influenza on sperm quality. As a febrile illness, influenza might affect the semen quality. Sergerie ertility. Evenson al sperm. Indeed, rmatozoa. Thadani rmatozoa.8 Sharma rmatozoa. These arrmatozoa.et al.[Hence, it can be concluded that there are strong evidences confirming the effect of influenza infection on sperm quality. The query if influenza can induce future infertility is interesting. There is a lack of evidence on this topic. Indeed, there are some reports confirming that influenza might lead to infertility in animals but not in human beings. However,et al. this miget al.Emerging swine flu is a kind of variant of classical H1N1 influenza virus infection and hence it is doubtless that swine flu can affect the sperm quality. At least, the acute febrile episode can affect the sperm. However, there exist no official reports on this aspect as well as the correlation between swine flu and infertility. Nevertheless, the presentation and effect of swine flu in infertile case has never been well explored."} {"text": "Atopic dermatitis (AD) is a chronic pruritic skin disease. It results from a complex interplay between strong genetic and environmental factors. The aim of this work was to study some biochemical markers of the dermatosis. This included detection of R576 interleukin-4 receptor alpha allele gene. Twenty five patients with AD and 25 controls participated in this study. 
Atopic dermatitis (AD) is recognized as a strongly heritable chronic, pruritic inflammatory skin condition that is most common in early childhood and predominantly affects the skin flexures. Current evidence indicates that AD is strongly genetic, with enhanced levels of phenotype concordance reported in monozygotic relative to dizygotic twins. The indiAim of this study was to identify some immunological and chemical markers in AD and their relation to disease severity. Aim was also to detect genotype R576 IL-4 receptor \u03b1 allele and to clarify its segregation with AD as well as its usefulness as clinical marker of the disease.et al, by using the Nottingham eczema severity score.[Twenty five patients with AD and 25 age and sex matched controls participated in this study. All the participants had received no antihistamines or systemic or topical corticosteroids during the period of three weeks before clinical evaluation (a wash-out period), and were subjected to skin-prick test. AD diagnosis was based on the criteria of Hanifin and Rajka. The sevety score.Twenty five healthy, nonatopic, age and sex matched, and unrelated volunteers comprised the control group. They were enrolled in the study if their skin testing were negative and after excluding history of allergic conditions.et al.[All subjects included in this study were subjected to the following: a complete clinical history was obtained followed by clinical examination. Stool and urine analysis was done to exclude parasitic infestations for its affect on eosinophil count and activity. Determinet al. Serum ILet al. Determinet al. Also, coet al. Serum toet al.Determination of R576 IL-4 \u03b1 allele was by PCR-based restriction fragment length polymorphism.DNA was isolated using the PUREGENE DNA isolation kit purchased from Gentra. DNA was extracted according to the method of Bubbon.t-test, analysis 2 of variance (F-test), correlation coefficient, Chi-square (X2) test, exact Fisher exact, and Odds ratio and relative risk.The statistical method used for analysis of the data was according to Kirkwood. The statThe results of this study showed that the most common allergens causing positive skin test among atopic patients were mixed pollens (46.6%), hay dust (33.3%), smoke (21.3%), house dust mite (18.6%) and mixed fungus (16%), cotton and wool (9.3%), mixed feather (6.6%), and animal dander (5.3%).P < 0.001). There was also a significant association between homozygosity for the R/R576 allele and atopy (P = 0.02). The relative risk of R576 allele in atopy was 7.3. For homozygous R/R576, there was a statistical significance for the severe versus mild disease (P = 0.03) [P < 0.001). Also, there was no statistically significant difference between the serum IL-4 values and allelic variants in all atopic patients. There was a significant positive correlation between serum ECP and peripheral blood eosinophil cell count, and serum total IgE. There was no statistically significant correlation between serum IL-4 and serum ECP, peripheral blood eosinophils cell count, or total serum IgE (see tables).There was a statistically significant association between R576 allele and atopy as compared with control group ( = 0.03) . Levels = 0.03) . There wThe present study supported the association of R576 allele with atopy severity which haet al,[et al,[et al,[et al,[et al.[et al,[This result agreed with Deichmann et al, who had l,[et al, who had l,[et al, Beghe etl,[et al, and Hytol,[et al. 
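A minimal sketch of the odds-ratio and relative-risk calculations reported above follows. The 2x2 counts are purely hypothetical, since the text quotes only the summary relative risk of 7.3 for the R576 allele and not the underlying table.

```python
def odds_ratio_and_relative_risk(carrier_atopic, carrier_control,
                                 noncarrier_atopic, noncarrier_control):
    """2x2 table with allele carriage as the exposure and atopy as the outcome."""
    odds_ratio = (carrier_atopic * noncarrier_control) / (carrier_control * noncarrier_atopic)
    risk_carrier = carrier_atopic / (carrier_atopic + carrier_control)
    risk_noncarrier = noncarrier_atopic / (noncarrier_atopic + noncarrier_control)
    return odds_ratio, risk_carrier / risk_noncarrier

# Hypothetical counts: 18 of 25 patients vs 5 of 25 controls carry R576.
or_, rr = odds_ratio_and_relative_risk(18, 5, 7, 20)
print(f"odds ratio = {or_:.1f}, relative risk = {rr:.1f}")
```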
It seemsl.[et al, had repoet al,[et al,[et al,[Rogala et al, hypothesl,[et al, have foul,[et al, have fouet al,[et al, could find no evidence of linkage or association of atopic asthma with R576 allele in Italian subjects. Whereas Haagerup et al,[The result regarding the association of R576 allele with atopy was in contradiction with Mitsuyasu et al, who had up et al, had founet al,[et al,[et al,[The result regarding the association of R576 allele with atopy markers agreed with Cui et al, who suggl,[et al, found sil,[et al, stated tl,[et al,32et al, Kandil et al, Joseph-Bowen et al, and Higashi et al.[In AD, there was a highly significant increase in serum ECP as compared to control group and there was a strong correlation between its level and disease severity. Similar results were obtained by Di Lorenzo hi et al.\u201336 Eosinhi et al.38 contaihi et al.40 Their In AD group, there was also a highly significant increase in serum total IgE as compared with the control group and there was a highly significant positive correlation between increased serum total IgE levels and Nottingham eczema severity scoring system. These results indicate that serum total IgE may be used as an indicator of AD and its severity, especially in the peak season of allergy. Significant higher levels of serum IgE have been found in other studies.42 Laske et al,[A majority of subjects identified as carrying a single copy of the mutant allele were found to have atopy, suggesting an intermediate dominant effect, with (increasing) homozygotes suffering more severely (gene dosage effect). However, the finding that some carriers of the R576 allele, including one who was homozygous, were not atopic, indicates that the penetrance of this allele may be modified by other factors. These may include distinct genetic loci that impart susceptibility to or protection from atopy, and environmental factors such as the level and duration of exposure to allergens. This reset al, a Frenchet al,[et al,[et al,[et al,[et al,[et al,[The suggested molecular mechanism underlying the observed enhanced signaling with Q576R mutation and the association with R576 allele with atopy is that the substitution of arginine for glutamine at position 576 alters the binding profile of the adjacent phosphorylated tyrosine residue (Y575) and decreases the binding of phosphotyrosine phosphatases (SHP-1). SHP-1 dephosphorylates regulatory phosphotyrosine residues and has been implicated in the termination of IL-4 receptor signaling leading to exaggerated IL-4 responses. Hershey et al, and Hansl,[et al, stated tl,[et al, who repol,[et al, noted thl,[et al, who suggl,[et al, reportedIt could be postulated that patients with atopy having R576 allele may express a more highly active variant of the IL-4R; this mutation may predispose persons to allergic diseases by altering the signaling function of the receptor. So, R576 allele acts as an allergic susceptibility and disease-modifying gene and may serve as a clinically useful marker of asthma severity as one or two copies of R576 allele were associated with more severe disease. R576 allele correlates with markers of atopy, namely: IgE, ECP and eosinophil count."} {"text": "Several autoantibodies directed against cardiac cellular proteins including G-protein-linked receptors, contractile proteins and mitochondrial proteins, have been identified in patients with dilated cardiomyopathy (DCM). 
Among these autoantibodies, anti-\u03b21-adrenoreceptor (AR) antibodies have long been discussed in terms of their pathogenetic role in DCM. Anti-\u03b21-AR antibody-positive patients with DCM showed significant deterioration of NYHA functional class as well as reduced cardiac function compared to those in autoantibody-negative patients. Various studies with a limited number of patients indicate that the use of immunoadsorption to eliminate immunoglobulin G (IgG) significantly improves cardiac performance and clinical status in heart failure patients. Since removal of autoantibodies of the IgG3 subclass induces hemodynamic improvement and an increase in the left ventricular ejection fraction, antibodies belonging to IgG3 such as anti-\u03b21-AR antibodies might play an important role in reducing cardiac function in patients with DCM. According to a recent report, however, the effect of hemodynamic improvement by immunoadsorption threapy was similar among patients who were positive and negative for anti-\u03b21-AR antibodies, indicating that the beneficial effects of immunoadsorption might be not directly associated with the selective elimination of the \u03b21-AR autoantibodies. Immunoadsorption therapy is a new therapeutic option for patients with DCM and heart failure, but further investigations are required to elucidate the specific antigens of cardiac autoantibodies responsible for the hemodynamic effects. Dilated cardiomyopathy (DCM) is a progressive myocardial disease characterized by contractile dysfunction and ventricular dilatation. DCM is not a rare cause of congestive heart failure and the leading reason for heart transplantation world wide . AlthougA variety of experimental studies suggest that alterations of the immune system might be involved in the pathogenesis of DCM . A numbeet al. [in vivo.Previously, Wallukat and his colleagues observed the immunoglobulin G (IgG) fraction in sera from DCM patients could induce a positive chronotropic effect on neonatal rat cardiac myocytes . That efet al. also rep1) [An extremely high incidence of anti-\u03b21-AR autoantibodies is also reported in end-stage DCM patients who require mechanical cardiac support . In sele1) . Further2). This IA for patients with DCM was first reported in an uncontrolled pilot study by Wallukat et al. [et al. [et al. [vs. conventional therapy. The treatment group underwent monthly IA followed by immunoglobulin substitution for 3 months. IA therapy led to a significant decrease in \u03b21-AR autoantibody levels. The increase in LVEF and improvement of NYHA class were significantly greater in the treatment group compared with those in the control group. Muller et al. [et al. [Removal of \u03b21-AR autoantibodies with immunoadsorption (IA) is achieved by passing a patient\u2019s plasma over columns that remove immunoglobulins . It possesses nonselective physical features, but causes marked reduction of plasma levels of IgG3. In our protocol, plasma IgG and IgG3 levels dropped an average of 37% and 58% per single IA procedure, respectively.Previous studies used a variety of IA methods including specific anti-\u03b21-AR antibody binding peptide columns , nonspecet al. reportedet al. . Those set al. [Usually IA was followed by intravenous immunoglobulin (IVIG) to prevent infectious complications that might arise from inappropriate lowering circulating IgG levels . Unlike et al. did not et al. , and haset al. . They foet al. [et al. 
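As a back-of-the-envelope illustration of the per-session reductions quoted above (an average 37% drop in total IgG and 58% in IgG3 per single immunoadsorption procedure), the sketch below simply compounds those fractions over repeated sessions. It deliberately ignores antibody re-synthesis between sessions, which is substantial over the treatment intervals used clinically, so the numbers are illustrative only.

```python
def remaining_fraction(per_session_reduction: float, sessions: int) -> float:
    """Fraction of the starting antibody level left after n sessions,
    ignoring re-synthesis between sessions."""
    return (1.0 - per_session_reduction) ** sessions

for k in range(1, 4):
    igg = remaining_fraction(0.37, k)    # total IgG remaining
    igg3 = remaining_fraction(0.58, k)   # IgG3 remaining
    print(f"after {k} session(s): IgG {igg:.2f}, IgG3 {igg3:.2f}")
```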
[Although IA is a new therapeutic option for patients with DCM, the mechanism of left ventricular functional benefit from IA is not known. IgG adsorption removes not only anti-\u03b21-AR-autoantibodies but also all other potentially pathogenic autoantibodies affecting the heart in this class of immunoglobulins. Mobini et al. has repoet al. . Schimke [et al. reportedAccording to previous reports, the following questions remain to be resolved : First, Measurements of anti-\u03b21-AR autoantibodies may be helpful for the monitoring of clinical status in patients with DCM. IA therapy to eliminate autoantibodies is a new and promising therapeutic option for those patients. However, further studies are necessary to elucidate the specific antigens of cardiac autoantibodies as well as cellular mechanisms responsible for the observed functional effects."} {"text": "Sesamum indicum (EES), vitamin C (VC), and EES + VC in promoting fertility and finding a possible link between their profertility effects and their antioxidant activities.This study investigates the efficacy of ethanolic extract of G (EES only), VCG (vitamin C only), and EES + VCG (EES in conjunction with vitamin C). Control was given 5 ml/kg BW/day of normal saline orally; EESG was administered 0.3 g/kg BW/day of EES; VCG was administered 15 mg/kg BW/ day of VC; while EES + VCG was administered both 0.3 g/kg BW/day of EES and 15 mg/kg BW/day of VC. All treatments were for 10 weeks.Forty adult male Wistar rats [Body weight (BW) 186.56 \u00b1 0.465 g] were randomly analyzed into four groups of ten rats each: Control, EESIndependent-sample T test was used to analyze the obtained results.The results obtained showed that EES, VC, and more importantly EES + VC are capable of significantly increasing BW gain, seminal parameters, testosterone level, and body antioxidant activities.These findings lead to the conclusion that EES + VC as well as ESS and VC promote fertility due to both their testosterone-increasing effects and their antioxidant effects. Because infertility are highly diversified in etiology, their treatments also require diversified approach. Some orthodox medications and traditional medications have proved really useful in the treatment of infertility. However,Sesamum indicum (EES) and citrus fruits [sources of vitamin C (VC)] to promote fertility. If we could establish that EES in conjunction with VC significantly promotes fertility, then there is a possibility that some cases of infertility could be treated cheaply with EES and VC. Many studies have also shown that antioxidants can enhance fertility either directly or indirectly and that most plants rich in antioxidants have the tendency to increase sperm counts, motility, and enhance sperm morphology. were randomly analyzed into four groups of 10 rats each: Control, EESTwenty-four hours after the last treatment, each animal was weighed and then sacrificed by cervical dislocation. Up to 4.0 ml blood samples were collected via cardiac puncture. Blood sample obtained from each rat was divided into two portions: 2.0 ml in a plain bottle and the other 2.0 ml in an ethylenediaminetetraacetic acid bottle. Plasma and serum were obtained by centrifugation at 3000 rpm for 20 minutes. Testis and caudal epididymis were excised from each rat.The left testis of each rat was homogenized for tissue superoxide dismutase (SOD) and catalase (CAT) activities, and malondialdehyde (MDA) concentration. 
Sperm count, motility, morphology, and life-death ratio of the rats' spermatozoa were carried out from the epididymis. Serum testosterone level was determined. Plasma and tissue SOD and CAT activities, and MDA concentrations were determined using the methods described by Fridovich, Sinha,88 and VarP-value <0.05.The data obtained are presented as mean \u00b1 SD. The control and test groups were compared using Independent-sample T test. Significance level was taken at G group as well as EES + VCG was, however, significantly higher than that of the control, while weight increase in VCG showed no significant difference from that of the control [Comparing their final and initial weights showed that there was significant body weight gain in all the groups over the period of the experiment. The weight gain in EES control .G showed no significant difference from that of the control, while the testicular weight of EESG and EES + VCG was found to be significantly higher than that of the control [The testicular weight of VC control .G and EES + VCG treated animals was significantly higher than that of the control, whereas SC for VCG alone showed no significant difference from that of the control. Likewise, LDR of EESG and EES + VCG was found to be significantly higher than that of the control, while VC showed no significant difference from the control [SC for EES control .G and EES + VCG groups had SM that was significantly higher than that of the control, whereas SM for VCG was not significantly different from that of the control. SMP was, however, not significantly different in all the test groups compared with the control [EES control .P-value <0.05) increase in STL in all the test groups compared to the control [There was significant compared to the control [Plasma and testicular SOD activity were found to be significantly higher in all the treated groups compared to the control, plasma CAT activity was only significantly higher in EES + VCG [While testicular catalase (CAT) activity was found to be significantly higher in all the treated groups compared to the control. In a similar way, testicular MDA concentration was found to be significantly lower in all the treated groups compared to the control [Plasma Malondialdehyde (MDA) concentration was found to be significantly lower in all the treated groups and in conjunction with VC (EES + VCG) significantly increases weight gain. The significant increase in weight gain in EESG can be linked to the high fat composition[et al.[et al.[Sesamum indicum significantly improved weight gain; and by Hanefy et al.[G reveals the possibility that EES in conjunction with VC have a synergistic effect on body weight, after all VC acting alone does not significantly affect body weight [Even though VC is very important for normal health condition by its antioxidant and detoxification actions, VC all b control . This isl [et al. On the cmposition and its on[et al. that metl.[et al. that Sesfy et al. that mety weight . A complet al.[et al.[et al.[The observed ability of VC to increase SC is in liet al. that antl.[et al. that a bl.[et al.. Salawu l.[et al. earlier G was far higher than the LDR of all other groups [et al.[The life-death ratio (LDR) of the EES + VCr groups . This fus [et al. that antG [et al.[et al.[G was, however, lower than that of EES + VCG [The observed significant increase in SM and SMP in ESS + VCG further G [et al. that VC l.[et al. The incrES + VCG . 
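A minimal sketch of the independent-sample t-test comparison used throughout the results above is given below; the sperm-count values are hypothetical, since the raw measurements are not reported in the text.

```python
from scipy.stats import ttest_ind

# Hypothetical sperm-count data (millions/mL), for illustration only.
control = [52, 55, 49, 58, 51, 54, 50, 53, 56, 52]
ees_vc  = [68, 71, 66, 74, 70, 69, 72, 67, 73, 70]

t_stat, p_value = ttest_ind(ees_vc, control)
significant = p_value < 0.05          # significance threshold used in the study
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant = {significant}")
```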
This juThe significant increase in both plasma and testicular SOD activity and CAT et al.,[et al.,[et al.[VC acting alone, EES acting alone, and VC + EES acting together significantly increased blood testosterone level. This finding is parallel to the observations of Salawu et al., Ukwenya ,[et al., and Shit.,[et al. that ses.,[et al.\u20135 and th.,[et al.."} {"text": "The increasing availability of large-scale protein-protein interaction data has made it possible to understand the basic components and organization of cell machinery from the network level. The arising challenge is how to analyze such complex interacting data to reveal the principles of cellular organization, processes and functions. Many studies have shown that clustering protein interaction network is an effective approach for identifying protein complexes or functional modules, which has become a major research topic in systems biology. In this review, recent advances in clustering methods for protein interaction networks will be presented in detail. The predictions of protein functions and interactions based on modules will be covered. Finally, the performance of different clustering methods will be compared and the directions for future research will be discussed. Within cells, proteins seldom act as single isolated species to perform their functions. It has been observed that proteins involved in the same cellular processes often interact with each other . ProteinA protein interaction network is generally represented as an interaction graph with proteins as vertices (or nodes) and interactions as edges. Various topological properties of protein interaction networks have been studied, such as the network diameter, the distribution of vertex degree, the clustering coefficient and etc. These network analyses have shown that protein interaction networks have the features of a scale-free network -7 and \u201csClustering in protein interaction networks is to group the proteins into sets (clusters) which demonstrate greater similarity among proteins in the same cluster than in different clusters. In protein interaction networks, the clusters correspond to two types of modules: protein complexes and functional modules. Protein complexes are groups of proteins that interact with each other at the same time and place, forming a single multimolecular machine, such as the anaphase-promoting complex, RNA splicing and polyadenylation machinery, protein export and transport complexes, etc . FunctioRecently, many research works have been done on the problem of clustering protein interaction networks. These works rely on very different ideas and approaches. This paper tries to help readers keep up with recent and important developments in the field, and to give readers a comprehensive survey on the different approaches. This paper is organized as follows: At first, the graph-based clustering methods including the density-based and local search algorithms, the hierarchical clustering algorithms, and other optimization-based algorithms, are given in Section 2. Then the approaches of combination with other information are discussed and some ensembles are given in Section 3. In Section 4, the validation and comparison of the clustering methods are discussed. Then the application of the clustering methods for protein function prediction and protein-protein interaction prediction are given in Section 5. 
At last, challenges and directions for future research are discussed in Section 6.G, where vertices represent proteins and edges represent interactions. The relationship between two proteins can be the simple binary values: 1 or 0, where 1 denotes the two proteins interact and 0 denotes the two proteins do not interact. In such cases, the graph is unweighted. Sometimes, the edges of graph G are weighted with a value between 0 and 1. In such cases, the weight represents the probability that this interaction is a true positive.In general, a protein interaction network is represented as an undirected graph In recent years, various graph-based clustering algorithms have been developed for detecting protein complexes and functional modules in protein interaction networks. According to whether the algorithm can identify overlapping clusters, these algorithms can be classified into two types: Non-overlapping clusters detecting algorithms and overlapping clustering identifying algorithms. These algorithms can also be divided into the follows: density-based and local search algorithms, hierarchical clustering algorithms, and other optimization-based algorithms, according to different definition and ideas.Based on the assumption that the members in the same protein complex and functional module strongly bind each other, a cluster can be referred as a densely connected subgraph within a protein interaction network. Several algorithms for finding dense subgraphs have been proposed.d) of a subgraph with n vertices and m edges is generally defined as d=2m/(n(n-1)) is the maximal possible number of them. The idea behind the use of this definition in [et al[g as g built on the edge e and g.where ition in is that chi et al develope al[g as where iet al[However, this definition is not feasible when the network has few triangles or higher order cycles. To avoid of the limitation, Li et al redefineuN is the set of neighbors of vertex u and vN is the set of neighbors of vertex v, respectively.where \u03bb-module denotes the weight of edge e, u,vI denotes the set of common vertices in uN and vN . Correspondingly, Li et al defined \u03bb*-module of weighted protein interaction networks, as shown in Table \u03bb *-module of weighted protein interaction networks can help improve the accuracy of clustering. Another contribution of their work is that FAG-EC and HC-Wpin can identify the functional modules in a hierarchy by changing the values of parameter \u03bb and such hierarchical organization of modules approximately corresponds to the hierarchical structure of GO annotations. More attractive strength of FAG-EC and HC-Wpin is their efficiencies. The total time complexities of FAG-EC and HC-Wpin are both O(2mk). As is well known the scale-free of protein interaction networks, k is very small and can be considered as a constant. Thus, FAG-EC and HC-Wpin are very fast which can be used in large protein interaction networks as the protein-protein interactions accumulate.where et al[Recently, Wang et al combinedet al) and theet al.[et al.[et al. in [Besides the two typical metrics discussed above, a number of other metrics have also been suggested to be used in the hierarchical clustering algorithms. Hartuv and Shamir used theet al. developel.[et al. suggestet al. in . They prThe definition of similarity metric or distance measure is a crucial step for hierarchical clustering. How to evaluate the metrics is another challenge in hierarchical clustering. 
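Returning to the density definition d = 2m/(n(n-1)) used by the density-based methods above, the following minimal sketch evaluates it for candidate clusters of a toy interaction graph; the protein names are hypothetical.

```python
import networkx as nx

def subgraph_density(graph: nx.Graph, nodes) -> float:
    """d = 2m / (n(n-1)) for the subgraph induced by `nodes`."""
    sub = graph.subgraph(nodes)
    n, m = sub.number_of_nodes(), sub.number_of_edges()
    return 0.0 if n < 2 else 2.0 * m / (n * (n - 1))

# Toy protein interaction network.
g = nx.Graph([("P1", "P2"), ("P2", "P3"), ("P1", "P3"),
              ("P3", "P4"), ("P4", "P5")])
print(subgraph_density(g, ["P1", "P2", "P3"]))   # 1.0, a fully connected triad
print(subgraph_density(g, ["P3", "P4", "P5"]))   # 0.67, a sparser candidate
```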
Two evaluation schemes suggested by Lu et al, which are based on the depth of hierarchical tree and width of ordered adjacency matrix, may be useful. Moreover, Chen et al gave a fThe obvious advantage of hierarchical clustering approach is that it can present the hierarchical organization of protein interaction networks. Its drawback is that it can not generate overlapping clusters except that special pre-processing or other strategies are used. In addition, the hierarchical clustering approaches are known to be sensitive to the noisy data in protein interaction networks .et al[In addition to the density-based and local search algorithms and hierarchical clustering algorithms, some other optimization-based algorithms are also frequently used. For example, King et al proposedet al.[Another optimization model for the discovery of clusters was proposed by Newman and Girvan , in whicet al. suggesteet al., and optet al., extremaet al., and speet al.-57.et al.[Recently, Hwang et al. presenteSTM consists of four steps :(1) Compute signals transduced between all vertex pairs;(2) Select cluster representatives for each vertex;(3) Formation of preliminary clusters;(4) Merge preliminary clusters.et al extended STM to CASCADE [In STM, the Erlang distribution is used to model the signal transduction behavior of the network. STM considers only the least resistance paths between protein pairs in a network and propagates the occurrence probability through a shortest path between a protein pair. More recently, Hwang CASCADE , in whicet al[et al[Among others, the Markov Cluster Algorithm (MCL) ,61 has bet al transforet al, MCL waset al and RNSCet al. More reet al showed t al[et al have proet al.[et al[Furthermore, in the recent past, some novel optimal clustering approaches have been proposed for the discovery of protein complexes or functional modules. Mete et al., for exaal.[et al investigal.[et al in sociak-plex [k\u2264n/2, for a given k, where n was the number of vertices in the cluster, and the peripheries of a core was defined as the set of vertices that were not in the core and whose distances to any member in the core were equal to l .In recent years, much attention has been focused on the clustering algorithms for finding overlapping clusters. For the overlapping clusters, each protein may be involved in multiple complexes or functional modules. This is particularly true of protein interaction networks for most proteins having more than one biological function. Some of the above mentioned clustering algorithms, such as STM , can be et al.[k-clique percolation communities. A k-clique is a complete subgraph of size k. Two k-cliques are said to be adjacent, if they share exactly k-1 vertices. A cluster is defined as a union of all k-cliques that can be reached from each other through a series of adjacent k-cliques. Based on CPM, a powerful tool CFinder for finding overlapping clusters has been developed by Adamcsek et al.[k; 2) the proteins not included in any k-cliques are neglected. To overcome the disadvantages of CPM, people often adopt some pre-processing or post-processing when using it. Jonsson et al.[et al proposed two types of strategies: size control [k=3 to generate initial clusters and then iteratively used k+1 to separate the clusters of size larger than a given integer S until all the identified clusters of size were less than S.In 2005, Palla et al. investigek et al.. Though on et al. 
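As an illustration of the betweenness-based divisive idea behind the Newman-Girvan approach discussed above, the sketch below runs the networkx implementation on a toy two-module graph and scores the first split with modularity. This is not the cited authors' code, only a minimal stand-in using standard library routines.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman, modularity

# Toy interaction graph: two dense triads joined by a single bridge edge.
g = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"),
              ("D", "E"), ("E", "F"), ("D", "F"),
              ("C", "D")])

# Girvan-Newman: iteratively remove the edge with highest betweenness.
first_split = next(girvan_newman(g))
partition = [sorted(c) for c in first_split]
q = modularity(g, first_split)
print(partition, round(q, 3))   # [['A','B','C'], ['D','E','F']], Q ~ 0.357
```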
construc control and line control when usiet al[et al[et al[Zhang et al suggeste al[et al proposed al[et al proposedet al.[Another method based on clique for identifying overlapping clusters is COD (Complex Overlap Decomposition) proposed by Zotenko et al.. COD reqet al.. The veret al.[et al.[et al.[et al.[et al. combined the flow-based approach with two new metrics: semantic similarity and semantic interactivity, where Gene Ontology (GO) annotations were used to weight protein-protein interactions. Different methods adopted for the selection of essential proteins will result in different overlapping clusters. Thus, to select the informative vertices more exactly will help to identify the overlapping clusters more accurately.The essential proteins have always been counted as having a close connection to the overlapping clusters -82.Typicet al. proposedl.[et al. suggestel.[et al. developel.[et al.,82 was al.[et al.. Later il.[et al., Cho et et al[Moreover, some extended hierarchical clustering algorithms can also be used for the identification of overlapping clusters. Pinney et al, for inset al based onIn addition, the algorithms of detecting overlapping community structures in other complex networks, such as fuzzy clustering , EAGLE , and nodThe above discussed methods for identifying clusters are mostly based on graph theoretic properties solely and only require the protein-protein interaction data. Unfortunately, protein interaction networks, as we all know, can not avoid of the false positives and false negatives . To lesset al.[et al.[et al.[Jiang and Keating describeet al. integratl.[et al. developel.[et al. presenteet al.[et al.[Jung et al. presenteet al., a tool l.[et al. extractel.[et al. and LCMAl.[et al., to estiet al.[et al.[et al.[et al.[et al.[et al.[et al.[Owing to the attribute that members in a cluster typically perform a specific biological function , severalet al. related l.[et al. proposedl.[et al. developel.[et al.. Segal el.[et al. introducl.[et al. presentel.[et al. also intl.[et al.. Recentll.[et al. proposedMore recently, Ulitsky and Shamir transforet al.[et al.[Except for gene expression data, authors also usually combined protein interaction networks with GO annotations. Typically, the flow-based approach proposed by Cho et al., as alrel.[et al. suggestel.[et al. mapped kWith the rapidly expanding resource of microarray data and other biological information, such as structure profiles and phylet al.[Ensemble clustering ,109 has k-way partitioning, and multilevel k-way partitioning, with two topology driven distance metrics were used to obtain six base clusterings, and then a consensus method based on Principal Component Analysis (PCA) was developed to reduce the dimensionality of the consensus problem. Asur et al.[In , initialur et al. also deset al.[et al.[Another ensemble framework for clustering protein interaction networks was proposed by Greene et al.. They fiet al. was adopet al.. Consensl.[et al. was a soAs being in nascent stage, ensemble clustering approach inevitably faces some challenges for the discovery of protein complexes and functional modules. A series of crucial factors, such as choosing the basic clustering methods, building a consensus, and adapting for soft clustering, must be taken into account carefully.Biological validation of the predicted clusters in protein interaction networks is very essential. 
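Before turning to validation, a minimal sketch of the k-clique percolation idea behind CPM/CFinder discussed above is given below, using the networkx implementation on a toy graph in which one protein belongs to two overlapping communities; all protein names are hypothetical.

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Toy network in which "P4" is shared by two 3-clique communities.
g = nx.Graph([("P1", "P2"), ("P2", "P3"), ("P1", "P3"),
              ("P3", "P4"), ("P1", "P4"),
              ("P4", "P5"), ("P5", "P6"), ("P4", "P6")])

communities = [sorted(c) for c in k_clique_communities(g, 3)]
print(communities)   # two overlapping clusters sharing P4
```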
As previous discussed, disparate results can be obtained from the same protein interaction network with different algorithms or even with the same algorithm where different parameters are chose. Therefore, different solutions must be carefully compared in order to select the approach and parameters which provide the best outcome. Validation is a process of evaluating the performance of the clustering or prediction results derived from different approaches. This section will introduce several basic validation approaches for clustering in protein interaction networks.Previous studies have showed that proteins in the same cluster often have high functional homogeneity . The funC contains k proteins in the functional group F, and the entire protein interaction network contains |V| proteins. P-value with a hypergeometrical distribution shows the probability that a given set of proteins is enriched by a given functional group merely by chance. Smaller P-value indicates that the predicted cluster is not accumulated at random and is more significant biologically than one with a larger P-value. The function annotation can be obtained from MIPS [where the predicted cluster rom MIPS or GO , has been suggested to quantify the overall clusters.Sn and In denotes the number of significant and insignificant clusters, respectively and min(pi) denotes the smallest P-value of the significant clusters i (i=1 to n). The cutoff is used to distinguish a significant cluster from insignificant clusters. We say a cluster is significant if its corresponding smallest P-value is lower than the cutoff value.where Another method for assessing the functional homogeneity of proteins within a predicted cluster is redundancy , as shown represents the number of classes in the classification scheme, and sprepresents the relative frequency of the class in the predicted cluster. All values of R lie between 0 and 1. With this scoring system, clusters containing many proteins with highly consistent classifications will receive high scores (R closer to 1), whereas those with disparate or conflicting classifications will receive low scores (R closer to 0).where Pc) and the known complexes (Kc) is often done. The gold-standard data used as known complexes are available form those catalogued in the MIPS database [OS between a predicted cluster Pc and a known complex Kc is generally calculated by formula (9) [To evaluate the performance of algorithms for clustering in protein interaction networks, a comparison of the predicted clusters (database . The ovemula (9) ,21,22: is larger than a specific threshold \u03b4. Generally, 0.2 is used in the literature [where |terature .Sn=TP/(TP+FN), where TP (true positive) is the number of the predicted clusters matched by the known complexes with OS\u2265\u03b4, and FN is the number of the known complexes that are not matched by the predicted clusters [Sp=TP/(TP+FP), where FP is equal to the total number of the predicted clusters minus TP. Generally, another integrated method, called f-measure, as shown in formula (10) [Obviously, known complexes and predicted clusters are expected to be matched as many as possible. Sensitivity and specificity ,22 are tclusters ,22. SpecolP of a random overlap between them. The olP is defined as:Also, we can determine a best matched known complex for a predicted cluster by minimizing the probability i is the number of the common proteins between the predicted cluster Pc and the known complex Kc. 
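A minimal sketch of the hypergeometric P-value for functional homogeneity defined earlier in this section is shown below; all counts are hypothetical and the function names are illustrative only.

```python
from scipy.stats import hypergeom

def enrichment_p_value(network_size, annotated_in_network,
                       cluster_size, annotated_in_cluster):
    """P(X >= k) under the hypergeometric null: the chance that a random
    cluster of the same size contains at least this many proteins
    sharing the functional annotation."""
    return hypergeom.sf(annotated_in_cluster - 1, network_size,
                        annotated_in_network, cluster_size)

# Hypothetical numbers: 6000 proteins in the network, 40 with the
# annotation, and a predicted cluster of 10 containing 6 of them.
p = enrichment_p_value(6000, 40, 10, 6)
print(f"P-value = {p:.2e}")   # far below common cutoffs, i.e. significant
```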
The smaller the olP is, the more consistent they are.where T, as that has been done by Broh\u00e9e and Helden [n known complexes and m predicted clusters, the contingency table is a n*m matrix where row i corresponds to the thi known complex, and column j to the thj cluster. The value of a cell ijT indicates the number of common proteins that appear both in complex i and cluster j. In addition, some other measurements, such as positive predictive value (PPV), accuracy, and separation, can also be used to evaluate the match between a set of known complexes and a clustering result. More details about these measurements, the reader are referred to [One can also match the clustering result with the known protein complexes by building a contingency table d Helden . Given nerred to .et al., generally tend to share similar temporal expression profiles, subcellular localizations, and gene phenotypes, which support the functional relevance of modular organization. Moreover, the robustness of a clustering algorithm can be validated by different levels of graph alterations, such as proportions of edges added or deleted at random can be used to test the algorithm\u2019s robustness against the false positives and false negatives.Besides the above measurements, a comparison of the clustering results performed on protein interaction networks and on random networks is usually used. The random network requires having the same size and the same degree distribution as the original protein interaction network. Generally, one can get a corresponding random network by shuffling the edges between vertices in the original network ,22. SomeUp to now, there have been few special works for quantitative evaluation of the clustering algorithms except for some comparison works that have been done in each proposed algorithm for demonstrating its validity. Only in 2006, a systematic quantitative evaluation of four clustering algorithms: MCL ,61, MCODet al.[Tuji et al. comparedet al., a densiet al., a hieraet al.[Typical applications of clustering protein interaction networks are protein function prediction and protein-protein interaction prediction. For a cluster, as pointed by Hartwell et al., its memet al.[As there exits a large number of function-unknown proteins, even for the most well-studied yeast, about one-fourth of the proteins remain uncharacterized , and theet al. have givet al.[et al.[It is well known that the protein-protein interaction data available now are incomplete, though a number of high-throughput biotechnologies have been applied to biological systems. Recently, a series of computational methods have been developed for predicting protein-protein interaction data ,121. Espet al. predictel.[et al. suggesteClustering protein interaction networks can be used not only for predicting false negatives, but also for purifying false positives, as shown in Fig.In the post-genomic era, an important work is to analyze biological systems from network level, in order to understand the topological organization of protein interaction networks, identify protein complexes and functional modules, discover functions of uncharacterized proteins, and obtain more exact networks. To achieve this aim, a series of clustering approaches have been proposed. For different types of clustering algorithms, each has its own advantages and disadvantages. Every algorithm has certain problems while it exhibits good performances in other cases. 
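Pulling together the matching and evaluation measures defined above, the sketch below assumes the commonly used overlap score OS = |P∩K|^2 / (|P||K|), since formula (9) is garbled in the source, together with the sensitivity, specificity and f-measure definitions given in the text; the predicted clusters and gold-standard complexes are hypothetical.

```python
def overlap_score(predicted: set, known: set) -> float:
    """Commonly used matching score OS = |P & K|^2 / (|P| * |K|)."""
    i = len(predicted & known)
    return i * i / (len(predicted) * len(known))

def evaluate(predicted_clusters, known_complexes, threshold=0.2):
    """Sensitivity, specificity and f-measure at a given OS threshold."""
    tp = sum(any(overlap_score(p, k) >= threshold for k in known_complexes)
             for p in predicted_clusters)
    matched_known = sum(any(overlap_score(p, k) >= threshold
                            for p in predicted_clusters)
                        for k in known_complexes)
    fp = len(predicted_clusters) - tp
    fn = len(known_complexes) - matched_known
    sn = tp / (tp + fn) if tp + fn else 0.0
    sp = tp / (tp + fp) if tp + fp else 0.0
    f = 2 * sn * sp / (sn + sp) if sn + sp else 0.0
    return sn, sp, f

# Hypothetical predicted clusters and known complexes.
predicted = [{"A", "B", "C"}, {"D", "E"}, {"X", "Y", "Z"}]
known = [{"A", "B", "C", "D"}, {"D", "E", "F"}]
print(evaluate(predicted, known))   # (1.0, 0.667, 0.8)
```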
The main challenges for clustering protein interaction networks are identified as follows:(1) Up to now, all methods for predicting protein-protein interactions are known to yield a nonnegligible amount of noise and to miss a fraction of existing interactions . Therefo(2) Clusters of a protein interaction network may overlap with each other. Most proteins have more than one molecular function and participate in more than one biological process. For example, some proteins form transient associations and are part of several complexes at different stages. Most cellular processes are carried out by multi-protein complexes. Therefore, the traditional clustering approaches of putting each protein into one single cluster do not suit this problem well. Moreover, how heavily two clusters should overlap with each other is not certain.(3) Recent advances in the development of high-throughput techniques have led to an unprecedented amount of protein-protein interaction data becoming available in a variety of simple organisms. It is computationally difficult for most of current clustering algorithms to accurately identify protein complexes or functional modules from large-scale protein interaction networks, especially to discover meso-scale clusters.(4) There are little priori knowledge for clustering protein interaction networks, such as cluster number and cluster size. How many clusters should we produce? How large are clusters suitable? How to validate different clustering results with various sizes? These are all challenges for designing effective clustering algorithms.(5) Current clustering approaches mainly focus on detecting clusters in static protein interaction networks for most existing biological data are static. However, both the protein-protein interactions and protein complexes are dynamically organized when implementing special functions. Dynamic modules generally correspond to the sequential ordering of molecular events in cellular systems. How to explore dynamic modules from static protein interaction networks is a very difficult task.et al.[While some clustering approaches have been applied successfully in the discovery of protein complexes or functional modules, methods for clustering and analyzing protein interaction networks are less mature. Particularly, the methods for identifying dynamic modules are in a nascent stage. Methods which use time-series gene expression profiling data to manifest the temporal complexity of protein interaction networks may be useful to the exploration of dynamic modules. For example, Li et al. have sucet al. may alsoFurthermore, techniques and methods for developing both robust and fast clustering algorithms are directions for further researches. In the future, \u201coverlap\u201d will continue to be a hot topic for clustering protein interaction networks, which include how many molecular functions a protein can perform, how many biological processes a protein can participate in, and how many cellular components a protein can be associated with or located in. Moreover, we should investigate the question that if there some relationship between the two properties: overlapping and hierarchical organization of clusters, which were usually taken into account separately before. Some works have been done in complex networks, such as word association networks and scientific collaboration networks , to deteThe authors declare that they have no competing interests.JW and ML drafted the manuscript together. YD and YP participated in revising the draft. 
All authors have read and approved the manuscript."} {"text": "Stroke in young poses a major health problem. Thrombophilic factors have been implicated in 4-8% of the young strokes worldwide. Protein S deficiency is a rare cause of recurrent ischemic stroke in young population. Only a few sporadic cases have been described in the literature. We are reporting a case of protein S deficiency-related recurrent ischemic stroke in a 16-year-old girl. Early diagnosis and targeted approach can help such patients to prevent recurrent thrombotic episodes. Protein S is a naturally occurring vitamin K-dependent protein, which in conjunction with active protein C, inhibits the clotting cascade. Protein S deficiency is known to be of clinical significance in patients with deep venous thrombosis or pulmonary emboli. The overall estimated incidence of deep vein thrombosisis is one episode for every 1,000 persons. Protein S deficiency has been also found to be associated with cerebrovascular occlusion, although the exact role is controversial.A 16-year-old girl presented with acute onset left sided hemiparesis without loss of consciousness. General physical examination was unremarkable. Neurological examination revealed findings consistent with left-sided hemiparesis. A similar episode occurred three years back. No precipitating factors such as chronic drug intake were present. Family history was negative for vascular events or other predisposing factors for stroke.CT head revealedWorkup for thrombophilias revealed reduced protein S function alongwith protein C; whereas, antithrombin III, anticardiolipin antibodies, and lupus anticoagulant were within normal limits. A diagnosis of protein S deficiency was kept and the patient was managed with intravenous heparin followed by oral anticoagulants. Neurological functions improved and patient was discharged on oral anticoagulants. Repeat thrombophilic profile after three months revealed protein S functional activity 42% of the normal with patient showing remarkable recovery.et al.[et al.[et al.[Stroke in young population has a high incidence of approximately 24\u201335%, according to some studies in India. Abraham et al. from Velet al.[et al.[et al.[et al.[et al.[The importance of thrombophilic disorders in arterial stroke has been debatable. Ischemic stroke has been reported as a rare manifestation of protein S deficiency. Girolami et al. and Sie l.[et al. were amol.[et al. studied l.[et al.\u20139 Douay l.[et al. reportedIn this 16-year-old patient without any risk factors, the acquired factor S deficiency possibly played a role in the recurrent ischemic stroke. Factor S deficiency should be considered in venous stroke, recurrent pulmonary embolism, unusual site of venous occlusion, family history of vascular events, and stroke in young population. Aetiology of such vascular events in young must be thoroughly investigated so as to guide prevention and treatment of this devastating disease. Measurement of total and free protein S levels should be a part of the evaluation for any young adult who has had a stroke."} {"text": "This review is focused on molecular momentum transport at fluid-solid interfaces mainly related to microfluidics and nanofluidics in micro-/nano-electro-mechanical systems (MEMS/NEMS). 
This broad subject covers molecular dynamics behaviors, boundary conditions, molecular momentum accommodations, theoretical and phenomenological models in terms of gas-solid and liquid-solid interfaces affected by various physical factors, such as fluid and solid species, surface roughness, surface patterns, wettability, temperature, pressure, fluid viscosity and polarity. This review offers an overview of the major achievements, including experiments, theories and molecular dynamics simulations, in the field with particular emphasis on the effects on microfluidics and nanofluidics in nanoscience and nanotechnology. In Section 1 we present a brief introduction on the backgrounds, history and concepts. Sections 2 and 3 are focused on molecular momentum transport at gas-solid and liquid-solid interfaces, respectively. Summary and conclusions are finally presented in Section 4. Feynman at the 1959 annual meeting of the American Physical Society \u20139. Motor\u22121, while that for a MEMS device with a size of 1 \u03bcm is 106 m\u22121 and for a NEMS device having a length of 1 nm is 109 m\u22121. The large surface to volume ratio for MEMS and NEMS devices enables factors related to surface effects to dominate the fluid flow physics at micrometer to nanometer scales , Lauga et al. [r:T0 is the reference temperature, \u03b2 is a dimensionless coefficient of order one, v is the fluid kinetic viscosity, Tk is the fluid thermal diffusivity, and pc is the specific heat. The shear-dependent slips observed in Ref. [High shear rates also induce viscous heating as a result of the dissipation of mechanical energy. The viscous heating then inevitably results in temperature increase and viscosity decrease of the liquids. Considering a traditional exponential law for the liquid viscosity a et al. proposed in Ref. by the C3.3.4.et al. reported that slip was not observed in vacuum conditions but only when the liquid sample was in contact with air in their sedimentation experiments. In Ref. [et al. reported that tetradecane saturated with CO2 resulted in no-slip but with argon resulted in significant slip. In Ref. [et al. calculated the slip lengths for liquid flows between two infinite parallel plates by modeling the presence of either a depleted water layer or nanobubbles as an effective air films at the walls and found the results were consistent with some experimental measurements. Using patterned surfaces to trap gases, Refs. [Many measured apparent slips were ascribed to the presence of small amount of gas trapped or pinned on rough, patterned and/or hydrophobic surfaces ,352\u2013354. In Ref. , Granick In Ref. , Trethews, Refs. ,219,256 et al. [et al. [et al. performed shock wave induced cavitation experiments and atomic force microscopy measurements of flat polyamide and hydrophobized silicon surfaces immersed in water and showed that surface nanobubbles were not just stable under ambient conditions but also under enormous reduction of the liquid pressure down to \u22126 MPa. This implied that surface nanobubbles were unexpectedly stable under large tensile stresses.In 1983 Ruckenstein et al. proposedet al. ). Detailet al. . Some reet al. \u2013376 conc [et al. found th [et al. ,377\u2013382. [et al. , Borkent\u03bc1 flowing over a layer of height h with viscosity\u03bc2, the apparent slip length is [The apparent slip length will be very large when considering liquid flows over gas films. Considering a liquid of viscosity ength is :(63)Ls\u00a0\u03bc1/\u03bc2 \u2248 50. As pointed out in Ref. 
[\u03bc is the viscosity of the liquid, \u03c1 is the density of the gas, and thu is the thermal velocity of the gas. The apparent slip length is independent of the gas film thickness.For gas-water interfaces in Ref. , there a in Ref. :(64)Ls\u00a0et al. [et al. [et al. [Exceptions should also be noted. Steinberger et al. stated t [et al. proposed [et al. investig3.3.5.3.3.5.1.et al. [Cho et al. studied et al. slip waset al. the morp3.3.5.2.et al. [et al. [Apparent slip lengths are thought to arise in a thin layer of liquids with lower viscosity near the wall of a smooth solid surface or in regions of higher shear next to the peaks and ridges of a rough solid . Howeveret al. reported [et al. showed l [et al. .3.3.5.3.et al. reported that the slip lengths, either for normal slip or stick slip, usually decreased with increasing temperature using molecular dynamics simulations. From Lauga et al.\u2019s model for the slip length slip at macroscopic level measured by experiments. The microscopic slip length is about nanometers. The macroscopic slip length, however, spans nanometers to micrometers. Therefore, a challenge for deepening related researches is that they are incompatible in magnitude as well as in physical mechanisms. Frankly speaking, the physical mechanisms for liquid slip over solid surfaces still remain obscure at the moment.The boundary conditions of liquid flows over solid surfaces depends on various physical factors, such as surface wettability, roughness, patterns, liquid shear rate, polarity, temperature and pressure. Molecular behaviors affected by many factors are unclear. It is necessary, but still a challenge to decouple their influences. Results obtained by different experimental techniques are often very different due to the large uncertainty in the measurements at nanoscale. Therefore, more novel experimental techniques to detect molecular behaviors near liquid-solid surfaces more accurately should be developed. More measurements and molecular dynamics simulations are needed.The molecular dynamics simulation method is a powerful tool to detect molecular behaviors near solid surfaces, keeping in mind the long distance from the experimental measurements. The disadvantages of this method includes that it often deals with very ideal liquid-solid surfaces and the computational expense of large scale systems is heavy. It is highly desired for molecular dynamics simulations to consider real liquid and solid situations. Large scale computation using high quality computers will be useful for molecular dynamics simulations to simulate micrometer scale systems. Other particle-based techniques, such as lattice-Boltzmann and atomistic-continuum hybrid methods, are also necessary alternatives.We are convinced that wettability and surface roughness (patterns) may be the most important factors affecting the molecular momentum transport between liquid and solid at interfaces. From this point of view, we will be able to prepare surfaces and channels in engineering with known and controllable boundary conditions, and consequently control the friction of micro- and nanoflows. Perhaps using surface patterns to trap gases and using surface coatings to artificially change the wettability are most feasible. Therefore, there leaves more open questions on formation mechanisms, physical properties and lifetime of the surface patterns trapped nanobubbles to researchers. 
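For reference, the two estimates of the apparent slip length over a low-viscosity wall film discussed above (Equations (63) and (64)) can be summarized as follows. This is a hedged reconstruction from the symbol definitions given in the text; the exact prefactors should be checked against the cited originals, and the 10 nm film thickness used in the worked number is only an illustrative assumption.

```latex
% Two-layer estimate (cf. Eq. (63)): a liquid of viscosity \mu_1 flowing over a
% wall film of thickness h and viscosity \mu_2 (no slip at the solid) behaves as
% if it slipped at the film surface with an apparent slip length
\[
  L_s \;=\; h\!\left(\frac{\mu_1}{\mu_2} - 1\right) \;\approx\; h\,\frac{\mu_1}{\mu_2}
  \qquad \text{for } \mu_1/\mu_2 \gg 1 .
\]
% For water over an air film (\mu_1/\mu_2 \approx 50), an assumed h = 10\,\mathrm{nm}
% would give L_s \approx 0.5\,\mu\mathrm{m}.
% Kinetic estimate for a gas layer (cf. Eq. (64)), independent of the film
% thickness up to an O(1) prefactor:
\[
  L_s \;\sim\; \frac{\mu}{\rho\, u_{\mathrm{th}}},
\]
% with \mu the liquid viscosity, \rho the gas density and
% u_{\mathrm{th}} the thermal velocity of the gas molecules.
```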
Carbon nanotubes with rare fluid transport efficiency, as well as good electrical, optical, thermal and mechanical properties, will be a promising platform for microfluidics and nanofluidics engineering in MEMS/NEMS in the future."} {"text": "Dear Editor,et al. with great interest.[et al., reported their experience and success on using the laparoscopic transperitoneal approach for irreducible inguinal hernias.[I read the current publication of Jagad interest. Jagad et hernias. I agree"} {"text": "Morphinofobia among the general population (GP) and among health care professionals (HP) is not without danger for the patients: it may lead to the inappropriate management of debilitating pain. The aim of our study was to explore among GP and HP the representation and attitudes concerning the use of morphine in health care.A cross-sectional study was done among 412 HP (physicians and nurses) of the 4 hospitals and 10 community health centers of Beira Interior and among 193 persons of the GP randomly selected in public places. Opinions were collected through a translated self-administered questionnaire.A significant difference of opinion exists among GP and HP about the use of morphine. The word morphine first suggests drug to GP and analgesia to HP . The reasons for not using morphine most frequently cited are: for GP morphine use means advanced disease (56%), risk of addiction (50%), legal requirements ; for HP it means legal risks and adverse side effects of morphine such as somnolence - sedation The socio-demographic situation was correlated with the opinions about the use of morphine.False beliefs about the use of morphine exist among the studied groups. There seems to be a need for developing information campaigns on pain management and the use of morphine targeting. Better training and more information of HP might also be needed. During the last decades, considerable progress has been recorded in the knowledge of the action and the use of opiates in pain management -3. In spWhy has this knowledge not been applied by HP and did not have the expected effects among the GP? Do morphine frighten so much the GP and HP becoming a public health problem?Morphinofobia can be defined as a set of false beliefs concerning the negative effects of morphine in the management of pain , an inapMorphinofobia seems widespread and caused by ignorance, prejudices, false beliefs, economic marketing strategies and limitations in the availability of morphine ,10-16.et al. [et al. [et al. [In 1960, the studies of Robins et al. and Abel [et al. reported [et al. ,14,20-25et al. [Musi et al. studied Our study aims to compare morphinofobia among the general population (GP) and health professionals (physicians and nurses) (HP) in a country where the consumption of morphine was multiplied by 4 over the last decade , though et al. [The survey was carried out between August and November 2005 using two structured questionnaires, developed based on the model of Musi et al. . One of The GP was recruited randomly on a given day in two shopping centres, three urban restaurants, the weekly marketplace and at the railroad station of Guarda. 
The participation criteria were: at least 18 years old, able to answer the questionnaire and living in the region of Beira Interior.As to the GP, a questionnaire was addressed to 800 HP (nurses and physicians) employed at four hospitals of Beira Interior and ten community care centres with the agreement of the regional Department of Health of Beira Interior.The HP were working in internal medicine, general surgery, paediatrics, oncology, orthopaedics, emergency and community home care. The participation criteria were: be employed in one of the hospitals or community home care centres for at least a year and having a completed training as a nurse or as a physician.The questionnaires were distributed via the management of the institutions and answers were returned by the same channel under ceiled envelope. The inquiry was anonymous. The study was approved by the Research Ethics Committee of the University Institute of Kurt B\u00f6sch Sion, Switzerland.et al. [For each group, we developed a specific questionnaire based on the study of Musi et al. .The first part of each questionnaire addresses socio-demographic data. The second part of the questionnaire addresses the perception people have of \"morphine\", its efficacy and its side effects. The questionnaire also explores the attitudes concerning the use of morphine and its acceptance. A judgment scale of 5 levels, ranging from \"completely disagree\" to \"completely agree\" was used.2of Pearson \u00bb, \u00ab t-test for matched samples \u00bb and \u00ab correlation of Pearson \u00bb using the software program SPSS version 15.0. Significant differences between categories or groups of variables were defined at 95%.Data was analyzed with \u00ab Khi The sampling among GP and HP yielded a total of 606 respondents.194 persons of GP answered the questionnaire. One questionnaire was discarded. About six out of ten respondents were women. Age range was between 18 and 80 years. About 20% of the respondents had not been to school and about 30% had only attended primary school. Almost half of the interviewed people lived in urban areas. A vast majority of respondents (87%) were Catholics and 46 physicians (11%). The participation rate of nurses was 52,3% the physicians' 46,0%. On the average the participation rate was 51,5%. Three quarters of the respondents were women. The average age was 35,5 years. About 70% of the respondents lived in semi-urban areas. The majority of the HP (93%) were Catholics , whereas for GP it first means \u00abdrugs\u00bb . Other differences exist between GP and HP, such as \u00abmedication\u00bb , \"sedation- somnolence\u00bb , \u00abcancer\u00bb , \u00abdependency\u00bb \u00abopiate\u00bb , \u00abrelief\u00bb . Some similarities also exist, such as \u00abpain-suffering\u00bb and \u00abend of life - death\u00bb . One third of GP \u00ab does not know \u00bb what morphine stands for. The results are summarized in Table Significant differences exist between GP and HP in their perception of the word \"morphine\". For HP the word \"morphine\" first stands for \u00ab gesic \u00bb 3,9% , wheThe opinions among GP and HP concerning the use of morphine as an analgesic appear in Table it means that it is serious\u00bb , the smallest for \u00abthere is a risk of somnolence or sedation\u00bb GP shows more false beliefs than HP concerning the use of morphine as an analgesic. 
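The group comparisons described above (Pearson's chi-square for categorical answers, t-tests and Pearson correlations for the scale scores, with significance set at 95%) can be illustrated with a minimal sketch. The counts below are hypothetical placeholders chosen only to show the shape of the computation; they are not the study data.

```python
# Minimal sketch of the categorical GP-vs-HP comparison described above.
# The contingency counts are hypothetical, not the study data.
from scipy.stats import chi2_contingency

# Rows: GP, HP; columns: respondents who did / did not first associate
# the word "morphine" with "drug".
table = [[105, 88],    # general population (hypothetical counts)
         [60, 352]]    # health professionals (hypothetical counts)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")

# With the 95% threshold used in the study, p < 0.05 would indicate a
# significant difference of opinion between GP and HP.
if p_value < 0.05:
    print("Significant difference between GP and HP")
```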
The largest difference exists for \u00abTable sex of the respondents and the questions \u00abrisks of delirium\u00bb , \u00abdiminish the surviving period\u00bb , \u00abrisks of increasing doses\u00bb and \u00abthe legal risks\u00bb . A weak negative relationship was seen between sex and the expressions \u00abit means that it is serious\u00bb , \u00abrisks of dependency\u00bb , \u00abrisks of somnolence\u00bb \u00ablimited life expectancy\u00bb and \u00abrisks of discrimination\u00bb . Men are less prone to consider and use morphine as an analgesic than women.Data analysis shows an absence of a significant relationship between the age of the respondents and the perceptions of the use of morphine. The older the respondents, the more false beliefs exist about the use of morphine.A positive weak relationship was observed between the level of training of the respondents and the variable \u00ablegal risks\u00bb was observed. A weak positive relationship was observed between place of living and the expressions \u00abit means that it is serious\u00bb , \u00abrisks of dependency\u00bb , \u00abdiminish surviving period\u00bb , \u00ablimited life expectancy\u00bb and \u00abrisk of discrimination\u00bb . No relationship was noticed with the variables \u00abrisk of delirium\u00bb , \u00abrisks of somnolence\u00bb , \u00abrisks of accustom\u00bb and \"legal risks\u00bb .A weak negative relationship between This study compared the use of morphine as perceived by GP and HP in the region of Beira Interior in North-Eastern Portugal. There are differences of perception but also common fears. It might well induce some reluctance regarding the use of morphine. This in turn might influence negatively patient care in general and pain management more specifically.et al. [et al. [et al. [et al. [Most studies reporting \"false beliefs\" regarding the use of morphine in pain management focus either on specific ethnics groups or on health professionals ,4,29-31.et al. in North [et al. reports [et al. and Robi [et al. show a set al. [et al. [et al. [In our study morphinofobia among HP seems related to false beliefs on side effects of morphine, risks of addiction and legal constraints in the prescription of morphine. Yet the word morphine is principally associated with the notion of analgesia. Musi et al. reports [et al. mention [et al. analysed [et al. reports [et al. studying [et al. in a revet al. [Many studies report fears and false beliefs concerning the use of morphine in pain management among HP ,22,38-40et al. report fet al. and fromet al. .et al. [et al. [et al. [et al. [The existence of false beliefs on pain, addiction and abuse of morphine have also been reported by Gilson et al. in a stu [et al. reported [et al. studying [et al. question [et al. ,46-49. S [et al. . Ripamon [et al. concludesocio-demographic features and the perceptions of the use of morphine in pain management. Yet morphinofobia was highest among little-educated older men living in rural areas. The cultural and geographic influences on attitudes and beliefs regarding morphine among patients with non cancerous pains have been stressed by Monsivais et al. [et al. [Our results showed a rather weak relationship between s et al. and Cice [et al. . However [et al. is cauti [et al. mentioneet al. [et al. [Health professionals play an important role as far as morphinofobia is concerned, be it through a possible lack of knowledge regarding morphine ,54, be iet al. and Edwa [et al. thereforThere are limits to our study. 
First, generalising our observations to the population of Beira Interior may not be warranted because of the small size of the GP sample and its opportunistic nature. Second, our study focused on the attitudes and perceptions of GP and HP concerning morphine and did not take the patients' perspective into consideration. Third, it should be kept in mind that some of our results are a matter of debate in the specialized literature [52]. Last, …

This study contributes to a better understanding of "the myths of morphine" among the general population and health professionals in the region of Beira Interior. It suggests that efficient pain management is not limited to the prescription of an adequate analgesic according to « the golden standard ». The success of a morphine prescription is influenced by a multitude of other factors.

Our results are in accordance with those of the study by Musi et al. done in … . This leads us to suggest that there is a need for information campaigns targeting the general population, and for better training programs targeting health care professionals, based on the theory of planned clinical behaviour.

The authors declare that they have no competing interests.

HV carried out the study concept, drafted the manuscript, performed the data analysis and interpretation and the follow-up, and participated in the questionnaire design and data collection. EKM carried out the design of the questionnaire and the study concept, and participated in the draft of the manuscript. MF carried out the design and translation of the questionnaires, the survey and data collection, and contributed to the data analysis and interpretation. CHR conceived the study and participated in the questionnaire design, data analysis and interpretation, and the draft of the manuscript. PC carried out the draft of the manuscript and participated in the data analysis and interpretation. All the authors approved the final manuscript.

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-684X/9/15/prepub
These are found in high levels in gingival tissues and gingival crevicular fluid of patients with periodontal disease.[Periodontal diseases are polymicrobial in nature and the complex interaction among many microbes makes the disease a challenging one to understand and treat. Host inflammatory response to plaque microorganisms causes tissue damage, leading to the destruction of the periodontal tissues in an episodic manner. Bacterial products have the ability to stimulate host cells to secrete a wide variety of inflammatory mediators that have numerous biological activities, some of which cause soft tissue and bone destruction. Cytokines such as tumor necrosis factor- \u03b1 (TNF- \u03b1), interleukin-1\u03b2 (IL-1\u03b2), and inflammatory mediators such as prostaglandin EThe extent and degree of periodontal destruction varies widely from patient to patient, and in different sites within the same patient. These variations could be due to the presence of complex subgingival populations of microorganisms in different sites. Periodontal diseases are characterized by the presence of periodontal pockets, which are not easily accessible for plaque removal.Bacteria in the subgingival area are organized in a complex microbial biofilm. Biofilms are matrix-enclosed bacterial populations that are adherent to each other and/or surfaces or interfaces. These microbial plaques are extraordinarily persistent, difficult to eliminate, and play a vital role in periodontal disease. PhysicalSuccessful treatment is dependent on halting tissue destruction through the elimination or control of etiological agents, together with microbial shift towards one typically present in health. Mechanical therapy alone may fail to eliminate invasive, pathogenic bacteria because of their location within the gingiva and dental tissues, which are inaccessible to periodontal instruments. Treatment strategies aiming primarily at suppressing or eliminating specific periodontal pathogens include local and systemic administration of antibiotics. In orderOne of the biggest benefits of any locally delivered drug is that it does not require patient compliance for regular drug intake. The clinician places the drug which releases the antimicrobial for an extended period of time at a steady pharmacological level. Another Arestin\u2122 is made up of minocycline, a semi-derivative of tetracycline, and a very potent broad-spectrum antibiotic. Minocycline has significant antimicrobial activity against a wide range of organisms as well as an anticollagenase effect. MinocyclArestin\u2122 delivers minocycline in a powdered microsphere delivery system. The microspheres have diameters ranging from 20 to 60 \u03bc. The active ingredient is minocycline hydrochloride which exists as particles distributed throughout the interior of the microspheres. When Arestin\u2122 is administered, it immediately adheres to the periodontal pocket.[Gingival crevicular fluid hydrolyzes the polymer, causing water-filled channels to form inside the microspheres. These holes provide escape routes for the encapsulated antibiotic for sustained release. The active drug dissolves and diffuses out of the microspheres through the channels into the surrounding tissues. 
After ten days, the microspheres are fragmented and continue to release minocycline for 14 days or longer; eventually, these microspheres completely bioresorb.Traditional therapies such as tooth brushing, flossing, subgingival irrigation, and mechanical debridement are successful for patients with mild periodontal diseases. However, as the periodontal pocket deepens, the patient's home care procedures as well as professional debridement loses effectiveness, making local drug delivery a viable option. The addition of local delivery can also help to maintain or control the disease between maintenance visits.The purpose of this study was to compare the clinical effects of scaling and root planing with those of scaling, root planing, and local administration of minocycline hydrochloride (1 mg) (Arestin\u2122) delivered subgingivally in patients with chronic periodontitis.A total number of 15 patients aged 35-50 years diagnosed with chronic periodontitis and having probing depths ranging from 5 to 8 mm as well as radiographic evidence of bone loss, were selected for the study from the Department of Periodontics, Meenakshi Ammal Dental College and Hospitals, Chennai.Ethical approval was obtained from the Institutional ethical committee for the study.Inclusion criteria for patient selection.Patients in the age group of 35-50 years.Patients diagnosed as suffering from chronic periodontitis having a probing pocket depth of 5 to 8 mm with radiographic evidence of alveolar bone loss and without mobility of teeth.Patients willing to take part in the study and maintain appointments regularly.Exclusion criteria for patient selection.Patients having systemic diseases like diabetes mellitus, hypertension, bleeding disorders, hyperparathyroidism.Pregnant women and lactating mothers.Patients allergic to tetracyclines.Patients who have had periodontal treatment in last six months.Antibiotic therapy within three months prior to treatment.Long-term therapy within a month prior to enrollment with medications that could affect periodontal status or healing.Patients with medical or dental therapy scheduled or expected to occur during the course of this study that could have an impact on the subjects ability to complete the study.A total number of 60 sites from 15 patients were selected for the study. The duration of the study was for six months. Four sites were identified for the study in each patient: Two sites served as control sites (Group A) and two sites on the contralateral side served as test sites (Group B). For all patients, general, oral and full mouth periodontal examination was carried out and informed consent was obtained from the patients. On screening day (day 0), patient evaluation was followed by impressions for the fabrication of acrylic stents required for the measurement of pocket depths in the control and test sites during the study period . VariablThe following parameters were recorded:Plaque indexGingival indexGingival bleeding indexProbing pocket depthThe control and test sites were grouped and treated as follows:Group A (control) - Comprised of 30 sites; only scaling and root planing was done at the baseline visit.Group B (test) - Comprised of 30 sites; scaling and root planing was followed by local application of Arestin\u2122 (1 mg) at the baseline visit.th day. During this visit, all clinical parameters, except probing depth, were measured. An additional application of Arestin\u2122 (1 mg) was given in the test sites,. 
The control and test sites were also examined on the 90th and 180th days, and all clinical parameters including probing pocket depth were recorded.Both the control and test sites were again examined on the 30Application of minocycline microspheres .After insertion of the local drug delivery system, the patients were advised not to eat hard food that could traumatize the gingiva. They were also advised not to brush the area for 12 h or to floss or use interproximal cleaning devices for ten days. During the study period, the patients were instructed to continue regular tooth brushing and interdental cleansing. They were also instructed not to use any mouth washes for the duration of the study.t-test/Student's paired t-test wherever appropriate. The proportion of positivity of the gingival bleeding index was compared between control and test groups by Pearson's chi-square test with Yate's continuity correction/Fisher's exact test (two-tailed) wherever appropriate. In the present study, P < 0.05 was considered as the level of significance.Means and standard deviations were estimated from the samples for each study group. Mean values were compared by Student's independent P < 0.0001). The mean reduction in the gingival index score in group A was 0.23 \u00b1 0.35 from day 0 to day 30, 0.28 \u00b1 0.50 from day 0 to day 90, and 0.38 \u00b1 0.49 from day 0 to day 180, all statistically significant differences (P < 0.0001). The mean reduction in probing pocket depth values for group A was 0.24 \u00b1 1.07 from 0 to day 90, 0.37 \u00b1 1.08 from day 0 to day 180, a difference that was not statistically significant. The mean reduction in plaque index scores in group B from day 0 to day 30 was 0.56 \u00b1 0.45, 0.72 \u00b1 0.32 from day 0 to day 90, and 0.78 \u00b1 0.32 from day 0 to day 180, all statistically significant differences (P < 0.0001)%. The mean reduction in gingival index scores in group B was 0.72 \u00b1 0.55 from day 0 to day 30, 0.80 \u00b1 0.41 from day 0 to day 90, and 0.95 \u00b1 0.49 from day 0 to day 180, all statistically significant differences (P < 0.0001). The mean reduction in probing pocket depth for group B was 1.73 \u00b1 0.87 mm from 0 to day 90, and 1.66 \u00b1 0.96 mm from day 0 to day 180, again a statistically significant difference (P < 0.0001) [The mean reduction in the plaque index score in group A from day 0 to day 30 was 0.46 \u00b1 0.43, 0.42 \u00b1 0.42 from day 0 to day 90, and 0.53 \u00b1 0.39 from day 0 to day 180, all statistically significant differences ( 0.0001) .P < 0.0001). The reduction in bleeding on probing was significant in both groups A and B. The reduction in group B was significantly higher than the reduction seen in group A on the 30th day, 90th day, and 180th day.P < 0.0001). The mean reduction in probing pocket depth between days 0 to 90 and days 0 to 180 was statistically significant between the two groups (P < 0.0001).th day from the baseline, but there was a significant difference in the plaque index on the 90th and 180th days between the two groups [et al.,[et al.,[et al.,[et al.,[et al.,[et al.,[et al.[In our study, no statistically significant difference was observed between the two groups on the 30o groups . These f [et al., Mullur e,[et al., Saito et,[et al., Jones et,[et al., Timmerma,[et al., Hagiwara,[et al., and Vans.,[et al.et al.,[et al.,[et al.,[et al.,[et al.,[et al.,[et al.,[et al.,[et al.[In our study, the mean reduction in plaque index scores between days 0 and 30 for groups A and B was not statistically significant . 
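As a minimal illustration of the statistical comparisons described above (Student's paired t-test within a group, Student's independent t-test between groups, P < 0.05 taken as significant), the sketch below uses made-up probing-depth values rather than the study data; the site counts and measurements are illustrative assumptions only.

```python
# Illustrative sketch of the within-group and between-group comparisons;
# all probing pocket depth values (mm) are made up for demonstration.
import numpy as np
from scipy import stats

# Hypothetical baseline and day-180 probing depths for test sites (group B)
baseline_B = np.array([6.0, 7.0, 5.5, 6.5, 7.5, 6.0])
day180_B   = np.array([4.5, 5.0, 4.0, 5.0, 5.5, 4.5])

# Within-group change: Student's paired t-test
t_paired, p_paired = stats.ttest_rel(baseline_B, day180_B)

# Hypothetical reductions for control sites (group A) for the between-group test
reduction_A = np.array([0.5, 0.0, 1.0, 0.5, 0.0, 0.5])
reduction_B = baseline_B - day180_B

# Between-group comparison: Student's independent t-test
t_ind, p_ind = stats.ttest_ind(reduction_A, reduction_B)

print(f"paired t = {t_paired:.2f}, p = {p_paired:.4f}")
print(f"independent t = {t_ind:.2f}, p = {p_ind:.4f}  (P < 0.05 significant)")
```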
Howeveret al., Vansteen,[et al., Saito et,[et al., Jones et,[et al., Timmerma,[et al., Radvar e,[et al., Hagiwara,[et al., Vansteen,[et al., and Kina.,[et al.et al.,[et al.,[et al.,[et al.[When group A and group B were compared, a reduction in bleeding was seen from the sites in both the groups, but the percentage of reduction in bleeding from the sites was more in group B compared to sites of group A . This waet al., Vansteen,[et al., by Mullu,[et al., and Timm.,[et al.et al.,[et al.,[et al.,[et al.,[et al.,[et al.,[et al.,[et al.,[et al.,[et al.,[et al.,[et al.,[et al.[A significant reduction in probing pocket depth was found in group B when compared with group A . This waet al., Vansteen,[et al., Saito et,[et al., Jones et,[et al., Timmerma,[et al., Radvar e,[et al., Makoto U,[et al., Hey-Riye,[et al., Hagiwara,[et al., Vansteen,[et al., Kinane e,[et al., Williams,[et al., Dean et ,[et al., and Gree.,[et al.The above results show that scaling and root planing plus Minocycline microspheres provide significantly greater probing depth reduction than scaling and root planing alone. This significant change in all the clinical parameters examined in the test group, is because Arestin\u2122 releases therapeutic doses of the drug for more than 14 days, well above the minimum inhibitory concentration needed to kill most putative pathogens for periodontal disease. A pharmaFrom the results of the study, the following conclusions can be drawn:Test sites where Minocycline microspheres were employed, displayed a statistically significant reduction in all the clinical parameters after treatment as compared to control sites, which showed only minimal changes.A degradable, subgingivally placed drug delivery system containing 1 mg Minocycline microspheres, is a safe and efficient adjunct to scaling and root planing in the treatment of chronic periodontitis.The results of this study confirm that Minocycline microspheres are a safe and efficient adjunct to scaling and root planing, and can produce significant clinical benefits when compared to scaling and root planing alone."} {"text": "Patients, who meet criteria 1 and 2 and possibly 4, but not criterion 3, can be designated as schizophrenia with secondary negative symptoms. Only those patients who satisfy all four criteria should be designated as schizophrenia with deficit syndrome. Those patients meeting criteria 1, 2 and 3, but not criterion 4, could either be schizophrenia with primary, non-enduring negative symptoms though these patients with passage of time may satisfy full criteria for schizophrenia with deficit syndrome and thus become schizophrenia with deficit syndrome.Carpenter To further distinguish deficit syndrome from primary, non-enduring negative symptoms, the authors suggesteOver the years, considerable research on deficit schizophrenia has been conducted. Patients with deficit schizophrenia differ from non-deficit schizophrenia on variables such as risk factors, symptom profiles, neuropsychological functioning, family history, course of illness, treatment response and structural and functional neurobiology. Studies have also shown that deficit schizophrenia has long-term diagnostic stability. Some resantecedent, concurrent and predictive validators.To address the question of validity of deficit schizophrenia, available data can be divided into three classes of potential validators. This scheme represents an adaptation and enlargement of the validating criteria for psychiatric illnesses as outlined by Robin and Guze. 
The poteet al.[et al.[et al.[There is evidence to suggest that patients with deficit schizophrenia differ from non-deficit schizophrenia on sociodemographic variables pertaining to gender and marital status; on other sociodemographic variables no significant differences have been reported. In an initial report, Carpenter et al. stated tl.[et al. reportedl.[et al.10 Furthel.[et al. Some of l.[et al.12 In a fl.[et al. found thOne study suggested that patients with deficit schizophrenia have lower rate of obstetric complications.Studies have shown that patients with deficit schizophrenia have poorer premorbid adjustment than patients with non-deficit schizophrenia before the appearance of positive psychotic symptoms.\u20131215\u20132112There is some evidence to suggest that when compared with non-deficit schizophrenia, the patients with deficit schizophrenia appear to have an increased family history of schizophrenia,22\u201324 wit1422A winter birth excess has repeatedly been associated with schizophrenia. However, it is interesting to note that subjects with deficit schizophrenia demonstrate summer birth as a risk factor for the deficit form of schizophrenia. This finding has been consistently replicated in population-based studies of incident cases of psychosis30 and prClinical description of the concept of deficit schizophrenia has been described earlier. Various studies have evaluated the construct validity of deficit and non-deficit subtypes and have shown that deficit schizophrenia is associated with greater social and physical anhedonia,33 more a2034161736On neurological examination compared to a matched non-deficit group, the deficit schizophrenia subjects were found to have higher level of soft neurological signs, higher i15et al.[22Patients with deficit schizophrenia exhibit selective impairments on a number of neurocognitive measures that are purportedly partly subserved by dorsolateral prefrontal basal ganglia-thalamocortical circuit. Buchanan et al. comparedet al. In anothet al. These diet al. Studies et al.2240 on set al.22 However,et al.22Oculomotor dysfunction, i.e. eye tracking disorder is significantly associated with deficit schizophrenia and it is suggested that both may have common pathophysiology of cerebral cortical-subcortical circuits.42 Furtheet al.[There are inconsistent findings from structural imaging studies with regard to deficit schizophrenia. Some studies have reported that patients with deficit and non-deficit groups differ significantly in prefrontal white matter volume, with the non-deficit group having smaller volumes and the deficit group having similar white matter volume compared to normal controls. Another et al. comparedet al. in whichet al.[N-acetylaspartate (NAA) to creatine plus phosphocreatine in deficit schizophrenia compared to healthy subjects or non-deficit patients, a finding suggesting a neuronal loss in the medial prefrontal cortex of deficit patients.[During the attempt to retrieve poorly encoded words, patients of non-deficit syndrome had significantly greater activation of the left frontal cortex compared to patients with the deficit syndrome. However, there was no difference between the two groups in the activity of the hippocampus during memory retrieval. Liddle eet al.54 reportpatients.et al.[et al.[Studies have suggested various biological markers/correlates of deficit schizophrenia. Waltrip et al. reportedet al. However,l.[et al. 
did not P = 0.006), which remained significant even when the data were corrected for covariates like positive psychotic symptoms and demographic features known to be associated with cytomegalovirus seropositivity.[Another study linked deficit schizophrenia with the presence of serum antibodies to cytomegalovirus concentrations in deficit schizophrenia. Ribeyre et al. reportedet al. which in.[et al., who founet al.[Ozdemir et al. measuredet al.[Studies which have evaluated the genetic differences between deficit and non-deficit schizophrenia fail to suggest differences between the two groups. Wonodi et al. reportedet al.Studies have shown that patients with deficit schizophrenia continue to exhibit poorer social and occupational function than other patients with chronic schizophrenia161721 an161716934521et al.[Relatively few studies have attempted to examine the improvement of primary negative symptom with antipsychotics and even there the findings are conflicting. In clinical trials that used the SDS to distinguish deficit and non-deficit groups, the positive symptoms of patients with deficit schizophrenia showed the same therapeutic response to clozapine as patients with non-deficit schizophrenia, but there was no improvement in the negative symptoms of patients with deficit schizophrenia. However,et al. failed tet al. however,et al.et al.[Criteria for the deficit group were explicitly developed by Carpenter et al. Since thFrom available data, it can be concluded that there are meaningful and robust differences between deficit and non-deficit cohorts and the available data are consistent with both the separate disease hypothesis and the competing hypothesis of a disease severity continuum. Hence, further studies are required to firmly establish one or the other hypothesis. However, at present, it can be concluded that separating deficit and non-deficit groups reduces the heterogeneity of schizophrenia."} {"text": "Thus computational approaches for detecting protein complexes from protein interaction data are useful complements to the limited experimental methods. They can be used together with the experimental methods for mapping the interactions of proteins to understand how different proteins are organized into higher-level substructures to perform various cellular functions.Most proteins form macromolecular complexes to perform their biological functions. However, experimentally determined protein complex data, especially of those involving more than two protein partners, are relatively limited in the current state-of-the-art high-throughput experimental techniques. Nevertheless, many techniques (such as yeast-two-hybrid) have enabled systematic screening of pairwise protein-protein interactions Given the abundance of pairwise protein interaction data from high-throughput genome-wide experimental screenings, a protein interaction network can be constructed from protein interaction data by considering individual proteins as the nodes, and the existence of a physical interaction between a pair of proteins as a link. This binary protein interaction graph can then be used for detecting protein complexes using graph clustering techniques. 
In this paper, we review and evaluate the state-of-the-art techniques for computational detection of protein complexes, and discuss some promising research directions in this field.Experimental results with yeast protein interaction data show that the interaction subgraphs discovered by various computational methods matched well with actual protein complexes. In addition, the computational approaches have also improved in performance over the years. Further improvements could be achieved if the quality of the underlying protein interaction data can be considered adequately to minimize the undesirable effects from the irrelevant and noisy sources, and the various biological evidences can be better incorporated into the detection process to maximize the exploitation of the increasing wealth of biological knowledge available. Most proteins form complexes to accomplish their biological functions -3. In faet al. [in vitro purification of whole-cell lysates [While there are a number of ways to detect protein complexes experimentally, Tandem Affnity Purification (TAP) with mass spectrometry is the pet al. have sho lysates . This meen masse have enabled the construction of PPI networks on a genomic scale. A graphical map of an entire organism's interactome can be constructed from such experiments by considering individual proteins as the nodes, and the existence of a physical interaction between a pair of proteins as a link between two corresponding nodes. Given that protein complexes are molecular groups of proteins that work together as \"protein machines\" for common biological functions, we may expect the protein complexes to be functionally and structurally cohesive substructures in the binary PPI networks [Recently, high-throughput methods for networks . Researcnetworks , suggestnetworks . HoweverBefore we review the current computational approaches for protein complex detection, let us make a principled distinction between two biological concepts, namely, protein complexes and functional modules . A proteLet us now introduce some terminologies which are widely used in protein complex mining. Then, we will present the use of traditional graph clustering techniques for complex mining followed by some new emerging techniques for this task.G = , where V is the set of nodes (proteins) and E = {|u, v \u2208V } is the set of edges (protein interactions). A graph G1 = is a subgraph of G if and only if V1\u2286V and E1\u2286E. For a node v \u2286V , the set of v 's direct neighbors is denoted as vN where vN = {u |u \u2286V, \u2286 E }. v 's degree in G, deg (v ), is the cardinality of vN, i.e., |vN|. Density. A PPI network is often modeled as an undirected graph G, denoted as density (G ), is defined to quantify the richness of edges within G as shown in equation (1) [density (G ) \u2264 1. If density (G ) = 1, then G is the fully connected graph or a clique, which has the maximum number of edges, i.e., every pair of distinct vertices is connected by an edge.The density of the graph tion (1) . BasicalClustering Coeffcient. The clustering coeffcient of a node v is the density of the subgraph formed by vN and their corresponding edges, which quantifies how close v 's neighbors are to being a clique (complete graph) [e graph) .Local Neighborhood. Given a node u \u2208V, its local neighborhood graph uG is the subgraph formed by u and all its immediate neighbors with the corresponding interactions in G . 
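The terminology introduced above can be made concrete with a small sketch before the formal definitions continue. The adjacency-set representation and the toy network below are illustrative assumptions and are not taken from any of the reviewed methods.

```python
# Minimal sketch of the graph notions defined above (density, clustering
# coefficient, local neighborhood) on a made-up toy PPI graph.
from itertools import combinations

# Undirected PPI graph as adjacency sets: protein -> set of interacting partners
ppi = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}

def density(nodes, graph):
    """Fraction of possible edges present among `nodes` (cf. Eq. (1))."""
    nodes = set(nodes)
    n = len(nodes)
    if n < 2:
        return 0.0
    edges = sum(1 for u, v in combinations(nodes, 2) if v in graph.get(u, set()))
    return 2.0 * edges / (n * (n - 1))

def clustering_coefficient(v, graph):
    """Density of the subgraph induced by v's direct neighbors."""
    return density(graph[v], graph)

def local_neighborhood(u, graph):
    """Node set of u's local neighborhood graph: u plus its direct neighbors."""
    return {u} | graph[u]

print(density(ppi, ppi))                 # density of the whole toy graph (0.667)
print(clustering_coefficient("A", ppi))  # how clique-like A's neighbors are (0.333)
print(local_neighborhood("A", ppi))      # {'A', 'B', 'C', 'D'}
```

A dense, highly clustered neighborhood of this kind is exactly the signal that the seed-and-grow methods reviewed below exploit.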
It can be formally defined as uG= , where uV= {u } \u222au N, and uE= {| \u2208j , vk E, v \u2208u V}If the edges in the PPI network are weighted , thew (e ) is the weight of the edge e .where v is the weighted density of the subgraph formed by Nv and their corresponding edges.Similarly, the weighted clustering coeffcient of the node First, we review the conventional graph clustering approaches for protein complex mining. These methods mine for cliques or densely connected subgraphs in PPI networks which could correspond to protein complexes. While the methods mainly use the PPI networks for mining, additional information, such as gene expression data ,24, funcIn this section, let us describe the graph clustering approaches that use PPI networks as the sole underlying dataset for the mining task.MCODE.The MCODE algorithm proposed by Bader et al. [k-core in node 5's neighborhood graph with k=2 [d = 5/6 . Thus, node 5 has an initial weight w (5) = k \u00d7 d = 2 \u00d7 5/6 = 1.67. Next, the node with the highest weight is selected as an initial cluster. Node 2, as node 1's neighbor, satisfies the weight constraint to be included into the cluster because w (2) = 3 \u2265 (1 \u2013 wT) \u00d7 w (1). Here wT is a threshold for cluster formation that is set as 0.2 by default. Similarly, nodes 3 and 8 are also added into this cluster and finally MCODE predicts {1, 2, 3, 8} as a protein complex. Subsequently, {4, 5, 6, 7} is detected as another putative protein complex from this sample PPI graph.r et al. is one owith k=2 ,29 and dThe experimental results of MCODE method showed that the number of predicted complexes is generally small and the size of many predicted complexes is often too large.Clique.\t\t\t\t\t Spirin and Mirny [nd Mirny proposedMCL. Markov Clustering (MCL) [ng (MCL) ,31 can ang (MCL) , MCL is ng (MCL) -36.LCMA.\t\t\t\t\t Instead of adopting the over-constraining full cliques as the basis for protein complexes, Li et al. [i et al. devised i et al. describeDPClus. Amin et al. [n et al. proposedPCP. Chua et al. [a et al. proposeda et al. to evalua et al. and a paHub Duplication.\t\t\t\t\t\t Ucar et al. [r et al. developer et al. ) are firCFinder. Adamcsek et al. [k-clique percolation clusters as functional modules using a Clique Percolation Method [k-clique is a clique with k nodes and two k-cliques are adjacent if they share (k \u2013 1) common nodes. A k-clique percolation cluster is then constructed by linking all the adjacent k-cliques as a bigger subgraph.k et al. providedn Method . In partSCAN.\t\t\t\t\t\tMete et al. [e et al. proposede et al. . SCAN fiGS. Navlakha et al. [a et al. applied a et al. to clustCMC. Liu et al. [u et al. recentlyu et al. . CMC theu et al. . TherefoProteins which interact with each other can be expected to be activated and repressed under the same conditions. In other words, interacting proteins are likely to exhibit similar gene-expression profiles. In fact, gene expression data has been widely exploited to annotate protein functions (guilt by association) and predict novel protein-protein interactions -52. We dGFA. Feng et al. [G = , two different density definitions as shown in equation (3) are used:g et al. proposedw (v ) is the weight of the protein v weighted by \u2013expressionev ) is the log fold change of v's gene-expression profile. GFA first applies DSA (Densest Subgraph Algorithm) [where gorithm) to find DMSP. Maraziotis et al. 
[s, its neighbors and even its indirect neighbors are iteratively included based on different criteria to form the module.s et al. developeMATISSE. Ulitsky et al. [k neighbors with the highest weights are picked to form a set of (k + 1) seeds. Last, after selecting the seeds, Jointly Active Connected Subnetworks (JACS) are obtained by several operations . Two small JACSs are merged to form a new one if they are closely connected.y et al. also proFunctional information can also be incorporated to accurately detect protein complexes. Since proteins within the same protein complex are generally aggregated to perform a common function, the functional enrichment of a cluster can be used to indicate its tendency to be a real complex. The reliability of interactions, evaluated by the consistency of functional similarity between two proteins, can also help to provide cleaner PPI data for protein complex detection.RNSC. King et al. [g et al. proposedDECAFF. Li et al. [i et al. proposedi et al. are procSWEMODE. Lubovac et al. [c et al. presentec et al. -62. Secoc et al. .STM. Cho et al. [o et al. extendedo et al. to identet al. [The techniques discussed above have used pairwise physical interactions detected by high-throughput experiments such as Y2H as the PPI dataset for detecting protein complexes. More recently, there are some researchers who attempt to detect protein complexes from interaction data obtained solely from TAP experiments. Unlike Y2H method which detects direct physical interactions, using TAP data requires careful weighing of the detected links as TAP also detects indirect interactions in protein complexes. Krogan et al. were oneet al. ,31 is thet al. [Caroline et al. proposedet al. ,31 is apet al. [et al. [Pu et al. also appet al. . Similar [et al. used altet al. [G = where U and V represent the sets of baits and preys respectively and E describes the bait-prey relationships detected in the experiments as shown in Figure Recently, Geva et al. proposedIn this section, we review a number of emerging techniques for protein complex detection that are different from the application of traditional graph clustering described in the previous section.The previous graph clustering methods described above are unsupervised and are more or less based on the basic assumption that dense subgraphs in PPI networks are likely to be protein complexes. The protein complexes detected by many of these methods must be either cliques or defective cliques ,67 or aret al. [Qi et al. proposedTo obtain the training data, they collect available known protein complexes and also generate some random subgraphs as non-complexes. Topological and biological properties of these training graphs are then summarized as features. A probabilistic Bayesian network (BN) is then applied to integrate all these features and the parameters of this BN model are learned from the training data.Given a graph candidate and its corresponding features, a log likelihood ratio score can thus be calculated by the BN model to show whether it is qualified to form a complex. A simulated annealing search is furthIn the previous section, we have mentioned that some researchers have explored the use of TAP data instead of Y2H data for complex detection. However, as TAP does not detect direct pairwise protein-protein interactions (unlike Y2H), the PPI networks constructed using TAP data are not ideal for detecting protein complexes. 
Recently, several techniques are proposed to directly detect protein complexes from the TAP data without constructing the PPI networks.et al. [Rungsarityotin et al. applied et al. [Z, where each entry ijz indicates that the thi protein is in the complex j . The prior distribution of Z is learned from an infinite latent feature model. By considering the pairwise similarity between proteins obtained by a graph diffusion kernel [Z is further inferred to indicate the protein complex memberships by the Gibbs sampling.Chu et al. used a Bn kernel , the posTwo adjacent interactions (those with a common protein) may be mutually exclusive ,74 due tet al. [Jung et al. recentlyet al. and LCMAet al. to generet al. [Jin et al. exploiteWith the increasing availability of PPI data for most species , it has become feasible to use cross-species analysis to derive insights into the evolution of the PPI networks for complex detection.et al. recently proposed a series of methods for comparative analysis in two or more species. They used these methods for conserved pathway detection [Sharan etection , proteinetection and consetection -80. Basiu, v ) in the orthology graph is weighted by the sequence similarity between the protein pair u and v . An edge , ) is associated with a pair of weights , w ), where w is the weight of the interaction . Two models, the protein-complex model and null model, are proposed to learn the weights of interactions and detect protein complexes in each species. In [In , each nocies. In , the rescies. In . A subgrcies. In .QNet was developed for queries in PPI networks. The similarity between two graphs is defined based on the node and edge similarity and the penalty scores for node deletion and insertion. QNet then performs tree queries and bounded-treewidth graph queries by the color coding algorithm [In , a tool lgorithm . Conservet al. [Another group of researchers, Dutkowski et al. also proet al. [In the genome-wide screen for protein complexes using affnity purification and mass spectrometry reportedet al. , they obet al. .et al. [Zhang et al. proposedet al. and someet al. [Recently, Leung et al. proposedet al. [To provide insights into the organization of protein complexes, Wu et al. presentsThe ability to detect overlapping cores is essential to understand how different cores are organized into the higher-level structures in PPI networks and how these cores communicate with each other to perform cellular functions. It also facilitates better detection of protein complexes from PPI networks, which will be shown in the evaluation results in the next section.Before we present the results of our comparative experiments, let us first introduce the various evaluation metrics that have been used to evaluate their computational methods for complex detection. We will then present the experimental results of comparing different state-of-the-art techniques using these evaluation metrics.Overall, there are three types of evaluation metrics used to evaluate the quality of the predicted complexes and compute the overall precision of the prediction methods.NA between a predicted complex p = and a real complex b = in the benchmark complex set, as defined in equation (4) below, can be used to determine whether they match with each other. If NA \u2265 \u03c9 , they are considered to be matching . Let P and B denote the sets of complexes predicted by a computational method and real ones in the benchmark, respectively. 
Let cpN be the number of predicted complexes which match at least one real complex and cbN be the number of real complexes that match at least one predicted complex. Precision and Recall are then defined as follows: [Precision, recall and F-measure are commonly-used evaluation metrics in information retrieval and machine learning. For evaluating protein complex prediction, we need to define how well a predicted complex which consists of a set of protein members, matches an actual complex, which is another set of protein members. The neighborhood affnity score follows: ,66,87 :, positive predictive value (PPV ) and accuracy (Acc ) have also been proposed to evaluate the accuracy of the prediction methods [n benchmark complexes and m predicted complexes, let ijT denote the number of proteins in common between thi benchmark complex and thj predicted complex. Sn and PPV are then defined as follows:Recently, sensitivity functional annotations. The statistical significance of the occurrence of a protein cluster (predicted protein complex) with respect to a given functional annotation can be computed by the following hypergeometric distribution in equation (9) ,89:(9)C contains k proteins in the functional group F and the whole PPI network contains |V | proteins. The functional homogeneity of a predicted complex is the smallest p-value over all the possible functional groups. A predicted complex with a low functional homogeneity indicates it is enriched by proteins from the same function group and it is thus likely to be true protein complex. By setting a common threshold which specifies the acceptable level of statistical significance, the numbers of predicted complexes with functional homogeneity under this threshold for the various methods can then be used for evaluating their respective overall performance.where a predicted complex Sn ), positive predictive value (PPV ) and Accuracy (Acc ). For sensitivity (Sn ), if a method predicts a giant complex which covers many proteins in the known real complex set, then this method will get a very high Sn score. As for PPV value (PPV ), it does not evaluate overlapping clusters properly. Here is a case in point: if the known gold standard MIPS complex set (with proteins that belong to multiple complexes) [PPV value is 0.772 instead of 1 (indicating an imperfect match) while the Precision and Recall are both correctly 1. As such, the Accuracy (Acc ) score, as the geometric average of Sn and PPV , will also not make good sense. In addition, all the evaluation metrics described above assumed that a complete set of true protein complexes is available, where in reality we are far from it. If a method predicts an unknown but real protein complex (which is not similar with any of the known complexes), all of these evaluation metrics will regard it as a false positive. Furthermore, for P-values, since its calculation relies on the availability of the proteins' functional information, its applicability would be limited in the less studied genomes. As such, so far it has mainly been used in the model organism yeast for which rich molecular functional information is available.It is important to realize that the evaluation metrics described above can only provide us some sense of how well the current graph mining techniques can be used to detect the protein complexes from protein interaction data. 
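As a minimal sketch of the metrics described above — the neighborhood affinity score of Equation (4), the precision/recall/F-measure built on it, and the hypergeometric enrichment p-value of Equation (9) — the code below represents complexes as sets of protein identifiers. The ω = 0.20 matching threshold follows the text; the example complexes and counts are made up for illustration.

```python
# Illustrative sketch of the evaluation metrics discussed above; the example
# predicted and benchmark complexes are made up.
from scipy.stats import hypergeom

def neighborhood_affinity(p, b):
    """NA(p, b) = |p ∩ b|^2 / (|p| * |b|)  (cf. Eq. (4))."""
    inter = len(p & b)
    return inter * inter / (len(p) * len(b))

def precision_recall_f1(predicted, benchmark, omega=0.20):
    """Match predicted vs. benchmark complexes with NA >= omega."""
    n_cp = sum(1 for p in predicted
               if any(neighborhood_affinity(p, b) >= omega for b in benchmark))
    n_cb = sum(1 for b in benchmark
               if any(neighborhood_affinity(p, b) >= omega for p in predicted))
    precision = n_cp / len(predicted) if predicted else 0.0
    recall = n_cb / len(benchmark) if benchmark else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def enrichment_p_value(cluster_size, hits, annotated, total_proteins):
    """P(at least `hits` cluster members fall in one functional group) (cf. Eq. (9))."""
    return hypergeom.sf(hits - 1, total_proteins, annotated, cluster_size)

predicted = [{"A", "B", "C"}, {"D", "E"}]        # made-up predicted complexes
benchmark = [{"A", "B", "C", "F"}, {"G", "H"}]   # made-up reference complexes
print(precision_recall_f1(predicted, benchmark))
print(enrichment_p_value(cluster_size=10, hits=6, annotated=50, total_proteins=4930))
```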
These metrics are by no means absolute measures \u2014 they all have their own limitations, especially for sensitivity (mplexes) is takenRelatively speaking, the Precision, Recall, F-measure and P-values are thus more acceptable for evaluating the performance of current techniques. Still, we need to treat the current evaluation metrics with caution, as more research is needed to come up with a robust evaluation metric for the protein complex prediction task.For this review, we have performed extensive experiments to compare the existing techniques for which we are able to obtain the software implementations \u2014 either source codes or binary executable systems. Those existing techniques that do not provide available software are not included in the comparison exercise. Fortunately, we have a good representative collection of implemented algorithms for comparison: MCODE , RNSC 5, MCL 3131, DPClu\u03c9 is set as 0.20 to evaluate if a predicted complex matches with a gold standard protein complex (see equations (4) and (5)).In order to evaluate the predicted complexes, the set of real complexes from was seleWe have compared these techniques over two publicly available benchmark yeast PPI datasets, namely DIP data and KrogcpN) and the number of real complexes that match at least one predicted complex cbN. Taking MCODE on DIP data as an example, it has predicted 50 complexes, of which 44 match 21 real complexes. These 50 predicted complexes cover 844 proteins out of 4930 proteins in DIP. As shown in these two tables, MCL and RNSC assigned every protein (4930) into its predicted complexes as long as they are present in PPI networks while all the other methods only assigned those highly interactive proteins (or the proteins that occurred in the dense subgraphs) into the predicted complexes. In fact, both MCL and RNSC basically partitioned the PPI network simultaneously into non-overlapping clusters while the remaining approaches are more sensible by generating clusters in a one-by-one manner and allowing overlaps in the clusters/complexes. We also noticed that for DECAFF algorithm the number of predicted complexes that matches at least one real complex (cpN) is significantly higher than the other methods \u2014 this is mainly because it is designed to search many dense and possibly overlapping complexes from the PPI networks.Tables Figures http://db.yeastgenome.org/cgi-bin/GO/goTermFinder.pl). The complexes with only one protein are discarded because calculating P-values for those complexes makes no sense according to the equation (9). We considered a predicted complex with a corrected P-value \u2264 0.01 to be statistically significant. The results showed that MCODE was able to obtain the highest proportion of significant complexes.Figure Unfortunately, this was an artefact of its predicting very few complexes as compared to all the other methods. Ignoring MCODE, then COACH and DECAFF have both achieved decent proportions of their predicted complexes as significant. As for DPClus, MCL and RNSC, because they predicted many protein complexes with extremely small size which resulted in large P-values since they could occur by chance. For CORE, it generated many protein-complex cores with only one protein. Given such a core with one protein, CORE can only form a protein complex by including all the interacting partners of the protein as attachment. 
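The functional-homogeneity criterion used in these comparisons (equation (9), a corrected P-value threshold of 0.01, and the exclusion of single-protein complexes) can be sketched as follows. The annotation sets and network size below are hypothetical, equation (9) is assumed to be the standard hypergeometric upper-tail probability, and multiple-testing correction is only noted, not implemented.

```python
# Sketch of the functional-homogeneity (P-value) calculation for a predicted
# complex, assuming equation (9) is the usual hypergeometric tail probability.
from scipy.stats import hypergeom

def complex_pvalue(complex_proteins, functional_group, network_size):
    """P(X >= k): probability of seeing at least k annotated proteins in a
    complex of size |C| drawn from |V| proteins, |F| of which carry the annotation."""
    C, F = set(complex_proteins), set(functional_group)
    k = len(C & F)
    return hypergeom.sf(k - 1, network_size, len(F), len(C))   # upper tail P(X >= k)

def functional_homogeneity(complex_proteins, annotation_sets, network_size):
    """Smallest P-value over all candidate functional groups. A multiple-testing
    correction (e.g. Bonferroni) would be applied before the 0.01 cutoff in practice."""
    if len(complex_proteins) < 2:     # single-protein complexes are discarded
        return None
    return min(complex_pvalue(complex_proteins, F, network_size)
               for F in annotation_sets.values())

# Toy usage with invented annotations over a DIP-sized network of 4930 proteins
annotations = {"GO:0006412": {"A", "B", "C", "D"}, "GO:0008150": {"E", "F"}}
print(functional_homogeneity({"A", "B", "C"}, annotations, network_size=4930))
```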
These protein complexes have low statistical significance, leading to the low performance of CORE. Identifying protein complexes is important for biological knowledge discovery since many important biological processes in the cell are carried out through the formation of protein complexes. However, there is currently a wide gap between data on protein complexes and (pairwise) protein-protein interactions. High throughput technologies for detecting pairwise protein-protein interactions en masse have already become routine in the laboratories for generating large datasets of protein interaction data , while the high-throughput technologies for detecting protein complexes remain relatively immature. Hence, computational approaches for detecting protein complexes are needed to help fill up the relatively empty map of the protein "complexome". In this paper, we have reviewed current computational approaches that have been proposed to exploit the abundant protein interaction data to bridge the data gap for protein complexes. Protein interaction graph mining algorithms that identify graphical subcomponents in the protein-protein interaction networks can be used for predicting protein complexes. We have surveyed the state-of-the-art algorithms by describing the traditional graph clustering methods as well as the recently emerging techniques for computational detection of protein complexes from PPI and other data sources (Table ). On the other hand, our results also show that further research is needed to improve these methods. In the following discussions, we describe three key challenges for further improvement. The computational methods are highly dependent on the quality of the underlying interaction data. Unfortunately, despite the abundance of PPI data, their quality still leaves much to be desired. For example, the experimental conditions in which the PPI detection methods are carried out may cause a bias towards detecting interactions that do not occur under physiological conditions, resulting in false positive detection rates that could be alarmingly high . PPI networks are very large graphs with thousands of vertices and tens of thousands of edges, even for a simple organism such as yeast. For the more complicated species such as the human being, the scale and complexity of the PPI networks are clearly overwhelming. Graph mining on the PPI networks is certain to test the limits of any computational method. The fact that many graph-based problems, such as subgraph isomorphisms ,100 and Through integrating various independently obtained biological evidence, we can assess/weight the protein interactions by using appropriate confidence measures. For example, we can employ metrics from biological evidence such as reproducibility of the interactions from multiple experimental methods, support from other non-interaction data such as co-expression, co-localization and shared functions, as well as the conservation of the protein interactions across other genomes, etc., to address the limitations in the current quality of PPI data. Computational methods, such as Bayesian network models and kernFrom this review, it is clear that researchers have been tireless in devising new computational approaches for detecting protein complexes. It is indeed heartening that our evaluation results have shown that the proposed methods have generally improved in performance over the years.
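As a toy illustration of the evidence-integration idea raised above, combining independent lines of evidence into a confidence weight for each candidate interaction can be sketched as a naive-Bayes-style product of likelihood ratios. The evidence types and numerical values below are invented; the reviewed approaches use considerably richer Bayesian network or kernel models.

```python
# Minimal illustration of weighting a PPI edge by combining independent evidence
# sources expressed as likelihood ratios P(evidence | true) / P(evidence | false).
# All numbers are hypothetical.

def edge_confidence(prior_odds, likelihood_ratios):
    """Posterior probability that an interaction is real, given independent evidence."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Example edge: reproduced by a second experimental method, co-expressed, co-localized
lrs = [4.0,   # reproducibility across experimental methods
       2.5,   # co-expression support
       1.8]   # shared subcellular localization
print(edge_confidence(prior_odds=0.05, likelihood_ratios=lrs))   # ~0.47
```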
In time, we will be able to fill up the currently rather empty map of the complexome with combined efforts from biologists as well as computational scientists computational scientists.The authors declare that they do not have any competing interests.XL and MW drafted the manuscript together. MW was responsible for performing experiments to compare the existing techniques. CKK and SKN participated in discussion and conceptualization as well as revising the draft. All authors have read and approved the manuscript."} {"text": "Sir,et al. reported two male cousins from a consanguineous family.[Megarbane s family. In theirs family. is the os family..Waardenburg syndrome (WS) is a rare autosomal-dominant condition characterized by sensory/neural hearing loss, pigmentary abnormalities of the skin, hair, and eyes, and craniofacial anomalies. Our patiet al.[et al.[CHARGE syndrome was first described in 1979 by Hall et al. in 17 chl.[et al. proposedl.[et al.6 Our patAnother rare condition that could be considered in the differential diagnosis is Al Frayh-Anophthalmia, microcephaly, hypogonadism, MR syndrome. Al Frayh and Haque describeLenz syndrome is a rare X-linked recessive condition first reported by Lenz. All affeAlthough our patient has not skeletal malformations and inheritance of this case is uncertain, other many features thinking Megarbane syndrome. We described this case as a second Megarbane syndrome paper to the literature."} {"text": "The Golden Spike data set has been used to validate a number of methods for summarizing Affymetrix data sets, sometimes with seemingly contradictory results. Much less use has been made of this data set to evaluate differential expression methods. It has been suggested that this data set should not be used for method comparison due to a number of inherent flaws.We have used this data set in a comparison of methods which is far more extensive than any previous study. We outline six stages in the analysis pipeline where decisions need to be made, and show how the results of these decisions can lead to the apparently contradictory results previously found. We also show that, while flawed, this data set is still a useful tool for method comparison, particularly for identifying combinations of summarization and differential expression methods that are unlikely to perform well on real data sets. We describe a new benchmark, AffyDEComp, that can be used for such a comparison.We conclude with recommendations for preferred Affymetrix analysis tools, and for the development of future spike-in data sets. The issue of method validation is of great importance to the microarray community; arguably more important than the development of new methods . The micPerhaps the most well-known and widely used benchmark for Affymetrix analysis methods is Affycomp . This is1. It uses data sets which only have a small number of DE spike-in probesets.2. It only uses fold change (FC) as a metric for DE detection, and hence cannot be used to compare other competing DE methods.More recently, the MicroArray Quality Control (MAQC) study has deveet al. et al. [ [et al. has sugget al. [Irizarry et al. detail t1. Spike-in concentrations are unrealistically high.2. DE spike-ins are all one-way (up-regulated).3. 
Nominal concentrations and FC sizes are confounded.While we agree that these are indeed undesirable characteristics, and would recommend the creation of new spike-in data sets that do not have these characteristics, we do not believe that these completely invalidate the use of the Golden Spike data set as a useful comparison tool.et al. [et al. [Perhaps more serious is the artifact identified by Irizarry et al. . They sh [et al. have recet al. [et al. [et al. [et al. [et al. [et al. [The Golden Spike data set has been used to validate many different methods for summarizing Affymetrix data sets. Choe et al. originalet al. , GCRMA [et al. and MBEIet al. . Liu et [et al. used the [et al. can outp [et al. used the [et al. . Chen et [et al. used the [et al. , FARMS a [et al. . All of et al. [et al. [The reason for the differing results arise from the different choices made at various stages of the analysis pipeline. In particular, different DE methods have been used in the papers cited above. Only Choe et al. and Liu [et al. have com [et al. and Cybe [et al. ; and the [et al. . In addiet al. [1. Summary statistic used . Note that Choe et al. broke thet al. [2. Post-summarization normalization method. Choe et al. comparedet al. [3. Differential expression (DE) method. Choe et al. comparedet al. and SAM et al. .et al. [4. Direction of differential expression. Choe et al. [5. Choice of true positives. Choe et al. used allet al. [6. Choice of true negatives. Choe et al. used allet al. [et al. [et al. [et al. paper [et al. paper [Table et al. , Hochrei [et al. and Chen [et al. papers. l. paper . There al. paper that we l. paper use diffThe most commonly used metric for assessing a DE detection method's performance is the Area Under the standard ROC Curve (AUC). This is typically calculated for the full ROC chart , but can also be calculated for a small portion of the chart (e.g. FPRs between 0 and 0.05). Other metrics that might be used are the number or proportion of true positives for a fixed number or proportion of false positives, or conversely the number or proportion of false positives for a fixed number or proportion of true positives.In this study we have analyzed all combinations of the various options shown in the last row of Table et al. [et al. [et al. [We can see from Table et al. . Figure [et al. . This ap [et al. . There wThe choice of whether 1-sided or 2-sided tests should be used for comparison of methods is debatable. A 1-sided test for down-regulation is clearly not a sensible choice given that all the known DE genes are up-regulated. We would expect a 1-sided test of up-regulation to give the strongest results, given that all the unequal spike-ins are up-regulated. However, in most real microarray data sets, we are likely to be interested in genes which show the highest likelihood of being DE, regardless of the direction of change. As such, we will continue to use both a 2-sided test, and a 1-sided test of up-regulation in the remainder of the paper. In our comprehensive analysis, however, we also include results for 1-sided tests of down-regulation for completeness.Figure et al. [et al. [et al. [Irizarry et al. showed t [et al. ), these [et al. , leads uet al. [et al. [Thus far, we have not considered the effect of post-summarization normalization, which was shown by Choe et al. to have [et al. . Furtheret al. should not have included the empty null probesets\". As such, for the remainder of this paper will we not use the empty probesets in loess normalization. 
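A subset-based post-summarization loess normalization of the kind discussed here can be sketched as an MA-style correction fitted only on a chosen set of probesets (for example the equal-valued spike-ins) and then applied to every probeset. The sketch below is an illustration of that idea, not the R code used in the papers being compared, and all numerical values are invented.

```python
# Rough sketch of a post-summarization loess normalization where the correction
# curve is fitted only on a designated probeset subset and then applied globally.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def loess_normalize(sample, reference, fit_idx, frac=0.5):
    """sample, reference: log2 expression values per probeset (same ordering).
    fit_idx: indices of the probesets used to fit the loess curve."""
    a = (sample + reference) / 2.0        # average log intensity (A)
    m = sample - reference                # log ratio (M) whose trend is removed
    fit = lowess(m[fit_idx], a[fit_idx], frac=frac, return_sorted=True)
    bias = np.interp(a, fit[:, 0], fit[:, 1])   # fitted bias at every probeset's A value
    return sample - bias

# Hypothetical data: 30 probesets, the first 15 treated as equal-valued spike-ins
rng = np.random.default_rng(1)
reference = rng.uniform(4, 12, size=30)
sample = reference + 0.3 + rng.normal(0, 0.05, size=30)   # constant bias to be removed
print(loess_normalize(sample, reference, fit_idx=np.arange(15)).round(2))
```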
In our comprehensive analysis we also include, for completeness, results when using all of the following post-summarization normalization strategies: no post-summarization normalization, a loess normalization based on all spike-in probesets, a loess normalization based on all the unchanging probesets and a loess normalization based on the equal-valued spike-ins.We agree with Gaile and Miecznikowski that \"thWe turn now to the issue of DE detection methods. Figure The end goal of an analysis is often to identify a small number of genes for further analysis. As such, we might be interested not in how well a method performs on the whole of a data set, but specifically in how well it performs in identifying those genes determined to be most likely to be DE. As such we are particularly interested in the ROC chart at the lowest values of FPR. Figure Figure So far we have used all of the genes that are spiked-in at higher concentrations in the S samples relative to the C samples as our true positives. This is perhaps the best and fairest way to determine overall performance of a DE detection method. However, we might also be interested in whether certain methods perform particularly well in \"easier\" or \"more difficult\" cases. Indeed, many analysts are only interested in genes which are determined not only to have a probability of being DE that is significant, but also have a FC which is greater than some pre-determined threshold. In order to determine which methods perform more strongly in \"easy\" or \"difficult\" cases, we can restrict our true positives to just those genes than are known to be DE by just a small FC, or to those that are very highly DE.Figure We have created ROC charts for each combination of analysis choices from the final row of Table 1. AUC where equal-valued spike-ins are used as true negatives, spike-ins with FC > 1 are used as true positives, a post-summarization loess normalization based on the equal-valued spike-ins is used, and a 1-sided test of up-regulation is the DE metric. This gives the values shown in Table 2. as 1. but using a 2-sided test of DE. This gives the values shown in Table 3. as 1. but with low FC spike-ins used as true positives. This gives the values shown in Figure 4. as 1. but with medium FC spike-ins used as true positives. This gives the values shown in Figure 5. as 1. but with high FC spike-ins used as true positives. This gives the values shown in Figure 6. as 1. but with all unchanging probesets used as true negatives.7. as 1. but with all unchanging probesets used as true negatives, and a post-summarization loess normalization based on the unchanging probesets.8. as 1. but with a post-summarization loess normalization based on all spike-in probesets.9. as 1. but with a no post-summarization normalization.10. as 1. but giving the AUC for FPRs up to 0.01.11. the proportion of true positives without any false positives (i.e. the TPR for a FPR of 0), using the same conditions as 1.12. the TPR for a FPR of 0.5, using the same conditions as 1.13. the FPR for a TPR of 0.5, using the same conditions as 1.We are happy to include other methods if they are made available through Bioconductor packages. We also intend to extend AffyDEComp to include future spike-in data sets as they become available. In this way we expect this web resource to become a valuable tool in comparing the performance of both summarization and DE detection methods.et al. 
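The ROC-based quantities in the list above (the full AUC, the AUC for false-positive rates up to a cutoff, and the true-positive rate at zero false positives) can be computed from a ranked list of DE scores once the true positives and true negatives are fixed. The sketch below uses hypothetical scores and labels and is not the R code behind AffyDEComp; higher scores are taken to mean stronger evidence of differential expression.

```python
# Sketch of the ROC summaries used in the benchmark: full AUC, partial AUC, and
# TPR at a fixed FPR, from labelled DE scores.
import numpy as np

def roc_curve(scores, labels):
    """labels: 1 for spiked-in (true positive) probesets, 0 for true negatives."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1 - labels) / (len(labels) - labels.sum())
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def auc(fpr, tpr, max_fpr=1.0):
    """Trapezoidal area under the ROC curve, restricted to FPR <= max_fpr."""
    keep = fpr <= max_fpr
    f, t = fpr[keep], tpr[keep]
    return float(np.sum(np.diff(f) * (t[1:] + t[:-1]) / 2.0))

# Toy example: 4 differentially expressed probesets among 10
scores = [5.1, 4.0, 3.2, 2.9, 2.5, 1.8, 1.2, 0.9, 0.5, 0.1]
labels = [1,   1,   0,   1,   1,   0,   0,   0,   0,   0]
fpr, tpr = roc_curve(scores, labels)
print(auc(fpr, tpr))                # full AUC
print(auc(fpr, tpr, max_fpr=0.05))  # partial AUC for FPRs up to 0.05
print(tpr[fpr == 0].max())          # proportion of true positives at zero false positives
```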
[One of the main problems with comparing different analyses of the same data sets is knowing exactly what code has been used to create results. As an example, the loess normalization used in a number of the papers shown in Table et al. have mad1. provide full details of all parameter choices used in the papers Methods section, or2. make the code used to create the results available, ideally as supplementary information to ensure a permanent record.We recommend that journals should not accept method comparison papers unless either of these is done. This paper was prepared as a \"Sweave\" document . The souet al. [We have performed the most comprehensive analysis to date of the Golden Spike data set. In doing so we have identified six stages in the analysis pipeline where choices need to be made. We have made firm recommendations about the choices that should be made for just one of these stages if using the Golden Spike data for comparison of summarization and DE expression detection methods using ROC curves: we recommend that only the probesets that have been spiked-in should be used as the true negatives for the ROC curves. By doing this we overcome the problems due to the artifact identified by Irizarry et al. . We woul1. The use of a post-summarization loess normalization, with the equal spike-in probesets used as the subset to normalize with. This is also recommended by Gaile and Miecznikowski .2. The use of a 1-sided test for up-regulation of genes between the C and S conditions. This mimics the actual situation because all the non-equal spike-ins are up-regulated.3. The use of all up-regulated probesets as the true positives for the ROC chart.Using the above recommendations, we created ROC charts for all combinations of summarization and DE methods Figure . This shIt should be noted that the design of this experiment could favor certain methods. We have seen that the intensities of the spike-in probesets are particularly high. Estimates of expression levels are known to be more accurate for high intensity probesets. This could favor the FC method of determining DE.Furthermore, the replicates in the Golden Spike study are technical rather than biological, and hence the variability between arrays might be expected to be lower in this data set than in a typical data set. Again, this might favor the FC DE method.et al. [We agree with Irizarry et al. that theWe encourage the community to develop further spike-in data sets with large numbers of DE probesets. In particular, we encourage the generation of data sets where:1. Spike-in concentrations are realistic2. DE spike-ins are a mixture of up- and down-regulated3. Nominal concentrations and FC sizes are not confounded4. The number of arrays used is large enough to be representative of some of the larger studies being performed todayet al. [We believe that only by creating such data sets will we be able to ascertain whether the artifact noted by Irizarry et al. is a pecet al. [ affy package (version 1.16.0). GCRMA expression measures were created using the Bioconductor gcrma package (version 2.10.0). PLIER expression measures were created using the Bioconductor plier package (version 1.8.0). multi-mgMOS expression measures were created using the Bioconductor puma package (version 1.4.1). FARMS expression measures were created using the FARMS package (version 1.1.1) from the author's website [affy package and code from the author's website [goldenspike package (version 0.4) [puma package (version 1.4.1).The raw data from the Choe et al. 
study waet al. . All anaet al. affy pac website . DFW exp website . Cyber-Tion 0.4) . All othThe code used to create all results in this document is included as Additional file DE \u2013 differentially expressed or differential expression, as appropriate. FC \u2013 fold change. MAQC -MicroArray Quality Control. ROC \u2013 receiver-operator characteristic. FPR \u2013 false-positive rate. TPR -true-positive rate. FDR \u2013 false-discovery rate. AUC \u2013 area under curve (in this paper this refers to the area under the ROC curve).RDP designed the study, performed all analysis, developed the AffyDEComp website, and wrote the manuscript.Source code used to create this paper and AffyDEComp. This is a zip file containing R and Sweave code. Sweave code is a text document which contains both LaTeX and R code, and as such can be used to recreate exactly all the results in this paper using open source tools. Also included is R code to recreate all the charts available through AffyDEComp. See the README file for further details.Click here for file"} {"text": "Hypospadias is a highly prevalent congenital anomaly. The impact of the defect and operative interventions on sexual and reproductive function has been addressed by few publications. It is essential to know the possible outcomes of intervention for appropriate counseling, operative planning, and follow-up. English articles indexed in Pubmed dealing with the long-term sexual and reproductive outcome following hypospadias repair from 1965 to 2007 were reviewed. To our knowledge, there was no prospective trial comparing the impact of various techniques on sexual outcome. There is considerable discordance in literature regarding the effects on sexual function. A few publications report patient and partner dissatisfaction with the appearance of genitalia. Sexual dissatisfaction is often attributed to penile size. Ejaculatory disturbances range between 6 and 37% of operated individuals. There is no convincing evidence for impaired fertility. The long-term follow-up is essential to identify problems and to address them appropriately. Literature documenting the outcome of specific operative procedures and analysis based on severity of hypospadias will be informative. The long-term follow-up of the newer techniques which are more commonly used are awaited. In hypospadias, the inherent difficulties to reconstruct the urethra, straighten the penis, and to restore the appearance of the penis are evident from the number of techniques and modifications described in the literature. However, the impact of the deformity extends beyond the realms of a structural defect, by virtue of the diverse functions of the penis. To counsel parents and patients appropriately, it is essential to know the effect on sexual function and reproduction. Literature on the long-term outcome, impact on sexual function and reproduction continues to be sparse. We reviewed the published literature on the sexual and reproductory outcome of hypospadias.et al.[P = 0.809). The single reason for dissatisfaction in hypospadias group was smaller penile size. Mureau et al.[et al.[et al.[Publications on the psychological, social, and sexual development of patients operated on for hypospadias are still rare and the results are somewhat discordant. 
The possible explanations for these discrepancies are mainly methodological, with too small series, low rates of response to questionnaires, study populations of different ages and above all the absence of a control group, which prevents any comparison of the results with those of a reference population. Another et al. observedau et al. interviel.[et al. used a gl.[et al. observedet al.[et al.[et al.[et al.[Berg et al. in the el.[et al. comparedl.[et al. assessedl.[et al. comparedet al.[et al.[Sexual sensation has not been well documented in most articles. Bubanj et al. noted thl.[et al. noted thet al.[et al.[P < 0.05) and those with complications had been more often ridiculed than those in the distal group and those without complications . In a comparative study, Mureau et al.[The social and sexual life of adults operated for hypospadias during childhood has been studied by a few authors. Aho et al. comparedl.[et al. observedau et al. comparedet al. observed that though the frequency of intercourse during 4 weeks was significantly lesser for those who were operated for hypospadias, there were no significant difference between patients with hypospadias and controls regarding inhibition in seeking sexual contacts or patterns of sexual relationships. Those with distal hypospadias were more satisfied with their sexual life.[et al. noted that the commonest sexual complaints included short penis, increased curvature, painful erection, and no erection. The erectile problems were more in those who had proximal hypospadias.[et al.[et al.[et al. They noted that the long-term sexual function and satisfaction were excellent, in spite of them having undergone multiple procedures.[et al.[et al. noted that 10 out of 25 who underwent re-operation for hypospadias had recurrence of chordee.[et al. followed up patients following hypospadias repair after puberty. They found that higher number of study patients had ventral curvature during erection (40% vs. 18%) compared to controls.[The erectile problems in hypospadias may be attributed to surgically correctable and noncorrectable causes. More commonly encountered correctable causes include persistent chordee, torsion, inadequate cosmetic outcome, etc. Commonest surgically uncorrectable cause is the size of the penis. Achieving a straight penis is one of the objectives of hypospadias correction. With a constant move toward achieving a normal-looking penis, the results of contemporary repairs are likely to be different. Sommerlad reviewedual life. Of the 7ospadias. They alss.[et al. studied l.[et al. Zaontz iocedures. The longs.[et al. Six of t chordee. Bubanj econtrols. This undcontrols. With a det al.[et al.[et al.[et al.[Inability to achieve satisfactory ejaculation is documented in almost all publications. Reported incidence ranges from 6 to 37%. Problems reported include weak or dribbling ejaculation, having to milk out ejaculate after orgasm, quantity of semen passing after intercourse, anejaculation with or without orgasm, etc. Liu et aet al. observedl.[et al. assessedl.[et al. documentl.[et al. studied l.[et al. whether et al. found that men who had hypospadias during childhood were less likely to live with a partner and that they had fewer children (0.8 vs. 1.1).[Literature is scant on the fertility of men who had hypospadias. Aho vs. 1.1). The diffvs. 1.1). Of the 3vs. 1.1).5Genital perception is mostly unaffected, especially in those with distal hypospadias operated in childhood. 
An adverse effect on sexual life has been noted, but the results are discordant. Disturbance in sexual performance seems to be attributed to small penile size. Ejaculatory disturbance has been noted in almost all series. There is no convincing evidence for impaired fertility. The operative procedures based on which these studies were conducted have mostly been replaced by modern techniques which are more anatomical. Furthermore, the operations are being performed at an earlier age. When hypospadias reconstruction is performed in early childhood, it essential to keep in mind the possible long-term sexual and reproductive implications and to choose options that are least likely to impair sexual and reproductive functions. The long-term follow-up of newer operative techniques are awaited. Evaluation of psychosexual, erectile, ejaculatory and reproductive function of specific techniques, and for varying degrees of hypospadias will give a better idea of the outcome of various procedures. Using a validated objective scoring system will help compare the results of various techniques."} {"text": "Intracranial vascular anomalies involving the middle cerebral artery (MCA) are relatively rare, as such knowledge will be helpful for planning the optimal surgical procedures.We herein present the first case of a ruptured internal carotid artery aneurysm arising at the origin of the hypoplastic duplicated MCA associated with accessory MCA and main MCA aplasia, which was revealed by angiograms and intraoperative findings.In practice, this case highlights the urgent need to preoperatively recognize such vascular anomalies as well as understand the collateral blood supply in cerebral ischemia associated with these MCA anomalies. Teal et al. establiset al.9 AlthougA 66-year-old female suffered a sudden onset of headache and a loss of consciousness. On admission, a computed tomography (CT) scan revealed a diffuse subarachnoid hemorrhage (SAH) with laterality on the right Sylvian fissure . A threeet al.[Intracranial vascular anomalies involving the MCA are relatively rare. Teal et al. establiset al.9 The coeet al.[6Uchino et al. diagnoseet al. However,et al.69 The caet al.[The embryologic explanation for anomalies and variations of the MCA remains unclear. The MCA develops after the ACA, and the ACA is considered a continuation of the primitive ICA. Thus, the MCA can be regarded as a branch of the ACA. Embryoloet al. suggesteet al.[The association between the dup-MCA, or the acc-MCA, and cerebral aneurysms has been well documented.8 Howeveret al. reported"} {"text": "We aimed to provide a summary of the existing published knowledge on the association between adverse birth outcomes and the development of wheezing during the first two years of life. We carried out a systematic review of epidemiological studies within the MEDLINE database. Epidemiological studies on human subjects, published in English, were included in the review. A comprehensive literature search yielded 72 studies for further consideration. Following the application of the eligibility criteria we identified nine studies. A positive association and an excess risk of wheezing during the first two years of life were revealed for adverse birth outcomes. Prem2.A systematic review of the existing literature on adverse birth outcomes related and wheezing was carried out. 
We posed the following review question: \u201cGiven the existing epidemiological evidence, is there a link between adverse birth outcomes and the occurrence of wheezing during the first two years of life?\u201d. We drew up a review protocol in advance following standards outlined in the MOOSE Guidelines for Meta-Analyses and Systematic Reviews of Observational Studies . We carrSearch terms used were chosen from the USNLM Institutes of Health list of Medical Subject Headings (MeSH) for 2007. These were: \u201cInfant, Low Birth Weight\u201d OR \u201cInfant, Very Low Birth Weight\u201d OR \u201cPremature Birth\u201d OR \u201cFetal Growth Retardation\u201d OR \u201cInfant, Extremely Low Birth Weight\u201d AND \u201cRespiratory Sounds\u201d, OR \u201cSigns and Symptoms, Respiratory\u201d OR \u201cWheezing\u201d. Although not officially MeSH terms, \u201cWheezing\u201d and \u201cSmall for gestational age\u201d was also added as key terms so as to broaden the scope of the search. Retrieved studies were checked against a list of eligibility criteria, while the references of each retrieved study were also checked by hand for additional studies that met the eligibility criteria.a priori eligibility criteria to restrict the studies included. Studies were only included if they referred to humans, were published in English after 1990, were epidemiological studies (of any study design) and they examined the presence of wheezing up to two years old. Studies not meeting these criteria were excluded from the review. Data were extracted systematically from each included study by two researchers separately using a standardized data extraction form. The following data were extracted from each study: study main characteristics, study population, study topic, and measures of effect and confidence intervals for each outcome.We defined 3.The main characteristics of the studies included in the analysis are given in et al. [et al. [et al. [et al. [et al. [et al. used ges [et al. gestatio [et al. , the <34 [et al. and the [et al. , respect [et al. ,26,32. H [et al. was not [et al. while teAll studies examined wheezing as an outcome although the time period of wheezing presence was different. Four studies examined wheezing at first year of life ,26,30, fet al. [et al. [et al. [et al. [et al. [et al. [Four studies measured the outcome as an odds ratio ,29,31,32 [et al. underlin [et al. reportedet al. [et al. [et al. [The prevalence of wheezing was also estimated by five studies ,26,27,30et al. in the t4.As indicated through this systematic review, we have gathered the existing epidemiological evidence in order to examine the possible association between adverse birth outcomes and the development of wheezing during early childhood. Furthermore we identified a positive association between adverse birth outcomes such as LBW, VLBW, PD, VPD and the development of wheezing during early childhood. No studies examining the association of term low birth weight and small for gestational age infants with wheezing were revealed and thus these outcomes were not further investigated into.Considerable variation in the prevalence of wheezing has been observed in previous studies. A prevalence increase has been noticed between countries and over time specifically from the 1970s up to the early 1990s . These dThere are many potential causes of wheezing including genetic/familial or environmental factors, and viral respiratory infections . 
GeneralThe majority of children who develop wheeze in early childhood are free of symptoms by adolescence or early adulthood ,37. In aEarly life events are important since the origin of airway abnormalities occurs early in infancy. Prospective birth cohort studies clarify the incidence of an illness by evaluating the risk factors and the possible confounders and/or modifiers related to the disease. This kind of studies may allow us to shed light on the primary factors that initiate wheezing and its correlation to long term implications such as chronic obstructive lung disease.Conclusively, this study area is of high importance due to the fact that long term implications are unknown. It is therefore desirable to determine the correlation between adverse birth outcomes and wheezing during early childhood and identify whether there are preventable or treatable risk factors. For the purpose of this study, review was restricted to wheezing as a health outcome while other important causes of wheezing during early childhood other than adverse birth outcomes were not examined. As mentioned above, well designed epidemiological studies which will evaluate the relevant confounders and possible exposures and risk factors during pregnancy are needed. This estimation of summary should be considered by epidemiologists, health care specialists and research community as the most interesting areas for further research work."} {"text": "The most important and widely utilized system for providing prognostic information following surgical management for renal cell carcinoma (RCC) is currently the tumor, nodes, and metastasis (TNM) staging system. An accurate and clinically useful staging system is an essential tool used to provide patients with counseling regarding prognosis, select treatment modalities, and determining eligibility for clinical trials. Data published over the last few years has led to significant controversies as to whether further revisions are needed and whether improvements can be made with the introduction of new, more accurate predictive prognostic factors. Staging systems have also evolved with an increase in the understanding of RCC tumor biology. Molecular tumor biomarkers are expected to revolutionize the staging of RCC by providing more effective prognostic ability over traditional clinical variables alone. This review will examine the components of the TNM staging system, current staging modalities including comprehensive integrated staging systems, and predictive nomograms, and introduce the concept of molecular staging for RCC. Over 200,000 new cases of kidney cancer are diagnosed and more than 100,000 deaths occur from this disease each year globally. RCC is a37et al.[Anatomical criteria have traditionally been used to stage RCC. Flocks and Kadesky were theet al. later moet al.et al.[The primary size of the tumor is a key component of the TNM staging system and remains one of the most important prognostic factors for RCC.8 In 199712et al. attempteet al.\u201319 As a et al. Althoughet al.22et al.[et al.[Several investigators have attempted to further improve the prognostic accuracy of T2 tumors by stratifying based on size. Frank, et al. found thl.[et al. reportedet al.[et al.[et al.[et al.[A 5-year cancer-specific survival rate for T3 disease ranges from 37% to 67%, which reflects this broad category that includes various clinical situations that involve tumor extension beyond the renal capsule.26 Tumorset al. reportedl.[et al. reportedl.[et al.\u201331 Leibol.[et al. 
reportedl.[et al. reportedet al.[et al.[et al.[et al.[et al.[The role of tumor size in T3a tumors has attracted little attention in literature. Siemer, et al. analyzedl.[et al. reportedl.[et al. also repl.[et al. investigl.[et al. reportedet al.[4338A few patients presented with RCC involving the ipsilateral adrenal gland at the time of diagnosis.38 The cu39et al. Others het al.42 Severaet al.4344 CurrRCC invades the venous system in 4\u20139% of newly diagnosed patients.46 In 19947485455et al.[The overall risk of lymph node metastasis is approximately 20% and 5-year survival rates of patients with lymph node involvement ranges from 11\u201335%.\u201359 Howev57et al.[et al.[et al.[Although it has been specified since the 6th edition of the TNM classification that histological examination of a regional lymphadenectomy specimens should routinely include 8 or more lymph nodes, few studies have challenged the N1-N2 subclassification. Previous studies have focused on the number of lymph nodes that were required for accurate staging as well as the utility and extent of the lymphadenectomy. Terrone,et al. reportedl.[et al. analyzedl.[et al. examinedThe anatomical, histological, and clinical factors that influence disease recurrence and survival in RCC make counseling patients particularly challenging. Many centers have aimed to integrate these independent prognostic indicators into comprehensive outcome models for both non metastatic and metastatic RCC to assist clinicians in facilitating patient counseling and identifying those patients who might benefit from treatment. The first report addressing this issue appeared in 1986 in which the factors predicting outcome for patients with metastatic RCC included performance status (PS), presence of pulmonary metastases, and metastatic-free interval. More recet al.[vs. 2 to 3), time from initial diagnosis (>1 year vs. 1 year), number of metastatic sites, prior cytotoxic chemotherapy, and recent weight loss. The Karnofsky or ECOG-PS scales are a convenient common denominator for the overall impact of multiple objective and subjective symptoms and signs on patients. Using this system, median survival times ranging from 2.1 to 12.8 months were observed across the five separate categories. As this cohort was examined prior to the initiation of immunotherapy, its validity for today's patient population is questionable.Elson, et al. presenteet al.[Motzer, et al. developeet al.[To analyze prognostic factors that would benefit modern day clinical trials, Motzer, et al. reviewedvs. two or three sites), the MSKCC definitions of risk groups were expanded to accommodate these two additional prognostic factors. Using this expanded criteria, favorable-risk was defined as zero or one poor prognostic factor, intermediate-risk was defined as two poor prognostic factors, and poor-risk was defined as more than two poor prognostic factors.A study of 353 patients with previously untreated advanced RCC at the Cleveland Clinic was conducted to assess and validate the model proposed from MSKCC. Four of The International Kidney Cancer Working Group is currently establishing a comprehensive database from centers that treat patients with metastatic RCC. This will be used to develop a set of prognostic factors in patients with metastatic RCC and ultimately to derive a single validated model. 
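The expanded risk-grouping rule quoted above (zero or one adverse factor defining favorable risk, two defining intermediate risk, and more than two defining poor risk) amounts to a simple count over the adverse prognostic factors. In the sketch below the particular factor names are placeholders, since the extracted text does not preserve the full expanded factor list.

```python
# Risk-group assignment by counting adverse prognostic factors, following the
# expanded MSKCC-style rule quoted above (0-1 = favorable, 2 = intermediate,
# >2 = poor). Factor names are placeholders for illustration only.

def risk_group(factor_flags):
    """factor_flags: dict of prognostic factor -> True if the adverse level is present."""
    n = sum(bool(v) for v in factor_flags.values())
    if n <= 1:
        return "favorable"
    if n == 2:
        return "intermediate"
    return "poor"

patient = {
    "low_performance_status": True,
    "time_from_diagnosis_under_1yr": True,
    "anemia": False,
    "hypercalcemia": False,
    "elevated_LDH": True,
    "prior_radiotherapy": False,
}
print(risk_group(patient))   # 'poor' (three adverse factors present)
```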
Preliminary studies were performed to determine the availability of a database that could be used for the planned analysis of prognostic factors, which involved the examination of 782 patients treated by the Groupe Francais d'Immunotherapie and patiThe Kattan postoperative prognostic nomogram was creaThe UCLA integrated staging system (UISS) is an extensive prognostic system that has been created for both localized and metastatic RCC. The initThe Mayo Clinic created an extensive outcome prediction model for patients with clear cell RCC who are undergoing a radical nephrectomy. Accordin7981et al.[In the metastatic setting, it is well accepted that PS is a strong predictor for survival. Similarly, several studies have shown that cancer-related symptoms were independent prognostic parameters in localized RCC.\u201385 Recen8688et al. recentlyet al.[et al.[The presence of distant metastases at diagnosis substantially changes the prognosis of patients with RCC.92 Leibovet al. reportedl.[et al. recentlyet al.[et al.[The outcome of patients with RCC nodal metastases is substantially worse than that of patients with localized disease. Hutterer, et al. developel.[et al. demonstrin situ detection of DNA, RNA, and protein in the same set of specimens, which can be correlated to clinical data with respect to disease progression, treatment response, and survival. The evaluation of protein expression in a high-throughput TMA is a natural extension to the efforts for molecular staging. Accurate models for predicting survival can be constructed using multiple molecular biomarkers. Kim, et al.[Molecular biomarkers may prove more effective for predicting survival than traditional clinical parameters such as tumor stage and grade. The nextm, et al.103 have Two nomograms were proposed that could be used to predict disease-specific survival. One nomogram is based on metastasis status and molecular markers . The second nomogram combines clinical and molecular variables . By including metastasis status, the nomograms accurately predict cancer-specific survival in patients with both localized and metastatic RCC. Both nomograms can be used to calculate 2- and 4-year cancer-specific survival rates as well as median survival. This study shows that accurate models for molecular staging of a solid tumor can be developed using a very limited number of markers. Although these nomograms are useful for visualizing our predictive models, they need to be validated on independent patient populations prior to being applied to patient care.et al.[et al.[et al.[et al.[Gene expression analysis studies have also demonstrated the ability to define patient prognosis. Takahashi, et al. showed tl.[et al. also idel.[et al. reportedl.[et al. identifiOver the last 10 years, there has been a gradual transition from the use of solitary clinical factors as prognostic markers for patients with RCC to the introduction of systems that integrate multiple factors to the introduction of molecular and genetic markers with the goal of improving patient prognostication. The field of RCC is rapidly undergoing a revolution led by molecular biomarkers. The understanding of tumor biology gleaned from molecular biomarker research will be critical to the future treatment of patients with RCC."} {"text": "However, if the same theoretical methods are used for analysis of actual experimental data, the apparent diffusion constants obtained are orders of magnitude lower than those in diluted aqueous solutions. 
Thus, it can be concluded that local restrictions of diffusion of metabolites in a cell are a system-level properties caused by complex structural organization of the cells, macromolecular crowding, cytoskeletal networks and organization of metabolic pathways into multienzyme complexes and metabolons. This results in microcompartmentation of metabolites, their channeling between enzymes and in modular organization of cellular metabolic networks. The perspectives of further studies of these complex intracellular interactions in the framework of Systems Biology are discussed.Problems of quantitative investigation of intracellular diffusion and compartmentation of metabolites are analyzed. Principal controversies in recently published analyses of these problems for the living cells are discussed. It is shown that the formal theoretical analysis of diffusion of metabolites based on Fick's equation and using fixed diffusion coefficients for diluted homogenous aqueous solutions, but applied for biological systems IndeeMay 2007 and anotMay 2007 , concernMay 2007 , arrive l., 2007 , howeverl., 2007 . Both stl., 2007 , 29. The [et al. is basedAnalysis of these two conflicting articles and their different historical and ideological backgrounds is most intriguing for the discussion of possible directions of development of strategies of metabolic research in the future. This is especially important because of the very rapid emergence of Systems Biology that is largely based on the application of mathematical modeling methods to systems with various complexities, in which compartmentation becomes one of the most important system \u2013 level properties, not predictable from the properties of isolated components only , 30\u201334.2/s [Biophysical Journal with most remarkable results, e.g. what seemed most unrealistic to Barros and Martinez was, by analysis of experimental data by Selivanov et al. [Biophysical Journal another article supported this conclusion: Iancu et al. [Barros and Martinez consider2/s . The fac2/s . Thus, t2/s \u201327 were 2/s . No expe2/s . Howeverv et al. , found tv et al. . In the u et al. presenteu et al. \u201337. Thesu et al. , is cleau et al. .et al. [Diffusion of metabolites in organized intracellular media has been studied for several decades with very clear results showing its restriction due to many physical factors and a multitude of parameters that are characteristic for the intracellular milieu , 38\u201340. et al. , and a met al. and Smolet al. . The latet al. . Fick's where the D is diffusion coefficient or diffusivity.Both Einstein and Smoluchowski described diffusion at a microscopic level, describing its molecular mechanism \u2013 the Brownian movement \u201347. The 2/4t and D = \u03bb2/6t, respectively [This equation was found for the movement in one dimension \u201347. For ectively \u201347. ThesD described by Einstein \u2013 Smoluchowski's equation is related to the particle radius r and the viscosity of the medium \u03b7 by Stokes \u2013 Einstein equation [The diffusion coefficient or diffusivity equation \u201347:et al. have found that at least eleven conditions assumed in derivation of Fick's law and the Einstein \u2013 Smoluchovski model are not entirely met in the intracellular milieu [d)\u22121 where C is concentration of binding sites and Kd is dissociation constant of solute from these complexes [2 where \u0394\u03bb is relative increase in path attributable to the barriers [et al. 
even recommend not to use \u201cdiffusivities\u201d or \u201cdiffusion coefficients\u201d for biological systems but to use some terms of the type of \u201cempirical transport coefficient\u201d when the Fick's equation is formally applied for intracellular processes [Dapp \u201c, Dapp = DFxD0, where Do is the diffusion coefficient in bulk water phase and DF is a diffusion factor accounting for all intracellular mechanisms locally restricting particles movement [DF value in some areas of cells, for example in myofibrils, close to sarcolemma and mitochondrial outer membrane may be in the range of 10\u22122\u201310\u22125 [All these classical theories have been developed for weakly interacting rigid particles at sufficiently low concentrations. Agutter r milieu \u201340. The r milieu \u201340. Thenbarriers \u201340. Finabarriers \u201349. Agutmovement , 26, 50.movement , 51, 52 0\u22122\u201310\u22125 , 50\u201352.The results of these local diffusion restrictions are microcompartmentation of metabolites and their channeling within organized multi-enzyme complexes which need to be accounted for to explain many biological phenomena. Indeed, none of the important observations in cellular bioenergetics could be explained by a paradigm describing a viable cell as a \u201cmixed bag of enzymes\u201d with homogenous metabolite distribution, since this simplistic theory excludes any possibility of metabolic regulation of cellular functions.et al., who showed that only due to this important characteristics of intracellular medium, the apparent diffusion coefficients may be decreased by order of magnitude depending upon the size of the diffusing particles and occupied volume fraction [The first phenomenon to be taken into account in all cells is macromolecular crowding: the high concentrations of macromolecules in the cells \u20139 decreafraction .At first sight, this macromolecular crowding should cause a real chaos by making intracellular communication by diffusion of reaction intermediates very difficult. This chaos and related problems are well described by Noble in his recent book . In real31P-NMR, showed that the diffusions of ATP and phosphocreatine both are anisotropic in muscle cells [in situ in permeabilized cardiac cells also showed that ADP or ATP diffusion in cells is heterogeneous and that the apparent diffusion coefficient for ADP (and ATP) may be locally decreased by an order, or even several orders of magnitude [Many new experimental techniques have been developed to study the molecular networks formed by protein\u2013protein interactions . In the le cells , 58. Recagnitude . A similagnitude , 52. In agnitude . There iDue to molecular crowding and hindered diffusion cells need to compartmentalize metabolic pathways in order to overcome diffusive barriers. Biochemical reactions can successfully proceed and even be facilitated by metabolic channeling of intermediates due to structural organization of enzyme systems into organized multienzyme complexes. Metabolite channeling directly transfers the intermediate from one enzyme to an adjacent enzyme without the need of free aqueous-phase diffusion , 49, 60.macrocompartments \u2013 subcellular regions which are large relative to the molecular dimension, and microcompartments which are of the order of the size of metabolites. 
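For orientation, the relations invoked earlier in this passage (the mean-squared-displacement forms λ² = 2Dt, 4Dt and 6Dt in one, two and three dimensions, the Stokes-Einstein relation D = kT/(6πηr), and the apparent coefficient Dapp = DF × D0) can be combined into a small worked example of how strongly a diffusion factor DF in the quoted range of 10−2 to 10−5 stretches the time needed to cover a typical intracellular distance. The bulk-water coefficient assumed below for a small metabolite is only indicative.

```python
# Worked example: characteristic time to diffuse a distance x in three dimensions,
# t = x^2 / (6 D), for an assumed bulk-water coefficient D0 and for apparent
# coefficients Dapp = DF * D0 spanning the diffusion-factor range quoted above.

D0 = 3e-6     # cm^2/s, assumed bulk-water diffusion coefficient of a small metabolite
x = 1e-4      # cm, i.e. 1 micrometre, a typical intracellular distance

def diffusion_time(D, distance):
    """Mean time to cover `distance` by 3-D diffusion, from <r^2> = 6 D t."""
    return distance**2 / (6.0 * D)

print(f"bulk water:  t = {diffusion_time(D0, x):.2e} s")
for DF in (1e-2, 1e-3, 1e-5):
    print(f"DF = {DF:.0e}: t = {diffusion_time(DF * D0, x):.2e} s")
```

On these illustrative numbers, the diffusion time over one micrometre grows from well under a millisecond in bulk water to tens of seconds at DF around 10−5, which is the scale of restriction the cited experimental studies attribute to local intracellular structures.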
Compartment means \u201csubcellular region of biochemical reactions kinetically isolated from the rest of cellular processes\u201d [via microcompartments or by direct transfer [Thus, the principal ways and mechanisms of organization of cell metabolism are macro- and microcompartmentation, metabolic channeling and functional coupling. By definition, the term compartmentation is usually related to the existence of intracellular ocesses\u201d , 62. Macocesses\u201d \u201367 and oocesses\u201d has beenocesses\u201d , 21. Mulocesses\u201d . Thus, tocesses\u201d \u201367, Krebocesses\u201d and manyocesses\u201d , 71. Newocesses\u201d , 37. Mictransfer , 49, 60.Interestingly enough, there is an exciting hypothesis that these phenomena, in particular metabolic channeling, are even older than life itself and related to its origin. Edwards and others .[in vivo , 90. Pau[in vivo . As anot[in vivo . These c[in vivo . ATP inhntration , 52. Aga [et al. showed tartments , 79, 84.2+ microdomains (Ca2+ sparks) which form a discrete, stochastic system of intracellular calcium signaling in cardiac cells [in vivo kinetics of the energy transfer by which high-energy phosphoryl fluxes through creatine kinase, adenylate kinase and glycolytic phosphotransfer, captured with 18O-assisted 31P-NMR, were shown to tightly correlate with the performance of the myocardium under various conditions of load [31P-NMR saturation or inversion transfer methods [et al. [Remarkably, in the heart the intracellular energy transfer networks are structurally organized in the intracellular medium where macromolecules and organelles, surrounding a regular mitochondrial lattice, are involved in multiple structural and functional interactions \u201327, 34. ac cells . The strac cells \u201387. Thesac cells , 83\u201390. ac cells , 96 usinac cells , 61. To ac cells \u201327. Due ac cells , 83\u201390. of load , 98, imp methods , 100. Mo methods . All the methods . Similar methods \u201337. This [et al. . These a [et al. . In addi [et al. . These m [et al. , 104 and [et al. . The aut [et al. .31P-NMR global creatine kinase flux in muscle in vivo, some laboratories [Biophysical Journal [31P-NMR flux behaviour of the creatine kinase system in creatine kinase knock-out mice or such with graded expression of creatine kinase [Some 10 years ago, based on measurements, utilizing a new technology that made it possible to directly measure by saturation transfer ratories , 106 camratories saying t Journal , and tha Journal and on ae kinase , 108, ase kinase . Now, ite kinase .However, modern biophysical chemistry can no longer be restricted to classical theories of diluted homogenous solutions, as misleadingly taken by Barros and Martinez . The secin vivo situation for cells that possess an extensive myofilament lattice, as muscle cells, or a dense cytoskeletal an mitochondrial network and other diffuson restrictions inferred by intracellular organization or macromolecular crowding. Moreover, biological sciences witness now a radical change of paradigms. 
Reductionism that used to be the philosophical basis of biochemistry and molecular biology, when everything from genes to proteins and organelles was studied in their isolated state, are superseded by Systems Biology, a holistic view that favours the study of integrated systems at all levels: cellular, organ, organism, and population, accepting that the physiological whole is greater than the sum of its parts [in vivo situation is much more realistically described and well illustrated by Selivanov et al. [in vivo situation. Apparent diffusion coefficients Dapp and diffusion factors DF (see above), as well as compartmentation, are system level properties not predictable from isolated components but arising from their interactions [in situ and, upon proteolytic digestion and determintation of the isotopically labeled peptides by mass spectrometry, sift through large protein sequence data bases to identify the cross-linked peptides and attribute them to the original partner proteins that at the time and conditions of chemical cross-linking did interact with each other [What modern biophysical chemistry needs is to critically reevaluate of use of the principles of classical physical chemistry that have been worked out for diluted aqueous solutions and to adapt these concepts to the ts parts , 30\u201334. ts parts . The sitts parts , who conv et al. . Clearlyv et al. also revractions , 33. Higch other ."} {"text": "Although various drugs for its treatment have been synthesized, the occurring side effects have generated the need for natural interventions for the treatment and prevention of hypertension. Dietary intervention such as the administration of prebiotics has been seen as a highly acceptable approach. Prebiotics are indigestible food ingredients that bypass digestion and reach the lower gut as substrates for indigenous microflora. Most of the prebiotics used as food adjuncts, such as inulin, fructooligosaccharides, dietary fiber and gums, are derived from plants. Experimental evidence from recent studies has suggested that prebiotics are capable of reducing and preventing hypertension. This paper will discuss some of the mechanisms involved, the evidence generated from both In ordePrebiotics are non-digestible food ingredients that can escape digestion under the harsh conditions of the upper gastrointestinal tract and reach the lower gut as substrates for the fermentation by selective indigenous gut microflora. Most of the prebiotics studied are plant derivatives such as fructooligosaccharides (FOS), inulin and fibers. FOS contains 2 to 10 fructose units linked by glycosidic bonds, while inulin is a fructose polymer with \u03b2-(2-1) glycosidic linkages with chains of 3 to 60 units. Both FOS and inulin are found abundantly in chicory and artichokes. The major component of chicory root is inulin. Inulin belongs to the fructan family, and occurs naturally as important storage carbohydrates. Other than chicory, fructans are also found present in artichokes, salsify, asparagus and onions . Plant fThis paper will discuss some of the antihypertensive mechanisms that have been documents. This paper will also discuss some of the experimental evidence on the antihypertensive properties of plant derived prebiotics, with emphasis on their hypocholesterolemic and hypoglycemia effects. However, controversies have arisen where some studies showed promising results while others exhibited insignificant findings. 
Thus, this paper will also highlight some of these controversies.2.Various mechanisms have been postulated to explain the ability of prebiotics to reduce the risk of hypertension. One of the possible mechanisms is via the lowering of blood lipid and cholesterol. Previous studies have demonstrated that intensive reduction of cholesterol may be beneficial in the treatment of patients with isolated systolic hypertension . The lipet al. [et al. [Soluble prebiotics such as pectin, konjac mannan and modified starches are soluble in solutions leading to a thickening and viscous effect. Such physicochemical properties have been found to affect physiological responses such as the lowering of blood cholesterol, and increasing satiety by delaying gastric emptying and a reduced speed of gastric transit in the upper gastrointestinal tract. Levrat-Verny et al. evaluate [et al. proposed [et al. . This ov [et al. .et al. [et al. [In another study, Lairon et al. suggesteet al. . Obesityet al. . Therefoet al. . GLP-1 iet al. . This co [et al. had prev [et al. .Another possible mechanism by which prebiotics could regulate blood pressure includes the attenuation of insulin resistance . Insulinet al. [+ have been found to exchange for Ca2+ in the distal colon regions [2+ which favour passive diffusion and consequently absorbed by the human colon [Additionally, prebiotics have also been reported to reduce the risk of hypertension by improving the absorption of mineral such as calcium in the gastrointestinal tract . Past stet al. conducteet al. . Howeveret al. . The conet al. . Prebiotet al. . Therefoet al. . Prebiot regions . This woan colon .3.in-vivo trials , LDL-cholesterol concentration by 25.9% (P<0.01), IDL-cholesterol level by 39.4% (P<0.001) and VLDL-cholesterol concentration by 37.3% (P<0.05) compared to the control group.In a study evaluating the influence of prebiotics on cholesterol, Mortensen et al. administP<0.02) and LDL-cholesterol (P<0.005) by 8.7% (\u00b1 3.3) and 14.4% (\u00b1 4.3), respectively compared to the control. In another study, Rault-Nania et al. [P<0.05) decrease in hepatic triglycerides concentration as compared to those in the control group, whereby the blood pressure of those on inulin and oligofructose were in the range of 138.0 \u00b1 2.2 and 136.9 \u00b1 2.0 mm Hg respectively, while the control showed a blood pressure of 145.8 \u00b1 1.3 mm Hg.Davidson and Maki conductea et al. found thet al. [P<0.05) hepatic triacylgylcerol by 48% compared to the control group. In another study, Busserolles et al. [P<0.05) decline of 0.33% in hepatic triacylglycerol as compared to the control. These studies provided the experimental and clinical evidence that the supplementation of prebiotics such as inulin and oligofructose could be used as a mean to control hypertension.Hypertriglyceridemia is often associated with a moderate hyperglycemia and insulinemia , and preet al. administs et al. found th3.2.et al. [P<0.05) reduced serum glucose. Similarly, Kok et al. [P<0.05) reduced blood glucose by 26.0% as compared to those of the control group which only showed a reduction of 9.7%, which was insignificant statistically. The finding indicated that diet supplemented with oligofructose was effective in reducing blood glucose and indirectly could alleviate the risks of hypertension.Experimental evidence has also demonstrated that diabetes elevates the risk of hypertension . Dietaryet al. administet al. found thk et al. studied P<0.05) improvement of glucose intolerance as compared to the control. 
Giacco et al. [P<0.02) reduction of postprandial insulin response as compared to those in the placebo group which did not show any significant differences.Similarly, Suzuki and Hara studied o et al. evaluatein-vivo trials.Various studies have highlighted the beneficial effects of prebiotics on physiological conditions such as lipid and glucose profiles that are directly associated with hypertension. Hence, there is a strong basis for continuous evaluation on prebiotics specifically aimed at utilizing longer and larger 4.et al. [et al. [Epidemiological studies have found that hypertensive patients frequently have a high level of serum cholesterol. Blood pressure and cholesterol are closely related and many researches have demonstrated that the lowering of cholesterol contributes to antihypertension effects. Thus, an increase in both serum cholesterol and triglycerides had been reported to elevate blood pressure values . Accordiet al. and Ferr [et al. , blood pet al. [et al. [According to Luo et al. , blood l [et al. showed tet al. [et al. [In another study, Pedersen et al. reported [et al. also sho [et al. , fifty-eet al. [et al. [Jenkins et al. used a cet al. . Fermentet al. . Accordi [et al. , acetateet al. [et al. [et al. [et al. [The concordance of hypertension and diabetes is increased in the population and hypertension is disproportionately higher in diabetics . High blet al. previous [et al. also sho [et al. demonstr [et al. evaluateFuture studies aimed at investigating the effects of prebiotics on serum cholesterol or blood glucose concentration should consider the choice of subjects and the length of the supplementation period. Past studies investigating on the effects of NDO such as inulin and FOS in humans on hypertension remains relatively controversial. However, considering the vast experimental evidence on their positive roles, the potential of prebiotics as an antihypertensive agents warrant further investigations.5.in-vitro experiments and in-vivo trials have exhibited the need for further evaluation of the antihypertensive properties of plant-based prebiotics.Results from recent studies support the antihypertensive potential of plant-based prebiotics, and shown that they could exert such a beneficial effect via various mechanisms. Although controversial findings are raised, positive outcomes from both"} {"text": "Amiodarone, a class III antiarrhythmic drug, has been found to be effective in the management of patients with life-threatening ventricular arrhythmias. The aim of this study was to test whether the co administration of vitamin-E with amiodarone can reduce amiodarone-induced liver damage.Twelve male albino rats were divided into three groups (ml vegetable oil/day by oral gavages daily for 2 weeks and were used as control group. The rats of the second group received 5.4 mg amiodarone/100 gm rat dissolved in vegetable oil daily by oral gavages for 2 weeks. In the third group, the rats received 5.4 mg amiodarone and 5 mg vitamin-E/100 gram rat dissolved in 2 ml vegetable oil by oral gavages daily for 2 weeks. Two weeks after treatment, the rats were sacrificed and liver specimens were immediately taken and processed for transmission electron microscopic examinations.Sections from the rat liver receiving amiodarone examined by electron microscopy showed disrupted hepatocytes with increased vacuolations. Degenerated organelles and disrupted nuclei were observed. The microvilli of bile canaliculi were disrupted and the hepatocytes showed increased lipid contents. 
Both endothelial cells and Kupffer cells were damaged. Phospholipid deposits were present inside the mitochondria, which showed a loss of cristae. Sections from the liver of rats that received amiodarone and vitamin-E showed lesser effects, especially in the deposition of phospholipids in the mitochondria, and the organelles and the nucleus as a whole showed only minor damage in comparison to the previous group.

Milder hepatotoxic effects are seen in rats administered amiodarone and vitamin E simultaneously, suggesting that vitamin-E may play a role in the amelioration of the effects of amiodarone.

The antioxidant vitamin-E has been shown to reduce lysosomal phospholipidosis. The aim of this study was to test whether the co-administration of vitamin-E with amiodarone can reduce amiodarone-induced liver damage, using the electron microscope.

Twelve male albino rats were divided into three groups of four rats each. In the first group, the rats received 2 ml vegetable oil/day by oral gavage daily for 2 weeks and were used as a control group. The rats of the second group received 5.4 mg amiodarone (chlorhydrate D', Sanofi, France) per 100 gm body weight dissolved in vegetable oil daily by oral gavage for 2 weeks. This corresponds to the maximum human daily therapeutic dose converted into the equivalent rat dose according to Paget's table. In the third group, the rats received 5.4 mg amiodarone and 5 mg vitamin-E per 100 gm body weight dissolved in 2 ml vegetable oil by oral gavage daily for 2 weeks.

Small pieces of the liver parenchyma were fixed in 2.5% glutaraldehyde for 24 hours. The pieces were washed in phosphate buffer. Postfixation was performed in 1% osmium tetroxide buffered to pH 7.4 with 0.1 M phosphate buffer at 4°C for 1-2 h, and the tissue was then washed again in phosphate buffer to remove the excess fixative. The samples were dehydrated through ascending grades of ethanol, followed by clearing in propylene oxide. The specimens were embedded in araldite. Polymerization was obtained by placing the capsules at 60°C. Ultrathin sections (100 nm) were prepared using an ultramicrotome and picked up on uncoated copper grids. Following double staining with uranyl acetate and lead citrate, sections were examined and photographed using a JEOL 100 Cx transmission electron microscope (Japan).

Malondialdehyde (MDA) was determined and measured according to the method of Yoshioka et al. Data were analysed statistically.

On electron microscope examination, sections from the rat liver of the control group showed hepatocytes with mitochondria with prominent cristae, plenty of rough endoplasmic reticulum, and a nucleus surrounded by a nuclear membrane with chromatin masses and a nucleolus. The fine structure from rats receiving the amiodarone drug showed degenerated hepatocytes with many vacuoles and damaged nuclear chromatin. On electron microscopic study, a section of the hepatocytes from rats that received amiodarone and vitamin E showed rounded nuclei with an intact nuclear envelope, chromatin masses, a nucleolus and minimal lipid droplets.
There was a significant increase in plasma MDA in group II as compared to group I (P < 0.001). There was a nonsignificant increase in plasma MDA in group III as compared with group I (P > 0.05). There was a significant increase in plasma MDA in group III as compared with group II (P < 0.001). Analysis of variance (F-test) showed a significant difference between the studied groups (P < 0.001).

Amiodarone is a lipophilic antiarrhythmic/antianginal drug which is able to influence the physicochemical status of biological lipid components. Since oxidation of lipids is affected by their physicochemical state and amiodarone binds to lipoprotein, Lapenna et al. hypothesized an effect of the drug on lipid peroxidation. The fine structure of the hepatocytes from the rats that received the amiodarone drug showed degenerated hepatocytes with many vacuoles and damaged nuclear chromatin. These necrotic hepatocytes, with disrupted cytoplasm and a large number of pathological organelles, suggest the occurrence of amiodarone toxicity. The mitochondria showed deposits and many lipid droplets. These intramitochondrial lipids are characterized by the lack of a limiting membrane, an amorphous appearance, a medium to high density, and a rounded or irregular form. In our study, the Kupffer cells were destroyed and the blood sinusoids were found to be fragmented; this is in accordance with the study of Ireton.

In the present investigation, the electron microscopic study of a section of the liver from rats that received amiodarone and vitamin-E showed a rounded nucleus with an intact nuclear envelope, chromatin masses, a nucleolus and minimal lipid droplets. The hepatocytes of the same group revealed an intact rough endoplasmic reticulum, but the mitochondria were still damaged, although without deposits. In addition, the hepatocytes showed normal bile canaliculi with normal microvilli and intact junctional complexes. The blood sinusoid of the hepatocytes was intact, with a healthy Kupffer cell. The nuclei of the hepatocytes were intact and surrounded by a nuclear envelope, and chromatin masses plus a nucleolus were also seen. Vitamin E served to improve the antioxidant defense system. It has been demonstrated by several authors that antioxidants such as vitamin-E have protective effects in this setting. In conclusion, our study shows that vitamin-E co-administration with amiodarone led to lesser histologic changes in the parenchyma of rat liver, suggesting that vitamin-E pretreatment may play a role in the amelioration of the side effects of amiodarone."} {"text": "We report a rare case of Alport syndrome with progressive posterior lenticonus. A 24-year-old male presented to our tertiary eye care center with a history of poor vision. At initial presentation, the patient had bilateral anterior lenticonus, posterior subcapsular cataract, and renal failure. The patient was diagnosed with Alport syndrome based on a positive family history of the disease and clinical findings. Further examination revealed progressive posterior lenticonus that was not present initially. The presence of such a finding is important because it influences the surgical approach to avoid complications during cataract surgery. Alport syndrome is a rare clinical entity characterized by the familial occurrence of hemorrhagic nephritis and sensorineural deafness (Alport 1927).
Alport syndrome has a prevalence of 1/5000, with 85% of affected individuals having the X-linked form, in which affected males develop renal failure and usually have high-tone sensorineural deafness by the age of 20. The typical ocular signs are dot-and-fleck retinopathy, which occurs in 85% of affected adult males, anterior lenticonus, which occurs in about 25%, and the rarer posterior polymorphous corneal dystrophy.

A 24-year-old male presented to our tertiary eye care hospital complaining of cloudy vision bilaterally since childhood, with the right eye affected more than the left. The patient had long-standing episodes of hematuria and was on renal dialysis for chronic renal failure at the time of initial presentation. In addition, the patient reported difficulty in hearing. Ocular examination revealed a best corrected visual acuity of 20/80+1 in the right eye and 20/60+1 in the left eye. The patient's manifest refraction was –2.50 –0.25 × 80 in the right eye and –2.50 –2.50 × 95 in the left eye. Intraocular pressure was normal bilaterally. Slit lamp examination of the right eye was within normal limits except for advanced anterior lenticonus, and fleck retinopathy was present on fundus examination.

Some authors have considered anterior lenticonus as a manifestation of Alport syndrome, whereas posterior lenticonus is not associated with systemic disease. The ocular and clinical features of Alport syndrome are identical in both the X-linked and autosomal recessive forms. Retinopathy and cataracts are the only ocular abnormalities described in the rare autosomal dominant form of Alport syndrome. The X-linked form has been mapped to defects in the α-5 chain of the type IV collagen gene (COL4A5), while the autosomal forms involve the COL4A3 and COL4A4 genes. All mutations lead to abnormalities in the basement membrane of the glomerulus, cochlea, retina, lens capsule, and cornea, which eventually contribute to the typical phenotype of Alport syndrome. As Alport syndrome results from mutations in genes coding for type IV collagen, the lens capsule is considerably thinner. The presence of cataract, either as a component of the disease or as a side effect of oral steroids following renal transplant, together with a fragile capsule, makes cataract surgery more challenging. Phacoemulsification has been reported as a safe procedure in such cases, including by Zare et al.

In conclusion, Alport syndrome affects multiple systems, including the eye. The ocular manifestations are important to recognize in order to determine the proper medical and surgical therapy. Posterior lenticonus, which was once considered an isolated manifestation, is being reported more frequently in association with Alport syndrome, suggesting that posterior lenticonus is part of the disease.
The readers are reminded that this technique is still experimental and informed consent to be obtained from patients after counseling with medical information on the risks involved. Advances in chemotherapy and radiotherapy have increased the survival rate of cancer patients, amazingly up to 90% for young cancer patients. Studies in the US have shown that by 2010, one in 200 individuals will be a survivor of childhood cancer.[Recovery of ovarian function after anticancer treatment is very much affected by the loss of follicles due to chemo- or radiotherapy, resultinin vitro maturation (IVM) followed by cryopreservation. Alternatively, oocytes could be isolated and cryopreserved from ovarian tissue biopsy or the whole ovary. If the patient had a partner, these oocytes could be fertilized after IVM and the resulting embryos could be frozen.[Fertility of female cancer patients could be preserved by various means. The ovaries may be transposed under peritoneum (oophoropexy) to protect them from pelvic irradiation. However,e frozen.6 Howeveret al.[The ovary has hundreds of primordial follicles containing immature oocytes which are smaller, dormant, less differentiated and without zona. Such immet al. have shoet al.[Ovarian tissue cryopreservation has been practised since the early 1950s. Parrot1 showed tet al. reportedet al. and rabbet al. These suet al.[Roughly three decades after Parrot's study, reports et al. showed tet al.[Newton et al. obtainedet al.[in vitro cultures of cryopreserved ovarian tissue were viable up to 10\u201315 days. In vivo restoration of ovarian function after cryopreservation and autologous transplantation of ovarian cortex in human beings was first documented by Oktay and Karilkaya.[et al.[in vitro and ICSI was performed, it did not fertilize. Later in 2004, normal embryonic development was reported by the same group in oocytes retrieved from frozen-thawed ovarian cortical strips transplanted beneath the abdominal skin of a breast cancer patient.[Hovatta et al. further arilkaya. This orta.[et al. reported patient.et al.[in vitro fertilization in a modified natural cycle after the transplantation of cryopreserved ovarian tissue also resulted in livebirth.[The first livebirth from frozen-thawed ovarian cortex after autologous orthotopic transplantation was reported by a Belgian group led by Donnez. Ovarian et al. had replet al. Pregnancivebirth. These liPotential indications for ovarian tissue cryopreservation have been listed by various authors.\u201326 Patie2532in vitro maturation with recombinant FSH and LH. ICSI was performed on three mature oocytes using her husband's sperm while two were fertilized normally. These fertilized eggs cleaved into good quality four-cell embryos which were frozen as the patient had to complete her therapy.[At our center, we approached fertility preservation for a married, ovarian cancer patient, in a different way. She underwent oophorectomy after which the ovary was sent to the IVF lab. Six immature oocytes were collected from visible follicles and subjected to therapy. To the b therapy. This app therapy.35 with eMedium to transport tissue to the labet al.[2 incubator until slicing of the tissue was completed.[It is well known that ovarian cortex has hundreds of primordial and growing follicles. Therefore, only ovarian cortex is required for ovarian tissue cryopreservation. Ovarian cortex may be obtained using different surgical techniques as described by various reports mentioned above. 
The ovarian cortex should be delivered in a suitable medium to the cryopreservation lab. Leibovitz L-15 (Life Technologies) medium has been used by various authors according to the method described by Gosden et al. to transet al. The tranet al. HEPES-buompleted. However,ompleted. to transompleted. kept theRemoval of stromal tissue before slicing the cortex is considered to be important to reduce the thickness of the tissue to be frozen. Presence of stromal tissue may impair the permeation of the cryoprotectant into the ovarian cortex, thereby, resulting in poor or reduced survival rates of the follicles.\u201340 The t2228Cryoprotectant and dehydration time are major factors in the cryopreservation of any tissue. DMSO, propanediol (PROH), ethylene glycol (EG) and glycerol have been used by various authors as cryoprotectants to cryopreserve ovarian tissue. Although PROH is the popular cryoprotectant to freeze early cleavage embryos, DMSO also appears to be widely used in ovarian tissue cryopreservation. Survival of primordial follicles was the poorest after freezing and thawing with glycerol as the cryoprotectant compared to DMSO, PROH and EG. EG has get al.[363840Initially, cryopreservation protocols for ovarian tissue did not have sugar in the cryoprotective medium.15 Additiet al. observedet al.36384041vs two) does not seem to make much difference although Gook et al.[No standard dehydration time or duration of incubation of ovarian tissue in the cryoprotective medium has been found in the literature; the duration varies from 15 to 90 min.1538\u201340 G15Most of the studies have used the Planer programmable freezer to cryopreserve human ovarian tissue. Protocols using DMSO or EG start freezing at 0\u00b0C while those using PROH start near room temperature. Seeding temperature also varies among the studies from -6\u00b0C to -9\u00b0C although most of the studies seed at -8\u00b0C. The program is similar in most of the reports with a cooling rate of -2\u00b0C/min to the seeding temperature followed by -0.3\u00b0C/min to -30\u00b0C (or -50\u00b0C) and finally, -50\u00b0C/min to -140 to -150\u00b0C141538 followet al.[et al.[et al.[et al.[Thawing of frozen ovarian tissue also varies in the published reports. Newton et al. applied et al. Gook et l.[et al. have alsl.[et al. have thal.[et al. The frozl.[et al. in whichet al.[The pregnancies and livebirths reported from human ovarian cryopreservation and transplantation resulted from slow freezing of ovarian tissue. Gook et al. have docet al. Vitrificet al. However,et al. Vitrificet al.,[Orthotopic and heterotopic sites were studied for transplantation of frozen-thawed ovarian tissue. The existing menopausal ovary has been the popular site in orthotopic transplantation studies.222339 Al2223et al., noted wiet al.,19in vitro maturationCollection of oocytes and in vitro maturation.[in vitro matured oocytes could be fertilized by ICSI and the resulting embryos could be frozen.[6An alternative means of fertility preservation is feasible in addition to ovarian tissue cryopreservation. 
Fresh ovarian cortex may be transplanted to monozygotic twins who are discordant for ovarian failure.33 Immatuturation.49 On thee frozen.640 This Cryopreservation of the whole ovary has been attempted as an alternative to ovarian cortical slices as these slices suffer from loss of follicles mainly from ischemic damage and sometimes due to cryoinjury.50 Transp51et al.[The account given so far clearly shows that ovarian tissue cryopreservation is still experimental, although pregnancies from this technique have been reported. Therefore, the patient should be counseled thoroughly that this technique is purely experimental and that the risk of reintroducing cancerous cells, although very rare, is not ruled out. Parents should be given available information and future prospects on ovarian tissue cryopreservation if the patient is a minor. Using the tissue is entirely left to the option of minor patients when they enter adulthood, as starting a family is their choice. Getting informed consent after conveying the relevant medical information explaining the risks and uncertainties would help adult patients and parents of minor patients to avail the technique of ovarian tissue cryopreservation. The article by Van Den Broecke et al. is recomOvarian tissue cryopreservation is a promising mean of fertility preservation. Although spontaneous pregnancies from ovarian tissue cryopreservation and transplantation have been reported, the technique is still considered to be experimental. The source of pregnancy after ovarian tissue transplantation from ovarian transplants or the recovered existing menopausal ovary is still unclear, although evidence is in favor of transplants. Cryopreservation of the intact ovary and concerns about ethical issues related to this technique have to be explored further."} {"text": "To the Editor: Burckhardt et al. (Clostridium difficile\u2013associated disease (CDAD) in Saxony, Germany. In contrast to the observation by Wilcox and Fawley in the United Kingdom as did other infectious gastroenteritides . Thus, a"} {"text": "The explanation is multifactorial, determined by the complex interactions between the tumor and its microenvironment, the virus, and the host immune response. This review focuses on discussion of the obstacles that oncolytic virotherapy faces and recent advances made to overcome them, with particular reference to adenoviruses.Targeted therapy of cancer using oncolytic viruses has generated much interest over the past few years in the light of the limited efficacy and side effects of standard cancer therapeutics for advanced disease. In 2006, the world witnessed the first government-approved oncolytic virus for the treatment of head and neck cancer. It has been known for many years that viruses have the ability to replicate in and lyse cancer cells. Although encouraging results have been demonstrated Although treatments for the disease have improved significantly, conventional chemotherapy or radiotherapy still have limited effects against many forms of cancer, not to mention a plethora of treatment-related side effects. This situation signifies a need for novel therapeutic strategies, and one such approach is the use of viruses. The ability of viruses to kill cancer cells has been recognized for more than a century . They acThe term \u2018oncolytic viruses\u2019 applies to viruses that are able to replicate specifically in and destroy tumor cells, and this property is either inherent or genetically-engineered. 
Inherently tumor-selective viruses can specifically target cancer by exploiting the very same cellular aberrations that occur in these cells, such as surface attachment receptors, activated Ras and Akt, and the defective interferon (IFN) pathway . Some viet al. [More recently, gene silencing by RNA interference technology has been utilized to confer tumor selectivity. MicroRNAs (miRNAs) or small interfering RNAs (siRNAs) regulate gene expression post-transcriptionally by translation block or cleavage of specific, complementary mRNA via the RNA-induced silencing complex (RISC). By inserting a complementary sequence next to a critical viral gene, it is possible to confine virus replication to tumor but not normal cells that express high levels of the corresponding miRNA. This has been demonstrated by several groups \u201338. G\u00fcrlet al. developedl1520 is an oncolytic Ad2/Ad5 hybrid with deletion of its E1B 55K and E3B genes. The E1B 55K protein is involved in p53 inhibition, viral mRNA transport and host cell protein synthesis shut-off [E1B 55K and E3B genes. A recent finding by Thomas et al. [dl1520 was less efficient in lysing cells infected in the G1 phase of the cell cycle due to a reduced rate of late viral protein synthesis, and this appears to be a result of the adenoviral gene product encoded by open reading frame 1 of early region 4 (E4orf1). As such there is a need to increase the potency of these viruses by identifying mutations that result in tumor selectivity but not those that result in attenuated virus replication and oncolysis. Since the first generation of replication-selective Ads was tested in pre-clinical experiments and clinical trials, several advances have been made to improve potency by dissecting the functions of different genes of Ad.Gene-manipulated oncolytic viruses such as Ad, herpes virus and vaccinia virus are being developed as a new class of anti-tumoral agent ,40,41. Sshut-off , was unable to replicate in quiescent normal cells but was able to do so in cancer cells with defective G1-to-S checkpoint. This virus has demonstrated superior anti-tumoral activity in vivo compared to dl1520 after intratumoral and intravenous injections [E1B 19K deletion (dl250) was significantly reduced in normal cells secondary to rapid apoptosis induction in the presence of tumor necrosis factor-\u03b1 (TNF-\u03b1), whilst the opposite occurred in cancer cells due to multiple defects in the apoptotic pathways [dl1520 and wild-type Ad2. E1B 19K-deleted Ad5-infected cancer cells also expressed lower levels of EGFR and anti-apoptotic proteins [The adenoviral ost cell . E1A norjections , althougjections ,52, adenjections \u201355 and ajections . Replicaression) was able to selectively target Epstein-Barr virus (EBV)-associated tumors such as Burkitt\u2019s lymphoma and nasopharyngeal carcinoma [dl331 to enable the synthesis of viral proteins. Interestingly, anti-tumoral efficacy in vitro and in vivo was superior to wild-type Ad5 and this might be the result of PKR-induced apoptosis, increased IFN-\u03b2 production, and the adenoviral E3B gene deletion.Ads also produce the virus-associated (VA) RNAs. These are RNA polymerase III transcripts that, amongst other functions, are obligatory for efficient translation of viral and cellular mRNAs by blocking the double-stranded RNA-activated protein kinase (PKR) ,59, a naarcinoma . This isE3 region could also affect its oncolytic potency. 
These include the E3 11.6K (or adenovirus death protein \u2013 ADP), which facilitates late cytolysis of infected cells and release of progeny viruses [E3B and E3 gp19K genes on the potency of oncolytic adenovirus will be discussed later.Gene products encoded by the adenoviral viruses . Ads tha viruses ,62. The TP53 gene that was approved in 2004 by China\u2019s State Food and Drug Administration for the treatment of head and neck cancer [E1A gene deletion) is that infectivity is limited to only one cycle. In contrast, oncolytic viruses can replicate and spread in cancer cells resulting in longer transgene expression. Together with tumor lysis this would lead to better therapeutic efficacy. Arming oncolytic viruses with anti-cancer genes has been a major focus in cancer virotherapy, and transgenes exploited include tumor suppressor, pro-apoptotic, anti-angiogenic, \u201csuicide\u201d, and immunomodulatory genes.The discovery of the genetic basis of malignancy has in part promoted the development of cancer gene therapy, which involves the introduction of exogenous nucleic acid to restore, express or inhibit a particular gene of interest. Viruses are at present the most efficient gene delivery system. A well-known example is Gendicine , an Ad5 vector encoding the human k cancer . AlthougINK4A-armed oncolytic Ad, which has shown good inhibition of gastric tumor xenografts [et al. [E1A gene is regulated by the human telomerase reverse transcriptase (hTERT) promoter and hypoxia response element, together with p53 under the strong cytomegalovirus (CMV) promoter. This virus showed tumor selectivity with efficient p53 expression and oncolysis. Nonetheless, targeting a single gene is unlikely to have a major impact on survival, given that in cancer a large number of genetic alterations affect only a core set of signaling pathways and processes, as has been recently described for pancreatic cancer [1, as studied by Hu et al. [et al. [et al. [et al. [in vivo. A reciprocal approach is to ablate the function of oncogenes post-transcriptionally by arming oncolytic Ad with small hairpin RNA (shRNA). Recent work includes those targeting hTERT [in vitro and in vivo.Like Gendicine, oncolytic viruses could be armed with tumor suppressor or pro-apoptotic genes that are frequently lost in cancer. One example is by the use of p16nografts . Wang et [et al. developec cancer . Hence tu et al. . Anti-tu [et al. and Chen [et al. utilized [et al. treated ng hTERT , Ki-67 [ng hTERT , Surviving hTERT , and Apong hTERT , all of et al. [E1 genes are under the control of the hTERT promoter, could stimulate peripheral blood mononuclear cells (PBMCs) to produce IFN-\u03b3 that has anti-angiogenic properties, resulting in reduced tumor vascularity and slowed growth in immunocompetent mice. However, Kurozumi et al. [et al. [The tumor microenvironment plays a critical role in promoting malignant cell growth and progression, as well as restricting virus spread. One important issue is tumor angiogenesis. A recent finding by Ikeda et al. suggestei et al. also shoi et al. . Recent i et al. \u201380, intei et al. ,82, cansi et al. , and trii et al. , as welli et al. and vasci et al. ,87. Kang [et al. made use [et al. 
,90.E1B 55K deletion, ADP overexpression and CD/TK fusion gene expression is currently in a phase III trial in combination with radiotherapy for patients with prostate cancer.Gene-directed prodrug activation therapy (or suicide gene therapy) involves the delivery of a gene that would lead to the expression of an enzyme, followed by the administration of a prodrug that is activated selectively by this enzyme. One example is the HSV thymidine kinase (HSV-TK)-ganciclovir method, whereby HSK-TV is able to monophosphorylate ganciclovir, which is subsequently converted by cellular kinases to the triphosphorylated forms, blocking DNA synthesis and inducing cell death. Most publications have described the use of replication-deficient viruses with this approach, but recent studies that demonstrated its efficacy using replication-selective oncolytic Ads include treatment for prostate , gallblaet al. [et al. [in vivo. Induction of cancer cell death with an apoptosis-inducing agent prior to injection of oncolytic HSV could also produce channels for effective virus spread [et al. [Viruses are naturally larger than other anti-cancer agents such as chemicals and antibodies . After intratumoral injection, effective virus spread could be impaired by the extracellular matrix, areas of fibrosis and necrosis, and surrounding normal cells in the tumor bed, although Kolodkin-Gal et al. found th [et al. studied s spread . Elevates spread . Injecte [et al. examined [et al. . Hypoxia [et al. ,104. In [et al. or HSV [ [et al. ,107.et al. [et al. [in situ tumors and tumor xenografts. These receptors are trapped in the tight junctions and therefore not accessible to the virus. However, Ads that use receptor X could induce epithelial-mesenchymal transition and result in efficient oncolysis.For viruses that have reached the immediate vicinity of the tumor, cellular genetic changes could prevent successful virus entry into the cells. For cellular entry of most Ads , they must first bind to the Coxsackie and adenovirus receptor (CAR) on the surface membrane via the knob portions of their fibers, followed by internalization mediated by the viral penton proteins and cellular integrins. CAR is ubiquitously expressed in epithelial cells, but its expression is often downregulated in many cancer types due to activation of the Raf-MAPK pathway . Recent et al. . They alet al. . Strauss [et al. showed tCIP1/WAF normally inhibits cyclin-dependent kinase 2 (CDK2) by p21CIP1/WAF, whereby SET and PCNA normally increase viral DNA replication. In the case of vaccinia virus, recent work has suggested that cells with activated c-Jun NH2-terminal kinase (JNK) signaling cascade could activate PKR and bloc and warfarin treatment (to inhibit vitamin K-dependent coagulation factors) and found that this approach significantly increased the anti-tumoral effect of systemically delivered oncolytic Ad5 in nude mice [After intravenous delivery the liver, part of the reticuloendothelial system, is the predominant site of Ad5 sequestration with significant hepatocyte transduction ,172. Ad5et al. showed tet al. or otheret al. ,122. As ude mice . Good reude mice or by geude mice for liveet al. [in vivo. Shashkova et al. [P) gene product from wild-type MV, an IFN antagonist, has been found to exhibit reduced IFN sensitivity and better oncolytic potency in vivo [in vivo [A plethora of immunostimulatory genes have been inserted into the genome of oncolytic viruses with the aim of stimulating effective anti-tumoral immune responses. 
Recent examples include the heat shock proteins ,180, cheet al. utilizeda et al. used a f in vivo . A recom[in vivo . The expet al. [et al. [et al. [et al. [et al. [et al. [Antigen-specific activation and proliferation of lymphocytes are regulated by interaction of the peptide-antigen-major histocompatibility complex (MHC) with the T cell receptor, as well as both positive and negative signals from co-stimulatory molecules expressed on antigen-presenting cells (APCs). The most important of the APCs are the DCs. DCs are capable of capturing antigens secreted or shed by tumor cells and upon maturation, present the peptides to T cells. Endo et al. showed t [et al. and Rama [et al. demonstr [et al. used ano [et al. revealed [et al. utilizedE3 region of the adenoviral genome is divided into E3A and E3B and is involved in immune response evasion and virus release from cells. Because it is dispensable, this region is frequently deleted in many adenoviral mutants to provide more space for therapeutic gene insertion, although recent work has suggested that transgene expression was higher if gene was inserted at regions other than E3, such as L3 [E3B region, however, could attenuate the virus oncolytic potency by increasing macrophage infiltration and expression of TNF and IFN-\u03b3 [E3 gp19K whilst retaining other E3 regions [The ch as L3 . Deletiond IFN-\u03b3 ,133. Pot regions . In addi regions , gp19K i regions ,202. CTLThe field of oncolytic virotherapy is expanding and viruses continue to hold promise as effective treatments in combination with chemotherapy or other therapeutic modalities. As continuing work is being done to improve the currently available oncolytic viruses, novel viral species are also emerging and worth exploring, for example the porcine Seneca Valley virus , myxoma"} {"text": "Sir,et al.[et al.[et al.,[et al with \u2018comma sign\u2019. Few more predisposing conditions in addition to those mentioned by the authors are diabetes mellitus, mixed connective tissue disease, and hypothyroidism. Various new techniques, such as water jet, Dormia basket, polypectomy snare, Nd:YAG laser, Bezotome and modified lithotripter, have been developed to treat the bezoars with variable success rates. After removal, prevention of recurrence is important. Prophylactic oral enzyme andprokinetic along with psychotherapy and regular follow up may help in preventing recurrence.[We read the article entitled \u201cGiant trichobezoar of duodenojejunal flexure: A rare entity\u201d with intet al. and Dinyl.[et al. The male.[et al., had a hacurrence."} {"text": "Surgical correction of aortic coarctation (AC) was concurrently introduced by Crafoord and Nylin, Gross anP > 0.05). The complication rates (9.9% vs. 8.5%) and mortality rates (0.7% vs. 2.5%) were also not different (P = 0.05 to 0.1).Despite an initial report of poor results, subseque15The conclusion for native ACs was \u201cthe question remains not can it be done, but should it be done?,\u201d whereas the conclusion for post-surgical aortic recoarctation was \u201c... balloon angioplasty for relief of residual or recurrent aortic coarctation offers an acceptable alternative to repeat surgical repair\u201d despite the fact that results were similar or even better for native AC. I questioned this interpretation stating 20While treatment of native AC by balloon angioplasty was initially controversial,1719\u201024 i1722et al.et al.[etl.[et al. Brodsky,l.[et al. Cooper[3l.[et al. and Suarl.[et al. 
had repol.[et al.33 in thal.[et al.33 Finalll.[et al. did not l.[et al. and Rao[l.[et al. from ourl.[et al. used traIndications for balloon angioplasty are similar to those used for surgical intervention: Significant hypertension and/or congestive heart failure. While the controversy over routine use of balloon angioplasty in treatment of neonatal and infant coarctations continues, it may certainly be utilized in special circumstances, namely, 28Potential for arterial damage exists, especially in neonates and young children. Therefore, use of umbilical artery approach in neonates\u201038 and aThe diameter of the balloon selected for angioplasty should be carefully chosen. Small balloons do not produce adequate relief of obstruction and large balloons may produce aortic rupture or aneurysm formation. Initial balloon dilatation is performed with a balloon whose diameter is an average of aortic isthmus or transverse aortic arch and descending aorta at the level of diaphragm. If there is no adequate relief of obstruction (i.e. gradient <20 mmHg) and angiographic improvement, repeat dilatation at the same sitting with a balloon as large as the diameter of the descending aorta at the level of diaphragm should be undertaken. It is exTips of guide wires or catheters should not be manipulated over the freshly dilated coarctation segment to avoid aortic perforation. A guide Evaluation of arterial insufficiency and limb growth retardation in a groStenotic vascular lesions can be dilated by balloon angioplasty. However, the elastic recoil of the vessel wall may return the vessel lumen to the pre-dilatation diameter following removal of the balloon catheter. Such recoil and vascular dissection, if any, following balloon dilatation can be circumvented with implantation of endovascular stents. Additional principles of stent management, historical aspects of stent development, types of stents, technique of stent deployment and results of stent therapy will not be reviewed here because of limits of space. The interested reader is referred elsewhere44 for fuet al.[Because of growth issues and the need for large sheaths for stent implantation, most cardiologists limit stent usage to adolescents and adults, although a few have advocated their use in younger children.\u201048 Even et al. indeed pet al.51 or groet al. should bet al. larger cet al.In summary, the paper by Francis and associates which describes the utility of transcatheter intervention in neonates and young infants with critical coarctation of the aorta and left ventricular dysfunction, re-emphasizes prior reports advocating such approaches. Selection of site of entry for the procedure to minimize arterial damage, use of appropriate size balloon for angioplasty, avoiding manipulation of tips of catheters/guide wires across the freshly dilated coarctation segment, and incorporation of some method of evaluation of femoral artery sufficiency at the time of follow-up of the primary cardiac defect are germane to the success of the procedure. Avoiding use of stents in neonates and small infants and/or developing alternative strategies such as biodegradable stents or growth stents is recommended."} {"text": "Idiopathic pulmonary fibrosis (IPF) remains exactly that. The disease originates from an unknown cause, and little is known about the mechanisms of pathogenesis. While the disease is likely multi-factorial, evidence is accumulating to implicate viruses as co-factors (either as initiating or exacerbating agents) of fibrotic lung disease. 
This review summarizes the available clinical and experimental observations that form the basis for the hypothesis that viral infections may augment fibrotic responses. We review the data suggesting a link between hepatitis C virus, adenovirus, human cytomegalovirus and, in particular, the Epstein-Barr gammaherpesvirus, in IPF. In addition, we highlight the recent associations made between gammaherpesvirus infection and lung fibrosis in horses and discuss the various murine models that have been used to investigate the contribution of gammaherpesviruses to fibrotic progression. We review the work demonstrating that gammaherpesvirus infection of Th2-biased mice leads to multi-organ fibrosis and highlight studies showing that gammaherpesviral infections of mice either pre- or post-fibrotic challenge can augment the development of fibrosis. Finally, we discuss potential mechanisms whereby viral infections may amplify the development of fibrosis. While none of these studies prove causality, we believe the evidence suggests that viral infections should be considered as potential initiators or exacerbating agents in at least some cases of IPF and thereby justify further study. Idiopathic pulmonary fibrosis (IPF) is a progressive interstitial lung disease that severely compromises pulmonary function . IPF likFibrotic lung disease likely results from an inciting injurious event within the lung. Although the precise temporal sequence of events and mechanisms of disease are not understood, several common pathobiological characteristics are recognized. These include damage and loss of type I alveolar epithelial cells followed by hyperplastic expansion of type II cells ; variablDespite ongoing research driven by the need for therapy, the initiating or injurious agents are unknown, and it is not understood why the fibrosis is dysregulated and progressive . It is let al. [Two studies have suggested a link between infection with the hepatitis C virus (HCV) and IPF. HCV is a small, enveloped, positive-sense single-stranded RNA virus in the Flavivirus family . Replicaet al. were ablet al. . In a coet al. . One poset al. . Given tet al. [in situ hybridization for the adenovirus gene product E1A. E1A DNA was present in 3 out of 19 (16%) cases of IPF, in 5 of 10 (50%) cases of interstitial pneumonia associated with collagen vascular disease, and in 2 of 20 (10%) cases of sarcoidosis [et al. found that the incidence of E1A DNA was considerably higher in patients who had been treated with corticosteroids (67%) compared to those patients left untreated (10%). This finding raises the interesting possibility that corticosteroids, a common therapy for IPF, may make patients more susceptible to adenovirus infection or reactivation from latency. However, studies investigating the titer of anti-adenoviral IgG in IPF patients have failed to demonstrate an elevation above normal [Human adenoviruses have been suggested as etiological co-factors in the progression of interstitial lung disease -39. Adenet al. examinedcoidosis . While te normal . Despitee normal . This meIt is also conceivable that adenovirus infections could serve as exacerbating agents for patients with established lung fibrosis, although it is difficult to determine the frequency of this happening from published clinical literature. 
Furthermore, recent studies using an animal model of fluorescein isothiocyante (FITC)-induced fibrosis were unable to demonstrate significant exacerbation of FITC-fibrosis within the first 7 days post-mouse adenoviral infection . One notet al. [et al. [et al. [Human cytomegalovirus (HCMV), a betaherpesvirus, is a widespread opportunistic pathogen that persists in healthy individuals but normally only causes clinical manifestations in immune-compromised individuals . HCMV inet al. studied et al. . Also, t [et al. did note [et al. , HCMV Ig [et al. . Despiteet al. [The virus that has been associated most strongly with IPF is EBV. EBV is a gammaherpesvirus that is present in all populations, infecting more than 95% of humans within the first decades of life . An assoet al. sought tet al. [et al. [A couple of studies have associated the presence of active and latent EBV markers with IPF. Kelly et al. investig [et al. then detet al. [et al. [It should be noted that not all studies have found an association between EBV and IPF. In 1997, Wangoo et al. publishe [et al. failed tet al. [Although EBV had been detected with more frequency in the lungs of IPF patients than in the lungs of control patients in most previous studies, many members of each IPF cohort analyzed did not test positive for EBV infection at all. Tang et al. went on et al. drew other conclusions from their study that suggest susceptibility to viral infection and IPF depends on a genetic or acquired predisposition. Co-infection occurred more frequently in patients with the sporadic form of IPF compared to those with the familial form. Familial IPF is characterized by the incidence of IPF in two or more members of an immediate family [et al. note that the increased frequency of HHV-8 in the lungs of the IPF cohort is particularly interesting. In the United States, HHV-8 infection is predominantly found in patients with HIV infection and Kaposi's sarcoma [Tang e family . This le sarcoma , and allCollectively, the analyses of IPF lung tissue chronicled above create a rationale to study the association between viral infections and the occurrence of IPF, but do not provide evidence for a causal relationship between viruses and IPF. Demonstrating causation in humans requires detection of a virus in the lungs prior to clinical manifestations of IPF or evidence that an anti-viral therapy confers anti-fibrotic effects. The latter has been attempted with some success in a limited number of case studies, but no large trials have been conducted to date ,57. WhilPulmonary fibrosis has been reported to occur in both cats and horses -62. InteMurine models of pulmonary fibrosis have enabled identification of pathogenic cells and mediators that are believed to be important in human fibrotic disease, and they have facilitated further exploration of a pathogenic role for viruses in humans . The humet al. [et al. proposed that a viral infection made the previously protected lungs susceptible to fibrotic disease upon the event of an exogenous injury. These results are enticing, but it should be noted that the bleomycin was delivered during the peak of lytic viral infection. It is difficult to infer from these studies whether chronic latent infection with MHV-68 might also predispose the lung to subsequent fibrotic responses. Also, the mechanism(s) for how MHV-68 infection augmented the subsequent fibrotic response to bleomycin were not defined. This is an area of active research in our laboratory. 
We have determined that MHV-68 infection is latent in the lung by day 14 post-infection [In 2002, Lok et al. used MHVnfection . Mice giet al. [Ebrahimi et al. showed tet al. -75. The et al. . This stet al. [et al. [et al. [et al. [et al. [et al. MHV-68 has previously been reported to cause vascular damage [et al. [Mora et al. extended [et al. publishe [et al. , and Mor [et al. detected [et al. , who pro [et al. speculat [et al. . Similarr damage . Finally [et al. noted en [et al. .et al. [et al. [The authors attribute some of the TGF-\u03b21 dysregulation to epithelial cell damage. Interestingly, type II alveolar cells are a target of MHV-68, and Mora et al. speculatet al. . Furtheret al. ,83. Flan [et al. have rep [et al. ,85. Tablet al. [While most IPF patients have a slow, progressive disease, some patients have an acute deterioration in function that carries a poor prognosis. In the placebo arm of a study of 32 patients who died from IPF-related causes, Martinez et al. reportedet al. . This fiet al. . In theset al. [The recruitment of fibrocytes to the lung was also associated with viral exacerbation of FITC-induced fibrosis . Fibrocyet al. when theet al. [et al. [The studies with MHV-68 discussed here can only suggest that similar human viruses found in the lungs of IPF patients have a pathogenic role in fibrosis. Given that fibrosis in humans is progressive, it seems important to better understand the chronic reactivation that can occur during herpesviral infections. It is possible that repeated activation may lead to repeated rounds of epithelial cell damage, cytokine release and fibrocyte recruitment. Additionally, it will be important to determine whether long-term latent MHV-68 infection of na\u00efve mice augments the production of profibrotic factors and whether the same is true in EBV-infected human lungs. A latent infection does not appear to create a Th2 bias equal to that of IFN-\u03b3R-/- mice, but any bias may predispose a latently infected individual to a fibrotic trigger from an unrelated factor. It will also be helpful to more carefully study the cell types that harbor long-standing viral infection because the prolonged injury caused to such a cell type is likely responsible for increasing susceptibility to fibrosis. Flano et al. found th [et al. ,91. As vThe authors declare that they have no competing interests.KMV performed literature searches and wrote the bulk of the review. BBM reviewed and edited the article for content and clarity."} {"text": "The discovery of a family of membrane water channel proteins called aquaporins, and the finding that aquaporin 1 was located in the choroid plexus, has prompted interest in the role of aquaporins in cerebrospinal fluid (CSF) production and consequently hydrocephalus. While the role of aquaporin 1 in choroidal CSF production has been demonstrated, the relevance of aquaporin 1 to the pathophysiology of hydrocephalus remains debated. This has been further hampered by the lack of a non-toxic specific pharmacological blocking agent for aquaporin 1. In recent times aquaporin 4, the most abundant aquaporin within the brain itself, which has also been shown to have a role in brain water physiology and relevance to brain oedema in trauma and tumours, has become an alternative focus of attention for hydrocephalus research. This review summarises current knowledge and concepts in relation to aquaporins, specifically aquaporin 1 and 4, and hydrocephalus. 
It also examines the relevance of aquaporins as potential therapeutic targets in hydrocephalus and other CSF circulation disorders. Aquaporins are a family of integral membrane proteins that function as water channels. The existence of such water channels had been postulated for some time as the passage of water across certain membranes is too rapid to be explained on the basis of diffusion through plasma membranes . The ideet al. . AQ. AQ+/K+ ithelium but it iet al. [Understanding the response of AQP1 to changes in CSF pressure and, in particular hydrocephalus, is important for determining a possible therapeutic potential. If AQP1 expression was to be significantly down-regulated in hydrocephalus, its therapeutic potential would be limited. However, if it remained unchanged or even were increased then pharmacological blockade might result in a therapeutically beneficial reduction in CSF production. Mao et al. examinedet al, unpublished data). We have found that AQP1 protein is unchanged compared to saline injected controls at 3 and 5 days post-injection of kaolin using Western Blot analysis of choroid plexus tissue. AQP1 mRNA levels were lower in hydrocephalic mice at 3 days post-kaolin injection but unchanged compared to saline-injected controls at 5 days post-kaolin. There is no clear explanation for this finding. In may be a reflection of an early and temporary reduction in AQP1 transcription in response to hydrocephalus. This needs to be clarified by further studies in other models of hydrocephalus.We have studied AQP1 expression and localisation in the choroid plexus of hydrocephalic adult mice using a cisternal kaolin injection model . Nicchia et al. questionu et al. reported [et al. using a [et al. -48.The lAQP4 is the most abundant and widespread aquaporin in the brain and occurs in a short isoform and a long isoform . AQP4 foThe role of AQP4 in handling brain water and therefore brain oedema has been demonstrated in several pathologies including cerebral tumours, traumatic brain injury, cerebral ischaemia or stroke and brain abscess . In AQP4et al. [et al. [et al. [There are some conflicting reports in the AQP4-null phenotype in relation to ventricular size and CSF dynamics. Manley et al. reported [et al. reported [et al. reportedet al. [et al. [The main role of AQP4 in hydrocephalus appears to be as a compensatory mechanism. Mao et al. reported [et al. found thet al. [Using a communicating model of hydrocephalus secondary to subarachnoid inflammation after intraparenchymal injection of L-\u03b1-lysophosphatidylcholine (LPC) stearoyl, Tourdias et al. also repet al. [The finding of increased AQP4 expression in various models of hydrocephalus has been interpreted as a compensatory mechanism to allow for transependymal/parenchymal CSF absorption. Such adaption may be a mechanism that allows for the development of compensated or arrested hydrocephalus. This hypothesis was tested by Bloch et al. who studet al. .et al. [et al. [There is a small but consistent rate of spontaneous hydrocephalus in AQP4-null mice that has been reported by several groups. Feng et al. reported [et al. found thaet al. [et al. [According to Feng et al. , mice th [et al. . Regionaet al. [et al. [The regulation of AQP4 may be categorised in a similar manner to that of AQP1 and was recently reviewed by Yukutake and Yasui and prevet al. . AQP4 haet al. -66. Howe [et al. recently [et al. while ph [et al. ,70). Theet al. 
[B receptor agonist and vasoconstrictor peptide, resulted in down-regulation of AQP4 in rat brain.There is also some evidence for regulation of the overall level of AQP4. As for AQP1, the tonicity of cellular environment may be important for AQP4. Zeng et al. demonstret al. . Intereset al. also recet al. [Xenopus laevis oocytes which is mediated by the V1a receptor. This resulted in decreased water permeability. The effect was reduced with mutation of Ser180; phosphorylation which has already been reported to decrease water permeability [Localisation of AQP4, particularly in the astrocytic end feet is important for function as discussed above. A recent study suggests that trafficking of AQP4 may be used as a regulatory mechanism for AQP4. Moeller et al. found theability . This ph2+ and other mercurial compounds but these are toxic [2+ [et al. [2+. Tetraethylammonium (TEA) was also suggested as an AQP1 inhibitor [Determining the importance of AQP1 and 4 in hydrocephalus and CSF production would be facilitated by the availability of a non-toxic specific AQP1 or 4 blocking agents. Identifying such an AQP1 blocker remains a challenge . It is kre toxic . As mentoxic [2+ . However [et al. have sugnhibitor althoughnhibitor . Corticonhibitor , sheep fnhibitor as well nhibitor ,81. Thernhibitor ,82.et al. [Xenopus laevis oocytes expressing AQP1, and the same group reported similar findings in a swelling assay using human embryonic kidney (HEK292) cells expressing pEGFP/AQP1 [et al. [et al. found no evidence that acetazolamide inhibits AQP1 or 4 [Xenopus laevis oocyte osmotic permeability assay, Sorgaard & Zeuthen [et al. [Arylsulfonamides, including acetazolamide, a carbonic anhydrase inhibitor that is commonly used to reduce CSF production in the management of pseudotumor cerebri and other clinical situations, have been suggested as pharmacological blockers of AQP1 and 4. This has generated considerable controversy. Ma et al. reportedGFP/AQP1 . Using t [et al. reported [et al. ,87 had sQP1 or 4 ,88. In a Zeuthen found no [et al. who also [et al. ,87,91 coet al. [A small inhibitory effect of bumetamide, a loop diuretic that blocks the Na-K-Cl co-transporter, on AQP4 water permeability has leadet al. to blockApart from pharmacological blockade of AQP1, there are several other potential routes to modulation of AQP 1 expression, none of which have thus far been explored. These include methods to increase AQP1 degradation or reduction of AQP1 expression/transcription. Although our knowledge of the mechanisms and pathways underlying choroidal AQP1 regulation is lacking there are possible techniques that may be employed to increase AQP1 degradation. This includes the intraventricular administration of AQP1 antibodies which could potentially result in AQP1 internalisation and degradation, thus temporarily reducing CSF production. Further understanding the mechanisms through which AQP1 is internalised, as we have observed in our studies of choroidal AQP1 in hydrocephalic mice, may provide additional useful information to allow these pathways to be manipulated.in vivo, or gene therapy, is another potential tool for modulating AQP1 and thus CSF production. A small interfering RNA (siRNA) against AQP1 has been used by Boassa et al. [et al. [et al. [AQP1 transcription. The 5' region of AQP1 has multiple binding sites for TTF-1 the expression of which is also seen in the choroid plexus. 
Intraventricular injection of antisense TTF-1 oligodeoxynucleotide in rats resulted in a reduction of AQP1 mRNA and protein in the choroid plexus. These rats also had an increased survival compared to controls after water intoxication. AQP1 knockdown has been used by Boassa et al. to study AQP1 function, and Splinter et al. have also used and reported on this approach. Aquaporin 1 makes a substantial contribution to CSF production and is a potential therapeutic target in the management of CSF circulation disorders. Aquaporin 4 is important in brain water homoeostasis and consequently in conditions involving both cytotoxic and vasogenic oedema. In hydrocephalus AQP4 has a protective effect by allowing resorption of transependymal CSF into brain capillaries. There is considerable scope for improving our understanding of aquaporins in relation to CSF physiology in health and in diseases such as hydrocephalus. The ultimate potential of aquaporin modulators in the management of these conditions and others remains to be determined. Continued study of aquaporins in hydrocephalus and other conditions is needed.The authors declare that they have no competing interests.All authors contributed to the writing of this review. All authors have read and approved the final version of the manuscript."} {"text": "Pediatric urolithiasis poses a technical challenge to the urologist. A review of the recent literature on the subject was performed to highlight the various treatment modalities in the management of pediatric stones. A Medline search was used to identify manuscripts dealing with management options such as percutaneous nephrolithotomy, shock wave lithotripsy, ureteroscopy and cystolithotripsy in pediatric stone diseases. We also share our experience on the subject. Shock wave lithotripsy should be the treatment modality for renal stones less than 1 cm or < 150 mm2 and for proximal non-impacted ureteric stones less than 1 cm with normal renal function, no infection and favorable anatomy. Indications for PCNL in children are large-burden stones more than 2 cm or more than 150 mm2, with or without hydronephrosis, urosepsis and renal insufficiency; impacted upper ureteric stones more than 1 cm; failure of SWL; and a significant volume of residual stones after open surgery. Shock wave lithotripsy can be offered for softer (< 900 HU on CT scan) renal stones between 1-2 cm. Primary vesical stones more than 1 cm can be tackled with percutaneous cystolithotomy or open cystolithotomy. Open renal stone surgery can be done for renal stones with associated structural abnormalities, large-burden infective and staghorn stones, and large impacted proximal ureteric stones. The role of laparoscopic surgery for stone disease in children still needs to be explored. Urinary lithiasis affects between 5-10% of humans during their lifetime; 2-3% of those affected are children. Management options for renal calculi are similar to those for adults. The majority of stone disease in children can be managed with SWL, PCNL or a combination of treatment modalities. Open surgery is currently indicated in a few select cases. It is important to understand the effect of each treatment modality on the growth of the kidney. Stone location, composition, size; anatomy of the collecting system; and presence of obstruction/infection are important factors in selecting the modality.
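Before the individual modalities are discussed below, the broad selection criteria summarized in the abstract above can be read as a simple decision rule. The following is a purely illustrative sketch and not a clinical algorithm from this review, nor clinical guidance: the function name, thresholds and location categories are simplified assumptions distilled from the abstract, and the sketch deliberately ignores the anatomical, infective and functional factors discussed in the text.

# Illustrative sketch only: encodes the broad modality-selection criteria
# stated in the abstract above. Simplified; not clinical guidance.
def suggest_modality(location, size_cm, hounsfield_units=None, impacted=False):
    """Return a rough first-line modality suggestion for a paediatric stone."""
    if location == "renal":
        if size_cm < 1:
            return "SWL"
        if size_cm <= 2:
            # harder stones (> 900 HU) respond poorly to SWL per the review
            if hounsfield_units is not None and hounsfield_units > 900:
                return "PCNL"
            return "SWL"
        return "PCNL"  # large burden, > 2 cm
    if location == "proximal_ureter":
        return "PCNL" if (impacted and size_cm > 1) else "SWL"
    if location == "mid_or_distal_ureter":
        return "URS"
    if location == "bladder":
        return "transurethral cystolithotripsy" if size_cm < 1 else "PCCL or open cystolithotomy"
    return "individualised assessment"

if __name__ == "__main__":
    print(suggest_modality("renal", 1.5, hounsfield_units=1100))  # PCNL
    print(suggest_modality("mid_or_distal_ureter", 0.8))          # URS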
Shock wave lithotripsy is currently the procedure of choice for treating most urinary stones in children. However, the efficacy, need for ancillary procedures and treatment-related complications are not as clearly defined as in the adult population. The theoretical long-term safety and biological effects of SWL on renal function and growth are debatable; Brinkman et al. noted no lasting adverse effects, while other investigators have raised concerns. Shock wave lithotripsy should be the treatment modality for all renal stones less than 1 cm or < 150 mm2, and for softer renal stones (< 900 HU on CT scan) between 1 to 2 cm with normal renal function, no infection and favorable anatomy. Important considerations in SWL are stone burden, composition and the ability of the distal urinary tract to successfully pass the fragments. Children pass stone fragments well and do not require stenting routinely. Newman et al. and others have reported their results, and Brinkman et al. have reviewed the published experience. Shock wave lithotripsy outcome for lower calyceal stones varies with lower pole anatomy; Tan et al. and other investigators have reported differing findings on this point. Management of pediatric staghorn calculus is technically challenging to the urologist; Al-Busaidy et al. and others have reported their experience. The cumulative risk of recurrence is higher in children as compared to adults, as reported by Afshar et al. and others. Shock wave lithotripsy is well tolerated with minimal morbidity. Minor complications such as bruising, ecchymosis and renal colic are reported in 11-50% of cases. Authors treating large stone burdens have reported a steinstrasse rate of 1.9-5.4%. We have performed SWL in 53 children (mean age 6.2 ± 4.4 years) using a Dornier Compact Delta lithotripter. General anesthesia was given to all pediatric patients at the start of SWL. Both ultrasound and fluoroscopy were used to localize and monitor the fragmentation. Shocks were started at Level one (10 kilovolts) and progressed to Level two (11.5 kilovolts) after 100 shocks. The intensity was increased to higher levels only if the desired fragmentation was not visible with fluoroscopy and ultrasound. In children, the power settings rarely exceeded Level three (12.75 kilovolts). The shocks were given at a frequency of 60. The procedure was terminated after complete fragmentation was noted on fluoroscopy and ultrasound. The number of shocks given never exceeded 1500. The mean stone length was 1.09 ± 0.4 cm. The mean number of shocks required per session was 982 ± 492. The mean intensity of the shocks was 11.81 ± 0.5 kilovolts. The mean number of sessions required was 1.09 ± 0.3. Adequate fragmentation was achieved in all. We feel that complete clearance should be achieved with a minimal number of shocks, minimal energy and minimal need for ancillary procedures. Clinically insignificant residual fragments (CIRF) can be a source of recurrent stone formation and hence were not considered a success. Overall complete clearance was achieved in 42 (79.2%) renal units at the end of three months. Seven (13.2%) patients had CIRF which was being conservatively followed. Ancillary procedures were required in four (7.5%) renal units, which included PCNL in three and ureteroscopy in one child. Since the first pediatric series reported by Woodside and associates in 1985, several groups have reported their experience with PCNL in children. Studies demonstrate minimal scarring and insignificant loss of renal function after PCNL.
Dawaba et al. and other groups have reported similar findings. Improvement in technology and miniaturization of instruments, together with the availability of more efficient energy sources for intracorporeal lithotripsy, have revolutionized endourological procedures in children. Helal et al. and others have described such techniques. With the availability of the holmium:yttrium-aluminium-garnet (Ho:YAG) laser, smaller pneumatic lithoclast and ultrasound probes, PCNL can be performed using smaller nephroscopes. We designed a smaller lithoclast probe with suction for use through a pediatric nephroscope and found it highly effective and safe in children. Various studies have demonstrated the safety of the Ho:YAG laser in children. Ultrasound-guided puncture is a good alternative to fluoroscopy and has the advantage of avoiding radiation and preventing visceral injury. Complications are similar to those in adults. Intraoperative bleeding requiring blood transfusion, injury to the pelvicaliceal system and sepsis are major concerns with PCNL in children. Kroovand et al. proposed measures to address these concerns. Indications for PCNL in children are similar to those in adults and include large-burden stones more than 2 cm, hard renal stones (> 900 HU on CT scan) between 1 to 2 cm, significant renal obstruction, urinary infection, failure of SWL and a significant volume of residual stones after open surgery. We have performed PCNL in 222 renal units in children (mean age 8.9 ± 3.9 years) from 1997 till date. Mean stone bulk was 335.6 ± 122.6 mm3 (range: 94-989), with 130 complex calculi. In our earlier published data the stone clearance rate was 89.8%. With ancillary procedures (SWL), stone clearance increased to 96% at three months. Percutaneous nephrolithotomy and SWL are safe and efficacious in managing pediatric stones of 1-2 cm; however, the choice should be tailored to the three-dimensional stone size and composition, using 3-D CT scan. Recurrence is a major problem as follow-up is not assured due to poor socioeconomics. In this scenario, SWL has higher retreatment rates requiring more ancillary procedures, thus defeating the purpose of giving the patient complete stone clearance with minimal morbidity and a single hospital stay. Laparoscopic renal surgery is still not widely performed by pediatric urologists due to longer operative time, the need for logistic support, lack of clear indications and insufficient surgeon experience. Laparoscopic retroperitoneal surgery has a definite role in the management of patients requiring open surgery for calculus disease, but indications in pediatric patients are not well defined. Casale et al. reported successful laparoscopic procedures, and Van Savage et al. have also commented on this approach. Stone-free rates with SWL vary from 75-100% depending on the size of the stone; Landau et al. and others have reported their results. With the advent of smaller instruments and laser lithotripsy, URS for management of pediatric urolithiasis has become more common. With the availability of 4.5 and 6Fr semi-rigid ureteroscopes and a 6.9Fr flexible ureterorenoscope with a Ho:YAG laser energy source, instrument-related complications are uncommon. Ho:YAG laser fibers are small and flexible with a short depth of penetration (0.4 mm), allowing them to be used safely with pediatric endoscopes. Further, laser fragmentation produces 2-3 mm fragments that can pass very easily down the ureter. Most series have employed ureteral stent placement following ureteroscopic lithotripsy in pediatric patients.
We reported successful use of supine antegrade flexible ureteroscopy in treating impacted upper ureteric calculi in a six-year-old pediatric patient. It has been shown that ureteral dilatation does not increase the risk of stricture or significant vesicoureteric reflux (VUR); Caione et al. and others have reported on this. Ureteroscopy may provide more efficient stone clearance and hence should be preferred for distal ureteral stones, larger stones and impacted stones. We have performed URS in 86 patients. Ureteroscopy was possible with a 6Fr semi-rigid ureteroscope in 81 patients, while a 6.8/7.5Fr flexible ureteroscope was used in five patients. Seventy-two (83.7%) patients had mid or lower ureteric stones and 14 (16.2%) patients had upper ureteric stones. Forty-eight (55.8%) patients required ureteric dilatation. A double-J stent was placed postoperatively in 36 (41.8%). The procedure was successfully completed in all except one patient, who required simultaneous antegrade flexible URS. Mean hospital stay was three days. Vesical stones in the pediatric age group in India often present with a large stone burden. Vesical calculi can be managed by transurethral or percutaneous suprapubic lithotripsy. In children, especially in boys, because of the small-caliber penile urethra and concerns about iatrogenic urethral stricture, transurethral cystolithotripsy may be more difficult. It is safe if the stone burden is less than 1 cm. Percutaneous cystolithotomy (PCCL) is a safe alternative with low morbidity and complication rates for large-burden vesical stones. We have performed 81 bladder stone surgeries between 1995 and 2006. The mean age of the patients was 6.4 ± 2.3 years, range 1-15 years. The surgeries performed were cystolithotripsy (n=60), PCCL (n=13) and open cystolithotomy (n=8). We have used pneumatic, ultrasound and laser energy for stone fragmentation. In the pediatric age group (less than five years), a 7.5Fr ureteroscope/pediatric cystoscope was used. Sometimes small stones migrate from the upper urinary tract into the bladder and are then ejected out per urethra with the urinary stream. Infrequently, during passage through the urethra the calculus gets impacted even when there is no distal organic obstruction. The management of such a recently impacted calculus varies according to the site and nature of the calculus. The options are to bring out the stone by forward milking or by external urethrotomy if it is situated in the anterior urethra. The former procedure is quite traumatizing to the urethral wall and should be restricted to smooth-contoured stones in the presence of a non-obstructed urethra. External urethrotomy, even as a primary procedure, is best avoided in the penile urethra. A stone lodged at the submeatus can also be removed with gentle traction using artery forceps under proper anesthesia. In the case of a posterior urethral stone, pushing the stone back with liberal per-urethral xylocaine jelly and a single attempt at per-urethral catheterization (PUC) is worth trying. If unsuccessful, it is advisable to perform suprapubic cystostomy (SPC) to prevent further urethral damage. After initial decompression of the bladder with PUC/SPC, endoscopic removal of the stone can be undertaken later. One study compared the efficacy of open cystolithotomy and cystolitholapaxy in pediatric patients with primary bladder stones. The operative time was similar in the two groups.
The hospital stay was significantly less after endourologic procedures than after open surgery. However, there were significantly more complications with the endourologic procedures. Open stone surgery should be offered in situations where it is at least a viable and reasonable alternative to less invasive modalities. The pros and cons of the treatment should be explained in an unbiased manner to the attendants so that this form of treatment can be effectively performed and implemented if chosen. Sakkas et al., Zargoosh et al. and others have reported on the continuing role of open surgery in this setting. Pediatric urolithiasis poses a technical challenge to the urologist. Aims of the management should be complete clearance of stones, preservation of renal function and prevention of recurrence. Despite the consensus of SWL being the initial treatment of choice for most stones in pediatric patients, there are certain indications for other modalities as well. With improvement in instrumentation and technology, endoscopic management has become safe and effective. Percutaneous nephrolithotomy and SWL are safe and efficacious in managing pediatric stones of 1-2 cm. Indications for PCNL in children are large stone burden, significant renal obstruction and renal infection. Ureteroscopy provides efficient stone clearance in mid and lower ureteric stones. Transurethral cystolithotripsy is generally avoided in pediatric patients, but is feasible for a single vesical stone less than 1 cm. Percutaneous cystolithotomy or open cystolithotomy is generally the alternative for pediatric vesical stones."} {"text": "The European Regional Office of the World Health Organization recently dramatically lowered its former recommendations for cumulative aircraft noise exposure levels associated with risks of adverse public health effects. WHO's recommendations, although lacking the force of law, are nonetheless of interest to aviation regulatory bodies and to the public at large. It is therefore important that WHO's recent recommendations receive and withstand careful scrutiny. WHO's (2018) recommendations are based on controversial assumptions, analyses and interpretations prepared by Guski et al. (2017). Gjestland (2018) identified a number of limitations of the opinions expressed by Guski et al. (2017). Guski et al. (2019) subsequently challenged some of Gjestland's (2018) observations. This paper responds to the defenses offered by Guski et al. (2019) of the opinions expressed in their prior (2017) publication. The European Regional Office of the World Health Organization recently recommended a cumulative aircraft noise exposure limit of Lden = 45 dB to prevent adverse public health consequences. WHO's newly identified noise exposure levels are an order of magnitude lower than those identified by WHO in 2000. WHO's 2000 recommendations were not source-specific recommendations, but suggested a limit of LpA,16h = 55 dB to avoid health effects mediated by serious annoyance. The corresponding Lden value would have been higher for a full 24- (rather than 16-h) day period. A source-specific correction for aircraft, however, would have moved WHO's recommendation in the opposite direction. WHO's new recommendation (Lden = 45 dB, compared with LpA,16h = 55 dB) represents \u201chalf as much noise\u201d as WHO's prior recommendation. This is a dramatic shift in the recommended \u201csafe\u201d limit on aircraft noise exposure, for which strong and reliable evidence is essential. WHO strongly recommends reducing aircraft noise levels below Lden = 45 dB. Gjestland's critical assessment questioned the evidentiary basis for this recommendation. Guski et al.
state thdn factors\u201d are commonly referred to as \u201cnon-acoustic\u201d , inter alia.Univariate regression accounts for only about one third of the variance of individual annoyance responses. The prevalence of high annoyance in communities is also influenced by additional acoustic and non-acoustic factors, however. Acoustic factors include maximum levels, number of flights, fleet composition, and their respective distributions over time. Each of these factors also include errors of measurement and/or prediction. Non-acoustic factors include personal noise sensitivity and attitudes toward the noise source. In the aviation industry all \u201cnon- LThe latter (non-acoustic) factors are essentially useless for regulatory purposes, however, since they are unknown in advance, and cannot be used for a priori predictions of annoyance prevalence rates. Further, aircraft fly over all members of a community, not just those who may be more or less individually sensitive to aircraft noise exposure. As long as the preferred measure of community response to aircraft noise is the prevalence of a consequential degree of annoyance, non-acoustic influences on annoyance are simply free variables that contribute to errors of prediction. CTL analysis estimates the net effects of non-acoustic influences on annoyance in the aggregate, and by treating them systematically as deviations from an assumed growth rate of annoyance.den -based factors) may explain up to 33% of the variance, while the other two-thirds are explained by non-acoustic factors.\u201d Guski et al. [Gjestland commentsi et al. insist tGjestland notes inGjestland identifiOne of the inclusion criteria for the meta-analysis of Guski et al. was \u201cstuThe annoyance questions of most modern noise surveys comply with ICBEN recommenTo avoid known and unknown biases, the opinions of each and every member of a target population must have an equal probability of representation in social survey findings. This is commonly provided by random selection of a current and exhaustive list of the target population compiled into a sample frame. Examination of the sampling methods of the HYENA study at Heathrow reveal disturbing deviations from EPSEM sampling. Eligible residents were contacted by mail by researchers at Imperial College. They received an information pack that requested their participation. At the same time a letter from HACAN, a noise interest group at Heathrow, was sent to its members and other voluntary organizations, urging them to participate in the survey. A follow-up letter is not uncommon in a mail survey but urging members of a select group , invites bias. HACAN\u2019s letter further solicits self-selection for participation in the study as follows: \u201cIf you have not received an information pack\u2026.and would be interested in taking part, please contact .\u201d See . EncouraLAeq 24h = 11 dB (Stockholm) and LAeq 24h = 22 dB (Milan). Such aircraft noise levels are only marginally credible as modeling estimates, much less as acoustic measurements. Guski et al. [Lden by applying a correction factor of 2.6414 dB [Likewise, there is reason to question the adequacy of the exposure estimates at several of the HYENA airports. Babisch et al. cite aircraft noise levels as low as .6414 dB . 
These vx-axis) by a community-specific annoyance decision criterion, expressed directly in units of decibels [CTL analysis assumes that the rate of growth of annoyance with transportation noise exposure is fully controlled by the duration-adjusted loudness of the exposure. This growth function is anchored to the noise-axis (decibels . CTL anae(\u2212A/m) prediction laterally along the abscissa displays different segments of the ogival growth function that do not appear parallel when pinned at different exposure values to the abscissa.) Their disbelief is further conditioned on their view that univariate logistic regression is some form of gold standard for the derivation of exposure-effect relationships.The meta-analysis of Guski et al. complete2. 2 for the CTL function and the polynomial regression functions preferred by Guski et al.A closer look at the actual response data reveals a different picture. Response data expressed as pairs of % HA and noise level have been found for all the surveys in the WHO full dataset except for the 2002 Amsterdam survey by Breugelmans et al. The results from these surveys are plotted in the panels of The comments by Guski et al. do not iAs has been shown in Gjestland\u2019s original paper , and fur"} {"text": "Ninety-one percent of global Human Immunodeficiency Virus (HIV) infection in children occurs in sub-Saharan Africa. Provider Initiated Testing and Counselling (PITC) Strategy is a means of reducing missed opportunities for HIV exposed or infected children. The present study determined the prevalence of HIV infection using PITC Strategy among children seen at the Paediatric Emergency Unit of Federal Medical Centre (FMC), Ido-Ekiti, and the possible route of transmission.Cross-sectional study on prevalence of HIV infection using PITC model. 530 new patients whose HIV serostatus were unknown and aged 15 years or below were recruited consecutively and offered HIV testing. Serial algorithm testing for HIV infection using Determine HIV-1/2 and Uni-Gold rapid test kits was adopted. Seropositive patients younger than eighteen months had HIV Deoxyribonucleic Acid Polymerase Chain Reaction (HIV DNA PCR) test for confirmation.Twenty-four (4.5%) of the 530 patients were confirmed to have HIV infection; of whom 19 (79.2%) were less than 18 months of old; with age range of 5 to 156 months. Fifteen (62.5%) of the infected children were females; likewise, the gender specific infection rate was higher (%) among the females compared with (%) among the males. Two of the HIV infected children\u2019s mothers were late, while the remaining 22 mothers (%) were HIV seropositive. Mother-to-child-transmission was the most likely route of transmission in the children.PITC strategy is vital to the early diagnosis and effective control of HIV infection in children. However, this cannot be totally effective if PMTCT is not optimized. Knowledet al. in a pro [et al. in an ou [et al. . It is h [et al. . Most stStudy design: the study was a prospective, hospital-based, descriptive cross-sectional study.Study setting: it was carried out at the Paediatric Emergency Unit of FMC, Ido-Ekiti over a period of six months (April-September 2012); a tertiary hospital that serves as a referral centre for the neighbouring towns in Ekiti State and neighbouring Ondo, Osun and Kwara States. It is a 180-bedded hospital that runs general and specialist clinical services in twenty departments. There is an ongoing HIV infection treatment, care and support programme in the hospital. 
The Paediatric Emergency Unit is a 13-bedded ward with about 100 patients seen monthly (unpublished data).Study participants: the subjects were consecutive new paediatric patients with unknown HIV serostatus, aged 0-15 years, who presented in the PEU with any illness. The patients were recruited after the parents/caregivers signed or thumb-printed an informed consent form. The assent of patients who were seven years and older was sought by explaining the purpose of the study and the details of sample collection to them in a manner they would understand. Patients with documented HIV status at presentation were excluded from the study. Each patient was recruited once until the desired sample size was attained.Sample size determination: the minimum sample size required for the study was determined using the standard single-proportion formula n = Z^2P(1-P)/d^2, where Z = 1.96 is the standard normal deviate at the 95% confidence level, P = 0.5 (no similar study had been done in the region) and d = 0.05 is the desired precision; this yields n = (1.96)^2 x 0.5 x 0.5 / (0.05)^2, or approximately 384.2. The estimated minimum sample size was therefore 385; however, a total of five hundred and thirty patients were tested.Data collection: caregivers and patients were given HIV pre-test counselling using the WHO guideline on PITC, with the choice of "opting out".The testing protocol for HIV infection: the rapid test kits used for the HIV test were Determine HIV-1/2 and Uni-Gold. The HIV test was carried out and read by the researcher at the side-laboratory attached to the PEU according to the manufacturer's instructions. Serial algorithm testing for HIV infection was used: samples reactive on the Determine HIV-1/2 test were retested with Uni-Gold, and seropositive patients younger than eighteen months had HIV DNA PCR testing for confirmation.Patient management: patients who had HIV infection were immediately enrolled into the paediatric HIV/AIDS treatment, care and support programme at the hospital for full evaluation and treatment as recommended by the National Guidelines for the treatment of Paediatric HIV infection in Nigeria.Data analysis: the data were entered into a personal microcomputer and analysed using the Statistical Package for Social Sciences (SPSS) software, version 15.0 . Categorical variables were expressed as proportions, ratios and percentages, while statistical testing was done using the Chi-square (χ2) test. Statistical significance was set at a p-value of less than 0.05.Ethical consideration: Institutional Ethical Approval was obtained from the Ethics and Research Committee of FMC, Ido-Ekiti. A written informed consent form detailing the study purpose, benefits and possible risks to participants and their caregivers was duly signed by each caregiver. In addition, assent was obtained from children aged seven years and above who were in stable clinical condition.A total of 530 patients consisting of 296 (55.8%) males and 234 (44.2%) females participated in the study. The ages of the patients ranged between one day and 180 months, with a median age of 14 months. More than half (59.8%) of their caregivers were in social class III. Twenty-four (4.5%) of the patients were confirmed to have HIV infection. The prevalence of HIV infection has been shown to vary with locality and population subgroups. The prevalence of undiagnosed HIV infection among new patients in the present study using the PITC Strategy was 4.5%. This was close to the overall national prevalence of 4.1%, which also employed the PITC Strategy, though in a different clinical setting. It was, however, lower than rates reported in some earlier hospital-based studies.
[The prevalence rates of paediatric HIV infection in earlier reports from Nigeria were also higher than the prevalence rate in the present study. Akpede et al. in 1997 [et al. in 1998 [et al. reported [et al. observed [et al. in their [et al. however [et al. , 24, 25. [et al. found a [et al. in Awka [et al. in Lagos [et al. from Lag [et al. from Awket al., [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [The differences between the prevalence rates in the previous reports and findings in the present study may have also been influenced by the documented HIV prevalence in pregnant women attending antenatal care settings in the various locations since studies have shown that a higher proportion of paediatric HIV infection is acquired through MTCT. The prevalence of HIV infection using PITC Strategy as compared with clinical criteria based-screening tends to be lower because relatively low risk populations are being screened. The role of gender as a risk factor for MTCT of HIV is not clear . The malet al., Oniyangi [et al. and Ogun [et al. which al [et al. , 28. The [et al. among pa [et al. also who [et al. . Althoug [et al. in 1998 [et al. Ugochukw [et al. and Angy [et al. reportedet al. [et al. [et al. [et al. [et al. [et al. [These findings reflect the importance of maternal HIV infection in paediatric HIV infection in Nigeria and therefore paediatric HIV control measures. It is therefore instructive that twenty-one (95.5%) of the 22 mothers of the patients with HIV infection were newly diagnosed after the primary diagnosis of HIV infection in their children in the present study. The diagnosis of HIV infection in a child usually leads to the diagnosis in a parent with unsuspected HIV infection as it was the case in the present study. Such a discovery would enhance the linking of mothers to HIV infection prevention and treatment programme. While this is complementary in the effort at combating HIV/AIDS in Nigeria, a situation where most infected mothers are detected before or during pregnancy is most desired. The contribution of blood transfusion to the prevalence of HIV infection has been documented , 23, 28.et al. Ugochukw [et al. and Angy [et al. reportedThe prevalence of HIV infection among patients presenting to the Paediatric Emergency Unit of the hospital was high with the predominant route of transmission being MTCT. It is recommended that Provider-Initiated Testing and Counselling Strategy be offered to all children presenting in health facilities; especially immunization clinics, under 5 well child clinics and also other wards that admits children. 
This will provide opportunity for early diagnosis and treatment in order to reduce the high mortality in children with HIV infection.Provider initiated testing and counselling for HIV leads to significantly higher rates of detection of new cases of HIV infection compared with screening based on clinical suspicion;Maternal to child transmission is the most common route of HIV infection in children;In Nigeria, Ekiti state has one of the least prevalence rates based on the sentinel survey among pregnant women attending ANC clinics in 2010.The prevalence of undiagnosed HIV infection among children presenting in an emergency care setting in Ekiti state, South-western Nigeria is higher than the prevalence rate documented among pregnant women attending ANC clinics in the same study area;All HIV seropositive infants above nine months were confirmed HIV infected using HIV DNA PCR test;Majority of the HIV infected children were less than two years.The authors declare no competing interests."} {"text": "Gene function, including that of coding and noncoding genes, can be difficult to identify in molecular wet laboratories. Therefore, computational methods, often including machine learning, can be a useful tool to guide and predict function. Although machine learning has been considered as a \u201cblack box\u201d in the past, it can be more accurate than simple statistical testing methods. In recent years, deep learning and big data machine learning techniques have developed rapidly and achieved an amazing level of performance in many areas, including image classification and speech recognition. This Research Topic explores the potential for machine learning applied to gene function prediction.We are pleased to see that authors brought the latest machine learning techniques on gene function prediction. Submissions came from an open call for paper, and they were accepted for publication with the assistance of professional referees. Forty-six papers are finally selected from a total of 72 submissions after rigorous reviews. They were presented from different countries and regions, including China, USA, Poland, Taiwan, Korea, Saudi Arabia, India, and so on. According to the topics, we categorize three subtopics for our special issue.Su et al. proposed a novel method called GPSim to effectively deduce the semantic similarity of diseases. Yu et al. constructed a weighted four-layer disease\u2013disease similarity network to characterize the associations at different levels between diseases. Three papers paid attention to miRNA and disease relationship. Qu et al. proposed a novel method to predict miRNA\u2013disease associations based on Locality-constrained Linear Coding. Zhao et al. proposed a novel computational model of SNMFMDA (Symmetric Nonnegative Matrix Factorization for MiRNA-Disease Association prediction) to reveal the relation of miRNA\u2013disease pairs. He et al. proposed an NRLMFMDA (neighborhood regularized logistic matrix factorization method for miRNA\u2013disease association prediction) by integrating miRNA functional similarity, disease semantic similarity, Gaussian interaction profile kernel similarity, and experimental validation of disease\u2013miRNA association. Besides miRNA, there is still a paper on lncRNA\u2013disease relationship prediction. A dual-convolutional neural networks with attention mechanism\u2013based method are presented for predicting the candidate disease lncRNAs .The first part of this special issue discusses the gene and disease relationship. 
Six papers included in this part are focused on general diseases. These papers propose novel methods to predict disease and gene/miRNA/long noncoding RNA (lncRNA) associations. There are seven papers on cancer and oncogenes. Two papers paid attention to cancer subtypes. Liu et al. classified muscle-invasive bladder cancer into two conservative subtypes using miRNA, mRNA, and lncRNA expression data; investigated subtype-related biological pathways; and evaluated the subtype classification performance using machine learning methods. Jiang et al. employed spectral clustering and a novel kernel to predict cancer subtypes. Two papers are focused on breast cancer. Abou Tabl et al. present a hierarchical machine learning system that predicts the 5-year survivability of patients who went through specific therapy. Li et al. employed machine learning methods to select 54 novel breast cancer oncogenes and supported their findings with GO and KEGG analyses. Three papers researched other kinds of cancer. Liu et al. identified lncRNA LINC00941 as a potential biomarker of gastric cancer. Gao et al. proposed an ensemble strategy to predict prognosis in ovarian cancer. Guo et al. developed rigorous bioinformatics and statistical procedures to identify tumor-infiltrating bacteria associated with colorectal cancer. Two papers focused on type 2 diabetes and four papers paid attention to other diseases. Zhuang et al. employed a two-sample Mendelian randomization method to analyze the causal relationships between interleukin 18 (IL-18) plasma levels and type 2 diabetes using IL-18-related SNPs (single nucleotide polymorphisms) as genetic instrumental variables. Sun et al. established a multilevel comparative framework across three insulin target tissues to provide a better understanding of type 2 diabetes. Zhong et al. identified potential prognostic genes for neuroblastoma. Wang et al. predicted the chronic kidney disease susceptibility gene PRKAG2 by comprehensive bioinformatics analysis. Lu et al. employed the Laplacian heat diffusion algorithm to infer novel genes with functions related to uveitis. Li et al. analyzed the blood gene expression signature of osteoarthritis with advanced feature selection methods. The second part focused on gene structure and function prediction. Four papers were involved in gene elements, and two papers researched RNA structure. Oubounyt et al. employed deep learning techniques to predict gene promoter regions. Dao et al. gave a review of machine learning methods for detecting DNA replication origins in eukaryotic genomes. Exon skipping is an important issue in gene structure research. Chen, Feng et al. and Chen, Song et al. analyzed the relationship between histone modifications and exon skipping. Two papers addressed RNA secondary structure prediction, which is a classical problem in computational biology. Wang et al. and Zhang et al. employed deep learning to predict RNA secondary structure, especially pseudoknots. Zhang et al. predicted noncoding RNA function with a deep learning network. Zhao and Ma employed Multiple Partial Regularized Nonnegative Matrix Factorization for predicting ontological functions of lncRNAs. Deng et al. proposed an integrated model to infer the gene ontology functions of miRNAs by integrating multiple data sources. Zou et al.
predicted enzyme function with hierarchical multilabel deep learning.Besides gene structure prediction, four papers focused on the gene function prediction, and five papers paid attention to gene identification. Due to the GO- and KEGG-rich knowledge for gene function, researchers would like to pay attention to noncoding RNA function prediction. Han et al. predicted ion channels genes and their types. Chen et al. paid attention to MADS-box gene classification and clustering. Liu et al. predicted gene expression patterns with a generalized linear regression model. Fu et al. identified microRNA genes with sequence and structure information. Qiang et al. predicted RNA N6-methyladenosine sites with machine learning and sequence features.There are also five papers on gene identification, expression pattern prediction, and sites modification. They are all involved with machine learning techniques. Zhu et al. predicted drug\u2013gene interactions with Metapath2vec. Xuan et al. resolve this problem with the latest machine learning technique gradient boosting decision tree. Four papers researched lncRNA\u2013protein interaction prediction. Xie et al. predicted this problem with improved bipartite network recommender algorithm. Zhan et al. combined sequence and evolutionary information on this problem. Zhao et al. employed random walk and neighborhood regularized logistic matrix factorization approach. Dai et al. paid attention to complex features for ncRNA\u2013protein interaction prediction. Three papers are focused on plant researches. Qu et al. found effective sequence features for classifying plant pentatricopeptide repeat proteins. Jiang et al. identified rice yield-related candidate genes by walking on the functional network. Zhang et al. mined Magnaporthe oryzae sRNAs with potential transboundary regulation of rice genes associated with growth and defense through expression profile analysis of the pathogen-infected rice. Three papers paid attention to RNA-seq data analysis. McDermaid et al. proposed a new machine learning\u2013based framework for mapping uncertainty analysis in RNA-seq read alignment and gene expression estimation. Wang et al. gave a systems analysis of the relationships between anemia and ischemic stroke rehabilitation based on RNA-seq data. Niu et al. developed rSeqTU, which is a machine learning\u2013based R package for predicting bacterial transcription units from RNA-seq data.Other researches were categorized as the third part of our special issue. There are 12 papers in total in this part. Two papers are focused on drugs. To conclude, papers in this special issue cover several emerging topics of advanced learning techniques and applications for bioinformatics. We highly hope this special issue can attract concentrated attention in the related fields. We thank the reviewers for their efforts to guarantee the high quality of this special issue. Finally, we thank all the authors who have contributed to this special issue.ZQ wrote the manuscript draft. DM helped to revise the text. AKS gave some helpful suggestions. 
The work was supported by the National Key R&D Program of China (2018YFC0910405), the Natural Science Foundation of China , Statutory Research funds of Institute of Informatics, Silesian University of Technology, Gliwice, Poland (BK/204/RAU2/2019), and the professorship grant of the Rector of the Silesian University of Technology (02/020/RGPL9/0184).The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Acomys(spiny mouse) is incorrectly referred to as a hamster strain. It is a member of therodent family (Muridae). Also, the reference cited for Acomysregeneration was inadvertently modified during publication. Seifert et al. (2012a) wasincorrectly referenced as Seifert et al. (2012b). Finally, a reference for thediscussion on partial versus complete ear closure was accidentally omitted . The main text and references have now been corrected online.In the originally published version of this article, the The authors regret these errors."} {"text": "This study aimed to determine the prevalence, and relationship between dry eye and glycosylated haemoglobin (HbA1c) among patients with diabetes mellitus.this was a descriptive hospital-based study conducted among patients diagnosed with diabetes mellitus and attending the Diabetic Clinic at a tertiary health facility in Ibadan, south-western Nigeria. Dry eye was assessed using the standardized Ocular Surface Disease Index Questionnaire administered to the eligible respondents on dry eye symptoms. Detailed ocular examination including the tear break-up time (TBUT) and Schirmer I test were carried out and a recent glycosylated haemoglobin value was also obtained.one hundred and eighty-nine Type 2 diabetic patients were studied, with 68.8% female and a mean age of 60.2 \u00b1 10.3 years. The frequency of dry eye among patients was 21.7% . The most commonly reported symptoms of dry eye were \u201cfeeling of gritty sensation\u201d and \u201cblurred vision\u201d while \u201cdiscomfort in windy areas\u201d was the most common environmental trigger. No statistically significant correlation was noted between dry eye and HbA1c , and age dry eye is fairly common among patients with diabetes mellitus with most frequent symptoms being gritty sensation and blurred vision. No significant correlation was noted between dry eye and glycosylated haemoglobin (HbA1c). Dry eye is defined as a multifactorial disease of the tears and ocular surface resulting in symptoms of discomfort, visual disturbance, and tear film instability with potential damage to the ocular surface. It is accompanied by increased osmolarity of the tear film and inflammation of the ocular surface . Many paThis was a descriptive hospital-based study carried out in the ophthalmology and diabetic clinics of a tertiary hospital in Ibadan, Nigeria between December 2014 and January 2015. A sample size of 189 was calculated based on a previously reported prevalence of dry eye among diabetics of 52%, 95% confLastly, the tear break-up time (TBUT) was done followed by Schirmer I test with topical anaesthesia 30 minutes later to avoid any interference of results. TBUT was done by instilling a drop of 2% fluorescein strip wetted with sterile water into the conjunctival sac of each eye. 
The time interval between the last complete blink and the appearance of a random dark spot on the cornea under the cobalt blue filter of the slit-lamp was recorded with a stopwatch, and the mean of three timings was noted. A value of 10 seconds or less was considered as abnormal . SchirmeData management and analysis: data collected was analyzed using the Statistical Package for Social Sciences (SPSS) software . Summary statistics were presented using frequency tables, charts, means and rates. Chi-square and Fishers exact tests were used for categorical variables. Spearman rank-order correlation co-efficient was used to determine the relationship between dry eye and HbA1c. Level of statistical significance was set at < 5%.One hundred and eighty-nine patients participated in the study of which 59(31.2%) were males, M: F = . One hunet al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [et al. [The prevalence of dry eye in this study was 21.7% , similar to findings of Kaiserman et al. (20.6%) et al. , 25 . This is similar to previous studies , 13, 16 67. This et al. . These set al. , 33-35. [et al. reported [et al. and this [et al. . LimitatIn conclusion, dry eye is fairly common among patients with type 2 diabetes mellitus in our black African population with most of the affected patients experiencing the mild form of the disease. No significant correlation was noted between dry eye and glycosylated haemoglobin (HbA1c).Dry eye affects the ocular surface and results in tear film instability;Prevalence of dry eye increases with age and is higher among females;Patients with diabetes mellitus have higher prevalence of dry eye disease.A prevalence value for dry eye was derived for the region which can be used in further studies;No significant gender predilection for dry eye was noted in this study;No significant correlation was also noted between dry eye and glycosylated hemoglobin."} {"text": "A vast body of research demonstrates that many ecological and evolutionary processes can only be understood from a tri\u2010trophic viewpoint, that is, one that moves beyond the pairwise interactions of neighbouring trophic levels to consider the emergent features of interactions among multiple trophic levels. Despite its unifying potential, tri\u2010trophic research has been fragmented, following two distinct paths. One has focused on the population biology and evolutionary ecology of simple food chains of interacting species. The other has focused on bottom\u2010up and top\u2010down controls over the distribution of biomass across trophic levels and other ecosystem\u2010level variables. Here, we propose pathways to bridge these two long\u2010standing perspectives. We argue that an expanded\u00a0theory of tri\u2010trophic interactions (TTIs) can unify our understanding of biological processes across scales and levels of organisation, ranging from species evolution and pairwise interactions to community structure and ecosystem function. To do so requires addressing how community structure and ecosystem function arise as emergent properties of component TTIs, and, in turn, how species traits and TTIs are shaped by the ecosystem processes and the abiotic environment in which they are embedded. We conclude that novel insights will come from applying tri\u2010trophic theory systematically across all levels of biological organisation. 
Tri\u2010trophic research has followed two distinct paths: One has focused on the population biology and evolutionary ecology of simple food chains of interacting species; the other has focused on bottom\u2010up and top\u2010down controls over the distribution of biomass across trophic levels and ecosystem\u2010level variables. Here, we provide a roadmap to bridge these two long\u2010standing perspectives and identify two key challenges. First, determining whether and how ecosystem\u2010level TTIs emerge from food chain\u2010 and community\u2010level TTIs. Second, determining whether and how ecosystem\u2010level processes and abiotic factors feedback to shape the species traits that drive TTIs. Ecological and evolutionary outcomes of species interactions can only be fully understood after considering the multi\u2010trophic setting in which species are embedded. For example, phytophagous insects in terrestrial ecosystems go through periodic outbreaks in North America and Europe, destroying millions of hectares of forest each year . Early research treated pairwise interactions among trophic levels as constants and assumed that multi\u2010trophic systems could be understood by stringing together\u00a0these pairwise interactions in an additive fashion and, in doing so, indirectly influence patterns and the\u00a0amount of herbivory. These indirect effects alter plant trait evolution, population dynamics, and community structure. Natural enemies may also directly influence plant traits and this can affect plant relative allocation to direct vs. indirect defences and in turn herbivores.Plants alter herbivore\u2013natural enemy interactions. Plant traits influence herbivores, which indirectly affects natural enemies. Plants may directly influence natural enemies through the production of cues (volatile organic compounds), rewards (food) or morphological traits to alter natural enemy behaviours in ways that reduce or enhance herbivory.Herbivores alter plant\u2013natural enemy interactions. Natural enemy indirect effects on plants are contingent on herbivore traits influencing risk of predation or parasitism . Conversely, the expression of plant traits that attract predators or parasitoids is contingent upon the presence, type, and amount of herbivory via plant\u2010induced responses to damage.Under this perspective, non\u2010additive effects can be broadly classified into three types of interactions, each of which highlights diverse phenomena and are common to any type of tri\u2010trophic food chain:Ecosystem perspectiveBottom\u2010up and top\u2010down control. Feedbacks between the bottom\u2010up effect of plant productivity and the top\u2010down effects of natural enemies, where increasing productivity increases top\u2010down control though higher natural enemy density but may also reduce top\u2010down control by extending food chain length, resulting in secondary predators or parasitoids suppressing primary predators or parasitoids.Plant community composition. Increased herbivory following reduction in predation and parasitism (i.e. trophic cascades) leads to changes in plant communities from herbivore tolerant to herbivore resistant species, thus altering plant\u2013herbivore interactions.Herbivore behaviour. Natural enemies induce changes in herbivore behaviours through plastic responses or shifts in species composition that reduce herbivory.Research on TTIs from the ecosystem perspective has considered non\u2010additive effects in three separate contexts.et al. et al. et al. et al. 
From an ecological standpoint, TTIs have been studied within the context of indirect interactions mediated by changes in both the density and traits (i.e. plasticity) of trophically intermediate species have clearly selected for the sophisticated traits used by predators to locate these prey Abrams . Similaret al. et al. et al. et al. et al. et al. et al. Parallel to the species interactions perspective, a separate \u2018ecosystem perspective\u2019 on TTIs has also developed. Ecosystem\u2010level TTIs include processes underlying the distribution of biomass among trophic levels, as well as the direct and indirect effects of TTIs on ecosystem\u2010level processes . Trophic cascades occur from the top\u2010down when natural enemies indirectly control plant biomass and green food webs Allison . These cet al. et al. et al. et al. et al. et al. et al. et al. et al. The search for generalities in how trophic interactions affect ecosystem properties has relied on syntheses and meta\u2010analyses from multiple systems , but rigorous evolutionary tests are lacking , omnivory and intra\u2010guild predation (IGP) address non\u2010additive effects emerging from diverse natural enemy communities and their consequences for community stability , herbivore guilds (grasshoppers and sap\u2010feeders), and natural enemy guilds (active hunting and sit\u2010and\u2010wait spiders) because of the challenges of selectively manipulating small\u2010bodied arthropods because of the ability to selectively manipulate large\u2010bodied herbivores and predators and evolutionary (micro and macro) effects of abiotic forcing on species traits taking part in component tri\u2010trophic food chains Fig. , arrow 8et al. et al. et al. et al. et al. sensu McGill et al. et al. Although findings from elevational gradients reveal exciting first steps towards linking spatial variation in abiotic forcing and TTIs, key challenges remain. First, patterns of variation in species traits are likely driven not only by the direct influence of the abiotic environment acting on each trophic level but also by indirect effects acting among trophic levels (Rosenblatt & Schmitz Our understanding of the mechanisms underlying TTIs within and across levels of organisation will be accelerated through multiple new technologies, especially as they are applied in the context of experimental manipulations under field conditions and in combination with novel approaches for analysing large, complex datasets Fig. , arrow 9et al. et al. et al. et al. et al. New technologies in analytical chemistry now allow for increased resolution and sensitivity in sampling of volatile and non\u2010volatile compounds produced by plants and animals. For example, methods such as untargeted metabolomics analyses (e.g. Clancy et al. et al. et al. New genomic technologies provide insight into the genetic basis of the phenotypes underlying TTIs. Such techniques are\u00a0revealing the genetic architecture of traits and the processes underlying their evolution (e.g. Dobler et al. et al. et al. et al. et al. et al. et al. Genomics and sequencing techniques are also enabling ecologists to identify groups of microbes or individual taxa of importance to arthropod\u2010dominated food chains (Pineda et al. In addition to microbe mediation of plant and animal phenotypes, microbial community ecology is also revealing how trophic interactions among microbes can drive ecosystem function (Allison & Martini et al. et al. et al. 
Finally, advances in remote sensing and ecosystem-level modelling can help connect TTIs with ecosystem processes. The availability of large databases is increasing, not least for remote-sensed plant diversity and traits (Asner & Martin). Knowledge gained from research on TTIs has driven the development of much ecological and evolutionary theory, and empirical work demonstrates their importance for the function of both natural and managed systems. Nevertheless, the lens of tri-trophic theory has not been applied to all levels of biological organisation. By doing so here, we point out gaps in our understanding and suggest novel ways to form linkages across scales of biological organisation by using TTIs. Our review suggests many novel questions that this proposed programme of research can address, but two key challenges subsume many of these finer points. First, determining whether and how ecosystem-level TTIs emerge from food chain- and community-level TTIs. Second, determining whether and how ecosystem-level processes and abiotic factors feed back to shape the species traits that drive TTIs. Addressing these challenges will ultimately unite tri-trophic perspectives under a single paradigm that guides future research in ecology and evolutionary biology. LAR and AP wrote the manuscript; KM and CB edited the manuscript; all content is based on discussions that took place during a workshop attended by all authors under the initiative of KM and CB. All co-authors contributed substantially to revisions. No new data are associated with this manuscript."} {"text": "Prevention of thrombotic disorders such as cardiovascular disease and stroke is an urgent and important task for society. Prevention by suitable diet and exercise is recommended by government guidelines in many countries. In order to obtain useful and practical results from these recommendations, the tests employed to assess the thrombotic status of patients, the quality of their diet and their exercise levels are of conclusive importance. Assessment of thrombotic status has been performed over the years by quantifying thrombotic factors and by measuring function using anticoagulated blood samples. However, these approaches do not seem to be successful in assessing thrombotic status. This may be due in part to the belief that adding calcium to anticoagulated blood can restore properties of the original native blood and that individual quantification of thrombotic factors can reflect properties of overall multifactorial native (non-anticoagulated) blood. However, this assumption is wrong and needs to change, not least because relevant in vivo wall shear rates are not included in these tests. A different approach, based on physiological and biological (evolutionary) ideas, was proposed by separate groups in the 1970s. The first group was made up of Baumgartner, Sakariassen and colleagues, in whose tests native (non-anticoagulated) blood was used (ex vivo) and thrombosis was measured at various shear rates. The Baumgartner, Sakariassen and colleagues thrombosis tests are triggered by various thrombogenic surfaces, including human arterial subendothelium, human fibrillar collagen and human tissue factor/phospholipids, at wall shear rates varying from 100 to 32,000 s−1. Blood is drawn directly from an antecubital vein over the prothrombotic surface at various controlled wall shear rates, thus avoiding coagulation and platelet activation before the blood reaches the prothrombotic surface.
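For orientation only (a standard fluid-mechanics relation, not a formula given in the original text), the wall shear rates quoted above are set by the flow rate and the geometry of the perfusion channel. For fully developed laminar flow of a Newtonian fluid through a cylindrical tube of radius R at volumetric flow rate Q, the wall shear rate is

\[
\dot{\gamma}_w = \frac{4Q}{\pi R^{3}},
\]

so drawing native blood at a fixed flow rate through progressively narrower sections yields the range of physiological-to-stenotic shear rates (roughly 10^2 to 3×10^4 s−1) that such native-blood tests are designed to cover.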
The second group introduced the helium–neon laser-induced in vivo thrombosis system established by Kovacs et al. Subsequently they began research with shear-induced thrombosis/thrombolysis (fibrinolysis) tests both in animals and humans using native blood (ex vivo). Yamamoto and colleagues have analyzed the matching of results obtained by ex vivo and in vivo tests in animal experiments. Ex vivo and in vivo results were closely correlated, with the exception of rodents with severe endothelial dysfunction, which is why a simultaneous endothelial function test (flow-mediated vasodilation test) is recommended. A thrombus is formed by the interaction between blood and blood vessels under blood flow (Virchow's triad). Yamamoto et al. observed qualitative differences between fruit and vegetable varieties using shear-induced thrombosis/fibrinolysis ex vivo tests and the He–Ne laser-induced thrombosis in vivo test. They demonstrated that the antithrombotic activity of fruits and vegetables varies within the same species; that is, there are varieties with antithrombotic activity, those with prothrombotic activity and those with neither effect. The ex vivo Global Thrombosis Test (GTT) is based on the principles of flow chamber techniques, first described by Baumgartner and Sakariassen and their colleagues. Such tests, including the ex vivo Global Thrombosis Test, could be useful in assessing the thrombotic status and bleeding of patients, in developing antithrombotic drugs and diets, and in proposing antithrombotic programs utilizing physical exercise for the prevention of thrombotic disorders."} {"text": "We read with great interest the article by Carlo Custodero et al. recently published in Critical Care."} {"text": "Kraemer MUG, Faria NR, Reiner RC Jr, et al. Spread of yellow fever virus outbreak in Angola and the Democratic Republic of the Congo 2015–16: a modelling study. Lancet Infect Dis 2017; 17: 330–38. The appendix of this Article has been updated. This correction has been made to the online version as of Feb 15, 2019."} {"text": "Nipah virus (NiV), a zoonotic paramyxovirus belonging to the genus Henipavirus, is classified as a Biosafety Level-4 pathogen based on its high pathogenicity in humans and the lack of available vaccines or therapeutics. Since its initial emergence in 1998 in Malaysia, this virus has become a great threat to domestic animals and humans. Sporadic outbreaks and person-to-person transmission over the past two decades have resulted in hundreds of human fatalities. Epidemiological surveys have shown that NiV is distributed in Asia, Africa, and the South Pacific Ocean, and is transmitted by its natural reservoir, Pteropid bats. Numerous efforts have been made to analyze viral protein function and structure to develop feasible strategies for drug design.
Increasing surveillance and preventative measures for this viral infectious disease are urgently needed. NiV was named after Kampung Sungai Nipah (Nipah River Village) in Malaysia, where it was first isolated in 1998, before its subsequent spread into Singapore via exported pigs in 1999, which led to infections among abattoir workers. NiV infection was listed as a priority disease posing a public health risk by the World Health Organization; the other pathogenic member of the genus Henipavirus is Hendra virus (HeV). Understanding susceptible hosts and the routes by which the virus spreads provides the knowledge needed to curb epidemics. Bats are the second largest order of mammals after rodents. The origin of NiV could be dated to 1356, and the strains were divided into two lineages, with different clinical features and transmission routes in Bangladesh and Malaysia. The ribonucleoprotein core surrounded by the viral envelope consists of the genome and the N protein, which is essential for the viral life cycle as the template for the RNA-dependent RNA polymerase (RdRp), composed of the polymerase L and a polymerase cofactor P. The complex of N0 with the N-terminal domain of P is characterized by an asymmetric pea-like form composed of three heterodimers; N0 remains in an open conformation in the complex due to P-mediated inhibition of the polymerization of N, which also prevents its nonspecific binding to host RNA. The fusion protein precursor is cleaved by a cellular protease into F1 and F2 subunits linked by a disulfide bond, and two heptad-repeat regions (HR1 and HR2) in F1 contribute to the membrane merger. The head domain of the attachment glycoprotein (G) forms a β-propeller encircling the center in its receptor-unbound status and changes its form in the G/Ephrin-B3 complex; each blade module in the β-propeller, enhanced by a disulfide bond (C181–C601), contains a four-stranded (strands S1–S4) antiparallel β-sheet. Knowing the geographic distribution and transmission of a virus is the priority for the control of infection, and resolving the structure and function of viral proteins is the basis for antiviral drug development. In this review, we focus on these aspects of NiV. As a huge natural reservoir of viruses, including NiV, bats have come under renewed interest. Bats appear asymptomatic when infected by many viruses and play a pivotal role in viral spillover. Continually emerging and reemerging viruses from bats have been reported. In 2012, a novel rubula-like paramyxovirus from fruit bats was found to be responsible for a series of severe clinical symptoms in a female wildlife biologist who had performed a 6-week field exploration in South Sudan and Uganda (Albariño et al.)."} {"text": "The aim of this systematic review was to explore safety issues and adverse events arising from PRN prescription and administration. Electronic databases including Scopus, PubMed [including Medline], Embase, CINAHL, Web of Science and ProQuest were systematically searched to retrieve articles published from 2005 to 2017. Selection criteria: we included all randomized controlled trials (RCTs) and studies with comparison groups, comparing PRN prescription and administration with scheduled administration, where safety issues and adverse events were reported. The authors independently assessed titles, abstracts and full-texts of retrieved studies based on inclusion criteria and risk of bias. Results were summarised narratively.
The search identified 7699 articles. Title, abstract and full-text appraisals yielded 5 articles. The included studies were RCTs with one exception, a pre-test post-test experimental design. Patient populations, interventions and outcomes varied. Studies compared patient-controlled or routine administration with PRN, and one trial assessed the effect of a practice guideline on implementation of PRN administration. More analgesia was administered in the patient-controlled than the PRN arms, but pain reduction was similar. However, there was little difference in administration of psychotropic medicines. No differences between patient-controlled and PRN groups were reported for adverse events. The PRN practice guideline improved PRN patient education, but non-documentation of PRN administration increased. This systematic review suggests that PRN safety issues and adverse events are an under-researched area of healthcare practice. Variations in the interventions, outcomes and clinical areas make it difficult to judge the overall quality of the evidence. Well-designed RCTs are needed to identify any safety issues and adverse events associated with PRN administration. PRN is the acronym for ‘pro re nata’. The search strategy consisted of the keywords below, based on the authors’ experiences and controlled vocabularies such as MeSH: (1) “PRN (pro re nata)” or “as needed” or “as required”; (2) “Drug-Related Side Effects and Adverse Reactions” or “Adverse Drug Event” or “Adverse Drug Reaction” or “Drug Side Effects” or “Drug Toxicity” or “Side Effects of Drugs” or “Toxicity, Drug” and “PRN (pro re nata)” or “as needed” or “as required” and Nurs*; (3) “Drug-Related Side Effects and Adverse Reactions” or “Adverse Drug Event” or “Adverse Drug Reaction” or “Drug Side Effects” or “Drug Toxicity” or “Side Effects of Drugs” or “Toxicity, Drug” and “PRN (pro re nata)” or “as needed” or “as required”. Two review authors independently screened titles and abstracts from the retrieved articles and decided which studies met the inclusion criteria: peer-reviewed RCTs in caring sciences, focus on PRN and published in online scientific journals. Next, two independent review authors (M.V. and S.J.) assessed the full-text of selected articles to ensure that they met the above-mentioned inclusion criteria, using the methodological checklist developed by the National Institute for Health and Care Excellence (NICE). Two review authors (M.V. and S.A.) independently extracted the details of articles included in the review in terms of design, sample, intervention, prescription and administration, and outcome measurement. Risk of bias is any error or deviation in the design, study process, analysis and reporting of RCTs, which can cause an underestimation or overestimation of results or inferences. Two authors independently assessed the risk of bias of the included studies. The heterogeneity of the articles precluded a meta-analysis.
Results are presented narratively.Since a meta-analysis could not be performed, no articles were excluded due to missing data and there was no assessment of heterogeneity.We used a theoretical framework of patient safety to accommodate the studies\u2019 heterogeneity in terms of designs, participants and interventions.The authors employed the \u2018grading of recommendations assessment, development and evaluation\u2019 (GRADE) criteria ,28 to asThese could not be undertaken, due to inability to pool results.Although no language limitations were applied, all relevant articles were in English. Five articles on the safety and efficacy of PRN prescription and administration are included in this systematic review. The characteristics of the studies are presented in The search identified 7699 articles that could be potentially included in the review. From independent appraisal of the titles and abstracts of the articles by two authors (M.V. and S.J.) deleting duplicates (26 articles) and articles not meeting the inclusion criteria (7650 articles) led to the selection of 23 articles. Reading the full-text of the articles by two authors of this systematic review (M.V. and S.J.) for the inclusion criteria and the selection of RCTs over other study designs led to inclusion of 5 articles. Manual search in the references lists of the included studies identified no more articles. The process of the search is described using the preferred reporting items for systematic reviews and meta-analyses (PRISMA) flowchart in The included studies (n = 5) were published between 2005 and 2015. Two studies ,30 were Three studies ,31,32 weInterventions varied. Three studies considered analgesia ,30,31, oThe outcomes in the selected articles were diverse:Only Chibnall et al. measuredIn Chibnall et al. , routineMorad et al. measuredThe adverse effects of paracetamol ,Medication errors associated with PRN prescription and administration ,Neurologic deterioration, excessive sedation, nausea, vomiting, pruritus, insufficient analgesia, and/or respiratory insufficiency ,The adverse effects of the medicines, such as gastrointestinal bleeding or upset, as a secondary outcome ,Safety and ADRs using twelve-lead electrocardiograms at screening, after 8 weeks\u2019 treatment and during the treatment-free follow up period .Diverse adverse events were monitored, including:Eighteen studies were excluded ,44,45,46The risk of bias varied between studies .There were variations in the processes of random sequence generation and concealment among the studies .Chibnall et al. was blinAttrition is reported in All studies followed their protocols and reported their findings accordingly.Baseline characteristics of participants were similar in all studies. Cross-over design may have minimized the risk of allocation bias in the study of Chibnall et al. .p = 0.01), direct social interactions (p = 0.05) and work-like activity (p = 0.06) than during the placebo phase. They spent less time during the treatment phase engaged in independent self-care (p = 0.02). Emotional wellbeing, agitation, sleeping and independent walking did not differ between study phases (p = 0.80). No other studies reported psychological outcomes.Chibnall et al. found thIn the cross-over trial by Chibnall et al. , the preBaker et al. investigMorad et al. exploredHajimaghsoudi et al. exploredp = 0.003 and 54.8 vs. 29.9 g/h and p = 0.002).Morad et al. found paHajimaghsoudi et al. reportedPark et al. reportedChibnall et al. reportedBaker et al. reportedMorad et al. 
reportedHajimaghsoudi et al. found thPark et al. reportedpro re nata) prescription and administration in healthcare settings. Few randomized controlled trials compare PRN medication regimens with regular administration of the same drug [The authors aimed to investigate safety issues and adverse events associated with PRN (ame drug . We idename drug . Pain reame drug ,32. The ame drug .The paucity and size of relevant studies, diversity of designs, variations in populations and multiple interventions highlight the incompleteness of the evidence in this systematic review on patient safety and adverse events related to PRN administration. More studies are needed to explore whether PRN prescriptions and the associated transfer of decision-making to nurses or patients and reduced bureaucracy, affects patients\u2019 well-being and quality of care. The safety of PRN prescriptions may depend on appropriate education for nurses ,38,44,46Low sample sizes, difficulties with blinding, absence of information on sampling, randomization and attrition in some trials, variations in the designs, interventions, outcomes and results suggest that the overall quality of evidence is very low. The issues affecting the quality of the included studies differed and mainly stemmed from a lack of detail regarding the methods and interventions in the individual studies. Detailed reporting of the signs and symptoms of ADRs or \u2018undesirable effects\u2019 as listed in manufacturers\u2019 literature is essenWe tried to reduce bias during this review by conducting a thorough literature search using different keywords and databases. The Cochrane Risk of Bias Assessment is provided in A previous systematic review of PRN mInsufficient evidence for PRN administration and prescription suggests that PRN safety issues and adverse events are under-recognized. The development and implementation of PRN guidelines described in one of the studies did littWell-designed RCTs of PRN prescription and administration are needed to explore patient safety. The efficacy and effectiveness of PRN with other methods of medication administration and prescription is under-explored but our diverse findings suggest that safety will depend on context, both clinical area and staff preparation. PRN practice guidelines should be developed and evaluated , with a"} {"text": "Plectropomus areolatus) from India and found no evidence to support their findings of alternative reproductive tactics, unique school-spawning involving a single male with multiple females, or inverse size-assortment. The study lacks scientific credibility due to a lack of rigor in the methodology used, misinterpretation of observed behaviors, misinterpretation of the literature, and insufficient data. Their approach led the authors to produce spurious results and profound, invalid conclusions that violate the most basic assumptions of mate choice and sexual selection theory as applied to mating systems in marine fishes.Courtship and spawning behaviors of coral reef fishes are very complex, and sufficient sampling effort and proper methods are required to draw informed conclusions on their mating systems that are grounded in contemporary theories of mate choice and sexual selection. We reviewed the recent study by Karkarey et al. (BMC Ecol 17:10, Plectropomus areolatus) at a \u201cpristine\u201d site off Bitra, a remote atoll in the northern Lakshadweep archipelago off India. As part of their principal findings, Karkarey et al. [P. 
areolatus at Bitra showed a habitat-specific, inverse size-assortment in relationship to courtship in which \u201clarge males courted small females on the reef slope while small males courted equal-sized or larger females on the shelf.\u201d Both of these reported mating behaviors would appear to violate the most basic assumptions of mate choice and sexual selection theory as applied to marine fishes, and thus the study demanded further scrutiny.In a recent issue in this journal, Karkarey et al. conductey et al. describeP. areolatus and other marine fishes and seminal literature on mating systems and sexual selection. Based upon the serious issues contained in the study, which we grouped into five categories described below, we concluded that the study by Karkarey et al. [After careful consideration, we report here that the results of Karkarey et al. are unsuy et al. lacks scP. areolatus from within a larger school of females. To support their claim, they provide a photograph in the manuscript taken fg.\u00a02d in ). The auEach of us has observed the video file numerous times, both in real time and in slow motion, and we can find no plausible evidence of spawning or of any of the behaviors described by Karkarey et al. . In direFurther inspection of the proposed \u201cgamete cloud\u201d served P. areolatus schools. As early as 1999, Johannes et al. [P. areolatus swimming to, from or within spawning aggregations seem to be the only example of single-sex schooling behavior we know of among groupers within this genus\u201d. Johannes et al. [Karkarey et al. misrepres et al. reporteds et al. also dess et al. proposeds et al. but failP. areolatus at the site of their reported research. Descriptions by Johannes et al. [Karkarey et al. also obss et al. of inters et al. and servs et al. ) swimmins et al. ).The sequence of behaviors in the video provided by Karkarey et al. bears noP. areolatus provide evidence that the species does demonstrate two types of ARTs much like many other coral reef fishes with external fertilization were highest, they reported that sex ratios were highly biased towards males at the core of the aggregation. The authors of the study reported that \u201ccourting behavior \u2026seemed to reflect this shortage of females. Females were often harassed by several males simultaneously and often fled from them.\u201d In the areas where fish densities were highest, up to 40 males were observed engaging in this behavior. Johannes et al. [Previous studies on iewed by ) but notiewed by . Johanneiewed by describeiewed by describes et al. indicates et al. noted ths et al. match ths et al. , 13.P. areolatus at Bitra exhibit a unique spawning tactic involving individual males simultaneously mating with multiple females holds serious implications for sexual selection theory and mating systems of groupers and other marine fishes. Anisogamy generally leads to situations in which male gametes and individual males are in competition with each other to access and fertilize the eggs produced by females [From a broader perspective, the conclusions of Karkarey et al. that P. females \u201316. This females , 17, 18. females , 19.negatively correlated with population density [While the general conclusion by Karkarey et al. that \u201csc density \u201322. That density , 21, 22, density . TherefoP. 
areolatus, rendering the principal finding of their study and all associated conclusions regarding the existence, costs, and benefits of alternative reproductive tactics in the species as unsupportable and lacking scientific merit.Contrary to prevailing theories supported by extensive empirical evidence and numerous case studies of coral reef fishes, Karkarey et al. appear tP. areolatus are most commonly observed on the reef several days prior to spawning. During this time period, oocytes have not progressed to the point of hydration [Karkarey et al. providedydration , 23, 25.ydration , 25.P. areolatus, the abdomens of females are remarkably distended with hydrated eggs of actual spawning in ggs Fig.\u00a0. The proP. areolatus showed \u201ca habitat-specific inverse size-assortment\u201d, in which \u201clarge males courted small females on the slope, while small males courted equal-sized or larger females on the shelf\u201d. However, their methods and results suffer from a serious flaw: the authors cannot claim the behaviors they observed were actually courtship or representative of inverse-size assortative mating unless they are at least sometimes followed by a spawning event. The authors themselves state they never once observed a spawning event between a male\u2013female pair of fish. The only evidence they presented to justify their findings were that these behaviors were observed at spawning aggregations of P. areolatus in previous studies [Karkarey et al. contend studies , 12.While it is unclear whether courtship was measured at all, it is never appropriate to measure courtship as a \u201cbenefit\u201d accruing to males, because it is unknown whether these behaviors led to successful spawning or whether these individuals remained in the observation arena until spawning commenced. The authors did not observe mating in either large or small individuals, so it is also unreasonable to draw conclusions about inverse size-assortative mating. Additionally, local sex ratio contaminates the courtship rate measurements, because the \u201cbenefit\u201d is multiplied by the number of nearby females; this leads to an estimate of higher \u201cbenefit\u201d for males on the shelf, even though time spent in \u201ccourtship\u201d was claimed as the same in both habitats. This is the approach taken by Karkarey et al. , as showIt is implausible to make sound inferences about mating rates, potential mating opportunities, or costs associated with intra-sexual selection when none of the measurements used to generate them were based on verified reproductive activity. The observations by Karkarey et al. of male\u2013Conducting comprehensive quantitative analyses of fish mating behavior, courtship and mating rates, and related factors requires careful, appropriate design and analyses. These practices can result in novel findings that propel our interest and understanding in these and other organisms; however, rarely do these findings contradict established theory. In these instances, clear, irrefutable evidence is required that is supported by rigorous methodology, observations, and analyses. While Karkarey et al. should b"} {"text": "Hearing loss is a common sensory disorder that has been a serious concern globally. It is recently estimated that around 466 million people worldwide have disabling hearing loss, including 34 million children, and this number will increase to over 900 million by 2050 (according to WHO report 2018). 
The majority of hearing disorders occur due to the death of either inner ear hair cells (HCs) or spiral ganglion neurons (SGNs), thus leading to sensorial neural hearing loss (SNHL). SNHL is known as the most common form of hearing disorder that comprises about 85% of all hearing loss cases. This type of damage is induced by a variety of reasons such as inner ear trauma, ischemia, ototoxic drugs, noise exposure, inflammation, viral infections, genetic deficits, autoimmunologic reaction, and aging. SNHL is generally not reversible due to the lack of regeneration capacity of HCs and SGNs. However, various recent studies have determined that the HCs and SGNs hold a regenerative potential and it is possible to find the cure for SNHL in the near future. This is supported by the clear understanding of the genetical control and signaling pathways involved in the development of HCs and SGNs and their functions, the regenerative potential of residing adult stem cells, and the development of gene therapy and the clinical trials of new pharmaceutical compounds on damaged cochleae. Last year, we have published the first special issue of \u201cHearing Loss: Reestablish the Neural Plasticity in Regenerated Spiral Ganglion Neurons and Sensory Hair Cells\u201d, and this year in this second special issue, we are presenting a new series of articles to report the most recent advances in several major areas as summarized below: HC development, HC damage and protection, HC regeneration, SGN development and protection, inherited hearing loss, and inner ear drug delivery.In Vivo\u201d) for the first time experimentally demonstrate the backward traveling wave theory by measuring the phase spectra of the basilar membrane vibration at multiple longitudinal locations along the cochlea. X. Cheng et al. (\u201cModulation of Glucose Takeup by Glucose Transport on the Isolated OHCs\u201d) report that glucose is transported into OHCs via glucose transporter 1 and 4, which are mainly expressed on the lateral wall of OHCs and the glucose antagonist and ATP regulate this energy transport mechanism. S. Liu et al. determine that Pax2, Sox2, and Prox1 have differential and overlapping temporospatial expression patterns during the development of vestibular and auditory sensory organs in mice.A. Chang et al. report that the early environmental sounds promote functional maturation of HCs. Acoustic environment significantly decreases the ABR threshold, increases prestin expression in outer HCs, and enhance maturation of ribbon synapses in the postnatal mouse cochleae. P. Chen et al. demonstrate that the microglial-like cells are present in the developing mouse cochlea, and these cells go through the drastic morphological and distributional changes during the postnatal cochlear development. Also, these cells might participate in the maturation and remodeling of the cochlea. F. Chen et al. explore the pathogenesis of superior semicircular canal dehiscence (SSCD) in the guinea pig model and report that the bony fenestration of the superior semicircular canal mimics the hearing loss pattern of SSCD patients. J. Hong et al. investigate that the N-methyl-D-aspartate receptors regulate the number and distribution of inner HC ribbon synapses after gentamicin-induced ototoxicity and their inhibition by antagonist minimized the drug-induced ototoxicity, and thus maintain the integrity of ribbon synapses. L. Xia et al. 
(\u201cComparison of Acceptable Noise Level Generated Using Different Transducers and Response Modes\u201d) compare the acceptable noise level (ANL) in 20 mandarin subjects with normal hearing. The author obtained ANL through different methods and determined that the ANL in normal hearing listeners may not be affected by different modes of presentation. M. Waqas et al. provide a brief review about the mechanisms involved in the HC loss after noise-induced trauma and discuss the recent HC protection strategies to prevent and recover hearing function in mammals after noise-induced damage. X. Cheng et al. report that the combination of electric and acoustic hearing significantly improves the perception of music and Mandarin tones in pediatric cochlear implant patients. X. Ding et al. determine the association between tinnitus and sudden sensorineural hearing loss (SSNHL). The authors found that tinnitus can be ameliorated by the successful treatment of SSNHL. G. Li et al. provide a brief review on the autoimmune mechanisms involved in SSHL and discuss the role of immunosuppressive drugs in immune therapy. N. Zhao et al. report that the age-related hearing loss causes an increase in PnC sensitivity that in turn enhances acoustic startle responses in C57 mice. B. Li et al. investigate the extent of SNHL at high frequencies influences the ability to recognize compressed speech of lower frequencies in hearing loss patients.M. Tang et al. provide a comprehensive review to address the current challenges and problems in stem cell transplant-based treatments in the inner ear against deafness and present a critical viewpoint about electrical stimulations as a physical factor to modulate stem cell behavior and promote stem cell therapy to treat hearing loss.J. Li et al. determine that the type II SGNs participate in the contralateral suppression of the medial olivocochlear reflex after selectively inducing apoptosis in the type I SGNs using ouabain treatment.MYO15A Mutations Identified in One Chinese Family with Autosomal Recessive Nonsyndromic Hearing Loss\u201d) report three MYO15A variants c.3971C>A (p.A1324D), c.4011insA (p.Q1337Qfs\u221722), and c.9690+1G>A. These variants are absent in 200 normal controls and cosegregated with hearing disability in this family. X. Wang et al. (\u201cA Novel p.G141R Mutation in ILDR1 Leads to Recessive Nonsyndromic Deafness DFNB42 in Two Chinese Han Families\u201d) identify a novel p.G141R homozygous mutation in ILDR1 gene that may be the genetic cause of deafness in two unrelated Chinese Han families. X. Wu et al. report that the novel compound heterozygous missense mutation c.4472C>T p.T1491M and c.1973T>C p.V658A in PTPRQ gene is the genetic cause of recessively inherited sensorineural hearing loss in a Chinese family.H. Du et al. (\u201cIdentification of Binding Partners of Deafness-Related Protein PDZD7\u201d) determine the eleven novel PDZD7-binding proteins through yeast two-hybrid screening that are expressed in the inner ear. Most of the new PDZD7-binding partners such as TRIM35, CADM1, AMOT, BLZF1, Numb, KCDT10, CCDC27, and TRIP11 have not been reported before and will help to understand the role of PDZD7 in hearing transduction. P. Li et al. 
(\u201cKnock-In Mice with Myo3a Y137C Mutation Displayed Progressive Hearing Loss and Hair Cell Degeneration in the Inner Ear\u201d) report that Myo3a kinase domain Y137C mutant mice have an elevated hearing threshold, degenerated inner ear HCs, and structural abnormality in HCs stereocilia after 6 months of age, thus Myo3a is essential for maintaining the intact structure of HC and normal hearing function. S. Hu et al. (\u201cGenetic Etiology Study of Ten Chinese Families with Nonsyndromic Hearing Loss\u201d) identify novel pathogenic variants in six Chinese families with a hereditary hearing loss by targeted next generation sequencing. F. Zhang et al. synthesize a new nanoparticle capsule that can be used as a drug delivery route for the gentamicin transfer at the specific site in the inner ear. The authors also determine the sustained release capacity of gentamicin from the capsule by We believe that the studies included in this second special issue of \u201cHearing Loss: Reestablish the Neural Plasticity in Regenerated Spiral Ganglion Neurons and Sensory Hair Cells\u201d provide important insights into cochlear physiology and pathology as well as the important progress in technology that can be translated into clinical application of the medical treatment of cochlear damage in SNHL. We wish that this special issue will represent a significant contribution in the effort to achieve effective protection and treatment of hearing loss in the near future."} {"text": "Starch has been an inexhaustible subject of research for many decades. It is an inexpensive, readily-available material with extensive application in the food and processing industry. Researchers are continually trying to improve its properties by different modification procedures and expand its application. What is mostly applied in this view are their chemical modifications, among which organic acids have recently drawn the greatest attention, particularly with respect to the application of starch in the food industry. Namely, organic acids naturally occur in many edible plants and many of them are generally recognized as safe (GRAS), which make them ideal modification agents for starch intended for the food industry. The aim of this review is to give a short literature overview of the progress made in the research of starch esterification, etherification, cross-linking, and dual modification with organic acids and their derivatives. Numerous original articles regarding a starch modification by chemical, physical and/or enzymatic procedures have been published and starch has been extensively reviewed from different points of views. Novel reviews published within the last five years have mainly dealt with starch digestion and resistant starch ,4,5,6,7,et al. [et al. [In 2012 Kaur et al. reviewed [et al. reviewedAlthough starch has been reviewed in many aspects, and even though many reviews dealing with chemical modifications of starch have been published to date, the application of organic acids and their derivatives in starch modification has not been discussed in detail to the authors\u2019 knowledge. This article will give a short review of the research and advancements regarding the application of organic acids and their derivatives in starch modification. Although the authors have tried to focus on the 2002\u20132015 period of research, some articles published prior to 2002 are also considered, due to their valuable contribution to the issue.Starch acetates are additives approved in the food industry under number E1420. 
They are commonly produced with acetic acid and acetic anhydride as starch esterification reagents. In addition, vinyl acetate can also be used for esterification. In the reaction of the above-mentioned reagents with starch, part of the hydroxyl groups on the anhydroglucose units is substituted with acetyl groups and, consequently, esters (starch acetates) are formed. The number of acetyl groups incorporated into the starch molecule depends on the reactant concentration, pH, reaction time, and presence of catalysts. Bello-Pérez et al. have studied the influence of the acetylation reagents: the highest DS (2.934) was obtained with the 1:1 mixture, although total mmols were kept constant (180 mmol), and the lowest DS (1.837) was obtained when only anhydride was used. These results indicate that the reagent type does indeed influence the reaction efficiency. An increase in the reactant concentration positively influences the reaction efficiency. Namely, the reaction of starch with acetic anhydride in an aqueous medium with NaOH as the catalyst results in a low degree of substitution, averaging from 0.01 to 0.2. The appertaining synthesis routes significantly influence starch acetylation, too; e.g., when producing highly-substituted starch acetate, the addition time of the activator potassium carbonate is of great importance—it should be added after acetic anhydride is allowed to penetrate into the starch granules. In the presence of NaOH, the starch hydroxyls are deprotonated to an alkoxide (–O−), which reacts with acetic anhydride to build a starch acetate and NaOAc. The reaction of acetic anhydride with starch favourably yields C-3 and C-6 esters. However, Dicke has reported that if acetylation is performed with vinyl acetate as the acetylating agent, C-2 esters are exclusively produced. Reaction time is an important factor for the acetylation efficiency, as revealed by Han et al. A temperature increase from 25 to 30 °C facilitates diffusion of acetylating agents and starch swelling, which results in higher yields of a substituted product. However, acetylation is an exothermic reaction, and a further increase of the temperature would negatively impact the reaction. Water is a commonly used reaction medium for acetylation. A higher water content aids dissociation, diffusion, and adsorption of the esterifying agent, which is favorable to the reaction. However, if the water to starch ratio exceeds 1.06:1, the reaction efficiency is reduced due to side reactions. Since a high water concentration is required to avoid mixing problems in industrial conditions, acetic anhydride hydrolyzes to acetic acid, which reduces the reaction efficiency. To overcome this problem, pyridine and DMSO can be used as solvents. A possible solution for these problems can also be found in the research of Muljana et al., who have used densified CO2 as a “green” solvent for low-substituted starch acetate production. They have obtained potato starches substituted with acetic anhydride using NaOH as the catalyst, with a range of the DS between 0.01 and 0.46, showing a high potential of densified CO2 usage as a solvent medium. Since an acetyl group is much bulkier than a hydroxyl group, it sterically hinders the structural organization of starch chains. Due to the repulsion between starch molecules, water percolation between chains is facilitated.
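As a point of reference (a standard relation from the starch-ester literature rather than a formula stated in this review), the DS values quoted above are usually calculated from the acetyl content determined by saponification and back-titration. With the acetyl content expressed as a weight percentage A, the average number of acetyl groups per anhydroglucose unit is

\[
\mathrm{DS} = \frac{162\,A}{4300 - 42\,A},
\]

where 162 g/mol is the molar mass of the anhydroglucose unit, 43 g/mol that of the acetyl group, and 42 g/mol the net mass gained per substituted hydroxyl. A fully substituted starch acetate corresponds to DS = 3, consistent with the highest values of about 2.9 reported above.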
[Sodhi and Singh have reget al. after acet al. . However [et al. have obs [et al. for acet [et al. for acet [et al. have pos [et al. have demStarch paste clarity and freeze-thaw stability are increased by starch acetylation ,26. Due etc. after gelatinization at 90 \u00b0C for 45 min . The reaet al. [Biswas et al. have preet al. [Thermoplastic starch-maleate esters have been prepared by an extrusion, with glycerol as plasticizer, as reported by Raquez et al. . FTIR anet al. [Tay et al. have preOrganic acids are extensively researched in terms of starch modification. Most studies are dedicated to commonly used acids and their derivatives, such as acetic anhydride, succinic anhydride, and OSA . HoweverThe variety of modified starches can be obtained not only by the selection of starting starch type, but also by the careful selection of modifying agents, catalysts, reaction temperature, and time. Reactions can be facilitated by the combined application of physical modifications, such as extrusion or pre-gelatinization. The complexity of modification is, therefore, high and by changing only one reaction parameter, it is possible to obtain a new product with significantly different properties. This allows much space for further research on reactions of starch with organic acids.In addition, a very important aspect is that some of these modified starches should be additionally researched in order to determine their safety for the consumption. On the other hand, a lot of them can already be characterized as good functional food ingredients or good bio-based packaging materials. Since starch and organic acids are easily available and are low-cost materials, the authors do not doubt that starch modification with organic acids and their derivatives will attract even greater attention in future studies."} {"text": "Toxoplasma gondii Using CRISPR/CAS9\u201d by Bang Shen et al. and \u201cEfficient Genome Engineering of Toxoplasma gondii using CRISPR/CAS9\u201d by Saima M. Sidik et al. made an impact on him by successfully implementing strategies to genetically manipulate T. gondii using CRISPR/CAS9 gene editing technology.Alfredo J. Guerra works in the field of molecular parasitology and structural biology. In this mSphere of Influence article, he reflects on how \u201cEfficient Gene Disruption in Diverse Strains of Toxoplasma gondii Using CRISPR/CAS9\u201d by Bang Shen et al. and \u201cEfficient Genome Engineering of Toxoplasma gondii using CRISPR/CAS9\u201d by Saima M. Sidik et al. made an impact on him by successfully implementing strategies to genetically manipulate T. gondii using CRISPR/CAS9 gene editing technology.Alfredo J. Guerra works in the field of molecular parasitology and structural biology. In this mSphere of Influence article, he reflects on how \u201cEfficient Gene Disruption in Diverse Strains of Toxoplasma gondii: \u201cEfficient Gene Disruption in Diverse Strains of Toxoplasma gondii Using CRISPR/CAS9\u201d by Shen et al. pathway but can also be used to tag specific genes and also generate point mutations in a site-specific manner.It is difficult to overstate the extent to which CRISPR/CAS9 gene editing technology has changed the landscape of genome editing. The field of apicomplexan biology is no exception to this trend. As a relative newcomer to the field of molecular parasitology, there are two papers that strongly influenced my current research in n et al. and \u201cEffk et al. . AlthougT. gondii. Shen et al. 
adapted a CRISPR/CAS9 system with a single guide RNA (sgRNA) to target genes in T. gondii for efficient gene disruption. Taking advantage of a resistance to fluorodeoxyribose that results upon the deletion of the uracil phophoribosyl transferase (UPRT) gene, Shen et al. site-specifically disrupted the UPRT locus both by the nonhomologous end-joining (NHEJ) pathway as well as via homologous recombination, where the UPRT locus was replaced with a pyrimethamine-resistant dihydrofolate reductase (DHFR). The strength of the paper by Shen et al. lies in showing that this approach can be extended to different strains of T. gondii, thereby expanding the utility of this approach in interrogating the complex biology of this apicomplexan parasite. Similarly, Sidik et al. were able to show the efficient disruption of genes via NHEJ using a CRISPR/CAS9 system with an sgRNA targeting the SAG1 gene. Sidik et al. then took their system a step further and showed that in a strain that is deficient in NHEJ (\u0394KU80), it is possible to introduce site-specific point mutations by supplying a repair template harboring the desired mutation. One advantage of this approach is the ability to use relatively short homology arms\u2014typically 40 bp. Sidik et al. leveraged drug resistance that results from the specific point mutations that were introduced to give a simple yet powerful confirmation that the desired mutations were introduced. Finally, Sidik et al. showed that CRISPR/CAS9 can be used to tag endogenous loci with an epitope tag in a \u0394KU80 strain by using a repair template harboring the desired tag flanked by relatively short (\u223c40-bp) homology arms to target the insertion in frame with the gene of interest.Shen et al. and Sidik et al. spearheaded the implementation of CRISPR/CAS9 for gene inactivation, gene tagging, and insertion of point mutations in T. gondii field as a whole. One impressive follow-up study is a genome-wide CRISPR screen that allowed Lourido and coworkers to determine the degree to which specific genes contribute to the fitness of the parasites in human foreskin fibroblast (HFF) culture (\u03b2 domain of T. gondii perforin-like protein 1 (TgPLP1) . In that"} {"text": "Micromachines aims to present the most recent research developments in scalable micro/nanopatterning. A total of eight papers are presented, including three review papers and five original research papers. The topics include \u201ctop-down\u201d approaches, \u201cbottom-up\u201d approaches, and the combination of \u201ctop-down\u201d and \u201cbottom-up\u201d approaches.This is the golden age of scalable micro/nanopatterning, as these methods emerge as an answer to produce industrial-scale nano-objects with a focus on economical sustainability and reliability. The improvement of scalability and reliability in nanomanufacturing is a key step to move nanotechnology advances closer to the customer market. This special issue in The Scalable Nanomanufacturing (SNM) program of the National Science Foundation (NSF) was initiated in 2011, and provides funding support to research and education to accelerate the commercialization of nanoscale inventions. Khershed Cooper (NSF) reviewed the initiation of the SNM program [Hu et al. provided a review of tip-based nanofabrication (TBN) for scalable manufacturing . TBN tecDu et al. provided a review of stencil lithography, which is a scalable and resistless nanomanufacturing technique . The curChauvin et al. 
reported a simple and scalable fabrication technique of porous gold nanowires based on laser interference lithography and dealloying . The golQuantum-dots (QD) are semiconductor particles and have been widely used in the fields of biomedical sensing, photovoltaic devices, and nano-electronics. Keum et al. introduced a scalable QD patterning technique using shape memory polymer (SMP) . The adhNanoimprint lithography (NIL) is a rapid and high resolution nanomanufacturing technique. Lin et al. reported a scalable, time-efficient, and high resolution fabrication process of silicon oxide . An NIL Hoshian et al. introduced a non-lithographic method to pattern silicon . Inkjet Ou et al. demonstrated antireflective SiC nanospikes with a simple and low-cost process by using metal nanoparticles as an etching mask . The metThe future of scalable micro/nanopatterning is bright. We hope that this special issue will help scientists and engineers to develop more advanced micro/nanopatterning techniques in the near future. We would like to thank all the contributors to this special issue and the reviewers for providing valuable comments to the submitted manuscripts."} {"text": "Escherichia coli,\u201d Nigam et al. . Of the 42 corresponding genes, 36 have an ACA sequence, the target site for MazF cleavage, <100\u2009nt upstream of their start codon. Based on this observation, Nigam et al. tested whether NA-dependent changes in the expression of a green fluorescent protein (GFP) reporter were affected by the positions of upstream ACA sequences. The major conclusion drawn by Nigam et al. was that the presence of an ACA sequence <100\u2009nt upstream of a start codon is associated with MazF-dependent regulation of expression for the corresponding gene. As described in detail below, none of the data presented by Nigam et al. support this conclusion.In their recent study, \u201cStress-Induced MazF-Mediated Proteins in m et al. identifiE. coli, as would be expected for any trinucleotide sequence. ACA sequences were found upstream of 36 of the 42 genes listed in Table 1 of the article by Nigam et al. While Nigam et al. describe this frequency as \u201cremarkable,\u201d it is anything but. In fact, for the 42 genes listed in their Table 1, the frequency of genes with an ACA trinucleotide <100\u2009nt upstream is not significantly higher than that for the set of all E. coli genes or the control set of 2,807 genes described by Nigam et al. as having a \u201cfree region upstream\u201d . Thus, these data do not support the conclusion that upstream ACA sequences contribute to MazF-dependent regulation. Moreover, given that most 5\u2032 untranslated region (UTR) lengths for E. coli genes are <50\u2009nt . Lastly, we note that the notion of a \u201cstress-induced translation machinery\u201d has not withstood careful additional and independent scrutiny by the field (In summary, the data presented by Nigam et al. do not spression , and 13/he field and that"} {"text": "The objective of this study was to investigate the effect of selected biopolymers on the rheological properties of surimi. In our paper, we highlight the functional properties and rheological aspects of some starch mixtures used in surimi. However, the influence of some other ingredients, such as cryoprotectants, mannans, and hydroxylpropylmethylcellulose (HPMC), on the rheological properties of surimi is also described. The outcome reveals that storage modulus increased with the addition of higher levels of starch. 
Moreover, the increasing starch level increased the breaking force, deformation, and gel strength of surimi as a result of the absorption of water by starch granules in the mixture to make the surimi more rigid. On the other hand, the addition of cryoprotectants, mannans, and HPMC improved the rheological properties of surimi. The data obtained in this paper could be beneficial particularly to the scientists who deal with food processing field. Merluccius productus) and Alaska pollock . In addition, various other species such as bigeye snapper (Priacanthus spp.), treadfin bream (Nemipterus spp.), lizardfish (Saurida spp.), croaker (Pennahia and Johnius spp.), are now also used in the preparation of commercial surimi in southeast Asia; the functional and compositional properties vary depending on the species used [According to Park and Morrissey , surimi ies used .Surimi is a unique and useful seafood analogue because of its gelling properties ,4. SinceStarch is made up of two polysaccharides, amylose and amylopectin . AmyloseThe chemical, physical, and functional properties of starches have been widely studied. Native starches from various botanical sources, such as maize, sweet potato, potato, and rice, have received extensive attention due to the differences in their structural and physico-chemical characteristics ,30,31. Iet al. [et al. [et al. [et al. [Normally, consumers evaluate the final acceptance of surimi-based products by their textural characteristics, which are considered to be most important. When a surimi-based product contains high-quality surimi as a predominant component, the resulting texture of the product tends to be rubbery . Lee et et al. also claet al. . Lee et [et al. noted th [et al. . The lev [et al. added va [et al. . The add [et al. reported [et al. . The bot [et al. . Yang an [et al. studied [et al. reportedet al. [During thermal processing of starch-surimi systems, significant changes in rheological properties were observed due to the gelatinization of starch and sol-gel transformation of fish proteins, using various starches ,49. Wu eet al. ,49 reporG\u2032) value for surimi with a higher addition of starch [G\u2032 did not decrease during the gel resolution stage [G\u2032 value was the absorption of water by starch granules in the surimi-starch mixture, resulting in more rigid surimi. Campo-Dea\u00f1o and Tovar [G\u2032 and loss modulus (G\u2033)) of sticks prepared from Alaska pollock (AP) and Pacific whiting (PW) surimi mixed with starch at concentrations of 7%, 11%, and 15%. The difference of (G\u2032\u2013G\u2033) had two distinct regions from a rheological viewpoint, namely, a linear viscoelastic region in which the G\u2032 value was greater than the value of G\u2033 and a non-linear region in which the value of G\u2032 decreased with increasing stress or strain and the G\u2033 value exhibited a contrasting trend [et al. [\u03c3max) or strain (\u03b3max). Campo-Dea\u00f1o and Tovar [G*), also increased. The authors also observed that, with identical starch concentrations, the PW samples had lower viscoelastic moduli than the AP samples [In the study of Chen and Huang , surimi f starch . They alon stage . The addnd Tovar investigng trend . These rng trend and More [et al. , could bnd Tovar found th samples . This reet al. [Dosidicus gigas). Two methods were used to prepare surimi: first, protein precipitation at the isoelectric point (type A), and, second, washing with an acid solution (type B). 
Four % sorbitol + 4% sucrose + 0.5% sodium tripolyphosphate, 4% sorbitol + 4% trehalose + 0.5% sodium tripolyphosphate, and 8% trehalose + 0.5% sodium tripolyphosphate were added to the surimi of type A or B. Viscoelastic parameter studies showed that surimi type A samples had significantly higher viscoelastic moduli than type B samples and were more rigid than type B samples. In type B samples, the influence of different cryoprotectants was not discernible. In contrast, in type A samples, trehalose favored less initial protein aggregation and therefore a more thermorheologically stable structure.A cryoprotectant is usually used to protect against damage of biological tissue from ice formation. Some cryoprotectants function by lowering the glass transition temperature of the solution or material. Many cryoprotectants function by forming hydrogen bonds with biological molecules like water molecules in a system. However, cryprotectants prevent freezing, and, in a glassy phase, a solution can maintain some flexibility. For proper DNA and protein function, hydrogen bonding in aqueous solutions is important. Thus, the biological material retains its native physiological structure, although it is no longer immersed in an aqueous environment because the cryoprotectant replaces the water molecules. During frozen storage, surimi is liable to lose quality due to denaturation and/or aggregation of myofibrillar proteins. Cryoprotectants prevent the protein, especially actomyosin, from denaturing/or aggregating. Thus the addition of cryoprotectant to surimi is needed to maintain its quality. Many compounds, including some low molecular weight sugars and polyols as well as many amino acids, carboxylic acids and polyphosphates were known to display cryoprotective effects in surimi ,54,17,52et al. comparedDosidicusgigas) made by two methods and stored frozen at \u221215 \u00b0C for 6 months were investigated by Campo-Dea\u00f1o et al. [G\u2032 and G\u2032\u2032 moduli, decreased between 45 and 50\u00b0C, reflecting an increase in fluidity of the semi-gel [et al. [T > 70 C, G\u2032 continues increasing, whereas G\u2032\u2032 remains almost constant, indicating the formation of a highly elastic myofibrillar protein gel. Moreover, in type A samples, there were some differences among the different cryoprotectants: in the type A sample containing sorbitol + trehalose caused a significant increase in G\u2032 and G\u2032\u2032 moduli after eight weeks of storage of type A surimi throughout the temperature range. Sorbitol + trehalose had a weak cryopreservative effect on surimi A; in this case, the cryoprotectant may have increased the internal mechanical stress produced by freezing-induced dehydration. The rheological properties of squid surimi during frozen storage at \u221218 \u00b0C and the influence of five levels of KGM on the textural properties of grass carp surimi gels were investigated by Xiong et al. [p < 0.05) with the KGM concentration. Xiong et al. [et al. [et al. [Aristichthy nobilis surimi. Therefore, KGM could be a potential enhancer of the gel properties in surimi processing. However, adding more than 2% KGM is not recommended because, at this level, the surimi gels could easily become too hard due to the strong hygroscopicity of KGM. Moreover, higher levels of KGM significantly reduced the whiteness of the surimi gels. The authors suggested that for a better surimi texture, the optimum level of KGM was 1%. The effect of KGM on myofibrillar protein from grass carp (g et al. . They obg et al. 
also com [et al. conclude [et al. also notThe structure of HPMC is presented in et al. [According to the report of Chen , HPMC haet al. noted thet al. used a tet al. . On the et al. used theThe biological origin of starch has an important influence on the physico-chemical and functional properties of starch systems in surimi and its products. The viscoelastic moduli and the gel strength increased with increasing the starch concentration in surimi. The addition of combinations of cryoprotectants improved the rheological properties of surimi. The breaking force and deformation of the surimi gels increased significantly with the increasing addition of konjac glucomannan from 0 to 2% during frozen storage at \u221218 \u00b0C, which could also affect the textural properties of the surimigels. These phenomena could also increase the gel-forming ability and improve the strength and elasticity. HPMC is a useful gelation aid material to improve the flow properties of surimi."} {"text": "Peste des petits ruminants (PPR) is a highly contagious transboundary animal disease with a severe socio-economic impact on the livestock industry, particularly in poor countries where it is endemic. Full understanding of PPR virus (PPRV) pathobiology and molecular biology is critical for effective control and eradication of the disease. To achieve these goals, establishment of stable reverse genetics systems for PPRV would play a key role. Unfortunately, this powerful technology remains less accessible and poorly documented for PPRV. In this review, we discussed the current status of PPRV reverse genetics as well as the recent innovations and advances in the reverse genetics of other non-segmented negative-sense RNA viruses that could be applicable to PPRV. These strategies may contribute to the improvement of existing techniques and/or the development of new reverse genetics systems for PPRV. Morbillivirus in the family Paramyxoviridae , a member of genus et al.et al.et al.et al.et al.The availability of complete genome sequences from vaccine strains and field isolates for all four lineages of PPRV -mediated suppression of signaling lymphocytic activation molecule (SLAM) receptor lead to reduced PPRV titers was previously analyzed and the role of RNA-dependent RNA polymerase (RdRp) was determined in attempted reverse genetics involving the N, P, and L proteins as well as the PPRV leader and trailer for minigenome expression is a potential bottleneck for viable PPRV rescue after cell transfection, which may be confused with the CPE induced by the rescued virus. Indeed, the CMV promoter was successfully used to rescue NDV, although it was shown to be less efficient in low virulent strains is one of the longest paramyxoviruses after the recently described novel Feline morbillivirus . Consequently, compared with other morbilliviruses, isolation of a field PPRV strain can be difficult due to the lack of sensitive cell lines or inadequate conditions of transportation and stocking of samples , and Orf virus, usually exhibit poor growth et al.et al.et al.et al.et al.et al.et al. et al.et al.et al.et al.et al.et al.Conventional cell lines that exhibit high performance in growth and propagation of PPRV are rarely available , virus rescue relies on co-transfection into eukaryotic cells with at least four plasmids representing the full-length antigenomic sequence of the virus and helper plasmids independently cloned downstream of the T7 promoter in the presence of an exogenous T7 RNA polymerase source. 
Even though rescue efficiency for some viruses with cytoplasmic replication can be improved under the control of Pol I and Pol II, the T7 promoter is being gradually replaced by the CMV promoter, which is directly recognized by eukaryotic RNA polymerase cloned into one plasmid vector as illustrated in Fig.\u00a0et al. , b. In tl. et al., b.et al.The single-plasmid system is a helper plasmid-free-based system that may be driven by a T7 or CMV promoter with or without exogenous T7 RNA polymerase Fig.\u00a0D. In thiet al.et al.et al.et al.et al.et al.Years after the approval of a global strategy for the control and eradication of PPRV, there are still continued reports of new PPRV cases, even in unusual hosts, worldwide (Boussini"} {"text": "Pankhurst et al., Chem. Sci., 2016, DOI: 10.1039/c6sc02912d.Correction for \u2018Inner-sphere In the original manuscript, the name of one of the authors was spelt incorrectly. The correct spelling of the author name \u2018Carlos Alvarez Lamsfus\u2019 has now been clarified and the complete, corrected author list is presented herein.The Royal Society of Chemistry apologises for these errors and any consequent inconvenience to authors and readers."} {"text": "Hallberg et al. provide a limited literature review on the reversal of type 2 diabetes mellitus (T2DM) . InsulinAnother critique of the Hallberg et al. review is the omission of important limitations in the study by Saslow et al. . In the Regarding another study, Hallberg et al. describe an impressive 78% \u201creversal\u201d rate of diabetes at one year using a definition that excludes metformin ; howeverUltimately, the review by Hallberg et al. presents an overly enthusiastic narrative \u2013 and not systematic \u2013 review of VLCDs for the treatment of T2DM that does not reflect the entirety of the current evidence available. The authors omit a discussion of high-carbohydrate diets being used to treat and reverse T2DM and fail to mention key limitations of the studies cited on ketogenic diets. Moreover, the authors\u2019 working definition of \u201creversal\u201d does not necessarily reflect improvements in the underlying pathophysiology of T2DM, i.e., insulin resistance or carbohydrate intolerance."} {"text": "Diseases emerging from wildlife have been the source of many major human outbreaks. Predicting key sources of these outbreaks requires an understanding of the factors that explain pathogen diversity in reservoir species. Comparative methods are powerful tools for understanding variation in pathogen diversity and rely on correcting for phylogenetic relatedness among reservoir species. We reanalysed a previously published dataset, examining the relative effects of species' traits on patterns of viral diversity in bats and rodents. We expanded on prior work by using more highly resolved phylogenies for bats and rodents and incorporating a phylogenetically controlled principal components analysis. For rodents, sympatry and torpor use were important predictors of viral richness and, as previously reported, phylogeny had minimal impact in models. For bats, in contrast to prior work, we find that phylogeny does have an effect in models. Patterns of viral diversity in bats were related to geographical distribution (i.e. latitude and range size) and life history . 
However, the effects of these predictors were marginal relative to citation count, emphasizing that the ability to accurately assess reservoir status largely depends on sampling effort and highlighting the need for additional data in future comparative studies. This study provides key support for the idea that bats are special as zoonotic reservoirs, because they host more zoonotic viruses per species than rodents. It also identifies key bat ecological traits that correlate with increased zoonotic viral diversity, e.g. bat species with smaller litters, larger body masses, greater longevity, more litters per year and geographical distributions overlapping with many other bat species (sympatry) carry a greater number of zoonotic viruses. It has since provided the basis for additional studies . The autes e.g. ,20,23) a,23 aet aet al. found that phylogeny explained little of the residual variation in their models predicting viral diversity in bats and rodents. The authors note that this was surprising, considering the strength of phylogenetic signal in other species\u2019 traits [In interspecies comparisons, phylogenetic methods have become commonplace to control for the statistical non-independence of species due to shared common ancestry ,25. Clos\u2019 traits . Althoug\u2019 traits , the lac\u2019 traits is unexp\u2019 traits \u201336). Add\u2019 traits ,37. Theret al., we reanalysed the data used in their study, applying more recent comparative methodologies and phylogenies. Luis et al. [et al. [et al. using the same BE [Given the importance of phylogeny in other comparative studies involving bats, and the lack of phylogenetic signal found by Luis s et al. used phys et al. , impleme [et al. phylogen same BE phylogen same BE phylogen same BE , we also same BE and rode same BE that res2.2.1.et al. [\u03bb) model (see \u00a72.2 below) and, depending on the analysis, one of three phylogenetic trees [\u03bb) to calculate phylogenetic signal in the individual species' traits listed above (where \u03bb was restricted to 0 \u2264 \u03bb \u2264 1).All data used in our analyses were taken from electronic supplementary material, table S14 of Luis et al. . As in tet al. ), migratet al. ), geograet al. , to contet al. ,25. To cet al. with a lic trees ,41,42. Tic trees phylogenic trees , we alsoic trees , we also2.2.et al. [phytools v. 0.5\u201320 [Like Luis et al. , we perfet al. . Failinget al. . For eacet al. ; Shi & Ret al. , hereaftet al. , hereaft. 0.5\u201320 . For allet al. [caper v. 1.0.1 [et al. (i.e. To examine which species\u2019 traits correlate with either number of zoonotic viruses (i.e. those viruses capable of infecting humans) or total number of viruses in the bat and rodent species included in Luis et al. , we ran et al. with Pagv. 1.0.1 . We re-rc) scores and assumed that models with AICc scores that differed by less than or equal to 2 units had similar support [R2).We ranked all PGLS models using corrected Akaike information criterion , we als2.4.et al. [et al. [Multi-collinearity between explanatory variables is common in ecological datasets. Many statistical methods are sensitive to collinearity and failing to address the relationships between variables can bias statistical inference . To dealet al. employ aet al. . VIFs do [et al. while encaper package [phylolm package v. 2.6 [Prior to running PGLS models, we examined the distribution of our response variables for bats, rodents and the bat/rodent combined data. 
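The PGLS analyses summarised here were run in R with the caper and phylolm packages; for readers less familiar with the method, the sketch below illustrates its core in generic Python terms. The phylogeny enters as a species-by-species covariance matrix whose off-diagonal entries are scaled by Pagel's λ, and regression coefficients are then obtained by generalized least squares. The toy covariance matrix, trait values and λ value are hypothetical and are not those used in the study.

```python
import numpy as np

# Hypothetical phylogenetic covariance matrix for 4 species (shared branch
# lengths under Brownian motion); the diagonal is the root-to-tip distance.
C = np.array([
    [1.0, 0.6, 0.2, 0.2],
    [0.6, 1.0, 0.2, 0.2],
    [0.2, 0.2, 1.0, 0.5],
    [0.2, 0.2, 0.5, 1.0],
])

def pagel_lambda(C, lam):
    """Scale off-diagonal covariances by lambda (0 = star phylogeny, 1 = Brownian motion)."""
    C_lam = lam * C
    np.fill_diagonal(C_lam, np.diag(C))
    return C_lam

# Hypothetical data: log viral richness regressed on log citation count.
y = np.log(np.array([12.0, 9.0, 3.0, 5.0]))                       # response
X = np.column_stack([np.ones(4),
                     np.log(np.array([80.0, 60.0, 15.0, 30.0]))])  # intercept + predictor

lam = 0.5                       # in practice lambda is estimated by maximum likelihood
V = pagel_lambda(C, lam)
Vinv = np.linalg.inv(V)

# GLS estimate: beta = (X' V^-1 X)^-1 X' V^-1 y
beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print("PGLS coefficients (intercept, slope):", beta)
```

Candidate models fitted in this way would then be ranked by AICc, with models within 2 units of the best-supported model treated as having similar support, as described above.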
We log-transformed these response variables to reduce skewness and checked subsequent PGLS models to ensure that residuals were homogeneous and normally distributed . We then package used aboe v. 2.6 to obtai3.3.1.et al. using the BE phylogenS13 from ).3.2.et al. where the first three principal components accounted for 88% of the variance in bat life-history strategies and 93% of the variance in rodent life-history strategies [In all of our phylogenetic PCAs, the first phylogenetic principal component (pPC1) explained more than 99% of the life-history trait variation across species for bats, rodents and the bat/rodent combined data, regardless of the phylogeny used . This is in contrast with Luis rategies . The firrategies . Using trategies tree, pPet al. data, using the same phylogeny as in the original paper [et al. [Reanalysis of the Luis al paper , but incal paper , our bes [et al. , which cFor rodents, using , our besReanalysis of the combined bat/rodent data using altered 3.3.As outlined above in \u00a73.2, pPC1 explained more than 99% of the life-history trait variation across species for bats, rodents and the bat/rodent combined data using the updated S&R and F&S et al. found sympatry, citations and PC1 to be important [For bats, using the more recent S&R phylogenmportant . Our besmportant tree.et al. [For rodents, using the more recent F&S phylogenet al. , but in contrast to above, did not explain any of the residual variation in the PGLS model for total viral richness .When we adopted an alternative modelling framework using VIFs to remove collinear variables and including all remaining traits in PGLS models to examine relative effect sizes, citation count was often the only significant predictor in models . However, in the PGLS model for total number of viruses carried by bats, both citation count and litter size were significant . Citation count had a positive relationship with total viral diversity, while litter size had a negative relationship . For num\u03bb < 0.0001; electronic supplementary material, tables S30 and S31).For rodents, citation count was the only significant predictor in models for number of zoonotic and total number of viruses carried by species (tables S30\u2013S31). In both models, citation count had a positive relationship with rodent viral diversity . Additio\u03bb = 0.1762; electronic supplementary material, table S34 and \u03bb = 0.2735; electronic supplementary material, table S35).For the combined bat and rodent PGLS model examining total viral richness, citation count was the only significant predictor and had a positive relationship with total viral richness . For number of zoonotic viruses in the bat/rodent combined data, citations, sympatry and no torpor use were all significant terms . All of these traits had a positive relationship with number of zoonotic viruses . In both PGLS models, phylogeny explained some additional residual variation . We found phylogeny had little effect in only our PGLS model for total viral richness when all ecological predictors were included . Previous work has found that phylogeny explains little residual variance in the relationship between viral richness and ecological traits in bats ,20,21. HFor rodents, our reanalysis did not alter the importance of phylogeny, bolstering the inference that species relationships are less important for explaining residual variation in rodent reservoir status. 
Although it is difficult to postulate why phylogeny does not impact the residual error structure in our models for rodents, we can speculate on what may drive lack of signal within patterns of viral richness and ecological traits in rodents . Given the challenges associated with extensively characterizing viral diversity, to date researchers have probably sampled only a small proportion of the viruses harboured by wild mammals . ConsideOur results highlight that the phylogenetic tree used for comparative inference can impact species' trait correlates of viral infection. Although the BE supertreWith a phylogenetic principal components analysis (pPCA), we were able to explain most of the variation (more than 99%) in the life-history traits of bats and rodents with a single phylogenetic principal component (pPC1). Again, this emphasizes the need to appropriately account for species' relationships throughout comparative analyses . Furtheret al. [et al. note there is a current lack of understanding about what may drive the relationship between torpor use and viral persistence in reservoir species [Regardless, both our best and full model approaches confirm the potential importance of ecological traits for predicting patterns of viral richness. The majority of our models contained species' sympatry, which prior work has shown correlates positively with patterns of parasite richness \u201323,64,68et al. also finet al. . Reducedet al. ,72. In fet al. . Howeveret al. . This, pet al. may limi species , highligAlthough we may be able to make broad inferences about the potential importance of ecological traits for driving patterns of viral richness in reservoir species, our reanalysis emphasizes that patterns of viral diversity are largely driven by citation count. In most of our PGLS regressions, citation count was often the only significant term and had a much larger effect size relative to ecological traits. This was particularly true for bats, where ecological traits exhibited marginal effects relative to citation count . Further, in our PGLS models containing all ecological predictors, although citation count did not always have the largest standardized effect size, its effect was consistent and not as variable as other traits (i.e. lack of torpor use or IUCN status) (et al. [et al. [While there are barriers to including more data in comparative analyses, such as a lack of natural history information for many species, the importance of species-level citation count brings to light that some of the patterns in this dataset could also be artefacts of the subset of species sampled. Bats and rodents are the two most speciose mammalian orders, comprising well over 60% of living mammalian diversity (approx. 2277 species of rodents and appret al. used a n [et al. ,68 shoulWe examined how methodological choices influence the outcomes of phylogenetic comparative analyses examining species' trait correlates of viral diversity. However, modern phylogenetic methods allow us to do much more than use phylogeny as a statistical control. They allow us to appreciate the evolutionary history of species, better understand patterns of biological diversity, trace character traits back through evolutionary time and make inferences about the evolution of those traits. 
For example, studies have used phylogeny to examine the risk of host pathogen shifts in primates and pred"} {"text": "Aquatic and terrestrial environment and human health have been seriously threatened with the release of metal-containing wastewater by the rapid growth in the industry. There are various methods which have been used for removal of ions from the environment, such as membrane filtration, ion exchange, membrane assisted liquid extraction and adsorption. As a sort of special innovation, a polymerization technique, namely molecular imprinting is carried out by specific identification for the target by mixing it with a functional monomer. After the polymerization occurred, the target ion can be removed with suitable methods. At the end of this process, specific cavities, namely binding sites, are able to recognize target ions selectively. However, the selectivity of the molecularly imprinted polymer is variable not only because of the type of ligand but also charge, size coordination number, and geometry of the target ion. In this review, metal ion-imprinted polymeric materials that can be applied for metal ion removal from different sources are discussed and exemplified briefly with different metal ions. Contamination of water with heavy metal ions affects the ecosystem seriously and this creates important problems [Pollution of water is a significant worldwide threat. Various kinds of chemical pollutants have been discharged to the environmental water by different industries and agricultural applications . Modern problems . The exiproblems .There are different methods used for the removal of metal ions from water and wastewater such as membrane processes, chemical precipitation, extraction, ion exchange and adsorption . MoleculThe form of the molecularly imprinted polymer can be a micro-/nanosized particle, hydrogel, cryogel, or monolith with template specific binding sites . The binMolecularly imprinted polymers can be prepared for any molecule depending on the application area. More than 10,000 molecules and biological structures like metal ions, hormones, proteins and cells have been imprinted successfully ,13,14,15Here, molecularly imprinted polymer applications for different metal ions are reviewed and various examples with metal ions discussed according to their contributions to the literature. Mercury is a metal that occurs naturally and is released primarily through geothermal activity on Earth . The toxZhang et al. synthesiMergola et al. publishe2/g with a size range of 63\u2013140 \u00b5m. They also observed a maximum adsorption capacity of 0.45 mg/g. According to the results, they showed that the mercury (II)-imprinted polymeric beads could be used several times without reducing their adsorption capacities. In another study, Anda\u00e7 et al. preparedXu et al. synthesiCopper is also a common toxic ion and excessive amounts of copper are dangerous to the environment and organisms. Copper pollution affects the ecosystem of urban areas. In addition, copper commonly affects the chemoreception and chemosensory abilities of aquatic animals which underlie key interactions including finding prey, avoiding predators, and detecting conspecifics . TherefoA multi-ion imprinting method was proposed for pre-concentration and removal of different ions (copper (II), mercury (II), cadmium (II) and nickel (II)) by Fu et al. . The phyRen et al. preparedKong et al. developeLead arises commonly associated with zinc, copper and silver ores in the environment. 
It is generally employed for several industrial applications such as paint, cables, pesticides and pipelines and the main anthropogenic input is through the fossil fuel of combustion engines . Among oMishra et al. reportedDenizli and his research group synthesized amino acid-based lead (II)-imprinted polymeric cryogels . They caEsen et al. preparedR2 = 0.9998) was obtained in the range of 0.2\u221250 \u03bcg/L. The limit of detection and quantification values was calculated as 0.06 and 0.19 \u03bcg/L, respectively. Real sample studies were also performed with lake and tap water, and according to the results, recovery values varying from 95.5 to 104.6% were obtained.A highly selective lead (II)-imprinted polymer was prepared by Cai et al. based onCadmium is a poisonous and cancer-causing metal that can happen as a food contaminant and worldwide pollutant. Long-term occupational exposure to high cadmium concentrations may cause lung cancer, kidney and bone damages and hematuria . Varianc2/g with a size range of 63\u2013140 \u00b5m in diameter (Cadmium (II)-imprinted polymeric beads were prepared for removal of cadmium ions from cadmium-overdosed human plasma by Anda\u00e7 et al. . They mediameter . They alRahangdale et al. reportedCadmium (II)-imprinted polymeric materials were also prepared by Li et al. . They prChromium is also extensively used in different industries like leather tanning, photography and metal cleaning. Industrial wastewater containing heavy metal ions discharged to the environment and their accumulation is an important source of water pollution . ChromiuN-methacryloyl-(l)-histidine was polymerized as seen in A molecularly imprinted polymeric adsorbent for chromium (III) analysis was prepared by Birlik et al. . First, N-methacryloylamido histidine in order to prepare a pre-complex and then chromium (VI)-imprinted polymeric nanoparticles were synthesized using the surfactant-free emulsion polymerization. The particle size was measured to be 155.3 nm. Selectivity studies were performed with chromium (III) ion and according to the results, chromium (VI)-imprinted polymeric nanoparticles showed high affinity to chromium (VI) ion. The chromium (VI)-imprinted polymeric nanoparticles were used several times without decreasing their chromium (VI) adsorption capacities. Another study about chromium (VI)-imprinted polymeric nanoparticles was conducted by Uygun et al. . These rNickel is a silvery-white transition metal that takes on a high polish. The toxicity of nickel depends on the way of its exposure and the solubility of the compound like other metals .Zhou et al. preparedErs\u00f6z et al. preparedIn a study conducted by Tamahkar et al. , nickel In addition to the metal ions mentioned above, there are also other metals like manganese, aluminum, and cobalt which cause environmental pollution as well. These metals are briefly mentioned in this section. Manganese is a metal ion which is used in electrochemical, chemical, food and pharmaceutical applications. It is also used in ferrous metallurgy generally. Despite the fact that it is fundamental for human life, at levels exceeding 0.1 mg/L, the existence of manganese (II) in drinking water over the limits may cause accumulation and impact the nervous system .Khajeh-Sanchooli et al. prepared2/g with a size range of 63\u2013140 \u03bcm. Elemental analysis was also conducted and the results showed that the aluminum (III)-imprinted polymeric beads contained 640 \u03bcmol/g. The maximum adsorption capacity was 122.9 \u03bcmol/g. 
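The adsorption capacities quoted throughout this section (in mg/g or μmol/g) follow from a simple mass balance on a batch experiment; a minimal sketch is given below. The concentrations, solution volume, sorbent mass and the imprinted/non-imprinted comparison are hypothetical and are not taken from any of the cited studies.

```python
# Batch adsorption capacity from a mass balance:
#   q_e = (C_0 - C_e) * V / m
# where C_0 and C_e are the initial and equilibrium metal-ion concentrations,
# V is the solution volume and m is the mass of imprinted polymer.

def adsorption_capacity(c0_mg_per_L, ce_mg_per_L, volume_L, sorbent_mass_g):
    """Equilibrium uptake q_e in mg of ion per g of sorbent."""
    return (c0_mg_per_L - ce_mg_per_L) * volume_L / sorbent_mass_g

# Hypothetical example: 50 mL of a 20 mg/L Pb(II) solution contacted with
# 0.1 g of ion-imprinted polymer, leaving 4 mg/L in solution at equilibrium.
q_e = adsorption_capacity(20.0, 4.0, 0.050, 0.1)
print(f"q_e = {q_e:.1f} mg/g")          # -> 8.0 mg/g

# Selectivity of imprinted (MIP) over non-imprinted (NIP) material is often
# summarised as an imprinting factor, the ratio of their capacities for the
# target ion (hypothetical values shown here).
q_mip, q_nip = 8.0, 2.5
print(f"imprinting factor = {q_mip / q_nip:.1f}")
```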
The aluminum (III)-imprinted polymeric beads can be used numerous times and the results showed that there is no significant decrease in their adsorption capacities.In a study conducted by Anda\u00e7 et al. , aluminuk = 4.17) is higher than the nonimprinted polymer (k = 0.74). These finding suggest that this methodology will introduce new opportunities in the area of removing metal ions and radioactive nuclides.A cobalt (II)-imprinted polymeric material was synthesized by Yuan et al. in order2/g, while the second one was 78.6 m2/g. The decrease of adsorption capacities of the cryogels were calculated as 38.5% for copper (II), 39.1% for lead (II), 66.9% for zinc (II) and 69.9% for cadmium (II). The maximum adsorption capacities of the cryogel were found for lead (II), cadmium (II), zinc (II) and copper (II) to be 7620, 5800, 4340 and 2540 \u00b5g/g, respectively. According to the results, ion-imprinted cryogels could be reused without a critical decrease in the adsorption capacity even after ten adsorption\u2013desorption processes. All these studies have been summarized according to different parameters in Tekin et al. preparedExtraction and measurement of metal ions from the aqueous environment remains a serious problem because of their toxicity and cancer risk. Because of this reason, ion-imprinted polymeric materials have been developed further over the last two decades. Especially, they have gained great attention in many areas of science such as chemistry, physics, biology, biochemistry and biotechnology. The most important reason is owing to its selectivity and affinity to the target molecules . Molecul"} {"text": "AIAA J.56, 346\u2013380) is undertaken to assess the progress and overall contributions of LES towards a better understanding of jet noise. In particular, we stress the meshing, numerical and modelling advances which enable detailed geometric representation of nozzle shape variations intended to impact the noise radiation, and sufficiently accurate capturing of the turbulent boundary layer at the nozzle exit. Examples of how LES is currently being used to complement experiments for challenging conditions (such as highly heated pressure-mismatched jets with afterburners) and guide jet modelling efforts are highlighted. Some of the physical insights gained from these numerical studies are discussed, in particular on crackle, screech and shock-associated noise, impingement tones, acoustic analogy models, wavepackets dynamics and resonant acoustic waves within the jet core. We close with some perspectives on the remaining challenges and upcoming opportunities for future applications.In the last decade, many research groups have reported predictions of jet noise using high-fidelity large-eddy simulations (LES) of the turbulent jet flow and these methods are beginning to be used more broadly. A brief overview of the publications since the review by Bodony & Lele (2008, This article is part of the theme issue \u2018Frontiers of aeroacoustics research: theory, computation and experiment\u2019. It is now becoming part of the tool set being used outside of academic research, i.e. in research and development efforts directed at design and implementation of concepts aiming to reduce the emitted noise. While the cost of jet noise predictions using LES remains relatively high, the computations have leveraged the continued advancements in high-performance computing and numerical methods, and, as a result, significant strides have been made during the last 10\u201315 years. 
These have brought improved quantitative accuracy in noise predictions both in terms of the overall sound pressure level (OASPL) directivity and spectral shape for a given observer direction. The purpose of this article is to present a concise overview of the progress made using LES and draw attention to some areas where LES is now contributing to the field of jet aeroacoustics. This is not an exhaustive review\u2014only the most salient aspects are discussed. In combination with other broader reviews of aeroacoustics and jet noise , it is h2.(a)quasi-realistic jet mean flow profiles near the nozzle exit. The turbulence resolving simulations started with the emulated mean flow seeded with perturbations using various approaches aiming to capture realistic jet flow turbulence. It was hoped that as the flow evolved, say after the first jet diameter or so, the physical discrepancy associated with not directly representing the nozzle and the boundary layer state at the nozzle exit accurately would be reduced, allowing comparison with laboratory measurements for the jet flow and its near- and far-field noise. As reviewed by Bodony & Lele [The jet LES studies available in 2008 used several pragmatic compromises . Most imy & Lele , achieviRe\u2009=\u2009UjD/\u03bd simulated was reduced to be in the 0.1 to 5\u2009\u00d7\u2009105 range in most of these early jet studies [Uj is the jet velocity and D is the nozzle exit diameter. Without nozzle geometry, there was no high-Reynolds number wall-bounded flow to consider. The choice of reduced Re was made to limit the modelling contributions, based on the argument that independence on Reynolds number in the jet plume is reached for Re\u2265100\u2009000 [Re increases, the large scale part remains relatively fixed, i.e. scales with D. At the same time, the dissipative scales shift to smaller spatial scales and broaden the Strouhal number St\u2009=\u2009fD/Uj range of noise to higher frequencies f, but may not significantly change the main jet characteristics such as peak radiation levels, OASPL directivity, etc. This does not mean that Re is an unimportant parameter, but rather that, above a certain value, the Reynolds number mainly affects the jet plume and radiated noise indirectly through the changes in nozzle-exit boundary layer state and early shear layer development. Furthermore, at high Reynolds numbers, the turbulence shows higher internal intermittency, i.e. the turbulent kinetic energy (TKE) dissipation rate, Kolmogorov scales etc. fluctuate more in different realizations of the flow. The upshot of this is that the probability of extreme events (which have quite a low probability), as well as high-order statistics of the turbulent flow and noise, may change with Reynolds number. This must be kept in mind when studying intermittent phenomena, such as crackle, or wave-packet intermittency.Another important compromise was that the Reynolds number studies , where U\u2265100\u2009000 or 400\u20090\u2265100\u2009000 . While tRe\u2009=\u2009106, which means that both the boundary layers and shear layers are likely to be transitional or turbulent, even under favourable pressure gradient. However, inclusion of the physical geometry in high Re flows leads to additional meshing and modelling challenges that need to be addressed without making the computational costs prohibitive. First, how to robustly generate tractable grids that appropriately capture the relevant geometric details for complex realistic nozzles? 
Second, how to efficiently resolve and/or model the thin turbulent boundary layer flow inside the nozzle? For laboratory jets, significant research on both topics has been undertaken and is discussed in the next two sections. As remarked by Freund [Rather than discuss these limitations of the previous tools, we will focus on the elements which have allowed the physically realistic simulations in recent years. Nowadays, it is well recognized that the state of the nozzle-exit boundary layer is an important parameter of the jet flow development and noise radiation. Therefore, most current simulations explicitly include a nozzle at the inlet of the computational domain and are performed at realistic Reynolds number. Whether it is for laboratory jet flows or full-scale nozzles at practical operating conditions, the nozzle-diameter-based Reynolds number is typically reported over y Freund , these e(b)The choices of mesh topology and numerical discretization are closely linked and directly steer the dispersion and dissipation errors, especially important in aeroacoustics. Historically, four approaches have been widely used to address the mesh-generation challenge in computational fluid dynamics: multi-block structured meshes, overset grids, Cartesian meshes and generalized unstructured grids . Structured meshes with high-order spatial discretization gained early popularity in jet noise simulations \u201313, initet al. have pursued detached-eddy simulations (DES), LES and hybrid methods for jet aeroacoustics and broader applications in a finite volume structured grid solver \u2018NTS\u2019 with multi-block overlapping grids. They use discretizations which blend central differencing with upwinding along with different levels of turbulence scale resolving models. A different finite element based unstructured discretization underlies the \u2018JENRE\u2019 solver [et al. [et al. [et al. [Further progress in jet computations has resulted from the use of general unstructured meshes. Shur \u2019 solver at Naval\u2019 solver ,28, and \u2019 solver to evalu [et al. present [et al. have dev [et al. reported [et al. .Similarly to the NTS solver, the early version of the compressible flow solver \u2018Charles\u2019 developed at Cascade Technologies used a blend of relatively non-dissipative central flux and more dissipative upwind flux on hexahedral-dominant unstructured grids with mesh adaptation. The method has successfully provided accurate jet noise predictions for a range of relatively simple geometrical configurations , but sti(c)et al. [grey zone where the flow transitions from RANS to LES mode.The need to accurately capture the thin boundary layers at the nozzle exit in their natural state (laminar or turbulent) was deemed as a critical future step by Bodony & Lele . A varieet al. have devet al. [Re\u2009=\u2009105. The initially laminar boundary layers were tripped inside the pipe nozzle by adding low-level random disturbances uncorrelated in the azimuthal direction, with specific amplitudes chosen to achieve targeted levels of peak turbulence at the nozzle exit. As an alternative to this tuned numerical forcing, other tripping procedures inspired by roughness strips used in experiments have been suggested, including geometrical tripping [When the nozzle interior flow is directly computed in LES, it remains impractical to fully resolve the high Reynolds number boundary layer turbulence. 
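The non-dimensional groups used throughout this overview follow the standard jet definitions recalled above: the nozzle-diameter Reynolds number Re = UjD/ν, the acoustic Mach number M = Uj/c∞ and the Strouhal number St = fD/Uj. The short sketch below simply evaluates them for an illustrative laboratory-scale jet; the dimensional values are hypothetical and chosen only to give order-of-magnitude numbers.

```python
import numpy as np

# Illustrative (hypothetical) laboratory jet conditions.
U_j = 310.0        # jet exit velocity, m/s
D = 0.05           # nozzle exit diameter, m
nu = 1.5e-5        # kinematic viscosity of air, m^2/s
c_inf = 343.0      # ambient speed of sound, m/s

Re = U_j * D / nu          # Reynolds number based on nozzle diameter
M = U_j / c_inf            # acoustic Mach number
print(f"Re = {Re:.2e}, M = {M:.2f}")

# Strouhal number St = f D / U_j for a band of frequencies; for subsonic
# round jets the peak far-field noise is typically found near St ~ 0.2-0.3.
f = np.array([500.0, 1500.0, 5000.0])   # Hz
St = f * D / U_j
print("St =", np.round(St, 2))
```

Such back-of-the-envelope numbers make clear why matching laboratory Reynolds numbers of order 10^6 while still resolving the relevant Strouhal-number range drives the grid and wall-modelling requirements discussed in this section.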
In many older studies, the nozzle boundary layers were assumed to be laminar, the Reynolds number was reduced and disturbances were introduced at the inlet or insidet al. ,15 investripping , where atripping , where tRe with initially laminar boundary layers [et al. [et al. [M\u2009=\u2009Uj/c\u221e\u2009=\u20090.9 turbulent jet at Re\u2009=\u2009106 and led to fully turbulent nozzle-exit boundary layers with significant improvement of the flow field and noise predictions. Subsequently, the study was extended to a range of Mach M\u2009=\u20090.4, 0.7 and 0.8 for the same converging-straight pipe nozzle, with similar accuracy in flow and noise predictions. All the simulations were performed in close collaboration with a companion experiment at Pprime Institute in Poitiers (France), and matched the experimental Reynolds number. Combined with appropriate resolution in the shear layers and jet plume (see next section), these modelling approaches resulted in sub-dB prediction accuracy for most relevant inlet angles \u03d5 and frequencies St\u2009=\u2009fD/Uj .While the early simulations with the Charles solver were also performed at reduced y layers , later ey layers . The mody layers \u201342 to mo [et al. , wall-mo [et al. and has [et al. \u201347. As s [et al. , the com(i)u\u2032 increase monotonically with distance as the shear layer spreads. For laminar shear layer, however, an overshoot in u\u2032 along the lipline occurs due to transition and then approaches values consistent with turbulent shear layer self-similarity. Br\u00e8s et al. [As already discussed, the turbulent jet flow which emerges from a nozzle begins with relatively thin shear layers. If the nozzle interior boundary layer is turbulent the near-lip shear layer is naturally turbulent. For laminar exit flow the shear layer becomes turbulent rapidly due to instabilities whose wavelength scales with the initial shear layer (momentum) thickness. If the flow at nozzle exit is already turbulent, the TKE and streamwise rms velocity s et al. report ts et al. ,15 in ths et al. . Therefo(ii)Prediction of far-field radiated noise from jet flow LES requires a hybrid approach, where the important scales of turbulence within the noise-producing region of the jet are resolved, and the propagation of the small amplitude acoustic fluctuations from the near-field source region to the far field is computed analytically. The Ffowcs Williams\u2013Hawkings (FW\u2013H) equation is one oacoustic data recorded on that distant surface might be corrupted by significant numerical errors. In particular, Bodony & Lele [et al. [et al. [et al. [In the FW\u2013H approach, the turbulent flow volume is surrounded by an acoustic data surface, i.e. FW\u2013H surface, on which the time-varying data is saved as the jet LES is computed, and the quadrupole contributions from the volume-distributed noise sources outside of the FW\u2013H surface are neglected. In a post-processing step, the far-field radiated sound is calculated using the FW\u2013H surface data via analytically known Green's function for a stationary or uniformly moving ambient medium. The placement of FW\u2013H surface, and the spatial and temporal resolution of the data saved on FH-W surface are important. If the FW\u2013H surface lies inside the turbulent flow, the data collected on it do not include the sound produced by the turbulent flow lying outside of it and (vigorous) crossing of turbulence across the surface is a contributor to spurious acoustics. 
On the other hand, if the FW\u2013H surface is placed too far away from turbulent jet, predictions based on it might also be incorrect because the y & Lele pointed [et al. introduc [et al. ,50 and i [et al. proposed3.in situ or laboratory testing.One way that LES are contributing to a better understanding of jet noise is as a complement to experiments, in particular, for tactical exhaust systems. Detailed measurements in full-scale engines are costly and difficult, and most laboratory facilities are limited to smaller-scale jets at lower temperatures. For such high-speed heated jet from realistic nozzle configurations, LES can arguably provide insight on the jet flow field and acoustic field in a more flexible and cost-effective way than most (a)To investigate the impact of inlet temperature non-uniformity on the jet flow and noise, LES were performed with the Charles solver for heated over-expanded supersonic jets issued from a faceted military-style nozzle . The num\u03d5. The results of this proof-of-concept LES study indicate that there is merit to the idea pioneered in experiments at lower temperature when taking into consideration more realistic conditions. Much work remains to fully analyse the LES data generated by this investigation, along with more concepts worthy of examination to improve the simulations and noise mitigation.(b)et al. [et al. [et al. have shown that crackle is emitted as a weak shocklet, whose shock-like signature can be traced all the way to the eddying motions in the jet shear layer. They also noted that high-frequency shocklets are emitted from the near-nozzle shear layer, consistent with Mach wave emission from eddies moving at Ortel convective Mach number Mco\u2009=\u2009(1\u2009+\u2009Mj)/(1\u2009+\u2009c\u221e/cj); however, the intense crackle emissions were found to move more rapidly and were relatively infrequent as shown in et al. [Ffowcs Williams et al. identifiet al. ; its souet al. ,57, the [et al. ,59 is a n et al. stress tn et al. of F-35 (c)Harper-Bourne & Fisher interpreet al. [et al. [et al. [In recent years LES of hot supersonic jets including over-expanded and under-expanded conditions have been investigated by many groups. These studies have included axisymmetric, chevroned and rectangular nozzles and cold, heated and highly heated conditions. While most effort has been devoted to the validation of the LES predictions , some anet al. studied et al. , althouget al. analysedet al. better c [et al. ,70 study [et al. applied Tam ,73, and et al. [et al. [Recent research on screech and other resonant acoustic phenomena see \u00a7\u00a7b, such aet al. in jet s [et al. ,87 detai4.Since Lighthill introduc(a)As reviewed by Jordan & Colonius , there het al. [One spectral approach ideally suited for turbulent jets is the frequency domain version of spectral proper orthogonal decomposition , referreet al. , SPOD \u2018iet al. , and ideresolvent analysis has emerged from dynamical system theory. This analysis of turbulent mean flows is based on the assumption that large-scale coherent structures can be modelled as responses of a linear operator to stochastic forcing [Another frequency-domain technique called forcing . Most re forcing and demo forcing .M\u2009=\u20091.5 case seen in figure 7g\u2013j, which are at the root of aft-angle supersonic jet noise, and the waves highlighted in figure 7m,n, which identify different acoustic resonance phenomena M\u2009=\u20090.9 LES database [et al. [et al. [et al. [M\u2009<\u20091.0. 
Indeed, at M\u2009=\u20090.9, both the upstream and downstream-propagating waves were observed in the jet core in the simulations, as well as the corresponding discrete tones in the near-field acoustic pressure very close to the nozzle exit for the LES and the companion experiments. While it had been postulated [Besides its contributions towards the understanding of the most energetic large scale coherent wavepacket structures that dominate the shear layer, the database was also [et al. and Schm [et al. , these w [et al. developestulated that comstulated et al. [LES-informed modelling of aeroacoustic sources, and in particular for jet noise sources, has been anticipated at least since the 1980s . The devet al. and the et al. ,109 and et al. , i.e. thet al. and otheet al. ,112 usin5.Looking back at the list of key open issues identified by Bodony & Lele in theirRe wall-bounded turbulence such as nozzle-interior boundary layers. Turning off the SGS model yielded erroneous profiles for turbulent fluctuations and also incorrect boundary layer profile [For all mesh topologies, the interplay between the modelling choices regarding the treatment of subfilter scale motions and the numerical bandwidth of the discretization schemes used, determines the effective bandwidth of the simulation results. While comprehensive cross-comparisons across different LES solvers and modelling of fine-scale motions are still lacking, some observations can still be made. Simulations with schemes employing minimum numerical dissipation require SGS modelling to prevent tail up of energy at the shortest scales . In this profile . Numeric profile ,115. Fur profile ,116, andSt\u2009=\u20091.5\u2009\u2212\u20093. While this frequency range is sufficient for supporting the modelling effort of the peak noise radiation, acoustic resonance and wavepacket dynamics [et al. [M\u2009=\u20090.9 turbulent jet previously discussed and showed that the frequency limit was increased from St\u2009\u2248\u20092 to 4 by doubling the resolution in the jet plume in all directions. For the refined LES case, the unstructured mesh contained 69 million cells (up from 16 million), the simulation time step was decreased by half (because of CFL constraints) and the CPU cost was increased 10-fold. Overall, this approach comes with a significant increase in computational cost, only partly mitigated by advancements in high-performance computing, and it is still unclear if tractable resolution can yield sufficiently high bandwidth predictions for practical applications. As an alternative to (or in combination with) finer grids, Bodony & Lele [generalized acoustic analogy approach [The other concern raised by Bodony & Lele was thatdynamics ,89,99, l [et al. conducteapproach applied approach ,118. Theapproach and stocapproach ,121 coulIn terms of research opportunities, prediction and reduction of the noise radiated by hot, supersonic jets has been a traditional research driver. Recent studies of strongly heated supersonic jets ,122 undeet al. [The trend towards higher bypass ratio, larger engines for commercial aircraft has elevated the importance of installation effects. Tyacke et al. have devet al. , non-ciret al. , multi-sPrediction of the vibro-acoustic environment associated with rocket launch has driven considerable research in recent years . Such caFrontier system will feature future-generation AMD CPUs and Radeon GPUs. 
Some jet simulations have already been performed on GPUs by Markesteijn, Semiletov & Karabasov (see [In terms of high-performance computing, the current trend is towards mixed architectures and incorporating some GPU computing for the jet simulation will likely be necessary to leverage these enhancements. Indeed, the Oak Ridge National Laboratory has recently announced the selection of Cray and AMD to provide the laboratory with its first exascale supercomputer for 2021 deployment. Poised to deliver greater than 1.5 exaflops of HPC and AI processing performance, the sov (see and the sov (see with ref6.We have attempted to provide a concise overview of the developments in jet LES during the last decade since the review by Bodony & Lele . Much pr"} {"text": "S-depalmitoylases in live cells and tissues\u2019 by Michael W. Beck et al., Chem. Sci., 2017, DOI: ; 10.1039/c7sc02805aCorrection for \u2018Michael addition-based probes for ratiometric fluorescence imaging of protein The authors regret that there was an error in ref. 30 of the original article. The correct reference, including the two relevant journals, is presented herein as The Royal Society of Chemistry apologises for these errors and any consequent inconvenience to authors and readers."} {"text": "Replying to S. Tiegs et al. Nature Communications 10.1038/s41467-019-13303-1 (2019)1 highlight the significance and relevance of the findings of Comer-Warner et al.2 on greenhouse-gas emissions from streambed sediments but raise questions about some aspects of the experimental design. We support their call for more detailed field and laboratory-based studies on this subject. However, we believe that their concerns relate to uncertainties and limitations in the experimental design that were discussed explicitly in the original paper (and accompanying transparent peer review process\u2014available online), or represent criticisms related to highly improbable minor anomalies that may unnecessarily dismiss experimental results as discussed below.Tiegs et al.1 , which propagate the arbitrary dismissal of important research due to philosophical criticisms of pseudoreplication7. For instance, Davies et al.4 have shown that the exact formation of suitable hypotheses based on mechanistic understanding can account for pseudoreplication within experimental design. Without such underlying hypotheses the number of necessary potential controls are infinite and hence infeasible. Furthermore, problems of pseudoreplication may be reduced if appropriate statistics addressing the pseudoreplication are used, for example through inclusion as random effects in linear mixed effects models8. While we welcome the contribution of Tiegs et al.1 to this longstanding discourse, our response aims particularly at those elements that advance the discussion beyond a repetition of previous pseudoreplication controversies7.It should be noted in a broader context that previous compelling articles have challenged arguments aligned to those of Tiegs et al.2 was based on a hypothetico-deductive approach that focused on the key predictors (i.e. controls and proxies for processes) derived from state-of-the-art understanding of our target variables (CO2 and CH4 emissions). 
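As a concrete illustration of the statistical point made above — that pseudoreplication concerns can be mitigated by treating grouping structure (for example incubation batch) as a random effect in a linear mixed-effects model — a minimal sketch using statsmodels is given below. The variable names and data are hypothetical and are not taken from Comer-Warner et al. or from the reply.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical dataset: CO2 flux from sediments incubated at several
# temperatures, with samples processed in weekly batches (the potential
# source of pseudoreplication discussed above).
n = 120
df = pd.DataFrame({
    "temperature": rng.choice([5.0, 10.0, 15.0, 20.0], size=n),
    "batch": rng.choice(["week1", "week2", "week3", "week4"], size=n),
})
batch_offset = df["batch"].map({"week1": 0.0, "week2": 0.3, "week3": -0.2, "week4": 0.1})
df["co2_flux"] = 1.0 + 0.15 * df["temperature"] + batch_offset + rng.normal(0, 0.5, n)

# Linear mixed-effects model: temperature as a fixed effect,
# batch (incubation week) as a random intercept.
model = smf.mixedlm("co2_flux ~ temperature", data=df, groups=df["batch"])
result = model.fit()
print(result.summary())
```

Fitting the grouping factor as a random intercept in this way lets the temperature effect be estimated while any batch-to-batch variation is absorbed by the random term, which is the mechanism by which mixed models address the pseudoreplication concern raised above.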
This research design provided a framework for robust statistical analysis through clearly defined hypothesised processes and mechanisms in order to support meaningful statistical analyses.The research design of Comer-Warner et al.1 express concerns that Comer-Warner et al.2 included all samples within the same batch in the same incubator. We did not account for week of incubation (i.e. batch) within the statistical model as we do not consider that there is a reasonable mechanism by which sample incubation week was impacted by this experimental approach and the associated consistent sample storage over the course of batch incubations. We find Tiegs et al.1 argumentation that batch-specific conditions \u201clikely differed in unknown ways\u201d from batches tested in other weeks highly unconvincing. Moreover, we do not consider isolative segregation to have any discernible impact on the experimental results, as discussed below. In fact, we are convinced that the results of our experiments are more robust by exposing all experimental temperature treatments to the same incubation environment, rather than introducing unnecessary uncertainty and risk of technical failure or variance in performance through the use of different incubators, as suggested by Tiegs et al.1. Notably, the differences between the temperature at the top and bottom of the incubator were very small . Furthermore, non-linearity and threshold responses of greenhouse-gas fluxes to temperature have previously been found in a variety of ecosystems, e.g., refs. 14.Additionally, Tiegs et al."} {"text": "Cohen AJ, Brauer M, Burnett R, et al. Estimates and 25-year trends of the global burden of disease attributable to ambient air pollution: an analysis of data from the Global Burden of Diseases Study 2015. Lancet 389: 1907\u2013182016; \u2014In this Article, the following changes have been made to the supplementary appendix. In table 1, for the Pinault et al11 study, the country, the CEV (Stroke)-Relative Risk (95% CI), and the IHD-Relative Risk (95% CI) have been corrected (p 11); the second Pinault et al11 reference has been changed to Thurston et al ; a footnote has been added stating that the indicated data were from \u201cAdditional, unpublished analyses, provided by the principal investigator at the authors' request\u201d (p 12); the first Miller et al15 reference has been changed to Puett et al (2011); the LC-Relative Risk for Turner et al has been corrected (p 12); and the reference list has been updated. These corrections have been made to the online version as of May 23, 2017."} {"text": "Previously, Aydogdu et al. [\u03bcm-diameter dextranomer microspheres in a stabilized hyaluronic acid gel recruited numerous myofibroblasts around the dextranomer particles, a foreign body inflammatory reaction with a high density in CD68 positive cells, stimulating an enhancement in collagenous stroma [We read with great interest and appreciated very much the article written by \u00dcre et al. . In theiu et al. , comparis stroma . Moreoves stroma . We beli"} {"text": "It was so named because of its occurrence in shark liver oil, which contains large quantities and is considered its richest source. However, it is widely distributed in nature, with reasonable amounts found in olive oil, palm oil, wheat-germ oil, amaranth oil, and rice bran oil. Squalene, the main component of skin surface polyunsaturated lipids, shows some advantages for the skin as an emollient and antioxidant, and for hydration and its antitumor activities. 
It is also used as a material in topically applied vehicles such as lipid emulsions and nanostructured lipid carriers (NLCs). Substances related to squalene, including \u03b2-carotene, coenzyme Q10 (ubiquinone) and vitamins A, E, and K, are also included in this review article to introduce their benefits to skin physiology. We summarize investigations performed in previous reports from both Squalus spp.) liver oil, which contains large quantities and is considered its richest source [Human skin, covering the entire outer surface of the body, is the largest organ and is constantly exposed to sunlight stress, including ultraviolet (UV) light irradiation. The skin tissue is rich in lipids, which are thought to be vulnerable to oxidative stress from sunlight. Squalene A is a stt source . It is tExperimental studies have shown that squalene can effectively inhibit chemically induced skin, colon, and lung tumorigenesis in rodents . The pro1 in the hair follicles to lubricate the skin and hair of animals . In humaSqualene is not very susceptible to peroxidation and appears to function in the skin as a quencher of singlet oxygen, protecting human skin surfaces from lipid peroxidation due to exposure to UV light and other sources of oxidative damage , as discTruly one of nature\u2019s great emollients, squalene is quickly and efficiently absorbed deep into the skin, restoring healthy suppleness and flexibility without leaving an oily residue. New cosmetic emulsions with biomimetic molecules have been investigated using experimental designs . That stet al. [vernix caseosa (VC) substitute can be an innovative barrier cream for barrier-deficient skin. This is because of the excellent properties of VC in facilitating stratum corneum hydration. Different lipid fractions were isolated from lanolin and subsequently mixed with squalene, triglycerides, cholesterol, ceramides, and fatty acids to generate semi-synthetic lipid mixtures that mimic the lipid composition of VC. The results showed that the rate of barrier recovery increased and was comparable to VC lipid treatment. Okuda et al. [p < 0.05).In general, occlusion leads to increased skin hydration due to reduced water loss. Rissmann et al. revealeda et al. also fouIn vitro experimental evidence indicates that squalene is a highly effective oxygen-scavenging agent. Subsequent to oxidative stress such as sunlight exposure, squalene functions as an efficient quencher of singlet oxygen and prevents the corresponding lipid peroxidation at the human skin surface [et al. [t-butyl-4-hydroxytoluene. They also reported that squalene is not particularly susceptible to peroxidation and is stable against attacks by peroxide radicals, suggesting that the chain reaction of lipid peroxidation is unlikely to be propagated with adequate levels of squalene present on the human skin surface. Aioi et al. [2-) generation in rats in order to elucidate the mechanism whereby this compound decreases erythema induced by 1% lauroylsarcosine (LS) ointment. LS (200~400 \u03bcg/mL) caused overt production of O2- from cultured keratinocytes and peritoneal exudate leukocytes. O2- was significantly reduced by the addition of squalene (100 \u03bcg/mL). These results suggest that a possible role of squalene for alleviating skin irritation is by suppression of O2- production, which is dependent on different mechanisms of action of superoxide dismutase.Squalene has been reported to possess antioxidant properties. surface . Kohno e [et al. found thi et al. studied et al. 
[O-tetradecanoylphorbol-13-acetate. The mice were treated with 5% squalene and at the end of the prevention study, there was a 26.67% reduction in the incidence of tumors in the squalene-treated group. In a related branch of research, a protective effect was observed when squalene was given before and/or during carcinogen treatment. Experimental studies have shown that squalene can effectively inhibit chemically induced skin tumorigenesis in rodents [During the past few years, squalene was found to show protective activities against several carcinogens . Desai eet al. reported rodents .Squalene is also used as a material or additive in topically applied vehicles such as lipid emulsions and nanostructured lipid carriers (NLCs).et al. [in vitro transfection activity of emulsions was lower than that of liposomes in the absence of serum, the activity of squalene emulsions, for instance, was approximately 30 times higher than that of liposome in the presence of 80% (v/v) serum (p < 0.05).Lipid emulsions are potentially interesting drug delivery systems because of their ability to incorporate drugs with poor solubility within the dispersal phase . An emulet al. preparedet al. [in vitro and in vivo gene transfer. In addition, Wang et al. [in vivo analgesic activity of the emulsions was examined by a cold ethanol tail-flick test. The squalene system showed the ability to provide controlled delivery to prolong the analgesic duration in rats. The toxicity determined by erythrocyte hemolysis was also low for squalene emulsions.Kim et al. found thg et al. indicateSolid lipid nanoparticles (SLNs) are a new generation of oil-in-water nanoparticulate systems and are attracting attention as novel colloidal drug carriers. Distinct advantages of SLNs are the solid state of the particle matrix, the ability to protect chemically labile ingredients, and the possibility of modulating and prolonging drug release. NLCs are a novel type of lipid nanoparticle with a solid particle matrix possessing structural specialties and improvements such as an increased loading capacity, long-term physical and chemical stability, triggered release, and potentially supersaturated topical formulations. NLCs are produced by mixing solid lipids with spatially incompatible lipids leading to a lipid matrix with a special structure. Depending on the method of production and composition of the lipid blend, different types of NLCs can be obtained . The baset al. [\u00ae and squalene (12% w/v) showed respective mean particle sizes of 200 nm. The lipophilicity of NLCs decreased with an increase in the squalene content in the formulations. Psoralen derivatives for psoriasis treatment were loaded in NLCs to examine their ability to permeate via the skin. Enhanced permeation and controlled release of psoralen were both achieved using NLCs with squalene. The in vitro permeation results showed that NLCs stabilized with Tween\u00ae 80 increased the 8-methoxypsoralen flux 2.8 times over that of a conventional emulsion.Fang et al. found thet al. [et al. [Squalene monohydroperoxide (SQOOH) is a primary oxidized lipid produced from squalene by solar UV. It is produced at the human skin surface due to natural exposure to sunlight during daily activities. Recent studies demonstrated that repeated application of SQOOH to the skin can induce skin roughness and wrinkles in the hairless mouse . Uchino et al. also rep [et al. demonstret al. [p < 0.001). 
This may provide a useful model for studying skin aging, particularly with regard to the collagen content.Chiba et al. studied 1O2 quencher and unique free radical scavenger [et al. [Dunaliella salina protected human skin from UV light-induced erythema. Bando et al. [in vivo study focused on determining the mechanism of action of \u03b2-carotene against UVA-induced skin damage by characterizing \u03b2-carotene oxidation products. BALB/c mice were fed a \u03b2-carotene-supplemented diet, and homogenates from the dorsal skin were prepared after three weeks for UVA irradiation. The results indicated that dietary \u03b2-carotene accumulated in the skin and acted as a protective agent against UVA-induced oxidative damage, by quenching 1O2.Many other polyprenyl compounds structurally similar to squalene exist in nature and perform critical biological functions. \u03b2-Carotene is well known to be a potent cavenger . This pr [et al. demonstro et al. found thet al. [A number of studies have demonstrated that dietary \u03b2-carotene protects human skin from UV light-induced erythema, but little is known about the protective effect of dietary \u03b2-carotene on UVA-induced skin photoaging . Antilleet al. showed tp < 0.05).\u03b2-Carotene was partially successful in treating a photosensitivity disorder, erythropoietic protoporphyria, of which singlet oxygen is believed to be an important mediator. Several studies were performed to examine whether \u03b2-carotene protects against UV-induced erythema in healthy humans, with widely differing reported effects. The incidence of nonmelanoma skin cancer was reported to be inversely related to serum \u03b2-carotene concentrations, and earlier experimental UV-carcinogenesis studies found \u03b2-carotene to be photoprotective . HoweverCoenzyme Q10 is an important lipophilic antioxidant synthesized by the body . Topicalet al. [Coenzyme Q10 is a popular antioxidant used in many skin care products to protect the skin from free radical damage. The effects of coenzyme Q10 and colorless carotenoids on the production of inflammatory mediators in human dermal fibroblasts treated with UV radiation and possible synergistic effects of these two antioxidants were evaluated by Fuller et al. . Treatmeet al. [et al. [2 UVB, and assayed the levels of thymine dimers produced in epidermal DNA 2 h following UVB exposure. The results demonstrated that epidermal retinyl esters have a biologically relevant filter activity and suggest, besides their pleomorphic biologic actions, a new role for vitamin A that is concentrated in the epidermis (p < 0.05). Moreover, Alberts et al. [Squalene and some of its related substances, including vitamin A, were examined in an animal model to determine the existence of chemopreventive effects. Varani et al. pointed [et al. applied s et al. concludein vitro and in vivo [in vitro and in vivo skin absorption levels of retinol in the fuzzy rat. Results from those studies were used to help interpret the significance of the in vitro retinol human skin reservoir in determining systemic absorption (p < 0.001) [et al. [in vitro Franz diffusion assembly, and formulations were applied for 6 and 24 h. Vitamin A concentrations in the skin tissue suggested a certain drug localizing effect. High retinol concentrations were found in the upper skin layers following application of SLN preparations, whereas the deeper regions showed only very low vitamin A levels (p < 0.05).Vitamin A-derived agents still continue to be used to treat acne. 
Furthermore, retinoids act as chemopreventive and/or chemotherapeutic agents for several types of cancer. They have major effects on the growth and differentiation of normal, premalignant, and malignant epithelial cells both in vivo . Retinol in vivo . Retinol< 0.001) . Jenning [et al. evaluatein vivo. It is thought to play an important role in skin protection [et al. [R,R,R-a-tocopheryl acetate, 62.5 IU/kg diet) for 26 weeks. Vitamin E reduced the tumor yield in mice given UV and arsenite by 2.1-fold (p < 0.001). Those results show that vitamin E can strongly protect against arsenite-induced enhancement of UV-caused carcinogenesis.Vitamin E is the most potent lipid-soluble antioxidant otection . Uddin e [et al. examinedet al. [p < 0.05). Those results demonstrated that topical administration of \u03b1-tocopherol protects cutaneous tissues against oxidative damage induced by UV irradiation in vivo.Vitamin E is a group of eight different compounds, but only two of the forms, \u03b1-tocopherol and \u03b3-tocopherol, are commonly found in the human body . Lopez-Tet al. investiget al. [N,N-dimethylglycinate hydrochloride (\u03b3-TDMG), could protect against UV-induced skin damage in hairless mice. Topical pre- or post-application of a 5% (93 mM) \u03b3-TDMG solution in water/propylene glycol/ethanol (2:1:2) significantly prevented sunburned cell formation, lipid peroxidation, and edema/inflammation, which were induced by exposure to a single dose of UV irradiation at 5 kJ/m2 . Those results suggest that the topical application of \u03b3-TDMG may be efficacious in preventing and reducing UV-induced inflammation.Yoshida et al. investiget al. [in vivo. Therefore, topical formulations containing \u03b1-tocopherol at concentrations ranging from 0.1% to 1% are likely to be effective skin care measures to enhance antioxidative protection of the skin barrier. According to the antioxidant network theory, combinations with co-antioxidants such as vitamin C may help enhance the antioxidant effects and stability of vitamin E [Ekanayake-Mudiyanselage et al. recentlyitamin E . A betteet al. [Vitamin K is another squalene-related substance that exhibits benefits to skin physiology. Lou et al. examinedet al. [in vitro skin penetration and transdermal delivery of vitamin K , and whether these parameters were enhanced by lipid-based drug delivery systems. The experimental results demonstrated that the topical delivery of vitamin K incorporated in a lipophilic vehicle was low. It could be enhanced (~3-fold increase) by monoolein-based systems, which may be useful in increasing the effectiveness of topical vitamin K therapy.The application of vitamin K to the skin has also been used to suppress pigmentation and resolve bruising. Lopes et al. investigSqualene appears to be critical in reducing free radical oxidative damage to the skin. Although epidemiological, experimental, and animal evidence suggests antitumor properties, few human trials have been conducted to date to verify the role of squalene in cancer therapy. Further studies are needed to explore the usefulness of squalene for treating skin. Several implications can be drawn from this review. Squalene shows several advantages for skin tissues. It is also useful as a material in topically applied vehicles. Substances related to squalene such as \u03b2-carotene, coenzyme Q10, and vitamins A, E, and K also exhibit their benefits for skin physiology. Topical administration via the skin is an important route to supplement these compounds within skin tissues. 
The present success of squalene and its analogs shows the promise of further clinical trials for skin use."} {"text": "Ursolic acid (UA) is a natural terpene compound exhibiting many pharmaceutical properties. In this review the current state of knowledge about the health-promoting properties of this widespread, biologically active compound, as well as information about its occurrence and biosynthesis, is presented. Particular attention has been paid to the application of ursolic acid as an anti-cancer agent; it is worth noticing that clinical tests suggesting the possibility of practical use of UA have already been conducted. Amongst other pharmacological properties of UA one can mention protective effects on lungs, kidneys, liver and brain, anti-inflammatory properties, anabolic effects on skeletal muscles and the ability to suppress bone density loss leading to osteoporosis. Ursolic acid also exhibits anti-microbial features against numerous strains of bacteria, HIV and HCV viruses and Plasmodium protozoa causing malaria. Ursolic acid and related triterpene compounds like oleanolic acid, betulinic acid, uvaol or α- and β-amyrin are widespread in plants. Their content and composition differ between various species, due to the presence and activity of the enzymes responsible for their synthesis. Amongst plant matrices with a high content of ursolic acid and of potentially practical significance as a source of this compound one can mention apple fruit peel, marjoram (Origanum majorana) leaves, oregano (Origanum vulgare) leaves, rosemary leaves, sage leaves, thyme (Thymus vulgaris) leaves, lavender (Lavandula angustifolia) leaves and flowers, eucalyptus leaves and bark, black elder (Sambucus nigra) leaves and bark, hawthorn (Crataegus spp.) leaves and flowers, coffee (Coffea arabica) leaves and the wax layer of many edible fruits. The first stage of UA biosynthesis is the formation of isopentenyl diphosphate (IPP), which is a five-carbon building block utilized to create all terpenic compounds. For many years it has been believed that the mevalonate pathway (MVA) is the exclusive source of this compound. In this cytosol-carried metabolic pathway two molecules of acetyl-CoA (created in the citric acid cycle) are transformed to one molecule of IPP through a six-stage process. Recent investigations have discovered another route, the deoxyxylulose/methylerythritol phosphate (DXP) pathway. In this plastid-located process isopentenyl diphosphate is synthesized from pyruvate and glyceraldehyde-3-phosphate. The second stage of UA production is synthesis of 2,3-oxidosqualene and its cyclisation leading to formation of α-amyrin. Molecules of IPP and its isomer dimethylallyl diphosphate (DMAPP) are used to create squalene (through the intermediates geranyl pyrophosphate and farnesyl pyrophosphate). Then squalene epoxidase oxidizes this compound to 2,3-oxidosqualene. The group of enzymes named oxidosqualene cyclases (OSCs) is responsible for the cyclisation and rearrangement of the terpenoid chain leading to the formation of various scaffolds, including α-amyrin. The last stage is modification of α-amyrin by a group of cytochrome P450 enzymes called α/β-amyrin 28-monooxygenases. The methyl group at C-28 is oxidized to a carboxyl group, thus finishing the UA biosynthesis process. Ursolic acid is one of the most promising substances of biological origin when it comes to the prevention and therapy of cancer.
Novel pharmacological strategies do not rely only on the destruction of tumor cells, but also modulate their metabolism to prevent angiogenesis and metastasis, enforce differentiation of cells and protect healthy tissues against inflammation and oxidative stress that may lead to neoplasm formation.in vitro and in vivo is shown in UA can be described as a multi-tasking agent; it influences several cell signaling enzymes and simultaneously protects it against carcinogenic agents. The summary of the studies describing ursolic acid\u2019s impact on carcinomas It should be noted that in the majority of mentioned works the authors were testing pure compounds or attributed therapeutic properties to ursolic acid. There are also numerous studies describing the effects of plant extract without assigning the activity to a particular compound; the authors did not include them in this work.To fully understand modus operandi of anti-cancer drugs one must take a closer look at cell signaling. This very complex system of communication coordinates all cellular activities and responses on extracellular signals. The schematic diagram of the main intracellular signal routes is presented in The anticancer activity of ursolic acid is associated with its ability to influence the activity of several enzymes. Therefore it is able to modulate processes occurring inside tumor cells activating routes leading to cell death and suppressing ones leading to the proliferation, growth and migration of cancer.The MAPK/ERK and PI3K/AKT/mTOR signaling cascades play critical roles in the transmission of signals from growth factor receptors to regulate gene expression. Both these pathways are responsible for anti-apoptotic and drug resistance effects in cells . The abiHigh expectations are also surrounding UA impact on nuclear factor \u03baB. Activity of NF-\u03baB is connected with reaction on such stimuli as cytokines, free radicals or antigens, and it plays crucial role in immunologic answer against infection. Malignant cells are characterized by abnormally high activity of this transcription factor, what leads to intense proliferation and make NF-\u03baB one of the main targets of modern oncotherapy . Capabilet al. [Forkhead box (FOX) proteins are family of transcription factors playing crucial role in regulating expression of genes involved in cell growth. FOXM1 has been recognized as exceptional important as its aberrant upregulation might be inducing genomic instability and leading to malignant transformation . FOXM1 het al. in theirApoptosis is the process of programmed cell death occurring as a result of activation of the specific cellular pathways. In contrast to necrosis, this process is highly regulated and leads to chromosomal DNA fragmentation. The induction of apoptosis by various agents is an important part of modern cancer therapies. Unfortunately apoptosis in cancer cells is often blocked by the activity of mutated genes regulating the cell cycle. Therefore different steps of the apoptotic process should be targeted to bypass such blocks .in vitro and in vivo. This aptitude is often connected with the Bcl-2 apoptosis regulators activity. This group of evolutionarily related proteins consist of both pro- and anti-apoptotic agents and is regarded as crucial in regulation of cell death through intrinsic apoptotic pathway [Apoptosis induction is the uppermost anti-cancer activity of ursolic acid. It has been reported in dozens of papers, as regards several cancer types, both pathway . 
Ursolic pathway ,38,39,65Caspases are family of cysteine proteases playing essential role in apoptosis. They are final step of cell death pathways and are responsible for e.g., DNA fragmentation, cleavage of nuclear proteins and as a result blebbing and cell death . Inhibitet al. [Sequencing of the human genome shown that retrotransposable elements make up about 45% of the human DNA. Almost all of these elements contain genes responsible for reverse transcriptase (RT) coding. In most tissues expression of RT-coding genes is very low, however high expression is distinctive for undifferentiated cells like embryos, germ cells or tumor cells. Sciamanna et al. revealedet al. [Role of ursolic endogenous reverse transcriptase as a mediator of ursolic acid properties was reported by Bonaccorsi et al. . Their wAngiogenesis is the formation of new blood vessels from other pre-existent ones during development, growth, wound repair or the female reproductive cycle. Angiogenesis is one of cancer\u2019s hallmarks since it is required for both tumor progression and dispersal of metastatic cells. This resulted in the fact that the inhibition of angiogenesis has become an alternative therapeutic approach to cancer therapy. The angiogenic process is activated by intracellular signals that activate resting endothelial cells, which are stimulated to release degrading enzymes allowing endothelial cells to migrate, proliferate, and finally differentiate to form new vessels. Any of these steps might be a potential target for pharmacological compounds .et al. [et al. [Anti-angiogenic properties of ursolic acid are usually attributed to inhibition of the downregulation of matrix metalloproteinases activity. Metalloproteinases are group of the enzymes involved in degradation of extracellular matrix. Their activity in tumor tissues is elevated due to increased demand for oxygen and glucose of neoplasm. UA inhibiting activity against MMP-9 has been confirmed by several research teams, however activity against MMP-2 remains subject of discussion: Huang et al. reported [et al. did not The connection between inflammation and cancer had been suggested as early as in 1863 by Rudolf Virchow. Currently chronic inflammation, with concomitant activity of cytokines and increased production of reactive oxygen species, is recognized as a cancerogenesis-promoting condition . The cycSeveral tests of the anti-carcinogenic activity of ursolic acid against different induction sources have been conducted. These test included chemical agents (such as benzo(a)pyrene, azoxymethane and tobacco smoke extract) ,96,97,98The ultimate goal of every cancer research is the implementation of the compound to clinical use. Currently ursolic acid is undergoing phase I trials to evaluate its safety and adverse effects in patients. Due to poor water solubility and low bioavailability ursolic acid had been administered as a liposomes. So far results of only three such studies have been published ,118,119\u2014The liver is one of the most important organs of the body. It is responsible for a wide range of metabolic functions, including detoxification of xenobiotics, production of hormones and digestive enzymes, glycoside and fat-soluble vitamins storage and the decomposition of red blood cells. Due to its strategic location and multidimensional functions the liver is prone to many diseases, like hepatitis, hepatic steatosis, cirrhosis, cholelithiasis and drug-induced liver damage. 
Fortunately liver is the only internal organ capable to regenerate\u2014as little as 25% of the original mass can reconstruct its full size.et al. [Eucalyptus tereticornis extract against ethanol toxicity in isolated rat hepatocytes. They found that this triterpene was able to decrease the loss of hepatocyte viability by as much as 76%. A similar problem has been studied by Saravanan et al. [in vivo using alcohol-administered rats. They reported that UA increased the level of circulatory antioxidants and serum protein and decreased the total bilirubin level and lipid peroxidation markers. Histopathological observations were in correlation with biochemical parameters. Paracetamol and tetrachloride were other liver-intoxicating agents tested by Shukla et al. [et al. [Ursolic acid showed good protective activity against a wide range of liver-threatening substances. Saraswat et al. were tesn et al. , this tia et al. and Mart [et al. , respectet al. [et al. [The impact of UA on metabolic disorders in high fat diet-fed mice and rats was surveyed in research conducted by Sundaresan et al. and Li e [et al. . The firet al. [Wang et al. were looet al. [Interesting results were acquired by Jin et al. . They diCardiovascular diseases are the major causes of mortality and morbidity in industrialized countries. They are responsible for about 30% of all deaths worldwide. Amongst the most common disorders of the cardiovascular system one can mention myocardial infraction (commonly known as heart attack), stroke, atherosclerosis, hypertension and varicose veins. Although not all cardiovascular diseases are life threatening, all of them significantly decrease life quality and generate enormous social and financial costs .et al. [The first study which reported the impact of UA on the cardiovascular system was conducted by Somova et al. . It reveet al. [et al. [in vivo on Wistar rats, respectively. Shimada and Inagaki focused on the inhibitory effect on angiotensin I-converting enzyme (ACE), which plays an important role in the regulation of blood pressure.Further research has been carried out in various directions. Vasorelaxant properties of ursolic acid were investigated by Aguirre-Crespo et al. , Rios et [et al. and Shim [et al. . The firet al. [et al. [Ursolic acid has been also used as a compound with a potent protective effect in artificially induced (by isoproterenol administration) myocardial infarction. Senthil et al. were tes [et al. ,135. It [et al. . They inet al. [et al. [Administration with ursolic acid also prevents injuries to blood vessels. Pozo et al. revealed [et al. the authet al. [et al. [The impact of ursolic acid on atherosclerosis is the subject of dispute amongst scientists since some studies show potentially beneficial effects while others show potentially negative effects . For exaet al. reported [et al. describeet al. [The potentially harmful effect of UA intake has been presented by Kim et al. . They diExcitotoxicity and oxidative stress are two phenomena that have been repeatedly described as being implicated in a wide range of disorders of the nervous system. Such ailments include several common idiopathic neurological diseases, traumatic brain injury, and the consequences of exposure to certain neurotoxic agents. Both excitoxicity and oxidative stress result from the failure of normal compensatory mechanisms to maintain cellular homeostasis and may lead to permanent damaging of the brain and decrease of cognitive functions .et al. 
[The first research focusing on the protective effect of ursolic acid on neurons has been conducted by Shih et al. . Neuronaet al. [Lu et al. were invet al. reportedet al. [et al. [Suppressing NF-\u03baB by ursolic acid as a method to attenuate cognitive deficits and avoid brain damage has been reported by Wang et al. and Li e [et al. . Their met al. [Wu et al. describeet al. [The impact of UA on the brain is not limited to the cellular level. Machado et al. , encouraet al. [Research conducted by Colla et al. confirmeet al. revealedSkeletal muscles contraction powers the human body\u2019s movements and is essential for maintaining stability. Muscle tissue accounts for almost half of the human body mass and, in addition to its power-generating role, is a crucial factor in maintaining homeostasis. Given its central role in human mobility and metabolic function, any deterioration in the contractile, material, and metabolic properties of skeletal muscle has an extremely important effect on human health.The term sarcopenia originates from the Greek words sarx (flesh) and penia (loss) and is used to describe the degenerative loss of muscle mass (atrophy) and its quality associated with aging. This expression is used to describe both: cellular processes and their outcomes such as decreased muscle strength, decreased mobility and function, increased fatigue and reduced energy needs. In addition, reduction of muscle mass in aged individuals has been associated with decreased survival rates following critical illnesses. It is estimated that sarcopenia affects more than 50% of people aged 80 and older .et al. [in vivo test on mice confirmed these capacities. Orally administered UA induced muscle hypertrophy, reduced denervation-induced muscle atrophy and changed the gene expression in muscles. Researchers connected triterpene activity with enhancing insulin/IGF-1 (insulin-like growth factor) signaling. A later paper by the same team [To develop potential therapy against skeletal muscle atrophy Kunkel et al. identifiame team reports et al. [in vitro and in vivo. They found that UA elevated the expression of anti-aging genes SIRT1 (ca. 35 folds) and PGC-1a (ca. 175 folds). In vivo tests on a mice model revealed a decreased level of cellular energy charges (such as ATP and ADP) and increased proliferation and neomyogenesis in muscle cells. The authors draw the conclusion that UA can be considered as a potential candidate for the treatment of pathological conditions associated with muscular atrophy and dysfunction, such as skeletal muscle atrophy, amyotrophic lateral sclerosis (ALS) and sarcopenia.Bakhtiari et al. were invThe direct impact of UA on muscle strength was surveyed by a Korean team led by Bang . SixteenBone is a dynamic tissue that undergoes continual adaptation during life to attain and preserve skeletal size, shape and structural integrity. It consists of highly specialized cells, mineralized and unmineralized connective tissue matrix, and spaces that include the bone marrow cavity, vascular canals, canaliculi, and lacunae. When the skeleton reaches maturity, its development continues in the form of a periodic replacement of old bone with new at the same location. This process is called remodeling and is responsible for the complete regeneration of the adult skeleton every 10 years. 
The purpose of remodeling in the adult skeleton is not entirely clear, although in bones that are load bearing, this process most likely serves to repair fatigue damage and to prevent excessive aging and its consequences. Several types of cells are involved in the remodeling process, but the two most important are osteoblasts (bone-forming cells) and osteoclasts .The activity of osteoblasts and osteoclasts is crucial for maintaining proper bone structure and is regulated by differentiation from mesenchymal precursor cells and apoptosis. Several factors can cause an imbalance between excessive osteoclastogenesis and inadequate osteoblastogenesis. The result is bone loss leading to osteopenia and osteoporosis . Pharmacet al. [in vivo, in a mouse calvarial bone.Lee et al. were theet al. [Eriobotrya japonica) was able to significantly decrease bone mineral density in oviarectomized mice by inhibiting osteoclast production. Dose-depended inhibitory effect of the extract on the differentiation of osteoclasts without any cytotoxicity was observed. A later paper by this research team [Tan et al. found thrch team presentset al. [The possible mechanism of osteoclastogenesis inhibition by UA was evaluated by Jiang et al. . The reset al. [et al. [Yu et al. were inv [et al. was alsoet al. [So far the influence of ursolic acid on other organs has been described only in a limited number of papers. The impact on skin has been investigated by two teams. One led by Both has beenet al. present et al. [et al. [Ding et al. and Pai [et al. conducteet al. [The protective effect of ursolic acid has been also tested by Chen et al. in lipopin vitro determination of minimal inhibition concentration (MIC) of UA and other triterpenes against different strains of bacteria [et al. [The fight against bacterial infections is one of the most important tasks of medicine. The development of antibiotics in the 1940s gave physicians a powerful tool against infections and has saved the lives of millions of people. However, because of the widespread and sometimes inappropriate use of these substances, strains of antibiotic-resistant bacteria have begun to emerge. These newer, stronger bacteria pose a significant threat to human health and a challenge to drug researchers. Therefore, there is a continuous search for new, safe antimicrobial agents, including those from natural sources. A number of researches has been performed to evaluate the anti-bacterial properties of ursolic acid and related compounds . Some ofbacteria ,173,174. [et al. focused Mycobacterium tuberculosis has been investigated by Woldemichael et al. [et al. [Calceolaria pinnifolia and Chamaedora tepejilote. Further research by Jim\u00e9nez-Arellanes et al. [Ursolic acid activity against tuberculosis-causing l et al. and Jim\u00e9 [et al. . They prs et al. confirmeet al. [Enterococci and Kim et al. [Staphylococcus aureus. Both studies showed that UA can be used simultaneously with antibiotics to enhance their activity.The ability to overcome bacterial resistance against antibiotics was also tested by Horiuchi et al. who focum et al. who usedet al. [Hu et al. conducteet al. [Proteobacteria.The impact of orally delivered ursolic acid on intestinal microbiota was studied by Feng et al. . Their rThe human immunodeficiency virus (HIV) and human hepatitis C virus (HCV) infections are chronic and wide-spread illnesses that represent serious public health problems. 
According to a 2012 UNAIDS report on the global AIDS epidemic, about 34 million people were living with HIV, 2.5 million had acquired new HIV infections and 1.7 million had died of HIV-related causes worldwide during 2011.et al. [50 near 1 \u00b5M. The expected mechanism of action was dimerization inhibition. Kashiwada et al. [HIV-1 protease is a retroviral aspartyl protease that is essential for the life-cycle of HIV. It cleaves newly synthesized polyproteins at the appropriate places to create the mature protein components of an infectious HIV virion. Due to its importance in metabolism this enzyme became the prime target for drug therapy. In 1996 Quere et al. found tha et al. confirmea et al. ,189,191.et al. [Ligustrum lucidum. Virus spreading was inhibited, at least partly, by suppressing NS5B RNA-dependent RNA polymerase. Garcia-Risco et al. [Calluna vulgaris) and they confirmed UA activity against human hepatitis virus C.It is also estimated that about 3% of the global population is infected with the hepatitis C virus. Chronic hepatitis C infection is the leading cause of cirrhosis, hepatocellular carcinoma and liver transplantations in developed countries . Anti-HCet al. reportedo et al. investiget al. [Mallotus peltatus extract. Investigators claimed that UA was probably inhibiting the early stage of multiplication and can be used as an anti-HSV agent.Research conducted by Bag et al. showed tPlasmodium.Malaria is the parasitic disease with the greatest impact\u2014it affects around 40% of the world\u2019s population, spanning across more than 100 countries. Its etiological agent is a protozoa belonging to the genus Satureja parvifolia and Morinda lucida. Innocente et al. [et al. [In 2006 two independent research teams led by van Baren and Cimae et al. and Dell [et al. developeet al. [Leishmania amazonensis. The curative effect of triterpene-rich fraction was similar to amphotericin B , however the dose required to eliminate microbes was smaller. Moreover, triterpenic fraction did not cause microscopic alterations in the liver, spleen, heart, lung, and kidney of the experimental groups.Yamamoto et al. investigTrypanosoma cruzi infections was reported by de Silva Pereirra et al. [The influence of UA on the treatment of a et al. They fouUrsolic acid, betulinic acid and six of their derivatives were tested against eleven mucocutaneous and cutaneous mycotic agents . The MICet al. [Nycanthes arbor-tristis against Brugia malayi and Wuchereria bancrofti\u2014tropical filariae responsible for elephantiasis. They discovered that UA was able to induce apoptosis of these nematodes by downregulating and altering the level of some key antioxidants.The activity of UA was also examined against parasites. Saini et al. evaluateUrsolic acid is a widespread compound of plant origin exhibiting wide range of the pharmacological activities. The biggest attention amongst scientists has been captured by the role that UA can play in treatment and prevention of cancer. Amongst other intriguing features of this triterpene anti-microbial properties and protective effect on internal organs against chemical-damage should be mentioned. However some studies point out negative effects of administration of this compound, suggesting that impact of UA on human\u2019s health in some cases can be compared to the double-edged sword. Analysis of literature indicates that various effects can be linked to one phenomenon. 
An example might be the inhibition of NF-κB activity, which leads to cancer cell apoptosis, anti-inflammatory effects and bone-forming activity."} {"text": "Castor oil (CO) is an inedible vegetable oil (VO) that has been employed extensively as a bioresource material for the synthesis of biodegradable polymers, cosmetics, lubricants, biofuels, coatings and adhesives. It is used in medicine, pharmaceuticals and biorefineries, due to its versatile chemistry. However, there has been less focus on CO as an alternative to toxic and expensive solvents, and capping/stabilizing agents routinely used in nanoparticle syntheses. It provides a richer chemistry than edible VOs as a solvent for green syntheses of nanoparticles. CO, being the only rich source of ricinoleic acid (RA), has been used as a solvent, co-solvent, stabilizing agent and polyol for the formation of polymer–nanoparticle composites. RA is a suitable alternative to oleic acid used as a capping and/or stabilizing agent. Unlike oleic acid, it provides a facile route to the functionalization of surfaces of nanoparticles and the coating of nanoparticles with polymers. For applications requiring more polar organic solvents, RA is preferred over oleic acid. In this review, we discuss the production, chemical and physical properties, triglyceride and fatty acid (FA) compositions and applications of CO, focusing on the use of CO and RA as well as other VOs and FAs in syntheses of nanoparticles and surface functionalization. However, the use of expensive and toxic materials for the syntheses of nanoparticles is becoming a critical concern. Many researchers have resorted to employing environmentally friendly renewable bioresource materials such as vegetable oils (VOs), carbohydrates and plant extracts in the syntheses of nanoparticles. Castor oil, or Oleum Palmae Christi, is a hydroxylated lipid obtained from the seed of the castor plant, Ricinus communis L. of the family Euphorbiaceae, native to tropical Asia and Africa. Its triglyceride fraction consists predominantly of esters of ricinoleic acid. RA is a multifunctional compound, possessing a carboxylic acid, a double bond (between C9 and C10) and a secondary alcohol or hydroxyl (at C12) functional group. The hydroxyl group is beta to the double bond and protects that double bond from peroxide formation. These functional groups underpin the versatile chemistry of RA. Carboxylic acids such as oleic acid and stearic acid have been employed extensively as ligands or capping agents, and RA can play a similar role. This review is split into four main sections: (i) facts about CO, (ii) composition and structure of CO and isolation of RA, (iii) application of CO in biomedicine, biopolymers, biochemicals, bioenergy, lubricants and coatings and (iv) utilization of CO (as well as other VOs) and RA (as well as other FAs) as capping ligands or solvents for nanoparticle syntheses and functionalization. The review is then concluded by highlighting the areas in nanoparticle syntheses where CO and RA can be used. 2.1. Ricinus communis is a diploid (2n = 2x = 20) species within the family of Euphorbiaceae and the genus Ricinus. The castor plant is a coarse perennial crop that grows to approximately 10 ft in the tropics and has a stem diameter of 7.5–15 cm.
In the temperate regions, the castor plant behaves as an annual crop with succulent stems and is usually herbaceous. Ethiopia (east Africa) is believed to be the most likely origin of castor, in addition to places such as northwest and southwest Asia, the Arabian Peninsula and the subcontinent of India and China. The seeds are toxic, and the residual seed meal can be treated with reagents such as H2O2, NaOH or NaOCl to remove the toxins. Pictures of the plant and its seeds are available at www.castoroil.in. 2.2. The oil content of the castor seed is approximately 45–50%, with a yield of 470 kg of oil per hectare. The bulk of the world's castor seed comes from a small number of countries; one major producer accounts for approximately 4.54% of the world's annual production. Mozambique produces approximately 3.01% of the world's annual production and is the leading producer of CO seed in Africa. Ethiopia, South Africa, Angola, Tanzania and Kenya are African countries also involved in castor production, though their production figures are low. China, Brazil, Paraguay and Thailand are also noted for castor production. 2.3. The presence of the hydroxyl group on RA has a drastic effect on the viscosity, pour point, melting point, heat of fusion, solubility, crystal structure and polymorphism of CO. Reported average molecular weights of CO are close to 930 g mol−1 (928.31 g mol−1 in the determination by Salimon et al.). Compared with olive oil, CO has a higher MW and thus has a higher viscosity. The density of CO is also reported to be 961 kg m−3. 3.1. Most of the triglyceride molecules in CO consist of three molecules of RA connected to a glycerol moiety. Salimon et al., Plante et al. and Lin have also characterized the minor triglyceride species, and estolides of ricinoleic acid have been reported as well. 3.2. The uniqueness of CO compared with other VOs lies in its FA composition. Numerous groups have reported on CO FA composition from different countries. Although CO is known to contain RA, which is a monohydroxy FA, Lin has identified Alternanthera triandra Lam. (syn. Alternanthera sessilis (L.) R. Br.) seed oil as another source of RA (containing approximately 22.1% RA). However, despite the possibility that other seed oils may contain RA, CO remains the only reported rich source of RA to date. 3.3. Several methods, including chemical and biochemical pathways, have been used to isolate RA from CO. The isolation occurs by hydrolysis of the ester linkages in the triglyceride molecules to yield RA and glycerol. The salt solubility-based fractionation method reported by Vaisman et al. is an example. Clarification of the FAs is done by mixing with n-hexane (1 : 5 w/v) and keeping at −4°C for 72 h in darkness. Chromatographic analysis of the resultant FAs revealed the purity to be within 87.50–88.10% of RA and 12.5–11.9% of palmitic acid, stearic acid, oleic acid, vaccenic acid, linoleic acid and linolenic acid. Solid residues found after clarification were identified to be 9,10-dihydroxystearic acid. Biocatalysts such as lipase enzymes have also been used to isolate RA from CO. Foglia et al. employed Candida rugosa, Pseudomonas cepacia and Geotrichum candidum lipases for hydrolysis of CO. In a typical reaction, tubes containing 100 mg of oil, 0.6 ml of 0.5 M phosphate buffer (pH 7) and approximately 2–5 mg of free lipase were stirred at 500 r.p.m. at 30°C for 1–4 h. The extent of hydrolysis was determined by titrating the hydrolysis mixture (in 20 ml of diethyl ether/ethanol/water (3 : 3 : 2)) to pH 12 with 0.1 N NaOH solution.
The P. cepacia lipase was found to be effective in hydrolysing CO to RA, to the tune of 27%, compared with 13% recorded for C. rugosa and G. candidum. Ozcan & Sagiroglu also employed C. rugosa, porcine pancreatic and castor bean lipases for lipolysis of CO and obtained a yield of RA within 20–40%, considering a number of parameters such as pH, temperature, amount of substrate and enzyme. Interestingly, Piazza & Farrell used an oat (Avena sativa L.) lipase to hydrolyse CO and obtained approximately 90% yield of RA. An eco-friendly approach using microwave-assisted extraction of RA from CO has also been reported by Karpakavalli et al. 4. CO has received much attention as a valuable commercial feedstock for production of a variety of products in a wide range of industries spanning pharmaceuticals to lubricants. 4.1. Historically, CO has been known as a medicinal oil and primarily used as a purgative or laxative to ease constipation; its laxative action has been attributed to activation of EP3 prostanoid receptors by ricinoleic acid. Additionally, eye drops containing approximately 1.25% of homogenized CO are reported for the treatment of lipid-deficiency dry eye (i.e. meibomian gland dysfunction). The role of CO in treating dry eye is that it serves as a hydrophilic lipid that spreads over the human tear aqueous layer to correct the deficiency. Further biomedical applications have been described by Katzer et al. 4.2. The use of CO as a raw material in the synthesis of polymeric materials is very well established. The hydroxyl functionality is more suitable for isocyanate reactions, yielding polyurethane, while the double bond is dehydrated to obtain dehydrated CO, which is applied in producing paints, enamels, lacquers and varnishes. RA-based biodegradable polymers have also been reported; in vitro studies showed that these biopolymers degrade rapidly via hydrolysis after 10 days, releasing RA and its counterparts. 4.3. Very important industrial chemicals such as γ-decalactone, sophorolipids, undecylenic acid, linoleic acid, sebacic acid, capryl alcohol, heptaldehyde, zinc ricinoleate, glyceryl ricinoleate and lithium 12-hydroxystearate are produced from CO. Biodiesel has also been synthesized from CO. 5. As solvents or capping ligands for nanoparticle syntheses, VOs and FAs are attractive because they are: (i) environmentally benign and inexpensive; (ii) suitable alternatives to some toxic and expensive solvents or ligands traditionally used in nanoparticle syntheses; (iii) a renewable source of raw material; (iv) biodegradable and provide versatile chemistry-based opportunities; (v) a source of carboxylic acids suitable as ligands/capping agents or for synthesizing safe chemical precursors for metal oxide and sulfide nanoparticle syntheses; and (vi) biocompatible, ensuring dispersion of nanoparticles in non-polar solvents. For biomedical applications (e.g.
staining of proteins), nanoparticles should be: (i) biocompatible, (ii) water soluble and (iii) easily functionalized or chemically modified at the surface to tailor the interaction of the nanoparticles with target biomolecules. Green syntheses of nanoparticles are strongly advocated worldwide because of the disadvantages of the use of toxic solvents and chemicals, especially the effects on human health and the environment. Green chemistry principles embody the (i) design of less hazardous chemical syntheses, (ii) use of safer chemicals and solvents, (iii) use of renewable feedstocks and (iv) design for degradation. CO and RA fit these principles particularly well because: (i) CO is inedible and obviates possible competition as raw material for the food industry; (ii) CO is a natural source of polyol and presents a simple avenue for versatile chemical reactions; (iii) CO is the only rich source of RA, which has been used as a building block for synthesis of several biochemicals; (iv) RA, due to the presence of the hydroxyl functional group on its hydrocarbon chain, provides a facile route for chemical functionalization and manipulation of nanoparticle surfaces to tailor them to a specific application; (v) CO and RA are more suitable for applications requiring highly polar organic solvents; and (vi) CO and RA possess antimicrobial properties. 5.1. Metal nanoparticles are synthesized either by wet chemical, laser ablation, sputtering deposition or sonochemical methods. Diphenylmethane is a common solvent used in sonochemical reactions; it is, however, reported to decompose to toxic by-products. Several groups have therefore demonstrated the use of VOs as alternative reaction media for metal nanoparticle syntheses. As previously stated, CO is inedible and the seed oil is high yielding compared with most edible oils, and it has been found to be an effective and inexpensive alternative to edible oils for metal nanoparticle syntheses. In one report, stable metal colloids were made in CO combined with a metal precursor in KOH solution, because several attempts to synthesize similar stable colloids with soya bean or cottonseed oils failed. Interestingly, antimicrobial paints based on VOs and silver nanoparticles have been developed via a simple method centred on free radicals generated in situ by autoxidation of the drying oil. 5.2. Metal chalcogenide semiconductor nanoparticles are a useful class of inorganic materials that have received tremendous applications in solar cells and biomedical labelling. 5.2.1. Xiao et al. demonstrated chalcogenide nanoparticle synthesis in VOs, and other groups prepared and replicated related syntheses in olive oil. Though olive oil was reported to be a suitable green solvent, Hardman et al. noted limitations of this approach. 5.2.2. Solvents such as octadecene and TOPO are often used as co-solvents with VOs and FAs. The reasons for the co-solvent addition are to: (i) decrease the viscosity of the oil to ensure uniform nucleation of the nanoparticles and (ii) reduce the strong binding of the FAs to the nanoparticles or reduce the extent of inhibition of the nanoparticle growth. Nyamen et al. thermolysed metal complexes in such solvent mixtures, and related approaches have been used to synthesize a range of chalcogenide nanocrystals. Qu et al. reported ligand binding energies in oleic acid and in TOPO (with molecular weight of 386.65 g mol−1) of 0.56 and 0.95 eV molecule−1, respectively. 5.2.3. FAs are Lewis acids and have been extensively applied as capping agents and surfactants in the syntheses of nanoparticles.
Oleic acid is known as a standard FA; the double bond and alkyl chain forming a \u2018kink\u2019 imparts colloidal stability . Oleic a5.2.4.et al. [Metal FA salts (MFASs) are polyvalent metal soaps, prepared by: (i) metathesis of a sodium or potassium FA salt with metal salts in aqueous or polar solvents, (ii) dissolution or fusion of metal oxides in hot FAs, or (iii) direct reaction of metal with hot FAs . MFASs het al. combinedet al. [et al. [Chen et al. decompos [et al. decomposMFASs decompose thermally through the formation of free radicals that combine, disintegrate into smaller molecules or react with other metal carboxylates to propagate to decompose metal carboxylates in MFASs .et al. [2S, MnS, PbS, CdS and ZnS nanocrystals by the solution-phase thermolysis of metal\u2013oleate complexes in alkane thiol. This method was considered simple and general for the synthesis of metal chalcogenides. Specifically, the metal\u2013oleate precursors were dissolved in solvent mixtures of oleylamine and dodecanethiol. The resultant mixtures were then heated to the required temperatures and maintained for a period. The reaction temperature, time and the molar ratio of the two solvents were varied to tune the sizes of the nanoparticles. The nanoparticle sizes were uniform and had average particle sizes of 18, 11, 47, 10 and 10 nm for Cu2S, MnS, PbS, CdS and ZnS nanocrystals, respectively. Similarly, Patel et al. [Choi et al. synthesil et al. obtained5.3.et al. [et al. [4:Yb/Er and obtained NaYF4:Yb/Er-C12 (Up-conversion nanoparticles (UCNps) have received considerable attention as fluorophores in bioimaging over organic fluorophores and semiconductor quantum dots. UCNps have high quantum yields, high photostability and narrow emission peaks. However, to efficiently use UCNps in bioimaging, the UCNps must be rendered water-dispersible and their surfaces functionalized . To rendet al. as an ex [et al. reacted b/Er-C12 . Interes5.4.3O4 nanoparticles, Lattuada & Hatton [The coating of surfaces of very small size monodispersed nanoparticles with polymers for specific application in biomedicine is a major challenge. Nanoparticle surfaces can be polymerized in various solvents with the appropriate polymer for a specific purpose if ligands on the surfaces are suitable for such polymerization reactions. RA (and CO) in this regard stands out as the most suitable ligand compared to oleic acid. In an attempt to develop polymer-coated monodispersed Fe& Hatton first (i3O4 nanoparticles composited with poly(lactic-co-glycolic) acid was reported by Furlan et al. [3O4 nanoparticles were synthesized using the Massart co-precipitation method. The RA played its role as a biocompatible ligand rendering the nanoparticles hydrophobic, which ensured their dispersibility in apolar and mildly polar organic solvents such as dichloromethane. Additionally, a wound healing bio-nanocomposite based on CO and chitosan-modified ZnO nanoparticles has also been reported [Similarly, RA stabilized Fen et al. as a magreported . The CO reported also pre6.CO has a useful versatile chemistry and has been reviewed as a valuable bioresource material for green syntheses of nanoparticles. It is used as a biocompatible solvent, co-solvent, and a capping and stabilizing agent in the syntheses of metal and metal chalcogenide nanoparticles, as well as a source of polyol for forming chemically bonded polymer\u2013nanoparticle composites that are biodegradable. 
CO is distinct from other VOs and contains huge amounts of ricinoleic acid, which is isostructural to the traditional oleic acid for capping magnetic and luminescent nanocrystals ,100,112.Ricinoleic acid is a more suitable stabilizing and/or capping agent for applications requiring more polar organic solvents (e.g. in lubricants) . It provThough the utilization of CO and ricinoleic acid as solvent/capping/stabilizing agents for nanoparticle syntheses has substantial potential in expanding the spectrum of nanoparticle applications, it is somewhat limited. Ricinoleic acid is more suited for low-temperature organometallic synthesis because of the possible oxidation of the hydroxyl group on its hydrocarbon chain . However"} {"text": "The traditional healthcare industry is undergoing a major paradigm shift due to the rapid advances and developments of mobile, wearable, and other wireless technologies. These Mobile Health technologies promise to bring tremendous benefits and opportunities to the diagnosis, prognosis, treatment, and prevention of human diseases for a better quality of life. In the meantime, mHealth also presents unprecedented performance and security challenges in the entire process of data collection, processing, analysis, synthesis, and visualization. For example, the advent of wearable technology has now made it possible to constantly monitor sophisticated biometrics for many people ranging from home athletes to chronic healthcare patients. A wide spectrum of devices are being designed as either a replacement of an existing healthcare monitor or a proposition for a new multifunction one. These devices typically require communication with a central healthcare system via cell phones or tablets, and thus, threats to the data at rest and in transit still exist, apart from a potential risk of misuse via patient profiling. Therefore, it is crucial to design and implement new mHealth technologies to build reliable, accurate, efficient, and secure healthcare environments for optimal patient care. The wide deployments of such wearable monitoring devices have also raised a critical issue to health informatics caused by the sheer volume and high complexity of health data collected anywhere and anytime. Machine learning and big data-oriented algorithms, models, systems, and platforms are needed to support the analysis, use, interpretation, and integration of diverse health data.This special issue includes 14 research articles, addressing various aspects of the recent mHealth advances and developments that use mobile and wireless devices to improve healthcare outcomes, services, and research. In the article titled \u201cMobile Aid to Assist with Care Decisions in Children with Autism Spectrum Disorder (ASD),\u201d A. Khan et al. developed an autism spectrum disorder intervention application that provides an outlet for children to express their emotions while providing an uncomplicated environment. In the article titled \u201cSystems and WBANs for Controlling Obesity,\u201d M. S. Mohammed et al. explored the use of wireless body area networks (WBANs) and related systems for controlling obesity and proposed to integrate such technologies into an intelligent architecture. In the article titled \u201cIndication of Mental Health from Fingertip Pulse Waves and Its Application,\u201d M. Oyama-Higa and F. Ou explored the use of the largest Lyapunov exponent (LLE) of the attractor to provide an effective indicator of mental health. 
In the article titled \u201cA New Remote Health-Care System Based on Moving Robot Intended for the Elderly at Home,\u201d B. Zhou et al. developed specialized robotics technologies for remote geriatric care. In the article titled \u201cA Mobile Multimedia Reminiscence Therapy Application to Reduce Behavioral and Psychological Symptoms in Persons with Alzheimer's,\u201d D. Imtiaz et al. developed a mobile technology-based solution to address behavioral and psychological symptoms of dementia (BPSD) that occur in individuals with Alzheimer's dementia. In the article titled \u201cQRS Detection Based on Improved Adaptive Threshold,\u201d X. Lu et al. designed an adaptive threshold algorithm for QRS detection that can be used in mobile devices. In the article titled \u201cGenuine and Secure Identity-based Public Audit for the Stored Data in Healthcare Cloud,\u201d J. Zhang et al. constructed an identity-based data-auditing system where an algorithm is used to calculate an authentication signature. In the article titled \u201cModeling Medical Services with Mobile Health Applications,\u201d Z. Wang et al. designed a medical service equilibrium model for evaluating the influence of mHealth applications on the medical service market to balance the supply of doctors and the demand of patients. In the article titled \u201cChinese Mobile Health APPs for Hypertension Management: A Systematic Evaluation of Usefulness,\u201d J. Liang et al. conducted a study of Chinese mobile health APPs for hypertension management to investigate the difference and effectiveness of the APPs between mainland China and other places. In the article titled \u201cLeveraging Multiactions to Improve Medical Personalized Ranking for Collaborative Filtering,\u201d S. Gao et al. constructed a medical Bayesian personalized ranking (MBPR) over multiple users' actions based on a simple observation that users tend to assign higher ranks to healthcare services that are meanwhile preferred in users' other actions. In the article titled \u201cA Systematic Review on Recent Advances in mHealth Systems: Deployment Architecture for Emergency Response,\u201d E. Gonzalez et al. conducted a broad survey of recent advances in mHealth systems. In the article titled \u201cAn Ensemble Multilabel Classification for Disease Risk Prediction,\u201d R. Li et al. explored the use of ensemble multilabel classification for disease risk prediction. In the article titled \u201cSemiautomatic Segmentation of Glioma on Mobile Devices,\u201d Y.-P. Wu et al. studied hard edge multiplicative intrinsic component optimization to preprocess glioma medical images. In the article titled \u201cHandling Data Skew in MapReduce Cluster by Using Partition Tuning,\u201d Y. Gao et al. explored the use of partition tuning-based skew handling (PTSH) to make improvements over the traditional MapReduce model in processing large healthcare datasets."} {"text": "Ultrafast laser microfabrication is a very powerful method for producing integrated devices in transparent materials . This te(1)Ultrafast laser ablation. In this category, we include papers where laser ablation is used to microstructure transparent materials. Bettella et al. [a et al. produceda et al. used las(2)Femtosecond laser irradiation and chemical etching. Here we include papers that demonstrate the potential of the combined use of femtosecond laser irradiation and chemical etching. 
In fact, on suitable materials the irradiated pattern is selectively removed by a subsequent chemical etching step in aqueous solutions of hydrofluoric acid or potassium hydroxide. The advantage of this approach with respect to ablation is an extended 3D structuring capability and much better surface quality, as widely demonstrated by Cheng in his review. (3) Two-photon polymerization. In this category, we include papers that discuss the use of two-photon polymerization to directly write micro/nano structures with specific potential for lab-on-a-chip applications. Zandrini et al. showed the potential of this approach, and a further contribution reviewed its applications. (4) Combining subtractive and additive processes. In this category we include papers where the above processes are combined in order to provide more functionality on the same lab-on-a-chip. Sima et al. reviewed this combined approach, and the remaining contributions followed and discussed related strategies. Overall, the special issue is composed of nine papers, including original research and reviews, organized into the four categories above. We would like to thank all the contributors for submitting their papers to this Special Issue. We also thank all the reviewers for dedicating their time to help improve the quality of the submitted papers."} {"text": "Background: Metabolic syndrome increases the risk of cardiovascular disease (CVD) over and above that related to type 2 diabetes. The optimal diet for the treatment of metabolic syndrome is not clear. Materials and Methods: A review of dietary interventions in volunteers with metabolic syndrome as well as studies examining the impact of dietary fat on the separate components of metabolic syndrome was undertaken using only recent meta-analyses, if available. Results: Most of the data suggest that replacing carbohydrates with any fat, but particularly polyunsaturated fat, will lower triglyceride (TG), increase high density lipoprotein (HDL) cholesterol, and lower blood pressure, but have no effects on fasting glucose in normal volunteers or insulin sensitivity, as assessed by euglycemic hyperinsulinemic clamps. Fasting insulin may be lowered by fat. Monounsaturated fat (MUFA) is preferable to polyunsaturated fat (PUFA) for fasting insulin and glucose lowering. The addition of 3–4 g of n-3 fats will lower TG and blood pressure (BP) and reduce the proportion of subjects with metabolic syndrome. Dairy fat (50% saturated fat) is also related to a lower incidence of metabolic syndrome in cohort studies. The metabolic syndrome is associated with an increased risk of cardiovascular disease and type 2 diabetes and enhances the risk of CVD in people with diabetes. The aim of this review was to systematically review meta-analyses of interventions that replace carbohydrates with fat in people with metabolic syndrome and interventions that examine these effects on the individual components of the syndrome. PubMed was searched with the terms “meta-analysis AND dietary fat AND carbohydrate AND intervention AND ”. We reviewed 102 titles. A large amount of evidence has accumulated that replacing carbohydrates with fat of any sort will lower fasting triglyceride and increase HDL cholesterol.
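The PubMed search described above can be reproduced programmatically against NCBI's public E-utilities interface. The snippet below is an illustrative sketch only, not the authors' actual workflow; the query string mirrors the terms quoted in the text (the truncated final term is left out as in the original), and the retmax value is an arbitrary choice.

```python
# Illustrative PubMed query via NCBI E-utilities (esearch); not the authors' script.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    # Search terms as quoted in the review; the truncated final term is omitted.
    "term": "meta-analysis AND dietary fat AND carbohydrate AND intervention",
    "retmode": "json",
    "retmax": 200,  # arbitrary upper bound on returned PMIDs
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print("Total matching records:", result["count"])
print("First PMIDs:", result["idlist"][:10])
```

Title and abstract screening of the returned records would still be done by hand, as in the review itself.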
A meta-analysis by Mensink and Katan found thThe relationship between carbohydrate intake and TG has been controversial for many years with arguments about the persistence of the TG elevation effect ,9, with p < 0.001) and a lower level of HDL-cholesterol (WMD: \u22122.57 mg/dL (\u22120.07 mmol/l); 95 % CI \u22123.85, \u22121.28; p < 0.001) after the low-fat diets, compared with high-fat diets in 20 studies with 2016 participants.In a meta-analysis of high-fat versus low-fat diets in people with obesity, but no overt metabolic disturbance, Lu et al. 2018) [ [18] foun-3 PUFAs was associated with a lower metabolic syndrome risk . The plasma/serum n-3 PUFAs in controls were significantly higher than in metabolic syndrome cases , especially docosapentaenoic acid and docosahexaenoic acid. Guo et al. performeThe addition of fish oil fatty acids of at least 1 g/day lowers fasting TG and a metanalysis performed by Eslick et al. of 47 stn trials = 99), but lowered fasting insulin . Replacing saturated fat with PUFA lowered fasting glucose . Thus, PUFA is clearly the better fat for replacing carbohydrates in normal people. In people with type 2 diabetes [Fasting glucose lowering by reducing carbohydrate and replacing it with fat is far more controversial. A recent meta-analysis by Wanders et al. showed ndiabetes , high MUdiabetes . Surprisp = 0.02) and diastolic blood pressure mm Hg; p = 0.05) than did diets rich in cis-monounsaturated fat. Huntress et al. [There are much less data on blood pressure and carbohydrate replacement with fat in non-diabetics and the effects are relatively small. A meta-analysis performed by Shah et al. found ths et al. examineds et al. , MUFA loMansoor et al. performen = 21), ruminant TFA-rich lipids , or industrial TFA-rich lipids , no changes in peripheral insulin sensitivity were seen. Bendtsen et al. [p = 0.03) accompanied by a reduction in liver fat [Insulin resistance is a key and essential element of the metabolic syndrome (except the NCEP111 criteria), usually assumed on the basis of central adiposity. As noted by Wanders et al. , replacin et al. found non et al. found non et al. found non et al. showed tiver fat A contrary result was found in the Lipgene study where 47p < 0.028).In another report from the same study, in 337 volunteers , the preMost meta-analyses show that replacement of carbohydrates with fat lowers fasting TG and glucose and blood pressure, and increases HDL cholesterol with some differences, depending on whether the population has type 2 diabetes or not. There are some large intervention and cohort studies that show the opposite results, but these are in the minority. PUFA is probably superior to MUFA, while fish oil is superior to both."} {"text": "In the Funding section, the information provided is incomplete. The complete, correct Funding section is:http://www.isciii.es).This work was supported by the Instituto de Salud Carlos III and Fondo Europeo de Desarrollo Regional (FEDER). PI13/01668 ("} {"text": "We report a close replication but with measures of attachment that are considered superior in comparison to measures used by Van Lange et al., due to subsequent psychometric improvements. Psychometric analyses indeed showed that our attachment measures were reliable and valid, demonstrating theoretically predicted associations with other outcomes. With a sample (N = 879) sufficiently large to detect d = 0.19 , we failed to replicate the effect. 
Based on the available evidence, we interpret as there being no evidence for the link between attachment security and Social Value Orientation, but further replication research that uses solid measures and large samples can provide more definite conclusions about the association between attachment and SVO.We report a replication and extension of a finding from Studies 1 and 2 of Van Lange Close, intimate relations are thought to be rooted in an attachment system that helped to solve basic evolutionary pressures related to survival, caregiving and procreation. Central to attachment behaviours is the regulation of proximity between carers and infants . For exaOne of the hallmark premises of attachment theory is that having a responsive and reliable carer (or not) causes greater (or lesser) confidence in others, forming the basis for how people interact later in life , modern dimensional approaches treat attachment style as consisting of two underlying dimensions: anxiety and avoidance. Note that in the latter framework, security is not a separate dimension but a combination of low levels of both anxiety and avoidance .1.2.As stated above, adult attachment is seen as the result of prior experiences with carers. Although the majority of attachment research has been conducted in the context of close relationships, some research indeed suggests that people who are securely attached are also more supportive towards strangers . Van Lanet al. [and individualists grouped together; hereafter referred to as \u2018proselfs\u2019) scored higher on this scale (i.e. security), but only marginally significantly lower on the scales measuring attachment anxiety and avoidance in their attachment towards a variety of close others. They continued to examine contrasts between prosocials versus competitors and individualist in terms of the latter two dimensions and found that these were statistically significant and supportive of their hypothesis. Moreover, in Study 2 they found that prosocials more strongly endorsed the secure prototype.Van Lange et al. found th2.et al. measured attachment in two ways. In Study 1 (N = 573), they used a multi-item questionnaire that seems a precursor of 13 items published by Carnelley and Janoff-Bulman [Although the findings garnered considerable interest, they had considerable problems. First and foremost, Van Lange f-Bulman that evef-Bulman . This meet al. [N = 136), they measured attachment styles using a Likert-scale endorsement of each of the three attachment prototypes. Indeed, the reliabilities of Van Lange et al.'s [p = 0.09 and p = 0.06, respectively) in a sample of 573 participants. These marginal differences were obtained while controlling for gender and after dropping items from the attachment scale related to feelings about one's partner, yet in Study 2 the effect was present for partner-specific attachment.The lack of dimensionality might be one of the reasons for the low reliability reported for this scale in the Van Lange et al. paper. Iet al.'s [et al. [In terms of validity, there might also have been problems with the attachment measure's translation. In Van Lange et al.'s paper, t [et al. paper among Tilburg University's first year bachelor students. The study was thus not planned as a pure replication, which partly explains why we did not implement Van Lange 4.4.1.a priori power analysis. A post hoc sensitivity analysis in G*Power [d = 0.19. In order to test as conservatively as possible, we excluded eight participants who had four duplicate subject IDs. 
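The post hoc sensitivity analysis mentioned above (the smallest effect detectable with N = 879) can be approximated in Python; the sketch below is not the authors' G*Power session, and the error rates and group split (alpha = .05 two-sided, 80% power, an even split of the sample) are assumptions made only to show the calculation, since the exact settings are not legible here.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed settings (the exact G*Power parameters are not stated verbatim):
# alpha = .05 two-sided, power = .80, N = 879 split into two equal groups.
min_d = TTestIndPower().solve_power(
    effect_size=None,      # unknown -> solved for
    nobs1=879 // 2,        # participants per group under an even split
    ratio=1.0,
    alpha=0.05,
    power=0.80,
    alternative='two-sided',
)
print(f"Minimal detectable effect: d = {min_d:.2f}")  # approximately 0.19
```

Under these assumed settings the minimal detectable effect comes out at roughly d = 0.19, in line with the value reported above.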
We ran no further studies to test the effect. All participants were first year bachelor students who participated in exchange for course credit. All data and syntaxes needed for these attachment analyses are available at https://osf.io/6kqzy/.We report the descriptives in G*Power with 4994.2.4.2.1.The exact composition of the battery of questionnaires in our test-weeks varied throughout the four years. In each year SVO was assessed by the same nine-item triple dominance measure used in the original . The lack of unidimensionality is also apparent from the very low Omega reliability for the security scale in 2011 and 2014, see Attachment was assessed with one or two measures per assessment wave. The switch of these instruments occurred based on considerations unrelated to the current study and were thus not informed by any preliminary analysis (which we did not conduct). In all studies, attachment style was assessed in a dimensional fashion. In 2011 and 2014, we used the Adult Attachment Scale , which et al. [Importantly, our attachment measures differed from Van Lange et al. in two w4.2.3.and to investigate whether our central measures (SVO and attachment) behaved like they are expected to behave on the basis of the existing literature.As noted, the data was collected in test-weeks. A full overview of relevant measures is reported in tables\u00a04.2.4.SW = 0.98, p < 0.01; prosocial SW = 0.98, p < 0.01), once significant for anxiety , significant for security , the Central Limit Theorem prescribes that distributions of means approach normality as long as the sample is sufficiently large [https://osf.io/6kqzy/files/) so we proceeded with parametric testing, as was also done in the Van Lange et al. [We also tested the SVO-attachment data for normality via the Shapiro\u2013Wilk test. Although this test was significant for avoidance (proself ly large . When we5.https://osf.io/6kqzy/).The following results were found with the merged datasets. Given that not all questionnaires were administered in each year, sample size varied per study. Furthermore, degrees of freedom varied per analysis. We will not list these descriptors for each analysis, but instead point the reader to the data and analysis scripts that are available online under the files at our Open Science Framework page and 269 proselfs (35%) and 95 individuals who could not be classified. If participants answered more than six dilemmas in a prosocial manner, they were classified as being prosocial. Note that the number of prosocials are somewhat higher than reported in the Van Lange paper, where between 43% and 49% of the sample was classified as prosocial, 1 For sake of consistency, we report ANOVAs in all our replication analyses.Given that attachment measures were measured at different scales (from 1 to 5 for the AAS and 1 to 7 for the ECR-R) we standardized them within each year for our aggregate (cross-sample) analysis. In our 2014 sample, we furthermore aggregate across the avoidance and anxiety dimensions of both available measures. We also analyse all scales separately and report the results in footnotes.et al. [2 We did not find higher levels of attachment security for prosocials than proselfs in the combined 2011 or 2014 samples; however, Cohen's d = 0.15, .3 Van Lange et al. [M = 0.03, s.d. = 1.01) people are less anxious than proselfs , Cohen's d = 0.03, F1,764 = 0.11, p = 0.74, or that prosocials are less avoidant in their attachment than proselfs , Cohen's d = 0.07, F1,763 = 0.86, p = 0.36. 
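The group comparisons reported here (Cohen's d together with an F test from a one-way ANOVA on two groups) are simple to reproduce from the standardized attachment scores. The sketch below is illustrative only: the arrays are random placeholders, not the OSF data, and the variable names are ours rather than the authors'.

```python
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation for two independent groups."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Placeholder standardized (z-scored) avoidance scores for prosocials and proselfs;
# in practice these columns would come from the merged dataset on the OSF.
rng = np.random.default_rng(0)
prosocial = rng.normal(0.0, 1.0, 500)
proself = rng.normal(0.0, 1.0, 265)

d = cohens_d(prosocial, proself)
f_stat, p_val = stats.f_oneway(prosocial, proself)   # one-way ANOVA with two groups
df_error = len(prosocial) + len(proself) - 2
print(f"d = {d:.2f}, F(1,{df_error}) = {f_stat:.2f}, p = {p_val:.2f}")
```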
When we tested for the differences between prosocial and proself with the ECR reverse-scored and averaged as proxy for security, we also found no difference, Cohen's d = 0.03, F1,763 = 0.12, p = 0.73. We also considered testing the relationship between SVO using a continuous measure instead of a categorical measure. However, because the data were binomially distributed, we could not support analysing SVO as continuous.In the Van Lange et al. paper, r6.N = 879) we found no relationship between social value orientation and attachment, and we could thus not replicate Van Lange et al. [N = 879 participants, we could not confirm the hypothesis that SVO is related to attachment security.In a large sample study from four years , the alpha of a security composite will necessarily be modest . From a measurement perspective, it is problematic that Van Lange et al. [There are three possible explanations for the discrepancy. First, it is possible that our study was not well conducted, including the fact that our attachment measures differed. This is of course possible, but we could not identify any psychometric indicators to support such a claim. Our measures were more reliable than Van Lange et al.'s measureset al.'s . Moreove [et al. measuredstill tested the relationship with more outdated scales and also found nothing.One could further discuss what constitutes a solid replication . These analyses clearly showed robust patterns of attachment dimensions correlating in expected ways with theoretically related constructs. Reporting being less avoidant in one's attachment related to scoring higher on empathic concern, perspective taking, self-esteem, agreeableness and extraversion, and scoring lower on psychopathic tendencies. Reporting being less anxious in one's attachment related to scoring lower on behavioural inhibition and neuroticism, and higher on perspective-taking, self-esteem, openness to experience, extraversion and self-control. Importantly, we did not detect interaction effects between measures and assessment year. Note also that when we did use a measure that included a scale of attachment security, we also did not replicate the original Van Lange et al. [We think the soundness of our approach is evident from the fact that our attachment measures did not only have superior reliability as compared to the original study but also demonstrated construct validity in conducted auxiliary analyses , we did find that our sample was slightly more proself than Van Lange. It could be speculated that the urban versus provincial status of the samples might have contributed to this difference, but we should also note that Tilburg is itself a medium-sized (for Dutch standards) industrial town that is only a 1 h drive from Amsterdam. Variance restriction might have been an issue, although we do not think this accounts for the lack of replication. Finally, we note that our sample size was considerably larger than the original and our sensitivity analyses showed that we were well positioned to detect a small effect.et al.'s [p = 0.09 and p = 0.06, respectively) in a sample of 573 participants. These marginal differences were obtained while controlling for gender and after dropping items from the attachment scale related to feelings about one's partner (yet in Study 2 the effect was present for partner-specific attachment). In terms of validity, there might also have been problems with the attachment measure's translation. In Van Lange et al.'s [et al. 
[A final possibility is that the lack of replication of the original study was made possible by problems with the measurement of attachment. First and foremost, the reliabilities of Van Lange et al.'s Study 1'et al. [Post hoc, after knowing the outcome of these replication studies, one could argue that Studies 1\u20133 are not vital to the 1997 paper , with w [et al. for exam [et al. , p. 1082 [et al. similarl [et al. concludeWhere do we go from here? Going into this project, we did not think it was an unreasonable assumption that attachment was in some way related to SVO. After all, both relate to the honesty/humility factor of the HEXACO and the et al.'s [What are then the developmental origins of SVO? Although we think this is an exciting question, we simply don't know. In any case, given that this paper was the bedrock of an entire literature and that Van Lange et al.'s Studies"} {"text": "Cancer stem cells (CSCs) are a group of tumor cells with self-renewal property and differentiation potential. CSCs play a crucial role in malignant progression of several types of tumors. However, what is still controversial is the clinicopathological relationship between the Nanog marker and its prognostic value in the patients with breast cancer. The expression of Nanog in the patients with breast cancer and its correlation with clinicopathological prognostic factors was explored in the present study.A sample of 120 breast cancer tissues was obtained from the patients who referred to Imam Khomeini Hospital in Sari City, Iran during January 2012 and December 2016. The associations between Nanog expression and clinicopathological factors were analyzed based on immunohistochemical analysis.p = 0.001), lymph node metastasis (p = 0.01), and the stage of the disease (p = 0.003).The expression of Nanog was detected in 67 (55.8%) patients with a high expression rate in 24 (36%) cases (staining index \u22653). Moreover, there was a statistically significant relationship between Nanog expression and clinicopathological factors, including tumor grade (Findings of the study indicate that Nanog may act as a biomarker for prognostic prediction in patients with breast cancer. Breast carcinoma is the most common malignant tumor with the highest mortality rate in women. It involves more than 1.7 million cases around the world annually. Cancer Nanog is a key multidomain homeobox transcription factor for maintaining ESC pluripotency,12. HumaThe specimens of breast cancers were obtained from 120 patients referred to Imam Khomeini Hospital in Sari (Iran) during January 2012 and December 2016. Clinicopathologic parameters were age, tumor size, histological grade, perineural invasion, vascular invasion, lymph node metastasis, and tumor stage. Data were gathered using hematoxylin and eosin (H&E)-stained pathologic slides, pathological records, and hospital files. All the patients were women, with the mean age of 54.5 (ranging from 28 to 77) years. The samples were taken from the cancerous and adjacent normal tissues. For microscopic examination, the tissues were routinely fixed with formalin 10% before being embedded in paraffin.This research was performed using the samples stored after the pathological diagnosis. All the data were obtained from anonymous samples. Mazandaran University of Medical Science approved the study .Participants of the study included patients diagnosed with invasive ductal carcinoma following the breast surgery and those who did not receive neoadjuvant treatment. 
The inappropriate paraffin tissue blocks for immunohistochemical staining as well as those samples with incomplete documents were excluded from the study.00 \u00b0C for 13 min and then removed and put aside to reach room temperature. The tissue was washed with both running water and wash buffer. Next, the slides were incubated at envision for 60 minutes using diagnostic kit for monoclonal Nanog with 1/500 dilution, and then they were washed twice with wash buffer. The DAB solution was added and after appearing brown color, the slides were placed again in wash buffer for two minutes. Finally, the washed slides were stained with Mayer's hematoxylin, rinsed in distilled water, fixed in xylol and mounted with Entellan. Positive control kit for Nanog was Seminoma tissue. Our negative control was the tissue that the primary antibody did not shed. The nuclear staining was observed and scored by two pathologists according to the published criteria using a semi-quantitative score), followed by 33 (27.5%) as grade I and 19 (15.8%) as grade III of the tumor. Sixty-seven of the breast cancer patients (55.8%) of the study exhibited lymph node involvement. The higher Nanog expression was observed in 44 lymph node positive samples (36.7%).p = 0.001), tumor stage (p = 0.003), and lymph node involvement (p = 0.01) in breast cancer samples. There was not any relationship between Nanog expression and age (p = 0.71), tumor size (p = 0.25), perineural invasion (p = 0.06), and vascular invasion (p = 0.27).et al.[et al.[et al.[In this study, 55.8% of tumoral sample have expressed Nanog marker. Finicelli et al. also repl.[et al. found lil.[et al. observedl.[et al.,19. In al.[et al., tumor al.[et al., hormonel.[et al., and chel.[et al..et al.[et al.[et al.[et al.[et al.[et al.[et al.\u2019s[et al.[et al.[et al.[et al.[et al.[We identified Nanog protein to be predominately expressed in the nucleus of tumor cells. IHC analysis of Nanog in breast carcinoma tissues has shown both nuclear and cytoplasmic localization of this protein; a result that is compatible with ours,19. Our et al. and Wangl.[et al. have deml.[et al. and Wangl.[et al. did not l.[et al.,23. Withl.[et al.. Arif et[et al.\u2019s results l.[et al. and Wangl.[et al. and Wanget al.[et al.[In this study, we found no significant link between perineural and vascular invasion. However, there is no study reporting these two prognostic factors. Nagata et al. have shol.[et al., no signSome limitations of this study include short follow-up period, evaluation of distant metastasis, and survival rate of the patients. This study found a strong connection between Nanog expression and some clinicopathologic features in the patients with breast cancer, which includes lymph node metastasis, stage of the disease, and grade of disease. Our findings indicate that there is an association between the expression of Nanog and prognosis of the breast cancer patients. Moreover, worse prognostic characteristics were observed in the patients with high expression of Nanog. However, controversies exist among the studies conducted to evaluate this relationship."} {"text": "Neuroimage57, 1205\u20131211) investigated whether any regions activated more in response to passively viewing digits in contrast with letters and visually similar nonsense symbols and identified a region in the left angular gyrus. By contrast, Grotheer et al. found bilateral regions in vOTC which were more activated in response to digits than other stimuli categories while performing a one-back task. 
In the current study, we aimed to replicate the findings reported in Grotheer et al. with Price & Ansari's passive viewing task as this is the most stringent test of bottom-up, sensory-driven, category-specific perception. Moreover, we used the contrasts reported in both papers in order to test whether the discrepancy in findings could be attributed to the difference in analysis.The influential triple-code model of number representation proposed that there are three distinct brain regions for three different numerical representations: verbal words, visual digits and abstract magnitudes. It was hypothesized that the region for visual digits, known as the number form area, would be in ventral occipitotemporal cortex (vOTC), near other visual category-specific regions, such as the visual word form area. However, neuroimaging investigations searching for a region that responds in a category-specific manner to the visual presentation of number symbols have yielded inconsistent results. Price & Ansari (Price, Ansari 2011 Arabic digits) was postulated by Dehaene , but onlThe extent to which category-specific regions in the ventral visual stream, such as the visual word form area (VWFA), are specialized for domain-specific processing remains debated e.g. \u201310). The. The10])An opposing view, the interactive account of vOTC function, proposes that object recognition is dependent on forward and backward feedback loops between visual cortices and higher-order semantic processing regions . Supportet al. proposed that to be considered an NFA, the region \u2018should be anatomically consistent across subjects and should respond more to numerals than morphologically, semantically, or phonologically similar stimuli' [et al. [In order to resolve this debate, it is important to determine the criterion for a region to be considered functionally specific for number symbols. For example, Shum stimuli' , p. 6709stimuli' further stimuli' . Notably [et al. found thet al. [et al. [et al. [Price & Ansari used a met al. to test et al. ran a co [et al. ran a co [et al. . However [et al. . While P [et al. used a p [et al. used a o1.1.et al. [If an NFA can be reproducibly localized and is specific to Arabic numerals rather than familiar symbols more broadly, it should activate more strongly for Arabic numerals than other meaningful written symbols, regardless of task demands. The current study, therefore, aimed to replicate and extend the study by Price & Ansari using upet al. to deter2.https://osf.io/wz268/register/5a970dfec69830002df68ac2). This pre-registration was performed after data analysis. We had also previously registered our analysis plan on the OSF on 15 September 2017, before we completed data collection (https://osf.io/hcs7t).This article received results-blind in-principle acceptance (IPA) at Royal Society Open Science. Following IPA on 7 March 2019, the accepted Stage 1 version of the manuscript, not including results and discussion, was pre-registered on the OSF recruited from the London, Ontario, region participated in the study, and 27 of them were female. An additional three adults completed the study but were excluded from data analysis for failing to disclose that they were left-handed in advance. All included participants were right-handed with normal or corrected to normal vision.Forty adults between 18 and 37 years of age . 
A whole-brain high-resolution T1-weighted anatomical scan was collected using an MPRAGE sequence with 176 slices, a resolution of 1 \u00d7 1 \u00d7 1 mm voxels and a scan duration of 5 min and 21 s . The in-plane resolution was 256 \u00d7 256 pixels. Functional MRI data were acquired during the change detection task using a T2*-weighted single-shot gradient-echo planar sequence . Forty-eight slices were obtained in an interleaved ascending order with a voxel resolution of 2.5 \u00d7 2.5 \u00d7 2.5 mm. A multiband acceleration factor of 4 was used. There were 4 runs of the change detection task with 335 volumes. Padding was used around the head to reduce head motion. The total scan duration was approximately 45 min.et al. [et al. [The aim of this study was to test whether an NFA region can be localized using the passive design used by Price & Ansari and imaget al. compensaet al. but stil [et al. also use2.4.p < 0.001. These maps were then corrected for multiple comparisons using the Monte Carlo simulation procedure to determine a minimum cluster threshold [\u03b1 < 0.05. This cluster thresholding algorithm estimates and accounts for spatial smoothness and spatial correlations within the data . The functional images were corrected for head motion, low-frequency noise (high-pass filter with a cut point of two cycles per time point) and differences in slice time acquisition, and spatially smoothed with a 6 mm FWHM Gaussian kernel. The number of functional volumes acquired (335) exceeded the length of the behavioural task and was adjusted to 323 volumes for the second participant to match the duration of the task. However, this correction was not saved to the acquisition protocol and only that participant had this number of volumes acquired for functional runs. Therefore, the runs for the other 39 participants were trimmed to 323 volumes during pre-processing so that all runs had the same number of volumes. Three runs were excluded from further analysis because the participant's movement exceeded 3 mm over the total course of the run or 1 mm between volumes. An automatic alignment procedure using gradient-driven affine alignment in Brain Voyager was used to spatially align the functional data to the corresponding anatomical scan. Images were then spatially transformed to MNI-152 space. All contrast and conjunction analyses were run using voxel-wise general linear models and thresholded at an initial, uncorrected threshold of ata (see ).et al. [1.et al. [et al. [We ran a contrast similar to the one reported in Grotheer et al. to test [et al. had addi2.To test whether there was a region in the ITG that is number-specific, we ran the more stringent conjunction analysis reported in Price & Ansari to look 3.To test the alternative hypothesis that there is a region in the ITG that responds preferentially to familiar symbols, we ran a conjunction analysis to look for a region that responded to: (digits > scrambled digits) and (letters > scrambled letters).In order to resolve the discrepancy in the literature, we ran analyses reported previously in Price & Ansari and Grotet al. . Therefo3.3.1.M = 99.12%, s.d. = 2.2%). We first ran the contrast reported by Grotheer et al. [Analysis of the behavioural results revealed that button press data were not recorded for three participants due to a technical error, so data from these participants were excluded from the analyses. Accuracy on the change detection task was high for all remaining participants and (letters > scrambled letters). 
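The conjunction analyses named in these contrasts amount to requiring that every voxel pass the initial uncorrected threshold in both contrast maps, which is often implemented as a minimum-statistic conjunction. The sketch below shows that logic on plain NumPy arrays; it is a schematic aside, not the Brain Voyager pipeline used in the study, and the degrees of freedom and array shapes are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def conjunction(tmap_a, tmap_b, df, p_uncorrected=0.001):
    """Minimum-statistic conjunction of two voxel-wise t-maps.

    A voxel survives only if both contrasts exceed the one-tailed critical
    t value for the chosen uncorrected threshold.
    """
    t_crit = stats.t.ppf(1.0 - p_uncorrected, df)
    conj = np.minimum(tmap_a, tmap_b)     # voxel-wise minimum statistic
    conj[conj < t_crit] = 0.0             # zero out sub-threshold voxels
    return conj

# Illustrative inputs standing in for the (digits > scrambled digits) and
# (letters > scrambled letters) t-maps, e.g. loaded from NIfTI files.
digits_vs_scrambled = np.random.randn(91, 109, 91) * 2
letters_vs_scrambled = np.random.randn(91, 109, 91) * 2
conj_map = conjunction(digits_vs_scrambled, letters_vs_scrambled, df=39)
print(f"{int((conj_map > 0).sum())} voxels survive the conjunction")
```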
Results revealed bilateral clusters in the middle occipital gyrus that responded less to familiar characters than to scrambled symbols . This ag3.2.a priori region of interest (ROI) based on a recent meta-analysis by Yeo et al. [et al. which demonstrated significant activation in response to number symbols across studies, corresponding to the putative NFA , we ran additional analyses that were not included in the registered protocol to check the signal quality in our data. To investigate whether the absence of a region that responded specifically to viewing number symbols could be attributed to poor data quality and signal loss in the ITG, we took a two-step approach to determine the quality of data in an o et al. . A sphertive NFA . This ROtive NFA . To asseAs a second step, to determine the degree of signal dropout in this region, we employed a masking procedure using AFNI's 3dAutomask function to create a \u2018brain-only' mask of each subject's mean functional image. We then determined the percentage of voxels within the NFA ROI that were excluded from the brain mask, serving as a measure of signal loss in this region. We assessed these data quantitatively by varying the signal intensity \u2018clip fraction' parameter, from the default setting of 0.5 up to 0.75 (more conservative) . To summAs a final assessment, we sought to verify whether participants with a poor signal in the NFA ROI, relative to the group, influenced our activation results. We defined an exclusion criterion in which two conditions had to be true: (i) tSNR in the ROI was below 1 s.d. of the mean across our sample and (ii) the mean percentage of voxels excluded from the ROI was above 1 s.d. of the mean across our sample . Two participants met this criterion (P08 and P13). We then ran the same contrasts described above excluding these two participants from the analysis and the pattern of results remained the same . Figure\u00a04.Despite mixed evidence for the existence of an NFA, it has been proposed that it is an important research direction for gaining insight into the organizing principles of vOTC, as well as mathematical learning and development ,5. The aImportantly, this study cannot rule out the possibility that a region in vOTC responds to numbers when participants are asked to engage in tasks that require identifying numbers. We used a passive viewing task because previous investigations successfully used similar tasks to locate other category-specific regions in the ventral visual stream, including the parahippocampal place area , the fuset al. [et al. [Goal-directed attention imposed by task demands may be required in order to engage neurons in vOTC in response to numbers. Recent studies have provided evidence that a region in vOTC activates preferentially to number symbols when participants are performing active numerical tasks \u201322. Resuet al. probed t [et al. systemat [et al. , but rat [et al. .Our results did not replicate Price & Ansari's finding Future studies should investigate the modulation of vOTC by different task demands. There is mounting evidence that neural activation in response to number in vOTC seems to be engaged by semantic rather than perceptual processes (e.g. ,22,31,325.The results fail to support the theory that there exists a region in the vOTC that can be reproducibly localized and responds selectively to the visual presentation of number symbols during passive viewing. 
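As a brief aside on the signal-quality checks described earlier in this section, the temporal SNR within a spherical ROI reduces to the voxel-wise ratio of the mean to the standard deviation of the time series. The sketch below assumes the motion-corrected 4-D run and an ROI mask are available as NumPy arrays; the dimensions, ROI centre and radius are illustrative assumptions, and this is not the code actually used for the reported checks.

```python
import numpy as np

def temporal_snr(bold_4d):
    """Voxel-wise temporal SNR: mean over time divided by s.d. over time."""
    mean_img = bold_4d.mean(axis=-1)
    std_img = bold_4d.std(axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        tsnr = np.where(std_img > 0, mean_img / std_img, 0.0)
    return tsnr

def spherical_roi(shape, centre_vox, radius_vox):
    """Boolean mask of a sphere around a voxel coordinate (e.g. an NFA peak)."""
    grid = np.indices(shape)
    offsets = grid - np.array(centre_vox).reshape(3, 1, 1, 1)
    return np.sqrt((offsets ** 2).sum(axis=0)) <= radius_vox

# Small illustrative 4-D run (x, y, z, time); a real run would be loaded
# from the pre-processed functional images instead.
bold = np.random.rand(40, 48, 40, 120) * 30 + 500
roi = spherical_roi(bold.shape[:3], centre_vox=(20, 24, 20), radius_vox=4)
print(f"mean tSNR in ROI = {temporal_snr(bold)[roi].mean():.1f}")
```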
The current study failed to replicate previous reports of a putative NFA and corroborates a growing body of evidence that this region is not responsible for bottom-up visual processing of Arabic digits. Given that some reproducible localization is evident across active task paradigms, more work is needed to understand the function of this region and how it interacts with the network of regions involved in symbolic number processing."} {"text": "Colorectal cancer (CRC) represents the second most common cancer worldwide and the third leading cause of cancer death. Most CRCs arise from premalignant colorectal lesions (mainly adenomas) that require years to develop an invasive disease. Early stage detection through the use of screening programs can sharply reduce CRC incidence and mortality allowing for better outcomes of the disease. The effectiveness of these programs may be strongly enhanced by targeting screening to individuals at higher risk.Tomlinson et al. [Gargallo et al. [CRC is a multifactorial disease resulting from complex interactions between environmental and genetic factors. A great progress in understanding the underlying genetic factors of CRC has been made in the past two decades. Since the first genome-wide association study (GWAS) on CRC risk published in 2007 by n et al. , more thDunlop et al. [Yarnall et al. [Jung et al. [Hsu et al. [. They observed that adding the GRS to prediction models increased discriminatory accuracy from 0.51 to 0.59 (P = 0.0028) in men and from 0.52 to 0.56 (P = 0.14) in women, compared to risk models based only on family history. Subsequent studies show similar results. Ib\u00e1\u00f1ez-Sanz et al. reported a discriminatory accuracy value of 0.63 for CRC risk prediction model combining some modifiable risk factors , family history of CRC, and a GRS based on 21 susceptibility SNPs [Weigl et al. [Jeon et al. [Balavarca et al. [p = 0.0002).Nowadays CRC screening guidelines are based mainly on age and family history. However, the elaboration of prediction risk models including environmental and genetic risk factors may allow a more accurate selection of low-and high-risk patients. Improving risk stratification will optimize the use of invasive technology and increase adherence to screening programs. The first risk prediction models for CRC were performed based on family history, lifestyle factors, and environmental risk factors. However, recent studies showed an increasing interest in developing genetic risk scores (GRS), combining common genetic variants associated with CRC for a more personalised risk assessment. In this context, p et al. , Yarnalll et al. , and Jung et al. developeu et al. developeity SNPs . More reP = 0.002 in men aa et al. after evAll these studies support the idea that adding genetic, environmental and lifestyle information into a CRC risk prediction model may significantly increase the discriminatory accuracy over models using only age and family history. Risk stratification could still be improved by integrating new discovered susceptibility SNPs to GRS as well as other relevant biomarkers such as epigenetic markers. 
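The genetic risk scores discussed above are typically constructed as a weighted sum of risk-allele counts, with the per-allele log odds ratios from GWAS as weights, and their added value is judged by the gain in discriminatory accuracy (AUC) over a model using age and family history alone. The sketch below illustrates that construction; the SNP count, weights, data and in-sample AUC comparison are placeholders and assumptions, not values or code from the cited studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_people, n_snps = 2000, 21                                  # e.g. a 21-SNP panel

genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps))    # risk-allele counts (0/1/2)
log_or = rng.normal(0.08, 0.03, n_snps)                      # placeholder per-allele log odds ratios
grs = genotypes @ log_or                                     # weighted genetic risk score

family_history = rng.binomial(1, 0.1, n_people)
age = rng.integers(50, 75, n_people)
# Placeholder outcome loosely driven by the predictors, for illustration only.
risk = 1 / (1 + np.exp(-(-6 + 0.05 * age + 0.6 * family_history + 1.5 * grs)))
crc = rng.binomial(1, risk)

base = np.column_stack([age, family_history])
full = np.column_stack([age, family_history, grs])
auc_base = roc_auc_score(crc, LogisticRegression(max_iter=1000).fit(base, crc).predict_proba(base)[:, 1])
auc_full = roc_auc_score(crc, LogisticRegression(max_iter=1000).fit(full, crc).predict_proba(full)[:, 1])
# AUCs are computed in-sample here purely for brevity.
print(f"AUC age + family history: {auc_base:.2f}; with GRS added: {auc_full:.2f}")
```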
Combining environmental and lifestyle factors with GRSs in risk prediction models can help to tailor CRC prevention measures by adapting the onset age, nature, and intensity of CRC screening strategies."} {"text": "Poole et al., in a relatively recent paper published in Critical Care, concluded that, while both plasma citrulline concentrations and glucose absorption were reduced in critical illness, fasting plasma citrulline concentrations were not predictive of subsequent glucose absorption and therefore do not appear to be a marker of small intestinal absorptive function."} {"text": "An increased understanding of the biology of colorectal cancer (CRC) has fuelled the identification of biomarkers with the potential to drive a stratified precision medicine approach to care in this common malignancy. We conducted a systematic review of health economic assessments of molecular biomarkers (MBMs) and their employment in patient stratification in CRC. Our analysis revealed scenarios where health economic analyses have been applied to evaluate the cost-effectiveness of MBM-guided clinical interventions: (i) evaluation of Dihydropyrimidine dehydrogenase gene (DPYD) status to identify patients susceptible to 5-Fluorouracil toxicity; (ii) determination of Uridine 5′-diphospho-glucuronosyltransferase family 1 member A1 gene (UGT1A1) polymorphism status to help guide irinotecan treatment; (iii) assessment of RAS/RAF mutational status to stratify patients for chemotherapy or Epidermal Growth Factor Receptor (EGFR) therapy; and (iv) multigene expression analysis (Oncotype Dx) to identify and spare non-responders the debilitating effects of particular chemotherapy interventions. Our findings indicate that Oncotype Dx is cost-effective in high-income settings within specific price points, by limiting treatment toxicity in CRC patients. DPYD status testing may also be cost-effective in certain settings to avoid specific 5-FU toxicities post treatment. In contrast, current research does not support UGT1A1 polymorphism status as a cost-effective guide to irinotecan dosing, while the health economic evidence to support testing of KRAS/NRAS mutational status to guide the choice between chemotherapy and EGFR therapy was inconclusive, despite its widespread adoption in CRC treatment management. However, we also show that there is a paucity of high-quality cost-effectiveness studies to support clinical application of precision medicine approaches in CRC. Within the UK, in 2015, almost 42,000 new cases of CRC were documented [et al., Manuscript in Preparation]. Globally, colorectal cancer (CRC) is the second most common cancer in women, and the third most common in men; the annual number of deaths approaches 700,000. An increased understanding of the biology underpinning malignancy has indicated that many cancers, including CRC, are composed of a number of different molecular disease subtypes, which may show differing responses to therapeutic intervention. Identification of appropriate prognostic and predictive molecular biomarkers (MBMs), which can distinguish between these different subtypes, can assist clinical decision-making, such that patients receive the most appropriate treatment based on their molecular profile.
This stratified or precision medicine approach has the potential to contribute to enhanced therapeutic efficacy, while minimising treatment-related toxicity.To identify MBMs of the required clinical utility, e.g. diagnostic (identifying cancer subtype), predictive (determining likelihood of response to therapy), or prognostic (indicating course of disease), analytical platforms are becoming more sophisticated, incorporating technologies such as gene expression profiling and next-generation sequencing. Interpretation of data generated from these platforms is performed using different bioinformatics approaches, adding to overall complexity . The Natet al. [BRAF and DNA mismatch repair status) and predictive (KRAS and NRAS) utility [et al. [For researchers and clinicians to embrace a MBM test, it must demonstrate analytical validity, clinical validity, and, most importantly, clinical utility . These pet al. examined utility . Sepulve [et al. also notDecision makers such as healthcare payers need to know both the financial and the health-related implications of introducing MBM testing. Limited information on the contribution to patient outcomes and societal benefit is often cited as the basis for lack of reimbursement for a particular MBM test . TherefoFollowing Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines, this review is registered with PROSPERO (registration number: CRD42016038046) and the findings conform to that registration .5-Fluorouracil (5-FU) has been the backbone of chemotherapeutic regimens for CRC since the late 1950s. As part of our initial scoping search, we identified thirteen other drugs that have been approved by the Food and Drug Administration (FDA) for treatment of CRC since 1996 see .Our research question, formulated using the PICOS framework was \u201cWhat is the cost-effectiveness of using a MBM test for predicting response to therapy in CRC?\u201d. PICOS was employed to develop a search limited to studies that performed economic evaluation of patients diagnosed with CRC, who were subsequently stratified for treatment selection by the result of a MBM test. Initially, a scoping search was performed to identify keywords and MeSH headings. Articles were identified by systematic literature search if they were published between 1 January 2006 and 31 December 2016. We searched MEDLINE, EMBASE, Cochrane Library, SCOPUS, Web of Science, Econlit and SCHARR. Meeting presentations were also searched for the same time period in the American Society of Clinical Oncology (ASCO) and International Society for Pharmacoeconomics and Outcomes Research (ISPOR) websites. Boolean operators were used to set up weekly searches of the above databases throughout the preparation of the review to keep it current, with the addition of Google Scholar alert searches at least 3 times per week until the end of 2018. All bibliographic references retrieved via the searches were exported to reference management software, and duplicates were removed before the study selection step.Articles were screened for eligibility based on the following criteria :Titles and abstracts of all articles were reviewed for eligibility and only accepted if the above criteria were met. Four reviewers independently evaluated the full text of potentially eligible articles to determine whether to include these articles in this review. A lack of consensus over eligibility was resolved between the four reviewers. 
If doubts remained about the suitability of the study (such as academic posters which lack full peer review), we took the conservative approach of including these studies, so as to avoid missing potentially informative studies, while noting that they had not undergone full peer review.The integrity of each study was assessed according to a checklist developed by the ISPOR Consolidated Health Economic Evaluations Reporting Standards (CHEERS) Task Force Report . This unIn cases where more than one therapy were modelled, the reported ICER might not be compared to the base case, e.g. best supportive care (BSC). In these instances, we calculated the ICER based on reported costings and QALYs for the MBM test using the following formula:Where LYGs were reported, but not QALYs, and no health utility was reported then:The baseline health utility score of 0.8 was calculated from studies identified in our systematic review, which ranged from 0.71 to 0.87 for progression-free survival in CRC patients, which conforms with a published systematic review of health utility values for CRC . ConversFor each therapeutic intervention indicated, we listed the dates of FDA, European Medicines Agency, and National Institute for Health and Care Excellence (NICE) approval. We have identified the annual costs for each of these therapies, adjusted to 2016 \u00a3GBP and Euros using the CCEMG (Campbell and Cochrane Economics Methods Group) - EPPI (Evidence for Policy and Practice Information) - Centre Cost Converter . We haveCosts listed in n = 25) were removed. A total of 121 articles were then screened for eligibility. After full text examination, 25 articles were excluded as these reported CEAs that related to screening of families for hereditary CRC, which is not relevant to the research question being posed in this systematic review. A further 16 articles were either reviews or systematic reviews which were retained for reference, and 5 articles did not mention the terms LYG, QALY, or ICER. A total of 12 other articles did not include CBAs, CEAs, CMAs, or CUAs and 12 articles focused on CRC therapy alone, not taking into account the use of MBM tests to help guide therapy. On further examination, 7 articles were identified as duplicate studies (earlier abstract reports of the same study or versions of the same study published in other languages), a further 7 were abstracts without sufficient information, 3 articles involved a mixed population of cancer types which either included data already captured or aggregated data from which CRC-specific data could not be extracted, 1 study was an incomplete trial with insufficient data, and 5 were letters with insufficient detail for inclusion. In total, 14 eligible studies remained which involved economic evaluation of a MBM test for guiding therapeutic intervention in CRC.The Study Selection Workflow is outlined in We extracted empirical and methodological data and imported the data into Microsoft Excel. Extracted features included: author, year, country of study, CRC stage/metastases/not described, therapy, biomarker utilised, LYG, QALY, and ICER employed \u20132C. We aA quality rating for each study was determined (see Methods) which allowed us to assign a level of confidence in the strength of evidence for each study. 
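The ICER calculation referred to in the methods above follows the conventional definition of incremental cost per incremental health effect, with life-years gained converted to QALYs via the baseline utility of 0.8 when no utility was reported. The sketch below is a reconstruction under that assumption rather than a verbatim copy of the review's formulas, and the costs, QALYs and the £30,000/QALY willingness-to-pay comparison are illustrative values.

```python
BASELINE_UTILITY = 0.8   # health utility assumed in the review when only LYGs are reported

def qaly_from_lyg(lyg, utility=BASELINE_UTILITY):
    """Convert life-years gained to QALYs with a constant utility weight."""
    return lyg * utility

def icer(cost_test, cost_comparator, qaly_test, qaly_comparator):
    """Incremental cost-effectiveness ratio of an MBM-guided strategy versus a comparator."""
    return (cost_test - cost_comparator) / (qaly_test - qaly_comparator)

# Illustrative (made-up) numbers: an MBM-guided strategy versus best supportive care.
value = icer(cost_test=28_000, cost_comparator=12_000,
             qaly_test=1.45, qaly_comparator=0.95)
print(f"ICER = £{value:,.0f} per QALY "
      f"({'within' if value <= 30_000 else 'above'} a £30,000/QALY WTP threshold)")
```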
The quality assessment was performed by one reviewer, checked by a second reviewer and any disagreement was resolved by the third and fourth reviewers.Data capture and quality analysis for each study of cost-effectiveness were represented in DPYD gene status in relation to 5-FU toxicity [DPYD gene. More than 80% of administered 5-FU is detoxified in the liver by DPD metabolism [toxicity . 5-FU istabolism .UGT1A1 testing to guide irinotecan dosing [n dosing \u201326. The n dosing .RAS (KRAS and NRAS) and BRAF, ) in relation to their use as prognostic/predictive MBMs in CRC. Tumours harbouring mutated forms of these genes are resistant to anti-epidermal growth factor receptor (EGFR) therapy [RAS (KRAS and NRAS) mutations occur in 53% of CRC patients [BRAF mutations in approximately 10% [RAS mutational status (KRAS and NRAS) before either treatment with an EGFR monoclonal antibody or chemotherapy treatment. therapy . RAS (KRpatients and BRAFtely 10% . Most ofN = 8) involved RAS and BRAF testing prior to CRC treatment with cetuximab and panitumumab [The majority of health economic evaluations . The second HQO CEA for KRAS screening prior to panitumumab treatment (compared to BSC) came very close to the NICE threshold with an ICER of \u00a330,607 and over 82 QALDs. The third HQA CEA for KRAS screening before cetuximab monotherapy (compared to BSC) generated an ICER above NICE\u2019s threshold at \u00a335,095 with 112 QALDs; however, these 3 studies were rated as medium-quality studies, as the report did not outline a structured summary, describe its analytical methods, or mention funding sources and conflicts of interest. The CEA from the Westwood et al. [et al. [et al. [KRAS testing prior to cetuximab treatment) and \u00a348,999 (over 179 QALDs for combined KRAS and BRAF screening prior to cetuximab treatment), respectively. These ICERS were above the NICE threshold of \u00a330,000 . The remaining studies were of medium to poor quality, missing important details when applying the CHEERS checklist.The results in l. study both yied et al. study re [et al. and Blan [et al. , produceIn All studies undertaken were CEAs.Incremental costs relating to toxicities following 5-FU administration are indicated in UGT1A1 genotyping to guide irinotecan dosing, one study [et al. [In the CEAs detailing ne study reportedne study , 24 repo [et al. based thRAS (KRAS and NRAS) and BRAF testing utilised QALYs to calculate their ICERs. The exceptions are Behl et al. [et al, [In l et al. and Vija [et al, which emet al. [In et al. calculatet al. [DPYD testing CEA. A decision analytic approach in combination with a Markov model was employed to assess resource use and health outcomes. The time horizon was 2 cycles of chemotherapy.Traor\u00e9 et al. did not UGT1A1 genotyping to guide irinotecan dosing was described from the perspective of the healthcare payer in 3 studies [et al. [et al. [The healthcare perspective for CEAs for studies \u201325, wher [et al. focussed [et al. employed [et al. \u201326, withRAS (KRAS and NRAS) and BRAF BM testing to inform anti-EGFR therapy was modelled in 6 of 8 studies from the healthcare payer perspective of 4 studies , 24 empl [et al. while GoRAS/BRAF studies employed a discount rate, which ranged from 3-5%. 
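The 3 to 5% discount rates noted above are applied to costs and health outcomes that accrue in later years, so that future benefits and expenditure count for less than immediate ones. A brief sketch of standard exponential discounting follows; the yearly figures are purely illustrative and do not come from any of the included studies.

```python
def discounted_total(yearly_values, rate=0.035):
    """Present value of a stream of yearly costs or QALYs (year 0 is undiscounted)."""
    return sum(v / (1 + rate) ** year for year, v in enumerate(yearly_values))

annual_costs = [15_000, 4_000, 4_000, 4_000, 4_000]   # illustrative 5-year cost stream
annual_qalys = [0.85, 0.80, 0.78, 0.75, 0.70]

for r in (0.03, 0.05):
    print(f"rate {r:.0%}: costs £{discounted_total(annual_costs, r):,.0f}, "
          f"QALYs {discounted_total(annual_qalys, r):.2f}")
```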
Five (5) of 8 studies employed a health utility questionnaire; in 4 cases [Six (6) of 8 4 cases , 36, 38 Oncotype DX study employed a discount rate of 3% and used a health utility questionnaire, but the type of questionnaire was not specified threshold was not employed.Three (3) out of 4 studies reported WTP thresholds, which range from \u20ac50,000 in 2013 to US$10et al. [et al. [Six (6) of 8 studies used a WTP threshold which ranged from a Canadian study which seet al. utilised [et al. did not et al. [Alberts et al. limited Sensitivity analyses were performed to test the degree of uncertainty in health benefits and costs. DSA tested parameters such as clinical effects, disease progression, QALYs and costs one at a time, while the superior PSA tested these parameters in combination.et al. [Traor\u00e9 et al. conducteet al. [et al. [et al. [et al. [UGT1A1 BM test.Gold et al. and Butz [et al. performe [et al. and Pich [et al. only perKRAS testing, cetuximab plus irinotecan was the most cost-effective therapy when compared to BSC. The DSA from Vijayaragharan et al. [KRAS WT patients in the population. The DSA used by Shiroiwa et al. [KRAS testing to be 62% cost-effective at a WTP threshold of \u00a520 million . The PSA from Westwood et al. [KRAS testing strategies were almost equal.The PSA in the HQO paper showed tn et al. indicatea et al. did not d et al. did not et al. [et al. [KRAS and BRAF testing to be the dominant strategy at a WTP threshold of \u20ac10,000 to \u20ac40,000 , whilst at a WTP threshold greater than \u20ac40,000 , KRAS testing was dominant. The tornado plot by Harty et al. [et al. [The scenario analysis performed by Blank et al. describe [et al. also noty et al. indicate [et al. revealed [et al. DSA detaMost studies which used a PSA approach , 36, 38 The DSA and PSA indicated that QALYs were sensitive to: (1) benefit of fluoropyrimidine monotherapy over surgery alone, (2) benefit of FOLFOX over fluoropyrimidine monotherapy, and (3) time preference discount rate.KRAS/BRAF test result. For panitumumab therapy, the average cost saving is \u00a341,159 per patient per year.UGT1A1, which is used to guide the reduction of irinotecan dosing.KRAS MBM reduced the ICER for cetuximab from \u00a3142,515 to \u00a3109,452 in the Shiroiwa et al. study [As shown in l. study and for l. study . HoweverKRAS (or BRAF) mutations. Two studies [et al. [From reported costs and positive effects in three studies 32, 33, 35), we were able to generate ICERs for chem, 33, 35, studies resulted [et al. while brUGT1A1*28 polymorphism occurs with higher prevalence in the African (42\u201356%) and Caucasian (26\u201331%) populations, than in Asian populations (9\u201316%). Consequently, the use of this biomarker leads to a ten-fold increase in Africans LYGs compared to Asian LYGs, when used to guide irinotecan treatment. However, even with this increase in LYG, this MBM is still not cost-effective [It is important to note that the ICERs for the MBMs evaluated in this systematic review are susceptible to the frequency of mutations in the general population. The ffective .The economic impact of MBM testing to guide therapy in CRC depends upon the cost of the therapeutic intervention and the price of the test, balanced against the clinical impact of the intervention and the degree of toxicity to the patient. 
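The probabilistic sensitivity analyses summarised in this section propagate parameter uncertainty by sampling incremental costs and effects jointly and reporting the share of simulations in which the strategy is cost-effective at a given willingness-to-pay threshold, which is how figures such as "62% cost-effective at ¥20 million" arise. The sketch below illustrates that logic with assumed, purely illustrative distributions; it does not reproduce any of the cited decision models.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 10_000
wtp = 30_000    # willingness-to-pay per QALY, e.g. the NICE threshold

# Assumed distributions for the incremental cost and incremental QALYs of an
# MBM-guided strategy versus its comparator (illustrative only).
inc_cost = rng.normal(16_000, 4_000, n_sims)
inc_qaly = rng.gamma(shape=4.0, scale=0.12, size=n_sims)   # mean of about 0.48 QALYs

net_monetary_benefit = inc_qaly * wtp - inc_cost
prob_cost_effective = (net_monetary_benefit > 0).mean()
print(f"Probability cost-effective at £{wtp:,}/QALY: {prob_cost_effective:.0%}")
```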
So, if the net savings and QALYs are within a specific country\u2019s WTP threshold, the value-based reimbursement of the MBM may help justify a stratified/precision medicine approach to cancer treatment .et al. [DPYD*2A screening prior to treatment, showed that genotyping marginally improves patient outcomes, and is also cost saving . The CEA by Traor\u00e9 et al. [et al.\u2019s study demonstrates that establishing DPYD screening in clinical practice in advance of 5-FU or capecitabine treatment may be cost-effective. On the evidence that we have presented and evaluated in this systematic review, DPYD screening is not only cost saving, but also spares patients the associated toxicities, although the overall net monetary benefit may be minimal.Deenen et al. , in a sa\u00e9 et al. and DeenUGT1A1 may be cost saving, but our systematic review is inconclusive as to whether testing improves patient outcomes, with both positive [et al. [UGT1A1 genotyping to guide irinotecan dosing, and that any dose reduction should be based on clinical parameters, rather than UGT1A1 status.Three of the irinotecan studies \u201325 identpositive and negapositive QALYs be [et al. stated tet al. [UGT1A1*1 homozygotes and UGT1A1*28 heterozygotes, with positive therapeutic results without the development of adverse effects; a RCT of this approach is ongoing [UGT1A1 status has yet to be defined [UGT1A1 genotyping to guide irinotecan dosing will most likely need to be revisited following the availability of results from RCTs such as the one highlighted above in order to determine its efficacy and cost-effectiveness.Lu et al. attempte ongoing . As the defined , UGT1A1 RAS (KRAS and NRAS) and BRAF mutations before anti-EGFR or chemotherapy administration has informed clinical decision making in CRC. Of the articles we identified, both the Canadian study by the HQO [et al. [KRAS WT guided therapy compared to BSC and chemotherapy respectively. However, where RAS (and BRAF) testing were used to select chemotherapy for patients with the mutated form of these genes [et al. [RAS family and BRAF mutation testing, the results overall were inconclusive as to whether precision medicine strategies are cost-effective when selecting CRC patients for anti-EGFR therapy. Although NICE\u2019s WTP is set at \u00a330,000, technology appraisals performed for end-of-life treatment guidance, have permitted costs to breach this threshold at an average of \u00a349,000, implicitly suggesting a \u00a350,000 WTP when end-of-life criteria are met. If this figure had been used as our benchmark in this metastatic CRC setting, then half of the studies would be classed as cost-effective [RAS/BRAF testing did prove to be cost-effective. The cost savings can be significant. For example, given that more than one million CRC patients in Europe are expected to develop metastatic CRC [RAS mutations, there is the potential to save \u00a33 billion (\u20ac3.5 billion) over the lifetime of this patient cohort.Testing of patients with the HQO and the [et al. generatese genes , 34, ICE [et al. resulted [et al. but reco [et al. . From thatic CRC , 51, witRAS (and BRAF) testing can only be cost-effective when selecting patients who should receive chemotherapy, but not those who receive EGFR therapy, based on the result of their molecular assay.When MBM guided anti-EGFR therapy is compared to anti-EGFR therapy alone, there is a pronounced increase in the ICER values, but the QALYs produced are only marginally different. Thus, et al. [et al. 
CEA [The initial economic analysis of the Oncotype DX assay was generated by data from the National Comprehensive Cancer Network and concluded that the assay would improve patient outcomes (QALY = 0.035), and decrease costs by $3,000 for stage II, T3, proficient DNA mismatch repair CRC patients . In the et al. , a largeet al. and a QAet al. , and altet al. , the laret al. . An econet al. . Althoug al. CEA was base al. CEA , and fav al. CEA \u201356 indicUGT1A1 MBM, while the remaining 4 investigated the KRAS MBM. We captured all 7 of these articles but excluded the study by Mittman et al. [UGT1A1 testing prior to irinotecan administration remains unresolved, whilst using KRAS genotyping to stratify patients before anti-EGFR treatment was cost-effective. The second systematic review, by Westwood et al. in 2014 [KRAS testing of CRC tumours. Its literature search found 5 articles, which we also identified, and the authors concluded that the ICER for KRAS mutation testing to guide anti-EGFR therapy was large. However, although they performed a CEA and found KRAS testing to be cost-effective, their results should be interpreted with caution as a number of assumptions were made in relation to resection rates, MBM test use, etc. The third paper by Guglielmo et al. [et al. [KRAS testing to guide anti-EGFR therapy were inconclusive. The fourth review by Seo and Cairns [et al. [KRAS testing is always more cost-effective, even if this is not always the case for anti-EGFR therapies. However, we draw different conclusions from the data for the irinotecan studies, finding UGT1A1 testing not to be cost-effective.There have been four previous systematic reviews on the economic analysis of MBM approaches in CRC in the personalised/precision medicine setting. The first was performed by Frank and Mittendorf in 2013 . They idn et al. because in 2014 , was a ho et al. identifi [et al. which th [et al. or Japan [et al. studies d Cairns identifi [et al. . Our finUGT1A1, because our analysis indicates that there is enough evidence to support the assertion that the use of UGT1A1 genotyping to reduce irinotecan dosing is not cost-effective. Despite being able to select patients to receive chemotherapy, our findings suggest that there is insufficient evidence to indicate KRAS (and BRAF) testing is cost-effective, in the context of EGFR therapy.We disagree with the Frank and Mittendorf systematic review on the lack of evidence to make a decision on the cost-effectiveness of PIK3CA mutated tumours benefit from exposure to aspirin, whereas PIK3CA wild-type patients do not [Fusobacterium nucleatum may counteract MSI-H positivity with associated immunosuppressive effects [RAS, BRAF, PIK3CA, MSI-H, and F.nucleatum positivity in a cost effective manner to precisely guide anti-EGFR therapy, aspirin therapy, and immunotherapy in mCRC. This challenge is becoming increasingly relevant as treatment algorithms incorporating multiple biomarkers become more common place and techniques such as whole genome sequencing enters clinical practice.It is evident that not all CRC patients currently benefit from precision medicine MBM-informed therapy, as is the case for the 53% of RAS mutant mCRC patients not eligible for anti-EGFR treatment . The emes do not . Microsa effects , 65. TheDPYD screening could be cost-effective in high-income settings, if it is implemented before 5-FU therapy. Likewise, Oncotype DX assay is likely to be cost-effective in identifying patients who will not benefit from chemotherapy. 
We were unable to find evidence to support UGT1A1 testing to guide irinotecan dosing. Perhaps more controversially, despite its adoption in many countries globally, we found that the cost-effectiveness data currently available to support anti-EGFR treatment based on RAS/BRAF mutational status are inconclusive. There is a paucity of high-quality CEAs that evaluate MBM in CRC. Unless CEA is incorporated prospectively into clinical trial design, economically unsubstantiated results can obscure the best available evidence, undermining both methodological approaches and the efficient use of resources. In summary, we found that the cost-effectiveness of MBM approaches to guide CRC therapy is highly variable. The evidence presented here reflects a need for a more rigorous, methodological, CEA-driven approach to be prospectively employed. There also needs to be greater transparency on the prices used in CEA, so as to ensure the delivery of value-based care in a disease that kills nearly 170,000 Europeans every year."} {"text": "Correction for 'One fold, two functions: cytochrome P460 and cytochrome c′-β from the methanotroph Methylococcus capsulatus (Bath)' by Hannah R. Adams et al., Chem. Sci., 2019, DOI: 10.1039/c8sc05210g. The authors regret an error in the listing of one of the authors, Tadeo Moreno-Chicano, in the original manuscript. The corrected list of authors and affiliations for this paper is as shown above. The Royal Society of Chemistry apologises for these errors and any consequent inconvenience to authors and readers."} {"text": "Endoscopic video sequences provide surgeons with a direct view of the surgical field and of anatomical targets in the patient during robotic surgery. Unfortunately, these video images are unavoidably hazy or foggy, preventing surgeons from obtaining a clear surgical view, owing to typical surgical operations such as ablation and cauterisation. This Letter aims at removing fog or smoke from endoscopic video sequences to enhance and maintain a direct and clear visualisation of the operating field during robotic surgery. The authors propose a new luminance blending framework that integrates contrast enhancement with visibility restoration for foggy endoscopic video processing. The proposed method was validated on clinical endoscopic videos collected from robotic surgery. The experimental results demonstrate that the method provides a promising means to effectively remove fog or smoke from endoscopic video images. In particular, the visual quality of defogged endoscopic images was improved from 0.5088 to 0.6475. The endoscope provides surgeons with real-time endoscopic video sequences that are shown on medical displays. On the basis of the endoscopic view of the surgical field in these images, surgeons can directly visualise and examine abnormal tissues and treat or resect tumours in the body. Unfortunately, the visual quality of endoscopic video images is unavoidably degraded by surgical smoke or fog during robotic surgery; examples of such foggy endoscopic images are shown in the article's figure. Endoscopic field defogging methods generally consist of hardware- and software-based strategies. While the former uses dedicated devices to remove smoke, the latter is algorithmic, i.e. based on computational photography techniques. This work develops a new luminance blending strategy for surgical video defogging. It combines a contrast enhancement procedure with a fast visibility recovery method to remove fog or smoke from endoscopic video sequences.
We also quantitatively and objectively evaluate the experimental results of our proposed method against others. The main contributions of this work are two-fold: (i) a new luminance blending approach with better performance than other defogging approaches, and (ii) an objective image quality metric for quantitative assessment of dehazed images. The remainder of this Letter is organised as follows. Section 2 briefly reviews work related to current dehazing methods. Our hybrid luminance blending-based dehazing method for vision augmentation is presented in Section 3, followed by the experimental settings in Section 4. Section 5 presents and discusses the validation results before the work is concluded in Section 6. Real-world natural image and video dehazing or defogging techniques are widely discussed in the computer vision and computational photography literature, including the single-image methods of Fattal and of He et al., as well as a range of enhancement-, filtering- and prior-based approaches. More recently, deep learning-driven methods, such as those of Ren et al. and others, have increasingly been developed for single-image dehazing. Although these methods work well on natural images, they struggle with fog or smoke in surgical endoscopic video, particularly in the case of inhomogeneous or thick haze. This work aims to address the problem of hazy images or videos with inhomogeneous or thick haze, particularly foggy endoscopic videos. Section 3 details our luminance blending framework for surgical endoscopic video defogging. The method contains several steps: (i) contrast enhancement, (ii) visibility recovery and filtering, and (iii) luminance blending. Surgical foggy images are of low contrast and limited illumination, especially in hazy regions. The goal of the contrast enhancement step is to improve the contrast of the less hazy regions of the endoscopic image and to calculate their luminance. This step assumes that (i) most regions of the foggy image are hazy pixels that critically affect the mean of the foggy image, and (ii) the level of haze in these regions depends on the distance between the atmospheric light and the scene. The visibility recovery step builds on the widely used physical imaging model for hazy images given by Koschmieder's law, I(x) = J(x)t(x) + A∞(1 − t(x)), where I is the observed hazy image, J the haze-free scene radiance, t the transmission and A∞ the atmospheric light. On the basis of this model, the transmission and atmospheric light are estimated and the scene radiance is recovered. Because the recovered result can be noisy, a joint bilateral filter, an edge-aware image processing method, is used to denoise while simultaneously preserving edge information. The luminance blending step estimates the illumination of the enhanced and restored images: working on the Y (luminance) component of the images, recursive filtering is used to estimate the illumination, according to which the two results are blended. Foggy endoscopic videos were acquired from robotic surgery. All experiments were executed on a laptop running 64-bit Windows 8.1 Professional with 32.0 GB memory and an Intel(R) Xeon(R) CPU. We compare the proposed method with seven previously published approaches, denoted M1 (Tarel et al.) through M7. We also introduce a naturalness metric to depict how natural surgical images appear, based on statistically analysing thousands of images.
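The pipeline described above can be illustrated with a minimal sketch. The code below is not the authors' implementation: the atmospheric-light and transmission estimators, the CLAHE-based enhancement and the Gaussian-smoothed blending weight are common stand-ins (assumed here) for the recursive-filtering and joint-bilateral steps in the Letter. It only requires NumPy and OpenCV.

```python
# Minimal illustrative defogging sketch (not the authors' code). It inverts
# Koschmieder's law I(x) = J(x) t(x) + A_inf (1 - t(x)) and blends a
# contrast-enhanced image with the restored one using a luminance weight.
import cv2
import numpy as np

def estimate_atmospheric_light(img, top_frac=0.001):
    # Mean colour of the brightest pixels: a common heuristic for A_inf.
    flat = img.reshape(-1, 3).astype(np.float32)
    n = max(1, int(top_frac * len(flat)))
    brightest = np.argsort(flat.mean(axis=1))[-n:]
    return flat[brightest].mean(axis=0)

def estimate_transmission(img, atmos, omega=0.9, patch=15):
    # Dark-channel-style estimate used here as a stand-in transmission map.
    norm = img.astype(np.float32) / np.maximum(atmos, 1e-6)
    dark = cv2.erode(norm.min(axis=2), np.ones((patch, patch), np.uint8))
    return np.clip(1.0 - omega * dark, 0.1, 1.0)

def restore_scene(img, atmos, trans):
    # Invert the haze model to recover the scene radiance J.
    j = (img.astype(np.float32) - atmos) / trans[..., None] + atmos
    return np.clip(j, 0, 255).astype(np.uint8)

def enhance_contrast(img):
    # CLAHE on the luminance channel improves contrast in less hazy regions.
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    y = ycrcb[..., 0].copy()
    ycrcb[..., 0] = clahe.apply(y)
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

def defog_frame(img):
    atmos = estimate_atmospheric_light(img)
    trans = estimate_transmission(img, atmos)
    restored = restore_scene(img, atmos, trans)
    restored = cv2.bilateralFilter(restored, d=9, sigmaColor=50, sigmaSpace=50)
    enhanced = enhance_contrast(img)
    # Brighter (hazier) regions lean on the restored image, darker regions
    # keep more of the contrast-enhanced image.
    weight = cv2.GaussianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), (0, 0), 15)
    weight = (weight.astype(np.float32) / 255.0)[..., None]
    blended = weight * restored.astype(np.float32) + (1 - weight) * enhanced.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Usage with hypothetical file names:
# frame = cv2.imread("hazy_endoscopic_frame.png")
# cv2.imwrite("defogged.png", defog_frame(frame))
```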
The validation results are summarised in the original article's figures and table (Section 5). Additionally, the computational times of methods M1, M3, M4, M5, M6, and M7 were 31.3, 62.3, 1.3, 5.2, 75.6, and 1.1 s/frame, respectively. Method M2 required more than 2700 s per image because its soft-editing step is extremely slow. This work aims to enhance the surgical field visualisation of endoscopic surgery. We developed a new luminance blending defogging algorithm. The experimental results demonstrate that our algorithm outperforms others in both subjective and objective evaluations. The effectiveness of our algorithm lies in fusing the advantages of the enhancement and restoration dehazing methods. Our method has several potential limitations, including unclear parameter sensitivity, the effectiveness of the enhancement, quality assessment, and heavy processing time. These limitations will be further investigated in the future. In addition, although our method works better than other approaches, it still introduces colour distortion, which will also be further investigated. In conclusion (Section 6), we proposed a new luminance blending defogging framework that integrates contrast enhancement, joint bilateral filtering, and visibility recovery to remove smoke in endoscopic videos from robotic surgery. We evaluated our method on endoscopic video sequences acquired from robotic prostatectomy. The experimental results demonstrate the effectiveness of our proposed method, which outperforms other approaches. In particular, our method improved the hybrid quality of the dehazed results from 0.5088 to 0.6475."} {"text": "Frontotemporal dementia (FTD) is a neurodegenerative disorder clinically characterised by progressively worsening deficits in behaviour, personality, executive function and language. Clinically, parkinsonian features may accompany FTD. To distinguish the mechanisms underlying FTD per se from those that may be responsible for parkinsonian symptoms, Di Stasio et al. (2018) specifically investigated the pathophysiology of parkinsonism in FTD in a transcranial magnetic stimulation (TMS) study. Several research issues in the pathophysiology of FTD, including how the neurophysiological abnormalities reported in the FTD patients studied by Di Stasio et al. (2018) should be interpreted, have yet to be fully investigated and clarified."} {"text": "There are errors in the Funding statement. The correct Funding statement is as follows: RN is a Ph.D. grant holder from FRIA-FNRS (Fond pour la Formation et la Recherche dans l'Industrie et dans l'Agriculture). MM is a \"Maître de Recherche\" at the \"Fonds National de Recherche Scientifique\". This study was supported by the ULiège-ARC program as well as the University Foundation of Belgium (AS-0265). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."} {"text": "In the field of artificial intelligence (AI), developments are leading to a new era involving its social implementation. In 2006, Hinton et al. reported that high-dimensional data can be converted into low-dimensional codes by training a multilayer neural network with a small perceptron."} {"text": "Pluripotent stem cells (PSCs) have the potential to revolutionise biomedical science; however, while it is simple to reproducibly obtain comparable, stable cell lines in mouse, those produced from human material typically show significant variability both within and between cell lines. This is likely due to differences in the cell identity of conventional mouse and human PSCs.
It is hoped that recently identified conditions to reprogram human cells to a na\u00efve-like state will produce better PSCs resulting in reproducible experimental outcomes and more consistent differentiation protocols. In this review we discuss the latest literature on the discovery of human na\u00efve-like stem cells and examine how similar they are to both mouse na\u00efve cells and the preimplantation human epiblast. Studies in mouse embryonic stem cells (mESCs) over many years have led to a detailed understanding of this cell state. While mouse cells are typically grown in a state of na\u00efve pluripotency, equivalent to the na\u00efve epiblast of the preimplantation blastocyst , human cin vitro they are analysed using criteria that are known to distinguish mouse na\u00efve cells from primed cells. Such criteria include responses to extrinsic and intrinsic signalling pathways, the biophysical, biochemical and metabolic status of the cells, and the overall epigenetic and transcriptomic cell identity. However, recent advances in our understanding of the human embryo also allow direct comparisons to the na\u00efve compartment in vivo. Recently, cells exhibiting human na\u00efve epiblast molecular features have been described [There have been several attempts to generate human na\u00efve pluripotent stem cells (nPSCs) over recent years. Most often when putative human na\u00efve cells are generated escribed \u2022,4\u2022\u2022,5\u2022\u2022in vitro in order to generate blastocyst-like embryos. Importantly, recent advances in RNA sequencing, particularly protocols for small cell numbers and even single cell sequencing, have made the analysis of these embryos possible.The transcriptional identity of a cell is often considered to be a readout of the cell\u2019s state , part 1.et al. [et al. [et al. observed four distinct cell types by unsupervised clustering which appear to represent two trophectoderm populations as well as extra-embryonic endoderm and epiblast cells based on their expression of known marker genes, as expected in a mature blastocyst. However, both studies identified only a handful of epiblast cells, giving a fairly small sample size for further analysis.Using such techniques, Yan et al. , and mor [et al. , obtaineet al., the embryo derived human na\u00efve ESCs (hnESCs) from Guo et al. [in vitro culture. Interestingly, established human primed lines are separated from primed epiblast outgrowths along this same axis.Comparing the human na\u00efve induced pluripotent stem cells (hniPSCs) of Takashima o et al. , the embo et al. . This inet al. took a different approach to comparing their datasets to published human embryo data. They identified genes that are expressed in specific embryonic stages in the dataset of Yan et al. They then looked for the proportion of these genes that were differentially expressed between their hniPSCs and conventional primed hESCs. While genes specifically expressed in embryonic epiblast were enriched in the hniPSCs, so were genes specific to the morula and all other cell types of the late blastocyst [et al. [et al. [et al. [Theunissen astocyst . This is [et al. indicate [et al. , this ma [et al. suggestsESRRB nor KLF2 were upregulated in any of these na\u00efve lines; however, this may be due to differences between primate and rodent and the redundant use of paralogue genes such as Klf4 [KLF2 alongside NANOG. 
Takashima elegantly demonstrated that the behaviour of the transcription factor network in his hniPSCs closely corresponded to that of mouse ESCs with a knockout and rescue strategy. Mouse ESCs can support the single loss of either Esrrb or Tfcp2l1 due to redundancies in the network [TFCP2L1 in hniPSCs resulted in greatly reduced colony formation, indicating that most cells had stopped self-renewing. Application of exogenous ESRRB during this knockdown was able to rescue self-renewal. Together this provides strong evidence that an interactive transcription network highly similar to that in mouse is active in these cells.At the core of the na\u00efve cell identity in mouse ESCs is a highly interconnected transcription factor network which shows remarkable redundancy ,11 Figu, part 2. as Klf4 ,7. It is network ,16, but network . AccordiA broad array of signalling pathways interact to maintain or destabilise the na\u00efve state in mouse ESCs , part 3.et al. [et al. [Given the importance of LIF and downstream JAK/STAT signalling in reprogramming and maintenance of mouse na\u00efve PSCs ,22 and iet al. found hL [et al. show tha [et al. , it is a [et al. ,27. Furtet al. identified increased differentiation on their double inhibition [In mouse, Fgf2 and Activin A are both used to support the undifferentiated growth of primed EpiSCs ,29, whilhibition . Notablyhibition . This aphibition ,34. The hibition ,36, but hibition ; in thiset al. [et al. reveals a decrease in N-Cadherin on transition to the na\u00efve state; however, both N-cadherin and E-cadherin are expressed in primed human ESCs, and there is no further increase in E-cadherin in the na\u00efve cells [Given the poor survival of hniPSCs Theunissen et al. found itet al. , it was et al. ,5\u2022\u2022. In et al. , which cet al. . The preet al. ,40. The et al. ,42. Examve cells . It coulve cells .in vivo [in vitro and shows links to developmental disorders and tumourogenisis [Another distinctive feature of most cell states, and particularly the na\u00efve state, is the epigenetic landscape , part 5.in vivo \u201346. It hin vivo ,48. Howein vivo . In linein vivo ,5\u2022\u2022,8\u2022, in vivo . Beyond in vivo , which iogenisis .Another epigenetic property of mouse na\u00efve ESCs is the absence of a silent X-chromosome in females resulting in the presence of two active X-chromosomes , part 6.The status of the X-chromosome in female primed cells has also been somewhat contentious \u201361. Primin vivo in very late human blastocysts [Recently, it was shown that in the reprogramming of human primed cells to a na\u00efve-like state the silent X-chromosome reactivates ,63\u2022. Desstocysts ,9\u2022,63\u2022. in vivo counterpart, and mouse ESCs.Together, these studies indicate that there are epigenetic differences between current human nPSCs, their By the majority of measures, the most up to date culture systems have produced human pluripotent cells with similarities to both mouse na\u00efve ESCs and to the human preimplantation epiblast. Nonetheless, there are still significant discrepancies. The signalling pathways active in these cells and the transcription factor network they support, appear to be very similar to, yet far less stable than, their equivalents in mouse ESCs. 
It is currently not possible to say whether the reduced stability of the human na\u00efve state is due to interspecies differences, suboptimal culture conditions, or the possibility that we have not yet isolated bona fide human nPSCs.Evidence from Takashima and from Guo show that their na\u00efve cells have undergone metabolic reprogramming, showing a significant level of mitochondrial respiration , part 7 in vivo [Interestingly, recent papers have identified novel hPSCs with broader chimerism and differentiation potential than na\u00efve or primed cells. These respectively demonstrate the ability to form interspecies chimaeras and the ability to differentiate into both embryonic and extraembryonic lineages in vivo ,68. The in vitro studies will require demonstrating superior differentiation potential and reliability compared to conventional human ES cultures, and methods to simplify the generation and culture of these cells. By identifying cell surface markers specific to hnPSCs, Collier et al. [The next major hurdles in establishing hnPSCs as the standard for r et al. present in vitro developmental studies, and possibly advances in cell therapies.While the conditions for differentiation protocols may need to be optimised for these new cells, it will be important to learn whether the promises of more homogeneous, less cell-line dependent differentiation from a na\u00efve starting population can be delivered. If so, then this cell state could take over to become the accepted standard starting point for drug discovery models,"} {"text": "Adaptive learning and emergence of integrative cognitive system that involve not only low-level but also high-level cognitive capabilities are crucially important in robotics . GP-HSMM can segment continuous motion trajectories without defining a parametric model for each primitive. That comprises Gaussian process, which is a regression method based on Bayesian non-parametric, and hidden semi-Markov model. This method enables a robot to find motion primitives from complex human motion in an imitation learning scenario. Manipulation using the left and right arms is an essential capability for a cognitive robot. Zhang et al. proposed a neural-dynamic based synchronous-optimization scheme manipulators. It was demonstrated that the method enables a robot to track complex paths.First, three papers focused on action and behavior learning. Imitation learning is an important topic related to the integration of high-level and low-level cognitive capability because it enables a robot to acquire behavioral primitives from social interaction including observation of human behaviors. Andries et al. proposes the formalism for defining and identifying affordance equivalence. The concept of affordance can be regarded as a relationship between an actor, an action performed by this actor, an object on which the action is performed, and the resulting effect. Learning affordance, i.e., inter-dependency between action and object concept, is an important topic in this field. Taniguchi et al. proposed a new active perception method based on multimodal hierarchical Dirichlet process, which is a hierarchical Bayesian model for multimodal object concept formation method. The important aspect of the approach is that the policy for active perception is derived based on the result of unsupervised learning without any manually designed label data and reward signals.Second, two papers focused on the relationship between action and object concept. Hagiwara et al. 
proposed hierarchical spatial concept formation method based on hierarchical multimodal latent Dirichlet allocation (hMLDA). They demonstrated that a robot could form concept for places having hierarchical structure, e.g., \u201caround a table\u201d is a part of \u201cdining room,\u201d using hMLDA, and became able to understand utterances indicating places in a domestic environment given by a human user. Yamada et al. described representation learning method that enables a robot to understand not only action-related words, but also logical words, e.g., \u201cor,\u201d \u201cand\u201d and \u201cnot.\u201d They introduced an neural network having an encoder-decoder architecture, and obtained successful and suggestive results. Taniguchi et al. proposed a new multimodal cross-situational learning method for language acquisition. A robot became able to estimate of each word in relation with modality via which each word is grounded.Third, three papers are related to language acquisition and concept formation. Nakamura et al. proposed Symbol Emergence in Robotics tool KIT (SERKET) that can integrate many cognitive modules developed using hierarchical Bayesian models, i.e., probabilistic generative models, effectively without re-implementation of each module. Integration of low-level and high-level cognitive capability and developing an integrative cognitive system requires researchers and developers to construct very complex software modules, and this is expected to cause practical problems. Serket can be regarded as a practical solution for the problem, and expected to push the research field forward.The final paper presents a framework for cognitive architecture based on hierarchical Bayesian models. With the tremendous success of the past three Special issues of this Research Topic, we organized follow-up workshopsAll authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "With the advent of the new era of immune therapy, new aspects of inflammation as a process involved in therapy, in tumorigenesis, and in several other human pathologies have been highlighted . This spin vitro studies, PAR4 can have a tumor suppressor role and can be used as a future therapeutic target. In endometrial cancer, S. Gu et al. show that tumor-associated macrophages are predominantly type 2 (M2) that contribute to the progression of this type of cancer and that the anti-CD27 therapy could have an antitumoral effect. Similarly, tumor-associated macrophages are reviewed by H. Degroote et al. in another type of cancer, hepatocellular carcinoma. The preclinical evolution and hitherto clinical trials for TAM-targeted therapy in HCC have been highlighted.It is possible to trait two distinct sections discussing inflammation aspects; a series of papers focusing on oncology and a series of papers focusing on other inflammation-related pathologies. From the first series of papers, M. Tampa et al. review the inflammatory markers of oral squamous cell carcinoma highlighting the main markers of inflammation that could improve early diagnosis. In esophageal squamous cell carcinoma cells, M. Wang et al. show that in A syndrome associated with cancer progression is cachexia, reviewed by E. Manole et al. 
The paper is describing several myokines produced and released by myocytes that can become potential biomarkers and future therapeutic targets.As already mentioned, immune therapy would be probably one of the major breakthroughs in oncology. C. Vajaitu et al. review checkpoint inhibitor new treatments and the role of inflammation in these therapies presenting biomarkers that could predict efficacy and immune therapy resistance.A. Calinescu et al. reviewed an inflammatory-related protein, carcinoembryonic antigen-related cell adhesion molecule 1, and its involvement in malignancies. The paper shows its importance as a prognostic factor in oncology and a future target specific cancer therapy.The second section of the special issue consists of reviews and original papers evaluating inflammation in various nononcological diseases. A. Pedrinolla et al. evaluate proinflammatory markers of age-related obesity. M. Bucur et al. highlight in their original paper that there are clear profibrotic and antifibrotic factors expression differences in oral mucosa and skin scars.M. Dobre et al. show that differences found in various transcript levels of inflammatory molecules could aid the differential diagnosis between ulcerative colitis and Crohn's disease. Chronic kidney disease could benefit from current proteomic approaches, as S. Mihai et al. are describing the inflammasomes and gut microbiota dysbiosis involvement in this disease and moreover in the renal malignancy.An in vivo animal model is shown by E. Codrici et al. where a caveolin-1-knockout mouse is thoroughly characterized and presented as a good inflammatory disease model. Another original paper by V.M. Anghelescu et al. shows the inflammatory pattern evaluation in animal models associated with various bone implants. Last, but not least, S.R. Georgescu et al. revise the traits of chronic inflammation in HPV infection that can lead to tumorigenesis.We are very satisfied that our subject resulted in so many valuable papers, and we would like to thank all the authors who submitted their work for consideration to this special issue. Without their effort, this special issue would have not taken place. Editors would like to thank the reviewers who thoroughly revised the papers and provided important suggestions that significantly improved the papers."} {"text": "Given that \u201cpatience\u201d is a recommended means to attain academic, social, and economic success (Mischel et al., Across disciplines (e.g., Mischel et al., Here, we argue that patience, which we define as the \u201cability to tolerate delay\u201d (Barragan-Jason et al., Watts et al. suggest that \u201cunobserved factors underlying children's delay ability may have driven long-run correlations\u201d (in Watts et al., per se, that is key to explaining why patient children become more successful adults but, rather, a greater capacity to internalize behaviors as social norms during childhood. Specifically, successful adults may better internalize patience and other important abilities (e.g., cooperation: Gavrilets and Richerson, Consequently, our second argument is that it is not higher levels of patience, In conclusion, Watts et al.'s study unGB-J and CA drafted the manuscript. AH, JS, and MC provided critical revisions. 
All the authors approved the final version of the manuscript for submission.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Saccharomyces cerevisiae. We further present the first cell-wide spatial proteome map of S. cerevisiae, generated using hyperLOPIT, a mass spectrometry-based protein correlation profiling technique. We compare protein subcellular localisation assignments from this map, with two published fluorescence microscopy studies and show that confidence in localisation assignment is attained using multiple orthogonal methods that provide complementary data.Subcellular protein localisation is essential for the mechanisms that govern cellular homeostasis. The ability to understand processes leading to this phenomenon will therefore enhance our understanding of cellular function. Here we review recent developments in this field with regard to mass spectrometry, fluorescence microscopy and computational prediction methods. We highlight relative strengths and limitations of current methodologies focussing particularly on studies in the yeast Current Opinion in Chemical Biology 2019, 48:86\u201395OmicsThis review comes from a themed issue on Ileana M Cristea and Kathryn S LilleyEdited by Issue and the EditorialFor a complete overview see the Available online 29th November 2018https://doi.org/10.1016/j.cbpa.2018.10.026http://creativecommons.org/licenses/by/4.0/).1367-5931/\u00a9 2018 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license . Importantly this method is able to determine proteins residing in multiple compartments and large protein complexes.A more rigorous and holistic approach has been afforded by hyperplexed Localisation of Organelle Proteins by Isotope Tagging (hyperLOPIT) \u2022\u2022,71, a S. cerevisiae [Fluorescent protein tagging has emerged as a powerful tool to visualise the localisations of individual proteins by microscopy on a cell-wide scale in revisiae . Variantrevisiae ,31,74,75revisiae in whichrevisiae . Of 5330S. cerevisiae which facilitates the manipulation and generation of systematic organism libraries in a much more routine manner [A recent study has described the use of a new strategy (SWAp-Tag) in e manner \u2022. This me manner \u2022,27\u2022. Ine manner . It is wet al. [et al. [et al. captured time-lapse films of protein localisation during culture and carried out localisation analysis in an automated fashion, whereas Tkach et al. captured localisation at a single time point and carried out localisation analysis manually. Using their method, D\u00e9nervaud et al. found 81 more re-localisation events in response to the same stress than were observed in Ref. [et al. [et al. [et al. further compared their data with the work of Tkach et al. who used one of the same stresses, to benchmark their protein re-localisation analysis method, finding that approximately half of their protein re-localisation predictions were in agreement with the Tkach study.Interrogation of published data for yeast protein subcellular localisation datasets highlights two issues. Firstly, many studies ,68\u2022\u2022,69\u2022et al. of resul [et al. interrog in Ref. . We performed the experiment as described in Ref. [et al. [et al. 
[No comprehensive comparison of data acquired using a truly orthogonal method of capturing cell wide protein localisation with data arising from high throughput microscopy exists to date. Unlike other organisms, there is no data resulting from correlation profiling methods for in Ref. using th [et al. that wer [et al. . We carr [et al. to obtai [et al. \u2022\u2022,71 and can be interactively explored using the pRolocGUI [https://bioconductor.org/packages/pRolocGUI) or using the standalone online interactive app (https://proteome.shinyapps.io/yeast2018).All protein-level datasets are available in the R BioconduRolocGUI package \u2022 of special interest\u2022\u2022 of outstanding interestPapers of particular interest, published within the period of review, have been highlighted as:"} {"text": "This study aims to assess the treatment adherence rate among People Living With HIV/AIDS (PLWHA) receiving treatment in a Nigerian tertiary Hospital.This was a cross-sectional study that assessed self-reported treatment adherence among adults aged 18 years and above who were accessing drugs for the treatment of HIV. Systematic random sampling method was used to select 550 participants and data were collected by structured interviewer administered questionnaire.The mean age of respondents was 39.9\u00b110 years. Adherence rate for HIV patients was 92.6%. Factors affecting adherence include lack of money for transportation to the hospital (75%), traveling (68.8%), forgetting (66.7%), avoiding side effects (66.7%), and avoiding being seen (63.6%).The adherence rate was less than optimal despite advancements in treatment programmes. Adherence monitoring plans such as home visit and care should be sustained. Adherenive care . Rates oive care , 6, 7. Hive care . Adherenive care , 9-11. Iive care . The disive care , 14. Howive care . In Nigeive care and Fedeive care . This reive care , 19. ParStigmatization has been documented as one of many factors related to varying ART adherence . BetweenThis study was conducted at the HIV clinic of University of Ilorin Teaching Hospital (UITH), Kwara State, a tertiary health care centre in middle-belt, Nigeria in 2012. It was a descriptive survey with analysis of the observed variables in PLWHA aged 18 years and above who are accessing treatment at the clinic. Self-reported treatment adherence rate was assessed among the respondents over the preceding 7 days to minimize recall bias. Respondents were asked to indicate how many pills they missed during each of the previous 7 days. The mean percent adherence was calculated based on the total number of pills taken divided by the total number of pills respondents reported being expected to take or prescribed by the doctor. In this study treatment adherence was defined as at least 95% of the prescribed dose of cART taken by the patient [2 is the abscissa of the normal curve that cuts off an area \u03b1 at the tails ; p is the estimated proportion of an attribute that is present in the population . Data weA total number of 550 HIV patients who met the criteria for the study were surveyed. The age distribution of the respondents ranged from 18 to 65 years with the mean age of 39.9 years\u00b110.0 . A high et al [et al.[et al. [et al. [This study found out that 89.8% of the patients did not miss their drugs. Of those who missed their drugs only 27.3% of the prescribed antiretroviral drugs were taken by them. The mean adherence rate of all the respondents was 92.6%. 
Although this was below the optimal level of greater than 95%, it is higher than adherence rates reported in several other Nigerian studies. This study found that support from friends and family could improve treatment adherence: a higher proportion (27%) of the patients who missed their drugs did not have support from friends and family. Support from friends and families could reduce psychological stress and financial burden, especially in our environment where the extended family system is practised. Lack of money for transportation was also identified as a significant factor affecting treatment adherence; seventy-five percent of the patients who did not have money for transportation to keep hospital appointments missed their drugs. Lack of money for transportation has been reported in a similar study at Enugu, Nigeria by Uzochukwu et al. and in other settings. Forgetfulness has likewise been reported as a barrier to adherence among HIV patients elsewhere, including in Botswana. In conclusion, we were able to demonstrate a mean adherence rate of 92.6% among all the respondents, which was below the optimal level of greater than 95% but higher than the treatment adherence level of 70.8% reported in an earlier study. Optimal management of PLWHA can be ensured if clients with indicators of poor adherence can be identified and closely monitored; ART treatment adherence rates of between 54% and 62.6% have been documented in Northern Nigeria; stigmatization has been identified as one of several factors related to varying ART adherence. This study also found that factors affecting adherence include lack of money for transportation, feeling sick and depressed, running out of drugs at home, wanting to avoid side effects of drugs and no access to drugs due to unscheduled public holidays; support from friends and family was found to be significantly associated with treatment adherence. The authors declare competing interests."} {"text": "To describe the occurrence of psychiatric diagnoses in a specialist care setting in older people with intellectual disability (ID) in relation to those found in the same age group in the general population. A cohort of people with ID (n = 7936), aged 55 years or more in 2012, was identified, as was an age and sex-matched cohort from the general population (n = 7936). Information regarding psychiatric diagnoses during 2002–2012 was collected from the National Patient Register, which contains records from all inpatient care episodes and outpatient specialist visits in Sweden. The mean age at the start of data collection was 53 years (range 44–85 years). Seventeen per cent (n = 1382) of the people in the ID cohort had at least one psychiatric diagnosis recorded during the study period. The corresponding number in the general population cohort was 10% (n = 817), which translates to an odds ratio (OR) of 1.84. The diagnoses recorded for the largest number of people in the ID cohort were 'other' (i.e. not included in any of the diagnostic groups) psychiatric diagnoses (10% of the cohort had at least one such diagnosis recorded) and affective disorders (7%). In the general population cohort, the most common diagnoses were affective disorders (4%) and alcohol/substance-abuse-related disorders (4%). An increased odds of having at least one diagnosis was found for all investigated diagnoses except for alcohol/substance-abuse-related disorders (OR = 0.56).
The highest odds for the ID cohort was found for diagnosis of psychotic disorder (OR = 10.4) followed by attention deficit/hyperactive disorder (OR = 3.81), dementia (OR = 2.71), personality disorder (OR = 2.67), affective disorder (OR = 1.74) and anxiety disorder (OR = 1.36). People with ID also had an increased odds of psychiatric diagnoses not included in any of these groups (OR = 8.02). The percentage of people with ID who had at least one diagnosis recorded during the study period decreased from more than 30% among those aged 55\u201359 years in 2012 (i.e. born 1953\u20131957) to approximately 20% among those aged 75+ years in 2012 (i.e. born in or before 1937).Seventeen per cent (Older people with ID seem to be more likely to have psychiatric diagnoses in inpatient or outpatient specialist care than their peers in the general population. If this is an effect of different disorder prevalence, diagnostic difficulties or differences in health care availability remains unknown. More research is needed to understand the diagnostic and treatment challenges of psychiatric disorders in this vulnerable group. In addition, they had to be alive at the end of that year. A one-to-one sex and age-matched control cohort from the general population (gPop cohort) was established by Statistics Sweden using the Swedish population register. People included in the ID cohort could not be included in the gPop cohort also. However, people with ID but without LSS support were not excluded from the gPop cohort.Swedish National Patient Register (NPR) is also managed by the Swedish National Board of Health and Welfare. It contains information about all in- and outpatient specialist care in Sweden. However, it does not contain information about visits to primary health care. For inpatient care, registration in the NPR is made at the date of discharge, and for outpatient care it is made at the date of the visit. For each registration, one primary and up to 21 secondary diagnoses are listed, coded according to the 10th revision of the International Classification of Disease (ICD-10).The Information on psychiatric diagnoses during the study period was obtained from the NPR for 2002\u20132012. The mean age at the start of data collection (i.e. 1 January 2002) was 53 years (range 44\u201385 years). These were categorised as attention deficit/hyperactivity disorder (ADHD) and equivalents, psychotic disorders, affective disorders, anxiety disorders, personality disorders, alcohol/substance-abuse-related disorders, dementia or other psychiatric disorders . Each peTo compare the number of people with at least one of each respective category of psychiatric diagnoses in the ID cohort to the corresponding number in the gPop cohort, we estimated odds ratios (ORs) with 95% confidence intervals (CIs) using logistic regression. In order to illustrate possible age-effects, we performed age stratified analyses, using the 5-year age categories. Moreover, we investigated age effects within the ID cohort by comparing each age group to the youngest one. Statistical interaction was evaluated by introducing an interaction term (e.g. cohort*age group) to the logistic regression model, and trends were assessed by treating the category variable as a continuous factor.In order to evaluate using the LSS register as a proxy for ID, we performed sensitivity analyses on sub-cohorts of people with known diagnosis of either ID or ASD. 
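As an illustration of the cohort comparison described in the methods above, the sketch below estimates the odds ratio for having at least one psychiatric diagnosis in the ID cohort versus the matched general-population cohort, both directly from a 2x2 table and via logistic regression. It uses the counts reported in the abstract (1382/7936 vs 817/7936, which reproduce the stated OR of 1.84), but is written in Python with statsmodels rather than the SPSS software actually used by the authors.

```python
# Illustrative sketch only: odds-ratio estimation for ID vs general-population
# cohorts, done in Python/statsmodels instead of the SPSS used in the study.
import numpy as np
import statsmodels.api as sm

# 2x2 table from the abstract: rows = cohort (ID, gPop), cols = diagnosis (yes, no).
table = np.array([[1382, 7936 - 1382],
                  [817, 7936 - 817]], dtype=float)
or_direct = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
se_log_or = np.sqrt((1.0 / table).sum())                      # Woolf standard error
ci = np.exp(np.log(or_direct) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {or_direct:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")

# Equivalent logistic regression: outcome ~ cohort indicator (1 = ID cohort).
y = np.concatenate([np.ones(1382), np.zeros(7936 - 1382),
                    np.ones(817), np.zeros(7936 - 817)])
x = sm.add_constant(np.concatenate([np.ones(7936), np.zeros(7936)]))
fit = sm.Logit(y, x).fit(disp=False)
print(f"Logistic-regression OR = {np.exp(fit.params[1]):.2f}")
```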
Through diagnoses available from the NPR, we were able to identify 1145 men and 1002 women who had at least one diagnosis of ID (F7 in ICD-10) during 2002\u20132012. Moreover, we identified 242 men and 156 women who had at least one diagnosis of ASD . The overlap was 209 individuals. Analyses were made comparing psychiatric diagnoses among those with ASD only to those with ID only or ASD in combination with ID. Also, as people with ID may be difficult to diagnose with respect to psychiatric disorders, we performed sensitivity analyses including only diagnoses recorded at psychiatric clinics, i.e. by psychiatric specialists.p-value of 0.05 was considered statistically significant. All analyses were performed in IBM SPSS Statistics 23.Analyses were only performed if each of the two compared group comprised at least five observations. A two-tailed n\u00a0=\u00a02559 in each cohort), 44% among those aged 60\u201364 years (n\u00a0=\u00a02097), 46% among those aged 65\u201369 years (n\u00a0=\u00a01636), 48% among those aged 70\u201374 years (n =\u00a0839) and 53% among those aged 75+ years (n\u00a0=\u00a0805).Each cohort comprised 7936 people, whereof 45% were women. The percentage of women increased over the age categories, with 43% among those aged 55\u201359 years or the Diagnostic Criteria for psychiatric disorders for use with adults with Learning Disabilities/mental retardation (DC-LD). These are not identical with respect to diagnostic criteria, and so diagnoses may differ between them. Differences between diagnostic criteria have been found both in the general population .Over the 11-year study period, people with ID had three times higher odds than those in the general population to have .et al. b found a .psychotic disorders. This is well in line with previous studies investigating people with and without ID, where a considerable increase in overall psychotic disorders and schizophrenia specifically has been found for people with ID, regardless of which diagnostic system was used (Deb et al.et al.et al.et al.et al.The largest OR for people with ID compared with those in the general population in the present study was found for diagnoses of affective disorders among people with ID in the present study is similar to what has been found in other community-based populations (Deb et al.et al.et al. (The 74% increased odds for diagnoses of .et al. found ananxiety disorders to be more common among people with ID than in the general population, which is in line with some (Gentile et al.et al.et al.et al.et al.et al.et al.et al. (et al. (et al. (We found diagnoses of .et al. and Howl (et al. , as the (et al. had, howpersonality disorders among criminal offenders with ID, not much research has been published on this dual diagnosis in a more general ID population. The limited evidence available suggests that people with ID are at a greater risk for diagnoses of personality disorders (Pridding & Procter, et al.Although several studies have investigated substance-abuse-related disorder (McGillicuddy & Blane, et al.et al.et al.dementia diagnosis associated with having ID. Cooper et al. (et al. (et al. (People with ID seem to be more sensitive than the general population to developing a r et al. and Care (et al. both use (et al. who let ADHD was the least frequent psychiatric diagnosis among people with ID, with less than one percent of the cohort getting a diagnosis during the study period. 
As people in this age group are unlikely to \u2018lose\u2019 their ADHD diagnosis, this number may be used as a prevalence estimate. Compared with other studies, it is a low one (Fox & Wade, et al.In the present study, The co-existence of ID and psychiatric disorders does not only have a negative impact on the individual, but also places a burden on the health care system and family members. Therefore, further research into the understanding of diagnosis and treatment of such disorders in this vulnerable group of people is vital."} {"text": "Microbial production of chemical compounds often requires highly engineered microbial cell factories. During the last years, CRISPR-Cas nucleases have been repurposed as powerful tools for genome editing. Here, we briefly review the most frequently used CRISPR-Cas tools and describe some of their applications. We describe the progress made with respect to CRISPR-based multiplex genome editing of industrial bacteria and eukaryotic microorganisms. We also review the state of the art in terms of gene expression regulation using CRISPRi and CRISPRa. Finally, we summarize the pillars for efficient multiplexed genome editing and present our view on future developments and applications of CRISPR-Cas tools for multiplex genome editing. Current status of multiplex genome editing in bacteria and eukaryotic microorganisms using CRISPR-Cas tools. Industrial microbiology plays a key role in the transition towards a more sustainable industry to produce food and feed ingredients, bio-based materials, biofuels and direct synthesis of cosmetic and pharmaceutical compounds are bacterial and archaeal adaptive immune defence systems, which can be repurposed as versatile genetic editing or regulation tools in a broad range of organisms. The effector endonucleases of these systems are guided by short RNA molecules encoded by CRISPR arrays. Native CRISPR arrays consist of a succession of spacers originating from invader organisms separated by direct repeats and Cas12a (type V) in industrial microorganisms. Both bacterial and eukaryotic examples are described, although more attention is given to yeast and filamentous fungi, since the diversity of strategies using Cas endonucleases for genome editing applications is more extensive in this group of organisms.et\u00a0al. et\u00a0al. Streptococcus pyogenes (SpCas9) has become the most widely used RNA-guided endonuclease for genome editing and transcription regulation purposes and a trans-activating CRISPR RNA (tracrRNA). To simplify gRNA expression, a synthetic chimeric construct named single guide RNA (sgRNA) can be synthesized by fusing the tracrRNA and the crRNA (type V) can cleave dsDNA directed by a crRNA, hence without the requirement of a tracrRNA , non-homologous end joining repair (NHEJ) or alternative non-homologous end joining systems such as microhomology-mediated end joining (MMEJ) have been designed by substituting one or more of the catalytic amino acids in the nuclease domains are routinely used for gRNA expression have also been demonstrated to yield multiple functional gRNAs in eukaryotes as well repeated usage of equal sets of promoters and terminators for gRNA expression, (ii) requirement for multiple selection markers, and more importantly, (iii) labor-intensive expression cassettes and plasmid construction. By expressing multiple gRNAs under the control of a single promoter, polycistronic expression cassettes overcome these hurdles. 
To date, two polycistronic cassette-based approaches have been shown to enable multiplex genome editing (illustrated in the original article's figure), and multiplexed editing with SpCas9 has been accomplished in S. cerevisiae (Ryan and Cate). Commonly used dDNA sequences vary from short homology flanks (HFs) for S. cerevisiae (or ∼50–100 bp for E. coli) to longer, PCR-based HFs of 200–1000 bp for many bacteria, non-conventional yeasts and fungi, supplied as either single-stranded or double-stranded forms of the dDNA. Limited editing efficiencies in several hosts can be attributed to (i) the presence of NHEJ or alternative-NHEJ repair systems, (ii) the presence of inefficient HDR systems, or (iii) an unfavorable balance between NHEJ and HDR repair mechanisms. Multiplexed editing has been demonstrated in bacteria such as E. coli and Lactobacillus, whereas in eukaryotes editing additionally relies on the stochastic allocation of dDNAs into the nucleus; in Y. lipolytica and multiple other fungi, the frequency of targeted integration can nevertheless be significantly increased. RNA-binding proteins such as PCP and Com (tested only in S. cerevisiae) can be recruited to engineered guide scaffolds; by fusing the transcriptional activation domain SoxS (in E. coli) or VP64 (in S. cerevisiae) to these RNA-binding proteins, expression of a gene downstream of a target promoter can be enhanced, and pathway fluxes sequentially directed. Smart gRNA design is also required: appropriate gRNA sequences are obtained through software prediction based on generalized, well-defined guide-design principles, preferential PAM domains and secondary structure prediction for the specific host organism to be edited. Additionally, dDNA might be stabilized by chemical modifications, including techniques to incorporate repetitive DNA elements such as the repeat sequences of CRISPR arrays (Cress et al.). Several of these elements were combined in S. cerevisiae to obtain up to 6 simultaneous deletions using SpCas9 with an efficiency of 23.3%; the strategy combined multiple of the successful approaches presented in this manuscript: three transcripts, each containing two gRNAs flanked by tRNA-gly sequences, were expressed from a single plasmid encoding SpCas9, and assembly of the plasmid via a Golden Gate reaction was performed in the yeast (Zhang et al.). The pillars for efficient multiplexed genome editing can be summarised as follows. Organism-specific CRISPR tools: smart choice of the CRISPR-Cas expression approach depending on the final application of the production strain or the target organism of choice. For instance, CRISPR-Cas tools can be combined with the introduction of dDNA containing selective markers, which makes screening and selection of positive clones more efficient. This approach can be used in proof-of-principle studies, whereas in other cases marker-free strains are important. In a similar way, guides and Cas nucleases can be co-expressed from plasmid-borne expression cassettes or expressed sequentially in strains pre-expressing the Cas nuclease from a second plasmid or from a genome-integrated copy. Editing conditions: optimization of organism-specific CRISPR-Cas delivery systems and recovery protocols. Cell synchronization protocols in combination with CRISPR-Cas systems have been used in human cells to enhance HDR versus NHEJ repair (Lin et al.). Novel or improved endonucleases: these endonucleases should have alternative or less-stringent PAM recognition. In addition, nucleases are preferred that are smaller, more specific and more active, as reviewed by Kleinstiver et al.; Cas9 variants with such properties, as well as alternative endonucleases, continue to be reported. In recent years, remarkable progress has been achieved in the field of multiplexed genome editing.
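As a toy illustration of the guide-design step mentioned above (finding protospacers next to a preferred PAM), the sketch below scans a DNA sequence for SpCas9 NGG PAM sites on both strands and returns 20-nt spacer candidates. It is a deliberately simplified stand-in for the dedicated design software cited in the review; off-target scoring and secondary-structure prediction are omitted, and the demo sequence is made up.

```python
# Toy guide-design sketch: enumerate 20-nt SpCas9 spacer candidates adjacent to
# an NGG PAM on both strands. Real design tools also score off-targets and
# gRNA secondary structure, which is omitted here.
import re

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA string."""
    return seq.translate(COMPLEMENT)[::-1]

def find_spacers(seq: str, spacer_len: int = 20, pam: str = r"[ACGT]GG"):
    """Return (strand, spacer_start, spacer, pam) tuples for every NGG PAM."""
    seq = seq.upper()
    hits = []
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        for m in re.finditer(pam, s):
            start = m.start() - spacer_len
            if start >= 0:
                hits.append((strand, start, s[start:m.start()], m.group()))
    return hits

if __name__ == "__main__":
    demo = "ATGCTGGACCGTACGTTAGCATCGATCGGTACCGTTGGAGCTAGCTAGGCTAGCTAACGG"
    for strand, pos, spacer, pam_site in find_spacers(demo):
        print(f"{strand} strand, pos {pos}: spacer {spacer} PAM {pam_site}")
```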
Besides broadening the spectrum of microorganisms that can be engineered using CRISPR-Cas endonucleases, novel approaches have recently been implemented in terms of gRNA and dDNA delivery for increasing multiplexing efficiency. Below, a summary is provided of the major challenges related to the development of efficient multiplexing CRISPR-Cas systems in microorganisms.et\u00a0al. et\u00a0al. et\u00a0al. et\u00a0al. In multiplexed genome editing experiments, a negative correlation is experienced between the number of targets and the amount of obtained colonies after transformation in most microorganisms. The introduction of DSBs dramatically reduces the cell survival rate, and this causes limited numbers of simultaneous modifications as a delicate balance between DNA cleavage and repair needs to be established. Alternatively, multiplexed single-base editing does not depend on DSB generation nor dDNA supply and can be used to introduce nucleic acid base changes at a targeted window of DNA (Eid, Alshareef and Mahfouz Advances in multiplexed genome editing of microorganisms can significantly accelerate future strain construction programs of cell factories with unprecedented efficiencies. Therefore, the CRISPR revolution continues: new tools and workflows are being developed to broaden the range of functionalities of currently used CRISPR-Cas systems as well as knowledge about the mechanism of these systems. As identified in this review, dedicated optimization of each of the elements involved in CRISPR-Cas genome editing is crucial for efficient multiplex genome editing and for stretching the number of simultaneous editing events."} {"text": "We read with interest the recent report by Reimann et al. , in whicn\u2009=\u2009736) where diagnoses were assigned by a panel of dermatopathologists from a variety of different institutions [The scarcity of melanocytic neoplasms with known outcomes often necessitates use of histopathology-based reference standards, but such standards are limited by the inherent inter-observer variability and diagnostic discordance that occurs even among experts \u20136. The ditutions . When suitutions .either category\u2014positive or negative\u2014would have substantially improved concordance. In fact, in every circumstance, the classification strategy selected by Reimann et al. [The authors\u2019 conclusions are also influenced by their decision to re-classify all cases with indeterminate myPath results as \u2018benign\u2019 in their calculations of sensitivity and specificity. The authors claim this was done to 'avoid exclusion of indeterminate results in statistical analysis,' yet their own indeterminate results are, in fact, excluded from statistical analyses. Cases that they declared \u2018diagnostically ambiguous\u2019 are eliminated from some calculations , and len et al. is the on et al. , 13.Re-analysis of the data generated by Reimann et al. using thThe accuracy of the myPath melanoma test has been evaluated in three separate peer-reviewed clinical validation studies , 11, 14"} {"text": "Humanitarian emergencies can impact people's psychosocial well-being and mental health. Providing mental health and psychosocial support (MHPSS) is an essential component of humanitarian aid responses. However, factors influencing the delivery MHPSS programmes have yet to be synthesised. 
We undertook a systematic review on the barriers to, and facilitators of, implementing and receiving MHPSS programmes delivered to populations affected by humanitarian emergencies in low- and middle-income countries.A comprehensive search of 12 bibliographic databases, 25 websites and citation checking was undertaken. Studies published in English from 1980 onwards were included if they contained evidence on the perspectives of adults or children who had engaged in or programmes providers involved in delivering, MHPSS programmes in humanitarian settings. Thirteen studies were critically appraised and analysed thematically.Community engagement was a key mechanism to support the successful implementation and uptake of MHPSS programmes. Establishing good relationships with parents may also be important when there is a need to communicate the value of children and young people's participation in programmes. Sufficient numbers of trained providers were essential in ensuring a range of MHPSS programmes were delivered as planned but could be challenging in resource-limited settings. Programmes need to be socially and culturally meaningful to ensure they remain appealing. Recipients also valued engagement with peers in group-based programmes and trusting and supportive relationships with providers.The synthesis identified important factors that could improve MHPSS programme reach and appeal. Taking these factors into consideration could support future MHPSS programmes achieve their intended aims. Populatet al.et al.et al.et al.et al.et al.et al.Notwithstanding the recommendations on MHPSS by the Inter-Agency Standing Committee, which advocates taking a wide range of possible responses to humanitarian emergencies, many empirically evaluated MHPSS programmes continue to draw from Western-based approaches to the treatment of trauma and trauma-related symptoms definition of MHPSS and included any programme seeking \u2018to protect or promote psychosocial well-being and/or prevent or treat mental disorder\u2019 IASC, , p. 11.We conducted a comprehensive search of 12 electronic databases covering health and social science disciplines including Medline, PsycINFO and ASSIA, 13 specialist databases and grey literature portals, in addition to 25 topic-specific websites, consultation with experts and citation checking of includes. (See further details in the web appendix). Key index and free text search terms were determined by the review questions and the inclusion criteria. For example, the type of humanitarian emergency (e.g. war or typhoon or genocide), combined with the type of mental health and psychosocial intervention and study design .et al.Search results were imported into EPPI-Reviewer 4: systematic review software . Piloting and refinement of tools took place before the commencement of full coding. The reliability and usefulness of studies were assessed using EPPI-Centre tools for qualitative studies using the following dimensions: sampling, data collection, data analysis, the extent to which the study findings were grounded in the data (criteria 1\u20134) and; the extent to which the study privileged the perspectives of participants, and breadth and depth of findings (criteria 5\u20136). An overall judgement of study quality was made according to two key dimensions. First, a weight of high, medium or low was assigned according to the reliability of the study using criteria 1\u20134. 
Second, a weight of high, medium or low was assigned according to the usefulness of the findings in answering the review question on contexts and barriers to implementation and receipt of MHPSS programmes using criteria 5\u20136. To be judged as \u2018high\u2019 quality on methodological reliability, studies needed to have taken steps to ensure rigour in at least three of the first four criteria. Studies were judged as \u2018medium\u2019 when scoring on only 2\u20133 criteria and \u2018low\u2019 when scoring on only one or none. To achieve a rating of high on usefulness in answering the review questions, studies needed to achieve depth and breadth in their findings and use methods that enabled participants to express their views on implementing or engaging in programmes. Studies rated as a medium on usefulness only met either one of these criteria and studies rated low were judged to have met neither criterion. Low-quality studies were not excluded from the review. Instead, quality judgements were used to inform the synthesis with none of the themes solely generated by studies judged as low on both dimensions and immediately after an earthquake (n\u00a0=\u00a01), tier-three \u2018focused, non-specialised supports\u2019 programmes addressing the psychological and social impact of the Rwandan genocide (n\u00a0=\u00a03); while tier-two \u2018community and family supports\u2019 programmes primarily targeted children (n\u00a0=\u00a05) rather than adults (N\u00a0=\u00a01). Only one study addressed the basic services and security needs of affected populations. Overall, study quality was a combination of high or medium reliability and usefulness (n\u00a0=\u00a010). Of the four studies judged as being of low reliability, three contributed findings of medium usefulness reference group responsible for the greater co-ordination of MHPSS in emergency settings IASC, . We founness see .Table 2et al.et al.et al.et al.A key theme across nine studies they could \u2018facilitate district health workers to establish and run mental health clinics\u2019 (tier-four) in post-conflict Northern Uganda; however, their efforts were hindered by \u2018an attrition of government health workers trained by the project\u2019 (p. 298). Issues with staff retention meant a loss of knowledge and skills in being able to \u2018recognise, assess, and manage mental illness\u2019 (p. 298). Chauvin et al. or tier-three MHPSS programmes (n\u00a0=\u00a01). The high-quality study by Song et al. (et al. (Even when services were more adequately staffed, there were concerns about the extent to which providers felt sufficiently skilled to deliver and address the mental health needs of the local population in tier-four (g et al. reports g et al. found thg et al. found th (et al. found th (et al. . report et al.et al.et al.Recipients across five studies evaluating tier-two MHPSS programmes (Boothby et al.et al.The extent to which engagement in MHPSS programmes was more enjoyable or meaningful to recipients when they included a range of activities, including creative or other forms of play was identified across three studies. Two studies were judged as highly reliable and providing medium useful findings (Nastasi et al. (et al.Sahin's et al. evaluati. et al., p. 527.. et al., found t. et al. found thet al. (A further sub-theme, identified in two highly useful and medium reliable studies was the importance of culturally relevant activities to support engagement and increase programme impact. Interviews with FCSs in the study by Boothby et al. 
found thet al. reportedet al.et al.The benefits of group-based MHPSS programmes were cited in five studies (Nastasi et al.et al.et al.et al.et al.\u2019s (A sub-theme in four studies was the importance of the group as a resource and a source of support. Two studies were judged as highly reliable and provided high (Christensen & Edward, et al.\u2019s evaluatiet al.\u2019s also fouet al. (Two studies, evaluating tier-three programmes for women in post-genocide Rwanda, provided highly useful evidence of medium reliability on the challenging but rewarding experience of sharing their personal stories with others. In the Healing of Life Wounds programme, King found thet al. reportedet al.A key theme across four studies (Boothby The importance of building trusting and supporting relationships was a sub-theme emerging from two studies judged to be of medium reliability and providing highly useful findings. In the post-earthquake city of Bam, Kunz evaluatiet al. (In addition to the importance of building trusting and supportive relationships, programme recipients in three studies, providing highly useful findings of medium quality, also reflected on the individual qualities and attributes of programme providers, citing them as key factors in supporting them to participate and benefit from MHPSS programmes. For example, in the evaluation of a tier-three programme by King adult suet al. reportedet al. , p. 1153et al.et al.et al.et al.et al.et al.et al.A number of themes emerged from our synthesis. Community engagement was identified as a key mechanism to support the successful delivery and uptake of MHPSS programmes in humanitarian settings. In particular, mental health sensitisation and mobilisation strategies, and the need to develop effective partnerships with governments and local communities were seen as pivotal to increasing overall programme accessibility and reach. These findings resonate with a growing body of literature on contextual factors influencing effective implementation of MHPSS programmes in humanitarian settings (Kruk et al.et al.et al.et al.Another key mechanism contributing to the successful implementation of MHPSS programmes is ensuring they are delivered by sufficient numbers of trained providers. However, the recruitment and retention of practitioners sufficiently skilled to deliver MHPSS programmes continues to be an ongoing challenge, especially in resource-limited settings where there may be a lack of incentives to work in the mental health sector (Eisenbruch et al.et al.et al.et al.Another key theme was the importance of designing programmes that are socially and culturally meaningful to local populations to ensure they are appealing and achieve their intended aims. The importance of attending to cultural and ethical issues when supporting the mental health and psychosocial well-being of different groups is well documented (Chu A final theme in our synthesis concerned the importance of building trusting and supporting relationships between programme providers and recipients to maximise engagement and increase programme impact. We found that providers who could relate by bridging differences and show nurturing qualities, and who could act as role models, were highly valued. 
The wider research literature also attests to the value of establishing a robust and attuned therapeutic relationship to improve recipient outcomes (Norcross & Wampold, We have taken a systematic and transparent approach to synthesising qualitative evidence on the implementation and receipt of MHPSS programmes delivered to populations affected by humanitarian emergencies in LMICs; filling a notable gap in the evidence base. Although our search was successful in locating qualitative studies, their methodological reliability and usefulness varied. Some studies lacked analytical depth, and thus important themes may have been missed. The relatively smaller number of studies on the impact of natural disasters and the predominance of findings from post-conflict settings may also have obscured findings specifically relevant to those settings. Despite conducting a comprehensive and sensitive search, we cannot be certain that we found all relevant studies; in particular, we may not have identified grey literature or unpublished reports, which are more likely to include qualitative evidence on the process of implementing programmes. Limiting the review to English language studies also means key insights from other languages have not been included. However, despite these limitations, we developed a contextually rich synthesis exploring contextual factors that can be taken into consideration to facilitate greater programme feasibility, fidelity, acceptability and reach.This review has synthesised qualitative evidence from process evaluations on the implementation and receipt of MHPSS programmes delivered to people affected by humanitarian emergencies. Our findings suggest that future MHPSS programmes and services would benefit from continuing to invest in community engagement and outreach efforts to promote the value and improve the co-ordination of MHPSS services at local and national levels; explore models of best practice to support the training needs of providers, deliver culturally sensitive and socially appropriate MHPSS programmes for individuals and their families and build high quality therapeutic relationships to improve recipient engagement and outcomes. Further research should build on these findings and current practice recommendations, alongside evaluations of programme effectiveness. This could support better theorisation on the links between programme aims, choice of programme components, delivery mechanisms and how programmes intend to improve outcomes for affected populations."} {"text": "BMC Medical Genomics.During June 10\u201312, 2018, the International Conference on Intelligent Biology and Medicine (ICIBM 2018) was held in Los Angeles, California, USA. The conference included 11 scientific sessions, four tutorials, one poster session, four keynote talks and four eminent scholar talks that covered a wide range of topics ranging from 3D genome structure analysis and visualization, next generation sequencing analysis, computational drug discovery, medical informatics, cancer genomics to systems biology. While medical genomics has always been a main theme in ICIBM, this year we for the first time organized the BMC Medical Genomics Supplement for ICIBM. Here, we describe 15 ICIBM papers selected for publishing in For the past 6 years, the International Conference on Intelligent Biology and Medicine (ICIBM) meeting has been covering extensive cutting edge research topics in genome and medicine \u20135. 
At 20Predicting cellular responses to drugs has been a major challenge for personalized drug therapy regimen. In the first paper by Wang et al. , the autThe next paper by Zhang et al. aimed toThe next paper from Fan et al. proposedIn the next paper by Djotsa et al. , the autRELN and one exon in NOS1 more skipped in AD patients compared to cognitively normal elderly individuals, but also splicing-affecting SNPs associated with amyloid-\u03b2 deposition in the brain. Their integrative analysis with multiple omics and neuroimaging data confers possible mechanisms for understanding AD pathophysiology through exon skipping. This result may provide a useful resource of a novel therapeutic development.The next study by Han et al. was aimeIn the next paper by Menor et al. , a novelAlthough many methods have been developed for predicting the single nucleotide variant effects, only a few have been specifically designed for identifying deleterious sSNVs (synonymous single nucleotide variants). In the next work by Shi et al. , the autThe objective of Cheng et al. was to uHaplotype phasing is important in cancer genomics, as it facilitates a comprehensive understanding of clonal architecture and further provides potentially valuable reference in clinical diagnosis and treatment. In the next paper of this supplement, Wang et al. proposedThe paper by Li et al. represenThe next paper by Chen and Xu integratThe paper by Chiu et al. presentsXia et al. present During the past 11\u2009years, genome-wide association studies have reported many thousands of association signals between genetic variants and a specific phenotype. Phenome-Wide Association Studies (PheWAS) take advantage of large patients-based cohorts with a panel of wide range of phenotypes and are well suited to facilitate new marker SNPs as well as SNPs with pleiotropy. The paper by Zhao et al. presentsRNA-sequencing has now become a routine technique in genomic studies and data continue to accumulate with increasing rate in the public domain. This enables repurposing of existing data for new applications. In the final paper of this Supplement, Zeng et al. presente"} {"text": "They also studied its pressure sensitivity and showed its application in smart cushions for monitoring human sitting positions. Lee et al. [2 nanoparticles in PDMS to improve the TENG performance. They also demonstrated the output enhancement using a windmill-integrated TENG system.Triboelectric Nanogenerators (TENG). Lee et al. providede et al. proposede et al. investige et al. proposede et al. investig(2)Thermoelectric Nanogenerators. Culebras et al. provided(3)7 cycles of tension and compression. The tension experiments showed stable polarization, while the compression experiments showed a 7% decrease in polarization. However, no notable decrease in output voltage was observed.Piezoelectric Nanogenerators. Shin et al. investig(4)Metamaterial Nanogenerators. Lee et al. investigNanogenerator-based technologies have found outstanding accomplishments in energy harvesting applications over the past two decades. These new power production systems include thermoelectric, piezoelectric, and triboelectric nanogenerators, which have great advantages such as eco-friendly low-cost materials, simple fabrication methods, and operability with various input sources. Since their introduction, many novel designs and applications of nanogenerators as power suppliers and physical sensors have been demonstrated based on their unique advantages. 
This Special Issue in Micromachines, titled \u201cNanogenerators in Korea\u201d, compiles some of the recent research accomplishments in the field of nanogenerators for energy harvesting. It consists of 12 papers, which cover both the fundamentals and applications of nanogenerators, including two review papers. These papers can be categorized into four groups as follows:We would like to thank all the authors for their papers submitted to this Special Issue. We would also like to acknowledge all the reviewers for their careful and timely reviews to help improve the quality of this Special Issue."} {"text": "I have read with interest the recent review paper by Southward and colleagues . While aReliability refers to the reproducibility of values of a given test . In sporWe have reported that caffeine ingestion in the dose of 6 mg/kg enhanced lower-body one-repetition maximum (1RM) strength and upper-body ballistic performance . A scrutJenkins et al. tested tAstorino et al. tested tAs discussed herein, the estimate by Southward et al. that the"} {"text": "Antenatal common mental disorders (CMDs) including anxiety, depressive, adjustment, and somatoform disorders are prevalent worldwide. There is emerging evidence that experiencing a natural disaster might increase the risk of antenatal CMDs. This study aimed to synthesise the evidence about the prevalence and determinants of clinically-significant symptoms of antenatal CMDs among women who had recently experienced an earthquake.This systematic review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines. The search included both electronic and manual components. Five major databases were searched. A data extraction table was used to summarise study characteristics and findings. Two authors examined the quality of studies independently using a quality assessment tool. A narrative synthesis of the findings reported.In total seven articles met inclusion criteria. Quality scores ranged from six to seven out of ten. All the studies were cross-sectional surveys and were conducted in high and middle-income countries. Sample sizes varied among studies. The prevalence of clinically-significant symptoms of antenatal CMD ranged from 4.6% experiencing \u2018psychological stress\u2019 in Japan to 40.8% \u2018depression\u2019 in China. While all studies were conducted in an earthquake context, only four examined some aspect of earthquake experiences as a risk factor for antenatal CMDs. In multivariable analyses, higher marital conflict, poor social support, multiparity, stresses of pregnancy and the personality characteristic of a negative coping style were identified as risks and a positive coping style as protective against antenatal CMDs.This systematic review found that women who have recently experienced an earthquake are at heightened risk of antenatal mental health problems. It indicates that in addition to the establishment of services for safe birth which is recognised in post-disaster management strategies, pregnancy mental health should be a priority. 
The review also revealed that there is no evidence available from the world\u2019s low-income nations where natural disasters might have more profound impacts because local infrastructure is more fragile and where it is already established that women experience a higher burden of antenatal CMDs.CRD42017056501.PROSPERO- Antenatal common mental disorders (CMDs) which include anxiety, depressive, adjustment, and somatoform disorders are prevSome determinants of antenatal CMDs in general circumstances have been established. Lancaster et al.\u2019s systematThere is limited empirical research about the links between experiences of a disaster and prevalence of antenatal CMDs. Harville et al. conducteThese two systematic reviews included some papers reporting experiences of earthquakes, other natural disaster and antenatal CMDs and indicated that earthquake might increase the risk of antenatal CMDs , 7. HoweThe aim of this study was to identify and synthesise the evidence available about the prevalence and determinants of clinically-significant symptoms of antenatal CMDs among women who have recently experienced an earthquake.We used the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines . The proThe search incorporated both electronic and manual components. The electronic databases: Psychi Info, Cochrane Library, PubMed, Scopus, Medline, Web of Science, CINAHL, ProQuest were searched. We used keywords, boolean operators and truncation to search for the relevant articles. The search terms were: AND AND (disaster* or earthquake*). These search terms revised according to the specificities of each database.The reference lists of articles that met inclusion criteria were searched manually to identify any further publications which had not been identified in the electronic searches.Eligibility criteria were that papers had to report studies from any country that: 1) were about women who had experienced an earthquake in the prior five years, 2) ascertained and reported the prevalence of symptoms of antenatal CMDs using a standard method, 3) had been published in the English in the peer-reviewed literature to October 31st 2017.A data extraction table was used to summarise study characteristics and findings. We summarised the prevalence of clinically-significant symptoms of CMDs and extracted or derived confidence interval, odds ratio, relative risk, coefficients, and significance of determinants of CMDs.Two authors (GKK and TDT) examined the quality of studies independently using a quality assessment tool designed by Greenhalgh and modiThe steps to select eligible papers based on the eligibility criteria are reported in Fig. All the studies were conducted in Japan, Taiwan, or China. All were cross-sectional surveys. In total, data were contributed by 2209 women who had experienced an earthquake and 7381 women in comparison conditions who had not. Sample sizes varied among studies. The smallest was Hibino et al.\u2019s investigation of 99 women anSix studies , 15\u201319 rAlthough all studies were conducted in an earthquake context, the magnitude of earthquakes, the distance between the study site and the epicentre, and timing of experience of the earthquakes varied among studies Table . The magThree studies , 17, 19 Three studies , 16, 17 All studies used a standardised screening tool to measure CMD. 
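As noted in the data-extraction description above, confidence intervals, odds ratios and relative risks were extracted or derived for the included studies. A minimal sketch of how an odds ratio and its 95% confidence interval can be derived from a 2x2 table is given below; the counts are invented for illustration and are not taken from any included study, and the log-scale (Woolf) interval is one standard approach rather than the specific method used by each primary study.

```python
import math

# Sketch of deriving an odds ratio and 95% CI from a 2x2 table during data
# extraction. Counts are hypothetical, not taken from any included study.
#   a = exposed (e.g. earthquake-affected) women with CMD symptoms
#   b = exposed women without CMD symptoms
#   c = comparison women with CMD symptoms
#   d = comparison women without CMD symptoms

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf's method on the log scale
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=80, b=220, c=40, d=260)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```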
Five studies \u201317, 19 uThe prevalence of clinically-significant symptoms of antenatal CMD ranged from 4.6% experiencing \u2018psychological stress\u2019 in Japan to 40.8%Of four studies in China, two reported the prevalence of depressive symptoms among women who had experienced the earthquake during pregnancy: 7.1% and 35.2Among the studies in Japan, Hibino et al. found thWhile all studies examined some determinants, none included all potential risk and protective factors for pregnancy mental health problems , 5 Tabl. One stup\u2009=\u20090.002). Qu et al. [p\u2009<\u20090.001). However, there was no significant association between earthquake experiences and depressive symptoms [While all studies were conducted in an earthquake context, only four , 18, 19 u et al. used a ssymptoms .r\u2009=\u2009\u2212\u20090.372; p\u2009=\u20090.01), objective support and support use and EPDS total score. Subjective support remained significant in multivariable analysis but the other two dimensions of social support were not significant [p\u2009<\u20090.05) [Ren et al. used a snificant . Lau et nificant found a \u2009<\u20090.05) .p\u2009<\u20090.001) and depressive symptoms compared to older women (\u226530\u2009years). Lau et al. [p\u2009<\u20090.05); but the association was not significant in multivariable analysis. Other studies [Three studies found a significant association of antenatal CMDs with at least one socio-demographic factor. Qu et al. found inu et al. also fou studies , 18, 19 p\u2009<\u20090.001), but they did not find an association with depressive symptoms in bivariate analysis. The significance for PTSD also disappeared in multivariate analysis. Lau et al. [p\u2009<\u20090.01) but the association was not significant in multivariable analysis. Dong et al. [Qu et al. found thu et al. found thg et al. , and Reng et al. did not p\u2009=\u20090.001). In Lau et al. [p\u2009<\u20090.05); the association disappeared in multivariable analysis. Qu et al. [p\u2009<\u20090.001), but the association did not remain significant in a multivariable analysis. They also did not find a significant association between family income with depressive symptoms. In a similar case, Lau et al. [p\u2009<\u20090.01); the association was not significant in a multivariable analysis.Ren et al. found thu et al. \u2018s study,u et al. found thu et al. found thr\u2009=\u2009\u2212\u20090.30, p\u2009<\u20090.01), \u2018thoughts and feelings regarding the marriage and one\u2019s spouse\u2019 , and \u2018agreement on relationship matter\u2019 were significantly negatively correlated with EPDS scale score. However, the association did not remain significant in multivariable analysis. Lau et al. [p\u2009<\u20090.01). Qu et al. [r\u2009=\u2009\u2212\u20090.18, p\u2009<\u20090.01) but the association was not significant with PTSD. They did not find any significant association with depressive symptoms or PTSD in multivariable analyses.While the quality of intimate partner relationship is consistently related to the mental health of pregnant women , only thu et al. found thu et al. found thp\u2009<\u20090.05) [p\u2009<\u20090.05) [p\u2009=\u20090.025) [p\u2009=\u20090.033) [Five studies \u201317, 19 e\u2009<\u20090.05) ; (\u0273 = 0.=\u20090.025) ; (p\u2009=\u20090.p\u2009<\u20090.05) [p\u2009<\u20090.001) [p\u2009<\u20090.001) but not in multivariable analyses.Two studies found that multiparous women were more likely experiencing depressive symptoms than nulliparous women . 
On the <\u20090.001) did not p\u2009=\u20090.001) and depressive symptoms [p\u2009<\u20090.001) [Qu et al. and Dong<\u20090.001) ; .p\u2009=\u20090.006). On the other hand, they found that a higher level of negative coping was a risk factor for depressive symptoms .Ren et al. examinedp\u2009<\u20090.01). However, they did not find a significant association in multivariable analysis.Dong et al. measuredIn summary, these studies reported that wounded relatives, poor social support, younger age, new residence of the study area, higher marital conflict, medium support from husbands, multiparity, the perceived stress of pregnancy, and negative coping were risks, and positive coping was protective against antenatal CMDs in multivariable analyses. It was also found that a higher composite score of earthquake experiences, younger age, and perceived stress of pregnancy were a risk factor for PTSD during pregnancy in multivariable analyses.To our knowledge, this is the first systematic review of the evidence available about clinically-significant symptoms of antenatal CMDs among women who had recently experienced an earthquake. It provides a narrative synthesis of the existing evidence, but we acknowledge that because of the heterogeneity of these studies, we could not conduct a meta-analysis and provide meaningful estimates of the prevalence of pregnancy mental health problems among women who have experienced an earthquake. We also acknowledge that as the review was limited to the English-language literature, it is possible that articles published in languages other than English have been missed. Nevertheless, this review provides a comprehensive summary and evaluation of the available evidence in the field.Only seven studies, all from high and middle-income countries met inclusion criteria for this review. They reported a wide range of prevalence of clinically-significant symptoms of antenatal CMDs. The differences in prevalence estimates might be attributable to study methods, but, as has been found in other post-disaster research, can reflect the local circumstances and post-disaster responses .. The studies had varied sample size\u2014small to relatively large\u2014and none justified the sample size or representative adequacy of the samples. This could bring varied results of prevalence of antenatal CMDs. In addition, different measures of psychological symptoms were used, which reduces comparability and might account in part for the wide variation in prevalence estimates. Most importantly, only two studies [Standard study methods include the use of an adequately powered sample size, following standard sample recruitment strategies, and the use of valid and reliable outcome measures studies , 16 usedThere were differences in earthquake characteristics and settings which might also have influenced the outcomes. It is likely that women who were near the epicentres had more severe earthquake experiences compared to those up to 90\u2009km away and they were more likely to experience a higher prevalence of antenatal CMDs. For instance, Lau et al. reportedThe wide prevalence estimates of pregnancy mental health problems might also be attributable to local circumstances and post-disaster response. Compared to the prevalence reported from China, the prevalence in Japan was consistently lower. Japan is a well-resourced country with experience of earthquakes and well-developed post-disaster practices. 
These may assist more rapid recovery of post-adversities and explain why Japanese women appear to have a less negative psychological impact from experiencing an earthquake. In general circumstances too, Japanese have less prevalence of mental health problems compared to other high-income countries . On the Dong et al.\u2019s study conducteWhile these studies contribute to enhancing understanding of the field, they do not show clearly whether and how much earthquake experiences determine the mental health of pregnant women. Without a robust examination of potential factors together with earthquake experiences, it is not possible to conclude whether and how much earthquake experiences contributed to increasing mental health problems during pregnancy. However, this evidence is crucial for making public health decisions about whether to address post-earthquake antenatal mental health problems with universal or targeted strategies.This systematic review found that women with recent direct experience of an earthquake appear to be at higher risk of clinically-significant antenatal CMD symptoms. These findings have implications for disaster responses. At present, the recommendation in this situation is that provisions for a safe birth and neonatal care should bIt also identifies knowledge gaps, in particular, that there is no evidence about the mental health of women who are pregnant and living in low-income nations at the time of an earthquake.It is clearly a priority, that, despite the difficulties of conducting ethical, sensitive, comprehensive, culturally-competent research in such situations, it is needed in order to provide the evidence to inform effective interventions. At a minimum, this research should include standardised measures of earthquake experiences and examine all potential risk and protective factors in order to delineate the nature, prevalence and duration of antenatal mental health problems among women."} {"text": "The annual prevalence of ADHD drug use was defined as the number of prevalent users per 1000 inhabitants.et al.et al.et al.et al.Next, we identified the subgroup focusing on the incident and persistent users of ADHD drugs. First, we defined the index date as the date on which the ADHD drug was first prescribed to the patient during the fiscal year of 2014. We included the patients who had been enrolled in the database at least 180 days before and after the index date, as in previous studies . All estimates were calculated with 95% confidence intervals (CI). All analyses were conducted using R version 3.4.1.v. 43%).There were 86\u00a0756 prevalent and 30\u00a0449 incident users of ADHD drugs in the database . The annet al.et al.et al.et al.et al.et al.et al.This is the first study to establish the representative prescribing practices of ADHD drugs in Japan. The prevalence of ADHD drug use in children and adolescents in Japan (0.4%) is much lower than that in the USA (5.3%) than in the UK (94%) than that in the USA (10\u201329% at 150 days) (Lawson The main limitation of this study is that the entire population was not accounted for in the database, which comprised 1\u20132% of all inhabitants. Nevertheless, our study provides representative evidence on the treatment pattern of ADHD drug use in children and adolescents in Japan."} {"text": "Wireless microdevices are getting smaller and smaller, and in this special issue seven papers address a few miniaturization challenges in the biomedical field, which are common across different applications. Kargaran et al. 
proposes"} {"text": "Understanding the long-term health impacts of the early-life exposome requires the characterization and assimilation of multi \u2018omics\u2019 data to ultimately link molecular changes to exposures. In this way, markers associated with negative health outcomes, such as increased disease risk, can be ascertained. However, determining the extent and direction of metabolic perturbations relies on comparisons to existing metabolomic reference profiles. While such resources are increasingly available for adult populations, analogous tools for children are decidedly lacking. Lau et al. have compiled robust, translatable quantitative metabolomics data on urine and serum samples for European children across six study locations. Metabolites were associated with body mass index, diet and demographics, and correlated within and between biofluids. As a result, a novel association between urinary 4-deoxyerythronic acid and body mass index was uncovered. This work serves as a crucial reference for future studies in exposomics, and \u2013 more broadly \u2013 represents a significant step forward for metabolomics by creating the foundation for a comprehensive reference metabolome for children.https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-018-1190-8Please see related article: BMC Medicine article, Lau et al. [In their recent u et al. characteu et al. . The expu et al. . As statu et al. . Studiesu et al. .Metabolic syndrome is a collection of features, including obesity, hyperglycemia, and hypertension, which increase the risk of developing cardiovascular disease and type 2 diabetes (T2D) , 7. ObesTo define a set of metabolic markers that reflects the exposome first requires a high quality reference metabolome to be established. This was the primary objective of Lau et al. . Such a IDQ p180 kit, along with appropriately stringent inclusion criteria, generated reproducible quantitative information for 177 serum metabolites. While this is by no means a comprehensive characterization of the either metabolome , data from the Biocrates kit are highly precise, reproducible, and \u2013 importantly \u2013 translatable between laboratories and instruments [Recent work confirming diagnoses of various inborn errors of metabolism highlights the importance of a quantitative reference metabolome \u2013 not only for determining pathological elevations in target analytes, but also for untargeted approaches that aim to discover additional or improved metabolic markers . The useGenerating robust, high-quality quantitative data for both the urine and serum metabolomes was a prudent choice for Lau et al.; many differentiating metabolites from biomarker discovery studies remain unidentified. They cannot be quantified for use in a targeted clinical assay, or be linked to metabolic pathways, genomic or proteomic data , and thuThis study by Lau et al. represen"} {"text": "Other external causes may be dietary. Such microbes are capable of shedding small, but functionally significant amounts of highly inflammagenic molecules such as lipopolysaccharide and lipoteichoic acid. Sequelae include significant coagulopathies, not least the recently discovered amyloidogenic clotting of blood, leading to cell death and the release of further inflammagens. The extensive evidence discussed here implies, as was found with ulcers, that almost all chronic, infectious diseases do in fact harbour a microbial component. 
What differs is simply the microbes and the anatomical location from and at which they exert damage. This analysis offers novel avenues for diagnosis and treatment.Since the successful conquest of many acute, communicable (infectious) diseases through the use of vaccines and antibiotics, the currently most prevalent diseases are chronic and progressive in nature, and are all accompanied by inflammation. These diseases include neurodegenerative , vascular and autoimmune (e.g. rheumatoid arthritis and multiple sclerosis) diseases that may appear to have little in common. In fact they all share significant features, in particular chronic inflammation and its attendant inflammatory cytokines. Such effects do not happen without underlying and initially \u2018external\u2019 causes, and it is of interest to seek these causes. Taking a systems approach, we argue that these causes include ( The great enemy of truth is very often not the lie\u00a0\u2013\u00a0deliberate, contrived and dishonest\u00a0\u2013\u00a0but the myth\u00a0\u2013\u00a0persistent, persuasive and unrealistic. Too often we hold fast to the clich\u00e9s of our forebears. We subject all facts to a prefabricated set of interpretations. We enjoy the comfort of opinion without the discomfort of thought\u2019. John F. Kennedy, Commencement Address, Yale University, June 11 1962\u2018These germs \u2010 these bacilli \u2010 are transparent bodies. Like glass. Like water. To make them visible you must stain them. Well, my dear Paddy, do what you will, some of them won't stain; they won't take cochineal, they won't take any methylene blue, they won't take gentian violet, they won't take any colouring matter. Consequently, though we know as scientific men that they exist, we cannot see them\u2019. Sir Ralph Bloomfield\u2010Bonington. The Doctor's Dilemma. George Bernard Shaw, 1906.\u2018I.et al.,et al.,et al.,et al., A very large number of chronic, degenerative diseases are accompanied by inflammation. Many of these diseases are extremely common in the modern \u2018developed\u2019 world, and include vascular , autoimmune , and neurodegenerative diseases. On the face of it these diseases are quite different from each other, but in fact they share a great many hallmarks among o3; 25(OH)D3] and is indeed widely observed in inflammation 3 to 1,25\u2010dihydroxyvitamin D3 2D3); 1,25(OH)2D3 suppresses elements of the adaptive immune system while stimulating elements of the innate immune system D3, explaining how inflammation can simultaneously cause high 1,25(OH)2D3 and low 25(OH)D3 levels and the cytochrome P450 enzyme CYP27B1 that converts 25(OH)Dm Bikle, . In addim Bikle, 1,25(OH)et al. 2D3 and low 25(OH)D3 levels et al., et al., 3 levels and Alzheimer's disease 2D3, and any effects of chronic conditions on the CYP enzymes that produce them. Biomarkers [such as taurinuria et al., et al., et al., et al. 2D but assessed by serum 25\u2010hydroxyvitamin D (25(OH)D)) are inversely associated with hepcidin concentrations and are positively associated with levels of haemoglobin and iron' and LL\u201037 antimicrobial peptide, which lead to a reduction in plasma iron levels the production of hydroxyl radicals, catalysed by \u2018free\u2019 iron that can itself lead to cell death (step 1); and (ii) the iron\u2010based reactivation of dormant microbes (step 2). In this section we concentrate on the first mechanism. 
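The hydroxyl-radical chemistry invoked in step 1 is conventionally summarised by the Fenton reaction and the iron-catalysed Haber-Weiss cycle. The equations below are the textbook forms, included only to make explicit the mechanism elaborated in the next section; they are not a new result of this analysis.

```latex
% Textbook forms of the iron-catalysed reactions referred to above
\begin{align}
\mathrm{Fe^{2+} + H_2O_2} &\rightarrow \mathrm{Fe^{3+} + OH^- + {}^{\bullet}OH}
  && \text{(Fenton reaction)}\\
\mathrm{Fe^{3+} + O_2^{\bullet -}} &\rightarrow \mathrm{Fe^{2+} + O_2}
  && \text{(reduction of ferric iron by superoxide)}\\
\mathrm{O_2^{\bullet -} + H_2O_2} &\rightarrow \mathrm{O_2 + OH^- + {}^{\bullet}OH}
  && \text{(net, iron-catalysed Haber--Weiss reaction)}
\end{align}
```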
Many reviews of general iron metabolism are available elsewhere that affect its reactivity in two linked reactions involving peroxide and superoxide . The amount of \u2018free\u2019 iron varies, but Fe(III) salts are virtually insoluble at neutral pH ; the typical cytoplasmic levels of \u2018free\u2019 iron are in the range 1\u201310\u00a0\u00b5M .Both hydrogen peroxide and superoxide are common products of the partial reduction of oxygen by mitochondria, among other sources Kell, . HydrogeThe ferric iron can then react with superoxide in the Haber\u2013Weiss reaction Kehrer, generativia the products of such reactions, including 8\u2010hydroxy\u2010guanine the reducing agent ascorbic acid (vitamin C) actually becomes a pro\u2010oxidant when poorly liganded, e.g. with ligands such as ethylene diamine tetraacetate (EDTA) ferritin is an intracellular marker, so that the serum ferritin level (widely but erroneously used as a measure of iron status) is simply a sign of cell death Kell, ; and is known as eryptosis et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., et al., The inflammagenic potency of LPS is so great that it is commonly even used as a model to induce symptoms more or less similar to many of the inflammatory diseases of interest. Typically this involves injecting LPS at the site of interest for such diseases. Examples of the use of endotoxin in this way include pre\u2010eclampsia et al., et al., et al., et al., et al., et al., et al., et al., et al., Gram\u2010positive bacteria have a cell wall structure that differs from that of Gram\u2010negatives both in its number of barriers and in the fact that the cell wall component equivalent to LPS is lipoteichoic acid (LTA). LTA is equivalently capable of producing an inflammatory response. In contrast to LPS, which mainly interacts with toll\u2010like receptor 4 (TLR4) . This opens up a considerable new biology et al., et al., et al., et al., There is considerable evidence that fibrin(ogen) can interact with other amyloid structures /ERK signalling pathways in macrophages and TLR4SAA seems to be a ligand for the receptor for advanced glycation end products (RAGE) is also an important and potent amyloid. SAA belongs to a family of apolipoproteins associated with high\u2010density lipoprotein (HDL) in plasma and is an acute\u2010phase protein synthesised predominantly by the liver is associated with a variety of diseases and coagulopathies and of amyloids generally between inflammation, cytokine production, amyloid formation and disease see Fig.\u00a0. A varieXV.It is hard to disentangle diseases caused or exacerbated directly by inflammation from those where the mediating agent is explicitly a cytokine. Figure\u00a0XVI.Induction of cell death will normally cause disease; for example, if the cells in the substantia nigra pars compacta die the patient will develop Parkinson's, and so on. A great many amyloids have been shown to be cytotoxic, and this is why they are considered in detail herein. What is less clear Uversky, , althouget al., et al., et al., et al., et al., et al., et al., et al. fibrin amyloid, which is considerably larger in fibre diameter than those involved in classical amyloid diseases , with differences only apparent at a finer scale . 
The conditions considered herein are all chronic inflammatory diseases, often with quite slow kinetics, and are all in effect diseases of ageing A systems biology strategy was used to show that chronic, inflammatory diseases have many features in common besides simple inflammation.(2) The physiological state of most microbes in nature is neither \u2018alive\u2019 (immediately culturable on media known to support their growth) nor \u2018dead\u2019 (incapable of such replication), but dormant.i) inoculation by microbes that become and remain dormant, largely because they lack the free iron necessary to replicate, and (ii) traumas that induce cell death and the consequent liberation of free iron; these together are sufficient to initiate replication of the microbes.(3) The inflammatory features of chronic diseases must have external causes, and we suggest that the chief external causes are ((4) This replication is accompanied by the production and shedding of potent inflammagens such as lipopolysaccharide or lipoteichoic acid, and this continuing release explains the presence of chronic, low\u2010grade inflammation.(5) Recent findings show that tiny amounts of these inflammagens can cause blood to clot into an amyloid form; such amyloid forms are also capable of inducing cell death and thereby exacerbating the release of iron.et al., (6) Additional to the formal literature that we have reviewed here, it seems to be commonly known that infection is in fact the proximal cause of death in Alzheimer's, Parkinson's, rheumatoid arthritis, multiple sclerosis, etc. It may, for instance, be brought on by the trauma experienced following a fall. Such infections leading to death in chronically ill patients may involve the re\u2010awakening of dormant bacteria rather than novel exogenous infection. This implies that therapies involving the careful use of anti\u2010infectives active against dormant microbes could be effective The role of microbes in stomach ulcers is now well established (Marshall,"} {"text": "Diazepam-ketamine and propofol are associated with acceptable induction and recovery from anaesthesia. Propofol had inferior anaesthetic induction characteristics, but superior and quicker recovery from anaesthesia compared with diazepam-ketamine.Induction of anaesthesia occasionally has been associated with undesirable behaviour in dogs. High quality of induction of anaesthesia with propofol has been well described while in contrast variable induction and recovery quality has been associated with diazepam-ketamine. In this study, anaesthetic induction and recovery characteristics of diazepam-ketamine combination with propofol alone were compared in dogs undergoing elective orchidectomy. Thirty-six healthy adult male dogs were used. After habitus scoring , the dogs were sedated with morphine and acepromazine. Forty minutes later a premedication score (SDS) was allocated and general anaesthesia was induced using a combination of diazepam-ketamine (Group D/K) or propofol (Group P) and maintained with isoflurane. Scores for the quality of induction, intubation and degree of myoclonus were allocated (SDS). Orchidectomy was performed after which recovery from anaesthesia was scored (SDS) and times to extubation and standing were recorded. Data were analysed using descriptive statistics and Kappa Reliability and Kendall Tau B tests. Both groups were associated with acceptable quality of induction and recovery from anaesthesia. 
Group P, however, was associated with a poorer quality of induction ( It may also be indicated in certain cases with cardiovascular compromise has a wide initial dose range (2 mg/kg \u2013 8 mg/kg) in dogs and induces rapid central nervous system depression facilitating anaesthetic induction within 20\u201330 seconds after commencement of intravenous administration of 5.5 kg \u00b1 2.3 kg and with a mean age of 26 \u00b1 13 months were randomly assigned to an induction regimen of either diazepam-ketamine (Group D/K) or propofol (Group P). Dogs were declared healthy based on physiologically normal haematological and serum chemistry profiles and clinical examination performed upon admission to the hospital.Prior to anaesthesia, each dog was starved of food for 8\u201312 hours, then placed in a quiet, warm cage and left undisturbed for 30 minutes prior to being allocated a cage habitus score . Sixty seconds after initiation of bolus administration, depth of anaesthesia was assessed by a co-investigator by orderly testing of the lateral and medial palpebral reflex, menace reflex and jaw tone. If depth of anaesthesia was deemed insufficient for endotracheal intubation, a follow-up intravenous bolus was administered, which in Group-D/K dogs was 0.175 mg/kg and 2.5 mg/kg of diazepam and ketamine, respectively and in Group-P dogs was 1 mg/kg of propofol over 30 seconds. Follow-up boli were administered until endotracheal intubation could be achieved.Immediately after sedation scoring, a 22-gauge indwelling cannula was inserted into the right cephalic vein to facilitate intravenous administration of anaesthetic induction drugs. General anaes\u00adthesia was induced in Group-D/K dogs with an initial combination dose of diazepam and ketamine of 0.375 mg/kg and 5 mg/kg, respectively and in Group-P dogs with an initial propofol dose of 2 mg/kg. Induction boli were administered over a 30-second period using a volumetric infusion pump . The system delivered isoflurane in oxygen via a Tec5 out-of-circuit precision vaporiser initially set to 2% with a fresh gas flow rate set to 600 mL/kg/min. Spontaneous ventilation was permitted.pO2) measured and recorded every 5 minutes. A balanced crystalloid was administered intra-operatively at a rate of 10 mL/kg/h for the duration of the anaesthesia.The overall quality of anaesthetic induction was then scored according to three separate induction criteria as described in Surgery time was recorded on completion of the orchidectomy, isoflurane and oxygen administration stopped and the dogs moved to a designated recovery area to recover from general anaesthesia. A quality of recovery score was then allocated by the primary and co-investigator .Video recording of the entire induction protocol was performed for retrospective analysis by a specialist anaesthetist blinded to the induction agents used. The same induction scoring system was usedt-test, while non-parametric data were analysed using the Wilcoxon-Mann-Whitney test. A significant level of p < 0.05 was set. Comparison of agreement between the observers was tested using the Kappa Reliability or Kendall Tau B tests were analysed for statistical significance using the p > 0.05) between Group D/K and Group P were observed with regard to age and weight. 
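The agreement statistics named in the analysis above (Kappa reliability and Kendall's Tau B between the two observers' simple descriptive scores, alongside the Wilcoxon-Mann-Whitney test for non-parametric group comparisons) can be computed along the following lines. The scores shown are invented for illustration, and the library calls are one common way of obtaining these statistics rather than the software actually used in the study.

```python
# Sketch of the inter-observer agreement and group-comparison statistics
# mentioned above. All scores are hypothetical SDS values, not study data.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import kendalltau, mannwhitneyu

observer_1 = [1, 2, 2, 3, 1, 4, 2, 3, 1, 2]   # hypothetical scores, observer 1
observer_2 = [1, 2, 3, 3, 1, 4, 2, 2, 1, 2]   # hypothetical scores, observer 2

kappa = cohen_kappa_score(observer_1, observer_2)      # chance-corrected agreement
tau_b, p_tau = kendalltau(observer_1, observer_2)      # ordinal association (tau-b)
print(f"kappa = {kappa:.2f}, Kendall tau-b = {tau_b:.2f} (p = {p_tau:.3f})")

# Non-parametric comparison of an ordinal outcome (e.g. recovery score) between
# the two induction groups, analogous to the Wilcoxon-Mann-Whitney test above.
group_dk = [3, 3, 2, 4, 3, 2, 3, 4]   # hypothetical recovery scores, Group D/K
group_p  = [1, 2, 1, 2, 1, 2, 2, 1]   # hypothetical recovery scores, Group P
u_stat, p_val = mannwhitneyu(group_dk, group_p)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_val:.4f}")
```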
Additionally, there were no statistically significant differences with regard to cage habitus score, sedation score and duration of anaesthetic maintenance , as Group D/K had better induction scores depicted by shorter induction times (p = 0.018) and fewer follow-up boli required to achieve endotracheal intubation , with nine dogs in Group P (n = 18) observed to have myoclonus. Group D/K had a very low incidence of myoclonus, with only one dog having muscle tremors.Group P had a greater incidence of myoclonus than Group D/K (p = 0.00002) and Group P had significantly superior recoveries when compared with Group D/K in all categories of induction and recovery scoring, barring the intubation score, where Kendall's Tau B test indicated fair agreement (0.32).All dogs were client owned and consent was required in writing prior to being enrolled in the study. The dogs enrolled in the study were exposed to a moderate degree of discomfort as a result of the orchidectomy performed. The surgical procedure was performed by a specialist surgeon and all venepunctures as well as intravenous cannula placement for blood collection and drug administration, respectively, were performed by experienced veterinarians to limit the level of discomfort experienced. Appropriate analgesia including morphine and carprofen were provided during the peri-anaesthetic period. The present study was pre-approved by both the Animal Ethics Committee and the Research Committee of the Faculty of Veterinary Science, University of Pretoria (V017-33).This study demonstrated that Group D/K was associated with better quality of induction and myoclonus scores when compared with Group P. Recovery from anaesthesia was observed to be inferior and of longer duration for Group D/K than for Group P.et al.et al.et al.et al.et al.et al.The high quality of induction score associated with Group D/K in the present study supports current literature, which describes excitement-free dissociative anaesthesia with sufficient muscle relaxation to permit endotracheal intubation in dogs degree of pre-anaesthetic sedation achievedclinician experiencescoring system usedsignalment of the dogs that were used in the trial.et al.et al. (et al. (et al. (et al.The propofol dose range described for anaesthetic induction in dogs is wide (Jim\u00e9nez .et al. . The rat.et al. . This wa (et al. and Robi (et al. . The slo. et al.. As a reet al.et al.et al.et al.et al. (et al.Adequate premedication provides anxiolysis, muscle relaxation and analgesia as well as decreasing induction agent dose requirements (Grint, Alderson & Dugdale .et al. . In addi. et al..Timeous endotracheal intubation after induction of anaesthesia with a short-acting induction agent may be challenging, as previously reported in dogs anaesthetised with propofol (Clarke & Hall Signalment plays a role in the induction dose requirements in dogs. A study performed by Boveri, Brearley and Dugdale highlighet al. (One dog in Group D/K that scored poorly on induction and, similar to an outlier in the study by White et al. , may havet al.et al.et al.et al.Recovery was statistically superior and shorter in Group P when compared with Group D/K. Historically propofol generally has been associated with acceptable recoveries and the results of the present study further support published literature (Bufalari et al. (et al.Dogs induced with diazepam-ketamine were associated with statistically inferior and prolonged recoveries when compared with propofol. 
Clinically, however, the quality of anaesthesia was acceptable. Recovery from diazepam-ketamine demon\u00adstrated an unremarkable return to cons\u00adciousness, routine extubation, occasional paddling and vocalisation. Such a recovery, in a clinical setting, is considered acceptable as dogs were not at risk of self-inflicted injury as a result of trauma and most dogs would be able to stand after a relatively short period. Subjectively superior recoveries when compared with the present study's results were described by White et al. in dogs . et al..et al.et al.et al. was recently questioned by Ferchichi et al. (The subjective scoring systems incorporated in the present study successfully differentiated the quality of induction and recovery between two groups of healthy dogs. A moderate to good level of agreement between observers was achieved but only after sufficient training on the correct use of the scales. The SDS scoring systems used to score induction and recovery from anaesthesia have been described previously in literature but have not been fully validated (Amengual i et al. in termsthe methodology of associating induction and recovery characteristics to a single agent without confirming adequate plasma concentrations of the induction agent/sthe strength of comparing induction scores from two induction agents where administrations of the agents were not at equipotent dosesremarking on the quality of anaesthetic recovery from a specific induction agent without demonstrating the presence of the drug in adequate concentrations in plasma.et al. (These arguments raised valid concerns and demonstrate limitations in the present study as well as the study performed by Jim\u00e9nez et al. . The resThe present study performed anaesthesia exclusively in healthy, adult male dogs weighing less than 10 kg. Future studies performed in female dogs, juvenile dogs or dogs weighing in excess of 10 kg may yield different results and outcomes.The use of propofol alone and a diazepam-ketamine combination both produced clinically acceptable induction of anaesthesia in the dogs in this study; however, propofol administered at a low dose rate produced measurably inferior induction characteristics. Recovery from anaesthesia induced with both of these protocols was satisfactory; however, recovery from propofol was more rapid and associated with less excitement and ataxia."} {"text": "A mechanics-based brain damage framework is used to model the abnormal accumulation of hyperphosphorylated p-tau associated with chronic traumatic encephalopathy within the brains of deceased National Football League (NFL) players studied at Boston University and to provide a framework for understanding the damage mechanisms. p-tau damage is formulated as the multiplicative decomposition of three independently evolving damage internal state variables (ISVs): nucleation related to number density, growth related to the average area, and coalescence related to the nearest neighbor distance. The ISVs evolve under different rates for three well known mechanical boundary conditions, which in themselves introduce three different rates making a total of nine scenarios, that we postulate are related to brain damage progression: (1) monotonic overloads, (2) cyclic fatigue which corresponds to repetitive impacts, and (3) creep which is correlated to damage accumulation over time. Different NFL player positions are described to capture the different types of damage progression. 
Skill position players, such as quarterbacks, are expected to exhibit a greater p-tau protein accumulation during low cycle fatigue (higher amplitude impacts with a lesser number), and linemen who exhibit a greater p-tau protein accumulation during high cycle fatigue (lower amplitude impacts with a greater number of impacts). This mechanics-based damage framework presents a foundation for developing a multiscale model for traumatic brain injury that combines mechanics with biology. Note that mechanical \u201cfatigue\u201d is not medical fatigue; mechanical fatigue includes an external force of a particular amplitude that is cycled at a certain frequency.Regarding applied mechanics, Garrison and MoodyN) or reversals (2N) occur over time at a particular frequency; hence, the repetitive impact frequency is important when considering the onset of CTE as high-amplitude impacts are associated with low-cycle fatigue (LCF) and low-amplitude impacts are associated with high-cycle fatigue (HCF). When a body is subjected to an applied stress over time, \u201ccreep\u201d arises from straining and damage. Hence, the time duration of a material under stress is important.Five possible creep stress fields in the brain can be acknowledged: (1) intracranial pressure (ICP), (2) gravity inducing a body force, (3) a local stress field arising from an adjacent damaged local brain region (p-tau) due to local expansions and contractions thus inducing stress gradients on the adjacent material exhibited the least amount of damage and was associated with an age of 28\u00a0years old with data scatter of 13\u00a0years; Level 2 incurred more damage and was associated with an age of 44\u00a0years old with data scatter of 16\u00a0years; Level 3 incurred even more damage and was associated with an age of 56\u00a0years old with data scatter of 14\u00a0years; finally, Level 4 exhibited the largest area fraction of dark areas associated with p-tau accumulation indicating that these players had incurred the greatest amount of damage. The age associated with Level 4 was 77\u00a0years old with data scatter of 12\u00a0years. The definition of each level was qualitatively assessed by the Boston University Team.et al.https://imagej.nih.gov/) was then used to create global thresholding restrictions to determine the damage area and the total area of each brain slice image, which were both approximately converted from pixel density to cm2. Additionally, the nucleation (#/cm2) of each brain slice were calculated usingATotal is the total area of each slice. The nearest neighbor distances (NNDs) between each tau protein damage area were then calculated using ImageJ and the \u201cNearest Neighbor Distances Calculation with ImageJ\u201d plugin.ADamage is the previously determined damage area and ATotal is the total area of each brain slice image. As McKee et al.NND data set was arranged in descending order and assigned approximate ages (years). For Figs.\u00a0NND, and damage (%) data sets were then fit to the ISV models in Figs.\u00a0To quantify the p-tau protein damage throughout the various levels of CTE, the 76 full brain slice images documented in McKee Figure\u00a0As Fig.\u00a0Figure\u00a0et al.As Garrison and MoodyCcoeff is the coefficient to the equation, and M is a complicated term that includes the microstructure and stress-state dependence. Because of the lack of knowledge of the subscale information associated with the regions of p-tau protein accumulations, the M parameter is yet to be related to microstructural features. 
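A minimal sketch of the slice-level quantification described above is given below, assuming each thresholded p-tau damage region has been reduced to an area (cm^2) and a centroid. The numbers are invented, and the original workflow used ImageJ global thresholding and its nearest-neighbour plugin rather than this code; the sketch only makes the definitions of nucleation density, average area, nearest-neighbour distance and damage fraction explicit.

```python
import math

# Per-slice quantification of p-tau damage regions (hypothetical values):
#   nucleation      = number of regions per cm^2 of slice     (ISV 1)
#   growth          = average area of the damaged regions     (ISV 2)
#   mean NND        = mean nearest-neighbour distance (cm)    (ISV 3, coalescence)
#   damage fraction = total damaged area / total slice area * 100

def quantify_slice(region_areas, region_centroids, total_area):
    n = len(region_areas)                              # assumes n >= 2
    nucleation = n / total_area
    growth = sum(region_areas) / n
    damage_fraction = 100.0 * sum(region_areas) / total_area
    nnds = []
    for i, (xi, yi) in enumerate(region_centroids):
        d_min = min(math.hypot(xi - xj, yi - yj)
                    for j, (xj, yj) in enumerate(region_centroids) if j != i)
        nnds.append(d_min)
    mean_nnd = sum(nnds) / n
    return nucleation, growth, mean_nnd, damage_fraction

# Hypothetical slice: three damaged regions on a 60 cm^2 section
areas = [0.02, 0.05, 0.01]
centroids = [(1.0, 1.0), (1.4, 1.2), (3.0, 2.5)]
print(quantify_slice(areas, centroids, total_area=60.0))
```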
Furthermore, hydrostatic tension is assumed locally for the stress-state dependence; hence, even when compression occurs as a boundary condition, tension arises locally because of the Poisson effect. Instead of the coarse coding into four stages that the Boston University Team employed for the progressive damage states, we reorganize the data as one continuous stream by sorting all points according to the general trend seen in the four stages. This allows for easier correlations to the ISV damage variables. The ISV nucleation model of Horstemeyer and Gokhale is adopted. The equation used for damage growth is similar in form to the nucleation equation, where Z includes the microstructure and stress-state dependence analogous to the nucleation equation. The equations for the NND and coalescence are also similar in form, where Q includes the microstructure and stress-state dependence analogous to the nucleation equation, Ccoeff is the coefficient of the coalescence equation, and d is the square root of the area damaged by p-tau accumulation. The multiplication of the ISV nucleation, ISV growth, and ISV coalescence together gives rise to the total damage, which is the area fraction (damage = nucleation x growth x coalescence), following Horstemeyer et al. We correlate the physics-based ISV model with the sorted Boston University data regarding CTE p-tau pathology, and we relate the data and damage mechanisms to football player positions. The nucleation number density (#/cm2) is plotted as a function of approximate time of death for the 76 specimens with pictures examined in the Boston University study. When ncoeff equals 0.01 and M equals 5.338, a close correlation of the damage ISV nucleation model to the tau protein pathology garnered by the Boston University Team exists. The average area damaged by p-tau protein deposition, signifying the damage growth of tau protein spots, was measured on the brains of the deceased NFL players in the Boston University study; when Z equals 0.105, an excellent correlation of the damage growth model to the tau protein pathology garnered by the Boston University Team is found. To capture coalescence, the NND needs to be quantified. Following Horstemeyer et al., the NNDs (cm) within the region damaged by p-tau deposition signify the damage interaction of the tau protein measured on the brains of the deceased NFL players analyzed in the Boston University study. Nevertheless, the difference between the skill player positions and linemen resides in the fact that the LCF and HCF regimes are exhibited, respectively, as illustrated in the fatigue-life curve. While a player at any position can experience an LCF monotonic overload (i.e., a concussion), a couple of trends correlating player position to the abnormal accumulation of p-tau can be discerned based upon the data of Pellman et al. Although greater magnitude loads could occur in a blast or a car crash, where brain tissue tearing or arterial tearing could arise from very large mechanical loads, the football related damage events are more related to a CTE threshold, denoted by the black line in the fatigue-life curve. Also of note, the LCF regime transitions to the HCF regime at the point where the plastic deformation asymptote intersects the elastic deformation asymptote (both designated by dashed lines).
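As a hedged illustration of how the nucleation and growth ISVs can be fit to such data and then multiplied into a total damage fraction, the sketch below assumes simple exponential-in-age forms and a constant coalescence term. The functional forms, the data arrays, and the parameter names are assumptions for demonstration only; they are not the paper's exact rate equations, whose constants Ccoeff, M, Z, and Q are discussed above.

```python
# Hedged sketch: fit assumed exponential-in-age ISV forms to hypothetical per-slice data,
# then take total damage as the product phi = nucleation * growth * coalescence.
import numpy as np
from scipy.optimize import curve_fit

def isv_exponential(age, coeff, rate):
    """Generic ISV trend used here for illustration (not the paper's rate equation)."""
    return coeff * np.exp(rate * age / 100.0)   # age scaled by 100 years for conditioning

# Hypothetical sorted data: approximate age at death (years) and per-slice measurements
age = np.array([28.0, 35.0, 44.0, 50.0, 56.0, 65.0, 77.0])
eta = np.array([2.0, 4.0, 9.0, 20.0, 45.0, 90.0, 200.0])        # nucleation, spots per cm^2
nu = np.array([1e-4, 2e-4, 4e-4, 8e-4, 1.5e-3, 3e-3, 6e-3])     # growth, average spot area (cm^2)

(eta_c, eta_m), _ = curve_fit(isv_exponential, age, eta, p0=[1.0, 5.0])
(nu_c, nu_z), _ = curve_fit(isv_exponential, age, nu, p0=[1e-4, 5.0])

coalescence = 1.0   # placeholder: NND-based interaction term taken as constant here
phi = isv_exponential(age, eta_c, eta_m) * isv_exponential(age, nu_c, nu_z) * coalescence

print("fitted nucleation (coeff, rate):", eta_c, eta_m)
print("fitted growth     (coeff, rate):", nu_c, nu_z)
print("predicted damage fraction phi:", np.round(phi, 4))
```

In the paper the corresponding fitted constants are ncoeff = 0.01 with M = 5.338 for nucleation and Z = 0.105 for growth; the placeholder arrays above will not reproduce those values.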
Given this information, the high amplitude impacts experienced at the positions of QB/WR/DB/TE occur within the LCF regime, while repetitive cycles of low amplitude impacts, like those experienced by linemen, result in HCF failure. Additional data, as reported previously by Funk et al. and others, were required to plot the strain-life curve. As aforementioned, offensive linemen incur HCF regime related damage, whereas a skill position such as a QB incurs LCF regime related damage. The brain consists of at least two networks of recursively branching structures: the blood vessels and the neural processes of axons and dendrites. In addition, the brain contains tubes within tubes: microtubules within the fluid-filled neurites that transport chemicals out to synaptic terminals and back. The mechanical properties at a subcellular scale are very nonuniform and, like the neurons themselves, are likely to be highly anisotropic and heterogeneous. Therefore, a detailed, multiscale model of brain mechanics will necessarily explore the points of particular vulnerability and respond to experimental data from animal studies that do not yet exist. Despite our present paucity of knowledge, there exist intriguing data from other related areas of investigation (e.g., Da Mesquita et al.). For example, neurofilament light can be detected in the blood and spinal CSF of a particular group of Alzheimer's patients many years before there is behavioral impairment. Neurofilament light is also likely to be associated with brain damage in TBI and, if so, should be investigated in football players and in animal studies. Although p-tau accumulation is associated with microtubule damage repair, agglomeration of misfolded p-tau into fibrils is pathological, and its precise effects are not understood. We speculate that these fibrils may disrupt the cytoskeleton and possibly the extracellular matrix. There are a number of pathologies, collectively termed amyloidoses, in which plaques of accumulated proteins form and change the properties of the tissue. The lens of the eye is an interesting model in that the progressive stiffening of the lens with age is due to the agglomeration of proteins that easily stick together to form fibrils. A recent paper found that a particular steroid molecule, lanosterol, can dissolve these protein plaques and reverse the course of lens stiffening. While it is unknown what the mechanical effects of tau fibrils are in CTE, the studies in the lens are suggestive of work that needs to be undertaken. At a greater length scale, the axons of projection neurons are organized into a number of nerve tracts that traverse the brain anteroposteriorly, radially from cortex to subcortical nuclei and back, and laterally between the two cerebral hemispheres. Large, long distance axons may be particularly vulnerable to mechanical insult, and it would be interesting to examine how tau concentration is related to nerve tract terminations.
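The LCF/HCF distinction invoked above follows the standard strain-life description, in which an elastic (Basquin) asymptote and a plastic (Coffin-Manson) asymptote intersect at the transition life. The general form is sketched below; the symbols carry their conventional fatigue meanings, and the brain-tissue constants used in the paper's fatigue-life curve are not reproduced here.

```latex
% Generic strain-life form; sigma_f', epsilon_f', b, c, and E are conventional fatigue
% constants (not values fitted to brain tissue in this study).
\begin{align}
  \frac{\Delta\varepsilon}{2}
     &= \frac{\sigma_f'}{E}\,(2N_f)^{b}      % elastic (Basquin) asymptote, dominant in HCF
      + \varepsilon_f'\,(2N_f)^{c},          % plastic (Coffin-Manson) asymptote, dominant in LCF
  \\
  2N_t &= \left(\frac{\varepsilon_f'\,E}{\sigma_f'}\right)^{\tfrac{1}{b-c}}.
         % transition life: the point where the two asymptotes intersect
\end{align}
```

High-amplitude, low-count loading (skill positions) sits on the plastic-dominated LCF branch, while low-amplitude, high-count loading (linemen) sits on the elastic-dominated HCF branch, with the transition life marking the crossover between the two regimes.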
All of these, and many other molecular neurobiological issues that we do not have space to discuss here, are suggestive of what a next-generation multiscale model might contain to examine detailed mediating mechanisms in CTE. A mechanics-based damage framework comprising nucleation (number density), growth (average area), and coalescence (NND) has been correlated to the damage progression found in the brains of deceased NFL players donated to Boston University. The strong correlation indicates that the different mechanics notions of nucleation, growth, and coalescence are key deformation mechanisms in brain damage progression. An ISV model with three physically motivated ISV rate equations captured the p-tau accumulation through damage nucleation, growth, and coalescence in the brains of the deceased NFL players. Different football player positions were identified with various mechanical loading conditions: skill position players, like QBs, incurred mainly LCF damage, whereas linemen incurred mostly HCF damage. Given the unknown multiscale mechanisms, this study introduces a \u201cfirst order\u201d mechanical damage framework."}