{"text": "Regional chemotherapy allows further exploitation of the steep dose response curve of most chemotherapeutic agents, while systemic toxicity remains tolerable. We investigated the difference in maximally tolerated dose, pharmacokinetics and antitumour effect comparing administration of melphalan as a bolus in isolated liver perfusion (ILP) or via hepatic artery infusion (HAI). For these in vivo studies an experimental model for liver metastases in male WAG/Ola rats is obtained by subcapsular inoculation of CC531 rat colon carcinoma cells. In this system, ILP allowed administration of a two times higher dose than HAI (12 mg kg-1 vs 6 mg kg-1). In both treatment modalities systemic toxicity (leukopenia) was dose limiting. No hepatic toxicity was observed. Bolus administration of the maximally tolerated doses of melphalan in HAI (6 mg kg-1) and ILP (12 mg kg-1) resulted in four times higher concentrations in both liver and tumour tissue of the ILP treated rats. However, the ratio of mean drug concentration in liver vs tumour tissue appeared to be 1.5 times that found for HAI. In the range of the in tumour tissue measured melphalan concentrations the CC531 cells showed a steep dose response relationship in vitro. Whereas HAI resulted in significant tumour growth delay, complete remissions were observed in 90% of the rats treated with ILP. This study shows that with 12 mg kg-1 melphalan in ILP highly effective drug concentrations are achieved in CC531 tumour tissue; although the melphalan concentration in liver tissue shows an even higher increase than in tumour tissue, hepatic toxicity is negligible in this dose range.(ABSTRACT TRUNCATED AT 250 WORDS)"} {"text": "We describe melphalan pharmacokinetics in 26 patients treated by isolated limb perfusion (ILP). Group A (n = 11) were treated with a bolus of melphalan (1.5 mg kg-1), and in a phase I study the dose was increased to 1.75 mg kg-1. 
The higher dose was given as a bolus to Group B (n = 9), and by divided dose to Group C (n = 6). Using high performance liquid chromatography (HPLC), the concentrations of melphalan in the arterial and venous perfusate (during ILP) and in the systemic circulation (during and after ILP) were measured. Areas under the concentration-time curves were calculated for the arterial and venous perfusate (AUCa, AUCv) and for the systemic circulation (AUCs). In all three groups the peak concentrations of melphalan were much higher in the perfusate than in the systemic circulation. The pharmacokinetic advantage of ILP can be quantified by the ratio AUCa/AUCs, median value 37.8 (2.1-131). AUCa and AUCv were both significantly greater in Group B than in Group A. In Groups B and C acceptable 'toxic' reactions occurred but were not simply related to melphalan levels. Our phase I study has allowed us to increase the dose of melphalan to 1.75 mg kg-1, but we found no pharmacokinetic advantage from divided dose administration."} {"text": "Systemic toxicity is usually the dose-limiting factor in cancer chemotherapy. Regional chemotherapy is therefore an attractive strategy in the treatment of liver metastasis. Two modes of regional chemotherapy, hepatic artery infusion (HAI) and isolated liver perfusion (ILP), were compared with respect to toxicity and to tissue and biofluid concentrations of mitomycin C (MMC). In Wistar-derived WAG rats the maximum tolerated dose of mitomycin C via HAI was 1.2 mg kg-1. Body weight measurements after HAI with doses higher than 1.2 mg kg-1 suggest both an acute and a delayed toxic effect of mitomycin C, since the time-weight curves were triphasic: a rapid weight loss, a steady state, and a second fall in weight. These rats died due to systemic toxicity. ILP with 4.8 mg kg-1 was associated with no signs of systemic toxicity and only transient mild hepatotoxicity. ILP with 6.0 mg kg-1 was fatal mainly due to hepatic toxicity. 
The four times higher maximum tolerated dose in ILP resulted in a 4-5 times higher peak concentration of mitomycin C in liver tissue, while the plasma concentration remained significantly lower than in the HAI-treated rats. In the tumour tissue a 500% higher concentration of mitomycin C was measured in rats treated by ILP with 4.8 mg kg-1 than in those treated by HAI with 1.2 mg kg-1. We demonstrated that with ILP a 400% higher dose of mitomycin C could be administered safely, resulting in a five times higher tumour tissue concentration. In view of the steep dose-response curve of this alkylating agent, this opens new perspectives for the treatment of liver metastasis."} {"text": "P<0.0001) and were significant predictors of tissue origin (P<0.0001). In solid tissues (n\u200a=\u200a119) we found striking, highly significant CpG island\u2013dependent correlations between age and methylation; loci in CpG islands gained methylation with age, loci not in CpG islands lost methylation with age (P<0.001), and this pattern was consistent across tissues and in an analysis of blood-derived DNA. Our data clearly demonstrate age- and exposure-related differences in tissue-specific methylation and significant age-associated methylation patterns which are CpG island context-dependent. This work provides novel insight into the role of aging and the environment in susceptibility to diseases such as cancer and critically informs the field of epigenomics by providing evidence of epigenetic dysregulation by age-related methylation alterations. Collectively we reveal key issues to consider both in the construction of reference and disease-related epigenomes and in the interpretation of potentially pathologically important alterations. Epigenetic control of gene transcription is critical for normal human development and cellular differentiation. 
While alterations of epigenetic marks such as DNA methylation have been linked to cancers and many other human diseases, interindividual epigenetic variations in normal tissues due to aging, environmental factors, or innate susceptibility are poorly characterized. The plasticity, tissue-specific nature, and variability of gene expression are related to epigenomic states that vary across individuals. Thus, population-based investigations are needed to further our understanding of the fundamental dynamics of normal individual epigenomes. We analyzed 217 non-pathologic human tissues from 10 anatomic sites at 1,413 autosomal CpG loci associated with 773 genes to investigate tissue-specific differences in DNA methylation and to discern how aging and exposures contribute to normal variation in methylation. Methylation profile classes derived from unsupervised modeling were significantly associated with age (P<0.0001). The causes and extent of tissue-specific interindividual variation in human epigenomes are underappreciated and, hence, poorly characterized. We surveyed over 200 carefully annotated human tissue samples from ten anatomic sites at 1,413 CpGs for methylation alterations to appraise the nature of phenotypically, and hence potentially clinically, important epigenomic alterations. Within tissue types, across individuals, we found variation in methylation that was significantly related to aging and environmental exposures such as tobacco smoking. Individual variation in age- and exposure-related methylation may significantly contribute to increased susceptibility to several diseases. As the NIH\u2013funded HapMap project is critically contributing to annotating the human reference genome and defining normal genetic variability, our work raises key issues to consider in the construction of reference epigenomes. It is well recognized that understanding genetic variation is essential to understanding disease. 
Our work, and the known interplay of epigenetics and genetics, makes it equally clear that a more complete characterization of epigenetic variation and its sources must be accomplished to reach the goal of a complete understanding of disease. Additional research is necessary to define the mechanisms controlling epigenomic variation. We have begun to lay the foundations for essential normal tissue controls for comparison to diseased tissue, which will allow the identification of the most crucial disease-related alterations and provide more robust targets for novel treatments. While all somatic cells in a given individual are genetically identical (excepting T and B cells), different cell types form highly distinct anatomic structures and carry out a wide range of disparate physiologic functions. The vast repertoire of cellular phenotypes is made possible largely via epigenetic control of gene expression, which is known to play a critical role in cellular differentiation. Epigenetics is the study of mitotically and/or meiotically heritable changes in gene function that cannot be explained by changes in DNA sequence. We have previously distinguished normal and tumor tissues using methylation profiling et al. observed significant variation among individuals when bisulfite sequencing a particular CpG island, and suggested that larger-scale studies are required to determine the extent of interindividual variability in methylation patterns et al. suggests a more complex picture. These authors found both increased and decreased intra-individual global methylation levels (enriched for promoter regions) in peripheral blood cell DNA over time. Efforts to describe the methylation profiles of normal tissues are now underway. 
Recent genome-wide studies of methylation in normal human tissues have shown that DNA methylation profiles are tissue-specific and correlated with sequence elements. In this study we used Illumina's GoldenGate methylation platform to investigate cytosine methylation in 217 normal human tissue specimens from 10 different anatomic sites in order to begin to understand variation both between and within tissues across individuals. Profiling CpG methylation of normal human tissues allowed us to begin characterizing the role of aging and environmental exposures in interindividual methylation variation, as well as the specific gene loci that determine normal tissue specificity. This work highlights the dynamic nature of epigenomes, and begins to disentangle the roles of aging, environmental factors, and innate variability among individual epigenomic profiles, both within and across tissues.P<0.0001). Further, age was significantly associated with methylation classes (P<0.0001). Separating samples into groups as placenta, blood, or other solid tissue, we found a significant association between group and methylation profile classes (P<0.0001). Array methylation data were first assembled for exploration and visualization with unsupervised hierarchical clustering using Manhattan distance and average linkage for the 500 most variable autosomal CpG loci. EpigeneP<0.0001). Not unexpectedly, tissue types with larger sample sizes showed significantly reduced misclassification error rates (P<0.05). 
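The exploratory clustering step described above (unsupervised hierarchical clustering with Manhattan distance and average linkage over the 500 most variable autosomal CpG loci) can be sketched in Python with scipy, where "cityblock" is the Manhattan metric. The data below are synthetic stand-ins for the 217-sample beta-value matrix, not the study's data:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Synthetic stand-in for the array data: 217 samples x 1413 autosomal
# CpG loci, with values in [0, 1] like Illumina average beta values.
betas = rng.beta(0.5, 0.5, size=(217, 1413))

# Keep the 500 most variable loci, as in the described analysis.
top = np.argsort(betas.var(axis=0))[::-1][:500]
subset = betas[:, top]

# Manhattan ("cityblock") distance with average linkage.
dist = pdist(subset, metric="cityblock")
tree = linkage(dist, method="average")

# Cut the tree into at most 10 clusters (one per anatomic site).
labels = fcluster(tree, t=10, criterion="maxclust")
print(tree.shape)  # (216, 4): one merge per step for 217 samples
```

With real beta values the resulting dendrogram would group samples by tissue of origin; here the labels are arbitrary because the input is random.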
The mean and standard deviation of average beta values for all autosomal CpG loci in each tissue type, and values for the decrease in random forest classification accuracy with locus removal are given in P<0.0001), and the mean and standard deviation of average beta values for all autosomal CpG loci in each of placenta, blood, or solid tissue, and values for the decrease in random forest classification accuracy with locus removal are given in Random Forests (RF) classification of all samples based on methylation average beta values at all autosomal loci returned a confusion matrix showing: which samples are correctly classified, which are misclassified, and the misclassification error rate for each sample type . OverallP<0.005), though we did not detect significant associations between methylation class and smoking status, packyears, or alcohol consumption. An RPMM of lung tissues (n\u200a=\u200a53) resulted in five methylation classes (P<0.07).Variation in tissue-specific methylation relative to differences between tissue types was first explored visually. Scatter plots of methylation values for representative samples from two different tissues were less well correlated than similar plots of two representative samples from the same tissue type, though variation in tissue-specific methylation was also evident . Tissue- classes where cl classes , and claQ<0.05, MLH1 (Q<0.0001), and RIPK3 (Q<0.002) methylation; and over 30 CpG loci had significantly altered methylation in never versus ever drinkers were also observed. Additionally, and in contrast to the predominantly increased age associated methylation at other gene-loci, there was a significant age-related decrease in CpG methylation of the de novo methyltransferase DNMT3B; and unlike the vast majority of other CpGs tested, DNMT3B_P352 was not located in a CpG island , after correcting for multiple comparisons, over 300 CpG loci had age-related methylation alterations Q<0.05, . 
RestricP\u200a=\u200a7.0E-04; P\u200a=\u200a5.2E-05, There is now a considerable literature that suggests that genome structure affects both the initial placement of DNA methylation marks in development class-specific mean associations between age and methylation and plotted the estimates with their 95% confidence intervals. In a class-specific model for solid tissue samples, there was a positive correlation between age and methylation in classes whose loci were predominantly located in CpG islands with RPMM (aiming to examine classes of CpGs with similar methylation profiles in more detail), grouping CpGs with similar methylation into eight separate classes. The CpG island status of all loci was plotted, and illustrates the well known tendency for CpGs located in islands to be unmethylated, while non-island CpGs tend to be methylated . We agaiRARA_P176 (P\u200a=\u200a0.003), DNMT3B_P352 (P\u200a=\u200a0.008), and LIF_P383 was validated by pyrosequencing based on aging and the environment has the potential to dramatically improve the success of studies of epigenetic alterations in disease. Hence, our work characterized methylation of phenotypically important CpG loci across several human tissue types, elucidating interindividual tissue-specific variation in methylation profiles and the contribution of CpG island context to age associated methylation alterations. This work increases our appreciation for the dynamic nature of the epigenome, and begins to define basic tenets to follow in pursuit of both constructing reference epigenomes and elucidating epigenetic alterations truly indicative of disease states.Using recursively-partitioned mixture modeling and random forests approaches, we differentiated tissues based on CpG methylation profile, consistent with other recent studies conducting genome-wide DNA methylation profiling Factors known to contribute to methylation alterations include carcinogen exposures, inflammation, and diet. 
Several carcinogen exposures such as tobacco, alcohol, arsenic, and asbestos have been associated with methylation-induced gene-inactivation in various human cancers including bladder cancer, head and neck squamous cell carcinoma, and mesothelioma et al. described an association between aging colonic mucosa and estrogen receptor methylation et al. reported that 29 loci had age-related methylation alterations, with 23 loci displaying increased methylation with age and 6 decreasing with age et al. found both increased and decreased methylation levels dependent on the individual, with over 50% of participants exhibiting >5% change in methylation Cancer is a disease of aging, and initial studies of age-related methylation in normal tissues were motivated in large part by studies of methylation in cancer et al. and Bjornsson et al. who showed bi-modal age-related methylation in normal tissues. A direct comparison, by examination of the data of Bjornsson et al., indicated that a high percentage of their top 50 most age-altered loci are not located in CpG islands; among 24 of 30 autosomal CpGs in their Stratifying our data on CpG-island status of loci, we showed that both the direction and strength of correlation between age and methylation were largely dependent upon CpG island status. More specifically, we found a propensity for CpG-island loci to gain methylation with age, and non-island CpGs to lose methylation with age. Our data are consistent with the literature that has demonstrated age-related increases in methylation at gene-loci found within CpG islands nd functional allele). Future population-based studies addressing the potential of quantifying age and/or exposure associated methylation alterations indicative of disease risk are necessary.The observed pattern of age associated methylation was irrespective of tissue-type, suggesting a common mechanism or dysregulation to explain these alterations. 
Reduced fidelity of maintenance methyltransferases with aging is one potential explanation for age related decreases in methylation; while age related increases in methylation could potentially reflect the accumulation of stochastic methylation events over time. As the examined tissues do not have a pathologic phenotype, methylated CpGs in these cells may not indicate dramatic functional consequences upon gene expression. However, the (in part selective) accumulation of alterations without readily detectable functional consequences should not be interpreted as biologically insignificant. Age-related drift of normal epigenomes without prominent changes in gene expression may nonetheless confer significantly increased risk of conversion to a pathologic phenotype by enhancing both the likelihood and frequency of methylation events that ultimately result in aberrant expression or altered genomic stability. For example, in the context of acquired \u201cnon-functional\u201d CpG methylation in the promoter region of an aged individual, continued stochastic methylation events (e.g. \u201cmethylation spreading\u201d) increase the chance of methylation induced silencing at that promoter (or silencing of another locus through action at a distance via silencing of other important regions such as enhancers), and hence, progression to a pathologic phenotype. Certainly, this hypothesis is especially plausible for the many diseases of aging. Alternatively, aberrant CpG methylation that silences a gene on a single allele may not appear to have a functional consequence if the complementary allele can provide compensatory expression. As a result, for example, clusters of cellular clones with mono-allelic gene expression could contribute to an increased risk of progression to a pathologic phenotype . 
Briefly, normal brain tissues (n\u200a=\u200a12) were contributed by the Wiencke lab at UCSF through the San Francisco Adult Glioma Study. Fresh frozen tissue and whole blood DNA was extracted using the QIAamp DNA mini kit according to the manufacturer's protocol. DNA was modified by sodium bisulfite to convert unmethylated cytosines to uracil using the EZ DNA Methylation Kit according to the manufacturer's protocol. Illumina GoldenGate methylation bead arrays were used to simultaneously interrogate 1505 CpG loci associated with 803 cancer-related genes. Bead arrays have similar sensitivity to quantitative methylation-specific PCR and were run at the UCSF Institute for Human Genetics, Genomics Core Facility according to the manufacturer's protocol and as described by Bibikova. Quantification of cytosine percent methylation was performed by pyrosequencing bisulfite-converted DNA using the PyroMark MD pyrosequencing system (Biotage). Specific pyrosequencing primers were designed to amplify array CpG sites and as many downstream CpGs as conditions permitted using Biotage Assay Design Software v1.0.6.FZD9), and 10\u00d7 PCR buffer with 15 mM MgCl2 under the following conditions: 95\u00b0C 15\u2032, 30\u2033, 72\u00b0C 1\u2032) \u00d7 45 cycles, and 72\u00b0C 5\u2032. Final reaction primer concentration for PCR and sequencing was 0.3 \u00b5M, primer details are in All PCR reactions were carried out in 25 \u00b5l, utilized Qiagen Hot Star Taq polymerase, 5\u00d7 Q solution )/(|Cy3|+|Cy5|+100), the average methylation (\u03b2) value is derived from the \u223c30 replicate methylation measurements. Raw average beta values were analyzed without normalization as recommended by Illumina. Each array CpG is annotated with the gene name followed by the CpG location seq to reference a specific sequence) and its physical distance from the transcription start site. 
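The average beta computation referenced above (methylated Cy5 and unmethylated Cy3 fluorescent signals, with a +100 offset in the denominator to damp noise at low total intensity) can be written as a small helper. This is a sketch of the stated formula, not BeadStudio's implementation:

```python
def average_beta(cy5: float, cy3: float) -> float:
    """Illumina GoldenGate methylation level for one locus.

    beta = max(Cy5, 0) / (|Cy3| + |Cy5| + 100), where Cy5 is the
    methylated-allele signal and Cy3 the unmethylated-allele signal;
    the +100 offset stabilizes the ratio at low-intensity loci.
    """
    return max(cy5, 0.0) / (abs(cy3) + abs(cy5) + 100.0)

# Strongly methylated, unmethylated, and intermediate loci:
print(round(average_beta(5000, 0), 3))     # 0.98
print(round(average_beta(0, 5000), 3))     # 0.0
print(round(average_beta(2000, 2000), 3))  # 0.488
```

Because of the offset, beta never quite reaches 1.0 even for a fully methylated locus, and negative background-subtracted signals are clamped to 0.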
At each locus for each sample the detection P-value was used to determine sample performance; 5 samples (2%) had detection P-values >1.0E-05 at more than 25% of CpG loci and were removed from subsequent analysis. Similarly, CpG loci with a median detection P-value >0.05 were eliminated from analysis. Finally, all CpG loci on the X chromosome were excluded from analysis. The final dataset contained 217 samples and 1413 CpG loci associated with 773 genes. The manufacturer-recommended CpG island designation of array CpGs was used. Data were assembled with BeadStudio methylation software from Illumina. All array data points are represented by fluorescent signals from both methylated (Cy5) and unmethylated (Cy3) alleles, and methylation level is given by \u03b2\u200a=\u200a(max(Cy5, 0))/(|Cy3|+|Cy5|+100). Unsupervised hierarchical clustering used hclust with Manhattan metric and average linkage. To discern and describe the relationships between CpG methylation and tissue type (sample clustering), and the relationships between CpGs with coordinate methylation (CpG clustering), a modified model-based form of unsupervised clustering known as recursively partitioned mixture modeling (RPMM) was used as described previously. Subsequent analyses were carried out using the R software. The R package was also used to build classifiers with the Random Forest (RF) approach. RF is a tree-based classification algorithm similar to Classification and Regression Trees (CART). At each node a random subset of m out of the total M variables is chosen and the best split is found among the m variables. We utilized the default value for m in the Random Forest R package, \u221aM (\u221a1413\u200a\u2248\u200a38). The misclassification error rate is the percentage of time the RF prediction is incorrect. CpG loci for which an a priori hypothesis existed were tested separately, such as those that have been previously associated with aging in normal tissues (e.g., WRN). 
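The Random Forest setup described above (m = \u221aM candidate variables per split, with the misclassification rate estimated out-of-bag) can be sketched with scikit-learn rather than the randomForest R package used in the study; the data and tree count here are illustrative stand-ins, not the study's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in: 217 samples x 1413 CpG average beta values,
# labeled with 10 tissue types.
X = rng.beta(0.5, 0.5, size=(217, 1413))
y = rng.integers(0, 10, size=217)

rf = RandomForestClassifier(
    n_estimators=500,        # tree count chosen here for illustration
    max_features="sqrt",     # m = sqrt(M) variables tried per split
    oob_score=True,          # out-of-bag accuracy, i.e. 1 - error rate
    random_state=0,
).fit(X, y)

print(int(round(np.sqrt(X.shape[1]))))  # 38 candidate variables per split
```

With random labels the out-of-bag accuracy (`rf.oob_score_`) hovers near chance; on real beta values it corresponds to one minus the per-tissue misclassification error rate reported in the text.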
Array-wide scanning for CpG loci associations with sample type or covariate used false discovery rate estimation and Q-values computed by the qvalue package in R. Associations between covariates and methylation at individual CpG loci were tested with a generalized linear model. The beta-distribution of average beta values was accounted for with a quasi-binomial logit link with an estimated scale parameter constraining the mean between 0 and 1, in a manner similar to that described by Hsuing. To test the hypothesis that there are associations of age and exposure with methylation, we constructed measures of partial methylation sets, in analogy with global DNA methylation, which is measured on repeats. Sequencing data were processed using Pyro Q-CpG software v1.0.9 (Biotage) under default analysis parameters and exported for subsequent analysis in R software. Figure S1: Pairwise plots comparing average beta values (A) between all blood and all head & neck samples, (B) individual blood sample versus an individual head & neck sample, comparisons within tissue type between individual samples for (C) blood and (D) head & neck. Average beta value scatterplots between tissue types indicate significant differences between tissues, and scatterplots within tissue type indicate relative similarity in the presence of interindividual variation. A) Mean of average betas for all blood samples (n\u200a=\u200a30) versus mean of average betas for all head and neck samples (n\u200a=\u200a11) indicates relatively high variability between tissue types, R2\u200a=\u200a0.84. B) Representative blood sample 1 average betas versus representative head and neck sample 1 average betas indicate similarly high variability between tissue types at the individual sample level, R2\u200a=\u200a0.87. 
C) Representative blood sample 2 versus representative blood sample 3 indicates relative similarity between individuals within a tissue type in the presence of interindividual variation, R2\u200a=\u200a0.97. D) Representative head and neck sample 1 versus representative head and neck sample 2 indicates relative similarity between individuals within a tissue type in the presence of interindividual variation, R2\u200a=\u200a0.96. Figure S2: Bisulfite pyrosequencing mean percent methylation across all CpGs measured for RARA, DNMT3B, and LIF versus their respective CpG of interest on the array. A) Mean bisulfite pyrosequencing percent methylation across array target CpG RARA_P176 and 5 downstream CpGs plotted versus Illumina GoldenGate methylation array average beta demonstrates a significant correlation between sequencing and array methylation. B) Mean bisulfite pyrosequencing percent methylation across array target CpG DNMT3B_P352 and 2 downstream CpGs plotted versus Illumina GoldenGate methylation array average beta demonstrates a significant correlation between sequencing and array methylation. 
C) Mean bisulfite pyrosequencing percent methylation across array target CpG LIF_P383 and 2 downstream CpGs plotted versus Illumina GoldenGate methylation array average beta demonstrates a significant correlation between sequencing and array methylation. Table S1: Autosomal CpG locus mean (sd) of average beta and values for the decrease in random forest classification accuracy with locus removal, all tissues. Table S2: Autosomal CpG locus mean (sd) of average beta and values for the decrease in random forest classification accuracy with locus removal, for solid tissues, blood, and placenta. Table S3: CpG loci with significantly altered methylation by reported asbestos exposure in pleural samples (n\u200a=\u200a18). Table S4: CpG loci with significantly altered methylation in never versus ever alcohol drinkers in blood (n\u200a=\u200a29). Table S5: CpG loci with significantly altered methylation by smoking in lung tissue (n\u200a=\u200a53). Table S6: Top 100 CpG loci associated with age by tissue type. Table S7: Pyrosequencing assay primers."} {"text": "The corpus callosum, which is the largest white matter structure in the human brain, connects the 2 cerebral hemispheres. It plays a crucial role in maintaining the independent processing of the hemispheres and in integrating information between both hemispheres. The functional integrity of interhemispheric interactions can be tested electrophysiologically in humans by using transcranial magnetic stimulation, electroencephalography, and functional magnetic resonance imaging. 
As a structural brain imaging technique, diffusion tensor imaging has revealed the microstructural connectivity underlying interhemispheric interactions. Sex, age, and motor training, in addition to the size of the corpus callosum, influence interhemispheric interactions. Several neurological disorders change hemispheric asymmetry directly by impairing the corpus callosum. Moreover, stroke lesions and unilateral peripheral impairments such as amputation alter interhemispheric interactions indirectly. Noninvasive brain stimulation changes the interhemispheric interactions between both motor cortices. Recently, these brain stimulation techniques have been applied in the clinical rehabilitation of patients with stroke to ameliorate the deteriorated modulation of interhemispheric interactions. Here, we review interhemispheric interactions and the mechanisms underlying their pathogenesis, and propose rehabilitative approaches for appropriate cortical reorganization. The corpus callosum, which is the largest white matter structure in the human brain, connects the homologous and nonhomologous areas of the 2 cerebral hemispheres , 2. It pResearch on the functions of interhemispheric interactions is based on studies of brain lateralization, which is thought to allow each hemisphere to process information without the interference of the contralateral hemisphere , 16. SevHowever, processing tasks that share and integrate the information between hemispheres require facilitative communication between hemispheres . Even inThe ability to perform precisely coordinated movements using both hands is an important aspect of particular human abilities, such as tying a string, peeling a fruit with a knife, typing, and playing a musical instrument. 
It is now known that modulation of interhemispheric interactions is involved in the control of the unimanual and bimanual coordination that generates the spatially and temporally precise limb movements that enable humans to perform different movements . MoreoveRecent studies have revealed that the modulation of interhemispheric interactions relates to neural plasticity, which refers to the ability of the brain to develop new neuronal interconnections, acquire new functions, and compensate for impairments \u201325. HoweIt has been suggested that the corpus callosum is the pathway through which one hemisphere can inhibit the other, thus facilitating brain lateralization. Alternatively, the corpus callosum integrates information across the cerebral hemispheres and serves an excitatory function in interhemispheric communication , 3, 15. The inhibitory theory posits that the corpus callosum maintains independent processing between the hemispheres, hinders activity in the opposing hemisphere, and allows the development of hemispheric asymmetries . A TMS sHandedness may be related to inhibitory interhemispheric interactions. Although it remains controversial whether interhemispheric inhibition from the dominant motor cortex differs from the nondominant motor cortex under resting conditions \u201330 physiThe excitatory theory posits that the corpus callosum shares and integrates information between the hemispheres, resulting in greater connectivity, which decreases brain lateralization by masking the underlying hemispheric asymmetries in tasks that require interhemispheric exchange , 31. ThiIn the motor system, excitatory interhemispheric interaction plays an important role in the adjustment of movement onset. A TMS study revealed that interhemispheric interaction from the nonactive to the active motor cortex translates from inhibitory to excitatory effects around movement onset . 
This exHowever, the findings of interhemispheric interactions during in-phase movements support the inhibitory theory. The maximum speed of bimanual in-phase movements was highest in subjects who exhibited weak inhibition of both homologous motor cortices . InterheThese findings suggest that, depending on the motor task, the interhemispheric interactions may be inhibitory or excitatory, so that homologous muscles are adjusted . This isThe degree of connectivity between the hemispheres is reflected in the size of the corpus callosum , 31, 42.Several studies have revealed a correlation between interhemispheric interactions and age \u201347. The Aging also influences interhemispheric interactions. Several MRI studies have reported that aging increases atrophy of the corpus callosum , 46. MorHowever, the role of the overactivation of cortices in older adults may vary according to the brain regions involved in tasks. The results of previous studies supported the idea that overrecruitment of bilateral prefrontal activation compensates for cognitive tasks in older adults , 57. In Several studies have reported morphological and microstructural differences in the corpus callosum between men and women. The size of the corpus callosum relative to cerebral volume was larger in women than in men , 62, butAs described previously, modulation of interhemispheric interactions influences human movement patterns, such as handedness. In contrast, motor training itself can change interhemispheric interactions. Changes in interhemispheric interactions mediated by motor training have been reported, especially in musical training \u201376. MusiIn addition to bimanual training, interhemispheric interactions may contribute to motor skill acquisition, such as intermanual transfer, as it is well known that motor learning using one hand improves the performance of the other hand , 78. 
In contrast to motor training, nonuse of a limb may also influence interhemispheric interactions. A recent study revealed that transient arm immobilization reduced the interhemispheric inhibition from the immobilized to the nonimmobilized motor cortex. Studies of callosotomy or callosal lesions have provided much insight into the functions of interhemispheric interactions via the impairment of the corpus callosum. Lesions of the corpus callosum are commonly detected in patients with traumatic brain injury. Multiple sclerosis is an inflammatory disease that affects myelinated axons and leads to neurological and cognitive impairments. Therefore, the corpus callosum, which is the largest white matter structure in the brain, is considered a target for inflammation, and corpus callosum degeneration in multiple sclerosis has been described frequently. Impairments of interhemispheric inhibition detected using TMS have been reported in patients with Parkinsonian syndromes, including patients with corticobasal degeneration and progressive supranuclear palsy. Several studies using MRI reported atrophy and a reduction in microstructural connectivity of the corpus callosum in patients with schizophrenia. Several studies have reported that stroke lesions indirectly disrupt interhemispheric interactions. In addition to stroke, recent studies revealed that indirect changes in interhemispheric interactions through the corpus callosum occur after changes in peripheral organs, such as limb amputation. It has been reported that several techniques alter interhemispheric interactions. In particular, noninvasive brain stimulation (NIBS), which can modulate cortical excitability, may enhance neural plasticity by altering interhemispheric interactions.
Moreover, paired associative stimulation of the homologous motor cortices using TMS induces a neural plasticity that is dependent on Hebbian mechanisms through interhemispheric interactions. In this section, we discuss the neural plasticity that is induced by changes in interhemispheric interactions. Repetitive TMS and transcranial direct current stimulation are NIBS techniques that can alter the excitability of the human cortex for several minutes. A recent study reported that paired associative stimulation of the homologous motor cortices using TMS is a new interventional protocol that induces an increase in excitability in the conditioned motor cortex. As mentioned previously, excessive interhemispheric inhibition from the unaffected hemisphere deteriorates the motor function of the paretic hand in patients with stroke. Therefore, improvement of the motor deficits of these patients may be achieved by decreasing the excitability of the unaffected hemisphere using NIBS. It has been reported that inhibitory NIBS over the unaffected hemisphere facilitates motor recovery during the acute stage of stroke. This paper focused on the mechanisms underlying motor control and neural plasticity that relate to interhemispheric interactions to suggest approaches for appropriate cortical reorganization. Inhibitory or excitatory interactions that occur via interhemispheric communication may vary depending on the time point during the movement and the cortical areas that are involved in the processing demands of the motor task. The age-related degeneration of the corpus callosum may induce the engagement of both hemispheres, partly because of failed inhibition of the contralateral hemisphere. Female hormones may exert positive effects on the interhemispheric communication that is related to maintaining independent processing between the hemispheres in the motor system. 
Plastic developmental changes that are caused by extensive bimanual training during childhood result in more symmetrical brains and equally efficient connections between the hemispheres. Several neurological disorders, such as traumatic brain injury, multiple sclerosis, and Parkinsonian syndromes, directly alter interhemispheric interactions by impairing the corpus callosum. Stroke lesions indirectly disrupt interhemispheric inhibition, which is highly relevant to the research on motor recovery after stroke. In addition, amputations may indirectly alter interhemispheric interactions between sensorimotor cortices. Inhibitory NIBS reduces the interhemispheric inhibition from the stimulated motor cortex to the non-stimulated motor cortex. The paired associative stimulation of the homologous motor cortices using TMS induces a neural plasticity that is dependent on Hebbian mechanisms that occur via interhemispheric interactions. Inhibitory NIBS over the unaffected hemisphere in patients with stroke can improve the motor function of the paretic hand by reducing the interhemispheric inhibition from the unaffected hemisphere to the affected hemisphere. However, it should be noted that inhibitory NIBS might worsen bimanual movements by reducing the interhemispheric inhibition that controls them. Assessments of interhemispheric interactions have provided information on the mechanisms underlying the physiological processes involved in motor control and have allowed the formulation of interventional strategies that can improve motor function in neurological disorders, which is a critical issue in clinical neurorehabilitation."} {"text": "Here we study the possible control exerted by the stationary phase sigma factor RpoS on the bistability decision. The gene for RpoS in P. knackmussii B13 was characterized, and a loss-of-function mutant was produced and complemented. 
We found that, in the absence of RpoS, ICEclc transfer rates and activation of two key ICEclc promoters (Pint and PinR) decrease significantly in cells during stationary phase. Microarray and gene reporter analysis indicated that the most direct effect of RpoS is on PinR, whereas one of the gene products from the PinR-controlled operon (InrR) transmits activation to Pint and other ICEclc core genes. Addition of a second rpoS copy under control of its native promoter resulted in an increase of the proportion of cells expressing the Pint and PinR promoters to 18%. Strains in which rpoS was replaced by an rpoS-mcherry fusion showed high mCherry fluorescence in the individual cells that had activated Pint and PinR, whereas a double-copy rpoS-mcherry-containing strain displayed twice as much mCherry fluorescence. This suggested that high RpoS levels are a prerequisite for an individual cell to activate PinR and thus ICEclc transfer. Double promoter-reporter fusions confirmed that expression of PinR is dominated by extrinsic noise, such as that resulting from cellular variability in RpoS. In contrast, expression from Pint is dominated by intrinsic noise, indicating that it is specific to the ICEclc transmission cascade. Our results demonstrate how stochastic noise levels of global transcription factors can be transduced to a precise signaling cascade in a subpopulation of cells, leading to ICE activation. The integrative and conjugative element ICEclc normally resides in the chromosome of its bacterial host, but can excise from the chromosome and prepare for conjugation. Interestingly, the decision to excise ICEclc is made in only 3%-5% of cells in a clonal population in stationary phase. We focus specifically on the question of which mechanism may be responsible for setting this threshold level of ICEclc activation. We find that ICEclc activation is dependent on the individual cell level of the stationary phase sigma factor RpoS. 
The noise in RpoS expression across a population of cells thus sets the "threshold" for ICEclc to excise and prepare for transfer. Horizontal gene transfer is one of the amazing phenomena in the prokaryotic world, by which DNA can be moved between species by means of a variety of specialized "elements" and/or specific host cell mechanisms. In particular, the molecular decisions that have to be made in order to transfer DNA from one cell to another are fascinating, but very little is known about this at the cellular level. Here we study a member of a widely distributed type of mobile DNA called "integrative and conjugative elements" or ICE; for example, several ICE carry genes for antibiotic resistance. Activation of ICEclc in Pseudomonas knackmussii B13 must be the consequence of a bistable switch that culminates in the activation of the intB13 integrase promoter (hereafter named Pint) in 3% of cells during stationary phase. ICEclc is a 103-kb element with strong homologies to a large number of genomic islands in Beta- and Gammaproteobacteria, and is named after its propensity to provide the host cell with the capacity to metabolize chlorinated catechols, encoded by the clc genes. Two ICEclc copies reside in the chromosome of strain B13, interspaced by 340 kb. Activation of the intB13 integrase leads to excision and formation of a closed circular ICEclc intermediate, which carries an origin of transfer (oriT). Pint activation was preceded by and dependent on expression of a protein named InrR (for INtegRase Regulator) in the same individual cell; inrR is encoded on ICEclc under control of another bistably expressed promoter (PinR). Such bistable expression is reminiscent of systems in Bacillus subtilis, which lead to phenotypically differentiated cells. Although several ICE have been genetically and functionally characterized, and their general importance for bacterial evolution and adaptation is now widely appreciated, still very little is known about their cell biology. 
We focused our attention on both the Pint and PinR promoters, which are expressed during stationary phase and only in a subpopulation of cells. ICEclc transfer in stationary phase cells further suggested involvement of a specific sigma factor such as RpoS (sigma-S). RpoS is the stress-starvation sigma factor that in P. aeruginosa controls the expression of some 772 genes at the onset of stationary phase. Interruption of rpoS in P. aeruginosa does not result in a dramatically changed phenotype, although such mutants survive heat and salt shocks 50-fold less well than wild-type, and produce fewer extracellular proteins such as elastase, exotoxin A, and alginate. To study a possible role of RpoS in activation of ICEclc, we identified an rpoS gene in P. knackmussii B13 and studied the effects of interruption and subsequent complementation using single-cell reporter gene fusions to Pint and PinR. Interestingly, a B13 wild-type equipped with a second rpoS gene copy displayed a much higher subpopulation of cells expressing both the Pint and PinR promoters. To study whether individual cell levels of RpoS could actually be deterministic for the activation of ICEclc, we replaced native rpoS by a gene for an active RpoS-mCherry fusion protein. Finally, we measured contributions of intrinsic and extrinsic noise on the Pint and PinR promoters from covariance in the expression of double gene reporters placed in single copy on different parts of the B13 chromosome. These measurements suggested that the stochastic variation in RpoS levels across a population of cells is transduced into ICEclc activation and transfer in a small subpopulation. The goal of the underlying work was to explore whether noisiness may lie at the basis of determining the proportion of cells in which ICEclc becomes active. To identify the rpoS gene of P. knackmussii strain B13, we used PCR amplification with primers designed against conserved regions in a multiple alignment of rpoS sequences of P. aeruginosa, P. putida KT2440 and P. fluorescens. Sequencing showed that the rpoS region of strain B13 is syntenic to that in P. 
aeruginosa PAO1, with a gene for a lipoprotein (nlpD) upstream of rpoS, and an rsmZ-like gene and a gene for a ferredoxin (fdxA) downstream. Maximum specific growth rates of strain B13-2671 (rpoS) on MM with 5 mM 3CBA were similar to those of B13 wild-type, but the onset of exponential growth was slightly delayed in B13-2671 (rpoS). rpoS was interrupted by a single crossover, and reversion of the disrupted allele to wild-type rpoS remained below 1% of the population in stationary phase cultures. Pint and PinR on ICEclc are solely expressed in stationary phase P. knackmussii B13 cells, which suggested RpoS involvement in ICEclc stationary phase expression. Inactivation of rpoS in B13 indeed resulted in reduced expression of both the PinR and Pint promoters. This was evident, first of all, from a reduced proportion of cells in a B13-2673 (rpoS) population compared to a B13 wild-type population expressing eCherry and eGFP above the detection threshold from single copy transcriptional fusions to PinR and Pint, respectively. In addition, cells of B13-2673 (rpoS) produced a lower average reporter fluorescence signal than wild-type cells, and PinR and Pint were expressed in the same cell. Hardly any B13-2673 (rpoS) cells expressed the reporters after 24 h in stationary phase, but after 72 h a small fraction of cells still developed eGFP and eCherry fluorescence. The reduced expression of Pint and PinR in B13-2673 (rpoS) was not due to reversion of the rpoS mutation. Since expression in B13-2673 (rpoS) was significantly lower than in B13 wild-type and the rpoS-complemented strain (B13-2993), RpoS is necessary for achieving native transcription levels from the PinR promoter. On the other hand, RpoS is not absolutely essential, since cells with an interrupted rpoS gene eventually (96 h) express PinR and Pint, which again was not due to reversion of the rpoS mutation. To test whether the reduced Pint expression in the rpoS mutant could be the result of either less InrR being formed from PinR, or of a direct control of Pint by RpoS, we compared eGFP expression from a single copy Pint-egfp transcriptional fusion in B13, the B13 rpoS mutant (B13-2976) and a B13 lacking both inrR copies, and correlated eGFP to eCherry expression. 
Since this strain would be devoid of InrR-mediated expression of Pint, we expected that expression of egfp from Pint in the absence of rpoS would be lower than expression of echerry from PinR. Indeed, there was a slight tendency for the mean proportion of cells expressing eGFP (from Pint) in strain B13-3091 to be lower than that expressing eCherry (from PinR), although this was only poorly significant after 96 h (P = 0.04), again because of the very low subpopulation sizes. This suggests that transcription from Pint is both indirectly (via InrR) and directly dependent on RpoS. Whereas expression of the reporter gene fusions was interpreted as being representative for the behaviour of the native Pint and PinR promoters on ICEclc, we also determined ICEclc core gene expression and transfer frequencies from B13 wild-type or derivatives as donor and P. putida UWC1 as recipient. Expression of the ICEclc core genes in stationary phase cells measured by microarray analysis was lower (up to 27-fold) for both B13-2671 (rpoS) and B13-2201 (inrR-/-) compared to B13 wild-type. Interestingly, the inrR operon was not only downregulated in B13-2671 (rpoS) but also in B13-2201 (inrR-/-), suggesting a feedback of InrR on its own expression. Not only ICEclc core gene expression but also transfer frequencies were significantly lower at all time points from B13-2673 (rpoS) or B13-3091 than from B13-2581 wild-type or the rpoS-complemented B13 rpoS mutant (B13-2993) as donor. ICEclc transfer frequencies from the complemented B13 rpoS mutant were not significantly different from those from B13 wild-type. Transfer frequencies from B13-2673 (rpoS) as donor were significantly higher than from B13-3091 as donor, but only after 96 h mating time. mCherry expression from PrpoS in stationary phase is normally distributed among all cells, with a mean around 50 RFU. 
RpoS-mCherry, but not an N-terminal mCherry-RpoS fusion protein, complemented B13-rpoS for bistable Pint- or PinR-dependent eGFP expression (data not shown). This indicated that the RpoS-mCherry fusion protein functionally replaces B13 wild-type RpoS. Significantly, only B13-3564 and B13-3555 cells expressing the highest RpoS-mCherry levels had also activated eGFP from Pint or PinR, respectively, although not all cells with high RpoS-mCherry levels expressed high levels of eGFP. To increase the proportion of cells expressing PinR and Pint, an additional rpoSB13 copy under control of its own promoter was introduced by mini-Tn5 transposition. Up to 18% of cells of this strain expressed eGFP from Pint and eCherry from PinR, compared to 5% in B13-2581 wild-type. ICEclc transfer from B13-3260 (+rpoS) as donor to P. putida UWC1 as recipient was twice as high as with B13 wild-type after the same mating contact time, although this was not a statistically significant difference. The mean mCherry fluorescence in B13-3712, a strain into which another single copy of rpoSB13-mcherry was transposed, was almost twice as high as in B13-3564. Intrinsic and extrinsic noise were measured from two individual single copy transcription fusions to Pint or PinR, placed at different positions of the B13 chromosome, as suggested in Elowitz et al. Total noise was higher on the Pint promoter than on PinR. Also, adding an additional copy of inrR resulted in a lowering of the total noise, although the proportion of cells expressing eGFP and eCherry in the inrR+ strain was not increased compared to wild-type. One of the mysteries in ICE gene transfer among bacteria is the mechanism that controls the frequency by which they become excised in clonally identical populations of donor cells. 
ICE conjugation must start with its excision, and therefore the cellular decision that determines conjugation is binary: ICE excision or not. Low transfer frequencies suggest that the binary 'ON' decision is only made in a very small proportion of donor cells. Indeed, our previous results showed that ICEclc activation occurs in only a small subpopulation of stationary phase cells. RpoS of P. knackmussii is a stationary phase sigma factor controlling transcription of the PinR and Pint promoters and thus, indirectly, transfer of ICEclc to P. putida. Addition of an extra rpoSB13 gene copy led to an increased proportion of stationary phase cells in which the PinR and Pint promoters are activated, which suggested that the expression level of RpoS is important for controlling the bistable switch leading to ICEclc activation. Indeed, by expressing an RpoS-mCherry fusion instead of the RpoS wild-type protein in strain B13, we showed that PinR- or Pint-egfp expression in stationary phase preferentially occurred in individual cells with the highest levels of RpoS-mCherry fluorescence. High individual cell RpoS levels must therefore be responsible for the activation or derepression of PinR, with InrR transmitting activation to Pint and (a widely abundant) RpoS playing a relatively minor direct role on Pint expression. This effect may actually have been overestimated by a bias introduced by the measurement technique. In contrast, and in the same 'biased' setting, the total noise is significantly lower on the PinR promoter and the relative contribution of the extrinsic noise is higher. In the strain carrying an extra rpoS copy, noise on Pint was reduced, which would make sense since individual cells would overall contain higher levels of RpoS, permitting more direct interaction with Pint. Adding a third copy of inrR also reduced the level of intrinsic noise on Pint, but in this case because such cells would produce more InrR, diminishing the noise effect caused by 'small numbers' of regulatory factors. 
Noise in individual cell RpoS levels is thus not only propagated to noise in the expression of downstream regulons, as was shown recently for global transcription factors in yeast, but is also transduced to a precise activation cascade leading to ICEclc excision and transfer. This conclusion is further supported by the noise measurements on the Pint and PinR promoters. Intriguingly, doubling the rpoS copy number strongly increased the proportion of cells in the population expressing Pint and PinR, from 3% to almost 20%, although the transfer frequency of ICEclc only doubled. Microarray analysis confirmed the important role of InrR for the overall activation of ICEclc core functions, and indicated a possible feedback loop of InrR on its own expression, although InrR itself is not highly conserved. RpoS has also been implicated in Tn4652 activity in P. putida, where it affects tnpA transposition frequency, since Tn4652 becomes at least 10 times more activated in an rpoS-defective strain. Heterogeneity of rpoS mRNA levels has been reported in P. aeruginosa biofilms, but this occurred rather as a consequence of physico-chemical gradients within the biofilm. A further parallel is the rbsV-rbsW-sigB operon for the stress response sigma factor SigB in Bacillus subtilis. Interestingly, sigB promoter expression proceeds in a 'burst-like' fashion, with a higher pulse frequency under stress than under normal growth conditions, as a consequence of the sigB operon feedback on itself and on its anti- and anti-anti-sigma factors RbsW and RbsV, respectively. As far as we are aware, this is the first time that RpoS has been implicated in controlling horizontal gene transfer of a conjugative element. 
RpoS homologs are part of a large protein cluster, the sigma-70 family, which is widely distributed among prokaryotes, although RpoS regulons can be quite different in individual species. Elements such as ICEclc are thus wired within noise in a global transcription factor but can transduce this noise to a precise activation cascade, and may have been selected for their capacity to successfully exploit the noise. Gene expression noise is ubiquitous and plays an essential role in a variety of biological processes, triggering stochastic differentiation in clonal populations of cells. Escherichia coli DH5-alpha was routinely used for plasmid propagation and cloning experiments. E. coli HB101 (pRK2013) was used as a helper strain for conjugative delivery of mini-transposon constructs. P. knackmussii strain B13 is the original host of the clc element (ICEclc), of which it carries two copies. Luria-Bertani (LB) medium was used for E. coli, whereas LB and type 21C mineral medium (MM) were used for P. knackmussii. 3-Chlorobenzoate (3CBA) was added to MM to a final concentration of 5 or 10 mM. When necessary, the following antibiotics were used at the indicated concentrations (µg per ml): ampicillin, 500 (for P. knackmussii) or 100 (for E. coli); kanamycin, 50; and tetracycline, 100 (for P. knackmussii strain B13 derivatives) or 12.5 (for E. coli). P. knackmussii strain B13 was grown at 30°C; E. coli was grown at 37°C. Self-transfer was tested by mixing 500 µl of a suspension of around 10^9 donor cells (P. knackmussii B13 or one of its derivatives) and 500 µl of a suspension of around 10^9 recipient cells (P. putida UWC1) on membrane filters for 24, 48, 72 or 96 h, as described earlier. Transconjugants (P. putida UWC1 with ICEclc) were selected on MM plates with 5 mM 3CBA as sole carbon and energy source (to select for ICEclc) and 50 µg per ml rifampicin (resistance marker of the recipient). Transfer frequencies were expressed as the number of transconjugant colony forming units (CFU) per number of donor CFU. Plasmids were propagated in E. 
coli, and restriction enzyme digestions, polymerase chain reaction (PCR), reverse transcription (RT-PCR), plasmid and chromosomal DNA isolations, RNA isolation, DNA fragment recovery, DNA ligations and transformations were all carried out according to standard procedures. Primers were designed for conserved regions obtained in a nucleotide sequence alignment among the rpoS genes of P. aeruginosa, P. putida and P. fluorescens. The B13 rpoS gene region was submitted to GenBank under accession number AB696604. A 600-bp internal fragment of the rpoSB13 gene was amplified with a forward primer (080304) carrying a BamHI site and a reverse primer (080303) carrying an EcoRI restriction site, and was inserted into the suicide vector pME3087 to disrupt rpoS by single homologous recombination. Separate experiments to delete rpoS by recombination with a DNA fragment in which rpoS was fully deleted were not successful (not shown). The same strategy was then used to produce a single recombinant disruption of rpoS in a P. knackmussii strain B13 derivative that lacked both inrR copies. Reversion of the rpoS-pME3087 allele to wild-type rpoS in stationary phase cultures was tested by specific PCR. For complementation, a 2.2-kbp fragment containing rpoS was amplified from purified strain B13 genomic DNA using primers 091206 and 090902. As this strain already carried a mini-Tn5 insertion, it was necessary to remove the associated Km gene cassette; hereto the strain was transformed with plasmid pTS-parA. The complemented strain carrying the reintroduced rpoS fragment was designated B13-2993. Three independent clones with possibly different mini-transposon insertion sites were examined for ICEclc transfer and reporter gene expression. A 1700-bp fragment containing the orf95213 and inrR genes plus PinR was amplified by PCR and cloned in pBAM1 using SphI and EcoRI. The resultant suicide plasmids were introduced into B13 or its derivatives by electroporation, from where transposition was selected by plating on MM plus 3CBA and kanamycin. Bona fide single copy transposition was verified by PCR. 
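The transfer-frequency bookkeeping described in these methods (transconjugant CFU per donor CFU, with CFU back-calculated from dilution plating) can be sketched as follows. This is a minimal illustration; the function names and the plate counts in the usage example are hypothetical, not values from the protocol.

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """Back-calculate CFU per ml of culture from a colony count
    obtained on a plate of a given dilution."""
    return colonies * dilution_factor / plated_volume_ml

def transfer_frequency(transconjugant_cfu, donor_cfu):
    """Transfer frequency expressed as the number of transconjugant
    CFU per number of donor CFU, as defined in the text."""
    if donor_cfu <= 0:
        raise ValueError("donor CFU must be positive")
    return transconjugant_cfu / donor_cfu

# Hypothetical plate counts after a filter mating:
transconjugants = cfu_per_ml(42, 1e2, 0.1)   # 42 colonies, 10^-2 dilution, 100 µl plated
donors = cfu_per_ml(170, 1e6, 0.1)           # 170 colonies, 10^-6 dilution, 100 µl plated
frequency = transfer_frequency(transconjugants, donors)  # transconjugants per donor
```

Counting both partners on selective plates from the same mating filter, as done here with 3CBA plus rifampicin selection, keeps the ratio independent of the exact volume recovered from the filter.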
At least three independent clones with possibly different insertion positions were used for further experiments. Constructs with the Pint promoter in front of intB13 and the egfp gene, or Pint and a promoterless echerry gene, have been described previously. Fusions between the orf95213, inrR, ssb gene cluster promoter (PinR) and either egfp or echerry have been detailed elsewhere. To follow the PinR and Pint promoters simultaneously, we used a previous construct with PinR-echerry in one and Pint-egfp in the opposite direction. To report activity of the rpoS promoter (PrpoS), a 1200-bp fragment upstream of rpoS including the nlpD gene was amplified from strain B13 by PCR. The mcherry gene was amplified using primers 101003 and 101004, in which the start codon of mcherry was replaced by a short nucleotide sequence encoding 15 amino acids (KLPENSNVTRHRSAT) as a linker peptide. The fragment was then cloned in the HindIII and SpeI sites of the mini-Tn5 delivery plasmid pBAM1, resulting in pBAM-link-mCherry. A 2.1 kb region containing PrpoS and rpoS lacking its stop codon was amplified using B13 genomic DNA and primers 101001 plus 010102. This fragment was digested with EcoRI and HindIII, and cloned into the same sites on pBAM-link-mCherry (designated pBAM-rpoS-mcherry). After transformation in E. coli and purification, this plasmid was introduced into strain B13 or its derivatives by electroporation. Single copy transposon insertions of the rpoS-mcherry fusion construct were selected by plating cells on MM plus 3CBA and kanamycin. If required for introduction of subsequent mini-transpositions, the kanamycin gene cassette was removed by ParA resolvase action (see above). At least three independent clones with possibly different insertion positions were used for further experiments. To produce a C-terminal fusion of RpoS to mCherry, a ~750 bp fragment containing mcherry was amplified; to replace the rpoS of B13 by the gene for the RpoS-mCherry fusion protein, we used double recombination by crossing-over. 
Hereto, a ~1 kb downstream region of rpoS was first amplified using B13 genomic DNA and primers 110524 plus 110525, which was digested using XbaI and SalI and ligated with pJP5603-ISceIv2. The rpoS-mcherry fusion fragment was digested with EcoRI and SpeI and inserted upstream of the amplified fragment in pJP5603-ISceIv2, which was hereto digested with EcoRI and XbaI. After transformation in E. coli and purification, the resulting plasmid was electroporated into strain B13-78. P. knackmussii strain B13 or B13 rpoS carrying the PrpoS-mcherry fusion were grown in 96-well black microtiter plates (Greiner Bio-one) with a flat transparent bottom. Each well contained 200 µl of MM medium with 5 mM 3CBA and was inoculated with 2 µl of a bacterial preculture grown overnight in LB medium. Microtiter plates were incubated at 30°C with orbital shaking at 500 rpm. At each given time point, both culture turbidity (A600) and fluorescence emission (excitation at 590 nm and emission at 620 nm) were measured from triplicate cultures using a Fluostar fluorescence microplate reader (BMG Lab Technologies). Cultures of P. knackmussii strain B13-78 wild-type served for background fluorescence correction. To image eGFP, eCherry or mCherry expression in single cells, culture samples of 4 µl were placed on regular microscope slides, closed with a 50 mm long and 0.15 mm thick cover slip, and imaged within 1-2 minutes. Fluorescence intensities of individual cells were recorded on image fields not previously exposed to UV-light to avoid bleaching. The size of the subpopulation expressing the reporters (Pint and PinR) was then calculated as 100% minus the percentile of the breakpoint. The average expression intensity over the highest expressing subpopulation was calculated as the mean AGV over the percentile range between that of the breakpoint and 100%. 
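The breakpoint-based subpopulation calculation above can be sketched in a few lines. This is a minimal sketch assuming the breakpoint percentile has already been located on the cumulative distribution curve; the function name and inputs are illustrative, not part of the original analysis scripts.

```python
def subpopulation_stats(agvs, breakpoint_percentile):
    """Return the size (in % of cells) and the mean AGV of the
    highest-expressing subpopulation, given per-cell average grey
    values (AGVs) and the breakpoint percentile found on the
    cumulative distribution curve."""
    ranked = sorted(agvs)                      # cumulative ranking of all objects by AGV
    cut = int(round(len(ranked) * breakpoint_percentile / 100.0))
    active = ranked[cut:]                      # cells above the breakpoint
    subpop_size = 100.0 - breakpoint_percentile
    mean_active = sum(active) / len(active)    # mean AGV over the top percentile range
    return subpop_size, mean_active
```

For example, a breakpoint at the 95th percentile yields a 5% "active" subpopulation whose mean AGV is averaged over the top 5% of ranked cells.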
Fluorescence images for display were adjusted for brightness to a level of +143, cropped to their final size and saved at 300 dpi with Adobe Photoshop (Version CS4). Corresponding phase-contrast images were 'auto contrasted' using Photoshop. Subpopulation expression was determined from cumulative ranking of all objects according to their AGV, with the 'breakpoint' between subpopulations determined on the cumulative distribution curves. To identify and quantify noise in expression of the Pint and PinR promoters, two identical copies were fused to distinguishable reporter genes (i.e. egfp and echerry) and integrated into separate locations on the chromosome of B13 or its derivatives using mini-Tn5 delivery. Three independent clones with different insertional positions were maintained. Stationary phase cells of such double-reporter strains grown in MM with 3CBA were examined by epifluorescence microscopy, and their eGFP and eCherry fluorescence intensities were measured as outlined above (AGVs). AGVs of both markers in each cell were scaled to subtract the background AGV of digital EFM images and normalized to the highest AGV in a population (100%). Only cells belonging to the subpopulations with higher eGFP or eCherry fluorescence than the breakpoint in the respective cumulative curves were used. Intrinsic noise (eta-int), extrinsic noise (eta-ext), and total noise (eta-tot) were then calculated according to the definitions given in Elowitz et al., in which g and c denote the normalized eGFP and eCherry AGV, respectively, observed in the nth single cell, and angled brackets denote a mean over the sample population. Significance of different treatments was examined by pair-wise t-test or ANOVA followed by a Tukey post hoc test. 
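With g and c as the paired, normalized per-cell eGFP and eCherry values, the dual-reporter decomposition of Elowitz et al. and the bootstrap averaging used to test subpopulation-size effects might look like the following sketch. The functions return the squared noise terms (the eta values reported in such analyses are their square roots); the function names are illustrative, not the authors' scripts.

```python
import random
from statistics import mean

def noise_decomposition(g, c):
    """Squared intrinsic, extrinsic and total noise from dual-reporter
    data, following the definitions of Elowitz et al.; g and c are the
    normalized eGFP and eCherry intensities of the same cells."""
    mg, mc = mean(g), mean(c)
    d = mg * mc
    eta_int2 = mean((gi - ci) ** 2 for gi, ci in zip(g, c)) / (2 * d)
    eta_ext2 = (mean(gi * ci for gi, ci in zip(g, c)) - d) / d
    return eta_int2, eta_ext2, eta_int2 + eta_ext2

def bootstrap_noise(g, c, n_rounds=1000, seed=1):
    """Resample cells with replacement and average the three squared
    noise terms over all resampled populations, as done to test the
    effect of subpopulation size on the noise calculations."""
    rng = random.Random(seed)
    cells = list(zip(g, c))
    totals = [0.0, 0.0, 0.0]
    for _ in range(n_rounds):
        sample = [cells[rng.randrange(len(cells))] for _ in cells]
        gs, cs = zip(*sample)
        for i, value in enumerate(noise_decomposition(gs, cs)):
            totals[i] += value
    return [t / n_rounds for t in totals]
```

Perfectly correlated reporters give zero intrinsic noise (all variability is extrinsic, i.e. shared between the two copies), whereas uncorrelated scatter between the two reporters in the same cell shows up as intrinsic noise.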
To test the effect of subpopulation size on noise calculations, data sets were randomly resampled using bootstrap procedures (1000 times), upon which the intrinsic, extrinsic and total noise were calculated and, finally, averaged over all resampled populations of the same data set. Total RNA was isolated from P. knackmussii B13-78 (wild type), B13-2671 (rpoS) and B13-2201 (inrR-/-) cultures after 48 h in stationary phase following growth on 3CBA as sole carbon and energy source, by using the procedure described previously.
Figure S1. Alignment of rpoS genes from Pseudomonas putida (P. p.), P. fluorescens (P. f.) and P. aeruginosa (P. a.). Rectangular boxes represent the regions chosen to design primers for the amplification of rpoS from strain B13. Inosine was used in the oligonucleotides at non-conserved positions. GenBank numbers: P. putida KT2440, NC_002947.3; P. fluorescens Pf-5, NC_004129.6; P. aeruginosa PAO1, NC_002516.2.
Figure S2. Comparison of the predicted RpoS amino acid sequence from strain B13 and orthologues from four other Pseudomonas strains. (A) MegAlign alignment (DNAStar Lasergene package v.8) and indication of consensus per position. (B) Dendrogram showing the closest neighbourhood clustering of the strain B13 rpoS gene.
Figure S3. Strategy for inactivating rpoS in strain B13 by a single recombination event. (A) rpoS gene region. (B) Amplification of a 600-bp internal rpoSB13 fragment by PCR whilst creating BamHI and EcoRI restriction sites, and insertion of the rpoSB13 fragment into the suicide vector pME3087. (C) Genetic structure produced by single homologous recombination and inactivation of rpoS on the B13 chromosome.
Figure S4. Growth of P. knackmussii B13-78 wild-type and B13-2671 (rpoS) in MM with 5 mM 3CBA. Data points are the average from three independent biological replicates ± one calculated standard deviation. 
Maximal specific growth rates in exponential phase for B13-78 were 0.22\u00b10.01 versus 0.26\u00b10.01 h\u22121 for B13-2671 (rpoS). Note that growth medium for B13-2671 included Tc to select for the rpoS-pME3087 allele. (B) Semi-quantification of the presence of rpoS revertants in B13-2671 (rpoS) cultures by PCR. 25 ng of genomic DNAs isolated from B13-2671 culture with Tc at 24 h (lane 5), 48 h (lane 6), 72 h (lane 7), or 96 h (lane 8) were used as templates. A serially diluted B13-78 (wild-type) DNA was used as control: lane 1, 0.25 ng; lane 2, 0.5 ng; lane 3, 2.5 ng; lane 4, 25 ng. Intact rpoS (upper panel) and fdxA alleles were amplified using primer pairs 090206+090902 and 110524+110525, respectively. Lane M, molecular mass marker . The positions and sizes of the expected PCR fragments are indictaed. Note that some reversion of rpoS-pME3087 to wild-type rpoS must occur (lane 7\u20139) but at less than 1% in the population (lane 1).Growth of (TIF)Click here for additional data file.Figure S5rpoS or double inrR disruption on expression of a Pint-egfp fusion in P. knackmussii. (A) Relevant construction details of the mini-Tn construct delivering the single copy Pint-egfp fusion. (B) Micrographs showing the subpopulation of cells expressing eGFP from Pint amidst a large number of silent cells for B13-1346 (wild-type), B13-2976 (rpoS) or B13-2979 (\u2212/\u2212inrR) cultured on 3CBA after 24 h into stationary phase. (C) As B, but after 72 h in stationary phase. Shown are phase-contrast micrographs at 1,000\u00d7 magnification and corresponding epifluorescence images. For quantification, see Comparison of effects caused by (TIF)Click here for additional data file.Figure S6clc gene expression compared among P. knackmussii B13-78 (wild-type), B13-2201 (\u2212/\u2212inrR) and B13-2671 (rpoS). A) Log2 fold-change in negative-strand probe signals on an ICEclc micro-array. Inset shows detail around inrR-operon. B) Positive-strand probe signals. 
Open reading frames of ICEclc plotted along its length; white boxes: genes oriented on the positive strand, grey boxes: negative strand. Known ICEclc functional genes or regions indicated by name for reference.ICE(TIF)Click here for additional data file.Figure S7rpoS promoter in P. knackmussii. (A) Relevant construction details of the mini-Tn construct used to place a single copy PrpoS-mCherry transcriptional fusion in the B13 genome. (B) Culture-density normalized mCherry fluorescence as a function of culture density (open circles) and incubation time in B13-3165 (wild-type) B13-3228 (rpoS), or B13-3654 (rpoS-mCherry). (C) Corresponding phase contrast (PhC) and epifluorescence micrographs of B13-3165 cells 24 h into stationary phase. Note how expression from PrpoS is RpoS independent and how expression of RpoS-mCherry from PrpoS is detectable slightly later than that of mCherry alone, suggesting post-transcriptional effects.Growth phase dependent expression from the (TIF)Click here for additional data file.Figure S8int or PinR above threshold and representative for activating the ICEclc element. (A) Finding the breakpoint between the larger non-active subpopulation of cells and the smaller ICEclc-active subpopulation of cells on a cumulative distribution curve of reporter fluorescence values from Pint or PinR. (B) Scaling and normalizing of eCherry and eGFP expression for noise calculations. Only cells falling in the grey zones are considered for noise calculation.Calculation of the subpopulation (size and mean reporter fluorescence expression) of B13-cells expressing P(TIF)Click here for additional data file.Table S1clc from P. knackmussii strain B13, the inrR deletion and the rpoS deletion mutants to P. putida UWC1 as recipient.Transfer frequencies of ICE(DOC)Click here for additional data file.Table S2clc from P. knackmussii strain B13 and the rpoS+ strain to P. 
putida UWC1 as recipient.Transfer frequencies of ICE(DOC)Click here for additional data file.Table S3int-copies in different places on the chromosome of P. knackmussii derivatives.Effect of subpopulation size on noise calculation from two identical P(DOC)Click here for additional data file.Table S4Primers used in this study.(DOC)Click here for additional data file."} {"text": "Hypoxia-ischemia brain insult induced significant brain weight reduction, profound cell loss, and reactive gliosis in the damaged hemisphere. Hypoxic preconditioning significantly attenuated glial activation and resulted in robust neuroprotection. As early as 2\u2009h after the hypoxia-ischemia insult, proinflammatory gene upregulation was suppressed in the hypoxic preconditioning group. In vitro experiments showed that exposure to 0.5% oxygen for 4\u2009h induced a glial inflammatory response. Exposure to brief hypoxia (0.5\u2009h) 24\u2009h before the hypoxic insult significantly ameliorated this response. In conclusion, hypoxic preconditioning confers strong neuroprotection, possibly through suppression of glial activation and subsequent inflammatory responses after hypoxia-ischemia insults in neonatal rats. This might therefore be a promising therapeutic approach for rescuing neonatal brain injury.Perinatal insults and subsequent neuroinflammation are the major mechanisms of neonatal brain injury, but there have been only scarce reports on the associations between hypoxic preconditioning and glial activation. Here we use neonatal hypoxia-ischemia brain injury model in 7-day-old rats and Hypoxia-ischemia injury is the final common mechanism in different kinds of brain damage resulting from trauma. Hypoxia-ischemia injury is also the major cause of brain damage in neonates . The incPathophysiological features of cerebral hypoxia-ischemia are unique to the immature brain and provide the potential for clinical intervention . 
Multipl in vivo and in vitro.Hypoxic preconditioning, or hypoxia-induced tolerance, refers to a brief period of hypoxia that protects against an otherwise lethal insult occurring minutes, hours, or days later . Gidday ad libitum access to water and food. The pups were housed with their dams until weaning on postnatal day 21 (P21). Only male rat pups were used in this study.All experiments were performed in accordance with the National Institutes of Health Guidelines on Laboratory Animal Welfare and were approved by the Animal Care and Use Committee of the College of Medicine, National Taiwan University. Ten to twelve Sprague-Dawley pups per dam were used in this study and housed in institutional standard cages on a 12-hour light/12-hour dark cycle, withA modified Rice-Vannucci model was used for the induction of hypoxia-ischemia (HI) brain damage in 7-day-old (P7) neonatal rats . P7 ratsn = 6 in each group). After removal of the brainstem and cerebellum, the forebrain was sectioned at the midline, and both hemispheric weights were determined. A previous study showed that the extent of unilateral reduction in hemispheric weight is highly correlated with biochemical, electrophysiological, and morphometric markers of tissue injury, in the same animal model [On day 14 after birth (P14), the pups were anaesthetized with isoflurane and decapitated and the brains were removed . The pups were anaesthetized with isoflurane and perfused transcardially with ice-cold saline followed by 4% paraformaldehyde in 0.1\u2009M phosphate-buffered saline (PBS). Brains were then removed, postfixed overnight in 4% paraformaldehyde, and then transferred into a 30% sucrose solution for 72\u2009h. At this time, brains were embedded and frozen in Neg-50 for storage at \u221280\u00b0C. 
Coronal sections were cut at 20 μm on a Leica cryostat and collected serially onto gelatin-coated slides for storage at −80°C.

Brains were collected 24 h after HI injury (P8) for immunohistochemical assessment of glial activation, and Nissl staining was performed for evaluation of structural and cellular damage. CD11b was used as a marker of activated microglia and GFAP as a marker of activated astrocytes. Briefly, sections were thawed and treated with 3% hydrogen peroxide for 30 min. After blocking in 4% nonfat milk containing 0.4% Triton X-100 for 60 min at room temperature, the sections were incubated overnight at 4°C with primary antibodies against CD11b and GFAP. Sections were then washed several times in PBS, incubated with a biotinylated secondary antibody, and then treated using an avidin-biotin complex kit. Finally, the labeling was detected by treatment with 0.01% hydrogen peroxide and 0.05% 3,3′-diaminobenzidine (DAB). The DAB reaction was stopped by rinsing tissues in PBS. Labeled tissue sections were then mounted and analyzed under a bright-field microscope.

Primary glial cultures were prepared according to a previous report. In brief, cells were seeded at 10⁷ cells into 75 cm² flasks in Dulbecco's modified Eagle's medium containing 10% heat-inactivated fetal bovine serum, 2 mM L-glutamine, 1 mM sodium pyruvate, 100 mM nonessential amino acids, 100 U/mL penicillin, and 100 mg/mL streptomycin. Cell cultures were maintained at 37°C in a humidified atmosphere of 5% CO₂ and 95% air, and medium was replenished twice a week. These cells were used for experiments after reaching confluence (7-8 days).

The murine BV-2 cell line was cultured in DMEM with 10% FBS at 37°C in a humidified incubator in 5% CO₂ and 95% air. Confluent cultures were passaged by trypsinization. All the cells were plated on 12-well plates at a density of 2.5 × 10⁵ cells/well for nitrite assays or on 6-well plates at a density of 5 × 10⁵ cells/well for reverse transcriptase-polymerase chain reaction (RT-PCR). The cells were then cultured for 2 days before experimental treatment.

Hypoxia/reoxygenation was performed as previously described with some modifications. To generate hypoxia, cells were incubated in a humidified chamber containing 0.5% O₂, 94.5% N₂, and 5% CO₂, and control cultures were incubated under normoxic conditions for the same duration. After the indicated hypoxic period, reoxygenation was performed by transferring the cells into a regular normoxic incubator, and cells were incubated for another 24 h for nitrite assays. For RT-PCR, total RNA was extracted from the cells after termination of hypoxia, using the same method outlined below.

Cell viability was assessed by the 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium (MTT) assay. Cells cultured in 12-well plates were exposed to hypoxia or lipopolysaccharide (… μM). After incubation for 24 h, 20 μL CellTiter 96 AQueous One Solution Reagent was added to culture wells for 30 min. One hundred microliters of the culture medium from each well was transferred to an ELISA 96-well plate, and absorbance at 490 nm was measured with a 96-well plate reader. The absorbance at 490 nm is directly proportional to the number of living cells in culture.

Accumulation of nitrite in the medium was determined using a colorimetric assay with Griess reagent. Briefly, 100 μL of culture medium was mixed with Griess reagent in 96-well cell culture plates for 10 min at room temperature in the dark. Nitrite concentrations were determined by using standard solutions of sodium nitrite prepared in cell culture medium. The absorbance at 550 nm was determined using a microplate reader.

Total RNA was extracted from brain tissue homogenates (n = 4 in each group) or cell cultures (n = 5 in each group) using an RNeasy mini Kit. Two micrograms of RNA was used for RT-PCR. The reaction mixture contained 1 μg Oligo (dT)15 Primer, 0.02 mM deoxynucleotide triphosphate (dNTP), 40 U RNase Inhibitor, 100 U M-MLV Reverse Transcriptase, and 5x Reaction Buffer. PCR was performed using an initial step of denaturation (5 min at 94°C), 25 cycles of amplification, and an extension (72°C for 7 min). PCR products were analyzed on 1.5% agarose gels. The mRNA of glyceraldehyde 3-phosphate dehydrogenase (GAPDH) served as the internal control for sample loading and mRNA integrity. All of the mRNA levels were normalized to the level of GAPDH expression. The oligonucleotide primers are shown in ….

Two major proinflammatory cytokines, tumor necrosis factor-alpha (TNF-α) and interleukin-1 beta (IL-1β), were detected using ELISA kits, according to the manufacturer's instructions. Briefly, pups were sacrificed by decapitation at 2, 6, 12, 24, and 72 h after HI (n = 4 for each time point per group) and protein samples of the ipsilateral hemispheric cortex from each pup were collected. All protein concentrations were determined by the Bradford method. Data were acquired using a 96-well plate reader (BioTek). The cytokine content is expressed as pg cytokines/mg protein.

The mRNA levels and protein concentrations at different time points were analyzed by two-way ANOVA to compare the HI and HP + HI groups. Individual groups were then compared using Bonferroni's post hoc tests, as appropriate. Statistical analyses were performed using Prism GraphPad v.5.0, with a significance level of ….

The mean ratio of brain weight reduction of the right hemisphere, measured 7 days after right carotid artery ligation and exposure to 2 h of hypoxia in the HI group, was 34.6 ± 2.1%; this reduction was significantly greater than in the HP + HI group. The ratio of brain weight reduction was not statistically different between the HP + HI and sham groups.
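The hemispheric weight-reduction outcome just described is a simple ratio over the two hemisphere weights measured at P14. A minimal sketch of the calculation follows; the weight values are hypothetical, and the convention of expressing the loss of the ligated right hemisphere relative to the contralateral left hemisphere is our assumption (the paper cites a prior method for this).

```python
import statistics

# Hypothetical hemispheric wet weights in grams for n = 6 HI pups,
# as (left, right) pairs; the right hemisphere is on the ligated side.
weights = [(0.52, 0.34), (0.50, 0.33), (0.53, 0.35),
           (0.51, 0.32), (0.49, 0.33), (0.52, 0.34)]

# Percent reduction of the right hemisphere relative to the left
# (assumed convention, see lead-in).
reductions = [(left - right) / left * 100 for left, right in weights]

mean_reduction = statistics.mean(reductions)  # group mean (%)
sd_reduction = statistics.stdev(reductions)   # sample SD (%)
```

With these illustrative numbers the group mean comes out near the 34.6% reported for the HI group.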
The percentages for brain weight reduction are individually illustrated in ….

To identify the effect of hypoxia preconditioning on the glial response after hypoxia-ischemia injury, we used CD11b immunostaining to determine microglial activation and GFAP immunostaining to determine astrocytic activation. Twenty-…

We next examined the effect of hypoxia preconditioning on the inflammatory response induced by hypoxia-ischemia injury. Comparing the mRNA levels of the right cortex in the HP + HI group with the HI group, inducible NOS (iNOS) and IL-1β upregulation were significantly reduced as early as 2 h after the hypoxia-ischemia insult. We thus extracted total protein from the right cortex and further evaluated TNF-α and IL-1β protein expression by ELISA at different time points. The levels of TNF-α were very low in the sham group (0.08 ± 0.02 pg/mg protein) and were increased markedly at 2 h after the hypoxia-ischemia insult. These levels peaked at 6 h and were decreased after 12 h. Hypoxia preconditioning resulted in a significant reduction of the TNF-α level at 6 h. The level of IL-1β was also very low at baseline, slightly increased 2 h after the hypoxia-ischemia insult, and markedly elevated 6 h after the hypoxia-ischemia insult. Hypoxia preconditioning before the hypoxia-ischemia insult markedly decreased the IL-1β levels at 6 h. Therefore, hypoxia preconditioning was able to suppress the hypoxia-ischemia injury-induced inflammatory response in neonatal rat brain.

To further clarify the role of glial cells in the anti-inflammatory mechanisms of hypoxia preconditioning, we next examined the glial response to hypoxia preconditioning at the cellular level. Firstly, nitrite production and cell viability after hypoxia exposure for different time durations were determined in a primary mixed glial culture and in the microglial cell line, BV-2.
Exposure … The production of nitrite after reoxygenation was decreased significantly by hypoxia preconditioning for 0.5 h, but not for 1 or 2 h, as was the expression of the inflammation-related gene COX-2. Our results also showed that a longer period of hypoxia preconditioning had less of an anti-inflammatory effect, which implies the importance of selecting the appropriate duration for hypoxia preconditioning. The induction of proinflammatory cytokine genes, including TNF-α and IL-1β, was not as prominent as for the in vivo model. The difference in cytokine response may be due to the recruitment of circulating white blood cells. In the animal model, hypoxia-ischemia will induce migration and infiltration of circulating macrophages and monocytes, further increasing proinflammatory cytokine release. … TNF-α and IL-1β were all significantly increased after hypoxia exposure for 4 hours, and hypoxia preconditioning for the optimal duration (0.5 h) was able to attenuate the activation effects. Again, longer durations of hypoxia preconditioning decreased the anti-inflammation effects, as in primary mixed glia cultures.

It may be argued that a diminished inflammatory reaction arises from a reduction in neuronal damage and is not a direct cellular response. We thus developed an … model of hypoxia preconditioning.

Glial cells, in particular astrocytes, are usually viewed as supporters of neuronal function. However, numerous studies are increasingly demonstrating the important role of glial cells in preserving brain function under physiological and pathological conditions. …

The results of the present study can also be extrapolated to other mechanisms of neonatal brain injury. An increasing body of evidence has demonstrated a link between inflammation and long-term brain dysfunction.
A recent meta-analysis of 26 articles has shown a positive association between infection and cerebral palsy, in both preterm and full-term infants. In addi…

In conclusion, hypoxic preconditioning induced significant neuroprotection against neonatal hypoxia-ischemia brain insults and suppressed astroglial and microglial activation in the ischemic cortex and hippocampus. Pretreatment with sublethal exposure to hypoxia before prolonged hypoxia injury prevented the cellular inflammatory response in the primary glial culture and microglial cell line BV-2. These results further address the importance of anti-inflammatory strategies in preventing neonatal brain injury."} {"text": "Declarations on end-of-life issues are advocacy interventions that seek to influence policy, raise awareness and call others to action. Despite increasing prominence, they have attracted little attention from researchers. This study tracks the emergence, content, and purpose of declarations concerned with assisted dying and euthanasia, in the global context. The authors identified 62 assisted dying/euthanasia declarations covering 1974–2016 and analyzed them for originating organization, geographic scope, format, and stated viewpoint on assisted dying/euthanasia. The declarations emerged from diverse organizational settings and became more frequent over time. Most opposed assisted dying/euthanasia. Euthanasia and certain forms of assisted dying are currently legal or decriminalized in just a few countries. The Netherlands (2001), Belgium (2002), and Luxembourg (2009) have legalized euthanasia. … Such declarations group around a common purpose. They capture the goals of interest groups, make statements of intent, point to a more desirable state of affairs, and encourage greater awareness to achieve a stated goal. These declarations have no legal mandate but do have potential for influencing laws, policies, systems and processes on end-of-life issues.
They have become a part of the landscape of end-of-life care, and the debates that swirl around it.

We refer to advocacy interventions of this type as …. At the same time, they are poorly documented and largely ignored by researchers. Yet they are important markers in the evolution of end-of-life discourse. They give perspective on the changing discussion around specific issues and have some importance within the culture of many end-of-life care organizations. They merit research scrutiny, in particular, when declarations on the same topic take up opposing or differing perspectives.

Building on an earlier study of declarations in support of palliative care development, …. The issuing organizations included national and international medical and nursing associations, specific fields of medicine such as palliative care or geriatric care, and societies representing particular patient groups, such as the Association for Persons with Severe Handicaps, Parkinson's UK and The Arc of the United States. Religious organizations (16) were all Christian in orientation, including Methodist, Baptist, Catholic, the Salvation Army, the Reformed Churches, and the Christian Medical and Dental Association. Others included political parties (three) and those organizations instituted to advocate for (eight) or against (four) euthanasia/assisted dying.

Seven out of the eight declarations in the group established to advocate for euthanasia/assisted dying were issued by the World Federation of Right to Die Societies. The first of these was in 1976 and the remaining six were issued between 1996 and 2006 at 2-year intervals, corresponding with the biennial conferences of the Federation.

Nearly three quarters of the declarations (45/62) were against euthanasia/assisted dying and were issued by associations of palliative care and other health care disciplines, associations of patient groups, and churches.
Nine declarations advocated for the introduction of euthanasia/assisted dying, of which seven were issued by the World Federation of Right to Die Societies. Among the eight declarations that expressed a neutral position, two were from political parties calling for further discussion on the subject. Others included health care associations representing divided views of members, organizations that expressed their commitment to equal treatment of all patients irrespective of their position on euthanasia, and those that refrained from taking a position because euthanasia/assisted dying was illegal in their respective countries.

All declarations issued by religious organizations were against euthanasia/assisted dying. Among health care organizations, 24 were against and 5 were neutral. Two declarations from political parties took a neutral position and one was for euthanasia/assisted dying.

Analyzing the 51 declarations where the year of publication could be identified, we found different viewpoints showing prominence over specific periods of time. The first two declarations from 1974 (against) and 1976 (for) represented either side of the argument. With two exceptions, all declarations issued in the 1990s were against legalizing euthanasia/assisted dying. Five out of the nine declarations published between 2000 and 2010 were in support. The period from 2011 to 2016, which showed the highest activity (23), was dominated by declarations against euthanasia/assisted dying (18). The first declaration with a neutral stance appeared in 2001 and, after a break of 10 years, five declarations were issued between 2012 and 2016.

The majority of declarations (39) were oriented to national audiences: United States (14), United Kingdom (12), Canada (eight), New Zealand (three), and the Netherlands (one). Many were published because of a proposed change in legislation or a judicial decision.
The international declarations (19) were all issued by organizations or churches with a global presence, such as the World Medical Association, The World Federation of Right to Die Societies, The Christian Medical and Dental Associations, The Salvation Army International, and the Sacred Congregation for the Doctrine of Faith. Two declarations involved two countries only, and two involved a specific region within a country.

The 62 declarations came in several formats. Most common was a statement of convictions (38) expressing beliefs and opinions. Others made recommendations (23) to governments, policy makers, health care professionals and the wider public, expressed specific concerns (10), made a call to action (seven) for governments, health institutions or the public, made an explicit position statement (six) of the organizations' stand, described their action plan (three), and recorded their commitment to a cause or an aspect of care (two). Many declarations contained more than one of these formats.

Most declarations indicated the ethical or practical reasons for their position on euthanasia/assisted dying and incl….

The recommendations in the declarations varied in relation to the "viewpoint" adopted: for, against or neutral. Recommendations from declarations for euthanasia or assisted dying included decriminalization of voluntary medically assisted death; legalizing medically hastened death; respecting the voluntarily expressed will of individuals as an intrinsic human right; openness to and acceptance of terminal sedation as a form of assisted dying; and inclusion of assisted dying within the mandate and practice of palliative care.

The most prominent recommendation from declarations against euthanasia/assisted dying was for improvement in the provision of palliative care. This was followed by recommendations about access to and the administration of medications for adequate pain relief.
They asserted that good palliative care and physical symptom control minimize the number of requests for hastened death and that governments should pay attention to lack of relevant health and social support, equality, and justice. Asserting that misconceptions about suffering at the end of life fuel the public demand for legalizing euthanasia, some declarations recommended public education about palliative care.

Our study has demonstrated that the practice of issuing declarations on euthanasia/assisted dying has emerged as a significant phenomenon within the field of end-of-life care. We have shown an increasing incidence of such declarations over time and their growing prominence as an advocacy tool. The declarations take specific (though varied) positions on the issue of legalization of euthanasia/assisted dying and aim to promote these to gain public support and/or favorable actions from governments. Despite their emerging significance, no commentary exists to our knowledge on such advocacy documents and their role in end-of-life debates and discourse. As the discussion on these issues spreads to more countries we are likely to see the appearance of further declarations of this type.

Our analysis shows a specific geographic range in the declarations identified. They all emanate from the United States, Canada, Western Europe, Australia, and New Zealand. These are countries where active measures have taken place to consider the value of legalizing euthanasia/assisted dying or where such legalization has already taken place. The absence of declarations from other parts of the world, including Asia and Africa, is notable. Although discussions and studies exploring perceptions on the issue of euthanasia and assisted dying are emerging from these parts of the world (Saadery, …), relevan…

The diversity of viewpoints on euthanasia/assisted dying is strikingly depicted in these declarations.
Declarations for euthanasia/assisted dying range from those which endorse the decriminalization of assisted dying, to those which demand it as a fundamental human right. Declarations against range from those suggesting that assisted dying may not be the right solution to the problem of suffering, to others which strongly condemn initiatives to legalize euthanasia.

Although declarations for and against use some terminologies in common, the extent of their meaning and use differs significantly. Respecting the contents of a living will is a commonly recognized issue in end-of-life care. Yet although declarations favoring euthanasia extend the value of the living will to those expressing the wish to die, those against do not support its use to facilitate medical assistance to end life.

Although all declarations express their intention to promote dignified death, those for euthanasia consider respecting autonomous decisions of the individual on the timing, place, and manner of death as aspects of dignity. Declarations against euthanasia, however, present dignity as an equal and inviolable quality inherently possessed by human beings. They present the view that intentional killing of a human being, even at their voluntary request due to intractable suffering, undermines human dignity.

Despite their wide-ranging characteristics and divided perspectives, euthanasia/assisted dying declarations share some of the wider principles of advocacy. They identify with disadvantaged populations, promote their cause, and invoke responses from positions of authority and professional groups, as well as from wider communities (Gray & Jackson, …).

We acknowledge certain limitations to our study. Although the search for declarations was conducted in a systematic way, it is possible there may be other declarations we did not find; for example, declarations could have used different terminology in their titles from our keywords, or declarations may have been issued in languages other than English.
Therefore, while capturing the landscape of declarations to a significant degree, there may be other declarations on euthanasia/assisted dying that are not covered in this study. We consider this a small possibility, however. The findings of our study are also limited by the contents of these advocacy documents. We acknowledge that these may not necessarily represent the views of all individuals that make up these organizations, though they are the declared organizational position on the issue. It is also possible that there may be other organizations concerned about the legalization of euthanasia/assisted dying that have not considered it a high enough priority to issue a declaration.

Declarations relating to euthanasia and assisted dying represent the views and demands of diverse communities of interest concerned about suffering at the end of life, often with a determination to make their voices heard and to advocate for change. Our study has catalogued the emergence of this particular form of intervention as an advocacy tool in the wider debates about end-of-life issues. We have identified the various organizations involved, the positions represented and the recommendations made. In so doing, we have opened up a space for further analytic work and more comparative analysis of declarations across a range of end-of-life issues. Further exploration of these declarations in the light of their respective contexts will help understand their significance and impact."} {"text": "Prader–Willi syndrome (PWS), a neurodevelopmental disorder caused by loss of paternal gene expression from 15q11–q13, is characterised by growth retardation, hyperphagia and obesity. However, as single gene mutation mouse models for this condition display an incomplete spectrum of the PWS phenotype, we have characterised the metabolic impairment in a mouse model for 'full' PWS, in which deletion of the imprinting centre (IC) abolishes paternal gene expression from the entire PWS cluster. We show that PWS-IC… PWS-ICdel mice also displayed a 48% reduction in proportionate interscapular brown adipose tissue (isBAT) weight with significant 'beiging' of abdominal WAT, and a 2°C increase in interscapular surface body temperature. Maintenance of PWS-ICdel mice under thermoneutral conditions (30°C) suppressed the thermogenic activity in PWS-ICdel males, but failed to elevate the abdominal WAT weight, possibly due to a normalisation of caloric intake. Interestingly, PWS-ICdel mice also showed exaggerated food hoarding behaviour with standard and high-fat diets, but despite becoming hyperphagic when switched to a high-fat diet, PWS-ICdel mice failed to gain weight. This evidence indicates that, unlike humans with PWS, loss of paternal gene expression from the PWS cluster in mice results in abdominal leanness. Although reduced subcutaneous insulation may lead to exaggerated heat loss and thermogenesis, abdominal leanness is likely to arise from a reduced lipid storage capacity rather than increased energy utilisation in BAT.

Prader–Willi syndrome (PWS) is caused by a lack of paternal gene expression from the 15q11–q13 imprinting cluster and results from large chromosomal deletions, chromosome 15 maternal uniparental disomy or imprinting-centre (IC) mutations. This disorder is associated with significant metabolic impairment. After severe neonatal hypotonia and a failure to thrive in infancy, subsequent development of hyperphagia and reduced satiety responses result in obesity, unless managed carefully.
Indeed, PWS is the most common syndromal cause of morbid obesity. The PWS-ICdel and wild-type (WT) mice used in this study were bred under the authority of the Animals (Scientific Procedures) Act 1986 (UK), with subsequent procedures conforming to institutional and national guidelines, including those for genetically modified animals, and specifically approved by local ethical review.

The PWS-ICdel mice and WT littermates were generated by crossing ICdel-positive males with WT females. Given the nature of the epigenetic regulation of imprinted genes, paternally inherited IC deletion results in a lack of gene expression from the PWS interval. As PWS-ICdel animals on a pure C57BL/6J background suffer severe postnatal lethality, we crossed ICdel-positive males with CD1 females and selectively culled WT littermates (identified on the basis of their increased size 48 h after birth), leaving only 1 or 2 WT pups per litter, as previously described. All animals were maintained on a 12-h light/darkness cycle (lights on 07:00 h), with food and water available ad libitum, unless stated otherwise.

For tissue collection, 18-month-old male WT and PWS-ICdel littermate mice were weighed and killed by cervical dislocation. Inguinal, retroperitoneal and epididymal white adipose tissue (WAT) and interscapular brown adipose tissue (isBAT) fat depots and liver were excised, weighed, snap frozen and stored at −70°C for subsequent histological analysis (see below). Left tibiae were excised, and the length was determined with a hand-held micrometre. Mice were studied after either an overnight fast or ad libitum feeding (with water available to both groups ad libitum).
After thoracotomy, blood samples were obtained by cardiac puncture using BD Microtainers, and aliquots of separated plasma were stored at −20°C for the quantification of circulating ghrelin and lipid profiling (see below). Two groups of 16- to 17-month-old female WT and PWS-IC littermate mice were weighed and killed by cervical dislocation after either 24 h of total food removal or ad libitum feeding.

After analysis of BAT histology from the mice in study 1, surface body temperature was measured in 6- to 10-month-old male WT and PWS-IC mice, as previously described, at either standard room temperature (20–22°C) or a thermoneutral ambient temperature (30°C) for 9 weeks; 30°C represents the lower critical temperature in mice, below which non-shivering thermogenesis is induced.

For the diet studies, mice were fed a standard diet (energy sources: Crude Fat 10%; Crude Protein 20%; Carbohydrate 70%) for 1 week, followed by a further 1-week period with a high-fat diet (Crude Fat 45%; Crude Protein 20%; Carbohydrate 35%), with water provided ad libitum. Body weight and food intake were monitored daily, food consumption being normalised to body weight. After the first day of exposure to each diet, the proportion of 'hoarded' food was determined by expressing the mass of food in the inner ('spill') compartment of the food hopper as a percentage of the remaining diet.

For meal pattern analysis, mice were placed individually in normal shoe-box cages under low lighting levels and presented with a pre-weighed pot of wet mash (1 part standard diet (SDS RM3):1 part water). Wet mash was used to overcome the propensity of PWS-ICdel mice to hoard diet pellets and spill powdered diet.
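The hoarding measure is a simple ratio of the diet found in the 'spill' compartment to all remaining diet. A minimal sketch, with hypothetical masses (the source does not report raw values):

```python
def hoarding_percentage(spill_mass_g: float, remaining_mass_g: float) -> float:
    """Mass of diet moved to the 'spill' compartment, expressed as a
    percentage of all remaining (uneaten) diet."""
    if remaining_mass_g <= 0:
        raise ValueError("remaining diet mass must be positive")
    return 100.0 * spill_mass_g / remaining_mass_g

# Hypothetical example: 4.5 g of the 5.0 g of uneaten diet was in the spill compartment
print(round(hoarding_percentage(4.5, 5.0), 1))  # 90.0
```

Values in the 86–91% range, as reported below for PWS-ICdel males, would indicate that nearly all uneaten diet had been hoarded.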
Mice were allowed to feed freely for 60 min, the pots being weighed after 30 and 60 min to determine food consumption. Group-housed 4- to 6-month-old male and female WT and PWS-ICdel mice were used for these measurements.

Adipocyte size and hepatic lipid content were quantified as previously described. Uncoupling protein-1 (Ucp-1) mRNA expression in isBAT and inguinal WAT was quantified as previously described: cDNA samples were diluted in dH2O prior to quantification of Ucp-1 and β-actin cDNA by quantitative real-time PCR amplification, using a Bio-Rad IQ5 thermal cycler and Precision 2× qPCR Mastermix reagents (Primerdesign). Oligonucleotide primers for mouse Ucp-1 were purchased from Primerdesign. PCR thermal cycling conditions included an initial enzyme activation for 10 min at 95°C (1 cycle), followed by 51 cycles of denaturation at 95°C for 15 s and real-time Fluorogenic Data Collection (FDC) at 60°C for 1 min. The final FDCs were acquired at 0.5°C temperature increments between 55°C and 95°C at 10-s intervals (81 cycles). The PCR products were quantified by incorporation of SYBR green into double-stranded DNA. Single amplicon identity was verified by melt curve analysis. Individual samples were normalised to the expression of β-actin. The quantity of cDNA in each reaction was calculated by reference to a standard curve constructed from serial dilutions of cDNA from WT BAT.

Plasma ghrelin concentrations were determined by RIA (Millipore/Linco; IAV: 3.28–8.05%). Plasma lipids were extracted by the Folch method.
Classess.e.m., and differences were established by 1-way and 2-way repeated measures ANOVA and Bonferroni post hoc test or unpaired Student\u2019s t-test , as indicated in the figure and table legends, with P\u2009<\u20090.05 considered significantly different.Results are expressed as mean\u2009\u00b1\u2009del foetuses weighed 96% of their WT counterparts (WT: 1.18\u2009\u00b1\u20090.044\u2009g (n\u2009=\u20097); PWS-ICdel: 1.13\u2009\u00b1\u20090.044\u2009g (n\u2009=\u20097); P\u2009=\u20090.45)) nor placental weight (WT: 100\u2009\u00b1\u20096.5\u2009mg (n\u2009=\u20097); PWC-ICdel: 92\u2009\u00b1\u20094.1\u2009mg (n\u2009=\u20097); P\u2009=\u20090.32) were significantly affected in PWS-ICdel pregnancies, PWS-ICdel mice failed to thrive after 24\u2009h post-partum, leading to increased neonatal mortality. PWS-ICdel mice that survived to adulthood were significantly growth retarded, adult PWS-ICdel males showing a 40% reduction in body weight (P\u2009<\u20090.001), accompanied by a 7% reduction in tibial length . Comparable reductions in body weight were observed in similarly-aged PWS-ICdel females , 84% (P\u2009<\u20090.001) and 82% (P\u2009<\u20090.01), respectively, and reflected in a 69% reduction in adipocyte size (P\u2009<\u20090.001). Although proportionate liver weight was unaffected (del mice was disrupted. In contrast to the diffuse lipid staining in WT livers (del livers showed punctate staining (P\u2009<\u20090.01)), larger (increased by 66% (P\u2009<\u20090.001)) lipid droplets, with total lipid content reduced by 67% (P\u2009<\u20090.01).However, in contrast to humans with PWS, PWS-ICinguinal , epididyinguinal and retringuinal WAT weigaffected , hepaticT livers inset, Pstaining inset, wdel mice was only 79% of that in WT littermates , consistent with elevated thermogenesis. Similarly, retroperitoneal WAT (P\u2009<\u20090.05). 
Thermal imaging revealed that surface temperatures of the head and interscapular regions were increased in PWS-ICdel mice by 1.4\u00b0C and 2.0\u00b0C, respectively (P\u2009<\u20090.05), whereas dorsolumbar tail root and radiant temperatures were not significantly affected . After 9 weeks, proportionate isBAT weight in WT and PWS-ICdel males was elevated by 72% and 76%, respectively ; P\u2009=\u20090.183 ). Despite suppressed thermogenesis, proportionate WAT mass , whereas for male PWS-ICdel mice at thermoneutrality, retroperitoneal WAT mass was 97% of that in PWS-ICdel mice at room tempera\u00adture. In addition, thermoneutrality had no significant effect on WAT Ucp-1 mRNA expression . This lack of weight gain may arise from the suppressive influence of thermoneutrality on daily food intake . When quantifying the amount of diet in the \u2018spill\u2019 compartment as a percentage of the remaining diet, PWS-ICdel males were found to \u2018hoard\u2019 more than 3-times as much diet as WT males . This behaviour was prominent with both standard and high-fat diets, PWS-ICdel males hoarding 86\u201391% of the remaining diet. Interestingly, this pattern of hoarding in PWS-ICdel males was broadly similar to that in both WT and PWS-ICdel females .Although constrained by the short period of monitoring, it is particularly striking that although PWS-ICtermates , althougdel mice show proportionate hyperphagia, we investigated individual meal consumption patterns after an overnight fast. When adjusted for body weight, PWS-ICdel mice consumed 48% more diet than WT littermates . This was due mainly to continued consumption after 30\u2009min as evidenced by a significant interaction with the time of measurement .As PWS-ICdel mice.Loss of paternal gene expression from the imprinted gene cluster on human chromosome 15q11\u2013q13 impairs neuroendocrine and metabolic function in PWS. 
However, investigating these impairments in mouse models for PWS has been hampered by high postnatal lethality. We have utilised a mixed genetic background to increase survival, enabling us to characterise metabolic status in a mouse model in which deletion of the homologous PWS-IC results in complete loss of paternal gene expression from the entire PWS locus . As similar growth results have been reported in another \u2018full\u2019 genetic mouse model for PWS depots. Although reduced visceral adiposity has been reported in human females with PWS . When supplied with pelleted diet, individual PWS-ICdel mice from our Cardiff colony displayed increased diet distribution in both home and metabolic cages , whereas PWS-ICdel mice in a Florida colony packed bedding around high-fat diet pellets in the food hopper . This hoarding behaviour, previously described in birds, rodents and, in certain circumstances, humans , it is possible that leanness occurs in PWS-ICsorption , are unasorption . In addisorption , the abssorption indicatedel mice is significantly reduced , maintael males . Howeverde novo lipogenesis and/or decreased lipolysis or lipid export, resulting in hypertrophy , whereas lipid accumulates in response to increased substrate supply/uptake, ertrophy .del mice was partly due to a 50% reduction in adipocyte number (data not shown). Evidence of reduced adipogenesis in subcutaneous WAT has recently been reported in human PWS, adipocyte progenitor number being halved ; the Prader\u2013Willi Syndrome Association UK (to A R I and T W), Biotechnology and Biological Sciences Research Council and Foundation for Prader\u2013Willi Research (to T W); and Seed Corn Funding (to T W and A R I)."} {"text": "The imaging of drugs inside tissues is pivotal in oncology to assess whether a drug reaches all cells in an adequate enough concentration to eradicate the tumor. 
Matrix-Assisted Laser Desorption Ionization Mass Spectrometry Imaging (MALDI-MSI) is one of the most promising imaging techniques that enables the simultaneous visualization of multiple compounds inside tissues. The choice of a suitable matrix constitutes a critical aspect during the development of a MALDI-MSI protocol since the matrix ionization efficiency changes depending on the analyte structure and its physico-chemical properties. The objective of this study is the improvement of the MALDI-MSI technique in the field of pharmacology; developing specifically designed nanostructured surfaces that allow the imaging of different drugs with high sensitivity and reproducibility. Among several nanomaterials, we tested the behavior of gold and titanium nanoparticles, and halloysites and carbon nanotubes as possible matrices. All nanomaterials were firstly screened by co-spotting them with drugs on a MALDI plate, evaluating the drug signal intensity and the signal-to-noise ratio. The best performing matrices were tested on control tumor slices, and were spotted with drugs to check the ion suppression effect of the biological matrix. Finally; the best nanomaterials were employed in a preliminary drug distribution study inside tumors from treated mice. In drug discovery and development it is important to understand the pharmacokinetics, investigating the absorption, distribution, metabolism, and excretion (ADME) of molecules. Several analytical methods, based on high-performance liquid chromatography (HPLC) and Liquid chromatography tandem-mass spectrometry (LC-MS/MS), have been developed and employed on plasma and tissue homogenates to establish drug concentration profiles of drugs . 
ClassicIn recent decades, several imaging techniques have been developed to investigate the distribution of compounds inside a tissue, such as magnetic resonance imaging (MRI), positron emission tomography (PET), or wall body autoradiography (WBA), and are routinely used in clinical diagnosis ,7,8,9,10W < 10,000 Da). Laser desorption/ionization based techniques have always been the most widely used in mass spectrometry for imaging [Mass spectrometry imaging (MSI) is one of the latest, rapidly growing surface analysis techniques for the detection, localization, and identification of molecules in tissues ,12,13. M imaging and theyW < 1000 Da) like drugs and metabolites as well [MALDI is the prevailing MSI technique used in pre-clinical and clinical research ,18 to ob as well ,23,24,25The principle behind MALDI-MSI is to acquire a mass spectrum for each point of a tissue section by rastering a laser beam at defined geometrical coordinates. The UV laser energy is absorbed by a matrix present in a homogeneous layer on the tissue slice that facilitates analyte extraction in the MALDI ion source ,27. The A different approach based on the use of inorganic fine particles takes advantage of their physicochemical properties such as high photo-adsorption, low heat capacity, and large surface area. This ensures rapid heating and highly localized and uniform energy deposition, resulting in efficient sample desorption and ionization , allowinThe use of gold and platinum nanostructures in laser desorption/ionization (LDI) MS of low molecular weight compounds was reviewed by Bergman et al. . Gold naTitanium dioxide can be considered as a matrix for MALDI, since the laser wavelength falls in the range of its absorption band. In particular, the literature reports some examples of the use of this material for the determination of carbohydrates and anti2Si2O5(OH)4\u00b72H2O), is a two-layered aluminosilicate, with a predominantly hollow tubular structure. 
Chemically, the outer surface of the HNTs has properties similar to SiO2, while the inner cylinder core is related to Al2O3. In a recent paper, halloysite nanoclay showed good potential as a SALDI surface for the rapid analysis of low molecular mass polyesters and their degradation products [Halloysite (HNT), a naturally occurring aluminosilicate nanotube (Alproducts .Carbon nanotubes (CNTs) are allotropes of carbon with a cylindrical nanostructure. They have been reported to be an effective MALDI matrix for small molecules , eliminaWe compared the ability of these five nanostructured matrices to ionize six different anticancer drugs: taxans , tyrosine kinase inhibitors , an antineoplastic antibiotic , and a DNA binding protein ,37. All The main objective of this study is the improvement of the MALDI-MSI technique in the field of pharmacology, developing specifically designed surfaces for the imaging of different anticancer drugs, with high spatial resolution, sensitivity, and reproducibility.All nanostructured materials were deeply characterized by the morphological, structural, and surface points of view, by ultraviolet-visible spectroscopy (UV), transmission electron microscope (TEM), scanning electron microscope (SEM), and dynamic light scattering (DLS) analyses. Relevant features are reported in All nanostructured matrices were firstly screened by co-spotting them with equimolar concentrations of six anticancer drugs on the MALDI plate and evaluating both the signal intensity and the S/N for each combination of drug and matrix. The detected MS fragments for each drug are reported in m/z 743 after losing a water molecule and OTX (m/z 271) in negative ion mode, but they also allowed the ionization of smaller molecules like IMT (m/z 492) and LCT (m/z 373). AuNPs gave more intense signals, especially in positive ion mode for the tyrosine-kinase inhibitor IMT as a sodium adduct at m/z 515. 
Compared to TiO2 nanoparticles, AuNPs were more efficient for ionizing and visualizing smaller molecules such as IMT and LCT, giving a more intense signal in general. HNTs gave good visualization only for three of the six drugs: PTX, IMT, and LCT. CNTs efficiently ionized all drugs both in negative and positive ion modes, but could not be used for imaging.Almost all drugs were successfully ionized both in negative and positive ion modes using gold, halloysite, and titanium, while trabectedin (ET) was detectable only in positive ion mode at molecule . P25 andm/z 284 as a base peak. In this mass region the spectrum has a very high S/N, suggesting a good effectiveness of the technique with this kind of matrix. Unfortunately, CNTs tended to fly off from the target plate when subjected to the laser pulse, contaminating the ion source and interfering with the instrument functionality.The different performances of the nanomaterials towards the ionization of the drug molecules may be ascribed to many factors, starting with the physico-chemical properties , their am/z < 100).Both titanium samples, instead, showed a strong absorption (inflection point of the curves) particularly in the laser emission range. However, not all the absorbed energy could be transferred to the target molecules, as these materials are semiconductors. This yields a good compromise between the absorbed and released energy, with promising results in ionizing almost all the bigger drug molecules. 
Unfortunately, the released energy is still so high that the small fragments are not detected because of their complete mineralization were tested to assess how the biological matrix influenced the intensity of the analyte ion signal and for interfering signals by spotting drug standards on control tumor tissue slices that were then sprayed with each matrix for imaging experiments and OTX (m/z 260) in negative ion mode; TiO2 Hombikat efficiently ionized the two taxans PTX and OTX in negative ion mode but also IMT and OTX in positive ion mode. AuNPs efficiently visualized almost all drugs spotted on tissue for PTX, 400 mg/kg per os (p.o.) for IMT, 20 mg/kg p.o. for LCT). 2 based matrices appear to be suitable for visualizing the PTX distribution inside treated tissues in negative ion mode with high sensitivity, but did not give good visualization of IMT in negative or positive ion mode. In contrast, the nanogold-based matrix allowed a better visualization of the IMT distribution, highlighting its peripheral localization inside treated mesothelioma (even though there is high background noise). Both TiO2 and gold nanoparticles were not suitable for LCT distribution studies inside treated tumors.Based on the results obtained from the control tissues spotted with the drug standards, TiO2 nanoparticles have been previously reported to be suitable for MALDI mass spectrometry analysis of low molecular weight compounds, with almost complete absence of background noise [4 in water, which was introduced by Turkevitch and revisited by Kimling [The imaging of drugs inside biological tissues is pivotal in oncology to understand how a compound and its metabolites are localized and distributed inside a tissue, in order to check that they reach the intended target site. Therefore, there is a growing need for methods to assess whether a drug reaches all tumor cells in adequate concentrations and to develop new strategies to improve penetration and the outcome of chemotherapy . 
In thisnd noise , and recnd noise ,42. For Kimling ,44.We tested common nanoparticles with different features with six types of drugs, currently used as anticancer drugs in clinical settings. While the carbon nanotubes based matrix has been reported to provide higher detection sensitivity than classical organic matrices , and it PTX efficiently ionizes with titanium based matrices which were however less suitable for the ionization of a smaller molecule such as IMT. AuNPs were convenient for the ionization and for the imaging of IMT, enlarging our ability to visualize different kind of molecules.These results confirm that there is not a single type of matrix that suits all drugs. Further studies are needed to clarify the interactions between matrix and analyte to define case-by-case how to choose the best matrix and the best fitting combination that gives high sensitivity and therefore a good detection of drug distribution in the imaging of molecules inside biological tissues.Further studies are also needed to accurately understand the mechanism of interaction between the matrix and analyte. Determining out why a particular matrix is more appropriate for the imaging of a certain drug could give clues for developing of new nanomaterials with designed texture for the imaging of a larger group of small molecules, to investigate drug distribution in primary tumors or metastases, while also offering new opportunities for the visualization of endogenous molecules.2O, all at a concentration of 100 pmol/\u00b5L.Paclitaxel , ortataxel , and imatinib were dissolved in 50% ethanol, trabectedin and lucitanib were dissolved in 50% methanol, and doxorubicin was dissolved in HFor mouse treatments, PTX was dissolved in 50% Cremophor EL and 50% ethanol and further diluted in saline immediately before use. IMT and LCT were suspended in 0.5% Methocel.4 was heated to a boil while stirring. Then trisodium citrate was added quickly to the boiling mixture. 
The solution was refluxed for 15 min, and then allowed to cool to room temperature. Subsequently, the gold colloid was centrifuged at 13.2 krpm for 20 min and resuspended in phosphate buffered saline (PBS), pH 7.4. The average diameter of the AuNPs was determined by UV, TEM, SEM, and DLS.Gold nanoparticles (AuNPs) of 20 nm diameter were synthesized as follows: an aqueous solution of HAuCl2 P25 and TiO2 Hombikat UV100, were purchased from Evonik and Sachtleben Chemie GmbH , respectively. Multiwalled Carbon Nanotubes (CNTs) were purchased from NANOCYL\u00ae (NC7000\u2122 series) and were produced by a Catalytic Chemical Vapor Deposition (CCVD) process. Halloysite nanoclay was purchased from Sigma Aldrich . P25 or Hombikat TiO2, AuNPs, and HNTs or CNTs matrices suspensions were prepared respectively at 1 mg/mL, 0.4 mM, and 0.15 mg/mL in 50% ethanol, and were vortexed and sonicated before use.Two different titanium nanoparticles, TiOUV-Visible measurements were recorded using a Hitachi UH 5300 Spectrophotometer , equipped with 1.0 cm path length quartz cells and recorded in a range of 400-700 nm. The morphology and shape of the samples have been evaluated by means of a Zeiss Evo50 Scanning Electron Microscope with an accelerating voltage of 20 kV and a magnification of 24000 \u00d7 TEM analyses were performed on a Jeol Jem 3010 UHR instrument .Diffuse reflectance spectra (DRS) of the powders were measured on a UV-Vis scanning spectrophotometer equipped with a diffuse reflectance accessory. 
A \u2018\u2018total white\u2019\u2019 Perkin Elmer reference material was used as a reference.Procedures involving animals and their care were conducted in conformity with the following laws, regulations, and policies governing the care and use of laboratory animals: Italian Governing Law ; Mario Negri Institutional Regulations and Policies providing internal authorization for persons conducting animal experiments ; the NIH Guide for the Care and Use of Laboratory Animals (2011 edition) and EU directives and guidelines (EEC Council Directive 2010/63/UE) and in line with Guidelines for the welfare and use of animals in cancer research . The staAnimal experiments have been reviewed and approved by the IRFMN Animal Care and Use Committee (IACUC) that includes members ad hoc for ethical issues. Animals were housed in the Institute\u2019s Animal Care facilities which meet international standards. They were regularly checked by a certified veterinarian who is responsible for health monitoring, animal welfare supervision, experimental protocols, and procedures revision.7 A2780 cells or fragments of the rare MPM487 human malignant pleural mesothelioma . Tumor growth was measured with a digital caliper two/three times a week and the tumor volume (mm3) was calculated as (length (mm) \u00d7 width2 (mm2))/2.Six to seven-week-old female NCr-nu/nu mice (from the Harlan Lab) were inoculated subcutaneously with 102 and all efforts were made to minimize suffering.When tumors reached approximately 500 mg, the animals were treated with different drugs or with the vehicle as a negative control. Mice bearing xenografts were treated with vehicle (CTRL) or a single dose of drug . 
Animals were sacrificed 1 hour after treatment (4 h for PTX) under COTumors and organs were explanted, then immediately snap-frozen in liquid nitrogen and stored at \u221280 \u00b0C until analysis.The different nanomaterials tested as matrices were firstly screened by spotting 0.5 \u00b5L of 100 pmol/\u00b5L drug standard solutions on the MALDI steel plate. After complete drying in air of the drug standard spots, 0.5 \u00b5L of matrix suspensions described above were spotted on top of the different drugs samples. Matrices were also tested on tissues to evaluate the influence of the biological matrix on the ion signal intensity, following the MSI protocol we recently published . Brieflywww.maldi-msi.org, M. Stoeckli, Novartis Pharma, Basel, Switzerland) with an imaging raster of 100 \u00b5m. Tissue View software 1.1 was used to process and display the ion distributions inside the tumor sections. The ions plotted for each drug are reported in A MALDI 4800 TOF-TOF was used, equipped with a 355 nm Nd:YAG laser with a 200 Hz repetition rate, controlled by the 4000 Series Explorer TM software . MS spectra were acquired with 20 laser shots with an intensity of 6000 arbitrary units, with a bin size of 1.0 ns, acquiring spectra in reflectron, both in negative and positive-ion mode. Images of tissue sections were acquired using the 4800 Imaging Tool Software (Testing different types of materials has made it possible to ionize molecules with different molecular weights and chemical properties, overcoming the problem of high background noise in the low mass range that is typical in classic MALDI experiments. Moreover, with these nanomaterials we could visualize the mitotic inhibitor PTX and the tyrosine kinase inhibitor IMT inside tumors harvested from treated mice."} {"text": "Parkinson\u2019s disease (PD) is a disabling neurodegenerative disease that manifests with resting tremor, bradykinesia, rigidity and postural instability. 
Since the discovery of microRNAs (miRNAs) in 1993, miRNAs have been shown to be important biological molecules involved in diverse processes to maintain normal cellular functions. Over the past decade, many studies have reported dysregulation of miRNA expressions in PD. Here, we identified 15 miRNAs from 34 reported screening studies that demonstrated dysregulation in the brain and/or neuronal models, cerebrospinal fluid (CSF) and blood. Specific miRNAs-of-interest that have been implicated in PD pathogenesis include miR-30, miR-29, let-7, miR-485 and miR-26. However, there are several challenges and limitations in drawing definitive conclusions due to the small sample size in clinical studies, varied laboratory techniques and methodologies and their incomplete penetrance of the blood\u2013brain barrier. Developing an optimal delivery system and unravelling druggable targets of miRNAs in both experimental and human models and clinical validation of the results may pave way for novel therapeutics in PD. Parkinson\u2019s disease (PD) is the most common movement disorder in the aging population. An estimated prevalence of 1% of people above 60 years old suffer from PD . In addiCurrently, clinical treatments available for PD are mainly symptomatic which include medications such as Levodopa, dopamine agonists, catechol-O-methyl transferase (COMT) inhibitors and monoamine oxidase B inhibitors and non-pharmacological interventions such as deep brain stimulation ,8. WhileUsing the advanced search builder function in PubMed, a systemic search was performed with keywords \u2018\u201cMicroRNAs\u201d (Majr) and Parkinson\u2019s disease\u2019 in September 2019. This resulted in 253 titles consisting of both reviews and original research articles. 
All relevant articles including screening studies and mechanistic studies were examined to identify the promising microRNAs in PD that are discussed in this review.MicroRNAs are small non-coding RNAs that are transcribed from miRNA genes and intronic sequences as primary miRNAs (pri-miRNAs) and stem-loop precursor miRNAs (pre-miRNAs) respectively. In the nucleus, pri-miRNAs are further processed to form pre-miRNAs by the Drosha/DGCR8 complex. Pre-miRNAs are then exported out of the nucleus by Exportin-5. In the cytosol, pre-miRNAs are cleaved by Dicer to produce double-stranded mature miRNAs. The mature guide strand, about 20\u201322 nucleotides long, is then loaded onto Argonaute proteins to form the RNA-induced silencing complex (RISC). The mature miRNA is responsible for associating RISC to its target messenger RNAs (mRNAs) by binding at complementary sequences located usually at the 3\u2032-UTR of the mRNA ,15 and fSince the discovery of miRNAs in 1993, increasing evidence have revealed miRNAs to be important biological molecules involved in diverse processes to maintain normal cellular functions ,21. In aUsing techniques such as RNA sequencing, microarray and microRNA qPCR profiling, many screening studies have been conducted in attempt to characterise the miRNAs dysregulation in PD in both the central nervous system (CNS) and the periphery. Our search strategy revealed 34 screening studies that have been conducted to identify the clinically significant miRNAs in PD. We included studies on animal PD models and in vitro neuronal models as we believe that clinically significant miRNAs identified in PD patients will also be highlighted in these models. Moreover, due to the difficulty in obtaining human PD patients\u2019 samples, the inclusion of these model studies can also reveal suitable PD models to study mechanisms of the identified miRNA(s). 
In The top 5 miRNAs listed in The miR-30 family consists of 5 members and 6 mature miRNA sequences, namely, miR-30a, miR-30b, miR-30c-1, miR-30c-2, miR-30d and miR-30e. The members share a common seed sequence near the 5\u2032-end but has different compensatory sequences near the 3\u2032-end, allowing the different members to target different genes and pathways . In the post-mortem human brain studies, miR-30b in the substantia nigra (SN) and miR-Besides the CNS, differential expression levels of miR-30 family members have also been detected in the peripheral blood of PD patients as compared to healthy controls. Downregulated expressions of miR-30b and miR-30c in the PBMCs and downUsing a PD mouse model, Dorval and colleagues (2014) revealed that miR-30a* was upregulated in the striatal tissues of LRRK2-KO mice which inThe miR-29 family consists of 3 members and 4 mature miRNA sequences, namely, miR-29a, miR-29b-1, miR-29b-2 and miR-29c. Mature sequences have identical seed sequences at nucleotide positions 2-7, suggesting that the targets for miR-29 members heavily overlap . In the post-mortem human brain, miR-29a, miR-29b-1 and miR-29b-2 were observed to be upregulated in the anterior cingulate gyri of PD patients . The dysIn the periphery, miR-29a-3p was reported to be upregulated in the WBCs of L-dopa-treated PD patients but not in the untreated PD patients . On the miR-29 has been shown to regulate various processes that are important in PD development, such as apoptosis ,67,68 anThe let-7 family members consist of let-7a to let-7-k, miR-98 and miR-202. While the let-7 sequence is well-conserved from nematode to human, different species can express different members of let-7. For example, in humans, we do not express let-7h/j/k. Interestingly, all let-7 members share the same seed sequence (nucleotides 2\u20138) for target recognition . In post-mortem human brain studies, let-7b in the SN and let-Caenorhabditis elegans (C. 
elegans). In animal PD models, let-7f was observed to be upregulated in the striatal tissues of LRRK2-KO mice as compared to control mice. Extensive research conducted on let-7 has revealed that let-7 members regulate processes such as apoptosis and immune-related pathways. Little has been discovered about the functions of miR-485. However, this miRNA has been suggested to be dysregulated in PD. In the post-mortem human brain, miR-485-5p was reported to be downregulated in the SN. Some miR-485-mediated pathways suggested in the literature include apoptosis and immune-related pathways. Beside apoptosis, miR-485 is also able to target the expression of peroxisome proliferator-activated receptor gamma (PPARγ) coactivator-1α (PGC-1α). The miR-26 family consists of miR-26a-1, miR-26a-2 and miR-26b in humans. The role of miR-26 in neurodegenerative diseases has not been well studied. However, in other disease models, miR-26 has been suggested to modulate processes such as DNA repair and apoptosis. We have highlighted several candidate miRNAs that may aggravate or mitigate PD progression through their actions in the CNS or periphery. miRNAs are capable of regulating several signaling pathways, as each miRNA can target and bind to an average of 100–200 genes, making them potent modulators of gene expression. Despite the importance of miRNA research, the literature currently available on the actions of miRNAs in PD is greatly lacking. Moreover, the results of miRNA studies have not been consistent with each other. There are several potential limitations in published studies. It is difficult to identify clinically important miRNAs in diseases because clinical studies thus far are limited by their small sample sizes, which may not have sufficient power to identify the effect size difference. 
In addition, the areas investigated also differ across the studies, as noted in the table. There are also several challenges in developing miRNA-based therapies. One major problem is the delivery of candidate miRNA(s) to specific sites. The development of specific and effective miRNA delivery systems is vital, as the delivery vehicle must allow miRNAs to cross the blood–brain barrier. As miRNAs are easily degradable, the delivery systems employed must also be able to stabilise and extend the life of the miRNAs. Furthermore, using miRNAs in treatment may be a double-edged sword. While it is advantageous that miRNAs, as powerful modulators of gene expression, have the ability to alter several signaling pathways at once to switch the cellular physiology from an apoptotic state to one that favors survival, this also means that side effects at off-target sites could be problematic. Hence, specific and effective delivery systems in miRNA-based therapy are vital. The current experimental approaches used in miRNA research have many limitations. As miRNAs are small and easily degradable, the existing isolation methods can lead to a great loss in yield or a biased recovery of certain miRNAs. This can affect the potential of miRNAs as diagnostic markers for diseases. Moreover, the experimental approaches required for miRNA studies are also very costly. Hence, these challenges could slow the progress of miRNA research. We have highlighted several miRNAs that show promise in having therapeutic and/or diagnostic potential in PD, as well as the pathophysiologic role of miRNAs in disease states. 
Developing an optimal delivery system, unravelling druggable targets of miRNAs in both experimental and human models, and clinically validating the results may pave the way for novel therapeutics in PD."} {"text": "Correction to: BMC Plant Biol https://doi.org/10.1186/s12870-019-1934-4 Following publication of the original article , the aut"} {"text": "Correction to: BMC Biol https://doi.org/10.1186/s12915-015-0140-6 Upon publication of the original article , the aut"} {"text": "Correction to: Cell Commun Signal https://doi.org/10.1186/s12964-019-0417-4 Following publication of the original article , the aut"} {"text": "Correction to: Syst Rev https://doi.org/10.1186/s13643-018-0919-y Following publication of the original article , the aut"} {"text": "Correction to: Mol Cancer (2019) 18:188 https://doi.org/10.1186/s12943-019-1119-7 After the publication of this work , the aut"} {"text": "Correction to: Cell Commun Signal https://doi.org/10.1186/s12964-019-0355-1 Following publication of the original article , the aut"} {"text": "Correction to: J Neuroinflammation https://doi.org/10.1186/s12974-019-1449-9 Following publication of the original article , the aut"} {"text": "Previous research suggests that family caregivers contemplate suicide at a higher rate than the general population. Much of this research has been disease-specific and in relatively small samples. This study aimed to compare suicidal thoughts between non-caregivers and informal caregivers of people with a variety of conditions, in a large representative sample, and to identify significant risk factors. The general population study NEMESIS-2 (N at baseline = 6646) included 1582 adult caregivers at the second wave (2010–2012) who also participated at the third wave (2013–2015). Suicidal thoughts were assessed over 4 years, with the Suicidality Module of the Composite International Diagnostic Interview 3.0. 
The presence of suicidal thoughts was estimated, and risk factors for suicidal thoughts were assessed with logistic regression analyses adjusted for age and gender. Thirty-six informal caregivers (2.9%) reported suicidal thoughts during the 4-year study period. The difference between caregivers and non-caregivers (3.0%) was not significant. Among caregivers, significant risk factors for suicidal thoughts included being unemployed, living without a partner, having lower levels of social support, having a chronic physical disorder, a mood disorder or an anxiety disorder, and having impaired social, physical and emotional functioning. These risk factors were also found in non-caregivers. No caregiving-related characteristics were associated with suicidal thoughts. There was no elevated rate of suicidal thoughts in caregivers, and risk factors for suicidal thoughts in caregivers were consistent with risk factors in non-caregivers. No association between caregiving characteristics and suicidal thoughts was found. Caregivers with limited resources and in poorer health might still benefit from prevention and intervention efforts. Family caregivers provide the cornerstone of care for people living with long-term illnesses and disabilities, and in Europe alone there are more than 100 million of them. More than 50 years' worth of research, however, has shown that caring takes a significant toll on the physical health, mental health, social engagement, career prospects, and financial security of family caregivers. The vast majority of this research, however, has used purposive, and relatively small, study samples. 
Few studies have used large, representative samples. The aim of this study was to compare the rate of suicidal thoughts in caregivers and non-caregivers in a large, representative sample, and to identify significant risk factors for suicidal thoughts in caregivers that might provide directions for prevention and intervention. NEMESIS-2 is a psychiatric epidemiological cohort study in the Dutch general population aged 18 to 64 years at baseline. It is based on a multi-stage, stratified random sampling procedure of households, with one respondent randomly selected in each household. The study was approved by the Medical Ethics Review Committee for Institutions on Mental Health Care. Participants provided written informed consent at each wave. The provision of informal care was assessed at the second wave (T1), and we therefore used the data of T1 and the subsequent wave (T2) for the current study to investigate suicidal thoughts. In the first wave of NEMESIS-2, 6646 respondents were interviewed. This sample was nationally representative, although younger subjects were somewhat underrepresented. The face-to-face interviews were laptop computer-assisted and mainly conducted at the respondent's home. The interviews were conducted by trained professional interviewers who were selected for their experience with systematic face-to-face data collection, experience with sensitive topics, and ability to achieve a good response rate in other studies. Fieldwork was monitored over the entire data collection period by the NEMESIS investigators and the fieldwork agency. For more information on quality checks of the data and a more detailed description of the design and fieldwork, see . At T1, 1759 respondents reported providing informal care. Of these caregivers, 1582 (89.9%) also participated in the third wave (T2) and were included in the current study. 
This enabled us to investigate the presence of suicidal thoughts over a period of 4 years. At the second wave (T1), respondents were asked whether they provided unpaid care in the previous 12 months to a family member, partner or friend who needed care because of physical problems, mental problems, or ageing. Suicidal thoughts were assessed with the Composite International Diagnostic Interview (CIDI) version 3.0, a fully structured lay-administered diagnostic interview for common mental disorders. This instrument was developed for use in the World Mental Health Survey Initiative. At T1, respondents were asked about thoughts of suicide in the previous 12 months (“Have you seriously thought about committing suicide?”) and at T2 they were asked about thoughts of suicide in the previous 3 years (i.e. since T1). At both waves, to encourage people to report their thoughts, plans or attempts of suicide, the experiences were not mentioned but listed in a booklet and referred to by number (“event A”), as was done in other studies as well. Sociodemographics included: age; gender; educational level; employment status; living with a partner; and parental status (cohabiting with child(ren) yes/no). Caregiving-related characteristics included: the type of relationship with the care recipient; number of care recipients; cohabiting with the care recipient; reasons for care; type of care; time spent giving care; and duration of care. Other characteristics included: Perceived social support from three sources was measured with two questions on instrumental and emotional support from each source. These referred to the extent respondents could rely on them for help if they had a problem and could open up to them if they needed to talk about worries. The four response categories ranged from one to four (a lot). 
Perceived social support was calculated as the mean score from at least two sources, because not all respondents had a partner at the time of interview. ◦ Chronic physical disorders were identified with a standard checklist of 17 chronic physical disorders, treated or monitored by a medical doctor in the 12 months prior to T1, including: asthma; chronic obstructive pulmonary disease; chronic bronchitis; emphysema; severe heart disease; heart attack; hypertension; stroke; stomach or intestinal ulcers; severe intestinal disorders; diabetes; thyroid disorder; chronic back pain; arthritis; migraine; impaired vision or hearing; or any other chronic physical disorder. ◦ Mood disorders and anxiety disorders in the last 12 months were identified with the CIDI 3.0. ◦ The presence of ten negative life events in the 12 months prior to T1 was measured, based on the Brugha Life Events scale. ◦ Role functioning was assessed for the past 4 weeks in the physical, mental and social domains from the Medical Outcomes Study Short Form Health Survey (SF-36). A broad set of variables was used to identify potential risk factors for suicidal thoughts. These were all assessed at T1, except for educational level, which was assessed at T0. Obviously, the caregiving-related characteristics were not assessed in non-caregivers. Statistical significance was set at p < 0.05. Analyses were conducted in SPSS 22 and STATA 12.1. First, we compared the baseline characteristics of caregivers who dropped out at T2 with caregivers who completed T2 by performing logistic regression analysis. Second, the rate of suicidal thoughts was calculated as the number of caregivers who reported these thoughts at any point during the 4 years, i.e. between the 12 months prior to T1 and T2. 
We checked whether the rate of suicidal thoughts differed between informal caregivers and non-caregivers with a Chi-squared test, using weighted data to correct for differences in response rates in several sociodemographic groups at both waves and for differences in the probability of selection of respondents within households at baseline. The characteristics of the study sample are described in the table. Over the 3-year follow-up, 10% of the caregivers (n = 177) were lost to follow-up. None of the baseline variables were significantly associated with loss to follow-up. Across the 4 years, 36 of the caregivers (2.9%, weighted percentage) reported suicidal thoughts. The weighted prevalence of suicidal thoughts in informal caregivers did not differ significantly from that in people who did not provide informal care at T1 (p = 0.89). In informal caregivers, significant risk factors included limited resources and poor health; these risk factors were also found in non-caregivers. There was no significant association between suicidal thoughts and specific caregiving characteristics, such as type of caregiving. Our finding that the rate of suicidal thoughts did not differ between caregivers and non-caregivers is in line with a previous, relatively small study among family caregivers of people with dementia in Japan. The risk factors for suicidal thoughts in caregivers identified in the current study are broadly consistent with previous research. Poor physical health, poor mental health, and a lack of social support have all been previously identified as risk factors for suicidal thoughts in family caregivers. 
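The unweighted core of the Chi-squared comparison described above can be illustrated with a minimal sketch. The 2×2 counts below are synthetic stand-ins rather than the study's data, and the survey-weighting step is omitted:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared test (1 df, no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    observed = [a, b, c, d]
    # Expected counts under independence of rows and columns.
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # Survival function of chi-squared with 1 df: P(X > x) = erfc(sqrt(x/2)).
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Rows: caregivers / non-caregivers; columns: suicidal thoughts / none.
stat, p = chi2_2x2(36, 1546, 150, 4850)  # hypothetical counts
```

A perfectly balanced table yields a statistic of 0 and p = 1; reproducing the study's analysis would additionally require a design-based test that incorporates the survey weights.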
In a study of caregivers in Taiwan, Huang and colleagues (2018) found that people caring for a family member with a mental disorder reported significantly higher rates of suicidal thoughts than those caring for a family member with a physical disorder. We did not find a significant association between reason for care and suicidal thoughts, nor between type of caregiving and suicidal thoughts. The association between suicidal thoughts and caring for someone with a mental disorder did, however, approach significance (p = 0.07). Future research would benefit from samples that facilitate multivariate analysis, although this is complex when studying relatively rare events such as suicidal thoughts. Furthermore, because of the relatively small number of caregivers reporting suicidal thoughts, it was not considered appropriate to examine possible recurrence of suicidal thoughts or report the rate of suicide attempts in this population. Despite this, understanding how suicidal thoughts arise and persist across the caregiving trajectory, as well as how suicidal thoughts are related to suicide attempts in this population, remains important for the development of tailored prevention and intervention strategies. Previous research has shown that suicidal thoughts can occur while caring at home, following institutionalization, and after bereavement. The presence of suicidal thoughts was based on retrospective self-reports over the last 12 months (at T1) and the previous 3 years (at T2). Due to this relatively long time span, recall bias may have occurred and resulted in underreporting of suicidal thoughts. 
However, recall bias is not likely to differ between caregivers and non-caregivers in this study, and therefore it is unlikely that this has influenced the comparison of the rate and risk factors of suicidal thoughts between caregivers and non-caregivers. Lastly, respondents were defined as informal caregivers if they provided unpaid care in the twelve months preceding the second wave (T1), but no data on their caring status 3 years later were available. It is possible that some people were no longer providing informal care by the third wave, and this may have influenced the presence or absence of suicidal thoughts. There is a small but rapidly growing body of research on suicidal thoughts in family caregivers, and the current study makes a substantial contribution. The absence of any significant difference in the rate of suicidal thoughts between caregivers and non-caregivers stands in contrast to previous findings and highlights the need for more nuanced research to understand this complex phenomenon. Research that can identify which carers are at greater risk of suicidal thoughts, and when, will be key to developing effective intervention and prevention strategies. In the meantime, the current findings suggest that, at the least, caregiving is not a protective factor, with caregivers no less likely to consider suicide than non-caregivers. Risk factors for suicidal thoughts in informal caregivers were largely similar to the risk factors in non-caregivers. The current findings suggest that caregivers with limited resources and poor health might benefit from prevention and intervention efforts. Finally, quantitative studies do not allow consideration of the broader social context in which both caring and living with a disability take place."} {"text": "Cerebellar hemorrhage is a potentially life-threatening condition and neurologic deterioration during hospitalization could lead to severe disability and poor outcome. 
Identifying the factors that influence neurologic deterioration during hospitalization is essential for clinical decision-making. One hundred fifty-five consecutive patients who suffered a first spontaneous cerebellar hemorrhage (SCH) were evaluated in this 10-year retrospective study. This study aimed to identify potential clinical, radiological and clinical-scale risk factors for neurologic deterioration during hospitalization and outcome at discharge. Neurologic deterioration during hospitalization developed in 17.4% (27/155) of the patient cohort. Obliteration of the basal cistern (p ≤ 0.001) and hydrocephalus (p ≤ 0.001) on initial brain computed tomography (CT), median Glasgow Coma Scale (GCS) score at presentation (p ≤ 0.001) and median intracerebral hemorrhage (ICH) score on admission (p ≤ 0.001) were significant factors associated with neurologic deterioration. Stepwise logistic regression analysis showed that patients with obliteration of the basal cistern on the initial brain CT scan had an odds ratio (OR) of 9.17 (p = 0.002; 95% confidence interval (CI): 0.026 to 0.455) for the adjusted risk of neurologic deterioration compared with those without obliteration of the basal cistern. An increase of 1 point in the ICH score on admission would increase the neurologic deterioration rate by 83.2%. The ROC curves showed that the AUC for the ICH score on presentation was 0.719, and the cutoff value was 2.5 (sensitivity 80.5% and specificity 73.7%). Patients with obliteration of the basal cistern on the initial brain CT and an ICH score of 3 or greater at admission are at greater risk of neurologic deterioration during hospitalization. Cautious clinical assessment and repeated brain imaging studies are mandatory for these high-risk patients to prevent neurologic deterioration during hospitalization. Spontaneous cerebellar hemorrhage (SCH) accounts for 5 to 13% of all cases of spontaneous intracerebral hemorrhage and about 15% of cerebellar strokes . 
Because of its unique location near the brainstem, neurologic deterioration usually results from brain stem compression due to the direct mass effect of the haematoma and/or the development of hydrocephalus. This study aimed to identify potential clinical, radiological and clinical-scale risk factors to predict neurologic deterioration during hospitalization and outcome at discharge in patients with SCH. This is a single-centre retrospective study. Medical records were retrospectively reviewed using pre-existing standardized evaluation forms as well as brain computed tomography (CT) findings for patients with SCH admitted to the Department of Neurology or Neurosurgery in our tertiary academic centre from January 2005 to April 2015. The study was approved by the Institutional Review Board (IRB)/Ethics Committee. On admission, a detailed physical examination, routine laboratory testing, and brain imaging were performed for all patients. The initial neurologic state was evaluated with the Glasgow Coma Scale (GCS). Systolic blood pressure, diastolic blood pressure, heart rate, and body temperature were recorded immediately before brain CT scanning. After the CT scan, the ICH score was calculated from all of its parameters. Acute SCH was diagnosed from the clinical history and brain CT. Patients were excluded if they had: 1) non-spontaneous cerebellar hemorrhage, such as traumatic cerebellar hemorrhage; 2) SCH caused by a primary or secondary brain tumor, cavernomas, arteriovenous malformations or aneurysms, or hemorrhagic transformation of a cerebellar infarct; or 3) preexisting neurological conditions with various neurological deficits. From January 2005 to April 2015, a total of 186 patients with acute SCH were admitted to our hospital. Because this study evaluates neurologic deterioration during hospitalization, patients with a GCS of 3 and no brain stem reflexes on presentation were excluded. 
Finally, a total of 155 patients were enrolled in the study. Neurosurgical treatment comprised external ventricular drainage (EVD) only, midline suboccipital craniectomy for hematoma evacuation, or suboccipital craniotomy plus EVD. Neurologic deterioration during hospitalization was defined as one or more of the following identified episodes: 1) a spontaneous decrease in the GCS motor score of 2 points or more from the previous neurologic examination; 2) development of loss of pupillary reactivity; or 3) pupillary asymmetry greater than 1 mm. The patients were divided into two groups according to discharge outcome: 1) the good outcome group, with independent performance of daily activities (score = 4 or 5); and 2) the poor outcome group, with disability in daily living, vegetative state, or death. Descriptive data are presented as median and inter-quartile range (IQR). Categorical variables were assessed with the Chi-square test or Fisher's exact test. The Mann-Whitney U test was used for analysis of continuous variables. The Spearman rank test was applied for correlation analysis of the relationships between age, GCS and laboratory data. Statistical significance was set at p < 0.05. We used stepwise logistic regression analysis to evaluate the association between significant variables and neurologic deterioration during hospitalization and outcome. An ROC curve was generated to estimate an optimal cut-off value for the ICH score on admission, and the area under the ROC curve was measured. All statistical analyses were conducted using SPSS software (IBM SPSS Statistics version 22.0). From January 2005 to April 2015, a total of 186 patients with SCH were admitted to our hospital. Thirty-one patients whose GCS was 3 and who had no pupillary reflex at presentation were excluded. A total of 155 patients were finally enrolled in this study. Hydrocephalus on initial imaging was present in 69 patients (44.5%). 
The appearance of 4th ventricle compression was grade I in 30 (19.4%), grade II in 65 (41.9%), and grade III in 60 (38.7%) of the study sample. Eighty-four patients (54.2%) exhibited obliteration of the basal cistern. Neuroradiological characteristics and neurosurgical treatment at presentation are listed in the table. Forty-one patients (26.5%) received neurosurgical intervention, including 11 patients (7.1%) with suboccipital decompressive craniectomy, 4 patients (2.6%) with EVD only, and 26 patients (16.8%) with both suboccipital decompressive craniectomy and EVD. Ten patients underwent repeat craniectomy. The median time between symptoms and surgery was 2.46 h (IQR: 1.81–9.0 h). The median time to neurologic deterioration after admission was 12.5 h (IQR: 6–72 h). Significant clinical factors included arterial hypertension (p = 0.038), liver cirrhosis (p = 0.005), initial presentation with loss of consciousness (p = 0.029), and elevated heart rate (p = 0.010). Statistical analysis also revealed significant associations with the initial neuroimaging findings: median cerebellar hemorrhage (CH) volume (p ≤ 0.001), obliteration of the basal cistern (p ≤ 0.001) and hydrocephalus (p ≤ 0.001). However, there was no significant difference for patients presenting with intraventricular hemorrhage on the initial CT scan (p = 0.108). GCS and ICH scores at presentation showed significant statistical differences between the two groups (both p ≤ 0.001). Twenty-six patients died during hospitalization; twenty-three died of the hemorrhage. Three patients died of unrelated causes: one of aspiration pneumonia, one of massive GI bleeding, and one of uncontrolled liver cirrhosis. 
Factors predictive of neurologic deterioration during hospitalization and of outcome in patients with SCH are listed in the table. Regarding functional outcome at discharge, the neurologic deterioration group showed poorer outcomes than the group without neurologic deterioration. Only 1 of 27 patients in the neurologic deterioration group had a good outcome, compared with 113 of 128 patients in the group without neurologic deterioration (p ≤ 0.001). Of the 27 patients who had neurologic deterioration during hospitalization, 22 died during hospitalization. Of the five patients who survived, only one had a good outcome (GOS = 5) with a minor neurologic deficit; two were in a vegetative state (GOS = 2) and the other two were in a severe disability state (GOS = 3) at discharge. Moreover, the SCH patients with neurologic deterioration during hospitalization had a particularly high mortality rate: 22 of 27 (81.4%) patients died during hospitalization in the neurologic deterioration group, compared with 4 of 128 (3.1%) in the group without neurologic deterioration (p ≤ 0.001). Stepwise logistic regression analysis identified obliteration of the basal cistern on the initial brain CT and a higher ICH score as independent risk factors for neurologic deterioration during hospitalization (p = 0.002 and p = 0.010, respectively). The adjusted risk for SCH patients with obliteration of the basal cistern on the initial brain CT scan was an odds ratio (OR) of 9.17 (95% CI: 0.026 to 0.455) compared with those without obliteration of the basal cistern. Furthermore, an increase of 1 point in the ICH score on admission would increase the risk of neurologic deterioration during hospitalization by 83.2%. 
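The odds-ratio summaries above are simple transformations of fitted logistic-regression coefficients. A minimal sketch of that arithmetic (the coefficient-to-OR step is generic, not the study's model output):

```python
import math

def or_from_coef(beta):
    """Odds ratio implied by a logistic-regression coefficient."""
    return math.exp(beta)

def percent_increase(odds_ratio):
    """Per-unit percent change in the odds implied by an odds ratio."""
    return (odds_ratio - 1.0) * 100.0

# An OR of 1.832 per ICH-score point corresponds to an 83.2% increase
# in the odds of neurologic deterioration per additional point.
print(round(percent_increase(1.832), 1))  # 83.2
```

A coefficient of 0 gives an OR of 1, i.e. no change in the odds; a negative coefficient gives an OR below 1 and a percent decrease.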
The cutoff value of the ICH score on presentation was 2.5 (sensitivity 80.5% and specificity 73.7%). In our study, 17.4% of patients with SCH had neurologic deterioration during hospitalization, and most of these patients had poor outcomes. Arterial hypertension, initial loss of consciousness, obliteration of the basal cistern, hydrocephalus and CH volume on the initial brain CT, a lower GCS and a higher ICH score were statistically significant in patients with neurologic deterioration. Although those factors were statistically significant, only obliteration of the basal cistern on the initial brain CT and a higher ICH score were independent risk factors for poor outcome (p = 0.002 and p = 0.010, respectively). Obliteration of the basal cistern and hydrocephalus on the initial brain CT, a lower GCS score and a higher ICH score on presentation imply a greater danger of neurologic deterioration during hospitalization. The median time to neurologic deterioration after admission was 12.5 h. Cautious clinical assessment, repeat neuroimaging within the first 12 h after admission and the possibility of early neurosurgical intervention need to be recognized by clinical physicians for high-risk patients, especially those with an ICH score of 3 or greater, in order to manage neurologic deterioration early in SCH patients."} {"text": "This strategy could provide standardization in CONUS tidal carbon accounting until such a time as modeling and mapping advancements can quantitatively improve accuracy and precision. Tidal wetlands produce long-term soil organic carbon (C) stocks. Thus, for carbon accounting purposes, we need accurate and precise information on the magnitude and spatial distribution of those stocks. 
We assembled and analyzed an unprecedented soil core dataset, and tested three strategies for mapping carbon stocks: applying the average value from the synthesis to mapped tidal wetlands, applying models fit using empirical data and applied using soil, vegetation and salinity maps, and relying on independently generated soil carbon maps. Soil carbon stocks were far lower on average, and varied less spatially and with depth, than stocks calculated from available soils maps. Further, variation in carbon density was not well-predicted based on climate, salinity, vegetation, or soil classes. Instead, the assembled dataset showed that carbon density across the conterminous United States (CONUS) was normally distributed, with a predictable range of observations. We identified the simplest strategy, applying mean carbon density (27.0 kg C m-3). Mapping tidal carbon stocks and fluxes is challenging, with substantial implications for ecology5, carbon markets7, resiliency9, and greenhouse gas (GHG) inventorying10. Tidal wetlands, herein including saltmarshes, tidal freshwater wetlands, and tidally influenced forests such as mangroves, are a substantial global sink of organic carbon (C). Organic matter produced in-situ is deposited primarily by root addition into shallow anoxic soils11. As sea level rises, organic deposition, as well as inorganic sediment deposition, contributes new soil mass that, under the right conditions, allows the wetland surface to vertically accrete and gain elevation in equilibrium with relative sea-level rise13. Long-term storage properties are variable and depend on salinity, flooding, plant type, and microbial community activity14. Tidal wetlands can be a major source of carbon emissions when the soil is lost to erosion15 or other disturbances17. Processes such as drainage or diking can result in direct oxidation of soil carbon or the emission of methane, depending on soil type, inundation, and salinity18. 
Erosion results in export of particulate and dissolved organic carbon to other aquatic systems, a portion of which is oxidized and returned to the atmosphere19. Tidal wetlands store carbon in their soil organic matter when they are stable and release carbon when they are degrading. We refer to precision throughout as the agreement among repeated comparisons of mapped and reference values. The lack of precision is referred to throughout as imprecision, also known as random error20. In order to both evaluate existing carbon stocks at sub-national to local scales, and estimate emissions from tidal wetlands that are lost during erosion and degradation events, we require accurate and precise soil carbon mapping strategies. We refer to accuracy throughout as the average difference between mapped and reference values. The lack of accuracy is referred to throughout as bias, also commonly referred to as systematic error20. The International Panel on Climate Change (IPCC)'s 2013 Wetlands Supplement10 to the 2006 assessment report provides global default values based on a literature review. It also provides guidance for 'higher tier' analyses utilizing country-specific data, such as a more extensive and thorough review of country-specific soil core data22 or the use of soils maps23. The IPCC Wetland Supplement guidance recommends disaggregating wetland emissions based on soil type, as well as by vegetation community, salinity, and climate type. However, the relative importance of these factors and the efficacy of applying separate estimates have not been evaluated at the scale of the conterminous U.S. (CONUS). Herein we discuss three types of strategies for estimating carbon stocks: applying average carbon stock values from syntheses of soil core data, applying models fit using empirical data and applied spatially using soil, vegetation and salinity maps, and relying on independently generated soil carbon maps that intersect with mapped tidal wetlands. 
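The first of the three strategies above, applying a single mean carbon density, reduces to multiplying that density by mapped wetland area and assessment depth. A minimal sketch using the 27.0 kg C m-3 mean from the core synthesis; the area and depth inputs are hypothetical placeholders:

```python
MEAN_C_DENSITY = 27.0  # kg C per m^3, CONUS-wide mean from the core synthesis

def soil_carbon_stock(area_m2, depth_m, density=MEAN_C_DENSITY):
    """Soil carbon stock (kg C) for a mapped wetland area, assuming a
    uniform carbon density over the assessed soil depth."""
    return density * area_m2 * depth_m

# Hypothetical example: one hectare integrated over the top metre of soil.
stock_kg = soil_carbon_stock(10_000, 1.0)  # 270,000 kg C (270 t C)
```

The appeal of this strategy is exactly this simplicity: a single well-constrained density propagates directly into stock estimates without requiring soil, vegetation, or salinity maps.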
Morris et al.24 recently presented an 'ideal mixing model', which describes the physical and volume-limited nature of bulk density in tidal wetland soils. Bulk density and organic matter content are not independent variables; instead, bulk density is a predictable function of organic matter26, the product of the 'self-packing densities' of organic and mineral soil fractions24. Although Morris et al.24 fit this model to describe constraints on tidal wetlands' resilience to relative sea-level rise, it also has important ramifications for carbon monitoring, as organic matter self-packing density defines an effective upper limit for likely ranges of observable organic matter density. Hinson et al.23 utilized the United States Department of Agriculture (USDA)'s Soil Survey Geographic Database (SSURGO) for a CONUS-wide stock assessment, independent of the previously described soil core syntheses. SSURGO is a CONUS-wide series of high resolution soil maps27 that link soil classifications and descriptions to tables populated with associated bulk density and % organic matter28 depth series information. However, the underlying information used to populate SSURGO soils maps with organic matter content and bulk density values is not necessarily empirical. It can be based on laboratory measurements, or can be assembled from literature, interviews with experts, or interpreted using a soil scientist's expert judgement27. Hinson et al.23 were not able to perform a full accuracy assessment of their maps because data were not readily available through the literature. One study provided a regional independent validation of SSURGO carbon data for tidal wetlands, among other land cover types, in Louisiana29.
The study observed a weak positive correlation in organic matter content between SSURGO and independently collected soil cores but did not assess the accuracy of SSURGO-based carbon stock maps29. In the absence of, and with an interest in developing, a robust national-scale strategy for estimating tidal wetland carbon stocks, our goals are twofold. First, we evaluated the efficacy of IPCC Wetlands Supplement guidance for reporting and applying soil carbon stock values based on soil type, climate type, and salinity and vegetation. Second, we evaluated whether or not more complex, spatially-explicit approaches improved precision and accuracy over a simpler strategy, applying a single mean value based on an extensive empirical dataset. We assembled a spatially explicit database totaling 1959 soil cores from 49 different studies across CONUS. We described carbon density using mean and standard deviation (s.d.) assuming a truncated normal distribution in which values could not be lower than 0; the fit of this distribution was an improvement over a log-normal distribution (Fig.). Model fitting occurred in three major steps: fitting the ideal mixing and organic matter density models, determining an appropriate threshold for categorizing organic- and mineral-dominated soils, and fitting two mixed effects models, one with soil type as an independent variable (model 1) and one without (model 2), to describe the major categorical trends and effect sizes within the data. The self-packing density of organic matter (k1) was estimated with s.e. 0.001 (p < 0.0001), and the inorganic self-packing density (k2) was 1.67 g cm−3 ± s.e. 0.025 (p < 0.0001).
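The zero-truncated normal description of carbon density used above can be sketched with a short stdlib-only example. The mean of 27.0 kg C m−3 is taken from the text; the standard deviation of 13 is an assumed round number for illustration, not the study's reported value.

```python
import math

def phi(z):
    """Standard normal probability density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal cumulative distribution."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def truncnorm0_pdf(x, mu, sigma):
    """PDF of a normal(mu, sigma) truncated below at 0: carbon density >= 0."""
    if x < 0:
        return 0.0
    z = (x - mu) / sigma
    return phi(z) / (sigma * (1 - Phi(-mu / sigma)))

def truncnorm0_mean(mu, sigma):
    """Mean of the zero-truncated normal; sits slightly above mu when mu >> 0."""
    a = -mu / sigma
    return mu + sigma * phi(a) / (1 - Phi(a))

mu, sd = 27.0, 13.0            # sd is an assumed illustrative value
print(truncnorm0_pdf(-1.0, mu, sd))   # 0.0 : no probability mass below zero
print(round(truncnorm0_mean(mu, sd), 2))
```

Truncating at zero moves the mean only slightly above 27 here, which is consistent with the text's point that the distribution is effectively normal with a predictable range.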
When the mixing model was fit to separate 10 cm depth intervals, k1 ranged between 0.086 and 0.146 g cm−3 and k2 ranged between 1.34 and 2.32 g cm−3; neither k1 nor k2 exhibited a trend with depth to 1 m. The organic matter density model is useful for describing variability along a spectrum of soil types; however, soils are typically mapped using the binary categories of organic- and mineral-dominated. We detected a significant threshold at 13.2% organic matter (this equals 0.48 σc), which explained more variance (R2 = 0.30, p < 0.0001) in the carbon mass data than the prescribed definition of either >20% or >35%, with organic soils having higher carbon density than mineral soils. Pseudo R2 for model 2 decreased from 0.51 to 0.32 and AICc increased from 7278 to 8454. Climate, vegetation and salinity type, and depth interval, as well as an interactive effect, were present in the most parsimonious version of model 2, indicating that the model performed better than the application of the mean from the reference dataset; however, it did not incorporate any of the uncertainty in the underlying layers needed to create national-scale mapping products. These included SSURGO for soil type (model 1), and the Coastal Change Analysis Program (C-CAP) for salinity and vegetation types (models 1 and 2). When model 1 was applied using a derivative SSURGO-based organic and mineral-dominated soils map and C-CAP maps for salinity and vegetation classes, precision decreased, and normalized total error increased above the 1 σr performance threshold; depth-interval values ranged from the 10 to 20 cm interval down to −0.06 σr at 80 to 90 cm.
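The ideal mixing model discussed above can be written out directly: bulk density is the reciprocal of the volume contributions of the organic and mineral fractions. The sketch below uses the fitted k2 = 1.67 g cm−3 from the text; the overall k1 value is truncated in the source, so 0.09 g cm−3 is an assumed value within the 0.086–0.146 range reported for the 10 cm depth intervals.

```python
def mixing_model_bulk_density(om, k1=0.09, k2=1.67):
    """Ideal mixing model (after Morris et al.): bulk density (g cm-3) as a
    function of organic matter mass fraction om (0-1).
    k1 = organic self-packing density (assumed 0.09 g cm-3 for illustration);
    k2 = inorganic self-packing density (1.67 g cm-3, as fitted in the text)."""
    return 1.0 / (om / k1 + (1.0 - om) / k2)

def om_density(om, k1=0.09, k2=1.67):
    """Organic matter density (g OM cm-3): approaches, but on average does not
    exceed, the organic self-packing density k1 as om -> 1."""
    return om * mixing_model_bulk_density(om, k1, k2)

for om in (0.05, 0.132, 0.5, 1.0):
    print(om, round(mixing_model_bulk_density(om), 3), round(om_density(om), 3))
```

Note how organic matter density flattens toward k1 above the 13.2% threshold, which is the effective upper limit the text describes.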
Bias correcting SSURGO using the known relationship between organic matter content and bulk density24 substantially reduced bias, but did not improve precision or reduce RMSE* below the required threshold of 1 σr (values range from 0.24 to 0.30 g cm−3). We mapped 2.67 million hectares (m ha) of coastal wetlands based on national scale maps. Given our most precise and accurate applied strategy, a simple 27 kg C m−3 average carbon mass assumption, we estimate 0.72 Pg of C for the top 1 meter of soil. SSURGO soils maps do not perfectly overlap mapped tidal wetlands because of missing or incomplete survey data; overlapping area covered 1.97 m ha (Table). Comparing all approaches, the SSURGO-limited spatial mapping led to a tidal wetland C stock total of 0.53 Pg C using the simple mean, and 0.54 Pg using the bias-corrected SSURGO data, whereas models 1 and 2 resulted in slightly lower national-scale carbon stock values (0.43 and 0.37 Pg). In contrast, utilizing unadjusted SSURGO data and maps resulted in a CONUS stock estimate of 1.15 Pg C, thus 54% higher than the approach of applying a single average carbon density. We visualized the comparison between these maps for the Louisiana Delta in Fig. Since we detected negative bias in classifying organic soils using SSURGO organic matter content data, and positive bias when calculating carbon density using both SSURGO organic matter content and bulk density data, we reviewed the empirical data that informs SSURGO at the pedon level. The National Cooperative Soil Survey (NCSS)'s pedon database archives quantitative data available to SSURGO soil scientists. One-hundred eleven pedons overlapped mapped tidal wetland area. These included 14 of the 22 tidal CONUS states and were both less numerous and less spatially representative than our empirical dataset.
Further, a close inspection of the archived data shows very limited measurements from tidal wetland pedons: approximately one-third of the pedons lack any empirical bulk density or organic carbon data, and most of the remaining pedons are either missing empirical organic carbon or bulk density data, or missing some depth horizon data. Four of the 111 relevant pedons had both organic carbon and bulk density measurements complete and continuous down to 1 m depth. The average carbon mass we found in our large empirical dataset is comparable to multiple previous syntheses in the U.S. and other locations. The mean value we observed using a Tier II (nation-specific) approach is within the confidence intervals for IPCC global default values for salt marshes (25.5 [25.4–29.7 95% CI] kg C m−3); therefore, applying Tier I defaults using reference carbon stocks provided by the IPCC Wetlands Supplement would have been reasonable. A recent independent CONUS-wide study supported this estimate, reporting an adjusted 28.0 ± 7.8 s.d. kg C m−2 to 1 m in soils of saline wetlands21. For European tidal marshes, van Broek et al.32 report a mean of 26.1 kg C m−3. In Southeastern Australia, mangroves and marshes have a mean carbon density of 25.3 kg C m−3 33, with no effect of vegetation type but geomorphic influences within drainages. Similar to our study, a narrow distribution is observed across 323 samples from Australian salt marshes, around a relatively low mean of 16.5 ± 0.7 s.e. kg C m−2 34, but that distribution is strongly weighted toward mineral soils with less than 14% organic matter; the majority of the variability occurs within these mineral soils, and appears to be related to grain size associated with geomorphic conditions33.
This is consistent with the assumption of the organic matter density model, that carbon density should vary over a fairly narrow range, predictably increasing only along a spectrum of more to less mineral-dominated soils between 0 and 13.2% organic matter; above that threshold, carbon mass changes less with the organic/inorganic composition ratio. This established an effective upper limit of 0.048 gC cm−3 (48.0 kg m−3) that highly organic soils should not theoretically exceed on average. We note that while the assumption of an additive relationship between organic and inorganic fractions was useful for describing soil properties in this synthesis, there is an argument and some evidence that this assumption may not apply to karstic mangrove soils of the Everglades and Gulf of Mexico31. The concepts introduced by the ideal mixing model and the organic matter density model provide some context for the low variability in observed carbon density. Our analysis of competing mapping and modeling strategies indicated that many geographic categories previously emphasized in the literature may be less important than previously assumed (Fig.). Macreadie et al.1 reported higher carbon densities for mangrove soil globally, and the IPCC Wetlands Supplement recommends a higher default estimate for mangroves compared to tidal marshes. However, after controlling for random submitter effect and soil type, we observed little effect of vegetation type on carbon density. These apparent biases may be a result of low data availability to SSURGO soil scientists. Although standardized precision, accuracy, and total error metrics indicated that model 1, dominated by soil type effects, had the potential to outperform the use of a single average carbon density value, in practice it did not.
Our accuracy assessment indicated that the CONUS soil map misclassified as many as 42.8% of organic soil observations as mineral. The unadjusted SSURGO-based stock was higher than that estimated from either bias-corrected SSURGO or from our simple empirical mean approach (0.0283 PgC). Barataria Basin has been documented to have had some of the highest wetland loss rates in Louisiana (−12.10 ± 2.51 km2 yr−1 from 1985–2010)36. Assuming emissions commensurate with that loss rate, using watershed averages as in Hinson et al.23, would have resulted in estimated emissions from 1985 to 2010 of 18.3 m tonnes CO2 according to the null map, 10.2 for bias-corrected SSURGO, and 37.1 for SSURGO. Our previous analyses showed that on the national scale SSURGO and bias-corrected SSURGO lack precision compared to the null strategy. For accuracy assessment purposes we made the assumption that reference values were an improvement over mapped values, and that they approximated 'true values'; however, lab-to-lab variability was not insubstantial. Controlled comparative lab studies show that inter-lab variance in loss-on-ignition is higher than intra-lab differences, and lab-specific bias is associated with sample size, ignition time, and ignition temperature38. Future studies could attempt to further control for these three variables. In line with IPCC guidance10, future studies could also improve total soil stock estimates by more explicitly measuring and reporting deposit depth: in our empirical dataset most cores were from shallow coring efforts (e.g. 24 cm for the Louisiana Coastwide Reference Monitoring System [CRMS]), and comparatively few studies reported reaching bedrock or non-marsh sediment interface. Applying a single mean carbon density was the most precise approach we tested, and importantly, is unlikely to substantially bias coastal carbon monitoring efforts. Overall, our analysis supports the use of a simpler metric for both stock assessment and estimating emissions.
However, we also present metrics by which future efforts could be assessed and intercompared for predicting carbon distributions at system-relevant spatial scales. Given a single average carbon stock estimate, an assumption of 1 m depth, and an area of 2.67 m ha, we estimate CONUS tidal wetlands contain 0.72 Pg of soil organic carbon. Soil carbon stocks, based on a large empirical dataset, were far lower on average and varied far less spatially and with depth than stocks calculated from available national-scale soils maps. Soil type was the single most important driver of variability, with little evidence of climate, vegetation, or salinity type influencing C stocks enough to produce a predictive model. For tidal wetland soils with >13.2% organic matter, organic matter density increases very little as organic fraction increases. We quantified upper limits on mean tidal soil carbon stock defined by organic matter's self-packing density, although tidal wetland carbon stocks are greater in organic compared to mineral soils. We assembled data from the peer-reviewed literature69, reports72, public databases75, and unpublished data in preparation for peer review submitted by co-authors or generous members of the scientific community. Six hundred two cores reported measured organic carbon (%): 475 in addition to organic matter, and 127 without. Carbonates were often removed physically by applying dripped dilute acid or fumigating with concentrated acid (n = 138). Four hundred twenty five cores reported organic carbon without specifying acid treatments or reporting other strategies for carbonate removal, which is typically most important in karst embayments. Thirty nine of the cores in the dataset measured % total carbon rather than organic carbon, but we assumed that carbonate was a minimal contribution to the total. We used a subset of paired data points from six data sources that had published organic matter content by LOI and organic carbon by elemental analysis77.
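The headline stock estimate above follows from simple unit arithmetic, which can be checked directly (all constants are taken from the text):

```python
# CONUS tidal wetland stock from the simple-mean strategy:
# 27 kg C m-3 over the top 1 m of 2.67 million hectares.
MEAN_C_DENSITY = 27.0   # kg C m-3, synthesis mean
DEPTH_M = 1.0           # assumed soil depth, m
AREA_HA = 2.67e6        # mapped coastal wetland area, hectares
M2_PER_HA = 1.0e4       # square meters per hectare
KG_PER_PG = 1.0e12      # kilograms per petagram

stock_pg = MEAN_C_DENSITY * DEPTH_M * AREA_HA * M2_PER_HA / KG_PER_PG
print(round(stock_pg, 2))   # 0.72 Pg C, matching the reported total
```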
Studies spanned CONUS including the Pacific Northwest, San Francisco Bay, Louisiana, the Everglades, and Long Island Sound. We independently verified a function for predicting organic carbon from organic matter published by Craft et al.76, where OC = fraction organic carbon and OM = fraction organic matter. We modeled organic carbon as a quadratic function of organic matter using multi-model inference, an algorithm which selects optimal models based on Akaike's Information Criterion for small datasets (AICc)79, in 'R'80. A quadratic function outperformed a linear function in terms of parsimony relative to explanatory power. We summarized empirical bulk density, organic matter, and organic carbon content data across 10 cm increments down to 1 m using a depth-weighted average, normalizing sampling intervals to 1 cm increments and summing across the 10 cm depth intervals. If the deepest sample depth covered >50% of the depth increment, the carbon mass, bulk density and organic matter were extrapolated to the bottom of the interval; if not, the increment was considered a 'no data' value. If a core had LOI data we estimated organic carbon from Eq. We described central tendency and spread by fitting probability densities to histograms for both a normal distribution and a log-normal distribution. For the log-normal distribution we recast 0 values as 0.01 kg C m−3; this was for the sake of the exercise in visually comparing the two distributions, and zero values were not recast for reporting the mean and s.d. assuming the normal distribution.
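The depth-weighted averaging procedure above can be sketched as follows. This is a simplified stand-in for the paper's exact rules (sample intervals are spread over 1 cm slices, averaged within each 10 cm bin, and a bin is kept only when more than half of it is covered); the example core intervals are invented.

```python
def depth_weighted_bins(samples, bin_size=10, max_depth=100):
    """samples: list of (top_cm, bottom_cm, value) measured intervals.
    Returns one value per bin_size increment down to max_depth, or None
    for 'no data' increments. Sketch only: normalize to 1 cm slices,
    average within each bin, extrapolate when >50% of the bin is covered."""
    slices = [None] * max_depth
    for top, bottom, value in samples:
        for cm in range(int(top), min(int(bottom), max_depth)):
            slices[cm] = value
    out = []
    for b0 in range(0, max_depth, bin_size):
        chunk = [v for v in slices[b0:b0 + bin_size] if v is not None]
        if len(chunk) > bin_size // 2:      # >50% coverage: extrapolate
            out.append(sum(chunk) / len(chunk))
        else:
            out.append(None)                # 'no data' increment
    return out

# hypothetical core: 0-8 cm at 30 kg C m-3, 8-16 cm at 20 kg C m-3
print(depth_weighted_bins([(0, 8, 30.0), (8, 16, 20.0)])[:3])
# → [28.0, 20.0, None]
```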
We randomly sorted cores into two independent subsets: a calibration dataset used for generating averages and uncertainties and fitting models, and a reference dataset used for performing accuracy assessments and calculating model performance statistics. For the calibration dataset, we fit the ideal mixing model of Morris et al.24, herein referred to as the organic matter density model, for all cores and depth classes together, to describe how organic matter density varies along a spectrum of soil types from purely mineral to purely organic, with organic (k1) and mineral (k2) self-packing densities, conceptually the average density of pure organic and mineral matter respectively. Organic self-packing density (k1) has important practical and theoretical implications for tidal wetland carbon accounting because it defines a hypothetical upper limit of carbon mass in organic soils when OM = 1; OM density should approach, but on average not exceed, k1 in organic soils. We refer to this equation throughout as the organic matter density model (Eq. 3). We refined model parameters k1 and k2 and generated uncertainty estimates using a bootstrapped approach including 1000 iterations81. We defined organic and mineral soils using a bootstrapped piecewise linear regression82. We mapped climate zone using four classes83: mediterranean, subtropical, temperate cool, and temperate warm. Mediterranean was defined as within the state of California and south of 40° latitude. Subtropical was defined as the Gulf Coast and the Atlantic coast of Florida south of 30° latitude. Temperate warm included the Atlantic coast between 30 and 40° latitude.
Temperate cool included the Pacific and Atlantic Coasts north of 40° latitude. We realize that these do not perfectly match Köppen-Geiger climate zones84, but we applied them to remain consistent with other community efforts83 and with the standards of the EPA's Greenhouse Gas Inventory85. We also tested combined salinity and vegetation type as potentially predictive. Vegetation categorization differs slightly from that recommended by the IPCC (marsh and mangrove); we did this to match classifications in the Coastal Change Analysis Program (C-CAP), a Landsat-based land cover and change product. Because C-CAP defines forested and scrub/shrub based on shrub or tree heights86, and that data was not available for our soil core database, we combined forested and scrub/shrub categories. Because this effort synthesized data from multiple sources, and measurements such as LOI and dry bulk density can have laboratory-specific biases, we integrated a random 'submitter' effect into our modeling structure. We considered a submitter to be the first author of an associated peer-reviewed publication, or the individual or organization responsible for actively managing the original dataset. We generated submitter codes as the last name of the submitter or a commonly used acronym for an organization. Random effects were combined with fixed effects and all potential interactions using the R package 'lme4', using the syntax in Eq., where lmer is a linear mixed effects model, climate is one of four mapped climate zones, salVeg is one of four mapped salinity and vegetation types, depth is one of the 10 cm depth increments between 0 and 1 m, and soil is either organic- or mineral-dominated; the intercept of the linear model is conditional on random variation associated with the data submitter. We used multi-model inference79 in R to test model fit relative to parsimony as measured by AICc, for all possible permutations of factors.
Specifically, we used the 'dredge' function in the 'MuMIn' package79. We selected the model with the highest ranking Akaike weight as 'model 1'. We calculated the pseudo R2 value for model 1 also using the R package 'MuMIn'79. We did two things to quantify and contextualize effect sizes for the various fixed effects. First, we calculated an adjusted effect size (ω2) using the 'anova_stats' function in the R package 'sjstats'87. We also repeated the dredge process described previously on a version of the equation without soil type as a predictor (model 2). We calculated wetland area using both 2006–2010 C-CAP27 and soil survey boundaries; there are 288 survey zones overlapping mapped tidal wetlands89, of which sixteen had incomplete or missing data. SSURGO 'map units' represent the spatial extents of soils using mapping techniques, soil surveys, and expert judgment taking into account landscape factors. SSURGO contains multiple linked data tables associated with those map units. Each map unit may have one or more components, soil descriptions that make up a percent of that map unit, indicated by the 'component percent'. Each component can have one or more 'horizons', depth intervals, which contain organic matter content and bulk density data89. We downloaded all SSURGO maps and tables corresponding to soil survey areas intersecting mapped tidal wetlands28. We extracted all SSURGO map units intersecting mapped tidal wetlands from NWI, and further extracted all components categorized as 'hydric'. If bulk density data was present, but organic matter was not, organic matter was assumed to be 0%. If both values or bulk density were missing, the horizon was interpreted as a 'no data' value. We did not perform rock fragment corrections, as they are not applicable to tidal wetland soils29.
We estimated carbon density using the van Bemmelen factor (0.58 gOC gOM−1), as that is the recommended conversion factor for SSURGO28. Carbon density (gC cm−3) was converted to mass area−1 by multiplying by the depth interval (10 cm) and converting to kg C m−2 (1 kg 1000 g−1 and 10^4 cm2 m−2). We also assigned SSURGO map units a binary classification of 'mineral-' or 'organic-dominated' based on a detected empirical threshold of 13.2% organic matter. To summarize data at the map unit scale we first calculated depth-weighted averages for organic matter content, bulk density, and organic matter density separately based on each component's separate horizon data. We then summarized each of these variables for map units as the weighted average of all the components based on their reported component percent. Reference dataset members were additionally screened so that low-quality latitude-longitude coordinates were excluded (n = 960 cores). Location information was coded as coming from GPS measurements, map figures, or site descriptions; if positional information could not be effectively matched to a SSURGO map unit, the core was excluded. We assessed 'model skill' at two phases, what we refer to throughout as a validation phase and an application phase. For model validation we modeled carbon density based on 'true values', using the 'R' predict function with only the fixed effects from the mixed effects models 1 and 2; this allowed us to determine whether or not the models were overfit or unduly influenced by outliers in the calibration dataset. For model application we modeled carbon density based on 'mapped' values, following the same procedure as for model validation but using mapped values. Application compounded uncertainty in both the models and the underlying data products used to apply the model90.
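The van Bemmelen conversion and the unit chain above can be sketched in a few lines; the bulk density and organic matter inputs in the example are invented values near the detected 13.2% threshold.

```python
VAN_BEMMELEN = 0.58  # g OC per g OM, the conversion recommended for SSURGO

def carbon_density(bulk_density, om_fraction):
    """Carbon density (g C cm-3) from bulk density (g cm-3) and OM fraction."""
    return bulk_density * om_fraction * VAN_BEMMELEN

def carbon_stock_per_interval(c_density, interval_cm=10):
    """Convert g C cm-3 to kg C m-2 for one depth interval:
    multiply by depth (cm), then 1e4 cm2 m-2, then 1e-3 kg g-1."""
    return c_density * interval_cm * 1.0e4 * 1.0e-3

# hypothetical horizon: bulk density 0.3 g cm-3, 13.2% organic matter
cd = carbon_density(0.3, 0.132)
print(round(cd, 6), round(carbon_stock_per_interval(cd), 4))
```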
We assumed no mapping errors in climate zone or depth intervals. To run models that took soil type, climate, vegetation and salinity, and/or depth interval as inputs, each reference dataset core was assigned 'true values' based on field descriptions and empirical data, and 'mapped values' based on SSURGO for soils and C-CAP for salinity and vegetation. All spatial statistics were done in ArcGIS Pro. We calculated bias, total normalized root mean square error (RMSE*), and unbiased root mean square error (RMSE*') (Eqs. 6–8)30, where μm and μr are the means of the modeled and reference values respectively, σr is the standard deviation of the reference values, R is the correlation coefficient, n is the number of data points, mi is the ith modeled value, ri is the ith reference value, and σm is the s.d. of modeled values. In addition to models 1 and 2, we performed these validation metrics of bias*, RMSE*' and RMSE* on two different applications of SSURGO. First, we validated SSURGO as described above; however, we detected a positive bias.
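The normalized skill metrics defined above can be sketched directly from those definitions (the numeric example uses invented modeled/reference values); bias*, RMSE*' and RMSE* satisfy the standard decomposition RMSE*² = bias*² + RMSE*'².

```python
import math

def normalized_skill(modeled, reference):
    """Normalized skill metrics, a sketch consistent with the text:
    bias*  = (mean_m - mean_r) / sd_r
    RMSE*' = unbiased (mean-removed) RMSE / sd_r
    RMSE*  = total RMSE / sd_r
    """
    n = len(modeled)
    mu_m = sum(modeled) / n
    mu_r = sum(reference) / n
    sd_r = math.sqrt(sum((r - mu_r) ** 2 for r in reference) / n)
    bias_star = (mu_m - mu_r) / sd_r
    rmse_prime = math.sqrt(sum(((m - mu_m) - (r - mu_r)) ** 2
                               for m, r in zip(modeled, reference)) / n) / sd_r
    rmse_star = math.sqrt(sum((m - r) ** 2
                              for m, r in zip(modeled, reference)) / n) / sd_r
    return bias_star, rmse_prime, rmse_star

# synthetic example: modeled values run slightly high relative to reference
b, rp, rt = normalized_skill([27.0, 29.0, 31.0], [25.0, 27.0, 32.0])
print(round(b, 3), round(rp, 3), round(rt, 3))
```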
Second, we attempted to 'bias correct' SSURGO bulk density using the ideal mixing model (Eqs. 9–11), where ri is the residual of data point i, BDi is the ith bulk density, OMi is the ith organic matter fraction value, k1s and k2s are the organic and inorganic self-packing densities fit to SSURGO data, ri* is the standardized residual of data point i, σs is the standard deviation of residuals for SSURGO, CBDi is the 'corrected' bulk density value for point i, k1c and k2c are the organic and inorganic self-packing densities fit to the calibration data, and σc is the standard deviation of residuals for the calibration dataset37. While assembling SSURGO data to test the efficacy of using it to generate improved carbon stock estimates, we made extensive observations and independently vetted the empirical datasets available to soil scientists who populate SSURGO with values. The National Cooperative Soil Survey (NCSS) Pedon Database is a resource available to soil scientists; reports contain field and lab descriptions of pedons, three-dimensional soil structures that make up the most basic and disaggregated unit of soil abstraction in the USDA spatial data product hierarchy. Detailed summary statistics and figures are available in the supplemental information. Soil carbon maps based on the Soil Survey Geographic Database, salinity and vegetation maps, and modeling efforts are available via the Oak Ridge National Laboratory's Distributed Active Archive Center (DAAC) for biogeochemical dynamics (10.3334/ORNLDAAC/1612). Soil core data from previously published information will be made immediately available via the Coastal Carbon Research Coordination Network (10.25572/ccrcn/10088/35684).
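The three-step bias correction described above (residual from the SSURGO-fitted mixing model, standardization by the SSURGO residual s.d., rescaling onto the calibration-fitted model) can be sketched as follows. The k2 = 1.67 g cm−3 calibration value is from the text; every other parameter below is an assumed placeholder, not a fitted value from the study.

```python
def mixing_bd(om, k1, k2):
    """Ideal mixing model bulk density (g cm-3) for OM fraction om."""
    return 1.0 / (om / k1 + (1.0 - om) / k2)

def bias_correct_bd(bd_ssurgo, om, ssurgo_fit, calib_fit):
    """Sketch of the SSURGO bulk-density bias correction:
    1. residual of the SSURGO value from the mixing model fitted to SSURGO,
    2. standardize by the s.d. of SSURGO residuals,
    3. rescale onto the mixing model fitted to the calibration dataset.
    Each fit is a (k1, k2, resid_sd) tuple."""
    k1s, k2s, sd_s = ssurgo_fit
    k1c, k2c, sd_c = calib_fit
    resid_std = (bd_ssurgo - mixing_bd(om, k1s, k2s)) / sd_s
    return mixing_bd(om, k1c, k2c) + resid_std * sd_c

corrected = bias_correct_bd(
    bd_ssurgo=0.60, om=0.30,
    ssurgo_fit=(0.25, 1.90, 0.20),   # assumed SSURGO-fitted parameters
    calib_fit=(0.09, 1.67, 0.12))    # k2 = 1.67 from the text; others assumed
print(round(corrected, 3))
```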
Soil core data from previously unpublished sources that are currently 'in preparation' or 'in review' will be made public via the Coastal Carbon Research Coordination Network pending their status change to 'in press'. Until the full dataset is made public, inquiries about previously unpublished soil core data will be referred to the original data submitters as listed in the Supplemental Table. Supplemental Methods. Supplementary Datasets 1–5."} {"text": "Global navigation satellite systems (GNSS) allow estimating total electron content (TEC). However, calculating absolute ionospheric parameters from GNSS data remains a problem: negative TEC values can appear, and most existing algorithms do not allow estimating TEC spatial gradients and TEC time derivatives. We developed an algorithm to recover the absolute non-negative vertical and slant TEC, its derivatives and its gradients, as well as the GNSS equipment differential code biases (DCBs), by using a Taylor series expansion and bounded-variable least squares. We termed this algorithm TuRBOTEC. Bounded-variable least-squares fitting ensures non-negative values of both slant TEC and vertical TEC. The second-order Taylor series expansion can provide relevant TEC spatial gradients and TEC time derivatives. The technique was validated by using independent experimental data over 2014 and the IRI-2012 and IRI-Plas models. As TEC sources we used Madrigal maps, CODE (the Center for Orbit Determination in Europe) global ionosphere maps (GIM), the IONOLAB software, and the SEEMALA-TEC software developed by Dr. Seemala. For the Asian mid-latitudes, TuRBOTEC results agree with the GIM and IONOLAB data (root-mean-square was < 3 TECU), but disagree with the SEEMALA-TEC and Madrigal data (root-mean-square was > 10 TECU).
About 9% of vertical TEC values from the TuRBOTEC estimates exceed (by more than 1 TECU) those from the same algorithm without constraints. The analysis of TEC spatial gradients showed that as far as 10–15° in latitude, the TEC estimation error exceeds 10 TECU; longitudinal gradients produce smaller errors over the same distance. Experimental GLObal Navigation Satellite System (GLONASS) DCBs from TuRBOTEC and CODE differed by up to 15 TECU, while GPS DCBs agree. Slant TEC series indicate that the TuRBOTEC data for GLONASS are physically more plausible. Currently, the global navigation satellite systems (GNSS) enable the study of the ionosphere at any spot on the globe. Such studies are based on dual-frequency phase and pseudorange measurements of the total electron content (TEC). Pseudorange measurements are thought to be absolute; however, they are noisy. For this reason, one often uses pseudorange and phase measurements jointly to eliminate the phase ambiguity. Herewith, there is a bias related to a different time of the signal passage through the channels of a satellite and a receiver. In the literature, this error is referred to as differential code biases (DCBs). This error is known to vary systematically, and it may have irregular variations. An important goal is to obtain absolute characteristics reflecting ionospheric conditions. A good way is to have an electron density profile that an incoherent scatter radar or an ionosonde can provide20,21,22. Using the GNSS data can also provide estimates for the electron density profile; this information may be obtained by using 4D spatio-temporal ionospheric tomography suggested by Mitchell and Spencer. When using the data from sparse networks of stations, a more promising way is to build TEC maps. The International GNSS Service (IGS) Working Group on Ionosphere provides such maps. Currently, there are several methods to estimate the vertical TEC. These methods are based on various TEC models29,30.
Different techniques for TEC estimation do not solve the problem of negative TEC values, especially negative slant TEC on some lines of sight. We use the GNSS data from the IGS network, as well as the IONOLAB software (http://www.ionolab.org/), the SEEMALA-TEC software, Madrigal maps, and CODE products. The IONOLAB software is based on a single-station solution. Seemala developed software for TEC calculations (referred to as SEEMALA-TEC). Madrigal TEC is also based on projection of slant TEC to vertical TEC, but for different cells (1° × 1°) on the map. Global ionosphere maps have been used for a long time to monitor and model the ionosphere. To obtain absolute TEC, one estimates the parameters of a preset measurement model by using experimental data. As a rule, one uses the following model of TEC: I_M = I_S + DCB = S·I_V + DCB (1), where I_M is the model (expected) value of the recorded slant TEC, I_S is the slant TEC along the line of sight without a differential code bias, I_V is the vertical TEC estimate at the point corresponding with the measurement (ionospheric pierce point), S is the function that converts the vertical TEC into the slant one (mapping function), and DCB is an error related to the differential code biases of the satellite and the receiver. For geostationary Earth orbit (GEO) satellites it is difficult to estimate DCB (or a similar error in single-frequency data) from (1), because S in Equation (1) is a constant; the corresponding error can be regarded as a TEC offset. In the simplest approach the vertical TEC is taken over the station, I_V = I_V(φ0, l0, t0), where φ0, l0, and t0 are the station latitude, longitude, and the time for which the calculation is performed. This implies that all the TEC measurements were supposed to be performed over the station. However, such an approach does not allow one to obtain spatial gradients. Such an approximation is correct in a limited number of cases, when the spatial gradients and the time derivative can be neglected.
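The measurement model (1) can be illustrated with a toy single-epoch fit. This sketch solves the two-parameter normal equations for (I_V, DCB) and clamps vertical TEC at zero, a crude stand-in for the true bounded-variable least squares used in TuRBOTEC; all numbers are synthetic.

```python
def fit_vtec_dcb(slant_tec, mapping):
    """Least-squares fit of the measurement model I_M = S * I_V + DCB for a
    single epoch. I_V is clamped at zero as a crude stand-in for the
    bounded-variable constraint; this 2-parameter sketch is illustrative
    only, not the full TuRBOTEC estimator."""
    n = len(slant_tec)
    # normal equations for the design matrix with columns [S, 1]
    s_ss = sum(s * s for s in mapping)
    s_s = sum(mapping)
    s_sy = sum(s * y for s, y in zip(mapping, slant_tec))
    s_y = sum(slant_tec)
    det = s_ss * n - s_s * s_s
    vtec = (s_sy * n - s_s * s_y) / det
    dcb = (s_ss * s_y - s_s * s_sy) / det
    return max(vtec, 0.0), dcb   # non-negative vertical TEC

# synthetic epoch: true VTEC 20 TECU, DCB 5 TECU, mapping factors 1.0-3.0
S = [1.0, 1.5, 2.0, 2.5, 3.0]
obs = [20.0 * s + 5.0 for s in S]
print(fit_vtec_dcb(obs, S))   # → (20.0, 5.0)
```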
In most studies, a spatial-temporal expansion of the V_I function is used; the spherical harmonic expansion is especially common. Komjathy et al. used a Taylor series expansion of V_I in (1) in space only; Ma and Maruyama, among others, used related expansions. In our research, we expand the vertical TEC function V_I into a Taylor series in both space and time:

V_I(φ, l, t) = Σ_{n,m,k} [Δφ^n · Δl^m · Δt^k / (n! m! k!)] · ∂^{n+m+k}V_I / ∂φ^n ∂l^m ∂t^k |_(φ0, l0, t0), (2)

where Δφ and Δl are the differences in latitude and longitude between the ionospheric pierce point and the station, and Δt is the difference between the time of measurement and t0.

We developed the following algorithm to estimate the vertical TEC, the TEC gradients, the time derivative, and DCBs. The algorithm involves:

(1) Calculating TEC based on the pseudorange P_I and phase φ_I measurements. For the analysis, we use the data with elevations greater than 10°.
(2) Dividing the data into continuous samples.
(3) Detecting and eliminating outliers and cycle slips in the TEC data.
(4) Eliminating the phase measurement ambiguity. At this stage we obtain the experimental slant TEC I_Exp.
(5) Estimating DCBs by a simple measurement model and determining the model parameters by minimizing the root-mean-square deviation of the model from the data.

Although in some research the zero-order expansion of (2) is used (m = 0, n = 0, k = 0 in Equation (2)), the selection of the expansion order needs to be substantiated in greater detail. We performed such an analysis by using the IRI-2012 model. The slant TEC values corresponding to the really observed elevations and azimuths, as well as the vertical TEC, the time derivative, and the spatial gradients by latitude and by longitude, were modeled. Then, we recovered the values of the vertical TEC (V_I) from the slant measurements by using expansions up to the specified order.
At the first step, this recovery was performed based on the zero-order Taylor expansion. Next, TEC was recovered by using the first-order (leaving terms with n + m + k ≤ 1 in (2)) and second-order (leaving terms with n + m + k ≤ 2 in (2)) expansions. In (2), Δφ (Δl) is the latitude (longitude) difference between the ionospheric pierce point coordinate φ (l) and that of the station φ0 (l0), and Δt is the difference between the measurement time t and the time t0 for which the calculation is performed. Further, G_φ = ∂V_I/∂φ, G_l = ∂V_I/∂l, G_qφ = ∂²V_I/∂φ², and G_ql = ∂²V_I/∂l² are the linear and quadratic spatial TEC gradients, and G_t = ∂V_I/∂t and G_qt = ∂²V_I/∂t² are the first and second time derivatives. The truncated form of (2),

V_I ≈ V_I(φ0, l0, t0) + G_φ·Δφ + G_l·Δl + G_t·Δt + (1/2)·(G_qφ·Δφ² + G_ql·Δl² + G_qt·Δt²), (3)

represents the second-order Taylor series expansion of the vertical TEC in (1); we neglect the mixed derivatives. Using the second order provides sufficient accuracy to neglect higher orders, so we use this expansion.

In the literature, different mapping functions are used, and selecting the most correct one is a challenge. In most papers, one uses the one-layer approximation of the ionosphere as a thin layer; this approach was suggested by Klobuchar. The coefficient α is introduced to account more correctly for the ionospheric layer height:

S(θ) = [1 − (R_E·cos(α·θ)/(R_E + h_max))²]^(−1/2), (4)

where θ is the elevation, R_E = 6371 km is the Earth radius, h_max = 450 km is the ionospheric point height, and α is the correcting coefficient. Using the IRI-2012 model, we simulated the dependence of the slant TEC on the elevation when converting I_s = S·V_I for different values of α. By its physical implication, the coefficient α adjusts a sharp, non-physical TEC growth at low elevations (less than 30°). It is worth noting that, due to ionosphere peculiarities, α should be latitude-dependent; also, the ionosphere disturbance level, which may dramatically vary the global ionization distribution, should affect the α parameter.
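A sketch of the single-layer mapping function in this form follows (Equation (4) as reconstructed above; α = 0.98 is an illustrative default here, since the paper fits α per site):

```python
import math

R_E = 6371.0   # Earth radius, km
H_MAX = 450.0  # ionospheric point height, km

def mapping_function(elev_deg, alpha=0.98):
    """Slant-to-vertical conversion factor S for a thin-shell ionosphere.

    The correcting coefficient alpha tempers the sharp, non-physical
    growth of S at elevations below roughly 30 degrees.
    """
    cos_term = R_E * math.cos(math.radians(alpha * elev_deg)) / (R_E + H_MAX)
    return 1.0 / math.sqrt(1.0 - cos_term**2)
```

S is close to 1 at zenith and grows to about 2.5 at a 10° elevation, which is why low-elevation conversions are strongly model-dependent.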
We estimated α values for several points on the Earth's surface.

A typical problem for TEC estimation is the emergence of negative or zero values; for GIM data, this leads to zero values in some GIM cells. The solution for such problems was suggested a long time ago: we need to restrict the estimated values. We therefore introduce the bounds I_s = S·V_I ≥ C, where C is a non-negative value of the minimal TEC that can be observed in principle. We chose C = 0.5 TECU. After that, a minimization based on the least-squares technique is applied.

We obtain the set of equations by minimizing the functional U = Σ_k U_k (5) over the set of selected instants t_k for which we estimate the parameters. For computation, we represent (5) in the form (7), where I_Exp is the experimental phase TEC (with the phase ambiguity eliminated) obtained after Stage 4 of the algorithm, Θ is the Heaviside step function, t_i is the i-th instant of measurement for the j-th satellite, t_k is the instant for which the calculation is performed, and Δt = 1 h is the maximal time difference for which the data are still used in the t_k estimate. The 1/S factor in Equation (6) causes the measurements at high elevations to produce the greatest contribution. For each time instant t_k, we have 7·J + N variables in Equation (5), where J is the number of instants over the investigated interval for which the calculation is performed, and N is the number of the satellites observed.

The bounded minimization proceeds as follows:

(1) The algorithm first computes the usual least-squares solution. This solution is returned as optimal if it lies within the bounds. If not, the algorithm splits the variables into those within the bounds (the free set) and those beyond them (the active set).
(2) At each iteration, the algorithm chooses a new variable to move from the active set to the free set.
(3) A new equation system is created for the free set, in which the right-hand side b in (7) is adjusted for the active-set variables.
If the least-squares solution of the new equation system contains variables beyond the bounds, a gradient correction is applied to the whole free set (see the original description of the algorithm for details).

(4) The iterations continue until all the variables are in the free set.

We used the Python library scipy.optimize.lsq_linear, based on the algorithm suggested by Stark and Parker. This algorithm eventually ensures an accurate solution, but may require about n main iterations for a problem with n variables. As a result, we obtain non-negative (positive) vertical TEC and slant TEC on all the lines of sight. To obtain robust DCB estimates DCB_I, we calculate the parameters simultaneously for different time instants over 24 h, thus solving a consistent set of Equations (7). The temporal resolution for t_k may vary from 2 h to 5 min.

To analyze the satellite DCBs separately, we apply the often-used zero-mean condition:

Σ_{i=1..N} DCB_i = 0, (9)

where N is the number of satellites in the constellation. We applied condition (9) for GPS and GLONASS separately, following Schaer.

To simulate the algorithm operation, we used the IRI-2012 model with the set of International Union of Radio Science (URSI) coefficients recommended by the URSI, and the IRIcorr topside ionosphere profile option. We also introduced outliers for the pseudorange and losses of phase lock for the phase, which could occur with a given probability. Thus, at the output, we obtained a series of the phase TEC and the pseudorange TEC corresponding to a certain time instant, elevation, and azimuth. Further, we used these values and calculated the ionospheric parameters based on the above algorithm. Finally, the parameters obtained as a result of the algorithm operation were compared with the modeled vertical TEC, time derivatives, and gradients.
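The bounded-variable least-squares step described above can be illustrated with scipy.optimize.lsq_linear on a toy version of the model I = S·V + DCB, with the lower bound keeping the vertical TEC above C. The numbers are illustrative, not the paper's full 7·J + N system.

```python
import numpy as np
from scipy.optimize import lsq_linear

C = 0.5                                      # minimal admissible TEC, TECU
S = np.array([1.0, 1.5, 2.3, 3.1])           # mapping-function values
I_obs = np.array([12.0, 17.5, 26.3, 35.1])   # observed slant TEC, TECU

# Unknowns: x = [V, DCB]; model I = V*S + DCB.
A = np.column_stack([S, np.ones_like(S)])
res = lsq_linear(A, I_obs, bounds=([C, -np.inf], [np.inf, np.inf]))
V_est, dcb_est = res.x
```

Here the data are exact (V = 11 TECU, DCB = 1 TECU), so the unconstrained optimum already satisfies the bound; the bound bites when noise or poor geometry would otherwise push V below C.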
The obtained DCBs were compared with those specified as the simulation input. The simulation was performed for a selected real station. For each recorded satellite at each time instant, we calculated the electron density and the TEC along the line of sight. Further, we introduced the DCB-related error, as well as random noise, to the phase TEC and the pseudorange TEC. The noise value was 0.01 TECU for the phase TEC; for the pseudorange TEC, the noise value was assigned depending on the elevation: about 5 TECU at elevations above 60° and an increased value at lower elevations.

We validated the data based on ionosphere modeling and on TEC products from the alternative software mentioned above. The differences between the TuRBOTEC simulation and the IRI-plas vertical TEC are shown in green. The results show that TuRBOTEC provides even better performance for modeling based on IRI-plas at mid-latitudes, and comparable performance at low latitudes. The higher difference at high latitudes can be due to using the α obtained through the IRI-2012 modeling, so we used α = 0.96 for THU2. We did not find a significant influence of the plasmasphere on the TuRBOTEC estimates, except at high latitudes, where another α should be used in (4).

The greatest deviation occurs in the equatorial region. This is related to a substantially inhomogeneous ionosphere structure in this region, particularly during daylight hours. The error is smaller at high latitudes, although it is still higher than in the mid-latitude region. This is related to the absence of satellite observations at high elevations in high-latitude regions, and to the dominance of the southward contribution to the total measurement statistics.

One can see that the TEC curves reproduce the diurnal variation similarly, but they can differ quantitatively, both for a quiet day (Kp_max = 2.3, the maximal Kp for the day) and for the 17 March 2015 magnetic storm. Such systematic differences are well known and have been repeatedly addressed in the literature.
At individual instants, synchronous variations of TEC are noted in the CODE and TuRBOTEC data. For example, for the 17 March 2015 strong magnetic storm, one can see a slight increase in the vertical TEC around 18 UT. In general, the estimates for the vertical TEC appear plausible for all the considered cases. Deviations from the data of other laboratories are within the interval of variance of the different laboratories among themselves. To analyze systematic differences, we built a histogram of the ΔV_I difference distribution between the vertical TEC data in the Irkutsk region. The vertical TEC from the previous version of the suggested technique was experimentally checked earlier.

The absolute gradients differ from <0.1 TECU/deg for longitudinal gradients in the high-latitude regions to 2.5 TECU/deg for latitudinal gradients in the equatorial regions during the equatorial anomaly evolution. The gradient estimation involves the S function that converts between the slant TEC and the vertical TEC. In the equatorial anomaly region, such an approximation works worst of all: whereas the vertical TEC estimate is quite reasonable, the spatial gradients are not.

When estimating the vertical TEC at a growing distance from a station, this leads to an error. The S mapping function has an essential latitude dependence. Therefore, at a significant deviation, the vertical TEC value converted from the slant TEC is not precisely determined; hence, the gradients are not precisely determined either. It is worth noting that, in this case, this error would grow almost linearly. One can see that the latitude errors grow more than when moving a similar distance in longitude, which may be determined by the latitude dependence of the mapping function. This holds both for the quiet day (Kp_max = 2.3) and for 17 March 2015 (Kp_max = 7.7).

Thus, for Equation (7), we exchange the N variables (where N is the number of satellites) for N′ variables (where N′ is the number of continuous series).
We can solve the emerging problem by excluding the DCBs from Equation (1); this improves the solution (shown by the red line in the corresponding figure).

We have developed an algorithm to recover the absolute TEC, its gradients, its time derivative, and DCBs. The procedure is based on the space-and-time Taylor expansion and on bounded-variable least squares. We termed it TayloR-series and Bounded-variable-least-squares based iOnosphere TEC (TuRBOTEC). We simulated the algorithm operation by using the IRI-2012 and IRI-plas models. The absolute TEC values recovered through the developed algorithm were established to agree with the values set from the IRI-2012 model. The mean standard deviation is 0.09 TECU for the mid-latitude IRKJ station, 0.35 TECU for the equatorial NTUS station, and 0.17 TECU for the high-latitude THU2 station. We did not find a significant influence of the plasmasphere on the TuRBOTEC estimates, except at high latitudes, where another α should be used for the mapping function. About 9% of the experimental vertical TECs from the TuRBOTEC estimates exceed (by more than one TECU) those from the same algorithm without constraints.

The S mapping function has an essential latitude dependence. Therefore, at a significant distance, the vertical TEC value converted from the slant TEC is not precisely determined; hence, the gradients are not precisely determined either. One should note that, in this case, the error would be expected to grow almost linearly, but this is not quite so: the real (including the model) latitude gradients have a considerably non-uniform spatial character at a distance from a station, and, at distances of more than 10°–15°, the errors of gradient-based TEC estimates start to surpass 10 TECU.

The recovered values of the TEC spatial gradients and of the TEC time derivative agree qualitatively with the model-set values. Also, we studied the accuracy of TEC estimates obtained by means of the latitudinal and longitudinal gradients for the ionosphere at a distance from a station.
The latitude errors were established to grow more dramatically than those of longitude, which is consistent with the latitude dependence of the S mapping function.

The DCB values obtained through the developed algorithm for GPS satellites agree with the GIM and CODE data but, for the GLONASS DCB values, the deviation from the CODE data is up to 17 TECU. At the same time, the recovered DCBs (in the IRI-2012 simulation) agree well with the initial data. At large errors in determining DCBs, after correcting the slant TEC series one may observe negative, unphysical TEC values.

The developed software may be used to calculate the vertical TEC from local networks or to locally update ionosphere models.

Nitrogen (N) and phosphorus (P) are essential nutritional elements for life processes in water bodies. However, in excessive quantities, they may represent a significant source of aquatic pollution. Eutrophication has become a widespread issue arising from a chemical nutrient imbalance and is largely attributed to anthropogenic activities. In view of this phenomenon, we present a new geo-dataset to estimate and map the concentrations of N and P in their various chemical forms at a spatial resolution of 30 arc-seconds (∼1 km) for the conterminous US. The models were built using Random Forest (RF), a machine learning algorithm that regressed the seasonally measured N and P concentrations, collected at 62,495 stations across US streams for the period 1994–2018, onto a set of 47 in-house built environmental variables that are available at a near-global extent. The seasonal models were validated through internal and external validation procedures, and the predictive powers, measured by Pearson coefficients, reached approximately 0.66 on average.
Machine-accessible metadata file describing the reported data: 10.6084/m9.figshare.11948916

Nitrogen (N) and phosphorus (P) are key nutritional elements for many important life processes such as protein and DNA synthesis, primary production, cellular growth and reproduction. Both have a natural global cycle that includes conversion between different inorganic and organic forms, and solid and dissolved (and, for nitrogen, gaseous) phases, which maintained their pre-industrial concentrations within certain natural bounds. During the preindustrial era, the concentrations and fluxes of N and P in rivers were generally small, much less than present-day levels, and were mainly sourced from erosion and the leakage of dissolved N and P in their organic/inorganic forms6. However, today the anthropogenic production of N and P to support fertilisation, together with industrial releases4, has dramatically increased the N and P presence in water bodies. This has led to the widespread eutrophication of both inland and coastal waters5.

Over the past decades, significant progress has been made towards our understanding of the dynamics of natural and anthropogenic inputs of N and P to inland waters. Furthermore, the recognition of human impact on the N and P cycle has driven much research into the scope for better management of these nutrients16. However, our current ability to map N and P concentrations across regions or the globe is still limited. Early attempts focused on concentrations and fluxes from major rivers7 and were implemented through bottom-up approaches, which estimated N and P content based on our knowledge of land-use and population influences on river nutrients11. Other local and regional studies have also featured different combinations of bottom-up, process-based, and statistical models, which link N concentrations in inland water to environmental variables15.
Freshwater environmental variables that account for the basin and upstream environment have recently been computed16. This set of stream variables at the near-global scale provides a new base for stream-relevant biotic and abiotic modelling, such as variability in biodiversity, nutrient distributions, or water flows. Based on this platform, we present a new method for mapping the concentrations of N and P in various chemical forms across continental waters using a machine learning approach. The resulting N and P maps can be used to study nutrient loading and processing in inland waters. For instance, fertiliser run-off presents a high load of chemical nutrients in recipient freshwater bodies and can be charted by the aforementioned method18. The N and P maps possess information about the location of nutrient-enriched streams, which can guide engineered de-nitrification processes20. In addition to resource recovery, a mitigation strategy can be employed through the improved management of nutrient-rich wastes. In this approach, too, the derived N/P ratio map can prove a valuable source of information on where N vs P limitation might be located regionally. Furthermore, this unique N and P modelling can be used in conjunction with process-based methods to enhance the understanding of the metabolism and recycling of N and P in riverine systems.

In this paper, we present a gridded geo-dataset21 (in the form of GeoTIFF raster layers) derived by connecting freshwater environmental variables with in situ measurements to map the distribution of various N and P compounds in water bodies across the conterminous US for the period 1994–2018, as recorded in the Water Quality Portal (WQP)22. Random Forest (RF)23, a well-established machine learning algorithm, was employed in this study for its exceptional capability of handling complex and heterogeneous data.
We demonstrate in detail below how RF excels at capturing local geographical variations of stream predictors and produces superior predictability for N and P distributions in the US. The predicted N and P concentrations are mapped onto a 30 arc-second (∼1 km) gridded stream network24 for four seasons. Moreover, the quality and appeal of the proposed geo-dataset21 lie in the rigorous scripting and modelling procedures that were applied to treat sparse spatio-temporal observations. Additionally, the computation was performed by employing multi-core processing on a supercomputer, which requires advanced geocomputation programming skills. The described geo-dataset21 is ready for use as input data in various environmental models and analyses. The newly developed geo-dataset21 and the methodological framework are suitable for large-scale environmental analyses such as N and P emissions in small and large rivers at a global scale. To our knowledge, this is the first time that N and P concentrations have been estimated at such a high spatial resolution for a territory as large as the contiguous US.

The Methods section is divided into two subsections: (i) Data pre-processing, which describes cleaning the gauge-station source data, spatial/seasonal variability, and stream layers (referred to hereafter as predictors); (ii) Modelling framework, which concerns data splitting and model training/validation/prediction.

The Water Quality Portal (WQP)22 is so far the largest standardised water quality database25. From the WQP22, we retrieved the measured concentration data for N and P nutrients in their various chemical forms for the period from 1994 to 2018, with data spanning the US stream networks. Each single observation is associated with its sampling geolocation (latitude and longitude) and a USGS Parameter Code (PC) to indicate its chemical identity.
We selected five nutrients of interest as the response variables. The U.S. Environmental Protection Agency (EPA) and the National Water Quality Monitoring Council developed the WQP22, whose records were provided by multiple organisations26. Employing such multi-sourced data for "secondary use", i.e. beyond the original intention of the data collection agencies26, can result in a number of challenges. For instance, intermittent sampling activities and data gaps in time series complicate the temporal analyses of long-term trends. Data records can be misinformative owing to instrument failure, missing measurements that are labelled as "0" values, and incorrect use of physical units26. Such errors might produce extreme values beyond the natural value range and trend, as well as a large number of "0" values. We removed extreme values by data trimming using certain thresholds.

The distribution of the raw observation data at day-level resolution was, for all nutrients, highly skewed, as quantified by the third standardised moment (Eq. 1), skew = E[((x − μ)/σ)³], where μ is the mean, σ is the standard deviation, and E is the expected value. We trimmed the data until the skewness27 was less than two, as determined by iterative trials. The data after cleaning are reported in Table.

We performed spatial and temporal analyses to better inform the design of the modelling strategies. Within the current data set, we identified only eight stations with eight or more years of data continuity for a single chemical species. A statistical test28 rejected the null hypothesis that a temporal trend exists in the time series. Additionally, we plotted the data distributions across the continental US for each year in Supplementary Fig.

For each sub-catchment, the predictor variables were aggregated as the average (e.g. temperature) and the sum (precipitation). Here, soil data refers to the soil within a depth of 2.5 cm (0–5 cm thickness)29.
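The skewness-guided trimming described above can be sketched as follows. This is an illustrative reconstruction: only the skewness < 2 criterion comes from the text, while the percentile threshold and stopping rule here are assumptions.

```python
import numpy as np

def skewness(x):
    """Third standardised moment, E[((x - mu)/sigma)^3]."""
    x = np.asarray(x, dtype=float)
    return float(np.mean(((x - x.mean()) / x.std()) ** 3))

def trim_until(x, max_skew=2.0, upper_q=95.0):
    """Iteratively drop the top (100 - upper_q)% of values until the
    sample skewness falls below max_skew (hypothetical trimming rule)."""
    x = np.asarray(x, dtype=float)
    while skewness(x) >= max_skew and x.size > 10:
        x = x[x <= np.percentile(x, upper_q)]
    return x
```

Because the skewness of concentration data is driven by the extreme upper tail, removing a small top percentile repeatedly brings the statistic down quickly.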
This yielded a series of predictors such as the upstream average forest cover, the upstream sum of precipitation (which mimics surface run-off), and the average upstream temperature16, available at www.earthenv.org/streams. To build the predictive models, we used a total of 47 predictors belonging to four categories, including topography16. The unit for each stream variable is derived from the original, spatially continuous environmental variable across the land surface area. Thus, temperature is expressed in degrees Celsius, precipitation in millimetres, and land cover as a percentage of each class (e.g. the Urban/built-up class in percentage). We refer to ref.16 for further details regarding the calculation of the freshwater-specific predictors. All predictors except for climate were static, as opposed to being time-updated. Monthly climate data were averaged to a seasonal level as described in Table.

Due to the possible spatial discrepancy between the HydroSHEDS stream network and the gauge-station locations, the latitude and longitude locations of the gauge stations do not consistently fall directly on the stream grids. Hence, we snapped the geolocations (latitude and longitude) of the stations to the HydroSHEDS stream network using the r.stream.snap function in GRASS GIS33, with 3 km as the maximum distance tolerance. After snapping, we computed the seasonal mean for each chemical species by considering all the points that fell on the same snapped location. This led to a unique one-to-one association between a geographical identification and an averaged concentration value for each season and each chemical species.

We split the full dataset into two sub-datasets, training and testing respectively. A density surface was built with Gaussian kernels (using the v.kernel function in GRASS GIS33) for each species and season; the pixel values of the resultant density surface were used as weighting factors to split the data into training and testing subsets that possess identical spatial distributions.
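The density-weighted splitting just described can be sketched as follows. This is a simplification: the training half is drawn without replacement with probability proportional to the local density value at each station, so both subsets mirror the station density surface; the paper's exact weighting scheme may differ.

```python
import numpy as np

def density_weighted_split(density, train_frac=0.5, seed=42):
    """Split station indices into train/test, drawing the training half
    without replacement with probability proportional to `density`."""
    rng = np.random.default_rng(seed)
    p = np.asarray(density, dtype=float)
    p = p / p.sum()                      # normalise weights to probabilities
    n = p.size
    n_train = int(round(train_frac * n))
    train = rng.choice(n, size=n_train, replace=False, p=p)
    test = np.setdiff1d(np.arange(n), train)
    return np.sort(train), test
```

With uniform density this reduces to a plain random half-split; with clustered stations, dense regions are represented proportionally in both subsets.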
To consider the heterogeneity of the spatial distribution of the gauge stations, we employed this spatial density estimation technique in the data-splitting step, with a kernel bandwidth of 50 km.

In order to optimise the split ratio between the training and testing subsets, we explored the Mean Root Square Error (MRSE, computed from the observations x_i and the corresponding predictions) at various proportions of the training-testing subsets, with 50 independent samplings for each trial. The trial repetition was intended to sample different combinations of training and testing data so as to reduce the bias of the sample estimate. To this end, we labelled the MRSE on the testing subset as MRSE_te. As shown in the Supplementary Figure, given the low MRSE_te and its low variability (defined as the standard deviation of MRSE_te) at the proportion 0.5, we decided to use it as the optimal cut to build the final models.

We employed the RF regression algorithm implemented in the R package randomForestSRC35 to train the models. RF regression is an ensemble learning strategy that elevates the collective predictive performance of a large group of weaker learners (regression trees). Two key elements contributing to the strength of the RF algorithm are bootstrap aggregation (bagging) and random selection of variables. Bagging (bootstrap sampling from the training sub-dataset) aims at reducing data noise through averaging; data not included in the bag are called out-of-bag (OOB) samples. Random drawing of variables improves variance reduction by reducing the intercorrelation between trees. OOB samples can be used to validate the model performance and to evaluate the variable importance. The variable importance is of great value in identifying the most influential variables that drive the predictive outcomes and can thus inform adaptive or intervention strategies in response to the modelled phenomena. One important feature of the RF algorithm is its relative resilience to data noise, due to the two mechanisms mentioned above.
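Bagging and the out-of-bag sample can be illustrated without any ML library: each bootstrap draw leaves out roughly 1/e ≈ 37% of the rows, which serve as the OOB validation set for that tree. This is a generic sketch of the mechanism, not randomForestSRC code.

```python
import numpy as np

def bootstrap_oob(n, seed=0):
    """One bagging draw: bootstrap (in-bag) indices plus the out-of-bag rows."""
    rng = np.random.default_rng(seed)
    in_bag = rng.integers(0, n, size=n)        # sample n rows with replacement
    oob = np.setdiff1d(np.arange(n), in_bag)   # rows never drawn
    return in_bag, oob

in_bag, oob = bootstrap_oob(10000)
oob_fraction = oob.size / 10000
```

Each tree is fit on its in-bag rows and scored on its OOB rows; averaging those scores over all trees gives the OOB error without a separate validation set.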
This technical advantage of RF directly benefits the analysis of environmental data. A further attraction of the randomForestSRC package is that it allows the sample distribution density to be considered in the bagging step. In the model development, we paid close attention to model stability: we noticed that the hyperparameter setting the number of trees had a strong impact on the model errors, as shown in the Supplementary Figure.

The predictive performance on the training and testing sets provides complementary information for the model validation. Training primarily exhibits model robustness, i.e. the stability and balance of model predictability in the presence of data shuffling; testing measures the model performance on unseen data and addresses the model fitness. In this context, we used the Pearson correlation coefficient as the statistical metric to quantify the predictive performance of the models. To supplement the Pearson correlation coefficient and provide an in-depth assessment of model accuracy, we calculated the Root Mean Square Error (RMSE, computed from the observations x_i and the corresponding predictions) to numerically quantify model uncertainty, since it offers a more discernible measure of prediction accuracy.

RMSE can also be used to compare accuracy across high- and low-density gauge-station distributions; to this end, we calculated partial RMSEs below and above the 80th percentile of the density surface. Lastly, after establishment of the predictive models, we investigated the contributions of each variable to the predicted outcomes by means of the "variable importance", an output from RF.

The final, validated RF models were applied to predict each 30-arc-second stream grid cell within the conterminous US, for all the nutrients. The predicted outcomes were then transformed back to recover their original physical values (in ppm)21.
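The two validation metrics used here reduce to a few lines (a generic sketch; the first argument holds the observations x_i and the second the model predictions):

```python
import numpy as np

def pearson(obs, pred):
    """Pearson correlation coefficient between observations and predictions."""
    return float(np.corrcoef(np.asarray(obs, float), np.asarray(pred, float))[0, 1])

def rmse(obs, pred):
    """Root Mean Square Error, in the same units as the observations."""
    o = np.asarray(obs, float)
    p = np.asarray(pred, float)
    return float(np.sqrt(np.mean((o - p) ** 2)))
```

Note that the two metrics are complementary: a model with a constant bias can still score a Pearson coefficient of 1 while the RMSE exposes the offset.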
We provide TN, TDN, NO3, TP, and TDP concentrations (ppm) for four seasons on the gridded stream network at a spatial grain of 30 arc-seconds (∼1 km). The nutrient concentrations, mapped across the conterminous US, are available in a compressed GeoTIFF file format in the WGS84 coordinate reference system (EPSG:4326). All layers are stored as floating points (Float32 data type) to ensure sufficient precision for future use and analysis, and are available for download at the PANGAEA repository.

The predicted nutrient maps follow the layer name convention:

nutrient abbreviation_resolution_season.format

Below are two examples of the layer names for the two main nutrient products, TN and TP36:

TN_1KM_winter.tif: layer showing the Total Nitrogen for the winter season at 30 arc-second spatial resolution.

TP_1KM_summer.tif: layer showing the Total Phosphorus for the summer season at 30 arc-second spatial resolution.

For the purpose of visual interpretation of the results, we plotted the TN and TP bivariate maps as shown in Fig. The Pearson correlations between predicted and observed values for TN and TP are in the range 0.56–0.81 across the testing sets.

From the residual maps we also noticed that the model sometimes underestimates the higher values. Three possible causes may have contributed to this result: (i) untrustworthy observations; (ii) anthropogenic actions that are not fully included in the current environmental variable layers37, which highlights the significance of human influence and suggests the need to further complete the variable list; (iii) the originally high skewness of the observation data and the associated Box-Cox transformation implemented.

New geomorphometry variables are being derived based on MERIT-DEM44 by adopting the procedure described in ref.45; the MERIT-DEM-derived stream network is also under development46.
These previously described layers will be useful in combination with other global maps of irrigated areas47, livestock48, agricultural fertiliser use49, and soil types/properties50 to compute N and P concentrations more accurately on a global scale. We encourage potential users of the described geo-dataset to contact the authors for future product updates. Overall, the newly developed layers provide the basis for a variety of high-resolution, nutrient-related analyses across the inland waters of the conterminous US. A global-scale N and P assessment with new stream predictors at a higher resolution (3 arc-seconds) is under development by our group; the focus is on creating new geomorphometry variables (Geomorpho90m).

Cognition is claimed to be extended by a wide array of items, ranging from notebooks to social institutions. Although the connection between individuals and these items is usually referred to as "coupling," the difference between notebooks and social institutions is so vast that the meaning of "coupling" is bound to be different in each of these cases. In this paper I argue that the radical difference between "artifact-extended cognition" and "socially extended cognition" is not sufficiently highlighted in the literature. I argue that there are two different senses of "cognitive extension" at play, which I shall label, respectively, "implementation extension" and "impact extension." Whereas implementation extension is a causal-functional notion, impact extension hinges on social normativity that is connected with organization and action coordination. I will argue that the two kinds of cognitive extension are different enough to warrant separate labels.
Because the most salient form of social extension of cognition involves the reciprocal co-constitution of cognitive capacities, I will propose to set it apart from other types of extended cognition by using the label “symbiotic cognition.” In the literature on extended, integrated and distributed cognition, human cognitive systems are said to be coupled with and enhanced by a large number of rather diverse items, ranging from simple notebooks and abacuses to social institutions. Whereas artifact-extended cognition extends the implementation base of these processes, socially extended cognition alters the nature and hence extends the impact of cognitive engagements with the world by embedding them in social practices of coordinated behavior. When we interpret socially extended cognition as an instance of impact-extension and not as implementation-extension, the problem of cognitive bloat disappears. The paper is set up as follows. In the next section I will introduce the notion of extended cognition and highlight the difference between artifact-extended cognition and socially extended cognition. In the section “The Problem of Cognitive Bloat,” I will briefly discuss the problem of cognitive bloat as it was first proposed as an argument against the early varieties of cognitive extension. I will argue that if socially extended cognition is indeed modeled on artifact-extended cognition, it falls prey to this problem in such a blatant way that it is clear that we must understand socially extended cognition differently. In the section “Implementation-Extension and Impact-Extension,” I will propose a characterization of the difference between artifact-extended cognition and socially extended cognition. I will argue that cognition can be considered to be extended in different ways. 
Whereas artifact-extended cognition extends cognitive processes by extending their implementation base, socially extended cognition extends their impact. In the section “Causality, Coordination, and Reciprocal Cognitive Dependency,” I will defend and elaborate on the distinction between “implementation-extension” and “impact-extension” by arguing that, crucially, the chain of items causally linked to a person whose cognition is socially extended involves other human beings—other cognitive systems. On the one hand, this introduces social normativity into the extended system, which is absent in artifact-extended cognition. On the other, it introduces the idea of reciprocal cognitive dependency between people. I will propose the label “symbiotic cognition” for networks of mutually dependent cognitive systems. In the section “Cognitive Symbiosis, Weak and Strong,” I will define the notion of symbiotic cognition. I will allow for the possibility of socially extended cognition that is not symbiotic cognition, and will distinguish between weak forms of symbiotic cognition, which do not require social institutions, and strong forms that do. In the section “Symbiotic Cognition, Cognitive Integration and Distributed Cognition,” I will compare the idea of symbiotic cognition with integrated cognition and distributed cognition. The idea that human cognitive systems are in fact extended by items outside our brains and bodies has been developed and defended by many philosophers for over two decades now. Disregarding precursors, the idea that started the debate on extended cognition—then labeled “active externalism”—was based on the case of Otto, who consults a notebook where others consult biological memory. Brain-chauvinists think there is a relevant difference. On their view, Otto does not remember the MoMa address. Rather, he believes the address is in the notebook, perceives the contents of the notebook and forms a new belief about the address. On this reconstruction all the mental work is done in Otto's head, not outside it. 
There is a response to this “Otto two-step.”1 The wider variety of items that our cognition is said to be coupled with, which is the main topic of this paper, stems mainly from the second wave of extended mind theories. These are not based on the parity principle, but on the complementarity principle. A contract, for example, is in some real sense an expression of several minds externalized and extended into the world, instantiating in external memory an agreed-upon decision, adding to a system of rights and laws that transcend the particularities of any individual's mind. Contracts are institutions that embody conceptual schemas that, in turn, contribute to and shape our cognitive processes (p. 6). The point I wish to make in this section is that somewhere along the way in ascending from notebooks as possible cognitive extensions to socio-cultural institutions, a crucial distinction is ignored. The connection between individuals and the items their cognition is extended with is described as more or less similar—it is described as “coupling.” What coupling entails must depend on what we couple with. Hence, in order to maintain similarity throughout the ascent from notebooks to institutions, items that are said to extend our cognition are very often described as physical objects. Language, for example, is described as a set of physical symbols. But apart from involving a set of physical symbols, language is also a social practice. It is not just scribbles and sounds, but also the way we use these in social interactions. This certainly goes for social institutions too. Legal systems involve courtrooms, togas, and in some countries wigs. But they also involve rules, conventions and practices. A contract is an externalized memory not just because of its physical properties but mainly because of the way these pieces of paper (or bunches of bits) function in legal practice. 
Gallagher acknowledges that cognition can also be extended by institutions that are less formal and reinforced, such as practices involving cultural conventions: In solving a problem like keeping my cattle in my pasture, my bodily manipulations of a set of wooden poles and wire are not necessarily part of the cognitive process; but my engagement with the particular local custom/practice of solving this problem with a fence (and even a specific kind of fence) is a cognitive part of the problem solving. In such cases, cultural practices, local know-how in the form of established practices, etc., in either formal or informal ways, enter into and shape the thinking process. Without such cultural practices, rules, norms, etc. our thinking – our cognitive processes – would be different (p. 10). Interestingly, the difference between physical objects and practices is not seen as an obstacle for claiming that coupling is basically similar when we move from notebooks to institutions: Just as a notebook or a hand-held piece of technology may be viewed as affording a way to enhance or extend our mental possibilities, so our encounters with others, especially in the context of various institutional procedures and social practices, may offer structures that support and extend our cognitive abilities (p. 4). Let us call cognition that is extended by physical objects “artifact-extended cognition.” The question I would like to pose is whether Gallagher, Clark and Menary are correct (on some interpretations of their views) in assuming that socially extended cognition is really continuous with artifact-extended cognition. Are coupling with artifacts and coupling with practices really similar enough to warrant the use of the same label—extended cognition—in both instances? In order to begin driving a wedge between artifact-extended cognition and socially extended cognition, it is useful to look at what is known as the problem of cognitive bloat. 
From the perspective of artifact-extended cognition, Otto-and-notebook-style, the response to the threat of cognitive bloat is to tighten the constraints on what counts as co-constituents of cognition. Clark proposes four extra constraints: (1) that the resource be reliably available and typically invoked; (2) that any information thus retrieved be more or less automatically endorsed, not usually subject to critical scrutiny, and deemed about as trustworthy as something retrieved clearly from biological memory; (3) that information contained in the resource should be easily accessible as and when required; (4) that the information in the notebook has been consciously endorsed at some point in the past and indeed is there as a consequence of this endorsement (p. 79). This does limit the possible candidate artifacts that may be said to extend cognition considerably. Arguably, the remaining problem is a matter of intuition. It is surely the case that even with these extra criteria our extended minds are bigger and more scattered than traditional brain-based or neo-Cartesian intuitions would make them out to be. But they are not so large and scattered that it is incoherent to think of them as single cognitive systems. One of the reasons for this is that the external items we are said to be coupled with are not themselves coupled with still further structures in ways that satisfy 1–4. But this is exactly the problem with socially extended cognition. If we are coupled with social institutions, we are coupled with structures that are constituted, among other things, by (very many) other human beings. These human beings are themselves coupled with further structures in the same way we are coupled with them. And this makes the cognitive system implausibly large and scattered—if we are able to draw boundaries at all. 
For this reason, even philosophers who are sympathetic to the idea that human cognition involves massive coupling with our external niches are reluctant to think of social institutions as co-constituents of our cognitive systems. The point I wish to make here is not that socially extended cognition clearly falls prey to the problem of cognitive bloat. Rather, the point is that (i) it would fall prey to the problem of cognitive bloat if socially extended cognition is a proposal that is modeled completely on the idea of artifact-extended cognition, and (ii) if it is interpreted in this way, it falls prey to the problem of cognitive bloat so obviously and blatantly that it seems unlikely that socially extended cognition is intended to be modeled completely on artifact-extended cognition. Gallagher is ambivalent here. On the one hand, he does present socially extended cognition as a proposal that is somehow derived from the idea of artifact-extended cognition. On the other hand, however, he distances himself from Clark's functionalism and the way Clark deals with the problem of cognitive bloat. Tightening the restrictions on what counts as proper cognitive extension in the way Clark does emphasizes the idea that the brain is still the central hub of any cognitive system, however extended this system is. And it is precisely such brain-centeredness that Gallagher wishes to overcome with the idea of socially extended cognition. But now the question arises: how is avoiding brain-centeredness and including social practices and institutions in the list of co-constituents of our cognitive processes going to help sidestep the problem of cognitive bloat? I believe the answer here is to distance the idea of socially extended cognition even more from the idea of artifact-extended cognition than Gallagher does.2 
Differently put: some of the cognitive work in our interactions with the world has to be performed by items external to our brains and bodies. I believe there are different ways in which these descriptions can be made more precise. And I believe that the way in which we do this depends on our views of what cognition consists of. In this section I will sketch two different ways of unpacking the idea of cognitive extension. One is tailor-made for the functionalist view of cognition that underlies Clark-style artifact-extended cognition. The other is more suitable for Gallagher-style enactivist views of cognition—even though I am less sure he would accept it. To say that cognition is extended is to say that items external to our brains and bodies expand our cognitive repertoire in such a way that they can somehow be said to co-constitute the “mechanisms” of the cognitive system responsible for that repertoire. The meaning of cognitive extension that best fits a functionalist outlook on cognition such as Clark's is what I will label “implementation extension.” According to functionalists, cognitive states and processes are to be characterized as functional role states and transitions from one set of functional states to another. Feature (iv) does not imply that reciprocal co-constitution of cognitive abilities is necessarily symmetrical. It may well be that by playing different roles in the same social structure we co-constitute different cognitive abilities in each other. Feature (v) is deliberately vague about the nature of social structures. The term might refer to social institutions, but this need not be the case. There is structure in human interactions when there are identifiable roles that interact in ways that allow us to discern regularities. 
The sense in which social structures “pre-exist” before symbiotic cognitive processes can occur is metaphysical, and not necessarily temporal: without the context of a social structure, a symbiotic cognitive process cannot exist as such. Various forms of collective cognitive activity satisfy (i–v) without being instances of the type of cognition Gallagher refers to, i.e., cognition in the context of social institutions. Group memory is a well-researched case in point. While some researchers argue that memory storage and retrieval by groups is impaired relative to the sum memory abilities of the individual members of a group, there is also evidence to the contrary. In general, task division in couples that live together for some time often rigidifies into shared routines, which are usually based on tacit knowledge of individual proclivities and talents, and which usually amount to the automatic complementing of each other's cognitive efforts. Such routines would make the couple into a symbiotic cognitive system in terms of the above definition. Let me take the following, simplified case as an example: when on vacation, my wife always takes care of train-, plane- or boat-tickets and the planning of when we should go where and what to see, whereas I do navigation and hotel arrangements, tents (in which case my wife determines the campsite) and guesthouses. This (simplified) arrangement satisfies (ii–v): (ii) My actions of arranging tent-gear and navigating result in having a complete vacation, including interesting trips, a nice campsite, a boat trip, etcetera, because they are done in the context of a (weakly) symbiotic system. This is a form of impact extension; outside of this context the same actions would not have that effect. (iii) There is most certainly a kind of normativity involved in our division of cognitive labor. 
This is based on precedent and on shared assessment of talents, which leads to mutual expectations. (iv) We co-constitute each other's cognitive abilities. By dividing complementary cognitive tasks and by using many automatized interaction routines that let us share information when necessary (and not when not necessary), we co-constitute each other's ability to realize a full vacation with roughly half the effort. (v) These routines—our implicit knowledge of the way in which we divide cognitive labor and share results when necessary—count as social structure of the relevant kind.5 But even if we disregard these reasons, such cases are not instances of symbiotic cognition. For, first and most importantly, these relations do not satisfy (iv): the cognitive extension is a one-way affair and not reciprocal—teachers extend the cognition of students, but not vice versa, and writers extend the cognitive abilities of readers, but not vice versa. This might be argued to affect (ii), (iii), and (v) as well. To start with (v), the social structures involved are not structures of the right kind because they do not involve mutual dependency. Also, these relations do not involve the right kind of normativity. There may certainly be normativity involved in these relations or in playing the relevant roles involved, but not necessarily normativity of the kind that renders the behavior of others predictable so that cognitive engagements by the agent are impact-extended. Which means that (ii) is not satisfied either. Having said that, though, nothing hinges on these assessments of the applicability of (ii), (iii), and (v); the non-applicability of (iv) suffices to rule out these cases as cases of symbiotic cognition.6 Are there forms of socially extended cognition that do not satisfy (ii–v)? 
I believe that that is possible, depending on how widely we apply the term “socially extended.” For example, the relation between a student and a teacher might be described as socially extended cognition—the student's cognition is extended by the teacher's. Likewise, a reader's cognitive abilities might be thought of as being extended by the cognitive activities of a writer. There are reasons to be cautious here in describing such cases as instances of socially extended cognition. I have labeled forms of symbiotic cognition that do not involve social institutions “weak symbiotic cognition,” because they differ in one important respect from socially extended cognition of the type Gallagher discusses. I believe that the discussion of the previous sections suffices to show that (i–v) apply to Gallagher's cases. But these cases have a striking feature that is lacking in the case of a married couple jointly planning and having a vacation or the case of collective memory. The cognitive engagements Gallagher discusses are only intelligible within the context of their respective institutions. Many of our daily cognitive activities have this property. Signing a contract is not intelligible in abstraction from a legal system, voting is not intelligible in abstraction from a social structure which allows for joint decision making, being polite by shaking hands is not intelligible in abstraction from a system of cultural conventions, etcetera. What I will label “strong” or full-fledged symbiotic cognition, then, adds one more requirement to (i–v): (vi) cognitive processes are possible and intelligible only within the context of a social institution. The individual cognitive processes within weak symbiotic systems, by contrast, are intelligible in abstraction from the system. My activity of navigating or booking a hotel does not require my wife's activity of planning trips and booking tickets to be intelligible. 
Neither does the individual memory-contribution of an individual to a transactive memory system require reference to other people to be intelligible as a memory process.7 Weak symbiotic cognitive systems combine individual cognitive processes, which do not require the system to exist, into a larger system that is beneficial to participants. Strong symbiotic cognition, by contrast, cannot be reduced to a collection of individually intelligible cognitive processes. It is only in connection with the whole system that strongly symbiotic cognitive processes are cognitive processes at all. It is not just that the whole is more than the sum of its parts (see footnote 7); the point is rather that there are no identifiable relevant parts without the notion of the whole. Crucially, the examples of married couples with ingrained automatized routines, or transactive memory systems, are not examples of strong symbiotic cognition, for the individual cognitive processes within such symbiotic systems are intelligible in abstraction from the system. What it means to sign a contract, by contrast, involves reference to a very complex social structure in which rights and obligations exist and can be changed. “Rights and obligations” refers to very specific norm-guided, socially structured behavior. It is not possible to identify that behavior fully, in turn, without referring back to contracts. The roles and regularities of the social structures involved in strong symbiotic cognition are holistically inter-defined. 
Shaking hands as a greeting opens up a new space of social interaction possibilities due to the fact that those involved all participate in the same system of cultural conventions—it is a “move” within the “game” of social etiquette that is meaningless or weird to anyone who does not share your conventions. The holistic inter-defining of roles and regularities implies that strongly symbiotic cognitive engagements or processes are necessarily aimed at accomplishing a given state of affairs. Feature (vi), then, transforms weak symbiotic cognition into a qualitatively different kind of cognition. If (vi) is added to (ii–v), and the five features together are taken as interconnected, then (ii–v) are substantially strengthened. Of course feature (v) is further defined by limiting the pre-existing social structures to social institutions. But this affects the other features too. Impact extension (ii) within a strongly symbiotic system is substantially more encompassing than impact extension in a weakly symbiotic system. Setting a whole reorganization of a company in motion by raising a single hand illustrates the point. This is a different scale of impact-extension than having a whole vacation with half the work. (iii) The normativity involved in social institutions is not merely dependent on precedent and implicit assessment of talents and proclivities. Precisely because it applies to much larger groups, it is usually reinforced, either explicitly, as in legal systems, or implicitly, as in a system of social etiquette. (iv) The co-constitution of cognitive abilities in strongly symbiotic systems is much more elaborate than in weakly symbiotic systems. First of all this is because many more people are involved. 
But secondly this is because most social institutions instill a wide range of “new” cognitive abilities in those who help to enact them. As said, I take Gallagher to refer to strong or full-fledged symbiotic cognition in his discussion of socially extended cognition. In the remainder of this paper I will refer to this type of cognition simply as “symbiotic cognition.” So far, I have limited the discussion to the literature on extended cognition, arguing that symbiotic cognition differs from “normal,” artifact-extended cognition in some important respects. There are other theories about the essential embeddedness of our cognitive systems. Richard Menary's notion of cognitive integration does emphasize the expansion of our cognitive repertoire by engaging with a wide variety of cultural items, including social structures, but without making claims about the extension of our cognitive systems as such. Edwin Hutchins' notion of socially distributed cognition, by contrast, allows for whole social institutions to count as cognitive systems. I have argued that the literature on extended cognition has swept an important distinction under the carpet; it has not sufficiently recognized that socially extended cognition is—at least very often—a type of cognition of its own, fundamentally different from artifact-extended cognition. But it may well be that this distinction is respected by the notions of integrated cognition or distributed cognition. In that case I may have said nothing new. I will briefly argue, however, that neither cognitive integration nor distributed cognition is very sensitive to the distinction I have argued for above. The idea of cognitive integration is in many respects very close to the idea of extended cognition. Cognitive integration is also close to the enactivist view in that it emphasizes that cognition consists of bodily manipulations of the world, often involving man-made cognitive devices. 
Cognitive processes are cognitive practices, and these can be hugely expanded by involving a host of different items. The items mentioned in the cognitive integration literature fall in the same (wide) range as the devices referred to by extended cognition theorists. The crucial difference with extended cognition is that while, according to Menary, items such as linguistic symbols, smart phones, abacuses and social institutions allow for a whole new range of cognitive practices, they are enabling conditions for such practices rather than parts of our minds. In this respect, Menary is closer to those who argue that external devices scaffold our cognition rather than extend it. It should be noted that the notion of cognitive extension that Menary rejects is a variant of what I have labeled “implementation extension” above. Even though he tends toward an enactivist notion of cognition rather than a classical functionalist one, he still speaks of cognition “supervening” on a realization base and thinks of cognitive extension in terms of enlarging this base. This raises the question whether perhaps impact-extension might be compatible with the idea of cognitive integration. The similarity between the enactivist notion of cognitive engagement and Menary's notion of cognitive practices might suggest this. Indeed, there are clear similarities. Menary speaks of the “transformation” of our minds by cognitive artifacts and our interactions with them in a way that suggests that manipulating these artifacts has a cognitive yield in the context of cognitive practices that the same manipulation would not have outside of such practices. The impact of an ignorant infant who happens to manipulate numeric symbols such that they accidentally represent a calculation differs from the impact of a mathematically trained person who performs the same manipulation. 
This is akin to the difference between someone coincidentally putting a scribble on a piece of paper and someone signing a real contract. The practice extends the impact of the manipulation. However, even though it may be argued that this type of “transformation” of cognitive processes is very much like impact-extension, this does not mean that the idea of cognitive integration already contains or implies the notion of symbiotic cognition. On Menary's view, all cognitive integration is somewhat like impact-extension. The contrast between socially extended/integrated and artifact-extended/integrated cognition—or between what I would prefer to call extended and symbiotic cognition—is not made. Hence, in this respect it will not help to abandon extended-cognition talk in favor of cognitive integration. What about socially distributed cognition? On Hutchins' original proposal, socially distributed cognition allows whole social organizations to count as cognitive systems. In many respects, therefore, symbiotic cognition can be viewed as a variant of the cognitive ecosystems view implied by later versions of the idea of distributed cognition. The one thing that is missing, however, as in the case of cognitive integration, is the relevant contrast between extended and symbiotic cognition. I have argued that there is an important distinction between cognitive extension as the extension of the causal-functional implementation base of cognitive processes, which is best applicable in cases where cognition is extended by physical artifacts only, and cognitive extension as the idea that our cognitive engagements with the world have massively enhanced impact in the context of normative, rule-based coordination of actions in a social practice. Though both types of cognition might equally well be called “extended,” they are extended in radically different ways. 
In order to mark this difference, and given the reciprocal cognitive co-constitution between humans in impact-extended cognition, I have proposed to label what is now known as socially extended cognition “symbiotic cognition.” The author confirms being the sole contributor of this work and has approved it for publication. The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Our previous study demonstrated that α2AP knockout mice exhibit spatial memory impairment in comparison to wild-type mice, suggesting that α2AP is necessary for the fetal and/or neonatal development of the neural network for spatial memory. However, it is still unclear whether α2AP plays a role in the memory process. The present study demonstrated that adult hippocampal neurogenesis and remote spatial memory were enhanced by the injection of an anti-α2AP neutralizing antibody in WT mice, while the injection of α2AP reduced hippocampal neurogenesis and impaired remote spatial memory, suggesting that α2AP is a negative regulator in memory processing. The present study also found that the levels of α2AP in the brains of old mice were higher than those in young mice, and that there was a negative correlation between the α2AP level and spatial working memory. In addition, aging-dependent brain oxidative stress and hippocampal inflammation were attenuated by α2AP deficiency. Thus, an age-related increase in α2AP might cause cognitive decline accompanied by brain oxidative stress and neuroinflammation. Taken together, our findings suggest that α2AP is a key regulator of the spatial memory process, and that it may represent a promising target to effectively regulate healthy brain aging. 
In addition to α2AP, plasmin is present in the mouse brain. α2AP−/− mice exhibit impaired memory, including working memory, spatial memory and fear-conditioning memory, in comparison to WT mice. Our previous study demonstrated that the length of dendrites was markedly shorter and the number of dendritic branches markedly lower in hippocampal neurons from α2AP knockout (α2AP−/−) mice than in those from α2AP+/+ mice. It is widely accepted that the hippocampus is a crucial brain region for learning and memory. Adult neurogenesis in the hippocampus is one of the most important mechanisms for the spatial memory process; the inhibition of adult neurogenesis by irradiation impairs long-term spatial memory, while the enhancement of neurogenesis after running facilitates LTP and spatial memory. α2AP-deficient (α2AP−/−) mice were generated by homologous recombination using embryonic stem cells, as described previously. Male 12-week-old C57BL/6J mice were purchased from Japan Charles River. Spontaneous alternation was calculated as the ratio of the number of alternations to the total number of arm entries minus 2. Mice received visible platform pre-training on the first day, followed by hidden platform training for two days. In the hidden platform training, two sessions consisting of four trials per session were performed on two days. Mice were placed into the pool from four different directions in each of the four trials, and the escape latency was measured by a video-tracking system. The second day of training was performed the following day, followed by a 60-s probe test without a platform. The time in each quadrant was analyzed by a video-tracking system (SMART). 
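The alternation score described above (number of alternations divided by total arm entries minus 2) can be computed directly from the sequence of arm entries. A minimal sketch, assuming the conventional three-distinct-arms window definition of an alternation, which the source does not spell out:

```python
def alternation_ratio(arm_entries):
    """Spontaneous-alternation score: alternations / (total entries - 2).

    An alternation is counted when three consecutive entries visit three
    distinct arms (arm labels are arbitrary; this helper is illustrative).
    """
    if len(arm_entries) < 3:
        raise ValueError("need at least three arm entries")
    alternations = sum(
        len({arm_entries[i], arm_entries[i + 1], arm_entries[i + 2]}) == 3
        for i in range(len(arm_entries) - 2)
    )
    return alternations / (len(arm_entries) - 2)
```

A perfectly alternating sequence such as A, B, C, A, B, C scores 1.0, while repeatedly shuttling between only two arms scores 0.0.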
To evaluate long-term memory, the probe tests were performed 1 and 3 months after hidden platform training. After the first day of MWM training, mice were anesthetized with 1.8–1.9% isoflurane, and then injected with 200 nM of α2AP or 0.1 μg/μL of an anti-α2AP neutralizing goat antibody (or with saline or normal goat IgG as the respective controls) in a total volume of 20 μL into the lateral ventricle with a two-step needle attached to a glass syringe. After injection, the needle was held at the site for 1 min to prevent reverse flow. 5-Bromo-2′-deoxyuridine (BrdU) was dissolved in saline and injected intraperitoneally at 24-h intervals for 7 days at 50 mg/kg. The mice were perfused with PBS and 4% paraformaldehyde (PFA) in PBS; the brains were then fixed in 4% PFA for 48 h and soaked in 30% sucrose for 5 days. Thirty-micrometer-thick frozen coronal sections were prepared and stained with anti-BrdU mouse antibody and anti-Ki67 rabbit antibody after 2 N HCl treatment at 37 °C for 30 min, neutralization with 0.1 M boric acid (pH 8.5) at room temperature for 10 min, and blocking with a Mouse on Mouse blocking kit (blocking solution in PBS). The sections were then incubated with anti-BrdU and anti-Ki67 antibodies at 4 °C overnight. After washing with PBS, the sections were treated with Alexa 488-conjugated goat anti-mouse IgG and Alexa 546-conjugated goat anti-rabbit IgG, and then coverslipped in Prolong Gold™ antifade reagent (Invitrogen). The specimens were observed using a confocal laser microscope (NIKON A1R). For doublecortin staining, sections were treated with 0.3% H2O2 in methanol at room temperature for 10 min. The sections in Retrievagen A (pH 6.0) (BD Biosciences) were autoclaved and washed in PBS, then incubated with blocking solution at room temperature for 1 h, and treated with anti-doublecortin (Dcx) mouse antibody at 4 °C overnight. 
After washing with PBS, they were incubated with biotinylated anti-rabbit IgG antibody at room temperature for 30 min. The detection of antibody–antigen complexes was accomplished using a Vectastain Elite ABC kit (Vector Laboratories) and a Metal-Enhanced DAB Substrate kit. The immunostained sections were photographed using a microscope with a digital camera. Images were taken at full resolution with a single image dimension set at 1360 × 1024 pixels. After perfusion with PBS, the hippocampi and cerebral cortexes from the mice were homogenized and sonicated in lysis buffer: 10 mM Tris–HCl buffer (pH 7.5) containing 1% SDS, 1% Triton X-100, and a protease inhibitor cocktail. The protein concentration in each lysate was measured using a BCA protein assay kit. Lysates containing equal amounts of protein were subjected to SDS–polyacrylamide gel electrophoresis on a 10% acrylamide gel. Proteins were transferred onto PVDF or nitrocellulose membranes. After blocking with 3% skim milk in Tris-buffered saline containing 0.05% Tween-20 (TBS-T), the membranes were incubated with anti-α2AP goat antibody, anti-hexanoyl-lysine (HEL) mouse antibody, or anti-glyceraldehyde 3-phosphate dehydrogenase (GAPDH) mouse antibody at 4 °C overnight. After washing with TBS-T, the membranes were incubated with horseradish peroxidase-conjugated rabbit anti-goat IgG or goat anti-mouse IgG for 1 h. After washing again, immunoreactive bands were detected using Chemi-Lumi One Super with an LAS-3000 mini-image analysis system. The band intensities were quantified using the ImageJ software program. The same sample was included as a loading control in each Western blot analysis, and the band intensities were normalized.
The outflow fraction for the first 3 h was discarded, and then the dialysis sample perfused with ACSF at a flow rate of 1 μL/min was collected for 1 h under anesthesia with 1.5% isoflurane. The α2AP levels in the cerebrospinal fluid and plasma were measured with a mouse α2AP ELISA kit. The cerebrospinal fluid was collected as detailed below. Briefly, a guide cannula was implanted into the lateral right ventricle, fixed to the skull with dental cement, and then occluded with a dummy cannula. The mice were returned to their home cage and allowed to recover for 2 days. A microdialysis probe was inserted into the lateral right ventricle through the guide cannula under anesthesia with 1.5% isoflurane. The probe was perfused continuously at a flow rate of 10 μL/min with artificial cerebrospinal fluid (ACSF) containing 147 mM NaCl, 4 mM KCl and 3 mM CaCl2. For total RNA extraction, after the addition of CHCl3, centrifugation was performed at 15,000×g for 15 min. The resultant supernatants were each mixed with an equal volume of 2-propanol. After centrifugation, the pellets were rinsed with 75% ethanol/diethylpyrocarbonate (DEPC)-treated water and then dried. The pellets were each dissolved in an appropriate volume of DEPC-treated water as total RNA fractions. RNA from each sample (1 µg) was transcribed using ReverTra Ace-α according to the manufacturer's protocol. Quantitative PCR was performed to analyze the murine IL-6, TNF-α and IL-1β mRNA expression relative to the GAPDH mRNA expression using a MiniOpticon real-time PCR system.
We used the following primers: IL-6, 5′-GTTCTCTGGGAAATCGTGGA-3′ (sense) and 5′-GGAAATTGGGGTAGGAAGGA-3′ (antisense); TNF-α, 5′-AAATGGGCTTTCCGAATTCA-3′ (sense) and 5′-CAGGGAAGAATCTGGAAAGGT-3′ (antisense); IL-1β, 5′-CAAATCTCGCAGCAGCACA-3′ (sense) and 5′-TCATGTCCTCATCCTGGAAGG-3′ (antisense); and GAPDH, 5′-TGTGTCCGTCGTGGATCTGA-3′ (sense) and 5′-TTGCTGTTGAAGTCGCAGGAG-3′ (antisense). The fold-change in the expression levels of IL-6, TNF-α and IL-1β relative to the GAPDH expression as an endogenous control gene was determined by the −ΔCt method. Total RNA was isolated from the hippocampi and cerebral cortexes using TRIsure, followed by the addition of CHCl3. Data are reported as the mean ± standard error of the mean (SE). Differences among mean values were analyzed using a one-way analysis of variance (ANOVA) followed by an LSD post-hoc test or Student's t-test. P values of <0.05 were considered to indicate statistical significance. To first determine whether α2AP mediates adult neurogenesis in the dentate gyrus (DG) of the hippocampus, we examined the effect of a neutralizing antibody against α2AP on neurogenesis. The number of BrdU-positive and Ki67-negative cells, i.e., cells that had exited the cell cycle, was significantly increased in the DG of the anti-α2AP antibody-injected mice in comparison to that of the control mice, although there were few Ki67-positive (proliferating) cells in the DG of the mature adult mice (Fig. a, b). We next investigated the effect of α2AP injection on adult hippocampal neurogenesis and the spatial memory process. The number of BrdU-positive and Ki67-negative cells in the DG was significantly decreased by the injection of α2AP in comparison to the control mice (Fig. a, b).
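The relative-expression calculation described above can be sketched as follows; this is a minimal illustration of the −ΔCt normalization against GAPDH, with the function name and inputs chosen by us for clarity.

```python
def fold_change(ct_target, ct_reference):
    """Relative expression by the 2^(-dCt) method.

    dCt = Ct(target) - Ct(reference, e.g. GAPDH); lower Ct means more
    starting template, so expression relative to the reference gene
    is 2 ** (-dCt).
    """
    d_ct = ct_target - ct_reference
    return 2.0 ** (-d_ct)
```

For example, a target whose Ct equals the GAPDH Ct yields a relative expression of 1.0, and a target detected one cycle earlier (Ct one lower) yields 2.0.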
The levels and functions of α2AP in the brain were suggested to affect hippocampus-dependent spatial memory. Thus, in order to elucidate the relationship between the α2AP levels and brain aging accompanied by cognitive decline, we first compared the levels of α2AP in the brain between young and old mice. The expression of α2AP in both the hippocampus and cerebral cortex in old mice was remarkably higher in comparison to young mice, while there was no significant difference in the expression of plasmin (Fig. a, b). We next analyzed correlations between the relative levels of α2AP in the hippocampus and cerebral cortex, and spontaneous activity and the working spatial memory scores in the Y-maze test (Fig. c, d). We also assessed the degree of oxidative stress in α2AP−/− mice by detecting the levels of 13-hydroperoxyoctadecanoic acid (13-HPODE)-modified proteins, which react specifically with an anti-HEL antibody. For intraventricular injections, a guide cannula was implanted into the lateral right ventricle, fixed to the skull with dental cement, and then occluded with a dummy cannula. The mice were returned to their home cage and allowed to recover for 1 week. The Y-maze test was performed 1 week after the ventricular cannulation. On the day after the Y-maze test, an injection cannula was connected through polyethylene tubing to a Hamilton syringe that had been preloaded with 0.1 μg/μL of an anti-α2AP neutralizing goat antibody (R&D System) or normal goat IgG control (R&D System), and inserted into the guide cannula in the awake mice. Each solution was injected in a total volume of 20 μL. On the next day, the Y-maze test was performed again. Additional file 1: Figure S1. Impaired spatial working memory in old mice in comparison to young mice. The Y-maze test was performed in young and old C57BL/6J mice. The mice were placed in the center and allowed to explore the apparatus for 8 min.
The alternation of behavior was calculated as the ratio of the number of alternations to the total number of arm entries minus 2. The values represent the mean ± S.E. Statistical significance was evaluated using Student's t-test. *P < 0.05. Additional file 2: Figure S2. The effects of anti-α2AP neutralizing antibodies on spatial working memory in young and old mice. The Y-maze test was performed before and after an intraventricular injection of anti-α2AP neutralizing antibodies or control IgG in young and old C57BL/6J mice. The values represent the mean ± S.E. Statistical significance was evaluated using a paired t-test. *P < 0.05. Additional file 3: Figure S3. Comparison of the levels of inflammatory cytokines in the brain between young and old mice. The mRNA levels of IL-6, IL-1β and TNF-α in the hippocampus (A) and the cerebral cortex (B) were determined by real-time PCR. Statistical significance was evaluated using Student's t-test. *P < 0.05. Additional file 4: Figure S4. The effects of excess plasmin on spatial memory. (A) Plasmin or saline was intracerebroventricularly injected in 12-week-old C57BL/6J mice after the first day of training in the MWM test. On the second day, mice were repeatedly trained, and probe tests were performed 30 minutes and 1 month later. (B) The results of the training sessions. The latency to the target in each trial was measured. The values represent the mean values of 4 trials in each session. There was no difference in latency to the platform between the plasmin-injected mice and the control mice. (C) The results of the probe tests 30 minutes after training. The time in the target quadrant was longer than in the other quadrants in both groups of mice, and the time in each quadrant did not differ between the two groups. The swimming velocity of the plasmin-injected mice and the control mice did not differ to a statistically significant extent.
(D) The results of the probe tests at 1 month after training. The time spent by the plasmin-injected mice in the target quadrant was significantly shorter in comparison to the control mice, although the time in the target quadrant was still longer than the time in the opposite quadrant in both groups of mice. (E) The values represent the mean ± S.E. Statistical significance was evaluated using an ANOVA with an LSD post-hoc test. *P < 0.05, **P < 0.01."} {"text": "Background: To analyze the clinical characteristics of nephrotic syndrome (NS) with complications of cerebral sinovenous thrombosis (CSVT) in children. Method: Clinical, radiographic, laboratory, and treatment data obtained from 10 confirmed cases of NS with complications of CSVT were analyzed. All patients were followed up for at least 18 months. CSVT was diagnosed by cerebral computed tomography (CT) and/or magnetic resonance imaging (MRI) with or without magnetic resonance venography (MRV) of the cerebral vessels. Results: Among the 10 cases reported, 4 were steroid-sensitive NS with frequent relapse, 5 were steroid-resistant, and 1 was steroid-sensitive with one relapse. Common clinical manifestations were headache or ophthalmodynia complicated by vomiting, dizziness, convulsion, and coma. Neuropathologic signs were positive in some cases. Papilledema appeared in only one case, with winding of the veins. Cerebrospinal fluid was examined in three cases, with elevated pressure but normal cytological and biochemical results. D-dimer and fibrinogen levels were elevated, while prothrombin time and activated partial thromboplastin time were shortened. Five of the seven cases who underwent cranial CT had findings suspicious for cerebral thrombosis. Nine cases had cranial MRI, with abnormal signs in seven cases. All of the cases received MRV, confirming the diagnosis of CSVT. Conclusion: Clinical manifestations of NS with CSVT are not specific but varied.
Therefore, CSVT should be considered once neurological manifestations present. MRV is a better method for the diagnosis of CSVT. Thrombosis is one of the common complications of nephrotic syndrome (NS). Renal veins, veins of the lower extremities, and the pulmonary artery are the most common sites of thrombosis. The incidence of cerebral sinovenous thrombosis (CSVT) in children is much lower than that in adults, though the true incidence may be underestimated because many events are asymptomatic or diagnosed with delay. Ten children with NS complicated by CSVT were included in this study. They were admitted to the children's kidney disease center in the First Affiliated Hospital of Sun Yat-sen University between August 2005 and August 2020. NS was diagnosed according to the criteria defined as heavy proteinuria (urine protein > 50 mg/kg/day), hypoalbuminemia (ALB < 25 g/L), hypercholesteremia (cholesterol > 5.7 mmol/L), and clinical edema. We excluded patients with NS secondary to systemic disorders including systemic lupus erythematosus, hepatitis B-related nephropathy, vasculitis, and congenital NS. CSVT was diagnosed by cerebral computed tomography (CT) and/or MRI with or without MRV of the cerebral vessels. Urine protein was qualitatively graded from negative to four plus (– to 4+). Biochemical indexes including serum albumin, cholesterol, and serum creatinine were determined by an automatic biochemical analyzer. Coagulation parameters including D-dimer, prothrombin time (PT), activated partial thromboplastin time (APTT), and fibrinogen were measured by ELISA. Blood samples for the coagulation function test were collected in negative-pressure vacuum anticoagulant tubes, while other samples were also collected and examined by appropriate processes at the time of thrombosis. Cases enrolled were retrospectively analyzed.
Detailed clinical, radiographic, laboratory, and treatment data were obtained at the time of thrombosis. All patients were followed up for at least 18 months. The study was conducted in accordance with the principles outlined in the 1964 Declaration of Helsinki and with approval from the ethics committee of the First Affiliated Hospital of Sun Yat-sen University. Written informed consent was obtained from all of the patients' parents or guardians. A total of 10 patients were enrolled in the analysis in our center over a period of 15 years (about 200 children diagnosed with NS are admitted to the center every year). There were nine males and one female aged 3 to 10 years, with an average age of 6.1 years. None of the 10 cases had a family history of thrombotic events or previous thrombotic events. Thrombotic screening with Doppler ultrasound of other sites, such as lower limb veins and abdominal vessels, was done in all cases and was negative. The detailed characteristics of those patients are shown in the table. The course of NS ranged from 1.5 months to 5 years. Four cases were steroid-sensitive nephrotic syndrome (SSNS). None of the 10 patients had hypovolemic shock. For blood volume assessment, we conducted the passive leg raising (PLR) test and a rehydration test in all patients, which can predict volume responsiveness, together with a comprehensive assessment of hematocrit, urine specific gravity, urine volume, blood pressure, and capillary refill time (CRT); none of the patients developed hypovolemia. Except for cases 4, 9, and 10, the other seven patients had edema. Patients with SSNS were also diagnosed as frequent-relapsing NS (FRNS); among them, two developed CSVT during relapses while the other two were in remission. Five cases were steroid-resistant NS (SRNS), including two cases of minimal change disease (MCD) and one case of IgA nephropathy (IgAN) confirmed by subsequent renal biopsies.
There was one steroid-dependent NS (SDNS) case with one relapse. At the time CSVT was diagnosed, steroid was being administered in nine cases. The other case was given cyclosporin A (CsA) only, and steroid had been discontinued 1 week before neurological symptoms appeared. Four cases had received diuretic treatment. Clinical characteristics of those patients are summarized in the table. CSVT can be classified into three clinical types according to the interval between the onset of neurological symptoms and the suspicion of CSVT: intervals of <48 h, 48 h to 1 month, and more than 1 month are referred to as acute, subacute, and chronic CSVT, respectively. In this study, five patients presented with acute CSVT, in which convulsion occurred in one patient within 6 h after renal biopsy under basal anesthesia with ketamine. Three presented as subacute, and the other two patients with SSNS were chronic. CSVT was mainly manifested by headache, dizziness, convulsion, vomiting, altered consciousness, and even papilledema; these symptoms can present in isolation or in association with other symptoms. Eight of the 10 patients presented with headache, predominantly in the forehead. Headache in case 9 remitted spontaneously but later recurred and progressed to paroxysmal and severe headache 1 week before admission. Case 10 had predominant pulvinar headache that occurred as paroxysms during the day but remitted when he fell asleep. Case 8 also had paroxysmal headache and had been misdiagnosed as nasosinusitis. Only one case presented with ophthalmodynia in the right eye. Seven patients had non-projectile vomiting. One patient exhibited irritability. An episode of coma occurred in case 4, with loss of awareness and response to external stimuli. Papilledema occurred in case 9, with fundoscopy demonstrating elevation and blurring of the optic disc and swelling of the veins along its margins, without retinal hard exudates. Focal neurological deficits were not observed in the patients.
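The three clinical types defined above can be expressed as a simple classification rule; this is an illustrative sketch (the function name is ours, and "1 month" is approximated as 30 days, i.e. 720 h, which the paper does not specify).

```python
def csvt_type(interval_hours):
    """Classify CSVT by the interval between onset of neurological
    symptoms and suspicion of CSVT, per the cut-offs in the text:
    <48 h acute, 48 h to 1 month subacute, >1 month chronic.
    One month is approximated here as 30 days (720 h)."""
    if interval_hours < 48:
        return "acute"
    if interval_hours <= 720:
        return "subacute"
    return "chronic"
```

Under this rule, the patient who convulsed within 6 h after renal biopsy falls in the acute group.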
Meanwhile, abnormal neurological signs included low muscle tone of the extremities, absent patellar tendon reflex, neck rigidity, and Babinski's sign. Yet, four patients showed no abnormal neurological signs. All patients had normal blood pressure except for case 5. None of them had intracranial infection. Imaging of case 4 showed a low-intensity shadow in the corona radiata of the posterior limb of the internal capsule, which was suspected to be a flow artifact. Neuroimaging by MRI was conducted in nine patients; a summary of the results is listed as follows. The duration of anti-coagulation therapy was 6 months. Consciousness recovered gradually from coma for case 4; the patient was able to walk but was incontinent, and subsequent recovery data were not available because of withdrawal 18 months later. Case 5 had fully recovered nearly 20 days after treatment. The condition of the other eight patients also improved gradually. Case 9 occasionally complained of headache and dizziness. Initial symptoms along with positive signs of the nervous system all disappeared for the other cases. Urine protein was found to be negative for cases 9 and 10. For cases 1 and 2, proteinuria showed remission once CsA was administered and did not fluctuate after steroid withdrawal. For case 4, there was remission of proteinuria after 4 months. For cases 3 and 6, urine protein was persistently positive during the process of steroid reduction until tacrolimus was added. Case 7 had three relapses in 1 year; hence, tacrolimus was administered in combination with steroid, and similarly in case 8. Reexamination of MRV in cases 1 and 10 showed no abnormal findings after 6 months. The thrombi became smaller in cases 5 and 7 after 1 month and 10 days, respectively. In case 8, the thrombi had shrunk 1 month later and disappeared 4 months later. The detection rate of MRV (100%) was higher than that of MRI (77.8%), suggesting that MRV is a better modality in the diagnosis of CSVT.
Symptoms and signs fully resolved within 3 months to a year under comprehensive therapies of anti-coagulation, thrombolysis, and control of NS. Risk factors and predictive indexes for thrombosis in children with NS need to be further explored. The original contributions presented in the study are included in the article/supplementary materials; further inquiries can be directed to the corresponding author/s. The studies involving human participants were reviewed and approved by the ICE for Clinical Research and Animal Trials of the First Affiliated Hospital of Sun Yat-sen University. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. Written informed consent was obtained from the minor(s)' legal guardian/next of kin for the publication of any potentially identifiable images or data included in this article. YM and XJ designed the study, and reviewed and revised the manuscript. LR and LC carried out the initial analyses and drafted the initial manuscript. ZD and LR were responsible for the image collection. HZ and ZL coordinated and supervised the data collection. All authors contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "The cerebellum plays a crucial role in sensorimotor and associative learning. However, the contribution of molecular layer interneurons (MLIs) to these processes is not well understood. We used two-photon microscopy to study the role of ensembles of cerebellar MLIs in a go-no go task where mice obtain a sugar water reward if they lick a spout in the presence of the rewarded odorant and avoid a timeout when they refrain from licking for the unrewarded odorant. In naive animals the MLI responses did not differ between the odorants.
With learning, the rewarded odorant elicited a large increase in MLI calcium responses, and the identity of the odorant could be decoded from the differential response. Importantly, MLIs switched odorant responses when the valence of the stimuli was reversed. Finally, mice took a longer time to refrain from licking in the presence of the unrewarded odorant and had difficulty becoming proficient when MLIs were inhibited by chemogenetic intervention. Our findings support a role for MLIs in learning valence in the cerebellum. This study shows that cerebellar molecular layer interneurons (MLIs) develop responses encoding the identity of the stimulus in an associative learning task. Chemogenetic inhibition of MLIs decreased the ability of mice to discriminate stimuli, suggesting that MLIs encode stimulus valence. The cerebellum plays a pivotal role in coordinating movements through sensorimotor integration. It receives massive input through mossy fiber synapses onto granule cells (GCs) to form a circuit efficient in complex pattern separation. Importantly, subtle changes in the PC excitatory–inhibitory balance generate robust, bidirectional changes in the output of PCs6. Long-term depression (LTD) mediated by dendritic increases in Ca2+ in Purkinje cells (PCs) elicited by motor error signals conveyed by climbing fibers (CFs) is a classical model of plasticity10. However, recent studies indicate that CFs also signal reward prediction12 or decision-making errors13, and the cerebellum modulates association pathways in the ventral tegmental area (VTA) contributing to reward-based learning and social behavior14. Furthermore, although LTD at the PF–PC synapse is often considered as the substrate for cerebellar-dependent learning10, such learning can occur in the absence of LTD and may therefore involve other forms of plasticity15.
A potential substrate for plasticity is the PF–MLI synapse16, where LTP can be induced in slices by pairing MLI depolarization with PF stimulation17 and in vivo by conjunctive stimulation of PFs and CFs18, believed to underlie changes in the size of cutaneous receptive fields20. In addition, high-frequency stimulation of PFs alters the subunit composition of AMPA receptors, rendering them calcium impermeable21, a long-lasting change linked to behavioral modifications22. Furthermore, MLIs have been proposed to participate in cerebellar plasticity24, and Rowan et al. found graded control of PC plasticity by MLI inhibition25, suggesting that MLI inhibition is a gate for learning stimulus valence, which conveys information as to whether the stimulus is rewarded. However, whether there is a causal participation of MLIs in reward-associated learning is unknown. Plasticity in cerebellar circuit activity plays an important role in the generation of adequate output. Here we explored whether MLI activity plays a role in reward-associated learning in a go–no go task where mice learn to lick to obtain a water reward27. We applied two-photon microscopy28 to record Ca2+ changes in ensembles of MLIs and utilized chemogenetics to explore the functional role of MLI activity in learning. MLIs expressing the Ca2+ indicators GCaMP6/7 were imaged within the superficial 50 μm of the ML in head-fixed mice through a 2 × 2 mm glass window implanted above the cerebellar vermis (Fig.). The average diameter of the ROIs in this field of view (FOV) was 10.5 ± 4 μm. The ΔF/F response reached a peak when the animal was rewarded, and the response during the last second of odorant application increased as a function of percent correct performance for the S+ condition, whereas it remained stable for the S− condition, with a significant interaction between performance and the odorant (S+ vs. S−).
We utilized a linear discriminant analysis (LDA) to determine whether a hyperplane placed in the multidimensional space of ΔF/F could decode the stimulus. The bootstrapped 95% confidence interval (CI) for an LDA trained after shuffling the stimulus labels was used as a control. The time course for LDA decoding accuracy is shown for the session with the learning curve in Fig. (p values <0.01 and <0.05, 24 observations, 18 d.f., n = 4 sessions, 4 mice, GLM F-statistic 16.4, p < 0.001). Post-hoc tests corrected for multiple comparisons using the false discovery rate (FDR)33 yielded a statistically significant difference in decoding accuracy for either reinforcement or odorant vs. shuffled for proficient mice. The LDA analysis revealed that the accuracy for decoding the odorant identity from ΔF/F during the odorant period increases as the animal learns to differentiate between odorants. The dimensionality of MLI neural activity is low (p < 0.001, 72 observations, 54 d.f., n = 4 sessions, 4 mice, GLM F-statistic = 23.6, p < 0.001). Thus, MLI activity encodes the stimulus even when a single ROI is analyzed, indicating that stimulus information encoded by MLI activity is highly redundant. In order to determine whether the MLIs responded to the chemical identity of the odorant, as opposed to responding to the valence, we reversed odorant reinforcement. MLIs showed an increase in ΔF/F for the rewarded odorant before reversal, maintained the odorant-evoked increase in ΔF/F to what was then the unrewarded odorant immediately following reversal, and switched responses, with increases to the new rewarded stimulus, when the mice became proficient in the reversal task. Thus, after successful reversal the ΔF/F time course switched for the two odorants: the stimulus-induced increase in ΔF/F took place for the reinforced odorant, not for the chemical identity of the odorant.
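The decoding logic above (train a classifier on trial-by-trial MLI responses and compare its accuracy against a shuffled-label control) can be sketched with a leave-one-out nearest-centroid decoder; this is a deliberately simplified stand-in for the paper's LDA, using only the standard library, and all names and the toy data encoding are our assumptions.

```python
import random
from statistics import mean

def nearest_centroid_loo(trials, labels):
    """Leave-one-out decoding accuracy with a nearest-centroid rule,
    a simple stand-in for the LDA used in the paper. `trials` is a
    list of per-trial feature vectors (e.g. mean dF/F per ROI) and
    `labels` the stimulus identity (e.g. "S+" / "S-")."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    correct = 0
    for i, (x, y) in enumerate(zip(trials, labels)):
        centroids = {}
        for lab in set(labels):
            rest = [t for j, (t, l) in enumerate(zip(trials, labels))
                    if j != i and l == lab]
            centroids[lab] = [mean(col) for col in zip(*rest)]
        pred = min(centroids, key=lambda lab: dist2(x, centroids[lab]))
        correct += pred == y
    return correct / len(trials)

def shuffled_control(trials, labels, n=100, seed=0):
    """Chance-level accuracy estimated by shuffling trial labels,
    analogous to the shuffled-stimulus control for the LDA."""
    rng = random.Random(seed)
    accs = []
    for _ in range(n):
        shuffled = list(labels)
        rng.shuffle(shuffled)
        accs.append(nearest_centroid_loo(trials, shuffled))
    return mean(accs)
```

On well-separated toy data the decoder reaches 100% accuracy while the shuffled control hovers near chance, mirroring the contrast the paper draws between real and shuffled decoding.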
When the reward was reversed for a proficient mouse, the animal kept licking for the previously rewarded odorant, resulting in a fall in percent correct below 50%, and as the animal learned the new valence the percent correct rose back above 80% (Fig.). GLM analysis yielded a statistically significant difference (P < 0.001), but not between forward and reverse. These results indicated that for the proficient mouse it is possible to decode contextual identity, suggesting that MLI activity encodes valence. Next, we computed the accuracy for decoding the reinforced odorant for trials when the animal was proficient in either the forward or reverse task using LDA analysis. To gain a better understanding of the information on odorant valence present in the responses of the MLI ensemble, we asked whether stimulus decoding accuracy calculated with LDA for proficient mice differed between correct trials (Hits and CRs) and incorrect trials (Misses and FAs). If information in MLI activity reflects the outcome of the trial, we would expect decoding accuracy to be lower for incorrect trials. On the other hand, if information encoded by MLI activity reflects the stimulus regardless of trial outcome, decoding accuracy would not differ between correct and incorrect trials. The majority of ROIs exhibited changes in ΔF/F during the odorant application regardless of whether the trial was a correct response (Hit or CR) or an error. In contrast, ΔF/F did not differ between Hits vs. Misses or CRs vs. FAs, while Hits/Misses differed from CRs/FAs. To survey the information encoded in MLI activity in trials with different outcomes in proficient mice, we utilized LDA analysis to decode the stimulus. In addition, GLM analysis did not find a significant difference between outcomes or time periods (odorant vs. reinforcement), indicating that MLI activity reflects the stimulus regardless of trial outcome.
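The behavioral bookkeeping above (Hits and CRs are correct, Misses and FAs are errors, with an 80% proficiency criterion) can be sketched as follows; the function names are ours, chosen for illustration.

```python
def percent_correct(hits, misses, crs, fas):
    """Percent correct in the go-no go task: Hits (lick on S+) and
    Correct Rejections (no lick on S-) are correct; Misses and
    False Alarms are errors."""
    total = hits + misses + crs + fas
    return 100.0 * (hits + crs) / total

def is_proficient(hits, misses, crs, fas, criterion=80.0):
    """Proficiency criterion used in the paper: >=80% correct."""
    return percent_correct(hits, misses, crs, fas) >= criterion
```

For example, a session with 9 Hits, 1 Miss, 8 CRs and 2 FAs scores 85% correct and counts as proficient, whereas chance-level responding (equal counts in all four outcomes) scores 50%.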
This analysis determined that odorant-induced MLI Ca2+ changes carry information on the stimulus, as opposed to the outcome. We performed this analysis for sessions that included at least one error trial (Miss or FA) when the animals were proficient. In these time series the majority of the ROIs exhibited changes in ΔF/F. We next examined the correlation between ΔF/F and the lick rate (LR)35. When the animal was proficient, it licked at least once in each of the two lick segments during application of the S+ odorant, increased licking after receiving the water reward, and refrained from licking in the S− trials. We computed the time derivative of ΔF/F (DtΔF/F) and the derivative of the lick rate (DtLR)34 and plotted their relationship during different time periods for proficient mice. Examples of these correlations for a single session for a proficient mouse are shown in Fig. The correlation between ΔF/F and the LR was larger during the reinforcement period compared to the odorant and pre-odorant periods, and was smaller for both the pre-odorant and odorant periods and between the reinforced and odorant periods for DtΔF/F vs. DtLR. These data suggested that changes in ΔF/F during the reinforcement period reflect changes in lick activity, while changes in ΔF/F during the odorant period are less dependent on licks and may depend on multiple variables. To further explore the relationship of MLI activity to licking, we examined ΔF/F in CR trials when the mouse did not lick, and time courses aligned to the beginning of ΔF/F changes after odorant addition. We compared ΔF/F and lick frequency during the odorant application period for CR trials when the animal did not lick during the two 2 s odorant response periods vs.
CR trials when the animal licked. We performed complementary studies of ΔF/F and lick frequency in the time period shortly after odorant application, when ΔF/F increases for both S+ and S−, before ΔF/F decreases for S− (and keeps increasing for S+). In Crus II, where neural activity of MLIs is thought to reflect licks, ΔF/F increases whenever there is an increase in licking frequency35. We aligned the traces to the point where the time derivative of ΔF/F increased above 0.03. We found that ΔF/F increased for both S+ and S−. In contrast, in this time period there was no increase in lick frequency. The data on the relationship between lick frequency and ΔF/F indicate that although there is a dependence between these two variables, the dependence is not consistent with a direct relationship between ΔF/F and lick frequency, as found in Crus II. Furthermore, we analyzed the relationship between the time course of ΔF/F and the LR. A GLM analysis of the changes in ΔF/F as a function of volume delivered in dry and wet lick conditions did not yield statistically significant changes. Finally, if MLI activity reflected reinforcement value, ΔF/F should show a positive correlation with the volume of sugar water delivered. We did not find a significant correlation between lick frequency and ΔF/F for either dry or wet licking. This experiment indicated that MLI activity does not reflect reward value, and is consistent with MLI activity reflecting valence. The correlation between ΔF/F and the LR raises the question of the dependence of ΔF/F on the different behavioral and stimulus variables37. We included event variables, whole-trial variables, and continuous variables, and found that the GLM explained a substantial percentage of the average ΔF/F variance, ranging from 23.5 to 95%. Furthermore, variables describing the licks contributed to the GLM fit during the outcome period.
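The period-by-period correlation analysis above can be sketched as follows; a minimal illustration using a Pearson correlation and a first-difference derivative (the function names, the sampling step, and the idea of slicing traces into pre-odorant, odorant, and reinforcement windows are our assumptions about the analysis, not code from the paper).

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation, e.g. between a dF/F trace and the lick
    rate within one trial period (pre-odorant, odorant,
    or reinforcement)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def derivative(trace, dt=1.0):
    """First-difference time derivative, analogous to the Dt dF/F and
    Dt LR quantities in the text."""
    return [(b - a) / dt for a, b in zip(trace, trace[1:])]
```

Computing `pearson_r` separately on the reinforcement-period and odorant-period slices of the two traces would reproduce the kind of period-wise comparison described above.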
In contrast, the other two variables (body kinematics and reinforcement history) did not differ in contribution during the different periods. These data indicate that odorant identity contributes to modeling MLI activity during the odorant application period, while licks contribute to MLI activity during the reinforcement period.

In order to determine whether activity of MLIs plays a role in behavioral responses in the go–no go task, we used a Cre-dependent AAV virus to express the inhibitory DREADD receptor hM4Di in MLIs in six PV-Cre mice (hM4Di group)38. To control for off-target effects of clozapine-N-oxide (CNO)39, we injected another group of six PV-Cre mice with a Cre-dependent mCherry AAV virus (control group). CNO was administered 40 min before the start of the session (control-CNO and hM4Di-CNO). Mice in all groups, with the exception of hM4Di-CNO, attained proficiency (≥80% correct). GLM analysis found a significant difference (24 observations, 20 d.f., n = 6 mice, GLM F-statistic = 13.7, p < 0.001), and post-hoc tests indicated that the hM4Di-expressing group differs between CNO and saline (p < 0.01), while there are no significant differences between CNO and saline for control mice, indicating an effect of CNO-induced inhibition of MLIs expressing hM4Di on behavioral output and the absence of off-target CNO effects. A further GLM analysis found no significant overall difference (60 observations, 56 d.f., n = 6 mice, GLM F-statistic = 2.96, p > 0.05); yet GLM found a statistically significant difference for S− for the interaction between CNO and hM4Di expression. We obtained a similar impairment of performance in a separate set of experiments with two hM4Di mice and two controls, where animals discriminated between 1% Iso and MO, CNO was applied when the animal was naive, and we reversed the reward.
To model the effect of inhibitory chemogenetics, we simulated the circuit. Simulation of S+ trials showed that the odorant stimulus, through PF inputs, increases the activity of SCs that inhibit PC firing, eliciting an increase in the LR. Significant effects were found for CNO (p < 0.05) and for the interactions between S+ vs. S− and CNO.

The model is simple. For example, it does not include basket cells. Furthermore, we did not model the large increase we find in the LR when the mouse receives the sugar water reward. In addition, we did not perform an exhaustive study of how the variables affect the changes in lick strength. Therefore, other explanations should be explored in future studies with alternate computational models and an exhaustive search of the input variables. Regardless, our model provides plausible mechanisms for the results that can be tested in future experiments with slice electrophysiology and awake behaving recording.

We found that vermal MLIs developed a differential response to odorants in the go–no go task that switched when the valence was reversed. Decoding analysis revealed that when the animal was proficient the contextual identity of the odorant could be decoded from MLI responses. GLM analysis revealed that contextual identity made a large contribution to the fit of MLI activity during the odorant application period. Chemogenetic inhibition of MLIs impaired achievement of proficient discrimination of odorants. These data indicate that MLIs play a role in associative learning by encoding valence. CFs carrying error signals make profuse synaptic connections on the dendrites of PCs and elicit powerful excitatory dendritic Ca2+ spikelets47. Furthermore, CFs also signal reward prediction12 or decision-making errors13, and the cerebellum modulates association pathways in VTA enabling a cerebellar contribution to reward-based learning and social behavior14.
The increase in Ca2+ mediates LTD in subsets of synapses innervated by co-activated GC PFs carrying sensorimotor information relevant to learning49. However, recent studies by Rowan et al.25 revealed that increasing feedforward inhibition by MLIs can switch the valence of plasticity from LTD to LTP. In addition, adaptive changes in the vestibulo-ocular reflex elicited by CF optogenetic activation switched from increase to decrease depending on whether MLIs were co-activated25. Finally, MLIs gate supralinear CF-evoked Ca2+ signaling in the PC dendrite51. These studies suggest that the valence of learning is graded by MLI activity. The cerebellum has been implicated in mediating supervised learning through an iterative process whereby the response to an input is evaluated against a desired outcome, and errors are used to adjust adaptive elements within the system43.

Experiments were performed in PV-Cre mice and wild-type C57BL/6J mice. The animals were housed in a vivarium with a 14/10 h light/dark cycle. Food was available ad libitum. Access to water was restricted for the behavioral training sessions according to approved protocols; all mice were weighed daily and received sufficient water during behavioral training to maintain ≥80% of original body weight. Animals were housed at 72 ± 2 °F and a humidity of 40 ± 10%.

To perform immunostaining, mice were sacrificed and transcardially perfused with ice cold 4% paraformaldehyde, followed by incubation in 30% sucrose. After the brain was incubated in the sucrose solution, 60-μm-thick slices were cut with a cryostat. The slices were imaged using a confocal laser scanning microscope to determine the GCaMP expression patterns in the cerebellum. The slices were counterstained with DAPI.
Adult mice (8 weeks or older) were first exposed to isoflurane (2.5%) and then maintained anesthetized by intraperitoneal ketamine–xylazine injection. A craniotomy was made over the vermis of the cerebellum, centered at the midline 6.8 mm posterior to Bregma, leaving the dura intact (lobule VI). A square glass window (2 mm × 2 mm) of No. 1 cover glass was placed over the craniotomy and the edges were sealed with cyanoacrylate glue. The window was further secured with Metabond, and a custom-made steel head bracket was glued to the skull. The viral infection method we use has been reported to express GCaMP in MLIs, and not in PCs65. In one C57BL/6 animal each, we used AAV5-Syn-GCaMP6s or AAVrg-Syn-jGCaMP7f, with similar results to those obtained with GCaMP6f.

Odorants were delivered by an olfactometer that controlled valves to deliver a 1:40 dilution of odorant at a rate of 2 L min−1. The LR was calculated from the lick records, and the time course was convolved with a 2 s Gaussian for the experiments where we performed multiphoton calcium imaging. We did not convolve the LR records for the experiments with chemogenetics. The water-deprived mice started the trial by licking on the water port. The odorant was delivered after a random time interval ranging from 1 to 1.5 s. In S+ trials, the mice needed to lick at least once in two 2 s lick segments to obtain a reward (0.1 g ml−1 sucrose water).

Five mice were used for the go–no go experiments. Four mice were imaged when they were naive. The window became opaque for two of the mice, preventing MLI imaging for the reversal experiment. Movement of the mouse was imaged in the infrared, to prevent light interference with the non-descanned detection in the visible, using a 1 Megapixel NIR security camera at 30 frames/s; measurement of body movement with a single camera gives limited information.
Velocity of body movement was estimated using the Farneback algorithm coded in Matlab.

For chemogenetic inhibition of MLI activity, 1.2 μl of AAV8-hSyn-DIO-hM4D(Gi)-mCherry virus was bilaterally injected into six PV-Cre animals at ±0.5 mm lateral to the midline, 6.8 mm posterior to Bregma and 200–400 μm below the brain surface. For control, an AAV8-hSyn-DIO-mCherry virus was injected in the same position in 6 PV-Cre animals. This viral infection method results in expression of the protein in MLIs, and not in PCs65. We performed two separate experiments: (1) in the first, mice were injected with CNO before the behavioral sessions; (2) in the second, mice were injected with CNO 40 min before starting the session and were trained to discriminate between 1% Iso (S+) and MO (S−) for 6 sessions, for a total of 500–600 trials. The reinforcement was then reversed, and the animals were again injected with CNO 40 min before the sessions and trained to discriminate between MO (S+) and 1% Iso (S−) for 5 sessions, for a total of 400–500 trials.

The head-fixed two-photon imaging system consisted of a movable objective microscope (MOM) paired with an 80 MHz, ~100 fs laser centered at 920 nm30. The MOM was fitted with a single-photon epifluorescence eGFP filter path (475 nm excitation/500–550 nm emission) used for initial field targeting, followed by switching to the two-photon laser scanning path for imaging GCaMP at the depth of the MLIs. The galvanometric laser scanning system was driven by SlideBook 6.0. The two-photon time lapses were acquired at 256 × 256 pixels using a 1.0 NA/20x water immersion objective at 5.3 Hz.
All the animals were first habituated to the setup to minimize stress during the imaging experiments. All the imaging sessions started at least 10 min after mice had been head fixed. We searched for active MLIs while imaging zones in the vermis of lobule VI, between the midline and the paravermal vein, an area of the cerebellum where GCs acquire a predictive feedback signal or expectation reward. On the day of initial imaging, a FOV was selected to image a large number of active cerebellar neurons located in the most superficial planes of the molecular layer, including mostly SCs, and several batches of 6000 frames (a time series) were collected in each training session. After two-photon imaging, a second image of the vasculature was captured with wide field epifluorescence to reconfirm the field.

Images were analyzed with CaImAn31. CaImAn identifies different spatial components (addressed here as ROIs) and a component representing the background and neuropil signals. The baseline intensity (F0) was defined as the mean fluorescence intensity before trial start, defined as the time when the animal first licked. This was when fluorescence started to increase above baseline, and the odorant was added at a random time 1–1.5 s after trial start. Intensity traces (F) were normalized according to the formula ΔF/F = (F − F0)/F0. After CaImAn analysis, the ΔF/F traces of the spatial components were sorted and we assigned trial traces to different behavioral events and aligned them to trial start, odorant onset or water delivery. The time course for the average ΔF/F did not differ greatly between the different GCaMP variants, as would be expected for a fast firing interneuron with small increases in Ca2+ per action potential. Image sequences exhibiting axial movement were excluded; we did not find evidence of axial movement while the animal was engaged in the go–no go task.
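The ΔF/F normalization described in this methods passage (F0 as the mean pre-trial fluorescence, ΔF/F = (F − F0)/F0) can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' code; the frame rate matches the acquisition rate mentioned in the text, while the 1 s baseline window and the toy trace are assumptions.

```python
import numpy as np

def delta_f_over_f(trace, frame_rate_hz=5.3, baseline_s=1.0):
    # F0: mean fluorescence over the pre-trial baseline window,
    # mirroring dF/F = (F - F0) / F0 from the text. The baseline
    # length is an illustrative assumption, not the paper's value.
    n_baseline = max(1, int(round(frame_rate_hz * baseline_s)))
    f0 = trace[:n_baseline].mean()
    return (trace - f0) / f0

# toy trace: flat baseline of 100 a.u. for 5 frames, then a transient to 150 a.u.
trace = np.array([100.0] * 5 + [150.0] * 3)
dff = delta_f_over_f(trace)
# baseline frames map to 0.0 and the transient frames to 0.5
```

In this toy example the five baseline frames normalize to 0 and the transient to 0.5, i.e. a 50% fluorescence increase over baseline.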
In addition, we performed control imaging where we excited GCaMP6f at 820 nm, a two-photon excitation wavelength where fluorescence emission is Ca2+-independent68.

Statistical analysis was performed in Matlab 9.6. Statistical significance for changes in measured parameters for factors such as learning and odorant identity (S+ vs. S−) was estimated using a GLM, with post-hoc tests for all data pairs corrected for multiple comparisons using FDR33. The post-hoc comparisons between pairs of data were performed either with a two-sided t test or a ranksum test, depending on the result of an Anderson–Darling test of normality. 95% CIs, shown in the figures as vertical black lines or shading bounding the lines, were estimated by bootstrap analysis of the mean by sampling with replacement 1000 times using the bootci function in MATLAB.

PCA was calculated using the Matlab Statistics Toolbox. Classification of trials using ΔF/F measured from all components in the FOV was accomplished via LDA in Matlab. ΔF/F values for all components for every trial except one were used to train the LDA, and the missing trial was classified by its fit into the pre-existing dataset. This was repeated for all trials, and was performed separately for analyses where the identity of the odorants was shuffled.

Following Litwin-Kumar et al.69, we defined the dimension of the system (dim) with M inputs as the square of the sum of the eigenvalues of the covariance matrix of the measured ΔF/F for all ROIs in the FOV divided by the sum of each eigenvalue squared:

dim(ΔF/F) = (Σi λi)² / Σi λi²

where λi are the eigenvalues of the covariance matrix of ΔF/F computed over the distribution of ΔF/F signals measured in the FOV. If the components of ΔF/F are independent and have the same variance, all the eigenvalues are equal and dim(ΔF/F) = M.
Conversely, if the ΔF/F components are correlated so that the data points are distributed equally in each dimension of an m-dimensional subspace of the full M-dimensional space, only m eigenvalues will be nonzero and dim(ΔF/F) = m.

We used the Matlab fitglm function37 to fit the per-trial ΔF/F time course for mice proficient in the go–no go task with a GLM. We included event variables, whole trial variables and continuous variables. Continuous variables quantified kinematics, including the LR, the derivative of the LR, and the velocity and acceleration of movements made by the base of the tail of the head-fixed animal during the trial.

For stellate cell simulation we used reconstructed mouse SC morphology available in Neuromorpho using NLMorphologyViewer 0.3.0 (http://www.neuronland.org). We proceeded to create an electrical compartmental model with passive and active properties of the SC membrane. The passive parameters of the SC model were adapted mainly from Molineux et al.71. We set the specific membrane resistivity Rm = 20 kΩ cm2, the specific membrane capacitance Cm = 1.5 μF cm−2 71, and the intracellular resistivity Ri = 115 Ω cm72. The input resistance Rin = 571.39 MΩ and membrane time constant τm = 40.30 ms were obtained by injecting a hyperpolarizing current into the soma. The time constant was obtained by a double exponential fit of the membrane voltage decay.
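The dimensionality measure defined above (the participation ratio following Litwin-Kumar et al.) is straightforward to compute. Below is a minimal sketch, not the authors' Matlab code, using hypothetical toy data to show the two limiting cases described in the text: independent equal-variance components give dim ≈ M, while fully correlated components give dim ≈ 1.

```python
import numpy as np

def dimensionality(dff):
    # Participation ratio: (sum of eigenvalues)^2 divided by the sum of
    # squared eigenvalues of the covariance matrix across ROIs,
    # matching the definition dim = (sum_i lambda_i)^2 / sum_i lambda_i^2.
    cov = np.cov(dff, rowvar=False)       # dff: (timepoints, ROIs)
    eig = np.linalg.eigvalsh(cov)
    return eig.sum() ** 2 / np.sum(eig ** 2)

rng = np.random.default_rng(0)
# M = 4 independent, equal-variance "ROIs": dim approaches M = 4
independent = rng.standard_normal((5000, 4))
# one shared signal copied to 4 "ROIs": only one nonzero eigenvalue, dim ≈ 1
shared = rng.standard_normal((5000, 1)) @ np.ones((1, 4))
```

With these toy inputs, `dimensionality(independent)` is close to 4 (up to sampling noise) and `dimensionality(shared)` is essentially 1.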
The resulting input resistance and time constant are within the range of experimental values measured in SCs73. We removed the axons from the original swc morphology file and exported the reconstructed morphology into a NEURON 7.5 hoc file. The model included delayed rectifier potassium currents (KDR), A-type potassium currents (KA) and transient calcium currents (CaT)75. All simulations were performed on the NEURON 7.5 simulator. Further information on research design is available in the Supplementary Information.

Peer Review File
Reporting Summary
Description of Additional Supplementary Files
Supplementary Movie 1
Supplementary Movie 2"} {"text": "A forced degradation study is a systematic characterization of the degradation products of an active pharmaceutical ingredient (API) under conditions harsher than normal, which accelerate degradation of the API. Forced degradation and stability studies are useful in the selection of proper packaging material and storage conditions for the API. They are also useful to demonstrate degradation pathways and degradation products of the API, and for further characterization of the degradation products using mass spectrometry. TGR5 is a G protein-coupled receptor whose activation promotes secretion of glucagon-like peptide-1 (GLP-1) and modulates insulin secretion. The potent and orally bioavailable TGR5 agonist ZY12201 activates TGR5, which increases secretion of GLP-1 and helps lower blood glucose levels in animal models. Hence it is necessary to establish and study the degradation pathway and stability of the API for better handling and regulatory approval. Forced degradation studies of ZY12201 showed the presence of one oxidative impurity during oxidative degradation in HPLC analysis. The oxidized product was further characterized by LC–MS to elucidate the structure of the impurity and characterize its degradation pathway.

The online version contains supplementary material available at 10.1007/s42452-021-04660-y.
Type 2 diabetes mellitus (T2DM) is a metabolic disorder sparked by insulin resistance and dysfunction of the β cells. Type 2 diabetes is generally characterized by an increase in resistance to insulin, which leads to higher blood glucose levels. Bile acids play a significant role in the emulsification of lipids in the body and in the absorption of vitamins A, D, E and K. ZY12201 has an hTGR5 EC50 of 57 pM and an mTGR5 EC50 of 62 pM, with favorable pharmacokinetic properties, and demonstrated in-vivo glucose-lowering effects in animal models (ED50 of 7.9 mg/kg and ED90 of 29.2 mg/kg). Abdelhameed et al. have reported related analytical studies.

Standards and samples of ZY12201 were synthesized at Zydus Research Center, Cadila Healthcare Ltd. Test solutions of ZY12201 at a concentration of 1000 µg/mL were prepared in diluents and sonicated for 5 min to dissolve the compound, and were then analyzed by HPLC.

Chromatography was performed on an Alliance Waters e2695 separation module with a YMC Triart C18 column (150 mm × 4.6 mm, 3 µm particle size) using a mobile phase of solvent A, 10 mM ammonium acetate buffer (pH 8.5 ± 0.05), and solvent B, 0.1% ammonia in acetonitrile, at a pump flow rate of 1.2 mL/min. The HPLC gradient ratios are given in the corresponding table.

The impurity obtained during oxidation was further characterized and identified by LC–MS. An electrospray LC–MS system was used for the identification of degradation impurities formed during stress testing studies. Chromatography was performed on a YMC Triart C18 column (150 mm × 4.6 mm, 3 µm particle size; YMC Co. Ltd., Japan) using a mobile phase consisting of mobile phase A (10 mM ammonium acetate (pH 8.5) with ammonia solution) and mobile phase B (0.1% ammonia in acetonitrile) at a flow rate of 1.2 mL/min.
The LC gradient program was applied as per the corresponding table.

Forced degradation studies have proved a useful tool for analyzing the stability of pharmaceutical products under different environmental conditions. Stability data and forced degradation studies are crucial for the necessary regulatory approvals, and ICH guidelines recommend them. The chromatogram of the control sample used as the reference in the degradation study is given in Figure S-1.

For acidic hydrolysis, a 50 mg test sample was weighed into a 50 mL volumetric flask, 2–3 mL of diluent was added to dissolve it, followed by 1 mL of 5.0 M hydrochloric acid solution, and the solution was heated at 60 °C for 120 min. It was then cooled to room temperature and neutralized with 5.0 M sodium hydroxide solution with the help of pH paper. The chromatogram obtained from the sample after the acidic hydrolysis of ZY12201 showed satisfactory separation of the compound and the degradation products.

For alkaline hydrolysis, a 50 mg test sample was weighed into a 50 mL volumetric flask, 2–3 mL of diluent was added to dissolve it, followed by 1 mL of 5.0 M sodium hydroxide solution, and the solution was heated at 60 °C for 120 min. It was then cooled to room temperature and neutralized with 5.0 M hydrochloric acid solution with the help of pH paper. Satisfactory separation was also achieved in the chromatogram of the alkali-treated ZY12201 sample.

Hydrogen peroxide was used for the oxidative stress degradation; electron transfer serves as the basic mechanism of oxidative forced degradation of a drug substance. For photolytic degradation, light at 300–800 nm wavelengths is commonly used, and free radicals can be involved; ICH guidelines recommend the condition protocols for photostability studies. To study thermal decomposition, or thermolysis caused by heat, the sample was exposed at 105 °C for 24 h.
The forced degradation studies concluded that no significant changes occurred during the degradation study except under oxidative degradation. A summary of the forced degradation study is shown in the corresponding table.

Peak purity is a comparison of the reference standard to the API in the sample stressed by forced degradation, to confirm that no impurity co-elutes. The peak purity angle is used as the acceptance parameter in HPLC: for a peak to be accepted as pure, the purity angle should be less than the purity threshold. Peak purity results are summarized in the corresponding table.

Identification of degradation-related impurities of ZY12201 was performed on the oxidation-treated sample by LC–MS. A total of eight impurities were formed during oxidative degradation; among them, one impurity was confirmed and identified through mass spectral analysis. The masses and retention times of the degraded impurities are reported in the corresponding table. The positive-ion mass spectrum of Impurity-1 showed a peak at m/z 575 ([M+H]+), suggesting the molecular formula C31H31FN4O4S, which is consistent with the theoretical molecular weight of Impurity-1.

Below is the link to the electronic supplementary material."} {"text": "The research results show that greater repeatability of the technique (lower RE) has a significant impact on the length of the shot put. The purpose of the study was to analyze the variability of performance and kinematics of different shot put techniques in elite athletes and sub-elite athletes. Each athlete performed 6 trials. Only 34 trials in group A and 27 trials in group B were analyzed. Two high-speed digital cameras were positioned 8 m from the center of the shot put throwing circle. All throws performed during international and national competitions were analyzed using the Ariel Performance Analysis System. To estimate the variability of kinematic parameters, the value of the relative error was calculated.
The average relative error generally showed low variability for the analyzed indicators. In only 4 analyzed cases was variability high (>20%). Statistical analysis was used to find indicators which have a significant influence on the distance of the throw (according to the sports level and technique). Significant inverse correlations (at α = 0.05) were found.

Several throwing techniques of the shot put have been trialed over the years. The gliding technique was introduced in the 1950s and was prevalent for many years. The gliding technique involves a linear push out of the back to the front of the circle while facing away from the sector. Then, shot putters rotate their body toward the throwing section, putting the shot for maximum distance. The rotational style appeared in the 1970s and is the most popular nowadays.

The use of 3D video motion analysis helped athletes and their coaches to analyze the technique they use and to find ways to increase efficiency and improve performance.

Kinematics of motion is not the same, even during repeated execution of the same movement. Natural variations in the position, velocity and acceleration of the body limbs take place even during apparently identical movements and influence the variability of the throw kinematics. Shot putters in group A who used the glide technique were slightly taller (3 cm on average), had greater weight and BMI (7.2 kg and 0.6 kg/m2) and were a little older (1.3 years) than shot putters who used the rotational technique in the same group. A similar dependence (with slightly greater differences between the two techniques) was observed in athletes in group B: athletes who used the glide technique were 3.7 years older, 3 cm taller, 12.7 kg heavier and had a BMI about 2.4 kg/m2 greater than athletes who used the rotational technique.
There were no significant differences (at the α = 0.05 level) for the height of the athletes' center of gravity depending on the group and technique (F = 0.7484, p = 0.416).

Fourteen elite and sub-elite shot putters (right-handed) took part in this analysis. Eight elite participants, including 5 athletes using the rotational technique and 3 athletes using the glide technique, performed trials during an international competition (group A), whereas six sub-elite participants performed trials during national championships (group B). In the year of the analyzed competition, all athletes from group A belonged to the group of thirty world-leading shot putters (places from 18th to 37th), with the best results in the range between 20.38 m and 20.64 m. The national ranking of the 100 best results of the year in the shot put included performances of the 6 competitors from group B. The best results of the athletes taking part in the analyzed national competition (group B) ranged from 20.32 m to 18.77 m. In group A, 5 athletes who used the rotational technique performed 21 measured trials, while 3 athletes who used the glide technique performed 13 measured trials. In group B, 3 athletes who used the rotational technique performed 13 measured trials, while 3 athletes who used the glide technique performed 14 measured trials. All participants gave their informed consent and were informed of the benefits and risks of the investigation prior to signing an institutionally approved informed consent document to take part in the study.
The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Józef Piłsudski University of Physical Education in Warsaw (SKE 01–02/2013).

Two high-speed digital cameras were positioned perpendicular to each other, 8 m from the center of the shot put throwing circle. All throws performed during international and national competitions were recorded and then analyzed using the Ariel Performance Analysis System (APAS). Eighteen points were digitized. Sixteen points were placed on the athlete's body, including the big toe, ankle, knee, hip, wrist, elbow, and shoulder for the left and right side of the body, as well as the right hand, the chin and the top of the head. The 17th point was placed in the center of the shot, and the last (18th) was placed on the right-side edge of the toe-board of the throwing circle. The analyzed area of the throwing circle was calibrated with a 1.5 m × 2 m × 1.5 m reference scaling frame. The calibration was performed before and after the competition session. Synchronized data sequences from all camera views were utilized.

The kinematic parameters of the athlete and the shot during release (RLS, the last contact of the athlete) are presented in the corresponding table. To estimate the variability of kinematic parameters, the value of the relative error (RE) was calculated. Two-factor analysis of variance (ANOVA) was used to analyze significant differences depending on the two factors: the GROUP (sports level) and/or the TECHNIQUE. Correlations between selected indicators were determined using Pearson's correlation coefficient. Statistica (http://www.statsoft.com (accessed on 1 December 2019)) was used for statistical analysis. The significance level α = 0.05 was used to assess the significance of differences and correlations. All statistical analyses were based on the RE of selected indicators.
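The relative error computation can be illustrated in a few lines. The paper does not spell out its exact formula, so this sketch assumes the common coefficient-of-variation definition (sample SD relative to the mean, in percent); the trial values are hypothetical, not the study's data.

```python
import numpy as np

def relative_error(values):
    # RE of repeated trials; assumed here to be the coefficient of
    # variation (sample SD / |mean| * 100), a common definition --
    # the paper does not state its formula explicitly.
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / abs(values.mean()) * 100.0

# hypothetical release-velocity measurements from six trials (m/s)
trials = [13.2, 13.4, 13.1, 13.3, 13.2, 13.4]
re = relative_error(trials)
# well below the >20% level the text treats as high variability
```

Under this assumed definition, the six hypothetical trials give an RE under 1%, i.e. a highly repeatable indicator.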
The use of RE made it possible to perform a statistical analysis of all measured shot-put trials. The values of the distance of the throws were used in Pearson's correlation analysis with the RE of the selected indicators to find the most important correlations between them.

The greatest differences were found for the S-H as the effect of GROUP and TECHNIQUE. Other indicators of the athlete and the shot had low RE values. Statistically significant influence of the GROUP factor was noted for mean RE of δ_r (F(1,57) = 35.432, p ≤ 0.0001), λ_r (F(1,57) = 14.890, p ≤ 0.001), φ_r (F(1,57) = 16.306, p = 0.002), S-H (F(1,57) = 9.545, p = 0.003), γ (F(1,57) = 9.552, p = 0.003), VxCG (F(1,57) = 7.547, p = 0.008), VxS, VyS and VrS (F(1,57) = 10.657, p = 0.002; F(1,57) = 16.897, p ≤ 0.001; F(1,57) = 28.846, p ≤ 0.001, respectively) and also for h (F(1,57) = 13.0, p ≤ 0.001). Statistically significant influence of the TECHNIQUE was observed for mean RE of 3 indicators, φ_r (F(1,57) = 5.461, p = 0.002), S-H (F(1,57) = 10.462, p = 0.002) and γ (F(1,57) = 6.950, p = 0.011), and for RE of VxS (F(1,57) = 6.052, p = 0.017). Both the GROUP and the TECHNIQUE factors significantly influenced mean RE of φ_r (F(1,57) = 18.377, p ≤ 0.001), δ_r (F(1,57) = 4.574, p = 0.037), VyS (F(1,57) = 4.766, p = 0.332), γ (F(1,57) = 4.887, p = 0.031), and D (F(1,57) = 14.417, p ≤ 0.001).

The highest level of RE (about 20%) was found for the horizontal velocity of the CG in both groups and techniques, and for the vertical velocity of the CG except for the rotational technique in group A. Depending on the GROUP factor only, statistically significant differences were noted for RE levels of R_S (F(1,57) = 8.107, p = 0.006) and z_S (F(1,57) = 6.042, p = 0.017).
In the case of the technique (without the division into sports levels), significant differences were found for RE of x_CG, y_CG, z_CG and R_S (F(1,57) = 5.795, p = 0.019; F(1,57) = 11.670, p = 0.001; F(1,57) = 17.312, p ≤ 0.001; F(1,57) = 9.464, p = 0.003, respectively) and also for RE of z_S (F(1,57) = 16.414, p ≤ 0.001). Both the GROUP and the TECHNIQUE factors significantly influenced the RE level of x_CG (F(1,57) = 4.982, p = 0.030), y_CG (F(1,57) = 4.982, p = 0.030), and R_S (F(1,57) = 8.205, p = 0.006).

All the statistically significant correlations between the distance of the throw and the RE of selected release indicators were inverse correlations, which meant that the lower the variability of the indicator, the longer the distance.

The most important observation of the study was the identification of variability in performance and kinematics for elite and sub-elite shot putters. This study fills gaps that have not been previously addressed in research on the shot put. Release indicators have been frequently examined in competitions such as the shot put. In general, in our research, low RE characterized RLS indicators and the indicators of the path of the athletes' center of gravity and the center of the shot. Moreover, lower RE was found in the trials of higher-level athletes (group A). The greatest differences between groups A and B were found in the rotational technique for S-H, Vy_CG and Vx_CG. The greatest differences in the glide technique were also observed for S-H, Vy_CG and Vy_S. It was only in the case of Vx_CG that the general average RE in group A was found to be greater than in group B.
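The inverse distance–RE relationship reported here can be checked with Pearson's correlation coefficient, the statistic used in the paper's analysis. The values below are hypothetical illustrative numbers, not the study's data.

```python
import numpy as np

# hypothetical throw distances (m) and RE (%) of one release indicator
distance = np.array([18.8, 19.2, 19.6, 20.0, 20.3, 20.6])
re = np.array([6.1, 5.4, 4.9, 4.1, 3.6, 3.0])

# Pearson's r; an inverse relationship (lower RE, longer throw) gives r < 0
r = np.corrcoef(distance, re)[0, 1]
```

For these illustrative values r is strongly negative, mirroring the finding that lower indicator variability accompanies longer throws.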
The average RE for the 8 finalists from our study (group A, international level) can be compared with the group in the study of Shaa. A significant (p < 0.05) linear dependence was found between the longest trials and the relative errors of release indicators for the longest distance of each athlete. It was observed that while the distance was increasing, the value of RE was decreasing linearly for 4 release indicators in the glide technique in group A, for 2 indicators in the glide technique in group B, and for 3 indicators in the rotational technique in group B. The lowest average values of the relative error for the longest trials were noted in 16 cases in the glide technique in group A, in 6 cases in the glide technique in group B, in 5 cases in the rotational technique in group A and in 7 cases in the rotational technique in group B. The longest trial in the rotational technique, without the division into sports level and in relation to the other distances, had the lowest average value of the relative error for 3 indicators. In the glide technique, without the division into sports level, the longest trial had the lowest average value of the relative error for 7 cases. The results of this study expanded the findings of previous optimization modeling attempts for elite men's shot put.

Taking into account all the analyzed athletes, without the division into sports levels and techniques used, significant indicators had the lowest mean values of RE, especially in the elite group. A significant influence of the resultant shot velocity on the distance was confirmed. Also, the right knee joint angle and the left hip joint angle had a significant effect on the distance of the throw. The greatest differences between the elite and sub-elite groups were found for S-H and Vy_CG. Further research on the topic is necessary to better understand all of the relationships, both between these variables and with performance.
The observations of this study provide useful information for the technical development of male throwers and may provide an insight into new training methods, which should mainly be focused on the power development associated with the mode of the shot put technique. The research results show that greater repeatability of the technique (lower RE) has a significant impact on the distance of the shot put: as RE decreased, the distance increased. The relationship between sports level and greater repeatability confirms the importance of automatism and stabilization of the technique in sport."} {"text": "Cancer can cause physical changes and affect satisfaction with a person's physical appearance, which in turn can affect overall quality of life. Previous studies have primarily focused on women with breast cancer, and little is known about body image in patients with other cancers, especially men. The present study compares satisfaction with body image of patients with different types of cancer with the general population and across sexes, and identifies risk factors for diminished body image. Additionally, patients who were diagnosed within the last year and those living with cancer for longer are compared. N = 531 cancer patients answered the German Self-Image Scale to assess body image. One-sample t-tests were utilized to compare the body image of cancer patients with the general population. Stepwise regression analyses were used to identify factors associated with body image, and ANOVAs with post-hoc tests as well as t-tests were used to examine group differences. In this cross-sectional study, cancer patients showed diminished body image compared to the general population.
For men, higher relationship satisfaction and lower cancer-specific distress were associated with more positive body self-acceptance (SA), whereas younger age, higher relationship satisfaction, and lower cancer-specific distress resulted in better perceived partner-acceptance of one's body (PA). In women, higher education, lower anxiety and lower cancer-specific distress were associated with more positive SA. Female cancer patients with breast/gynecological cancer reported better SA than those with visceral cancers. Higher relationship satisfaction and lower cancer-specific distress were found to be associated with more satisfactory PA in females. Time since diagnosis did not affect body image in this study. Results indicate that cancer patients, regardless of sex, tend to have decreased body image satisfaction. Future research directions include the examination of additional cancer entities, deeper research in men, and the role of time since diagnosis. With approximately 19.3 million new cancer diagnoses and an estimated 9.9 million deaths from cancer worldwide in 2020, cancer is currently one of the leading global health concerns. Most of the body image-related research in the field of psycho-oncology focuses on the consequences of irreversible body alterations through surgical treatment (e.g. scars and amputations). Thus, researchers have assessed breast cancer surgery and its side effects. Most studies that investigate body image in cancer patients and survivors are exclusively conducted with female participants and focus nearly entirely on the most common cancer entities like breast cancer. Another area that has hardly been explored so far in psycho-oncological body image research is intimate relationships.
Nonetheless, close relationships have been shown to be important sources of coping with cancer. The first goal was to add to the currently still ambivalent literature on body image comparisons between cancer patients and the general population. Thus, the German version of the Self-Image Scale (SIS-D) was utilized as a measure for body image satisfaction, as it provides normative scale scores for the German general population (Cronbach's α = .94). Cancer-specific concerns and distress were assessed with the German version of the Questionnaire on Stress in Cancer Patients (FBK-R23). Items are rated from "the problem does not apply to me" to 5 ("the problem applies to me and is a very big problem"). The FBK-R23's α for the total score was .92 in the present study, which is equally good as in the original study. Total scores range from 0 to 115, and the cut-off for clinical distress related to cancer is 34. Fear of progression items are rated from "never" to 5 ("very often"). All item scores are added up to a sum score, with higher values indicating higher levels of fear of progression (range: 5–60). A cut-off of 34 indicates a dysfunctional fear of progression. The PA-F-KF had an α of .91 in the present study and an α of .87 in the validation study. The present study adds to the existing literature on body image in cancer patients and suggests that satisfaction with one's body in cancer patients is generally impaired in a statistically significant way. This indicates that consideration of satisfaction with physical appearance should be an important part of psycho-oncological care regardless of patients' sex or type of cancer. Possible body image difficulties then need to be addressed by psycho-oncological professionals. Questionnaires such as the SIS could ideally be used to identify body image problems. However, if this is not possible, body image problems should also be asked about in the medical or psycho-oncological interview.
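The FBK-R23 sum score described above can be sketched as follows. The 23-item count and the 0–5 per-item range are inferred from the stated 0–115 total range (they are assumptions, not confirmed by this excerpt), and the responses are hypothetical:

```python
def fbk_r23_score(item_scores):
    """Sum score for the FBK-R23 (assumed: 23 items, each rated 0-5,
    giving the stated total range 0-115); a total >= 34 indicates
    clinical cancer-related distress per the cut-off quoted above."""
    assert len(item_scores) == 23
    assert all(0 <= s <= 5 for s in item_scores)
    return sum(item_scores)

# Hypothetical patient answering '2' on every item
total = fbk_r23_score([2] * 23)
print(total, total >= 34)  # 46 True -> above the clinical cut-off
```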
As this often also impacts sexual satisfaction, this is an important area to consider in clinical care. Since sexuality is often not addressed, or addressed inadequately, body image issues might also not be raised. It is likely that many patients may have inhibitions about addressing this topic on their own. Therefore, the recommendation is that the initiative should come from the professional staff. Besides individually tailored interventions on the clinician-patient level, structured and evaluated interventions for individuals as well as groups exist, especially for breast cancer survivors. The results of the present study suggest that cancer and cancer treatment can potentially lead to a decreased sense of SA and PA not only in women with breast cancer but also in cancer patients across the sexes and with a variety of different cancer entities. Future research is necessary to further examine group differences and predictors of satisfaction with one's body with longitudinal studies in patients with different kinds of cancer, and to investigate the effect of time since diagnosis more deeply. Additionally, research in larger samples that allow differentiation between specific cancer diagnoses, and in populations with other chronic medical conditions like transplantation, heart disease, diabetes, and multiple sclerosis, should be conducted in the future. S1 Dataset (SAV)."} {"text": "Background: In renal transplantation, chronic transplant dysfunction (CTD) is associated with increased PCSK9 and dyslipidemia. PCSK9 is an enzyme that increases plasma cholesterol levels by downregulating LDLR expression. We recently showed increased PCSK9–syndecan-1 interaction in conditions of proteinuria and renal function loss. Treatment with heparin(oids) might be a therapeutic option to improve dyslipidemia and CTD.
We investigated the effects of (non-)anticoagulant heparin(oids) on serum lipids, syndecan-1 and PCSK9 levels, and CTD development. Methods: Kidney allotransplantation was performed from female Dark Agouti donors to male Wistar Furth recipients. Transplanted rats received daily subcutaneous injections of saline, unfractionated heparin, RO-heparin or NAc-heparin (2 mg heparin(oid)/kg BW) until sacrifice after 9 weeks of treatment. Results: Saline-treated recipients developed hypertension, proteinuria, and loss of creatinine clearance, along with glomerulosclerosis and arterial neo-intima formation. Saline-treated recipients showed a significant increase in plasma triglycerides (p < 0.05), a borderline increase in non-HDLc/HDLc (p = 0.051), and a ~10-fold increase in serum syndecan-1 (p < 0.05), without a significant increase in serum PCSK9 at 8 weeks compared to baseline. Heparin and non-anticoagulant RO-heparin administration in transplanted rats completely prevented an increase in triglycerides compared to saline-treated recipients at 8 weeks (both p < 0.05). Heparin(oid) treatment did not influence serum total cholesterol (TC), plasma syndecan-1 and PCSK9 levels, creatinine clearance, proteinuria, glomerulosclerosis, or arterial neo-intima formation 8 weeks after transplantation. Combining all groups, increased syndecan-1 shedding was associated with TC and glomerulosclerosis, whereas the non-HDLc/HDLc ratio was associated with the neo-intimal score in the transplanted kidneys. Conclusion: Prevention of triglyceridemia by (non-)anticoagulant heparin(oids) neither influenced PCSK9/syndecan-1 nor precluded CTD, which however did associate with the shedding of the lipoprotein clearance receptor syndecan-1 and the unfavorable cholesterol profile. Chronic transplant dysfunction (CTD) is a functional decline of the transplanted kidney characterized by a progressive increase in plasma creatinine levels, proteinuria, and hypertension.
The pathogenesis of CTD is complex and multifactorial, and involves immune and non-immune factors. Human leukocyte antigen (HLA) mismatches between donor and recipient give rise to cytotoxic T cells, NK cells, and donor-specific antibodies, causing immune-related injury to the graft, whereas hypertension, ischemia, infections, dyslipidemia, diabetes, and drug toxicity cause non-immune-related graft injury. Heparin(oids) can bind PCSK9 via their negatively charged sulfated sugar groups and therefore bear the potential to be developed as PCSK9 inhibitors for the treatment of dyslipidemia. Dyslipidemia is characterized by increased serum triglycerides (TGs), total cholesterol (TC), low-density lipoprotein cholesterol (LDLc), very-low-density lipoprotein cholesterol (VLDLc), and normal or reduced high-density lipoprotein cholesterol (HDLc). Interestingly, treatment with heparin(oids) has been found to improve serum lipid levels (TG and TC) in patients on renal replacement therapy. Therefore, in this study, we investigated 1) the effect of heparin and non-anticoagulant heparinoids on plasma values of lipids, syndecan-1, and PCSK9; 2) the efficacy of heparin and non-anticoagulant heparinoids to prevent the development of glomerulosclerosis and arterial neo-intima; and 3) the association of plasma lipid levels with plasma syndecan-1, PCSK9, and the degree of glomerulosclerosis and arterial neo-intima formation in a rat CTD model.
In this study, 38 10-week-old female inbred Dark Agouti (DA) rats (donors) and 38 10-week-old male inbred Wistar Furth (WF) rats (recipients) were used. DA and WF rats were obtained from Harlan Nederland and Charles River Laboratories, respectively. The local animal Ethics Committee of the University of Groningen approved all the procedures used in the study, and the principles of Laboratory Animal Care were followed. Kidney allotransplantation was performed from female DA donors to male WF recipients according to standard procedures as described previously. Interventions were performed by daily subcutaneous injections of 2 mg/kg BW/day of regular unfractionated heparin (n = 9) or one of two non-anticoagulant heparinoids derived from regular heparin, N-desulfated, N-reacetylated heparin and periodate-oxidized, borohydride-reduced heparin, as reported before; control recipients (n = 10) received daily saline injections. Identification and quantification of glomerulosclerosis and neo-intima formation in kidneys were determined with periodic acid-Schiff (PAS) and Verhoeff's staining, respectively, and blindly scored as mentioned previously. The sections were semi-quantitatively scored for focal glomerulosclerosis (FGS) in a blinded fashion by determining the level of mesangial expansion and focal adhesion in each quadrant of a glomerulus, expressed on a scale from 0 to 4: an unaffected glomerulus was scored 0; one affected quadrant was scored 1, two affected quadrants 2, three affected quadrants 3, and four affected quadrants 4. In total, 50 glomeruli per kidney were analyzed, and the total FGS score was calculated by multiplying each score by the percentage of glomeruli with that FGS score. The sum of these products gives the total FGS score, with a maximum of 400. Neo-intima formation was scored at 200× magnification by determining the percentage of luminal occlusion. All identifiable elastin-positive intrarenal vessels were evaluated in a blinded fashion.
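The total FGS score computation described above (score × percentage of glomeruli with that score, summed, maximum 400) can be sketched in Python with hypothetical per-glomerulus scores:

```python
def total_fgs_score(glomeruli_scores):
    """Total focal glomerulosclerosis score: each glomerulus is scored
    0-4 (number of affected quadrants); each score value is multiplied
    by the percentage of glomeruli carrying that score, and the
    products are summed (maximum 4 * 100% = 400)."""
    n = len(glomeruli_scores)
    total = 0.0
    for score in range(5):
        pct = 100.0 * glomeruli_scores.count(score) / n
        total += score * pct
    return total

# Hypothetical kidney with 50 glomeruli: 30 unaffected, 10 with one
# affected quadrant, 10 with two affected quadrants
scores = [0] * 30 + [1] * 10 + [2] * 10
print(total_fgs_score(scores))  # 0*60 + 1*20 + 2*20 = 60.0
```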
The lumen of the vessels was defined by the mean length of two straight lines drawn from the internal elastic lamina (IEL) and passing through the center of the vessel. The areas enclosed by the lumen, the internal elastic lamina, and the external lamina were measured. The area between the lumen and the internal elastic lamina was defined as the neo-intimal area. The percentage of neo-intimal area relative to the area enclosed by the IEL is the percentage of luminal occlusion. Analyses were performed using GraphPad version 8.0.1 (GraphPad Software). The one-way ANOVA test and the Kruskal-Wallis test were used to compare differences between the groups. When significant differences were observed between the means, Dunnett's or Dunn's multiple comparison test, corrected for multiple comparisons, was used as a post-test to identify which specific means differed from the others. Data are given as means ± SEM. The non-parametric Spearman correlation was used to analyze the association between parameters. For all experiments, a p value of <0.05 was considered statistically significant. This study included 38 male WF rats that were transplanted with a female DA kidney as reported before: saline (n = 8), heparin (n = 8), N-acetyl heparin (n = 8), and RO-heparin (n = 8). In the plasma of the rats taken at 8 weeks after renal transplantation, 4 h after heparin(oid) injection, we measured the activated partial thromboplastin time. In the saline-treated transplanted rats, this was 75 s. In the regular heparin group, this time was 173 s, 73 s in the RO-heparin group, and 69 s in the N-acetyl heparin group (not shown). These data show that both chemically modified heparin preparations were indeed non-anticoagulant, resulting in a similar activated partial thromboplastin time compared with saline-treated rats.
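The luminal-occlusion computation described above is simple area arithmetic; a minimal sketch with hypothetical area values (not measurements from the study):

```python
def luminal_occlusion_pct(lumen_area, iel_area):
    """Percentage of luminal occlusion as described above: the
    neo-intimal area (area between the lumen and the internal elastic
    lamina) divided by the area enclosed by the IEL, times 100."""
    neo_intima_area = iel_area - lumen_area
    return 100.0 * neo_intima_area / iel_area

# Hypothetical vessel: IEL encloses 0.05 mm^2, remaining lumen 0.03 mm^2
print(round(luminal_occlusion_pct(0.03, 0.05), 1))  # 40.0
```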
Treatment with heparin and non-anticoagulant heparins had no effect on body weight or food intake. Saline-treated groups developed CTD-related renal failure, as evidenced by reduced creatinine clearance and a rise in both serum creatinine and urinary protein excretion. To investigate the effects of heparin and non-anticoagulant heparinoids on serum lipid levels, serum TG and TC levels were measured. Serum TG levels were significantly increased in the saline- and NAc-heparin-treated groups over time, whereas heparin and RO-heparin treatment prevented the increase in serum TG levels over time. No significant differences in serum TC levels were observed in the saline-treated group over time, although values tended to increase over time. To get more insight into cholesterol profiles, we further profiled serum lipoprotein levels. VLDLc levels were increased in the saline- (p < 0.05) and NAc-heparin- (p < 0.01) treated groups, whereas heparin and RO-heparin treatment prevented the increase in VLDLc levels over time (not shown). LDLc and HDLc levels were increased in all groups over time. Altogether, these data indicate that heparin and RO-heparin treatment prevented an increase in serum TG and VLDLc levels without affecting LDLc, HDLc, and the non-HDLc/HDLc ratio in the CTD model. Serum PCSK9 levels in the RO-heparin-treated group and the NAc-heparin-treated group were significantly lower than those in the saline-treated group at baseline. We previously reported syndecan-1 shedding in the same transplant model as used here. Serum syndecan-1 correlated with TC, LDLc, and HDLc (r = 0.5, p = 0.03; r = 0.49, p = 0.03; and r = 0.49, p = 0.03, respectively), but not with TG, VLDLc, and renal function parameters (not shown). Serum syndecan-1 also correlated positively with glomerulosclerosis (r = 0.53, p = 0.021), and a further correlation was observed (r = 0.64, p < 0.001). All treated groups showed variable lumen occlusion due to neo-intima formation.
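The Spearman correlations reported above are rank-based; a minimal self-contained sketch of the statistic, using illustrative values only (not the study data):

```python
def spearman_rho(x, y):
    """Spearman rank correlation (no ties assumed):
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between the ranks of x_i and y_i."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical paired per-rat measurements: serum syndecan-1 and TC
syndecan1 = [12.1, 30.5, 18.2, 45.0, 22.3, 38.7, 15.4, 41.2]
total_chol = [1.9, 2.8, 2.4, 3.5, 2.1, 3.1, 2.0, 3.3]
print(round(spearman_rho(syndecan1, total_chol), 3))  # 0.976
```

Because only ranks enter the formula, the statistic is robust to the skewed distributions typical of shed-receptor serum levels.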
No significant differences were observed in neo-intima formation in any of the treatment groups compared to the saline-treated group. In a rat model of CTD, we show that heparin and the non-anticoagulant heparinoid RO-heparin could prevent the increase in serum TG and the TG-rich VLDL particles. None of the heparin(oids), however, could prevent a progressive increase in TC, LDLc, HDLc, serum PCSK9, and serum syndecan-1 levels. Similarly, transplant glomerulosclerosis and arterial neo-intima formation could not be attenuated by heparin(oid) treatment. Arterial neo-intimal scores were positively associated with an unfavorable cholesterol profile (non-HDLc/HDLc ratio). Transplant glomerulosclerosis was positively associated with shedding of the lipoprotein clearance receptor syndecan-1. These data suggest that the development and progression of CTD are independent of changes in TG and VLDLc levels, and rather related to syndecan-1 shedding and the cholesterol profile. Although we did not study lipoprotein lipase (LPL) activity in our model, the effects of heparin in reducing plasma TG and VLDLc levels might be attributed to the release/activation of lipoprotein lipase and hepatic lipase, as reported before. We observed a >10-fold increase in serum syndecan-1 levels in the saline- as well as heparin/non-anticoagulant heparin-treated groups over time. Furthermore, we observed no effects of heparin and non-anticoagulant heparinoids on serum syndecan-1 levels despite strong reductions in TG and VLDLc levels. It is worthwhile to mention that serum syndecan-1 levels at 8 weeks are strongly positively associated with TC, LDLc, and HDLc levels, but also with glomerulosclerosis. These data indicate that increased syndecan-1 shedding leading to reduced lipoprotein clearance might be a cause underlying dyslipidemia and glomerulosclerosis in our model.
Tissue remodeling and invasion of the neo-intima into the vascular lumen due to inflammatory processes inside the walls of arteries are important hallmarks of CTD. The interaction of lipids such as oxidized LDL with immune cells is thought to be a driving force behind chronic inflammation of the arterial wall during atherogenesis. Accumulation of lipid particles in vessel walls and the subsequent immunological response cause plaque formation, which is aggravated in dyslipidemic situations. Apart from its role in transplant vasculopathy, dyslipidemia also induces glomerulosclerosis. Various studies have shown the accumulation of lipoproteins in the glomerular mesangium, which accelerates matrix production in several rat models of renal diseases. In conclusion, the efficacy of heparin and non-anticoagulant heparins in lipid reduction is heterogeneous, controversial, and context dependent. Furthermore, completely relying on heparin and non-anticoagulant heparins to prevent CTD and CTD-related tissue remodeling might not be warranted by our results, at least not in the transplantation setting. Future investigations in non-immunological proteinuric animal models and human nephrotic diseases might be better suited to target the PCSK9/syndecan-1/cholesterol axis."} {"text": "Monoclonal antibody EBCA-1 is used in the sandwich immune assay for the detection of circulating Candida mannan in blood sera samples for the diagnosis of invasive candidiasis. To reinvestigate the carbohydrate specificity of EBCA-1, a panel of biotinylated oligosaccharides structurally related to distinct fragments of Candida mannan was loaded onto a streptavidin-coated plate to form a glycoarray. Its use demonstrated that EBCA-1 recognizes the trisaccharide β-Man-(1→2)-α-Man-(1→2)-α-Man and not a homo-α-(1→2)-linked pentamannoside, as was reported previously. Candida species are the most common agents causing nosocomial fungal infections, and the fourth most common cause of nosocomial bloodstream infections (BSI) overall. Invasive candidiasis (IC) affects about 750,000 people worldwide, with a case fatality rate of ~30–55%. The wells of 96-well streptavidin-coated plates were coated with biotin-tagged oligosaccharides 1–18. Elongation of the β-Man-(1→2)-α-Man-(1→2)-α-Man trisaccharide sequence from the "reducing" end by one (1→2)-linked α-Man unit (12→13) or its dimer (12→14), or elongation from the "non-reducing" end by one β-Man-(1→2) unit (12→15), did not influence recognition by mAb EBCA-1. On the contrary, elongation of the β-Man-(1→2)-α-Man-(1→2)-α-Man trisaccharide sequence from the "non-reducing" end by the dimer β-Man-(1→2)-β-Man-(1→2)- (12→16) remarkably decreased binding to mAb EBCA-1, while the attachment of one additional β-Man-(1→2)- unit (16→17) practically blocked binding to the mAb. Knowledge of the fine carbohydrate specificity of a monoclonal antibody used in a diagnostic kit is important for understanding the molecular basis of observed possible false-positive and false-negative results. Previously, we reported the reinvestigation of the carbohydrate specificity of the EB-A2 monoclonal antibody used for the immune detection of Aspergillus fumigatus galactomannan. The reinvestigated trisaccharide epitope is abundantly present in many yeast mannans; however, there are Candida strains lacking such a fragment in the structure of their mannans. To the best of our knowledge, there is no diagnostic antibody with an undoubtedly proven ability to recognize homo-α-(1→2)-linked oligomannoside chains. The raising of such mAbs is a rather complex task due to the tolerance of the mammalian immune system to α-mannosides, as opposed to the higher immunogenicity of β-mannosides. However, antibodies against α-(1→2)-linked oligomannosides containing branch points of Candida mannan would be very promising for high-performance diagnostics of invasive candidiasis. The generation of mAbs capable of recognizing such fragments still faces the challenge of testing their applicability for clinical diagnostic needs."} {"text": "Asthmatics do not appear to have increased susceptibility to COVID-19. Uncontrolled severe asthma may be associated with worsened COVID-19 outcomes, especially in asthmatics managed with oral corticosteroids. Risk mitigation measures such as hand hygiene, social distancing and wearing of face masks must be observed at all times. Asthma should be managed as outlined in local and international guidelines. Ensure an adequate supply of medication, and inhaled corticosteroids should not be withdrawn. Chronic obstructive pulmonary disease (COPD) is associated with severe COVID-19 disease and poor outcomes.
Maintenance of background medication is important to avoid exacerbations of COPD. Vaccination against influenza is strongly advised for all patients with asthma and COPD. Vaccination against pneumococcal infection is advisable for patients with COPD. Patients with obstructive airway disease on oral corticosteroids and/or with impaired lung function should take stringent safety precautions. This statement will be updated when more data become available. Asthma and COPD occur commonly in South Africa. SARS-CoV-2 is a novel coronavirus, which can result in COVID-19-associated severe respiratory infection with respiratory failure and the need for mechanical ventilation. The South African Thoracic Society has prepared a guidance statement to assist clinicians and patients with asthma and COPD during the current epidemic. Thus, while asthma was not documented as a risk factor/underlying comorbidity for COVID-19 infection in earlier studies, some more recent studies, particularly from the USA, have shown that asthma may be a significant comorbid condition.[3] Furthermore, asthma was also found to be relatively common among intensive care unit cases with COVID-19 and in patients who died.
Therefore, while mild to moderate asthma that is well controlled may not be a risk factor for COVID-19, people with uncontrolled and severe asthma, those receiving repeated or regular doses of oral corticosteroids (OCS) and those with underlying structural lung damage may be at risk of more severe COVID-19. Asthmatics do not appear to have increased susceptibility to COVID-19.[4] Asthma with recent use of OCS was associated with higher mortality compared with non-asthmatics when fully adjusted for other variables (1.13 (1.01 - 1.26)).[4] By contrast, the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC) study reported that 14% of hospitalised patients had asthma, with no increased risk of death.[5] Data from Spain[3] suggest that few patients with asthma were infected with SARS-CoV-2, and that this was not associated with asthma exacerbation. Recently published data from the UK primary care and COVID-19 system notification databases have indicated that asthma (without corticosteroid use) was not associated with higher mortality when adjusting for confounders in 10 926 COVID-19-associated deaths in the UK.[7] Although data from the previous severe acute respiratory syndrome and Middle East respiratory syndrome outbreaks did not demonstrate any increased risk of infection or severe disease in asthmatics, this may be different for COVID-19, and further data are awaited.[12] Data from previous influenza outbreaks and the 2009 H1N1 pandemic suggest that asthma was associated with hospitalisation,[13] but, interestingly, was protective for mortality.[14] Regarding other asthma phenotypes and patients from other settings, there is little data available, and thus recommendations cannot be made. Few asthma patients are recorded in published cohorts of SARS-CoV-2 infections in China, and asthma has not been linked to poor outcomes.[15] Inhaled corticosteroids or OCS should not be withheld if indicated to treat exacerbations of asthma. Based on these limited data, severe asthmatics seem to be at higher
risk of death than non-asthmatics. There is no evidence to suggest that inhaled steroids should be stopped, as they are protective against asthma exacerbations. Current evidence summarised in two meta-analyses indicates that chronic obstructive pulmonary disease (COPD) patients are at a significantly higher risk of severe COVID-19 and adverse outcomes.[20] Lippi et al.[14] reported on 1 592 patients and demonstrated a significant association between COPD and poor outcomes in COVID-19 (odds ratio 5.69 (2.49 - 13.00)). The second meta-analysis included 15 studies with 2 473 patients. The patients with COPD had a relative risk of 1.88 (1.4 - 2.4) for severe COVID-19 compared with non-COPD patients.[16] These data provide sufficient evidence establishing an association between COPD and risk of severe COVID-19. Tobacco smoking has also been associated with severe COVID-19 and adverse outcomes.[17] It is unclear, however, if smoking is an additive risk factor in addition to COPD. It is worth noting that data are discordant, and in both the UK[17] and China,[18] COPD did appear to confer increased risk of death among hospitalised and confirmed cases, respectively.[19] Inhaled steroids are a risk factor for bacterial pneumonia and should be avoided unless indicated[19] and where dual bronchodilators are not accessible. Influenza and pneumococcal vaccination are recommended, as well as a regular supply of background medication.
In September 2020, after the first COVID-19 wave in Sweden, the database was linked with the National Patient Register (NPR), the Swedish Intensive Care Register and the Swedish Cause of Death Register, which all provide data about COVID-19 based on International Classification of Diseases (ICD-10) codes. Severe COVID-19 was defined as hospitalization and/or intensive care or death due to COVID-19. Among patients in SNAR, 0.5% with asthma and 1.2% with COPD were identified with severe COVID-19. Among patients <18 years with asthma, only 0.02% were severely infected. Of hospitalized adults, 14% with asthma and 29% with COPD died. Further, of patients in SNAR, 56% with asthma and 81% with COPD were also registered in the NPR, while on death certificates the agreement was lower (asthma 24% and COPD 71%). The frequency of severe COVID-19 in asthma and COPD was relatively low. Mortality for those hospitalized was twice as high in COPD compared to asthma. Comorbid asthma and COPD were not always identified among patients with severe COVID-19. Although most cases have no or very mild symptoms, the infection can also present as a severe disease with pneumonia, respiratory failure and multi-organ failure, with substantial mortality. After the first wave, in early September 2020, the National Board of Health and Welfare in Sweden stated that 22,260 patients had been hospitalized with COVID-19 and 5,981 had died. At the end of December 2019, the first cases of coronavirus disease 2019 (COVID-19) were reported from China. Since then, COVID-19 has evolved into a major worldwide health crisis. Previous studies have reported an increased risk of severe disease with need for hospitalization and worse clinical outcomes in patients with underlying comorbidities such as hypertension, cardiovascular disease and diabetes mellitus.6 However, the reported frequency of hospitalized COVID-19 patients with underlying respiratory diseases varies between countries.
Studies from both China and Italy have reported low rates (less than 5%) of respiratory diseases among hospitalized patients with COVID-19,8 while higher figures (14%-18%) have been reported from the United Kingdom and United States.10 In Sweden, the proportion of patients with chronic lung diseases in the intensive care units (ICU) was 14% during the first months of the pandemic. In addition to older age and male sex, chronic obstructive pulmonary disease (COPD), on the contrary, has been shown to be associated with an increasing need for intensive care treatment and risk of death. However, most studies are based on patients with COVID-19 identified with asthma or COPD as comorbidities, rather than settings using high-quality nationwide registers for identification of COVID-19 in patients with obstructive lung diseases. The aim of this study was to estimate the frequency of severe COVID-19, and COVID-19-related mortality, in a well-defined large population of Swedish patients with asthma and COPD. A secondary aim was to assess the frequency of asthma and COPD as registered comorbidities at discharge from hospital and in death certificates. Overall, asthma patients do not seem to represent a risk group.15 The development and design of SNAR have been described in detail, and the current study was approved by the Swedish Ethical Review Authority. The participants were identified in SNAR, which was initiated in 2013 and includes data on patients with a current physician diagnosis of asthma (children and adults) and/or COPD from primary and secondary care, as well as data on hospitalized COPD patients. In Sweden, the diagnostic criteria for asthma and COPD follow international guidelines. In August 2020, SNAR included in total 271,404 patients, of whom 198,113 (73%) had asthma, 55,942 (21%) had COPD, and 17,349 (6%) had both diagnoses.
Patients who had been included in SNAR but died before January 2020 were excluded from the study population. Data extraction from SNAR was conducted on August 17, 2020, and linked with data from the NPR (inpatient care), the Swedish Intensive Care Registry (SIR), and the Swedish Cause of Death Register (SCDR) on 11 September.17 These registers provide data about COVID-19 based on International Classification of Diseases, version 10 (ICD-10) codes U07.1 (COVID-19 confirmed by laboratory testing) and U07.2 (COVID-19 diagnosed clinically or epidemiologically, virus not identified). The registration of inpatient care, intensive care and causes of death has coverage of nearly 100%.17 Severe COVID-19 was defined as hospitalization (inpatient care identified as primary discharge diagnosis in NPR and/or ICU registration in SIR) or death due to COVID-19 (registered as an underlying or contributing cause of death in SCDR). Patients in SNAR with severe COVID-19 were defined as cases, and patients in SNAR without severe COVID-19 as controls. The frequency of asthma and COPD as underlying comorbidities to COVID-19, registered as secondary discharge diagnoses, was retrieved from the NPR at discharge using ICD-10 codes J45 (asthma with/without acute exacerbation) and J44 (COPD with/without acute exacerbation). Reports of chronic lung disease among patients receiving intensive care for COVID-19 were analysed from the SIR. From the SCDR, J44 and J45 were used to identify asthma and COPD reported as underlying or contributing causes of death together with COVID-19. SAS 9.4 for Windows was used for statistical analyses. Frequencies, proportions, means and standard deviations (SD) were used to describe data. For relevant estimates and differences, 95% confidence intervals (CI) were calculated to assess statistical precision.

Among children with asthma in SNAR, only 0.02% (8 of 46,123 patients) had been hospitalized due to COVID-19.
In adults with asthma in SNAR, 0.5% (784 of 168,488 patients) were hospitalized due to COVID-19 and 0.1% (175 of 168,488) died of COVID-19, making the frequency of severe COVID-19 0.5%. Regardless of sex, cases with severe COVID-19 were substantially older than controls, with an average age of 66 years compared with 52 years. Of the adult asthma cases (n = 784), 98 patients (13%) received intensive care, with an average ICU stay of 13 days. The frequency of severe COVID-19 in patients with COPD from SNAR was 1.2% (870 of 72,421), and 0.5% (357 of 72,421) died of COVID-19. Of the hospitalized COPD cases (n = 721), a total of 54 (7%) required intensive care, with an average ICU stay of 10 days. No sex-related differences were observed. In 81% of the cases with a COPD diagnosis in SNAR, J44 was recorded in the NPR as a secondary discharge diagnosis after COVID-19-related hospitalization. Among COPD cases attending the ICU, 85% were identified as having a chronic lung disease in the SIR. In 71% of the cases, J44 was recorded as an underlying or contributing cause of death in the SCDR.

This study describes the frequency of severe COVID-19 in a cohort of over 270,000 Swedish asthma and COPD patients retrieved from the SNAR, a national quality register. Severe COVID-19 was more common in COPD than in asthma, and after hospitalization, 14% of asthma patients and 29% of COPD patients died. Increasing age was associated with severe COVID-19, and men with COPD were more prone to a poor outcome (death).
Despite the fact that we identified patients with known diagnosed asthma or COPD who developed severe COVID-19, their underlying comorbid obstructive lung disease was not always registered as a secondary discharge diagnosis or in death certificates, though more often in COPD than in asthma. The proportion with severe COVID-19 among patients with COPD registered in SNAR seems to be somewhat higher than in the Swedish general population (versus 0.3% in the general population ⩾ 18 years), but with unadjusted data it is difficult to generalize our results. Today, COPD is well recognized as a comorbidity associated with an increased risk of poor outcome in COVID-19. As far as asthma is concerned, there is no convincing evidence of asthma as a risk factor for severe COVID-19,18 and COVID-19 does not appear to induce severe asthma exacerbations or increase the risk of worse clinical outcome of COVID-19. However, our results showed that a higher proportion of patients with asthma required intensive care than patients with COPD, which has also been shown in another Swedish study of ICU patients with COVID-19. In the latter study, it was speculated that more severe COPD patients could be subject to limitation of care. In our study, patients with COPD were older than those with asthma, and older age may also be a reason for limitation of intensive care. In contrast, in the US a higher proportion of patients seemed to be admitted to the ICU due to COVID-19, which may reflect different management of COVID-19 between countries. Only 0.02% of children with asthma in SNAR had been hospitalized due to COVID-19; however, to date it is still unknown whether asthma is associated with an increased risk of worse outcome of COVID-19 among children. Our findings are consistent with previous reports that age influences the risk of worse outcome in COVID-19.
Children seem to present milder symptoms when infected with COVID-19, and the need for hospitalization is low.30 We have shown that adult patients with severe COVID-19 were significantly older than those without a severe infection, both in asthma and COPD and regardless of sex. At an early stage of the pandemic, reports indicated a poor prognosis in elderly people infected with COVID-19 and an increased vulnerability in elderly patients suffering from comorbidities such as COPD. There are reports of elderly patients being at higher risk of pulmonary and cardiac complications and increased mortality in COVID-19. This may be the reason for the high mortality rate (nearly 30%) in hospitalized patients with COPD in our study. One explanation for the increased risk of poor outcome in the elderly could be the natural aging process of the respiratory and immune systems, which might increase susceptibility to viral infections and lead to more serious clinical outcomes.33-35 As in other studies,37 male sex seems to be associated with severe COVID-19. There are various possible explanations for the worse clinical outcomes and higher mortality of COVID-19-infected men compared with women. The risk of infection after exposure seems to be equal between the sexes, but women seem to have better interferon- and Toll-like receptor (TLR)-mediated anti-viral responses and viral clearance. Besides, higher mortality can be associated with increased cytokine concentrations and a dysregulated inflammatory response in men, leading to respiratory and cardiovascular complications.

A major strength is the large database of SNAR with well-characterized physician-diagnosed patients with asthma and COPD. The possibility to link SNAR with other national registers allows monitoring of the frequency of severe COVID-19 among asthma and COPD patients in Sweden.
Linked together, the current data are a unique resource for respiratory research in patients with obstructive lung diseases. Importantly, the hospital care of patients with COVID-19 varies between countries. For example, ICU beds are in general fewer in Sweden than in other European countries,41 and the management of severe COVID-19 is heterogeneous, which may affect the external generalizability of our results. As asthma and COPD are chronic diseases with a high prevalence in the population, in Sweden estimated at 10% for asthma and 7% for COPD, a limitation is the difficulty of reaching 100% coverage in SNAR. However, of all counties in Sweden, health care units in Stockholm and Västra Götaland transmit data into SNAR most frequently, and these counties have also had the highest incidence of COVID-19 in Sweden. Thus, the over 250,000 patients included from these two counties, together with smaller counties in Sweden, are estimated to constitute a sample large enough to identify the frequency of severe COVID-19. Importantly, transmission of data from healthcare systems to the NPR and the National Causes of Death Register is subject to a delay. As a consequence, there is a risk that our data do not include all cases of severe COVID-19 in Sweden up to September; severe infection may therefore have been under-estimated rather than over-estimated in our study. Further limitations are the lack of information on the severity of asthma and COPD, and the lack of an age-matched control group without asthma or COPD in this manuscript. Future research will aim to study severe COVID-19 in asthma and COPD and its associations with disease severity, pharmacological treatment and comorbidities.

In Sweden, 0.5% of adults with asthma and 1.2% of those with COPD in SNAR had severe COVID-19 during the first wave of the pandemic. Of patients hospitalized due to COVID-19, 14% of those with asthma and 29% of those with COPD died. Asthma and COPD are not always registered in Swedish health care as comorbidities together with COVID-19, nor as underlying or contributing causes of death.
However, COPD is reported more often than asthma, suggesting that physicians in Sweden consider COPD to play a more significant role in the disease course of severe COVID-19 than asthma.

The fatality rates and factors associated with death from coronavirus disease 2019 (COVID-19) in hemodialysis patients have been extensively investigated. However, data on peritoneal dialysis (PD) patients remain scarce. In this nationwide cohort study, we assessed the 28-day COVID-19-related fatality rate in PD patients between August 2021 and July 2022 using data from the InCov19-PD registry. Predictors associated with death were evaluated using a multivariable Cox regression model. Changes in functional status before and during COVID-19 were also examined. A total of 1,487 eligible participants were evaluated. During the study period, 196 participants died within 28 days after COVID-19 diagnosis. In a multivariable Cox regression model, an increased risk of death within 28 days after COVID-19 diagnosis among PD patients was independently associated with functional impairment during COVID-19, SARS-CoV-2 infection with the Delta variant, and the need for respiratory support (p < 0.01 for all). Conversely, the number of COVID-19 vaccine doses administered and receiving corticosteroid therapy during COVID-19 were associated with a decreased risk of death within 28 days after COVID-19 diagnosis. The proportion of functionally independent PD patients dropped from 94% at baseline to 63% during COVID-19 (p < 0.01). The COVID-19-related 28-day fatality rate was high among PD patients. The predictors of COVID-19-related death in PD patients were similar to those in hemodialysis patients. During COVID-19, PD patients commonly experienced functional deterioration.

The coronavirus disease 2019 (COVID-19) pandemic posed unprecedented challenges for patients with end-stage kidney disease (ESKD) requiring kidney replacement therapy (KRT).
Numerous studies have demonstrated that the case fatality rates of COVID-19 in ESKD patients are significantly higher than in the general population. Among ESKD patients diagnosed with COVID-19, those on chronic PD therapy have received much less attention than those on other dialysis modalities; the majority of COVID-19 mortality data and clinical outcomes are derived from the hemodialysis (HD) and kidney transplant (KT) populations.8, 9 Herein, we sought to determine the 28-day case fatality rate and predictors of COVID-19-related death among patients on chronic PD therapy. Alterations in patients' functional status and in the pattern of routine PD care, focusing on PD bag exchange, were also investigated.

This is a prospective observational nationwide cohort study using data from the InCov19-PD registry, a national surveillance registry that assessed the clinical outcomes and health impacts of SARS-CoV-2 infection in Thai PD patients. Under the auspices of the Nephrology Society of Thailand (NST), the InCov19-PD registry prospectively collected data from August 2021 to July 2022 on PD patients diagnosed with COVID-19. Patients were included in the study if they tested positive for SARS-CoV-2, were at least 12 years old, and had received chronic PD treatment for at least 1 month before COVID-19 diagnosis. For PD patients who had multiple COVID-19 episodes, only the initial episode was included in the study. PD patients who had no documented clinical outcome at day 28 after COVID-19 diagnosis, were aged less than 12 years, or were commencing PD treatment for acute kidney injury were excluded from the study.
A COVID-19 diagnosis was confirmed either by a positive SARS-CoV-2 real-time reverse transcription polymerase chain reaction test or by a rapid antigen test kit (ATK) on samples obtained from a nasopharyngeal swab. The Institutional Review Board of the Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand approved this study (IRB No. 0298/65). All participants provided written informed consent before enrollment. The study was conducted following the principles laid out in the Declaration of Helsinki. The primary outcome was the 28-day case fatality rate of PD patients with confirmed COVID-19 in the InCov19-PD registry. The COVID-19 confirmation date was utilized as the index date when calculating the 28-day case fatality rate. Survivors were participants who were still alive 28 days after the COVID-19 confirmation date; the remaining individuals, who died within 28 days as a result of COVID-19, were classified as non-survivors. Secondary objectives included identifying predictors of COVID-19-related death within 28 days following a confirmed COVID-19 diagnosis. The impacts of COVID-19 on the hospitalization rate, the need for respiratory support, and changes in patient functional status and patterns of PD bag exchange during COVID-19 were also examined. The functional status of patients before and at the time of COVID-19 diagnosis was evaluated and categorized as independent, partially dependent, or totally dependent on a caregiver. Data were collected via an online database system. Clinical outcomes were evaluated at baseline and monitored until the patient died or until day 28 after COVID-19 diagnosis, whichever came first. The baseline functional status of PD patients and assistance with PD bag exchange before and during COVID-19 were obtained from the index patient and/or their family members using a semi-structured questionnaire.
The COVID-19 confirmation date was utilized as the index date for each participant's baseline characteristics, laboratory data, functional status, and vaccination record. All participants were followed until death, recovery, or hospital discharge. At the end of July 2021, physicians and PD nurses from all NST-registered PD facilities were invited to voluntarily participate in this study. The InCov19-PD registry collected patient-level and facility-level data from all voluntarily participating facilities and study participants using a standard process and standardized data collection instruments. The NST verified the index case by contacting the treating physicians and reference PD nurses. Responsible physicians or PD nurses provided the NST with demographic data, comorbidities, details of PD prescriptions, COVID-19 vaccination status and regimens, laboratory parameters at the time of COVID-19 diagnosis, identified SARS-CoV-2 strains, initial and subsequent treatments, and COVID-19-related complications.

The 28-day case fatality rate of the overall cohort was computed. The fatality outcome was analyzed by survival analysis using Kaplan–Meier curves with a log-rank test. All included patients had complete follow-up, thus there was no censoring for missing outcome data. The associations between the covariates and the 28-day case fatality outcome were first evaluated using univariable Cox proportional hazards regression and subsequently adjusted for age. The multivariable Cox regression model was constructed using backward elimination and list-wise exclusion of missing data. All variables with age-adjusted p-values of less than 0.10 were candidates for the multivariable model. The least significant variables were removed repeatedly from the model until all remaining variables had p-values below 0.05. The proportional hazards assumption was verified using Schoenfeld residuals and plots. Data were analyzed using R 4.0.5.
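The survival analysis described above (Kaplan–Meier estimation of the 28-day fatality outcome) can be sketched as follows. This is a minimal illustration in Python rather than the authors' R workflow, and the follow-up times below are hypothetical, not registry data:

```python
# Minimal sketch of a Kaplan-Meier estimator for a 28-day fatality outcome.
# Illustrative only; the study itself used R 4.0.5 on InCov19-PD registry data.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at event times; events[i] = 1 means death, 0 censored."""
    pairs = sorted(zip(times, events))  # order subjects by follow-up time
    n_at_risk = len(pairs)
    survival = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = removed = 0
        # Group all subjects sharing this time point.
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]
            removed += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
    return curve

# Hypothetical follow-up times (days) and death indicators for 8 patients.
times  = [5, 10, 10, 14, 20, 28, 28, 28]
events = [1,  1,  0,  1,  0,  0,  0,  0]
print(kaplan_meier(times, events))
```

With these toy inputs the survival estimate steps down only at the death times (days 5, 10 and 14), while censored patients simply leave the risk set.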
P-values less than 0.05 were considered statistically significant. Categorical variables were described as frequencies with percentages, while continuous variables were presented as means with standard deviations (SD). Baseline demographic data and laboratory parameters in the survivor and non-survivor groups were compared using the Chi-square test and Fisher's exact test for categorical factors, and the independent t-test for continuous variables.

The InCov19-PD registry documented a total of 1,660 PD patients diagnosed with COVID-19 from August 2021 to July 2022. Of those, 1,487 eligible participants with a complete clinical record of outcomes on day 28 after COVID-19 diagnosis and meeting the other inclusion criteria were included in the analysis. During the study period, 196 (13%) study participants died (non-survivor group), while 1,291 (87%) survived (survivor group). Patients in the non-survivor group were significantly older (p < 0.001), had a higher prevalence of diabetes mellitus, and were more functionally dependent at baseline (p < 0.0001). At baseline, most (94%) PD patients in this cohort received continuous ambulatory PD (CAPD). During COVID-19, 46 (6%) patients switched from CAPD to automated PD (APD), while all APD patients continued their initial dialysis modality. Of the 1,487 PD patients diagnosed with COVID-19, a total of 921 (62%) were hospitalized, and approximately 64% of these patients needed oxygen or other invasive respiratory support. Furthermore, impaired functional status during COVID-19, but not at baseline, was associated with the risk of dying within 28 days of a COVID-19 diagnosis (p < 0.001) in the multivariable model. Similarly, COVID-19 affected the bag exchange patterns of PD patients. At baseline, more than half (59%) of PD patients could independently perform PD bag exchange without assistance. During COVID-19, 10% of patients who were previously independent in PD bag exchange required additional assistance from caregivers or nurses.
In the multivariable model, however, the need for PD bag exchange support during COVID-19 illness was not associated with the risk of dying within 28 days of a COVID-19 diagnosis. The functional status of PD patients with COVID-19 was altered: the proportion of PD patients classified as independent in their activities of daily living (ADLs) declined from 94% at baseline to 63% after COVID-19 diagnosis. The patients/participants provided their written informed consent to participate in this study. PC, PD, SB, SK, SS, and TK conceptualized and designed the study. PC, SB, and TK collected the data and drafted the manuscript. PC, SB, TH, TN, and TK analyzed the data. All authors reviewed and approved the final version of the manuscript.

A continuous-mode fixed-bed up-flow column adsorption study was conducted utilizing Acacia nilotica sawdust activated carbon (ASAC) as an adsorbent for the adsorption treatment of the toxic Indigo Carmine Dye (ICD). The effect of influent ICD concentration, flow rate, and column bed depth on the adsorption characteristics of ASAC has been investigated. According to the column study, the highest ICD removal efficiency was approximately 79.01% at an initial concentration of 100 mg/L with a flow rate of 250 mL/h at a bed depth of 30 cm, with an adsorption capacity of 24.67 mg/g. The experimental work confirmed the dependency of the breakthrough curves on dye concentration and flow rate for a given bed depth. The Thomas, Yoon–Nelson, and bed-depth-service-time (BDST) kinetic models were applied, along with error analysis, to interpret the experimental data for bed depths of 15 cm and 30 cm, ICD concentrations of 100 mg/L and 200 mg/L, and flow rates of 250 mL/h and 500 mL/h. The analysis predicted the breakthrough curves using regression and indicated that all three models were comparable for depicting the entire breakthrough curve.
The characteristic parameters determined by process design and error analysis revealed that the Thomas model best described ICD adsorption onto ASAC, followed by the BDST and Yoon–Nelson models. The BET surface area and BET pore volume of ASAC were 737.76 m2/g and 0.2583 cm3/g, respectively. SEM and XRD analyses reveal the micro-porous and amorphous nature of ASAC. FTIR spectroscopy indicates distinctive functional groups such as -OH, C–H, C–C, C–OH, and C–O on ASAC. It can be concluded that ASAC can be used efficiently as an alternative option for industrial wastewater treatment.

About 10–15% of textile industry dyes are discharged into streams, making the effluents aesthetically unpleasant. Discharge of such colored effluents is dangerous from an environmental and ecological point of view. Color obstructs sunlight dispersion, hinders photosynthesis, and constrains the growth and metabolism of aquatic biota. The removal of color from dye-bearing effluent is a crucial challenge owing to the difficulty of managing such wastewaters with conventional, fixed treatment methods; consequently, such techniques are ineffective and cannot efficiently handle the large variety of organic pigments discharged. Colorants and dyestuffs are commonly used in manufacturing and commercial industries, including clothing, rubber, pharmacy, leather, printing, fruit, cosmetics, carpet, and paper. The textile industry consumes more than 80% of the entire production of dyestuff, making it the principal consumer.2 Industrial activated carbon (AC) is a well-known adsorbent with admirable adsorption capabilities. However, it is expensive, and its regeneration makes it pricier in some regions of the world.
Hence it is desirable to search for low-cost alternatives such as natural materials, agricultural by-products, or industrial waste as adsorbent materials. These products do not need any additional or expensive pre-treatment and should be regarded as possible adsorbents for treating dye-containing wastewater. These low-cost products provide acceptable performance for treating colored effluents in laboratory measurements.3 Adsorption, the deposition of impurities on the surface of a solid, is an attractive alternative treatment. Owing to its convenience, simplicity of use and handling, sludge-free operation, and regeneration potential, it has become popular and appealing, and it is an appropriate process for extracting non-biodegradable chemicals from wastewater. Low-cost materials such as Acacia nilotica sawdust activated carbon4, orange peels activated carbon5, chicken feathers6, pongamia pinnata seed shell activated carbon7, palm wood cellulose activated carbon8, banana peels activated carbon9, acacia glauca sawdust activated carbon10, babul sawdust activated carbon11, etc. have been considered as unconventional adsorbents for removing dyestuffs, heavy metals, and other impurities from solution. Mall et al.13 provided a critical analysis of such minimum-cost adsorbents for the treatment of wastewaters carrying several toxins. Sorption of different adsorbates onto other adsorbent materials in column mode, such as boron by sepiolite14, azo dye by jute fibers15, phenolic compounds16, and methylene blue dye by zeolite17, has also been described by some researchers. However, dye adsorption by activated carbon of Acacia nilotica sawdust in column mode is hardly reported. Indigo Carmine is a dark blue, poisonous and toxic, crystalline powdered dye with the chemical composition C16H8Na2O8S2N2, a molecular weight of 466.367 g/mol, and a characteristic wavelength of 610 nm. It is very commonly used as a colorant and as a pH indicator in various activities.
It can cause drug allergies that may endanger human life,18 affects bones and chromosomes, and can cause dangerous hemodynamic effects in living beings.19 This study investigates the efficacy of Acacia nilotica sawdust activated carbon (ASAC) for the adsorptive elimination of the poisonous Indigo Carmine Dye (ICD) in a continuous fixed-bed up-flow column.

The objective of the present research work is to utilize Acacia nilotica sawdust activated carbon (ASAC) as an adsorbent for the adsorptive treatment of toxic Indigo Carmine Dye (ICD) bearing wastewater. For this purpose, continuous-mode fixed-bed up-flow column adsorption analysis is conducted, and the effect of influent ICD concentration, flow rate, and column bed depth on the adsorption characteristics of ASAC has been investigated.

The current study used a perspex column for the continuous fixed-bed up-flow column analysis. The adsorbent of known weight for a given bed depth, i.e., Acacia nilotica sawdust activated carbon (ASAC), was packed with glass beads at the top and bottom. The adsorbate, Indigo Carmine Dye (ICD) solution of known initial concentration at natural pH, was pumped at the appropriate flow rates using a pump. Samples were taken at daily intervals at the column's outlet, and the concentrations were determined using a spectrophotometer.

For the continuous fixed-bed up-flow column study, the adsorbent material, i.e., activated carbon of babool sawdust for the removal of ICD, was prepared by chemically activating the material with ortho-phosphoric acid. The plant chosen for the study is the Acacia nilotica tree; Acacia nilotica is the scientific name of the evergreen Babool tree. It is native to Africa, the Middle East, the Indian subcontinent and across Asia, and it is locally available and found abundantly. The plant itself is not directly utilized as an adsorbent in the present research work: the sawdust of the babool tree, a waste material from sawmills and hence of marginal cost, has been used to prepare the adsorbent. Sawmills and timber industries are commonly available sources of sawdust or wood waste. In addition, the Acacia nilotica tree/sawdust/wood waste is not listed as vulnerable/rare/endangered/indeterminate. The cost-effective, low-cost raw sawdust, obtained from a local sawmill in a quantity of 0.5 kg, was crushed and sieved according to the protocol outlined in Part 4 of Bureau of Indian Standards IS-2720,20 to obtain a uniform size in the range of 250–500 μm. It was then rinsed with doubly distilled water, naturally dehydrated, and incubated in an oven at 105 °C for around 2 h. After this, char was obtained by mixing 25 mL of ortho-phosphoric acid (H3PO4) into 50 g of dehydrated sawdust in a 0.5:1 volume-to-weight ratio. To complete the activation and carbonization, the char was placed in a muffle furnace for around 1 h at 450 °C. The carbon was then rinsed with doubly distilled water for 2 h, dried at 378 K, and used in the adsorption column analysis as Acacia nilotica sawdust activated carbon (ASAC).4

Characterization of the adsorbent ASAC includes Brunauer–Emmett–Teller (BET) surface area and pore volume analysis, scanning electron microscopy (SEM), Fourier transform infrared (FTIR) spectroscopy, and X-ray diffraction (XRD). Adsorption is a surface process and depends strongly on the adsorbent's surface characteristics; the area, volume, morphology, chemistry and constitution of the ASAC surface were therefore studied by BET, SEM, FTIR, and XRD analysis. The ICD was acquired from a scientific store (Upper India, Nagpur).
A standard 1000 mg/L stock solution was obtained by dissolving 1 g of powdered ICD in 1000 mL of doubly distilled water. Dilution of the standard 1000 mg/L stock yielded the desired solutions of 100 mg/L and 200 mg/L concentration. A double-beam Shimadzu ultraviolet–visible spectrophotometer (Model No. 2450) was used to measure absorbance at the characteristic wavelength of 610 nm.

Continuous-flow analysis fully exploits the concentration differential, which is believed to be a prime factor for adsorption, resulting in more optimal use of the adsorbent's capacity and improved effluent performance.21 The schematic diagram of the column study set-up is shown in the figure. The fixed-bed adsorption analysis was performed using perspex glass columns with an internal diameter of 2 cm and a length of 100 cm. The experiments were conducted using doubly distilled water at the natural pH of the respective solutions and continued till the bed reached the exhaustion point. Predetermined flow rates of 250–500 mL/h, ICD concentrations of 100–200 mg/L and column bed depths of 15–30 cm were adopted in the present research work in order to examine their effect on the adsorption performance and to establish optimum conditions for the adsorption of ICD onto ASAC in a column.22 The breakthrough curve is drawn by plotting the outlet concentration against elapsed time for a given initial concentration, flow rate, bed depth, and column diameter. In a fixed-bed study, the adsorbent closest to the inlet of the contaminated water saturates first, where maximum adsorption occurs initially. As time passes, these adsorption zones advance until they reach the bed exit.23 As the adsorption zones migrate through the column, the adsorbate concentration at the exit approaches the feed concentration. Continuous-flow operation also lends itself to periodic desorption of the bed.
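The dilution step described above (working solutions of 100 mg/L and 200 mg/L from the 1000 mg/L stock) follows the standard relation C1·V1 = C2·V2. A minimal sketch of the arithmetic:

```python
# Dilution arithmetic for preparing working ICD solutions from stock,
# using C1*V1 = C2*V2 (C1 = stock concentration, C2 = target concentration).

def stock_volume_ml(stock_mg_per_l, target_mg_per_l, final_volume_ml):
    """Volume of stock to be made up to final_volume_ml for the target concentration."""
    return target_mg_per_l * final_volume_ml / stock_mg_per_l

# 1 L each of the 100 mg/L and 200 mg/L solutions used in the column runs:
print(stock_volume_ml(1000, 100, 1000))  # 100.0 mL of stock, made up to 1 L
print(stock_volume_ml(1000, 200, 1000))  # 200.0 mL of stock, made up to 1 L
```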
The breakthrough curve describes the efficiency of a continuous adsorption run in a fixed-bed column. Breakthrough is taken to occur when the outflow concentration from the column bed reaches (3–5)% of the inflow concentration. The effect of initial concentration on ICD adsorption in the ASAC fixed-bed column was investigated; measurements were carried out at initial concentrations of 100–200 mg/L with maintained flow rates of 250 mL/h and 500 mL/h and column bed depths of 15 cm and 30 cm. The obtained results, including breakthrough times, are depicted in the table.

The experimental column data were fitted to the Thomas model, whose linearized form can be written as

ln[(C0/Ct) − 1] = (kT q0 m)/Q − kT C0 t

where C0 is the influent ICD concentration (mg/L), Ct is the concentration of effluent ICD (mg/L), kT is the Thomas rate constant (L/mg/min), q0 is the equilibrium ICD uptake per gram of ASAC (mg/g), Q is the flow rate (mL/h), m is the mass of ASAC (g), and t is time (min). A linear plot of ln[(C0/Ct) − 1] against time t was used to obtain the Thomas rate constant (kT) and the maximum adsorption capacity (q0) from the slope and intercept. The model parameters and regression coefficients (R2) were determined by regression analysis, and the results, together with the error analysis by the least-squares method (S.S., less than or equal to about 0.004), are also listed in the table as functions of t and Ct/C0. The adsorption capacity increased as the bed depth was raised from 15 to 30 cm, while the value of kT decreased. Thus, a lower influent ICD concentration, a lower flow rate, and a greater bed depth improve the efficiency of the column for ICD adsorption onto ASAC. Comparison of the adsorption capacity values obtained from the experimental data and from the model calculations showed that they were significantly close under the given conditions, which indicates the applicability of the Thomas model.

The Yoon–Nelson model28 was used to study the breakthrough behaviour of ICD adsorption onto ASAC.
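The linear Thomas-plot procedure (regressing ln[(C0/Ct) − 1] on t, then recovering kT from the slope and q0 from the intercept) can be sketched as below. The breakthrough series here is synthetic, generated from assumed parameter values, since the paper's raw Ct/C0 data live in its tables:

```python
# Sketch of the linearized Thomas-model fit:
#   ln(C0/Ct - 1) = kT*q0*m/Q - kT*C0*t
# Synthetic data only; the flow rate is taken in mL/min and converted to L/min.
import math

def fit_thomas(t_min, ct_over_c0, c0, q_ml_per_min, m_g):
    """Least-squares fit; returns (kT in L/mg/min, q0 in mg/g)."""
    y = [math.log(1.0 / r - 1.0) for r in ct_over_c0]  # linearized response
    n = len(t_min)
    tbar = sum(t_min) / n
    ybar = sum(y) / n
    slope = (sum((t - tbar) * (yi - ybar) for t, yi in zip(t_min, y))
             / sum((t - tbar) ** 2 for t in t_min))
    intercept = ybar - slope * tbar
    kT = -slope / c0                                        # slope = -kT*C0
    q0 = intercept * (q_ml_per_min / 1000.0) / (kT * m_g)   # intercept = kT*q0*m/Q
    return kT, q0
```

As a design note, the same slope/intercept bookkeeping carries over to any linearized breakthrough model; only the transform of Ct/C0 changes.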
The Yoon–Nelson model28 rests on the hypothesis that the rate of decrease in the sorption probability for each adsorbate molecule is proportional to the probability of adsorbate sorption and the probability of adsorbate breakthrough28. For a one-component system, the linearized Yoon–Nelson equation is ln[Ct/(C0 − Ct)] = kYN t − τ kYN, where C0 is the influent ICD concentration (mg/L), Ct is the effluent ICD concentration (mg/L), kYN is the Yoon–Nelson rate constant (1/min), τ is the time needed for 50% adsorbate breakthrough (min), and t is the sampling time (min). A plot of ln[Ct/(C0 − Ct)] against sampling time t was used to compute kYN and τ from the slope and intercept; the fitted values are listed in Table . For a given column bed depth, the Yoon–Nelson rate constant kYN increased while the 50% breakthrough time τ fell with increasing influent ICD concentration and flow rate: as the flow rate rose from 250 to 500 mL/h, and as the influent concentration (C0) rose from 100 to 200 mg/L, kYN increased and τ decreased. In all cases, the regression correlation coefficient R2 was found to be nearly equal to unity. The total quantity of adsorbate (qtotal) decreased as the influent ICD concentration and flow rate increased. Although all the parameters obtained satisfy the Yoon–Nelson hypothesis, the increase in kYN with decreasing τ, together with S.S values slightly higher than those of the other models, suggests a marginal (negligible) deviation of this model for the present system. The bed depth service time (BDST) model29 is a common tool for forecasting the relationship between bed depth and service time for given concentrations and sorption characteristics.
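The Yoon–Nelson fit follows the same pattern: regress ln[Ct/(C0 − Ct)] on time, read kYN off the slope, and recover τ from the intercept. The breakthrough data below are invented placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical breakthrough data (placeholder values, not measured results).
t = np.array([30, 60, 90, 120, 150, 180])               # sampling time, min
ct_c0 = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.92])  # Ct/C0

# Linearized Yoon-Nelson model: ln[Ct/(C0 - Ct)] = kYN*t - tau*kYN
y = np.log(ct_c0 / (1.0 - ct_c0))
kYN, intercept = np.polyfit(t, y, 1)   # slope = kYN, intercept = -tau*kYN
tau = -intercept / kYN                 # time for 50% breakthrough, min

print(f"kYN = {kYN:.4f} 1/min, tau = {tau:.1f} min")
```

With these placeholder data the fitted τ lands between the samples where Ct/C0 crosses 0.5, a quick sanity check on any such fit.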
The BDST model assumes that the pace of sorption is governed by the reaction between the adsorbate (ICD) and the adsorbent (ASAC). In service-time form the BDST equation can be written as t = (N0 z)/(C0 u) − ln[(C0/Ct) − 1]/(ka C0), where C0 is the inflow ICD concentration (mg/L), Ct is the outflow ICD concentration (mg/L), ka is the BDST model rate constant (L/mg/min), N0 is the saturation concentration (mg/L), z is the bed depth (cm), u is the influent linear velocity (cm/min), and t is the sampling time (min); the adsorption capacity by the BDST model, q0, the equilibrium ICD uptake per gram of adsorbent (mg/g), is obtained from the flow rate Q (mL/h) and the mass m of adsorbent (ASAC) in the column (g). When the service time t is plotted against the column bed depth z for a fixed Ct/C0, a straight line is obtained, from which the BDST model yields the adsorption capacity, the saturation concentration (N0), and the BDST rate constant (ka). The model parameters, the total adsorbed quantity, and the correlation coefficients are presented in Table ; ka and N0 increased accordingly. The BDST model variables can be extrapolated to different flow rates and other influent dye concentrations without additional laboratory experiments: to estimate the column output at new feasible flow rates and influent dye concentrations, the BDST equation was applied at either a changed flow rate or a changed influent ICD concentration. The error analysis values obtained by the least-squares method (S.S) were smaller than those obtained for the Yoon–Nelson model, and robust predictions were found when adjusting feed concentration and flow rate.
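The BDST regression of service time against bed depth can be sketched as follows; the service times, breakthrough criterion, and column geometry are assumed placeholder values (the 2 cm internal diameter and 250 mL/h flow rate echo the ranges stated in the text, the rest is invented).

```python
import numpy as np

# Hypothetical service times at 5% breakthrough for three bed depths
# (placeholder values, not the study's measurements).
z = np.array([15.0, 22.5, 30.0])       # bed depth, cm
t_b = np.array([60.0, 115.0, 170.0])   # service time at breakthrough, min
C0, Cb = 100.0, 5.0                    # influent and breakthrough conc., mg/L

Q = 250.0 / 60.0                       # flow rate: 250 mL/h -> cm^3/min
area = np.pi * (2.0 / 2.0) ** 2        # cross-section of a 2 cm ID column, cm^2
u = Q / area                           # influent linear velocity, cm/min

# BDST service-time form: t = (N0/(C0*u))*z - ln(C0/Cb - 1)/(ka*C0)
slope, intercept = np.polyfit(z, t_b, 1)
N0 = slope * C0 * u                              # saturation concentration, mg/L
ka = -np.log(C0 / Cb - 1.0) / (intercept * C0)   # BDST rate constant, L/mg/min

print(f"N0 = {N0:.0f} mg/L, ka = {ka:.2e} L/mg/min")
```

The slope-only dependence of N0 on bed depth is what allows the rescaling to new flow rates and feed concentrations mentioned above.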
In every case, the regression coefficient R2 was approximately equal to one, indicating the significance and rationality of the BDST model for the current system (Table ; Fig. 7). Columns covering a wide range of feasible flow rates and concentrations can be designed using the model and the measured constants, and these outcomes show that the optimized conditions can be used to estimate adsorption efficiency on ASAC under the desired operating conditions for the ICD adsorption process. Because diverse equations are used to calculate linear regression correlation coefficients (R2), these values may significantly influence the accuracy of a linear regression examination; nonlinear regression analysis can therefore be a superior alternative for avoiding such errors. Accordingly, the parameters of the various kinetic models were determined using nonlinear correlation coefficient examination and the least-squares-of-errors procedure17. Error analysis was conducted to establish which model gives better results. The least-squares error is given by S.S = Σ[(Ct/C0)c − (Ct/C0)e]2/N, where (Ct/C0)c is the ratio of effluent to influent ICD concentrations calculated using the Thomas, Yoon–Nelson and BDST plots, (Ct/C0)e is the corresponding ratio obtained under experimental conditions31, and N is the number of experimental data points. Evaluating the data using S.S together with the R2 criterion helps identify the most suitable and best-fitting kinetic model. Figure compares the experimental and model breakthrough curves for C0 = 100 mg/L and 200 mg/L at (a) Q = 250 mL/h and (b) Q = 500 mL/h.
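The S.S error analysis amounts to a mean of squared residuals between model-predicted and experimental Ct/C0 ratios. In the sketch below the division by N is an assumption about the elided equation (drop it if S.S is defined as a plain sum of squares), and the data values are invented.

```python
import numpy as np

def sum_of_squared_errors(ratio_calc, ratio_exp):
    """Least-squares error S.S = sum[((Ct/C0)_calc - (Ct/C0)_exp)^2] / N.
    Dividing by the number of points N is an assumption about the elided
    equation; remove it if S.S is a plain sum of squares."""
    ratio_calc = np.asarray(ratio_calc, dtype=float)
    ratio_exp = np.asarray(ratio_exp, dtype=float)
    return float(np.sum((ratio_calc - ratio_exp) ** 2) / len(ratio_exp))

# Hypothetical model-predicted vs experimental Ct/C0 values.
calc = [0.04, 0.13, 0.29, 0.56]
exp = [0.05, 0.12, 0.30, 0.55]
print(sum_of_squared_errors(calc, exp))
```

A value at or below the 0.004 ceiling quoted in the text would indicate a close model-experiment match.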
From the figures for all three models, the Thomas model best described the development of ICD adsorption onto ASAC, followed by the BDST and Yoon–Nelson models. The B–E–T surface area and pore volume of ASAC were analyzed by the Brunauer–Emmett–Teller (B–E–T) method according to ASTM D-3663-03; the standard test was conducted using an ASAP 2020 surface area and porosity analyzer17. The B–E–T surface area and B–E–T pore volume of raw sawdust were 543.28 m2/g and 0.1925 cm3/g, respectively, while those of ASAC were obtained as 737.76 m2/g and 0.2583 cm3/g. This rise in B–E–T surface area and pore volume is due to the physico-chemical activation of raw sawdust into activated carbon (ASAC). S-E-M analysis explains the surface morphology and porosity of ASAC: ASAC laden with ICD was examined at 15 kV and 500× magnification using a scanning electron microscope, and the images of raw and ICD-laden ASAC (Fig. ) reveal its porous texture17. F-T-I-R spectra of raw ASAC and ICD-laden ASAC are depicted in Fig. . The band due to the –OH group shifts slightly to 3682.03 cm−1 after ICD sorption onto ASAC. The band in raw ASAC at 2933 cm−1 depicts well-built C–H links, which shift faintly to 2898 cm−1 in ICD-laden ASAC. The peak in raw ASAC at 2160 cm−1 corresponds to a weak C–C linkage that does not shift after ICD adsorption. Similarly, the peak at 1551 cm−1 for raw ASAC shifts to 1563 cm−1 for ICD-laden ASAC33, depicting the strong presence of C–OH and C–O groups.
This shifting of peaks validates the sorption of ICD onto ASAC. F-T-I-R spectra are commonly used to identify the characteristic functional groups with excellent sorption capability, and F-T-I-R spectroscopy monitors the chemistry of the ASAC surface and the ICD–ASAC surface36. The X-R-D technique is a tool for analyzing the crystalline or amorphous constitution of adsorbents; since the adsorption process may lead to changes in the adsorbent's constitution, understanding the molecular and crystalline/amorphous constitution of ASAC provides valuable information regarding adsorption (Figure ). In summary, the B–E–T surface area and B–E–T pore volume of raw sawdust were 543.28 m2/g and 0.1925 cm3/g, respectively, whereas those of ASAC were found to be 737.76 m2/g and 0.2583 cm3/g; this rise is due to the physico-chemical activation of raw sawdust into activated carbon (ASAC). S-E-M and X-R-D analyses reveal the micro-porous and amorphous nature of ASAC, and F-T-I-R spectra indicate distinctive functional groups such as the –OH group, C–H bond, C–C bond, C–OH, and C–O groups on ASAC. It can be concluded that an Acacia nilotica sawdust activated carbon column is a viable alternative treatment option for industrial wastewater. The present study demonstrates the utilization of ASAC as an effective solution for removing ICD from wastewater: a continuous fixed-bed adsorption study on ASAC showed that it can be used as an adsorbent material in industrial wastewater treatment to remove dyes from solution. The adsorption of ICD for a given column bed depth was influenced by the flow rate and the ICD concentration.
It was found that breakthrough and exhaustion occurred faster at the shallower bed depth (15 cm) and were delayed as the column depth increased (30 cm). It was also observed that breakthrough and exhaustion times decreased with an increase in initial dye solution concentration (from 100 to 200 mg/L) and flow rate (from 250 to 500 mL/h). The percentage removal efficiency and adsorption capacity of ASAC increase at lower initial ICD concentration and flow rate. The column study revealed a maximum ICD removal efficiency and ASAC adsorption capacity of about 47.35% and 21.99 mg/g, respectively, at the lower column depth (15 cm), increasing to 79.01% and 24.67 mg/g at the higher column depth (30 cm) when the initial dye concentration was reduced from 200 to 100 mg/L and the flow rate from 500 to 250 mL/h. The column kinetic study and error analysis showed that the experimental breakthrough plots compared satisfactorily with the breakthrough profiles calculated by the Thomas, Yoon–Nelson and BDST models. Comparing the parameter values, constants, and error analysis of the three models in describing the mechanism of ICD adsorption onto ASAC, the Thomas model provided the best description, ahead of the BDST and Yoon–Nelson models, although the correlation coefficients show that all three models match the experimental results well in fixed-bed adsorption systems.
In this scoping review, the PubMed, CINAHL, Web of Science, Embase, CNKI, and Wanfang electronic databases were searched according to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Extension for Scoping Reviews. The searches identified 669 articles, 54 of which met the inclusion criteria. Reviewers extracted data on study characteristics, measurements, positive factors, negative factors, and additional themes. This scoping review included studies from 10 countries; most were quantitative, with a cross-sectional design. The prevalence of reproductive concerns among AYA cancer survivors ranged from 44% to 86%, and 28% to 44% of the survivors experienced moderate to severe concerns. The specific implementation of fertility consultation, including its timing, frequency, and content, deserves ongoing exploration. There has been little research on interventions to alleviate reproductive concerns, and appropriate interventions for AYA cancer survivors are lacking. Therefore, theoretical frameworks for this problem should be explored, and appropriate psychotherapy should be designed to alleviate these concerns. 1 Fertility plays an important role in the continuation of human life, and young adulthood is the prime time to have one's own biological children.
However, among cancer survivors, reproductive function is often impaired or interrupted due to the destructive nature of the cancer itself or the reproductive toxicity of cancer treatment. Thus, there are concerns about the fertility potential of patients diagnosed with cancer, especially among adolescent and young adult (AYA) (aged 15–39 years) cancer survivors, particularly given the steadily increasing 5-year survival rate of cancer patients. To provide directions for improving quality of life, it is important to understand the level, influencing factors, and clinical interventions regarding reproductive concerns in AYA cancer survivorship. The objective of this scoping review was to evaluate the literature on reproductive concerns among AYA cancer survivors after the completion of cancer treatment, to identify research gaps in the current literature, and to describe future research directions to alleviate reproductive concerns. 2 We performed our scoping review guided by the six steps illustrated by Arksey and O'Malley. 2.1 To accomplish the aims of the study, the following research questions were identified: How have reproductive concerns among AYA cancer survivors been assessed? What have assessments of reproductive concerns among AYA cancer survivors shown? What factors may be associated with the level of reproductive concerns among AYA cancer survivors? Are there interventions to alleviate reproductive concerns among AYA cancer survivors? 2.2 We conducted a comprehensive search of the PubMed, CINAHL, Web of Science, Embase, CNKI, and Wanfang electronic databases in August 2021.
We searched the above databases using a combination of the following keywords: child OR young adults OR adolescents OR childbearing age, AND cancer survivorship OR childhood cancer survivors OR cancer OR malignancy, AND fertility concern* OR infertility concern* OR reproductive concern* OR fertility worries OR fertility worry. 2.3 Studies were included if they (1) were published in peer-reviewed journals in English or Chinese; (2) applied quantitative, qualitative, or mixed methods; (3) answered the review questions; and (4) included study samples composed of at least 50% participants aged 15–39 years who had completed cancer-related treatment. Studies were excluded if they (1) focused mainly on fertility, fertility preservation, or fertility-related distress; (2) reported overall outcomes of AYAs and older adults such that the results for AYAs could not be extracted and described; (3) included study samples composed of over 50% survivors older than 39 years; or (4) were case studies, review articles, correspondence, gray literature, or conference abstracts. 2.4 After removing duplicates, titles were filtered using the conservative method described by Higgins and Green. 2.5 Two researchers (Sun Q and Xiao P) extracted data from eligible studies using electronic forms to ensure accuracy. The following information was extracted: study authors, publication year, nationality of the author, study design, type of cancer and age, measurements, and main outcomes. The results were analyzed with descriptive statistics and thematic analysis.
Although the PRISMA Extension for Scoping Reviews does not require a bias analysis across studies, we acknowledge the risk of selectively reporting themes, particularly in qualitative studies. 3 Database searching yielded 669 articles, from which 374 duplicate articles were removed. Of the remaining 295 articles, 219 were excluded after screening the abstract and title. After screening the full text of 76 articles, 22 did not satisfy the inclusion criteria and were excluded for the following reasons: focus on fertility-related information (n = 4); focus on the reproductive concerns after cancer scale (n = 9); wrong age range (n = 7); French language (n = 1); and inability to access the full text (n = 1). The flowchart of the study selection process is shown in Figure . Most studies used a cross-sectional (n = 33) or observational cohort (n = 4) design; only one study used a mixed-methods design, and four studies reported experimental data. 3.1 This scoping review included studies from several countries, including 33.3% from China (n = 18) and 5.6% from the United Kingdom (n = 3). In this review, AYA-aged survivors included childhood and AYA cancer survivors: 37 studies included only survivors diagnosed in adolescence and/or young adulthood, eight studies included participants diagnosed across childhood and young adulthood, and one study focused on survivors diagnosed at 15 years of age or younger. Forty studies included only females, three included only males, and 11 included both male and female cancer survivors. Most studies focused on survivors of various cancers (n = 19) or on breast cancer survivors (n = 20).
As shown in Table , three themes emerged as important concepts related to reproductive concerns: (1) reproductive concerns, encompassing their prevalence and other aspects; (2) influencing factors; and (3) interventions for reproductive concerns. 3.2 Various measures were used to examine reproductive concerns, which were reported as a primary outcome in 40 articles (Supplementary File). Measurements of reproductive concerns included the Reproductive Concerns Scale (RCS). 3.3 Due to the diversity of measurement tools used across the studies, it is hard to determine the overall prevalence of reproductive concerns among AYA cancer survivors. In studies using the RCAC, 58% to 61% of AYA cancer survivors reported high concerns on at least one dimension, and 28% to 44% of the survivors reported moderate to high overall reproductive concerns, with total scores for reproductive concerns ranging from 56.45 ± 8.18 to 65.73 ± 12.36; no group differences in RCS mean scores were found between cancer survivors and infertile women without cancer. 3.4 Additional concepts about reproductive concerns were also reported in these studies, from which three themes were extracted: worry and remorse, desire for communication and support, and demand for reproductive knowledge. 3.5 Factors associated with fertility concerns were identified in these studies and can be divided into three categories (Table ), including 131I therapy, being nulliparous at diagnosis, and reporting treatment-related ovarian damage. 3.6 Wang et al. 4 We sought to explore the available literature on reproductive concerns in AYA cancer survivors, who ranked fertility among their top three life goals. This scoping review found that reproductive concerns occurred universally across areas of China, America, and some European nations, and that related psychological interventions were scarce.
However, research is lacking for some economically less developed regions, such as India and areas of Africa, which have high cancer rates and high reproductive rates. Our findings showed that the prevalence of reproductive concerns in AYA cancer survivors ranged from 44% to 86% and that 28% to 44% of survivors experienced moderate to severe concerns. Although cancer patients do have a risk for impaired fertility, Bartolo et al. Child health, personal health, and fertility potential were the top three of the six dimensions of fertility concerns identified by AYA cancer patients in both quantitative and qualitative studies. In comparison, dimensions such as acceptance and achieving pregnancy/becoming pregnant received less attention from both male and female patients. These results suggest that AYA cancer patients have a high perception of the risk of infertility and overemphasize the health problems that childbirth after cancer may bring to themselves and their children; in contrast, they tend to ignore their own emotional responses to infertility risks and appropriate remedies. These two aspects keep patients trapped in a vicious cycle of persistent negative emotions and coping behaviors that ultimately make their concerns circular and compounding. In fact, reproductive concerns, like fears of cancer recurrence, are in essence a normal concern arising after cancer diagnosis and treatment, with high levels of concern associated with impaired quality of life. Most published work on the reproductive concerns of AYA cancer survivors consists of cross-sectional studies. These studies provide further information on the prevalence of and factors associated with reproductive concerns in different areas of the world; in addition, they serve as a basis for population-oriented interventions on reproductive concerns targeting factors such as marriage, low education level, desire to have biological children, breast cancer, and poor family function.
However, there have been only four clinical intervention studies on this topic, all conducted in China. Fertility concerns have previously been partly reviewed within a systematic review of fertility-related psychological distress. Above all, the reproductive concerns of AYA cancer survivors are a psychological burden that urgently needs the attention of reproductive specialists, oncologists, psychologists, and nurses. However, the limited number of studies to date have had small samples and were not designed to explore the psychological processes involved or to construct a theoretical framework of reproductive concerns. Further clinical intervention trials therefore need to be conducted in cancer survivorship, centering on alleviating survivors' psychological distress or improving treatment outcomes, and further research into fertility consultations with AYA cancer patients in real clinical settings is also warranted. None. Sun and Xiao devised this study; Sun, Xie, Duan, Cheng, Luo, Zhou, and Liu searched the literature; Sun and Xie wrote the first draft of the paper; Xiao and Andy revised the paper. All authors reviewed the final paper. Neither informed consent to participate nor ethical approval was required. Table S1.
The patient's initial hospitalization records were used as predictors. The total dataset was randomly split into a training set and a testing set in a 2:1 ratio, stratified by the four maximum severity groups. Predictive models were developed using the training set and evaluated using the testing set. Two approaches were performed: using the four groups based on the original severity levels, and using two groups after regrouping the four severity levels into two. Three variable selection methods, including randomForestSRC, were performed. As AI/ML algorithms for the 4-group classification, GUIDE and the proportional odds model were used; for binary classification, we used five AI/ML algorithms, including a deep neural network and GUIDE. Of the four maximum severity groups, the moderate group had the highest percentage. Simple analyses of linear trends identified 25 statistically significant predictors contributing to exacerbation of maximum severity. As a result of model development, the following three models based on binary classification showed high predictive performance: (1) Mild vs. Above Moderate, (2) Below Moderate vs. Above Severe, and (3) Below Severe vs. Critical, with AUC values of 0.883, 0.879, and 0.887, respectively. Based on the results for each of the three predictive models, we developed web-based nomograms for clinical use (http://statgen.snu.ac.kr/software/nomogramDaeguCovid/). We successfully developed web-based nomograms predicting the maximum severity; these nomograms are expected to help plan effective treatment for each patient in the clinical field. The coronavirus disease 2019 (COVID-19) pandemic is a rapidly evolving global emergency that continues to strain healthcare systems, motivating AI/ML algorithms such as GUIDE and deep neural networks. This is a multicenter retrospective cohort study of polymerase chain reaction-confirmed COVID-19 patients admitted to 10 hospitals in Daegu, Korea.
A total of 46 variables were used in this study; excluding the outcome variable, 45 variables served as predictors. Records for the 45 predictors, with an average missing rate of 16% (IQR: 6–19%), were collected from each patient on the first day of admission, and the original data were used as is. This study was approved by the institutional review board of Kyungpook National University Hospital (KNUH 2020-03-044). Excluding those who died on the first day of admission, 2,254 of 2,263 patients were included. To define the outcome, a disease severity variable was used; disease severity was divided into four groups: mild, moderate, severe, and critical. A p-value < 0.05 was considered significant. Of the 45 predictors, 2 demographic variables, 4 vital signs, and 12 laboratory findings were continuous variables; note that the body temperature among the four vital-sign predictors is the body temperature measured only on the first day of admission. Based on the opinions of clinicians for practical use in the clinical field, an optimal cutoff was selected for dichotomizing each continuous predictor; to this end, maximally selected rank statistics were used. The overall workflow is shown in Figure . For the binary classification, we combined the four maximum severity groups into two groups in three ways: (1) Mild vs. Above Moderate, (2) Below Moderate vs. Above Severe, and (3) Below Severe vs. Critical. Above Moderate refers to a group that combines moderate, severe, and critical; Below Moderate combines mild and moderate; Above Severe combines severe and critical; and Below Severe combines mild, moderate, and severe.
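The idea of choosing an optimal dichotomizing cutoff can be sketched as follows. Note this is a simplification: the study uses maximally selected rank statistics, whereas this toy version scans quantile-based candidate cutoffs and maximizes a standardized two-proportion statistic for a binary outcome; all data are simulated, and the function name and grid settings are illustrative assumptions.

```python
import numpy as np

def max_selected_cutoff(x, y, n_grid=50, min_frac=0.1):
    """Scan candidate cutoffs on predictor x and return the one that
    maximizes a standardized two-proportion statistic for binary outcome y
    (a simplified stand-in for maximally selected rank statistics)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, p = len(x), y.mean()
    cand = np.quantile(x, np.linspace(min_frac, 1 - min_frac, n_grid))
    best_cut, best_z = cand[0], -np.inf
    for c in cand:
        hi = x >= c
        n1 = hi.sum()
        if n1 == 0 or n1 == n:
            continue                      # skip degenerate splits
        se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / (n - n1)))
        z = abs(y[hi].mean() - y[~hi].mean()) / se
        if z > best_z:
            best_cut, best_z = c, z
    return best_cut

# Simulated data: severity switches on above age 65 in this toy example,
# so the recovered cutoff should land near 65.
rng = np.random.default_rng(1)
age = rng.uniform(20, 90, 400)
severe = (age > 65).astype(int)
print(round(max_selected_cutoff(age, severe), 1))
```

Restricting candidates to interior quantiles (min_frac) mirrors the usual practice of avoiding cutoffs in the extreme tails, where group sizes are too small for a stable statistic.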
For each outcome with a binary group, multiple predictive markers were selected using area under the receiver operating characteristic curve (AUC)-based stepwise selection, the LASSO, and randomForestSRC. For the 4-group classification, the proportional odds model and GUIDE were used. In the proportional odds model, the cumulative logits of the outcome Y with four ordinal categories are modeled with an intercept αj corresponding to the jth category and a common vector of coefficients β; the proportionality assumption was confirmed through the likelihood ratio test comparing the proportional odds model with the cumulative logit model. Accuracy, balanced accuracy, precision, and F1-score were used as evaluation measures, and a parsimonious model, that is, a simple model with high predictive ability for each outcome, was considered the final predictive model. For the binary classification, we developed three predictive models: (1) Mild vs. Above Moderate, (2) Below Moderate vs. Above Severe, and (3) Below Severe vs. Critical. During the marker selection process, 5-fold cross-validation (CV) was performed. We considered five AI/ML algorithms: LR, RF, SVM, DNN, and GUIDE; for RF, SVM, and DNN, we tuned hyperparameters. The maximum severity was mild (n = 548; 24.3%), moderate, severe, or critical (p-value = 2.8E-98; CA test). In terms of sex, the maximum severity was more severe for men than for women. Body mass index (BMI) and two vital signs were statistically significant predictors showing a linear trend with maximum severity. Among the other predictors, three initial clinical findings (fatigue, shortness of breath, and altered consciousness), 8 comorbidities, chest X-ray infiltration, and 8 laboratory findings showed a linear trend with maximum severity. Thus, we developed a predictive model using the proportional odds model and evaluated its performance on the testing data.
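A minimal sketch of one piece of this pipeline, 5-fold cross-validated AUC for a binary severity outcome with logistic regression, is shown below on simulated data. The study's actual predictors, the GUIDE algorithm, and its tuning are not reproduced here; scikit-learn availability and the toy data dimensions are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated stand-in for a binary outcome such as Mild vs. Above Moderate;
# the real study used admission-day records as predictors.
X, y = make_classification(n_samples=600, n_features=8, n_informative=4,
                           random_state=0)

# 5-fold cross-validated AUC, as used during marker selection.
auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc")
print(f"5-fold CV AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```

In an AUC-based stepwise search, this CV score would be recomputed for each candidate marker set, keeping the addition (or deletion) that most improves the mean AUC.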
In the case of the proportional odds model, when evaluating performance, the probability of being in a specific category j was calculated as the difference between the cumulative probabilities corresponding to j and j − 1; each sample of the testing data was then classified into the group with the highest probability. The evaluation results of the proportional odds model are shown in Table . First, we developed predictive models using the four ordinal groups, which triage COVID-19 patients more informatively. To select multiple markers associated with the maximum severity, we used the proportional odds model and the GUIDE model. As a result of AIC-based stepwise selection, eight predictors were selected: age, SBP, cough, sore throat, shortness of breath, hypertension, ALT, and lymphocyte count. Based on these eight predictors, the proportional odds assumption held. Next, predictive models were developed for the three binary outcomes [(1) Mild vs. Above Moderate, (2) Below Moderate vs. Above Severe, and (3) Below Severe vs. Critical]. For each of the three binary outcomes, variable selection was performed using the AUC-based stepwise, LASSO, and randomForestSRC methods, and predictive models were developed using the selected variables with five AI/ML algorithms: LR, RF, SVM, DNN, and GUIDE. For the (1) Mild vs. Above Moderate model, three predictors (chest X-ray infiltration, body temperature, and age) were finally selected for the final model, which showed good performance with an AUC of 0.882, balanced accuracy of 0.811, and F1-score of 0.874 for GUIDE. In particular, the predictive performance of GUIDE for binary classification is much better than that of the model for 4-group classification when converting the 4 × 4 confusion matrix of the 4-group classification to a 2 × 2 version (Figure ). For the (2) Below Moderate vs. Above Severe model, 5 predictors were finally selected for the final model: age, shortness of breath, chest X-ray infiltration, CRP, and AST.
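The conversion from cumulative to per-category probabilities described above amounts to differencing adjacent cumulative values; a minimal sketch with hypothetical cumulative probabilities follows (the numbers are invented, not model output).

```python
import numpy as np

# Hypothetical cumulative probabilities P(Y <= j) from a proportional odds
# model for the four ordered groups (mild, moderate, severe, critical);
# the last entry is always 1.
cum = np.array([0.30, 0.70, 0.90, 1.00])

# P(Y = j) = P(Y <= j) - P(Y <= j-1); classify into the most probable group.
cat_prob = np.diff(np.concatenate(([0.0], cum)))
predicted = int(cat_prob.argmax())   # index 1 -> moderate in this example

print(cat_prob, predicted)
```

Because the per-category probabilities are differences of a monotone sequence ending at 1, they are nonnegative and sum to 1 by construction.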
Based on LR, which showed the highest performance, all predictors had positive effects on the Above Severe group. For the (3) Below Severe vs. Critical model, 6 predictors were finally selected for the final model: CRP, respiration rate, chronic kidney disease, AST, age, and diabetes. As with the Below Moderate vs. Above Severe model, based on LR with the highest AUC value, all predictors showed positive effects on the Critical group. The resulting nomogram is available at http://statgen.snu.ac.kr/software/nomogramDaeguCovid/ and is expected to help plan effective treatment for each patient in a clinical setting. As the COVID-19 pandemic continues, the importance of proper preparation and distribution of medical resources at an early stage is growing; early prediction of the high-risk group for severe COVID-19 pneumonia is important because it can reduce mortality by providing timely treatment to critically ill patients such as the elderly41. Most of the predictors used to develop the nomogram were consistent with previously reported results in the literature: age, the common predictor of the three models used in the nomogram, is known to be a major risk factor for clinical severity, as is chest X-ray infiltration. Our study showed results similar to those of a large retrospective cohort study conducted in the United States, which used 64 input variables including vital signs, various laboratory findings, and comorbidities. As that study found that age, male sex, and liver disease were associated with higher clinical severity, the Below Severe vs. Critical model in this study achieved high predictive performance with clinical parameters including age, male sex, and elevated AST relating to liver disease. In the previous study, ferritin and d-dimer were used as input variables, but in our study these cytokine-storm-syndrome-related blood tests occurring in severe COVID-19 were not included.
However, a model with high predictive performance was presented without using these laboratory findings, which promises convenience when predicting disease severity in clinical situations. Our study demonstrated that the predictive model has the potential to predict the maximum disease severity of patients with COVID-19 with high accuracy and to help healthcare systems in planning surge medical capacity for COVID-19, especially in situations where medical resources are limited. Most of the existing methods have focused on classifying two groups, such as mild and severe patients. However, this study has some limitations. Firstly, we could not evaluate the impact of COVID-19 treatment, new COVID-19 variants, and vaccination status on the clinical severity course, because this study was conducted in early COVID-19 pandemic patient groups. Secondly, full therapeutic options, such as remdesivir, tocilizumab, baricitinib (Janus kinase inhibitor), and anti-SARS-CoV-2 monoclonal antibodies, were not available. Thirdly, laboratory findings related to the cytokine storm syndrome occurring in severe COVID-19, such as ferritin, interleukin 6 (IL-6), and d-dimer, were not included. Ferritin (a macrophage activation indicator) and IL-6 (a T lymphocyte activation marker) are used to raise suspicion of cytokine storm syndrome in severe COVID-19 exacerbation. In conclusion, three predictive models were developed to predict the maximum severity during hospitalization based on the initial hospitalization records. The five AI/ML algorithms, including DNN and GUIDE, were used for model development. Each of the three predictive models showed excellent predictive performance using a few predictors. Representatively, the Mild vs. Above Moderate model showed a predictive performance of 0.882 for AUC using three clinicopathologic predictors. Based on these three predictive models, we successfully developed web-based nomograms useful in the clinical field.
These nomograms are expected to help plan effective and timely treatment for each patient. The datasets generated for this study are available on request to the corresponding author. The studies involving human participants were reviewed and approved by the Institutional Review Board of Kyungpook National University Hospital (KNUH 2020-03-044). The patients/participants provided their written informed consent to participate in this study. SH and TP led the overall study and conceived the model. YK and S-WK contributed to the data collection. SH and CL contributed to the data analysis. CL developed the nomogram. SH, SL, BO, MM, S-WK, and TP contributed to data interpretation. SH and YK wrote the manuscript. TP and S-WK supervised the project. All authors read, edited, and approved the final manuscript. This research was supported by the Bio and Medical Technology Development Program of the National Research Foundation (NRF) funded by the Korean government (MSIT) (No. 2021M3E5E3081425). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "Research is limited on the use of technology to help individuals who have a mismatch between physiological fall risk (Body) and perceived fall risk (Mind) and are unable to access traditional fall interventions. We examined the feasibility and acceptability of a technology-based body-mind intervention in low-income older adults during the COVID-19 pandemic and explored barriers to access and adopting the technology.
Data were collected using a survey, balance test, accelerometer-based physical activity (PA), and semi-structured interviews with twenty participants who engaged in an 8-week intervention at a low-income setting in Florida. We found that: 1) the technology-based intervention is feasible, 2) participants tend to accept technology to alter their perceptions of fall risk and balance capacity, and 3) activities tailored to each component are not a one-size-fits-all approach. There were no statistically significant changes in sedentary time, light PA, or moderate to vigorous PA between pre- and post-intervention."} {"text": "According to guidelines from the European Association for the Study of the Liver (EASL) and American Association for the Study of Liver Diseases (AASLD), abdominal ultrasound (US) is recommended for surveillance of hepatocellular carcinoma (HCC) in high-risk patients. However, US is limited as a surveillance modality for various reasons. Magnetic resonance imaging (MRI) is generally considered a better modality for detection of early HCC, but too elaborate in a surveillance setting. Consequently, abbreviated MRI (AMRI) protocols are investigated for surveillance purposes. The aim of our study was to evaluate the potential of non-contrast-enhanced AMRI (NC-AMRI) for surveillance of HCC, using multiple readers to investigate inter-observer agreement and the added value of double reading. We found that NC-AMRI presents a valuable screening tool for HCC and that double reading improves the sensitivity and specificity of HCC detection. Purpose: To evaluate NC-AMRI for the detection of HCC in high-risk patients. Methods: Patients who underwent yearly contrast-enhanced MRI of the liver were included retrospectively.
For all patients, the sequences that constitute the NC-AMRI protocol, namely diffusion-weighted imaging (DWI), T2-weighted (T2W) imaging with fat saturation, and T1-weighted (T1W) in-phase and opposed-phase imaging, were extracted, anonymized, uploaded to a separate research server, and reviewed independently by three radiologists with different levels of experience. Readers I and III held a mutual training session. Levels of suspicion of HCC per patient were compared, and the sensitivity, specificity, and area under the curve (AUC, using the Mann\u2013Whitney U test) were calculated. The reference standard was a final diagnosis based on full liver MRI and clinical follow-up information. Results: Two-hundred-and-fifteen patients were included, 36 (16.7%) had HCC and 179 (83.3%) did not. The level of agreement between readers was reasonable to good and concordant with the level of expertise and participation in a mutual training session. Receiver operating characteristics (ROC) analysis showed relatively high AUC values (range 0.89\u20130.94). Double reading showed increased sensitivity of 97.2% and specificity of 87.2% compared with individual results. Only one HCC (2.8%) was missed by all readers. Conclusion: NC-AMRI presents a good potential surveillance imaging tool for the detection of HCC in high-risk patients. The best results are achieved with two observers after a mutual training session. HCC is the most frequent primary tumor of the liver and the third most common cause of cancer-related deaths annually worldwide. Current surveillance guidelines of both the EASL and AASLD recommend bi-annual US for surveillance [5]. MRI and computed tomography (CT) are considered superior to US for detection and diagnosis of HCC, particularly in patients clinically considered to be unsuited for US owing to obesity, liver steatosis, fibrosis, or cirrhosis [9].
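The AUC computed via the Mann\u2013Whitney U test, as mentioned above, exploits the equivalence AUC = U / (n_pos \u00d7 n_neg). A minimal sketch of that equivalence follows (the per-patient suspicion scores are hypothetical, not the study data):

```python
# The AUC of ordinal suspicion scores equals the Mann-Whitney U
# statistic normalised by n_pos * n_neg: the probability that a
# randomly chosen HCC case receives a higher suspicion score than
# a randomly chosen non-HCC case (ties count one half).

def auc_mann_whitney(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical per-patient suspicion levels (higher = more suspicious):
hcc = [4, 5, 3, 5]        # patients with HCC
no_hcc = [1, 2, 2, 3, 1]  # patients without HCC
print(round(auc_mann_whitney(hcc, no_hcc), 3))  # 0.975
```

This quadratic-time version is only for illustration; statistical packages compute the same quantity from rank sums.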
Recently, different AMRI protocols have been proposed for surveillance purposes with promising results [14]. Efforts to further improve screening results, nowadays common practice in breast cancer screening, for example, have to the best of our knowledge not been explored in HCC screening. Potential improvements are supplemental training of readers and double-reading [17]. The purpose of this retrospective study was to evaluate NC-AMRI for HCC detection in high-risk patients who underwent yearly full MRI for surveillance of HCC. The medical ethical committee of our institution granted permission for this retrospective study and informed consent was waived, as the study was performed with anonymized data and in accordance with the Central Committee on Research involving Human Subjects. Consecutive patients who underwent MRI of the liver for screening of HCC in a surveillance program between January 2010 and January 2019 were reviewed. Patients received a yearly full MRI protocol owing to failed surveillance with abdominal US, mostly because of fatty infiltration of the liver, advanced liver cirrhosis, or obesity. For inclusion, patients must have received at least two full MRI examinations. In patients without HCC, the second-to-last MRI was included for further analysis. The reason for this was that the last MRI was considered necessary as a reference to ensure that the patient was truly HCC na\u00efve. In the event of HCC, the MRI examination with first detection of HCC was included for analysis. Patients\u2019 demographics were retrieved from medical records. For each patient, gender, age, and presence of liver disease with underlying cause were registered. The number of patients with and without HCC was recorded. The Li-RADS classification of focal liver lesions per patient was documented.
The Child\u2013Pugh classification, the clinical stage of patients with HCC according to the Barcelona Clinic Liver Cancer system (BCLC), and the treatment were noted. The MRI examination for all patients was performed on a 1.5 Tesla system with use of a dedicated 8\u201316-channel abdominal coil. This protocol included the following sequences: T2W fast spin echo, in axial and coronal plane; axial T1W gradient-echo sequences in-phase and opposed-phase; axial DWI with at least three b-values; axial T2W imaging with fat saturation; and T1W dynamic CE series, with the arterial phase acquired using bolus triggering, repeated at least four times after injection of 7.5 cc gadobutrol 1 mmol/mL at a rate of 2 mL/s and a saline flush of 30 mL, followed by delayed coronal 3D T1W images. Total imaging time, including localizer and calibration, was around 27 min. Details of these sequences are shown in the corresponding table. Our NC-AMRI protocol consists of the above-mentioned axial T1W in-phase and opposed-phase, DWI, and T2W with fat-saturation sequences, with a total acquisition time of 12.5 min. Only the NC-AMRI sequences per MRI examination of the study population were extracted, anonymized, and uploaded onto a separate research server. An open-source imaging informatics platform was used. Three readers with different levels of experience evaluated the NC-AMRI sequences of each patient separately and were blinded to the remaining sequences of the full MRI examination, including all previous imaging studies and clinical information. Readers I and II were abdominal radiologists with fifteen and twelve years, respectively, of professional experience in liver imaging at a tertiary center for hepatobiliary diseases. Reader III was a radiologist with six years of professional experience in general and abdominal radiology at a middle-sized primary hospital. Experienced reader I and less-experienced reader III held a joint training session prior to scoring.
Experienced reader II purposefully did not participate in this joint training session. The joint training session was conducted by reviewing ten patients with liver cirrhosis from a teaching file collected by experienced reader I, including five cases with HCC as well as five benign and challenging non-HCC cases such as confluent fibrosis. None of these teaching cases were included in this study. T2W imaging with fat saturation and DWI (restricted diffusion) typically depict HCC as a lesion with a high signal intensity (SI), while T1W in-phase and opposed-phase imaging may be used to confirm the presence of the lesion and to detect intralesional fatty deposits, which are oftentimes seen in early HCC or well-differentiated HCC [20]. The readers considered image quality, presence of focal lesions, size, segment location, visibility on the different sequences, and conclusion of a benign observation or possible HCC. Data were registered, digitally secured, and stored using a standardized anonymized and coded clinical reporting form in the online clinical software program OpenClinica. The reference standard for HCC was based on the corresponding full MRI protocol with confirmation by the multidisciplinary liver tumor board (MDTB). The institutional MDTB comprises specialized radiologists, hepatologists, surgeons, radiotherapists, and oncologists. The reference standard for benign lesions, including technical artifacts, was based on follow-up full MRI reports. The final diagnosis of all focal lesions (benign and HCC) was registered with the Li-RADS v2018 classification. p-values of <0.05 were considered statistically significant. Descriptive statistics were used to describe the study population. The primary analysis was patient-based. The difference between categorical variables was presented by numbers and percentages and tested with the Fisher exact test. The conclusiveness of the findings from each reader was presented as numbers and percentages.
For further analysis of readers\u2019 scoring data, the qualification of confident, probable, or possible HCC was considered positive for HCC. Sensitivity, specificity, ROC/AUC value, positive predictive value (PPV), negative predictive value (NPV), and accuracy of NC-AMRI were calculated using SPSS software for each reader separately and for all readers together using the majority of votes. An AUC of 1.0 indicates a perfect model, whereas 0.5 indicates a model no better than chance. Generally, AUC > 0.7 indicates a good model. After completion of data analyses, the original imaging data and clinical information were made available for all three readers, for selecting illustrative cases and figures for this publication. A total of 240 consecutive patients were eligible for inclusion. Twenty-five patients were excluded because of substandard quality MRI: these patients lacked one sequence of our NC-AMRI protocol (4 patients) or had excessive motion artefacts on the NC-AMRI sequences (21 patients). Minor and moderate motion artefacts were accepted. The remaining 215 patients, 149 (69.3%) male and 66 (30.7%) female with a mean age of 56 years (range 19\u201381 years), were included for final analysis. The study population consisted of 179 (83.3%) HCC-na\u00efve patients and 36 (16.7%) patients with HCC. Most patients in the HCC-na\u00efve subgroup had cirrhosis, followed by non-cirrhotic chronic hepatitis B and non-cirrhotic hepatitis C. In the HCC subgroup, most patients had cirrhosis and only one patient (2.8%) had chronic hepatitis B. Twenty-eight patients with HCC had Li-RADS 5 lesions (77.8%) and 8 patients had Li-RADS 4 lesions (22.2%). HCC-na\u00efve patients had no focal lesions in 55 patients (30.7%), Li-RADS 1 and 2 lesions in 95 patients (53.1%), and Li-RADS 3 lesions in 29 patients (16.2%). The mean HCC lesion size was 31 mm, including seven patients with more than one HCC lesion.
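The per-reader diagnostic metrics listed above all derive from a 2x2 confusion matrix of per-patient calls against the reference standard. A minimal sketch of the computations (the counts below are hypothetical, not a specific reader from the study):

```python
# Sensitivity, specificity, PPV, NPV and accuracy from the counts
# of a 2x2 confusion matrix (per-patient HCC calls versus the
# reference standard).

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # detected HCC / all HCC
        "specificity": tn / (tn + fp),   # correct negatives / all non-HCC
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical reader: 35 of 36 HCC detected, 16 false positives
# among 179 HCC-naive patients.
m = diagnostic_metrics(tp=35, fp=16, fn=1, tn=163)
print(round(m["sensitivity"], 3))  # 0.972
print(round(m["specificity"], 3))  # 0.911
```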
Readers I and III, who held a training session prior to scoring, had the highest per-patient sensitivity of 97.2% (35/36) and 91.7% (33/36), respectively, as compared with reader II, with 80.6% (29/36) sensitivity. Conversely, reader II had a much greater specificity of 91.1% (163/179) than reader I with 82.1% (147/179) and reader III with 72.1% (129/179). Simulations for double-reading showed a good interclass correlation coefficient for all readers. Using the majority vote, sensitivity and specificity increase to 97.2% (35/36) and 87.2% (156/179), respectively, with NPV ranging from 95.9% to 99.4%. HCC was correctly detected and interpreted by all three readers in 61.1% of patients (22/36). Area under the curve (AUC): reader I 0.94 (95% CI 0.90\u20130.99), reader II 0.88 (95% CI 0.81\u20130.96), and reader III 0.93 (95% CI 0.89\u20130.97). In HCC-na\u00efve cases, the diagnosis was correct in 87.2% (156/179) using the majority vote. All three readers correctly scored 65.4% (117/179) of the patients as negative for HCC. In about one-eighth of the patients with benign lesions, two readers qualified a benign lesion as positive for HCC, while in about one-fifth of the patients with benign lesions, one reader scored a benign lesion as positive for HCC. Our retrospective study based on a simulated NC-AMRI liver protocol in a surveillance population shows promising sensitivities (range 80\u201397%) and specificities (range 72\u201391%) for both highly experienced and less experienced abdominal radiologists. Although we did not perform a direct comparison with US in our study population, the results hold promise that NC-AMRI may be a substantial improvement over US (reported sensitivity of 47%) for the detection of lesions that resemble HCC and warrant further evaluation with full MRI for confirmative final diagnosis.
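The majority-vote pooling of the three readers' calls can be sketched as follows (the reader calls below are hypothetical, not the study data):

```python
# Majority vote over three readers' binary HCC calls: a patient is
# flagged positive when at least two of the three readers call the
# case positive.

def majority_vote(calls):
    """calls: (reader1, reader2, reader3) booleans for one patient."""
    return sum(calls) >= 2

def pooled_sensitivity(per_patient_calls):
    """per_patient_calls: reader-call triples for HCC-positive patients."""
    flagged = sum(majority_vote(c) for c in per_patient_calls)
    return flagged / len(per_patient_calls)

# Four hypothetical HCC patients; readers disagree on three of them.
hcc_calls = [
    (True, True, True),    # unanimous detection
    (True, False, True),   # majority detection
    (True, True, False),   # majority detection
    (False, False, True),  # only one reader -> missed by majority
]
print(pooled_sensitivity(hcc_calls))  # 0.75
```

The same pooling applied to the benign patients yields the majority-vote specificity; the design trades one reader's isolated false positives against isolated misses.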
Because of disappointing reports on US surveillance for HCC in high-risk patients, surveillance with AMRI has gained much attention in recent years [27,28,29]. In our study, the readers had no access to prior examinations, which reflects a (most pessimistic) screening situation in which all patients are considered newly enrolled. For this reason, it is expected that the relatively high false-positive detection rate of 13% will likewise be lower in daily practice, when the readers have access to prior studies for a fair comparison and previously established benign lesions such as hemangiomas can be recognized as such. It is possible that early HCC, which is only clearly seen with full MRI, may be missed using NC-AMRI. In our study, there was one such lesion that was consequently missed by all three readers, as illustrated. The further improved sensitivity (97%) and specificity (87%) of NC-AMRI based on the majority vote suggest that double-reading may be preferable over single-reading in HCC imaging surveillance programs. These improvements appear in line with increases in sensitivity of more than 10% when double-reading was applied in an imaging surveillance program for breast cancer [37]. Inter-observer agreement was substantial for the experienced readers and, in accordance with a recent systematic review and meta-analysis, an inter-observer variability of 0.72 (95% CI 0.62\u20130.82) for NC-AMRI for the detection of HCC was determined. Another point to consider is that all patients were imaged using a 1.5T MRI system, and no use was made of 3T systems that might provide even better lesion detection owing to a higher signal-to-noise ratio. In our opinion, 1.5T is more widely available and perhaps more robust than 3T, as it is less susceptible to motion artifacts that might obscure parts of the liver, especially the sub-capsular regions and lateral left liver lobe. Our study has some limitations.
Foremost are the retrospective nature and simulated analysis of our NC-AMRI. However, our results are in line with other publications on NC-AMRI protocols for HCC detection in a diagnostic cohort. Our study demonstrates that NC-AMRI of the liver may present a valuable modality for HCC surveillance in high-risk patients. Diagnostic accuracy may be further improved with double-reading. In addition, implementing a mutual training session is promising, and its exact role needs further evaluation."} {"text": "The aim of this study was to compare the topographical, chemical and osseointegration characteristics of sandblasting and acid-etching (SLA) surfaces and dental implants treated by boron compounds. Dental implants (3.5 mm in diameter and 8 mm in length) with the experimental surfaces (n = 96) were inserted into the tibias of six sheep, which were left to heal for 3 and 7 weeks. Histologic, histomorphometric (bone\u2013implant contact (BIC%)) and mechanical (removal torque value (RTV)) tests were performed. The boron-coated surface (BC group) was smoother (Rz: 4.51 \u03bcm \u00b1 0.13) than the SLA (5.86 \u03bcm \u00b1 0.80) and the SLA-B (5.75 \u03bcm \u00b1 0.64) groups (p = 0.033). After 3 weeks, the highest mean RTV was found in the SLA group (37 N/cm \u00b1 2.87), and the difference compared with the BC group (30 N/cm \u00b1 2.60) was statistically significant (p = 0.004). After 7 weeks, the mean RTV was >80 N/cm in all groups; the highest was measured in the H3BO3-treated (BS) group (89 N/cm \u00b1 1.53) (p < 0.0001). No statistically significant differences were found in the BIC%s during both healing periods between the groups. H3BO3 seems to be a promising medium for dental implant osseointegration.
Titanium (Ti) disks (n = 20) were modified using boron (B) and boric acid (H3BO3). Following the discovery of osseointegration, numerous types of dental implant surface treatment and modification modalities have been introduced [2,3]. Boron compounds (H3BO3, BN, CaB, TiB and NaB) are used for treating recurrent or chronic infections. Boron (B) is a bioactive trace element widespread in nature that plays a role in bone health and metabolism [11,12]. In a previous in vitro investigation, B- and H3BO3-treated Ti surfaces demonstrated improved proliferation and viability of human osteoblast cells and diminished the adherence of pathogenic bacteria onto the corresponding substrates. This study was performed to evaluate the topographical, chemical and osseointegration characteristics of Ti surfaces and dental implants treated with B compounds. Bone-to-implant contact (BIC%) and removal torque value (RTV) were designated as the primary outcomes, and relevant data were obtained from similar previous studies [19]. Twenty Ti disks (10 mm in diameter and 3 mm in height) and 96 Ti dental implants (3.5 mm in diameter and 8 mm in length) were machined by a commercial manufacturer. Four different surfaces were prepared: (a) large grit sandblasted with aluminum oxide (Al2O3) particles, followed by acid-etching with hydrochloric and sulfuric acid (HCl and H2SO4) (SLA surface group); (b) large grit (250 \u03bcm) sandblasted with Al2O3 particles and H3BO3 particles (1\u20135 \u03bcm), followed by acid-etching with hydrochloric and sulfuric acid (HCl and H2SO4) (SLA-B surface group); (c) SLA-B surfaces coated with 99.5% amorphous boron powder (B) (<1 \u03bcm) by heating at 900 \u00b0C for 10 h (BC surface group); and (d) SLA surfaces submerged in H3BO3 saline solution (BS surface group). The BS group implants were taken out of the H3BO3 saline solution at the stage of surgical insertion.
To determine the three-dimensional description of the surfaces, including the texture aspect ratio (Str) (scan size 2 \u00d7 2 \u03bcm2; 40\u00d7 objective with 10\u00d7 optical zoom), a confocal laser scanning microscope was used. Energy-dispersive X-ray spectroscopy and X-ray photoelectron spectroscopy (XPS) analyses were used to determine the chemical composition of the surfaces. Surface morphology was examined using a scanning electron microscope, and the surface roughness was quantitatively evaluated using an atomic force microscope. An allowance of six sheep was granted for this experimental study. All experimental procedures were performed in compliance with the animal research guidelines of the Mehmet Akif Ersoy Experimental Research and Development Center. Six Anatolian-breed sheep were used. All animals were fasted 24 h prior to surgical procedures. The tibia was selected as the experiment site to avoid the risks of infection and early implant loss. A block randomization list was generated with dedicated software, ensuring equal distribution of the surface groups across each tibia and animal for the 96 implants. All surgical procedures were performed under general anesthesia and sterile conditions. Xylazine (Rompun; Bayer, Switzerland) and ketamine (Ketalar; Ketamin HCl, Vancouver, BC, Canada) were used for sedation. General anesthesia was accomplished using an i.v. injection of pentobarbital and maintained with 3\u20134% sevoflurane and 100% oxygen. An incision of approximately 25 cm was made, skin and fascia were incised, respectively, and muscles were dissected. The implants were placed in the proximal tibia referring to the previously established block randomization order. A periodontal probe was used to maintain a 10 mm distance between the implants.
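The block randomization scheme described above can be sketched as follows (a minimal illustration; the group labels follow the study, but the block size and seeding are assumptions, not details reported by the authors):

```python
# Block randomization: within each block of 4 implant sites, every
# surface group (SLA, SLA-B, BC, BS) appears exactly once, giving
# equal group counts per tibia and per animal.
import random

GROUPS = ["SLA", "SLA-B", "BC", "BS"]

def block_randomization_list(n_implants, seed=0):
    assert n_implants % len(GROUPS) == 0
    rng = random.Random(seed)
    order = []
    for _ in range(n_implants // len(GROUPS)):
        block = GROUPS[:]    # one permuted block of all four groups
        rng.shuffle(block)
        order.extend(block)
    return order

order = block_randomization_list(96)
print(len(order))            # 96
print(order.count("BS"))     # 24: equal allocation per group
```

Because every consecutive block of four sites contains all four groups, assigning two blocks per tibia automatically balances groups within each tibia and animal.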
A total of 16 implants (8 implants for each tibia) were inserted into the right and left tibia of each animal. Antibiotics (Zentiva, \u0130stanbul, T\u00fcrkiye) and analgesics (Nobel Drug, \u0130stanbul, Turkey) were administered during postoperative care for 1 week. For the representation of early and late-term healing, the six sheep were separated into two groups. Fluorescence labeling was used for evaluating dynamic bone mineralization and deposition according to the healing schedule, in accordance with the guidelines proposed by van Gaalen et al. Under a high dose of anesthesia, three animals were sacrificed after 3 weeks, and three were sacrificed after 7 weeks. The corresponding tibia region was exposed, and the implants were examined using X-rays. The RTV was measured immediately following the stabilization of the tibia in a dedicated bench clamp. A digital torque meter was used for precise measurement. Reversal torque was enforced, and the maximum torque (N/cm) was registered. After the animals were euthanized, the tibiae of the sheep were removed at the specified time periods for histomorphometric analysis. Sections were prepared with a nondecalcified histologic slicing system. The sections were analyzed using a light microscope to measure the BIC%. All measurements were made using image analysis software. The implant surfaces were analyzed in three adjoining microscopic images. The BIC% was measured at a magnification of 40\u00d7. The calculation was performed by dividing the length of the attached bone by the length of the complete implant surface. All measurements were made by an independent examiner (M.S.T.) on two separate days, and the mean values were recorded as final. A confocal scanning laser microscope (CLSM) was used for fluorescence evaluation. Group-by-time interaction was evaluated using the two-way ANOVA test.
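The BIC% calculation described above, dividing the attached-bone length by the total implant-surface length, can be sketched as follows (the segment lengths below are hypothetical, not measured values from the study):

```python
# BIC% = (length of bone in direct contact with the implant surface)
#        / (total implant surface length) * 100,
# evaluated per microscopic image and then averaged across images.

def bic_percent(attached_lengths_um, total_length_um):
    """attached_lengths_um: bone-contact segment lengths in one image."""
    return 100.0 * sum(attached_lengths_um) / total_length_um

# Hypothetical image: 3 contact segments along a 2000 um profile.
segments = [420.0, 310.0, 270.0]
print(bic_percent(segments, 2000.0))  # 50.0
```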
A p value < 0.05 indicated a statistically significant difference. Data were analyzed using the Statistical Package for the Social Sciences (SPSS) 25.0 software. Descriptive statistics, including mean, standard deviation (SD), median, range of quartiles and 95% confidence interval, were calculated. The distribution of data was evaluated using the Shapiro\u2013Wilk normality test. Homogeneity of variances was evaluated using Levene\u2019s test. Analysis of variance (ANOVA) was used to compare the measurements that fit the normal distribution between the groups and met the assumption of homogeneity of variances. The Tukey test was used for post hoc comparison. The Kruskal\u2013Wallis test was used to compare the measurements that did not fit the normal distribution between the groups. The Mann\u2013Whitney U test was used for pairwise comparisons of the groups, and Bonferroni correction was applied to the p values. The ARRIVE guidelines were referred to while preparing the manuscript. H3BO3 particles were observed on the SLA-B surface. Nanowire-shaped dense crystallized B areas were visible on the BC surface. H3BO3 particles were visible on the surfaces in both the BS and SLA-B groups, distributed homogenously in the SLA-B group but rather disordered in the BS surface group. In the low-magnification SEM imaging (\u00d72000), the SLA surface revealed the typically recognized topographical features. The differences in the Rz values between the groups were statistically significant (p = 0.015). The BC group had significantly lower Rz values (mean: 4.51 \u03bcm \u00b1 0.13) than the SLA (mean: 5.86 \u03bcm \u00b1 0.80) and the SLA-B (mean: 5.75 \u03bcm \u00b1 0.64) groups. The highest mean Str (0.36 \u00b1 0.02) and Sdr% (76.32% \u00b1 4.41) were measured on the SLA surface, and the lowest on the BC surface (Str: 0.19 \u00b1 0.01 and Sdr%: 62.84% \u00b1 3.57).
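The pairwise-comparison scheme above (Mann\u2013Whitney tests followed by Bonferroni correction) reduces, on the correction side, to multiplying each raw p value by the number of comparisons and capping at 1. A minimal sketch (the raw p values below are hypothetical, not the study's results):

```python
# Bonferroni correction for k pairwise comparisons:
# adjusted p = min(1, raw p * k); significance is then judged
# against the usual alpha = 0.05.
from itertools import combinations

GROUPS = ["SLA", "SLA-B", "BC", "BS"]

def bonferroni(raw_pvalues):
    k = len(raw_pvalues)
    return {pair: min(1.0, p * k) for pair, p in raw_pvalues.items()}

# Hypothetical raw Mann-Whitney p values for the 6 group pairs.
pairs = list(combinations(GROUPS, 2))
raw = dict(zip(pairs, [0.004, 0.030, 0.200, 0.015, 0.450, 0.001]))
adjusted = bonferroni(raw)
print(round(adjusted[("SLA", "SLA-B")], 3))  # 0.024 (= 0.004 * 6)
print(adjusted[("SLA", "BS")])               # 1.0 (0.2 * 6 capped at 1)
```

Multiplying by k rather than dividing alpha by k is equivalent and keeps the familiar 0.05 threshold for the adjusted values.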
The differences in the Str and Sdr% values were statistically significant between the groups. The SLA surface revealed significantly higher Str and Sdr% values compared with the remaining groups. The BC group demonstrated a significantly lower Str value than the remaining groups and a lower Sdr% value than the SLA and BS groups. The B, Al, C, Ti, N and O elements were observed on the SLA, SLA-B and BS surfaces in the EDS analysis. Ti was not detected on the BC surface due to a dense coating of B with a nanowire-shaped morphology. The atomic percentage of the B element was 7.6% on the SLA-B and 18.71% on the BC surfaces. The O, C and Ti elements were detectable using XPS on all surfaces (except Ti on the BC surface). Cl was detected on the SLA surface only (2.6%). The B element was found only on the BC surface (14.96%). The highest amount of O was measured on the SLA-B surface (43.74%), while the lowest amount of C was measured on the SLA surface (35.55%). Healing was uneventful in all animals, with no adverse reactions, inflammation or implant loss in any of them. Proper healing of all experimental sites was confirmed by the X-rays. The normal distribution of the mechanical test values was confirmed using the Shapiro\u2013Wilk normality test. All implants achieved primary stability with an approximate mean insertion torque value (ITV) of 40 N/cm, and the differences in ITV were not statistically significant in or between any of the surface groups (p > 0.05). The RTV tests were successfully completed in the designated 48 implants. Two-way ANOVA revealed statistically significant differences for time (p < 0.0001), surface group and surface group \u00d7 time interaction. After 3 weeks of healing, the highest mean RTV was found in the SLA group, and the difference compared with the BC group was statistically significant. After 7 weeks, the BS surface demonstrated the highest mean RTV values (89.46 N/cm \u00b1 1.53), and the differences compared with the remaining groups were statistically significant.
The lowest mean RTV values were measured in the SLA group (80.45 N/cm \u00b1 2.46), with statistically significant differences from the remaining groups. No signs of inflammatory response, foreign body reaction or necrosis were noted in any histologic slices. Active osteoid formation was visible around all implants in all groups during both healing periods. An increased fill of new bone in between the threads was visible in all groups, especially in the 7th week. The process of osseointegration was ongoing during the early healing period (3 weeks), while it was concluding in the late-term healing (7 weeks) sections. Compared with the BC group, the intensity of the early-term fluorochrome staining at the bone\u2013implant interface and in the surrounding bone area appeared to be higher in the SLA, SLA-B and BS groups. Orange and light yellow staining indicating active late-term mineralization was especially discernable in the SLA-B, BC and BS groups. The highest fluorescence intensity in the late-term healing period was observable in the BS surface group. The normality of the BIC% measurements was confirmed using the Shapiro\u2013Wilk normality test (p > 0.05). The range of BIC% was 23.13\u201333.0% during the early healing period and 52.49\u201368.58% during the late-term healing period. The change in BIC% measurements from 3 to 7 weeks was statistically significant. The highest BIC% values were measured in the BS and SLA-B groups (mean: 68.58% \u00b1 11.76). However, the differences in BIC% between the groups were statistically not significant during both healing periods. In this study, the surfaces with distinctive B-based modifications were analyzed and compared with SLA, the surface that has been used widely in modern implantology.
As for the BC surface, a dense B-film formation with nanowire-shaped morphology might have caused a significant drop in the Rz, Sdr% and Str values on the BC surface compared with the others, and particularly with the SLA. The height-descriptive two-dimensional parameter Ra is regarded as a reference when comparing dental implant surfaces. The amount of B detected using EDS and XPS was low, most probably as a result of the poor binding energy of the B element, which complicated the detection of B. This was reported in previous studies that revealed peaks of low magnitude at a level around 187 eV, corresponding to the bonding energy of B and B-oxides at 187–189 eV. Removal torque forces applied in the counterclockwise direction have been used as a tool for the objective quantification of the strength of osseointegration. Furthermore, in clinical implant dentistry, resistance to rotational forces is regarded as critical, especially in implants that are loaded early or immediately. It was apparent that the biologic effect of B on osseointegration seemed to be initiated no earlier than the third week, thereby achieving significantly higher RTV than the SLA surface in the seventh week. This might be a result of the biologic effects of B, including increased osteoblastic activity and angiogenesis. The positive effect of B was also noted in the BIC% measurements. Although the differences were not significant, the highest BIC% values during both healing periods were measured in the BS and SLA-B groups. The fluorescence microscopic observations were also indicative of a higher bone mineral deposition in the BS group for late-term healing. In a study by Witek et al., a lower BIC% (16.44% ± 7.9) was reported for boronized machined implants left to heal in sheep tibia for 3 weeks. Despite an expected increase in the BIC% in accordance with the RTV values, BIC% and RTV did not reveal any correlation in this study.
A similar outcome was reported by Sennerby et al. (1992), who used screw-type implants left to heal in rabbit tibiae for 6 weeks, 3 months and 6 months and found no significant associations with the recorded BIC% at the relevant healing intervals. The amount of compact bone surrounding the titanium fixture was shown to be related to the resistance to reverse torques. Contrary to the positive outcomes of the B-treated surfaces for RTV in the late term, no significant differences were found in the BIC%, despite slightly higher BIC% values in the B-modified groups after 7 weeks. Similar observations were reported in some studies. It should be emphasized that implants in the sheep tibia may not appropriately represent the outcomes in the human jaw bone due to the biologic and topographic differences. H3BO3, as employed in the BS group, seems to be a promising medium for dental implant osseointegration and warrants further investigation to optimize the dose and the method of application onto the blasted Ti surfaces. Within the limits of this study, it was concluded that the presently employed surface modifications via B yielded a smoother surface than the conventional SLA, which seemed to cause a reduced resistance to reverse rotational forces (RTV) in the early-term healing (3 weeks). No adverse reactions were observed on the B-treated surfaces. Nevertheless, B treatment, especially the B coating, did not provide a significant advantage over the conventional SLA in the early-term healing, but it provided a significant resistance to rotational removal forces in the late term (7 weeks)."} {"text": "The aim of the study was to explore mothers' experiences of having an infant born prematurely (28–32 weeks gestation).
In particular, the study aimed to explore the developing parent–infant relationship at 12–30 months after birth and the developing parental identity during hospitalization and discharge. Twelve mothers, aged between 22 and 43, participated in the semi-structured interviews. The mean age of the infants was 19 months. Interviews comprised open-ended questions and visual stimuli consisting of photographs brought by participants, word selection, and card sorting techniques. Data were analyzed using Braun and Clarke's thematic analysis. Three themes arose from a clustering of 10 subthemes: (a) Emotional Impact, (b) Searching for Parent Identity, and (c) Moving Beyond Adversity. Participants expressed experiencing heightened emotional distress during the time of their infants' birth and hospitalization and initially not feeling like parents. Their parental identity strengthened as they became more involved in the care of their infant and began to accept the situation. Participants described parenting their premature infants differently compared with parents of full-term infants, and described adjusting to this difference over time. The findings highlight the emotional experience and adjustment of mothers of premature infants, from hospitalization through postdischarge. The need for psycho-educational interventions postdischarge and parent-partnered models during hospitalization is discussed. In addition, the study demonstrates the use of integrating visual stimuli into qualitative data collection procedures to elicit further meaning and interaction from participants during the interview process. Premature birth is defined as live birth before 37 weeks gestation. Infants born between 28 and 32 weeks are considered "very premature" and those born under 28 weeks are termed "extremely premature".
Compared with full-term infants, those born prematurely are at increased risk of illness, which can result in longer hospital stays, future hospital admissions, and long-term congenital conditions. Several studies have focused on the psychological effects that having an infant born prematurely, and spending time in the NICU, has on parents. Results examining parental stress show that stress levels are higher in parents whose infants had lower gestational ages and birth weights compared with parents of full-term infants. Fewer studies have focused on parents' subjective experiences following discharge from hospital and how the transition impacts the parent–infant relationship and identity. As the research questions focus on exploring primary caregivers' subjective experience of their parenting journey, a qualitative research design using semi-structured interviews was adopted. The interview guide was based on the research questions and developed through consultation within the research team and a review of previous interview questions in the area. Criterion sampling was used for participant recruitment. Inclusion criteria consisted of (a) primary caregivers over 18 years of age of an infant born between 28 and 32 weeks' gestation (inclusive of infants born during the 32nd week of gestation); (b) at birth, the infant spent time in the NICU; (c) at interview, the infant was between 12 and 30 months old; and (d) the infant is being co-parented. Exclusion criteria were (a) infant born as part of a multiple birth; (b) major congenital anomalies/birth defects such as cerebral palsy, heart defects, learning disability syndromes, neural tube defects, or congenital hydrocephalus; (c) main caregiver currently attending or seeking mental health services; and (d) main caregiver having more than one child born prematurely.
As infants born before 28 weeks tend to be more at risk of having disabilities or congenital anomalies, infants below this gestation were not included. Participants were recruited through TinyLife (www.tinylife.org.uk). TinyLife provides free services within NICUs and upon discharge to all parents living in Northern Ireland who have a premature infant and can be accessed easily by all types of families. The recruitment documentation was developed after review with TinyLife staff. An invitation to participate in the study, including a link to the information sheet, was posted at the end of May 2019 on TinyLife's Twitter and Facebook accounts. Twelve parents contacted the research team via email expressing an interest in participating. A final round of recruitment took place at the start of August 2019 through the same means. Eight further parents contacted the research team. Telephone screening to discuss eligibility, informed consent, and the purpose of the study was conducted by the lead researcher (C.S.), who also conducted the interviews. This helped to build the relationship with potential participants prior to data collection and introduce them to the interviewer. The lead researcher was female, had no children, was employed as a Trainee Clinical Psychologist, and had previous training in research methods and data collection. Overall, five parents who met exclusion criteria were informed that they were unable to participate, and three parents did not respond to correspondence. Interviews were conducted by the lead researcher between June 6 and November 7, 2019 with parents who met inclusion criteria and who consented to participate (n = 12). Details of the interview, information leaflets, consent forms, and instructions regarding photographs to bring were sent to participants prior to the interview. The photograph instructions asked participants to bring two recent or past photographs representing their premature birth journey, which could include people, objects, places, or relevant images.
A separate consent form was provided to participants detailing how their photographs would be used in the research and anonymized. Participants were given the option to conduct the interviews on university premises or in their homes. Five interviews took place in participants' homes and seven on university premises. Within the university, interviews took place in a consistent location. Participants were asked that their infant not be present for the interview. Prior to audio-recording the interviews using a dictaphone, participants were asked to provide written informed consent. Participants understood that they could end their participation at any time without this affecting their care or involvement with TinyLife. Interviews were administered in an empathic style using open-ended questions (see www.cuddlebright.com). Participants were recruited via the social media platforms of TinyLife, a premature and vulnerable baby charity based in Northern Ireland. Participants were living in Northern Ireland at the time of the interviews and spoke fluent English. Their infants consisted of five females and seven males with a mean gestational age of 30 weeks (SD = 1.62) and a mean hospital stay of eight weeks (SD = 3.65). At interview, the infants ranged in age between 12 and 28 months. Demographics are summarized in the accompanying table. Three themes, (a) Emotional Impact, (b) Searching for Parent Identity, and (c) Moving Beyond Adversity, arose from a clustering of 10 subthemes. Most participants spoke of memories of leaving their infants in the hospital, which brought heightened distress that has persisted over time; "saying goodbye, I don't think that is ever going to leave me just how hard it was to leave him in the hospital […], it's just awful" (Jill).
In addition, participants mentioned specific symptoms linked to a trauma response, such as flashbacks and recurring nightmares; "like every night you know [I had] a visual nightmare where I would wake up with the sweat and the tears and was up and standing around at her funeral" (Lauren). There was a felt sense of responsibility among some participants regarding the health of their infant, which they associated with feelings of distress; participants felt guilty about the harm they believed they had caused and wondered what they had done. Participants also described finding it difficult to forgive themselves due to the self-blame they felt; "I think I'll never fully forgive myself and I'll always fully think there was something I did or didn't do, that will always be at the back of my mind" (Jane). I've felt like the whole thing was my fault […], so the first time that I held her […] I actually said to her 'I'm sorry' [crying] 'cause I felt it was my fault, because it was my body that rejected the placenta and everything else (Mia) Distress was also mentioned regarding the medical environment, such as unhelpful staff responses, perception of the environment as intrusive, and medical procedures occurring quickly. She took me to about three other machines and then she got the consultant and he literally straight away said 'yep, this needs to come out' […] so he was delivered that night at twenty to ten by emergency section and I really wasn't prepared for it (Jess) This subtheme was linked to participants initially feeling unprepared for the experience and fearful of what might happen to their infants in the present and future; "I think it was absolute terror whenever I found out that she was coming early it was like, I'm not ready for this, I have no clothes, I have no cot" (Mia). Having an infant born prematurely caused much fear in participants regarding
interacting with the infant and regarding the potential future difficulties the infant might face; "I suppose when Megan was growing up, we were always worrying 'would she meet this milestone, would she be developmentally okay?'" (Kate). I remember the lady squirting hand sanitiser into my hand and saying, reach in and touch your baby, [..] and I said 'no I can't, I'm going to hurt her' and I remember being frightened because she was so tiny (Helen) Participants described feeling different from their peers and feeling unable to connect with parents of full-term infants. I got into arguments with a lot of people, like my mum and my sister […] just after he was born and it was because I think 'you don't understand what I'm going through, you don't know what it's like'. I know my sister has kids but it's not the same (Jane) Feeling different was linked to loss. Participants had preconceived ideals of what parenthood would be like, which they felt deprived of and had lost; "this [photograph] is his first car ride. So obviously whenever you think of your baby coming home you think your first car ride is going to be in your car seat... His was in an ambulance." Participants realized that their life was different from where they thought they would be once they had a newborn. This sense of difference created a dichotomy between the present situation and what could have been, contributing to feelings of being different. What I most looked forward to when I got home, was to be able to sit in the house in my pyjamas instead of getting up, getting ready, and going to hospital… so I feel like my whole maternity was completely different from a full-term baby (Lisa) Within this subtheme, participants described experiences where they were directly faced with the medical conditions of their infant and associated these experiences with feeling fearful.
They also described experiences of anticipating their infants' health deteriorating and their potential death; "I got [my partner] to take a video because I thought 'if she passed away, at least I would have seen her move and be alive'" (Helen). Participants hoped for a positive outcome but reported having constant reminders of the vulnerability of their infants. Participants also described their own physical and psychological frailty; "I was very sick after Megan was born. I had to have the drip on because they thought I was going to have seizures, so I didn't get to see her for a full 24 hours" (Kate). Although none of the parents in the sample were attending mental health services, several of them described believing they had suffered from mental health problems such as postnatal depression, post-traumatic stress disorder, or an anxiety disorder; I feel like even now, I'm just getting back to who I was before Layla was born, although I was never diagnosed with any post-natal depression or anything like that, I think there was an underlying thing somewhere within me (Mia) Participants described an initial sense of not feeling like they were parents while in hospital, due to the dependence on the medical system to keep their infants alive and feeling a loss of their parenting role. This was also related to not being responsible for performing basic care tasks and feeling initially unable to strengthen their emotional bond with their infant. Participants contrasted their actual lived experience with what they had previously envisioned and hoped for; "I felt like I was still pregnant, but I wasn't carrying him. The incubator is like my belly. I just left him off somewhere to be, to do the rest of the job, but I didn't feel like I was a parent" (Ava). They're keeping her alive and warm.
She's wrapped in bubble wrap and a hat that we didn't give her… there's nothing of [my partner] and I in that picture - that's not even one of our hands… she's now separate from me, she's just been taken out of my tummy and taken away from me - that's what that picture means to me. Once they are in the cot then the next step is home, so it felt like we took a real leap that day 'cause she didn't have her monitors. […] I could lift her out of the cot, hold her, nurse her… it just felt so much more natural (Mia) During this shift from not feeling like a parent to regaining one's parent identity, participants also reported seeking ways to establish an emotional connection and to make up for lost time. I just thought 'I didn't get to hold this baby for six weeks so I'm going to make up for all these cuddles because before you know it, she'll be walking, and I'll not be able to hold her all the time' (Kate) The concept of returning to normality appeared across all transcripts. This involved reaching milestones such as leaving hospital, but also participants asserting their needs to establish a normative bonding experience to regain their parenting role. This is just me and my baby and I'm just going to sit here and hold him and that's okay […] it was the realisation of that, and I suppose just really trying to get to know him, getting our independence and not obsessing about being in a routine (Maria) In this subtheme, participants reflected on how they adjusted to parenting a premature infant and spoke of how their experience was different from that of parents of full-term infants. They described themselves as engaging more in overcautious behaviors to protect their infants from illness and relapse. [Other mummies] are not as obsessed with sterilising and cleaning.
I suppose I had been in that sort of institutionalised environment - I was so into sterilising, infection control, all the things I suppose other preemie mummies were (Kate) Participants described the impact the NICU environment and hospital protocols had on their parenting. Their parenting experiences upon birth began with an initial dependence on the medical equipment keeping their infants alive within hospital. Participants also spoke of the hospital medical routine and protocols dominating their home life; "there were a lot more appointments, there was a lot more check-ups, and she came home on medication… so there was a lot more that I had to do" (Evelyn). There are lots of machines and lots of beeping and lots of things going on, but it's so quiet and it's almost like there's a fear you know even though all the parents are there and it's all their children, but there seems to be almost this fear and you just sit and you stare and it's very strange (Jane) In the NICU, there was an underlying fear of infants becoming progressively ill or dying. Therefore, participants rejoiced when their infants reached milestones, as this represented progression and hope. These instances often began with participants realizing that their infant was born alive and being thankful for this. Paying attention to milestones reached within the NICU helped participants feel closer to being discharged and to embracing normality in their parenting.
These milestones tended to be related to specific aspects of the infants' medical care and physical well-being. There was so many small steps but they were happening every hour every day, like his feeds, if a feed increased, that was amazing, you were focused on all the tiny milestones, like his weight going up by a couple of grams, so I just took it day by day (Lisa) We used this [photograph] as a progress picture… one of the things I liked was when it was in the cot beside him - it was nearly the same size as him [laughs] and now he has pictures of it nearly every month with it sitting beside him. You can just see how much bigger he is and healthier. Participants depicted their infants as having strength, which gave them comfort that they would continue to thrive and survive; "I just think of her journey and what she's came through and I'm like 'how amazing' as an adult we […] fall sick we just lie there, whereas these babies fight, and these babies are just incredible" (Helen). Within this subtheme, participants described natural processes whereby they accepted the situation over time, learnt to deal with their emotions, and reinterpreted their experience; "I just got to a point I was like 'd'you know what, I couldn't have done anything different, like he's here now and he's healthy just it's not my fault'" (Lisa). Even a year ago if you had of said 'tell me your story' I would have broken down in tears halfway through and couldn't have told you anymore, but I think I have dealt with much more and I have battled through it, and I have lived it […] it's a raw emotion still but I've learnt how to manage those emotions you know?
I don't think you would ever get over it in the sense that it is just something that happened (Helen) Examples of actions that helped participants, or advice they would suggest to other parents, were also discussed as part of the learning and growth process. These included accepting the help of others, having more confidence in medical staff expertise, taking each day as it comes, not engaging in thoughts of self-blame, and keeping themselves informed about medical procedures; "take the help that you can get, listen to people, and never ever blame yourself, 'cause that's probably what ate me for so long" (Mia). Participants also recognized areas for service improvement. One participant spoke of wanting more support for parents in dealing with the experience of leaving their infants each night. You are never gonna feel fine leaving them, but I think just if somebody sort of came to you and said you know 'this is normal to be feeling like this' you know 'try doing this' or even just get you prepared for that (Jill). Interpersonal support reflects the impact that other people had on helping participants to cope and overcome adversity. Participants described specific interactions with staff members which helped them while in hospital, such as reassurance provided, explaining medical terminology, prioritizing their emotional well-being, and responding to their concerns.
Participants also described specific interactions with those outside of the immediate medical care, which helped them cope better and move beyond adversity: "having somebody to talk to who's also had a premature baby, on a day-to-day basis about various things has been really really lovely, so I value that because I think I would have really struggled had I not had that" (Maria). Overall, when recounting their experiences of having an infant born prematurely, participants described moving from a state of emotional distress to a state of growth and acceptance. All participants in the study reported initially feeling distressed by the experience and the hospital environment. As depicted in existing findings, participants described feeling fearful, shocked, experiencing post-traumatic stress symptoms, and believing that they were to blame. Participants also described moving from thoughts and emotions of not feeling like a parent to adjusting to a different kind of parenting. The difference was in relation to comparisons made with parents of full-term infants and prior expectations about parenting. Parents of premature infants described themselves as initially being preoccupied with medical routines and appointments, keeping clean home environments, and not wanting to expose their infants to outside settings. Over time, they described embracing the difference and returning to normality. This process seemed to occur as their infant became older and less dependent on the medical system, and as parents became less fearful of illness. These findings are consistent with the literature on parents regaining confidence following discharge. Interview guides and the order of questions can influence responses and subsequent analysis within qualitative research. Results showed a sense of a progression or journey that participants went through.
This may have reflected the nature of the interview questions, which began with the birth story and ended with questions on current parenting experiences. This potential limitation was addressed by asking open-ended questions during the interview. Additionally, the research did not capture the experience from both parental perspectives, and data were not collected on participants' racial and ethnic representation, potentially leading to a misrepresentation of the diverse kinds of parents living in Northern Ireland who experience a premature birth. All parents who participated were first-time mothers, and therefore the research did not capture being an experienced parent with a premature infant. These limitations may have been due to recruitment through social media and the kind of parent who engages with these media for information. Notwithstanding these limitations, the sample was more homogeneous than those of other studies in this area. Qualitative studies within this population have tended not to be homogeneous and have comprised samples in which individuals vary greatly on specific factors, including infants from wide-ranging gestational ages and parents interviewed at different timeframes since their premature birth experience. Across the interviews, participants described emotional difficulties they experienced regarding their parent identity and their infant's prematurity and provided ideas on how their transition to parenthood in the NICU and beyond could be better supported. Along with existing studies highlighting emotional distress within similar samples, this supports the need for psychological interventions to be embedded in hospital settings. Parent support interventions in the NICU could be delivered in the form of psycho-educational programs and psychological therapy, which have shown effectiveness in relieving psychological distress in this population.
The present study revealed that participants did not initially feel like parents and wanted to be more involved in the care of their infants. Parent-partnered care models such as Family Integrated Care aim to train parents to be more involved in their infants' care through being part of the NICU care team. None to disclose."} {"text": "Fourier-transform infrared spectroscopy gave peaks at 1726 cm−1 (C=O) and 1573 cm−1 (RCOO−), indicating the formation of OS-TBS. We further studied the physicochemical properties of the modified starch as well as its emulsification capacity. As the DS with octenyl succinate anhydride increased, the amylose content and gelatinization temperature of the OS-TBS decreased, while its solubility increased. In contrast to the original Tartary buckwheat starch, OS-TBS showed higher surface hydrophobicity, its particles were more uniform in size, and its emulsification stability was better. A higher DS with octenyl succinate led to better emulsification. OS-TBS efficiently stabilized O/W Pickering nanoemulsions, and the average particle size of the emulsion was maintained at 300–400 nm for nanodroplets. Taken together, these results suggest that OS-TBS might serve as an excellent stabilizer for nanoscale Pickering emulsions. This study may suggest and expand the use of Tartary buckwheat starch in nanoscale Pickering emulsions in various industrial processes. In this study, Tartary buckwheat starch was modified to different degrees of substitution (DS) with octenyl succinate anhydride (OS-TBS) in order to explore its potential for stabilizing Pickering nanoemulsions. OS-TBS was prepared by reacting Tartary buckwheat starch with 3, 5 or 7% OSA. Tartary buckwheat (Fagopyrum tataricum (L.) Gaertn.) is a traditional edible and medicinal pseudo-cereal enriched with beneficial phytochemicals, including flavonoids, phenolics, steroids, fagopyrins, and d-chiro-inositol.
Octenyl succinic anhydride (OSA)-modified starches, approved for food use by the US Food and Drug Administration in 1972, have been prepared from oat, quinoa, sago, maize, and wheat. The improvement of the surface properties of the starch granules and the choice of an appropriate physical treatment technique are essential for the formation of Pickering nanoemulsions. High-pressure homogenization (HPH) is a non-thermal physical process for reducing the particle size of a sample (emulsion or suspension) from the micron range to the nanometer range by using shear, impact and cavitation effects at high pressure. Therefore, we investigated the potential of OSA-modified TBS as a stabilizer for Pickering nanoemulsions. In particular, we focused on the effect of the modification on the physicochemical properties of the starch granules and on the formulation and process of nanoemulsion preparation. The size, chain length, viscosity and thermal characteristics of starch influence its emulsifying characteristics. Tartary buckwheat was obtained from the Key Laboratory of Coarse Cereal Processing at the Ministry of Agriculture and Rural Affairs at Chengdu University; 2-octenyl succinate anhydride from Shanghai Maclean Biochemical; amylose standard (96% pure) and amylopectin standard (87.2% pure) from Beijing Northern Weiye Institute of Metrology and Technology; and medium-chain triglycerides from Shanghai Yuanye Bio-Technology. All other chemicals in this study were analytical grade and purchased from Chengdu Kelong Chemical Company. The raw material was extracted (w/v) for 30 min at 50 °C and 500 W in order to remove flavonoids and lipids. The precipitate was soaked for 24 h at room temperature in 0.3% NaOH solution at a ratio of 1:10 (w/v), then passed through gauze in order to further remove crude fibers and other impurities.
The resulting starch slurry was centrifuged for 10 min at 4000 rpm; the supernatant and upper brown layer were discarded, and the remaining white layer was washed again with 0.3% NaOH. Centrifugation and washing with 0.3% NaOH were repeated three times. The final precipitate was dispersed in distilled water, neutralized to pH 7.0 by addition of 0.1 M HCl, then washed with distilled water and centrifuged repeatedly until the supernatant was clear without brown layers and a firm, stable white precipitate formed at the bottom of the tube. Finally, the precipitate was dried at 40 °C for 48 h, ground into a powder and passed through a 100-mesh sieve to eliminate agglomeration. The resulting starch was stored in a polyethylene bag at room temperature for later use. TBS was thus isolated as previously described, with slight modifications. OS-TBS was prepared using a method based on previous reports. TBS (w/v) was dispersed in distilled water with continuous stirring. OSA was diluted with anhydrous ethanol and dropped slowly into the TBS dispersion within 2 h while the temperature was maintained at 35 °C. The pH of the starch slurry was adjusted to 8.5 with 3% NaOH solution and the esterification reaction was allowed to continue for 3 h at 35 °C. Then the pH of the starch slurry was adjusted to 7 using 1 M HCl. The slurries were centrifuged for 15 min at 4000× g, washed several times with distilled water, washed twice with 90% ethyl alcohol, dried at 45 °C for 24 h, passed through a 100-mesh sieve and stored in polyethylene bags at room temperature. The different amounts of OSA added to the TBS dispersion yielded slurries denoted OS-TBS-3, -5 and -7, respectively. For DS determination, isopropanol (v/v) was added to each sample, stirring was continued for 10 min, and the mixture was centrifuged at 3000× g for 10 min. The sediment was washed thoroughly with 90% isopropanol until addition of 0.1 M AgNO3 did not lead to appreciable formation of AgCl.
The washed sediment was suspended in 30 mL of distilled water, heated in a boiling water bath for 30 min, and titrated with 0.1 M NaOH solution using phenolphthalein as the end-point indicator. DS was calculated as DS = (0.162 × A × M/W)/(1 − 0.210 × A × M/W), where A was the titration volume of NaOH solution (mL), M was the molarity of the NaOH solution, and W was the dry weight (g) of the OS-TBS. Native TBS served as the reference.

The amylose content of TBS and OS-TBS was determined using an iodine-binding method.

The chemical structure of TBS and OS-TBS was analyzed qualitatively using a Spectrum Two FT-IR spectrometer. Samples were prepared by grinding the finely powdered starch with KBr (w/w), and spectra were obtained from 400 to 4000 cm−1 at a resolution of 4 cm−1.

Starch samples were suspended in distilled water over a range of light obscurations from 10 to 20%, and the volume-averaged droplet size (D) was determined as described.

Granules were observed under a scanning electron microscope. Samples were mounted on double-sided adhesive tape on an aluminum stub, sputter-coated with a layer of gold, and imaged at an accelerating voltage of 15 kV. Images were taken at different magnifications (×2000 and ×5000) at a working distance (WD) of 14 mm to observe the dense structure of the particles and pores.

The crystalline structure of TBS and OS-TBS was examined using an X-ray diffractometer operating at 40 kV and 40 mA with Cu-Kα radiation. Diffractograms were obtained over a range of diffraction angles (2θ) from 5 to 40° at a rotational speed of 6.35 °/min.
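The titration-based DS calculation can be sketched in a few lines. This is an illustrative helper (the function name and example numbers are mine, not the authors'), assuming the standard formula for OSA starches, DS = (0.162·A·M/W)/(1 − 0.210·A·M/W), where 0.162 and 0.210 g/mmol are the molar masses of an anhydroglucose unit and an octenyl succinyl group:

```python
# Hypothetical helper for the titration-based DS calculation (not the
# authors' code). Assumes the standard OSA-starch formula
#   DS = (0.162*A*M/W) / (1 - 0.210*A*M/W)
def degree_of_substitution(a_ml: float, m_naoh: float, w_dry_g: float) -> float:
    """A: NaOH titration volume (mL); M: NaOH molarity; W: dry sample weight (g)."""
    x = 0.162 * a_ml * m_naoh / w_dry_g
    return x / (1.0 - 0.210 * a_ml * m_naoh / w_dry_g)

# Hypothetical example: 1.0 g sample titrated with 1.9 mL of 0.1 M NaOH
ds = degree_of_substitution(1.9, 0.1, 1.0)
```

With these assumed inputs the DS falls in the ~0.03 range reported for the OS-TBS samples.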
Relative crystallinity was calculated from the diffractograms as described.

The onset temperature (To), peak temperature (Tp), conclusion temperature (Tc) and gelatinization enthalpy (ΔH) were measured as described.

The water solubility index (WSI) and swelling power (SP) of the starch samples were determined as described. The TBS (w/v) was dispersed in deionized water and incubated for 30 min in a water bath at 50, 60, 70, 80 or 90 °C, then centrifuged at 4000 rpm for 15 min. The supernatant was recovered and dried to a constant weight at 105 °C. The WSI and SP were calculated using the following equations: WSI (%) = W1/W0 × 100 and SP (g/g) = Ws/(W0 − W1), where W0 was the weight of starch; Ws, the weight of the sediment after centrifugation; and W1, the weight of the supernatant after drying.

Starch powder was compacted into a standard tablet 2 mm thick, and the tablet was then immersed in medium-chain triglycerides. Then, 16 μL of deionized water was dripped lightly onto the surface of the tablet and allowed to equilibrate for 1 min. Three-phase contact angles were determined using a JY-82B device by photographing the drop and measuring with the protractor method.

Pickering emulsion samples were prepared as described, with slight modifications.

The distribution of drop sizes in the Pickering emulsions was observed using a TL3900CA optical microscope at an image magnification of 10 × 100; the emulsions were diluted 1:5 (v/v) in deionized water, and one drop of emulsion was placed on a glass microscope slide. The distribution of drop sizes was also measured using a Zetasizer Nano 90 system, based on refractive indices of 1.414 for medium-chain triglycerides and 1.33 for water. For storage stability, emulsion was transferred to a 15 mL sample vial, sealed, stored at room temperature and photographed at 0, 15 and 30 days.
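The WSI/SP computation described in the methods can be sketched as follows (an illustrative helper with hypothetical weights; it assumes the common definitions WSI (%) = W1/W0 × 100 and SP (g/g) = Ws/(W0 − W1)):

```python
# Sketch of the WSI/SP calculation (illustrative, not the authors' code).
# W0: initial dry starch weight; Ws: sediment weight after centrifugation;
# W1: dried-supernatant weight.
def wsi_sp(w0: float, ws: float, w1: float) -> tuple[float, float]:
    wsi = w1 / w0 * 100.0   # water solubility index, %
    sp = ws / (w0 - w1)     # swelling power, g/g
    return wsi, sp

wsi, sp = wsi_sp(w0=0.50, ws=2.0, w1=0.05)  # hypothetical weights in grams
```

Note that SP here is equivalent to Ws/(W0·(1 − WSI/100)), another form sometimes given in the starch literature.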
The emulsification index (EI) of the corresponding Pickering emulsions was evaluated from the heights of the emulsified and precipitated layers after 1 d, and the EI was calculated as described.

The stability of the emulsions under centrifugation was investigated after centrifugation at 10,000× g for 10 min, as described.

All experiments were performed in triplicate, and data are reported as mean ± standard deviation. Data were analyzed statistically using one-way analysis of variance (ANOVA) in SPSS 25.0, and differences were considered significant if p < 0.05. Data plots were prepared using Origin 2021 software.

Increasing the amount of OSA from 3 to 7% during the preparation of OSA-modified Tartary buckwheat starch increased the DS from 0.0184 to 0.0312, reflecting greater substitution at higher OSA levels.

FT-IR analysis of TBS revealed strong O-H stretching vibration peaks at 3800–3000 cm−1, together with C-H stretching vibrations and bending vibrations of absorbed water at 2930–1640 cm−1. OS-TBS showed two new peaks: one due to the stretching vibration of the ester carbonyl, and one at 1573 cm−1 due to asymmetric stretching of the RCOO− group. These peaks were interpreted to indicate successful formation of OS-TBS because they were similar to previous reports.

The mean particle size of the OS-TBS samples was larger than that of the unmodified TBS (18.8 μm). XRD analysis revealed TBS to have an A-type X-ray diffraction pattern with intense peaks at 2θ = 15, 17, 18 and 23°, and the crystalline type was retained after modification. OSA modification also altered the To, Tp, Tc and ΔH of the starch, slightly increased the particle size distribution of TBS and reduced the apparent amylose content.
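The emulsification-index evaluation can be sketched as follows. The paper only says the EI "was calculated as described", so this helper assumes the common definition (height of the emulsified layer over total sample height); the function name and heights are hypothetical:

```python
# Sketch of an emulsification-index calculation (illustrative assumption:
# EI (%) = height of emulsified layer / total sample height * 100).
def emulsification_index(h_emulsion_mm: float, h_total_mm: float) -> float:
    return h_emulsion_mm / h_total_mm * 100.0

ei = emulsification_index(27.0, 45.0)  # hypothetical layer heights after 1 d
```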
XRD patterns showed that the OSA modification occurred mainly in the amorphous region and had less effect on the crystalline region, which is consistent with the morphological results showing that the OSA treatment maintained the shape and integrity of the granules. We also investigated the effect of various factors on the emulsification properties of Pickering emulsions prepared by the HPH method using OS-TBS as a stabilizer. Our results suggest that Pickering nanoemulsions can be maximally stabilized by OS-TBS when the HPH method is used with a stabilizer concentration of 4 wt%, an oil phase volume fraction of 30 vol%, minimal ionic strength and neutral pH. A higher DS in OS-TBS resulted in smaller and more stable emulsified particles. Although the effects of various factors on the physicochemical properties of OSA starch have been extensively investigated, this study further demonstrates the synergistic effects of OSA treatment, small particle-size characteristics and HPH methods in the preparation of Pickering nanoemulsions. Our work provides the first demonstration that such a combined strategy offers the unique advantage of significantly improving the properties of TBS and leads to stable Pickering nanoemulsions for various industrial applications."} {"text": "Objective: To quantify the effects of increasing the step length of the split squat on changes in kinematics, kinetics, and muscle activation of the lower extremity. Methods: Twenty male college students participated in the test. Data on kinematics, kinetics, and EMG were collected during split squat exercise at four different step lengths, presented in randomized order.
One-way repeated-measures ANOVA was used to compare the characteristic variables of peak angle, moment, and RMS among the four step length conditions. Results: Step length significantly changed the peak angles of the hip (p = 0.011), knee (p = 0.001), and ankle (p < 0.001) joints and the peak extension moments of the hip (p < 0.001) and knee (p = 0.002) joints, but did not affect the peak ankle extension moment (p = 0.357) during a split squat. Moreover, a significant difference was observed in the EMG of the gluteus maximus (p < 0.001), vastus medialis (p = 0.013), vastus lateralis (p = 0.020), biceps femoris (p = 0.003), semitendinosus (p < 0.001), medial gastrocnemius (p = 0.035) and lateral gastrocnemius (p = 0.005) across the four step lengths, but no difference in the rectus femoris (p = 0.16). Conclusion: Increases in the step length of the split squat produced greater activation of the hip extensor muscles while having a limited impact on the knee extensor muscles. The ROM, joint moment, and muscle activation of the lead limb in the split squat should all be considered when prescribing the exercise for individual prevention or rehabilitation. Moreover, the optimal step length for strength training in healthy adults appears to be equal to the length of the individual's lower extremity.

The optimal exercise selection for improving, maintaining, and enhancing functional capacities involves aligning the demands of an exercise with the specific needs of the client or patient. A split squat, or forward lunge, is a multijoint, closed-kinetic-chain exercise used to improve the function or strength of the lower extremities. According to general statistics, joint injuries are very common in the athletic population, with an incidence of 10–35.5 injuries per 1,000. Recently, it was shown that the front tibia angle influences joint angles and loading conditions during the split squat exercise.
However, no studies have been conducted to observe the kinematics and kinetics of the lower extremity across multiple different step lengths, creating a void in the existing literature.

The strength of the lower extremity muscles, particularly those around the knee, and the ratios between different muscle strengths play a crucial role in rehabilitation and strength training. Some studies have pointed out through EMG that the vastus medialis (VM) and vastus lateralis (VL) are two of the key muscles that control the frontal plane kinematics of the knee.

The activation of specific muscles can be varied by performing variations on the same exercise. To our knowledge, performing variations of split squats or lunges may alter the muscle activation of the lower extremities, possibly resulting in changes in strength that may be important both for the rehabilitation of patients and for the strength training of healthy adults. For example, certain types of squats and lunges result in different activation of the VM compared to the VL.

In summary, the objective of this study was to describe the kinematics, kinetics, and muscle activation of the lower extremities during the split squat at four different step lengths. It was hypothesized that the peak angle and extension moment of the hip would increase with increasing step length and decrease for the knee, and that the EMG activity of the hip extensors would increase with increasing step length and decrease for the knee extensors. The results of this study may provide theoretical support for guiding the selection of exercises or training programs, both for patients' rehabilitation and for healthy adults' strength training.

Twenty male college students majoring in physical education who engage in regular exercise (at least twice a week) participated in this study. Participants with previous neurological disease, hypertension, or orthopedic pathology were excluded.
The mean height and mass of the participants were 1.75 ± 0.064 m and 81.2 ± 3.8 kg, respectively. The mean 1RM of the split squat was 1.1 ± 0.3 kg/BW. All participants were familiar with the split squat. The protocol for this study was approved by the Health Sciences Institutional Review Board (NO.2020187H), and participants provided their written informed consent prior to participation.

Thirteen retroreflective markers, each 14 mm in diameter, were affixed to palpable body landmarks to estimate the rotational centers of the ankle, knee, and hip. Markers were placed bilaterally at the anterior superior iliac spine (ASIS) and the top of the crista iliaca, at the L4-L5 interface, and on the dominant limb at the anterior thigh, the lateral and medial femoral condyles, the lateral and medial malleoli, the tibial tuberosity, the center of the second and third metatarsals, and the heel. Three-dimensional kinematic data were collected using an 8-camera motion analysis system at 200 Hz. Kinetic data were collected using force plates embedded in the floor and sampled at 1000 Hz. The coordinate and ground reaction force signals were time-synchronized using Cortex, version 2.6.2.

Surface electromyography (EMG) of eight muscles was recorded using silver-contact wireless bipolar bar electrodes with fixed 1 cm interelectrode spacing. Electrodes were placed parallel to the muscle fibers of the gluteus maximus (GM), vastus lateralis (VL), vastus medialis (VM), rectus femoris (RF), biceps femoris (BF), semitendinosus (ST), medial gastrocnemius (MG), and lateral gastrocnemius (LG). A maximal voluntary isometric contraction (MVIC) was performed for each of the eight muscles to elicit maximal activity, as previously described in the literature, and to normalize the EMG amplitudes.

At the first visit, the 10RM barbell weight for each subject's split squat was determined by a standardized protocol.
After a standardized warm-up (jogging and dynamic stretching), an estimated 10RM weight was selected for the split squat. When the maximum number of repetitions was greater than 10, the weight was increased until the maximum number of squats was 10; conversely, when the maximum number of repetitions was less than 10, the weight was decreased until the maximum number of squats was 10. Each increase or decrease in weight was 10% of the estimated 10RM weight. During all attempts, each participant was asked to squat to a depth at which the thighs were parallel to the floor. In addition, each subject's 10RM barbell weight was determined using a comfortable step length.

During the second visit, all subjects were required to complete split squats under four step length conditions while kinematic, ground reaction force, and EMG data were collected. In each condition, subjects performed three consecutive repetitions of the split squat using their 10-repetition maximum (10RM) barbell weight. To control for multiple-exposure and fatigue effects, each participant was randomly assigned a step length condition order. Data were collected mainly from the dominant lower extremity, which was positioned in front, while the non-dominant side was positioned behind. The dominant lower extremity was operationally defined as the preferred limb for kicking a ball. A 72-h resting period was given to participants between the 10RM procedure and the formal data collection.

Each subject participated in the preliminary experiment 72 h before the formal experiment. For the split squat exercise, all participants wore the same brand and style of shoes. Four step lengths were determined for each subject based on their own leg length, and each subject was given the opportunity to practice the split squat under the four conditions.
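The 10RM titration protocol above can be summarized as a simple adjustment loop. This is purely illustrative (the function and the `reps_at` callback are my own stand-ins for observed trials, not anything from the study):

```python
# Illustrative sketch of the 10RM titration protocol (not the authors' code).
# `reps_at` stands in for an observed trial: it maps a candidate weight to the
# maximum number of clean repetitions achieved at that weight.
def find_10rm(estimated_10rm: float, reps_at, max_trials: int = 10) -> float:
    weight = estimated_10rm
    step = 0.10 * estimated_10rm          # each adjustment is 10% of estimate
    for _ in range(max_trials):
        reps = reps_at(weight)
        if reps == 10:
            return weight
        weight += step if reps > 10 else -step
    return weight

# Toy example: suppose a lifter manages exactly 10 reps near 88 kg.
reps_model = lambda w: 10 if abs(w - 88.0) < 4.0 else (12 if w < 88.0 else 8)
w10 = find_10rm(estimated_10rm=80.0, reps_at=reps_model)
```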
The step length for the split squat was standardized using leg length, measured from the greater trochanter to the lateral malleolus, and the four lengths were set at 50%, 70%, 100%, and 120% of leg length (LL). Tape strips were placed on the floor at the starting point and at the target step length.

Participants were first instructed to step forward into a split stance with the dominant limb on the force plate and then to complete each repetition by lowering the body until the front thigh was parallel to the floor. Once they reached the lowest position, they were instructed to immediately rise upward and return to the split-standing starting position. They were also instructed to maintain an erect torso during the entire split squat. A video camera recorded the sagittal plane from the right side to verify that the torso was erect and the front thigh was parallel to the floor. Participants attempted to complete the lowering phase of each repetition within 2 s; an acoustic metronome set to 60 beats per minute was used to assist with movement timing. Several familiarization trials were allowed for each step length condition before data collection so that the participants could become comfortable with the movements and lengths.

Lower extremity kinematics, kinetics, and EMG data were collected simultaneously in all four step length conditions. The original 3-dimensional coordinate data of the markers were filtered using a Butterworth low-pass digital filter with an estimated optimal cutoff frequency of 13 Hz. For each dependent variable, the average across the three trials within each of the four step length conditions was calculated and used for statistical analysis. One-way repeated-measures analyses of variance (ANOVAs) were used to compare the split squat characteristic variables among the four step length conditions. In all analyses, a Greenhouse-Geisser correction was applied when the assumption of sphericity was violated. Statistical significance was set at p < 0.05; when statistical significance was evident, LSD post hoc tests were used. All statistical tests were conducted in Statistical Package for the Social Sciences (SPSS) software.

Descriptive statistics for the peak flexion angles are presented in the corresponding table. There were significant differences in the peak flexion angles of the hip (p = 0.011), knee (p < 0.001), and ankle (p < 0.001) joints among the different step lengths. Post hoc analysis revealed that the peak hip flexion angle at 100% LL step length was significantly smaller than that at 70% LL (p = 0.005) and 120% LL (p = 0.019) step lengths. The flexion angle at 50% LL step length tended to be greater than that at 100% LL step length (p = 0.064). The peak knee flexion angle at 100% LL step length was significantly smaller than that at 50% LL (p = 0.002) and 70% LL (p < 0.001) step lengths, and the angle at 120% LL step length was significantly smaller than that at 50% LL (p < 0.001) and 70% LL (p = 0.001) step lengths. The peak ankle flexion angle at 120% LL step length was significantly greater than that at 50% LL (p < 0.001), 70% LL (p < 0.001), and 100% LL (p < 0.001) step lengths; the angle at 100% LL was significantly greater than at 50% LL (p < 0.001) and 70% LL (p < 0.001); and the angle at 70% LL was significantly greater than at 50% LL (p = 0.048).

There were significant differences in the peak extension moments of the hip (p < 0.001) and knee (p = 0.002) joints among the different step lengths, and no significant difference in the peak ankle extension moment (p = 0.357). Post hoc analysis revealed that the peak extension moment of the hip joint at 120% LL step length was significantly greater than that at 100% LL (p = 0.013), 70% LL (p = 0.008), and 50% LL (p = 0.008) step lengths.
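The one-way repeated-measures ANOVA used for these comparisons can be sketched in a few lines of NumPy. This is a minimal illustration of the partitioning of sums of squares (my own variable names; it omits the Greenhouse-Geisser correction and post hoc tests):

```python
import numpy as np

# Minimal one-way repeated-measures ANOVA: rows are subjects, columns are
# the step-length conditions (illustrative, no sphericity correction).
def rm_anova(data: np.ndarray) -> tuple[float, int, int]:
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between-subjects
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()   # between-conditions
    ss_err = ss_total - ss_subj - ss_cond                    # residual
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, df_cond, df_err

# Toy data: 3 subjects x 2 conditions
f, df1, df2 = rm_anova(np.array([[1.0, 2.0], [2.0, 2.0], [3.0, 5.0]]))
```

The p-value would then be read from the F distribution with (df1, df2) degrees of freedom.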
The peak hip extension moment at 50% LL step length was significantly smaller than that at 70% LL (p = 0.008) and 100% LL (p = 0.013) step lengths. Descriptive statistics for the peak net joint moments are presented in the corresponding table.

There were significant differences in the EMG of the GM (p < 0.001), VM (p = 0.013), VL (p = 0.020), BF (p = 0.003), ST (p < 0.001), MG (p = 0.035) and LG (p = 0.005) across all step lengths, and no significant difference in the RF (p = 0.16). In the post hoc tests, the root mean square amplitude (RMS) for the GM at 50% LL step length was significantly smaller than the RMS at 70% LL (p = 0.020), 100% LL (p < 0.001), and 120% LL (p < 0.001) step lengths, and the RMS at 70% LL was significantly smaller than at 120% LL (p = 0.037). The RMS for the VM at 50% LL was significantly smaller than at 100% LL (p = 0.010) and 120% LL (p = 0.012). The RMS for the VL at 50% LL was significantly smaller than at 70% LL (p = 0.006), and the RMS at 120% LL was significantly smaller than at 100% LL (p < 0.001). The RMS for the BF at 50% LL was significantly smaller than at 70% LL (p = 0.038), 100% LL (p < 0.001), and 120% LL (p < 0.001), and the RMS at 70% LL was significantly smaller than at 120% LL (p < 0.001). The RMS for the ST at 50% LL was significantly smaller than at 70% LL (p = 0.045), 100% LL (p = 0.002), and 120% LL (p < 0.001), and the RMS at 70% LL and 100% LL was significantly smaller than at 120% LL (p < 0.001). The RMS for the MG at 50% LL and 70% LL was significantly smaller than at 100% LL and 120% LL, respectively, and likewise the RMS for the LG at 50% LL and 70% LL was significantly smaller than at 100% LL and 120% LL, respectively. Descriptive statistics for muscle activation are presented in the corresponding table.

The purpose of this study was to compare the differences in the kinematic, kinetic, and muscle activation characteristics of the anterior lower extremity when performing a split squat with leg-length-standardized step lengths in healthy adults. There was a significant change in the peak flexion angle of the hip joint: its range of motion (ROM) decreased, while the peak extension moment increased, with increasing step length. The peak knee flexion angle, knee extension moment, and peak ankle angle decreased with increasing step length. In general, a strength exercise should be executed safely within the physiological ROM of a joint and should avoid overloading human tissue.

In order to avoid the effects of trunk angle and squat depth on the biomechanical characteristics of the lower extremity across the four step lengths, we asked all subjects to keep the torso upright and to squat down to the position where the front thigh was parallel to the ground at all four step lengths. In theory, therefore, step length should not affect the peak flexion angle of the hip joint, but the experimental results were inconsistent with this expectation: the peak flexion angle of the hip at 100% LL step length was significantly smaller than at 70% and 120% LL step lengths.

As shown in our data, the ROM of the hip joint gradually decreased and the peak net moment gradually increased with increasing step length, which is more challenging for the hip extensors. Our results support previous work. The split squat is a movement dominated by the front lower extremity. The peak knee extension moment showed a gradual overall decrease with increasing step length but was significantly greater at 70% LL step length than at 50%.
The reason could be that an over-short step length (50% LL) during the split squat shifts the body's center of gravity backward, increasing the load on the posterior limb. However, our experiments did not collect data from the posterior limb, which is a limitation of our study.

Many previous studies have shown that excessive peak extension moments in sports may cause cartilage or ligament tissue damage. Many adults and even athletes have varying degrees of dorsiflexion restriction, which is an important factor in athletic performance and injury.

At 120% LL step length, the split squat is performed entirely in plantar flexion of the foot, so the calf muscles are loaded throughout the entire squat. Therefore, we recommend that for patients with limited dorsiflexion of the foot, split squat training be performed with a step length of ≥100% LL. Despite the greater variation in peak ankle dorsiflexion angles at different step lengths, there was no statistically significant difference in the peak ankle extension moment.

Our results seem to indicate that increasing step length can lead to more favorable kinematics and kinetics of the lower extremity joints. However, during practice it became evident that excessively large step lengths negatively impact movement maneuverability, often resulting in forward slipping. Notably, significant improvements were not observed beyond a step length of 100% LL.

The kinematic and kinetic data for a given movement reflect the combined output of all muscles involved in the movement, whereas EMG provides a good indication of the activation of a particular muscle during a particular phase of the movement. The EMG data in this study therefore reflect the activation of the relevant muscles during the split squat at the four step lengths.
The splThere are many clinicians and health professionals who recommend their patients strengthen the quadriceps with split squat or lunge movements. Therefore, appropriate training movements are important to optimize muscle activation. Same as our hypothesis, the activation of the hip extensors, knee extensors, and flexors was significantly less at 50% LL step length than the other three lengths. We can speculate that this may be due to the small step length of the split squat, thus increasing the load on the posterior lower extremity and reducing the activation of the anterior lower extremity muscles. Hofmann\u2019s study also demonstrated that reducing step length resulted in more weight-bearing on the posterior limb .Anatomically, the RF serves the dual function of flexing the hip and extending the knee, while the BF and ST perform the opposite functions. Therefore, it can be challenging to determine whether the RF and BF/ST is engaging in a centrifugal or centripetal contraction during a split squat . HoweverConsistent with the previous results , the EMGThe EMG of MG and LG was only significantly different between the longer (100% LL and 120%LL) and shorter (50% LL and 70% LL) step length. We hypothesized that a 10RM load would result in a higher demand for ankle stability when the step lengths are larger. These all results may provide a clearer training protocol for patients rehabilitating from clinical muscle atrophy but also requires consideration of whether the patient\u2019s knee joint mobility and maximum net joint moment are within the patient\u2019s tolerance range.With the step length variations, the ROM of the hip, knee, and ankle joints, peak joint net moments, and the EMG of lower extremities underwent changes. Specifically, with the step length increases, the ROM in the knee and ankle joints tends to decrease, while the peak extension moment of the hip joint increases. 
In addition, alterations in step length had a greater influence on the hip extensor muscles, while having a limited impact on the knee extensor muscles, especially between 100% LL and 120% LL. In practice, however, step lengths beyond 120% LL demand more stability while yielding only minor additional muscle activation. It is recommended that the appropriate step length be selected based on specific needs, such as joint restrictions, avoiding excessive joint stress, or increasing the activation of specific muscles, when customizing rehabilitative exercise prescriptions. The optimal step length for strength training in healthy adults appears to be equal to the length of the individual's lower extremity."} {"text": "The R2 value of the MRF-GCN model was 0.8650, much larger than that of Long Short-Term Memory (LSTM) and other conventional models, while the mean square error (MSE) of the MRF-GCN model was 1.59899, much smaller than that of LSTM and other conventional models. Therefore, the MRF-GCN model has better prediction accuracy than the other models and can be applied to predicting surface subsidence over large areas. Accurate prediction of surface subsidence is of significance for analyzing the patterns of mining-induced surface subsidence and for mining under buildings, railways, and water bodies. To address the problem that existing prediction models ignore the correlation between subsidence points, resulting in large prediction errors, a Multi-point Relationship Fusion prediction model based on Graph Convolutional Networks (MRF-GCN) for mining-induced subsidence was proposed. Taking the surface subsidence in the 82/83 mining area of Yuandian No. 2 Mine in Anhui Province in eastern China as an example, surface deformation data obtained from 250 InSAR images captured by the Sentinel-1A satellite from 2018 to 2022, combined with GNSS observation data, were used for modeling.
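The two evaluation metrics reported for the model comparison (R2 and MSE) are standard regression metrics and can be computed as follows (a generic NumPy sketch with toy data, not the authors' evaluation code):

```python
import numpy as np

# Generic regression metrics used to compare subsidence models (illustrative).
def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.mean((y_true - y_pred) ** 2))

def r2(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    ss_res = np.sum((y_true - y_pred) ** 2)                 # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)          # total sum of squares
    return float(1.0 - ss_res / ss_tot)

# Toy subsidence series (mm) and a model's predictions
y = np.array([1.0, 2.0, 3.0, 4.0])
yhat = np.array([1.0, 2.0, 3.0, 3.0])
```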
The deformation pattern of each observation point was obtained by feeding its deformation observation data into an LSTM encoder; after that, a relationship graph was created based on the correlation between points in the observation network, and the MRF-GCN was established. The prediction results were then produced through a nonlinear activation function of the neural network.

Surface subsidence is a common geological and environmental disaster that has attracted increasing concern. The existing models for predicting surface subsidence are mainly divided into two types: physical models and statistical models.

The emergence of Graph Convolutional Networks (GCN) makes it possible to model such relational data. In general, graph-structured data consist of a set of n nodes, where the information of each node is uniquely represented by an m-dimensional vector, and a set of edges represented by an n × n symmetric 0–1 matrix A, usually referred to as the adjacency matrix: A(i, j) = 1 if nodes i and j are connected, and 0 otherwise. GCN can aggregate local information between neighboring nodes by performing convolution operations on the graph. When multi-layer convolution operations are performed, the information in the graph can be passed farther through the connections between nodes, fusing local and global information. The hidden state h_i^l of the i-th node in the l-th layer is updated as h_i^l = σ(Σ_j A(i, j) W^l h_j^{l−1} + b^l), where W^l is the weight matrix, h_j^{l−1} is the hidden state of the j-th node in the (l−1)-th layer, b^l is the bias, and σ is the nonlinear activation function. When l = 1, the hidden feature is the initial node feature.

Taking the surface subsidence in the 82/83 mining area of Yuandian No. 2 Mine in Anhui Province in eastern China as an example, surface deformation data obtained from 250 InSAR images captured by the Sentinel-1A satellite from 2018 to 2022, combined with GNSS observation data, were used to establish a prediction model with MRF-GCN.
The main contributions of this paper are as follows. We design and implement MRF-GCN, a prediction model for surface subsidence based on graph convolutional networks. The model runs a graph convolution operation over the surface subsidence monitoring network, which can model the interactions between monitoring points caused by geological heterogeneity and thus make more accurate predictions of subsidence at surface points. We successfully predicted the surface subsidence pattern in a mining area in China with the proposed model and elaborated on the principles behind it. Meanwhile, we designed ablation experiments to confirm the effectiveness of the model's modules. We wrote the code in Python on the PyTorch deep learning framework; to facilitate related research, our code is publicly available at https://github.com/BaoSir529/MRF-GCN

As shown in the figure, in the case of large-scale surface subsidence monitoring, denote a series of surface monitoring points as P = {p0, p1, …, pn}. For a given surface monitoring point, there is a continuous time series ti = {t0, t1, …, tn}. Depending on the type of monitoring, these time-series data reflect changes at the site over time. The surface subsidence prediction task aims to predict data for a future period based on the a priori data ti, i.e., to predict {tn+1, tn+2, …, tn+k}.

In order to obtain subsidence data for a particular monitoring area, a monitoring network needs to be set up several years in advance, and the observation area needs to be observed continuously over an extended period by manual level measurement, or in combination with remote sensing technology. In this paper, we selected the surface of the 82/83 mining area of Yuandian No. 2 Mine in Anhui Province, China, as the observation area.
We set up the monitoring network in advance and obtained continuous deformation data in this area by manual level measurement and by InSAR image-based extraction. To confirm the reliability of the model, we selected two observation networks, each with 20 monitoring points, in the manual measurement area and in the InSAR image processing area, respectively, as the demonstration objects of this paper.

In accordance with the requirements of the China National General Specification for Engineering Surveying, we selected the monitoring points within the manual measurement area. In addition, when selecting points, we avoided steeply sloping ground and water, and chose sites that can be preserved for a long time and are easy to observe. The relationships between monitoring points in all observation networks follow these guidelines:

Greater connectivity between monitoring sites in close spatial proximity.

Greater connectivity between monitoring points on the same geological structure.

No connectivity between monitoring sites isolated by large geological structures or building complexes.

The connections between monitoring points should reflect the actual placement of the level observation network.

The control network is laid out in the remote sensing area, and the manual measurement area is shown in the figure; the subsequent contents do not distinguish between the two regions.

After continuously observing the monitoring points P = {p0, p1, …, pn}, each monitoring point pi yields time-series information reflecting its elevation change. These sequences are encoded with a Long Short-Term Memory (LSTM) network. We set up a sliding window that looks at k steps of the prior data and train the model to learn the patterns of all sequential data by scanning the prior data item by item as the window slides. Specifically, the sequence data of each monitoring point in the control network are first divided into overlapping data segments according to the window width k.
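The sliding-window segmentation can be sketched as follows (an illustrative NumPy version, not the authors' released code; the function name is mine):

```python
import numpy as np

# Slice one monitoring point's series of length n into n - k + 1 overlapping
# windows of width k; these segments feed the LSTM encoder (illustrative).
def sliding_windows(series: np.ndarray, k: int) -> np.ndarray:
    n = len(series)
    return np.stack([series[i:i + k] for i in range(n - k + 1)])

segments = sliding_windows(np.arange(10.0), k=3)  # 10 samples -> 8 windows of 3
```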
For a monitoring point pi, the time-series information is divided into n − k + 1 data segments, each of length k, which are fed into an l-layer LSTM module as the input state at the current time t. For each monitoring point input, each layer of the LSTM module computes the standard gate recurrence (reconstructed here from the symbol definitions below):

It = σ(Wi·[xt, ht−1] + bi)
Ft = σ(Wf·[xt, ht−1] + bf)
Gt = tanh(Wg·[xt, ht−1] + bg)
Ot = σ(Wo·[xt, ht−1] + bo)
Ct = Ft ⊙ Ct−1 + It ⊙ Gt
ht = Ot ⊙ tanh(Ct)

Here ht is the hidden state at time t, ht−1 is the layer's hidden state at time t − 1 (or the initial hidden state at time 0), Ct is the cell state at time t, xt is the input at time t, and It, Ft, Gt, Ot are the input, forget, cell, and output gates, respectively. ⊙ is the Hadamard product, and Wi and bi are the corresponding parameter matrices and biases. In a multilayer LSTM, the input of the l-th layer is the hidden state of the (l − 1)-th layer. For all monitoring points in the detection network, the LSTM module produces the hidden state Hlstm.

To achieve the goal of learning future subsidence patterns from a priori sequences, we set up a sliding window that lets the model look forward from k consecutive a priori monitoring data. Finally, to facilitate the subsequent graph convolution operation, the subsidence features extracted by the LSTM are subjected to a nonlinear transformation that extracts the association information between them and maps them into a new feature space, whose dimension c is set as the input dimension of the graph convolution module; W and b are the parameter matrix and bias of this nonlinear transformation.

The observation area of a continuous surface subsidence monitoring task is large, and the geographical factors are complex. The points in the monitoring network interact with each other in complex, intrinsic ways. For example, deforming points inside the area about to subside can continuously affect the surrounding points. In order to better capture the interaction patterns between points, we introduce a purpose-designed graph convolution module.
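As a concrete illustration, the gate recurrence above can be sketched in plain NumPy (PyTorch's nn.LSTM implements the same equations internally). The hidden size d, the random weight initialization, and the helper names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following the gate legend in the text:
    I (input), F (forget), G (cell candidate), O (output) gates;
    C_t is the cell state, h_t the hidden state, * the Hadamard product."""
    d = h_prev.shape[0]
    # All four gates computed from one stacked affine map of [x_t, h_{t-1}].
    z = W @ np.concatenate([x_t, h_prev]) + b        # shape (4d,)
    i_t = sigmoid(z[0:d])
    f_t = sigmoid(z[d:2*d])
    g_t = np.tanh(z[2*d:3*d])
    o_t = sigmoid(z[3*d:4*d])
    c_t = f_t * c_prev + i_t * g_t                   # C_t = F⊙C_{t-1} + I⊙G
    h_t = o_t * np.tanh(c_t)                         # h_t = O⊙tanh(C_t)
    return h_t, c_t

def encode_window(window, d=8, seed=0):
    """Encode one length-k window of subsidence values into a hidden state
    by scanning it value by value with a single-layer LSTM."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(4 * d, 1 + d))   # toy random weights
    b = np.zeros(4 * d)
    h, c = np.zeros(d), np.zeros(d)
    for v in window:
        h, c = lstm_step(np.array([v]), h, c, W, b)
    return h
```

Because the output gate lies in (0, 1) and tanh is bounded, the resulting hidden state is always strictly inside (−1, 1) componentwise.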
By modelling both the graph-structure information of the monitoring network and the continuous settlement patterns of the monitoring points, this module helps the model make more accurate predictions of surface settlement. In actual monitoring, the data of each monitoring point in the network are obtained over the same period but measured independently; when monitoring points in the same network influence each other, this interaction is reflected in their monitoring data over time. From the graph generation module and the LSTM coding module described above, we obtain the adjacency matrix A of the points in the monitoring network and the continuous variation law H of each monitoring point, which are fed into the designed graph convolutional neural network.

After the LSTM coding module, the model holds the encoded information of each monitoring point about its k consecutive a priori data. To eliminate the effect of the non-normalized monitoring-point relationship map, we normalize the point adjacency matrix following the suggestion of Kipf & Welling:

Ã = D̃^(−1/2) (A + I) D̃^(−1/2)

where I is the identity matrix and D̃ is the degree matrix of A + I, storing for each node the number of nodes it connects to. The GCN then updates the state of each point at layer l by aggregating the state information of neighbouring monitoring points from layer l − 1, where hi^l denotes the output of point pi at the l-th layer of the GCN, and hj^(l−1) is the output of pi's neighbour pj at the (l − 1)-th layer, which also serves as the input of the l-th layer. It should be noted that when l = 1, the input state of the GCN is the initial state of the neighbour node hj, where hj ∈ H.
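The symmetric normalization and a single propagation step can be sketched as follows; the ReLU activation is an assumption (the text does not name the activation used), and the function names are illustrative:

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization of Kipf & Welling:
    A_tilde = D~^(-1/2) (A + I) D~^(-1/2),
    where D~ is the degree matrix of A + I (self-loops included)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)                      # degrees including self-loop
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_norm, H, W):
    """One graph-convolution layer: aggregate neighbour states through the
    normalized adjacency, apply a linear map W, then a ReLU nonlinearity."""
    return np.maximum(A_norm @ H @ W, 0.0)
```

For a path graph p0-p1-p2, the normalized matrix stays symmetric and the self-loop entry of a degree-1 node becomes 1/(1+1) = 0.5, which prevents isolated points from losing their own state during aggregation.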
eij is a normalization constant for the edge, which originates from using the symmetrically normalized adjacency matrix, and W is a nonlinear parameter matrix. The final state of each monitoring point pi after the graph convolution module supports the prediction at time k + 1 made by the model from the point's a priori data.

Training a neural network means continuously adjusting the weight matrices while minimizing the error function by back-propagating its gradient. In order to fully utilize the limited data in the dataset, the model is trained over multiple epochs until the error curve converges. To obtain the predictions of the model from the a priori data of each monitoring point, we attach a feed-forward neural network (FFNN) immediately after the GCN module as the prediction output layer; this layer produces the predicted values, with ŷ and y denoting the predicted and true values of the monitoring points, respectively. Adam is chosen as the optimizer of the model, and the goal of training is to minimize the error function. As the error function, we choose the mean square error between the predicted and target values.

The study area of this paper is the 82/83 mining area of Yuandian No. 2 Mine and the surrounding collapse area, located in the northern part of Anhui Province, China. Its center is about 55 km from Suizhou City to the east and 52 km from Huaibei City to the northeast. The geographical coordinates are 116°23′59′′E ∼ 116°32′04′′E, 33°29′05′′N ∼ 33°33′55′′N; the east-west length is about 10.9 km ∼ 13.3 km, the north-south width about 1.3 km ∼ 5.3 km, and the area about 41.6 km2. The geographical location is shown in the figure. The mine mainly works the 72 coal seam, at elevations down to −1000 m, with coal thickness 0 m ∼ 5.7 m, mined chiefly by the directional longwall full-collapse method.

To obtain remote-sensing-based subsidence data for the study area, we used images taken by the C-band Sentinel-1A satellite of the European Space Agency. From January 2018 to December 2022, 148 SAR images were acquired at an average interval of 12 days, with VV+VH polarization and an ascending-orbit satellite attitude at the time of acquisition. From the original image data, we applied the multiple master-image coherent target small-baseline interferometric SAR method to generate the subsidence sequences. Further, in order to obtain more accurate field data, we set up a long-term surface settlement monitoring network of 64 monitoring points, tied into the existing level points at the mine site. From September 2017 to October 2018, at an average interval of 10 days, the monitoring points were measured by third-order levelling following China's "Specifications for the third and fourth order leveling". From these measurements, 20 monitoring points uniformly covering the mining plane of the mine area were selected as the manual data source, with 37 consecutive observations in each subsidence sequence.

Our model is trained on an NVIDIA GeForce RTX 3090Ti with the sliding window set to 5; the hidden-state dimensions of both the LSTM module and the GCN module are set to 300. The experimental dataset was split into 80% for training and the remaining 20% for testing. Too little training does not allow the model to converge, while too much training introduces additional noise, as shown in the figure. To verify the accuracy of the model's predictions, we use the first 80% of the time-series data as the training set, train the model to predict the subsequent data, and compare the predictions with the real data.
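The windowing and chronological 80/20 split described above can be sketched as follows. The function names are hypothetical, and taking the single next value of each window as the target is one common convention, not necessarily the authors' exact setup:

```python
import numpy as np

def make_windows(series, k):
    """Split one monitoring point's sequence into overlapping
    (window, target) pairs: k prior values predict the next value,
    giving n - k pairs for a series of length n."""
    X = np.array([series[i:i + k] for i in range(len(series) - k)])
    y = np.array(series[k:])
    return X, y

def train_test_split(X, y, train_frac=0.8):
    """Chronological split: the first 80% of pairs train the model,
    the remaining 20% are held out for testing (no shuffling, since
    the task is forecasting)."""
    n_train = int(len(X) * train_frac)
    return (X[:n_train], y[:n_train]), (X[n_train:], y[n_train:])
```

For the manually measured network (37 observations per point) with window width k = 5, this yields 32 pairs per point, split 25/7.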
Meanwhile, to verify that MRF-GCN advances the state of the art, we conducted controlled tests against three common time-series prediction models. All benchmark tests used the same data sources, and all models simultaneously predicted future settlement changes at all monitoring sites in the monitoring network. Autoregressive integrated moving average (ARIMA) is one of the classical statistical forecasting models; its order (p, q) was set according to the data specifics. Space-time ARIMA (STARIMA) is a variant of ARIMA that incorporates spatial dependence. LSTM is a practical deep-learning baseline for sequence prediction. R2 characterizes the degree of correlation between two data sets, and the mean square error (MSE) reflects the degree of dispersion between them; the smaller the MSE, the better the two data sets match. The experimental results of each model are shown in the table. Finally, the statistical results comparing all model predictions with the actual values are presented for the monitoring-network data obtained from InSAR images. The Pearson correlation coefficient (PCC) describes the degree of linear correlation between two matrices; a value close to 1 indicates that the two matrices have a strong linear correlation. Additionally, MRF-GCN outperforms LSTM by 1.4162 mm2 in terms of mean squared error, indicating a closer fit to the actual data. To visualize how the MRF-GCN predictions match the true values, we select three points from each of the InSAR-based and manually measured areas and plot them as time-series diagrams. For the InSAR-based network, points p1, p6 and p19 are selected, and for the manually measured control network, points p5, p7 and p11. It should be noted that these points were chosen randomly, simply to show the prediction quality of the model; the time-series plots of the remaining points can be fully reproduced with the code released with this paper. These line plots show that the predictions of MRF-GCN fit the real subsidence pattern well.
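The three evaluation metrics used above can be computed as below; these are the standard textbook definitions, not the authors' exact evaluation script:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean square error: smaller means a closer fit."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def pcc(a, b):
    """Pearson correlation coefficient between two flattened series."""
    return float(np.corrcoef(np.asarray(a, float).ravel(),
                             np.asarray(b, float).ravel())[0, 1])
```

Note that PCC is invariant to linear rescaling of the predictions, whereas MSE is not, which is why the paper reports both.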
Such prediction results can provide credible support for developing subsidence prevention measures. To thoroughly investigate the accuracy of the model's predictions, we conducted a visual analysis of the error distributions of MRF-GCN and the other benchmark models. To clearly reflect the prediction quality of the different models, we randomly selected a point and plotted the reference lines Y = X and Y = X ± 3 on the graph. The red, green, and blue scatter points correspond to the error distributions of MRF-GCN, LSTM, and ARIMA, respectively. It is evident from the figure that, compared with the green and blue scatter points, the red points cluster around the Y = X line and fall inside the Y = X ± 3 band. This observation implies that the predictions of the MRF-GCN model are significantly closer to the actual values than those of the traditional time-series methods, and that the predictions are relatively more stable. It can therefore be concluded that the MRF-GCN model outperforms the traditional prediction methods in both accuracy and stability. To provide a comprehensive evaluation of the long-term prediction capability of the proposed model, we also conducted a time-series comparison between the predictions of MRF-GCN and LSTM and the actual values of the test set, as illustrated in the figure.

The application of graph convolutional networks to modelling the subsidence patterns of multiple points can unveil the interplay among the points. The Pearson correlation coefficient (PCC), which ranges from −1 to 1, reflects the degree of correlation between two vectors: a higher absolute value indicates a stronger correlation, with positive values indicating the same trend direction, and vice versa. Treating the predicted data of each point as a vector, the correlation coefficient between two points reflects the correlation between their subsidence patterns.
To examine whether the model captures the interdependencies among the points once graph convolution is added, we use the raw InSAR-based data to generate predictions with the MRF-GCN and LSTM models, respectively, and compute the PCC between every pair of points from the predicted sequences. We present the correlation degrees between the points as a 20 × 20 heat map. For example, point p4 is connected to both p8 and p9. However, p9 is closer to p4 and located on a stable geological formation, while p8 is farther away and located near water; under the same circumstances, p9 is therefore closer to p4 in subsidence pattern than p8. In the heat map, the MRF-GCN-based predictions reflect this actual situation well, with the p4-p9 cell significantly darker than the p4-p8 cell, whereas the LSTM-based predictions fail to capture it. Such patterns can be learned accurately by modelling the neighbour-relationship map with the GCN; in contrast, the conventional LSTM prediction model treats the variation of each point in isolation and hence cannot model the mutual influence between points.

Inspection of the results shows that the R2 value of the MRF-GCN model was 0.8650, much larger than that of long short-term memory (LSTM) and the other conventional models, while the mean square error (MSE) of MRF-GCN was 1.5989, much smaller than that of LSTM and the other conventional models. One caveat: in order to explore the prediction performance of the model, we set aside the accuracy requirements on remote sensing data and assumed that the subsidence data obtained by the multiple master-image coherent target small-baseline interferometric SAR method reflect the actual situation. This is not permissible in practical applications, so when using this model to predict an actual subsidence area, remote sensing data that meet the accuracy requirements should be obtained.
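Assuming the predictions are stacked as one series per monitoring point, the 20 × 20 PCC matrix behind the heat map reduces to a single np.corrcoef call:

```python
import numpy as np

def point_correlation_matrix(predictions):
    """predictions: array of shape (n_points, n_steps), one predicted
    subsidence series per monitoring point. Returns the n_points x n_points
    Pearson correlation matrix that the heat map visualizes: darker cells
    correspond to point pairs with more similar subsidence patterns."""
    return np.corrcoef(np.asarray(predictions, float))
```

Points whose predicted series rise and fall together (e.g. the p4-p9 pair discussed above) produce entries near +1; anti-correlated points produce entries near −1.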
In a word, the model proposed in this paper can meet the needs of large-scale surface subsidence prediction well and has the potential to be applied to predicting surface subsidence caused by various factors. In this paper, we proposed the MRF-GCN model for predicting mining-induced surface subsidence. Unlike previous work, this paper introduced a graph convolutional neural network that focuses on the change trends of associated points, so the model can predict the change of the current point by learning the subsidence trends of neighbouring points. Specifically, to demonstrate how the model works, this paper took the surface subsidence area at the 82/83 mining area of Yuandian No. 2 Mine in Anhui Province as the study area and combined remote sensing technology and manual measurement to obtain subsidence data. These data were used to build the MRF-GCN model and to predict future subsidence trends. The experimental results indicate that the proposed model outperforms the conventional baselines.

In recent decades, extensive research has demonstrated the positive impact of urban green spaces (UGS) on public health through several pathways. The indirect pathways through sentiments towards UGS, UGS quality, and time spent in UGS were found to be highly significant, underscoring their substantial role as mediators in the UGS use-health association. While a comprehensive understanding of the mechanisms linking perceived health to UGS use in Mexico City requires further research, this study proposes that fostering positive sentiments towards UGS, enhancing UGS quality, and encouraging extended visits to green areas could potentially amplify the perceived health benefits associated with UGS use among residents. These insights offer valuable inputs for policymaking, emphasizing the importance of integrating public perspectives to optimize nature-based solutions and broaden their positive impact within Mexico City.
However, in the context of Latin America, particularly Mexico City, there remains a notable scarcity of evidence linking UGS use to health outcomes and an insufficient understanding of the pathways or factors underlying these associations. Therefore, this study employs Structural Equation Modeling (SEM) to investigate the intricate pathways between UGS use and residents' perceived health in Mexico City, a densely populated urban center. The SEM integrates three key mediators: sentiments towards UGS, UGS quality, and time spent within these spaces. Survey data were collected through an online survey distributed via social media in May 2020. The findings indicate a minor yet significant direct link between UGS use and self-reported health (0.0427).

Since the early 19th century, researchers have increasingly recognized the beneficial effects of nature on human health and well-being. However, specific evidence on the mechanisms linking green space use to improvements in health outcomes in Latin American megacities is lacking; only a few studies have examined in depth the pathways or factors mediating some type of relationship between green spaces and health or well-being, and the evidence has not been conclusive in all cases. To date, little attention has been paid to the effect of using UGS to enhance health outcomes in the densely populated cities of Latin America, where wealth inequalities are vast and the availability of green spaces is often scarce. Mexico City has experienced accelerated growth rates, exacerbating spatial inequalities that have contributed to increasing public health risk factors. Accordingly, the present study aims to address these gaps by examining the pathways between the use of UGS and the self-reported health of residents of Mexico City, accounting for multiple mediators through Structural Equation Modeling (SEM).
In the context of hypothesis testing and theory development, this approach is suitable as it helps in understanding how variables directly and indirectly affect one another. Mexico City was chosen as a case study because it presents patterns of spatial segregation, extreme social inequality, and a dense population of 8,400 people per square kilometer. As such, this study seeks to determine how the use of green spaces is directly or indirectly associated with Mexico City's residents' perceived health through multiple mediators, namely sentiments toward green spaces, UGS quality, and time spent in UGS. The evidence generated will assist the decision-making process regarding the most effective underlying mechanisms to support health through UGS use in the city. The consideration of mediating factors is indispensable in contexts such as Mexico City, where both spatial and social inequalities may influence the perception and use of health-promoting urban environments, as well as the benefits derived from them.

In this study, UGS refers to built environments supporting healthy behaviors, physical activity, recreation, social contact, and overall well-being in cities. Although several studies have linked simple exposure to green spaces, which includes indirect and transitory contact, to health benefits, this study focuses on deliberate use. To date, several pathways have been suggested between health and UGS use. The first pathway examined in this study is the one mediated through sentiments toward UGS, which were incorporated into the model following Wan and Shen's (2015) framework. The second pathway between UGS use and health explored in this analysis is the one mediated through UGS quality, given its role in affecting how green spaces are used and enjoyed.
In a study conducted in the city of Carmona, the authors found that the presence of playgrounds, pleasant views, drinking fountains, and recreational areas, all of which are linked to UGS quality, contributed to greater feelings of comfort and better perceptions of green spaces. Lastly, another critical pathway explored in this study links UGS use to health via time spent in UGS. Time spent in UGS also acts as a mediator between UGS quality and health and between sentiments toward UGS and health, thus adding two additional pathways between UGS use and self-reported health. Some studies indicate that a higher frequency of UGS use is conducive to better health outcomes irrespective of the time people spend in UGS, but the evidence is mixed.

The study's analytical approach relied on an online survey distributed through social media in May 2020. The decision to collect the data via this method was made because of the high risk of COVID-19 contagion through face-to-face data-gathering methods in 2020. The final questionnaire was uploaded to Qualtrics for two pilot tests, each involving five volunteers. Following the pilots, the survey was launched on Facebook in May 2020 using targeted advertisements directed at adults (18+) whose profiles indicated that they lived in one of the 16 municipalities of Mexico City. The participants' location at the time of survey completion was verified through Qualtrics, and individuals outside the study area were excluded from the sample. Furthermore, the ad explicitly sought individuals who had been living in Mexico City for a minimum of 14 months. Facebook was selected as a distribution channel due to its status as the most popular social platform in Mexico, with over 81 million registered users. For the purposes of this analysis, only surveys in which respondents completed all the questions were considered.
In order to encourage a higher level of participation, more detailed information such as respondents' specific income was not requested; instead, respondents were prompted to select their income bracket. This approach aligns with standard practices in online data collection, acknowledging the common reluctance of individuals to provide precise personal information. Structural Equation Modeling (SEM) was chosen as the primary analytical method due to its capability to accommodate multiple dependent variables and account for mediating effects among interrelated variables.

2.2.1.1. UGS use. To assess UGS use and reduce the effect of the COVID-19 pandemic restrictions on respondents' answers, participants were requested to select the option that most accurately represented their usage patterns of green spaces between March 2019 and March 2020 (spanning 12 months). The survey clarified that "use" specifically referred to direct engagement with these spaces; in other words, respondents were asked about their specific intention to visit green spaces, ensuring that they were not merely passing through them while traveling to another destination. Responses were classified as (i) no use (none), (ii) once every two to three months (rare), (iii) once or twice per month, and (iv) once or more than once a week (frequent). The survey differentiated the frequency of use based on prior research indicating that frequent rather than sporadic use of UGS is significantly associated with improved health outcomes.

2.2.1.2. Health. To assess health outcomes, participants were asked to rate their perceived health status on a 5-point Likert scale. This measure of self-reported health was employed as a proxy for health status, aligning with the approach outlined by Jylhä (2009).
In Mexico, researchers frequently use self-perceived health as a well-established proxy for overall health; for example, Valle (2009) investigated it in this capacity.

2.2.1.3. Mediators. As previously highlighted, this study incorporated three essential mediators: sentiments toward UGS, UGS quality, and time spent in UGS. To assess sentiments toward UGS, participants were requested to evaluate the significance of UGS to their overall quality of life and urban experience using a 4-point Likert scale. Understanding individuals' sentiments toward spaces is crucial, as this factor significantly influences their inclination to utilize these spaces, subsequently shaping their actual behavior.

2.2.1.4. Additional exogenous variables. Two fundamental sociodemographic attributes at the individual level were considered as exogenous variables in the model: age and socioeconomic status. It is well established that an individual's health and well-being are significantly impacted by their age and socioeconomic circumstances.

A majority of respondents emphasized the paramount importance of UGS for both city life and their overall quality of life. However, a notable percentage held contrasting views: 7.1% regarded UGS as unimportant, and 13% considered them only slightly important. This finding underscores that approximately one-fifth of the surveyed individuals did not perceive green spaces as crucial to urban quality of life. Concerning the quality of green spaces in people's neighborhoods, the most common response was that it was poor (28%). Intriguingly, the highest rating, excellent quality, garnered the fewest responses, accounting for merely 12% of the participants. This disparity in perceptions sheds light on varying assessments of local green space quality. Regarding the duration of visits to UGS, a significant majority (69%) reported spending between 16 and 60 minutes per visit.
Remarkably, one-fifth of respondents allocated more than 60 minutes to each visit, highlighting the substantial investment of time in utilizing these green spaces.

The overall model fit test returned a p-value of 0.261. Additionally, crucial fit indices were scrutinized, namely the root mean square error of approximation (RMSEA = 0.012), the comparative fit index (CFI = 1.000), the Tucker-Lewis index (TLI = 0.992), and the standardized root mean squared residual (SRMR = 0.005). All of these metrics affirmed that the model exhibited a strong and favorable fit.

To test the total indirect effects of UGS use on perceived health, a bootstrapping method was applied. Many scholars recommend bootstrapping over a normal-theory approach when testing indirect effects due to its robustness and accuracy. The results obtained through the bootstrapping method indicated a weakly significant direct path from UGS use to self-perceived health (p < 0.10). Notably, the five indirect pathways explored in the model were all highly significant. Among the introduced exogenous variables used as controls, age was found to be significantly associated with respondents' perceived health through UGS quality (p < 0.01) and, with moderate significance, through time spent in UGS. Interestingly, sentiments toward UGS were not identified as mediators between age or income and perceived health. The direct effect of age on health was moderately significant, albeit with a modest effect.
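A simplified, hypothetical sketch of the percentile-bootstrap test for an indirect effect: the study estimated these effects inside a full SEM, whereas here, for illustration only, the paths a (X to mediator M) and b (M to outcome Y) are plain simple-regression slopes and the indirect effect is their product a*b:

```python
import numpy as np

def ols_slope(x, y):
    """Simple-regression slope of y on x (population covariance / variance)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

def bootstrap_indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Point estimate and 95% percentile bootstrap confidence interval for
    the indirect effect a*b of x on y through the mediator m."""
    rng = np.random.default_rng(seed)
    x, m, y = np.asarray(x, float), np.asarray(m, float), np.asarray(y, float)
    n = len(x)
    est = ols_slope(x, m) * ols_slope(m, y)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        boots.append(ols_slope(x[idx], m[idx]) * ols_slope(m[idx], y[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return est, (lo, hi)
```

The indirect effect is deemed significant when the percentile interval excludes zero; unlike the normal-theory (Sobel) test, this does not assume the product a*b is normally distributed.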
Both age and income exhibited more substantial direct effects on self-reported health than UGS use, with their total effects being highly significant. In terms of direct effects, income demonstrated a robust association with better-perceived health. Prior work also suggests that the perceived quality of a green space significantly influences the degree to which its use is associated with health outcomes.

In addition, this study unveils that the time spent in UGS acts as a mediator between sentiments toward UGS and health. In other words, spending more time in UGS is associated with individuals' positive sentiments toward them. This finding aligns with the results of a study conducted in Turkey by Akpinar (2016). In order to enhance sentiments toward UGS, public health campaigns that promote a positive perception of UGS and reinforce their importance for urban quality of life could significantly intensify the relationship between UGS use and perceived health.

The SEM findings also suggest that the quality of UGS acts as a significant mediator in the pathway between UGS use and self-perceived health, with a higher perceived quality of UGS strengthening this relationship, in line with Veitch et al. (2012). Building upon analogous studies that employed quality as a mediating factor between UGS use and health, and upon the work of Buono et al. (2012), investing in the quality of green spaces appears worthwhile. Finally, the findings also suggest that time spent in UGS acts as an additional mediator between UGS use and self-rated health. While some researchers have argued that visiting green spaces is sufficient to benefit from them, others have claimed that spending a certain amount of time in UGS is essential to reaping their benefits. The SEM used in this analysis also revealed that the time spent in UGS serves as a mediating pathway between sentiments toward UGS or UGS quality and self-reported health.
These findings align with the existing literature, indicating that the quality of UGS is linked to the time individuals spend in these spaces, consequently impacting the health outcomes of the users. In this vein, introducing activities that encourage people to spend more time in UGS could prove to be an effective strategy to increase both the frequency and duration of UGS visits.

Although UGS research has gained traction in recent decades, substantial knowledge gaps persist, particularly in Latin America, where research in this domain has been scarce. In terms of limitations, it is vital to stress that SEM, while providing opportunities to examine direct and indirect effects, does not imply causality; accordingly, causal relationships between variables cannot be inferred from the analyses conducted. Nonetheless, while this study is cross-sectional in nature, it is crucial to emphasize that the findings open intriguing avenues for future research, particularly in terms of UGS quality and sentiments toward green spaces. In addition, the model delves into individual-level variations, acknowledging that external factors, such as high crime rates, could inhibit the utilization of green spaces; given the structural equation modeling framework of this study, such exogenous factors were not specifically investigated. Moreover, it is important to note that certain latent variables that have been shown to significantly mediate the relationship between UGS use and health in diverse studies, such as social cohesion, were not included in the model. Another limitation was the use of an online survey, which might have constrained the engagement of marginalized groups: research has demonstrated that data collection tools relying on the Internet can inadvertently discriminate against underprivileged populations, as individuals without access to the Internet at home are automatically excluded.
This study provided valuable insights into the relationship between the use of urban green spaces and self-reported health among residents of Mexico City. Employing Structural Equation Modeling, the research effectively addressed notable gaps in the existing literature by analyzing the multifaceted connections between UGS use and health outcomes, considering three critical mediators: sentiments toward UGS, UGS quality, and time spent in UGS.

The findings shed light on the nuanced relationship between UGS use and perceived health. They emphasize the importance of considering mediating factors to understand the intricate mechanisms through which UGS use influences self-perceived outcomes. While UGS use exhibited a weak and minor direct impact on self-reported health, the indirect pathways were notably robust, collectively bearing more substantive weight. Thus, in Mexico City, enhancing aspects such as the quality of UGS, time spent in UGS, and sentiments toward UGS can potentially bolster the positive association between UGS use and self-reported health.

Furthermore, the study underscored the need for targeted interventions and thoughtful policy formulation to cultivate favorable perceptions of UGS, elevate UGS quality, and encourage extended periods spent in these spaces. Public health campaigns, engagement initiatives, and activity programs within UGS might be effective tools to achieve these objectives and enhance the public health outcomes associated with UGS use. Nonetheless, additional research on these instruments is needed.

Finally, this study is important in its contribution to the discussion on the impact of nature-based solutions on public health in Latin America, where studies are still scarce. As research in this field continues to grow, more evidence is expected to emerge regarding the association between UGS use and health and well-being in Latin American urban settings, and the pathways to strengthen this association.
Researchers, policymakers, and urban planners are increasingly recognizing the importance of green spaces in creating healthier and more livable cities in the region. In this regard, this study provides a solid foundation for future studies and policy interventions aimed at optimizing the health benefits derived from UGS and enhancing the overall livability of urban environments, not only in Mexico City but also in similar settings globally.

Contemporary synthetic chemistry approaches can be used to yield a range of distinct polymer topologies with precise control. The topology of a polymer strongly influences its self-assembly into complex nanostructures; however, a clear mechanistic understanding of the relationship between polymer topology and self-assembly has not yet been developed. In this work, we use atomistic molecular dynamics simulations to provide a nanoscale picture of the self-assembly into micelles of three poly(ethylene oxide)-poly(methyl acrylate) block copolymers with different topologies. We find that the topology affects the ability of the micelle to form a compact hydrophobic core, which directly affects its stability. We also apply unsupervised machine learning techniques to show that the topology of a polymer affects its ability to adopt a conformation in response to the local environment within the micelle. This work provides foundations for the rational design of polymer nanostructures based on their underlying topology. Our molecular dynamics simulations provide molecular-scale understanding of how polymer topology affects the self-assembly and stability of nanoparticles, and of a polymer molecule's ability to adopt a conformation in response to its local environment.
Ring polymers are one synthetically accessible topology that has drawn considerable attention as a result of the unique properties they exhibit in comparison to their linear counterparts.5–13 Functional polymer nanostructures have typically been fabricated using linear polymers, but significant synthetic advances in the past two decades have made ring copolymer synthesis possible. Ring polymers demonstrate distinct self-assembly behavior,9,12,13 which leads to their resultant micelles possessing markedly different properties,9 including size and shape,14 morphology,15,16 temperature and salt tolerance,17,18 and degradation,14 with respect to micelles formed from analogous linear polymers.

The ability of amphiphilic polymers to self-assemble into specific morphologies in solution has driven interest in their deployment for a diverse range of applications.19,20 Ring polymers have been reported to show greater efficacy,21–24 longer in vivo circulation times,25,26 and higher cancer cell uptake25–28 than the same polymers with a linear topology.

In drug delivery applications, the ability to control the size and stability of micellar aggregates is particularly important. The size of such micelles is one of the most critical features in determining biodistribution, and the stability can be tuned to prevent premature release or to enable a controlled release of therapeutics. Ring polymers have shown great promise as potential drug and gene delivery vehicles because they often show improved drug loading and release capacity,29,30 and their distinctive behavior has also been studied in extensional flows31,32 and thin films.33,34 However, relatively few simulation studies have investigated the underlying mechanisms that lead to the properties of ring polymers in aqueous environments observed experimentally. 
Studies that have been performed have primarily utilized coarse-grained polymer models to gain insight into how polymer topology affects the morphology of the micelles that form.23,35–39 While interest in the application of self-assembling ring polymers in drug-delivery applications is building, there is a relative lack of detailed understanding of the molecular-scale mechanisms that drive the emergence of their desirable properties. Molecular-scale simulations present a unique opportunity to build this level of understanding. Simulations have recently been used to develop understanding of the unique properties of ring polymers within polymeric melts.12

Here we study a ring poly(ethylene oxide)-poly(methyl acrylate) copolymer in comparison to its analogous linear diblock topology (MA12EO30) and triblock topologies (MA-terminated (MA6EO31MA6) and EO-terminated (EO15MA12EO15)). For each system we computed the radius of gyration (RG) and the eccentricity of the largest micelle, and we characterized per-polymer conformations using an embedding algorithm,42 followed by clustering in the resulting embedded space using Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN).43

In the ring polymer micelle, the more elongated conformations are found to be enriched in the core of the micelle. In these conformations, the EO block is more extended so that it can reach the micelle corona and interact with the surrounding aqueous environment. The polymers at the interface of the core of the ring polymer micelle take on a more conventional ring shape (cluster 2), allowing the EO block to expand to maximize its contact with the surrounding water and the MA block to embed into the core to minimize its interaction with water.

For the diblock polymer micelle, the pattern is similar to that for the cyclic one. The most extended conformations (clusters 1 and 2) predominate in the core of the micelle. In these conformations, the MA block is more extended, allowing it to maximize its contacts with the rest of the MA present in the core. Finally, cluster 3 is more likely to be found at the core–shell interface. 
This cluster presents a collapsed MA block and extended EO block, which allows the EO to maximize its contacts with the water, while the MA minimizes its contacts with this solvent by collapsing within itself.

Honda et al. have studied MA-EO-MA linear and MA-EO ring block copolymers and found that the linear polymers form micelles that have larger hydrodynamic diameters and aggregation numbers, while also being less thermally and salt stable than the corresponding ring polymer.18 The same authors also studied butyl acrylate (BA)-ethylene oxide linear and cyclic block copolymers and found that the sizes of the micelles from the two polymers were similar but the ring polymer showed greater thermal stability.17 Our simulations show that there are more MA–MA contacts within the core of the ring polymer micelle as compared to the MA-terminated linear polymer, which results in a more compact and more stable micelle; the MA-terminated linear polymer forms larger aggregates that are less stable than those formed from the cyclic or diblock polymer.

Flower-like micelles have been proposed for the polymers studied by Honda et al.17,18 as well as for Pluronics, which contain blocks of propylene oxide (PO) and ethylene oxide.44 In each case, the authors suggest that these polymers with the hydrophobic monomers on the terminal ends form flower-like micelles where a majority of the polymers have both terminal ends within the core of the micelle, and some of the polymers have a hydrophobic terminal end in solution. The results of our simulations for the MA-terminated linear polymers show that ∼20% of the polymers take conformations which result in at least one of the MA blocks being in the corona of the micelle. 
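The MA–MA contact comparison described here reduces to counting atom pairs within a distance cutoff. A minimal sketch follows; the 5 Å default cutoff is an assumed illustrative value, not taken from the paper:

```python
import numpy as np

def count_contacts(coords_a, coords_b, cutoff=5.0, same_set=False):
    """Count coordinate pairs within `cutoff` (same length units as input).

    Set same_set=True when both arrays are the same group (e.g. MA-MA
    contacts) to exclude self-pairs and avoid double counting.
    """
    dist = np.linalg.norm(coords_a[:, None, :] - coords_b[None, :, :], axis=-1)
    within = dist < cutoff
    if same_set:
        within = np.triu(within, k=1)  # keep each pair once, drop i == j
    return int(within.sum())
```

For production-size systems a neighbor-list (cell list) approach would replace this O(N²) distance matrix, but the counting logic is the same.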
Interestingly, despite the larger aggregation number for the MA-terminated linear polymer micelles than for the micelles formed from the EO-terminated linear polymers, we find that both micelles have roughly the same number of MA monomers (∼360) in their cores.

Our ability to identify three distinct conformations of each of the polymers allows us to provide a detailed picture of the internal structure of the micelles. In doing so, we show that for the linear polymer with the hydrophobic monomers on either end (MA-terminated linear) there are two conformations where the MA blocks are near to one another and one conformation in which the polymer is fully extended with the MA blocks separated from one another. This is consistent with the general picture suggested for the MA-EO and BA-EO polymers studied by Honda et al.17,18

We found that in the micelles formed by the EO-terminated triblock, the diblock and the ring polymers, which have a well-defined core and corona, the polymers take different conformations depending on their location within the micelle. In the case of the EO-terminated linear polymer, we find that the polymers in the core of the micelle have a propensity for an elongated MA block, which maximizes the hydrophobic contact between MA monomers, and more compact EO blocks which lie on the surface of the micelle. The polymers at the core/shell interface of the micelle have more compact MA blocks, which allows the polymers to more effectively shield their hydrophobic blocks, and the EO blocks are more extended in order to maximize their hydration. In the ring polymer micelle, meanwhile, we find two more elongated conformations which are most prominent in the core of the micelle, whereas the other, more ring-like conformation sits at the core–corona interface. 
These conformations taken by the ring polymers in the different parts of the micelles allow the polymers to maximize the hydrophobic contact of the MA blocks while also allowing the EO monomers to maximize their interaction with the surrounding water. In the case of the diblock polymer micelle, we find that the conformations where the MA blocks are the most extended are located closer to the core, while the conformation with a collapsed MA block is found close to the core–shell interface. It is thus clear that these conformations are the result of the MA monomers maximizing their hydrophobic interactions and minimizing their contact with the aqueous environment. Our findings therefore show that polymers that can take location-specific conformations will form stable micelles that have hydrophobic cores which are shielded by the hydrophilic monomers, and those that cannot (the MA-terminated polymer in this case) will not.45–47

Our simulations provide a mechanistic picture of what leads to the difference in size and stability of micelles formed by block copolymers that differ in topology but not in the chemical composition of their constituent monomers. Additionally, we have been able to demonstrate the range of conformations that are taken by four different topologies of polymers within the micelle and how they determine the stability of the micelles. We have also shown how the conformations of the polymers change as their position within the micelle changes, which is particularly interesting when considering loading these micelles with small-molecule therapeutics, as the location and the hydration of the drug within the micelle will be driven largely by the conformations of the polymers in its local environment. 
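The conformational analysis discussed above used an embedding followed by HDBSCAN clustering. As a self-contained stand-in, the sketch below clusters per-polymer shape features with a minimal k-means; this is a deliberate simplification for illustration, not the paper's pipeline:

```python
import numpy as np

def kmeans(features, k=3, iters=50, seed=0):
    """Minimal k-means over per-polymer feature vectors.

    Stand-in for the embedding + HDBSCAN analysis; returns a cluster
    label per polymer and the final cluster centers.
    """
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)].astype(float)
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # assign each conformation to its nearest center
        labels = np.argmin(
            np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1), axis=1
        )
        # move each center to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers
```

In practice the features could be, for example, per-block radii of gyration and end-to-end distances of each polymer, and the resulting labels can then be correlated with each polymer's distance from the micelle center of mass, as in the core/interface analysis above.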
This understanding allows polymer topology to become another parameter that can be used in the rational design of polymer nanoparticles for use in a variety of applications including drug delivery.

We used the OPLS forcefield parameters as prescribed by the PolyParGen webserver48 to describe the interactions of the polymers, and the TIP3P water model.49 All of the simulations were performed using GROMACS50 versions 2019.2 and 2020.4. Each simulation reported consists of 20 polymers placed in a simulation box with initial dimensions of 147 Å × 147 Å × 147 Å containing approximately 105 000 water molecules, resulting in 3 wt% solutions of each polymer. The same simulation protocol was followed for each of the three simulations, which begins with energy minimization by steepest descent, followed by a 125 ps simulation in the NVT ensemble using the Nosé–Hoover thermostat to control the temperature (target temperature 300 K) with a timestep of 1 fs. Subsequently we ran 1 μs production simulations in the NPT ensemble using the Nosé–Hoover thermostat and the Parrinello–Rahman barostat to control the temperature (target temperature 300 K) and pressure (1 atm), respectively, with a 2 fs timestep, while all hydrogen-containing bonds were constrained using the LINCS algorithm.51 In all simulations, the non-bonded interactions were cut off at 12 Å, while the particle-mesh Ewald (PME) algorithm was used to calculate long-range electrostatic interactions. Appropriate burn-in times were calculated, with only the stationary portion of the production simulations used for analysis. A description of all of the analyses conducted on these simulations is provided in the ESI.†

Raquel López-Ríos de Castro: data curation, formal analysis, investigation, methodology, software, validation, visualisation, writing – original draft. Robert M. 
Ziolek: conceptualization, formal analysis, methodology, software, supervision, writing – review & editing. Christian D. Lorenz: conceptualization, funding acquisition, project administration, resources, supervision, writing – review & editing.

There are no conflicts to declare."}
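The production protocol described in the methods corresponds to a GROMACS .mdp fragment roughly like the following. This is a hypothetical reconstruction for illustration only; coupling constants, group definitions, and any value not stated in the text are assumptions, not the authors' actual input files:

```
; Hypothetical NPT production .mdp sketch (reconstructed, not the authors' file)
integrator            = md
dt                    = 0.002            ; 2 fs timestep
nsteps                = 500000000        ; 5e8 steps x 2 fs = 1 us
tcoupl                = nose-hoover
tc-grps               = System           ; grouping assumed
tau-t                 = 1.0              ; coupling time not stated; assumed
ref-t                 = 300              ; K
pcoupl                = Parrinello-Rahman
tau-p                 = 5.0              ; not stated; assumed
ref-p                 = 1.01325          ; 1 atm expressed in bar
rvdw                  = 1.2              ; 12 A non-bonded cutoff
rcoulomb              = 1.2
coulombtype           = PME              ; long-range electrostatics
constraints           = h-bonds          ; hydrogen-containing bonds
constraint-algorithm  = lincs
```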