diff --git "a/deduped/dedup_0543.jsonl" "b/deduped/dedup_0543.jsonl"
new file mode 100644
--- /dev/null
+++ "b/deduped/dedup_0543.jsonl"
@@ -0,0 +1,49 @@
+{"text": "Biologists have long known that the African great apes are our closest relatives, evolutionarily speaking. The recent release of the chimp draft genome sequence confirms this relationship at the nucleotide level, showing that human and chimp DNA is roughly 99% identical. Given the genetic similarity between human and nonhuman primates, the next big challenge is to identify those changes in the human genotype (the genetic complement of an organism) that generated the complex phenotype that distinguishes humans from the great apes. For example, modern humans have larger brains and a larger cerebral cortex than both nonhuman primates and their forebears, the early hominids. Elucidating the molecular mechanisms that account for this expansion will provide insight into brain evolution. Mutations in the ASPM gene cause microcephaly, a rare incurable disorder characterized by an abnormally small cerebral cortex. Since the microcephalic brain is about the same size as the early hominid brain, researchers hypothesized that ASPM\u2014whose normal function is unclear\u2014may have been a target of natural selection in the expansion of the primate cerebral cortex. Last year, researchers showed that selective pressure on the ASPM gene correlated with increased human brain size over the past few million years, since humans and chimps diverged from their common ancestor. Now, Vladimir Larionov and colleagues report that the selective pressure began even earlier\u2014as far back as 7\u20138 million years ago, when gorillas, chimps, and humans shared a common ancestor. One way to figure out which genes are involved in a physiological process is to analyze mutations in the genotype that generate an abnormal phenotype. Such efforts are easier in the relatively rare instance that one gene affects a single trait. 
Using a newly developed cloning technology, transformation-associated recombination (TAR) cloning, the researchers isolated the ASPM gene, including promoter and intronic (noncoding) sequences, from chimpanzees, gorillas, orangutans, and rhesus macaques in yeast artificial chromosomes (YACs). They sequenced these YACs to determine the complete genomic sequence of the ASPM gene from each species. Next, they characterized sequence changes among these species, based on whether the resulting substitutions in amino acids produced changes in the ASPM protein, to determine how fast the protein was evolving. Larionov and colleagues found that different parts of the protein evolved at different rates, with the rapidly evolving sequences under positive selection and the slowly evolving sequences under \u201cpurifying\u201d selection (changes that significantly disrupted the protein were jettisoned). Positive selection on genes is one important way to drive evolutionary change. By isolating and sequencing the entire ASPM gene in this way, Larionov and colleagues show that the increase in human brain size\u2014which began some 2\u20132.5 million years ago\u2014happened millions of years after the gene came under accelerated selective pressure. The ASPM gene, they conclude, likely plays a significant role in brain evolution. The next big challenge will be identifying the forces that preferentially acted on the human genotype to kick-start the process of brain expansion, forces that promise to shed light on what makes us human. New genomic technologies like TAR-cloning will likely accelerate this process."}
+{"text": "Some primatologists have argued that to understand human nature we must understand the behavior of apes. In the social interactions and organization of modern primates, the theory goes, we can see the evolutionary roots of our own social relationships. In the genomic era, the age-old question, What makes us human? has become, Why are we not apes? 
As scientists become more adept at extracting biological meaning from an ever-expanding repository of sequenced genomes, it is likely that our next of kin will again hold promising clues to our own identity. Many comparative genomics studies have looked to our more distant evolutionary relatives, such as the mouse and even yeast, to help interpret the human genome. Because the genomes of mice, yeast, and humans have diverged significantly since their last common ancestor\u2014about 75 million years ago for mouse and human, and about 1 billion years ago for yeast and human\u2014there are enough differences between the functional and nonfunctional regions to home in on biologically significant sequences, based on their similarity. Sequences that are similar, or conserved, in such divergent species are assumed to encode important biological functions. These comparative studies have successfully identified and characterized many human genes. And a similar approach comparing primate genomes can help scientists understand the genetic basis of the physical and biochemical traits that distinguish primate species. In this approach, however, rather than looking for genes that are shared across many species, scientists look for those that are unique to a species. One of the primary agents of genome evolution is gene duplication. Duplicated genes provide the raw material for the generation of novel genes and biological functions, which in turn allow the evolution of organismal complexity and new species. James Sikela and colleagues set out to compare gene duplications between humans and four of our closest primate relatives to find the genetic roots of our evolutionary split from the other great apes. Collecting the DNA of humans, chimpanzees, bonobos, gorillas, and orangutans from blood and experimental cell lines, the researchers used microarray analysis to identify variations in the number of copies of individual genes among the different species. 
They analyzed nearly 30,000 human genes and compared their copy numbers in the genomes of humans and the four great apes. Overall, Sikela and colleagues found more than 1,000 genes with lineage-specific changes in copy number, representing 3.4% of the genes tested. All the great ape species showed more increases than decreases in gene copy numbers, but relative to the evolutionary age of each lineage, humans showed the highest number of genes with increased copy numbers, at 134. Many of these duplicated human genes are implicated in brain structure and function. The gene changes identified in the study, the authors conclude, likely represent most of the major lineage-specific gene expansions (or losses) that have taken place since orangutans split from the other great apes, some 15 million years ago. And because some of these gene changes were unique to each of the species examined, they will likely account for some of the physiological and morphological characteristics that are unique to each species. One cluster of genes that amplified only in humans was mapped to a genomic area that appears prone to instability in human, chimp, bonobo, and gorilla. This region, which corresponds to an ancestral region in the orangutan genome, has undergone modifications in each of the other descendant primate species, suggesting an evolutionary role. In humans, gene mutations in this region are also associated with the inherited disorder spinal muscular atrophy. This fact, along with the observation that there are human-specific gene duplications in this region, suggests a link between genome instability, disease processes, and evolutionary adaptation. In their genome-wide hunt for gene duplications and losses in humans and great apes, Sikela and colleagues have highlighted genomic regions likely to have influenced primate evolution. 
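The cross-species copy-number comparison described above can be caricatured in a few lines. This is a hypothetical sketch, not Sikela and colleagues' actual pipeline: the gene-by-species log2 hybridization ratios, the threshold of 0.5, and the gene names are all illustrative assumptions.

```python
def lineage_specific_changes(ratios, threshold=0.5):
    """ratios: dict gene -> dict species -> log2 hybridization ratio
    relative to a common reference. A gene is called lineage-specific
    for a species when that species alone deviates beyond the threshold."""
    calls = {}
    for gene, by_species in ratios.items():
        outliers = [s for s, r in by_species.items() if abs(r) > threshold]
        if len(outliers) == 1:
            species = outliers[0]
            direction = "gain" if by_species[species] > 0 else "loss"
            calls[gene] = (species, direction)
    return calls

data = {
    "GENE_A": {"human": 1.2, "chimp": 0.1, "gorilla": -0.2, "orangutan": 0.0},
    "GENE_B": {"human": 0.1, "chimp": 0.2, "gorilla": 0.1, "orangutan": 0.3},
}
print(lineage_specific_changes(data))  # GENE_A is flagged as a human-specific gain
```

A real analysis would add normalization, replicate handling and statistical calls, but the core logic of asking "which species alone deviates for this gene" is the same.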
With the impending release of the chimp genome and more primate sequences to follow, scientists can take advantage of both sequence-based and microarray-based genome information to wrest additional insights from our primate cousins and flesh out the details of the human story."}
+{"text": "The Active Peptide from Shark Liver (APSL) was expressed in E. coli BL21 cells. The cDNA encoding the APSL protein was obtained from shark regenerated hepatic tissue by RT-PCR, then it was cloned in the pET-28a expression vector. The expressed fusion protein was purified by Ni-IDA affinity chromatography. SDS-PAGE and HPLC analysis showed the purity of the purified fusion protein was more than 98%. The recombinant APSL (rAPSL) was tested for its biological activity both in vitro, by its ability to improve the proliferation of SMMC7721 cells, and in vivo, by its significant protective effects against acute hepatic injury induced by CCl4 and AAP (acetaminophen) in mice. In addition, rAPSL could decrease the blood glucose concentration of mice with diabetes mellitus induced by alloxan. Paraffin sections of mouse pancreas tissues showed that rAPSL (3 mg/kg) could effectively protect mouse islets from lesions induced by alloxan, which indicates its potential application in theoretical research and industry. Natural APSL, from the shark Chiloscyllium plagiosum, has been isolated previously, and its physicochemical properties and pharmacodynamics have been studied. Due to the low yield of natural APSL from shark liver, its industrial application is limited. The fact that the cDNA sequence had been obtained by RT-PCR paved the way for producing rAPSL by gene engineering methods. Overexpression of rAPSL makes it easier to study its advanced structure and pharmacodynamic mechanism. The use of recombinant APSL is an alternative that could resolve the problem of low availability for industrial applications. Currently, limited information is available on the overexpression of rAPSL, as well as its pharmacological effects. 
The purpose of this investigation was to overexpress recombinant APSL (rAPSL) using gene engineering methods and to analyze the purified rAPSL by SDS-PAGE and HPLC, respectively. Additionally, some in vitro and in vivo experimental methods were used to identify the pharmacological properties of the purified rAPSL, compared with those of natural APSL. Sharks are among the most active marine animals. The shark liver, whose weight can account for 75% of the viscera, possesses intense immunoregulation effects and contains some novel bioactive substances. Many researchers, including Wu and colleagues, have studied the bioactive substances in shark liver. APSL was previously expressed in Escherichia coli with six continuous histidines added to the N-terminus of the rAPSL, which greatly accelerated the protein purification process; this work was reported by our group. ICR mice, weighing 18~22 g, were purchased from the Animal Center of Nanjing Medical University. AAP, purchased from ICN Biomedicals Inc., was dissolved in saline before use. Alloxan, RNasin, M-MLV, Taq DNA polymerase, dNTPs and pGEM-T Easy Vector were all purchased from Promega. The expression vector pET-28a, the host strain BL21 (DE3) and His-Bind Kit were purchased from Novagen. DNA Recovery Kits were offered by Vitagene. The purity of APSL, which was prepared by our laboratory and analyzed by high-performance liquid chromatography (HPLC), was above 98%. Based on the report from Ou about the N-terminus sequence of APSL, we designed one degenerate primer, 5\u2032-AT(C)TIGTIGGICCIATC(T)GGIGCIG-3\u2032. The PH shark liver model was built according to the method reported by Ye [18]. The total RNA was extracted from the regenerated hepatic tissue, and the PCR product was purified with DNA Recovery Kits and ligated with pGEM-T Easy Vector. The recombinant vector was selected by digestion with EcoR I and sequencing. The PCR product (350 bp) was collected, digested with Nde I and Sal I, and then subcloned into the pET-28a plasmid. 
The recombinant vector, named pET28a-APSL, was selected for sequencing after PCR analysis and restriction enzyme digestion. E. coli BL21 (DE3) carrying the recombinant expression plasmid pET28a-APSL was grown in LB liquid medium containing 50 \u03bcg/mL kanamycin at 37 \u00b0C in a shaking incubator. When A600 was about 0.4, IPTG was added into the medium at a final concentration of 1 mmol/L to induce expression of recombinant APSL, and then incubation continued at 37 \u00b0C with continuous agitation. At the 0, 1, 3 and 5 h time points of the induction period, samples were centrifuged, and the protein of interest was analyzed by SDS-PAGE gel electrophoresis according to the protocol described by Sambrook. The purified protein was analyzed by SDS-PAGE and HPLC (Zorbax 300SB-C18), respectively. SDS-PAGE was carried out using a 12% resolving gel and a 5% stacking gel. The mobile phase A of HPLC was H2O with 0.1% TFA, and the mobile phase B was analytical grade CH3CN. For high expression of recombinant APSL, the cells were harvested during the incubation at 37 \u00b0C at the 0, 1, 3, and 5 h time points after IPTG induction; the cells with optimized expression were then harvested by centrifugation and washed with 10 mmol/L Tris-HCl. The precipitate was dissolved in 50 mmol/L Tris-HCl and the cells were broken with ultrasound on ice. The solution was centrifuged at 5,000 g for 10 min at 4 \u00b0C in order to collect the pellets. The pellets were dissolved in 6 mol/L urea and 1 \u00d7 binding buffer, and stored at 4 \u00b0C. The fusion APSL was purified with a His-Bind kit according to the protocol described by Novagen Co. 
The purified recombinant APSL was washed in turn, twice with 10 mmol/L Tris-HCl (pH 7.9) containing 0.3% mercaptoethanol and twice with ddH2O. The bioactivity of rAPSL in vitro was tested by the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) method. A cell density of 10^4 cells/well and an MTT solution (5 \u03bcg/mL) were selected to carry out the test. The protocol of the present MTT study was performed as follows: after SMMC 7721 cells were seeded at 10^4 cells/well in a 96-well plate, various concentrations of rAPSL and natural APSL were added to the 96 wells. Then the plates were incubated for 30 h at 37 \u00b0C in a 5% CO2 humidified atmosphere. After the incubation period, 20 \u03bcL of MTT solution from a 5 mg/mL stock solution prepared in PBS was added to each well. Plates were returned to the incubator for 6 h. After the 6 h incubation period, the MTT solution was replaced by DMSO. The plates were incubated at 37 \u00b0C for 10 min with agitation. MTT conversion was measured using a microplate reader, reading the absorbance at 570 nm. Cell proliferation (as a percent of the negative control) was evaluated from the absorbance values. Fifty male ICR mice weighing 18~22 g were randomly divided into five groups: the control group, the model group, a group treated with natural APSL (3.0 mg/kg body wt), and groups treated with rAPSL at high dosage (3.0 mg/kg body wt) and at low dosage (1.0 mg/kg body wt), respectively. All groups were treated with the dose intraperitoneally for 4 days (twice per day). 
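The MTT readout described above reduces to simple arithmetic on absorbance values. A minimal sketch, assuming A570 readings for treated wells, untreated negative-control wells and a blank; the numeric readings are made up for illustration:

```python
def proliferation_percent(treated_a570, control_a570, blank_a570=0.0):
    """Cell proliferation as a percent of the negative control,
    from background-corrected absorbance at 570 nm."""
    return 100.0 * (treated_a570 - blank_a570) / (control_a570 - blank_a570)

# e.g. treated wells read 0.84, negative-control wells 0.70, blank 0.05
print(round(proliferation_percent(0.84, 0.70, 0.05), 1))  # 121.5
```

In practice one would average replicate wells before applying this formula.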
The mice, except those in the control group, were injected intraperitoneally with 0.2% CCl4, dissolved in olive oil, half an hour after the third administration. Blood glucose was measured on the 7th, 14th, 21st and 28th days after the alloxan administration. Ninety ICR mice, with equivalent numbers of females and males, weighing 18~22 g, were randomly arranged in six groups: the control group, the model group, the insulin-treated group (6 U/kg body wt), a group treated with natural APSL (3.0 mg/kg body wt), and groups treated with rAPSL at high dosage (3.0 mg/kg body wt) and low dosage (1.0 mg/kg body wt), respectively. Mice in the control group and the model group were treated only with saline (20 mL/kg body wt). All results were presented as x\u0304 \u00b1 s; Student\u2019s t-test was used for statistical analysis and statistical significance was defined as P < 0.05 or P < 0.01."}
+{"text": "The RLN1 and RLN2 relaxin genes are found as duplicates only in apes. Previous studies have revealed that the RLN1 and RLN2 paralogs in apes have a more complex history than their phyletic distribution would suggest. In this regard, alternative scenarios have been proposed to explain the timing of duplication, and the history of gene gain and loss along the organismal tree. In this article, we revisit the question and specifically reconstruct phylogenies based on coding and noncoding sequence in anthropoid primates to readdress the timing of the duplication event giving rise to RLN1 and RLN2 in apes. Results from our phylogenetic analyses based on noncoding sequence revealed that the duplication event that gave rise to RLN1 and RLN2 occurred in the last common ancestor of catarrhine primates, between \u223c44.2 and 29.6 Ma, and not in the last common ancestor of apes or anthropoids, as previously suggested. 
Comparative analyses based on coding and noncoding sequence suggest an event of convergent evolution at the sequence level between co-ortholog genes, the single-copy RLN gene found in New World monkeys and the RLN1 gene of apes, where changes in a fraction of the convergent sites appear to be driven by positive selection. The relaxin/insulin-like gene family includes signaling molecules that perform a variety of physiological roles mostly related to reproduction and neuroendocrine regulation. Several previous studies have focused on the evolutionary history of relaxin genes in anthropoid primates, with particular attention on resolving the duplication history of these genes. Convergent evolution is defined as the process whereby unrelated organisms independently reach similar character states. At the phenotype level, one of the best known examples of convergence is the wing, with which phylogenetically unrelated groups evolved the ability of flight independently. At the molecular level, several cases have been reported in which preexisting genes have changed their original function. The relaxin/insulin-like gene family includes signaling molecules that perform a variety of physiological roles mostly related to reproduction and neuroendocrine regulation. Recent studies have proposed alternative orthology scenarios for these genes (fig. 1A; fig. 1B). Under one scenario, the single copy RLN gene from New World monkeys would be a 1:1 ortholog to the RLN1 gene of apes, whereas the single copy RLN gene from Old World monkeys would be a 1:1 ortholog to the RLN2 gene of apes. However, dot-plot comparisons suggested the possibility that the RLN gene found in New World monkeys could be a 1:1 ortholog to the RLN2 gene of apes. The primate species included six apes, four Old World monkeys, two New World monkeys, one tarsier (Tarsius syrichta), and two strepsirrhines. 
We compared annotated exon sequences with unannotated genomic sequences using the program Blast2seq in 15 species of primates representing all main groups of the order. To evaluate the selective regime, we estimated the ratio of nonsynonymous to synonymous substitutions (\u03c9 = dN/dS): a first set of models allowed \u03c9 to vary along the branches of the tree, and the second set of models focused on comparing changes in \u03c9 along the different sites in the alignment between background and foreground sets of branches. We first compared the following two branch models: 1) a 1 \u2212 \u03c9 model in which a single \u03c9 estimate was assigned to all branches in the tree; and 2) a 2 \u2212 \u03c9 model, which assigned one \u03c9 to the ancestral branch of the New World monkey RLN clade, and a second \u03c9 to all other branches. We also implemented branch-site models, which explore changes in \u03c9 for a set of sites in a specific branch of the tree to assess changes in their selective regime. Initial studies had suggested that the duplication giving rise to RLN1 and RLN2 mapped to the last common ancestor of apes, between approximately 29.6 and 18.8 Ma. Because the observed differences between coding and noncoding phylogenies were statistically significant, our results are indicative of a pattern of convergent evolution at the sequence level. In all analyses the RLN1 and RLN2 paralogs of apes fell in two separate clades that did not deviate significantly from the expected organismal phylogenies. Phylogenetic reconstructions have been widely used in the literature to investigate events of putative convergent evolution at the sequence level. In this case, we investigated the potential role of natural selection on the evolution of the single copy RLN gene of New World monkeys, hypothesizing that the branch leading to the RLN gene of New World monkeys would have a dN/dS ratio significantly higher than 1, and that some of the codons under natural selection could have converged to the same state independently in both lineages.
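The choice between the 1 \u2212 \u03c9 and 2 \u2212 \u03c9 branch models described above is made with a likelihood ratio test on one degree of freedom. A small sketch of that test; the log-likelihood values here are invented for illustration (in practice they would come from software such as PAML):

```python
import math

def lrt_pvalue(lnl_null, lnl_alt):
    """Likelihood ratio test: 2*(lnL_alt - lnL_null) ~ chi-square(1 df).
    For one degree of freedom the chi-square tail is erfc(sqrt(x / 2))."""
    stat = 2.0 * (lnl_alt - lnl_null)
    return stat, math.erfc(math.sqrt(stat / 2.0))

# hypothetical log-likelihoods for the 1-omega (null) and 2-omega models
stat, p = lrt_pvalue(-2451.3, -2449.2)
print(round(stat, 2), round(p, 3))
```

Here the alternative model has one extra free parameter (the second \u03c9), which is why the test uses one degree of freedom.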
In particular, we focused on exploring the possibility that the phylogenetic affinity between the RLN gene from New World monkeys and the RLN1 paralog of apes is due to convergent evolution at the sequence level driven by natural selection. If this was the case, the branch leading to the RLN gene of New World monkeys would be expected to show an elevated dN/dS ratio. We therefore estimated \u03c9 (dN/dS) among the branches in the tree in a maximum likelihood framework. First, we compared a 2 \u2212 \u03c9 model, which assigned one independent \u03c9 estimate to the ancestral branch of the RLN clade of New World monkeys and a second one to the rest of the tree, with a 1 \u2212 \u03c9 model in which all branches were assigned the same \u03c9. The 2 \u2212 \u03c9 model was significantly better according to the likelihood ratio test. Under the 2 \u2212 \u03c9 model, the ancestral branch of the New World monkey RLN clade had an \u03c9 estimate of 1.77 whereas all other branches had an \u03c9 of 0.76 (P = 0.049), and several residues switched to a positive selection regime in the ancestral branch of the New World monkey RLN clade. The BEB analysis identified 35 codons under a positive selection regime: two in the region encoding the signal peptide, four in the region encoding the B peptide, 21 in the region encoding the C peptide, and eight in the region encoding the A peptide. One earlier study stated, \u201cThe function of the RLN1 gene in humans and higher primates is unknown.\u201d In the same work the authors also said, \u201cThe RLN1 gene is only found in humans and the great apes, but in some of these species, it is doubtful that a functional peptide is produced. Even in humans where mRNA expression is detected in multiple tissues, there is no evidence for functional peptide production.\u201d In agreement with these statements, Shabanpoor et al. (2009) wrote, \u201cthe mRNA expression of H1 relaxin has been detected in human deciduas, prostate gland and placenta trophoblast. 
However, its functional significance remains unknown.\u201d In the common marmoset (C. jacchus), the pattern of relaxin expression appears to be very similar to that in humans."}
+{"text": "The most common application of imputation is to infer genotypes of a high-density panel of markers on animals that are genotyped for a low-density panel. However, the increase in accuracy of genomic predictions resulting from an increase in the number of markers tends to reach a plateau beyond a certain density. Another application of imputation is to increase the size of the training set with un-genotyped animals. This strategy can be particularly successful when a set of closely related individuals are genotyped. Imputation on completely un-genotyped dams was performed using known genotypes from the sire of each dam, one offspring and the offspring\u2019s sire. Two methods were applied based on either allele or haplotype frequencies to infer genotypes at ambiguous loci. Results of these methods and of two available software packages were compared. Quality of imputation under different population structures was assessed. The impact of using imputed dams to enlarge training sets on the accuracy of genomic predictions was evaluated for different populations, heritabilities and sizes of training sets. Imputation accuracy ranged from 0.52 to 0.93 depending on the population structure and the method used. The method that used allele frequencies performed better than the method based on haplotype frequencies. Accuracy of imputation was higher for populations with higher levels of linkage disequilibrium and with larger proportions of markers with more extreme allele frequencies. Inclusion of imputed dams in the training set increased the accuracy of genomic predictions. Gains in accuracy ranged from close to zero to 37.14%, depending on the simulated scenario. 
Generally, the larger the accuracy already obtained with the genotyped training set, the lower the increase in accuracy achieved by adding imputed dams. Whenever a reference population resembling the family configuration considered here is available, imputation can be used to achieve an extra increase in accuracy of genomic predictions by enlarging the training set with completely un-genotyped dams. This strategy was shown to be particularly useful for populations with lower levels of linkage disequilibrium, for genomic selection on traits with low heritability, and for species or breeds for which the size of the reference population is limited. Prediction of breeding values of animals using genomic information was proposed by Meuwissen et al. and has since been widely adopted. In principle, an increase in marker density should result in higher LD between the markers and the quantitative trait loci underlying a given trait, and consequently in more accurate genomic predictions. However, the advantage of using a high-density panel for GS compared to a low-density panel depends on which markers are included in the low-density panel. Such a formulation can be interpreted in terms of variable selection in a linear model, which has been a topic of frequent research aiming at reducing over-parameterisation in statistical models for GS. Many of the studies done with imputation so far have focused on the increase in density of marker panels through imputation and its impact on accuracy of genomic predictions, including studies using dairy cattle data by Weigel et al. in Jerseys. Imputation can be used to increase the number of markers. However, the benefit is expected to reach a plateau beyond a certain density. Imputation can also be used to increase the size of the training set with animals that were not genotyped at all. Cleveland et al. 
investigated such a strategy. The objectives of this work were: (1) to investigate the performance of two imputation methods for a completely un-genotyped dam, using the information on its genotyped family members and the mating partner plus the estimates of either allele or haplotype frequencies; (2) to investigate the effects of different population structures, levels of LD and distribution of allele frequencies on the success of imputation; and (3) to evaluate the impact of enlarging a training set with imputed dams on the accuracy of genomic predictions for different populations, levels of heritability (h2) of the trait under selection, and sizes of training sets already available. The first imputation procedure (the Single_Step method) uses the information from the MGS, Sire and Offspring genotypes and allele frequencies to infer the dam\u2019s genotypes for all loci, unambiguously or not. For each genotype configuration of the MGS, Offspring and Sire, each possible genotype of the dam can be assigned a probability, which in the ambiguous cases can be expressed as a function of the allele frequencies. These probabilities for each case were derived and are available in the Additional file. The second imputation procedure is done in two stages and therefore will be referred to as the Two_Step method. In a first step, only the Dam genotypes that can be inferred with probability 1 are assigned. In the simulated genomes, marker positions were randomly allocated, marker allele frequencies in the first historical generation were set equal to 0.5 and the mutation rate was set to 2.5e-5. In order to generate different genomic structures that may influence the success of imputation, four populations were simulated, which differed in the level of LD and the presence or absence of selection. The increase in the level of LD desired for two of the populations was induced by simulating a bottleneck in the historical population. 
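The idea of inferring a dam's genotype from the MGS, Sire and Offspring genotypes plus allele frequencies can be sketched as a small Bayesian calculation. This is an illustrative toy version under simplifying assumptions (one biallelic locus, the dam's maternally inherited allele drawn at the population allele frequency), not the probability tables derived in the paper's Additional file:

```python
def transmit(genotype):
    """P(transmitted allele = 1) for a parent carrying 0, 1 or 2 copies."""
    return genotype / 2.0

def dam_posterior(mgs, sire, offspring, p):
    """Posterior over the dam's genotype (count of allele 1), given the
    MGS, Sire and Offspring genotype counts and allele-1 frequency p."""
    post = {}
    for a_pat in (0, 1):                      # dam's allele inherited from MGS
        pr_pat = transmit(mgs) if a_pat == 1 else 1.0 - transmit(mgs)
        for a_mat in (0, 1):                  # dam's other allele, from pop. freq.
            pr_mat = p if a_mat == 1 else 1.0 - p
            dam = a_pat + a_mat
            # likelihood of the offspring genotype: one allele from each parent
            pd, ps = transmit(dam), transmit(sire)
            lik = {0: (1 - pd) * (1 - ps),
                   1: pd * (1 - ps) + (1 - pd) * ps,
                   2: pd * ps}[offspring]
            post[dam] = post.get(dam, 0.0) + pr_pat * pr_mat * lik
    total = sum(post.values())
    return {g: v / total for g, v in post.items()}

# MGS homozygous 11, Sire homozygous 00, heterozygous Offspring, p = 0.5
print(dam_posterior(mgs=2, sire=0, offspring=1, p=0.5))
```

Unambiguous configurations put all posterior mass on one genotype; ambiguous ones split the mass as a function of p, mirroring the distinction the text draws between the two cases.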
Therefore, the four scenarios were created as follows: no bottleneck and no selection (LowLD_NoSel), no bottleneck and selection (LowLD_Sel), bottleneck and no selection (HighLD_NoSel), and bottleneck and selection (HighLD_Sel). For each of the four scenarios, 10 replicates were simulated. For the comparison of imputation methods, genomic data were simulated using the software QMSim. To generate a minimum level of LD for the two scenarios without bottleneck, a historical population of 4000 animals was mated at random for 1600 discrete generations, without selection, without migration and with an equal number of animals from both genders. Then the population size was increased to 4040 in the following 20 generations and kept at a constant size for 20 additional generations. For the two scenarios with bottleneck, the historical population was initially set to 2000 animals and mated at random for 2500 generations. After this, a bottleneck was simulated by gradually decreasing the population size to 200 animals over the following 70 generations; these 200 animals were further mated at random for 10 generations. The population size was then gradually expanded from 200 to 4040 animals within the next 20 generations, and remained at a size of 4040 for 20 additional generations. In all four scenarios, population size was 4040 in the last historical generation, which included 40 males. Starting with the 4000 female and 40 male founders from the last historical generation, 10 additional generations were simulated to form the recent population. The trait under selection was simulated with a heritability of h2\u2009=\u20090.20. Since the proportions of female and male offspring were identical, the last generation of the recent population contained 2000 female offspring. Genotype imputation was then performed on the dams of these 2000 female offspring from the last generation.
In the recent population, the proportion of male offspring was 0.5, litter size was 1, a random mating design was applied and replacement ratios for sires and dams were 0.5 and 0.25, respectively. These parameters were common to all four scenarios. For the two scenarios without selection, a random selection design was used and the culling design was based on the age of the animal. For the two scenarios with selection, both selection and culling designs were based on estimated breeding values (EBV). These EBV were obtained by solving Henderson\u2019s mixed model equations using pedigree information. To investigate the impact of imputation on the accuracy of genomic predictions, the size of the training set used for SNP effect estimation is a relevant parameter. For that purpose, the same simulation procedures described above for the four scenarios were applied again in another simulation, in which a larger population was generated at the end. Instead of using a size of 4040 for the last historical generation, the number of female founders was set to 32 000 so that 16 000 female offspring in the last generation were available for the imputation of their dams. As above, 10 replicates of each scenario were simulated for the larger populations. The level of LD was measured as the squared correlation coefficient (r2) between each pair of markers in the last generation. To minimize the influence of the minor allele frequency (MAF) on the measure of LD, r2 values were computed only for pairs of markers with a MAF greater than 0.05. The decay of LD with increasing inter-marker distances was also assessed by calculating the mean r2 within bins of inter-marker distances. Outputs from QMSim included information about the paternal and maternal alleles of each locus, which allowed the determination of linkage phase and the calculation of haplotype frequencies. 
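The r2 statistic and the MAF > 0.05 filter described above can be computed directly from phased haplotypes. A minimal sketch, assuming alleles coded 0/1 across a set of haplotypes; the two toy haplotype vectors are made up for illustration:

```python
def r_squared(hap_a, hap_b):
    """LD between two loci as r^2 = D^2 / (pA(1-pA) pB(1-pB)),
    computed from phased haplotypes coded 0/1."""
    n = len(hap_a)
    p_a = sum(hap_a) / n
    p_b = sum(hap_b) / n
    p_ab = sum(a and b for a, b in zip(hap_a, hap_b)) / n   # freq of 1-1 haplotype
    d = p_ab - p_a * p_b
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

def passes_maf(hap, maf=0.05):
    """Keep only loci whose minor allele frequency exceeds the cutoff."""
    p = sum(hap) / len(hap)
    return min(p, 1 - p) > maf

hap_a = [1, 1, 0, 0, 1, 0, 1, 0]
hap_b = [1, 1, 0, 0, 1, 0, 0, 1]
if passes_maf(hap_a) and passes_maf(hap_b):
    print(round(r_squared(hap_a, hap_b), 3))  # 0.25
```

Binning such pairwise r2 values by inter-marker distance gives the LD-decay curves mentioned in the text.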
The level of LD in the four simulated scenarios could then be assessed. To generate phenotypes with different levels of heritability, different magnitudes of residual terms were added to the simulated true breeding values, generating phenotypic values representing ten levels of h2, ranging from 0.05 to 0.5 in steps of 0.05. For each size of training set and each h2, allele substitution effects of every locus on the simulated phenotypes were fitted in a multiple random regression model similar to the GBLUP method of Meuwissen et al., with effects estimated as g = (Z\u2032Z + I\u03d5)\u22121Z\u2032(y \u2212 1\u03bc), where g is the vector of estimated allele substitution effects; Z is the matrix of SNP genotypes, coded as the number of copies of allele 2, of the animals in the training set; y is the vector of phenotypes; \u03bc is an overall mean; 1 is a vector of ones; I is an identity matrix of order equal to the number of markers and \u03d5 is an assumed ratio of residual to marker variances. The impact of the imputation of Dam genotypes on the accuracy of genomic predictions was investigated for the imputation method with the best performance. For that purpose, the simulated data sets with 16 000 female offspring in the last generation were used. Four different sizes of training sets to estimate marker effects were created by splitting each replicate into subsets containing 2000, 4000, 8000 and 16 000 female offspring. From each subset, 90% of the animals were assigned to the training set and the remaining 10% to the validation set. Accuracy of genomic prediction was then assessed by cross-validation, i.e. marker effects were estimated with data from animals in the training set and used to predict genomic breeding values of animals in the validation set. The sizes of the training sets containing only genotyped animals (TS) were 1800, 3600, 7200 and 14 400. Training sets augmented (TSA) with imputed Dams were created, resulting in training sets of 3800, 7600, 15 200 and 30 400 animals. The impact of imputation was evaluated by comparing the accuracies of genomic predictions using TS or the corresponding TSA. 
This ratio of variances was calculated using the simulated h² values and assuming a marker variance equal to the additive variance divided by the number of markers. For each scenario and replicate, only markers with a MAF greater than 0.05 were used in the estimation of SNP effects. Genomic breeding values were then predicted as GEBV = Zĝ, where ĝ is the vector of estimated SNP effects and Z is the matrix of SNP genotypes, coded as the number of copies of allele 2, of the animals in the validation set. Accuracy of genomic evaluation was calculated as the correlation between GEBV and the simulated true breeding values of the animals in the validation set.

In the scenarios without selection, the mean r² at the largest inter-marker distances was less than one third of the mean r² at an inter-marker distance smaller than 25 kb, whilst in the scenarios with selection it was still more than a half of that. Imputation results for each imputation method and each scenario, averaged across replicates, are presented in the corresponding table. Overall, the Single_Step method performed better than the Two_Step method for the four simulated scenarios. As expected, the quality of imputation with the Two_Step method increased with higher levels of LD. A similar trend was observed with the Single_Step method. Although LD information is not directly used in the Single_Step method, its performance was influenced by the level of LD, since the simulated populations with a higher level of LD presented distributions of allele frequencies with greater densities at more extreme allele frequencies. The Two_Step method resembles an imputation approach from low to high density panels, in which first a 'low density chip' is built based on the unambiguous cases and then the rest is filled in with LD information.
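The prediction step described here (shrunken SNP effects, GEBV computed as Zĝ, accuracy as a correlation with true breeding values) can be written compactly. This is a hedged sketch, not the authors' implementation; the variable names and the use of numpy are my own:

```python
import numpy as np

def estimate_snp_effects(Z, y, h2):
    """Random regression (SNP-BLUP, equivalent to GBLUP) with ridge shrinkage.

    Z: n x m genotype matrix, coded as the number of copies of allele 2.
    h2: assumed heritability; marker variance = additive variance / m, so the
    residual-to-marker variance ratio is phi = (1 - h2) / (h2 / m).
    """
    n, m = Z.shape
    phi = (1.0 - h2) / (h2 / m)
    mu = float(y.mean())                       # overall mean
    g_hat = np.linalg.solve(Z.T @ Z + phi * np.eye(m), Z.T @ (y - mu))
    return mu, g_hat

def prediction_accuracy(Z_val, g_hat, tbv_val):
    """Correlation between GEBV = Z g_hat and simulated true breeding values."""
    gebv = Z_val @ g_hat
    return float(np.corrcoef(gebv, tbv_val)[0, 1])
```

Marker effects would be estimated on the training animals and the accuracy evaluated on the held-out validation animals, mirroring the cross-validation design described above.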
However, three main differences must be pointed out: (i) the Two_Step method starts from completely un-genotyped animals; (ii) after the first step, Dams have genotypes for a 'low density chip', but with a different chip for each Dam and not a set of evenly spaced markers common to all Dams; and (iii) information on the genotyped relatives is used only in the first step, which means that after the 'low density chip' is built, the only information available for imputation is LD, whereas in a low to high density approach, one would still have the possibility of using family information. Obviously, if a low density panel of SNPs was also available for these Dams, the average success rate would be even greater, but at the cost of genotyping the Dams for the low density chip. Inspection of the number of genotypes which can be imputed unambiguously may provide an approximate estimate of the expected success rate that may be achieved by imputation. Such an estimate could then be used as an aid to choose the Dams to be genotyped with a low density chip. In the case of a group of Dams for which, say, 10 or 15% of the loci can be unambiguously inferred from family information alone, one could choose to leave them completely un-genotyped and do the imputation with the Single_Step method. Knowledge about the population structure under consideration would also be required in such a decision process. In order to account for that, simple experiments could be conducted to empirically estimate the expected success rate for Dams with a given number of loci inferred with a probability of one. One aspect of the imputation procedures proposed here is that genotypic information is assumed to be available on a specific set of animals, including the maternal grandsire (MGS), the Sire and one Offspring of the Dam to be imputed. Genotypes at each locus are commonly coded as 11, 12 or 22. For genomic selection purposes, and according to how marker genotypes are modelled, genotypes at each locus can also be assigned a continuous value within the range 0–2.
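The 'unambiguous' cases discussed above are simple Mendelian deductions. As an illustration only (a sketch of a single family constellation; the full decision algorithm, which also uses the MGS, is given in the paper's additional file), the allele a Dam must have transmitted can be derived from the Sire and Offspring genotypes:

```python
def maternal_allele(sire, offspring):
    """Allele (1 or 2) the Dam must have transmitted, or None if ambiguous.

    Genotypes are tuples of alleles, e.g. (1, 2).
    - Homozygous offspring: the maternal allele equals either offspring allele.
    - Heterozygous offspring with a homozygous sire: the paternal allele is
      known, so the remaining offspring allele must be maternal.
    - Heterozygous offspring and heterozygous sire: ambiguous.
    """
    a, b = offspring
    if a == b:
        return a
    if sire[0] == sire[1]:
        return b if a == sire[0] else a
    return None
```

Counting the loci where such deductions succeed gives the kind of estimate suggested above for deciding which Dams to genotype with a low density chip.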
Instead of the number of copies of allele 2, genotypes are defined as the allele 2 dosage. This definition avoids the loss of information caused by rounding the genotype to one of the three classes. Allele dosages for each genotype configuration of the MGS, Sire and Offspring, as a function of allele frequencies, were derived and are provided in the additional file, together with the decision algorithm for assigning the imputed Dam genotype. Imputation accuracies from findhap.f90 were lower than accuracies from Single_Step* and Two_Step. The algorithm implemented in findhap.f90 is a combination of pedigree haplotyping and population haplotyping. Our results indicate that the amount of genotyping information available in the situation considered here seemed to be insufficient for the pedigree haplotyping algorithm to satisfactorily impute a completely un-genotyped Dam. Many other studies reporting performance results from findhap.f90 applied the program with the main purpose of imputing genotypes from low to high density chips. Accuracies of imputation from AlphaImpute were higher than from the Two_Step method, especially in the LowLD scenarios. In some cases, although the complete genotype cannot be inferred unambiguously, one can at least be sure about the presence of one of the alleles. This piece of information is neglected by the Two_Step method, since when moving from the first to the second step, the only information available for haplotype reconstruction is the set of unambiguous genotypes. An improvement in imputation accuracy from the Two_Step method could be achieved if known alleles were also taken into account in the haplotyping step. This information seems to be more efficiently used by the algorithm implemented in AlphaImpute, which is a combination of long-range phasing and haplotype library imputation. Results from AlphaImpute were similar to results obtained with Single_Step*.
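The allele-2 dosage defined above is simply the expected number of copies of allele 2 under the three genotype-class probabilities; a minimal sketch (the function name is mine):

```python
def allele2_dosage(p11, p12, p22):
    """Continuous genotype in [0, 2]: expected count of allele 2, i.e.
    0*P(11) + 1*P(12) + 2*P(22), normalised in case the probabilities
    do not sum exactly to one."""
    total = p11 + p12 + p22
    if total <= 0:
        raise ValueError("genotype probabilities must have a positive sum")
    return (p12 + 2.0 * p22) / total
```

For example, probabilities (0.1, 0.6, 0.3) give a dosage of 1.2, rather than forcing the genotype into the most likely class.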
In the LowLD scenarios, AlphaImpute performed better and in the HighLD scenarios, results from Single_Step* were better. The strength of AlphaImpute is its flexibility, since it can handle different levels of relationship between the surrogate and the genotyped animals. The strength of the Single_Step* method is its simplicity and ease of programming, which enables very fast imputation. Since the difference in performance was smaller in the LowLD than in the HighLD scenarios and the intended application was for the specific situation considered here, Single_Step* was the method of choice to investigate the impact of imputation on accuracy of genomic predictions.

An overview of the level and the decay of LD with inter-marker distance for each of the four simulated populations used to investigate the impact of imputation on the accuracy of genomic predictions is presented in the corresponding table. Accuracies of genomic prediction across all levels of h² and numbers of offspring are provided in the additional files. As a general trend, accuracy of genomic predictions increased with increasing h² and increasing sizes of training sets. This is consistent with the formula proposed by Daetwyler et al., in which accuracy is a function of the heritability and the number of independent chromosome segments. The simulated population structure also had an impact on prediction accuracy. As expected, accuracies on average increased with increasing levels of LD observed from scenario LowLD_NoSel to scenario HighLD_Sel. These differences between scenarios are also consistent with the formula of Daetwyler et al.

An increase in the accuracy of GEBV was observed when using TSA instead of TS, which demonstrates that enlarging a training set with imputed Dams represents an advantage. The extent of this advantage differed between the different population structures simulated. In the LowLD_NoSel scenario, the gain in accuracy, expressed as a percentage of the accuracy with TS, ranged from 3.7% to 37.1%. The benefit of incorporating imputed Dams in the training set was overall larger for this scenario, despite the fact that with this scenario genotype imputation was performed with the poorest quality. In the other three scenarios, the maximum gains were 11.1% (LowLD_Sel), 15.3% (HighLD_NoSel) and 11.9% (HighLD_Sel), and the minimum gains were close to zero. Because imputation is not perfect, the increase in accuracy obtained with TSA was generally lower than what could be achieved by enlarging TS with another set of genotyped offspring.

For each of the four scenarios, we compared the increase in accuracy obtained when: (1) enlarging TS by doubling the number of genotyped offspring; or (2) enlarging TS with imputed Dams. For example, in the LowLD_NoSel scenario with an h² of 0.05, moving from a TS of 1800 to a TS of 3600 offspring gave a gain in accuracy of 32% (from 0.31 to 0.41). Adding the 2000 imputed Dams to a TS of 1800 offspring gave a gain in accuracy of 23% (from 0.31 to 0.38), which is 72% of the gain in the first case and reflects the fact that the proportion of correctly imputed genotypes of Dams is lower than 1. On average, across all h² and numbers of offspring, the gain in accuracy in the second case was 93% (LowLD_NoSel), 62% (LowLD_Sel), 78% (HighLD_NoSel) and 63% (HighLD_Sel) of the gain in accuracy obtained in the first case. The first case would require doubling the costs by genotyping another set of offspring, whereas in the second case, no additional costs for genotyping are needed. If there is funding available for genotyping more animals, then increasing the size of the training set with genotyped animals should improve the accuracy of genomic predictions more. Different strategies can be used to genotype more animals, e.g. genotyping for a low density chip the Dams with very few loci for which imputation can be unambiguously made, as pointed out in the previous section. Nevertheless, according to our results, even if all available funding for genotyping has been spent, there is still room for an additional improvement in genomic predictions by enlarging TS with imputed Dams.

The magnitude of the gain in accuracy when moving from TS to TSA varied not only between scenarios but also for different values of h² and numbers of offspring already available in TS. The effects of h², number of offspring and simulated scenario on the difference between accuracies obtained with TS and TSA were all significant (P < 0.001). Pszczola et al. also reported gains in accuracy from adding non-genotyped animals to the reference population, with smaller gains at higher h², which is consistent with our results. The population of Pszczola et al. was simulated with a high level of LD (a mean r² of 0.41 between adjacent markers, which were on average 0.13 cM apart). This level of LD is higher than that observed in our scenario with the highest LD (HighLD_Sel), in which the increase in accuracy of genomic predictions was overall the lowest in our study. This agrees with our indication that the impact of enlarging a reference population with imputed individuals in terms of accuracy of genomic prediction depends on the population structure under consideration.

Regression analyses of the percentage increase in accuracy obtained with TSA against the accuracy already obtained with TS across all h² and numbers of offspring for the four scenarios were performed. Results fitted a negative linear relationship well, with coefficients of determination of 0.80, 0.88, 0.68 and 0.85 for scenarios LowLD_NoSel, LowLD_Sel, HighLD_NoSel and HighLD_Sel, respectively.

Additional file: Probabilities of each Dam genotype at a SNP, given genotypes of relatives (MGS, Sire, and one Offspring) and frequencies of alleles 1 (p) and 2 (q), and decision algorithm for assigning the imputed Dam genotype. Table S3. Correlation between true and genomic estimated breeding values in the validation set obtained when estimating SNP effects with Offspring only (TS) or with an augmented training set including imputed Dams (TSA). Figure S1. Pair-wise values of r² against inter-marker distance for all replicates of the four scenarios.
Figure S2. Histograms of the frequencies of allele 2 for all replicates of the four scenarios. Figure S3. Distributions of the number of unambiguously imputed loci per Dam for all replicates of the four scenarios. Figure S4. Description: Regression analyses of the percentage increase in accuracy obtained with TSA against the accuracy already obtained with TS across all h2 and numbers of offspring for the four scenarios.Pair-wise values of rClick here for file"} +{"text": "Cystic Fibrosis (CF) is characterized by chronically inflamed airways, and inflammation even increases during pulmonary exacerbations. These adverse events have an important influence on the well-being, quality of life, and lung function of patients with CF. Prediction of exacerbations by inflammatory markers in exhaled breath condensate (EBC) combined with early treatment may prevent these pulmonary exacerbations and may improve the prognosis.To investigate the diagnostic accuracy of a set of inflammatory markers in EBC to predict pulmonary exacerbations in children with CF.k-nearest neighbors (KNN) algorithm was applied (SAS version 9.2).In this one-year prospective observational study, 49 children with CF were included. During study visits with an interval of 2 months, a symptom questionnaire was completed, EBC was collected, and lung function measurements were performed. The acidity of EBC was measured directly after collection. Inflammatory markers interleukin (IL)-6, IL-8, tumor necrosis factor \u03b1 (TNF-\u03b1), and macrophage migration inhibitory factor (MIF) were measured using high sensitivity bead based flow immunoassays. Pulmonary exacerbations were recorded during the study and were defined in two ways. The predictive power of inflammatory markers and the other covariates was assessed using conditionally specified models and a receiver operating characteristic curve (SAS version 9.2). In addition, Sixty-five percent of the children had one or more exacerbations during the study. 
The conditionally specified models showed an overall correct prediction rate of 55%. The area under the curve (AUC) was equal to 0.62. The results obtained with the KNN algorithm were very similar. Although there is some evidence indicating that the predictors outperform random guessing, the general diagnostic accuracy of EBC acidity and the EBC inflammatory markers IL-6, IL-8, TNF-α and MIF is low. At present it is not possible to predict pulmonary exacerbations in children with CF with the chosen biomarkers and the method of EBC analysis. The biochemical measurements of EBC markers should be improved and other techniques should be considered. Cystic fibrosis (CF) is the most common life-shortening genetic disease in the Caucasian population, caused by a mutation in the cystic fibrosis transmembrane conductance regulator (CFTR) gene. Chronic inflammation of the airways is a major characteristic of CF. The inflammatory response is excessive and dysregulated; furthermore, it plays an important role in both chronic bacterial infections and pulmonary exacerbations. For this one-year observational cohort study, children with CF between 5 and 18 years of age were recruited from three CF centers in the Netherlands. Families were approached for the study by one of the pediatric pulmonologists during regular hospital visits and received written and oral information. The Medical Ethical Committee of the Maastricht University Medical Centre approved this study. Informed consent was signed by all parents and by children aged 12 years and older. In the Maastricht University Medical Centre, enrolment started in December 2011 and follow-up ended in May 2013. In the University Medical Centre Utrecht, the first children were enrolled in January 2012 and follow-up ended in June 2013. Finally, enrolment of children in the Amsterdam Medical Centre started in March 2012 and follow-up ended in August 2013. Study visits were scheduled every 2 months during one year.
To lessen the burden of the study and avoid loss to follow-up, we combined study visits with regular hospital visits as much as possible. CF disease was defined as the presence of characteristic clinical features in combination with an abnormal sweat test (chloride > 60 mmol/L) and/or two CF mutations. Exclusion criteria were: 1) severe cardiac abnormalities; 2) mental disability; 3) no technically adequate performance of measurements; 4) on the waiting list for lung transplantation; 5) colonization with Burkholderia cepacia or methicillin-resistant Staphylococcus aureus; 6) participation in an intervention trial. The occurrence of a pulmonary exacerbation was the primary outcome measure, which was defined in two ways: first, according to the definition used in the EPIC trial; and second, as the start of a course of therapeutic antibiotics by the responsible pediatric pulmonologist. The presence of an exacerbation according to the EPIC trial was established by one of the major criteria alone, or two of the minor signs, and fulfillment of symptom duration (duration of signs/symptoms ≥5 days or significant symptom severity). Treatment of pulmonary exacerbations during the study occurred in accordance with the Dutch Central Guidance Committee (CBO) guideline. During every study visit, the same measurements took place: first, children completed a questionnaire, thereafter EBC collection took place, and finally, lung function measurements were performed. All measurements were carried out by extensively trained members of the research teams. The measurements were the same for all children. A questionnaire derived from the validated Dutch version of the revised Cystic Fibrosis Questionnaire (CFQ-R) was used to evaluate symptoms. Children breathed tidally for ten minutes, while wearing a nose-clip, through a mouthpiece connected to a two-way non-rebreathing valve into a condenser system, as described previously.
The Masterscreen Pneumo was used to measure dynamic lung function parameters, according to ATS/ERS standards: forced expiratory volume in one second (FEV1), forced vital capacity (FVC) and maximum expiratory flow at 50% of FVC (MEF50), all expressed as a percentage of the predicted normal value. Static lung function indices were determined at 0 and 12 months by means of body plethysmography using the Masterscreen Body. Sex, age, colonization with Pseudomonas aeruginosa at inclusion, ABPA at inclusion, the use of prophylactic or therapeutic antibiotics, the use of (inhalation) corticosteroids, the time between visits and an exacerbation at the previous visit were considered as covariates. Information about the use of medication was checked at every study visit. Forty-nine children were included in this observational cohort study. Based on an assumed prevalence of an exacerbation of 50% and assuming a worst-case sensitivity and specificity of 0.50, the width of the confidence interval for sensitivity and specificity would be 0.4 (0.3–0.7). For the statistical data analysis, the SAS software package version 9.2 was used. To assess the prediction capacity of the biomarkers, a form of conditionally specified models, the so-called transition models, was used. The data set was divided into training and validation data sets. The training data set, containing the information of 36 randomly selected children, was used to estimate the predictive models, and the predictive performance of the models was evaluated using the information in the validation data set containing the information of the remaining 13 children. The predictive capability of the models was primarily evaluated using the area under the corresponding ROC curves. Additionally, the percentage of correct predictions was assessed using the validation data set.
In transition models, a measurement in a longitudinal sequence is described as a function of previous outcomes and covariates. In addition, the KNN algorithm was used to evaluate the predictive performance of the biomarkers in the validation data set using the information in the training data set. KNN is a non-parametric lazy learning algorithm. All 49 patients were included in the statistical analysis, despite loss to follow-up or exclusion during the study. The proportion of missing values was generally low (less than 3.5%); the exception was pH, with about 7.6% missing values. Only the presence of missing values in pH was considered for the construction of the predictive model; these missing values had no predictive value. Six out of the 49 children were lost to follow-up, mostly because of personal reasons (n = 5), or because of a mild adverse effect of inhaled tobramycin (n = 1). Data of the children who were lost to follow-up were included in the analysis ('intention to treat'). Forty-nine children with CF were included in the study. The lung function was good, with a mean FEV1 of 87.4% of the predicted value. The nutritional status was good, as reflected by the BMI and BMI-SDS, and the majority of the children had a homozygous DF508 mutation (73.5%). When the EPIC trial definition for a pulmonary exacerbation was used, 32 children (65%) had 1 or more exacerbations during the study. There was great variability in the concentrations of measured biomarkers in EBC; the distribution of EBC acidity and of the concentrations of EBC biomarkers is given in the corresponding table. The best predictive results were obtained using the most complex model (Model 2).
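A transition model of this kind can be sketched as a logistic regression in which the previous outcome enters as one more covariate. This is an illustrative sketch only (the study used SAS; the function and coefficient names here are hypothetical), with a configurable classification threshold:

```python
import math

def exacerbation_probability(beta0, beta_prev, betas, prev_outcome, covariates):
    """First-order transition model: P(Y_t = 1 | Y_{t-1}, x_t) with a logistic
    link; the previous outcome Y_{t-1} is treated as an additional covariate."""
    eta = beta0 + beta_prev * prev_outcome
    eta += sum(b * x for b, x in zip(betas, covariates))
    return 1.0 / (1.0 + math.exp(-eta))

def predict(prob, threshold=0.48):
    """Dichotomise the predicted probability into exacerbation yes/no."""
    return int(prob > threshold)
```

This makes explicit how the model conditions on whether an exacerbation occurred at the previous visit, which is what distinguishes it from models that treat visits as independent.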
The estimated parameters for this model are provided in the supplementary material. However, the analysis of the ROC curve showed that the best predictions were obtained using a threshold of about 0.48 for the predictive probability of exacerbation, which led to a sensitivity of 0.70 and a specificity of 0.50. Thus, in the best scenario, the model would detect 70% of the exacerbations but would incorrectly predict an exacerbation in 50% of the cases in which none occurred. The predictive model (with threshold 0.48) led to an area under the curve of 0.62 (CI 0.49–0.75). The results obtained when the KNN algorithm was used were very similar. The KNN algorithm correctly predicted 59% of the events. If the second definition of a pulmonary exacerbation (when the responsible pediatric pulmonologist started a course of therapeutic antibiotics) was used, the overall correct prediction of the transitional model was 49% and the KNN algorithm predicted correctly 54% of the events. In this study, we assessed the diagnostic accuracy of a set of biomarkers in EBC to predict pulmonary exacerbations in children with CF. Overall, we found low predictive power of the EBC acidity and the inflammatory markers IL-6, IL-8, TNF-α and MIF. Neither the definition of exacerbations nor the statistical method significantly affected the results. At present, it is not possible to predict pulmonary exacerbations in children with CF by means of the chosen biomarkers and methods. To our knowledge, only Horak et al. performed a longitudinal study to investigate whether an inflammatory marker, EBC nitrite, was helpful in monitoring lung disease in children with CF. They found that EBC nitrite could not predict pulmonary exacerbations or changes in pulmonary function or clinical and radiological scores. An important strength of our study is its prospective and longitudinal character: we have followed 49 children with CF during one year.
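The operating characteristics reported here (sensitivity and specificity at a probability threshold, and the AUC) can be computed directly from predicted probabilities. A self-contained sketch, using the rank (Mann–Whitney) formulation of the AUC rather than any particular statistics package:

```python
def sensitivity_specificity(y_true, probs, threshold):
    """Sensitivity and specificity after dichotomising at `threshold`."""
    tp = fn = tn = fp = 0
    for y, p in zip(y_true, probs):
        if y == 1:
            tp += p > threshold
            fn += p <= threshold
        else:
            tn += p <= threshold
            fp += p > threshold
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, probs):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case receives a higher predicted probability than a randomly
    chosen negative case (ties count half)."""
    pos = [p for y, p in zip(y_true, probs) if y == 1]
    neg = [p for y, p in zip(y_true, probs) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to random guessing, which puts the reported 0.62 (CI 0.49–0.75) into perspective.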
We obtained EBC every 2 months, and recorded all exacerbations to investigate the predictive power of inflammatory markers in EBC. Furthermore, we used advanced statistical methods, transition models and the KNN algorithm, which take recurrent and dependent findings into account and in this way fit the longitudinal nature of our study. This is in contrast to other predictive models that presume that one condition or finding independently leads to one outcome. Another strength is the use of two definitions of a pulmonary exacerbation as the primary outcome measure, which minimizes information bias as a result of misclassification. If we had only used the stringent definition of the EPIC trial, we might have missed mild exacerbations, and incorrectly classified children as being stable when they were not. In addition, the use of two definitions accounts for heterogeneity of this primary outcome measure. There are several explanations that could account for not being able to predict pulmonary exacerbations with the chosen biomarkers in EBC; one is that, in comparison with earlier studies, the assays may not have coped with the strong dilution of the EBC samples. Although the collection of EBC is non-invasive, safe and fast, and EBC originates directly from the airways, inflammatory markers in EBC currently do not contribute to the prediction of pulmonary exacerbations in children with CF. Future research should focus on the development of EBC-specific sensitive assays for the analysis of inflammatory markers in EBC that can cope with the strong dilution of EBC and the possible matrix effects. Furthermore, the potential of other techniques to analyze EBC, like Nuclear Magnetic Resonance (NMR) spectroscopy, metabolomic profiling, gene expression or microbiome analyses, should be explored. In conclusion, we found that 2-monthly assessed inflammatory markers in EBC and the acidity of EBC were not able to predict pulmonary exacerbations in children with CF. This may well be due to methodological problems concerning the biochemical analysis of EBC.
Considering chronic airway inflammation is a major hallmark of CF and pulmonary exacerbations negatively influence the prognosis, it would be a big step forward to be able to measure this airway inflammation (directly and non-invasively) and predict upcoming exacerbations.S1 Table(DOCX)Click here for additional data file."} +{"text": "Move for Well-being in School programme using the RE-AIM framework. The purpose was to gain insight into the extent by which the intervention was adopted and implemented as intended and to understand how educators observed its effectiveness and maintenance.The aim of this study was to address the gap in the translation of research into practice through an extensive process evaluation of the Public schools located in seven municipalities in Denmark were invited to enroll their 4th to 6th grade classes in the project. Of these, 24 school decided to participate in the project in the school-year 2015\u201316 and were randomly (cluster) allocated to either intervention or control group. A process survey was completed online by school personnel at the start, at midterm, and at the end of the school year. Additionally, informal interviews and observations were conducted throughout the year.At the 12 intervention schools, a total of 148 educators were involved in the implementation of the programme over the school-year. More than nine out of ten educators integrated brain breaks in their lessons and practically all the physical education teachers used the physical education lesson plans. The educators delivered on average 4.5 brain breaks per week and up to 90% of the physical education teachers used the project lesson plans for at least half of their classes. 
Half of the educators initiated new recess activities. A total of 78%, 85% and 90% of the educators believed that the implemented recess, brain break and physical education components, respectively, 'to a high degree' or 'to some degree' promoted the pupils' well-being. This study shows that it is possible to design a school-based PA intervention that educators largely adopt and implement. Implementation of the PA elements was stable throughout the school year and the data demonstrate that educators believed in the ability of the intervention to promote well-being among the pupils. Finally, the study shows that a structured intervention consisting of competence development, set goals for new practices combined with specific materials, and ongoing support effectively reached a vast majority of all teachers in the enrolled schools with a substantial impact. Trial registration: ISRCTN12496336 – named: "The role of physical activity in improving the well-being of children and youth". Date of registration: retrospectively registered on 24 April 2015 at Current Controlled Trials. The online version of this article (10.1186/s12966-017-0614-8) contains supplementary material, which is available to authorized users. School-based approaches promoting physical activity (PA) are recommended because in most countries the majority of children and adolescents spend many hours in school every day. In order to construct evidence applicable to real-world settings, it is crucial to evaluate the feasibility of the intervention programme, and this is no easy task. The RE-AIM framework has been developed to guide the evaluation of issues relating to external validity, that is, 'finding out which populations it works for and how best to make it work in those populations'.
The Move for Well-being in School intervention (MWS) aimed to improve psychosocial well-being among school-aged children and youths from 4th to 6th grade (10–13 years) through the development, implementation, and evaluation of a multicomponent, school-based, physical activity intervention. The objective of this paper is to evaluate the implementation of MWS from the implementer's point of view (the educator), using the RE-AIM framework. The programme was designed, piloted and implemented in accordance with the Medical Research Council framework and with the study protocol published previously. In brief, in the design phase the preliminary development processes entailed conducting a scoping review, interviews with members of the target group and the execution of four workshops including a broad selection of key stakeholders. Informed by the design phase, an initial intervention programme was assembled. Also, the main theoretical driver of the programme development originated from the area of motivation, as construed by Edward Deci and Richard Ryan's self-determination theory. The RCT included the implementation of the final intervention programme. Based on initial screening, municipalities were selected according to the following criteria: a) geographic and demographic variation, b) variation in school size and c) variation without extremes in municipal budgets for public schools. The intervention consisted of (i) a competence development programme for educators consisting of four workshops with both practical and theoretical content; (ii) intervention components targeting brain breaks, physical education and recess, supported by specific materials and set goals; and (iii) ongoing support during the school year. At the 12 intervention schools, a total of 148 educators were involved in the implementation of the MWS program over the school year, of which 48 were PE teachers.
The average number of involved educators per school was 12 (range 5–17), and on average three of these were physical education teachers (range 1–6). The programme evaluation phase contains a joint evaluation of the entire project period, guided by the RE-AIM framework. Data collection featured administrative data, online surveys, informal interviews, and observations. Administrative data on municipalities and schools were obtained from the Ministry of Economic Affairs and the Interior and from the municipalities. Educators were asked to complete an online questionnaire three times during the intervention: two months (T1), five months (T2) and nine months (T3) after commencement; the last survey was conducted at the end of the school year – four months after the last competence development programme workshop (Fig.). The process surveys were completed by 100 of 141 educators at Time 1 (T1), by 109 of 135 possible educators at Time 2 (T2) and, finally, by 93 of 139 possible educators at Time 3 (T3), averaging a 72% response rate. Only educators that completed at least two process surveys were included in the analyses (Table). At T3, more than eight out of ten educators believed that brain breaks improved pupils' well-being 'to a high degree' or 'to some degree' (Fig.). Finally, when asked about the impact of the physical activity interventions as such, 75% of the educators answered that these improved well-being among their pupils 'to a high degree' or 'to some degree' (Fig.). More than nine out of ten educators integrated brain breaks in their lessons, and all physical education teachers used the lesson plans at least once (Table).
The set goal for the intervention was: two brain breaks per day; approximately half of the physical education lessons organized according to the MWS programme; and initiated/facilitated recess activities three times a week, lasting at least 30 min each. On average, the educators delivered 4.1, 4.5 and 4.8 brain breaks per week at T1, T2, and T3, respectively. There were large differences between schools: teachers at the school with the lowest implementation conducted approximately three brain breaks per week, while teachers at the schools with the highest implementation conducted twice as many. As stated in the results section, schools were selected to ensure comparability to most Danish schools in terms of number of pupils, expenses per pupil, and socioeconomic status of the pupils' parents. We included 24 schools but had difficulties reaching schools through the municipal authorities. Municipalities were reluctant to put pressure on schools due to a recent extensive school reform and the consequently increased workload. Therefore, with permission from the municipalities, the schools were contacted directly. The schools agreeing to participate could be grouped into two basic categories: 1) schools that already had a consolidated focus on school PA and were interested in improving the already high standards; and 2) schools with little experience and capability regarding school PA and with obvious challenges meeting the target of 45 min of school PA per day. This variation could contribute to understanding the differences in school management and educators' motivation for participating in the project and capacities for taking on MWS. According to Scaccia et al., the ability to take on a particular innovation depends on “a) the motivation to implement an innovation, b) the general capacities of an organization, and c) the innovation-specific capacities needed for a particular innovation”, also referred to as organisational readiness.
As an indicator of effectiveness, we used the educators\u2019 perception of the pupils\u2019 change in well-being. The definition of well-being and an introduction to the key elements of self-determination theory were presented in the available materials and during the competence development programme. Still, it is uncertain whether the educators\u2019 perceptions accurately reflect actual changes in pupils\u2019 well-being. Nonetheless, the overall belief in the positive effects of the intervention is evident, and it is essential to the educators\u2019 motivation to implement and maintain the programme. The adoption rates for both physical education and brain breaks were high, but the adoption rate of the recess activities was much lower, which may reflect the fact that not all educators are appointed for recess duty. Recess is used by many educators as a time for preparation and coordination or for having a break and an informal talk with colleagues. Having \u2018recess duty\u2019 is tantamount to just ensuring pupils are not getting in trouble or injured. Some teachers also hold the view that recess is free time for the pupils and should not be influenced in any way by the adults. The recess activities could, therefore, be experienced as a rather radical change compared to previous practices in the area. In general, the literature holds that the more radical a change is, the more uncertainty it creates and the more difficult the implementation is. The fact that no school withdrew from the project, together with the high response rates for the process survey, lends confidence in the representativeness of the presented data. The role of the coordination groups at each school, consisting of educators and management representatives, could partly explain the high participation and adoption rates. 
Active management participation and involvement have in previous process evaluations been stated as facilitators for engaging in project interventions. The implementation of brain breaks was 4.5 per educator per week, with major differences between educators and an overall average of 8.6 brain breaks per class per week for all intervention schools. Barriers for conducting brain breaks relate to difficulties integrating them into normal practice. Finally, some brain breaks lasted more than 5\u00a0min, which might let the educators settle for one per day of longer duration. Overall, the physical education set goal was met, with at least half of physical education lessons being MWS lessons. This was probably because physical education teachers perceived it as a help to their planning, and because this element is directed at teaching and learning and thus resembles normal school practice and is related to the academic subject. The observations of and interviews with the physical education teachers supported the findings that the MWS lesson plans were positively received. They also indicated that lack of time for preparation, lack of coordination between teachers, and challenges with some pupils\u2019 acceptance of the new physical education practice were among the biggest barriers to the implementation. Several studies have previously reported barriers and facilitators in implementing school-based PA interventions. Answers to the open question \u201cWhere do you experience the biggest challenges in implementing the intervention components?\u201d confirm the general findings in the process literature. Both adoption and implementation are relatively stable between T1 and T3. During the school year the coordination group received bi-weekly information letters by e-mail; received two follow-up visits from the research team; conducted a mid-term theme day; and were invited to attend the fourth workshop halfway through the school year. 
The relatively low level of input from the research team required for ongoing implementation provided an important enticement for the maintenance of the MWS initiatives. Furthermore, the fact that the intervention was conducted over a whole school year and employed teacher-delivered strategies is in the process evaluation literature perceived as a facilitator for increasing maintenance. This study has shown that it is possible to design a school-based PA intervention that educators largely adopt and implement. Implementation of the PA elements was stable throughout the school year, and data demonstrate that educators believed in the ability of the intervention to promote well-being among the pupils. There were, however, large differences between schools in implementation, which can be explained by differences in existing capabilities and motivation. Finally, the MWS results show that a structured intervention consisting of competence development, set goals for new practices combined with specific materials, and ongoing support effectively reached a vast majority of all teachers in the enrolled schools with a substantial impact."}
{"text": "Hyperhomocysteinemia (HHCYS) has been associated with systolic heart failure. However, it is still unknown whether serum homocysteine level is useful in predicting the outcome in patients with diastolic dysfunction. We conducted a cohort study to determine if HHCYS was associated with poor prognosis in diastolic dysfunction patients. The Chin-Shan Community Cardiovascular Cohort (CCCC) study was designed to investigate the trends of cardiovascular morbidity and mortality in a community. Individuals who were 35 years and above were enrolled. Participants were categorized by homocysteine concentration quartiles. We used multivariate Cox proportional hazards models to calculate the hazard ratio (HR) of the 4th quartile versus the 1st quartile. Area under the receiver-operating characteristic (ROC) curve was used to compare prediction measures. A total of 2020 participants had completed the echocardiography examination, and 231 individuals were diagnosed with diastolic dysfunction. A total of 75 participants died during the follow-up period. HHCYS was found to be significantly associated with poor prognosis. The adjusted HR for homocysteine level was 1.07. Participants in the highest quartile had a 1.90 (95% CI, 0.88\u20134.12; P for trend, .026) fold risk for all-cause death, compared with those in the lowest quartile. The HR was 1.88 using 11.11\u200a\u03bcmol/L as the cut point for hyperhomocysteinemia. HHCYS was significantly associated with poor prognosis in diastolic dysfunction participants in the community. Experimental and clinical data have demonstrated that HHCYS in patients can worsen the prognosis of heart failure. However, the cellular mechanisms regarding the effects of HHCYS on cardiac remodeling and pump function are not very well understood. Previous studies had shown that patients with HHCYS have a higher risk of left ventricular hypertrophy (LVH) and coronary artery disease (CAD). Since HCY is a potential proinflammatory and prooxidative compound, the increased level of homocysteine in the body, caused by HHCYS, may contribute to the pathogenesis of cardiovascular structural and endothelial dysfunction. Experimental studies had shown that HHCYS may adversely affect the myocardium, leading to hypertrophy of the ventricles and a disproportionate increase in collagen. This remodeling and dysfunction can ultimately lead to LVH, CAD, and impaired left ventricular systolic or diastolic function. However, very few studies had investigated the relationship between HHCYS and impaired left ventricular diastolic function. Some studies had indicated that HHCYS is associated with an increased risk of mortality in patients with systolic heart failure. 
However, there is no study that had demonstrated the relationship among mortality rate, HHCYS, and diastolic heart failure. Furthermore, there is no agreement in the literature on the diagnostic cutpoint for HHCYS. Therefore, in this study, we prospectively investigated the association of plasma HCY with the risk of all-cause death in patients with diastolic dysfunction. Increased plasma homocysteine (HCY) level is also associated with arterial ischemic events such as acute myocardial infarction, peripheral vascular disease, and stroke. The CCCC study recruited 3602 adults from northern Taiwan, homogeneous in Chinese ethnicity and aged 35 years and above. The details of the CCCC study have been described in previous literature. The study was performed in accordance with the Declaration of Helsinki and was approved by the institutional review board of the National Taiwan University Hospital, and all subjects provided their written informed consent prior to participation in the study. The following is a brief summary of the initial study: The study started in 1990 with an initial cohort of 3602 participants. Baseline demographic data were collected through questionnaires at enrollment. Physical examinations, including measurements of weight, height, blood pressure, and electrocardiography, were conducted by senior medical students. Fasting serum samples of participants were collected for biochemical assays. The research team conducted biennial prospective follow-up household visits to account for the major cardiovascular morbidity and mortality. 
The participants were enrolled in the Chin-Shan Community Cardiovascular Cohort (CCCC) Study, a prospective community-based cohort study of risk factors and outcomes of cardiovascular disease running since 1990. This study was approved by the institutional review board of National Taiwan University Hospital. 2.2 In the 1992 to 1993 follow-up period, we invited the participants to undergo echocardiographic examination for the 1st time, and a 2nd session of echocardiography examination was conducted during 1994 to 1995. The velocities of mitral inflow were measured during the 2nd session of echocardiographic examination. Among the 3602 selected participants, 2214 had completed echocardiographic examination. A total of 147 participants were excluded due to the absence of HCY data. Another 47 participants were excluded due to the absence of mitral inflow data. Therefore, the final study population consists of 2020 participants. 2.3 Two-dimensional-guided M-mode echocardiography was performed by cardiologists according to the recommendations presented by the American Society of Echocardiography. Peak velocities of early (E) and late atrial (A) mitral flow were obtained from an apical 4-chamber view by pulse wave Doppler measurements. Diastolic dysfunction was defined as a mitral inflow E/A ratio <1, deceleration time >220\u200ams, and without impairment of systolic function. Systolic dysfunction was defined as LV ejection fraction below 40%. Body mass index and body surface area were estimated from weight and height information obtained in the period of 1994 to 1995. Left ventricular ejection fraction and left ventricular mass were calculated by means of previously established methods. The left ventricular mass was further divided by the body surface area to obtain the left ventricular mass index. 2.4 All venous blood samples were drawn after a 12-hour overnight fast. 
The procedures of blood sampling have been reported elsewhere. Serum HCY samples were collected into tubes containing ethylene-diamine-tetra-acetic acid. These samples were refrigerated immediately and transported within 6 hours to the National Taiwan University Hospital. The serum samples were then stored at \u221270\u200a\u00b0C until analysis. HCY levels were measured by fluorescence polarization immunoassay. The data from the immunoassay correlated very well with results obtained by high-performance liquid chromatography (HPLC). 2.5 The end points of this investigation were all-cause death in the follow-up period from 1994 to 2007. Deaths from any cause were identified from the official certified documents and further verified by house-to-house visits. 2.6 The participants in this study were categorized into quartiles by their serum HCY concentration. Continuous variables are presented as mean (SD) or median values. The categorized data are presented in the form of contingency tables. Analysis of variance (ANOVA) and chi-square tests were used to analyze the differences between quartiles. The age- and gender-adjusted Spearman partial correlation coefficients were calculated between baseline HCY concentrations and blood pressures, left ventricular mass index, lipid profiles, and fasting glucose. Incidence rates for all-cause death were calculated for each HCY quartile by dividing the number of cases by the number of person-years of follow-up. The hazard ratio (HR) and 95% confidence interval (CI) were determined by multivariate Cox proportional hazards models. Logistic regression analysis was performed to determine the association of all-cause death with crude HCY and with the 4 quartiles of HCY. Three specific models were used in estimating the HRs of events in the higher HCY quartiles relative to the lowest quartile. In model 1, the univariate HR of HCY was estimated with the 1st quartile as the reference. 
In model 2, the HCY was adjusted for the age and gender variables. In model 3, we used variables chosen by model selection. The model selection was to select the adequate variables with an entry level of 0.3 and a stay level of 0.15. The HR of these 3 models was calculated using HCY as 4 quartiles and as an independent variable. Furthermore, a receiver-operating characteristic (ROC) curve was constructed to generate the optimal cutoff point with the highest Youden index for all-cause death. The HRs were then calculated using the resulting cutoff point. Possible modifying factors (confounding factors) in the HCY mechanism were investigated. The patients were stratified according to the modifying factors, and the HR was calculated within each stratified group. In addition, we also introduced interaction terms into our models to test whether these terms are modifying factors. Each factor was considered a significant confounding factor if the resulting logistic regression P value was <.05. The possible confounding factors in our models are age, gender, hypertension, diabetes, and cigarette-smoking history. All statistical tests were performed as 2-tailed tests with a type I error of 0.05, and P-values <.05 were considered statistically significant. Analyses were performed with SAS software. 3 Among the 3602 selected participants, 2020 participants constituted this study population. Baseline characteristics of the participants are shown in the Table. In these 2020 participants, 231 adults had diastolic dysfunction, which was defined as a mitral inflow E/A ratio of <1, deceleration time of >220\u200ams, and without systolic dysfunction. The average HCY level among the 231 adults was 11.1\u200a\u03bcmol/L; the interquartile range was between 8.5 and 12.6\u200a\u03bcmol/L. The relationship between HCY level and other variables was investigated by gender-adjusted Spearman partial correlation coefficients. 
Our study showed that there were no statistically significant correlations between HCY concentrations and blood pressure, left ventricular mass index, lipid profiles, and fasting glucose. The optimal cutpoint of HCY with the highest Youden index (P\u200a=\u200a.028) was 11.11\u200a\u03bcmol/L. The sensitivity was 64%, the specificity was 71.2%, and the area under the curve was 0.68 at this cutpoint. We then used this cutpoint of HCY to calculate the HR for the participants in the higher HCY group. The HR for these participants was determined to be 1.88. Also, in participants older than 75 years, the HR of HCY was 1.12, and in participants younger than 75 years, the HR was 0.96. With the aforementioned results, we can conclude that age is an effect modifier. Other factors such as gender, hypertension, diabetes, and cigarette-smoking history were also studied in this analysis. These factors did not modify the mechanism and therefore are not effect modifiers. Previous studies had shown that HHCYS is associated with poor prognosis in patients with congestive heart failure, with a homocysteine concentration cutpoint and HR determined to be 14\u200a\u03bcmol/L and 3.26, respectively. In our study, the adequate homocysteine concentration cutpoint was 11.11\u200a\u03bcmol/L and the HR was 1.88. The variation among the reported values is due to the fact that the cutpoint and the HR are highly dependent on the study population. In general, if the study population has a higher disease severity, the HR of this population will also be higher. Our study population, consisting of patients with diastolic dysfunction, was relatively healthier than patients with congestive heart failure, so the HR value in patients with diastolic dysfunction was lower than the HR value in patients with congestive heart failure. 
In this study, we also tested age, gender, hypertension, diabetes, and smoking history as possible hyperhomocysteinemia confounding factors. We found that patient age is a possible effect modifier. In participants older than 75 years, HCY significantly increased with mortality rate. The HR for these older participants was 1.12. However, this relationship was not observed in participants younger than 75 years. Therefore, we suggest that the importance of HCY increases proportionally with the age of the patient. This is especially true when using homocysteine levels as a mortality predictor in older patients. This result is in concurrence with a previous study, in which the concentration of HCY was a cardiovascular mortality predictor in patients of very old age. That study also suggests that the concentration of HCY is a better predictor than the classic risk factors for patients of very old age. 4.1 Our study has several strengths. First of all, this is the first cohort study of HCY and all-cause mortality in diastolic dysfunction participants. HCY was shown to be an important marker for diastolic dysfunction patients of very old age. We also constructed an ROC curve to determine the optimal cutpoint of homocysteine level in diastolic dysfunction participants. This study enrolled 231 diastolic dysfunction participants, and follow-up was performed for up to 13 years. Our study also had some limitations. First of all, we lacked information on some determinants of total HCY level such as dietary patterns, folic acid, fortification of food and vitamin supplements. Second, we determined diastolic dysfunction only by mitral inflow, which may not be sufficient, and the number of participants with diastolic dysfunction may be underestimated. 
Finally, in this study we used all-cause death as the outcome measure because we lacked complete clinical information such as cardiovascular events. 5 In this cohort study, we have shown that people with diastolic dysfunction and a higher level of HCY have a significantly higher risk of all-cause death. Plasma HCY level was a good predictor of all-cause death among old adults."}
{"text": "Gliomas, and in particular glioblastoma multiforme, are aggressive brain tumors characterized by a poor prognosis and high rates of recurrence. Current treatment strategies are based on open surgery, chemotherapy (temozolomide) and radiotherapy. However, none of these treatments, alone or in combination, are considered effective in managing this devastating disease, resulting in a median survival time of less than 15 months. The efficiency of chemotherapy is mainly compromised by the blood-brain barrier (BBB), which selectively inhibits drugs from infiltrating into the tumor mass. Cancer stem cells (CSCs), with their unique biology and their resistance to both radio- and chemotherapy, compound tumor aggressiveness and increase the chances of treatment failure. Therefore, more effective targeted therapeutic regimens are urgently required. In this article, some well-recognized biological features and biomarkers of this specific subgroup of tumor cells are profiled and new strategies and technologies in nanomedicine that explicitly target CSCs, after circumventing the BBB, are detailed. Major achievements in the development of nanotherapies, such as organic poly(propylene glycol) and poly(ethylene glycol) or inorganic (iron and gold) nanoparticles that can be conjugated to metal ions, liposomes, dendrimers and polymeric micelles, form the main scope of this summary. Moreover, novel biological strategies focused on manipulating gene expression for cancer therapy are also analyzed. 
The aim of this review is to analyze the gap between CSC biology and the development of targeted therapies. A better understanding of CSC properties could result in the development of precise nanotherapies to fulfill unmet clinical needs. Gliomas, demonstrating glial cell characteristics, represent 30% of all brain tumors. The development of new technologies based on nanometer-sized particles (nanotechnology) for cancer treatment has been extensively investigated in the last decade, and this approach shows potential for glioma diagnosis and treatment. Unique molecular signatures for each type of tumor have been uncovered recently, because of advances in proteomics and genomics, opening new paths for therapies that specifically target and kill tumor cells. In this review paper, the challenges in targeting gliomas are highlighted. The concept of CSCs and their biomarkers is introduced initially, and finally, developed nanotechnologies, including some clinical trials, are summarized. Moreover, the application of therapies already used in different fields to glioblastoma multiforme (GBM) treatment is proposed, focusing on CSC targeting. Gliomas are brain tumors that resemble normal stromal cells of the brain, such as astrocytes (astrocytomas), oligodendrocytes (oligodendrogliomas) and ependymal cells (ependymomas). They are a group of oncological diseases for which no cure exists and little progress has been made in order to guarantee a longer life expectancy. 
Gliomas can diffusely penetrate throughout the brain and are mainly classified according to their morphological resemblance to their respective glial cell types, their cytoarchitecture and their immunohistological marker profile. There is also a glioma grading system that distinguishes astrocytomas by four World Health Organization (WHO) grades, and oligodendrogliomas and oligoastrocytomas by two grades (II and III). The most aggressive and common glioma is glioblastoma (a grade IV astrocytoma). This tumor demonstrates extensive vascular endothelial proliferation, necrosis, high cell density and atypia. It can evolve from a preexisting low grade astrocytoma (secondary glioblastoma), but usually occurs de novo (primary glioblastoma). Recently, it has been recommended that glioblastomas be divided into IDH-wildtype, IDH-mutant and NOS (not otherwise specified). IDH-wildtype (about 90% of cases) is regarded as primary or de novo glioblastoma and prevails in patients over 55 years of age; IDH-mutant (about 10% of cases) corresponds to secondary glioblastoma that preferentially arises in younger patients; and NOS is reserved for tumors for which IDH status cannot be fully evaluated. In the last two decades, glioblastoma treatment using chemotherapy has undergone some changes, such as replacing the use of some alkylating substances like carmustine (BCNU), nimustine (ACNU), and lomustine (CCNU) with temozolomide (TMZ). 
The alkylating agent groups that have been mostly prescribed in the clinic are TMZ and the nitrosoureas (CNUs). Temozolomide is rapidly converted into its reactive form, 5-(3-methyltriazen-1-yl)imidazole-4-carboxamide, at physiologic pH, causing DNA damage through methylation of the O6-position of guanines, blocking DNA replication and inducing the death of tumor cells. In contrast, the CNUs alkylate the N3-position of adenine and the N7-position of guanine, inducing apoptotic cell death in p53 wildtype cells and necrotic cell death in p53 deficient cells. Currently, TMZ, together with radiotherapy and surgical resection, is the most commonly applied glioblastoma treatment. Despite a boost in overall patient survival with TMZ treatment and the low toxicity of TMZ, patient prognosis remains poor. Usually few patients survive longer than 5 years, with a median survival of approximately 14.6 months. The possible cause of GBM chemoresistance is the presence of CSCs. CSCs are tumor cells with stem cell-like properties that reside in GBM and can readily generate both proliferating progenitor-like and differentiated tumor cells amid microenvironment cues. The origin of CSCs can be either mutated embryonic stem cells or downstream progenitors, which may already exist at birth or accumulate over time through mutation. Distinguishing between CSCs and other tumor populations largely lies in the functional multipotency that stem cells demonstrate, i.e., the self-renewal and differentiation to multiple progeny capabilities. Cells that are tumorigenic and can differentiate hierarchically are commonly regarded as CSCs. Also, CSCs can form sphere-shaped colonies; however, this is not considered a default feature. The CSC hypothesis states that CSCs escape multimodal therapy, causing tumor resistance. 
Some causes of this resistance could be insufficient drug delivery to the CSC niche or non-specific targeting, since the therapies generally target more differentiated tumor cells. Another premise of this hypothesis is that therapies which efficiently eliminate the CSC fraction of a tumor are able to induce long-term responses and thereby halt tumor progression. The best-described marker for CSCs is CD133, and recently new molecules such as CD15/stage specific embryonic antigen-1 (SSEA-1) and integrin \u03b16 have been described as novel markers. However, there is not yet a consensus on the optimal markers for CSCs in GBM. CSCs have been isolated from cancers to be analyzed and later used to screen for stem cell-specific biomarkers in tumor cells, particularly surface biomarkers. Cell-surface markers are generally cell membrane-surface antigens to which antitumor drugs can easily bind, consequently increasing the therapeutic efficiency of the drug. Therefore, membrane surface markers are more meaningful than nuclear or cytoplasmic antigens in targeted tumor therapy. CD133 belongs to the Prominin family, and is also known as Prominin 1, with five transmembrane regions. Singh et al. found that CD133+ and CD133- glioma cell populations both embrace stem cell features, tumorigenic characteristics and the capability of re-generating CD133+ and CD133- cell populations in vitro. CD133+ glioma stem cells can differentiate into CD133- tumor cells, and CD133- glioma cells injected into nude rats formed tumors containing CD133+ cells. L1CAM belongs to the nerve cell adhesion molecule category and to the type I transmembrane glycoproteins of the immunoglobulin super family and is crucial in nervous system development. L1CAM supports the survival and proliferation of CD133+ glioma cells. Also known as Thy-1, CD90 is a member of the cell adhesion molecule immunoglobulin super family. 
CD90 has been found on the surfaces of nerve cells, thymocytes, fibroblast subsets, endothelial cells, mesangial cells, and hematopoietic stem cells, suggesting that CD90 is a surface marker in hematopoietic and mesenchymal stem cells. A2B5 is a ganglioside on the surface of the glial precursor cell membrane that Ogden et al. also detected in gliomas. Recently, some typically expressed embryonic stem cell markers have been considered as markers for tumor-initiating cells, such as c-Myc, SOX2, and OCT-4. These markers could be useful as a tool to identify and isolate CSCs. Moreover, KCa channels have been identified as potential targets for modulation of BBB permeability in brain tumors by assisting the formation of pinocytic vesicles of drugs. Furthermore, cerebral blood flow could be modulated and the therapeutic efficacy augmented after applying a nitric oxide donor, which selectively opens the blood tumor barrier in rats with intracerebral C6 gliomas. The BBB is an obstacle because of its low permeability, requiring higher doses of drugs, which causes increased side effects. The BBB inhibits the delivery of therapeutic agents to the CNS and prevents a large number of drugs, including antibiotics, antineoplastic agents, and neuropeptides, from passing through the endothelial capillaries to the brain. Safe delivery of drugs to the brain is therefore a major challenge. Aiming to enhance transport through or bypass the BBB, many research groups have been developing new nanotechnologies to overcome these obstacles. Many biochemical modifications of drugs and drug nanocarriers have been developed, enabling local delivery of high doses while avoiding systemic exposure. In this review section, BBB properties and recently discovered nanotechnologies that allow systemic drug delivery for CNS cancer therapy are discussed (Figure 1). 
The tight junctions in the BBB are mainly composed of claudins and occludins. Transport across the BBB is selective for molecules smaller than 12 nm and is finely regulated; there are mainly two types of transport, carrier-mediated transport (CMT) and receptor-mediated transport (RMT). Drugs can be modified to cross the BBB via CMT with pharmacologic activity preserved, but should preferably not affect CMT function, to avoid possible side effects. In contrast to CMT, RMT promotes the permeability of some macromolecules into the brain, such as lipoproteins, hormones, nutrients and growth factors. The approach targeting RMT requires conjugating a specific ligand, which has affinity for an endocytic receptor expressed on the endothelial cell surface, to the chemotherapeutic drug or to a drug-loaded nanocarrier. Binding to the targeted receptor induces intracellular signaling cascades mediating invagination and formation of membrane-bound vesicles in the cell interior, and then intracellular vesicular trafficking transports the cargo to the abluminal endothelial plasma membrane. The discussion of nanosystems in this review mainly focuses on liposomes, polymeric nanoparticles, solid lipid nanoparticles, polymeric micelles and dendrimers as carriers (Figure 2), targeting, for example, glial fibrillary acidic proteins or the insulin receptor. Furthermore, trafficking cargo across the BBB is improved when using nanocarriers that target CMT. For example, liposomes targeting glucose transporter 1 (GLUT1) enhanced transport of daunorubicin. Nanoparticles (NPs) have also been widely studied, because of their high drug-loading capacity and protection against chemical and enzymatic degradation. NPs have enormous medical potential and have emerged as a major tool in nanomedicine, compared with conventional drug delivery methods. In vivo GBM models have shown that magnetic NPs are promising. Detailed reviews concerning NP applications have already been published. 
NPs are solid colloidal particles made of polymers ranging from 1 to 1000 nm, and are divided into two types, nanospheres and nanocapsules. Polymeric micelles typically consist of a hydrophobic core made of, for example, poly(D,L-lactide) or poly(propylene glycol) (PPG), together with a hydrophilic shell made of poly(ethylene glycol) (PEG). Pluronic micelles (PEG-PPG-PEG) have emerged as good candidates for brain therapy, since they can easily cross the BBB and inhibit drug efflux. Micelles carrying paclitaxel were able to increase the toxicity of the chemotherapeutic drug in a LN18 human glioblastoma cell line. Hyperthermia using carbon nanotubes (CNTs) with near-infrared radiation (NIR) was effective in debulking a tumor in rats, leading to tumor shrinkage without recurrence. Furthermore, this protocol could eliminate glioma CSCs, both drug-sensitive and drug-resistant glioma cells, due to the broad-spectrum absorption of CNTs by gliomas. In contrast, normal cells were barely affected, demonstrating the lower uptake of CNTs. Single-walled carbon nanotubes conjugated with anti-CD133 antibodies (CDSWNTs) produced a targeted lysis of CD133+ GBM CSCs, while CD133- GBM cells remained intact in vitro. A discernible shrinkage of tumor was observed after subcutaneous NIR laser irradiation following CDSWNT administration in this particular ectopic GBM tumor model. Antitumor antibiotics, such as daunorubicin (trade name: Cerubidine\u00ae) and bleomycin (trade name: Blenoxane\u00ae), show powerful anticancer activity against glioma cells in vitro. Their efficacy in vivo was reported to be poor, which was largely attributed to their inability to penetrate the BBB; with improved delivery, prolonged survival of treated animals is observed following an enhanced local antitumor effect. Gene editing technology, including CRISPR/Cas9 and silencing RNA, has provided new methods to deliver nucleic acids to the brain, and in particular for glioma treatment. 
For this purpose, positively charged and degradable polymers, including chitosan, poly(beta-amino esters), poly(amidoamines), and many other cationic polymers, have been used because their cationic nature allows complexation with negatively charged molecules like DNA or RNA. Inorganic NPs are better applied for imaging and drug delivery purposes, because their synthesis is easily tunable and reproducible. Liposomes were used to deliver the IFN-\u03b2 gene in mouse models of glioma, resulting in immune response induction and reduced tumor growth. Five malignant glioma patients were treated using liposomes carrying the IFN-\u03b2 gene in a pilot clinical trial, and four patients showed > 50% tumor reduction or stable disease. Ang-1005 was designed to circumvent the BBB and is in several clinical trials; it is conjugated to paclitaxel and to the RMT ligand angiopep-2, which targets LRP1. In a phase I trial, the drug was tolerated at a maximum dose of 650 mg/m2. To the best of our knowledge, nanocarrier-based RMT-targeting strategies in GBM treatment have very limited clinical trial outcomes. It has been described that PEGylated liposomal doxorubicin without RMT-targeting was evaluated in phase I studies in GBM patients, showing no improvements in progression nor survival. SGT-53 is a liposome carrying the p53 tumor suppressor and displaying scFv-targeting TfR. One phase II clinical trial of SGT-53 combines it with TMZ for patients with recurrent malignant gliomas, aiming to evaluate tumor cell death after accumulation of the drugs, anti-tumor efficacy, safety and overall survival. Moreover, magnetically induced hyperthermia, which uses a magnetic medium such as thermoseeds and magnetic NPs to produce moderate heating in a specific area of the organ where the tumor is located, is under investigation for malignant glioma, prostatic cancer, metastatic bone tumors and some other malignant tumors.
Thermoseed magnetic induction of hyperthermia for the treatment of brain tumors was first reported by Kida et al. in 1990. A Fe-Pt alloy thermoseed with a length of 15\u201320 mm, a diameter of 1.8 mm and a Curie point of 68\u201369\u00b0C was used for seven cases of metastatic brain tumor two to three times a week, with the tumor tissues reaching 44\u201346\u00b0C during the treatment. This resulted in two cases of complete response and one case of partial response. Poly(ethylene glycol)-block-poly(D,L-lactide) loaded with paclitaxel to form Genexol\u00ae-PM has been trialed clinically and is now commercially available for the treatment of breast cancer, ovarian cancer, and non-small cell lung cancer. To develop a novel treatment based on targeting CSCs, an effective strategy should use liposomes as nanocarriers, because of their ability to shield and carry molecules of different sizes and charges. These liposomes should have a shell coated with aptamers or antibodies specific for CSC markers such as CD133, and would carry antitumor antibiotics (doxorubicin) or genome editing tools that would modulate the expression of genes important for tumor survival. Finally, some clinical trials have succeeded in testing new nanotechnologies that may become available to patients in the near future. TG, IH, LW, and XZ summarized the literature and drafted the manuscript. TG, XZ, and LW revised and edited the manuscript. XZ and LW supervised the work. TG and XZ initiated, finalized, and submitted the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "After being incorporated into E. coli, this system achieved highly selective detection and recycling of gold ions among multiple metal ions from the environment. It serves as an efficient method for biological detection and recovery of various heavy metals.
We have developed modular methods for cell-based detection and adsorption of heavy metals, and these offer a quick and convenient tool for development in this area. Detection and recovery of heavy metals from environmental sources is a major task in environmental protection and governance. Based on previous research into cell-based visual detection and biological adsorption, we have developed a novel system combining these two functions by the BioBrick technique. Gold (Au) is historically a highly valued noble metal. It has been used for centuries by humans and has superior properties to those of its heavy metal peers. Traditional methods for detection of gold ions in the environment are based on its physicochemical properties, and include inductively coupled plasma mass spectrometry (ICP-MS) and atomic absorption spectroscopy (AAS), but these methods are restricted by the cost of apparatus and complexities of sample pretreatment. In recent years, a series of bacteria modified by genetic engineering for whole-cell detection have been used in environments containing various heavy metals. In this article, we report a unified whole-cell system combining specific detection using the GolS regulon with selective adsorption by the surface-displayed GolB protein. This integration was achieved by \u201cBioBrick\u201d, a technique in synthetic biology. In Salmonella, the promoter pgolTS drives the expression of the protein GolS. When gold ions enter the cell and bind to protein GolS, the metal\u2013protein complexes bind to the promoter pgolB and further modulate the expression of GolB. Time gradients of 0.5, 1, 1.5, 2, 3, 5, 7.5 and 10 h were set at different concentrations of gold ions.
The results of the fluorescence experiments showed that when the gold ion concentration was 0.1 \u03bcM, the intensity increased with time and reached the maximum value of 16,000 after 10 h, as did the results when the concentration increased to 20 \u03bcM. Our whole-cell engineered bacteria were applied in both separate and mixed metal solutions. For adsorption measurements, E. coli were first incubated for 10 h in LB medium with the concentration of gold ions increasing from 0.1 to 20 \u03bcM and then analyzed by ICP-AES after extensive washing. E. coli bacteria without induction or expressing only OmpA were used as controls. The gold tolerance of E. coli cells with or without surface-displayed GolB was measured through a plate sensitivity assay. The incubation was operated at 37 \u00b0C for 5 h. As shown in the result, E. coli with surface-displayed GolB protein survived on LB agar plates containing 0\u201320 \u03bcM Au (III), while those without surface-displayed GolB protein showed little tolerance of the environment containing 30 \u03bcM Au (III). Our proposed explanation is that gold ions are adsorbed by the surface-displayed GolB protein, consequently improving the gold tolerance of bacterial cells. The gold sensing plasmid gol-rfp/pSB1A2 and the Lpp-OmpA-GolB fusion protein expression plasmid pBAD were produced with a previously published protocol. golB was amplified from the gol-rfp/pSB1A2. After being confirmed by sequencing, the PCR product was digested by XbaI and PstI and then inserted into the plasmid pSB1A2, which had been digested by SpeI and PstI. The gene fragment encoding Lpp-OmpA-GolB was obtained from the lpp-ompA-golb/pBAD; this DNA fragment was digested by XbaI and PstI and inserted into the plasmid which included the modified gol regulon. Finally, the plasmid was transformed into the E. coli strain DH5\u03b1. The gol regulon was expressed in E. coli strain DH5\u03b1 with induction by 20 \u00b5M of HAuCl4 (Au3+) with shaking at 37 \u00b0C for 10 h.
The cells were then harvested and re-suspended in phosphate-buffered saline (PBS, pH 7.4) and subjected to fluorescence analysis. Fluorescence was recorded using a Multi-Mode Microplate Reader with filters at wavelengths of 558 nm for excitation and 583 nm for emission. To confirm the expression of the red fluorescence protein (RFP), the modified E. coli cells were harvested by centrifugation at 4500 rpm for 5 min then re-suspended in lysis buffer. After sonication, the supernatant and cell membrane fractions were separated by centrifugation at 4 \u00b0C and 11,000 rpm for 15 min. The cell membrane fraction was re-suspended in 100 \u00b5L PBS, and this and 100 \u00b5L of the supernatant fraction were each mixed with 10 \u00b5L of 10\u00d7 loading buffer then heated at 95 \u00b0C for 10 min. After centrifugation at 11,000 rpm for 10 min at 4 \u00b0C, the samples were loaded onto 15% SDS-PAGE gels and electrophoresed for 30 min at 80 V and 50 min at 150 V. For Western blotting analysis, the separated proteins were transferred to polyvinylidene difluoride membranes at 250 mA, at 4 \u00b0C for 2 h. After blocking at room temperature for 1 h in Blotto (5% nonfat dry milk in 1\u00d7 TBST), the membranes were developed using 1:1000 dilutions of monoclonal anti-HA-tag or monoclonal anti-FLAG-tag (Santa Cruz) as primary antibody for 10 h at 4 \u00b0C. This was followed by incubation with horseradish peroxidase (HRP)-conjugated goat anti-mouse secondary antibodies (Santa Cruz) at room temperature for 2 h. Antibodies were detected with ECL reagents. The E. coli strain displaying C-terminal Flag-tagged GolB and containing the plasmid was cultured in an LB medium containing ampicillin (50 \u00b5g\u00b7mL\u22121) until OD600 = 0.6\u20130.8.
The mixture was subjected to a gradient concentration of HAuCl4 (Au3+) with shaking at 37 \u00b0C for a gradient induction time, and then the cells were harvested, normalized to OD600 = 1.0 with PBS buffer (pH 7.4), and measured by a Multi-Mode Microplate Reader with 558 and 583 nm filters for the excitation and emission wavelengths, respectively. We investigated the effects of gold ion concentration and induction time on the engineered bacteria. For the gold concentration sensitivity measurement, the E. coli strain containing the plasmid was cultured in LB medium containing ampicillin (50 \u00b5g\u00b7mL\u22121) until OD600 = 0.6\u20130.8, then induced by a gradient concentration of HAuCl4 with shaking at 37 \u00b0C for 10 h. Meanwhile, the E. coli cells were also cultured in an LB medium containing ampicillin (50 \u00b5g\u00b7mL\u22121) until OD600 = 0.6\u20130.8; then a final concentration of 20 \u03bcM Au3+, Ag+, Cu2+, Zn2+, Ni2+, Cd2+, Cr3+, Hg2+ or Pb2+ was added to the medium, respectively. All of the cells were harvested and normalized to an OD600 = 1.0 with PBS buffer (pH 7.4). Additionally, a mixed metal solution with or without Au3+ ions was used to induce the engineered cells, following the same procedure as the single-metal induction protocol. For fluorescence determinations, a 300 \u03bcL aliquot of each sample was applied in triplicate to 96-well flat-bottom black plates. Fluorescence was recorded using a Multi-Mode Microplate Reader with 558 and 583 nm filters for the excitation and emission wavelengths, respectively. For fusion protein expression, the plasmid was transformed into E. coli strain DH10B. Cells were grown in LB medium containing ampicillin (50 \u00b5g\u00b7mL\u22121) with shaking for 10 h at 37 \u00b0C. After 1:100 dilution in LB medium containing ampicillin (50 \u00b5g\u00b7mL\u22121), the culture was grown at 37 \u00b0C to OD600 = 0.6\u20130.8. Protein expression was induced by the addition of arabinose to a final concentration of 0.002%, and the culture was then incubated at 37 \u00b0C for 10 h.
For gold ion adsorption, 0.1, 1, 5 and 20 \u03bcM concentrations of gold ions were added to LB medium during the induction of the Lpp-OmpA-GolB fusion proteins. To measure the metal ion adsorption ability of GolB-displaying E. coli, the cells were harvested from LB medium by centrifugation at 4000 rpm for 10 min and then washed with double-distilled H2O at least three times. The gold-adsorbed cells were lyophilized to measure the dry weight and subjected to wet ashing. The samples were then analyzed using an inductively coupled plasma-atomic emission spectrometer. The OmpA or OmpA-GolB fusion proteins were expressed in E. coli cells. This work provides a new way to design novel metal biosensors: by improving the whole-cell ion probe's original resistance to the toxicity of gold ions, the system ultimately achieved integrated expression for detection and adsorption of gold ions, regulated by the gradients of gold ions in the environment. Thus selective detection and adsorption of gold ions from a mixture of several ions was achieved. It should be possible for B. subtilis to selectively detect and adsorb trace levels of gold ions by homologous recombination of this system, which provides an important direction for detection of other toxic heavy metal ions. This study makes this kind of whole-cell detection and adsorption construction more standardized, and could enable a more convenient method for detection and adsorption of heavy metals. Inspired by previous studies of whole-cell biodetection and bioadsorption, this research succeeded in integrating a gold ion selective detection system based on the GolS regulator with a gold ion selective adsorption system based on the surface display of GolB protein, using the synthetic biology \u201cBioBrick\u201d technology.
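The ICP-AES readout from the wet-ashed samples is usually expressed as an adsorption capacity per unit dry cell mass. A minimal arithmetic sketch of that conversion, with a hypothetical helper name and illustrative sample values (the molar mass of gold, 196.97 g/mol, is the standard value):

```python
# Convert an ICP-AES gold reading plus the lyophilized dry cell weight
# into an adsorption capacity. Function name and inputs are illustrative,
# not from the paper.
AU_MOLAR_MASS = 196.97  # g/mol, standard atomic weight of gold

def au_adsorption_umol_per_g(au_ug, dry_weight_mg):
    """Return gold adsorbed in micromol per gram of dry cells."""
    umol_au = au_ug / AU_MOLAR_MASS        # ug / (g/mol) yields umol
    g_dry_cells = dry_weight_mg / 1000.0   # mg -> g
    return umol_au / g_dry_cells

# Example: 39.4 ug Au measured in 10 mg of dry cells (hypothetical values)
capacity = au_adsorption_umol_per_g(39.4, 10.0)
```

Normalizing to dry weight in this way makes adsorption capacities comparable between the GolB-displaying cells and the OmpA-only controls.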
The high selectivity and sensitivity towards gold of these engineered"} +{"text": "To conduct a systematic review and meta-analysis to examine the strength of associations between social network size and clinical and functional outcomes in schizophrenia. Studies were identified from a systematic search of electronic databases from January 1970 to June 2016. Eligible studies included peer-reviewed English language articles that examined associations between a quantitative measure of network size and symptomatic and/or functional outcome in schizophrenia-spectrum diagnoses. Our search yielded 16 studies with 1,929 participants. Meta-analyses using random effects models to calculate pooled effect sizes (Hedge\u2019s g) found that smaller social network size was moderately associated with more severe overall psychiatric symptoms (95% CI\u2009=\u2009\u2212\u20090.875, \u2212\u20090.184, p\u2009=\u20090.003) and negative symptoms. Statistical heterogeneity was observed (I2\u2009=\u200963.04%;\u00a0I2\u2009=\u200935.75%, respectively) which could not be explained by low-quality network measures or sample heterogeneity in sensitivity analyses. There was no effect for positive symptoms or social functioning. Narrative synthesis suggested that larger network size was associated with improved global functioning, but findings for affective symptoms and quality of life were mixed. The online version contains supplementary material, which is available to authorized users. Social connections can have positive effects on mental health, for example, by directly increasing self-esteem or buffering the negative effects of socioenvironmental stressors. Over the past few decades, an abundance of research has shown that social networks are disrupted in\u00a0individuals diagnosed with schizophrenia and psychosis. Social networks can be described as the set of social relations or social ties that connect individuals.
Social network is a multidimensional construct, yet research in schizophrenia and psychosis tends to use generic measures and focuses on functional attributes such as social support. Despite the potential importance of network characteristics for outcomes in schizophrenia, to date, there has been no systematic review of the magnitude or nature of these relationships. Previous literature reviews on networks and outcomes are outdated, not systematic, include mixed diagnostic samples, and do not focus specifically on network size and service user-related outcomes. The specific aims of this review were to: (1) carry out a systematic search and narrative synthesis on the nature and strength of the relationship between social network size and symptomatic, functional and QOL outcomes in schizophrenia; (2) examine the quality of the empirical findings and the measurement of social networks; and (3) conduct a series of meta-analyses to examine the magnitude of the relationship between network size and schizophrenia outcomes. The findings will determine whether social networks are important for outcomes and highlight potential targets for psychosocial interventions. The review was conducted in accordance with Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines. Eligible studies were peer-reviewed journal articles published in English. Studies published after 1970 were included as these were the first empirical studies of social networks in schizophrenia. On 1 June 2016, a systematic electronic search was conducted on EMBASE, Medline, PsycINFO and Web of Science. Several combinations of the following and related search words were used and separated by the Boolean operators OR and AND: \u2018schizophrenia\u2019 OR \u2018psychosis\u2019 OR \u2018severe mental illness\u2019 AND \u2018social network\u2019 OR \u2018personal network\u2019 OR \u2018social tie\u2019.
Medical Subject Headings (MeSH) and explode functions were used to expand the search and identify all relevant studies. Given that we were investigating multiple outcomes, we did not include outcome-related search terms to ensure we covered all literature. The search strategy was adapted for each database (supplementary S1). Two authors (AD and DS) independently screened articles for eligibility. Titles and abstracts were examined against the inclusion and exclusion criteria (stage 1). Full texts of potentially relevant articles were retrieved and screened and those that met the inclusion criteria were retained (stage 2). Level of agreement at stage 1 was 90% and stage 2 was 89%. At each stage of screening, discrepancies were resolved via discussion with KB before continuing to the next stage. Additional studies were identified through scanning reference lists of included articles. A narrative synthesis was carried out. The Effective Public Health Practice Project (EPHPP) Quality Assessment Tool for Quantitative Studies was used to assess study quality, with inter-rater agreement of k\u2009=\u20090.610\u20130.888; discrepancies were discussed and resolved with KB. Studies that statistically examined associations between social network size and a validated outcome measure were included in the meta-analyses. Studies were excluded if there was insufficient data to calculate effect sizes, despite attempts to contact authors for missing data. Most studies reported cross-sectional correlational analyses (Pearson\u2019s r or Spearman\u2019s rho), which were converted to the common metric Hedge\u2019s g for meta-analysis. For studies reporting regression, the effect size r was estimated and converted to Hedge\u2019s g. Data were available for separate meta-analyses on the relationship between network size and (1) overall psychiatric symptoms; (2) positive symptoms; (3) negative symptoms; and (4) social functioning. Visual inspection of funnel plots and Egger\u2019s test of funnel plot asymmetry was applied to examine publication or selection bias.
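The conversion from correlational effect sizes to Hedge's g described above can be sketched with the standard two-step formula (r to Cohen's d, then the small-sample correction factor J); the function name and example values below are illustrative, not the review's:

```python
import math

def r_to_hedges_g(r, n):
    """Convert a correlation r (from a sample of n participants)
    to Hedge's g via Cohen's d and the small-sample correction J."""
    d = 2 * r / math.sqrt(1 - r ** 2)   # r -> Cohen's d
    j = 1 - 3 / (4 * (n - 2) - 1)       # correction factor J, df = n - 2
    return d * j                        # Hedge's g

# Example: a cross-sectional correlation of r = -0.26 in n = 100
# (negative g: smaller networks, more severe symptoms)
g = r_to_hedges_g(-0.26, 100)
```

The correction factor J shrinks the estimate slightly, which matters most for the small samples typical of this literature.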
For meta-analyses demonstrating significant effects, the Fail-Safe N was calculated to estimate the number of additional unpublished/missing studies that would be required to nullify the effect. Comprehensive Meta-analysis version 3.0 was used. Sensitivity analyses were conducted removing studies with weak or moderate quality network measures and samples\u00a0with\u2009<\u2009100% schizophrenia/non-affective psychosis. \u2018One-study-removed\u2019 analyses were conducted to assess whether any studies skewed the results. The search across all databases yielded 15 articles for inclusion. One additional article was identified through searching reference lists, resulting in a total of 16 articles. The study selection process is summarised in the PRISMA diagram. Network size was measured as the number of network members, with some studies setting a limit on the number of people named and others asking for a list of all people known. Mean total network size was reported for six independent samples and ranged from 4.18 to 12.9. Selection bias ratings were weak for 16 studies due to lack of detail on recruitment and selection procedures, self-referred or convenience sample or less than 60% response rate. Eighteen studies controlled for confounders in the analyses or design. Data collection for outcomes was rated \u2018strong\u2019 for just over half (n\u2009=\u200914) of studies, reporting valid and reliable outcome measures. The remaining studies were given \u2018moderate\u2019 (n\u2009=\u20095) and \u2018weak\u2019 (n\u2009=\u20098) ratings, mainly because of poor reporting of service use data collection and no references for translated measures, which brought ratings down. Fifty-nine percent (n\u2009=\u200916) of social network tools were rated as strong.
Network tools were rated as \u2018weak\u2019 in seven studies due to non-validated assessment tools with inadequate measure of network size, including lack of detail (n\u2009=\u20092), boundaried networks (capped network size or focus on one type of relation) (n\u2009=\u20093), single-item measures (n\u2009=\u20092), and no measure of size (n\u2009=\u20091). \u2018Moderate\u2019 ratings were given to four studies (11%) due to lack of detail (n\u2009=\u20092) or boundaried networks (n\u2009=\u20092). Withdrawals and drop-outs were rated \u2018not applicable\u2019 for the vast majority of studies (n\u2009=\u200923) due to the cross-sectional design, rated \u2018moderate\u2019 for two longitudinal studies with 60\u201379% follow-up rate, and \u2018weak\u2019 for two studies with less than 60% follow-up rate. Most analysis sections (n\u2009=\u200924) were appropriate to the research aims and statistical methods appropriate for the design and were marked as \u2018strong\u2019 (n\u2009=\u20099) or \u2018moderate\u2019 (n\u2009=\u200915). Fifteen studies were marked as \u2018moderate\u2019 for analyses due to insufficient detail relating to the management of missing data, distribution and skewness, power analyses, and correction for multiple correlations. Quality assessments are presented in Table\u00a0. A total of 12 studies were included in the meta-analyses on the association between social network size and outcomes. Two studies had insufficient data to calculate effect sizes and were excluded. Meta-analyses of five studies with 467 participants showed a significant moderate effect (g\u2009=\u2009\u22120.53) for the association between smaller network size and overall psychiatric symptoms, with moderate heterogeneity (I2\u2009=\u200963.04%). Egger\u2019s regression test was non-significant, indicating no publication or selection bias (Fail-Safe N\u2009=\u200928). A sensitivity analysis removing one study increased the effect (g\u2009=\u2009\u22120.60) and heterogeneity (I2\u2009=\u200971.92%). Moderate heterogeneity (I2\u2009=\u200952.79%) was also observed for the positive symptoms meta-analysis.
Seven studies with 405 participants were included in the meta-analysis for positive symptoms, which found no significant effect of network size. Egger\u2019s test indicated no publication bias. A sensitivity analysis excluding one study yielded g\u2009=\u2009\u22120.21 (I2\u2009=\u200960.58%), and removing three studies with weak quality network measures yielded g\u2009=\u2009\u22120.28 (I2\u2009=\u200973.58%). Meta-analysis conducted on eight studies showed a significant negative association between network size and negative symptoms (g\u2009=\u2009\u22120.75) with low heterogeneity (I2\u2009=\u200935.75%). There was no evidence of publication bias as indicated by Egger\u2019s test. A sensitivity analysis removing one study increased the effect (g\u2009=\u2009\u22120.82) and heterogeneity (I2\u2009=\u200940.76%). An additional sensitivity analysis removing three studies further increased the effect (g\u2009=\u2009\u22120.90) and heterogeneity (I2\u2009=\u200959.14%). Three studies measured social functioning outcomes. Meta-analyses showed no significant effect (g\u2009=\u20090.36) and moderate heterogeneity (I2\u2009=\u200957.77%). Egger\u2019s test was non-significant, suggesting no selection bias. All studies had 100% schizophrenia samples and high-quality social network measures. Sensitivity analyses indicated that the removal of one study reduced the effect (g\u2009=\u20090.14) and heterogeneity (I2\u2009=\u20090%). This study assessed outpatients seven\u00a0years after the initial hospitalisation, whereas the other two included patients in earlier stages of\u00a0schizophrenia. No studies adjusted for confounders. Based on current evidence, it is difficult to determine the effects of social network characteristics on outcomes independent of confounders or other explanatory or mediating mechanisms. There was a tendency for network size to be more strongly related to symptomatic and functional outcomes in individuals at later stages of schizophrenia when compared to first episode.
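The pooled effects and I2 statistics reported above come from a random-effects model. A generic sketch of the standard DerSimonian-Laird estimator (not the Comprehensive Meta-analysis implementation, and with illustrative inputs) is:

```python
def dersimonian_laird(effects, variances):
    """Pool per-study effects under a random-effects model and
    return (pooled effect, I-squared percentage)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
    df = k - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # I-squared, in percent
    return pooled, i2

# Example: three hypothetical studies with Hedge's g and variances
g_pooled, i_squared = dersimonian_laird([-0.6, -0.5, -0.4],
                                        [0.04, 0.05, 0.06])
```

I2 here expresses the share of total variability attributable to between-study heterogeneity rather than sampling error, which is how values such as 63.04% and 35.75% above should be read.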
This was supported by evidence for stronger associations the longer the time period from previous hospitalisation. Social networks were measured using a variety of assessment tools based on different definitions, timescales, and criteria, as previously highlighted in psychosis research. To conclude, our findings indicate that larger social networks are associated with better symptomatic and functional outcome in schizophrenia. Interventions that target social networks may, therefore, indirectly improve these outcomes. Controlled trials using longitudinal designs are required to confirm whether supporting an individual to increase the number of people in their social networks leads to a reduction in symptoms. Given that network changes can occur prior to and during the early stages of schizophrenia, attention to social networks early in the course of illness may be warranted. Below is the link to the electronic supplementary material. Supplementary material 1 (DOCX 93 KB)
We propose the Health Stigma and Discrimination Framework, which is a global, crosscutting framework based on theory, research, and practice, and demonstrate its application to a range of health conditions, including leprosy, epilepsy, mental health, cancer, HIV, and obesity/overweight. We also discuss how stigma related to race, gender, sexual orientation, class, and occupation intersects with health-related stigmas, and examine how the framework can be used to enhance research, programming, and policy efforts. Research and interventions inspired by a common framework will enable the field to identify similarities and differences in stigma processes across diseases and will amplify our collective ability to respond effectively and at-scale to a major driver of poor health outcomes globally. Theory can powerfully influence how evidence is collected, analysed, understood and used; when theories are implicit, their power to clarify or to confuse, and to reveal or obscure new insights, can work unnoticed. In order to intervene to halt the stigmatization process or mitigate the harmful consequences of health-related stigma, or stigma associated with health conditions, the existence of a clear, multi-level theoretical framework to guide intervention development, measurement, research, and policy is critical. Existing stigma frameworks typically focus on one health condition in isolation, for example, obesity/overweight or HIV. The majority of health-related stigma frameworks explore psychological pathways at the individual level, focusing primarily on the individuals experiencing stigma. Building from existing conceptualizations of health-related stigmas and practical experience in designing stigma-reduction interventions, we propose a new, crosscutting framework and demonstrate its application to a range of health conditions, including leprosy, epilepsy, mental health, cancer, HIV, and obesity/overweight.
We discuss how stigma related to race, gender, sexual orientation, class, and occupation intersects with health-related stigmas, and examine how the framework can be used to enhance research, programming, and policy efforts. The framework is intended to amplify our collective ability to respond effectively and at-scale to a major driver of poor health outcomes globally. The Health Stigma and Discrimination Framework (Fig.\u00a0) articulates the stigmatization process across several domains. The first domain refers to factors that drive or facilitate health-related stigma. Drivers vary by health condition, but are conceptualized as inherently negative. We postulate that stigma manifestations subsequently influence a number of outcomes for affected populations, including access to justice, access to and acceptability of healthcare services, uptake of testing, adherence to treatment, and resilience. While the framework is specific to health-related stigma, it recognizes that health-related stigma often co-occurs with other, intersecting stigmas, such as those related to sexual orientation, gender, race, occupation, and poverty. Therefore, incorporating intersecting stigmas into the framework is necessary, as stigma manifestations and health outcomes may be influenced by a range of stigmatizing circumstances that must be considered to understand the full impact of stigma. The Health Stigma and Discrimination Framework differs from many other models in that it does not distinguish the \u2018stigmatized\u2019 from the \u2018stigmatizer\u2019; it does not see stigma as a thing which individuals impose on others, and instead emphasizes the broader social, cultural, political and economic forces that structure stigma. Another difference from previous frameworks is the separation of manifestations into \u2018experiences\u2019 and \u2018practices\u2019. This distinction clarifies the pathways to various outcomes following the stigma-marking phase of the process.
Those who experience, internalize, perceive, or anticipate health-related stigma face a range of possible outcomes, such as delayed treatment, poor adherence to treatment, or intensification of risk behavior, that may diminish their health and wellbeing. While outcomes are mostly negative, positive outcomes are possible; stigma has been known to foster resilience in marginalized populations. We also differentiated outcomes for affected populations from outcomes for organizations and institutions. Our framework seeks to demonstrate that stigma experiences and practices influence affected populations as well as organizations and institutions, which then together influence the health and social impacts of stigma. By articulating these outcomes, the framework highlights the need for multilevel interventions to respond to health-related stigma. It also focuses attention on the far-reaching influence of health-related stigma on societies as well as individuals. Ideally, we want to interrupt the process prior to the application of stigma. Thus, interventions often target the removal of the drivers of stigma or the shifting of norms and policies that facilitate the stigmatization process. The availability of data on health-related stigma and discrimination is critical for improving interventions and programs to address them, yet such routine data are often lacking. Since sociologist Erving Goffman published his seminal work on stigma in 1963, research on stigma across the disciplines of sociology, psychology, social science, medicine, and public health has expanded, and much is now understood about how stigma operates and induces harm in the context of different diseases and identities. Yet, progress has stalled in our collective ability to tackle stigma and its harmful consequences.
Therefore, cross-disciplinary and cross-disease research and collaboration are urgently required to move forward.

The Health Stigma and Discrimination Framework is intended to be a broad, orienting framework, akin to Pearlin’s Stress Process Model, which was developed to give some conceptual organization to the diverse lines of research that were – and still are – underway.

To demonstrate the cross-cutting nature of the Health Stigma and Discrimination Framework, we examine how it applies to both communicable and non-communicable health conditions. We review health conditions in roughly chronological order to provide perspective on how health-related stigma has been applied to new and emerging conditions throughout the course of human history. While the different domains of stigma articulated in the framework may not apply in exactly the same way across all health conditions, health-related stigmas share a number of commonalities that warrant underscoring.

Firstly, social exclusion rooted in stigma appears to be a response to threat, varying across health-related stigmas in the degree to which the source of threat is physical or symbolic. Across the various health-related stigmas, people negatively stereotype, display prejudice toward, and discriminate against the group and its members, although the content of the stereotype and the rationalization for the bias differ across the groups. In addition, these conditions differ in the extent to which they are concealable, and thus in the way people cope with and manage their stigmatized identity, but all involve anticipated, experienced, and internalized stigma. Finally, how people cope with and manage stigma often adversely affects their health, both in terms of the stress it causes and in the underutilization of services available to them (Table ).

Leprosy is perhaps the oldest stigmatized health condition known to humankind.
The fact that persons affected by leprosy often have a low socioeconomic status, a low level of education and little awareness of human rights increases their vulnerability to discrimination. Further, many persons affected seek to conceal their condition [64].

Epilepsy is a neurological condition characterized by chronic or recurrent seizures. Seizures can lead to individuals crying out, collapsing, bleeding or foaming from the mouth, and losing control of urine and/or stools, and can therefore be frightening to those experiencing or witnessing them. Epilepsy is both concealable and unpredictable – it may be impossible to know that someone has epilepsy until they experience a seizure, and it may be impossible to predict the onset of a seizure. Epilepsy-related stigma is largely driven by concerns about productivity and longevity, and fear of infection. Members of the general public endorse beliefs that people with epilepsy cannot contribute meaningfully to society and are poor prospects for marriage and employment [73]. Religious and supernatural beliefs act as facilitators of epilepsy-related stigma in some contexts, with some believing that epilepsy is a curse or caused by witchcraft.

Mental health-related stigma is often grounded in stereotypes that persons with mental health issues are dangerous, are responsible for their mental health issue, cannot be controlled or recover, and should be ashamed. Race and gender appear to intersect with mental health-related stigma, influencing its severity. For example, a higher risk for psychiatric disorders among Caribbean-born versus US-born black men has been reported.

Public policy responses in some countries have gone a long way towards reducing or ameliorating the harmful effects of mental health-related stigma at the organizational and institutional levels.
For example, in the US, the Americans with Disabilities Act prohibits discrimination on the basis of disability, including mental health conditions.

Cancer encompasses a large group of diseases characterized by the uncontrolled growth and spread of abnormal cells. Despite the fact that many cancers can be cured or at least effectively controlled, it remains a highly stigmatized condition, with some types of cancer more stigmatized than others.

The experience of cancer-related stigma has important psychological, physical, and social consequences. Psychologically, it is associated with depression, anxiety, and demoralization among patients with cancer.

The stigma associated with cancer varies across religions and related cultures. Although women who are members of ultra-Orthodox Jewish communities are at heightened risk for both breast and ovarian cancer, due to an increased probability of being carriers of certain genes associated with these cancers given their Eastern and Central European ancestry, they tend to have low screening rates, low health literacy, and poor health practices because of the stigmatization of cancer in these communities.

HIV is a potentially life-threatening disease caused by a virus that weakens the immune system and spreads through blood and sexual contact. HIV-related stigma is driven by several factors, including (1) fear of infection, where people living with HIV (PLHIV) may be perceived as threatening due to the infectious nature of HIV; (2) concerns about productivity and longevity, where PLHIV may be perceived as poor prospects for employment, friendships, and romantic relationships; and (3) social norm enforcement, since HIV risk is related to a range of socially stigmatized behaviors and therefore PLHIV are devalued due to their perceived associations with these behaviors [108].

PLHIV, including adolescents and young people, report a range of stigmatizing experiences from others, including social rejection, exclusion, gossip, and poor healthcare, and are at risk of internalizing stigma.
The stigma associated with weight is particularly strong, pervasive, and openly expressed. There seem to be minimal social norms prohibiting weight shaming, making it particularly problematic. It develops relatively early in socialization, emerging as early as 31 months.

Experiencing and anticipating weight-based stigma adversely affects the mental and physical health of people with overweight or obesity. In healthcare settings, women who perceive stigmatization from their providers report delaying use of preventive health services for fear of being judged or embarrassed.

The Health Stigma and Discrimination Framework provides an innovative and alternative way to conceptualize and respond to health-related stigmas. Applicable across a range of health conditions and diseases, the framework highlights the domains and pathways common across health-related stigmas and suggests key areas for research, intervention, monitoring, and policy. This crosscutting approach will support a more efficient and effective response to a significant source of poor health outcomes globally.

The Health Stigma and Discrimination Framework has practical applications for program implementers, policy-makers, and researchers alike, providing a ‘common ground’ to inform discourse around research priorities, developing innovative responses, and implementing them at scale. For program implementers, the framework can inform the combination and level of interventions most appropriate for responding to a specific type of health-related stigma. For policy-makers, the framework has the potential to lead to efficiencies in the funding and implementation of efforts to reduce health-related stigmas.
Lastly, for researchers, the framework should enable more concise measures of stigma that can be compared across health conditions and diseases by removing the disease siloes of the past and replacing them with common domains and terminology that is more accessible. The framework should also enable crosscutting research endeavors to develop and test interventions that more appropriately address the lived realities of vulnerable populations accessing healthcare systems. People are not defined by just one disease or one perceived difference; they have complex realities in which to maneuver in order to protect their health and wellbeing, and public health interventions must be responsive to these realities.

Olive pomace is a major waste product of olive oil production but remains rich in polyphenols and fibres. We measured the potential of an olive pomace-enriched biscuit formulation, delivering 17.1 ± 4.01 mg/100 g of hydroxytyrosol and its derivatives, to modulate the composition and metabolic activity of the human gut microbiota.

In a double-blind, controlled parallel dietary intervention, 62 otherwise healthy hypercholesterolemic subjects were randomly assigned to eat 90 g of olive pomace-enriched (OEP) biscuit or an isoenergetic control (CTRL) for 8 weeks. Fasted blood samples, 24-h urine and faecal samples were collected before and after the dietary intervention for measurement of microbiota, metabolites and clinical parameters.

Consumption of OEP biscuits did not impact the diversity of the faecal microbiota, and there was no statistically significant effect on CVD markers. A trend towards reduced oxidized LDL cholesterol following OEP ingestion was observed. At the genus level, lactobacilli and Ruminococcus were reduced in OEP compared to CTRL biscuits. A trend towards increased bifidobacteria abundance was observed after OEP ingestion in 16S rRNA profiles, by fluorescent in situ hybridization and by qPCR. Targeted LC–MS revealed significant increases in phenolic acid concentrations in 24-h urine following OEP ingestion, and 3,4-dihydroxyphenylacetic acid (DOPAC) and homovanillic acid, derivatives of hydroxytyrosol, were elevated in blood. A sex effect was apparent in urinary small phenolic acid concentrations, and this sex effect was mirrored by statistically significant differences in relative abundances of faecal bacteria between men and women.

Ingestion of OEP biscuits led to a significant increase in the metabolic output of the gut microbiota, with an apparent sex effect possibly linked to differences in microbiota makeup. Increased levels of homovanillic acid and DOPAC, thought to be involved in reducing oxidized LDL cholesterol, were observed upon OEP ingestion. However, OEP did not induce statistically significant changes in either ox-LDL or urinary isoprostane in this study.

Olives and olive oil are important and characteristic components of the Mediterranean diet, a dietary pattern shown to improve both physical and mental quality of life and reduce the risk of chronic diet-associated disease, especially cardiovascular disease (CVD). Olive oil is extracted from the fruit of Olea europaea, leaving waste in the form of olive water and solid olive pomace. The olive pomace and wastewater produced from oil extraction processes contain macromolecules such as polysaccharides, lipids, proteins and polyphenolic compounds (mainly of the tyrosol group), which can range from 1 to 8 g/l.

Study outcomes included the analysis of the variation of polyphenols and their metabolites in plasma and urine. Additional measures were the analyses of the anthropometric indices, the fasting plasma insulin, glucose and C-reactive protein (CRP), and the analysis of isoprostane F2 in urine. This study was powered for changes in blood LDL cholesterol and changes in faecal bifidobacteria.
Since previous studies have shown that fewer individuals are required for measuring changes in faecal bifidobacteria, and because of its clinical significance, the sample size calculation was performed only for changes in LDL cholesterol levels, based on measures from a previous parallel trial design using similar products.

DNA extraction was performed using the FastDNA™ SPIN Kit for Feces. Amplifications were performed with sets of primers specific for bifidobacteria and for total bacteria. Reactions were performed at the specified conditions (see reference) using the SsoFast EvaGreen Supermix kit (Bio-Rad) and a LightCycler 480 PCR machine (Roche). Quantifications were done using standard curves obtained by amplifying pure cultures of Bb12, which had been previously quantified by plate counting. For total bacteria, a mixture of bacterial DNA was obtained by pooling the total faecal genomic DNA from four faecal samples, which had been previously enumerated using FCM-FISH.

Targeted metabolomics analysis by UHPLC–ESI-MS/MS was carried out as previously described [34].

Statistical analysis was performed using STATISTICA 13.1 statistics software. Data were checked for normality using the Kolmogorov–Smirnov and Shapiro–Wilk tests. Treatment effects were assessed using one-way analysis of variance or the non-parametric Mann–Whitney test. Treatments were compared to each other using a paired Student’s t test; p values < 0.05 were deemed statistically significant. qPCR data analysis and urine and plasma metabolite data analysis were performed using factorial ANOVA with FDR correction.

A total of 73 suitable subjects were identified and accepted onto the trial to begin the dietary intervention. Eight did not finish the study.
Two subjects dropped out because of illness not related to the intervention (flu and surgery), one for family-related issues, and five did not like the taste of the product and dropped out. Exclusion occurred because of deviation from the protocol: one volunteer declared after finishing the study to have taken antibiotics, and two subjects were excluded because they did not consume the product as directed (more than 25% returned unconsumed). In total, 62 people completed the study successfully and were included in the statistical analysis. In detail, 32 females and 30 males, between 30 and 65 years old, with BMI from 20 to 29.9 (average 24 ± 3.4) and total cholesterol ranging between 180 and 240 mg/dl, completed the study. The female group had an average age of 48 years (± 8.5), while the male group average age was 49 (± 9.6). Between the treatment groups at baseline, no significant differences were measured in blood pressure (average 120/74 and 122/74), BMI (average 24 and 24) or total cholesterol (average 204 and 217).

16S rRNA gene sequencing revealed no significant differences between treatments in alpha diversity (p = 0.29) or beta diversity, assessed using QIIME. Only very small changes in relative abundance of the less dominant bacterial genera were observed. At a genus taxonomic level, a significant although very small increase of Bifidobacterium spp. was observed. Figure  shows Bifidobacterium, Ruminococcus and Lactobacillus for OEP and CTRL treatments between V1 and V2.

By FCM-FISH, no significant differences were observed between treatment OEP and CTRL compared to baseline values for Bifidobacterium spp. (1.85 ± 2.89 and 2.17 ± 3.33 vs 1.39 ± 1.70 and 2.20 ± 3.88), for Lactobacillus/Enterococcus spp. (1.07 ± 1.57 and 1.06 ± 1.31 vs 0.53 ± 0.78 and 0.61 ± 1.11) or for Ruminococcus spp. No significant changes in faecal bifidobacteria (p > 0.05, factorial ANOVA) or in total faecal bacteria were observed after qPCR analysis (data not shown).

The results of targeted urinary polyphenols are shown in Table . 3,4-Dihydroxyphenyl acetic acid (p < 0.001), hippuric acid (p = 0.014), caffeic acid (p = 0.003), homovanillic acid (p < 0.001), 3-hydroxyphenyl acetic acid (p = 0.001), sinapic acid (p = 0.002), scopoletin (p = 0.001), 2,4-dihydroxybenzoic acid (p < 0.001), 2,5-dihydroxybenzoic acid (p = 0.022) and 3-(3-hydroxyphenyl) propionic acid (p = 0.009) were increased after OEP feeding.

The results of targeted quantification of plasma metabolites by LC–MS are shown in Table . DOPAC (p = 0.002) and homovanillic acid (p = 0.003) were significantly higher after OEP treatment compared to CTRL. Most of the polyphenol metabolites were present at very low concentrations in plasma compared to urine, since the plasma samples were taken in a fasted state.

Subjects in either group, OEP or CTRL, were matched for age and sex. Little difference was observed in baseline clinical parameters between the groups before dietary intervention. After 8 weeks of treatment with either biscuit, no significant change in CVD or inflammatory markers was observed (Table ). There was, however, a sex effect: significantly more t-coutaric acid, naringenin, 4-hydroxybenzoic acid and 4-hydroxyphenyl acetic acid were excreted by male subjects compared to female subjects after the OEP treatment (Fig. ).

The primary objective of this study was to measure the impact of an olive pomace-enriched product (OEP) on the composition and metabolic output of the human gut microbiota. Considering the accepted physiological relevance of olive polyphenols, their apparent ability to protect LDL cholesterol particles from oxidative damage, and the fact that the gut microbiota appears to be intimately related to their metabolism in vivo, we measured changes in key olive-derived polyphenols, including tyrosol and HT, and their derived catabolites using a quantitative LC–MS-based strategy. The OEP biscuits did not have a major impact on the composition of the gut microbiota, but did induce subtle changes in relative abundances of certain bacteria. Significant differences in relative abundance of Lactobacillus, Ruminococcus, Gemellaceae and Anaerofustis were observed between treatments using community-level 16S rRNA profiling. More quantitative analysis using flow cytometry-coupled fluorescent in situ hybridization did not confirm statistical significance for bifidobacteria, lactobacilli or the Ruminococcus obeum-like bacteria. However, a trend was apparent, consistent between 16S rRNA gene sequencing, the probe-based FISH and qPCR, showing a small increase in bifidobacteria.

In terms of metabolic output, LC–MS-based targeted metabolomics confirmed that ingestion of the OEP biscuits resulted in a significant increase in urinary excretion of small phenolic acids derived from the metabolism of olive polyphenols. These small phenolic acids derive from the combined activities of human phase I and II biotransformation and the action of the gut microbiota. OEP ingestion resulted in a significant increase in excretion of homovanillic acid, 3,4-dihydroxyphenyl acetic acid, scopoletin, protocatechuic acid, sinapic acid, 3-hydroxyphenyl acetic acid, isoferulic acid, caffeic acid, hippuric acid, 3,3-hydroxyphenyl acetic acid, 2,5-dihydroxybenzoic acid and 2,4-dihydroxybenzoic acid.
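The metabolite p values above come from factorial ANOVA with FDR correction. The Benjamini–Hochberg procedure that underlies a typical FDR correction is compact enough to sketch directly; the raw p values in this example are illustrative, not the study's:

```python
def benjamini_hochberg(p_values):
    """Return Benjamini-Hochberg adjusted p values (q values), original order."""
    m = len(p_values)
    # Indices of p values sorted ascending.
    order = sorted(range(m), key=lambda i: p_values[i])
    q = [0.0] * m
    prev = 1.0
    # Walk from the largest p value down, enforcing monotonicity of q values.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, p_values[i] * m / rank)
        q[i] = prev
    return q

# Illustrative raw p values for several urinary metabolites.
raw = [0.001, 0.014, 0.003, 0.022, 0.009, 0.40]
adjusted = benjamini_hochberg(raw)

# Each adjusted value is at least as large as its raw counterpart.
assert all(a >= r for a, r in zip(adjusted, raw))
```

With many metabolites tested at once, this keeps the expected proportion of false discoveries among the reported "significant" compounds below the chosen threshold, rather than inflating it as uncorrected testing would.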
Many of these compounds derive from the breakdown pathways of the tyrosol group enriched in olives and/or the hippuric acid pathway, a pathway common to many classes of polyphenols. Both involve steps mediated by the gut microbiota, and these catabolites and similar small phenolic acids have been reported to be excreted following ingestion of olives or olive fractions in previous studies [36, 37].

In this current study, we also measured the ability of the OEP biscuit to modulate blood lipid profiles. Previous studies with whole plant foods, or oat-derived beta-glucan in particular, have shown significant and clinically meaningful reductions in cholesterol upon ingestion; however, no such reduction was observed here.

The quantities of small phenolic acids in urine differed between men and women upon OEP ingestion. Men excreted significantly more 3,5-diOH-benzoic acid, t-coutaric acid, naringenin, 4-hydroxybenzoic acid and 4-hydroxyphenyl acetic acid than women. A sex bias in polyphenol metabolism has been reported previously by Zamora-Ros et al. Sex differences were also apparent in the relative abundances of Akkermansia, Bifidobacterium, Bacteroides, Prevotella, Rikenellaceae, Barnesiellaceae, and Enterobacteriaceae. Some of these bacteria are linked to host physiology and protection from metabolic and cardiovascular disease (Akkermansia and Bifidobacterium in particular), but also Bacteroides and Prevotella in relation to obesity and traditional dietary paradigms [50].

In conclusion, ingestion of olive pomace extract-enriched biscuits mediated small changes within the composition of the gut microbiota. Delivering 17.1 ± 4.01 mg/100 g HT and its derivatives, the OEP biscuits induced a significant increase in excretion of small phenolic acids in urine, indicative of up-regulation of microbial polyphenol biotransformation in the intestine.
Quantities of some small phenolic acids differed in the urine of men and women, as did relative abundances of important members of the gut microbiota. OEP also led to a significant increase in homovanillic acid and DOPAC in fasted plasma samples, indicating either altered clearance of these compounds from the blood or extended release and uptake from the intestine. In either case, the higher levels of these biologically active compounds mediated by OEP ingestion warrant further investigation in acute or post-prandial studies specifically targeting LDL cholesterol oxidation and cognitive function.

Approximately 1 in 5 Canadians with HIV are unaware of their status. In many provinces, and especially in rural communities, barriers to HIV testing include lack of access, privacy concerns, and stigma. The availability of HIV point-of-care testing (POCT) is limited across Canada. Pharmacists are well positioned to address these barriers by offering rapid HIV POCT and facilitating linkage to care.

We will use a type-2 hybrid implementation-effectiveness design to assess a pilot HIV POCT model in one urban and one rural pharmacy in each of two Canadian provinces over 6 months. In this feasibility trial, the research aims include developing and assisting pharmacies in implementing the model, evaluating processes and determinants of program implementation, and evaluating the model's effects on client outcomes, preferences, and testing satisfaction. Using a community-based research approach, the research team will engage community stakeholders in each province, including individuals with lived experience, to inform the development of the pharmacy-based HIV testing model and support the research team throughout the study. A multipronged promotion campaign will be used to promote the study and facilitate recruitment. The pharmacy-based testing model will include pre/post-test counseling and linkage to care plans in addition to pharmacist-administered HIV POCT.
Pharmacists will complete a comprehensive training program prior to implementing the testing model. Client demographics and satisfaction will be assessed by surveys and interviews. Pharmacists will document the time required for testing and participate in a post-study focus group to discuss barriers and enablers. Implementation will be assessed qualitatively and quantitatively. The process of developing and implementing the model will be described using qualitative data and a logic model. Acceptability and barriers/enablers will be examined qualitatively based on survey responses. A preliminary costing assessment will consider the client, pharmacy, and government perspectives.

The results of this pilot will inform modifications to the HIV POCT model to optimize effectiveness and increase scalability. The study has national importance, providing valuable information on improving access to HIV testing. Future applications of this research may expand the role of pharmacists in offering POCT for other sexually transmitted/bloodborne infections as tests become available in Canada.

Trial registration: Clinicaltrials.gov, NCT03210701.

An estimated 21% of Canadians with human immunodeficiency virus (HIV) are unaware of their status. Point-of-care testing (POCT) for HIV has been shown to improve access to and uptake of HIV testing in areas where healthcare resources are limited [12]. Offering HIV POCT through community pharmacies may help improve access to testing and facilitate linkages to care, particularly in small towns lacking primary care clinics. A pilot study in the USA demonstrated acceptability and feasibility of pharmacist-provided HIV POCT.
While standard of care for HIV screening also includes testing for other sexually transmitted and bloodborne infections (STBBI) due to their common modes of transmission, currently there exists only one Health Canada-approved POCT for HIV, and at the time of study design there was no approved POCT for other STBBI in Canada.

Existing literature provides a strong foundation for conducting clinical effectiveness and implementation studies on HIV POCT in pharmacies, and supports the design and evaluation of implementation strategies to help understand which tools and approaches work best [15].

The APPROACH study will use a type-2 hybrid study design to help answer the following question: can we identify characteristics of a pharmacy-based HIV POCT program to (a) reach people at increased risk of HIV, especially those who have never been tested; (b) be broadly adopted across different settings; (c) be consistently implemented by different pharmacy staff members with moderate levels of training and expertise; (d) produce replicable and long-lasting effects; and (e) do so at a reasonable cost?

The main aim of the study is to assess the acceptability and feasibility of a multi-faceted, integrated model of HIV POCT in community pharmacies in both rural and urban settings in two Canadian provinces. The study has several objectives:

1. To assist community pharmacies in developing and implementing the model. This will include partnering with pharmacists, staff, and managers to provide training and resource development, as well as identifying community resources to facilitate development of linkage to care plans individualized to each setting.

2. To evaluate processes and determinants of the HIV POCT program implementation in pharmacies to strengthen the intervention and its implementation. This will include assessing the acceptability of the intervention and the barriers and facilitators to implementation, understanding how implementation strategies and tools affect adoption, effectiveness, and fidelity, and examining key determinants of sustainability.

3. To evaluate the effect of POCT implementation on client outcomes, preferences, and satisfaction with the testing experience.

4. To understand the demographic, social, and behavioral characteristics of clients who seek HIV testing at pharmacies, including the proportion who are first-time testers and reasons for testing at a pharmacy (versus other testing options), and to assess whether these characteristics differ between those who access testing in urban versus rural pharmacies or by province.

A secondary aim will be to develop a framework to assess the costs of implementing POCT in pharmacies. Costs will include those at the client level, pharmacy level, and government level.

Using a type-2 hybrid design, we will develop and assess (phase I) the acceptability and feasibility of a multi-faceted, integrated, contextualized model of HIV POCT in one urban and one rural pharmacy in two Canadian provinces (Alberta (AB) and Newfoundland and Labrador (NL)) as part of a 6-month pilot study (see Fig. ).

To ensure contextual factors are appropriately considered, the research team will be supported by provincial advisory committees (PAC). Using a community-based research approach, local stakeholders will be engaged and invited to form the PAC in each province. The PAC will be composed of stakeholders including pharmacists and managers, HIV-experienced health workers, decision makers, and community representatives, including community-based organizations and individuals at risk or with lived experience.
The PACs will provide advice and feedback to ensure the program developed is responsive to the needs of the local communities, and will advise on issues and barriers to support implementation and uptake of POCT programs, the linkage to care plans for each of the four individual pharmacies, and the study promotion plan. Together, the research team and the PAC will function as one cohesive unit, shaping both the implementation and the effectiveness aims (see Fig. 2).

Members of the research team will hold meetings with PAC members in each province to identify contextual issues related to implementing a pharmacy-based POCT program, obtain advice on how to reach populations at risk, and obtain feedback on a proposed model for POCT testing by pharmacists. Following the two provincial PAC meetings, the research team will meet to review the findings from the PAC meetings, feedback on the proposed POCT model, recruitment plans, and linkage to care plan issues. Outputs from this meeting will include lists of tools, resources, and supports to be developed to support the POCT program for the pharmacy settings in each province, as well as training and supports required for pharmacists and staff offering the POCT. Consultation with the PAC will take place throughout the development process to inform revisions as necessary.

Pharmacies in each province will be intentionally selected considering a variety of criteria, including motivation to offer HIV testing as an expanded pharmacist service, a private room available to provide testing, sufficient staffing to support the service, and the ability to provide linkage to care for patients who have a reactive HIV test result. One urban and one rural pharmacy site from each province will be selected to participate in the study.
Each pharmacy will be required to commit to designated testing hours each week that will be advertised on study promotional material, or to an appointment-based testing program if consistent, designated testing hours are not possible. Each participating pharmacy must have one pharmacist willing to undergo the complete training program and provide consent to participate in the study in order to offer testing and provide feedback on their experience at the end of the study.

Participating pharmacists will complete an extensive training program consisting of four parts: an online self-study module, a face-to-face training day, an in-pharmacy competency assessment, and a proficiency assessment. Prior to the training day, pharmacists will complete an online continuing education program which covers the basic elements of HIV POCT and watch an online video on the use of the INSTI® HIV-1/HIV-2 rapid antibody test. The first half of the training day will consist of didactic and discussion sessions covering the following topics: HIV 101; an overview of the study process and documentation tools; pre-/post-test counseling; and client supports. The second half of the day will be a hands-on session for pharmacists to learn how to use the INSTI® HIV-1/HIV-2 rapid antibody kits, interpret and explain results, perform quality control procedures, and practice consenting and counseling clients. The importance of client support, medical follow-up, and linkage to care will be a significant component of the training program.

Immediately prior to implementation of the POCT program, a pharmacy site visit will be conducted for competency assessment. The pharmacist will be observed completing the entire testing process and study procedure, and their performance will be evaluated by a member of the research team using a checklist to ensure all steps are appropriately followed.
Within 1 week, a proficiency assessment will be conducted in which each pharmacist tests a series of blinded samples provided by the provincial public health laboratories and completes a multiple-choice test. The POCT and multiple-choice test results will be relayed to the public health laboratories to check accuracy.

The study population consists of adult clients in rural and urban areas of two Canadian provinces (AB and NL) who wish to be tested for HIV. Adults who are 18 years of age or older, who request HIV testing, who are not known to have HIV infection, and who provide their provincial healthcare number and informed consent will be eligible to participate in the study.

Clients will not be specifically approached or invited to participate in the study. Instead, a variety of techniques will be used to promote the study to the general public and high-risk groups in each region, and individuals will self-select and choose to approach the pharmacy to request HIV testing. Feedback from PAC members will inform the development of specific promotional materials, such as posters, flyers, and postcards. Promotional materials will be posted and distributed to clients by community partners, including AIDS service organizations, organizations working with at-risk clients (such as people who inject drugs and sex workers), and through participating pharmacies. Posters will also be displayed in public gathering places in each community where testing is being offered, as well as in neighboring communities.

Communications staff at each academic institution will develop and support a media relations campaign to promote the study.
Members of the research team will meet with community partners throughout the study to share interim findings and to maintain interest in the study, encouraging ongoing promotion by each organization's staff and clientele. A focused advertising campaign will be developed for online "hook up" sites to appeal to site users, encouraging them to get tested if they are having sex, and indicating where testing is available through the study.

Clients may request an HIV test at the pharmacy during advertised walk-in hours or by calling the pharmacy to arrange an appointment for testing. The client may request the test verbally or by passing a note at the pharmacy counter to enhance privacy. Pharmacy staff will bring the interested client to meet the pharmacist in a private counseling room. The pharmacist will then screen the client for eligibility, explain the study, and obtain informed written consent. Pre-/post-test counseling and administration of the HIV test will then be performed (see Fig.).

Pharmacists will administer the INSTI® HIV-1/HIV-2 rapid antibody test using a finger-prick blood sample. The test is easy to administer by trained personnel and has been used at numerous community-based testing sites across Canada by health professional and peer testers. The results are read within 60 s and are reported as reactive, non-reactive, indeterminate, or invalid (see Fig.).

Linkage to care plans will be individualized for each pharmacy to utilize existing community resources to achieve the goal of quick, responsive, supportive linkage to care. Before leaving the pharmacy, clients with reactive POCT results will receive a bloodwork requisition from the pharmacist with counseling on where they can obtain confirmatory testing. Confirmatory bloodwork is ordered by the provincial HIV program nurse practitioner (NL) or medical director (AB), who will be notified by the pharmacist of the reactive POCT result.
The pharmacist will provide the client's healthcare number and contact information so that follow-up can occur with the client as soon as possible. In both provinces, local support services specific to each pharmacy will be identified, and pharmacists will undergo extensive training on how to provide client support, including helping the client identify their own supports and making a plan for the next 2–7 days while awaiting confirmatory test results.

For clients with non-reactive HIV POCT results, post-test counseling will focus on education regarding how to reduce their risk of HIV exposure in the future and on recommending testing for additional STBBI. Clients will receive information on where they can access additional STBBI testing. Support services are also available to clients if needed. Neither participation in this trial nor the results of the HIV test will be stored in the client's pharmacy records or shared with the client's family physician or third-party payers.

A process evaluation approach will be used to identify potential and actual influences on the conduct and quality of implementation as part of the fidelity assessment. Summative evaluations will assess patient-level health outcomes (effectiveness aims) and process/quality measures (implementation aims). Research staff will conduct biweekly site checks at each pharmacy to monitor uptake of testing services, ensure integrity of data collection, monitor fidelity to the study protocol, answer questions, and provide support to the pharmacists. The research team will also provide ongoing support to the pharmacists by being available for questions. A monthly conference call for all testing pharmacists with members of the research team will be offered to share experiences and challenges.
Research team meetings will take place bimonthly, and liaison with PAC members will occur through individual meetings, calls, and teleconferences throughout the study.

This pilot study was approved by the Newfoundland and Labrador Human Research Ethics Board (reference no. 2016.178) and the University of Alberta Health Research Ethics Board (reference no. Pro00066308). All participating pharmacists and clients included in the study sign consent forms as per ethical approval. The confidentiality of clients will be preserved: all participants will be allocated a unique identifier, and all trial data collected will be held in a linked, anonymized form. Identifiable information will be stored separately from trial data.

Patients will be recruited over a 6-month period in each of the two provinces, between February 2017 and October 2017 (see Fig.). Participating pharmacists will complete de-identified data collection forms for each test performed to document the number of tests performed and the time required for each step of the HIV testing process. At the end of the study, a focus group of pharmacists, pharmacy managers, and support staff from each site will be held to determine their experiences and perceptions.

Qualitative and quantitative data will be collected. Clients will complete a de-identified survey prior to receiving their HIV test to collect demographic information, HIV risk factors, and testing history (first HIV test or not). After completion of the post-test counseling, clients will complete a second brief survey to assess their perception of the testing experience (including factors that influenced their decision to get tested at the pharmacy and whether/where they would have sought HIV testing otherwise). The surveys will include questions to obtain data for the cost assessment. Client surveys will be collected in a sealed envelope, so the testing pharmacist cannot see the responses.
Data will primarily be quantitative, with some open-ended questions. Clients will also be asked if they are willing to participate in a telephone interview within a week of their test to further explore, qualitatively, their perceptions of their testing experience. If they agree, the client will provide a phone number for a research assistant to call them at their preferred time to administer the interview. Data from this interview will be anonymous and not linked to their other data or test results.

As this was a pilot feasibility study, a sample size calculation was not performed. We estimate a minimum of 30 clients tested over the 6-month pilot to collect sufficient data to inform any modifications necessary prior to a broader-scale implementation of the intervention in phase II of the study (see Fig.).

We will describe the process of developing and implementing the HIV POCT program in pharmacies in urban and rural settings in two provinces. This description will be developed from interviews, focus groups, notes from meetings and site visits, and other qualitative data collected during the study. Qualitative data will be transcribed verbatim. Two reviewers will independently review, code, and thematically analyze the data, and subsequently compare their results. Agreement on final patterns and themes will be achieved in an iterative process, and discrepancies will be discussed and resolved. Qualitative data will be compared within and between settings and over time. Other sources of qualitative data will also be analyzed. We will develop a logic model to describe our program plan, and the process and summative evaluations will describe and evaluate implementation of the program.

To identify acceptability, barriers, and enablers, a variety of analytic approaches will be used. Setting measures will be described using frequency distributions and measures of central tendency.
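Frequency distributions and central-tendency summaries of this kind need only standard tooling; a minimal sketch in Python, where the per-site test counts and site labels are invented purely for illustration:

```python
# Invented per-site counts of completed HIV POCTs over the pilot (illustrative only).
import statistics
from collections import Counter

tests_per_site = [4, 9, 2, 7, 4, 11, 4, 6]
site_type      = ["urban", "rural", "rural", "urban",
                  "rural", "urban", "urban", "rural"]

# Frequency distribution of a setting measure (site type)
print("frequency by site type:", Counter(site_type))

# Measures of central tendency for a count measure
print("mean tests/site:  ", statistics.mean(tests_per_site))
print("median tests/site:", statistics.median(tests_per_site))
print("mode tests/site:  ", statistics.mode(tests_per_site))
```

The same pattern applies to any of the setting or performance measures collected on the data collection forms.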
Performance measures from pharmacists, staff, and clients will be analyzed using parametric and non-parametric tests. Comparisons will be made between urban and rural pharmacy test sites, within and between provinces.

To understand how the implementation strategies and tools affect adoption, fidelity, and effectiveness, and to identify key determinants of sustainability, we will review the pharmacy settings before and after the implementation phase, and the pharmacy profile and logic model will be revisited and revised accordingly following implementation. We will conduct bivariate analyses to describe and test relationships between sustainability and characteristics of the settings and staff. Implementation fidelity will be assessed using the framework developed by Carroll et al.

Descriptive analyses will be used to describe client outcomes, preferences, and satisfaction with the testing experience, as well as demographic, social, and behavioral characteristics of clients who seek HIV testing at the pharmacies. Bivariate and multivariate analyses will be conducted to assess whether the outcomes and client characteristics differ between those who access testing in urban versus rural pharmacies or between the two provinces. For each primary and secondary outcome, the results and the estimated effect size and its precision will be presented.

Preliminary cost information will be collected from the patient surveys and from the pharmacist. In addition, costs of implementation, delivery, and sustainability for pharmacies will be explored. These costs include the costs of training, consumables to perform the testing, administration, and reimbursement for the service (by the government or other third-party payers). Costs at the government level include procurement and distribution of the POC test kits, costs of confirmatory testing for those who tested as true and false positives using the POCT, and reimbursement (as described above).
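The urban-versus-rural comparisons planned above can be run with standard parametric and non-parametric tests; a minimal sketch (the per-test durations below are invented, not study data) using SciPy:

```python
# Illustrative comparison of a performance measure (minutes per completed test)
# between urban and rural pharmacy sites. All values are made up.
from scipy import stats

urban = [22, 25, 19, 24, 21, 23, 20]
rural = [28, 26, 31, 27, 29, 25, 30]

# Parametric comparison (assumes approximate normality of the measure)
t_stat, t_p = stats.ttest_ind(urban, rural)

# Non-parametric alternative (no distributional assumption)
u_stat, u_p = stats.mannwhitneyu(urban, rural, alternative="two-sided")

print(f"t-test: t={t_stat:.2f}, p={t_p:.4f}")
print(f"Mann-Whitney U: U={u_stat:.1f}, p={u_p:.4f}")
```

Reporting both tests side by side is a common safeguard when sample sizes per site are small, as expected in a pilot.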
These costs will be critical in developing a framework for a subsequent full economic evaluation of the POCT strategy versus the usual testing options available in each province.

Any protocol amendments will be submitted to the NL Health Research Ethics Board and the University of Alberta Health Research Ethics Board for approval and noted in the registered protocol at ClinicalTrials.gov. All investigators, research assistants, and data analysts will have access to the trial data. Data collection is underway but not yet complete. Data cleaning and analysis have not commenced.

In this study, we have used an effectiveness-implementation hybrid design that takes a dual focus a priori in assessing clinical effectiveness and implementation. The design does not replace large-scale trials that test effectiveness, since it is a small-scale, under-powered study focusing on implementation of the intervention. Instead, it determines potential effectiveness for a main trial while assessing the feasibility of implementation.

The results of this implementation/effectiveness pilot study will inform modifications to the HIV POCT model for Canadian pharmacies to optimize effectiveness, increase scalability, and help construct a cost-effectiveness framework to assess sustainability. This innovative model of HIV testing will utilize existing resources (pharmacists) and infrastructure to improve access to testing for those who are not getting tested, either because they cannot or choose not to access the traditional healthcare system. The impact of this study has provincial and national importance, as it will provide valuable information about how to effectively and efficiently improve access to HIV testing in specific communities using a model that can be adapted for other communities and provinces within Canada, and beyond.
If it is deemed to be feasible, effective, and acceptable, the pharmacy-based HIV testing model may become an important mechanism to increase testing among those at risk, facilitate timely diagnosis, and support client entry into care.

Future applications of this research may expand the role of pharmacists in offering POCT for other STBBI as this testing technology becomes more widely available in Canada. Pharmacists could play a more direct role in HIV primary prevention by offering pre-exposure prophylaxis (PrEP) programs in provinces where pharmacists have the authority to prescribe medications: providing education and counseling, offering regular HIV testing (by POCT), ordering laboratory tests for confirmatory testing, monitoring PrEP therapy, and providing timely linkages to care for those with reactive results and/or in need of additional STBBI testing. With the scope of pharmacist practice expanding rapidly in Canada as well as in other countries, utilizing pharmacists to expand access to HIV testing may become an important adjunct to traditional/standard HIV testing options in many areas.

There will be no publication restrictions for the full trial results, and publication will be sought in peer-reviewed journals. The authors plan to hold stakeholder meetings to disseminate study results, as well as present the results at local and national conferences.

Despite the progress in surgical techniques and antibiotic prophylaxis, opportunistic wound infections with Bacillus cereus remain a public health problem. Secreted toxins are one of the main factors contributing to B. cereus pathogenicity. A promising strategy to treat such infections is to target these toxins rather than the bacteria. Although the exoenzymes produced by B. cereus have been thoroughly investigated, little is known about the role of B. cereus collagenases in wound infections. Here, the B. cereus culture supernatant (csn) and its isolated, recombinantly produced collagenase ColQ1 are characterized.
The data reveal that ColQ1 causes damage to dermal collagen (COL). This results in gaps in the tissue, which might facilitate the spread of bacteria. The importance of B. cereus collagenases in disease promotion is also demonstrated using two inhibitors. Compound 2 shows high efficacy in peptidolytic, gelatinolytic, and COL degradation assays. It also preserves the fibrillar COLs in skin tissue challenged with ColQ1, as well as the viability of skin cells treated with B. cereus csn. A Galleria mellonella model highlights the significance of collagenase inhibition in vivo.

This bacterium is the major cause of emetic and diarrheal food poisoning worldwide, but it is also associated with serious opportunistic non-gastrointestinal-tract infections.[2] Moreover, it is able to cause wound infections.[3,4] Like many pathogenic bacteria, B. cereus is currently evolving multi-drug resistance, which narrows the choice of possible treatments and consequently increases economic costs, morbidity, and mortality rates. To overcome this therapeutic crisis, developing new antibiotics alone will not produce lasting success; alternative strategies need to be employed to cope with resistance development. To combat the emergence of resistance, the development of antivirulence agents, which target the pathogenicity of bacteria rather than their viability, has gained major interest. These agents specifically block the virulence factors involved in bacterial invasion and colonization of the host. This reduces the selection pressure for drug-resistant mutants and provides a window of opportunity for the host immune system to eliminate the bacteria.[11,13] The pathogenicity of B.
cereus arises from the production and dissemination of tissue-destructive exoenzymes such as hemolysins, phospholipases, and proteases.[15,16] It is believed that these exoenzymes assist in maintaining the infection, allowing the bacteria to reach multiple sites in the body and to evade the immune system. There have been only a few studies supporting the idea of Bacillus exoenzymes contributing to the pathology of wound infections, and little evidence elucidating the direct role of specific toxins during the infection.[18]

The dermal layer makes up 90% of the skin structure. The architecture and integrity of the dermis are maintained by COL; COL I, II, and III are predominant in the extracellular matrix (ECM) of the skin.[22] COL fibers are supramolecular structures; the COL molecule is built by the regular packing of three supertwisted alpha helices.[22] The individual alpha chains consist of a repeated three-amino-acid motif (Glycine-X-Y), with X and Y often being proline (28%) and hydroxyproline (Hyp) (38%).[22] Because of their highly intertwined structure and high content of specific amino acids, fibrillar COLs resist most proteases and can be degraded only by certain types of mammalian or bacterial collagenases with unique specificities for COL.[24,25]

The skin is the largest and most exposed of all human organs and, therefore, the most prone to injury.[27] After initial local colonization, bacteria can potentially invade deeper tissues with the help of necrotic virulence factors such as collagenases.[28] By degrading the structural COL scaffold of the ECM at multiple sites, bacterial collagenases assist the bacteria in invading the tissue.[30] Bacterial collagenases belong to the zinc metalloprotease family M9. They harbor a collagenase unit, which is accompanied by accessory domains involved in substrate recognition and COL swelling. To date, only a few collagenase-secreting bacterial genera have been identified.
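The (Gly-X-Y)n repeat described above is simple to verify programmatically; a toy sketch (the peptide fragments below are illustrative, not real COL sequences):

```python
# Check whether an alpha-chain fragment follows the collagen (Gly-X-Y)n repeat:
# the sequence length must be a multiple of three, and the first residue of
# every triplet must be glycine (G).
def is_gly_x_y(seq: str) -> bool:
    if len(seq) == 0 or len(seq) % 3 != 0:
        return False
    return all(seq[i] == "G" for i in range(0, len(seq), 3))

# 'O' is a common one-letter code for hydroxyproline (Hyp).
fragment = "GPOGPOGAOGPK"   # four collagen-like (Gly-X-Y) triplets
broken   = "GPOAPOGAO"      # second triplet starts with A, not G

print(is_gly_x_y(fragment))
print(is_gly_x_y(broken))
```

Scanning a chain this way makes the resistance argument concrete: any protease whose cleavage preference conflicts with a glycine at every third position has few accessible sites.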
Clostridium collagenases such as ColH and ColG are the best-characterized ones. Bacillus collagenases have received less attention; their contribution to wound infections, however, is assumed to be a main factor in the wound-invasion stage. Bacterial wound infection is a public health problem occurring when bacteria adhere to impaired skin.

Here, we report on the establishment of a simple pre-clinical ex vivo pig-skin model to evaluate the effect of COL degradation by B. cereus in the skin. Our results showed that the model B. cereus collagenase ColQ1 degrades the dermal fibrillar COLs and confirmed it as a promising drug target. Using two small molecules, which we had recently described as inhibitors of the collagenase ColH (produced by Clostridium histolyticum) and the elastase LasB (produced by Pseudomonas aeruginosa), we could substantiate that these inhibitors also inhibit B. cereus collagenase activity. Indeed, we found that these compounds were able to protect the integrity of the dermal COL in an ex vivo pig-skin model treated with recombinant ColQ1, confirming their potency as broad-spectrum inhibitors of bacterial collagenases, as suggested earlier by Schönauer et al. Moreover, these compounds reduced the in vitro cytotoxic effects of the B. cereus csn, containing various collagenases, toward fibroblast and keratinocyte cell lines, restored their morphology, and improved their adhesion. The toxicity of B. cereus csn and ColQ1 was verified in vivo in Galleria mellonella larvae. Furthermore, we showed that treatment with collagenase inhibitors significantly improved their survival rate.

We used ColQ1 and the csn of B. cereus ATCC 14 579 to challenge our skin model. ColQ1 was selected as a model Bacillus collagenase to study the isolated effect of this virulence factor in a skin wound setting. ColQ1 is a close homologue of ColA of B.
cereus ATCC 14 579 (Uniprot: Q81BJ6), and similarly to ColA, it displays a remarkably high peptido- and collagenolytic activity compared to clostridial collagenases. Both enzymes share an overall sequence identity of 72% and a similarity of 84%. Sequence conservation is higher within the collagenase unit, i.e., the catalytic core of the enzyme, increasing to 79% and 89%, respectively.[36]

The proteolytic activities of ColQ1 and of the B. cereus csn (which represents a more complex source of COL-degrading factors)[36] were validated in an in vitro peptidolytic assay using a custom-made collagenase-specific quenched fluorescence substrate. The csn of B. cereus showed peptidolytic activity that could be completely abrogated by the addition of 20 mM EDTA and was only marginally affected by serine and cysteine protease inhibitors, consistent with its metalloprotease mechanism.

To study the effects of B. cereus csn and ColQ1 during wound infection, an ex vivo pig-skin model of B. cereus infection was established. For this purpose, porcine ear skin biopsy punches were treated with different concentrations of B. cereus csn or ColQ1 to simulate COL matrix degradation after infection with B. cereus. The release of hydroxyproline (Hyp) was used as a biomarker for COL breakdown.[40] While we did not observe Hyp release in non-treated skin preparations, a significant release was detected in skin treated with various concentrations of Bacillus csn and ColQ1; with csn, Hyp levels reached 60 ± 4 μg mL−1.

To study whether we could inhibit ColQ1 with small molecules, we investigated two previously described inhibitors of bacterial metalloproteases. Compound 1 is stable and selective over several human metalloproteases, while compound 2 is a moderately active LasB inhibitor (IC50 = 17.3 × 10−6 M) and was a hit in a virtual screening study performed on the active site of ColH.
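Pairwise identity figures like the 72%/84% quoted above for ColQ1 and ColA are computed over an alignment: identical residues divided by aligned (non-gap) positions. A minimal sketch with toy sequences (not the real ColQ1/ColA sequences):

```python
# Percent identity between two pre-aligned sequences of equal length;
# gap positions ('-') are excluded from the denominator.
def percent_identity(a: str, b: str) -> float:
    assert len(a) == len(b), "sequences must be aligned to equal length"
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    matches = sum(x == y for x, y in pairs)
    return 100.0 * matches / len(pairs)

seq1 = "MKTAYIAKQR-LD"   # toy aligned sequences
seq2 = "MKSAYIAKQRQLD"
print(f"{percent_identity(seq1, seq2):.1f}% identity")
```

Similarity is computed the same way, except that conservative substitutions (as scored by a substitution matrix such as BLOSUM62) also count as matches, which is why the similarity figure is always at least as high as the identity figure.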
Compounds 1 and 2 inhibit ColQ1 with IC50 values of 183 ± 7 × 10−6 M and 95 ± 4 × 10−6 M, respectively; the impact of these two inhibitors on ColQ1 activity was measured in vitro in a FRET-based assay. In addition to testing compounds 1 and 2 on purified ColQ1, we tested them on the B. cereus csn, which contains a heterogeneous mixture of ColA isoforms and other collagenase homologs. The B. cereus csn was treated with 1.83 mM (10 × IC50) of compound 1. The FRET-based assay revealed that the proteolytic activity furnished by the csn could be reduced by 84 ± 2% compared to the uninhibited control. Remarkably, this concentration led to a decrease in the proteolytic activity of 57 ± 7%. Gels incubated with i) serine and cysteine protease inhibitors, ii) compound 1, and iii) compound 2 showed a selective reduction of the gelatinolytic activities in all cases.

The amide oxygen and nitrogen atoms form hydrogen bonds with the main-chain amide nitrogen atom of Tyr428 and the carbonyl oxygen of Glu487, respectively, while the aryl ring of compound 1 is involved in a π-π-stacking interaction with the imidazole ring of His459 (3.9 Å). In contrast to compound 1, compound 2 has a different, much larger molecular backbone, but shares the same thiol prodrug moiety with the co-crystallized N-aryl mercaptoacetamide, i.e., a thiocarbamate group. Similarly to the N-aryl mercaptoacetamide, we found that the deprotonated sulfur atom of compound 2 can coordinate the active-site zinc cation (2.3 Å), while the amide oxygen forms a hydrogen bond with the main-chain nitrogen atom of Tyr428.

For the ex vivo experiments, concentrations of compounds 1 and 2 (0.05–50 × 10−6 M), chosen based on their activity in the different in vitro assays, along with 300 × 10−9 M ColQ1 were used. Non-treated and ColQ1-treated samples were used as controls.
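The inhibition percentages and IC50 values quoted in this section follow from normalizing initial FRET-substrate cleavage rates to the uninhibited control and, where solubility prevents a full sigmoidal fit, regressing linearly within the 40–60% inhibition window. A minimal sketch with invented rate data (not the paper's measurements):

```python
import numpy as np

# Invented initial cleavage rates (arbitrary units) at four inhibitor
# concentrations, plus the uninhibited control rate.
rate_no_inhibitor = 100.0
conc_uM = np.array([25.0, 50.0, 100.0, 150.0])   # inhibitor concentration, µM
rates   = np.array([82.0, 58.0, 47.0, 33.0])     # residual activity

# Percent inhibition relative to the no-inhibitor control
inhibition = 100.0 * (1.0 - rates / rate_no_inhibitor)

# Linear regression restricted to points within 40-60% inhibition,
# the fallback used when solubility limits preclude a non-linear fit.
mask = (inhibition >= 40) & (inhibition <= 60)
slope, intercept = np.polyfit(conc_uM[mask], inhibition[mask], 1)
ic50 = (50.0 - intercept) / slope
print(f"estimated IC50 ≈ {ic50:.0f} µM")
```

With only two points inside the window, the "fit" is an interpolation; with more points, the same code yields a least-squares estimate.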
After one day of incubation, we quantified the release of Hyp and visualized the dermal COL in the skin tissue using SHG and epifluorescence microscopic techniques. Overall, compound 1 resulted in a reduction in Hyp release in a concentration-dependent manner, as we had shown previously.

We further investigated whether the compounds could protect skin cells from the cytotoxicity of the B. cereus csn. For this purpose, normal human dermal fibroblasts (NHDF) and human epidermal keratinocytes (HaCaT) were chosen due to their ability to produce fibrillar COLs and their roles during wound healing. These cells were exposed to different concentrations of B. cereus csn (0−15% v/v). The cytotoxic effect of the csn was evaluated by assessing viability using a colorimetric MTT assay and live/dead staining, followed by visualization with epifluorescence microscopy. Rescue experiments with the inhibitors were performed at a fixed concentration of the B. cereus csn, due to the prominent cytotoxic effects observed at this concentration in both NHDF and HaCaT cell lines. Cell viability was dose-dependent, but a significant rescue of viability (80 ± 20% and 70 ± 25%) was observed at 600 × 10−6 and 100 × 10−6 M of compounds 1 and 2, respectively, in both NHDF and HaCaT cell lines.

The survival of ColQ1-injected G. mellonella larvae was monitored daily for eight days. Larvae injected with the catalytically inactive mutant enzyme survived. In contrast, eight days after treatment with active ColQ1, survival dropped to 0%, 20%, and 50% at concentrations of 500 × 10−9, 300 × 10−9, and 100 × 10−9 M enzyme, respectively, compared to the control. Larvae were also injected with B. cereus-derived csn at concentrations of 35−100% (v/v), and their survival was studied for eight days after injection. After five days, only 15% of larvae injected with 100% (v/v) csn survived. With 65% and 35% (v/v) of the csn, the death of the larvae was delayed. Injection of B.
cereus csn together with compounds 1 (50 × 10−6−300 × 10−6 M) or 2 (5 × 10−6−20 × 10−6 M) improved the survival of the larvae, even though the highest inhibitor concentration is 300-fold the collagenase concentration in the csn. This indicates that the action of the csn on the larvae might be connected to other virulence factors (such as sphingomyelinase and non-hemolytic enterotoxins) as well as collagenase, which could work together to kill the larvae.[64] This also suggests that both compounds might target other virulence factors in the csn; further experiments could be performed in the future to confirm this.

The toxic effect exerted by B. cereus collagenases might be related to the activation of melanization mechanisms in the larvae, since the dead larvae turned black, as suggested for other metalloproteases.[70] In addition, it has been shown that collagenases digest hemolymph proteins of the larvae into small peptides, which trigger an immune response finally leading to their death.[69]

Therefore, full characterization of virulence factors is essential to understand their role during infection and to predict whether their inhibition is beneficial for the treatment. In the present work, we characterized the collagenolytic activity of the recently discovered recombinant B. cereus virulence factor ColQ1 and of the B. cereus csn. In addition, we evaluated the biological effects of two small molecules that inhibit collagenases of B. cereus and other pathogens. In this context, an ex vivo pig-skin model of B. cereus infection was used to investigate B. cereus collagenases and the consequences of their inhibition. This model highlights the ability of B. cereus collagenase to decompose fibrillar COLs and disrupt their regular alignment. This mechanism might lead to accelerated bacterial infiltration and penetration into deeper sites of the host.
Moreover, as previously reported, this mechanism is one of the main obstacles to the wound-healing process.[73] We demonstrated that B. cereus csn collagenases induced cytotoxicity in fibroblasts and keratinocytes, which could be minimized using bacterial collagenase inhibitors. In an in vivo model using G. mellonella larvae, we showed that ColQ1 and B. cereus csn are toxic and induce the death of the larvae. Treatment with collagenase inhibitors significantly increased their survival rate. These findings provide new insights into the functions of B. cereus collagenases in wound infections and into the importance of their inhibition by antivirulence agents, which could represent a promising therapeutic option. Virulence factors and their inhibitors are currently gaining wide attention because of their potential to limit the evolution of antibiotic resistance and to treat infections by reducing bacterial pathogenicity.

The collagenase unit of ColQ1 from B. cereus strain Q1 was expressed and purified as previously described. The csn of the B. cereus ATCC 14 579 strain was prepared as described before: B. cereus was grown in RPMI medium (Gibco) at 30 °C ON (overnight) with 160 rpm shaking. The next day, the csn was harvested by centrifugation at 3000 × g for 10 min at 4 °C. The csn was sterile-filtered with a 0.22 μm filter (Greiner); then, it was aliquoted and stored at −80 °C until use.

IC50 measurements were performed as previously reported. In short, ColQ1 was incubated with compound 2 at RT for 1 h. The reaction was initiated by the addition of 2 × 10−6 M of the collagenase-specific peptide substrate Mca-Ala-Gly-Pro-Pro-Gly-Pro-Dpa-Gly-Arg-NH2 (Mca = (7-methoxycoumarin-4-yl)acetyl; Dpa = N-3-(2,4-dinitrophenyl)-L-2,3-diaminopropionyl). The fluorescence was monitored for 2 min at 25 °C. The final concentrations were 1 × 10−9 M ColQ1, 250 mM HEPES pH 7.5, 400 mM NaCl, 10 mM CaCl2, 10 × 10−6 M ZnCl2, 2 × 10−6 M FS1-1, and 0 to 120 × 10−6 M compound 2.
Due to poor compound solubility, the DMSO concentration was adjusted to 5%. The percentage of enzyme inhibition was calculated in relation to a blank reference without compound added. All experiments were performed in triplicate. Limited by the solubility of the compound, the IC50 value could not be determined using non-linear regression, but was determined by linear regression using only data within the 40−60% inhibition range. Regression analysis was performed using GraphPad Prism 5. To determine the peptidolytic activity of the B. cereus csn versus FS1-1, a similar assay as described above was performed. Csn samples were freshly thawed and used in the assay at three different concentrations. Samples were preincubated with buffer control or inhibitors for 30 min at RT before the reactions were started upon addition of 2 × 10−6 M FS1-1. The final inhibitor concentrations were: 20 mM EDTA, 1× EDTA-free cOmplete protease inhibitor cocktail as serine and cysteine protease inhibitors, 1.83 mM compound 1, and 95 × 10−6 M compound 2, at a final DMSO concentration of 5%. All results were extrapolated to 100% v/v, and inhibition rates were normalized to the uninhibited control. Experiments were performed in triplicate and are presented as means ± standard deviation.

For gelatin zymography, aliquots of the B. cereus csn were loaded onto 10% SDS-PAGE gels containing 0.2% gelatin and separated by electrophoresis at 4 °C. After separation, the gels were each sliced into 4 pieces (marker lane plus 2 sample lanes) and incubated in the respective renaturation buffer supplemented with (i) nothing (control), (ii) 1× EDTA-free cOmplete protease inhibitor cocktail, (iii) 300 × 10−6 M compound 1, or (iv) 100 × 10−6 M compound 2 at RT for 2 × 30 min with gentle agitation.
The gel slices were then equilibrated in the respective developing buffer supplemented with the aforementioned compounds (i–iv) at RT for 2 × 10 min with gentle agitation, and then incubated ON at 37 °C in fresh, supplemented developing buffer. Transparent bands of gelatinolytic activity were visualized by staining with 0.1% Coomassie brilliant blue G-250 dye ON. Gels were scanned using a Chemi-Doc XRS+ imaging system, and image analysis was performed with Image Studio Lite v5.2 software. The integration area of the indicated molecular-weight regions was measured, and values were expressed as a ratio of the control area from the same gel. Results were thereby standardized for each gel and expressed in dimensionless units. Results were obtained from two separate experiments for each condition.

Acid-soluble type I COL from bovine tail (Thermo Fischer Scientific) at a final concentration of 1 mg mL−1 was digested at 25 °C by 50 ng ColQ1 in 250 mM HEPES, 150 mM NaCl, 5 mM CaCl2, 5 × 10−6 M ZnCl2, pH 7.5. Compounds 1 and 2 were included at different concentrations and incubated together with COL and ColQ1 for 3 h. The reaction was stopped by the addition of 50 mM EDTA, followed by visualization on 12% SDS-PAGE gels. Results were obtained from two independent experiments for each compound.

The synthesis was performed according to the synthetic scheme that we published before.[30]

For compound 2, the thiolate derivative was used as input for docking, as the mercaptoacetamide compound is known to hydrolyze in aqueous solution. The final docking was performed using the Molecular Forecaster suite. In short, the protein structure was prepared using the PREPARE and PROCESS modules with a ligand cutoff of 7 Å (particle water option). The ligands were prepared using the SMART module. Docking calculations were performed using FITTED.
The docking software was validated via redocking of the ligand 9NB, resulting in an RMSD of 0.43 Å. The crystal structure of the peptidase domain of ColH (PDB 5o7e, 1.87 Å resolution) was used as the target model for the docking. Ligand files were prepared as input for the docking software using OpenBabel (protonation state). The PyMOL Molecular Graphics System (version 2.0.6.0a0, Schrödinger, LLC) was used for generating figures. The skin explants of 15 mm diameter were made from ears of young pigs provided by a local slaughterhouse. Once the ears were received, several steps of sterilization were performed. The ears were punched, washed with sterile water and then 3× with DMEM medium containing 10% FBS, 1% Pen-Strep and 250 ng mL−1 amphotericin B, with a minimum of 15 min incubation time. To assess the sterilization by antibiotics, randomly selected skin punches were incubated in DMEM medium at 37 °C ON. The next day, the exposed DMEM was plated on an LB-agar plate without antibiotic to check for bacterial growth. After washing, the explants were stored at −80 °C for a maximum of one month in DMEM supplemented with 15% (v/v) glycerol. The storage conditions were selected based on the viability of the skin, which we evaluated over one month with the MTT assay after storage at 37 °C, −20 °C and −80 °C for several days. Treatments were performed in a total volume of 300 μL containing ColQ1 or csn together with DMEM and the tissue explant. To estimate the release of Hyp into the DMEM medium, the medium was collected and stored at −20 °C. Hyp quantification was performed using a Hydroxyproline assay kit (Sigma Aldrich). In short, Hyp was converted into a colorimetric product by adding 100 μL chloramine T/oxidation buffer mixture and 100 μL 4-(dimethylamino)benzaldehyde diluted in perchloric acid/isopropanol to 10 μL of DMEM medium, and measured at a wavelength of 560 nm.
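Colorimetric Hyp quantification of this kind is typically read against a linear standard curve of known Hyp amounts versus A560. A minimal sketch of that interpolation, assuming a linear curve through hypothetical standards (the exact kit protocol may differ):

```python
def hyp_from_standard_curve(a560_sample, standards):
    """Estimate Hyp content (e.g. in µg) from a sample absorbance at 560 nm,
    given (amount, A560) standard pairs, via a least-squares line."""
    xs = [s[0] for s in standards]
    ys = [s[1] for s in standards]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - m * mx
    # invert the fitted line A560 = m*amount + b
    return (a560_sample - b) / m
```

The inversion is only valid within the linear range of the standards, so samples above the top standard should be diluted and re-measured.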
For further evaluation, the skin tissues that were treated for 24 h were fixed with 4% paraformaldehyde (PFA) and stored at 4 °C. The fixed skin was incubated with 10% and then 25% sucrose in PBS ON in order to prevent tissue damage before downstream evaluation. The data were plotted with GraphPad Prism 8 for three independent experiments; to calculate the probability value, one-way ANOVA was performed and statistical significance was analyzed by Tukey test. The ex vivo pig-skin model was performed as reported earlier. In order to select adequate inhibitor concentrations, the skin was treated with 300 × 10−9 M ColQ1 and gradient concentrations of collagenase inhibitors 1 and 2. A total of 12 skin punches per compound were treated in duplicate for six conditions, followed by incubation at 37 °C, 5% CO2 and 300 rpm for 24 h. The non-treated condition was considered a healthy state; the other samples were incubated with 300 × 10−9 M ColQ1 combined with either compound 1 (0−400 × 10−6 M) or compound 2 (0−50 × 10−6 M). After 24 h, all samples were fixed in 4% PFA, treated with 10% and 25% sucrose/PBS as described before, and prepared for microscopic and biochemical analysis. To analyze the Hyp content in the DMEM medium for each condition, the DMEM was collected before and after treatment and stored at −20 °C. Finally, the optimal inhibitor concentration was determined by microscopy and biochemical evaluation. Results of three independent experiments were plotted as mean ± standard deviation. To estimate the probability value, one-way ANOVA was performed and statistical significance was analyzed to illustrate the significant differences between non-treated versus treated samples (*** p ≤ 0.001). For immunostaining, tissue samples were stained with primary antibodies.
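The one-way ANOVA used throughout (here followed by a Tukey post-hoc test in GraphPad Prism 8) reduces to an F statistic comparing between-group to within-group variance. A minimal pure-Python sketch of the omnibus F only, with toy groups (the Tukey step is omitted):

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of groups (lists of values)."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / N
    # between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, N - k
    return (ss_between / df_between) / (ss_within / df_within)
```

The p-value then comes from the F distribution with (k−1, N−k) degrees of freedom, which statistical packages look up automatically.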
against the different COL types (including COL III and COL V; 1:200 dilutions in PBS) at RT for 1 h or at 4 °C ON. Next, the solution was removed and all samples were gently washed with 3×100 μL PBS, followed by addition of 50 μL secondary antibody solution (IgG (H+L) Highly Cross-Adsorbed, conjugated with AF647, in PBS with 1:5000 DAPI (Thermo Fisher)) at RT for 1 h or at 4 °C ON. Samples were washed 3× with PBS again. For each slide, a 0.17 mm thick 24 mm × 60 mm cover glass (Thermo Fisher) was placed on top of a layer of Parafilm and prepared with three evenly distributed drops of in total 60 μL FluoroMount-G. The slides were placed at one edge of the cover glass and slowly lowered towards it at a decreasing angle, from one side to the other. Even distribution of the mounting medium required some time and a sense for applying pressure, but when performed carefully, arising air bubbles were prevented or eliminated in this step. When all slides were sealed, everything was covered with a layer of Parafilm. Since the polymerization of FluoroMount-G requires constant pressure, some weight was applied on top of it for at least 4 h, but optimally ON. Prior to imaging or storage at 4 °C, all slides were cleaned using paper tissues and 70% ethanol in dH2O to remove dirt and redundant mounting medium. COL fibres in the tissue were visualized using SHG generated by a Zeiss LSM 880 confocal microscope with a two-photon femtosecond pulsed laser set at 900 nm wavelength for excitation. The emitted fluorescent signal was detected before the pinhole using Zeiss BiG.2 non-descanned NDD detectors in combination with a 380−430 nm band pass filter. Images were obtained using 8% laser power, with a pixel dwell time of 8.24 μs with 4× averaging, and the detector gain set at 500. The resulting image had a size of 512×512 pixels with a pixel size of 1.38 μm. Images were taken with a Plan-Apochromat 20x/0.8 NA objective in the dermal region of the skin.
Z-stack imaging was performed by selection of a representative spot in the plane with the highest SHG signal, followed by defining the first and last planes, resulting in a Z-stack with 10 slices spanning 45 μm. Maximum intensity projections were then generated in ImageJ using the Z-project function. Epifluorescence imaging was performed using a Nikon Ti Eclipse inverted microscope coupled with a Lumencor SOLA white light lamp for epifluorescence. Images were captured using an Andor Clara DR-5434 camera, with filter cubes for DAPI (365 nm), staining the nuclei, and for the AF647-conjugated secondary antibody (640 nm), which labeled the COL antibodies. To get a good view throughout the whole skin thickness, large images with a scan area of 2×1 fields of view (10% overlap) were captured using the Perfect Focus System. Parameters such as light intensity, exposure time, magnification, and tile scan area were adjusted individually for each COL type antibody. Thus, only treated and non-treated samples for one particular COL type immunostaining can be directly compared. For illustration purposes, a LUT threshold for each subtype was selected with the non-treated control of each condition and applied to all images of the related subtype. For a summary of the imaging conditions used, please see the Supporting Information. Cells were incubated with varying amounts of B. cereus csn (0−15%) in a total volume of 200 μL containing csn, cells and DMEM. To inhibit the collagenolytic activity of B. cereus csn, compounds 1 and 2 were added to the culture along with 1.25% (v/v) B. cereus csn containing 1% DMSO and incubated for 24 h. On the next day, cell viability was evaluated using MTT and live/dead staining assays. The MTT assay is based on the reduction of tetrazolium dye to purple insoluble formazan by mitochondrial succinate dehydrogenase. Live/dead imaging depends on staining the live cells with fluorescein diacetate (FDA) and dead cells with propidium iodide (PI).
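The ImageJ Z-project (maximum intensity) step used above amounts to taking the per-pixel maximum across all slices of the stack. A minimal pure-Python sketch with a toy stack of nested lists (no ImageJ dependency; real pipelines would use NumPy arrays):

```python
def max_intensity_projection(z_stack):
    """Collapse a Z-stack (list of equally sized 2-D slices, each a list of
    rows) into one 2-D image by taking the per-pixel maximum across slices,
    as ImageJ's 'Z Project... > Max Intensity' does."""
    rows = len(z_stack[0])
    cols = len(z_stack[0][0])
    return [[max(sl[r][c] for sl in z_stack) for c in range(cols)]
            for r in range(rows)]
```

For the 10-slice, 45 µm stacks described above, this keeps the brightest SHG signal at each lateral position regardless of its depth.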
The MTT assay and live/dead staining were performed after 24 h and 48 h of incubation for csn treatment and after 24 h of incubation for collagenase inhibitor treatment. To conduct the MTT assay, we removed the medium and washed the cells 2× with sterile PBS buffer. Afterwards, we added 200 μL of a mixture containing fresh DMEM and 5 mg mL−1 MTT reagent to each well and incubated the plate for 2 h at 37 °C with 5% CO2. After the incubation, the medium was removed, 200 μL of 100% DMSO was added to each well to dissolve the formazan crystals, and the plate was incubated at 37 °C for 30 min. Finally, the absorbance was measured using a PHERAstar plate reader at 0 nm for samples and at 620 nm for blanks with DMEM medium. The viability was also evaluated via epifluorescence microscopy after the live/dead staining. Cells were seeded and treated with B. cereus csn as in the procedures mentioned above and washed 3× with sterile PBS. 0.03 mg mL−1 FDA and 0.02 mg mL−1 PI were added into each well and incubated at 37 °C and 5% CO2 for 5 min. Then the viability and morphology of the cells were investigated at 5× magnification to obtain an overview of the quantity of live and dead cells. Morphological changes between non-treated cells and csn-treated cells were captured in the bright-field channel at 20× magnification. The viability of the cells was calculated relative to non-treated controls using ImageJ Fiji software. The results were plotted with GraphPad Prism 8 for three independent experiments for each cell type and 9 images for each condition. To calculate the probability value, one-way ANOVA was performed and statistical significance was analyzed by Tukey test. For display purposes, the brightness and contrast were adjusted for each image based on the values of the control image where no treatment was applied. NHDF (PromoCell C-12302) and HaCaT (ATCC PCS-200-011) cells were purchased from commercial suppliers.
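Viability "relative to non-treated controls", as computed above, is conventionally the blank-corrected sample absorbance divided by the blank-corrected control absorbance. A minimal sketch of that convention (the exact correction used in the study is not spelled out, so this is an assumption):

```python
def viability_percent(a_sample, a_blank, a_control):
    """Blank-corrected viability relative to the non-treated control, in %.
    a_blank is the background absorbance (e.g. medium-only wells)."""
    return 100.0 * (a_sample - a_blank) / (a_control - a_blank)
```

A treated well reading halfway between the blank and the untreated control thus reports 50% viability.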
50,000 NHDF or HaCaT cells per well were seeded in a 96-well plate (Greiner) with DMEM medium (Gibco) including 10% (v/v) fetal bovine serum and 1% (v/v) Penicillin-Streptomycin (Pen-Strep) antibiotic. The cells were incubated at 37 °C for 24 h with 5% CO2 prior to the treatment. Galleria mellonella larvae (Tru-Larv) were purchased from BioSystems Technology. Injections were performed using a LA120 syringe pump equipped with 1 mL Injekt-F tuberculin syringes and Sterican 0.30 × 12 mm, 30G × 1.5 needles (B. Braun). The larvae were injected in the right proleg with 10 μL of different solutions and were classified into groups as follows: untreated; treated with sterile PBS; treated with different amounts of B. cereus csn (diluted in sterile PBS); treated with ColQ1 diluted in sterile PBS; treated with a mixture of 100% B. cereus csn or 300 × 10−9 M ColQ1 and various concentrations of compounds 1 or 2; and treated with one of the compounds alone (diluted in PBS) to evaluate the toxicity level. We considered the larvae dead if they did not move and had a black color, which reflects activation of the melanization cascade due to the toxic effect induced by virulence factors. The survival of the larvae was analyzed in GraphPad Prism 8 using Kaplan-Meier analysis followed by the log-rank test. The data of three independent experiments were combined and plotted in the survival curve; 45 larvae in total were included to test compounds with the csn and 30 larvae to test compounds with ColQ1 across the three experiments. p ≤ 0.001 was considered statistically significant while p > 0.05 was considered non-significant. The normalized measurements were statistically compared between treated and non-treated groups using a generalized estimating equations model to account for correlated data arising from repeated measures. The survival of G.
mellonella was analyzed using the Kaplan-Meier method, and the log-rank test was applied to calculate the significance of differences between conditions. Graphical data in the manuscript are communicated as means ± SDs. Statistical comparisons were performed by Tukey one-way ANOVA test, which shows significant differences between conditions. Parametric/non-parametric statistical analyses used in the study were chosen based on normality and homogeneity of variance. Supplementary material"} +{"text": "While there have been many efforts to develop computational tools to guide rational antibody engineering, most approaches are of limited accuracy when applied to antibody design, and have largely been limited to analysing a single point mutation at a time. To overcome this gap, we have curated a dataset of 242 experimentally determined changes in binding affinity upon multiple point mutations in antibody-target complexes (89 increasing and 153 decreasing binding affinity). Here, we have shown that by using our graph-based signatures and atomic interaction information, we can accurately analyse the consequences of multi-point mutations on antigen binding affinity. Our approach outperformed other available tools across cross-validation and two independent blind tests, achieving Pearson's correlations of up to 0.95. We have implemented our new approach, mmCSM-AB, as a web-server that can help guide the process of affinity maturation in antibody design. mmCSM-AB is freely available at http://biosig.unimelb.edu.au/mmcsm_ab. The ability of antibodies to selectively and specifically bind tightly to targets and sites considered undruggable has seen them become a major focus of therapeutic and diagnostic applications in a wide range of diseases. This specificity can be so highly tuned that they can be used to even selectively recognize a unique missense mutation, leading to their successful application in personalized medicine.
Increasing computational power has led to a number of different approaches to guide the rational engineering of antibody binding and specificity. Initial approaches used a range of techniques, including homology modelling and protein structure-based methods. We have previously shown that by using graph-based signatures to represent the wild-type residue environment we can accurately predict the effects of mutations on protein folding, stability and dynamics. Here, we present a new approach, mmCSM-AB, as a web-server that enables rapid and deep evaluations of combinations of multiple mutations in antibody-antigen complexes using graph-based signatures together with sequence- and structure-based information. mmCSM-AB models were trained using single-point mutations and the effects of multiple mutations were assessed, outperforming other available tools across our validation set of experimentally measured changes with double to 14 mutations. mmCSM-AB will help to guide rational antibody engineering by analysing the effects of introducing multiple mutations on binding affinity. Changes in binding affinity (KD, given in molar) were collected for 62 complexes and converted to ΔΔG values, and a ΔΔG of −1 kcal/mol was used. The mCSM graph-based signatures have been widely adopted to capture both the geometry and physicochemical properties of the wild-type residue environment, using the cutoff scanning matrix algorithm and the resulting pharmacophore changes upon mutation. FoldX was used to calculate interaction energies for both wild-type and modelled mutant structures. To capture changes in interaction networks upon mutation, all non-covalent interactions in the wild-type and mutant structures were calculated using Arpeggio. In order to capture residue conservation, we employed different evolutionary scoring measures.
These include Position-Specific Scoring Matrices (PSSMs) calculated from multiple sequence alignments using PSI-BLAST against the non-redundant (nr) database. To account for potential synergistic and compensatory effects of mutations, we also included information on the distances between the individual mutations. A range of supervised learning algorithms for regression currently available within the scikit-learn Python library were evaluated. These included Random Forest, Extra Trees, Gradient Boost, XGBoost, SVM and Gaussian Process. The best performing model was selected based on Pearson's correlation coefficient and Root Mean Squared Error (RMSE), evaluated under different cross-validation schemes (with 10 bootstrap repetitions), as well as blind tests. The best performing algorithm was Extra Trees. In order to reduce dimensionality and improve performance, feature selection was carried out in an incremental stepwise greedy approach. mmCSM-AB was developed using Materialize 1.0.0 and Flask 1.0.2, and hosted on an Apache2 Linux server. This webserver is freely available at http://biosig.unimelb.edu.au/mmcsm_ab. mmCSM-AB can analyse the effects of introducing multiple point mutations on antibody-antigen binding affinity. It can be used either to predict the effects of a known mutation via Prediction Mode, or for systematic exploration of all potential multiple mutations at the interface to guide rational antibody engineering via Design Mode. The server requires the user to provide (i) an antibody-antigen PDB structure, either as a PDB file or a PDB accession code; (ii) for Prediction Mode, a multiple mutation denoted by a list of point mutations separated by semicolons, with each mutation specified as the chain ID, wild-type residue one-letter code, residue number, and mutant residue one-letter code. Alternatively, users can upload a list of multiple mutations as a text file.
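The incremental stepwise greedy feature selection described above can be sketched generically: repeatedly add the feature that most improves a scoring function, stopping when nothing improves it. In the study the score would be cross-validated performance of the Extra Trees model; here a toy scoring function stands in for it, so names and numbers are illustrative only:

```python
def greedy_forward_selection(features, score):
    """Incremental stepwise greedy selection: repeatedly add the candidate
    feature that most improves score(selected); stop when no candidate helps.
    `score` maps a list of feature names to a number (higher is better)."""
    selected, best = [], score([])
    remaining = list(features)
    while remaining:
        # evaluate every remaining feature added on top of the current set
        gains = [(score(selected + [f]), f) for f in remaining]
        top_score, top_f = max(gains)
        if top_score <= best:
            break  # no feature improves the score: stop
        selected.append(top_f)
        remaining.remove(top_f)
        best = top_score
    return selected, best
```

With a toy score that rewards two informative features and slightly penalizes model size, the procedure picks exactly those two and then stops, mirroring how the 83-feature final model was reached.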
Design Mode automatically considers all possible combinations of double- and triple-point mutations of residues on either the antibody or the antigen side of the interface. The output reports the predicted ΔΔG of a given multiple mutation and complementary details such as the distance among single point mutations and the distance to the interface. Performance was evaluated under 5-, 10- and 20-fold cross-validation. We further validated the model by performing a low-redundancy leave-one-complex-out validation, achieving a Pearson's correlation of up to 0.64. Following feature selection using a greedy algorithm, we were left with a total of 83 features. Interestingly, the only features selected for the final model were the graph-based structural signatures, the evolutionary score and the Arpeggio-calculated interactions (P-value <0.01, Fisher's r-to-z transformation). Our final mmCSM-AB model achieved a Pearson's correlation of 0.95 for binding affinity. Using mmCSM-AB to classify mutations as increasing or decreasing binding affinity, we achieved a Matthews Correlation Coefficient (MCC) of 0.67 and an F1-score of 0.89; the predicted values from the comparative study were further reviewed for performance in classifying favourable and unfavourable mutations across available methods. To evaluate whether the performance relies on the training dataset, we filtered out 53 of 101 double and triple mutations and 104 of the overall 242 multiple mutations where none of the constituent mutations was present in the training dataset. mmCSM-AB achieved a Pearson's correlation of 0.92 across the multiple point mutations identified as additive, and 0.94 across those identified as synergistic. To guide rational antibody engineering, effective tools need to be able to identify the mutations leading to the greatest improvement in binding affinity. We therefore further assessed the ability of mmCSM-AB and available tools to rank mutations in order of most increasing and decreasing binding affinity.
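The metrics quoted in this evaluation, Kendall's tau for rank agreement and the Matthews Correlation Coefficient for two-class accuracy, have simple closed forms. A pure-Python sketch (tau-a variant without tie correction; library implementations such as scipy's handle ties differently):

```python
from itertools import combinations
from math import sqrt

def kendalls_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs."""
    pairs = list(combinations(range(len(x)), 2))
    conc = disc = 0
    for i, j in pairs:
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / len(pairs)

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Tau ranges from −1 (reversed ranking) to 1 (identical ranking); MCC behaves like a correlation on the confusion matrix, so it stays informative even when the two classes are imbalanced, as with the 89 increasing versus 153 decreasing mutations.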
mmCSM-AB showed strong performance, achieving Kendall's tau and Spearman's rank-correlation coefficients of up to 0.71 and 0.86 on the 242 multiple mutations, and outperforming all other approaches. The mmCSM-AB model was further evaluated against the 47 data points where the introduction of multiple mutations led to complete disruption of antigen binding; mmCSM-AB correctly classified 46 out of 47 non-binders. We further evaluated the performance of mmCSM-AB using the benchmark dataset from Barlow and colleagues. Here we introduce mmCSM-AB, a web server that uses our graph-based signatures to predict the effects of multiple-point missense mutations on antibody binding affinity. The method represents a significant advance upon our current predictive platform, outperforming previous methods, which have primarily been limited to single-point missense mutations. mmCSM-AB can assist antibody design efforts via a freely available, user-friendly web server at http://biosig.unimelb.edu.au/mmcsm_ab. gkaa389_Supplemental_File: Click here for additional data file."} +{"text": "Soft grippers with soft and flexible materials have been widely researched to improve the functionality of grasping. Although grippers that can grasp various objects with different shapes are important, a large number of industrial applications require a gripper that is targeted for a specified object. In this paper, we propose a design methodology for soft grippers that are customized to grasp single dedicated objects. A customized soft gripper can safely and efficiently grasp a dedicated target object with lowered surface contact forces while maintaining a higher lifting force, compared to its non-customized counterpart. A simplified analytical model and a fabrication method that can rapidly customize and fabricate soft grippers are proposed.
Stiffness patterns were implemented onto the constraint layers of pneumatic bending actuators to establish actuated postures with irregular bending curvatures in the longitudinal direction. Soft grippers with customized stiffness patterns yielded higher shape conformability to target objects than non-patterned regular soft grippers. The simplified analytical model represents the pneumatically actuated soft finger as a summation of interactions between its air chambers. Geometric approximations and pseudo-rigid-body modeling theory were employed to build the analytical model. The customized soft grippers were compared with non-patterned soft grippers by measuring their lifting forces and contact forces while they grasped objects. Under identical actuating pressure, the conformable grasping postures enabled customized soft grippers to have almost three times the lifting force of non-patterned soft grippers, while the maximum contact force was reduced to two thirds. Softness and flexibility of the constituting materials allow soft robotic grippers to be adaptive when interacting with objects. The pseudo-rigid-body model (Howell) was employed. The height (dheight) and width (dwidth) displacements due to wall inflation are determined from fitting curves obtained via experimental measurements, and can be expressed as functions of dheight and dwidth as in Equation (2). The height of the single air chamber L can be described by the initial height and the displacement by inflation, which is described by the applied inflating pressure P as in Equation (3). The bending moment and the overlapping radius (roverlap) could be described based on geometric approximations. The moment arm (lmoment) could be obtained from the geometric relations derived from the distance between the bottom of the air chamber and the axis of rotation (dlayer). The bending of the constraint layer was considered as pure bending.
Therefore, the neutral plane of this layer was assumed to be located at its center. The overlapping area of the interaction of the two air chambers was likewise described geometrically. During the bending motion, the constraint layer at the bottom of the structure also generates a moment to return to the initial state. Based on pseudo-rigid-body model theory, the constraint layer was assumed to act as a non-linear torsional spring; the moment generated by the bottom layer (Mlayer) can thus be described using the non-linear torsional spring coefficient (klayer) and the bending angle (θ), as in Equation (8). The non-linear spring coefficient could be obtained from the fitting curve obtained from three-point bending experiments. The moment generated by the air chambers can be represented by the applied pressure (p) and the bending angle (θ). Stiffness patterning affects the shape of the constant-pressure plane of the moment surfaces. When a contact moment (Mcontact) is applied to the finger, the moment due to contact shifts the steady state from the initial state to the contacted state. When the same contact moment (Mcontact) is applied to both the high-stiffness and low-stiffness patterned soft fingers, the straightening angles of each stiffness pattern are different: the straightening angle of the soft finger with a high stiffness pattern (θhigh) is smaller than that of the soft finger with a low stiffness pattern (θlow). Without any contact, the soft finger with a high stiffness pattern requires a higher actuation pressure than the soft finger with a low stiffness pattern to achieve the same amount of bending. There are thus trade-offs between the low-stiffness and high-stiffness patterns in terms of the bending and straightening characteristics. In this chapter, the fabrication process for the customized soft grippers is presented.
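The steady-state posture described above is the angle at which the pressure-generated moment balances the torsional-spring restoring moment. That balance can be found numerically; the sketch below uses bisection and, purely for illustration, a constant actuation moment and a linear spring (the paper's model uses pressure-dependent moments and a non-linear fitted spring coefficient, but the solver structure is the same):

```python
def equilibrium_angle(actuation_moment, spring_moment, theta_max=3.14, tol=1e-9):
    """Solve actuation_moment(theta) == spring_moment(theta) for theta in
    [0, theta_max] by bisection, assuming the net moment
    f(theta) = actuation - spring decreases monotonically with theta."""
    f = lambda t: actuation_moment(t) - spring_moment(t)
    lo, hi = 0.0, theta_max
    if f(lo) <= 0:
        return lo  # spring dominates even at zero bending
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid  # actuation still dominates: equilibrium lies further out
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical example: constant 10 N·mm actuation vs. linear spring 4*theta
theta_eq = equilibrium_angle(lambda t: 10.0, lambda t: 4.0 * t)
```

With these toy moments the balance point is at theta = 10/4 = 2.5 rad; swapping in the fitted non-linear expressions changes only the two lambdas, not the solver.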
The aforementioned constraint layer design customization method yields a relatively simple fabrication process for soft grippers. Fabricating customized soft grippers may seem to presuppose customized molds for different designs. However, we present a single-mold fabrication process for customized soft grippers; the single mold refers to the fabrication of the air chamber section. As mentioned in section Modifying Moment Surfaces to Design Soft Grippers, modifying the stiffness of the constraint layer to change the actuated posture is relatively easy compared to changing the air chamber section's design. Therefore, the stiffness of the constraint layer is modified using a varying-stiffness pattern mold, and the layer is then bonded to the air chamber section, which stays constant across different designs. The following outlines the fabrication process of a customized soft gripper: first, the thickness tuning plates are stacked inside the base mold of the constraint layer, according to the desired design; then, a pre-cured elastomer is poured into the assembled mold and cured; finally, the fully cured constraint layer is bonded together with the air chamber section. Also, a stiffness patterning method using modularized blocks is introduced. In this chapter, the experimental results comparing soft grippers with stiffness-patterned constraint layers and soft grippers with typical non-patterned homogeneous constraint layers are presented. The experiment consisted of measuring the pulling forces of the soft grippers while grasping target objects. Each gripper had two identical soft fingers mounted in a single plane parallel to the ground. Soft grippers with stiffness-patterned constraint layers and soft grippers with typical non-patterned homogeneous constraint layers were tested. Both kinds of grippers were fabricated using the same material.
In addition, the designs of the air chamber section for both the patterned and non-patterned grippers were identical, and the dimensions of the soft fingers were the same as those presented earlier. Two kinds of objects were selected as target objects: one was a star-shaped object, and the other was a sphere-shaped object. Each experiment measured the pulling force and the contact forces between an object and a gripper while the gripper was actuated and the object was pulled in the outward direction. A load cell and FSR sensors were used to obtain the pulling force and the contact force values, respectively. A draw-wire displacement sensor was attached to the same mount where the load cell was positioned. The soft grippers were fabricated based on optimized stiffness pattern designs for each target object. The stiffness pattern designs of the constraint layer were optimized for each target-grasping object at preselected actuation pressures. Customization of stiffness patterns was completed within tens of seconds on a personal computer with general specifications; this rapid calculation speed was enabled by the proposed analytical model. The experimental setup lies in a two-dimensional plane. The stiffness pattern of the soft gripper was also optimized to target the star-shaped object. The actuating pressure was selected to be 30 kPa. The optimized stiffness pattern was 6, 8, 8, 4, and 4 mm, from the proximal node to the distal node. Similar to the sphere-shaped object, the thickness of the constraint layer for the non-patterned soft gripper was selected to be 6 mm, which maximized the conformability to the star-shaped object. Both the customized and non-patterned soft grippers were fabricated with the same material.
In addition, the designs of the air chamber sections were identical for both grippers. To evaluate efficiency and safety, the object pulling force and the contact forces of the soft grippers with customized stiffness-patterned constraint layers and of the non-patterned grippers were compared. The contact force determines the safety of the object; a lower contact force yields a safer interaction between the object and the gripper. The pulling force determines the load capacity of the gripper; a higher pulling force under the same actuation pressure enables the gripper to grasp heavier objects. First, the object was placed at the predetermined location without actuating the gripper. Then, the soft gripper was actuated to grasp the object. The object was forced out of the gripper by slowly pulling it in the outward direction. The pulling force of the gripper, the contact forces between the gripper and the object, and the pulling displacement were measured simultaneously. Each experiment was performed five times, and the two results with maximum and minimum values were excluded from the analysis. The patterned soft gripper and the non-patterned soft gripper were actuated with the same predetermined actuating pressure, 35 kPa. The contact forces obtained from the four FSR sensors attached on the object's surface were recorded. The non-patterned soft gripper actuated up to 50 kPa exerted contact force on FSR sensor #1 from the beginning of the experiments, unlike the previous cases. Sensors #3 and #4, in this case, experienced almost twice the maximum contact forces of the customized soft gripper. Moreover, the position of sensor #4 is opposite to the lifting direction of the object. The patterned and non-patterned soft grippers both grasped the star-shaped object with 30 kPa of actuating pressure. The contact force applied on FSR sensor #1 was almost zero, below the measuring range of the sensor, for all three cases.
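The trial aggregation described above (five repetitions, with the single maximum and minimum excluded before averaging) is a small trimmed mean. A minimal sketch of that reduction, with hypothetical force readings:

```python
def trimmed_mean_drop_extremes(values):
    """Mean of repeated trials after discarding the single maximum and the
    single minimum value, as done for the five grasping repetitions."""
    if len(values) < 3:
        raise ValueError("need at least three trials")
    kept = sorted(values)[1:-1]  # drop one min and one max
    return sum(kept) / len(kept)
```

Dropping the extremes makes the reported force robust to a single outlier trial, e.g. one where the object slipped early.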
The maximum contact force applied on FSR sensor #2 was similar for the stiffness-patterned and non-patterned soft grippers. However, the contact duration was longer with the stiffness-patterned soft gripper. Meanwhile, the non-patterned soft gripper with 35 kPa of actuating pressure exerted a higher contact force on sensor #3, which was attached to the surface opposite to the lifting direction of the object. In summary, the experimental results show that the shape-conformable soft grippers with customized constraint layer stiffness pattern designs had better grasping performance in terms of object pulling and contact forces. The stiffness-patterned soft gripper may exhibit better stability and require lower actuation pressure. Furthermore, the contact forces, which may be related to the integrity of the interaction between the gripper and the object, decreased for the shape-conformable soft gripper due to stiffness patterning of the constraint layers. In this paper, we presented an analytical approach that allows us to estimate and experimentally implement customized postures of soft pneumatic grippers. The model suggested that the moment surfaces generated in the air chamber section and the constraint layer section correspond to the bending behavior of soft grippers. The computation speed of the model was considerably faster than that of numerical methods, which have mainly been used in existing studies of soft robots. Therefore, it was possible to obtain the optimal converged postures with rapid iterations for given outline shapes of target objects. Stiffness patterning of the constraint layers of soft grippers was proposed as a facile and powerful methodology to tune the moment surfaces, in conjunction with suitable fabrication methods.
Experimental results on grasping objects with different shapes showed that the customized grasping posture effectively reduces the contact force and the actuating pressure while maintaining the lifting force. Future work includes enhancing the proposed analytical model and further developing the customization approach. The proposed analytical model requires experimental inputs from the single air chamber inflation test and the three-point bending test. However, results obtained from numerical analysis, such as finite-element analysis, could replace these experimental inputs. Ultimately, the model can be expanded into a hybrid framework that combines the rapid computing speed of the analytical approach with the precision of the numerical method. Implementing topology optimization methodologies in the constraint layer could provide smooth transitions of stiffness profiles that establish grasping postures with better conformability to target objects. Furthermore, the rapid computing speed of the analytical model can be utilized to generate an abundance of data for machine learning-based optimization processes. Finally, with our grasping posture customization approach, we hope that soft grippers will take a step closer to industrial application. The original contributions presented in the study are included in the article/Supplementary Material. J-YL built the analytical model, planned and performed the experiments, and was the main writer of the manuscript. JE performed experiments, supported building the model, and supported the preparation of the manuscript. SY performed experiments and supported the preparation of the manuscript. KC reviewed and guided the proposed analytical model, the experimental plans, and the preparation of the manuscript.
All authors have contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Acid-sensing ion channels (ASICs) are neuronal sodium-selective channels activated by reductions in extracellular pH. Structures of the three presumptive functional states, high-pH resting, low-pH desensitized, and toxin-stabilized open, have all been solved for chicken ASIC1. These structures, along with prior functional data, suggest that the isomerization or flipping of the \u03b211\u201312 linker in the extracellular, ligand-binding domain is an integral component of the desensitization process. To test this, we combined fast perfusion electrophysiology, molecular dynamics simulations and state-dependent non-canonical amino acid cross-linking. We find that both desensitization and recovery can be accelerated by orders of magnitude by mutating residues in this linker or the surrounding region. Furthermore, desensitization can be suppressed by trapping the linker in the resting state, indicating that isomerization of the \u03b211\u201312 linker is not merely a consequence of, but a necessity for, the desensitization process in ASICs. Acid-sensing ion channels (ASICs) are a family of sodium-selective trimeric ion channels activated by extracellular acidification. This family is composed of four genes (five in humans) giving rise to six proton-sensitive isoforms, each of which has its own distinct expression profile and biophysical properties. At physiological pH, ASICs are primarily found in a resting conformation. A rapid drop in extracellular pH triggers ASIC activation and desensitization, occurring over several milliseconds and hundreds of milliseconds, respectively.
The large extracellular domain of individual ASIC subunits has been likened to a hand shape with distinct thumb, finger, knuckle and palm domains. We initially examined the recovery time course of cASIC1 wild type and found that cASIC1 essentially completely desensitized with a time constant of 181 \u00b1 6 ms and recovered with \u03c4rec = 840 \u00b1 90 ms (slope m = 0.96 \u00b1 0.05, n = 5). The L414A mutant not only entered desensitization faster (p < 1e\u22125 vs wild type) but also recovered exceptionally fast (\u03c4rec = 4.0 \u00b1 0.5 ms, m = 9 \u00b1 3, n = 5, p < 1e\u22125 versus wild type), with little change in the pH50 of activation for L414A compared to wild type. However, we did observe that the desensitization of L414A is incomplete and a sustained current develops at pH values less than 5. In the simulations, we compared the protonated structure to the deprotonated state to ensure that the chosen protonation scheme stabilized the structure. Recovery slowed as the interpulse pH became more acidic (\u03c4 at pH 7.8: 1600 \u00b1 90 ms, m = 0.83 \u00b1 0.01, n = 5, p = 0.005 versus pH 8; pH 7.6: 11400 \u00b1 600 ms, m = 0.73 \u00b1 0.03, n = 5, p < 1e\u22125 versus pH 8), and accelerated to 7.5 \u00b1 0.4 ms at pH 10. Entry into desensitization for the other constructs occurred with time constants of 135 \u00b1 7, 140 \u00b1 3, and 520 \u00b1 50 ms, respectively. However, I306A and M364A did not substantially alter desensitization. We therefore made the double I306A/M364A mutation with the goal of dramatically altering recovery without affecting entry into desensitization. However, this double mutation did not exhibit an increased effect on recovery as compared to the single mutations. The L414I substitution was an additional surprise. If the only factors at play are size and polarity, then this mutation should have minimal effect.
However, we found that the L414I construct entered desensitization 5-fold slower and recovered nearly 6-fold faster than wild type. Based on this surprising set of results, particularly the dramatic acceleration by the bulky Tyr residue and the notable effect of the conservative Ile substitution, we conclude that no simple rule of size or polarity is sufficient to explain or predict the effects of this position as yet. These results also stand in contrast to those reported recently and to the expectations of the purely steric \u2018valve\u2019 model. We substituted Leu414 with larger polar (Tyr) and hydrophobic (Phe) residues as well as a polar residue (Asn) to match the small non-polar Ala. We also substituted Leu414 for Ile, which has the same size with the same number of atoms and a similar hydrophobicity but differs at the branch point. If the only considerations at this position were size and polarity, these substitutions should have had predictable effects. p-Benzoyl-L-phenylalanine (Bpa) generates a free radical when exposed to 365 nm light, enabling state-dependent cross-linking, and we observed UV-dependent effects that would be expected if \u03b211\u201312 linker flipping was a requirement for channel desensitization. Cells transfected with cASIC1_GFP L414TAG plus R3 but without MeO-Bpa did respond to pH application. Crucially, these cells did not exhibit UV modulation, nor did responses from cells expressing wild-type cASIC1_GFP, indicating the UV effect was specific to the incorporated Bpa. We did, however, observe a strong inhibition of the peak response through the course of the UV application. In the present study, we investigated the molecular underpinnings of entry to and exit from desensitization in cASIC1. We corroborate and extend structural and functional studies implicating the \u03b211\u201312 linker as a regulator of desensitization. Indeed, we report that a simple L414A mutation imparts a 5-fold and 200-fold acceleration in entry into and exit from desensitization, respectively.
This inhibition was significant (p < 1e\u22125 between pre- and post-UV) and this effect was not observed with cASIC1_GFP. Finally, the slope of the L414A pH activation curve is shallower than that of wild-type cASIC1. HEK293 cells from ATCC were used and their identity confirmed using STR profiling. A PCR-based test for mycoplasma was last performed 7/2019 and was negative. HEK293 cells were maintained in Dulbecco\u2019s Modification of Eagle\u2019s Medium (DMEM) with 4.5 g/L glucose, L-glutamine and sodium pyruvate, or Minimum Essential Medium (MEM) with Glutamax and Earle\u2019s Salts (Gibco), supplemented with 10% FBS and penicillin/streptomycin (Invitrogen). Cells were passaged every 2 to 3 days when approximately 90% confluence was achieved. HEK293 cells were plated on tissue culture treated 35 mm dishes, transfected 24 to 48 hr later and recorded from 24 to 48 hr post-transfection. Cells were transiently transfected with chicken ASIC1 wild type or mutant and eGFP using an ASIC:eGFP ratio of 7.5:1 \u00b5g of cDNA per 10 mL of media. Transfections were performed using jetPRIME (Polyplus Transfections) or polyethylenimine 25K following the manufacturer\u2019s instructions, with a media change at 6 to 8 hr. For non-stationary noise analysis, media was changed after 3\u20136 hr and recordings performed within 24 hr. Mutations were introduced using site-directed mutagenesis PCR and confirmed by sequencing (Fisher Scientific/Eurofins Genomics). For experiments with non-canonical amino acid incorporation, HEK293 cells were co-transfected with three separate pcDNA3.1+ vectors each containing: (1) either wild type or L414TAG cASIC1, (2) R3 \u2013 two copies of orthogonal Bpa tRNA along with a single copy of the Bpa tRNA synthetase, and (3) YAM \u2013 an additional copy of orthogonal tRNA, at a mass ratio of 2:2:1, respectively. Our impression was that the addition of the YAM plasmid was not essential, but did seem to increase non-sense suppression efficiency.
The tRNA and tRNA synthetase inserts were made by gene synthesis using published sequences. External solutions with a pH greater than 7 were composed of (in mM) 150 NaCl, 20 HEPES, 1 CaCl2 and 1 MgCl2, with pH values adjusted to their respective values using NaOH. For solutions with a pH lower than 7, HEPES was replaced with MES. All recordings were performed at room temperature with a holding potential of \u221260 mV using an Axopatch 200B amplifier (Molecular Devices). Data were acquired using AxoGraph software (Axograph) at 20\u201350 kHz, filtered at 10 kHz and digitized using a USB-6343 DAQ. Series resistance was routinely compensated by 90% to 95% where the peak amplitude exceeded 100 pA. Rapid perfusion was performed using home-built, triple-barrel application pipettes (Vitrocom), manufactured as described previously. Culture dishes were visualized using a 20x objective mounted on a Nikon Ti2 microscope with phase contrast. A 470 nm LED (Thorlabs) and dichroic filter cube were used to excite GFP and detect transfected HEK cells. Outside-out patches were excised using heat-polished, thick-walled borosilicate glass pipettes of 3 to 15 M\u2126 resistance. Higher resistance pipettes were preferred for non-stationary noise analysis experiments. The pipette internal solution contained (in mM) 135 CsF, 33 CsOH, 11 EGTA, 10 HEPES and 2 MgCl2. For UV modulation, a high-power UV LED was used as the UV light source. The UV LED was set to maximum power and triggered by TTL input. The light emission was reflected off a 425 nm long-pass dichroic mirror held in a beam combiner, passed through the epifluorescence port of the Ti2 microscope, then reflected off a 410 nm long-pass dichroic mirror before being focused onto the sample through a 20x objective.
For resting state trapping experiments, trains of UV pulses were applied as described in the Results. Molecular dynamics simulations were performed using a structure of chicken ASIC1 suggested to be in the desensitized state. For simulations of the L414A mutant, the L414 side chain was manually changed to an alanine side chain prior to constructing the simulation systems. The systems were first equilibrated with position restraints on the protein. This was followed by six shorter simulations, gradually releasing the position restraints as suggested by the default CHARMM-GUI protocol. The first three short simulations were 25 ps long and used a time step of 1 fs; the fourth and the fifth were 100 ns long, while the final part of the equilibration was run for 2 ns. The equilibration simulations 4\u20136, as well as the production run, used a time step of 2 fs. In all steps, the Verlet cutoff scheme was used with a force-switch modifier starting at 10 \u00c5 and a cutoff of 12 \u00c5. The cutoff for short-range electrostatics was 12 \u00c5 and the long-range electrostatics were accounted for using the particle mesh Ewald (PME) method. The CHARMM36 force field was employed for proteins and lipids. Kinetic simulations were also performed. Recovery curves were fit to a function in which It is the fraction of the test peak at an interpulse interval t compared to the conditioning peak, \u03c4 is the time constant of recovery and m is the slope of the recovery curve. Each protocol was performed between 1 and 3 times on a single patch, with the resulting test peak/conditioning peak ratios averaged together. Patches were individually fit and averages for the fits were reported in the text. N was taken to be a single patch. Dose-response data were fit to a Hill function in which Ix is the current at pH x, pH50 is the pH yielding the half-maximal response and n is the Hill slope. Patches were individually fit and averages for the fits were reported in the text.
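Fits like the recovery time courses described above can be reproduced in outline with standard least-squares tools. The exact fitting equation is not reproduced in this text, so the sketch below assumes an illustrative Hill-type recovery curve with time constant tau and slope m, applied to synthetic data with wild-type-like values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative recovery function: fraction of the test peak recovered after an
# interpulse interval t (ms). The exact functional form used in the paper is
# not given in this text; this Hill-type curve is an assumption for the demo.
def recovery(t, tau, m):
    return 1.0 / (1.0 + (tau / t) ** m)

rng = np.random.default_rng(0)
t = np.logspace(1, 4, 12)                 # interpulse intervals, 10 ms to 10 s
frac = recovery(t, 840.0, 0.96) + rng.normal(0, 0.01, t.size)  # synthetic data

# Fit tau and m; bounds keep the optimizer in a physically sensible region.
(tau_fit, m_fit), _ = curve_fit(
    recovery, t, frac, p0=(500.0, 1.0), bounds=([1.0, 0.1], [1e5, 10.0])
)
```

With clean synthetic data the fitted tau and m land close to the generating values, mirroring the per-patch fitting and averaging procedure described in the text.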
N was taken to be a single patch. For dose-response curves, patches were placed in the middle of a three-barrel application pipette and jumped to either side to activate channels with the indicated pH. Responses to higher pH values were interleaved with pH 5 applications on either side to control for any rundown. Peak currents within a patch were normalized to pH 5 and fit to a Hill function. For non-stationary fluctuation analysis, runs of between 50 and 200 responses from a single patch were recorded. Within each recording, we identified the longest stretch of responses where the peak amplitude did not vary by more than 10%. We further eliminated individual traces with spurious variance, such as brief electrical artifacts, resulting in blocks of 50\u2013100 traces. To further correct for rundown or drift in baseline values, we calculated the variance between successive traces, as opposed to calculating from the global average, where \u03b4i2 is the variance of trace i and Ti is the current value of trace i. The ensemble variance and current for each patch were divided into progressively larger time bins. The baseline variance was measured from a 50 ms time window just prior to pH 5 application. The resulting mean current-variance data were then fitted in Originlab using \u03c32(I) = iI \u2212 I2/N + \u03c32baseline, where i is the single channel current, I is the average current, N is the number of channels in the patch and \u03c32baseline is the baseline variance. For all experiments, N was taken to be a single patch. Nonparametric two-tailed, unpaired randomization tests with 100,000 iterations were implemented in Python to assess statistical significance. Statistical comparisons of recovery from desensitization were based on, and reported as, differences in recovery time constant. Desensitization of ligand-gated ion channels (LGICs) is a fundamental property that regulates the time course of their actions. However, the structural mechanisms that regulate entry and recovery from the desensitized state are still unclear for many LGICs.
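The randomization test described in the Methods above can be sketched in a few lines. This minimal version compares group means and uses fewer iterations than the 100,000 reported; the data values are hypothetical:

```python
import random

def randomization_test(a, b, n_iter=10000, seed=1):
    """Two-tailed, unpaired randomization (permutation) test on the
    difference in group means. Minimal sketch of the approach described
    in the Methods, which used 100,000 iterations."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)            # random relabeling of the groups
        pa, pb = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(pa) / len(pa) - sum(pb) / len(pb))
        if diff >= observed:
            hits += 1
    return hits / n_iter               # two-tailed p-value

# Hypothetical recovery time constants (ms) for two groups of patches
wt = [181, 175, 190, 185, 178]
mut = [30, 28, 35, 31, 29]
p = randomization_test(wt, mut)
```

Because the relabeling is done under the null hypothesis of exchangeability, no distributional assumptions are needed, which suits the small per-patch sample sizes reported in the text.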
In this study, the authors use a powerful combination of approaches including ultra-fast solution exchange patch-clamp electrophysiology, MD simulations and unnatural amino acid photoactivatable cross-linking to show that a change in conformation of Leu414 in the \u03b211-12 linker of the acid-sensing ion channel (ASIC) triggers ASIC desensitization. This paper uncovers significant new mechanistic information about ASIC desensitization kinetics. Decision letter after peer review: Thank you for submitting your article \"\u03b211-12 linker isomerization governs Acid-sensing ion channel desensitization and recovery\" for consideration by eLife. Your article has been reviewed by three peer reviewers, including Cynthia M Czajkowski as the Reviewing Editor and Reviewer #1, and the evaluation has been overseen by Richard Aldrich as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Andrew J R Plested (Reviewer #2); Toshimitsu Kawate (Reviewer #3). The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission. Summary: The desensitization mechanism of ASICs remains poorly understood, despite the available crystal structures in the resting, open, and desensitized states. The recent study by Wu et al. suggested that Q277 (chicken numbering) enables channel desensitization through a mechanism explained by a steric effect. In contrast, the current paper by the MacLean group provides new experimental data suggesting that isomerization of the \u03b211-12 linker, rather than the steric effect of Q277, predominantly governs the transition between the closed and the desensitized states. The authors carefully characterized several mutants of an important residue in this linker (L414), using a combination of ultrafast-perfusion based patch clamp electrophysiology, non-stationary fluctuation analysis, and molecular dynamics simulations.
All data support the idea that this residue flips outwardly during desensitization and that the height of the energy barrier for this flip determines how quickly an ASIC channel desensitizes or recovers from desensitization. The authors also applied non-canonical amino acid cross-linking, which supports that the \u03b211-12 linker rearrangement is necessary for channel desensitization. This study is well-designed and the experiments are carefully performed. In particular, the fast perfusion system enabled the authors to demonstrate, for the first time, that a substitution of L414 with a smaller alanine residue does not affect the proton sensitivity. Furthermore, they were able to demonstrate that desensitization proceeds from a closed, rather than an open, state. The reviewers agreed that these novel findings significantly improve our understanding of how ASICs desensitize. Addressing the following essential revisions will strengthen the manuscript. Essential revisions: 1) Need to disclose the protonation sites that were used in the MD studies. The reasoning behind what residues were chosen is also important and should be described. Without this information, there is no way for current or future scientists to replicate this work and to know if the simulation protonation states have any relation to the real situation. 2) Since recovery from desensitization is very sensitive to pH, some of the effects of the mutations in the \u03b211-12 linker might be due to changes in pKa of the relevant residues for acid sensitivity of recovery that are altered by the mutations. Is the pH sensitivity of the recovery from desensitization substantially different between WT and L414A? This can be addressed by carrying out an experiment like in Figure 3\u2014figure supplement 2 for the mutant channel. Alternatively, the mutant L414BzF after crosslinking should have lost pH sensitivity of recovery and this might be a way to confirm the mechanism. At the very least, the authors need to address and discuss this point thoroughly.
3) Surprisingly, the authors did not observe any modulation of channel function when using desensitized state UV applications (data not shown). The authors should show the data even if the experimental protocol did not result in any observable effects. The authors need to discuss their ideas on why they did not see any effects, especially in light of their MD simulations showing multiple residues within 4 \u00c5 of L414 in the desensitized state. 4) The result that desensitization/recovery proceeds from closed channel states is an important conclusion from their data, which is only presented in the Results section. Additional discussion of this finding is warranted. Including a branched kinetic model that describes their data would highlight this finding, but one can argue that it is not essential to support the main conclusions of this paper. Essential revisions: 1) Need to disclose the protonation sites that were used in the MD studies. The reasoning behind what residues were chosen is also important and should be described. Without this information, there is no way for current or future scientists to replicate this work and to know if the simulation protonation states have any relation to the real situation. We report our protonation scheme in the first paragraph of the Materials and methods subsection \u201cMolecular dynamics simulations\u201d, and refer the reader to that section in the Results as well. The following text in the Materials and methods describes our protonation scheme: \u201cIn the desensitized state, a number of acidic residues are believed to be protonated, however, exactly which residues is unclear.
[\u2026] For the simulations mimicking a higher pH value, all residues were kept in their standard ionization state.\u201d We acknowledge that the used protonation scheme may not be the correct representation of the physiological state. However, the purpose of protonating residues in this work was to stabilize the channel in the proposed desensitized state, which our chosen protonation scheme does do. Indeed, we observed greater structural changes upon deprotonation. 2) Since recovery from desensitization is very sensitive to pH, some of the effects of the mutations in the \u03b211-12 linker might be due to changes in pKa of the relevant residues for acid sensitivity of recovery that are altered by the mutations. Is the pH sensitivity of the recovery from desensitization substantially different between WT and L414A? This can be addressed by carrying out an experiment like in Figure 3\u2014figure supplement 2 for the mutant channel. Alternatively, the mutant L414BzF after crosslinking should have lost pH sensitivity of recovery and this might be a way to confirm the mechanism. At the very least, the authors need to address and discuss this point thoroughly. We have carried out the proposed L414A recovery experiment at different inter-pulse pH values and added the data as a new supplementary figure. We find that L414A is fast at all the pH values we examined, with small although detectable changes. That is, the pH dependence of recovery is retained but the effect is smaller over the range we examined.
This observation suggests that the 414 position plays a key role in the recovery process and does not simply produce apparent changes in recovery by shifting the pH-dependence of recovery slightly to the right or left. Furthermore, it is unlikely that converting a non-polar Leu to a non-polar Ala in a region with few charged residues could produce the local electrostatic change required to appreciably alter pKa values of critical residues in the surrounding area. Therefore we favor the interpretation that L414A works predominantly by accelerating the recovery transition, and not by shifting its pH-dependence. We have included these considerations in the Results. 3) Surprisingly, the authors did not observe any modulation of channel function when using desensitized state UV applications (data not shown). The authors should show the data even if the experimental protocol did not result in any observable effects. The authors need to discuss their ideas on why they did not see any effects, especially in light of their MD simulations showing multiple residues within 4 \u00c5 of L414 in the desensitized state. We were also surprised by this. In our original desensitized state dataset, rundown throughout our recordings of small amplitude signals masked any UV-correlated peak inhibition. With more experience in UV-trapping and ncAA incorporation, we have now gone back, modified the protocol and found evidence of trapping. We specifically adjusted our approach in two ways. First, after whole cell recording was established we performed a variable number of pH applications until the peak response stabilized. This essentially allowed rundown/tachyphylaxis to occur, after which we began the experiments depicted in Figure 8. Second, we repeated the experiment using a greater total UV dose (seven trains of 20x 50 ms pulses as opposed to five trains of 14x 50 ms in the resting state data). While this more aggressive irradiation protocol was harder on the cells, it did reveal state-dependent inhibition of the peak response (~50%) as expected if the channels were trapped in the desensitized state.
We have now included this new data as an additional figure with new Materials and methods, Results and Discussion (subsection \u201cConclusion\u201d). 4) The result that desensitization/recovery proceeds from closed channel states is an important conclusion from their data, which is only presented in the Results section. Additional discussion of this finding is warranted. Including a branched kinetic model that describes their data would highlight this finding, but one can argue that it is not essential to support the main conclusions of this paper. We thank the reviewers for their interest in this point. However, we did not wish to convey that we conclude desensitization/recovery proceeds exclusively from closed states. Only that we have found some evidence which favors that hypothesis over the idea that transitions occur primarily to/from open states. Diagnosing the procession of states is challenging and we feel more evidence would be needed before we can make such a conclusion. In light of this, we have moved one section of the Results (discussing how linear reactions predict slow deactivation/resurgent current when recovery is fast) into the Discussion (subsection \u201cDoes ASIC desensitization proceed from open or shut states?\u201d) and added a note to emphasize that more experiments would be needed to support the shut state hypothesis. With regard to kinetic models, prior to submission we did some branched versus linear reaction scheme simulations of desensitization and recovery kinetics for wild type and L414A-like channels. These simulations quickly reveal how grossly inadequate simple models are for ASIC behavior. In either model, if we try to approximate L414A kinetics by increasing the rate constant for recovery from desensitization, a robust increase in steady-state/equilibrium current follows (from 1% of peak to >70% of peak). This is not reflected by the data.
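This behavior of a simple linear scheme can be checked with a minimal steady-state calculation for a C <-> O <-> D chain. All rate constants below are invented for demonstration and are not fitted values from the study:

```python
def equilibrium_open_fraction(k_co, k_oc, k_od, k_do):
    """Equilibrium open probability of a linear C <-> O <-> D scheme.

    At steady state, detailed balance gives O/C = k_co/k_oc and
    D/O = k_od/k_do; normalizing the three occupancies yields P(open).
    All rates here are illustrative.
    """
    c = 1.0
    o = c * k_co / k_oc
    d = o * k_od / k_do
    return o / (c + o + d)

# Slow recovery from desensitization: nearly all channels pile up in D,
# leaving only a tiny equilibrium (sustained) current.
p_open_slow = equilibrium_open_fraction(k_co=100, k_oc=10, k_od=50, k_do=0.5)

# Speeding only the D -> O recovery rate 200-fold in the same linear scheme
# predicts a large equilibrium current.
p_open_fast = equilibrium_open_fraction(k_co=100, k_oc=10, k_od=50, k_do=100)
```

With these illustrative rates, the equilibrium open fraction rises from about 1% to over 60% when only the recovery rate is increased, reproducing the qualitative mismatch with the recorded L414A currents described in the text.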
We hypothesize that this discrepancy in equilibrium current amplitude between model and data arises because of the pH dependence of recovery. As seen in Figure 3\u2014figure supplement 2, recovery gets slower and slower with acidic interpulse pH values. If one extrapolates this trend, then the recovery route at pH 7, 6.5 or 5 would be exceedingly unfavorable, resulting in the minimal steady-state current observed. We have included this notion and one such very simple 4-state branching model in a new Discussion section (subsection \u201cThe recovery from desensitization process\u201d), aimed at highlighting this and other ideas which emerged as a consequence of this review."} +{"text": "None of the 125 spacecraft bacteria showed active growth under the tested low-PTA conditions and amended media. In contrast, a decrease in viability was observed in most cases. Growth curves of two hypopiezotolerant strains, Serratia liquefaciens and Trichococcus pasteurii, were performed to quantify the effects of the added anaerobic electron acceptors. Slight variations in growth rates were determined for both bacteria. However, the final cell densities were similar for all media tested, indicating no general preference for any specific anaerobic electron acceptor. By demonstrating that a broad diversity of chemoorganotrophic and culturable spacecraft bacteria do not grow under the tested conditions, we conclude that there may be low risk of growth of chemoorganotrophic bacteria typically recovered from Mars spacecraft during planetary protection bioburden screenings. To protect Mars from microbial contamination, research on the growth of microorganisms found in spacecraft assembly clean rooms under simulated Martian conditions is required. This study investigated the effects of low atmospheric pressure on the growth of chemoorganotrophic spacecraft bacteria and whether the addition of Mars-relevant anaerobic electron acceptors might enhance growth.
The 125 bacteria screened here were recovered from actual Mars spacecraft. Growth was assayed at 7 hPa, 0 \u00b0C, and under a CO2-enriched anoxic atmosphere. Life-detection experiments on future Mars landers may target Special Regions because local conditions are likely more conducive to life than other terrains3. In order to protect potential Special Regions\u2014that might be identified in the future\u2014from spacecraft contamination, and to potentially search for an extant Mars microbiota, research to characterize microbial survival, metabolism, growth, and evolution must be conducted under relevant Martian conditions. Protecting solar system bodies from contamination by Earth life not only allows the preservation of extraterrestrial habitats in their natural state but also is a precaution to avoid contamination in places where life might exist. For Mars, Special Regions have been defined as environments \u201cwithin which terrestrial organisms are likely to propagate\u201d or \u201cany region which is interpreted to have a high potential for the existence of extant Martian life\u201d6. Spacefaring nations have established standard protocols for the enumeration of biological burden on Mars-bound spacecraft (see ECSS-Q-ST-70-58C)8. For current missions to Mars, the bioburden constraints are derived from quantitative studies of the Viking spacecraft in the mid-1970s. Since then, these guidelines have been routinely implemented, clean rooms and their payloads meticulously cleaned and screened, and microbial species found on spacecraft archived9. Despite the cleaning efforts, the current guidelines allow a certain number of bacteria to be present on a Mars-bound spacecraft, where they mostly encounter harsh and hostile conditions7. Prior to a launch, microbial surveys are completed for spacecraft and in the spacecraft assembly facilities (SAFs) in which the landers and rovers are assembled13.
Recent experiments with bacteria exposed to low pressure (7 hPa), low temperature (0 \u00b0C), and a CO2-enriched anoxic atmosphere have demonstrated that at least 30 bacterial species from 10 genera are capable of metabolism and growth under Mars-relevant low-PTA conditions. The most common bacterial genera capable of growth at low pressures include members of Bacillus, Carnobacterium, Clostridium, Cryobacterium, Exiguobacterium, Paenibacillus, Rhodococcus, Serratia, Streptomyces, and Trichococcus15. These hypopiezotolerant microorganisms inhabit diverse ecological niches including arctic and alpine soils, Siberian permafrost, environmental surface waters, plant surfaces, and seawater; yet they represent only a minor fraction of the overall microbiota, as the vast majority of microorganisms were not able to proliferate even though adequate water and nutrients were provided. However, in these studies, the possibility that microbial species required specific geochemical redox couples or terminal anaerobic electron acceptors for growth under low-PTA conditions was not examined. In order for bacteria on spacecraft to survive and grow on Mars, numerous conditions must be present to permit metabolism, cell proliferation, and adaptation of Earth microorganisms. Various studies have discussed up to 22 biocidal or inhibitory factors that are likely present on the Martian surface but possibly absent in subsurface environments11. In order for microorganisms to metabolize and grow in the shallow subsurface on Mars, a different set of redox couples may be required for metabolic activity. In addition, simulated Martian atmospheric pressures of 7 hPa can restrict the capability to metabolize certain organics17.
To date, bacteria from actual Mars spacecraft and their surrounding SAF clean rooms have not been tested for growth under low-PTA conditions.The importance for redox couples to provide energy that can drive metabolism under low-PTA conditions is mentioned theoretically as an essential requirement on MarsThe current study investigated two goals. First, with regard to forward contamination, a broad diversity of mesophilic and heterotrophic culturable bacteria isolated from the Viking, Pathfinder, Spirit, Opportunity, Phoenix, Curiosity, and InSight spacecraft were subjected to low-PTA conditions and growth was measured over 28\u00a0days. The second objective was to evaluate a range of Mars-relevant geochemical terminal anaerobic electron acceptors to determine if they would enhance microbial activity and growth under simulated low-PTA conditions.2 atmosphere reflecting a subset of environmental conditions microorganisms might encounter on Mars, (b) that spacecraft microorganisms are shielded from UV irradiation, (c) test for growth of 125 culturable mesophilic bacteria from authentic Mars spacecraft under low-PTA conditions, (d) create a set of \u2018conducive\u2019 nutritional and hydrated conditions such that water and nutrient requirements are not limiting in order to focus on the effects of low pressure on bacterial growth, and (e) if none of the bacteria exhibit growth under low-PTA conditions\u2009+\u2009nutrients\u2009+\u2009liquid water conditions, then it would be unlikely that they would be able to grow if additional Martian stressors were added to the experiments. 
The results described below are thus a set of experiments in a series of studies exploring the effects of low-pressure environments on the survival, metabolism, growth, and evolution of spacecraft microorganisms under a subset of simulated Martian conditions. The aims and assumptions for the current study included: (a) testing the effect of a simulated Martian low pressure of 7 hPa at 0 °C under a CO2-enriched anoxic atmosphere. The microbial samples were taken during prelaunch activities according to NASA standard protocols18 and enriched on trypticase soy agar (TSA). The strains were obtained from the Jet Propulsion Laboratory, Pasadena, CA, USA and the Northern Regional Research Laboratory (NRRL) Collection, U.S. Dept. of Agriculture, Peoria, IL, USA. A full list of the spacecraft bacteria can be found in Supplementary Table S1. We surveyed 125 bacteria recovered from six authentic Mars spacecraft missions including the Viking, Pathfinder, Spirit, Opportunity, Phoenix, InSight, and Curiosity platforms. Representatives from the following six taxonomic groups were selected: Actinobacteria; Deinococcus-Thermus; Firmicutes; and α-, β-, and γ-Proteobacteria. A total of 37 different genera were tested on six TSA-based media supplemented with diverse anaerobic electron acceptors. An agar-based method was used to simultaneously screen 25 bacterial strains on individual TSA plates, including a positive and a negative control, under four incubation conditions: (1) simulated low-PTA conditions at 7 hPa, 0 °C, and a CO2-enriched anoxic atmosphere; (2) Control-1: 1013 hPa, 0 °C, and a CO2-enriched anoxic atmosphere; (3) Control-2: 1013 hPa, 0 °C, and an Earth-normal atmosphere (pN2:pO2 at a ratio of 78:21); and (4) Control-3: 1013 hPa, 30 °C, and an Earth-normal atmosphere (pN2:pO2 at 78:21). The TSA assays were run for only 28 days, and not longer, because the agar-based protocol cannot be easily extended beyond 28 days due to the slow dehydration of the agar at 7 hPa15.
However, the length of time tested here was adequate to identify 30 hypopiezotolerant bacteria from 10 genera in previous work with spacecraft microorganisms12 and arctic and permafrost soils15. Vegetative cells of all bacteria were prepared from 24-h cultures on TSA incubated at 30 °C. The strains were subjected to the following enrichment conditions and visually checked every 7 days for a total of 28 days, beginning with (1) simulated Martian atmospheric pressure conditions at 7 hPa, 0 °C, and a CO2-enriched anoxic atmosphere (as described previously15). Briefly, double-thick agar plates (~25 mL of TSA) supplemented with the anaerobic electron acceptors were inserted into a 4-L polycarbonate desiccator connected to a low-pressure controller. Four anaerobic pouches and an indicator tablet were added to each desiccator to maintain anoxic conditions. Once the desiccator lid was closed, the low-pressure chamber was flushed for 1–2 min with ultra-high-purity CO2 gas passed through filter-sterilized (0.22 µm) vent lines. The desiccator was placed in a microbial incubator set at 0 °C and the pressure was reduced stepwise to reach 7 hPa (see ref. 12 for the depressurization protocol). The low-PTA conditions were maintained for 4 weeks. The desiccator was vented and opened only at 7-day intervals to replace the anaerobic pouches and the indicator tablet, or at additional time points for the determination of growth curves (see below). The design and operation of the hypobaric chamber were described previously8. The standard medium used was 0.5 × TSA. An organic-rich medium was used in order to provide adequate nutrition for assays that specifically investigated the effects of low-pressure conditions and anaerobic electron acceptors on the growth of chemoorganotrophic spacecraft bacteria.
All tested spacecraft bacteria were originally recovered on TSA from spacecraft surfaces based on the procedures outlined in the planetary protection protocols3 (nitrate reduction); (3) TSA\u2009+\u20090.1\u00a0g (NH4)2SO4\u2009\u00d7\u20096 H2O, 1.5\u00a0g Na2SO4, 1.5\u00a0g MgSO4\u2009\u00d7\u20096 H2O, and 1.5\u00a0g sodium lactate ; (4) TSA\u2009+\u20092.5\u00a0g Fe3+citrate and 1.5\u00a0g sodium lactate (70% v/v) at pH 5.0 (1st iron reduction), (5) TSA\u2009+\u20092.5\u00a0g Fe3+citrate and 1.5\u00a0g sodium lactate (70% v/v) at pH 7.0 (2nd iron reduction); and (6) TSA\u2009+\u200910\u00a0ml vitamins , Manassas, VA, USA)\u2009+\u200910\u00a0ml mineral solution . Bacteria were streaked on all six media in groups of 25 strains plus one positive control and one negative control , and incubated at the four different environmental conditions for 28\u00a0days. Every 7\u00a0days, assay plates were visually inspected for bacterial growth, and then equilibrated back to the diverse test conditions.To investigate whether the addition of anaerobic electron acceptors enhanced bacterial growth, the following supplements were added : (1) TSA; (2) TSA\u2009+\u20091\u00a0g KNO7 cells/mL) were pipetted as 2.5-\u00b5L drops in a five-by-five grid pattern on TSA plates supplemented with the anaerobic electron acceptors listed above. To obtain growth curves of the bacteria on the specific media, approx. 1\u2009\u00d7\u20091\u00a0cm squares around the cell-suspension drops were excised from the agar every 3\u20134\u00a0days over the course of 28\u00a0days, and processed as follows.A subset of the 125 spacecraft bacteria screened above was selected for quantitative growth curve assays. Single colonies were taken from 24-h cultures on TSA and mixed in sterile 1\u2009\u00d7\u2009phosphate buffered saline (PBS). 
The optical density (OD) of the cell suspensions was measured at 400 nm using a spectrophotometer, and the suspensions were adjusted as necessary to obtain OD values of 0.007. The resulting cell suspensions were used in the growth-curve assays22. Raw data and equations are reported in Supplementary Table S2. The cell count data were plotted on a logarithmic scale. Based on these graphs, the growth rates of bacteria were determined from the time points representing logarithmic growth. By eliminating extraneous data from the plots and analyzing only the linear functions that best fit the data, growth rates (GR) of each strain/medium combination were calculated from the slopes of the linear models fitted to the exponential phases of the growth curves. In addition, the doubling times (DT) were determined using the following equation: DT = ln(2)/GR. All media doped with the various anaerobic electron acceptors were measured for water activity (aw), hydrogen ion concentration (pH), electrical conductivity (EC), and redox potential (Eh). The aw, pH, EC, and Eh measurements were taken after the supplemented TSA media were autoclaved, poured, and solidified. The following instruments were operated per the vendor directions: (1) aw measurements were collected using a water activity analyzer; (2) pH levels were measured with the model 81074UWMMD Orion Ross Ultra pH/ATC triode probe; (3) EC values were measured with a model 01,301,040 Orion probe; and (4) Eh values were estimated with a model 9678BNWP Orion Sure-Flow combination Redox/ORP electrode. The autoclaved solid media were blended with an immersion blender to a semi-liquid consistency before taking EC and Eh measurements. These measurements supported the two questions of the study: (1) whether bacteria recovered from actual Mars spacecraft are capable of growth under simulated low-PTA conditions, and (2) whether supplementing microbial media with anaerobic electron acceptors stimulates growth under low-PTA conditions.
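The fitting procedure described above can be sketched numerically: fit a line to ln(cell counts) over the exponential phase, take the slope as GR, and derive the doubling time as DT = ln(2)/GR. A minimal illustration with hypothetical counts (not data from the study):

```python
import numpy as np

def growth_rate_and_doubling_time(t_days, cfu):
    """Fit ln(CFU) vs. time over the exponential phase and derive the
    growth rate (GR, per day) and doubling time (DT, days). A sketch of
    the standard log-linear approach; in the study the fitting window
    was chosen from the plotted growth curves."""
    log_counts = np.log(cfu)
    gr, _intercept = np.polyfit(t_days, log_counts, 1)  # slope = GR
    dt = np.log(2) / gr                                 # DT = ln(2)/GR
    return gr, dt

# Hypothetical exponential-phase counts doubling every 2 days
t = np.array([0, 2, 4, 6, 8])
cfu = 1e3 * 2 ** (t / 2.0)
gr, dt = growth_rate_and_doubling_time(t, cfu)
```

For perfectly exponential data like this, the fitted doubling time recovers the 2-day spacing exactly; real counts would scatter around the fitted line.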
Although the list does not include all possible anaerobic electron acceptors on Mars, it covers plausible metabolisms that might occur on Mars in the presence of organics .To investigate the effects on growth under various incubation conditions, the following anaerobic electron acceptors for chemoorganotrophic bacteria were tested: (1) trypticase soy agar (TSA), (2) TSA\u2009+\u2009vitamins and minerals, (3) TSA\u2009+\u2009nitrate, (4) TSA\u2009+\u2009sulfate, and (5) TSA\u2009+\u2009Fe2 (21%) and on the media supplemented with the different anaerobic electron acceptors, with the exception of the iron supplemented medium at pH 5.0 . This effect was caused by the low pH of the medium, rather than the Fe3+ supplement, as these strains were able to grow on the same medium at pH 7. Furthermore, it was observed that due to the lower pH, the agar surfaces were soft which led to two issues . And second, soft agar plates led to enhanced growth of a few bacteria that have the ability to swarm . To counteract these issues, the agar concentration in the TSA\u2009+\u2009Fe3+ medium (pH 5.0) was increased to 3% and the number of strains per plate was reduced to four.Figure\u00a02 (21%), except as noted above for the pH 5.0 agar plates. In sharp contrast to the Earth-lab controls, only 14 of 125 bacterial strains were able to grow when the temperature was lowered to 0\u00a0\u00b0C with a lab-normal pO2 , Paenibacillus amylolyticus (ASB-298), Kocuria rosea , Bacillus simplex (ASB-130), Bacillus firmus (ASB-131), Plantibacter flavus (ASB-353), Rhodococcus globerulus (ASB-359), Rhodococcus sp. (ASB-356), Psychrobacillus psychrodurans (ASB-333), Dietzia maris (ASB-383), Labdella kawkjii (ASB-384), and Sphingomonas yunnanensis (ASB-391). From these 15 spacecraft bacteria, S.\u00a0equorum sub. equorum strains (ASB-309 and ASB-312), and the two Rhodococcus strains (ASB-356 and ASB-359) were able to grow under anaerobic conditions as well. 
The results suggested that both low temperature and low pressure led to a reduction in the number of bacteria capable of growth. Specifically, an additive effect was observed at 1013 hPa, 0 °C, and Earth-lab-normal pO2 conditions compared to 1013 hPa and 0 °C under a CO2-enriched anoxic atmosphere. Adding the third stressor (low pressure) decreased the number of strains capable of growth to zero. No new hypopiezotolerant bacteria were recovered from the assayed 125 Mars spacecraft strains. In summary, the environmental conditions of low pressure and low temperature had a greater effect on suppressing bacterial growth than the addition of anaerobic electron acceptors had on stimulating growth. A few differences were observed among the various strains and the five different media. One example was that two Rhodococcus isolates (strains ASB-356 and ASB-359) were not able to grow on TSA or TSA + Fe3+ pH 5 at 0 °C and a CO2-enriched atmosphere, but colonies were observed on TSA + vitamins, TSA + nitrate, TSA + sulfate, and TSA + Fe3+ pH 7 under the same enrichment conditions. Similarly, S. equorum subsp. equorum (strain ASB-312) was able to grow on TSA + sulfate and possibly on TSA + nitrate at 0 °C and a CO2-enriched atmosphere. Kocuria rosea (strain ASB-358) grew better at 1013 hPa, 0 °C, and Earth pO2 on standard TSA when NO3− or SO42− was added, compared to standard TSA or TSA + vitamins. However, none of these strains was able to grow under low-PTA conditions based on the visual observations of colony sizes at the end of the 28-day assays. Growth rates for the two hypopiezotolerant control bacteria, S. liquefaciens and T. pasteurii, were determined on four media.
In addition, four representative spacecraft bacteria, that exhibited negative growth on the visually evaluated agar assays, were chosen to determine if their growth rates were so low that they were rated as negatives on the TSA assays while in fact they were true hypopiezotolerant bacteria, albeit with extremely slow growth rates. Staphylococcus equorum subsp. equorum (ASB-279) was incubated on TSA\u2009+\u2009vitamins, B. simplex (ASB-130) was incubated on TSA\u2009+\u2009Fe3+citrate and sodium lactate at pH 7, A. johnsonii (ASB-326) was incubated on TSA\u2009+\u2009vitamins, and S. yunnanensis (ASB-391) was incubated on TSA\u2009+\u2009sulfate. Aliquots (2.5\u00a0\u00b5l) of the cell suspensions were applied on each of the media tested, and incubated at low-PTA conditions for 28\u00a0days. Every 3\u20134\u00a0days, three random samples were collected by aseptically cutting out a small area of TSA agar containing the cells.In order to better quantify the effects of the added anaerobic electron acceptors under low-PTA conditions, growth rates for two hypopiezotolerant control bacteria, S. liquefaciens, was observed when the medium was supplemented with sulfate, and the slowest growth rate was observed for the iron-supplemented medium , pH, EC, and Eh measurements in the low-pressure desiccators . However, the current identified hypopiezotolerant bacteria, have been isolated primarily from extremophilic ecosystems including permafrost, arctic, and alpine niches. One notable exception is the demonstration that the mesophilic bacterium, S. liquefaciens, can grow under low-PTA conditions15.Recently, approx. 30 species of bacteria from 10 genera were described that are capable of growth at low pressures\u2009\u2264\u200910\u00a0hPa15, the majority of bacteria were found incapable of growth under low-PTA conditions, even though adequate water and nutrients were provided using the organic rich medium TSA. 
In the current study, we sought to expand the list of media and include a diverse set of anaerobic electron acceptors that might power anaerobic heterotrophic metabolisms such as nitrate, sulfate, or ferric iron reduction; plus the nutrients available in TSA.To identify whether mesophilic culturable bacteria prevalent on Mars spacecraft will pose a risk to forward contamination of the surface, the growth of 125 bacteria recovered directly from six authentic Mars spacecraft was tested under low-PTA conditions with a focus on investigating the effect of simulated Martian atmospheric pressure and media augmentation with anaerobic electron acceptors. In previous studies with arctic, alpine, permafrost samplesS. liquefaciens, T. pasteurii17) the assays had to be continuously maintained under stable hydrated conditions with rich sources of organics, for at least 10\u201314\u00a0days, before growth was observable. Such stable liquid and nutrient-rich conditions are very unlikely to occur on the surface of current-day Mars10, and thus, actual conditions on Mars are likely to be significantly more biocidal or inhibitory for the bacteria tested here. Although \u2018\u2026follow the water\u2026\u2019 has become a paradigm for the search for life on Mars, we should not exclude the significant role that environmental conditions in the bulk atmosphere on Mars play on defining habitable regions near the surface.Two key findings of the current study suggest that naturally occurring culturable mesophilic, organotrophic spacecraft bacteria may not be able to thrive on Mars once transferred to the surface. First, none of the 125 bacteria tested were capable of growth under low-PTA conditions suggesting a low risk of clean room chemoorganotrophic bacteria being able to proliferate on the surface of Mars. Furthermore, even when hypopiezotolerant bacteria are able to grow under low-PTA conditions . 
However, the current study cannot rule out all possible chemoorganotrophic metabolisms on Mars because we only focused here on naturally occurring culturable spacecraft bacteria recovered during routine planetary protection monitoring procedures. Currently these are the only isolates available that were directly recovered from Mars-bound spacecraft. There have been few cultivation attempts that targeted psychrophiles, anaerobes, or other microbial specialists from spacecraft.And second, the added anaerobic electron acceptors failed to stimulate the chemoorganotrophic bacteria tested here that represent a portion of the naturally occurring bioburden on authentic Mars spacecraft. Thus, it seems unlikely that additional geochemical redox couples on Mars will be able to overcome the inherent metabolic, genomic, transcriptomic, and proteomic constraints on many spacecraft microorganisms. This finding was unexpected because numerous papers on the habitability of Mars have suggested that geochemical redox couples are likely present on Mars, with the potential to support not only lithotrophic but also organotrophic growth to bind to DNA from dead bacteria or from free extracellular DNA. Thus, PMA-treated samples allow the differentiation between live and dead cells, and therefore provide information for characterizing the active microbiome on spacecraft. 
The application of PMA revealed that more than 90% of the total microbial signatures found on spacecraft or SAF floors originated from dead bacteria or free extracellular DNA34.Furthermore, only recently the microbiome of spacecraft and their surrounding clean rooms have been investigated using cutting edge omics technologies such as next generation sequencing or shotgun metagenome sequencing31 reported the absence of carbon, nitrogen, and sulfur cycling from samples collected from clean room floors in PMA-treated samples is surprising since nitrate reduction has been reported for common spacecraft contaminants like Bacillus35 and Paenibacillus36. To obtain an in-depth knowledge of the overall metabolic capability of an active clean room microbiome, more metagenomics data from actual Mars spacecraft are required.In line with the cultivation studies revealing facultative anaerobes, metagenomic studies of PMA-treated samples have found genetic evidence for fermentive and respiratory metabolic pathways in the living spacecraft microbiome. The fact that Weinmaier et al.39), and two species belonging to genera with known hypopiezotolerant strains15. Examples of genera that include hypopiezotolerant species are: Bacillus, Paenibacillus, Rhodococcus, Streptomyces, Exiguobacterium, and Serratia. It is plausible that the aerobic spacecraft screening procedure and the selection of the isolates to be tested led to an underestimation of species capable of growth under simulated low-PTA conditions despite the fact that facultative anaerobic isolates were recovered during the spacecraft assays.When selecting the bacterial strains for the current study, we chose strains from several dominant families represented in the literature as being recoverable from Mars spacecraft failed to increase their cell densities over 28\u00a0days . For example, up to 22 biocidal and inhibitory factors have been discussed in papers on the habitability of the Martian surface . 
Many of these biocidal factors would likely further inhibit cell growth if added as cofactors to the experiments reported here. We conclude that the probability of growth may be low on Mars for a wide diversity of culturable chemoorganotrophic bacteria prevalent on spacecraft prior to launch, and that the habitability of the modern Martian surface is likely to be significantly constrained by the harsh biocidal and inhibitory environmental factors present on the surface. Using the low-PTA assays, we have screened 125 bacteria recovered from six authentic Mars spacecraft. None of the bacteria tested were confirmed as hypopiezotolerant microbes capable of growth under simulated low-pressure Martian conditions. These results indicate that a range of the chemoorganotrophic and mesophilic bacteria on Mars spacecraft cannot easily grow under simulated conditions of low pressure, low temperature, and a CO2-enriched anoxic atmosphere.
Supplementary Table S1.
Supplementary Table S2.
Supplementary Legends.
We conclude that generalist fox populations consist of individual food specialists in urban and rural populations at least over those periods covered by our study.Some carnivores are known to survive well in urban habitats, yet the underlying behavioral tactics are poorly understood. One likely explanation for the success in urban habitats might be that carnivores are generalist consumers. However, urban populations of carnivores could as well consist of specialist feeders. Here, we compared the isotopic specialization of red foxes in urban and rural environments, using both a population and an individual level perspective. We measured stable isotope ratios in increments of red fox whiskers and potential food sources. Our results reveal that red foxes have a broad isotopic dietary niche and a large variation in resource use. Despite this large variation, we found significant differences between the variance of the urban and rural population for \u03b4 We compared the isotopic specialization of red foxes in urban and rural environments, using both a population and an individual level perspective. We found significant differences between the variance of the urban and rural population for \u03b413C as well as \u03b415N values, suggesting a habitat\u2010specific foraging behavior. Furthermore, generalist fox populations consist of individual food specialists in urban and rural populations. Foraging generalists, in contrast, are individuals varying widely in their resource use and therefore represent the whole niche of the associated population and nitrogen (\u03b415N) stable isotope ratios of vibrissae increments, which provided us with a temporally continuous isotopic record within the same individual. 
To delineate the feeding habits of red foxes, we compared stable isotope ratios of red foxes with those of potential food items using Bayesian isotope mixing models.In this study, we used stable isotopic ratios of red fox whiskers (vibrissae) to quantify and compare the isotopic dietary niche width and feeding tactics of urban and rural red foxes at (I) the population level (single measurements of 119 red foxes) and (II) the individual level . For assessing the individual isotopic specialization, we used carbon , consisting of a mixture of different food items that are isotopically contrasting with natural food sources. Therefore, we predict a smaller isotopic niche for urban foxes, since cities have a relatively constant supply of anthropogenic food throughout space and time. In contrast, the abundance and availability of food resources in rural areas are habitat\u2010dependent and variable over time and space, which should be reflected in a larger dietary (isotopic) niche compared to urban foxes. Assuming that foxes nevertheless concentrate within their individual range on the most available and easiest to obtain food item, this should take up a large proportion of the fox diet and thus result in low variability in isotopic signatures over time in rural and urban foxes . Therefore, both rural and urban red fox individuals follow an specialized feeding tactic, even though foxes are a generalistic species at the population level.22.12 with a maximum diagonal extension of 291\u00a0km. Berlin as capital is characterized by highly urbanized areas, especially in the city center, whereas the surrounding federal state of Brandenburg is characterized by rural areas composed of small forests mostly embedded in agricultural landscapes. 
In the metropolitan area of Berlin, the density of humans increases toward the city center, forming a suburban area connecting the rural regions of Brandenburg and the highly urbanized areas of Berlin gradually.The study was conducted in Berlin and Brandenburg in the northeastern part of Germany Figure\u00a0. Both diRed foxes are found all over the study area, populating rural areas as well as highly urbanized regions. In cooperation with the state laboratory Berlin\u2010Brandenburg (LLBB), we collected a total of 119 whisker samples from dead red foxes originating from urban and rural environments. These samples stem from foxes that were either involved in accidents, were hunted or died of natural causes in the years of 2016 and 2017. Samples of urban and rural foxes were collected throughout the year with fewer data in spring and beginning summer.http://land.copernicus.eu/pan\u2010european/high\u2010resolution\u2010layers/imperviousness/%20imperviousness\u20102012/%20view\"\\h) and extracted the mean of all raster cells within the buffer. Locations having a degree of imperviousness lower than 25% were categorized as \u201crural,\u201d all other locations (\u226525%) were assigned to the category \u201curban.\u201d In the end, 85 of the individuals were assigned to the category \u201crural\u201d and 34 to \u201curban.\u201d Imperviousness is considered to be a suitable proxy for urbanization because it is also associated with factors such as human population density, light pollution, traffic, and noise of each location of death. For this, we used a COPERNICUS imperviousness raster map of 2012 with 20\u00a0m resolution ) and Brandenburg (https://lfu.brandenburg.de/cms/detail.php/bb1.c.359429.de). Since each map has its resolution regarding the land use categories, or names them partially differently, we have assigned all land cover types to the following to have a common basis. 
Nine land use categories were used: agriculture, forest, grassland, open areas, ruderal areas, shrubland, sealed surface, water bodies, and others. As before, landscape diversity (Shannon diversity index of the nine land use categories) was calculated within a 1\u00a0km zone around each sample location (see Appendix).Besides, we characterized the heterogeneity of the landscape by using a land\u2010use map of Berlin (Geotrupidae), earthworm (Lumbricidae), grasshopper (Orthoptera), land snail (Helicidae), land slug (Limacidae), house mouse (Muridae), and bramble (Rosaceae) and to see whether the stable isotope values of food resources vary greatly between the contrasting habitats. Since we were mainly interested in breaking down nutritional tactics and their stability over time instead of exact resource use, the analysis of food items served more as control and at the same time nicely estimates the position within the food niche. Inexperienced readers thus get a direct impression of the potential food as well as its position and can more easily follow our reasoning. We also considered adding anthropogenic food items to the food item analysis, but since these can be very diverse and are often a mixture of different resources, we consciously decided against it. Based on literature research about diet composition of red foxes in our study region and availability of food items, we chose seven potential food sources at the family level with the main focus on covering different trophic levels: dor beetle , as a reference and cut the basal 5\u00a0mm increment of the whiskers, using a scalpel. Assuming a growth rate of 0.43\u00a0mm/day relative to Vienna Pee Dee Belemnite (VPDB). For the stable nitrogen isotopes, atmospheric nitrogen was used as the standard.All food samples were defrosted and washed with distilled water. Indigestible parts such as chitin shells from beetles and grasshoppers or shells from land snails were removed. 
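The landscape-diversity metric used above, the Shannon index over the nine land-use categories within the 1-km buffer, can be sketched as follows (the category shares below are hypothetical, not values from the study):

```python
import math

def shannon_index(cover_fractions):
    """Shannon diversity H' = -sum(p_i * ln p_i) over land-use
    proportions within a buffer; zero-area categories are skipped.
    Higher H' means a more heterogeneous buffer."""
    total = sum(cover_fractions.values())
    h = 0.0
    for area in cover_fractions.values():
        if area > 0:
            p = area / total
            h -= p * math.log(p)
    return h

# Hypothetical 1-km buffer: area shares of five of the nine classes
urban_buffer = {"sealed surface": 0.5, "grassland": 0.2,
                "forest": 0.1, "water bodies": 0.1, "ruderal areas": 0.1}
h = shannon_index(urban_buffer)
```

With this definition, a buffer dominated by one class approaches H' = 0, while an even split across all nine classes would reach ln(9), matching the urban-vs-rural comparison reported in the results.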
A small representative piece of each sample was cut, placed into a 2\u00a0ml tube and dried at 50\u00b0C for 48\u00a0hr [Heraeus Function Lab]. Afterward, 110\u00a0ml of a 1:2 methanol:trichlormethan solution was added and the fat was extracted using a rapid extraction system . For extraction, sample solutions were boiled at 140\u00b0C for 30\u00a0min, distilled and readded to the samples. We ran four extraction cycles of 25\u00a0min each. After extraction, samples were dried again at 50\u00b0C for 24\u00a0hr. Finally, food samples were weighed and loaded into tin capsules following the protocol of whisker samples described above. Samples were combusted and analyzed using a peripheral elemental analyzer [2.3All data analyses were performed with R Studio in R version 3.5.0 (R Core Team 2018).2.3.1We estimated and plotted the isotopic dietary niche metrics of urban and rural foxes based on stable isotope ratios of single individuals using Stable Isotope Bayesian Ellipses in R as a function of the covariates to analyze their potential effects, a linear model was used. Fixed covariates are sex , age , and Julian day (continuous). Finally, we tested whether the variance of \u03b413C and \u03b415N values differs among the urban and the rural fox population using an F\u2010Test.To model the isotope ratios . Therefore, based on the Shannon index, our urban sites are more diverse on a landscape structure scale than the rural ones.We tested the distribution of Shannon index values based on land use classes for both, urban (mean\u00a0=\u00a00.9\u00a0\u00b1\u00a00.16) and rural (mean\u00a0=\u00a00.5\u00a0\u00b1\u00a00.27) fox population see also Appendix ppendix A13C values \u00a0=\u00a01.405, p\u00a0>\u00a0.05) or on \u03b415N values \u00a0=\u00a00.026, p\u00a0>\u00a0.05). 
The F‐test confirmed a significant difference between the variance of the urban and rural population for δ13C values (F(84) = 1.908, p = .040) as well as δ15N values (F(84) = 11.394, p < .001). Our linear model yielded no significant effect of sex, age, or Julian day on individual δ13C or δ15N values. The isotopic compositions of the 34 urban fox whiskers averaged 7.6 ± 0.4‰ for δ15N (range 6.8 to 9.2‰) and −22.6 ± 0.9‰ for δ13C (range −24.8 to −21.2‰); those of the 85 rural foxes averaged 8.5 ± 1.5‰ for δ15N (range 4.4 to 13.2‰) and −23.3 ± 1.2‰ for δ13C (range −26.3 to −18.9‰).
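The variance comparison above is a two-sample variance-ratio (F) test. A minimal sketch with hypothetical δ15N samples; the p-value would be read from the F distribution with the returned degrees of freedom (e.g. via scipy.stats.f):

```python
import statistics

def variance_ratio_f(sample_a, sample_b):
    """F statistic for comparing two sample variances, with the larger
    variance in the numerator, plus the two degrees of freedom (n - 1).
    Sketch of the test used to compare urban vs. rural isotopic
    variance; the p-value step is omitted here."""
    va = statistics.variance(sample_a)
    vb = statistics.variance(sample_b)
    if va >= vb:
        return va / vb, len(sample_a) - 1, len(sample_b) - 1
    return vb / va, len(sample_b) - 1, len(sample_a) - 1

# Hypothetical d15N values: the rural sample is more variable
rural = [4.4, 6.0, 7.5, 9.0, 10.5, 13.2]
urban = [6.8, 7.2, 7.6, 7.9, 8.3, 9.2]
f_stat, df1, df2 = variance_ratio_f(rural, urban)  # f_stat > 1
```

An F statistic well above 1 with a small p-value, as reported for δ15N in the study, indicates that the two populations differ in dietary variance, not just in mean values.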
SEAc values of individual whiskers ranged from 0.1\u20302 to 1.8\u20302 in urban and 0.1\u20302 to 5.0\u20302 in rural foxes.The isotopic niche of rural foxes (TA\u00a0=\u00a03.3\u203013C (one\u2010way ANOVA: F(12)\u00a0=\u00a06.094, p\u00a0<\u00a0.001) and \u03b415N values (F(12)\u00a0=\u00a024.741, p\u00a0<\u00a0.001). This pattern was similar for urban foxes for both \u03b413C (F(18)\u00a0=\u00a015.215, p\u00a0<\u00a0.001) and \u03b415N values (F(18)\u00a0=\u00a09.697, p\u00a0<\u00a0.001). Looking at the results of the analysis of repeated measurements over time for each individual and nitrogen (\u03b415N) isotopic signatures of red fox whiskers as well as longitudinal measurements concerning these two isotopes. The diet of red foxes has been studied over decades provide powerful measures of the trophic positions of individuals and populations, normally you have to apply baseline corrections to account for spatial variation the population level in space and (II) the individual level over space and time. For this purpose, we used carbon showed a broad range spanning multiple trophic levels and included all of the food items we examined specifically . Therefore, the mean Shannon diversity index of urban areas is higher than for rural regions see Appendix ppendix AVulpes macrotis mutica), had significantly higher \u03b413C values (difference in mean\u00a0=\u00a02.4\u2030) and lower \u03b415N values (difference in mean\u00a0=\u00a02.7\u2030) than nonurban individuals and isotopic values similar to human residents. Based on their findings they suggested a shared (anthropogenic) food source and similarities in their diet. Meaty anthropogenic food contains a noticeable amount of corn because livestock reared for meat production is often fed a corn\u2010based diet. 
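The ellipse metrics above (TA, SEAc) come from the SIBER framework in R. As a rough cross-check, the small-sample-corrected standard ellipse area can be computed directly from the covariance of the paired isotope values; the data below are hypothetical whisker-increment values, not measurements from the study:

```python
import numpy as np

def seac(d13c, d15n):
    """Small-sample-corrected standard ellipse area (SEAc, in permil^2)
    for paired d13C/d15N values, following the SIBER definition:
    SEA = pi * sqrt(l1 * l2), where l1 and l2 are the eigenvalues of
    the sample covariance matrix, corrected by (n - 1)/(n - 2).
    A sketch only; the study computed SEAc with SIBER in R."""
    xy = np.vstack([d13c, d15n])
    n = xy.shape[1]
    l1, l2 = np.linalg.eigvalsh(np.cov(xy))  # squared semi-axis lengths
    sea = np.pi * np.sqrt(l1 * l2)
    return sea * (n - 1) / (n - 2)

# Hypothetical whisker increments for one fox (permil)
d13c = [-24.0, -23.0, -22.0, -23.0]
d15n = [8.0, 7.0, 8.0, 9.0]
area = seac(d13c, d15n)
```

Tighter clustering of increments shrinks the covariance eigenvalues and hence the ellipse, which is how a narrow individual SEAc relative to the population niche signals individual dietary specialization.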
Food crops like maize as well as sugar cane, millet, and sorghum are typical C4 plants, which differ in their δ13C values (−12 to −14‰) from C3 plants (−22 to −29‰). Individuals feeding on anthropogenic food sources also have lower δ15N values than individuals that focus on natural prey animals, as processed food usually does not contain identifiable, indigestible material such as exoskeletons, bones, feathers or hair (Craig). Consistent with this, the mean δ15N value was smaller and the mean δ13C value larger for urban foxes, confirming the difference in foraging behavior. Focusing on the SEAc values of individual whiskers, urban individuals had an even narrower isotopic dietary niche than individuals from rural areas. Our longitudinal data at the individual level strengthened our previous findings: since the TA of rural foxes is larger than that of urban individuals, rural foxes cover a broader isotopic niche and therefore a broader dietary spectrum. In addition, the variation in mean δ13C and δ15N values among the different whisker increments was low, and the WIC/TNW ratio (the within-individual component relative to the total niche width) was very low in both populations.
Author contributions: Conceptualization; Data curation (supporting); Formal analysis; Project administration (lead); Visualization (lead); Writing-original draft (lead); Writing-review & editing. Jasmin Firozpoor: Conceptualization (supporting); Formal analysis; Investigation; Writing-original draft (supporting); Writing-review & editing. Stephanie Kramer-Schadt: Conceptualization; Supervision (lead); Writing-review & editing. Pierre Gras: Formal analysis (supporting); Writing-review & editing. Sophia E. Kimmig: Data curation (supporting); Writing-review & editing. Christoph Schulze: Data curation (lead); Writing-review & editing. Christian C. Voigt: Conceptualization; Methodology (lead); Resources (supporting); Supervision (lead); Writing-review & editing.
Sylvia Ortmann: Conceptualization; Funding acquisition (lead); Resources (lead); Supervision (lead); Writing-review & editing."}
{"text": "Sodium-glucose cotransporter-2 inhibitors (SGLT2i) were initially developed to treat diabetes and have been shown to improve renal and cardiovascular outcomes in patients with, but also without, diabetes. The mechanisms underlying these beneficial effects are incompletely understood, as is the response variability between and within patients. Imaging modalities allow in vivo assessment of physiological, pathophysiological, and pharmacological processes at the kidney tissue level. They provide unique insights into the renoprotective effects of SGLT2i and the variability in response and may thus contribute to improved treatment of the individual patient. In this mini-review, we highlight current work and opportunities of renal imaging modalities to assess renal oxygenation and hypoxia, fibrosis, as well as the interaction between SGLT2i and their transporters. Although every modality allows quantitative assessment of particular parameters of interest, we conclude that especially the complementary value of combining imaging modalities in a single clinical trial aids in an integrated understanding of the pharmacology of SGLT2i and their response variability.
The sodium-glucose cotransporter-2 (SGLT2), located in the proximal tubules of the kidneys, is responsible for 80–90% of glucose reabsorption. SGLT2 inhibitors (SGLT2i) increase renal glucose excretion and thus lower plasma glucose levels. Also, the response to SGLT2i, for example regarding HbA1c and albuminuria reduction, varies largely between individuals. In kidney disease, renal perfusion (RP), defined as renal blood flow (RBF) per unit tissue, is compromised due to microvascular damage. An imbalance between oxygen demand and RP results in hypoxia, widely believed to play an important role in the pathophysiology of kidney disease.
By blocking sodium and glucose transport, SGLT2i are thought to reduce active sodium/potassium transport, resulting in reduced cortical oxygen demand. To understand changes in kidney oxygenation due to SGLT2 inhibition, it is of crucial importance to quantitate changes in cortical and medullary kidney perfusion and oxygen consumption. Several magnetic resonance imaging (MRI) and positron emission tomography (PET) techniques are currently available and applied to measure different aspects of renal oxygenation.
Dynamic contrast-enhanced MRI (DCE-MRI) uses paramagnetic contrast fluids, usually a gadolinium chelate, to enhance the signal of water molecules. By kinetic modeling, the single-kidney parameters RBF, GFR, and cortical and medullary blood volumes can be obtained. DCE-MRI is well-established for measuring myocardial perfusion; however, gadolinium-based contrast agents are restricted in patients with advanced renal impairment, and therefore other methods are preferred in these patients.
Phase-contrast MRI (PC-MRI) measures RBF in the renal arteries by opposing gradient magnetic pulses that induce phase shifts in moving protons. When the kidney volume is also measured, e.g., by MRI, RP can be calculated and other parameters such as renal vascular resistance can be derived.
Arterial spin labeling MRI (ASL-MRI) uses radiofrequency pulses to invert the magnetization of the water protons in blood that distribute into organs and thus quantifies tissue perfusion. Originating from brain research, its feasibility has now also been shown in nephrology. The contrast-to-noise ratio is relatively low, but rapidly developing technical improvements may resolve this inconvenience. Cortical measurements have good reproducibility over a wide range of renal functions, are correlated with eGFR, and have been validated in animals and man. However, at this moment, the reproducibility of medullary perfusion measurements is moderate to poor.
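The perfusion quantities above are linked by simple arithmetic: PC-MRI yields flow through the renal artery, and dividing by kidney volume gives RP (RBF per unit tissue). A minimal sketch of that conversion (variable names and example values are illustrative; a real pipeline integrates velocity over the vessel cross-section and the cardiac cycle):

```python
def renal_perfusion(mean_velocity_cm_s, vessel_area_cm2, kidney_volume_ml):
    """Return (RBF in mL/min, RP in mL/min per mL tissue).

    RBF is approximated as mean velocity times vessel cross-sectional area
    (cm^3/s converted to mL/min); RP = RBF / kidney volume, per the
    definition of renal perfusion used in the text.
    """
    rbf_ml_min = mean_velocity_cm_s * vessel_area_cm2 * 60.0
    return rbf_ml_min, rbf_ml_min / kidney_volume_ml
```

For instance, a mean arterial velocity of 20 cm/s through a 0.25 cm² lumen gives an RBF of 300 mL/min; normalized to a 150 mL kidney this is an RP of 2 mL/min per mL tissue.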
Radiolabeled water (H215O) infusion followed by PET imaging is the preferred method for the assessment of regional cerebral perfusion. Blood-oxygenation-level-dependent MRI (BOLD-MRI) uses the difference in magnetic properties of oxygenated vs. deoxygenated hemoglobin to probe tissue oxygenation.
SGLT2i improve fibrosis-associated biomarkers in DKD patients; whether they are truly antifibrotic is unclear, as in vivo data about antifibrotic properties of SGLT2i in humans are lacking. Diffusion-weighted magnetic resonance imaging (DWI-MRI) applies magnetic field gradients to renal tissue that displace water molecules. Kinetic modeling provides an apparent diffusion coefficient (ADC), which relates to water movement, and DWI-MRI is therefore considered sensitive to changes in the renal interstitium, for example due to renal fibrosis or edema, in RP, and in water handling in the tubular compartment. Several studies have related fibrosis to eGFR and ADC; however, one study contradicts the correlation between fibrosis and eGFR, despite a similar relationship between fibrosis and ADC.
The glucose reabsorption capacity of the kidneys is increased during hyperglycemia. Some explanations have been proposed for this phenomenon, including hyperfiltration and tubular growth; whether SGLT2 expression is increased is unknown, and in vivo data regarding SGLT2 expression are lacking. Also, little is known about kidney SGLT2i exposure and its interaction with SGLT2. To investigate changes in glucose transporter expression in kidney disease and to understand the relationships between SGLT2i target exposure, receptor density, and interaction with SGLT2, several radiolabeled substrates are currently used. [18F]-fluoro-D-glucose, widely used in oncology and inflammation, has a high affinity for GLUTs but a low affinity for SGLTs. Recently, [11C]-acetate PET imaging has also been explored.
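The BOLD-MRI readout mentioned above is usually summarized as the relaxation rate R2*, obtained from a multi-echo gradient-echo acquisition under the standard monoexponential signal model S(TE) = S0·exp(−R2*·TE). A minimal log-linear fit (synthetic data; not the study's acquisition parameters):

```python
import numpy as np

def fit_r2star(te_ms, signal):
    """Estimate (R2* in 1/ms, S0) by log-linear least squares, assuming the
    monoexponential decay S(TE) = S0 * exp(-R2* * TE) across echo times."""
    slope, intercept = np.polyfit(np.asarray(te_ms), np.log(signal), 1)
    return -slope, np.exp(intercept)
```

Higher R2* corresponds to more deoxyhemoglobin and is therefore read as lower tissue oxygenation, which is how BOLD studies infer changes after SGLT2 inhibition.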
Impaired renal oxygenation is widely believed to affect the pathophysiology of CKD. As SGLT2i seem to improve renal oxygenation without increasing RBF, [11C]-acetate PET has great potential to provide unique information regarding the effects of SGLT2i on hypoxia.
PET imaging with various radiolabeled substrates is currently used to quantitate the glucose-blocking potential of SGLT2i on SGLT1 and SGLT2. However, as these substrates lack the power to discriminate between the specific transporter subtypes, this approach is less suitable to quantitate SGLT2 function in vivo. PET imaging with a radiolabeled SGLT2i circumvents this issue due to its specific affinity for SGLT2. The radiolabeled SGLT2i should preferably be an isotopologue of its marketed gliflozin, in order to fully reflect its pharmacological properties. With this approach it becomes possible to quantify the tissue pharmacology in vivo, and consequently the response variability in patients. A feasibility study for the use of [18F]-canagliflozin in T2DM patients is currently ongoing.
A few studies have already shown exciting findings by using different imaging modalities, including [13C]-pyruvate-MRI. By combining ASL-MRI, PC-MRI, and BOLD-MRI, an acute SGLT2i-induced improvement in cortical oxygenation in T1DM patients was shown not to be accompanied by changes in RP or RBF. Combined measurement of RBF/RP by PC-MRI and/or ASL-MRI, oxygen consumption by [11C]-acetate PET, and oxygenation by BOLD-MRI would provide a wealth of information on the hypoxia effects of SGLT2i in vivo in patients. DWI-MRI can assess the benefit of SGLT2 inhibition on renal fibrosis, and PET imaging with radiolabeled drugs holds the promise of understanding the response variability between patients. Also, early identification of hypoxia can result in early intervention and thus timely prevention of kidney disease progression.
The papers and current trials discussed here show the recent developments in renal imaging modalities: we have the tools to assess the various effects of SGLT2i on renal oxygenation and hypoxia, fibrosis, and their transporter interactions. We conclude that in particular the complementary value of multiple imaging modalities in a single clinical trial aids in an integrated understanding of the pharmacology of SGLT2i."}
{"text": "The polarization effect has been a powerful tool in controlling the morphology of metal nanoparticles. However, a precise investigation of the polarization effect has been a challenging pursuit for a long time, and little has been achieved for analysis at the atomic level. Here the atomic-level analysis of the polarization effect in controlling the morphologies of metal nanoclusters is reported. By simply regulating the counterions, a controllable transformation between Pt1Ag24(SR)18 and Pt1Ag28(SR)18(PPh3)4 is achieved. In addition, the spherical or tetrahedral configuration of the clusters could be reversibly transformed by re-regulating the proportion of counterions with opposite charges. More significantly, the configuration transformation rate has been meticulously manipulated by regulating the polarization effect of the ions on the parent nanoclusters. The observations in this paper provide an intriguing nanomodel that enables the polarization effect to be understood at the atomic level. Based on the inter-conversion between Pt1Ag24(SR)18 and Pt1Ag28(SR)18(PPh3)4, an insight into the polarization effect in controlling the morphology of metal nanoparticles is presented.
Atomic-level understanding of the counterion effect requires more precise molecular entities as model nanosystems and precise molecular tools.
For this reason, metal nanoclusters benefit from their monodisperse sizes and accurately characterized structures, and provide an ideal platform to investigate the counterion–nanoparticle interactions at the atomic level.
Metal nanoparticles with different morphologies, such as nanostars, nanorods, nanowires, nanoflowers, and so on, have all been the subjects of widespread research interest in the past few decades. Previous research has come close to a unified conclusion: the control of the introduced salts (i.e., CTAB or CTAC) in the preparation of the nanoparticles is able to control their morphologies. Mirkin and co-workers have demonstrated that manipulating (i) the ratio of metal to halide ion, and (ii) the selection of appropriate halide ions could rationally control the morphology of the nanoparticles, under otherwise identical preparation conditions. In this context, it is acceptable that the nature of the counterions plays a crucial role in the growth processes of nanoparticles, and the polarization effect of ion-to-nanoparticle is among the most effective in shape control. Nevertheless, several fundamental questions remain largely unexplored: what potential counterion–metal interactions are primarily responsible for the shape control of the nanoparticles? Do the counterions mainly affect the dispersed metals in the growth processes, or just have an effect on the nanoparticles? Could the morphology of the corresponding nanoparticles be manipulated at the atomic level by regulating the species and the amount of counterions added? The counterion–nanoparticle interactions should be comprehended within established principles of chemistry. In this context, the counterion–nanoparticle interactions as well as the polarization effects of the counterions are to be investigated by using atomically precise nanoclusters with different configurations. This would create a new opportunity for understanding the underlying chemistry of the shape control in nanoparticles.
In this work, the polarization effect in controlling the morphology of nanoclusters was investigated at the atomic level using [Pt1Ag24(S-PhMe2)18]2− (Pt1Ag24) and [Pt1Ag28(S-Adm)18(PPh3)4]2+ nanoclusters as templates. The [Pt1Ag28(S-PhMe2)x(S-Adm)18−x(PPh3)4]2+ nanoclusters could be controllably transformed into Pt1Ag24 with a spherical configuration or Pt1Ag28-1 with a tetrahedral configuration by introducing different salts (PPh4Br or NaBPh4). In addition, by regulating the proportion of the opposite salts, the spherical or tetrahedral morphology of the cluster products could be reversibly converted, forming a cyclic transformation system. More significantly, the rate of the conversion from the tetrahedral Pt1Ag28-2 to the spherical Pt1Ag24 is directly proportional to the magnitude of the polarization effect of ion-to-nanocluster, which could be meticulously manipulated by regulating the interaction distance between the opposite ions and the corresponding nanoclusters ([N(CmH2m+1)4]+Br−, m = 1–8).
All reagents were purchased from Sigma-Aldrich and used without further purification: hexachloroplatinic acid, tetra-n-octylammonium bromide ([N(C8H17)4]Br, TOAB, 98%), tetra-n-heptylammonium bromide ([N(C7H15)4]Br, 98%), tetra-n-hexylammonium bromide ([N(C6H13)4]Br, 98%), tetra-n-amylammonium bromide ([N(C5H11)4]Br, 98%), tetra-n-butylammonium bromide ([N(C4H9)4]Br, 98%), tetra-n-propylammonium bromide ([N(C3H7)4]Br, 98%), tetraethylammonium bromide ([N(C2H5)4]Br, 98%), tetramethylammonium bromide ([N(CH3)4]Br, 98%), hydrobromic acid, methylene chloride, methanol, ethanol.
The syntheses of [N(CmH2m+1)4]+[BPh4]− (m = 4–8) were the same as the synthetic procedure of [PPh4]+[BPh4]−, except that the [PPh4]+Br− was altered to [N(CmH2m+1)4]+Br− (m = 4–8).
The preparation of [Pt1Ag24(SPhMe2)18](PPh4)2 was based on a previously reported method, as was the preparation of [Pt1Ag28(S-Adm)18(PPh3)4]Cl2.
For the nanocluster synthesis, 20 mg of [Pt1Ag28(S-Adm)18(PPh3)4]Cl2 was dissolved in 10 mL of CH2Cl2, to which 10 μL of PhMe2-SH was added. The reaction was allowed to proceed for 30 min at room temperature. Then, the [Pt1Ag28(S-PhMe2)x(S-Adm)18−x(PPh3)4]Cl2 nanoclusters were obtained. ESI-MS and UV-vis measurements were used to track the ligand-exchange process.
Typically, 10 mg of NaBPh4 (in 3 mL of CH2Cl2) was added to the previously mentioned CH2Cl2 solution of [Pt1Ag28(S-PhMe2)x(S-Adm)18−x(PPh3)4]Cl2. The color of the solution slowly altered from yellow to orange. The [Pt1Ag28(S-Adm)18(PPh3)4](BPh4)2 nanocluster was generated after 5 min, which was validated by the ESI-MS results.
Typically, 10 mg of PPh4Br (in 3 mL of CH2Cl2) was added to the previously mentioned CH2Cl2 solution of [Pt1Ag28(S-PhMe2)x(S-Adm)18−x(PPh3)4]Cl2. The color of the solution altered from yellow to green instantaneously. The [Pt1Ag24(SPhMe2)18](PPh4)2 nanocluster was generated in several seconds, which was validated by the ESI-MS results.
Typically, 20 mg of [Pt1Ag28(S-Adm)18(PPh3)4]Cl2 was dissolved in 10 mL of CH2Cl2. Then 10 mg of PPh4Br (in 3 mL of CH2Cl2) and 200 μL of PhMe2-SH were added simultaneously to the solution. The color of the solution altered from orange to green instantaneously, demonstrating the fast generation of [Pt1Ag24(SPhMe2)18](PPh4)2, which was further validated by the ESI-MS results.
To the [Pt1Ag28(S-Adm)18(PPh3)4](BPh4)2 solution (obtained from the aforementioned conversion from Pt1Ag28-2 to Pt1Ag28-1), 25 mg of PPh4Br (twice the mole ratio of NaBPh4) was added. The color of the solution altered from orange to green instantaneously, demonstrating the fast generation of [Pt1Ag24(SPhMe2)18](PPh4)2. Then, to this solution, 30 mg of NaBPh4 was added.
The color gradually altered from green to orange (quite slowly compared with the generation of [Pt1Ag24(S-PhMe2)18](PPh4)2), demonstrating the slow generation of [Pt1Ag28(S-Adm)18(PPh3)4](BPh4)2. All of these processes were tracked by UV-vis and ESI-MS measurements.
Typically, 10 mg of [N(CmH2m+1)4]Br (m = 1–8) was added to the previously mentioned CH2Cl2 solution of [Pt1Ag28(S-PhMe2)x(S-Adm)18−x(PPh3)4]Cl2. The color of the solution altered from yellow to green, and the [Pt1Ag24(S-PhMe2)18][N(CmH2m+1)4]2 nanoclusters were generated. The conversions were performed at −37 °C as this slowed down the reaction. UV-vis measurements were performed to track the conversion and to determine the generation rate of the [Pt1Ag24(S-PhMe2)18][N(CmH2m+1)4]2 nanoclusters.
When PPh4Br was added to the solution of Pt1Ag28-2, it triggered the transformation from the tetrahedral Pt1Ag28-2 to the spherical Pt1Ag24, as shown by the ESI-MS results. Also, some nanocluster entities might be decomposed by the ligand-exchange process from Pt1Ag28-1 to Pt1Ag28-2, which also resulted in the color of the reaction solution lightening. In the time-dependent UV-vis spectra of the transformation from Pt1Ag28-2 to Pt1Ag24, the rapid change in optical absorptions further demonstrated its fast conversion rate. ESI-MS was used to track the PPh3-containing units; as shown in Fig. S7 (ESI), PPh3-containing complexes were observed. However, no nanocluster intermediate was detected, probably because the rapid transformation meant that the intermediates were hard to detect, or because the possible intermediates were so unstable that they would spontaneously transform into Pt1Ag28 or Pt1Ag24 nanoclusters.
The previously mentioned results illustrated the generation of diverse cluster products with different morphologies (i.e., sphere or tetrahedron) induced by the addition of PPh4Br or NaBPh4. By noticing that the Pt1Ag24 and Pt1Ag28-1 could be obtained from the same cluster intermediate, it was perceived as a good opportunity to achieve the inter-conversion between these two nanoclusters with different configurations. For the CH2Cl2 solution of Pt1Ag28-1, the addition of an excess amount of PPh4Br induced the fast conversion from Pt1Ag28-1 to Pt1Ag24 within 3 s, during which process the solution turned from orange to green. It should be noted that the conversion from Pt1Ag28-1 to Pt1Ag24 should go via Pt1Ag28-2, which was hard to detect due to the rapid conversion. Conversely, a further excessive dose of NaBPh4 resulted in the re-generation of Pt1Ag28-1 from Pt1Ag24; however, this conversion was quite a lot slower relative to its opposite process, reflecting the much more favorable conversion from Pt1Ag28-1 to Pt1Ag24 relative to the reverse process. Indeed, the ejection of four Ag atoms (an intra-cluster behavior) during the conversion from Pt1Ag28-1 to Pt1Ag24 was anticipated to be easier than the reverse process, wherein the extraction of four Ag atoms is included (this might be an inter-cluster behavior).
For the [Au25(S-PhC2H4)18]− nanocluster, the TOAB was not only exploited as a phase transfer agent, but also acted as a counterion (TOA+) for balancing the "−1" charge of Au25(S-PhC2H4)18. In addition, the presence of [PPh4]+Br− contributed to the high yield of the syntheses of [Ag25(SR)18]−, [Ag44(SR)30]4−, and so on. Most previous research has focused on the counterion part of the introduced salts; however, the effect of the remaining ions has received little interest. In other words, it remains unexplored as to whether the [cation]+[anion]− takes effect as a whole on the cluster synthesis.
With regard to this work, a fundamental but significant question arose: what is the underlying chemistry of the ion addition-induced nanocluster transformation? Previous research has demonstrated the crucial role of ions in the preparation of metal nanoclusters.
Here, the control over the introduced salts was examined by transforming the tetrahedral Pt1Ag28-2 into the spherical Pt1Ag24. It should be noted that the precise structures of both the Pt1Ag28-2 and Pt1Ag24 nanoclusters rendered them ideal nanomodels for the atomic-level analysis of the ion-induced transformation. The transformation from Pt1Ag28-2 to Pt1Ag24 was activated by different [cation]+[anion]− salts such as [PPh4]+Br−, [PMe4]+Br−, H+Br−, or [PPh4]+[BPh4]−. For the CH2Cl2 solution of Pt1Ag28-2, although the addition of [PPh4]+Br− or [PMe4]+Br− could both trigger the transformation from Pt1Ag28-2 to Pt1Ag24, with [PMe4]+Br− the transformation was much slower (10 s versus 3 s for the color change from orange to green). Such a noticeable slowness resulted from the size disparity between the [PPh4]+ and [PMe4]+ cations. In addition, the addition of H+Br−, [PPh4]+[BPh4]−, Na+Br−, or MgBr2 had no impact on the cluster system and the cluster remained as Pt1Ag28-2, which suggested that the cations and anions worked together to activate the cluster transformation.
To clarify why the transformation rates show a remarkable difference when induced by [PPh4]+Br− or [PMe4]+Br−, the [N(CmH2m+1)4]+Br− salts (m = 1–8), with gradually growing cations and an unchanged anion, were further used to activate the transformation. Considering the apparent enhancement of the UV-vis absorption at 600 nm upon conversion from Pt1Ag28-2 to Pt1Ag24 (with strong absorption), the optical absorption intensity at 600 nm was monitored to characterize the generation of Pt1Ag24 induced by [N(CmH2m+1)4]+Br− (m = 1–8).
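Converting a monitored absorbance trace into a formation rate follows the Beer–Lambert law, A = ε·c·l; the initial rate is then the early-time slope of the concentration curve. A minimal sketch (the molar absorptivity value and data here are placeholders, not measured quantities from this work):

```python
import numpy as np

def initial_rate(t_s, a600, eps=1.0e4, path_cm=1.0, n_early=3):
    """Initial formation rate (mol/L per s) of the 600 nm absorber.

    Beer-Lambert: A = eps * c * l, so c = A / (eps * l); the rate is the
    slope of a straight line fitted through the first n_early (t, c) points.
    eps (L/(mol*cm)) is a placeholder molar absorptivity.
    """
    c = np.asarray(a600) / (eps * path_cm)
    slope = np.polyfit(np.asarray(t_s)[:n_early], c[:n_early], 1)[0]
    return slope
```

Ratios of such initial rates across the [N(CmH2m+1)4]+Br− series are what yields relative-rate comparisons of the kind reported below, and they are independent of the assumed ε as long as the same absorber is tracked.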
According to the absorptions measured at 600 nm, the time-dependent concentrations of Pt1Ag24 in the solution were obtained. For each [N(CmH2m+1)4]+Br−, the Pt1Ag24 nanocluster was generated rapidly in the beginning, then the generation rate leveled off, and finally all of the Pt1Ag28-2 was converted into Pt1Ag24. In addition, compared with the rapid reaction with [N(C8H17)4]+Br− (within 20 s), the reaction with [N(CH3)4]Br was relatively slow and was completed only after 90 s. The relative initial rate with [N(C8H17)4]+Br− was 3.51 when the initial rate with [N(CH3)4]+Br− was set as 1.00, and the initial rates were proportional to the CmH2m+1 chain lengths in the corresponding [N(CmH2m+1)4]+Br−.
Having obtained these ion addition-induced conversions from the tetrahedral Pt1Ag28-2 to the spherical Pt1Ag24, it was proposed that the underlying chemistry was the polarization effect of the introduced ions (including both cations and anions) on the nanoclusters. The explanation for this is given next.
Each [N(CmH2m+1)4]+Br− was separated into two parts: the larger [N(CmH2m+1)4]+ cation and the smaller Br− anion. The steric hindrance of [N(CmH2m+1)4]+ hindered these cations from approaching the cluster kernel. Conversely, the Br− anion, with its small size, kept close to the cluster kernel. In addition, the Pt1Ag28 cluster and the Br− anion attract because of their opposite charges ("+2" versus "−1"), whereas the Pt1Ag28 cluster and the [N(CmH2m+1)4]+ cation repel because their charges have the same sign ("+2" versus "+1"). In this context, the Br− anion should be closer to the nanocluster than its corresponding cation. For the different [N(CmH2m+1)4]+Br− addition-induced transformations, although these [N(CmH2m+1)4]+ cations displayed the same "+1" valence state, they moved away from the cluster as m grew from one to eight because of their increasing steric hindrance to the cluster.
However, the distance between the Br− anion and the cluster remained unchanged. Accordingly, the polarization of Br− toward the cluster kernel (i.e., the interaction between Br− and the cluster) was intensified as m grew, which further accelerated the cluster transformation. Such an effect also rationalizes the rate difference between [PPh4]+Br− and [PMe4]+Br−. It should be noted that control experiments demonstrated that the nanocluster transformation resulted from the proposed polarization effect, and not from a ligand effect.
For further verification, the Br− anion in [N(CmH2m+1)4]+Br− was altered to [BPh4]− ([N(CmH2m+1)4]+[BPh4]−, m = 4–8). As shown in Fig. S9 (ESI), the addition of [N(C6H13)4]+[BPh4]−, [N(C7H15)4]+[BPh4]−, or [N(C8H17)4]+[BPh4]− to the solution of Pt1Ag28-2 can activate the cluster transformation, but not the addition of [N(C4H9)4]+[BPh4]− or [N(C5H11)4]+[BPh4]−. It should be noted that the steric hindrances of [N(C5H11)4]+ and [BPh4]− were almost the same, so that [N(C5H11)4]+[BPh4]− should have counterbalanced the polarization effect on the nanocluster. By comparison, the size (or the steric hindrance) of the cation was larger than that of the corresponding anion for [N(C6H13)4]+[BPh4]−, [N(C7H15)4]+[BPh4]−, and [N(C8H17)4]+[BPh4]−, and thus the induced polarization effect activated the cluster transformation. Indeed, the transformation rate accelerated as the size of the cation increased from [N(C6H13)4]+ to [N(C7H15)4]+ and [N(C8H17)4]+.
In summary, based on the inter-transformation between Pt1Ag28(S-Adm)18(PPh3)4, with a tetrahedral configuration, and Pt1Ag24(S-PhMe2)18, with a spherical configuration, the detailed polarization effect of ions on the nanoparticles has been investigated at the atomic level. The intermediate product Pt1Ag28(S-PhMe2)x(S-Adm)18−x(PPh3)4 could be controllably transformed into spherical Pt1Ag24 or tetrahedral Pt1Ag28 by simply regulating the introduced salts, which further formed a cyclic conversion system. It is significant that the rate of transforming the tetrahedral Pt1Ag28(S-PhMe2)x(S-Adm)18−x(PPh3)4 to the spherical Pt1Ag24(S-PhMe2)18 is directly proportional to the polarization magnitude of the ions introduced to the nanoclusters, which was meticulously controlled by regulating the interaction distance between the opposite ions and the corresponding nanoclusters ([N(CmH2m+1)4]+Br−). Indeed, the control over introduced salts (e.g., CTAB or CTAC) has been pursued for several decades in the preparation of nanoparticles, while the underlying chemistry of this control remains elusive at the atomic level. It is hoped that the polarization effect proposed in this work can help to promote the understanding of the ion effect in nanoparticle syntheses, and further guide such syntheses. Overall, this work presents a maneuverable interconversion between two nanoclusters with different configurations, based on which insights, at the atomic level, into the polarization effect in controlling the morphology of metal nanoparticles are presented. Future efforts will focus on the application of the polarization effect to fabricate more nanoclusters and nanoparticles with customized structures and morphologies.
All the data supporting this article have been included in the main text and the ESI.
X. K. carried out the experiments, analyzed the data and wrote the manuscript. X. W. assisted with the UV-vis analysis and completed the manuscript. S. W. and M. Z. designed the project, analyzed the data, and revised the manuscript.
There are no conflicts to declare."}
{"text": "Our study identifies compounds for RNAi-based modulation of gene expression in skeletal and cardiac muscles, paving the way for both functional genomics studies and therapeutic gene modulation in muscle and heart. Oligonucleotide therapeutics hold promise for the treatment of muscle- and heart-related diseases.
However, oligonucleotide delivery across the continuous endothelium of muscle tissue is challenging. Here, we demonstrate that docosanoic acid (DCA) conjugation of small interfering RNAs (siRNAs) enables efficient (~5% of the injected dose), sustainable (>1 month), and non-toxic (no cytokine induction at 100 mg/kg) gene silencing in both skeletal and cardiac muscles after systemic injection. When designed to target myostatin (Mstn), DCA-siRNAs demonstrated productive (~55%–80%) silencing, which lasted longer than 1 month, translating into a ~50% increase in muscle volume. Furthermore, an exaggerated-pharmacology study showed a lack of significant cytokine induction at a high dose (100 mg/kg), demonstrating the therapeutic potential of DCA-conjugated siRNAs.
Developing siRNA platforms that enable robust muscle delivery is the next milestone for the treatment of muscle-related diseases. Biscans et al. demonstrate that DCA conjugation provides a foundation for delivering efficient and safe therapeutic siRNAs to both skeletal and cardiac muscles, establishing a path toward applying RNAi technology for the treatment of muscle disorders.
Here, we evaluated the potential of the optimized DCA-conjugated siRNA scaffold to silence a therapeutically relevant gene, Mstn. In our previous reports, we synthesized a panel of siRNAs conjugated with saturated and unsaturated fatty acids of varying carbon chain length and unsaturation, i.e., Myr (14:0), DCA (22:0), EPA (20:5 n-3), and DHA (22:6 n-3), and evaluated the impact on relative tissue distribution in mice.
Each lipid conjugate was attached to the 3′ end of the siRNA sense strand, which tolerates a range of covalent modifications. We previously demonstrated silencing of huntingtin mRNA by DCA-conjugated siRNA in both skeletal and cardiac muscles, achieving huntingtin mRNA silencing in quadriceps and heart without altering tissue accumulation.
The muscle growth factor, myostatin (Mstn), is emerging as a therapeutic target of interest for the prevention of muscle wasting. Mstn negatively regulates muscle mass and is primarily expressed in skeletal muscles, with low mRNA levels also reported in cardiac tissues, and Mstn inhibition is associated with increased muscle mass. We designed siRNAs targeting Mstn using sequences extracted from Khan et al. Non-targeting controls (Ntc, a compound of identical chemical configuration but not targeting Mstn mRNA) were used as controls for expression and muscle phenotype analysis. We intentionally dosed within short periods of time to explore whether saturation of the primary clearance tissues (liver/kidney) may allow for better siRNA delivery to muscles. At 1 week and 1 month post-injection, we measured siRNA tissue accumulation and Mstn expression (mRNA and protein). To evaluate the effect of DCA-siRNA-mediated Mstn inhibition on muscle growth, we also measured muscle size/weight at 1 week and 1 month post-injection.
In skeletal muscles, the opposite trend was observed: no significant change in accumulation was detected after one versus two injections at the 2 × 20 mg/kg dose. The non-targeting control (Ntc) showed no significant reduction in Mstn expression in heart, gastrocnemius, or quadriceps muscle at either 1 week or 1 month post-injection, indicating that the observed silencing is due to sequence-specific effects, not the general siRNA chemical scaffold.
For all doses tested, significant silencing of Mstn mRNA was achieved, and a dose-dependent reduction in Mstn mRNA levels was observed: one injection induced 30% silencing (p < 0.01), two injections induced 40% silencing (p < 0.001), and six injections induced 51%–56% silencing (p < 0.0001). Although there was a positive trend between higher accumulation and higher silencing efficacy, the observed correlation did not reach statistical significance. For example, injecting six doses (6 × 20 mg/kg) versus two doses (2 × 20 mg/kg) of siRNA resulted in a ~2.5-fold increase (p < 0.0002) in accumulation. The reduction in Mstn mRNA levels correlated with reductions in serum and muscle Mstn protein, and a higher dose (6 × 20 mg/kg) induced better protein silencing.
Dose-limiting toxicity of highly chemically modified oligonucleotides has been observed, restraining their potential clinical translation. As expected, a high dose (100 mg/kg) of cholesterol-conjugated siRNAs significantly elevated all cytokine levels: increases of 2- to 3-fold in interleukin, 2- to 7-fold in colony-stimulating factor, 3- to 80-fold in chemokine, and 2- to 3-fold in interferon concentrations were observed compared with PBS, and the elevation of 33 out of 34 cytokines reached statistical significance. In addition, even if a single injection of DCA-conjugated siRNAs at a high dose (100 mg/kg) does not induce significant cytokine elevation, repetitive injections at lower doses (as reported in this study) may result in a different toxicity profile. Therefore, cytokine levels were also evaluated in mice (n = 3) after either a single injection (20 mg/kg) or multiple injections (2 × 20 and 6 × 20 mg/kg) of DCA-conjugated siRNAs.
In this work, we targeted Mstn and optimized a dosing regimen for muscle delivery.
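The percent-silencing figures above are the kind of readout produced by relative mRNA quantification; assuming a standard RT-qPCR 2^-ΔΔCt analysis (a generic sketch, not the study's exact assay), the arithmetic looks like:

```python
def percent_silencing(ct_target_treated, ct_ref_treated,
                      ct_target_control, ct_ref_control):
    """Percent knockdown from qPCR Ct values via the 2^-ddCt method.

    dCt normalizes the target gene to a reference gene within each sample;
    ddCt compares treated vs. control, and 2^-ddCt is the remaining
    relative expression. Silencing = 100 * (1 - relative expression).
    """
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    rel_expression = 2.0 ** -(d_ct_treated - d_ct_control)
    return 100.0 * (1.0 - rel_expression)
```

For example, a one-cycle shift in the normalized target Ct (ΔΔCt = 1) corresponds to 50% mRNA silencing.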
Mstn silencing by DCA-siRNAs led to muscle growth without causing toxicity, validating the therapeutic potential of this compound. Our findings provide a foundation for developing efficient therapeutic siRNAs for the treatment of muscle disorders. Although transformative for liver delivery, GalNAc conjugation does not allow significant oligonucleotide accumulation in tissues other than liver and kidneys. Previous reports clearly demonstrate the impact of conjugate structure on the extrahepatic tissue distribution and efficacy of siRNAs. The higher accumulation of DCA-siRNA in muscle compared with unconjugated (~7-fold) and other conjugated siRNAs is consistent with these reports. In the context of ASOs, the same optimization principles may not apply, as shown by Prakash et al. In this study, we build on our previous siRNA findings by demonstrating that the observed gene silencing in skeletal and cardiac muscles after systemic injection of DCA-conjugated siRNAs is target specific. DCA-conjugated siRNAs induced robust and prolonged Mstn mRNA and protein silencing in muscles, leading to increased muscle growth. The observed silencing and change in phenotype were still maintained even 1 month after injection, demonstrating sustained efficacy. The accumulation in cardiac muscle was superior (~1.5 pmol/mg) relative to skeletal muscles (~0.4 pmol/mg), and 25% of siRNAs present at 1 week post-injection remained at 1 month (~0.38 pmol/mg), sufficient to maintain ~65% silencing. Why does heart accumulate more siRNA than skeletal muscles? Although both skeletal and cardiac muscles have generally similar endothelial structure, small changes in endothelial cell number and arrangement (which can be affected by a disease state) might significantly impact accumulation and efficacy.
Moreover, blood volumes and overall drug exposure are naturally higher in heart. An interesting phenomenon observed in this current study is siRNA delivery saturation: above certain levels of siRNA accumulation in tissues and within certain periods of time, additional dosing of siRNA does not increase compound accumulation or efficacy in tissues. This saturation phenomenon, which has not previously been reported, presents differently in different tissues. In liver, an extra dose administered within 12 h resulted in a 2-fold increase in accumulation, indicating that the initial dose of oligonucleotides (20 mg/kg) either did not saturate the liver or liver uptake mechanisms recovered over the 12-h period. Additional doses delivered within the next 48 h did not proportionally increase accumulation, suggesting two doses within 12 h might be optimal to deliver siRNAs. By contrast, only a single dose was needed to fully saturate the heart. It is clear that the maximum amount of oligonucleotide uptake is tissue specific: only ~1.8 pmol/mg was needed to reach saturation in heart, whereas ~27 pmol/mg did not fully saturate liver. Liver is a primary clearance tissue that relies on multiple mechanisms of internalization for blood/tissue exchange. These mechanisms of uptake in liver may not translate to equivalent levels of productive RNA-induced silencing complex (RISC) loading and silencing. In the heart, the extent of blood exchange is significantly less, but the uptake mechanisms may provide better functional access. Indeed, we and others have previously reported the disproportionally higher level of compound accumulation necessary to achieve productive silencing in primary clearance tissues compared with other tissues, such as muscle. In skeletal muscles, we observed a third saturation scenario: an extra dose of siRNA within 12 h had no impact on accumulation, suggesting the tissue was still saturated 12 h after injection.
However, additional doses within the next 48 h resulted in a ~2.5-fold increase in accumulation. Collectively, these results advance our current knowledge of siRNA development by demonstrating that the dosing regimen of siRNAs also needs to be optimized to support optimal duration of effect in a specific tissue of interest. When targeting Mstn, DCA-siRNA achieved silencing levels that supported profound changes in phenotype. The degree of silencing was proportional to the injected dose and to muscle volume growth. In addition, both the degree of muscle silencing and the degree of muscle increase were consistent over a period from 1 week to 1 month. These data demonstrate that specific dosing can be used to achieve and maintain different degrees of target modulation. Accumulation is predictive of duration of effect. In tissues such as muscles, tissue damage can be caused by the disease, leading to an increase of oligonucleotide accumulation in muscular tissues. Therefore, we expect that the DCA performance in delivering siRNA to muscles observed here in healthy mice will be better in disease models. With that said, it is always unknown how the disease state might impact siRNA accumulation, and thus the exact distribution and efficacy will need to be confirmed in disease models. In addition, distribution and efficacy of hydrophobic-conjugated siRNAs rely on serum protein composition, which may be heavily affected by the animal diet.
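The tissue-specific saturation behavior described above can be illustrated with a toy model in which each successive dose is taken up in proportion to the remaining free capacity of the tissue. This is a hypothetical sketch, not the authors' pharmacokinetic model, and the per-dose and capacity parameters are illustrative only:

```python
def uptake_after_doses(n_doses: int, per_dose: float, capacity: float) -> float:
    """Toy saturable-uptake model: each dose contributes in proportion to the
    tissue's remaining free capacity, so accumulation plateaus at `capacity`."""
    acc = 0.0
    for _ in range(n_doses):
        acc += per_dose * (1.0 - acc / capacity)
    return acc

# Heart-like tissue: a single dose already reaches the ~1.8 pmol/mg plateau,
# so a second dose adds nothing (illustrative parameters).
heart_1 = uptake_after_doses(1, per_dose=1.8, capacity=1.8)
heart_2 = uptake_after_doses(2, per_dose=1.8, capacity=1.8)

# Liver-like tissue far from saturation: a second dose nearly doubles accumulation.
liver_1 = uptake_after_doses(1, per_dose=10.0, capacity=100.0)
liver_2 = uptake_after_doses(2, per_dose=10.0, capacity=100.0)
```

The two parameter regimes reproduce the qualitative contrast reported here: a low-capacity tissue (heart) saturates after one dose, while a high-capacity clearance tissue (liver) continues to accumulate with repeated dosing.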
In the future, it would be interesting to systematically explore how a high-fat diet (increase in LDL content) versus starvation will impact distribution of this class of oligonucleotides, which, theoretically, can be substantial. Although hydrophobic conjugation supports wide extrahepatic delivery, as of the publication of this article only ASO-mediated splicing modulators have been approved by the FDA to treat muscle-related diseases, such as Duchenne muscular dystrophy (DMD) (eteplirsen and golodirsen) and spinal muscular atrophy (SMA) (nusinersen). Oligonucleotides were synthesized on a Mermaid 12 synthesizer following standard protocols. In brief, conjugated sense strands were synthesized at 10-μmol scales on custom-synthesized lipid-functionalized controlled pore glass (CPG) supports. Animal experiments were performed in accordance with animal care ethics approval and guidelines of the University of Massachusetts Medical School Institutional Animal Care and Use Committee. In all experiments, 7-week-old female FVB/NJ mice were injected s.c. with either phosphate-buffered saline (PBS controls), non-targeting control siRNA (Ntc), or lipid-conjugated siRNA (n = 6 per group). Samples were analyzed by high-performance liquid chromatography over a DNAPac PA100 anion-exchange column (Thermo Fisher Scientific); Cy3 fluorescence was monitored and peaks integrated, and final concentrations were ascertained using calibration curves. Quantification of antisense strands in tissues was performed using a PNA hybridization assay as described. At 1 week or 1 month post-injection, tissues were collected and stored in RNAlater (Sigma) at 4°C overnight. mRNA was quantified using the QuantiGene 2.0 Assay (Affymetrix) with probe sets for human Htt, mouse Mstn, or mouse Hprt; probes and lysates were added to the bDNA capture plate, and the signal was amplified and detected as described by Coles et al.
Tissue punches were lysed in 300 μL Homogenizing Buffer (Affymetrix) containing 0.2 mg/mL Proteinase K (Invitrogen), and diluted lysates and probe sets were processed according to the manufacturer's protocol. Mice were injected s.c. with DCA- and Chol-conjugated siRNAs at both 50 and 100 mg/kg. At 24 h post-injection, blood was collected by terminal cardiac puncture, and serum was analyzed for cytokine concentration measurement using a customized Luminex assay. Serum samples were analyzed as described in the manufacturer's protocol, with the exception that the final activated serum sample had an additional 1:2 dilution in calibrator diluent before assaying. Data were analyzed using GraphPad Prism 7.01 software. For each independent experiment in mice, the level of silencing was normalized to the mean of the PBS control group. Data were analyzed using non-parametric one-way ANOVA with Dunnett's test for multiple comparisons, with significance calculated relative to PBS controls, and t tests for comparisons of two groups."} +{"text": "Various noninvasive liver fibrosis assessment tools are available. Here, we evaluated the performance of the aspartate aminotransferase-to-platelet ratio index (APRI), the fibrosis-4 index (FIB-4), transient elastography (TE), and the globulin-platelet (GP) ratio for identifying liver fibrosis in patients with hepatitis B virus (HBV) infection. A total of 146 patients were assessed using TE, FIB-4, APRI, the GP ratio, and liver biopsy. Three patient grouping methods were applied: any fibrosis (AF); moderate fibrosis (MF); and severe fibrosis (SF). Receiver operating characteristic (ROC) curve analysis, univariate analyses, and multivariate logistic regression were conducted. Regardless of patient-grouping method, the areas under the curve (AUC) of TE and the GP ratio were similar.
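The AUCs compared in this study can be computed directly from patient scores via the equivalence between the ROC AUC and the Mann-Whitney U statistic: the AUC is the probability that a randomly chosen fibrosis case receives a higher score than a randomly chosen non-fibrosis case, counting ties as one half. A minimal, library-free sketch with made-up scores (the values are hypothetical, not the study's data):

```python
def roc_auc(case_scores, control_scores):
    """AUC via the Mann-Whitney relationship: fraction of (case, control)
    pairs in which the case out-scores the control, counting ties as 0.5."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical GP-ratio scores: fibrosis cases vs. F0 controls
cases = [2.5, 2.2, 1.9, 3.1]
controls = [1.8, 2.0, 1.5]
auc = roc_auc(cases, controls)
```

An AUC of 0.5 corresponds to a non-informative marker and 1.0 to perfect separation, which is the scale on which the GP ratio, TE, APRI, and FIB-4 are compared below.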
Using the AF grouping method, the GP ratio showed superior performance compared with APRI and FIB-4: the AUCs for the GP ratio, TE, APRI, and FIB-4 were 0.76, 0.75, 0.70, and 0.66, respectively. Using the MF grouping method, the GP ratio also showed superior performance compared with APRI and FIB-4: the AUCs for the GP ratio, TE, APRI, and FIB-4 were 0.66, 0.68, 0.57, and 0.53, respectively. Using the SF grouping method, the AUCs for the GP ratio, TE, APRI, and FIB-4 were not significantly different. Compared with FIB-4 and APRI, the GP ratio had higher accuracy for identifying liver fibrosis, especially early-stage fibrosis, in patients with HBV infection. ROC curves of the noninvasive diagnostic methods for liver fibrosis (F0 vs. F1/2/3/4) are shown in the figure. The prevalence of HBV infection is as high as 8% in rural areas. Simple algorithms for assessment of serum biomarkers of liver fibrosis have been developed. The American Association for the Study of Liver Diseases recommends the age-aspartate aminotransferase (AST)-platelet (PLT)-alanine aminotransferase (ALT) index (FIB-4). Although noninvasive tools show good performance in the diagnosis of the later stages of liver fibrosis, they may not have been validated for earlier stages. This study included 146 patients with HBV infection. All patients were Chinese. The study was approved by the Ethics Committee of Chengdu Public Health Clinical Center. Written informed consent was obtained from participants prior to liver biopsy and blood tests. The study complied with the ethical guidelines set out in the 2008 Declaration of Helsinki. Patients with liver inflammation attributed to factors other than HBV infection were excluded. Samples were reviewed by two pathologists. Stages of fibrosis were determined according to the METAVIR system. TE measurements were performed on the right lobe of the liver to obtain liver stiffness measurement values.
The results were expressed in kilopascals (kPa). The median value of 10 successful measurements was considered representative of liver stiffness. The duration of examination was <5 min. TE was carried out within 1 week of liver biopsy. Liver function tests and routine blood tests were also carried out within 1 week of liver biopsy. The FIB-4 score was calculated as [age (years) × AST (U/L)]/[PLT (10^9/L) × ALT (U/L)^(1/2)]. The APRI score was calculated as [AST (U/L)/AST upper normal limit]/PLT (10^9/L) × 100. Upper limits of 37 U/L were used for ALT and AST in both women and men by local convention. Three strategies were used for patient grouping: any fibrosis (AF; F0 vs. F1/2/3/4); moderate fibrosis (MF); and severe fibrosis (SF). The AF grouping method was used to differentiate patients with or without at least minimal liver fibrosis. The MF grouping method was used to differentiate patients with or without progressive liver fibrosis. The SF grouping method was used to differentiate patients with or without significant fibrosis or liver cirrhosis. Statistical analysis was performed using STATA/SE 14.1 software (StataCorp). Normally distributed continuous data were presented as means and standard deviations (SDs), while nonnormally distributed continuous data were presented as medians and ranges. Comparisons between two groups were performed using Student's t tests or Wilcoxon rank sum tests. Fisher's exact tests were used to assess differences in count data. The area under the curve (AUC) was calculated using receiver operating characteristic (ROC) curve analysis. Multivariable logistic regression with stepwise variable selection was applied to fit the data analyzed using different grouping methods. Values of p < 0.05 were considered statistically significant. Among the 146 patients with HBV infection, 102 (69.86%) were male. The average age was 39.7 years (SD 9.43 years). The average body mass index was 23.2 (SD 3.02).
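The FIB-4 and APRI definitions above can be sketched as follows. Note that the square root in FIB-4 and the ×100 factor in APRI follow the standard published formulas (the extraction garbled both); the 37 U/L AST upper normal limit is the local convention stated in the text, and the example inputs are the cohort's summary values:

```python
import math

def fib4(age_years: float, ast: float, alt: float, plt: float) -> float:
    """FIB-4 = [age (years) x AST (U/L)] / [PLT (10^9/L) x ALT (U/L)^(1/2)]."""
    return (age_years * ast) / (plt * math.sqrt(alt))

def apri(ast: float, plt: float, ast_uln: float = 37.0) -> float:
    """APRI = [AST (U/L) / AST upper normal limit] / PLT (10^9/L) x 100."""
    return (ast / ast_uln) / plt * 100.0

# Cohort summary values: age 39.7 y, AST 30.25 U/L, ALT 35.5 U/L, PLT 165.9 x 10^9/L
example_fib4 = fib4(39.7, 30.25, 35.5, 165.9)
example_apri = apri(30.25, 165.9)
```

Both indices rise with AST and fall with platelet count, which is why thrombocytopenia in advancing fibrosis pushes the scores toward their positivity cut-offs.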
Among the 146 patients, 51 (34.9%) cases were staged as F0, 54 (37.0%) as F1, 27 (18.5%) as F2, 10 (6.9%) as F3, and 4 (2.7%) as F4. The median ALT level was 35.5 U/L (range 9-715.4 U/L), the median AST level was 30.25 U/L (range 15.5-448 U/L), and the median total bilirubin level was 12.2 μmol/L (range 0.89-78.8 μmol/L). The average PLT count was 165.9 × 10^9/L (SD 55.5 × 10^9/L), and the median white blood cell (WBC) count was 5.1 × 10^9/L (range 1.7-10.5 × 10^9/L). Using the AF grouping method, 51 patients were at stage F0 while 95 patients were at other stages. The median WBC count in the F0 group was higher than in the liver fibrosis group (F1/2/3/4). The median globulin (GLB) level and gamma-glutamyl transpeptidase level in the F0 group were lower than in the liver fibrosis group (F1/2/3/4). The mean PLT count for the F0 group was higher than in the liver fibrosis group (F1/2/3/4). These differences were statistically significant; the full comparison is presented in the supplementary data table. Multivariable logistic regression with stepwise variable selection was used for identifying relevant variables predicting liver fibrosis levels. Three models were constructed according to the three grouping methods. The results indicated that PLT count and GLB level were statistically significant predictors of fibrosis using the AF and MF grouping methods. PLT count, ALT level, and GLB level remained after stepwise variable selection using the SF grouping method; however, only GLB was statistically significant (Table). Based on the results of multivariable analysis, GLB levels and PLT counts were statistically significantly associated with different liver fibrosis stages. The GP ratio could therefore be a predictor of liver fibrosis.
The GP ratio was calculated as GLB (g/L)/PLT (10^9/L) × 10. ROC curve analysis was used to identify optimal cut-off values for GP, APRI, FIB-4, and TE using the AF grouping method (F0 vs. F1/2/3/4), which were 2.12, 0.42, 1.80, and 8.20, respectively (Table). Platelet count has been shown to be a predictor of liver fibrosis. The GP ratio is potentially a suitable tool for assessing liver fibrosis in patients with HBV infection, especially for those with minimal liver fibrosis. The AUCs of the GP ratio were superior to those of APRI and FIB-4 using the AF and MF grouping methods. In contrast, the AUCs of TE and the GP ratio were similar using all grouping methods. TE is a rapid and noninvasive technique that can easily be performed and has become more accessible in hospitals. However, the performance of TE is correlated with liver biochemistry: if liver function is not stable, this may compromise the accuracy of TE. The APRI is frequently used for liver fibrosis assessments in patients with nonalcoholic fatty liver disease and nonalcoholic steatohepatitis. In our study, the GP ratio showed similar performance to TE using the AF grouping system. Compared with the APRI and FIB-4, the GP ratio had higher sensitivity for detecting minimal liver fibrosis. These results indicate an advantage of the GP ratio over TE, given that patients with severe obesity and elevated liver stiffness have the greatest risks of discordance with liver biopsy. We followed up most patients with HBV infection in the outpatient department. Most have limited examination results compared with inpatients. The most common tests were routine blood and liver function tests, repeated every 1-6 months in these patients. Compared with FIB-4 and APRI, the GP ratio was more suitable for quickly distinguishing patients at stages F0 vs. F1-4.
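The GP ratio calculation and the AF cut-off reported above can be sketched as follows. The ×10 scaling is as reconstructed from the garbled text, 2.12 is the ROC-derived AF cut-off, and the patient values are hypothetical:

```python
def gp_ratio(glb_g_per_l: float, plt_10e9_per_l: float) -> float:
    """GP ratio = GLB (g/L) / PLT (10^9/L) x 10 (scaling as reconstructed)."""
    return glb_g_per_l / plt_10e9_per_l * 10.0

def af_group(gp: float, cutoff: float = 2.12) -> str:
    """AF grouping: scores at or above the cut-off suggest fibrosis (F1-F4)."""
    return "F1/2/3/4" if gp >= cutoff else "F0"

# Hypothetical patients
low_risk = gp_ratio(28.0, 180.0)   # lower globulin, higher platelets
high_risk = gp_ratio(38.0, 110.0)  # higher globulin, lower platelets
```

The ratio rises as globulin increases and platelets fall, the two directions of change the multivariable models above associated with advancing fibrosis.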
The GP ratio can be easily calculated using data from routine blood and liver function tests and involves a less complex mathematical calculation than FIB-4 and APRI. The GP ratio could therefore be used to identify patients who require further examinations such as TE, liver biopsy, or magnetic resonance elastography. Our results indicate that, compared with the GP ratio, the FIB-4 and APRI methods may be less suitable for patients with HBV infection. HBV is a major cause of liver injury in China. The GP ratio may be a promising tool for diagnosis of HBV-infected patients in outpatient departments. However, larger studies for further validation are warranted. The authors declare that they have no conflict of interest. The data supporting the findings of this study are available from the corresponding author on request."} +{"text": "Introduction
Clinicians should know the frequency and resistance patterns of bacteria that cause urinary tract infections (UTI) to provide patients with appropriate treatment and antibiotic management. However, the frequency of cultured organisms and their resistance patterns differ in each community. Therefore, these data must be determined locally to make better treatment decisions. Herein, we aimed to determine the frequency of UTI-causing agents and their current antimicrobial resistance profiles in outpatients attending our hospital.
Methods
This retrospective descriptive study included 308 outpatients attending under the diagnosis of UTI between March and October 2020 who had a positive urine culture for bacterial growth. Age, sex, laboratory tests, urinalysis results, microorganisms grown in urine culture, and antibiograms were evaluated from the patients' medical records. Data were analyzed using SPSS version 23.0 for Windows.
Results
Escherichia coli (E. coli) and Klebsiella species were the most commonly detected agents.
The growth in 71 (23%) of the 308 cultures was extended-spectrum beta-lactamase (ESBL) positive. In the E. coli growths, the susceptibility rates to fosfomycin, gentamicin, nitrofurantoin, trimethoprim-sulfamethoxazole, and ampicillin were 95.2%, 90.3%, 95.3%, 76.8%, and 49.3%, respectively. The susceptibility of Klebsiella species to gentamicin was as high as 93.7%, similar to that of E. coli, whereas its susceptibility rates to fosfomycin, trimethoprim-sulfamethoxazole, and nitrofurantoin were lower than those of E. coli. Of the 71 ESBL-positive growths, 52 were E. coli and 14 were Klebsiella species. Of the ESBL-positive strains, 88.7%, 81%, and 76.1% were susceptible to fosfomycin and nitrofurantoin, and 64.9% and 45.7% were sensitive to cefoxitin and trimethoprim-sulfamethoxazole.
Conclusion
Increasing resistance in E. coli and Klebsiella strains, which are the most common pathological agents of UTI in our region, has limited the use of these commonly prescribed antibiotics. However, the high susceptibility of E. coli growths to fosfomycin and nitrofurantoin and the susceptibility of Klebsiella growths to gentamicin may make these antibiotics stand out as suitable options for the empirical treatment of UTI in our setting. UTIs are among the most common causes of hospital admission and infections for which empirical antibiotic administration is initiated. The increasing rates of ESBL positivity and of resistance to antibiotics such as ampicillin, cephalosporins, trimethoprim-sulfamethoxazole, and quinolones, especially in Escherichia coli and Klebsiella strains that produce ESBL, together with the frequency of Enterobacteriaceae with multiple resistance mechanisms, including carbapenemase, influence decisions regarding the empirical treatment of UTI. Nonnormally distributed continuous data were presented as medians (ranges), and categorical data as values and percentages. In the comparative analysis, chi-square tests were used for categorical variables, and the Mann-Whitney U test was used for continuous variables.
For all the statistical tests, p values < 0.05 were accepted as the statistical significance limit. E. coli and Klebsiella species were the most commonly detected agents; the frequency rates of all the agents are shown in the Table. Of the 308 outpatients included in the study, 220 (71.4%) were female and 88 (28.6%) were male. The median age of the patients was 41 years. We did not include eight cultures (2.6%) in the antibiogram because the bacterial growths were considered contaminated owing to the growth of a low amount of microorganisms or >2 kinds of microorganisms. In the urine culture results, the large majority of growths were E. coli, Klebsiella species, and other gram-negative bacteria. We did not perform subgroup analysis on the gram-positive growths because of their small number. When we compared the demographic and laboratory data according to pathogens, the median ages of the patients with E. coli, Klebsiella species, and other gram-negative growths were 40, 58, and 69 years, respectively. The median ages of the patients with Klebsiella and E. coli growths were significantly different. The proportions of female patients were 77.8% (n = 172), 51.5% (n = 17), and 50% (n = 9) among the patients with E. coli, Klebsiella, and other gram-negative growths, respectively. No significant differences in serum white blood cell count, CRP level, urine pH, RBC count, nitrite, bacteria, leukocyte esterase, protein, and glucose measurements were found between patient groups divided according to the bacterial growths in their urine cultures. We evaluated gram-negative growths comparatively in three groups, namely E. coli, Klebsiella species, and other gram-negative bacteria. In the E. coli growths, the susceptibility rates to fosfomycin, gentamicin, nitrofurantoin, trimethoprim-sulfamethoxazole, and ampicillin were 95.2%, 90.3%, 95.3%, 76.8%, and 49.3%, respectively. The susceptibility of Klebsiella species to gentamicin was as high as 93.7%, similar to that of E.
coli, whereas its susceptibility rates to fosfomycin, trimethoprim-sulfamethoxazole, and nitrofurantoin were lower than those of E. coli. The rates of sensitivity of the Klebsiella and Proteus species to ampicillin were 11.1% and 50%, respectively. In the antibiograms of Staphylococcus species, the third most common growth, the sensitivity to trimethoprim-sulfamethoxazole was 92.8%, and to vancomycin and tigecycline was 100%. The sensitivity to nitrofurantoin and ampicillin was also 100%; however, the number of antibiograms was low. The antibiotic susceptibility of the microorganisms grown in the urine cultures is shown in the Table. A gram-negative antibiogram panel susceptibility comparison between E. coli, Klebsiella species, and other gram-negative strains is also presented: the susceptibility rates of the other gram-negative strains were lower than those of the E. coli growths, but were higher than those of Klebsiella species. The rates of susceptibility of the E. coli growths to ciprofloxacin and nitrofurantoin were higher than those of other gram-negative growths and Klebsiella species growths. A significant difference in cefoxitin susceptibility was only present between E. coli and the other gram-negative growths. The susceptibility to gentamicin was similar between the three groups. The growth in 71 (23%) of the 308 cultures was ESBL positive. Of the 71 ESBL-positive growths, 52 were E. coli, 14 were Klebsiella species, 3 were Enterobacter species, and 2 were Proteus species. The ESBL positivity rate was 23.3% in E. coli, 42.4% in Klebsiella species, and 10.9% in other growths; the ESBL positivity rate was similar between Klebsiella and E. coli. Carbapenemase was positive only in one E. coli culture. This strain was resistant to carbapenems and piperacillin-tazobactam but susceptible to fosfomycin, gentamicin, trimethoprim-sulfamethoxazole, nitrofurantoin, and ciprofloxacin. The median age was 39 years in the patients with ESBL-negative growth and 57 years in those with ESBL-positive growth.
The proportion of women was 74.7% (n = 171) among the patients with ESBL-negative culture growths and 59.2% (n = 42) among those with ESBL-positive growth. We found no significant differences between the two groups regarding serum WBC count, CRP level, urine pH, RBC count, nitrite, leukocyte esterase, protein, and glucose measurements. The gram-negative antibiogram panel susceptibility comparison of the ESBL-positive and ESBL-negative strains is shown in the Table. In cases of community-acquired UTI, antibiotic therapy is often prescribed before culture and susceptibility studies. To prescribe the appropriate antibiotic therapy for patients and reduce the development of antibiotic resistance, clinicians must determine the culture results and antibiotic resistance patterns of the agents grown in cultures locally. In this study, we aimed to determine the frequency and current antimicrobial resistance profiles of agents causing community-acquired UTI in outpatients attending our hospital, a 300-bed secondary care hospital in Turkey. E. coli and Klebsiella species were the most common growths detected. Although E. coli is reported to be the most common cause of UTI in the literature, similar to our study, the second most common pathological agent differed between studies. Ağca reported that the second most common urine growths after E. coli were Pseudomonas aeruginosa (6%), Klebsiella species (5%), Enterococcus species (5%), and Staphylococcus aureus (4%). In another study, Proteus species were reported as the second most common. In two studies, Klebsiella species growths were reported as the second most common, similar to our finding. Another study reported S. aureus and Klebsiella species as the second most common growths after E. coli in patients in low socioeconomic strata. In our study, the proportion of women was higher among patients with E. coli growths, and the male-to-female ratio was close to 1 among patients with Klebsiella species growths. Female sex has been reported as a risk factor of UTI in the literature.
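The comparison of female proportions between the ESBL-negative and ESBL-positive groups can be checked with a Pearson chi-square test on a 2x2 table. The counts below are reconstructed from the reported percentages (171 of 229 ESBL-negative and 42 of 71 ESBL-positive patients were women, after excluding the eight contaminated cultures), and 3.841 is the chi-square critical value for one degree of freedom at p = 0.05:

```python
def chi2_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic (without continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Women/men by ESBL status: ESBL-negative 171/58, ESBL-positive 42/29
stat = chi2_2x2(171, 58, 42, 29)
significant = stat > 3.841  # df = 1, alpha = 0.05
```

With these reconstructed counts the statistic exceeds the 3.841 threshold, consistent with the reported difference in sex distribution between the two groups.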
UTI occurs about twice as frequently in women as in men [13]. Studies on the antibiotic susceptibility of E. coli and Klebsiella species have been conducted in different centers. The susceptibility of E. coli strains to ampicillin has been reported to range from 11.6% to 28%, whereas lower resistance to amoxicillin-clavulanate (16.4%) and nitrofurantoin (4.7%) has been observed. In our study, the susceptibility of E. coli strains to ampicillin was 49.3%, higher than those reported in the literature. High susceptibility to fosfomycin, nitrofurantoin, and gentamicin was observed. While the susceptibility rate to trimethoprim-sulfamethoxazole was lower (76.8%), it was higher than those reported in other studies. The rates of sensitivity to ciprofloxacin and cephalosporins were similar to those reported in the literature. Although studies on the antibiotic resistance pattern of all gram-negative growths are limited in the literature, in studies that evaluated the antibiotic susceptibility of Klebsiella strains, Shaifali et al. reported an ampicillin susceptibility rate of 54.54%. In our study, the susceptibility of the Klebsiella species growths to ampicillin was much lower than those reported in the literature. Similar to the susceptibility of E. coli, the susceptibility of the Klebsiella species to gentamicin was higher (93.7%). The rates of susceptibility of the Klebsiella species to fosfomycin, nitrofurantoin, trimethoprim-sulfamethoxazole, and ciprofloxacin were lower than those of E. coli and similar to those reported in other studies.
These findings are similar to those reported in the literature, and our ESBL positivity rate may be similar to that in large-center hospitals, as our hospital is a 300-bed secondary care hospital with 30 intensive care beds and a center serving patients from different regions. As the ESBL positivity rates vary among hospitals and regions, hospitals must conduct surveillance studies to determine the ESBL positivity rates and resistance patterns. In different studies from Turkey, the ESBL positivity rates ranged from 7.2% to 53%. The antibiotic susceptibility rates of ESBL-positive microorganisms also differed in the literature. In a study conducted in Tunisia, the rates of susceptibility of ESBL-positive E. coli strains to fosfomycin, nitrofurantoin, and trimethoprim-sulfamethoxazole were 100%, 96.4%, and 36.4%, respectively, and the rate of sensitivity of ESBL-positive E. coli strains to ciprofloxacin was 38.1%. Our results should be interpreted with consideration of the limitations of this study. Owing to the study's retrospective design and limited data available in the electronic medical records in our hospital, we could not obtain information on individual patient history regarding risk factors such as urinary stones, urinary catheterization, or other instrumentations. In addition, other known risk factors of UTI were not considered; however, investigating these factors is beyond the scope of our study. Increasing resistance in E. coli and Klebsiella strains, which are the most common pathological agents of UTI in our region, has limited the use of these commonly prescribed antibiotics. However, the high susceptibility of E. coli growths to fosfomycin and nitrofurantoin and the susceptibility of Klebsiella growths to gentamicin may make these antibiotics stand out as suitable options for the empirical treatment of UTI.
Ensuring that hospitals apply the optimum empirical antibiotic treatment by identifying infectious agents and resistance patterns will provide the most effective treatment to patients and prevent the development of antibiotic resistance."} +{"text": "Healthy tissue tolerance to radiation is one of the main limitations in radiotherapy treatment. To broaden the therapeutic index, innovative approaches using non-conventional spatial and temporal beam structure are currently being investigated. Among them, proton minibeam radiation therapy is a promising solution that has already shown a remarkable increase in healthy tissue tolerance in various preclinical models. The purpose of this study is to propose potential strategies to further optimize proton minibeam spatial modulation with the use of magnetic fields. By generating a converging minibeam pattern with dipole magnetic fields, we show that spatial modulation can be improved at shallow depth for the same dose distribution at the tumor location. This indicates that proton minibeam radiation therapy could be efficiently combined with magnetic fields to further increase healthy tissue tolerance. Proton MiniBeam Radiation Therapy (pMBRT) is a novel strategy that combines the benefits of minibeam radiation therapy with the more precise ballistics of protons to further optimize the dose distribution and reduce radiation side effects. The aim of this study is to investigate possible strategies to couple pMBRT with dipole magnetic fields to generate a converging minibeam pattern and increase the center-to-center distance between minibeams.
Magnetic field optimization was performed so as to obtain the same transverse dose profile at the Bragg peak position as in a reference configuration with no magnetic field. Monte Carlo simulations reproducing realistic pencil beam scanning settings were used to compute the dose in a water phantom. We analyzed different minibeam generation techniques, such as the use of a static multislit collimator or a dynamic aperture, and different magnetic field positions, i.e., before or within the water phantom. The best results were obtained using a dynamic aperture coupled with a magnetic field within the water phantom. For a center-to-center distance increase from 4 mm to 6 mm, we obtained an increase of the peak-to-valley dose ratio and a decrease of the valley dose above 50%. The results indicate that magnetic fields can be effectively used to improve the spatial modulation at shallow depth for enhanced healthy tissue sparing. Cancer treatment is a complex process involving many approaches, from physical removal (surgery) and chemotherapy to immunotherapy and radiation therapy. Accelerated photon or electron beams are typically used during conventional radiation therapy (RT). In the last few years, clinical proton therapy (PT) has been rapidly growing, given the much-improved precision in tumor targeting and sparing of normal tissue allowed by the proton physical properties.
The concept of spatially fractionated radiation therapy (SFRT) was first introduced at the beginning of the 20th century and subsequently re-introduced in the 1970s with Co-60 machines and in the 1990s with LINACs, under the name of GRID therapy. The use of charged particles such as protons or very high-energy electrons in SFRT was recently proposed, as they combine several potential advantages: precise ballistics, a reduction of the integral dose, and multiple Coulomb scattering that could enable treatment of deep-seated tumors with a homogeneous dose distribution, whereas normal tissues at shallow depths still benefit from spatial fractionation of the dose. Proton MBRT (pMBRT) has recently been implemented at a clinical facility using high-energy beams (\u2265100 MeV) with passive scattering or scanning techniques. The recent clinical introduction of MRI-guided radiotherapy and the integration of MRI within radiotherapy treatment machines have motivated research on the potential impact of magnetic fields on charged-particle transport and dose distortions, which need to be taken into account for dose measurement, calculation, optimization, and delivery. Finite element and Monte Carlo (radiation transport) methods can be combined and used to accurately simulate the beam transport within the treatment volume. Fast optimized analytical methods to quantify beam deflections in the presence of magnetic fields have been proposed to predict the trajectory of mono-energetic proton beams for the purpose of MRI-guided proton therapy (MRI-PT).
On the other hand, magnetic fields can be used to intentionally deflect or focus particle beams around the treatment isocenter. In this study, we investigate the feasibility of coupling magnetic fields with intensity-modulated proton planar minibeam radiation therapy (pMBRT) to further optimize the characteristics of current proton minibeam delivery systems. Notably, we studied the possibility of using dipole fields to converge minibeams at the Bragg peak location and obtain a higher PVDR at shallow depth with respect to classical pMBRT. The impact of the magnetic field on the beam delivery is, therefore, evaluated, and adaptation strategies for PBS plan delivery and pMBRT planning are presented. Analytical models were used to determine proton trajectories in magnetic fields and optimize their intensity, while Monte Carlo simulations were performed with the TOPAS code to obtain the dose distribution in a water phantom. A total of 10^9 proton histories were simulated for each setup to obtain a relative statistical uncertainty of less than 1% in each voxel throughout the distribution. The dose was scored in a water phantom of 10 \u00d7 10 \u00d7 30 cm3 with a voxel dimension of 0.1 \u00d7 2 \u00d7 1 mm3. Different configurations of the last part of the beamline were simulated to investigate the possibility of using magnetic fields placed after the pMBRT collimator to create a converging minibeam pattern and allow the use of a larger center-to-center distance for the same dose distribution at the Bragg peak location. Notably, we simulated four setups with different configurations of magnetic field and pMBRT collimator. We compared the results in terms of dose distribution in the water phantom with a classic pMBRT configuration without a magnetic field.
In Configuration #1, planar minibeams are produced with a 6.5 cm thick multislit pMBRT collimator containing 15 slits of 0.4 \u00d7 45 mm2 with a center-to-center (ctc) distance of 4 mm and with a slit tilt increasing linearly with the off-axis distance (0.025 degree per millimeter) to fit the beam divergence. In Configuration #2, a collimator with the same slit dimension, but with a larger ctc distance of 6 mm, is coupled with a 5-cm thick dipole placed after it to deviate the minibeams and ensure the same transverse dose distribution at the Bragg peak. The magnetic field in the dipole is uniform in space and directed along the y direction. To converge the minibeams, the field intensity is increased with the off-axis distance of the irradiated slit. In Configuration #3, the same collimator with a 6 mm ctc distance is used, but the magnetic field is applied within the water phantom, as in an MRI-guided scenario. In addition, we investigated two variants of Configurations #2 and #3 in which minibeams are produced with a single scanning dynamic aperture, so that pencil beam spots irradiating a given slit do not also irradiate nearby slits. In other words, the entire proton flux of a given minibeam produced with a dynamic slit comes exclusively from the pencil beam spots with the associated x coordinate. We refer to these configurations as Configuration #2\u2032 and #3\u2032 in the rest of the manuscript. In all configurations, the entrance surface of the water phantom was placed at 7 cm from the collimator rear surface, and a PBS grid aligned with the slit positions was used. A total of 31 \u00d7 17 spots were used, with a vertical spacing of 3 mm and a horizontal spacing equal to the slit ctc distance. A spread-out Bragg peak at a depth of 15.7\u201318.7 cm was obtained using five energy layers from 150 MeV to 166 MeV. The magnetic field intensity depends on the spot off-axis x position and is then optimized with the aim of obtaining the same transverse dose distribution at the Bragg peak as in the reference Configuration #1.
To do so, we first computed the off-axis x coordinates at the Bragg peak in Configuration #1 for each minibeam and each energy, which depend on the range and on the slit divergence. For Configurations #2 and #2\u2032, the minibeam trajectory inside the dipole as a function of the magnetic field is calculated with the Larmor formula, which relates the curvature around the z-axis to the x coordinate at the dipole exit and, in turn, to the x coordinate at the Bragg peak. The magnetic field intensity in the dipole and in the water tank has to be correlated to the spot off-axis position. For Configurations #3 and #3\u2032, the x coordinate at the Bragg peak for a given magnetic field is computed using an analytical approach proposed in the literature for a proton beam in a uniform magnetic field. Magnetic field optimization was performed for the three beam energies used for the monoenergetic simulations as well as for the five energy layers composing the SOBP. Besides monoenergetic beams, a 3 cm wide SOBP composed of five energy layers was used to investigate these approaches in a clinically relevant scenario. An initial set of five energies between 150 MeV and 166 MeV with energy steps of 4 MeV was chosen, which corresponds to five individual Bragg peaks at depths between 157 mm and 187 mm with a distal spacing between peaks of about 7.5 mm. We initially assumed as constant the weights found in the literature. For a given configuration, Monte Carlo simulations (TOPAS) were run separately for each energy layer to obtain a library of 3D dose maps in water as a function of the energy, and the layer weights were then optimized by minimizing a cost function f with a genetic algorithm (GA). Optimization was performed for each configuration separately, in order to verify whether the optimal combination of weights is affected by the different minibeam patterns.
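The Larmor-formula step described above can be sketched numerically. The following is an illustrative Python sketch, not the authors' implementation: it converts proton kinetic energy to relativistic momentum and derives the bending radius and exit deflection in a uniform dipole, using the standard accelerator-physics relation r[m] = pc[GeV]/(0.2998 B[T]) and exact circular-arc geometry.

```python
import math

M_P_MEV = 938.272  # proton rest energy [MeV]

def momentum_mev(kinetic_mev: float) -> float:
    """Relativistic momentum (expressed as pc, in MeV) from kinetic energy."""
    total = kinetic_mev + M_P_MEV
    return math.sqrt(total ** 2 - M_P_MEV ** 2)

def gyroradius_m(kinetic_mev: float, b_tesla: float) -> float:
    """Larmor (bending) radius r = p/(qB); with pc in GeV, r[m] = pc / (0.2998 B)."""
    return momentum_mev(kinetic_mev) / 1000.0 / (0.2998 * b_tesla)

def dipole_deflection(kinetic_mev: float, b_tesla: float, path_m: float):
    """Bending angle [rad] and lateral displacement [m] at the exit of a
    uniform dipole of length `path_m`, assuming circular-arc geometry."""
    r = gyroradius_m(kinetic_mev, b_tesla)
    theta = math.asin(path_m / r)          # bending angle
    return theta, r * (1.0 - math.cos(theta))  # angle, lateral shift

# Example: 150 MeV protons in the 3.48 T, 5 cm dipole quoted in the text
theta, dx = dipole_deflection(150.0, 3.48, 0.05)
```

For these values the bending radius is about 0.53 m and the exit angle a few degrees, consistent with the order of magnitude of the deflections discussed in the text.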
In both configurations, the required field intensity increases with the slit off-axis distance and decreases with the energy, because of the lower deflection required for protons that have a larger range in water. We found maximum values of 3.48 T and 6.18 T to deflect the most external minibeams in Configuration #2 and Configuration #3, respectively. Such values depend on the specific set-up, as discussed below. As an example of the effect of the magnetic field on the minibeam pattern, the 2D dose distribution in water for a 150 MeV beam was examined for all investigated configurations. It can be observed that the minibeams in Configuration #2 (static multislit collimator) are larger than those in Configuration #1 (no magnetic field) and Configuration #2\u2032 (dynamic aperture), indicating that the use of magnetic fields combined with a static multislit collimator leads to minibeam broadening. This broadening is avoided with a dynamic aperture (Configurations #2\u2032 and #3\u2032), as the entire minibeam proton flux is generated only by PBS spots with the corresponding off-axis coordinate and is, therefore, deflected by an optimized magnetic field. When using a magnetic field inside the water phantom (Configurations #3 and #3\u2032), the difference between the use of a static collimator and a dynamic slit is less pronounced and becomes visible only after a few centimeters of water. With respect to the reference configuration, the use of a dipole coupled with a static collimator (Configuration #2) degrades the PVDR, since the detrimental minibeam-broadening effect is dominant with respect to the benefits of a larger ctc distance. This is evident in the PVDR curves at 100 MeV and 150 MeV: the valley dose is indeed higher than that of Configuration #1 for the 100 MeV beam. By contrast, the dynamic aperture (Configuration #2\u2032) allows an increase of the PVDR between 25% (100 MeV) and 32% (200 MeV) at the phantom entrance and between 25% and 40% at a depth equal to half the proton range, and an equivalent reduction of the valley dose.
When using a magnetic field in the water phantom, both the static collimator and dynamic slit configurations provide a higher PVDR and lower valley doses compared to the reference configuration, with the second case being more favorable because of the absence of a broadening effect. The advantage of using a dynamic slit is obvious for the lower energy beam of 100 MeV, for which the PVDR curve of Configuration #3 lies well below the curve of Configuration #3\u2032 and decreases even below that of Configuration #1 after a water thickness of 4.5 cm. An increase of the PVDR and a decrease of the valley dose above 50% were obtained in Configuration #3\u2032 at the water phantom entrance and at half the proton range depth for all energies. Similar conclusions hold for the SOBP scenario, in which the comparison was made between Configuration #1 and the two configurations employing a dynamic slit. A decrease of the valley dose of 30% and 50% was obtained with the dipole field placed after the pMBRT collimator (Configuration #2\u2032) and within the water phantom (Configuration #3\u2032), respectively. More importantly, the use of a converging minibeam pattern would allow spatial fractionation in healthy tissues close to the tumor: in the scenario discussed, a healthy tissue placed at a depth of 10 cm would benefit from a fractionated dose pattern with a PVDR of 2.3 at shallow depths for a comparable PVDR at the SOBP location. In all configurations, the increase in PVDR is related to an equivalent drop of the valley dose due to the larger separation between minibeams.
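The peak-to-valley dose ratio (PVDR) figures quoted here can be computed from a lateral dose profile as the mean dose at the minibeam centers divided by the mean dose midway between them. The following is an illustrative sketch on a synthetic Gaussian minibeam array, not the analysis code used in the study (peak/valley sampling conventions vary between groups).

```python
import numpy as np

def find_centers(profile: np.ndarray, x_mm: np.ndarray) -> np.ndarray:
    """Locate minibeam centers as local maxima above half the global maximum."""
    idx = [i for i in range(1, len(profile) - 1)
           if profile[i] >= profile[i - 1]
           and profile[i] > profile[i + 1]
           and profile[i] > 0.5 * profile.max()]
    return x_mm[idx]

def pvdr(profile: np.ndarray, x_mm: np.ndarray, ctc_mm: float):
    """Mean peak dose (at minibeam centers) over mean valley dose
    (sampled midway between adjacent centers)."""
    centers = find_centers(profile, x_mm)
    valley_x = centers[:-1] + ctc_mm / 2.0
    peaks = np.interp(centers, x_mm, profile)
    valleys = np.interp(valley_x, x_mm, profile)
    return peaks.mean() / valleys.mean(), valleys.mean()

# Synthetic lateral profile: five Gaussian minibeams (sigma 0.5 mm, ctc 4 mm)
x = np.linspace(-12.0, 12.0, 2401)
profile = sum(np.exp(-0.5 * ((x - c) / 0.5) ** 2) for c in (-8, -4, 0, 4, 8))
ratio, valley = pvdr(profile, x, 4.0)
```

With these idealized Gaussians the valley dose is only the overlapping tails, so the PVDR is very large; in measured or simulated profiles scattering fills the valleys and yields the much lower values reported in the text.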
This is a significant result because the valley dose and ctc distance seem to have a significant impact on healthy tissue tolerance in SFRT techniques. With an optimized dipole set-up, the PVDR of Configuration #2\u2032 at shallow depth would be comparable to that of Configuration #3\u2032, and the difference between the two curves in depth would be reduced. We also investigated the combination of both a static multislit collimator and a dynamic single scanning aperture with magnetic fields. The use of a dynamic aperture provides the best results in terms of the increase of PVDR and reduction of valley dose, because the minibeam broadening (due to the deflection of a fraction of each minibeam flux with a non-optimized magnetic field intensity) is avoided. The generation of planar minibeam arrays with a dynamic aperture has recently been investigated at our facility, and more details can be found in the literature. Another factor to consider for a practical implementation of the approaches presented is the maximum value of the magnetic field required to deflect the minibeams. The magnetic field intensity must be increased with the off-axis distance of the minibeam; therefore, the larger the increase in ctc distance with respect to the reference configuration (with no magnetic field), the stronger the maximum magnetic field required to achieve the same transverse Bragg peak dimension and homogeneity. Likewise, for a given increase in ctc distance, the larger the transverse area of the target volume, the higher the number of minibeam arrays required and, consequently, the maximum magnetic field. Moreover, the magnetic field intensity decreases with the beam energy, due to the lower deflection angle required for protons having a larger range in depth. Therefore, constraints on the maximum magnetic field would translate into limitations on the minimum beam energy, the number of slits, and the transverse dimension of the target area.
For the configurations investigated in this study, we found a maximum required intensity for the 100 MeV beam of 3.48 T in the case of a dipole placed after the pMBRT collimator and of 6.18 T if the magnetic field is applied to the water phantom in an MRI-guided PT-like scenario. We should stress that these values also depend on the distance travelled by particles in the magnetic field region. As a result, while the magnetic field extension in the MRI-PT scenario is determined by the proton range, the magnetic field intensity in the dipole placed before the water phantom can be reduced by increasing the dipole thickness. Currently, commercially available MRI-based systems exist only for LINACs, and typical magnetic fields are below 1.5 T. For this proof of concept, we assumed a homogeneous magnetic field. This is not a difficult condition to meet with a dipole placed after the pMBRT collimator, considering that the homogeneous region can be moved to follow the slit during the scan. Concerning the MRI-PT scenario, a good homogeneous field has been reported for an MRI-LINAC system within a volume of 50 \u00d7 50 \u00d7 50 cm3. The Bragg peak retraction in the z direction, due to the deflected trajectory, is another aspect to include in a 3D dose optimization process with such techniques. In the cases presented, the Bragg peak retraction has a negligible impact on the transverse Bragg peak optimization and was not taken into account; it is limited to a maximum of 2 mm for the external minibeams, in agreement with values reported in the literature for comparable conditions. We investigated different configurations of coupling dipole magnetic fields with planar minibeams to produce a converging pattern in depth and improve the spatial modulation at shallow depth.
We showed that using a static multislit collimator coupled with a dipole magnet placed after it degrades the PVDR because of the non-optimized interaction between the magnetic field and multiple minibeams at the same time. The use of a dipole magnet is advantageous only when coupled with a dynamic scanning aperture, in which case we obtained an increase of PVDR and a decrease of valley dose up to 30% at the phantom entrance for the same transverse dose homogeneity at the Bragg peak location. The use of magnetic fields in the water phantom is less affected by the minibeam broadening effect due to non-optimized magnetic fields and provides a considerable improvement of spatial modulation at shallow depth with both a static and a dynamic collimator. In the latter case, we obtained an increase of PVDR above 50%. A relevant improvement of spatial modulation at shallow depth was also obtained with a more complex set-up employing five energy layers from 150 to 166 MeV, generating a 3 cm wide SOBP relevant for clinical applications. Altogether, these results show that pMBRT could be efficiently combined with magnetic fields to further improve the spatial modulation in healthy tissues, provided that a practical implementation is studied in detail."} +{"text": "The score was validated in another 174 patients. Moreover, the new score was compared to an existing tool developed in patients of any age. Compared to the previous tool, the new score was more accurate in predicting death \u22646 and \u226412 months and survival for \u22656 and \u226512 months. This demonstrates the importance of specific survival scores for the group of elderly patients. Many cancer patients with bone metastases receive palliative radiotherapy. The patients\u2019 remaining lifespan should be considered to achieve optimal treatment personalization. Since elderly patients (\u226565 years) are different from younger ones, a specific survival score was developed for this age group.
Survival scores are important for personalized treatment of bone metastases. Elderly patients are considered a separate group. Therefore, a specific score was developed for these patients. Elderly patients (\u226565 years) irradiated for bone metastases were randomly assigned to a test cohort (n = 174) or a validation cohort (n = 174). Thirteen factors were retrospectively analyzed for survival. Factors showing significance (p < 0.05) or a trend (p < 0.06) in the multivariate analysis were used for the score. Based on 6-month survival rates, prognostic groups were formed. The score was compared to an existing tool developed in patients of any age. In the multivariate analysis, performance score, tumor type, and visceral metastases showed significance, and gender showed a trend. Three groups were designed with 6-month survival rates of 0%, 51%, and 100%. In the validation cohort, these rates were 9%, 55%, and 86%. Comparisons of prognostic groups between both cohorts did not reveal significant differences. In the test cohort, positive predictive values regarding death \u22646 and survival \u22656 months were 100% with the new score vs. 80% and 88% with the existing tool. The new score was more accurate, demonstrating the importance of specific scores for elderly patients. Up to 70% of patients with breast or prostate cancer and up to 40% of patients with kidney cancer develop bone metastases during the course of their disease. Metastatic bone pain may increase over several weeks or even months. Patients typically describe their symptoms as burning pain with episodes of break-through pain and aggravation in the night. Therefore, for selection of the best possible dose-fractionation regimen, it is very important to be able to judge a patient\u2019s survival prognosis prior to the start of treatment.
To facilitate this judgement, several survival scores were developed for patients assigned to radiotherapy of bone metastases. A total of 348 elderly patients (\u226565 years) irradiated for bone metastases without symptomatic spinal cord compression between 2009 and 2021 were included in this retrospective study. The most common radiation regimen was 10 \u00d7 3 Gy over 2 weeks, which was used in 163 patients (47%). The entire cohort was randomly divided into a test cohort (n = 174) and a validation cohort (n = 174) using the Excel random number generator. In the test cohort, the radiation dose given as equivalent dose in 2-Gy fractions (<32.5 Gy vs. 32.5 Gy vs. >32.5 Gy), the treatment period (2009\u20132017 vs. 2018\u20132022), and 11 potential prognostic factors were analyzed with respect to survival. Univariate analyses were performed with the Kaplan\u2013Meier method plus the log-rank test. Significant factors (p < 0.05) in the test cohort were evaluated for independence with the Cox proportional hazards model. Factors achieving significance (p < 0.05) or showing a strong trend (p < 0.06) were incorporated in the survival score. For each factor, the 6-month survival rates (in %) were divided by 10. The resulting scoring points were added for each patient. Considering the 6-month survival rates associated with these patient scores, three prognostic groups were formed. Moreover, both cohorts were compared for accuracy to a previous scoring tool, also including three prognostic groups, which was applied to the test and the validation cohort of the present study.
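The point-scoring procedure described above (6-month survival rate per factor level, divided by 10, summed over factors, then mapped to a prognostic group) can be sketched as follows. The factor-level survival rates and the cut-off for group A are hypothetical illustrations; only the B (18\u201325 points) and C (27\u201328 points) ranges come from the text.

```python
# Hypothetical 6-month survival rates (%) per factor level; the real values
# come from the study's test cohort and are not fully listed in the excerpt.
FACTOR_RATES = {
    "gender": {"female": 62, "male": 48},
    "performance": {"ECOG 0-1": 76, "ECOG >=2": 30},
    "tumor": {"breast/prostate": 80, "other": 40},
    "visceral_mets": {"absent": 60, "present": 34},
}

def score(patient: dict) -> int:
    """Sum of scoring points: each factor's 6-month survival rate divided by 10,
    rounded to the nearest integer, as described in the text."""
    return sum(round(FACTOR_RATES[f][lvl] / 10) for f, lvl in patient.items())

def prognostic_group(points: int) -> str:
    """Map total points to a group; the 18-25 (B) and 27-28 (C) ranges are from
    the text, the upper bound of group A is an assumed boundary."""
    if points <= 17:          # assumed cut-off for the poor-prognosis group
        return "A"
    return "B" if points <= 25 else "C"

patient = {"gender": "female", "performance": "ECOG 0-1",
           "tumor": "breast/prostate", "visceral_mets": "absent"}
points = score(patient)  # 6 + 8 + 8 + 6 = 28 -> group C
```

A patient with all favorable factor levels thus lands in the favorable-prognosis group C under these illustrative rates.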
The PPVs for correct prediction of death \u22646 months were calculated as the proportion of patients in the corresponding prognostic group who actually died within 6 months; the PPVs for correct prediction of survival \u22656 months were calculated analogously. On univariate analyses of the test cohort, female gender (p < 0.001), ECOG-PS 0\u20131 (p < 0.001), breast or prostate cancer (p < 0.001), and absence of visceral metastases (p = 0.009) were significantly associated with survival. In the multivariate analysis, performance status (p < 0.001), primary tumor type, and visceral metastases were significant, and gender showed a strong trend. Therefore, all four factors were used for creating the survival score. The scoring points for these factors were based on the 6-month survival rates. Three prognostic groups resulted: A (n = 10), B (18\u201325 points), and C (27\u201328 points). No patient had 26 points. Median survival times of these groups were 1.5 months, 7 months, and 39 months, respectively (p < 0.001). Survival rates were 0%, 51%, and 100%, respectively, at 6 months, and 0%, 33%, and 81%, respectively, at 12 months; the differences between groups A and B (p = 0.002) and groups B and C (p < 0.001) were significant. In the validation cohort, median survival times of prognostic groups A, B (n = 141), and C (n = 22) were 1, 7, and 22 months, respectively (p < 0.001). Survival rates were 9%, 55%, and 86%, respectively, at 6 months, and 0%, 33%, and 81%, respectively, at 12 months; the differences between groups A and B (p = 0.004) and groups B and C (p = 0.005) were significant. The comparisons of the prognostic groups between the test and the validation cohorts did not reveal significant differences for groups A, groups B (p = 0.63), and groups C (p = 0.11). Since different metastatic sites are associated with different prognoses, each site should be considered separately. Moreover, since many elderly patients have significant comorbidities in addition to their cancer disease and reduced function of organs, such as the liver, kidneys, and bone marrow, they would particularly benefit from personalized treatment regimens.
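The PPV computation just described reduces to a simple proportion; the exact formula was lost in extraction, so the sketch below uses the standard definition consistent with the text (fraction of patients in a prognostic group for whom the group's prediction turned out correct). The survival times are hypothetical.

```python
def ppv(group_survival_months, horizon_months=6.0, predict="death"):
    """Positive predictive value of a prognostic-group prediction:
    fraction of patients in the group for whom the prediction was correct.
    predict="death"    -> prediction is death within the horizon;
    predict="survival" -> prediction is survival to at least the horizon."""
    if predict == "death":
        correct = sum(1 for m in group_survival_months if m <= horizon_months)
    else:
        correct = sum(1 for m in group_survival_months if m >= horizon_months)
    return correct / len(group_survival_months)

# Hypothetical survival times (months) for a poor- and a favorable-prognosis group
group_a = [1.0, 1.5, 2.0, 4.5, 5.0]   # all die within 6 months -> PPV 1.0
group_c = [8.0, 14.0, 39.0, 5.0]      # one early death          -> PPV 0.75
```

Applying `ppv(group_a)` and `ppv(group_c, predict="survival")` reproduces the kind of per-group percentages reported for the test and validation cohorts.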
In the present study, the first survival score was developed specifically for elderly patients (\u226565 years) irradiated for bone metastases without neurological deficits due to metastatic spinal cord compression. The new score includes three prognostic groups (A to C) with significantly different survival outcomes. In group A (poor prognosis) of the test cohort, the median survival time was only 1.5 months, and all patients died within 5 months. Therefore, these patients should receive single-fraction radiotherapy in case of painful uncomplicated bone metastases or, otherwise, short-course radiotherapy. This suggestion agrees with the recommendations of the ASTRO evidence-based guideline of radiotherapy for bone metastases. Patients of group B (intermediate prognosis) in the test cohort had a median survival time of 7 months. Approximately every second patient survived for \u22656 months, and approximately every third patient for \u226512 months. Therefore, re-irradiation and re-calcification of the osteolytic bone, which generally takes several months, has become more important. Patients of group C (favorable prognosis) in the test cohort had a median survival time of 39 months. Moreover, 81% of these patients survived for \u226512 months and 63% for \u226524 months, respectively. Therefore, these patients should receive longer-course multi-fraction radiotherapy with higher doses to reduce the rate of re-irradiations and improve the increase in bone density. When following these suggestions, the risk of a hidden selection bias due to the retrospective study design should be kept in mind.
However, the score was validated within this study and proved to be superior to an existing score developed in patients of any age treated between 2009 and 2017 with respect to predicting death \u22646 and \u226412 months and survival for \u22656 and \u226512 months. A new survival score was created specifically for elderly patients (\u226565 years) irradiated for bone metastases without motor deficits due to spinal cord compression. Despite its limitations, this score achieved perfect (100%) accuracy in the test cohort with respect to correct identification of patients dying \u22646 months and patients surviving \u22656 months. In the validation cohort, PPVs were lower but still high. Compared to a previous score developed in patients of any age, the new score was more accurate and, therefore, appeared preferable. Moreover, the new score can also be used to identify patients dying \u226412 months or surviving for \u226512 months. Ideally, the new score will be validated in a prospective cohort of patients."} +{"text": "RdRp and N genes of currently circulating SARS-CoV-2 variants and canine or feline 16S rRNA as an endogenous internal positive control. The developed assay had high sensitivity, specificity, and accuracy and could detect all tested SARS-CoV-2 variants, including Omicron subvariants. Clinical evaluation of canine and feline specimens revealed that the diagnostic sensitivity of the assay was equivalent to that of a commercial SARS-CoV-2 multiplex real-time RT-PCR kit. Furthermore, the canine or feline endogenous internal positive control was amplified using the developed assay, avoiding false-negative results.
Considering the high sensitivity, specificity, accuracy, and reliability, the developed assay can help diagnose COVID-19 in dogs and cats and potentially play a vital role in the rapid diagnosis and control of SARS-CoV-2 infections in companion animals. Given that SARS-CoV-2 infections in companion dogs and cats have been frequently reported worldwide during the ongoing COVID-19 pandemic, a multiplex real-time RT-PCR assay is urgently required to reliably detect SARS-CoV-2 infection in companion animals. In this study, we developed a tailored multiplex real-time RT-PCR assay to simultaneously detect the RdRp and N genes of all currently circulating SARS-CoV-2 variants as well as the canine or feline 16S rRNA gene as an endogenous internal positive control (EIPC) for reliable diagnosis of SARS-CoV-2 infection from suspected dogs and cats. The developed mRT-qPCR assay specifically detected the target genes of SARS-CoV-2 but no other canine or feline pathogens. Furthermore, canine and feline EIPCs were stably amplified by mRT-qPCR in samples containing canine- or feline-origin cellular materials. This assay has high repeatability and reproducibility, with an optimal limit of detection (<10 RNA copies per reaction) and coefficients of variation (<1.0%). The detection rate of SARS-CoV-2 of the developed mRT-qPCR was 6.6% for canine and feline nasopharyngeal samples, which was consistent with that of a commercial mRT-qPCR kit for humans. Collectively, the newly developed mRT-qPCR with canine and feline EIPC can efficiently diagnose and evaluate the viral load in field specimens and will be a valuable tool for etiological diagnosis, epidemiological study, and controlling SARS-CoV-2 infections in canine and feline populations. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections have been frequently reported in companion dogs and cats worldwide during the ongoing coronavirus disease 2019 (COVID-19) pandemic.
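Intra-assay repeatability figures of the kind quoted here (coefficients of variation below 1.0%) are typically computed from replicate Ct values as 100 times the standard deviation over the mean. A minimal sketch follows; the triplicate Ct values are hypothetical.

```python
import statistics

def coefficient_of_variation(ct_values: list[float]) -> float:
    """Intra-assay CV (%) of replicate Ct values: 100 * sample SD / mean."""
    return 100.0 * statistics.stdev(ct_values) / statistics.mean(ct_values)

# Hypothetical triplicate Ct values for one dilution of the RNA standard
cv = coefficient_of_variation([24.1, 24.3, 24.2])  # ~0.4 %
```

Running this over every standard dilution, within and between runs, gives the repeatability and reproducibility tables commonly reported for such assays.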
However, owing to the lack of companion animal-tailored methods, RT-qPCR assays developed for humans have been used for the diagnosis of SARS-CoV-2 infections in suspected companion dogs and cats. Therefore, we developed a multiplex RT-qPCR (mRT-qPCR) using newly designed primers and probes. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent of the ongoing coronavirus disease-2019 (COVID-19) pandemic, is a single-stranded positive-sense RNA virus that belongs to the genus Betacoronavirus in the subfamily Orthocoronavirinae of the family Coronaviridae. Rapid and accurate diagnosis of SARS-CoV-2 infection is a prerequisite for disease control in humans and animals. Currently, reverse transcription real-time quantitative polymerase chain reaction (RT-qPCR) is accepted as the global standard method for detecting SARS-CoV-2 infection. RT-qPCR is based on primers and probes that specifically amplify targeted regions of conserved viral gene sequences, such as the open reading frame 1ab (ORF1ab), RNA-dependent RNA polymerase (RdRp), nucleocapsid (N), envelope (E), or even the spike protein (S) gene. Housekeeping genes, such as RNase P or glyceraldehyde 3-phosphate dehydrogenase (GAPDH), are used as endogenous internal positive controls (EIPC) to monitor potential problems throughout the RT-qPCR process, such as sample collection, nucleic acid extraction, and subsequent test results. Since the emergence of SARS-CoV-2, the World Health Organization (WHO) has classified different isolates according to their pathogenic potential and virulence as variants of concern (VOC), variants of interest (VOI), and variants under monitoring (VUM).
Most of the widely used RT-qPCR assays for human SARS-CoV-2 infections have been developed using primer and probe sets based on the N, E, and RdRp gene sequences. However, other related betacoronaviruses, such as bat SARS-like coronaviruses and SARS-CoV, have genomes with high sequence similarity to the E-gene assay primer and probe sequences. Therefore, first-line screening with the E-gene assay, followed by a confirmation test with either the RdRp or the N-gene assay, is recommended. In this study, we designed primers and probes targeting the RdRp and N gene regions of currently prevalent SARS-CoV-2 isolates from human and animal hosts. Furthermore, we adopted a housekeeping gene stably expressed in canine and feline clinical samples as the EIPC, instead of the human housekeeping gene used in existing RT-qPCR assays for humans, to improve the reliability of the developed assay. Finally, the diagnostic performance of the developed RT-qPCR assay was comparatively evaluated against a commercial mRT-qPCR kit approved by emergency use authorization in Korea, using nucleic acids extracted from different SARS-CoV-2 variants as well as canine and feline clinical samples. The developed mRT-qPCR assay with a canine or feline EIPC that simultaneously amplifies the RdRp and N genes of SARS-CoV-2 could serve as a promising diagnostic tool for SARS-CoV-2 detection in suspected canine and feline clinical cases. Target sequences were retrieved from a global sequence database covering all continents, including Europe, America, Asia, Africa, and Oceania. Sequences with high identity were excluded using the CD-HIT program (http://usegalaxy.eu/ accessed on 20 November 2022) with a threshold of 0.99, and 3084 sequences were ultimately used to design the primers and probes in this study. The BioEdit sequence alignment editor (http://www.mbio.ncsu.edu/BioEdit/bioedit.html accessed on 21 November 2022) was used to align the sequences, and conserved regions suitable for designing primer and probe sets were identified within the target RdRp and N gene sequences.
Based on the conserved sequences, two sets of primers and probes were designed using Geneious Prime to specifically amplify the RdRp and N genes of SARS-CoV-2. A BLAST search against the NCBI GenBank database (http://www.ncbi.nlm.nih.gov/BLAST/ accessed on 23 November 2022) was performed against random nucleotide sequences to rule out potential cross-reactivity of the primers and probes for the SARS-CoV-2 RdRp and N genes. The OligoAnalyzer\u2122 tool from Integrated DNA Technologies was used to check for possible secondary structures between primers and probes, such as hairpins, self-dimers, and hetero-dimers. The SARS-CoV-2 primers and probes were designed paying special attention to the selection of genomic regions that differ from other SARS-CoV relatives and from canine and feline coronaviruses. The canine or feline 16S rRNA gene was used as an EIPC marker for the presence of canine or feline cellular materials. For designing 16S rRNA-specific primers and probes, 17 canine and 15 feline 16S rRNA sequences were obtained from the NCBI GenBank database. Multiple alignments were performed using the BioEdit sequence alignment editor program to identify conserved nucleotide sequences within the 16S rRNA genes. Based on these conserved sequences, a pair of primers and a probe was designed using the Geneious Prime software to detect 16S rRNA.
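The conserved-region scan underlying this primer design can be sketched programmatically. This is an illustrative helper, not the authors' pipeline (they used BioEdit and Geneious Prime): it walks the columns of a multiple alignment and reports gap-free stretches where all sequences agree, which are the candidate primer/probe regions. The toy sequences are made up.

```python
def conserved_windows(aligned: list[str], min_len: int = 20) -> list[tuple[int, int]]:
    """Return (start, end) alignment columns (end exclusive) where every
    sequence carries the same ungapped base for at least `min_len` columns."""
    n = len(aligned[0])
    cols = [len({s[i] for s in aligned}) == 1 and aligned[0][i] != "-"
            for i in range(n)]
    windows, start = [], None
    for i, ok in enumerate(cols + [False]):  # sentinel closes a trailing window
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            if i - start >= min_len:
                windows.append((start, i))
            start = None
    return windows

# Toy alignment: identical for the first 20 columns, variable afterwards
seqs = ["ACGTACGTACGTACGTACGTTT",
        "ACGTACGTACGTACGTACGTAT",
        "ACGTACGTACGTACGTACGTGT"]
conserved_windows(seqs, min_len=10)  # -> [(0, 20)]
```

In practice the window length threshold would match the intended primer/probe lengths, and near-conserved columns could be tolerated with an allowed-mismatch count.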
A BLAST search was used to confirm the specificity of the primers and probes for canine and feline 16S rRNA. Furthermore, to avoid false-negative results, the canine or feline housekeeping gene was amplified as an EIPC in each reaction. For simultaneous and differential detection of the SARS-CoV-2 RdRp and N genes as well as the canine or feline EIPC in a single reaction, reporter dyes with distinct or minimally overlapping fluorescence spectra must be used to label the sequence-specific probes. In this study, the following reporter dye and quencher combinations were used: cyanine 5 (Cy5) and Black Hole Quencher 2 (BHQ2) for the RdRp gene, 6-carboxyfluorescein (FAM) and Black Hole Quencher 1 (BHQ1) for the N gene, and 6-carboxy-2′,4,4′,5′,7,7′-hexachlorofluorescein (HEX) and BHQ1 for each EIPC, according to the manufacturer's guidelines. The sequence, length, melting temperature (Tm), amplicon size, and genomic position of each primer and probe are listed in the corresponding table.

One canine pathogen (Bordetella bronchiseptica) and three feline pathogens (feline calicivirus, feline herpesvirus, and feline leukemia virus) were obtained from commercially available vaccines and used for the evaluation of the assay's specificity. SARS-CoV-2 variants of concern (VOCs) and variants of interest (VOIs) were obtained from the National Culture Collection for Pathogens and used to develop and optimize the mRT-qPCR assay. Partial RdRp and N genes of SARS-CoV-2 spanning the amplified regions of the developed mRT-qPCR assay were amplified by RT-PCR from RNA samples of the SARS-CoV-2 omicron variant (hCoV-19/Republic of Korea/KDCA18126/2021) using RdRp gene-specific and N gene-specific primers, which were designed based on the sequence of the omicron variant (GISAID accession ID: EPI_ISL_6959993). Reverse transcription and cDNA synthesis were performed using a commercial kit.
PCR was performed using a commercial kit in 50 μL reaction mixtures containing 5 μL of 10× Ex Taq Buffer, 4 μL dNTP mixture, 0.25 μL TaKaRa Ex Taq, 0.2 μM of each primer, and 5 μL of SARS-CoV-2 cDNA as a template, according to the manufacturer's instructions. cDNA was amplified in a thermal cycler under the following conditions: initial denaturation at 98 °C for 1 min, followed by 35 cycles of thermocycling, and a final extension at 72 °C for 5 min. The amplified 1697 bp RdRp and 971 bp N gene sequences were inserted into the pTOP TA V2 vector. The recombinant plasmid DNA samples were linearized using EcoRI and purified using the Expin CleanUP SV kit. Subsequently, in vitro RNA transcription was performed using the RiboMAX Express Large Scale RNA Production System-T7 according to the manufacturer's instructions. RNA concentration was determined by measuring the absorbance at 260 nm with a NanoDrop Lite spectrophotometer. After determining the RNA concentration, the copy numbers of the RNA transcripts were quantified as previously described. The transcripts were serially diluted (10^7 to 1 copies/μL), stored at −80 °C, and used as RNA standards for the SARS-CoV-2 RdRp and N genes.

Monoplex RT-qPCR was performed with each RdRp gene, N gene, or EIPC primer and probe set using a commercial RT-qPCR kit and the CFX96 Touch™ Real-Time PCR Detection System. The 25 μL reaction mixture, containing 12.5 μL of 2× reaction buffer, 1 μL of 25× enzyme mix, 0.4 μM of each primer, 0.2 μM probe, 5 μL SARS-CoV-2 RNA template (10^6–10^0 copies/reaction), and SARS-CoV-2-negative canine and feline RNA as the EIPC, was prepared according to the manufacturer's instructions. To optimize the mRT-qPCR conditions, the concentrations of the three sets of primers and probes were adjusted, whereas the other reaction components were kept identical to those used in monoplex RT-qPCR.
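Converting the NanoDrop concentration reading of an in vitro transcript into copies/μL, as needed for the serial dilutions above, follows a standard formula. The sketch below is illustrative (not the authors' quantification script); the average single-stranded RNA monomer mass of ~340 g/mol is an assumption, and transcript lengths follow the amplicon sizes given in the text.

```python
# Sketch: convert transcript concentration (ng/uL, from A260) to copies/uL.
AVOGADRO = 6.022e23           # molecules per mole
SSRNA_G_PER_MOL_PER_NT = 340  # approximate mass of one ribonucleotide (assumed)

def rna_copies_per_ul(conc_ng_per_ul: float, length_nt: int) -> float:
    grams_per_ul = conc_ng_per_ul * 1e-9
    mol_weight = length_nt * SSRNA_G_PER_MOL_PER_NT  # g/mol of the transcript
    return grams_per_ul / mol_weight * AVOGADRO

# Example: a 1697 nt RdRp transcript measured at 50 ng/uL
copies = rna_copies_per_ul(50.0, 1697)
print(f"{copies:.3e} copies/uL")
```

From this value, the 10-fold dilution series down to 1 copy/μL can be prepared.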
The monoplex and multiplex RT-qPCR programs were identical and comprised 30 min at 50 °C for reverse transcription and 15 min at 95 °C for initial denaturation, followed by 40 cycles of 95 °C for 15 s and 60 °C for 60 s for amplification. Cy5 (SARS-CoV-2 RdRp gene), FAM (SARS-CoV-2 N gene), and HEX (EIPC) fluorescence signals of the tested samples were measured at the end of each annealing step. To interpret the monoplex RT-qPCR and mRT-qPCR results, samples with cycle threshold (Ct) values ≤ 37 for both the RdRp and N genes were regarded as positive, whereas those with higher Ct values (> 37) were regarded as negative. Samples were considered invalid if the EIPC was not detected within 40 amplification cycles.

The analytical sensitivity of the mRT-qPCR assay for SARS-CoV-2 was determined using serial dilutions (10^6–10^0 copies/reaction) of each RNA standard of the SARS-CoV-2 RdRp and N genes, tested in triplicate. For data analysis, a standard curve of the Ct values from the 10-fold dilutions of the RdRp and N gene RNA standards (10^6–10^0 copies/reaction) was created using the CFX96 Touch™ Real-Time PCR Detection software. The correlation coefficient (R²) of the standard curve, the standard deviations of the results, and the SARS-CoV-2 RNA copy numbers in the samples were calculated from the standard curves using the detection software. The efficiency of the assay was determined using a previously described calculation.

To test the specificity of the mRT-qPCR assay, the assay was performed using RNA samples obtained from a SARS-CoV-2 omicron variant (GISAID accession ID EPI_ISL_6959993), SARS-CoV (Tor2 strain), seven canine pathogens (including Bordetella bronchiseptica), three feline pathogens, two SARS-CoV-2-negative canine and feline clinical samples, two canine- or feline-origin cell cultures (MDCK and CRFK cells), and two non-canine- or feline-origin cell cultures (Vero and ST cells) as negative controls.

The repeatability (intra-assay precision) and reproducibility (inter-assay precision) of the mRT-qPCR assay for SARS-CoV-2 detection were determined using three different concentrations of each viral standard gene, and the coefficients of variation were calculated according to previously described guidelines.

A commercially available mRT-qPCR kit (Kogene) was used to compare the diagnostic performance of the mRT-qPCR assay developed in this study. The commercial mRT-qPCR was performed with SARS-CoV-2 RdRp and E gene-specific and human GAPDH gene-specific primer and probe sets using the CFX96 Touch™ Real-Time PCR Detection System according to the manufacturer's instructions. Real-time fluorescence values of the FAM- (RdRp gene), HEX- (E gene), and Cy5- (internal control) labeled probes were measured at the end of each annealing step. To interpret the commercial mRT-qPCR results, samples with Ct ≤ 38 for both the RdRp and E genes were considered positive. A sample was considered negative if no Ct values were observed after the completion of 40 cycles of amplification or if the Ct values were > 38. Retesting was recommended if only one of the RdRp and E genes was positive.

The diagnostic performance of the mRT-qPCR assay was evaluated in three steps with different categories of samples. First, to evaluate the diagnostic sensitivity of the mRT-qPCR assay for different SARS-CoV-2 variants circulating in the human population, RNA samples of 21 SARS-CoV-2 variants isolated from human clinical cases during the Korean epidemic were obtained from the NCCP, tested at an identical concentration (10^4 RNA copies/reaction), and the results were compared. Second, to evaluate the diagnostic performance of the mRT-qPCR assay for SARS-CoV-2-positive animal samples, 37 RNA samples extracted from SARS-CoV-2-positive canine and feline clinical samples (14 dogs and 23 cats) were obtained from APQA and tested using the developed mRT-qPCR and Kogene's mRT-qPCR assays, and the results were compared. Finally, to evaluate the diagnostic performance of the mRT-qPCR for blind clinical samples, a total of 520 nasopharyngeal samples (266 dogs and 254 cats) were obtained from a companion animal health-care company and tested using both assays, and the results were compared.

Based on the results of both assays for the canine and feline clinical samples (37 positive samples and 520 blind samples), the inter-assay concordance was analyzed using Cohen's kappa statistic with a 95% confidence interval (CI). The calculated kappa coefficient (κ) was interpreted as follows: κ < 0.20, slight agreement; 0.21–0.40, fair agreement; 0.41–0.60, moderate agreement; 0.61–0.80, substantial agreement; and 0.81–1.0, almost perfect agreement.

FAM (SARS-CoV-2 N gene), Cy5 (SARS-CoV-2 RdRp gene), and HEX (EIPC) fluorescence signals were successfully generated by mRT-qPCR with each primer and probe set and the corresponding SARS-CoV-2 RNA or canine and feline RNA samples, suggesting that the three fluorescent signals of FAM, Cy5, and HEX can be detected simultaneously in a single reaction. Furthermore, HEX signals for the EIPC were consistently detected in nasopharyngeal samples regardless of the SARS-CoV-2 standard RNA concentration, and amplification efficiencies were determined for the RdRp and N (100.7%) genes.

In the specificity tests, the RdRp and N gene-specific primer and probe sets generated positive Cy5 and FAM signals with a SARS-CoV-2 VOC strain only. No positive signals were generated with the other canine and feline pathogens or cell cultures. Using the mRT-qPCR with the primers and probes for the canine or feline EIPC, positive HEX signals were detected for canine or feline clinical samples, canine- or feline-origin cells, and canine or feline live attenuated vaccines.
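The concordance analysis named above can be sketched as follows. This is an illustrative computation, not the authors' statistical code; the simple standard-error approximation for the CI is an assumption, and the 2x2 counts mirror the sample numbers quoted in the text.

```python
import math

# Cohen's kappa with a 95% CI from a 2x2 agreement table for two assays:
# a = both positive, b/c = discordant results, d = both negative.
def cohens_kappa(a, b, c, d, z=1.96):
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    se = math.sqrt(po * (1 - po) / n) / (1 - pe)          # approximate SE
    return kappa, (kappa - z * se, kappa + z * se)

# Example: 37 concordant positives, 520 concordant negatives, no discordance.
kappa, ci = cohens_kappa(37, 0, 0, 520)
print(round(kappa, 3))  # perfect agreement gives kappa = 1.0
```

With any discordant results, `kappa` drops below 1 and the CI widens accordingly.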
In addition, no SARS-CoV-2 or EIPC positive signals were detected in the two non-canine- or feline-origin cell cultures. Strong linear correlations were found between the Ct values and dilution factors for both the monoplex RT-qPCR and mRT-qPCR assays. The analytical sensitivity of the mRT-qPCR was determined to be below 10 copies/reaction for SARS-CoV-2.

To assess precision, three concentrations of each standard RNA were tested in triplicate in six different runs performed by two experimenters on different days. The coefficients of variation within runs (intra-assay variability) ranged from 0.25% to 0.58% for the RdRp gene and from 0.41% to 0.54% for the N gene, while the coefficients of variation between runs (inter-assay variability) ranged from 0.51% to 0.66% for the RdRp gene and from 0.22% to 0.59% for the N gene.

Subsequently, identical concentrations (10^4 copies/reaction) of RNA extracted from SARS-CoV-2 variants were tested, and the results were compared with those of the commercially available mRT-qPCR assay using RdRp and E gene-specific primer and probe sets. At identical RNA concentrations for all tested viruses, the mean Ct values of the developed mRT-qPCR assay were 33.09 (32.14–35.36) for the RdRp gene and 32.06 (31.02–34.20) for the N gene; with Kogene's mRT-qPCR, the mean Ct values were 33.31 (32.03–35.15) for the RdRp gene and 34.19 (33.16–36.24) for the E gene.

EIPC signals were generated in all tested clinical samples except four (one dog and three cats), indicating that canine or feline cellular material was absent or degraded in those four samples, which were therefore unsuitable for molecular diagnosis. These results suggest that the developed mRT-qPCR assay is valuable for the clinical diagnosis of canine and feline SARS-CoV-2 infection.

Subsequently, 557 canine and feline clinical samples (280 dogs and 277 cats), including 37 known SARS-CoV-2-positive samples (14 dogs and 23 cats), were tested using the developed mRT-qPCR and Kogene's mRT-qPCR assays.
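The linearity and amplification-efficiency figures reported above come from the standard qPCR calibration: fit Ct against log10(copies/reaction), then compute efficiency as E = 10^(-1/slope) - 1, where a slope near -3.32 and E near 100% indicate ideal doubling per cycle. The sketch below uses invented Ct values, not the authors' data, and is not the detection software's internal code.

```python
import numpy as np

# Standard curve: Ct vs log10(input copies), fitted by least squares.
log_copies = np.array([6, 5, 4, 3, 2, 1, 0], dtype=float)
ct = np.array([16.1, 19.5, 22.8, 26.2, 29.5, 32.9, 36.2])  # illustrative

slope, intercept = np.polyfit(log_copies, ct, 1)
r = np.corrcoef(log_copies, ct)[0, 1]
efficiency = 10 ** (-1 / slope) - 1  # fraction of template doubled per cycle

print(round(slope, 3), round(r * r, 4), round(efficiency * 100, 1))
```

An R² above 0.99 and efficiency between roughly 90% and 110% are the usual acceptance criteria for such curves.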
In both assays, the 37 known SARS-CoV-2-positive samples were determined to be SARS-CoV-2 RNA-positive, and the remaining 520 clinical samples were determined to be SARS-CoV-2-negative.

Given that the viral agent of COVID-19, SARS-CoV-2, is a zoonotic as well as a reverse-zoonotic agent, cooperative efforts of medical and veterinary scientists are required to control this ongoing pandemic. One of the biggest concerns in the field of veterinary medicine in response to COVID-19 is the emergence of new variants through host adaptation in infected animals, such as companion dogs and cats in close contact with their owners, which may pose a threat to animal and public health [11,12]. In this study, we developed an mRT-qPCR assay targeting the N and RdRp genes of SARS-CoV-2, with the canine or feline 16S rRNA gene as an EIPC, for the reliable diagnosis of SARS-CoV-2 infection in suspected canine and feline clinical cases. The primer and probe sets were designed based on the RdRp and N gene sequences of 3084 SARS-CoV-2 strains, including various human-origin variants and animal-origin isolates. The EIPC was evaluated analytically, indicating that it does not interact with the SARS-CoV-2 targets or affect the sensitivity or amplification efficiency of the assay for SARS-CoV-2 detection. Most currently available mRT-qPCR methods for detecting human SARS-CoV-2 infection use human housekeeping genes as the EIPC to discriminate false-negative results during clinical diagnosis [19,36]. In this study, the 16S rRNA (EIPC) was amplified by mRT-qPCR in all tested canine and feline clinical samples except four; thus, invalid samples could be filtered out, ensuring the high reliability of the developed mRT-qPCR assay.
In conclusion, the developed mRT-qPCR assay, with its high sensitivity, specificity, and reliability, may be a promising molecular diagnostic tool for detecting SARS-CoV-2 in companion dogs and cats and will be useful for etiological diagnosis, epidemiological studies, and the control of SARS-CoV-2 infection in canine and feline populations."} +{"text": "Cardiovascular disease (CVD) is a serious disease that endangers human health and is one of the main causes of death. Therefore, using a patient's electronic medical record (EMR) to predict CVD automatically has important application value in intelligent assisted diagnosis and treatment, and is a hot issue in intelligent medical research. However, existing methods based on natural language processing can only predict CVD from the whole or part of the context information of an EMR.

Given the deficiencies of the existing research on CVD prediction based on EMRs, this paper proposes a risk factor attention-based model (RFAB) that predicts CVD by utilizing CVD risk factors together with general EMR text, adopting the attention mechanism of a deep neural network to fuse the character sequence and the CVD risk factors contained in the EMR text. The experimental results show that the proposed method can significantly improve the prediction performance of CVD, with an F-score of 0.9586, outperforming existing related methods.

RFAB focuses on the key information in the EMR that leads to CVD, that is, 12 risk factors. In the risk factor identification and extraction stage, risk factors are labeled with category information and time-attribute information by a BiLSTM-CRF model. In the CVD prediction stage, the information contained in the risk factors and their labels is fused with the character-sequence information of the EMR to predict CVD. RFAB makes good use of the fine-grained information contained in EMRs and provides a reliable approach for predicting CVD.
Cardiovascular disease (CVD) is characterized by high morbidity and high mortality, and continues to plague human beings [1–3]. CVD has become an important public health problem in China, and the need for coping strategies is imminent. From a realistic point of view, the effective information we can obtain about CVD in daily life is limited. Fortunately, more and more hospitals in China have established standard EMR systems in recent years, so that a large number of patient cases are systematically recorded. With the rise of deep learning, applications based on these growing EMR collections have been continuously explored in the medical field [4, 5]. Applications based on EMRs can proactively make judgments from the information and knowledge they have mastered, give timely and accurate prompts when an individual's health status needs to be adjusted, and provide optimal solutions and implementation plans. The EMRs of patients with CVD contain accurate pathogenesis information. However, when we examined the specific content of EMRs, we found that they contain much information that is not very relevant to CVD, mainly involving the basic condition of the patient's body or declarative dialogue between the doctor and the patient. Moreover, when the information about possible CVD accounts for only a small proportion of a medical record text, it becomes difficult to discover and utilize this information effectively.

The contributions of this work are as follows. First, we no longer simply utilize the entire EMR as in previous related works, but instead use the 12 risk factors proposed by Su et al. Second, the proposed RFAB contains two phases, first identifying risk factors and then predicting CVD based on the original EMR and the risk factors, providing a meaningful and referential method for related predictive tasks. Third, our method does not simply predict CVD through risk factors alone.
Through BiLSTM-CRF identification, not only are the risk factors themselves extracted, but also their corresponding tags carrying category information and time-attribute information, which allows more comprehensive information to be considered in the prediction task. Fourth, we use the character information of the original EMR text as the input of the encoder in RFAB, and the risk factors and their labels as the input of the decoder; the two types of information are fused by the attention mechanism. This makes the prediction task focus on risk factors while also taking the context information of the original EMR into account.

For a deep learning-based neural network model, such complicated sequential information not only reduces its attention to the information that may indicate CVD, but is also quite likely to reverse its prediction results.

The purpose of this paper is to focus on the risk factors in EMRs and to predict whether an individual suffers from CVD using machine learning methods. The experiment is divided into three stages: preprocessing the dataset, identification and extraction of risk factors, and prediction of CVD. In the data preprocessing stage, because some data in a few EMR texts were missing or duplicated, we performed data cleaning and interpolation. In the risk factor identification stage, we use named entity recognition technology that has been widely applied in industry and scientific research, with the aim of accurately and effectively identifying and extracting the risk factors and their categories and time attributes from the EMRs. When we compared the recognition performance of CRF and BiLSTM-CRF, both performed well, but the latter performed better in our experiments. We attribute this to the following two aspects. On the one hand, there are many repetitions of the 12 risk factors in the EMRs.
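The encoder-decoder fusion described above can be illustrated with plain dot-product attention, where risk-factor representations attend over the character representations of the raw EMR text. This is an assumed sketch, not the authors' implementation; the vector dimensions and the fusion-by-concatenation choice are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(char_states, factor_states):
    """char_states: (m, d) encoder outputs for m EMR characters.
    factor_states: (n, d) decoder states for n risk factors (+ labels).
    Returns (n, 2d): each factor state concatenated with its context vector."""
    scores = factor_states @ char_states.T  # (n, m) alignment scores
    weights = softmax(scores, axis=-1)      # attention over EMR characters
    context = weights @ char_states         # (n, d) context vectors
    return np.concatenate([factor_states, context], axis=-1)

rng = np.random.default_rng(0)
fused = attention_fuse(rng.standard_normal((50, 64)),  # 50 EMR characters
                       rng.standard_normal((5, 64)))   # 5 risk factors
print(fused.shape)  # one fused vector per risk factor
```

The fused vectors would then feed the final classification layer, so the prediction is grounded in the risk factors but still conditioned on the surrounding EMR context.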
On the other hand, BiLSTM is good at capturing the contextual information of text sequences, which is beneficial for identifying the boundaries of entities. In the CVD prediction stage, we used the neural network model (RFAB) proposed in this paper. The main flow described above is presented in the accompanying figure.

Each risk factor also carries a time attribute, such as continuing (Continue) or occurring during the patient's medical treatment (During). In the input layer, we determine the embedding of each input character by looking up a dictionary. The recognition model identifies risk factors by predicting the label corresponding to each character. A sequence of length n is input to the model, and the embedding layer maps characters one by one to vectors. As in the LSTM memory cell implemented by Lample et al., the representation of each character t combines its left and right contextual information from the forward and backward LSTM passes.

The hidden features are then passed through the tanh activation function to calculate confidence scores for the labels that each character may correspond to. The network outputs a score matrix P, whose entry P_{i,j} is the score of the j-th tag for the i-th character in the sequence. We introduce a transition probability matrix T so that previous annotation information can be utilized when tagging the current position. The score of a tag path y is the sum of the emission and transition scores, s(y) = Σ_i (P_{i, y_i} + T_{y_{i−1}, y_i}), and the probability of y is obtained by normalizing this score over all possible tag paths with a softmax. Finally, the feature information is decoded at the CRF layer, and the best labels for the characters are predicted.

As shown in the model architecture figure, the Input Layer of RFAB mainly tackles feature acquisition for the input EMR text and the input risk factors. A Chinese raw text T contains m characters and also contains n risk factor words. The Embedding Layer aims to represent each item from the Input Layer in a continuous space; it accepts the features of the two parts of content, each processed by its own BiLSTM.
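The linear-chain CRF scoring just described can be made concrete with a toy example. This sketch is assumed, not the paper's code: it brute-forces the partition function over all tag paths (feasible only for tiny tag sets), whereas real implementations use the forward algorithm.

```python
import numpy as np
from itertools import product

def path_score(P, T, tags):
    """Emission scores P[i, y_i] plus transition scores T[y_{i-1}, y_i]."""
    s = P[0, tags[0]]
    for i in range(1, len(tags)):
        s += T[tags[i - 1], tags[i]] + P[i, tags[i]]
    return s

def path_probability(P, T, tags):
    n, k = P.shape
    scores = [path_score(P, T, y) for y in product(range(k), repeat=n)]
    z = np.logaddexp.reduce(scores)  # log partition over all tag paths
    return float(np.exp(path_score(P, T, tags) - z))

rng = np.random.default_rng(1)
P, T = rng.standard_normal((4, 3)), rng.standard_normal((3, 3))  # 4 chars, 3 tags
probs = [path_probability(P, T, y) for y in product(range(3), repeat=4)]
print(round(sum(probs), 6))  # the softmax over paths sums to 1
```

Decoding then amounts to picking the tag path with the maximum score, which Viterbi computes without enumerating all paths.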
To avoid the overfitting problem, we apply the dropout mechanism at the end of the embedding layer.

The corpus involved in the experiment mainly consists of two parts: about 800,000 unlabeled EMRs and 1186 systematically labeled EMRs. The unlabeled corpus comes from the internal medicine department of a hospital in Gansu Province and is mainly used for training and generating the character-level embeddings required in the experiments. For the EMRs used for CVD prediction, we need to label whether CVD is confirmed or not. The basis for labeling comes from the following three parts: first, mainly the diagnosis results of clinicians in the EMR; second, the specific definition of CVD by the World Health Organization; and third, the exposition of CVD in the second (Symptoms) and third (Diseases) chapters of “Clinical Practical Cardiology”, an authoritative textbook for training clinicians in China.

CVD is predicted based on the 12 risk factors contained in the EMRs and their labels with category and time attributes, rather than directly on the sequential information of the EMR text. The time attributes are: continuing (Continue); during the patient's medical treatment (During); after the patient's medical treatment (After); and before the patient's medical treatment (Before). Since the Age and Gender risk factors do not have a time attribute, we assigned them the time attribute None. Statistics on the numbers of risk factors are shown in the corresponding table.

We used Accuracy (A), Precision (P), Recall (R), and F-score (F) as the metrics for evaluating performance. The experiment consists of two stages: the risk factor identification stage and the CVD prediction stage. In the first stage, we utilized all the labeled EMRs, including 830 in the training set, 119 in the development set, and 237 in the test set.
In the second stage, the risk factors in the EMRs used to train the prediction model are extracted by the recognition model trained in the first stage. Among the EMRs utilized to train the CVD prediction models, there are 461 in the training set, 66 in the development set, and 132 in the test set.

As comparisons, we use different models, or ablated models, for the two stages of the experiment. Of the models described next, the first two are used in the risk factor identification stage, and the rest are used in the CVD prediction stage.

CRF: a widely used traditional machine learning method, applied by Mao et al. to a related recognition task.

BiLSTM-CRF: a good example of combining deep learning with traditional machine learning methods; Li et al. applied this model in research on the automatic recognition of named entities in extracted medical text.

SVM: as one of the classic data mining techniques, the support vector machine (SVM) was used by Menaria et al. for a related prediction task.

ConvNets: this model has had great influence in the field of text classification; Xiang et al. proposed a related approach.

LSTM / RFAB (no att): as a special kind of recurrent neural network, the LSTM is considered by Xin et al. to be well suited to sequential data; RFAB (no att) is our model with the attention mechanism ablated.

RFAB: the model proposed in this paper.

In the experiments, we tune the hyperparameters by random search and share all the experimentally selected hyperparameters as much as possible, as listed in the corresponding table. For the risk factor identification stage and the CVD prediction stage, we carried out specific comparative experiments. In the second stage, we conducted extensive experimental exploration in terms of input, embedding, and model ablation.

We report Accuracy, Precision, Recall, and F-score, and the performance of each model when the dataset is the original EMRs, the risk factors with labels, or the risk factors without labels.
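The four evaluation metrics named above reduce to simple counts over the binary CVD/non-CVD predictions. The sketch below is illustrative, with made-up labels, and is not tied to the paper's evaluation script.

```python
# Accuracy, Precision, Recall, and F-score for binary predictions,
# where label 1 = CVD confirmed and 0 = not confirmed.
def prf_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return accuracy, precision, recall, f_score

a, p, r, f = prf_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(a, round(p, 4), round(r, 4), round(f, 4))
```

The F-score here is the harmonic mean of precision and recall, the quantity reported as 0.9586 for RFAB.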
The detailed comparative results for the two stages are presented in the corresponding figures and tables. Disease prediction research based on machine learning methods plays a pivotal role in supporting medical decisions for the correct diagnosis and treatment of diseases. Through research on the related technologies, doctors or individuals can quickly and accurately obtain key information and possible predictions after a medical visit, which is of great significance for reducing the pressure on experts and helping individuals prevent disease.

Aiming at predicting CVD from electronic medical records, this paper proposes an effective and referential approach: identify and extract risk factors, and then rely on this key information to predict CVD. Meanwhile, we propose a corresponding CVD prediction model, the risk factor attention-based model (RFAB). With the help of the attention mechanism, the model effectively integrates the information between the risk factors and the context of the EMR text, and also considers the category and time attributes of the risk factors by means of labels. This enables the model to avoid redundant and confusing information while focusing on effective key information, and also to take the original information of the EMR into account.

In the future, we will focus more on research into CVD itself. Although the factors recorded in EMRs that lead to CVD in individuals can be determined, it is undeniable that factors such as environment are diverse. Therefore, we will explore more comprehensive information sources and then rely on machine learning methods to predict CVD efficiently and accurately."} +{"text": "Various inhibitors coexisting in the hydrolysate derived from lignocellulosic biomass inhibit the performance of Saccharomyces cerevisiae and further restrict the development of industrial bioethanol production. Transcription factors are regarded as targets for constructing robust S. cerevisiae by genetic engineering. Tolerance-related transcription factors have been successively reported, but their regulatory mechanisms are not clear.
In this study, we revealed the regulation mechanisms of Haa1p and Tye7p, which made outstanding contributions to the improvement of the fermentation performance and multiple-inhibitor tolerance of S. cerevisiae. Comparative transcriptomic analyses were applied to reveal the regulatory mechanisms of Haa1p and Tye7p under mixed-sugar fermentation conditions with mixed inhibitors [acetic acid and furfural (AFur)] or without inhibitor (C), using the original strain s6 (S), the HAA1-overexpressing strain s6H3 (H), and the TYE7-overexpressing strain s6T3 (T). The expression of the pathways related to carbohydrate, amino acid, transcription, translation, and cofactor and vitamin metabolism was enhanced in the strains s6H3 and s6T3. Compared with the C_H vs. C_S group, the unique DEGs in the AFur_H vs. AFur_S group were further involved in oxidative phosphorylation, purine metabolism, vitamin B6 metabolism, and the spliceosome under the regulation of Haa1p. A similar pattern appeared under the regulation of Tye7p, where the unique DEGs in the AFur_T vs. AFur_S group were also involved in riboflavin metabolism and the spliceosome. The most significant difference between the regulations of Haa1p and Tye7p lay in the intracellular energy supply: Haa1p preferred to enhance oxidative phosphorylation, while Tye7p tended to upregulate glycolysis/gluconeogenesis. Global gene expression could be rewired with the overexpression of HAA1 or TYE7. The positive perturbations of energy and amino acid metabolism were beneficial to the improvement of the fermentation performance of the strain. Furthermore, strengthening of key cofactor metabolism and of transcriptional and translational regulation was helpful in improving strain tolerance. This work provides a novel and comprehensive understanding of the regulation mechanisms of Haa1p and Tye7p in S.
cerevisiae.

The online version contains supplementary material available at 10.1186/s12934-022-01822-4.

Bioethanol production using lignocellulosic biomass as feedstock could ease the pressure of fossil energy consumption and further advance carbon-neutrality plans. Saccharomyces cerevisiae has superior ethanol fermentation performance, but its xylose metabolism and stress tolerance are two key bottlenecks restricting the development of lignocellulosic bioethanol. Engineering S. cerevisiae with xylose metabolic ability improves the utilization of lignocellulosic biomass. Improving the stress tolerance of S. cerevisiae can not only increase the conversion efficiency of the fermentable sugars to ethanol but also relax the stringent requirements of the biomass pretreatment process.

Various inhibitors are inevitably released into the hydrolysate along with the dissolution of the fermentable sugars during the pretreatment process. As a result, ethanol production is still not ideal when lignocellulosic biomass is used as feedstock, owing to the heterogeneous composition and the multifarious inhibitors of pretreated slurries.

Transcription factors (TFs) are involved in the regulation of genes and are feasible targets for constructing robust strains using genetic engineering. The ethanol productivity of S. cerevisiae overexpressing the TF SFP1 or ACE2 could be increased by 300–400% when fermenting in a synthetic medium with acetic acid and furfural. Besides these, tolerance-related TFs such as HAA1, MSN2/4, TYE7, and YAP1 have been reported. Understanding the regulatory mechanism of a TF is very important for improving the robustness of S. cerevisiae. At present, the revealed regulatory mechanisms of the key TFs are limited to conditions with a single sugar and a single inhibitor, while the gene expression profiles of S. cerevisiae differ among various fermentable sugars under conditions with or without inhibitors. S.
cerevisiae is more sensitive to inhibitors in the xylose fermentation stage than in the glucose fermentation stage when mixed sugars are fermented. Consequently, understanding the regulation of TFs under mixed-sugar, mixed-inhibitor conditions is particularly important.

In our previous study, two tolerant strains, s6H3 and s6T3, were constructed by respectively overexpressing HAA1 and TYE7 in the parental strain s6 using the CRISPR/Cas9 gene engineering method [28]. Haa1p regulates membrane transporter genes (TPO2, TPO3, YRO2, etc.) and is involved in cellular copper/iron ion homeostasis. Tye7p, a transcriptional activator, contributes to the activation of glycolytic genes, such as ENO1 and ENO2 (enolase), TDH (glyceraldehyde-3-phosphate dehydrogenase), PGK1 (phosphoglycerate kinase), PGM1 (phosphoglycerate mutase), PYK1 (pyruvate kinase), and TPI1 (triosephosphate isomerase). However, the regulatory mechanisms of Haa1p and Tye7p under mixed-sugar fermentation with mixed inhibitors have remained unclear.

In the present study, the regulatory mechanisms of Haa1p and Tye7p under conditions with or without mixed acetic acid and furfural were studied by comparative transcriptomics analysis using the strains s6, s6H3, and s6T3 when fermenting mixed glucose and xylose. Compared with the parent strain s6, the strains s6H3 and s6T3 had much better fermentation performance in 10% YPDX medium with or without mixed acetic acid (2.4 g/L) and furfural (1.9 g/L). To reveal the roles of Haa1p and Tye7p in enhancing the fermentation performance and inhibitor tolerance of the strains, we designed comparative transcriptomic experiments focusing on mixed glucose and xylose fermentation with or without mixed acetic acid and furfural. The fermentation process with mixed sugars as carbon sources can be divided into a glucose fermentation stage and a xylose fermentation stage. The effect of glucose repression lasted after glucose depletion.
The transcriptional response of the glucose stage caused disturbance to the subsequent xylose stage, resulting in stage-specific transcriptomic profiles. Considering this, the samples of strains s6 (S), s6H3 (H), and s6T3 (T) were labeled C_S, C_H, and C_T under the condition without inhibitor (C), and AFur_S, AFur_H, and AFur_T under the condition with mixed acetic acid and furfural (AFur), respectively. The transcriptome data were aligned with S. cerevisiae S288C after quality control of the raw data, and the RT-qPCR results were consistent with those of the transcriptome analysis, suggesting that the transcriptomic results were reliable.

The fragments per kilobase of exon per million reads mapped (FPKM) values of HAA1 and TYE7 in strains s6, s6H3, and s6T3 under the conditions with/without mixed acetic acid and furfural are shown in Table . The expressions of HAA1 and TYE7 were significantly increased under the control of the UBI4 promoter (PUBI4), which shows little fluctuation under mixed acetic acid and furfural stress; these results indicated that the expressions of HAA1 and TYE7 were upregulated by PUBI4. To reveal the regulatory mechanisms of Haa1p and Tye7p, the differences in the genome expression profiles of the groups C_H vs. C_S, AFur_H vs. AFur_S, C_T vs. C_S, and AFur_T vs. AFur_S were analyzed based on differentially expressed genes (DEGs) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment. The DEGs were filtered with a threshold of false discovery rate (FDR) < 0.05 and a fold change (sample B/sample A) ≥ 1.5. The KEGG enrichment analysis was performed against the KEGG database, and the enriched pathways were filtered with a threshold of enrichment ratio (E) ≥ 0.1 and/or P < 0.05.

The numbers of DEGs were 234 and 629 in the C_H vs. C_S and AFur_H vs. AFur_S groups, respectively, of which 154 (of 234) and 517 (of 629) were upregulated.
This result suggested that the transcription process was significantly affected by the overexpression of HAA1, especially in the presence of inhibitors. The distribution of the DEGs in these two groups is shown in Fig. .

These DEGs were used for KEGG enrichment analysis. In the AFur_H vs. AFur_S group, the number of enriched pathways (E ≥ 0.1) was much larger than that in the C_H vs. C_S group (Fig. ). After screening the pathways with P < 0.05 from those that only met E ≥ 0.1, two radar maps were drawn to reveal the regulatory mechanism of Haa1p in improving the sugar consumption performance and inhibitor tolerance. There were 17 specific pathways when the pathways from these two groups were combined, and only starch and sucrose metabolism was enriched in both. In the C_H vs. C_S group, the enriched pathways were mainly involved in carbohydrate metabolism, amino acid metabolism, lipid metabolism (fatty acid biosynthesis), metabolism of cofactors and vitamins (nicotinate and nicotinamide metabolism), and cell growth and death (necroptosis), while carbohydrate metabolism, amino acid metabolism (tryptophan metabolism), energy metabolism (oxidative phosphorylation), metabolism of cofactors and vitamins (vitamin B6 metabolism), nucleotide metabolism (purine metabolism), and transcription (spliceosome) were significantly enriched in the AFur_H vs. AFur_S group. To explore new regulatory perspectives of Haa1p under the conditions with or without inhibitors, the KEGG enrichment of the DEGs in the different regions of the Venn plot was examined in detail (Fig. ).

Compared with the overexpression of HAA1, the overexpression of TYE7 had a more temperate influence on transcriptional regulation in the presence of inhibitors. The numbers of DEGs were 225 and 258 in the C_T vs. C_S and AFur_T vs. AFur_S groups, respectively, of which 200 (of 258) were upregulated in the latter. The proportion of the shared DEGs (86) was 38.22% and 33.33% of the total DEGs in the C_T vs. C_S and AFur_T vs. AFur_S groups, respectively. The pathways with P < 0.05 were further screened to draw the radar map.
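The screening criteria quoted above can be expressed as a small filter; a minimal sketch (my own illustration with made-up example values, not the authors' pipeline):

```python
# Sketch of the screening criteria quoted above (illustrative only): a gene
# counts as a DEG when FDR < 0.05 and the fold change (sample B over sample A)
# is >= 1.5; a KEGG pathway is retained when E >= 0.1 and/or P < 0.05.
def is_deg(fpkm_a, fpkm_b, fdr, min_fold=1.5, max_fdr=0.05):
    """Upregulation filter; for downregulation the reciprocal ratio applies."""
    if fdr >= max_fdr:
        return False
    if fpkm_a == 0:                 # guard against division by zero
        return fpkm_b > 0
    return fpkm_b / fpkm_a >= min_fold

def keep_pathway(enrichment_ratio, p_value):
    """KEGG pathway screen: E >= 0.1 and/or P < 0.05."""
    return enrichment_ratio >= 0.1 or p_value < 0.05

# hypothetical FPKM values, not taken from the paper
print(is_deg(10.0, 18.0, fdr=0.01))   # True: 1.8-fold up at FDR 0.01
print(is_deg(10.0, 12.0, fdr=0.01))   # False: only 1.2-fold
print(keep_pathway(0.12, 0.20))       # True: passes on E alone
```

Applying such a filter gene-by-gene and counting the survivors is what yields per-group DEG totals of the kind reported here.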
KEGG enrichment analysis showed that the number of enriched pathways (E ≥ 0.1) in the AFur_T vs. AFur_S group was larger than that in the C_T vs. C_S group (Fig. ). The shared DEGs were mainly involved in biotin metabolism, suggesting that this pathway occupies key loci in Tye7p regulation (Fig. ; Table S4). Combining the KEGG enrichment results for the total DEGs and for the regional DEGs demonstrated the key role of central carbon metabolism and amino acid metabolism in improving strain performance (Figs. ; Table S4).

The DEGs regulated by Haa1p and Tye7p included some potential TFs: 13, 20, 8, and 5 potential TFs were picked out of the DEGs in the C_H vs. C_S, C_T vs. C_S, AFur_H vs. AFur_S, and AFur_T vs. AFur_S groups, respectively (Table S3). When HAA1 or TYE7 was overexpressed, positive perturbations were activated at the cellular level, and the regulatory mechanisms of Haa1p and Tye7p were visually summarized based on those key DEGs and KEGG pathways. Notably, the combined effect of co-overexpression was "1 + 1 < 2" rather than "1 + 1 > 2".

Strain s6 was constructed in the industrial flocculating yeast strain KF-7 and carries a xylose-utilization plasmid (with XYL2 (xylitol dehydrogenase) from Scheffersomyces stipitis as well as XKS1 (xylulokinase) from S. cerevisiae) and pIWBGL1 (containing BGL1 (β-glucosidase) from Aspergillus aculeatus). The strains overexpressing HAA1 alone (UBI4P-HAA1-HAA1T), overexpressing TYE7 alone (UBI4P-TYE7-TYE7T), and co-overexpressing HAA1 and TYE7 were constructed, respectively, in the parental strain s6 in our previous work.

The batch fermentation method was described previously. The strains s6, s6H3, and s6T3 were activated at 30 °C on 2% YPD-agar plates. After 24 h, a loopful of cells was transferred into a 500-mL conical flask with 100 mL of 5% YPD medium and cultivated aerobically for 16 h in a shaker. Fresh cells (0.5 g dry cell weight (DCW)) were collected by centrifugation (8000×g, 2 min) and transferred into 100 mL of 10% YPDX medium in a 300-mL conical flask. The flasks were incubated in a thermostatic water bath (35 °C), and the broth was stirred (200 rpm) using a magnetic stirring system. When necessary, acetic acid (2.4 g/L) and furfural (1.9 g/L) were added to the sterilized medium.

Cells used for RNA extraction were collected at 7 h from the control (C) and the mixed acetic acid (2.4 g/L) and furfural (1.9 g/L) (AFur) groups. The methods for extracting and measuring total RNA and for RNA-seq were performed as previously described, with three biological replicates per group; sequencing was carried out by Shanghai Majorbio Bio-pharm Technology Co., Ltd. (www.majorbio.com). Six genes with varied transcript abundance (ADY2, ATO2, BTN2, ENO1, ENO2, and HSP30) were chosen to quantify the relative expression levels by RT-qPCR. Documented associations in the YEASTRACT database were used to search for genes that have been experimentally shown to be regulated by the TFs; the analysis was conducted as previously described.

Quantified gene expression used FPKM (fragments per kilobase of exon per million reads mapped) as the unit. Genes filtered with a threshold of false discovery rate (FDR) < 0.05 and a fold change (sample B/sample A) ≥ 1.5 were considered differentially expressed genes (DEGs), and the KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway terms were screened as described above. The concentrations of glucose, xylose, and ethanol were determined as previously described.

Additional file 1: Fig. S1. The glucose (A and D), xylose (B and E), and ethanol (C and F) concentration curves of strains s6, s6H3, and s6T3 under the condition without inhibitor and the condition with mixed acetic acid and furfural. Black squares represent strain s6; red circles represent strain s6H3; green upward triangles represent strain s6T3 (Ref. 5). Fig. S2.
Evaluation of inhibitor tolerance of strains by batch fermentation using 10% YPDX medium (A), 10% YPDX medium containing mixed acetic acid and furfural (2.4 + 1.9 g/L) (B), and pretreated corn stover slurry (Ref. 5) (C). Black squares represent strain s6; red circles represent strain s6H3; blue upward triangles represent strain s6T3; green stars represent strain s6H3T10. Fig. S3. Cluster graph of the expression patterns of the DEGs in each group. C_S_1, C_S_2, and C_S_3 represent the three biological replicates of strain s6 in the control (C) group, and AFur_S_1, AFur_S_2, and AFur_S_3 the three replicates in the mixed acetic acid and furfural (AFur) group; C_H_1, C_H_2, and C_H_3 represent the three biological replicates of strain s6H3 in the control (C) group, and AFur_H_1, AFur_H_2, and AFur_H_3 the three replicates in the AFur group; C_T_1, C_T_2, and C_T_3 represent the three biological replicates of strain s6T3 in the control (C) group, and AFur_T_1, AFur_T_2, and AFur_T_3 the three replicates in the AFur group. Fig. S4. Validation of the transcriptome data by RT-qPCR. The fold change is the ratio of the expression level of a specific gene in the experimental group to that in the control group; the ACT1 expression level was used as the reference in RT-qPCR. Fig. S5. Venn diagrams of the enriched pathways upon overexpression of Haa1p and Tye7p, respectively. Black font indicates the enriched pathways; red font indicates the classification of each pathway. Table S1. Results of the transcriptome data alignment with S. cerevisiae S288C. Table S2. The differentially expressed TFs in the C_H vs. C_S (234), C_T vs. C_S (225), AFur_H vs. AFur_S (629), and AFur_T vs. AFur_S (258) groups. Table S4. The differentially expressed genes involved in the key KEGG pathways in the C_H vs. C_S, AFur_H vs. AFur_S, C_T vs. C_S, and AFur_T vs. AFur_S groups. Table S5. The primers used for RT-qPCR.

Additional file 2: Table S3. The genes regulated by the potential TFs in the C_H vs. C_S (13), C_T vs. C_S (20), AFur_H vs. AFur_S (8), and AFur_T vs. AFur_S (5) groups. The light-yellow-highlighted values are calculated as DEGs/genes (the number of DEGs regulated by a TF relative to the number of genes regulated by that TF) or DEGs/all DEGs. The green-highlighted genes are the DEGs regulated by the TFs.

We present experiments where extreme ultraviolet femtosecond light pulses are used to photoexcite large molecular ions at high internal energy. This is done by combining an electrospray ionization source and a mass spectrometer with a pulsed light source based on high harmonic generation, which allows one to study the interaction between high-energy photons and mass-selected ions under conditions usually accessible only at large-scale facilities. We show that even without an ion trapping device, systems as large as a protein can be studied. We observe light-induced dissociative ionization and proton migration in model systems such as reserpine, insulin and cytochrome c. These results offer new perspectives for performing time-resolved experiments with ultrashort pulses at the heart of the emerging field of attosecond chemistry.

To study the dynamics of gas-phase systems, which can occur on timescales ranging from attoseconds to minutes, a variety of experimental techniques have been developed using light sources covering a wide spectral range from the far-infrared up to hard X-rays.
With the recent advances in light sources, new perspectives and challenges have been identified, such as accessing the attosecond dynamics of electrons, observing structural changes in complex molecules, or understanding electron correlation effects [3]. The investigation of attosecond-timescale dynamics has become accessible for small quantum systems and small molecules by using table-top light sources based on high harmonic generation (HHG) [2]. HHG is a highly nonlinear process that converts the fundamental frequency of an intense femtosecond laser into high-order harmonics, which can extend from the extreme ultraviolet (XUV) up to the soft X-ray regime [4], generating light pulses of attosecond duration. These advanced technologies are so far limited to the study of molecules that can be stored in a gas bottle or evaporated using ovens [7], and are not yet available for much larger and fragile species, such as proteins.
When considering large molecular species, electrospray ionization (ESI) is the state-of-the-art technique to transfer intact, fragile and large ions from solution to the gas phase. With this technique, a huge variety of complex molecular ions of different charge states can be injected into vacuum, from small molecules up to macromolecules. A precursor ion of given mass-over-charge ratio (m/z) can then be selected and is usually stored in an ion trapping or storage device to increase the density of the ion cloud, allowing further investigation. While ion-trapping devices bring advantages for mass-spectrometric analysis, the existing instruments do not allow, for example, detection of the emitted electrons, thus limiting the exploitation of electron-based spectroscopic techniques and time-resolved crossed-beam experiments.

Nevertheless, tremendous progress has been made in the spectroscopy of large ions. Experiments have been developed over a broad range of electromagnetic radiation, from GHz up to the soft X-ray domain, to elucidate molecular structures and electronic properties by performing rotational [8], vibrational [9], electronic [12] or inner-shell [13] action spectroscopy. The use of light in the IR and UV–Vis domains as an activation method to generate efficient fragmentation has been thoroughly investigated over the past 20 years [15], enabling the characterization of large molecular structures. These approaches were developed as complementary methods to overcome the limitations of collision-induced dissociation (CID), which is based on collisions with a neutral gas and is therefore driven by energetic barriers and statistical laws [19]. Recently, activation with ultrashort femtosecond laser pulses has proved its potential by unveiling specific fragments following ionization of protonated biomolecules [23].
Extending these approaches to higher photon energies, large-scale facilities have also been used to provide new activation methods. At synchrotron light sources, activation of protonated and deprotonated proteins was performed using photon energies in the vacuum ultraviolet (VUV) range of 5–20 eV; by making use of efficient valence ionization, VUV excitation provides new tools to create reactive radicals and new fragmentation pathways [25]. Synchrotrons can also deliver X-ray radiation that provides local excitation with site selectivity, inducing new photoinduced reactions, as demonstrated in peptides [27]. With the emergence of new large-scale facilities such as free-electron lasers (FEL), high-energy photons at high flux and with short pulse duration are now available, offering new opportunities in terms of sensitivity and potential applications in terms of non-linear interaction [28]. In pioneering experiments, Schlathölter and co-workers demonstrated the interest of protein activation in the gas phase using FEL pulses by showing that ubiquitin responds as an ensemble of small peptides [29]. Although offering exciting new possibilities, the use of large-scale facilities brings additional constraints, incompatible with daily use for analytical purposes or laboratory studies [30]. New approaches have therefore been proposed using low-cost, high-photon-energy sources: Giuliani et al. [31] demonstrated the use of a discharge lamp that delivers a continuous flux of photons in the range of 15–30 eV, coupled to tandem mass spectrometry for molecular activation [32]. Even though the cited experiments have provided an enormous amount of information on large molecular ions, no experiment has been able to measure in real time the ultrafast processes involved in high-energy photon excitation. A step toward measuring ultrafast processes in large ions without trapping devices was made in recent time-resolved experiments using "on-the-fly" configurations [33], which allowed the investigation of ultrafast processes using UV–visible laser pulses [34]. The development of experiments combining short-XUV-pulse technology and ESI devices is therefore necessary to push forward studies of the dynamics of large molecules, but it remains challenging.

Here we used a combination of a high-harmonic-generation XUV source with an electrospray ionization source and mass spectrometry (MS) to study the interaction between an ultrashort XUV pulse and complex molecular ions. The HHG source uses an intense femtosecond laser to generate photons with energies in the range of 10–50 eV, confined in ultrashort 20 fs pulses at a 5 kHz repetition rate. The mass spectrometer does not include any trapping device.
It rather uses an "on-the-fly" configuration in which each ion interacts only once with the light pulse, as in crossed-beam experiments. We investigated photoinduced reactions in reserpine and in two proteins (cytochrome c and insulin). We observed that XUV excitation of complex protonated molecules leads to ionization followed by proton migration and dissociation. These experiments demonstrate the feasibility of ultrafast experiments on complex molecular ions.

Our instrument combines an electrospray source, a mass selector, a mass spectrometer and a pulsed XUV femtosecond source; a schematic of the instrument is presented in Fig. . The light source is designed to fit the performance required for experiments where low-density targets interact with light delivered by a low-photon-flux source. The source is based on a commercial femtosecond laser system from Coherent delivering IR (800 nm), 2 mJ, 25 fs pulses at a 5 kHz repetition rate. The IR beam is used to generate XUV radiation by high harmonic generation (HHG) with a flux of 10¹² photons per second. HHG is a highly nonlinear process that converts the IR fundamental light into high-order harmonics. To do so, the IR beam is focused with a 1 m focal-length lens onto a 4 mm long rare-gas cell, leading to a focal spot estimated at 130 µm. The IR light is divergent and is mostly blocked by an iris located 80 cm after the cell, whereas the generated XUV light is weakly divergent and can be transmitted through the hole of the iris. A metallic filter located 2 cm after the iris eliminates the remaining IR light as well as the low-order harmonics. Depending on the chosen metal filter, the transmitted XUV spectrum can be tuned from 10 to 50 eV; in the case of a 200 nm thick aluminum filter, harmonics below 17 eV are filtered out. After the filter, the XUV light is focused by a 1 m focal-length toroidal mirror onto the interaction region of the mass spectrometer described below; the focal spot of the XUV beam in the interaction region is estimated at about 100 μm. After the mass spectrometer, a standard XUV spectrometer allows the generated XUV spectrum to be measured.

The long-focal configuration is necessary to ensure the high photon flux required for this instrument, but it also introduces intrinsic instabilities compared to tighter focusing configurations. Overall, the high stability required for long acquisition times calls for a compromise between focal length, flux and stability.

The mass spectrometer consists of an electrospray ionization source, ion optics, two linear quadrupole mass filters, a collision cell and a detection device. Molecules in a liquid sample are injected at a flow rate of 300 μL/h through a micrometric capillary tube held at a voltage of about 3 kV. The charged droplets generated at the exit of the capillary tip are gradually transferred under a nitrogen gas flow from atmosphere to the vacuum chambers of the spectrometer. The droplets are evaporated by the action of the gas, the voltage and electrostatic repulsion, eventually leading to isolated charged molecular ions; the solvent is progressively eliminated and pumped away in a differentially pumped chamber. The free jet of molecular ions then evolves in a vacuum chamber at an overall pressure of 1 × 10⁻⁵ mbar. Ions are manipulated using quadrupole mass filters, and experiments are performed in MS–MS operating mode: the precursor ion of interest is mass-and-charge (m/z) selected (MS1) with a first quadrupole (Q1), and after interaction the product ions are m/z analyzed (MS2) by a second quadrupole (Q2) before their detection.
The detection device includes a conversion dynode and a phosphor screen to convert the ionic signal into photons, which are then detected by a photomultiplier. The instrument is equipped with a traveling-wave collision cell [35] between Q1 and Q2 for CID experiments. The collision cell is filled with argon gas at a pressure of 3.5 × 10⁻³ mbar to ensure high transmission through the cell, and the collision energy voltage can be tuned from 0 to 120 V. The instrument can detect ions in the mass range 2 to 2048 m/z with a mass resolution of 4000 and achieve acquisition rates up to 20,000 Da/s. The acquisition of a mass spectrum is performed using the MassLynx™ software from Waters; a mass spectrum results from the integration of several scans over a specified mass range. In this work, the mass range is scanned by the quadrupole Q2 in typically 0.5–5 s (scan time) and a mass spectrum is typically acquired in 5 min (acquisition time); the scan time and acquisition time are set to achieve sufficient statistics and resolution for the chosen mass range.

The XUV source is combined with a triple-quadrupole mass spectrometer, a Xevo TQ-S micro provided by Waters. While this instrument was designed for analytical purposes based on CID experiments, we have modified it to allow for laser interaction [33]. Window ports have been drilled to allow the entrance and exit of the laser beam. For XUV/HHG light interaction, the mass spectrometer (10⁻⁵ mbar) is connected to the toroidal-mirror chamber (10⁻⁷ mbar) of the photon beamline by a pipe of 25 mm diameter and about 30 cm length that holds the pressure gradient; this is required to ensure propagation of the XUV light without absorption along the path. Because of the high sensitivity of the photomultiplier, it is crucial to protect the instrument from scattered light along the light path.
This was done by adding slits at the entrance and exit ports of the instrument to minimize the noise induced by scattered light; the detection device was shielded and filters were installed to further reduce the detection of scattered light within the instrument. The interaction region is located at the exit of Q1, before the collision cell. The XUV light is focused between the rods of a short quadrupole (post-filter) acting as an rf-only ion guide between Q1 and the collision cell. The interaction region, delimited by the intersection of the XUV focus (diameter about 100 μm) and the ion beam (diameter about 1 mm), is located about 1 cm before the entrance hole (2 mm in diameter) of the collision cell. The effective pressure in the interaction region results from the gradient between the collision cell (10⁻³ mbar) and the surrounding chamber (10⁻⁵ mbar) and depends on the effusion of the argon gas through the entrance hole of the collision cell. As typical kinetic energies of the ions in the spectrometer are on the order of an eV to a few tens of eV, the ions travel through the instrument at velocities of several hundred meters per second, and ions with nearby m/z values and kinetic energies have practically the same velocity. The ions therefore travel several centimeters between two XUV pulses separated in time by 200 μs. This "on-the-fly" type of experiment ensures that each ion interacts at most once with a light pulse, disentangling this interaction from "multiple-pulse" interactions. The "on-the-fly" configuration also offers the possibility to perform time-resolved experiments [34] or to measure, in principle, quantities difficult to access using traps, such as electron emission or kinetic energy release.
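The transit argument above can be made concrete; a back-of-envelope sketch (my own illustrative numbers, not the authors' code):

```python
# Rough transit estimate: an ion of mass m (Da) with kinetic energy E_k (eV)
# moves at v = sqrt(2*E_k/m). Between two XUV pulses (200 us apart at 5 kHz)
# it covers a distance vastly larger than the ~100 um focal spot, so each ion
# can meet at most one pulse.
import math

AMU_KG = 1.66054e-27     # atomic mass unit in kg
EV_J = 1.60218e-19       # electron-volt in joules
PULSE_PERIOD_S = 200e-6  # pulse spacing at 5 kHz

def ion_speed(mass_da, ekin_ev):
    """Ion speed in m/s for a given mass (Da) and kinetic energy (eV)."""
    return math.sqrt(2 * ekin_ev * EV_J / (mass_da * AMU_KG))

# e.g. a ~12 kDa protein ion carrying 15 eV of kinetic energy (example values)
v = ion_speed(12000, 15)
print(round(v), round(v * PULSE_PERIOD_S * 1e3, 1))  # speed in m/s, mm per period
```

With these example inputs the speed comes out at a few hundred m/s, consistent with the figure quoted in the text, and the distance covered per pulse period is centimeter-scale.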
For the photon-interaction experiments, the collision voltage is set to 0 V to reduce interplay between the XUV-induced signal and the residual CID background signal; under these conditions, the traveling-wave collision cell acts as an rf-only ion guide toward Q2. The signal of the m/z ion of interest is optimized by adjusting the concentration of the ESI solution as well as the parameters of the spectrometer. Depending on the selected m/z, the ion current is estimated between 10⁷ and 10⁹ ions/s (1 to 100 pA for protonated molecules), which corresponds to a density of about 10–10³ ions/mm³ in the interaction region. We note that the continuous ion beam is irradiated by the XUV light only during a small fraction of the time (a 25 fs pulse every 200 μs). In the experiment, typically a few ions to a few tens of ions per second are excited by the XUV radiation, which contributes to the low signal-to-noise ratio; the mass spectra are therefore mostly dominated by the intact precursor ion peak and residual CID product ions. Mass spectra are recorded with and without XUV light interaction. Without XUV light (laser OFF), the mass spectra correspond to those induced by the residual CID in the collision cell at 0 V collision voltage. As presented in the next section for protonated reserpine, cytochrome c and insulin, it is noticeable that despite the very low duty cycle, product ions induced by the XUV light (laser ON) can be detected, which suggests a high efficiency of the XUV activation processes.
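Two pieces of bookkeeping implicit in the figures above can be made explicit; a minimal sketch (my own illustrative arithmetic, not the authors' analysis): which odd harmonics of the 800 nm drive laser fall in the quoted 10–50 eV band (and survive an aluminum filter blocking photons below 17 eV), and how tiny the temporal overlap between the continuous ion beam and the pulsed light is.

```python
import math

# --- photon side: odd harmonics of the 800 nm fundamental -------------------
HC_EV_NM = 1239.84                 # h*c in eV*nm
E_FUND = HC_EV_NM / 800.0          # ~1.55 eV per IR photon

def harmonics_in_band(e_min=10.0, e_max=50.0):
    """(order, energy in eV) for odd harmonics inside [e_min, e_max]."""
    return [(q, round(q * E_FUND, 2))
            for q in range(1, 101, 2)
            if e_min <= q * E_FUND <= e_max]

band = harmonics_in_band()
past_filter = [h for h in band if h[1] >= 17.0]  # Al filter cuts below 17 eV
print(band[0], past_filter[0])                   # lowest surviving orders

# --- timing side: duty cycle and ions present in the focus ------------------
duty_cycle = 25e-15 * 5e3               # 25 fs pulses at 5 kHz: ~1e-10 overlap
volume_mm3 = math.pi * 0.05**2 * 1.0    # ~100 um focus crossing a ~1 mm beam
for density in (10, 1e3):               # quoted 10-10^3 ions/mm^3
    print(density * volume_mm3)         # ions inside the focus at any instant
```

The sub-unity to few-ion occupancy of the focal volume, combined with the ~10⁻¹⁰ duty cycle, is consistent with only a few ions to a few tens of ions per second being excited.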
The results demonstrate the capabilities of the instrument to study XUV-induced processes in molecular ions.

Reserpine (from Alfa Aesar L03506), human insulin (from Sigma-Aldrich I0908) and cytochrome c from bovine heart (from Sigma-Aldrich C2037) samples were prepared by dissolution in a 50:50 MeOH:H2O solution with 0.1% acetic acid, at concentrations of 60 μM for reserpine, 8 μM for insulin and 4 μM for cytochrome c.

Protonated reserpine [Res.H]+ comprises a trimethoxybenzoyl (TMB) group (m/z 195) bonded through an ester bridge to a methoxy-indole-based group (m/z 414) [36]. Its ionization potential is expected to be higher than that of the neutral molecule (IPneutral = 7.88 eV [38]) due to the Coulombic effect of the added proton and/or structure modification [39]. With XUV photon energies in the range 15–35 eV, ionization is expected for a molecule such as reserpine and may result in specific, non-statistical fragmentation. While common fragments are observed with both activation methods, specific fragmentation pathways appear in the case of XUV interaction, as shown in Fig. . Fragments at m/z 413 and 414 correspond to reserpine after a loss of TMB (m/z 195), with or without an accompanying proton. As the fragment m/z 414 is not observed in CID spectra, one may assume that the XUV-induced fragments m/z 414 and 413 result from dissociative ionization of the protonated reserpine. This is consistent with the observation of a peak at m/z 304.5 corresponding to the doubly charged protonated reserpine [Res.H]•2+ following XUV excitation. Fragments at m/z 381–383 correspond to reserpine after loss of TMB (m/z 195) plus oxygen and CH3 or CH4 loss and hydrogen migration; these fragments may result from sequential dissociation following ionization, indicating that enough internal energy remains after ionization. Using krypton gas for HHG instead of xenon results in higher photon energies, while the photon flux is lower.
The same fragmentation channels are observed with different relative intensities, as depicted in Fig. .

The O–R bond cleavage after the carbonyl group in the ester bridge of the protonated reserpine leads to the fragment m/z 414, while the fragment m/z 413 results from the same bond cleavage with an additional hydrogen migration to the carbonyl moiety. To validate this interpretation, quantum chemistry calculations were performed using density functional theory. Structures of protonated reserpine [Res.H]+ and ionized protonated reserpine [Res.H]•2+ in the ground state were optimized using the CAM-B3LYP method with the 6-311++G** basis set. The calculations, performed using Gaussian [40] and Gabedit [41], show three possible protonation sites. The lowest-energy conformation, C1, corresponds to the presumed protonation site, with the proton bonded to the nitrogen of the tertiary amine [42], as depicted in Figs. ; for [Res.H]+ the two other conformations lie about 0.5 eV and about 1 eV above the lowest-energy conformation. From these calculations, one can assume that only the C1 geometry, with a localized proton at the nitrogen site, has to be considered in this experiment. When the molecule is ionized, two charges are carried by the photoproduct, and the location of these two charges depends on both the electrostatic repulsion and the global interaction with the surrounding atoms. For [Res.H]•2+ the lowest-energy conformation, C4, corresponds to protonation near the oxygen of the TMB group, and the conformation C5, with the proton bonded to the nitrogen of the tertiary amine, lies 0.23 eV above it (9.1518 eV above C1). The energy of the structure C6 (9.6397 eV above C1), with the proton at the O1 site, is 0.72 eV above the ground-state geometry C4. As ionization is fast compared to proton migration, the energy difference between the C5 and C1 conformations gives an estimate of about 9.1 eV for the calculated vertical ionization potential of protonated reserpine. These calculations indicate that ionization of protonated reserpine in the ground state may trigger a proton migration from the N site to the O2 site. In this work the ionized protonated reserpine [Res.H]•2+ results from XUV excitation, and a large amount of internal energy can remain in the molecule after ionization. Due to the low energy gap (0.23 eV) between the O2-site and N-site structures, these two conformations can be populated after XUV photoionization. Their dissociation can result in the fragment m/z 414 if the proton remains localized on the N site (population of conformer C5) and in the fragment m/z 413 in the case of O2 protonation (population of conformer C4). This shows that, starting from a stable molecule with a localized proton, the XUV excitation leads to an ionized protonated species that can be found in two configurations due to proton migration, with fragmentation products that depend on the proton localization. This first example demonstrates the interaction between the XUV radiation and the protonated reserpine molecule: the excitation induces an efficient ionization of the molecule accompanied by sequential dissociation and proton migration.

In the following, we present the results of the interaction of XUV photons of 15–30 eV with two proteins of different size, structure and composition: human insulin (mass 5808 Da) and cytochrome c from bovine heart.

Human insulin is a protein composed of 51 amino acids organized in two peptide chains linked by disulfide bonds. Ionization and fragmentation of insulin in various charge states have been studied using several activation methods to investigate, for example, protein disulfide bond cleavage [47], including soft X-rays (> 280 eV) [48]. Single and double ionization, non-dissociative ionization as well as emission of fragments from amino acids have been observed upon X-ray excitation of [Ins.5H]5+, and sequence ions from the N-terminals of both chains were also observed [44]. Experiments in the intermediate photon energy range have not been reported so far; however, we note that this intermediate energy range was used by Schlathölter et al. for studying other large species, such as ubiquitin (8.5 kDa) or peptides of mass below 3 kDa [50].

The cytochrome c protein is composed of 104 amino acids surrounding a heme molecule, a porphyrin with a central iron atom that is covalently bonded to the peptide chain. The mass of the protein is about 12 kDa and depends on the amino acid sequence, which varies with the species. Multiply protonated cytochrome c has been studied in several gas-phase experiments using VUV photons (< 20 eV).

In this work, only multiply charged protonated species corresponding to m/z below 2 kDa can be observed within the mass range of our spectrometer. We focus our study on the six-times-protonated insulin [Ins.6H]6+ (m/z 969) and the fifteen-times-protonated cytochrome c [Cyt-c.15H]15+ (m/z 816). A representation of the crystal structure of these proteins from the RCSB Protein Data Bank (PDB) [52] is depicted in Fig. .

For [Cyt-c.15H]15+, losses of the lateral chain of glutamic acid, of the lateral chain of tyrosine (loss of 107 Da) and of the tryptophan residue at the C-terminus (loss of 130 Da) are observed; such losses are generally observed for ionized proteins [50]. At this charge state, the native structure of the molecule is not preserved and the protein is expected to exist in a nearly linear, extended conformation [53]. The fragmentation pattern, which depends on the structure and the charge location, has been described in previous experiments reported in the literature.
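As a quick sanity check, the precursor m/z values and the reserpine assignments quoted in this section can be reproduced with nominal-mass arithmetic (my own back-of-envelope sketch; the 608 Da nominal mass of neutral reserpine (C33H40N2O9) and the ~12,230 Da mass of bovine cytochrome c are my assumptions, the text itself quoting only the m/z values and "about 12 kDa"):

```python
# Back-of-envelope checks (nominal integer masses; illustrative only):
def mz(mass_da, n_protons):
    """m/z of a species carrying n extra protons (~1 Da each)."""
    return (mass_da + n_protons) / n_protons

print(round(mz(5808, 6)))      # insulin [Ins.6H]6+ -> m/z 969
print(round(mz(12230, 15)))    # cytochrome c [Cyt-c.15H]15+ -> m/z 816 (mass assumed)

# Reserpine assignments (608 Da nominal neutral mass, my assumption):
precursor = 608 + 1            # [Res.H]+ at m/z 609
print(precursor - 195)         # loss of TMB (195) -> m/z 414
print(precursor - 195 - 1)     # TMB loss + H migration -> m/z 413
print(precursor / 2)           # [Res.H]2+ dication -> m/z 304.5

# DFT energies quoted in the text (eV, relative to C1): the two quoted
# routes to the dication ground state C4 should agree.
E_C5, E_C6 = 9.1518, 9.6397
print(round(E_C5 - 0.23, 2), round(E_C6 - 0.72, 2))   # both give ~8.92 eV
```

All of the quoted values are mutually consistent at the 1 Da / 0.01 eV level, which is a useful cross-check on the scrambled figures in this section.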
For cytochrome c [Cyt-c.15H]15+, the fragmentation corresponds to the dissociation of some of the amino acids constituting the molecule (peaks at m/z < 200) and to cleavages of the peptide chain. In our experiment, these channels are much weaker than dissociative ionization of the protein. Double ionization ([Cyt-c.15H]••17+) is also observed around m/z 720. This is in line with the photon energy range used in this work and with the evolution of the ionization potential of cytochrome c, which presents nearly a plateau around 14 eV for charge states between 12+ and 15+. In conclusion, we clearly observe an effective interaction between XUV photons and the proteins insulin and cytochrome c. The resulting photoionization and dissociative photoionization processes correspond to efficient valence ionization of the protein, leaving the molecule in a higher charge state with a sufficient amount of internal energy to dissociate via neutral losses. Ionization of the residual argon gas in the interaction region can produce background electrons; although it cannot be ruled out, we observed no sign of interaction between these electrons and the molecular ions. The observed dissociative ionization processes are specific to the interaction with high-energy photons, in agreement with synchrotron experiments. This illustrates that the XUV excitation does not only lead to ionization of the molecule, but that the remaining internal energy of the molecule is sufficient to trigger fragmentation. Open questions remain regarding the localization and/or redistribution of internal energy in such large molecules. These questions could be further investigated using time-resolved experiments that are now within reach with our XUV–ESI–MS instrument. Remarkably, these results are obtained without any trapping device. Velocity map imaging would provide detailed structural and dynamical information and could be implemented here in combination with the XUV source.
Nevertheless, the current design is not fully appropriate for such an implementation. For instance, one would need to extract the ions of interest from the argon background and nearby electrodes and transport them to a free interaction region where an electron spectrometer could be installed. Such experiments can be performed even with closely situated rf fields, as demonstrated by the authors. Because of the short pulse duration, time-resolved experiments down to the femtosecond and even attosecond timescale could be performed. This type of experiment offers new perspectives for the development of the emerging field of attochemistry, as well as for analytical purposes where new activation methods are under development. Exciting experiments combining ion storage rings and HHG-based XUV sources have been proposed to probe QED effects in heavy atomic ions. Overall, HHG XUV light and ionic species offer a perfect playground to test fundamental physics and chemistry, which should foster further developments. We have performed experiments in which mass-selected molecular ions are excited using table-top XUV sources based on high-harmonic generation (HHG). This is done using an ESI–MS mass spectrometer coupled to an XUV source operating at 5 kHz. The interaction between the ions and the light occurs “on the fly”, meaning that no trapping device has been used. Nevertheless, despite the low number of ions, we observed the effect of the XUV interaction in a moderately large model molecule, reserpine, as well as in two large proteins (cytochrome c and insulin). In all these examples, the XUV radiation induces dissociative ionization, meaning that the interaction leads to an efficient valence ionization of the molecule followed by its fragmentation through the loss of neutral groups. The advantage of this table-top apparatus is to give access to high-energy photons, usually accessible only at large-scale facilities such as synchrotrons.
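As a side note, the 15–30 eV photon energies used here translate into XUV wavelengths of roughly 83 nm down to 41 nm via λ(nm) = 1239.84/E(eV). A one-line conversion (ours, for orientation only):

```python
HC_EV_NM = 1239.84  # h*c expressed in eV·nm

def ev_to_nm(energy_ev: float) -> float:
    """Photon wavelength in nm from photon energy in eV."""
    return HC_EV_NM / energy_ev

# The 15-30 eV XUV range spans roughly 83 nm down to 41 nm.
print(f"{ev_to_nm(15):.1f} nm, {ev_to_nm(30):.1f} nm")
```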
Such a source could cover a large energy range, including soft X-rays. Because the ions are not trapped, it is also possible to include not only mass spectrometry but other types of spectroscopy, such as electron kinetic energy measurements. For that goal, the combination of ESI–MS and velocity map imaging"} +{"text": "Increased capacity, higher data rates, decreased latency, and better service quality are examples of the primary objectives that must be catered to in the near future, i.e., in fifth-generation (5G) networks and beyond. To fulfil these needs, cellular network design must be drastically improved. The 5G cellular network design, massive multiple-input multiple-output (MIMO) technology, and device-to-device communication are all highlighted in this comprehensive study. Free-space optics (FSO) is a promising solution to address these demands. However, FSO alone is insufficient during turbulent weather conditions. FSO systems possess some limitations: because they require line-of-sight (LOS) connectivity, they can be disturbed by any obstruction between sender and receiver, such as a flying bird or a tree. Moreover, FSO is sensitive to weather; its performance significantly decreases in bad conditions such as fog and snow. This paper conducts a systematic survey of existing projects in this area of research, such as hybrid FSO/radio frequency (RF) communication systems, listing the techniques used in each model to achieve optimum performance in terms of data rate and bit error rate (BER) for implementation in 5G networks. The next phase in mobile communications standards is fifth-generation (5G) communication.
It provides new services with ultra-high system capacity, massive device connectivity, ultra-low latency, ultra-high security, ultra-low energy consumption, and extremely high service quality. Furthermore, as the Internet of Things (IoT) idea develops, the rate at which physical devices are connected to the internet is expanding exponentially. Local and international authorities carefully regulate the usage of the RF band; most RF sub-bands are licensed to specific operators, such as cellular phone companies, television broadcasters, and point-to-point microwave links. The optical spectrum is a promising solution for future high-density and high-capacity networks. Free Space Optical (FSO) communication refers to wireless connectivity based on the optical spectrum. FSO-based network technologies have distinct advantages over RF-based network technologies. For communication lengths ranging from a few meters to more than 10,000 km, FSO systems can deliver high-data-rate services, and they are suitable for both indoor and outdoor use. FSO systems, on the other hand, suffer from their sensitivity to obstacle blocking and from restricted transmitted power. As a result, combining FSO and RF systems could provide a viable answer to the massive demands of future 5G and beyond communication systems. Over the last few decades, FSO communication has been widely explored as a promising alternative to RF. In FSO, data are used to modulate a light beam in the same way as in fiber optics; however, the light beam travels from one point to another wirelessly. The fact that FSO combines the high bandwidth of optical communication systems with the flexibility of wireless technologies has sparked a surge of interest in the technology. The Near Infrared (NIR), Visible Light (VL), and Ultraviolet (UV) bands are all covered by FSO technology; terrestrial and space FSO communications, like fiber-optic systems, often operate in the near-infrared spectrum.
Because FSO and RF do not interfere, FSO technology has also been explored to complement existing RF systems. To overcome the disadvantages of FSO technology, hybrid FSO/RF communication is utilized as a nonstop fallback when the primary FSO link suffers bad weather conditions. In addition, the FSO communication system solves the last-mile access problem, and many researchers are therefore interested in this field. For a long time, the benefits of FSO technology have been well known. Recent research and developments in FSO enabling technologies, on the other hand, have made it easier to exploit these benefits. As a result, many new FSO-related research publications have recently been released. Given that most FSO technology classification attempts occurred in the late 1990s, we feel that existing FSO technology classifications are outmoded. The majority of previous categorization attempts focused solely on reviewing and differentiating FSO systems, with little regard for the establishment of new/future FSO links. As a result, fitting some emerging and future configuration classes into existing single-level classification schemes may be challenging, if not impossible, and numerous survey studies must incorporate new classes, causing the overall classification scheme to become inconsistent and non-systematic in its expansion. Consider, for example, the quasi (multi-spot) diffuse system. FSO systems are an excellent solution for enhancing the performance of existing projects, as a full-duplex FSO system can achieve a data rate of 10 Gbps. In addition, FSO communication systems can be used in different applications for last-mile access.
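The weather sensitivity discussed throughout this survey is commonly modeled with a Beer–Lambert-style attenuation law, which in logarithmic units reduces to subtracting a weather-dependent specific attenuation (in dB/km) from the transmitted power. The sketch below is ours; the attenuation values are rough illustrative assumptions, not figures from the surveyed papers:

```python
def received_power_dbm(pt_dbm: float, atten_db_per_km: float, link_km: float) -> float:
    """Beer-Lambert attenuation in dB form: Pr = Pt - alpha * L."""
    return pt_dbm - atten_db_per_km * link_km

# Illustrative (assumed) specific attenuations: clear air ~0.5 dB/km,
# moderate rain ~10 dB/km, dense fog can exceed 100 dB/km.
for label, alpha in [("clear", 0.5), ("rain", 10.0), ("fog", 100.0)]:
    print(label, received_power_dbm(10.0, alpha, 1.0), "dBm over 1 km")
```

The steep loss under fog is what motivates the hybrid FSO/RF fallback designs surveyed below.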
By comparing the FSO system with existing systems, we can conclude that the FSO system outperforms all of them. This article offers a comprehensive examination of hybrid FSO communication systems, which make use of a variety of approaches and links to achieve the best possible levels of performance; in recent years, such technologies have been utilized increasingly often. The organization of this article is as follows. Preliminaries and basic concepts linked to optical wireless communication are discussed first. We address the optical wireless technology's naming convention, because researchers refer to the technology by multiple names in the literature. Preliminaries and essential components of a general FSO link, such as light sources, photodetectors, and modulation methods, are also briefly discussed. The specific components utilized in optical communication systems, as well as breakthroughs in research linked to these components, are outside the focus of this work; papers and publications on the theory of operation, variations, and advancement of various types of light sources and photodetectors are available for interested readers. Optical wireless and fiber-optic communication technologies operate in the same band of the electromagnetic spectrum and have comparable transmission bandwidth capacities; hence, optical wireless communication was formerly known as fiber-less optics. As the technology advanced and was employed in new sectors, new names arose in the literature, such as Lasercom, Optical Wireless Communication (OWC), and FSO.
The terms “OWC” and “FSO” have become commonly used in recent decades, although “fiber-less optics” and “lasercom” are now regarded as obsolete. Laser Diodes (LDs) and Light Emitting Diodes (LEDs) are the most prevalent light sources in FSO systems because of their high optical power outputs and wide modulation bandwidths. LDs are popular in applications requiring large data rates. To mitigate potential eye and skin safety hazards, there are standards and power restrictions controlling the use of LDs. Advantages of LDs: they emit coherent light, in which all individual light waves are properly aligned with one another, travelling in the same direction, in the same manner, and at the same time. Disadvantages of LDs: the aperture of an LD is small, and only point-to-point communication is possible with an LD. The concept of employing LD-based FSO communication has been around for a long time, and IR LDs have already been used to demonstrate high-data-rate communication for mobile access. However, expense and other constraints limit the use of LDs. LEDs, on the other hand, are favored in indoor applications with low/medium data rates, because LEDs are both cheaper and more reliable than LDs. LEDs are also long-lasting sources with large-area emitters; as a result, even at relatively high power levels, LEDs can be operated safely. LEDs support lower data rates than LDs. Advantages of LEDs: due to recent developments in solid-state lighting, there has been a tendency over the past decade to replace incandescent and fluorescent bulbs with high-intensity white solid-state LEDs. LEDs offer advantages such as great energy efficiency, long lifespan, small form factor, lower heat generation, reduced use of hazardous materials in design, and improved color rendering without hazardous chemicals.
Disadvantages of LEDs: all of the light produced by an LED is incoherent. As a result, all of the waves are out of phase, and the optical power communicated by an LED is comparatively low. Natural and artificial light sources also interfere with LED source light. A photodetector is a semiconductor device that converts the photon energy of light into an electrical signal by releasing and accelerating current-conducting carriers within the semiconductor. The Positive-Intrinsic-Negative photodiode (PIN) and the Avalanche Photodiode (APD) are the two most frequently utilized photodiodes. FSO receivers can generally be built with amplifiers for a variety of advantages, including the following: an optical preamplifier can be used to increase optical signal strength that has been diminished by atmospheric conditions, thereby successfully increasing receiver sensitivity; to overcome eye-safety restrictions on transmitted laser power; and to suppress the limiting effect of the receiver thermal noise generated in the electronic amplifier. In low-cost, low-data-rate FSO links, PIN photodetectors are preferred, since they are inexpensive, can function at low bias, and can withstand vast temperature fluctuations. Recent breakthroughs in graphene, two-dimensional materials, and (nano)materials such as plasmonic nanoparticles, semiconductors, and quantum dots have opened the way for the development of ultrafast photodetectors that work over a wide wavelength range. Transmission dependability, energy efficiency, and spectrum efficiency vary depending on the modulation technique, and the type of application determines which modulation method is used. For example, On-Off Keying (OOK) modulation is the most widely used modulation method in FSO systems due to its simplicity. However, in more complicated systems that demand a high data rate, such as deep-space communication, OOK can be wasteful.
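The power-efficiency contrast between OOK and pulse-position signaling can be made concrete: with equiprobable bits, OOK pulses in half the chip slots on average, whereas 4-PPM concentrates one pulse in every group of four slots. A minimal sketch (the mapping conventions and function names are ours):

```python
def ook_modulate(bits):
    """OOK: one chip per bit; light on for '1', off for '0'."""
    return list(bits)

def ppm4_modulate(bits):
    """4-PPM: each pair of bits selects which of 4 time slots carries the pulse."""
    chips = []
    for i in range(0, len(bits), 2):
        symbol = 2 * bits[i] + bits[i + 1]
        slot = [0, 0, 0, 0]
        slot[symbol] = 1
        chips.extend(slot)
    return chips

def duty_cycle(chips):
    """Fraction of chip slots in which the source is on."""
    return sum(chips) / len(chips)

bits = [0, 0, 0, 1, 1, 0, 1, 1]  # all four 2-bit symbols once
print(duty_cycle(ook_modulate(bits)))   # 0.5
print(duty_cycle(ppm4_modulate(bits)))  # 0.25 (one pulse per four slots)
```

The lower duty cycle is why PPM-family schemes are preferred where peak power is cheap but average power is constrained, at the cost of bandwidth (four chips per two bits here).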
Pulse Position Modulation (PPM) or one of its derivatives, such as Variable-PPM, is commonly selected for such applications. There are coherent and non-coherent optical communication systems. Non-coherent optical transmission systems use amplitude and differential phase modulations, which do not require coherent local-oscillator light, whereas coherent optical transmission systems use phase and quadrature amplitude modulations for coherent detection. Both OOK and PPM are single-carrier pulsed modulation schemes, and single-carrier modulation techniques become inefficient as data rates rise due to increased intersymbol interference. To avoid negative amplitudes, a DC bias is added to the pre-modulated RF signal, resulting in low power efficiency. The DC bias is required to prevent clipping and nonlinear distortion in the optical domain, and it may become quite large as the number of carriers increases, as in the multiple-subcarrier intensity modulation (MSIM) approaches. As a result, the peak-to-average power ratio (PAPR) rises, lowering the power efficiency. A PAPR reduction technique can be utilized to improve the performance of MSIM techniques by making the signal less susceptible to nonlinear distortion. FSO can be deployed in many applications: for last-mile access when a place is not suitable for constructing a fiber network; for providing a backup link when the fiber network experiences failure; for extending an existing fiber network, as this needs little time and can be easily deployed; or for backhaul, as FSO systems are capable of transferring data between antenna towers and the public switched telephone network at high data rates. FSO systems have many advantages, such as providing high transmission speed, and they can be installed easily in less than 30 min. FSO systems also have disadvantages, explained as follows. First is scintillation loss, which is the sensitivity to temperature changes arising from the earth's heat rise.
This causes “image dancing” at the receiver node. FSO is a viable technique for providing 5G communication at high data rates and enormous IoT connectivity. Communication systems in 5G and beyond must be capable of integrating ultra-dense heterogeneous networks, and making the cells smaller is a simple but incredibly effective approach to enhancing network capacity. Fiber-optic access networks are well suited to being extended by E-band radios (71–86 GHz), as demonstrated by Hilt. Due to the continuing advancement of wireless communication systems, a dependable system that can provide larger channel capacity and higher data transfer rates for users is essential. MIMO systems achieve this because their multiple antennas on both the transmitter and receiver sides allow for spatial diversity and spatial multiplexing techniques. MIMO is a practical solution to avoid losing data in the channel due to fading and errors: it ensures that the receiver obtains more than one identical version of the transmitted data, and it enhances the reliability and performance of the system with high spectral efficiency and low energy per bit. Related hybrid FSO/RF MIMO designs have been investigated by Fang et al., Al-Eryani et al., Liang et al., Yousif et al., and Shah et al.; the hybrid SIMO-RF/FSO communication system is introduced by Shi et al. By permitting the transmitted data to travel via a relay node instead of a direct route to the destination, which is greatly hampered by atmospheric turbulence, relay transmission can be used to overcome atmospheric turbulence. Because the notion of relay-assisted networks is well established in RF technology, FSO researchers are embracing the techniques and procedures used in RF relay-assisted networks; amplify-and-forward (AF) relaying in particular has been widely employed, including in the work of Sharma et al.
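The receive-diversity idea described above (the receiver obtaining several versions of the same data) can be illustrated with maximal-ratio combining (MRC), the combiner used at the destination in several of the surveyed hybrid systems. The snippet is our own sketch with arbitrary complex channel gains:

```python
def mrc_combine(received, channel_gains):
    """Maximal-ratio combining: weight each branch by the conjugate of its
    channel gain, then normalize by the total channel power."""
    numerator = sum(h.conjugate() * y for h, y in zip(channel_gains, received))
    denominator = sum(abs(h) ** 2 for h in channel_gains)
    return numerator / denominator  # estimate of the transmitted symbol

# Noiseless example: two branches carry the same symbol through different gains.
s = 1 - 1j
h = [0.8 + 0.2j, 0.3 - 0.5j]
y = [hi * s for hi in h]
print(mrc_combine(y, h))  # recovers (1-1j) exactly in the noiseless case
```

With noise, MRC maximizes the post-combining SNR, which is the sum of the per-branch SNRs; this is the "diversity order two" benefit cited for the duplicated FSO/RF links later in this survey.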
Furthermore, Yongzhi and Jiliang investigate a related system, and Wang et al. build a dual-path system whose BER reaches 10−6, compared with 3 × 10−3 for the single RF/FSO system. The model's architecture consists of two variable-gain AF relays that create two isolated routes (P1 and P2). The links from the source to relay 1 and to relay 2 differ: one is an FSO link and one is an RF link. The collected signals are amplified by both relays independently and sent to the destination through different links: if the connection between the source and a relay is FSO, then the connection between that relay and the destination is RF, and vice versa. A bandpass filter (BPF) filters the received RF signal. The work proposes a new Extended Generalized Bivariate Meijer-G Function analysis that shows an improvement over the two paths due to the spatial diversity; additionally, the relays help data transmission over long ranges. Tahami et al., Chen et al., Najafi et al., Zedini et al., Upadhya et al., Alathwary et al., Torabi and Effatpanahi, Amirabadi and Vakili, Tonk and Yadav, Roumelas et al., and Lu et al. have likewise investigated mixed RF/FSO relay systems. Odeyemi and Owolawi show the error performance of a cooperative-diversity communication system mixing FSO and RF links that utilizes MRC at the destination. As the number of small cells increases daily on the way to 5G wireless technology, the challenge of effective 5G traffic backhauling grows; this problem has been addressed by Song et al., Pattanayak et al., Jiang et al., and Bag et al., among others. In another approach, the source sends the data using an RF signal to the destination and to a quantize-and-encode relay (QER) over the FSO link. Further studies include those of Kiran et al., Almouni et al., Nguyen et al., and Amirabadi et al. In conclusion, the aforementioned methodologies all assume that relay nodes are buffer-less and stationary.
Moving UAVs outfitted with buffers are employed by Fawaz et al. to act as relays; related systems have been studied by Wu and Kavehrad, Lee et al., Pai and Sainath, Gu et al., Erdogan et al., and Shah et al. In this section, we highlight different models that combine various sub-systems with FSO communication systems. To fully utilize the benefits of hybrid FSO systems, various issues must be resolved; the realization of seamless mobility for mobile users is one of the major ones. LiFi is a fast, bidirectional, mobile wireless light-based communication network. In a LiFi-WiFi network, for instance, a user should be able to effortlessly switch between LiFi cells as well as between LiFi and WiFi networks. The general characteristics of heterogeneous (LiFi + WiFi) networks are presented by Ayyash et al., and related aspects are examined by Makki et al. Inter-cell interference is unavoidable given the growing number of FSO cells deployed for coverage; inter-cell interference coordination and mitigation methods have long been researched in the RF domain, and their application to FSO is addressed by Zhang et al. and discussed further by Chen et al. Dat et al. establish two experiments. The second experiment acquires only one laser diode and utilizes only a single wavelength for data transmission. At the remote antenna unit (RAU), the optical signal is split into two parts using a 3 dB optical coupler. The first part is transmitted exactly as in the first experiment. The second part passes to a photodetector to produce an intermediate frequency, which an electrical mixer then up-converts to 90 GHz; the signal then passes through a band-pass filter, and the desired band is transmitted using a 43 dB-gain antenna. The reverse of this process is performed at the receiving end to recover the signal originally transmitted by the sender. In the last stage, MATLAB generates an intermediate-frequency signal that is passed through two distinct synchronized arbitrary waveform generators (AWGs).
One of them transmits the signals over an FSO system as deployed in the second experiment. The second one transmits over a radio-over-fiber (RoF) MMW system at 90 GHz, deploying high-gain antennas to extend the transmission range of the MMW link. The collected IF signals are passed to two inputs of an oscilloscope (OSC) and inspected offline. The proposed system affords an excellent solution for future fronthaul mobile communication, especially for the ultra-dense small cells of 5G and beyond networks where fiber cable is unsuitable. In addition, to verify the effectiveness of the developed FSO and hybrid systems, the authors will undertake experiments in an outdoor setting under various weather conditions in upcoming work. Related hybrid RF/FSO systems have been analyzed and proposed by Khalid et al., Eslami et al., Yasir et al., Nadeem et al., Song et al., Grigoriu et al., Chatzidiamantis et al., and Touati et al. Switching between FSO and RF links in hybrid FSO/RF systems plays a significant role, as it relies on the SNR thresholds of both links; choosing optimum thresholds for both links gives the system higher reliability. Shrivastava et al. propose such a scheme: when the quality of the optical link degrades and the received optical SNR falls below an upper threshold level, the RF link activates. In this situation, both signals reach the receiver separately and are combined through MRC; data duplication on the RF and FSO links provides a diversity order of two. When the received optical SNR falls below the lower threshold level, the optical link is put in standby mode and only the RF link sustains the required transmission. If the RF SNR is also lower than the RF threshold, an outage is declared and no transmission takes place. The optical link reactivates as soon as the environmental conditions become favorable and the optical SNR improves.
Thus, the switching scheme conserves optical power, conserves RF power, prevents the generation of unnecessary RF interference, improves system performance, and overcomes the drawbacks of existing switching schemes. Alternative switching and combining designs have been proposed by Rakia et al., He and Schober, Kumar and Borah, and Zhang and Hranilovic. A solution to the hard-switching drawback is to use channel coding to coordinate data delivery over both connections, as proposed by Khan et al.; further hybrid designs come from Rakia et al., Abadi et al., Shakir et al., Haluška et al. (using RSS), Tokgoz et al., Li et al., Kiasaleh, Gurjit et al., Nam et al., and Najafi et al. As small cells, rather than macrocells, are the key enabling technique for 5G networks, providing omnipresent backhaul connectivity to all small cells is challenging; Siddique et al. provide a study of this problem. Floating base stations will also play an important role in transmitting data in 5G FSO networks, as proposed by Yu et al. One analyzed system achieves a 10−9 BER in coastal areas such as Chennai with weak weather disturbances. In contrast, in plain and desert areas, FSO links of 6.53 km and 7.85 km attain the same BER of 10−9, whereas in hilly areas the FSO link reaches only 5.3 km. The performance of the system is ideal for 5G mobile networks and IoT technology, as it provides high-speed connectivity; the data transmission rate of this system can reach 160 Gbps. The links in this system face difficulties such as installing an optical fiber network in places such as mountains, which can be solved by installing FSO links with a backup hybrid RF/FSO system. Further analyses and experiments are reported by Singh et al., Zhao et al., and Schulz et al.
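The dual-threshold switching behavior described above (an upper and a lower optical-SNR threshold plus an RF threshold) can be sketched as a small decision rule; the function name and threshold values below are illustrative assumptions, not taken from any one surveyed paper:

```python
def select_links(snr_fso: float, snr_rf: float,
                 upper: float, lower: float, rf_threshold: float) -> str:
    """Dual-threshold hybrid FSO/RF link selection.

    - FSO alone while the optical SNR is above the upper threshold.
    - Both links (combined via MRC at the receiver) in the band between
      the lower and upper thresholds.
    - RF alone once the optical SNR drops below the lower threshold.
    - Outage if the RF SNR is below its own threshold as well.
    """
    if snr_fso >= upper:
        return "FSO"
    if snr_fso >= lower:
        return "FSO+RF"
    if snr_rf >= rf_threshold:
        return "RF"
    return "OUTAGE"

print(select_links(25, 12, upper=20, lower=10, rf_threshold=5))  # FSO
print(select_links(15, 12, upper=20, lower=10, rf_threshold=5))  # FSO+RF
print(select_links(4, 12, upper=20, lower=10, rf_threshold=5))   # RF
print(select_links(4, 3, upper=20, lower=10, rf_threshold=5))    # OUTAGE
```

The two optical thresholds form a hysteresis band that avoids rapid toggling between modes as the optical SNR fluctuates around a single switching point.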
We can conclude from all the mentioned projects that hybrid FSO/RF communication systems perform better than standalone FSO or RF systems, since a hybrid system relies on more than one link when a link faces difficulties such as severe weather conditions, and it achieves full diversity. In addition, a hybrid system with parallel links outperforms a serial FSO-RF link, as a parallel hybrid FSO/RF system uses two or more independent paths to deliver the signals to the receiver. A relay is required for serial hybrid FSO/RF systems, as it converts the signals from FSO to RF or vice versa, much as access points allow RF users to access the whole network. A DF relay performs better than an AF relay, enhancing the BER performance; however, the AF relay is more straightforward and less complex than the DF relay. In addition, external modulation is superior to direct modulation. For instance, return-to-zero (RZ) modulation is suitable for long-distance communication but is both difficult and expensive; non-return-to-zero (NRZ) is more appropriate for short links, simpler, and more affordable. The massive increase in internet traffic and multimedia users over the last several years has put significant pressure on RF systems that operate at modest data rates. There is a need to move from the RF domain to the optical domain because of the enormous development in information technology, which is pushing the information industry to higher and higher data rates. An extremely high-bandwidth LOS wireless link between faraway locations is possible using FSO communication. This technology is thought to be one that will soon be able to satisfy the extremely high speed and enormous capacity demands of the modern communication industry. However, the heterogeneous nature of the air channel presents several difficulties that must be solved to fully exploit the FSO system's terabit capability.
The FSO system is susceptible to many atmospheric phenomena, including absorption, scattering, and atmospheric turbulence. Numerous methods used at the physical and network layers reduce the negative impact of the atmosphere on the quality of the laser beam. Many fading-mitigation strategies first developed for RF communication, including diversity, adaptive optics, error control codes, and modulation, work well for FSO communication. In addition, the complementary nature of RF and FSO prompted the creation of hybrid RF/FSO systems, which guarantee carrier-class availability under practically all weather circumstances. Given the significant progress made in FSO communication, this technology therefore seems to have highly promising development prospects in the near term. Some commercial products for FSO terrestrial and space links are now on the market. We hope this technology will soon usher in a global revolution in telecommunications."} +{"text": "Line-of-sight (LOS) indoor optical wireless communications (OWC) enable high-data-rate transmission while potentially suffering from optical channel obstructions. Additional LOS links using diversity techniques can tackle the degradation of received-signal performance, since channel gains often differ across multiple LOS channels. In this paper, a novel active transmitter detection scheme in spatial modulation (SM) is proposed and incorporated with the signal space diversity (SSD) technique to enable increased OWC system throughput with an improved bit error rate (BER). This transmitter detection scheme is composed of a signal pre-distortion technique at the transmitter and a power-based statistical detection method at the receiver, which together address the problem of power-based transmitter detection in SM using carrierless amplitude and phase modulation waveforms with numerous signal levels.
Experimental results show that, with the proposed transmitter detection scheme, SSD can be effectively provided, with a ~0.61 dB signal-to-noise-ratio (SNR) improvement. Additionally, an improved data rate of ~7.5 Gbit/s is expected due to effective transmitter detection in SM. The SSD performance at different constellation rotation angles and under different channel gain distributions is also investigated. The proposed scheme provides a practical solution to implement power-based SM and thus aids the SSD realization for improving system performance. Optical wireless communications (OWC) are a promising solution to cope with the challenges of the ever-increasing data volume and bandwidth brought by emerging indoor wireless applications such as high-definition streaming in remote working, education, and entertainment. With easy installation via Fiber to the Premises (FTTP) networks, an OWC system can support higher-bandwidth, scalable wireless transmission than its radio frequency (RF) counterpart, free from RF interference and spectrum regulations. Given that the LOS optical channel is susceptible to obstruction by small opaque objects or moving users in between, a resilient OWC system can be established with redundant LOS links provided by additional transmitters/receivers. Spatial multiplexing (SMux) and spatial modulation (SM) are data-rate-boosting techniques that perform better under channel gain imbalance. However, SM itself does not achieve transmit diversity for enhancing link robustness. Note that all the above SSD investigations were based on the assumption of perfect detection of the active transmitter in SM.
On the other hand, the performance of active transmitter detection can also be significantly affected by the high channel correlation in practical scenarios, which further affects the recovery of carrierless amplitude and phase (CAP) modulation signals involving many signal levels. To address the active transmitter detection issue in SM incorporating SSD, we have theoretically proposed a joint maximum-a-posteriori (MAP) estimation algorithm. Therefore, spatial modulation, as a low-RF-chain-complexity spatial reuse technique, can boost the data rate under such channel gain imbalance, where effective detection of the active transmitter enables SM implementation. Together with the signal space diversity technique, the performance of the OWC system can be further enhanced, which is promising for achieving reliable high-speed wireless transmission for 6G networks and beyond. The architecture of the overall indoor OWC system uses two transmitters, Tx1 and Tx2, where spatial bits differ in a signal group to enable the best SSD diversity performance. A constellation pair, e.g., \u201c1 + j\u201d and \u201c1 \u2212 j\u201d, must be created via quadrature amplitude modulation (QAM). The first step of the constellation transformation is constellation rotation, multiplying the constellation pair by \u201ce^{j\u03c6}\u201d. As two spatial bits differ in a signal group, diversity interleaving with a 100% interleaving ratio is then performed as the second step by exchanging the imaginary part of one signal with the real part of the other. After the constellation transformation, CAP modulation is utilized to create real-valued signals for optical modulation. To enhance the robustness of spatial bit detection, the CAP signals are further pre-distorted without requiring much CSI.
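The two-step constellation transformation just described (rotation by e^{j\u03c6}, then 100% component interleaving) can be sketched in Python; the rotation angle and the symbol pair in the usage below are illustrative choices, not parameters taken from the paper.

```python
import cmath

def ssd_interleave(s1: complex, s2: complex, phi: float):
    """SSD constellation transformation for one pair of QAM symbols.

    Step 1: rotate both symbols by e^(j*phi).
    Step 2: 100% component interleaving -- swap the imaginary part of the
    first rotated symbol with the real part of the second, so each
    transmitted symbol carries one coordinate of each original symbol.
    """
    r1 = s1 * cmath.exp(1j * phi)
    r2 = s2 * cmath.exp(1j * phi)
    return complex(r1.real, r2.real), complex(r1.imag, r2.imag)

def ssd_deinterleave(t1: complex, t2: complex, phi: float):
    """Inverse transformation: deinterleave the components, then derotate."""
    r1 = complex(t1.real, t2.real)
    r2 = complex(t1.imag, t2.imag)
    return r1 * cmath.exp(-1j * phi), r2 * cmath.exp(-1j * phi)
```

For example, `ssd_interleave(1 + 1j, 1 - 1j, 0.4)` spreads each data symbol across both transmitted symbols; a deep fade on one LOS link then degrades only half of each symbol's information, which is the diversity benefit SSD exploits.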
The pre-distortion technique will be described in detail later. To implement SM with SSD, the original binary signal bits are divided into many subsets, with two consecutive subsets grouped exclusively for implementing SSD, as exemplified by the two subsets \u201c100\u201d and \u201c010\u201d. After optical signal detection, digital signal bits are obtained. The spatial bits that indicate the active transmitter are first recovered via a novel power-based statistical method, which is also further introduced later. Then, Tx1\u2019s channel gain is compensated to that of Tx2 to recover the whole CAP signals in terms of signal levels from Tx2, and vice versa. After the two sets of whole CAP signals are recovered in terms of signal levels from Tx1 and Tx2, respectively, the signals from the different transmitters can be picked up and combined to perform the inverse transformation of the signal constellation, including diversity deinterleaving, K-means decision, and QAM demapping. Here, K-means clustering is used for the signal decision to combat the nonlinearity introduced by pre-distortion and by channel-gain-compensated CAP signal recovery. The recovered spatial bits and signal modulation bits are then combined to finalize the signal recovery. The proposed novel active transmitter detection scheme consists of the signal pre-distortion technique at the transmitter and a novel power-based statistical method for active transmitter detection at the receiver. The signal pre-distortion technique shapes the CAP signal power at the transmitter to facilitate active transmitter detection.
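The K-means signal decision mentioned above can be sketched as follows; seeding the cluster centers at the ideal constellation points is an assumption (the text does not specify the initialization), and `kmeans_decide` is a hypothetical helper name.

```python
def kmeans_decide(symbols, centers, iters=20):
    """Minimal K-means constellation decision.

    Seeded at the ideal constellation points, the centers drift toward
    the actual (nonlinearly distorted) signal levels over a few Lloyd
    iterations; each received symbol is then decided by its nearest
    center rather than by a fixed threshold, which tolerates the
    nonlinearity introduced by pre-distortion and gain compensation.
    """
    centers = list(centers)
    labels = [0] * len(symbols)
    for _ in range(iters):
        # assignment step: nearest center by Euclidean distance
        labels = [min(range(len(centers)), key=lambda k: abs(s - centers[k]))
                  for s in symbols]
        # update step: move each center to the mean of its members
        for k in range(len(centers)):
            members = [s for s, lab in zip(symbols, labels) if lab == k]
            if members:
                centers[k] = sum(members) / len(members)
    return labels, centers
```

Seeding from the ideal constellation keeps the cluster-to-symbol mapping fixed, so no extra label-matching step is needed after clustering.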
The power-based statistical method is then implemented to reduce the transmitter detection error statistically. The signal pre-distortion technique is applied after CAP modulation at the transmitter side and can be described by Equations (1) and (2), where \u03c4 is the suppress ratio. The pre-distortion technique is proposed based on the observation of CAP signal characteristics at the receiver. Due to the many signal levels possessed by CAP, the high signal levels in the CAP signal with lower channel gain may be even larger than the low signal levels in the CAP signal with higher channel gain, leading to false active transmitter detection. The pre-distortion technique tackles this issue by limiting the high signal levels in the CAP signal with lower channel gain while enhancing the low signal levels in the CAP signal with higher channel gain. Although some nonlinearity is introduced by this technique, it still significantly improves the performance of resolving the active transmitter at the receiver. The power-based statistical method for active transmitter detection is expressed in Equations (3) and (4), which operate on the j-th symbol at the two-symbol-based time slot t, with each sample vector length equal to the oversampling factor. When the condition in Equation (4) holds, the first symbol is decided to be from Tx1 and the second from Tx2, and thus the pair of spatial bits is recovered; otherwise, the opposite assignment is made. The proposed power-based statistical method is based on the observation of received CAP signal characteristics and the channel gain conditions. In Equation (3), the received signal power distribution is expressed statistically in three terms. The first, inner-product term gives a weighted symbol level, since each sample contributes differently to representing a symbol. The second term indicates the range of sample levels in each symbol. The third term describes the amount of variation among the samples in each symbol.
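A minimal sketch of the pre-distortion idea, under stated assumptions: a piecewise-linear mapping with suppress ratio \u03c4 and a hypothetical level threshold `thresh`. The paper's exact Equations (1) and (2) are not reproduced in this text, so the code only illustrates the described behavior (compress high levels on the low-gain branch, boost low levels on the high-gain branch).

```python
def predistort(samples, tau, thresh, low_gain_branch):
    """Shape CAP sample levels to widen the power gap between branches.

    tau (0 < tau <= 1) is the suppress ratio. On the lower-gain branch,
    magnitudes above `thresh` are compressed toward it; on the
    higher-gain branch, magnitudes below `thresh` are pushed up toward
    it. Signs are preserved so the waveform polarity survives.
    """
    out = []
    for x in samples:
        sign = 1.0 if x >= 0 else -1.0
        mag = abs(x)
        if low_gain_branch and mag > thresh:
            mag = thresh + tau * (mag - thresh)   # suppress high levels
        elif not low_gain_branch and mag < thresh:
            mag = thresh - tau * (thresh - mag)   # enhance low levels
        out.append(sign * mag)
    return out
```

A smaller \u03c4 separates the two branches' power ranges more sharply but introduces more nonlinearity, matching the \u03c4 tradeoff discussed in the experimental results.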
This received signal power distribution is then used to recover the spatial bits via Equation (4) with reference to the measured total channel gains. This power-based method with three statistical terms enables a comprehensive description of the signal power distribution and thus enhances the robustness of power-based active transmitter detection under noisy channel conditions. The proof-of-concept experiment of the novel active transmitter detection scheme in SM that provides SSD was conducted using a two-transmitter, single-receiver setup. In the demonstration, a ~7.5 Gbit/s data rate can be achieved using 2.5 GBaud/s 4-CAP modulation incorporating SM. The capability of the proposed active transmitter detection scheme in SM to provide SSD is compared with that of ideal transmitter detection at the best-performing constellation rotation. The proposed active transmitter detection scheme in SM is also experimentally investigated under different channel gain distributions at the 45\u00b0 constellation rotation. The suppress ratio \u03c4 in the pre-distortion technique is studied to illustrate the impact of pre-distortion in the proposed transmitter detection scheme: the BER improves as \u03c4 decreases from 1 to 0.5, which verifies the effectiveness of the pre-distortion technique. However, the BER improvement is much smaller when \u03c4 is decreased from 0.7 to 0.5, because more signal nonlinearity is introduced with a smaller \u03c4. In fact, a smaller \u03c4 also reduces the signal power efficiency. Therefore, a tradeoff in the \u03c4 value must be made for optimized system performance. Note that, here, our proposed SSD is provided using CAP modulation; it has also been theoretically reported that SSD can be provided using OFDM modulation.
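The three-term statistic and the pairwise decision described above can be sketched as below. The per-sample weights, the plain sum used to combine the three terms, and the helper names are all assumptions, since Equations (3) and (4) are not reproduced in this text.

```python
from statistics import pvariance

def symbol_power_stat(samples, weights=None):
    """Illustrative three-term power statistic for one received CAP symbol.

    term1: weighted symbol level (inner product), since samples
           contribute unequally to representing a symbol;
    term2: range of sample levels within the symbol;
    term3: variation (population variance) of the sample levels.
    """
    mags = [abs(s) for s in samples]
    if weights is None:
        weights = [1.0 / len(mags)] * len(mags)  # assumed uniform weights
    term1 = sum(w * m for w, m in zip(weights, mags))
    term2 = max(mags) - min(mags)
    term3 = pvariance(mags)
    return term1 + term2 + term3

def detect_spatial_bits(sym_a, sym_b):
    """Compare the two symbols of a time slot: True if the first symbol's
    statistic wins, i.e., it is attributed to the higher-gain transmitter."""
    return symbol_power_stat(sym_a) >= symbol_power_stat(sym_b)
```

Combining level, range, and variance makes the decision depend on the whole sample distribution of a symbol rather than a single peak value, which is what gives the method its robustness to noise.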
A novel active transmitter detection scheme in spatial modulation, consisting of a transmitted-signal pre-distortion technique and a received-power-based statistical detection method, is proposed to be incorporated with the signal space diversity technique. Experimental results have shown that the proposed transmitter detection scheme can effectively recover the power-based spatial bits from the many-level CAP waveforms, thus realizing the improved system data rate of ~7.5 Gbit/s brought by SM. Meanwhile, SSD can be offered with about 0.61 dB SNR improvement. In addition, improved BER performance with an increasing constellation rotation angle is verified when the channel gain difference ratio equals 0.5. Furthermore, it is indicated that, by using our transmitter detection scheme, a tradeoff between SM and SSD performance is expected with a decreasing channel gain difference, which differs from the ideal SSD performance that does not consider SM. As for the effectiveness of the pre-distortion technique, the BER improvement can be significant when an appropriate suppress ratio is selected without introducing much nonlinearity. The proposed active transmitter detection scheme provides a practical solution to implement power-based SM that incorporates SSD, pursuing a better system BER with enhanced system throughput."} +{"text": "An omission to the funding section of the original article was made in error. The following sentence has been added: \u201cOpen access funding was provided by the Universit\u00e0 della Svizzera Italiana.\u201d The original article has been updated."} +{"text": "Neuroblastic tumors of the adrenal gland in elderly patients: A case report and review of the literature By Deslarzes P, Djafarrian R, Matter M, La Rosa S, Gengler C, Beck-Popovic M and Zingg T. (2022) Front. Pediatr. 10:869518. doi: 10.3389/fped.2022.869518. An Erratum on: An omission to the funding section of the original article was made in error.
The following sentence has been added: \u201cOpen access funding was provided by the University of Lausanne\u201d. The original version of this article has been updated."} +{"text": "Future opportunities for the athlete biological passport By Krumm B, Botr\u00e8 F, Saugy JJ and Faiss R. (2022) Front. Sports Act. Living 4:986875. doi: 10.3389/fspor.2022.986875. An Erratum on: An omission to the funding section of the original article was made in error. The following sentence has been added: \u201cOpen access funding was provided by the University of Lausanne.\u201d The original version of this article has been updated."} +{"text": "Intradermal testing with COVID-19 mRNA vaccines predicts tolerance By Stehlin F, Mahdi-Aljedani R, Canton L, Monzambani-Banderet V, Miauton A, Girard C, Kammermann K, Meylan S, Ribi C, Harr T, Yerly D and Muller YD. (2022) Front. Allergy 3:818049. doi: 10.3389/falgy.2022.818049. An Erratum on: An omission to the funding section of the original article was made in error. The following sentence has been added: \u201cOpen access funding provided by University of Lausanne\u201d. The original version of this article has been updated."} +{"text": "Hydromorphone prescription for pain in children\u2014What place in clinical practice? By Rodieux F, Ivanyuk A, Besson M, Desmeules J and Samer CF. (2022) Front. Pediatr. 10:842454. doi: 10.3389/fped.2022.842454. An Erratum on: An omission to the funding section of the original article was made in error. The following sentence has been added: \u201cOpen access funding was provided by the University of Geneva\u201d. The original version of this article has been updated."} +{"text": "Long term NIV in an infant with Hallermann-Streiff syndrome: A case report and overview of respiratory morbidity By Guerin S, Blanchon S, de Halleux Q, Bayon V and Ferry T. (2022) Front. Pediatr. 10:1039964. doi: 10.3389/fped.2022.1039964. An Erratum on: An omission to the funding section of the original article was made in error.
The following sentence has been added: \u201cOpen access funding was provided by University of Lausanne\u201d. The original version of this article has been updated."} +{"text": "Prognostic impact of physical activity patterns after percutaneous coronary intervention. Protocol for a prospective longitudinal cohort. The PIPAP study By Gonzalez-Jaramillo N, Eser P, Casanova F, Bano A, Franco OH, Windecker S, R\u00e4ber L and Wilhelm M. (2022) Front. Cardiovasc. Med. 9:976539. doi: 10.3389/fcvm.2022.976539. An Erratum on: An omission to the funding section of the original article was made in error. The following sentence has been added: \u201cOpen access funding was provided by the University of Bern\u201d. The original version of this article has been updated."} +{"text": "Genotype-specific ECG-based risk stratification approaches in patients with long-QT syndrome By Rieder M, Kreifels P, Stuplich J, Ziupa D, Servatius H, Nicolai L, Castiglione A, Zweier C, Asatryan B and Odening KE (2022) Front. Cardiovasc. Med. 9:916036. doi: 10.3389/fcvm.2022.916036. An Erratum on: An omission to the funding section of the original article was made in error. The following sentence has been added: \u201cOpen access funding was provided by the University of Bern\u201d. The original version of this article has been updated."} +{"text": "Multi-organ failure caused by lasagnas: A case report of Bacillus cereus food poisoning By Thery M, Cousin VL, Tissieres P, Enault M and Morin L. (2022) Front. Pediatr. 10:978250. doi: 10.3389/fped.2022.978250. An Erratum on: An omission to the funding section of the original article was made in error. The following sentence has been added: \u201cOpen access funding was provided by the University of Geneva\u201d. The original version of this article has been updated."} +{"text": "The LAUsanne STAPHylococcus aureus ENdocarditis (LAUSTAPHEN) score: A prediction score to estimate initial risk for infective endocarditis in patients with S.
aureus bacteremia By Papadimitriou-Olivgeris M, Monney P, Mueller L, Senn L and Guery B. (2022) Front. Cardiovasc. Med. 9:961579. doi: 10.3389/fcvm.2022.961579An Erratum on An omission to the funding section of the original article was made in error. The following sentence has been added: \u201cOpen access funding was provided by the University of Lausanne\u201d.The original version of this article has been updated."} +{"text": "Levelling the playing field: the role of workshops to explore how people with parkinson\u2019s use music for mood and movement management as part of a patient and public involvement strategy By Rose DC, Poliakoff E, Hadley R, Gu\u00e9rin SMR, Phillips M and Young WR (2022) Front. Rehabilit. Sci. 3:873216. doi: 10.3389/fresc.2022.873216An Erratum on An omission to the funding section of the original article was made in error. The following sentence has been added: \u201cOpen access funding was provided by Lucerne University Of Applied Sciences And Arts\u201d.The original version of this article has been updated."} +{"text": "A new dynamic word learning task to diagnose language disorder in French-speaking monolingual and bilingual children By Matrat M, Delage H and Kehoe M. (2023) Front. Rehabilit. Sci. 3:1095023. doi: 10.3389/fresc.2022.1095023An Erratum on An omission to the funding section of the original article was made in error. The following sentence has been added: \u201cOpen access funding was provided by the University of Geneva\u201d.The original version of this article has been updated."} +{"text": "Surgery\u2019s role in contemporary osteoarticular infection management By De Marco G, Vazquez O, Gavira N, Ramadani A, Steiger C, Dayer R and Ceroni D. (2022) Front. Pediatr. 10:1043251. doi: 10.3389/fped.2022.1043251An Erratum on An omission to the funding section of the original article was made in error. 
The following sentence has been added: \u201cOpen access funding was provided by the University Of Geneva\u201d.The original version of this article has been updated."} +{"text": "Is Urology a gender-biased career choice? A survey-based study of the Italian medical students' perception of specialties By Reale S, Orecchia L, Ippoliti S, Pletto S, Pastore S, Germani S, Nardi A and Miano R. (2022) Front. Surg. 9:962824. doi: 10.3389/fsurg.2022.962824An Erratum on An omission to the funding section of the original article was made in error. The following sentence has been added: \u201cOpen access funding was provided by the University of Lausanne\u201d.The original version of this article has been updated."} +{"text": "Pseudo-feeders as a red flag for impending or ongoing severe brain damage in Vein of Galen aneurysmal malformation By Saliou G and Buratti S. (2022) Front. Pediatr. 10:1066114. doi: 10.3389/fped.2022.1066114An Erratum on An omission to the funding section of the original article was made in error. The following sentence has been added: \u201cOpen access funding was provided by the University of Lausanne\u201dThe original version of this article has been updated."} +{"text": \ No newline at end of file