diff --git "a/deduped/dedup_0531.jsonl" "b/deduped/dedup_0531.jsonl"
new file mode 100644
--- /dev/null
+++ "b/deduped/dedup_0531.jsonl"
@@ -0,0 +1,41 @@
+{"text": "The detection limits were estimated as 22, 60, 25, and 60 ng mL-1 for iron, nitrite, phenol, and carbaryl at the 99.7% confidence level, with RSDs of 2.3, 1.0, 1.8, and 0.8%, respectively. Reagent and waste volumes were lower than those obtained by flow systems with continuous reagent addition. Sampling rates of 100, 110, 65, and 72 determinations per hour were achieved for iron, nitrite, phenol, and carbaryl determinations. A portable flow analysis instrument is described for in situ photometric measurements. This system is based on light-emitting diodes (LEDs) and a photodiode detector, coupled to a multipumping flow system. The whole equipment presents dimensions of 25 \u2009cm \u00d7 22 \u2009cm \u00d7 10 \u2009cm, weighs circa 3 kg, and costs 650 \u20ac. System performance was evaluated for different chemistries without changing hardware configuration for determinations of (i) Fe"}
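The 99.7% confidence detection limits quoted above correspond to the common 3-sigma criterion: three standard deviations of repeated blank measurements divided by the calibration slope. A minimal sketch of that calculation, with invented blank readings and an invented slope (not values from the study):

```python
import statistics

def detection_limit(blank_signals, slope):
    """3-sigma (99.7% confidence) detection limit: 3 * sd(blank) / slope."""
    sd_blank = statistics.stdev(blank_signals)  # sample standard deviation
    return 3 * sd_blank / slope

# Hypothetical blank absorbances (AU) and calibration slope (AU per ng/mL)
blanks = [0.0102, 0.0098, 0.0105, 0.0099, 0.0101, 0.0097, 0.0104]
slope = 0.00004
print(round(detection_limit(blanks, slope), 1), "ng/mL")
```

With these illustrative numbers the limit comes out near the low tens of ng/mL, the same order as the values reported above.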
+{"text": "Although new technologies are now emerging, at present the major resources for open-type analysis are the many publicly available SAGE and MPSS libraries. These technologies have never been compared for their utility in the context of deep transcriptome mining. Deep transcriptome analysis will underpin a large fraction of post-genomic biology. 'Closed' technologies, such as microarray analysis, only detect the set of transcripts chosen for analysis, whereas 'open' technologies can in principle detect any transcript. We used a single LongSAGE library of 503,431 tags and a \"classic\" MPSS library of 1,744,173 tags, both prepared from the same T cell-derived RNA sample, to compare the ability of each method to probe, at considerable depth, a human cellular transcriptome. We show that even though LongSAGE is more error-prone than MPSS, our LongSAGE library nevertheless generated 6.3-fold more genome-matching (and therefore likely error-free) tags than the MPSS library. An analysis of a set of 8,132 known genes detectable by both methods, and for which there is no ambiguity about tag matching, shows that MPSS detects only half (54%) the number of transcripts identified by SAGE. Analysis of two additional MPSS libraries shows that each library samples a different subset of transcripts, and that in combination the three MPSS libraries still only detect 73% of the genes identified in our test set using SAGE. The fraction of transcripts detected by MPSS is likely to be even lower for uncharacterized transcripts, which tend to be more weakly expressed. The source of the loss of complexity in MPSS libraries compared to SAGE is unclear, but its effects become more severe with each sequencing cycle (i.e. as MPSS tag length increases). We show that MPSS libraries are significantly less complex than much smaller SAGE libraries, revealing a serious bias in the generation of MPSS data unlikely to have been circumvented by later technological improvements.
Our results emphasize the need for the rigorous testing of new expression profiling technologies. In recent years, a number of techniques have emerged for large-scale gene expression analysis. Most are designed to compare the expression of many genes between cell types or under a number of different conditions. However, there has also been interest in techniques capable of identifying the complete transcriptome of a given cell or tissue. 'Closed' architecture systems, such as microarrays, are less suited to this application because they are limited by the extent to which global transcriptome coverage has been achieved. Even in organisms such as Homo sapiens, where a complete genome sequence is now available, there remains uncertainty regarding the actual number of transcribed regions. This is true in the case of conventional genes and even more so if regions thought to yield polyadenylated non-coding RNAs are included. Much use has therefore been made of 'open' gene-expression profiling methods, which require no a priori knowledge of the genes likely to be of interest. Many of these methods may allow the detection of transcripts expressed only at, e.g., very specific points in the cell cycle or in response to particular levels of cellular stress that only apply to subsets of the cell population. One caveat is that background noise in the data, e.g. due to contaminating species, degradation or mis-priming, may limit the maximum sensitivity that can be achieved.
Short sequence tag-based profiling was pioneered by Velculescu et al. in the form of conventional SAGE, which produces 14 bp tags anchored at the 3'-most NlaIII site in polyadenylated transcripts. This might be thought to be sufficient to map each tag uniquely to the transcriptome, but in practice many such short tags do not map uniquely. The disadvantage of sequencing very short tags is that it compromises identification of the transcripts corresponding to each tag. Ideally, every tag would map uniquely to both the genome and the transcriptome, and every transcript would be represented by at least one tag. Other tag-based methodologies, especially those for gene identification and establishing transcriptional start points, have since been developed that generate longer tags from the 3' or 5' ends of transcripts, or both (reviewed elsewhere). LongSAGE is a modification of the standard SAGE protocol using a different type II restriction enzyme (MmeI rather than BsmFI) to generate a 21 bp tag at the anchor site (which remains NlaIII). MPSS generated 20 bp tags anchored at the 3'-most DpnII sites in transcripts, in a similar manner to SAGE. The unique feature of MPSS was the proprietary, bead-based sequencing technology, which was more efficient than standard Sanger sequencing and yielded far larger tag counts. As both methods significantly increase tag length compared to conventional SAGE, they were expected to improve the prospects for unique genome and transcriptome tag mapping, as suggested by the pilot-scale use of LongSAGE for genome annotation. Since many transcripts are present at very low abundance, i.e. less than two tags per million (tpm), full transcriptome coverage using tag-based methods can only be guaranteed if libraries containing several times this many tags are fully sequenced. Due to the efficiency of MPSS sequencing, it became feasible to sequence well in excess of 1 million tags per sample at a fraction of the cost of sequencing a similar number of LongSAGE tags. It has seemed, therefore, that MPSS was the technology most likely to offer the depth of sampling required for whole transcriptome coverage, but this has not been adequately tested. Although the ability of the MPSS and LongSAGE methods to identify abundant or differentially expressed genes has been compared, their capacity to provide complete transcriptome coverage has not.
The number of transcripts expressed in a single cell can vary considerably depending on cell type, among other factors, but it has been estimated that a 'typical' human somatic cell contains ~400,000 mRNA molecules. We previously analysed a CD8+ T-cell clone using conventional SAGE. Here, we compare the ability of LongSAGE and MPSS to probe a single cellular transcriptome in depth. Before undertaking a direct comparison of the two transcriptome-profiling methods, we consider their systematic limitations, as previously done in a generalised way. First, for enzymes such as NlaIII (as in SAGE) or DpnII (as in MPSS), which each have four-base recognition sites, the recognition site ought to be present, on average, every 256 base pairs. However, some transcripts will not have these sites, and both SAGE and MPSS are expected to be similarly affected. There are 13,665,294 and 410,369 NlaIII sites in the human genome and in all the human sequences in Release 19 of the RefSeq database, respectively. The corresponding numbers of DpnII sites are 7,112,355 and 253,936, so this site is rarer, suggesting that the ability of MPSS to tag more transcripts is in this way compromised. In RefSeq, excluding predicted transcripts from the genome, the proportion of cDNAs lacking the LongSAGE recognition site is less than 0.6%, whereas the proportion lacking the MPSS site is substantially higher, at ~2.3% (552). In terms of the total pool of transcripts, these numbers are relatively small, but cannot be overlooked if the entire transcriptome of a cell is to be identified. A better strategy would involve a combination of sites: only 39 of the 24,261 human sequences in RefSeq Release 19 lack both NlaIII and DpnII recognition sites. A second limitation of tag-based methods is the difficulty of matching each tag to a unique transcript. Analysis of NlaIII and DpnII sites in the human genome demonstrates the effect of tag length on transcript identification [see Additional file].
The single most important benefit of open expression technologies is their ability to identify previously uncharacterised genes, which requires that novel tags can be linked to sequenced transcripts or, if they have not been previously identified, to the genome. LongSAGE and MPSS libraries were prepared from a single sample of RNA extracted from a CD4+ T-cell clone (clone 29) activated with beads coated in anti-CD3 and anti-CD28 antibodies. Clone 29 was derived from the peripheral blood mononuclear cells of a subject given a modified vaccinia virus Ankara (MVA) vaccine containing a polyprotein made from HIV-1 gag fused to a string of cytotoxic T-cell epitopes as part of a vaccine trial. FACS analysis indicated that, prior to library generation, the activated T-cell clone expressed CD4, CD28, CD45 and CD69, but not CD27 or CD62L (data not shown). The LongSAGE data perfectly matched the FACS results and revealed the expression of each of the classical T-cell markers, i.e. all TCR/CD3 components, CD2, CD4, CD5, CD6, CD11a (LFA-1a), CD43, CD45 and CD53. The MPSS library, however, lacked tags corresponding to both CD3\u03b3 and CD69. The LongSAGE CD69 transcript tag derived from the 3' untranslated region (UTR), upstream of the only DpnII site in the full length cDNA. Between the NlaIII site and the DpnII site there is a potential polyadenylation signal, suggesting that alternative polyadenylation could be responsible for the absence of a CD69 MPSS tag. Even though CD3\u03b3 is the most weakly expressed transcript of those tested here, the lack of any MPSS tags derived from transcripts of CD3\u03b3 is very surprising, given the supposedly increased depth of the MPSS libraries compared to SAGE. Taken in isolation, this finding could have implied that there is an additional region of the CD3\u03b3 3' UTR containing a potential MPSS tag that is not recorded in the main DNA sequence databases. However, in a second MPSS library made from the same mRNA sample the CD3\u03b3 tag was represented at 9.5 tags per million. Thus, the tag is produced and, given its expression level, should be found in every library of this size provided that every transcript is equally likely to be sampled. This provided the first indication of MPSS sampling problems, despite the size of MPSS libraries. Ideally, the level of expression of every distinct transcript identified by the two methods would be compared. However, ambiguities in tag-to-gene mapping and differences in tag anchoring sites mean that different populations of potential tags will be sampled in each case, making such comparisons non-trivial. Therefore, a set of test transcripts that contain both NlaIII and DpnII sites, and for which the potential tags at all such sites are unique in both the human genome and the Ensembl transcriptome, was extracted from Ensembl. Correlations with non-CD4+ T cell-derived LongSAGE libraries were also computed; for example, one comparison with an activated CD8+ T cell-derived LongSAGE library yielded a correlation of 0.55. Importantly, however, the correlation between the libraries produced from the one RNA sample using the two methods was far lower than that for LongSAGE libraries produced from distinct cell populations. The coefficient obtained for a comparison of our activated CD4+ T-cell LongSAGE library with the activated CD8+ T-cell library referred to above, for example, is 0.76, and when our library is compared to a second LongSAGE library of similar size generated from the same cells in the \"resting\" state, i.e. prior to activation with anti-CD3 and anti-CD28 antibody coated beads, the correlation coefficient is 0.88. Given that both methods are believed to be generally reproducible, sequencing errors were controlled for by keeping only tags that matched either the genome or the known transcriptome.
Some genuine tags carrying polymorphisms unrepresented in the databases, or for which no cDNA sequence is available and a splice junction or polyadenylation occurs within the tag, are likely to be removed. However, as these are comparatively rare events, this is not expected to have a large effect on library complexity. The rate of addition of novel tag sequences to the library provides a measure of whether a given library is large enough to identify every potential tag sequence in the initial sample, since, when all existing tags have been sequenced, this rate should approach zero. As expected, given their relative sizes, this appears to be the case for the MPSS but not the LongSAGE library. However, the rate of novel tag addition is likely to be artificially increased in the LongSAGE library by the accumulation of sequencing errors; MPSS has a lower per-base error rate (~0.25%). A simple correction for this is the removal of error-containing tags. By this measure, the MPSS library appears to have been sequenced to completion (i.e. the number of unique sequences identified has reached its maximum). On the other hand, while the all-tag analysis suggests that a LongSAGE library needs to be substantially larger than 500,000 tags to sample all transcripts in the cDNA pool, the analysis of known genes does not. This difference is not surprising because known genes are likely, on average, to be expressed at a higher level than novel transcripts, aiding their initial identification. Larger SAGE libraries would therefore be required to identify a full set of unconventional transcripts, i.e. non-protein encoding transcriptional units absent from gene databases. At the same sampling depth (i.e. 500,000 tags) there are many more distinct tag sequences in the LongSAGE library than in the MPSS library. Allowing for differences in sequencing error rate by considering only tags that match the human genome, LongSAGE identifies 7.4-fold more unique tag sequences than MPSS.
Even using the entire MPSS library, which is 3 times the size of the SAGE library, MPSS identifies 6.3-fold fewer tags than SAGE. Although up to half of this difference may be accounted for by the lower number of DpnII sites in the genome, a ~3-fold reduction in the number of distinct species identified by a method intended to analyse samples to a greater depth is unexpected. This large difference suggests either that the LongSAGE library contains many spurious tags randomly matching genomic sequences or that the MPSS library lacks many genuine tags, despite the sequencing of tags from every captured transcript. Great sampling depth is only of value if the open expression technology identifies transcripts irrespective of their sequence. The average total SAGE tag count for a transcript is 45 tags per million (tpm), whereas for transcripts identified by both methods it is 65 tpm and for those identified by LongSAGE only it is 23 tpm. This suggests that MPSS fails to detect weakly expressed transcripts. Since this is not what is expected of a method capable of sampling many more tags than SAGE, it implies that there are systematic biases in MPSS sequencing, or in library production, or both. A trivial explanation for these results is that there is DNA contamination of the LongSAGE library but not the MPSS library. It is of course very difficult to prove that there has been no contamination of a library when deep transcriptome analysis of the given cell has not been undertaken previously. Clearly, every care was taken to ensure that there was no contamination of the libraries at any stage. However, if the SAGE library was contaminated after the initial RNA sample was divided, there are three possible sources of contaminating DNA that could explain our results (i.e. that generated matches to the human genome): human genomic DNA, DNA from other human transcripts, or ditags from previously generated human SAGE libraries. The only LongSAGE library previously produced in our laboratory was derived from anti-CD3 antibody-treated CD8+ T-cells. In this library, there are very high tag counts for tags derived from transcripts encoding CD8 and several other molecules that are completely absent from the CD4+ T cell-derived SAGE library. Similarly, tags that are extremely abundant in both the activated CD8+ T cell-derived library and our activated CD4+ T cell-derived library are completely absent in another large resting CD4+ T cell-derived library, e.g. CCL4L1 at 1688 tpm, 2029 tpm and 0 tpm, respectively. Thus, library cross-contamination seems unlikely. In addition, the new libraries did not contain any tags derived from transcripts encoding markers of cells that are likely sources of cDNA contamination in our laboratory, e.g. B cells, myeloid cells or keratinocytes. Finally, in the case of genomic DNA contamination, the abundance of contaminating tags would be expected to correlate directly with the number of copies of that sequence found in the human genome. However, the abundance distribution for SAGE tags from UTBS transcripts detected by SAGE only is equivalent to that of all the tags matching UTBS transcripts detected by SAGE, as well as those detected by both SAGE and MPSS [see Additional file]. Only 2,646 transcripts are identified in the UTBS dataset when the three MPSS libraries are combined, which suggests that random sampling during MPSS library preparation has a large effect on the resulting 'bead library', profoundly reducing its complexity. The loss of species as sequencing proceeds further suggests that sequencing length has an effect on the complexity of the library: the longer the tag sequence, the smaller the number of unique tags that are sequenced.
The deep sampling by LongSAGE, and to a smaller degree by MPSS, means that tags can be matched to genomic regions where transcripts have not previously been identified or predicted by Ensembl. However, many of these matches are unlikely to correspond to actual transcriptional loci, as tags may match more than one genomic site or may represent sequencing errors arising fortuitously from more abundant tags that match the genome elsewhere. On the other hand, it is likely that loci identified by both methods will represent genuine regions of transcription. To investigate the likely numbers of new transcriptional loci identifiable using this approach, strict criteria were used to identify regions where transcription was detected by both methods. Tags were required to match the genome only once, at a position where no known Ensembl genes are annotated within 5000 bases in the sense or antisense direction. Of all the tags, only 5,975 unique LongSAGE tags and 392 MPSS tags satisfied these criteria (using only the first of the three MPSS libraries). The genomic matches to the tags in both lists were then examined in order to ascertain whether they could be part of the same gene. If a LongSAGE tag matched the genome within 5,000 bases of, and on the same strand as, an MPSS tag, this pair of tags was considered to define a potentially new transcriptional locus. These loci therefore represent possible transcriptional loci for which no clear evidence has previously been obtained; the pairs of tags are listed in Additional file. About half of the pairs (i.e. 72) are found in genomic regions masked in Ensembl, which are more difficult to analyse by other methods owing to the presence of repetitive elements. Identifying the transcripts corresponding to all these novel loci should be relatively simple using both tags as primers for direct PCR or nested 5' rapid amplification of cDNA ends (RACE). Overall, these data indicate that bona fide new transcriptional loci remain to be discovered.
This procedure identified only 147 tag pairs, none of which occur within 5,000 bases of predicted genes in Ensembl Release 40. We have described the production of large LongSAGE and MPSS libraries from a single RNA sample and consider their usefulness for identifying the complete transcriptome of a clonal population of cells, including transcripts not expressed in all cells and hence present on average at less than one copy per cell. The two methods give very different estimates of the number of genes expressed by a single cell. Whether by counting the number of genomic loci represented or by extrapolation from the number of known genes found, the SAGE tags sequenced are estimated to represent 20,000\u201330,000 transcripts, whereas the MPSS tags represent 7,000\u20139,000 transcripts. The total number of genes in the human genome is still being debated, but the current consensus places it under 30,000 protein encoding genes. It should be noted, however, that some transcripts are lost in the course of sequencing, as demonstrated by the loss of library complexity in terms of the number of species identified as tag sequences are extended from 14 to 17 to 20 bp. We are not the first to identify such effects. MPSS sequencing proceeds in four-base steps yielding four-base \"words\", and it has been noted by Meyers et al. that certain words (e.g. TTAA) lead to the loss of the corresponding tags. Tags generated by DpnII cleavage vary significantly in length (i.e. the distance between the cleavage site and the end of the cDNA). The effect of tag position within the cDNA on the observed abundance of MPSS tags has been analysed by Chen and Rattray. One remedy would be to release tags with MmeI after cleavage with DpnII and ligation of a linker, so that the same length of sequence, i.e. the tag, is immobilised on the beads in each case. This approach is analogous to the SAGE process, which ensures that all ditags amplify uniformly. Tag-position bias is likely to affect the observed abundance of many tags in our libraries.
However, if the library is sequenced to completion, as our data suggest is the case for MPSS, tag-position bias should affect the observed abundance of tags rather than whether they are detected at all. Our LongSAGE library was generated according to the standard protocol using the I-SAGE Long kit from Invitrogen and was sequenced on an ABI 3700 capillary DNA sequencer using BigDye v3 terminators (Applied Biosystems) to a depth of 503,431 tags. MPSS libraries were produced from the same RNA sample as the LongSAGE library by Lynx Therapeutics Inc under their standard service agreement. They initially provided a library of 1,744,173 tags and then two further libraries of 1,573,952 and 956,867 tags, respectively, giving a total of 4,274,992 MPSS tags. All three libraries were provided as 20 bp reads; reads sequenced using \"steppers\" 2 and 4, i.e. tag lengths of 14 and 17 bases including the DpnII site, were also provided. The data source for genome and transcriptome data was Ensembl. Tags were extracted from transcripts at each anchoring site (NlaIII and DpnII) in both the sense and antisense direction. If a tag extended beyond the known 3' end of a transcript, it was extended along the genome unless the transcript was predicted to contain a polyA site. Further information was then extracted for each tag: three windows were examined, both up- and downstream of the tag, for the presence of gene annotation. For each window, all Ensembl genes and predicted genes were recorded. The genomic tags were then classed as being outside any known gene (default), or as exonic, intronic or boundary (i.e. crossing an exon-intron boundary), or as matching multiple genes. Information from the extractions described above was combined for automated tag-to-gene mapping. First, frequencies for each tag, in both the genome and transcriptome, were calculated, and then each tag was matched to the genome and classified as one of the following: single match, multiple match, no match or excess matches (more than 20 hits to the genome). No further analysis was undertaken for the excess matches.
For single matches, and multiple matches where only one match occurred in or near a known gene, tags were further annotated as matching the gene or the region downstream of a gene in a sense or antisense direction. Tags matching the known transcriptome were also categorised as matching a known transcript in the sense or antisense direction or matching multiple known transcripts. The UTBS transcript set described in the text was produced by identifying all known Ensembl transcripts that contained at least one restriction site for each enzyme (DpnII and NlaIII) and for which all sense and antisense tags in all exons of the gene encoding that transcript were unique within the transcriptome and within the genome. Some tags may be absent from the genome due to splicing, polyadenylation and the fact that the genome is not complete. This set consisted of 8132 genes. For each gene, all possible tags derived using either method were extracted, and expression of the gene was calculated as the sum of the abundances of its corresponding tags. The authors declare that there are no competing interests. LH devised the basic computational approaches, carried out the initial data analysis, identified the discrepancies discussed and drafted an initial report. VBS devised an improved bioinformatic strategy, undertook the rigorous testing of these results and participated in formulating the manuscript. MTV and SHIA designed and carried out the library production and data acquisition procedures. JKS and SLRJ produced the biological samples necessary for the work and undertook their analysis and testing. SJD conceived the study, participated in its design and co-wrote the manuscript. EJE coordinated and planned the detailed study, participated in the data analysis and interpretation and co-wrote the manuscript. All authors read and approved the final manuscript. Effect of tag length on frequency of matches to the genome and transcriptome.
Additional figure showing a histogram of the frequencies of every tag found in the Ensembl genome and transcriptome for various combinations of tagging enzyme and tag length. Number of transcriptional loci identified. Additional table showing the number of different active transcriptional loci identified in the same cell sample by either SAGE or MPSS according to the method described in the text when various alternative parameters are used. Comparisons of tag abundance distributions for LongSAGE tags from the activated CD4+ T-cell library matching UTBS transcripts according to whether the transcripts are also detected by MPSS. Additional figure comparing the apparent frequency distributions of transcripts from known genes according to whether their corresponding tags were found by SAGE and MPSS or exclusively by one of these techniques, in order to demonstrate that transcripts detected only by SAGE did not represent a fixed level of genomic contamination. Novel loci of transcription identified by combining LongSAGE and MPSS. Additional table listing all the pairs of SAGE and MPSS tags found close together in genomic regions with no previously annotated transcriptional locus nearby."}
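Both methods anchor a tag at the 3'-most occurrence of a four-base recognition site (NlaIII, CATG, for LongSAGE; DpnII, GATC, for MPSS), a site expected on average every 4^4 = 256 bp. A minimal sketch of this anchoring logic, using a toy cDNA sequence (not from the study) and the 21 bp/20 bp tag lengths the text describes:

```python
def three_prime_tag(cdna, site, tag_len):
    """Return the tag starting at the 3'-most occurrence of the anchor site,
    or None if the transcript lacks the site (and so cannot be tagged)."""
    pos = cdna.rfind(site)  # 3'-most occurrence
    if pos == -1:
        return None
    return cdna[pos:pos + tag_len]

# Toy cDNA containing two CATG (NlaIII) and two GATC (DpnII) sites
cdna = "AAACATGTTTGGGGATCCCCCATGAAAAATTTTTGATCGGGGGCCCCCAAAAAAA"

longsage = three_prime_tag(cdna, "CATG", 21)  # 21 bp incl. the CATG anchor
mpss     = three_prime_tag(cdna, "GATC", 20)  # 20 bp incl. the GATC anchor
print(longsage)  # tag from the 3'-most NlaIII site
print(mpss)      # tag from the 3'-most DpnII site
```

A transcript lacking the anchor site returns None, which corresponds to the fraction of cDNAs (~0.6% for NlaIII, ~2.3% for DpnII) that neither method can tag.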
+{"text": "Analysis of a 2.6 million longSAGE sequence tag resource generated from nine human embryonic stem cell lines reveals an enrichment of RNA binding proteins and novel ES-specific transcripts. To facilitate discovery of novel human embryonic stem cell (ESC) transcripts, we generated 2.5 million LongSAGE tags from 9 human ESC lines. Analysis of these data revealed that ESCs express proportionately more RNA binding proteins compared with terminally differentiated cells, and identified novel ESC transcripts, at least one of which may represent a marker of the pluripotent state. Embryonic stem cells (ESCs) can be derived from the inner cell mass of blastocysts and are defined by their ability to be propagated indefinitely as undifferentiated cells with the potential, upon appropriate stimulation, to generate cell types representing all three embryonic germ layers. Microarray-based approaches have been used to define the transcriptomes of numerous human ESC lines, including BG01, BG02, WA01, WA07, WA09, WA13, WA14, TE06, UC01 and UC06. LongSAGE libraries were constructed using total RNA purified from nine different human ESC lines cultured as undifferentiated cells by serial passaging on mouse embryonic fibroblast (MEF) feeder layers (Table 1). To investigate the similarities and differences between the libraries, we performed hierarchical clustering using Pearson correlation coefficients. To assess the representation of known genes in the nine human ESC transcriptomes, we compared our data to other human sequence tag-based resources. ESCs expressed proportionately more transcripts encoding RNA binding and mitochondrial proteins than terminally differentiated cells (P = 1.8 \u00d7 10-7 and 1.0 \u00d7 10-6, respectively, by one-sided t-tests). Examination of non-canonical tags identified transcript isoforms whose representation differed significantly between libraries (P < 3.0 \u00d7 10-5; Additional data file 4). The most significantly affected transcript encoded Secreted frizzled-related protein-1 (Sfrp1), a well characterized antagonist of WNT signaling. Our analysis suggested that the two isoforms of Sfrp1 we identified either retained or lost the 3' untranslated region, which we examined further by RT-PCR.
Interestingly, there was limited concordance between the gene lists, perhaps reflecting the different anchoring enzymes used (NlaIII for SAGE and DpnII for MPSS) and the fact that different mRNA preparations were used in each study. To further explore this lack of concordance, we compared the longSAGE- and MPSS-derived gene lists to a common gene list derived from Affymetrix expression arrays generated from the same RNAs used to construct our LongSAGE libraries. To generate a list of transcripts common to all libraries (excluding the HSF-6 library because of the differentiation markers found therein), we first identified tags from each library that uniquely mapped to transcripts within RefSeq. LongSAGE offers opportunities for discovering novel transcripts. These can be identified as tags that map uniquely to the genome but not to any available transcript resources. To look for these, we used the 2.5 million tag meta-library, which contained 379,645 unique tag sequences. Grouping LongSAGE tags that mapped to genomic locations in close proximity to one another reduced this set to clusters of candidate loci. To further characterize these putative novel, low-abundance, ESC library-specific transcripts, we compared the ESC meta-library to publicly available data derived from 247 non-ESC SAGE libraries that together contained 654,491 unique tag sequences. This comparison identified 20,047 tag sequences found only in the human ESC meta-library. For subsequent analyses, we focused on those tags that uniquely mapped at least 2 kb away from any known gene. This analysis reduced the number of tags to 634, of which 301 were found within genomic regions exhibiting sequence conservation between human and mouse or rat. We used rapid amplification of cDNA ends (RACE) to clone a subset of these transcripts. Apart from an ORF overlapping the MAPK2 gene, none of the identified ORFs demonstrated Ka/Ks ratios suggestive of purifying selection. Hence, most of these transcripts are unlikely to encode proteins. One RACE clone, HA_003240, contained the Foxb1 gene within its first intron.
Foxb1 encodes a winged helix transcription factor involved in the development of the vertebrate central nervous system and Foxb1-/- mice display phenotypes consistent with a requirement for this gene in both embryonic and postnatal stages of development [Foxb1 transcripts except for a single Foxb1 tag in the HSF-6 library. This general lack of Foxb1 expression in ESCs and the genomic location of the Foxb1 gene within the first intron of HA_003240 are consistent with the notion that Foxb1 expression is repressed by expression of HA_003240, possibly by steric inhibition of the transcription initiation complex [Oct4 encodes a transcription factor that regulates a number of key human ESC markers, including Nanog, through co-operative binding with a Sox family member [Foxb1 locus is intriguing and suggests an interesting mechanism for negatively regulating Foxb1 expression in Oct4-expressing cells.Four RACE clones were found to have genomic coordinates that overlapped with those of known transcripts . One of these .To more fully characterize a transcript identified by a singleton tag , we attempted to recover a full length transcript using 5' and 3' RACE and primers annealing within the terminal exon of the putative transcript. Alignment of the resulting candidate full length sequence to the human genome revealed a transcript that contained two introns with matches to other ESTs, of which 7 were found only in data derived from pluripotent human ESC lines. One RACE clone that overlapped an EST derived from pluripotent human ESC lines (HA_003152) was also found to be expressed in all nine ESC lines studied here. BLAT alignmenOct4, Lin28 and Msx1 in the same RNA preparations. Figure Oct4 and Lin28 in the human ESCs stimulated to differentiate into embryoid bodies and an up-regulation of expression of the early differentiation marker Msx1. 
We used qPCR to compare transcript levels in RNA purified from human ESCs maintained under conditions that promote their maintenance in an undifferentiated state to RNA extracts obtained from human ESCs that had been stimulated to differentiate into embryoid bodies. To provide a comparative dataset we selected five additional novel transcripts for qPCR. In all cases, qPCR amplicons were designed to cross exon-exon boundaries. As controls we also monitored expression of Oct4, Lin28 and Msx1. Significant reduction of expression was observed in four of the six transcripts tested, including HA_003152, whose expression was undetectable at d30. As part of the ongoing effort to elucidate mechanisms regulating ESC self-renewal, we generated 2.5 million LongSAGE tags from nine human ESC lines. Comparison of these data to libraries prepared from differentiated tissues identified a group of ESC-library specific transcripts and an enrichment of transcripts encoding mitochondrial and RNA binding proteins (by comparison to differentiated cells). RNA binding proteins play a role in the regulation of mRNA processing, and examination of non-canonical LongSAGE tags in the human ESC libraries suggests that these cells express a distinct collection of gene isoforms. One such isoform may bypass translational down-regulation through the expression of a transcript lacking predicted miRNA target sequences. An emerging theme in digital gene expression profiling is the identification of a large class of transcripts that map uniquely to the genome, but cannot be localized to any known or computationally predicted transcripts. Tags in this class are predominantly found at relatively low levels. Analysis of the 2.5 million LongSAGE tags generated in the course of this study revealed 14,588 such tag sequences, a subset of which were found exclusively in human ESCs. 
As a first step towards understanding the relevance of these transcripts to ESC biology we generated 5' RACE clones for 52 novel apparently ESC-specific transcripts. Analyses of these transcripts revealed that the majority do not appear to encode proteins and do not overlap existing pseudogene predictions. One transcript was found to be expressed across all nine ESC lines we profiled and matched ESTs generated by others from ESCs. Its restricted expression pattern suggests that it may represent a novel transcriptional marker for the maintenance of pluripotentiality. In addition to the discovery of this potential marker, we also identified four novel transcripts that may participate in the regulation of expression of known genes, one of which is known to play a direct role in differentiation. Our analyses indicate that there are many previously undiscovered transcripts expressed in human ESCs and support the contention that sampling of SAGE libraries to depths beyond currently accepted practice is required to fully explore the coding potential of the mammalian transcriptome. To assess possible functions associated with such rare transcripts, we are actively pursuing the cloning and characterization of the remaining novel human ESC-specific transcripts identified in this study. Detailed information regarding the human ESC lines used in this study can be found at the NIH Stem Cell Information website. Tags from the nine LongSAGE libraries were truncated in silico to form 14 bp tags. A total of 2,508,608 tags corresponding to 222,337 unique 14 bp tag sequences were utilized in this analysis. These tags were directly compared to all unique tags from the human SAGE libraries to generate a list of tags found solely in the ESC meta-library. LongSAGE tags of at least 99.9% accuracy were retained, and genomic tag sequences were generated by software as follows: the genome sequence was scanned for NlaIII restriction sites (CATG). 
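The genomic tag extraction just described (scanning the genome for NlaIII CATG anchor sites and retaining only tags that occur at a single genomic position) can be sketched as below. This is an illustrative Python sketch, not the authors' software: function names are invented, and reverse-complement (minus-strand) sites are omitted for brevity.

```python
from collections import Counter

def extract_tags(sequence, tag_len=17, anchor='CATG'):
    # Collect the tag_len bases immediately 3' of every NlaIII anchor
    # site (CATG) on one strand. A LongSAGE tag is the anchor plus 17 bp;
    # truncating to the anchor plus 10 bp yields the 14 bp tags used for
    # comparison with classic SAGE libraries.
    tags = []
    pos = sequence.find(anchor)
    while pos != -1:
        tag = sequence[pos + len(anchor): pos + len(anchor) + tag_len]
        if len(tag) == tag_len:          # skip anchors too близко to the 3' end
            tags.append(anchor + tag)
        pos = sequence.find(anchor, pos + 1)
    return tags

def unique_genomic_tags(tags):
    # Keep only tag sequences seen exactly once, mirroring the subset of
    # genomic tags that map to a single position within the genome.
    counts = Counter(tags)
    return {t for t, n in counts.items() if n == 1}
```

Comparing a library's tags against the `unique_genomic_tags` set then yields unambiguous genomic positions, which can in turn be intersected with an exon table as described below.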
Of these, our analysis defined a subset of 19.4 million genomic tag sequences that were unique within the genome. LongSAGE tags were mapped to known and computationally predicted transcripts using versions of the following databases available as of March, 2005: RefSeq. A second table was generated that stored information about exons: genome sequence contig, transcript orientation, exon number, exon boundary type and nucleotide positions of exon boundaries for all approximately 267,000 exons annotated on release 35 of the Reference Sequence genome. The LongSAGE tag sequences were compared to the unique genomic tag table, yielding sets of genomic positions for all tags in the library. These in turn were compared to the table of exon information, producing a mapping for each tag relative to annotated exons. For the GO category comparisons, a standard t-test comparing two samples was used. The null hypothesis was that the two samples arose from populations with the same mean and standard deviation. The values within each sample were the number of GO categories represented in each library of the set, nine in the ESC set and four in the normal set. To account for variation due to library size, only the transcripts with the top 1,000 expression values were included. A one-sided p value was reported. Microsoft Excel was used to perform the computation. To compare tag frequencies, a p value was computed for the null hypothesis that the two tag frequencies arose from Poisson distributions with the same mean. This was derived using a normal approximation to the Poisson as described by Kal et al. All transcripts with p < 0.05 were selected. Tag counts were converted to tags per million, and transcripts that differed by less than three-fold were eliminated. 
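The tag-frequency comparison and filters just described can be sketched as follows. This is one common formulation of the Kal et al. normal-approximation test (a pooled two-proportion z-test), not necessarily the authors' exact implementation, and the pseudocount in the fold-change filter is an added assumption to avoid division by zero for absent tags.

```python
import math

def kal_test(x1, n1, x2, n2):
    # Two-sided p value for tag count x1 in a library of n1 total tags
    # versus x2 in a library of n2 total tags, using the normal
    # approximation described by Kal et al. under the null hypothesis
    # that both frequencies arise from the same underlying proportion.
    p1, p2 = x1 / n1, x2 / n2
    p0 = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p0 * (1 - p0) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))          # two-sided normal tail

def is_differential(x1, n1, x2, n2, alpha=0.05, min_fold=3.0):
    # Apply the selection used in the text: p < 0.05 and at least a
    # three-fold difference after conversion to tags per million (TPM).
    # The +1 pseudocount is an illustrative guard, not from the paper.
    tpm1 = 1e6 * x1 / n1
    tpm2 = 1e6 * x2 / n2
    fold = (tpm1 + 1) / (tpm2 + 1)
    if max(fold, 1 / fold) < min_fold:
        return False
    return kal_test(x1, n1, x2, n2) < alpha
```

For example, a tag seen 100 times in one million tags versus 10 times in another million passes both filters, whereas 30 versus 15 fails the three-fold criterion despite the two-fold difference.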
To select differentially expressed LongSAGE tags, the ESC and CGN meta-libraries were compared on a tag-per-tag basis. All pairs of tags existing within the same transcript were then listed if the differential expression for the two tags was in the opposite direction. First strand 5' and 3' RACE ready cDNA was synthesized from 2.0 \u03bcg of DNase I treated total RNA using the SuperScript Choice System following the manufacturer's recommended protocol (Invitrogen). Gene specific primer pairs were designed using custom scripts and Primer 3. 5' RACE ready cDNA was also synthesized from DNase I (DNA-free\u2122 kit; Ambion, Austin, TX, USA) treated RNA using the BD SMART RACE cDNA Amplification kit following the manufacturer's recommended protocol. Gene specific 5' RACE primers were designed using custom scripts and Primer 3. RACE products were cloned into a TOPO\u00ae vector using the TOPO TA Cloning\u00ae Kit for Sequencing (Invitrogen). Plasmid vectors were electroporated into bacterial cells, and recombinant clones were selected on agar plates containing appropriate antibiotics as described. RNA was obtained from H9 cells before and after induction of differentiation using a 30-day embryoid body protocol. Undifferentiated H9 cells maintained for 7 days on matrigel in media conditioned by mouse embryonic fibroblasts and supplemented with 4 ng/ml fibroblast growth factor (bFGF-2) were harvested for embryoid body formation. Briefly, the cells were incubated with TrypLE (Invitrogen) for 10 minutes at 37\u00b0C and then collected by scraping. Resultant cell aggregates were subsequently cultured in non-adherent dishes using KOSR-based media without FGF2, for 15 to 30 days. At appropriate time-points RNA was extracted into Trizol. The following additional data are available with the online version of this paper. 
The following additional data files are available: a summary of mouse specific tag types identified; genomic mappings for 268,515 unique tag sequences found in nine independent human embryonic stem cell lines; tag counts for each GO category for the top 1,000 transcripts by tag count; statistically significant differentially expressed LongSAGE tags found between embryonic stem cells and terminally differentiated tissues; the 4,337 genes found in common across 8 undifferentiated human embryonic stem cell lines; the 20,047 LongSAGE tags exclusively expressed in embryonic stem cell lines; the 634 LongSAGE tags exclusively expressed in ESCs that uniquely map to the human genome at least 2 kb away from an annotated transcript; the 301 LongSAGE tags exclusively expressed in ESCs that uniquely map to species conserved regions of the human genome at least 2 kb away from an annotated transcript; the 52 ESC specific transcripts identified by 5' RACE; and the RACE and qPCR primer sequences used in this study."}
+{"text": "Intramembranous bone formation is essential in uncemented joint replacement to provide a mechanical anchorage of the implant. Since the discovery of bone morphogenic proteins (BMPs) by Urist in 1965, many studies have been conducted to show the influence of growth factors on implant ingrowth. In this study, the influence of bone morphogenetic protein-2 (rhBMP-2) and transforming growth factor \u03b22 (TGF-\u03b22) on implant osseointegration was investigated. Thirty-two titanium cylinders were implanted into the femoral condyles of both hind legs of New Zealand White Rabbits. Four experimental groups were investigated: controls without coating, a macromolecular copolymer + covalently bound BMP-2, adsorbed BMP-2, and adsorbed BMP-2+TGF-\u03b22. All samples were analyzed by ex vivo high-resolution micro-computed-tomography after 28 days of healing. Bone volume per total volume (BV/TV) was recorded around each implant. Afterward, all samples were biomechanically tested in a pull-out setup. The highest BV/TV ratio was seen in the BMP-2 group, followed by the BMP-2+TGF-\u03b22 group in high-resolution micro-computed-tomography. These groups were significantly different compared to the control group (P < 0.05). Copolymer+BMP-2 showed no significant difference in comparison to controls. In the pull-out setup, all groups showed higher fixation strength compared to the control group; these differences were not significant. No differences between BMP-2 alone and a combination of BMP-2+TGF-\u03b22 could be seen in the present study. However, the results of this study confirm the results of other studies that a coating with growth factors is able to enhance bone implant ingrowth. This may be of importance in defect situations during revision surgery to support the implant ingrowth and implant anchorage. 
Intramembranous bone formation is essential in uncemented joint replacement to provide a mechanical anchorage of the implant. Thus, the aim of this study was to investigate the influence of bone morphogenetic protein-2 (rhBMP-2) and TGF-\u03b22 on implant osseointegration using high-resolution micro-computed-tomography and biomechanical methods in an animal model of New Zealand White Rabbits. Eight mature New Zealand White Rabbits were used. They were housed in standard laboratory conditions. All animals were fed with autoclaved water and food. All animals were investigated preoperatively by a veterinarian. This included a general health check and an examination for parasites. The study was approved by the Institutional Animal Care and Use Committee, Germany. Thirty-two implants were designed as titanium cylinders with an inner thread for the biomechanical test. The inner thread was designed so that a specially manufactured stem could be screwed in for the pull-out test. In group 3, each cylinder was coated with 50 \u00b5l of BMP2-solution (250 \u00b5g/ml), which was allowed to evaporate overnight under sterile conditions. This serves as a positive control of nonspecifically adsorbed BMP2. In group 4, the use of a solution of BMP2 plus TGF-\u03b22 (12.5 ng/\u00b5l) for nonspecific coating of the cylinders resulted in 12.5 \u00b5g BMP2 plus 625 ng TGF-\u03b22 per cylinder. All four different groups of cylinders were implanted in each animal. The animals were preanesthetized with 25 mg/kg ketamine and 5 mg midazolam intramuscularly. A sterile catheter was placed and the anesthesia was started using propofol. After intubation the anesthesia was maintained with isoflurane and a Ringer solution. A broad-spectrum antibiotic (Tardomyocel comp. III) and an analgesic (buprenorphine) were applied. Surgery was done by one surgeon under the same conditions. The animals were placed in supine position on the operating table. 
After disinfection of both hind legs and sterile coverage of the animal, a small skin incision was made with a scalpel above the patella tendon. The incision was moved easily medially or laterally to perform the subcutaneous incision directly to the bone. This technique was used to minimize infection. The second step was to display the medial condyle of the femur. Using a wound spreader, the periosteum and bone could be easily shown. A hand drill (3 mm diameter) was used to drill a hole to fit the implant. Twenty-eight days after surgery, all animals were euthanized to analyze the early implant osseointegration. All samples were analyzed in an ex vivo high-resolution micro-computed-tomography apparatus using synchrotron radiation, and the region around each implant was analyzed. This nondestructive method enables a fast, three-dimensional, and quantitative measurement of the bone tissue around implants. After a defined segmentation process, a global threshold was defined as the barrier between implant and bone and also between bone and soft-tissue. The analysis produces a quotient of bone volume to total volume (BV/TV). These data can be seen as the bone ingrowth of each implant when compared to the cylinders of the other groups. All samples were embedded in Technovit 4004 and fixed in a special block. The fixation strength was measured by an MTS Mini Bionix 858 Test Star. The mechanical testing was accomplished at a rate of 0.5 mm/sec with the force directed longitudinally to the implant axis. All data were recorded by the Test Star II software for statistical analysis. Mean values and standard deviation were analyzed for all groups. Furthermore, the independent-samples t-test was used to analyze the differences in fixation strength and BV/TV between all groups. P < 0.05 was considered statistically significant. No complications were found during the examination period of 28 days. 
All extracted samples could be used for the high-resolution micro-computed-tomography and biomechanical pull-out test. A region of interest was defined around each cylinder with 300 \u00b5m width. In this three-dimensional area, the software was able to differentiate between bone and nonbone (soft-tissue). To homogenize all received data independently of their implant location and bone stock quality in each animal, the control cylinder was assumed to be 100% bone ingrowth and the coated cylinders were compared relative to the control in percentage. This enabled a comparison of all animals, independent of their individual cancellous bone stock in the condyle. The highest ingrowth of implant was seen in the BMP-2 group (115.4 \u00b1 7.5%), followed by the BMP-2 + TGF-\u03b22 group (113.5 \u00b1 7.2%). The copolymer + BMP-2 group was found to be 103.0 \u00b1 2.3%. The BMP-2 group (P < 0.05) and the BMP-2 + TGF-\u03b22 group (P < 0.05) were significantly different compared to the control group. The copolymer + BMP-2 group was not significantly different compared to the control group (P=0.17). There were no significant differences between groups 2, 3, and 4. The fixation strength was defined as the point of failure during the biomechanical pull-out test, when the implant can be removed. To homogenize all received data independently of their implant location and bone stock quality of each animal, the control cylinder was assumed to be 100% fixation strength and the coated cylinders were compared relative to the control in percentage. This enabled a comparison of all animals, independently of their individual cancellous bone stock in the condyle. The highest pull-out strength was found in the BMP-2 group (192.5 \u00b1 135.4%). The pull-out strength of the copolymer + BMP-2 group was 117.6 \u00b1 49.1% and of the BMP-2 + TGF-\u03b22 group was 113.0 \u00b1 77.4%. There was no significant difference between the control group and any coated group in the pull-out test (P > 0.5), but a trend of increased implant ingrowth was seen, especially in the BMP-2 group. 
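The percent-of-control normalization and independent-samples t-test described above can be sketched as follows; this is an illustrative Python sketch (the study used dedicated statistics software), and the values in the test below are invented, not the study's data.

```python
from statistics import mean, stdev

def percent_of_control(values, controls):
    # Express each animal's measurement (BV/TV or pull-out force) as a
    # percentage of that animal's own control cylinder, as done in the
    # text to factor out per-animal differences in cancellous bone stock.
    return [100.0 * v / c for v, c in zip(values, controls)]

def t_statistic(a, b):
    # Equal-variance independent-samples t statistic (the test named in
    # the text). The p value would come from a t distribution with
    # len(a) + len(b) - 2 degrees of freedom; that lookup is omitted here.
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5
```

For instance, a coated cylinder with BV/TV 23% in an animal whose control cylinder reads 20% is scored as 115% of control; group-wise lists of such percentages are then compared with the t statistic.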
In high-resolution micro-computed-tomography, we found a significant difference of the BMP-2 group and the BMP-2 + TGF-\u03b22 group (P < 0.05), compared to the control group. The combination of copolymer as an anchor for covalent binding of BMP-2 on the titanium surface showed an increase in implant ingrowth, but the difference was not significant. The pull-out test showed the same distribution, but no significant differences between coated implants and the control group. The differentiation of precursor cells into osteoblasts and osteoclasts leads to osteoid production and mineralized bone under the influence of locally acting growth factors. BMPs, as members of the TGF-\u03b2 superfamily, have a variety of functions in the development and repair of bone tissue. Several studies have shown the induction of osteoblast proliferation, differentiation, and influence on bone formation. One previous study demonstrated that 12 \u00b5g TGF-\u03b22 and 25 \u00b5g BMP-2 is the optimum dose. We used 12.5 \u00b5g BMP2 to simulate a more physiological concentration of both growth factors. In the copolymer group, BMP-2 was linked covalently to the copolymer via a limited number of binding sites for the growth factor. This led to a smaller amount of BMP-2 immobilized on the implant surface that was reflected by a minimal increase of bone ingrowth compared to the BMP-2 group and BMP-2 + TGF-\u03b22 group. In the present study, the BMP-2 + TGF-\u03b22 group showed a significant increase in high-resolution micro-computed-tomography but a nonsignificant increase in the mechanical testing. However, we have not found the clear superiority of BMP-2 + TGF-\u03b22 compared to BMP-2 coating of implants as described in other studies. A limitation of the present study may be the lower concentration of growth factors on the implant surface compared to other studies, which makes an interpretation of our results in comparison to other studies more demanding. 
Furthermore, the small number of animals may weaken the statistical analysis, and nonsignificant differences might reach the significance level with a larger number of animals. However, this study does present an increase in implant ingrowth in all three groups compared to the control group, which reflects the potency of growth factors during implant ingrowth. In the future, one has to consider the use of growth factors especially in revision total hip arthroplasty with great loss of bone stock, where they can possibly replace or be added to the bone grafts that are commonly used in uncemented revision cases."}
+{"text": "Triarylpyrroles, e.g. 4c and 4s, inhibit the MDM2\u2013p53 and MDMX\u2013p53 protein\u2013protein interactions. Screening identified 2-(3-((4,6-dioxo-2-thioxotetrahydropyrimidin-5(2H)-ylidene)methyl)-2,5-dimethyl-1H-pyrrol-1-yl)-4,5,6,7-tetrahydrobenzo[b]thiophene-3-carbonitrile as an MDM2\u2013p53 inhibitor (IC50 = 12.3 \u03bcM). MDM2\u2013p53 and MDMX\u2013p53 activity was seen for 5-((1-(4-chlorophenyl)-2,5-diphenyl-1H-pyrrol-3-yl)methylene)-2-thioxodihydropyrimidine-4,6-dione and 5-((1-(4-nitrophenyl)-2,5-diphenyl-1H-pyrrol-3-yl)methylene)pyrimidine-2,4,6-trione, and cellular activity consistent with p53 activation in MDM2 amplified cells. Further SAR studies demonstrated the requirement for the triarylpyrrole moiety for MDMX\u2013p53 activity but not for MDM2\u2013p53 inhibition. Mutation of the TP53 gene occurs in approximately 50% of common adult sporadic cancers, resulting in inactive protein.3 Alternatively, p53 may be silenced by the overexpression of the regulatory proteins MDM2 or MDMX (MDM4).6 MDM2 amplification has been reported to occur in approximately 11% of all tumors and the paralogue MDMX has been reported to be amplified in brain (11%), breast (5\u201340%), and soft tissue tumors (17%). 
Overexpression of MDMX has also been observed in a wider range of tumor types, including uterus (15%), testes (27%), melanoma (65%), stomach/small intestine (43%), and lung (18%).7 The tumor suppressor protein p53 functions as a molecular sensor in diverse signalling pathways resulting from cellular stresses, such as DNA damage, oncogene activation and possibly hypoxia.8 In addition to the MDM2 gene being a target for p53-dependent transcription, MDM2 regulates p53 in an autoregulatory negative feedback loop by binding to the p53 transactivation domain, and acting as an E3-ligase for polyubiquitination of p53 to promote p53 degradation by the ubiquitin-mediated proteasomal pathway.12 MDMX also inhibits p53 transcriptional activity, but does not act as an E3 ligase independently of MDM2, and its expression is not p53 dependent.13 Furthermore, MDMX\u2013MDM2 heterodimers have enhanced E3 ligase activity over MDM2 alone and may be an important mechanism of p53 regulation. The MDM2 and MDMX proteins regulate the activity of p53 with different and non-redundant mechanisms.14 A number of potent MDM2\u2013p53 inhibitors have been reported based on diverse chemotypes,15 such as the cis-imidazoline RG-7112 (IC50 = 12 nM),16 spirooxindoles, e.g. MI-888 (IC50 = 6.8 nM),17 and the substituted piperidone AM-8553 (IC50 = 2.2 nM),18 and have demonstrated cellular activity consistent with inhibition of MDM2\u2013p53 binding and in vivo antitumor activity. However, these series lack significant potency against MDMX,19 and overexpression of MDMX offers a possible mechanism of resistance to such MDM2\u2013p53 inhibitors. 
For this reason compounds able to inhibit both interactions have great significance.20 The MDM2\u2013p53 binding interaction is amenable to small-molecule inhibition, as it consists of a relatively deep binding groove on the surface of the MDM2 protein into which an amphipathic helix of p53 binds.22 To date, there have been few reports of small-molecule MDMX inhibitors. The 5-oxo-pyrazolylidene SJ-172552 was identified in an MDMX high-throughput fluorescence polarisation assay and showed selective MDMX inhibition, through a complex, irreversible mechanism. The 3-imidazolyl indole (1a) is a mixed MDM2\u2013, MDMX\u2013p53 inhibitor, and has provided the first X-ray crystal structure of MDMX bound to a small-molecule ligand.19 A series of MDM2\u2013p53 inhibitory pyrrolidone derivatives also show modest MDMX activity in addition to MDM2 inhibition.23 The indolyl hydantoins, e.g. RO-5963, are the most potent MDM2\u2013p53 and MDMX\u2013p53 inhibitors reported to date.24 In this paper, we describe the discovery, structure\u2013activity relationships (SARs) and cellular activity of triarylpyrrole compounds with promising inhibitory activity against both MDM2\u2013p53 and MDMX\u2013p53. Comparison of molecular models of the triarylpyrroles with a small series of the related diarylpyrrole MDM2\u2013p53 inhibitors demonstrates key structural requirements for mixed MDM2 and MDMX inhibition in this series.25 Follow-up IC50 determinations on active compounds revealed pyrrole 3 as a hit, with an IC50 of 12.3 \u00b1 1.5 \u03bcM against MDM2\u2013p53, which also demonstrated dose-dependent cellular activity by Western blotting for MDM2 and p53 induction. A series of 96 related analogues was purchased, based on similarity searching and visual inspection, and screened for MDM2 activity. 
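As context for the IC50 values quoted throughout, a minimal sketch of the standard one-site Hill model that an IC50 summarizes is shown below. This is an illustrative assumption (simple competitive binding with a floor of 0 and ceiling of 1), not the paper's actual curve-fitting procedure.

```python
def fractional_inhibition(conc, ic50, hill=1.0):
    # Standard Hill equation: fraction of MDM2-p53 (or MDMX-p53) binding
    # inhibited at inhibitor concentration conc. By definition the value
    # is 0.5 when conc equals the IC50; the unit Hill slope is an
    # illustrative default.
    return 1.0 / (1.0 + (ic50 / conc) ** hill)
```

With this model, a compound with IC50 = 12.3 uM (as quoted for pyrrole 3) inhibits half of the binding at 12.3 uM, roughly 91% at a ten-fold higher concentration, and roughly 9% at a ten-fold lower concentration.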
A pilot set of 800 structurally diverse compounds, obtained from the Cancer Research UK screening collection, was studied in an MDM2\u2013p53 ELISA protein\u2013protein binding assay, at 5 and 20 \u03bcM concentrations. Twelve compounds (4a\u2013l) showed promising MDM2\u2013p53 inhibitory activity, with IC50 values in the 0.12\u20138.4 \u03bcM range. Pyrroles with 2,5-diphenyl substitution were 5\u201310 fold more potent than the comparable 2,5-dimethyl analogues. Substitution on the thiobarbituric acid moiety had a modest negative effect on potency. In order to validate and further explore the SAR, a selection of 2,5-symmetrically substituted pyrroles 5 was prepared from 1,2-dibenzoylethane and the appropriate 4-substituted aniline, using trifluoroethanol (TFE) as solvent and trifluoroacetic acid (TFA) as catalyst under microwave heating.26\u201329 The use of mono-N-methyl barbituric or thiobarbituric acid gave 4x\u2013z as inseparable mixtures of regioisomers. MDM2 inhibitory SARs for a series of analogues of 4c were determined, and selected compounds were assayed for MDMX\u2013p53 inhibitory activity. Compounds were generally less potent against MDMX\u2013p53 than MDM2\u2013p53, with the exception of the 4-nitro derivatives 4s and 4t, which were equipotent. The 4-N-aryl substituents had a profound influence on potency for MDM2\u2013p53. Thus, potency was conferred by chloro- or bromo-substituents or electron-withdrawing groups e.g. nitro or cyano (4q\u2013t). In contrast, larger or electron-donating groups gave poor MDM2 inhibition, e.g. OCH3, t-Bu (4o and 4p). The 4-aryl substituent also significantly influenced activity against MDMX, with 4-nitro- (4s and 4t), 4-cyano- (4q and 4r), or 4-chloro- (4c) N-phenyl substituents conferring the greatest inhibitory potency. The N,N-diethylbarbituric acid or thiobarbituric acid derivatives were 3\u20134 fold less potent against MDM2 compared with their unsubstituted analogues, whereas the mono-N-methyl analogues were equipotent with their parents. 
Similarly, N-alkyl substitution on the barbituric acid or thiobarbituric acid moiety (4u\u2013z) resulted in a significant loss of MDMX\u2013p53 activity. In all cases reduced solubility was observed for the N-alkyl derivatives. Compound 7 was prepared by heating aldehyde 5b with Meldrum's acid 8 in toluene with piperidine acetate as catalyst, and was reacted with barbituric acid, affording a mixture of 3- and 4-isomers 11 that were only separable by HPLC (e.g. X = Cl). The limited practicality of this route prompted the search for a method capable of yielding either regioisomer, as required. Thus, \u03b2-ketoesters 15, prepared from Meldrum's acid 12, were subjected to a tandem homologation/addition sequence mediated by the Furukawa reagent,32 with oxidation of the intermediate providing the \u03b1-ester 1,4-diketones 16. Substitution of the 2-phenyl residue with a methyl group resulted in a greater than 3-fold loss of MDM2 inhibitory potency for the mixture of regioisomers 11a; this relatively small loss of MDM2 inhibitory potency was independent of the position of the thiobarbituric acid residue, but was accompanied by a >200 fold loss of potency for MDMX. Similarly, the 2- or 5-cyclopropyl derivatives retained modest MDM2 inhibitory activity, independent of the position of the thiobarbituric acid residue, but were inactive against MDMX.34 For this reason, we opted to generate representative binding modes for the pyrroles from the X-ray co-crystal structures of MDM2 and MDMX with structurally similar ligands, i.e. 
imidazole derivatives, by superposition and replacement of the ligand.19 Pyrrole 4c, with mixed MDM2 and MDMX potency, and two MDM2-selective pyrroles (11c and 11d) were aligned on the ligands 1a in MDM2 and 1b in MDMX, using the ligand builder function and the CCP4 \u2018cprodrg\u2019 plugin within COOT.36 The high degree of structural overlap between the original ligand and the modeled compound in these models gives confidence that the binding modes are reasonable. Previously, we have demonstrated that docking ligands into the MDM2 binding pocket can yield multiple low energy solutions. The binding modes for 4c in both MDM2 and MDMX show good complementarity with the p53-binding cleft. In particular, the N-4-chlorophenyl ring occupies the pocket normally filled by Trp23 of p53 for both MDM2 and MDMX,14 overlaying the chloroindole ring of 1a or 1b. The 5-phenyl ring of the pyrrole occupies the Phe19 pocket with good overlap with the 1-phenyl ring of 1a or 1b. The remaining phenyl ring is accommodated by the Leu26 pocket with a less well defined overlap and different vector compared with the original ligands. The thiobarbituric acid group projects away from the protein surface into the space occupied by the carboxylic acid residue of 1a and the amide group of 1b, suggesting that these groups may act, in part, as a hydrophilic cap.37 The MDM2 binding mode model is consistent with the observed SARs, as the Trp23 pocket of MDM2 shows a strong preference for haloaromatic groups as seen in the X-ray structures of high-affinity ligands. 
The preference for haloaromatic groups in the MDMX Trp23 binding-pocket is not as well established as for MDM2 due to the smaller number of deposited structures; however, the SARs in this series suggest that the pocket is similar to that in MDM2. The role of the barbituric acid or thiobarbituric acid group is less well explained by the models. The positioning of the groups in the models is consistent with that seen for 1a bound to MDMX and the acid group of 1b bound to MDM2, and raises the possibility that water mediated H-bonding to the protein backbone may be important for affinity. The model of 11c in MDM2 and MDMX shows the N-4-chlorophenyl group occupying the Trp23 pocket as seen for 4c,19 and the AM-8553 series (4ERE).18 The MDMX structure also places the t-butyl group into the Phe19 pocket, but the 5-phenyl ring no longer makes a good interaction with the Leu26 pocket, which appears to be broader and shallower than for MDM2. This observation may explain the dramatic loss in MDMX potency for this series, compared with the retention of MDM2 potency. Growth inhibitory activity was determined for selected pyrroles in a panel of cell lines with defined MDM2, MDMX and p53 status, including SN40R2, an SJSA-1 derived line that is resistant to Nutlin-3a due to p53 mutation. For comparison, the MRK-NU-1 breast cancer cell line has amplified MDMX and wild-type p53. Compounds showed growth inhibitory activity in the 2\u201310 \u03bcM range without a strong correlation to either MDM2 or MDMX inhibitory activity. Disappointingly, the compounds were equally growth inhibitory in all cell lines regardless of their potency vs. MDM2 or MDMX, and the MDM2 and MDMX status of the cell line. Importantly, the p53 mutant SN40R2 line was equally sensitive to the pyrroles 4c, 4r, 4t and 4x, in contrast to the 2\u2013100 fold difference in activity reported for potent MDM2 inhibitors. 
Cell lines with defined MDM2 and p53 status were treated with increasing concentrations of pyrroles 3, 4c and 4d to investigate the transcriptional activation of p53 and the subsequent induction of p53-dependent proteins by Western blotting. In the SJSA-1 line, induction of MDM2, p53 and p21 was clearly visible at 5 \u03bcM for each compound. In the p53 mutant SN40R2 line, no induction of MDM2, p53, and p21 was observed. In contrast to the growth inhibition data, these results clearly demonstrate a p53-dependent cellular response to the pyrroles. Compounds with N-phenylpyrrole or alkylidene barbituric acid groups have been identified as \u2018frequent-hitters\u2019 in HTS campaigns.38 With this in mind, it is likely that, despite the ability of these compounds to activate p53 dependent cellular processes, the modest growth inhibition seen for this series is the result of additional off-target activity. The imidazole series, e.g. 1a, is reported to have weak MDMX activity. The hydantoin derivative RO-5963 is reported to be a potent dual MDM2 and MDMX inhibitor, but with an unusual binding mode and cellular mechanism of action. We have identified a series of 1,2,5-triarylpyrroles that display MDM2\u2013p53 and MDMX\u2013p53 inhibitory activity in a cell-free ELISA MDM2\u2013p53 binding assay, and are able to activate p53-dependent gene transcription in whole cells. Modeling studies suggest a binding mode which is consistent with other reported MDM2 inhibitors, and that offers insight into the structural requirements for the design of compounds able to inhibit both MDM2\u2013p53 and MDMX\u2013p53. 
To date, the majority of potent MDM2\u2013p53 inhibitors are highly selective for MDM2 over MDMX, and there are very few published inhibitors of MDM2 that retain significant potency for MDMX. However, the lack of p53-dependent growth inhibitory activity and the poor physicochemical properties of this series present a substantial challenge for their further development as drugs; nevertheless, they represent interesting dual MDM2/MDMX ligands for both structural and mechanistic studies. Abbreviations: DCM, dichloromethane; DIBAL-H, diisobutylaluminium hydride; DIPEA, diisopropylethylamine; DMF, N,N-dimethylformamide; DMSO, dimethyl sulfoxide; ELISA, enzyme-linked immunosorbent assay; MDM, murine double minute; NMO, N-methylmorpholine-N-oxide; PCC, pyridinium chlorochromate; PTSA, p-toluenesulfonic acid; SAR, structure\u2013activity relationship; SRB, sulforhodamine B; TFA, trifluoroacetic acid; TFE, 2,2,2-trifluoroethanol; THF, tetrahydrofuran; TPAP, tetrapropylammonium perruthenate; wt, wild-type."}
+{"text": "Mdm2 and Mdmx are recognized as the main p53 negative regulators. Although it is still unknown why Mdm2 and Mdmx are both required for p53 degradation, a model has been proposed whereby these two proteins function independently of one another: Mdm2 acts as an E3 ubiquitin ligase that catalyzes the ubiquitination of p53 for degradation, whereas Mdmx inhibits p53 by binding to and masking the transcriptional activation domain of p53, without causing its degradation. However, Mdm2 and Mdmx have been shown to function collaboratively, and recent studies have pointed to a more important role for an Mdm2/Mdmx co-regulatory mechanism in p53 regulation than previously thought. In this review, we summarize current progress in the field concerning the functional and physical interactions between Mdm2 and Mdmx, their individual and collaborative roles in controlling p53, and inhibitors that target Mdm2 and Mdmx as a novel class of anticancer therapeutics. Mutation of the TP53 gene or inactivation of the p53 signaling pathway occurs at a high frequency in many human tumors, suggesting that p53 plays a critical role in preventing normal cells from becoming cancerous. p53 is a stress-inducible protein; it is inactive under normal physiological conditions and activated in response to various types of stresses such as DNA damage and ribosomal stress. In order to determine whether Mdm2-p53 binding alone is sufficient to suppress p53 activity, or whether Mdm2-mediated ubiquitination is also required in that regard, Itahana et al. generated Mdm2 RING mutant cells to study the interactions of p53 in the Mdm2 mutant background. Upon induction of p53 in MEF cells, Itahana et al. demonstrated that the Mdm2 RING mutant protein, although deficient in the ability to ubiquitinate p53, is fully capable of binding to p53, proving that Mdm2 cannot suppress p53 transcriptional activity through binding alone. However, the authors also showed that the C462A mutation alters the structure of the Mdm2 RING domain to the extent that the Mdm2 C462A mutant is unable to heterodimerize with MdmX. Therefore, the study cannot explain whether the Mdm2-MdmX interaction is required for p53 suppression. Recently, two studies using MdmX RING domain mutant knock-in alleles demonstrated that the RING domain of MdmX, like that of Mdm2, is also critical for regulating p53 activity during early embryogenesis. In one study, mice harboring an MdmX C462A mutation in one of the critical zinc-coordinating residues of the RING domain died at approximately day 9.5 of embryonic development as the result of an increase in apoptosis and a decrease in cell proliferation; the concomitant deletion of p53 completely rescued the embryonic lethality of the MdmX C462A mutation in vivo. In a similar study performed by Pant et al., the authors used a tamoxifen-based Cre-inducible MdmX \u0394RING allele to investigate the role of Mdm2-MdmX heterodimerization in Mdm2 and p53 regulation. They found that although the heteroduplex is essential during embryonic development, heterodimerization is dispensable during the adult life of the mouse. Together, these studies provide compelling evidence that the action of the Mdm2-MdmX heterodimer, and not necessarily the independent action of either protein, is crucial to the appropriate control of p53. However, these studies cannot answer the remaining question of whether the Mdm2 E3 ubiquitin ligase function is still required for p53 suppression, because the Mdm2 RING mutation simultaneously disrupts its E3 ligase function and its binding to MdmX, and because in vitro studies have shown that Mdm2 by itself is a relatively weak E3 for p53 degradation and its heterodimerization with MdmX enhances its E3 activity. Addressing this question in vivo, if technically possible, would be essential for understanding the importance of the in vivo cooperation between Mdm2 and MdmX. 
Although Mdm2 and MdmX have a synergistic relationship that effectively inhibits p53, as discussed above, Mdm2 and MdmX also have independent roles in the regulation of p53. MdmX can inhibit p53 transcriptional activity by interfering with the ability of p53 to interact with the basal transcription machinery, while Mdm2 can target p53 for degradation. Several studies have reported that elevated MdmX levels stabilize p53 by inhibiting Mdm2-mediated p53 degradation without interfering significantly with Mdm2-dependent p53 ubiquitination. It has also been shown that MdmX stimulates Mdm2-mediated ubiquitination of p53, as well as Mdm2 self-ubiquitination, in vitro. Quantitative analysis has demonstrated that endogenous MdmX is present at different proportions relative to Mdm2 in several types of human cell lines. This observation indicates that the relative level of Mdm2 and MdmX is crucial for controlling p53 stability and activity, which was further demonstrated by recent crystal structure studies. Linker et al. revealed that the primary and secondary interfaces in Mdm2 homodimers or Mdm2/MdmX heterodimers are crucial for the binding of the ubiquitin E2 enzyme and ubiquitylation of the subunit. Because Mdm2 homodimers have two primary and secondary interfaces for ubiquitin E2 enzyme binding, the E2 enzyme can be recruited by either monomer, leading to ubiquitylation of the other subunit. In the Mdm2/MdmX heterodimer, however, only Mdm2 can provide the primary E2 interaction site while the secondary interface is provided by MdmX, which does not cause the ubiquitylation and degradation of Mdm2. Therefore, the Mdm2/MdmX ratio can be used to explain Mdm2 status in different situations: Mdm2 will form homodimers and degrade itself through ubiquitination if the ratio is high; on the contrary, Mdm2 will be stabilized if the ratio is low. 
Both in vivo and in vitro experiments have demonstrated that p53 can bind to p53-responsive elements located within the Mdm2 gene and promote its transcription, thereby setting up a negative feedback regulatory loop. It has been widely accepted that Mdm2 antagonizes p53 by promoting its ubiquitination and proteasome-dependent degradation. In vitro studies have shown that DNA damage can destabilize Mdm2 by means of autoubiquitination. Ubiquitin-conjugating enzymes (E2s) have a dominant role in determining which of the lysine residues are used for polyubiquitination. Like many other RING domain proteins, the Mdm2 RING domain can promote the transfer of ubiquitin molecules from an E2 conjugating enzyme directly to the lysine residues of the target substrates. In vitro studies have shown that UbcH5 functions as an E2 enzyme for Mdm2-induced p53 ubiquitination and degradation. Whether UbcH5 serves in vivo as the main E2 for Mdm2, or whether there are other E2 enzymes that interact with Mdm2, remains to be determined. In addition to ubiquitination as a mechanism of controlling Mdm2 and MdmX, the activity of these proteins depends on their phosphorylation status. A number of kinases have been reported to phosphorylate Mdm2 and MdmX at different residues; DNA damage stimulates the activation of multiple kinases, including ataxia telangiectasia mutated (ATM) and checkpoint kinases. The dimerization of Mdm2 and MdmX and Mdm2's E3 ligase function also appear to be regulated by phosphorylation. In an MdmX3SA knock-in mouse model, Mdm2 retains the ability to bind to MdmX, but is significantly reduced in its capacity to degrade MdmX, resulting in an increase in the concentration of Mdm2-MdmX heterodimers. 
Although approximately 50% of cancers harbor p53 mutations, the other 50% retain WT p53, yet they remain uninhibited by the tumor suppression activity of p53. This is generally accomplished through the overexpression of Mdm2 or MdmX by gene amplification or mutation. It has been accepted, at least theoretically, that reactivation or restoration of p53 function in tumors is a promising cancer therapeutic strategy. Some proposed strategies include repressing the expression of Mdm2, blocking the p53-Mdm2 interaction, and inhibiting the ubiquitin ligase activity of Mdm2. Recently, Bernal et al. showed in in vitro and in vivo experiments that a \u201cstabilized alpha-helix\u201d of p53 peptide, SAH-p53-8, preferentially inhibits the binding of p53 with MdmX and reduces cancer cell viability, thereby overcoming MdmX-mediated cancer resistance. SAH-p53-8 is derived from the \u201cstapled\u201d peptide SAH-p53, which was designed based on the peptide sequence of the p53 transactivation domain. The peptide shows protease resistance combined with increased cellular uptake owing to a chemical design strategy termed \u201chydrocarbon stapling\u201d, which mimics the biological function of the native \u03b1-helical structure. Co-immunoprecipitation experiments indicate that this peptide can bind to both Mdm2 and MdmX within cells. Although SAH-p53-8 exhibits a 25-fold greater binding preference for MdmX over Mdm2, it has been shown to kill cancer cells that overexpress Mdm2, MdmX, or both proteins. More importantly, SAH-p53-8 has been shown to efficiently induce a tumor-suppressive response in vivo. The study provides a clue to reactivating p53 tumor suppressor function by synergistically applying Mdm2 and MdmX inhibitors in cancer cells, and affords new therapeutic opportunities for simultaneously inhibiting both Mdm2 and MdmX to restore p53 using drug combinations or dual-inhibitory drugs. Over the past decade, considerable progress has been made towards understanding the regulation of p53 by Mdm2 and MdmX, much of which has come from data obtained from various mouse models. It is generally accepted that the ubiquitination of p53 is a fundamental mechanism of p53 control and that Mdm2 is the principal p53 ubiquitin ligase. The mouse studies discussed above provide in vivo evidence that the association of Mdm2 with MdmX, but not the Mdm2 E3 ligase activity, is necessary for p53 control, at least in the developmental stage of mice, which is consistent with previous data based on in vitro experiments, and with an earlier report that small molecules inhibiting the E3 ubiquitin ligase activity of Mdm2 can activate p53. Nevertheless, several questions remain: whether degradation must occur in order for p53 to be rendered inactive, or whether ubiquitination without degradation is sufficient for the inhibition of p53; how the Mdm2-MdmX heterodimer enables Mdm2 to ubiquitinate p53 more efficiently; and how the Mdm2-MdmX heterodimer affects p53 ubiquitination. Although much has already been learned about the regulation of p53 by Mdm2 and MdmX, much still remains unknown. Crystal structure studies are needed to further understand at the molecular level how exactly the Mdm2-MdmX-p53 ternary complex is formed and why the Mdm2-MdmX complex is a more efficient E3 ligase than Mdm2 alone."}
+{"text": "Combination of the CVCVA5 adjuvant with commercial avian influenza (AI) vaccine has previously been demonstrated to provide good protection against different AI viruses in chickens. In this study, we further investigated the protective immunity of CVCVA5-adjuvanted oil-emulsion inactivated AI vaccines in chickens, ducks and geese. Compared to the commercial H5 inactivated vaccine, the H5-CVCVA5 vaccine induced significantly higher titers of hemagglutinin inhibitory antibodies in three lines of broiler chickens and in ducks, extended the antibody persistence period in geese, and elevated the levels of cross-neutralizing serum antibody against different clades and subclades of H5 AI viruses in chicken embryos. High levels of mucosal antibody were detected in chickens injected with the H5- or H9-CVCVA5 vaccine. Furthermore, the cellular immune response was markedly improved in terms of increased serum levels of the cytokines interferon-\u03b3 and interleukin-4, enhanced proliferation of splenocytes, and upregulated cytotoxic activity in both H5- and H9-CVCVA5 vaccinated chickens. Together, these results provide evidence that AI vaccines supplemented with the CVCVA5 adjuvant are a promising approach for overcoming the strain-specific limitation of vaccine protection. Avian influenza viruses (AIVs) not only lead to massive economic losses in the poultry industry but also pose a serious threat to public health. The highly pathogenic H5N1 AIVs have evolved into more than ten distinct phylogenetic clades based on their hemagglutinin (HA) genes. The inactivated avian influenza vaccine is not able to provide robust protection through cross-reactive and mucosal antibodies against the circulating mutant viruses in the field. Adjuvants have been licensed for use in human influenza, papillomavirus and hepatitis B virus vaccines. 
All animal studies were carried out in strict accordance with the recommendations in the National Guide for the Care and Use of Laboratory Animals. The protocol was approved by the Review Board of the National Research Center of Engineering and Technology for Veterinary Biologicals, Jiangsu Academy of Agricultural Sciences. Surgery and euthanasia were performed under anesthesia with sodium pentobarbital solution (100 mg/kg body weight) via the intravenous route to minimize suffering. The viruses (EID50 = 10^8.0/0.1 ml) were purified by centrifugation and inactivated with beta-propiolactone. The purified virus, recovered in the same volume of phosphate-buffered saline, was added to Marcol 52 mineral oil to produce a water-in-oil emulsion vaccine. The H5 subtype viruses A/Mallard/Huadong/S/2005, A/Chicken/Zhejiang/2011 and A/Chicken/Huadong/4/2008 (clade 7) were used. The H5 vaccine and the corresponding antigen for the hemagglutinin inhibition (HI) assay are commercially available. The H9 subtype AI vaccine was prepared in a water-in-oil form as previously described. The recipe of the adjuvant CVCVA5 was described in previous reports, with two different use forms. Three breeds of chicken (G. gallus domesticus) were used, including white feather, yellow feather and dot feather broilers, as well as aquatic poultry: the Mallard duck (Anas platyrhynchos) and domestic geese (Anser cygnoides). Groups of twenty 10- to 15-day-old chickens of each broiler breed were used to test the efficacy of the adjuvant on the H5 vaccine; each bird received a single dose of the H5 or H5-CVCVA5 vaccine via the subcutaneous route in a volume of 0.5 ml. Sera were collected at 2-, 3- and 4-week post-vaccination (wpv). Both ducks and geese were obtained from Shengjia Poultry, Jiangsu, China. Maternal hemagglutinin inhibition (HI) antibody titers against H5 (Re-5) were less than 2 log2 before vaccination. 
Groups of ten 14-day-old mallard ducks received a single subcutaneous dose (0.5 ml) of the H5 or H5-CVCVA5 vaccine, with a na\u00efve group as control. All birds were bled at weeks 2, 3 and 4 post-vaccination for serum collection. The efficacy of two-shot vaccination was assessed in domestic geese. Briefly, three groups of twenty 14-day-old goslings were used: two groups received a subcutaneous prime vaccination with the H5 or H5-CVCVA5 vaccine (0.5 ml), respectively, and the third group served as a non-immunized control. Four weeks after the prime vaccination, geese in each vaccinated group were boosted subcutaneously with the same dose of vaccine used in the prime injection. The geese were bled at 3, 4 and 8 wpv, and then at 4-week intervals thereafter until 32 wpv. Serum antibody titers were measured by HI assay. The efficacy of the adjuvant CVCVA5 on the H5 (Re-5) vaccine was thus evaluated in three broiler breeds. The T-helper (Th) type-1 cytokine IFN-\u03b3 and the Th2-type cytokine IL-4 in chicken serum were detected by commercially available ELISA kits following the manufacturer\u2019s instructions; briefly, four serum samples from SPF chickens of each group were measured at three weeks post-vaccination. Mucosal antibodies were measured in tracheal and bronchoalveolar lavage fluids. SN assays were performed by the alpha method. All serum samples from chickens were heat inactivated. The antisera stock solutions were mixed with equal volumes of tenfold serial dilutions of H5 subtype variant virus solutions. The variant strains included viruses from subclade 2.3.4.6 and clade 7; the control virus, from clade 2.3.4, was homologous to the Re-5 vaccine strain. After 1 hour of incubation at 37\u00b0C, the mixtures were inoculated into 11-day-old SPF chicken embryos, which were incubated and observed daily for up to 5 days. 
Dead embryos or embryos with a positive HA titer were used to determine the end-point titers, which were calculated as the reciprocal of the highest serum dilution that neutralized virus in 50% of the eggs. The lymphocyte proliferation response was studied using splenocytes derived from SPF chickens immunized with the H5 subtype vaccine (Re-5) or the H9 subtype vaccine (NJ02/01), with or without the CVCVA5 adjuvant, re-stimulated with the inactivated H5 or H9 antigen, respectively. The inactivated H5 subtype viral antigen (Re-5) was purified from the H5 HI test antigen, and the inactivated H9 subtype viral antigen (NJ02/01) was purified from inoculated SPF chicken embryo allantoic fluids. At 3 days post-immunization, single-cell suspensions were generated from the harvested spleens in PBS (pH 7.2) supplemented with 1% penicillin/streptomycin. The splenocytes were separated on a chicken lymphocyte separation medium, pelleted at 1000 rpm for 10 min, and resuspended in RPMI 1640 medium supplemented with 10% chicken serum and 1% penicillin/streptomycin. Viable splenocytes were added to 96-well plates at 1\u00d710^6 cells per well in 0.1 ml and incubated in triplicate with H5 (Re-5) or H9 (NJ02/01) inactivated viral antigen (5 \u03bcg/ml); the mitogen phytohemagglutinin (25 \u03bcg/ml) served as the positive control, and medium alone as the negative control. Plates were incubated for 68 h at 37\u00b0C in an atmosphere of 5% CO2. The lymphocyte proliferation response was evaluated by MTT assay with a cell proliferation assay kit. Data were reported as the stimulation index (SI), the mean of the experimental wells divided by the mean of the antigen-free wells (negative control). For the cytotoxicity assay, chicken (B19/B19) embryo fibroblast target cells were infected for 8 hours with the S (H5N1) virus at a multiplicity of infection of 1, or for 10 hours with the NJ02/01 (H9N2) virus at a multiplicity of infection of 2. 
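To make the stimulation-index arithmetic concrete, here is a minimal Python sketch; the helper name and the triplicate OD readings are invented for illustration and are not data from the study.

```python
# Hypothetical helper for the stimulation index (SI) described above:
# mean OD of antigen-stimulated wells divided by the mean OD of the
# antigen-free (negative control) wells.

def stimulation_index(experimental_od, negative_od):
    """SI = mean(experimental wells) / mean(antigen-free wells)."""
    if not experimental_od or not negative_od:
        raise ValueError("both well lists must be non-empty")
    mean_exp = sum(experimental_od) / len(experimental_od)
    mean_neg = sum(negative_od) / len(negative_od)
    return mean_exp / mean_neg

# Triplicate MTT OD490 readings (fabricated numbers):
si = stimulation_index([0.92, 0.88, 0.90], [0.30, 0.28, 0.32])
print(round(si, 2))  # 3.0
```

An SI well above 1 indicates antigen-specific proliferation over the medium-only background.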
CTL activity was measured by the non-radioactive lactate dehydrogenase (LDH) cytotoxicity assay, which detects the stable cytosolic enzyme LDH released from lysed cells; the assay was performed according to the manufacturer\u2019s instructions and the previous study. Briefly, effector cells were derived from peripheral blood mononuclear cells isolated at 21 days post-vaccination from inbred SPF chickens previously immunized with the H5 subtype vaccine (Re-5) or the H9 subtype vaccine (NJ02/01), with or without the CVCVA5 adjuvant, respectively. Various amounts of effector T cells in 100 \u03bcl of RPMI 1640 supplemented with 10% chicken serum, 1% L-glutamine, 1% sodium pyruvate, and 1% MEM nonessential amino acids were added to each well, together with 10^4 infected or uninfected target cells (100 \u03bcl per well). Each cell sample was plated in triplicate. Microtiter plates were centrifuged at 250\u00d7g for 5 min before being incubated for 4 hours in a humidified chamber at 37\u00b0C, 5% CO2. After 4 hours, the plates were centrifuged, the supernatant was harvested, and the substrate tetrazolium salt was added. The OD values were read at 490 nm in an enzyme-linked immunosorbent assay reader. The specific LDH release was calculated as (experimental release - spontaneous release)/(maximum release - spontaneous release) \u00d7 100. The H5-CVCVA5 vaccine improved the HI antibody levels elicited by the H5 vaccine in white feather broilers, and similar effects of the CVCVA5 adjuvant were also observed in yellow and dot feather broilers. The impact of the adjuvant on the production performance of the three broiler breeds was further assessed according to three indexes: slaughter weight, feed conversion ratio, and the death-and-culling ratio after vaccination. 
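The percent-specific-release formula above (with the numerator reconstructed as experimental minus spontaneous release, the standard form for LDH-release CTL assays) can be sketched as follows; the function name and OD values are hypothetical, not taken from the study.

```python
# Sketch of the percent-specific-lysis calculation used in LDH-release
# CTL assays. All OD490 values below are fabricated for illustration.

def specific_lysis(experimental, spontaneous, maximum):
    """% specific LDH release =
    (experimental - spontaneous) / (maximum - spontaneous) * 100"""
    denom = maximum - spontaneous
    if denom <= 0:
        raise ValueError("maximum release must exceed spontaneous release")
    return (experimental - spontaneous) / denom * 100.0

print(round(specific_lysis(0.80, 0.20, 1.40), 2))  # 50.0
```

Spontaneous release is measured from target cells alone and maximum release from detergent-lysed targets, so the ratio isolates effector-mediated lysis.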
The H5-CVCVA5 vaccinated white feather broilers showed no obvious differences in these three indexes from birds immunized with the commercial vaccine or from the unvaccinated control group. In the boosted geese, HI antibody titers were maintained for approximately 12 weeks, and cross-neutralizing activity against clade 7 and ZJ (subclade 2.3.4.6) viruses was detected in 11-day-old SPF chicken embryos. T cells provided protection against heterologous H9 subtype virus challenge in a lymphocyte adoptive transfer assay in our previous study. The inactivated H5 and H9 subtype AI vaccines play a vital role in helping to prevent and control AI outbreaks and spread in China and other countries, where large numbers of village or backyard poultry farms exist. A two-dose regimen of inactivated H9N2 AI vaccine is needed to enhance the immunologic response in broiler chickens. A two-dose vaccination schedule is likewise needed to elicit effective protection in ducks or geese according to the criteria of the commercial H5 AI vaccine and the nationally recommended immunization program for field application. The efficacy of different types of H5 vaccines has been tested in ducks, but not geese, under laboratory settings in previous reports. Selection of appropriate vaccine strains raises many potential challenges and may result in suboptimal protection in field use. Without the addition of immunostimulatory components, most oil-based emulsion-inactivated AI vaccines induce overwhelming antibody responses but a nearly undetectable cellular immune response, as shown in the lymphocyte transfer assays in our previous studies. The mucosal immune response is the first line of defense against influenza virus infection. However, the currently available parenteral influenza vaccine induces a polarized serum antibody immunity, which does not prevent primary influenza virus infection at the mucosal surface. 
Live-virus-vectored vaccines expressing the AI HA gene, such as recombinant fowlpox or recombinant Newcastle disease virus vectors expressing the H5 or H7 HA, can stimulate mucosal immunity to AI viruses. However, maternally derived anti-vector antibodies influence the timing and route of application in field usage. In summary, the present study demonstrated that the CVCVA5 adjuvant significantly improved the efficacy of commercial H5 and/or H9 subtype AI vaccines across a comprehensive range of immune responses, including serological and mucosal antibodies and cytokine and cellular responses in chickens; it also increased HI antibody levels in broiler chickens and ducks and prolonged the immune persistence period in geese. Therefore, CVCVA5 used with currently licensed AI vaccines has great potential as an effective adjuvant for poultry use."}
+{"text": "The H5 subtype highly pathogenic avian influenza (HPAI) virus is one of the greatest threats to the global poultry industry. To develop a broadly protective H5 subunit vaccine, a recombinant consensus HA sequence (rHA) was constructed and expressed in virus-like particles (rHA VLPs) in the baculovirus-insect cell system. The efficacy of the rHA VLP vaccine with or without the immunopotentiator CVCVA5 was assessed in chickens. Compared to the commercial Re6 or Re6-CVCVA5 vaccines, single-dose immunization of chickens with the rHA VLP or rHA-CVCVA5 vaccine induced higher levels of serum hemagglutinin inhibition titers and neutralization titers, mucosal antibodies, serum IFN-\u03b3 and IL-4 cytokines, and cytotoxic T lymphocyte responses. The rHA VLP vaccine was superior to the commercial Re6 vaccine in conferring cross-protection against different clades of H5 subtype viruses. This study reports that a single dose of the H5 subtype consensus HA VLP vaccine provides broad protection against HPAI virus in chickens. The H5 subtype HPAI viruses not only affect millions of domestic poultry, including chickens, ducks, and geese, as well as thousands of migratory wild birds, but also pose a risk to public health. H5 subtype HPAI viruses emerged in Southeast Asia in the 1990s and have evolved into 10 phylogenetic clades (0\u20139) and more than 30 subclades based on their hemagglutinin (HA) genes. A strategy of culling in combination with vaccination is the major current measure to prevent and control the spread of H5 HPAI viruses in several countries and regions. In this regard, there is an urgent need to develop an effective, broadly cross-protective, and safe H5 vaccine for use on poultry farms. Consensus sequences contain the most common residue at each position after aligning a population of sequences. 
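As an illustration of the consensus definition above, a minimal sketch of taking the most common residue at each alignment column (assuming the input sequences are already aligned to equal length; the toy fragments below are fabricated, not real HA sequences):

```python
# Build a consensus sequence column-by-column from an alignment.
from collections import Counter

def consensus(aligned_seqs):
    """Return the most common residue at each position of the alignment."""
    if len({len(s) for s in aligned_seqs}) != 1:
        raise ValueError("sequences must be aligned to the same length")
    columns = zip(*aligned_seqs)  # iterate positions across all sequences
    return "".join(Counter(col).most_common(1)[0][0] for col in columns)

seqs = ["MEKIVLLF", "MEKIVLMF", "MERIVLLF"]
print(consensus(seqs))  # MEKIVLLF
```

Note that `Counter.most_common` breaks ties by insertion order, so a real pipeline would need an explicit tie-breaking rule for columns with no majority residue.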
Previous studies reported that consensus HA-based influenza vaccine candidates elicit cross-reactive immune responses. Spodoptera frugiperda Sf9 insect cells were maintained in serum-free SF900II medium at 27\u00b0C and used for production of recombinant baculoviruses (rBVs) and VLPs. Two HPAI strains (ZJ and DT) used as challenge viruses originated from clade 2.3.4.6 and clade 7, respectively. In this study, the recombinant Re6 commercial vaccine and chicken sera against monovalent Re5, Re6, Re7, and Re8 antigens were purchased from Weike Biotechnology Co., Ltd., Harbin, China. The four commercial recombinant vaccines Re5, Re6, Re7, and Re8 are inactivated whole-virus-based mineral oil emulsion vaccines (Weike). The six internal genes of the four recombinant vaccine strains are derived from the A/Puerto Rico/8/1934 virus, while the HA and NA genes of Re5\u2013Re8 are derived from different subclades of H5 wild-type viruses; Re5 is derived from strain A/duck/Anhui/1/2006 (clade 2.3.4). Nucleotide changes are often synonymous, so the frequency of mutations at the nucleotide level is higher than that of amino acid changes; other researchers have also reported consensus sequences derived at the nucleotide level. The consensus rHA gene was codon-optimized for expression in S. frugiperda cells by Gene Script Co. Ltd. The optimized rHA ORF genes were cloned into the pFastBac vector plasmid to make recombinant Bacmid baculovirus DNAs using DH10Bac competent cells, and rBVs expressing the influenza rHA protein were generated by transfection of Sf9 cells. Sf9 cells were then infected with the rBVs expressing HA, and after 4\u2009days the cell culture supernatants were harvested for preparation of vaccines. 
The indirect immunofluorescence assay (IFA) was carried out to test the expression of HA in infected Sf9 cells. The primary antibody was chicken anti-serum against the Re6 antigen at a dilution of 1:800, and the secondary antibody was FITC-labeled goat anti-chicken IgY (1:500). Cell culture supernatants were clarified by centrifugation for 20\u2009min at 4\u00b0C to remove cell debris. The rHA VLPs in the supernatants were pelleted by ultracentrifugation. The sedimented particles were resuspended in phosphate-buffered saline (PBS) at 4\u00b0C overnight and further purified through a 20\u201330\u201360% discontinuous sucrose gradient at 100,000\u2009\u00d7\u2009g for 2\u2009h at 4\u00b0C. Purified VLPs were mixed with sodium dodecyl sulfate (SDS)-polyacrylamide gel electrophoresis (PAGE) sample buffer, separated by SDS-PAGE, and then transferred to nitrocellulose membranes, which were subsequently probed with sera derived from Re5 and Re6 vaccine-immunized chickens, respectively. The secondary antibodies were HRP-labeled goat anti-chicken IgY. The functionality of the rHA protein incorporated into VLPs was quantified by hemagglutination (HA) assay using 1% (v:v) chicken red blood cells. The concentration of protein was measured by the Pierce BCA Protein Assay Kit (Thermo Fisher Scientific). For negative staining of VLPs, sucrose gradient-purified VLPs were applied to a carbon-coated Formvar grid for 30\u2009s. Excess VLP suspension was removed by blotting with filter paper, and the grid was immediately stained with 1% phosphotungstic acid for 90\u2009s. Excess stain was removed with filter paper, and the samples were examined using a transmission electron microscope. The purified recombinant HA proteins were recovered to the same volume as before ultracentrifugation, with phosphate-buffered saline as the diluent. The rHA (ca. 12\u2009\u00b5g/dose) was prepared as a water-in-oil emulsion with or without the immunopotentiator CVCVA5, named rHA or rHA-CVCVA5, respectively. 
The rHA antigen stock solutions were normalized so that the vaccine preparations had the same hemagglutination titer, with a reciprocal titer equal to or higher than 256 as measured with 1% chicken red blood cells; this range of titers is a prerequisite for the monovalent Re6 vaccine preparations with mineral oil emulsion adjuvant at the manufacturer. The preparation of rHA-CVCVA5 followed our previous reports. Groups (n\u2009=\u200925 in each group) of 3-week-old specific-pathogen-free (SPF) chickens or 15- to 18-day-old Hy-Line Brown commercial chickens were used to determine the immunogenicity and efficacy of the recombinant rHA VLPs. The vaccine test groups included the rHA with or without CVCVA5; the commercial H5 subtype vaccine (Re6), with or without CVCVA5, served as a comparison control, and a group of unvaccinated chickens as a na\u00efve control. Serum samples from all birds were collected at 2 and 3\u2009weeks postvaccination (PV). Chickens were challenged (0.1\u2009ml) with H5 wild-type influenza viruses (ZJ and DT) at 3\u2009weeks PV; these challenge viruses were derived from clade 2.3.4.6 and clade 7, respectively. Serum antibody levels were titrated by hemagglutinin inhibition (HI) assay or serum-neutralization (SN) assay in DF-1 cells. For the SN assay, serial serum dilutions were mixed with H5 subtype variant viruses; after 1\u2009h of incubation at 37\u00b0C, the mixtures were inoculated into 96-well plates with a monolayer of DF-1 cells, a chicken embryo fibroblast cell line, which were then incubated and observed for cytopathic effects (CPE) daily for up to 5\u2009days. Cell death or CPE was used to determine the end-point titers, calculated as the highest reciprocal dilution of sera at which virus infection was blocked in DF-1 cells, according to the method of Reed and Muench. 
The chicken cytokine levels of IFN-\u03b3 and IL-4 in sera at 2 and 3\u2009weeks PV were measured by commercially available ELISA kits following the manufacturer\u2019s instructions, as described in our previous reports. Cytotoxic T lymphocyte activity was measured by the lactate dehydrogenase (LDH) cytotoxicity assay kit, a non-radioactive method that detects the stable cytosolic enzyme LDH released from lysed cells; the assay was performed according to the manufacturer\u2019s instructions and our previous study. Briefly, target cells (10^4 cells in 100\u2009\u00b5l/well), infected or uninfected, were seeded into each well. The effector cells were derived from peripheral blood mononuclear cells isolated at 21\u2009days PV from inbred SPF chickens previously immunized with the H5 rHA vaccine or the commercial H5 subtype vaccine (Re6), with or without CVCVA5, respectively. Various amounts of effector T cells were added to each well, and each cell sample was plated in triplicate. Microtiter plates were centrifuged at 250\u2009\u00d7\u2009g for 5\u2009min and incubated in a humidified chamber at 37\u00b0C, 5% CO2. After 4\u2009h of incubation, the supernatants were harvested, followed by the addition of the substrate tetrazolium salt. The OD values were read at 490\u2009nm in an enzyme-linked immunosorbent assay reader. The specific LDH release was calculated as (experimental release\u2009\u2212\u2009spontaneous release)/(maximum release\u2009\u2212\u2009spontaneous release)\u2009\u00d7\u2009100. Experimental data are presented as mean\u2009\u00b1\u2009SD. Prism 7 was used for data analysis, and statistical significance was analyzed with Student\u2019s t-test or one-way analysis of variance; comparisons used to generate P values are indicated by horizontal lines in the figures. All experiments involving live H5 subtype viruses were performed in biosafety level 3 laboratory facilities. 
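The Reed and Muench 50% end-point method used for the SN titrations can be sketched as follows; this is a generic illustration under the assumption of a uniform log10 dilution series, and the infected/total counts are fabricated, not the study's data.

```python
# Reed-Muench 50% end-point estimate from a dilution series.
# log_dilutions: log10 dilutions, most concentrated first, uniform steps.
# infected/totals: counts of infected and inoculated units per dilution.

def reed_muench_log10(log_dilutions, infected, totals):
    """Return the log10 dilution at which 50% of units are infected."""
    n = len(log_dilutions)
    # Cumulative infected: summed from the most dilute end upward
    # (a unit infected at a dilute dose would be infected at stronger ones).
    cum_inf = [sum(infected[i:]) for i in range(n)]
    # Cumulative uninfected: summed from the most concentrated end downward.
    uninf = [t - k for t, k in zip(totals, infected)]
    cum_uninf = [sum(uninf[: i + 1]) for i in range(n)]
    pct = [ci / (ci + cu) * 100 for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(n - 1):
        if pct[i] >= 50 > pct[i + 1]:
            # Proportionate distance between the bracketing dilutions.
            pd = (pct[i] - 50) / (pct[i] - pct[i + 1])
            step = log_dilutions[i + 1] - log_dilutions[i]
            return log_dilutions[i] + pd * step
    raise ValueError("50% end point not bracketed by the dilution series")

# Five tenfold dilutions, eight embryos each (fabricated counts):
ld50 = reed_muench_log10([-1, -2, -3, -4, -5], [8, 7, 5, 2, 0], [8] * 5)
print(round(ld50, 2))  # -3.29
```

The end-point titer is then reported as the reciprocal of this dilution (here 10^3.29 per inoculated volume).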
At the end of the experiments, the discarded live viruses, wastes, and infected animal carcasses were autoclaved and incinerated to eliminate biohazards. All animal studies were carried out in accordance with the recommendations in the National Guide for the Care and Use of Laboratory Animals. The protocol (VMRI-AP150306) was approved by the Review Board of the National Research Center of Engineering and Technology for Veterinary Biologicals, Jiangsu Academy of Agricultural Sciences and Yangzhou University. Surgery and euthanasia were performed under anesthesia with sodium pentobarbital solution (100 mg/kg body weight) via an intravenous route to minimize suffering. The rHA antigens were harvested from Sf9 insect cells 72 h after infection with rHA rBVs. Chickens were vaccinated with rHA with or without the immunopotentiator, and with the Re6 vaccine in the presence or absence of the immunopotentiator as controls. The levels of serum antibody against different antigens were measured by HI assay at 2 and 3 weeks after a single-dose vaccination. Using 4 HA units (HAU) of Re6 as the testing antigen, the HI titers of chickens immunized with the Re6 vaccine were higher than those induced by the rHA vaccine; similarly, HI titers induced by the Re6-CVCVA5 vaccine were higher than those induced by the rHA-CVCVA5 vaccine at 2 or 3 weeks PV. The rHA vaccines induced immune responses against the ZJ (clade 2.3.4.6) and DT (clade 7) viruses in chickens, including serological and mucosal HI antibodies and cytokines, and induced cross-protection against challenge with two heterologous subclades of H5 subtype AI virus. Development of vaccines based on consensus HA sequences is one potential strategy to induce broadly protective immune responses against influenza viruses with high mutation rates and antigenic diversity. Consensus sequences encode the most common residues found at each position in the selected population, which are compatible with the site-mutation residues in different clade viruses.
We compared the consensus HA sequence derived at the amino acid level with that derived at the nucleotide level; there were slight differences, mainly located at amino acid residues 310–340 (data not shown). It is expected that consensus sequences at the nucleotide level would be stable in mRNA translation. In addition, we previously developed a wild-type Re6 HA-based vaccine that was poorly immunogenic and conferred only partial protection against challenge with ZJ and DT viruses (data not published). In contrast to single virus strain sequence-based subunit vaccines, which provide strain-restricted immunity, consensus-based vaccines have previously been investigated as a strategy for eliciting broadly reactive immune responses against different pathogens, including chikungunya virus and hepatitis virus. The consensus rHA VLP-based vaccines elicited protective immune responses similar to those of the inactivated whole-virus vaccine and could be evaluated by the assessment criteria of commercial inactivated virus vaccines, including serum HI titers and protective efficacy. We have also tested the efficacy of a single copy of the M2 ectodomain (M2e) as a broad-spectrum influenza vaccine candidate in chickens. The rHA vaccine in the absence of CVCVA5 elicited higher levels of serological and mucosal antibody responses than the Re6 vaccine without CVCVA5, in terms of HI antibody titers against the Re7 and Re8 antigens in both SPF and commercial chickens, and of SN titers in DF-1 cells. However, higher HI titers after Re6 vaccination were observed when using Re6 as the test antigen in both SPF and commercial chickens, because the Re6 antigen exactly matches the Re6 vaccine. In contrast to HI titers, neutralizing titers more closely reflect the capability of antibodies to inhibit the replication and propagation of virus.
The results of the humoral immune responses indicated that the rHA-based vaccine can elicit broad-spectrum antibodies reactive to different antigens, which is consistent with other influenza vaccines based on consensus HA sequences. VLP vaccines were superior to the inactivated whole-virus vaccine in eliciting cell-mediated immune responses in animal models, as reported in previous studies. The challenge studies were carried out with two heterogeneous H5 subtype virus strains (ZJ and DT). The results from this study provide evidence that the rHA VLP vaccines could be applicable in the field, where different subclades of H5 subtype viruses exist. In contrast to the Re6 vaccine in the absence of immunopotentiator, the rHA vaccine without immunopotentiator broadly protected the vaccinated chickens from weight loss and death after infection with the heterogeneous viruses. The multiple immune components elicited by the rHA vaccines with the CVCVA5 adjuvant, including humoral antibodies, cytokine responses, and CTL activities, could contribute to broad protection against infection with H5 subtype variants. Besides construction of a broad-spectrum antigen, adjuvants are another option for improving the efficacy of vaccines. The adjuvant CVCVA5 was shown to be effective in improving the efficacy of commercial inactivated monovalent or polyvalent vaccines in our previous studies. In this study, we have reported HA consensus sequence-based H5 rHA VLP vaccines, which elicited broad-spectrum serum and mucosal antibody responses and protected chickens experimentally challenged with different subclades of H5 subtype viruses. These results support the development of subunit VLP vaccines based on consensus HA sequences that confer broad-spectrum protection, with potential application in poultry farms facing continually emerging antigenic drift variants.
Conceived and designed the experiments: YT, PW, JL, XZ, and XL. Performed the experiments: YT, PW, JL, XZ, MM, and LF. Analyzed the data: YT, PW, JL, XZ, XL, and DP. Contributed reagents/materials/analysis tools: YT, PW, JL, XZ, MM, LF, JH, and DP. Wrote and edited the paper: YT, PW, S-MK, DP, and XL. All authors read and approved the final manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. YT, PW, JL, XZ, MM, LF, and JH are authors on a pending China patent application describing the culture and production of rHA in a bioreactor system (Title: Preparation method of the H5 subtype HA-based subunit vaccine. Reference number: 201710140542.2)."}
+{"text": "We investigated the influence of nanoparticle shape on the physiological responses of cells fed with spherical and needle-shaped PLGA-PEG nanoparticles (the volume of the nanoparticles was chosen as the fixed parameter). We found that both types of NPs entered cells via endocytosis and upon internalization stayed in membrane-bounded vesicles. Needle-shaped, but not spherical-shaped, NPs were found to induce significant cytotoxicity in the cell lines tested. Our study showed that the cytotoxicity of needle-shaped NPs was induced through lysosome disruption. Lysosome damage activated the signaling pathways for cell apoptosis and eventually caused DNA fragmentation and cell death. The present work showed that the physiological response of cells can be very different when the shape of the fed nanoparticles changes from spherical to needle-like. The finding suggests that the toxicity of nanomaterials also depends on their shape. For example, positively charged gold NPs depolarized the cell membrane to the greatest extent, while NPs of other charges had a negligible effect [3]. Surface-chemistry-induced cytotoxicity had many different origins, including deactivation of biomolecules due to specific surface binding, non-specific protein binding and denaturation (i.e., beta-sheet formation) [4], membrane-perturbation-induced temperature/pH changes, and direct release of various toxins [5]. For instance, magnetic iron NPs coated with dendritic guanidines showed a cell-penetration ability similar to that of the human immunodeficiency virus-1 transactivator (HIV-TAT) peptide [6]. Reports on size-induced cytotoxicity were more complicated, as more than one material parameter was usually involved.
Nonetheless, some reports suggested that the size effect is directly linked to chemistry, owing to the different surface activity related to the specific surface area of small particles compared with their larger-sized counterparts [7]. Cytotoxicity is an important measure both in evaluating the impact of nanomaterials on public health and in developing them for various biomedical applications, such as drug delivery and bio-sensing. Many works in the literature have tried to establish the correlation between specific material parameters and cell physiological responses/viability, and most of them focused on the surface charge, chemistry, and size of the NPs. The cytotoxicity induced by NP surface charge was a result of Coulombic interaction, i.e., the negatively charged plasma membrane is attracted to positively charged NPs, which could cause membrane disruption and/or a proton pump effect [8]. In contrast, the effect of nanoparticle shape on cell responses was much less investigated, although there was strong clinical evidence that the shape of NPs has a significant impact on cellular fate (e.g., asbestosis) [13]. Relevant work included the toxicity studies of carbon nanotubes, which were found to induce significant cytotoxicity and were even claimed to be a 'new asbestos' [9]. Direct plasma membrane penetration, endosomal leakage, and nuclear translocation had been detected when CNTs were fed to various cell lines [10]. However, differences in the aspect ratio, complicated surface chemistry, and charge [12] of the examined CNT samples made it difficult to determine conclusively the origin of the cytotoxicity. Therefore, the question of whether NP shape, chemistry, charge, or a specific combination of all possible characteristics contributes to cytotoxicity remained open.
A few polymeric material systems had also been studied in this regard. For example, needle-shaped polystyrene particles with dimensions of 4.4 × 0.45 µm and 'blunt' ends were found to cause transient cell membrane disruption, although cell recovery was identified after 48 h. BSA-coated PLGA microneedles were found to enhance green fluorescent protein (GFP) knockdown in GFP-expressing endothelial cells after co-incubation with siRNA, a phenomenon that was much less significant when PLGA microspheres were employed [14]. Nonetheless, the nature of the shape effect was not fully understood. In the present work, we investigated the effect of the shape of poly(lactic-co-glycolic acid)-polyethylene glycol nanoparticles (PLGA-PEG NPs) on the physiological response of human cells. PLGA-PEG is a very attractive candidate in drug delivery, with features of controlled [18] and sustained [19] release, stealth [20], and targeting [21]. Here we engineered the PLGA-PEG NPs into spherical or needle-shaped morphologies. Needle-shaped NPs were formed by direct stretching of the as-synthesized spherical NPs in order to maintain the same volume, chemistry, and charge. When introduced to cells, the needle-shaped NPs were found to induce a series of physiological changes, which eventually led to significant cytotoxicity. The nature of the shape effect and its induced cytotoxicity pathway are discussed. The present work shows that the physiological response of cells can be very different when the shape of the fed nanoparticles changes from spherical to needle-like. The finding suggests that, in addition to the known material parameters such as composition, surface chemistry, and surface charge, shape is also an important parameter affecting the toxicity of nanomaterials.
PLGA is an FDA-approved material for biomedical applications owing to its biodegradability [16] and biocompatibility [17]. Scanning electron microscopy (SEM) images showed that the developed PLGA-PEG NPs had two distinct morphologies: nanospheres and nanoneedles (Fig.). The chemical composition of the PLGA-PEG copolymer was evaluated by 1H-NMR (Fig.). The methyl group (-CH3), with a peak at 1.66 ppm, was assigned as the reference for the lactic acid (LA) monomer. The methylene group (-CH2), with a peak at 4.85 ppm, served as the reference for the glycolic acid (GA) monomer. The presence of the methyl (-CH3) and methylene (-CH2) groups indicated the content of PLGA. In addition, the methylene group (-CH2), with a peak at 3.64 ppm, was the reference for the ethylene glycol (EG) monomer, which indicated successful polyethylene glycol (PEG) conjugation to PLGA. The surface chemistry of spherical- and needle-shaped PLGA-PEG NPs was studied by FTIR (Fig.). The carbonyl (C=O) stretch of the PLGA chain was observed, and bands between 1300 and 1150 cm−1 corresponded to the asymmetrical and symmetrical vibrations of the C-C(=O)-O in the polymer chain. The band at 3000 cm−1 was due to the stretching of the -CH3 group of LA, and the band at 2956 cm−1 was due to the stretching of the -CH2 group of GA. In the spectrum of pure PEG, the band at 2885 cm−1 originated from the -CH stretching of the methylene group in PEG. The in-plane C-H deformation from 1185 to 1090 cm−1 could also be observed. The spectra taken from both spherical- and needle-shaped PLGA-PEG NPs showed similar features. In particular, the -CH stretching band at 2885 cm−1 confirmed the incorporation of PEG in both types of NPs.
Zeta potential measurements of the PLGA-PEG NPs showed that both spherical and needle-shaped NPs were negatively charged, and the measured average zeta potential was almost the same for both shapes. Cells were fed with spherical and needle-shaped NPs at 50 µg/mL for 24 hours. NPs were internalized by the cells, as evidenced by the fluorescence signal (red) inside the cells (Fig.); most of the internalized NPs stayed in membrane-bounded vesicles. Different morphological evolution of the cells was identified after incubation with spherical- or needle-shaped NPs at all feeding concentrations, and this difference became more significant at higher feeding concentrations [22]. The abnormally enlarged lysosomes suggested possible membrane disruption in cells fed with needle-shaped NPs. We therefore examined the LDH release in the respective cell samples, as LDH serves as a common indicator of membrane damage. A possible consequence of lysosome membrane disruption is the activation of cleaved caspase 3 [24]. To check this, we quantitatively studied the cytosolic expression of cleaved caspase 3 in the corresponding cells (Fig.). The cytotoxicity was evaluated by MTT assay (Fig.). As a common method to detect DNA fragmentation resulting from apoptotic signaling cascades, the TUNEL assay was employed to further investigate the cytotoxicity and the possible DNA damage induced by PLGA-PEG NPs. DNA fragmentation was observed in cells fed with needle-shaped NPs over the whole concentration range tested (Fig.) [25]. It is therefore interesting to note that, even with a lower cellular uptake amount, the needle-shaped NPs induced much more significant cytotoxicity than their spherical counterparts.
Obvious physiological changes of the cells started from lysosomal membrane disruption, as evidenced by enlarged lysosomes, detectable LDH release, and enhanced activation of cleaved caspase 3, when the needle-shaped NPs were fed to them. Although both spherical- and needle-shaped PLGA-PEG NPs entered cells via endocytosis, the former showed a higher cellular uptake amount. Similar shape-dependent cellular uptake had been reported in the literature and was explained by the different membrane bending energies required for the entry of NPs with different shapes [26]. This hypothesis provides a possible origin for the observed lysosome membrane disruption in the case of needle-shaped NPs. It is important to note that the spherical- and needle-shaped NPs shared similar volume, surface chemistry, and charge, as the needle-shaped ones were obtained by stretching the spherical ones. The only difference between the two types of NPs was their shape; in particular, sharp ends were present in the needle-shaped, but not the spherical-shaped, NPs. It had been suggested that the local sharpness of NPs significantly affects their interaction with the lipid bilayer membrane: the energy penalty determines that nanodiamonds with a small radius of curvature at the end would "sink" into the plasma membrane, while those with a large radius of curvature stay above the membrane [27]. Nevertheless, such membrane rupture did not cause much cytotoxicity in the case of nanodiamond, mainly because the NDs' cytosolic escape took place at the early endosome stage, which usually causes little damage to the cell [27]. In comparison, the local sharpness of needle-shaped PLGA-PEG NPs failed to rupture the vesicle membrane inside the cells. This might be due to the significantly different stiffness of PLGA-PEG (in the range of 2–80 MPa [29]) from that of diamond (130 GPa [30]).
Mechanisms of how the stiffness of PLGA-PEG NPs affects their interaction with the vesicle membrane remain unclear and require further investigation. In fact, experimental evidence showed that nanodiamonds indeed cut through the vesicle membrane and were released into the cytoplasm shortly after their cellular entry [32]. The experimental results suggested the following cytotoxicity pathway (Fig.). The surface chemistry of the PLGA-PEG NPs was studied by FTIR. The surface charge of the PLGA-PEG NPs was evaluated in PBS by DLS. To verify the conjugation of Nile Red to PLGA-PEG NPs, the photoluminescence of the aqueous solution containing NPs was measured (Hitachi P7000). HepG2 cells were used in this study. The cells were cultured with Dulbecco's modified Eagle's medium and 10% fetal bovine serum. All PLGA-PEG NPs were sterilized by UV light for 15 min before use. Cells were seeded and incubated for 24 hours before the NPs were introduced. The feeding concentrations of the NPs were 25, 50, 100, and 250 µg/mL unless otherwise specified. For the confocal microscopy study, the NP-fed cells were washed with PBS twice to eliminate free NPs that had not been taken up by cells. 2 mL of DMEM containing 0.005% vol/vol LysoTracker Green (Invitrogen) was added to the dish, 1 hour of incubation was allowed, and the cells were then examined with a confocal microscope (Leica TCS SP5 II) with a 63× water-immersion objective lens. For the transmission electron microscopy study, the NP-fed cells were fixed using typical procedures [35] at the end of their incubation with NPs. Then, the cells were dehydrated in a graded series of ethanol and embedded in Spurr resin. The resin blocks were sectioned using an ultramicrotome (Leica), and the 90 nm thick sections were transferred onto TEM grids (Ted Pella Inc., USA). The grids were further stained with uranyl acetate and lead acetate solutions.
The cell samples were observed using TEM (FEI TS12). All biochemical assays started with seeding the cells in 96-well plates for 24 hours and then adding the NPs for another 24 hours of incubation. Cell membrane perturbation was studied by the LDH release assay, apoptotic toxicity was investigated by the caspase 3 activity assay, and the cytotoxicity of the cells was evaluated using the MTT assay. DNA fragmentation was studied using terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL; In Situ Cell Death Detection Kit, TMR red). Supplementary information"}
+{"text": "Data on the experience of endoscopic retrograde cholangiopancreatography (ERCP) in the management of pancreaticobiliary maljunction (PBM) are limited.A retrospective review of patients with PBM who underwent therapeutic ERCP at our endoscopy center between January 2008 and January 2016 was performed. Demographic, clinical, radiological, and endoscopic data were documented. Patients who underwent sphincterotomy were divided into a dilated group and an undilated group based on their common channel diameter.Sixty-three PBM patients underwent 74 ERCP procedures. The technical success rate was 97.3%. ERCP therapy significantly decreased the levels of elevated liver enzymes and bilirubin. After an average of 27 months of follow-up, 7 patients (11.1%) were lost. The overall effective rate of ERCP therapy was 60.7% (34/56). The decline in severity and frequency of abdominal pain was significant. Procedure-related complications were observed in 5 (6.8%) cases. Between the dilated group and the undilated group, no significant difference was observed in effective rate, adverse events, or follow-up results.ERCP can serve as a transitional step to stabilize PBM patients before definitive surgery. PBM patients with an undilated common channel could benefit from sphincterotomy as well as those with a dilated common channel. Pancreaticobiliary maljunction (PBM) is a congenital anomaly in which the pancreatic duct and bile duct join together and form a long common channel outside the duodenal wall. Nonetheless, data on the experience of ERCP in the management of PBM are limited to small series. From January 2008 to January 2016, consecutive patients with PBM who had undergone endoscopic therapy in the digestive endoscopy center of our hospital were included. Inclusion criteria: 1) PBM confirmed by ERCP; 2) ERCP treatment performed.
Exclusion criteria: 1) patients with primary sclerosing cholangitis, malignant disease, or prior liver transplantation; 2) therapeutic ERCP not performed. In patients who had repeated ERCP therapy, each ERCP was regarded as an independent case (i.e., number of cases > number of patients). Demographic, clinical, radiological, and endoscopic data of the included patients were obtained from medical records. Pre- and post-ERCP laboratory parameters, including serum alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), γ-glutamyl transpeptidase (γ-GT), total bilirubin (TBIL), direct bilirubin (DBIL), and amylase concentrations, were collected within seven days before and after the day of ERCP, respectively. Only patients with abnormal laboratory data before ERCP were analyzed. Informed consent was obtained after the risks and benefits of ERCP were explained to the patient and key family members. Subsequent radical surgery was recommended to all patients considering the risk of malignancy. All ERCPs were performed under general anesthesia with endotracheal intubation, in the prone position, with a duodenoscope, by two experienced endoscopists (BG and LKB). The JF240 duodenoscope was used for infants and children; the JF-260V or TJF-260V was used for adults. We first attempted common channel cannulation using a double-lumen sphincterotome and obtained an optimal image of the pancreaticobiliary junction. Once a definitive diagnosis of PBM was established, a sphincterotome preloaded with a guidewire was used for selective cannulation. The pre-cut method was applied in cases of failed cannulation. After successful cannulation of the common bile duct (CBD), we aspirated 10 ml of bile for measurement of biliary amylase. Endoscopic sphincterotomy (EST) was performed to help the bile and pancreatic juice flow freely into the duodenum. Epinephrine-containing icy saline was injected into the submucosa of the papilla to prevent post-EST hemorrhage.
We also used endoscopic hemoclip placement (EHP) to treat patients at high risk of post-EST bleeding. Strictures were dilated with biliary dilation catheters and/or balloons. Stones were extracted with baskets and/or balloons. Endoscopic papillary balloon dilation (EPBD) was applied if a large CBD stone was present. Finally, endoscopic nasobiliary/nasopancreatic drainage (ENBD/ENPD) or endoscopic retrograde biliary/pancreatic drainage (ERBD/ERPD) was performed to prevent complications when necessary. After ERCP, the American Society for Gastrointestinal Endoscopy (ASGE) grading system was used to grade the complexity of the ERCP procedures. The following parameters were assessed by phone calls and by searching the medical records for the period from the initial ERCP to the radical surgery (if available) or to the last follow-up: general condition, severity of abdominal pain, frequency of abdominal pain/pancreatitis, number of readmissions, and number of added ERCPs. The patient's condition before and after the index ERCP therapy was compared to evaluate the follow-up outcome. The diagnostic criteria for PBM were: (1) an abnormally long common channel and/or an abnormal union between the pancreatic and bile ducts must be evident on direct cholangiography; (2) in cases with a relatively short common channel, it is necessary to confirm by direct cholangiography that the effect of the papillary sphincter does not extend to the junction; (3) abnormally high levels of pancreatic enzymes in the bile duct and/or the gallbladder serve as an auxiliary diagnosis. Categorical variables were compared with the χ2 test with Yates correction or with Fisher's exact test. Statistical significance was defined as p < 0.05 (two-tailed). The study protocol was approved by the Institutional Review Board of our hospital. SPSS Statistics 18.0 software was used.
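The diagnostic criteria above can be expressed as a small triage helper. This is only an illustrative sketch: the function name is hypothetical, and the 10,000 IU/L amylase cutoff is an assumption drawn from this series rather than part of the formal criteria, which say only "abnormally high".

```python
def assess_pbm(long_common_channel, sphincter_excludes_junction,
               biliary_amylase_iu_l=None, amylase_cutoff=10_000):
    """Rough triage of the PBM diagnostic criteria (illustrative only).

    long_common_channel:         criterion 1, abnormally long common channel
                                 or abnormal union on direct cholangiography
    sphincter_excludes_junction: criterion 2, for short common channels,
                                 sphincter action shown not to reach the junction
    biliary_amylase_iu_l:        criterion 3 (auxiliary), bile amylase; the
                                 cutoff is an assumed value, not a standard
    """
    primary = long_common_channel or sphincter_excludes_junction
    auxiliary = (biliary_amylase_iu_l is not None
                 and biliary_amylase_iu_l > amylase_cutoff)
    if primary and auxiliary:
        return "PBM: primary criterion met, supported by biliary amylase"
    if primary:
        return "PBM: primary criterion met"
    if auxiliary:
        return "elevated biliary amylase only: not diagnostic by itself"
    return "criteria not met"
```

Note the design choice mirrors the text: cholangiographic findings are decisive, while biliary amylase is only supportive.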
Categorical variables were expressed as frequencies and percentages, and continuous variables as means with standard deviation (SD) or range. The Wilcoxon signed-rank test (for paired samples) and the Mann-Whitney test (for unpaired samples) were used for the comparison of continuous data. A total of 63 PBM patients underwent 74 ERCP treatments (n = 74 cases). Forty-five (71.4%) patients were female. The mean age was 24 (range, 1–82) years; 38 patients (60.3%) were under 18 years of age, of whom 37 were under 12 and 1 was an adolescent (aged 13–17 years). The main symptoms (one case may involve one or more symptoms) included abdominal pain, vomiting, jaundice, and fever. The mean duration of symptoms before treatment was 2.5 months (range, 4 days to 20 years). Indications for ERCP (one case may involve one or more indications) were pancreatitis, pancreaticobiliary calculi, biliary obstruction, and stent migration. Stones or protein plugs were located in the common channel (n = 13) or in the pancreaticobiliary ducts adjacent to the common channel (n = 8). Eighteen cases had extrahepatic bile duct stones and 9 had pancreatic stones. Three patients had pancreas divisum (2 incomplete and 1 complete), 1 patient had a low confluence of the cystic duct and CBD, and 6 patients had chronic pancreatitis. The mean level of biliary amylase in the 23 patients whose bile was obtained was 55,716 (range, 1,222 to 353,269) IU/L. Only two patients' biliary amylase concentrations were under 10,000 IU/L; both were adults, one with severe jaundice and the other with prior EST treatment. ERCP showed Komi type a PBM in 36 patients (Table). The complexity of the procedures was graded as 1st level in 17 (23.0%), 2nd level in 29 (39.2%), 3rd level in 23 (31.1%), and 4th level in 5 (6.7%). Therapeutic ERCPs were performed in all 74 cases (range, 1 to 3 times per patient). The technical success rate was 97.3% (72/74).
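For the 2×2 group comparisons, the Yates-corrected χ² statistic mentioned in the statistical methods can be computed directly in pure Python; the counts in the example below are hypothetical, not taken from the study.

```python
def yates_chi2(a, b, c, d):
    """Yates-corrected chi-square statistic for a 2x2 table [[a, b], [c, d]]
    (e.g. complication vs. no complication in two patient groups)."""
    n = a + b + c + d
    # continuity correction: subtract n/2 from |ad - bc|, floored at zero
    num = n * max(abs(a * d - b * c) - n / 2.0, 0.0) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    if den == 0:
        raise ValueError("degenerate table: a zero marginal total")
    return num / den
```

Comparing the statistic against the critical value 3.84 (df = 1, p = 0.05) reproduces the usual significance decision for a two-tailed test on a 2×2 table.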
Pancreatic stones could not be extracted in two cases because of a complicated pancreaticobiliary ductal union. Therapeutic procedures included pre-cut in 1 case (1.4%), EST in 57 cases, EHP in 4 cases (5.4%), EPBD in 11 cases (14.9%), stricture dilation in 5 cases (6.8%), stone extraction in 46 cases (62.2%), ERBD in 15 cases, ERPD in 4 cases, ENBD in 48 cases (64.9%), and ENPD in 3 cases (4.1%). The levels of serum ALT (p < 0.001), AST, ALP, γ-GT, TBIL, and DBIL were remarkably decreased after ERCP therapy (Table); serum amylase did not change significantly (p = 0.6). The mean (± SD) duration of hospitalization after ERCP was 6.3 (± 2.8) days. During follow-up, 7 patients (11.1%) were lost. Of the remaining 56 patients, 12 (21.4%) had radical surgery after an average of 13 months (range, 14 days to 65 months). Two patients (3.6%) developed gallbladder carcinoma at follow-up. Both were women in their sixties without congenital biliary dilatation. One had type a PBM, whose cancer was discovered 3 years after ERCP; the other had type b PBM, whose cancer was discovered during the prophylactic cholecystectomy 3 months after ERCP. Endoscopic intervention resulted in a significant decline in the severity (p < 0.001) and frequency of abdominal pain, but no significant change in the frequency of pancreatitis was observed. Fifty patients (89.3%) reported that their general condition after the index ERCP was excellent or better, 5 (8.9%) reported it was the same, and 1 (1.8%) reported it was worse. Eighteen patients (32.1%) required readmission (range, 1 to 3 times per patient) and 10 patients (17.9%) underwent added ERCP (range, 1 to 2 times per patient). The follow-up results are detailed in the Table. The total effective rate of ERCP therapy in PBM was 60.7% (34/56); among these, 7 patients underwent radical operation in sound preoperative condition, and 27 patients without surgery were still in good clinical condition.
Regarding adverse events (AEs), 3 cases (4.1%) were diagnosed with moderate post-ERCP pancreatitis (PEP) and 2 (2.7%) with mild hemorrhage. All AEs resolved completely with conservative treatment; no patient had to be taken to the intensive care unit, and hospital mortality was zero. In our study, 56 patients underwent 57 EST procedures, and 6 were excluded from the analysis due to loss to follow-up. We therefore categorized 50 patients into two groups according to whether their common channels were dilated or not: the dilated group (n = 20; mean [range] age, 25 [1–78] years; 15 women [75.0%]) and the undilated group. There was no statistical difference between the two groups in baseline data. After ERCP therapy, no difference in the severity or frequency of abdominal pain was identified between the two groups, and the frequency of pancreatitis did not differ either. No difference was seen in effective rate, general condition, number of readmissions, or number of added ERCPs. Regarding adverse events, no statistical difference was observed between the two groups. The results of the further analysis are detailed in the Table. In our series, a higher percentage (60.3%) of the patients were pediatric. The main pre-ERCP complications were pancreaticobiliary calculi, biliary obstruction, and pancreatitis. Regarding Komi's classification of PBM, type a was more frequent than the other two types. Congenital biliary dilatation was present in 54% of the PBM patients, and Todani types Ia, Ic, and IV-A were detected most commonly. The characteristics of our series were compared with those in the survey carried out by the JSPBM in Japan. Endoscopic treatments for PBM primarily include EST, stent insertion, and ENBD/ENPD. If pancreaticobiliary stones or protein plugs are detected, stone extraction is needed. Published literature on the use of ERCP in PBM is limited to small series; Ng et al. first reported endoscopic therapy in this setting.
Our study showed that ERCP was an effective treatment option for PBM patients accompanied by biliary obstruction. Patients benefitted from ERCP with a significant decline in the levels of elevated liver enzymes and bilirubin. However, a trend towards an increase in the level of serum amylase was observed; this could be related to asymptomatic hyperamylasemia, a common condition with a reported frequency of 16.5% to 18.3% after ERCP [21]. It is reported that the overall incidence of biliary carcinoma in patients with PBM is more than 200 times higher than the risk in the general population. Preoperative ERCP may benefit PBM patients in the following four ways: (1) ERCP provides detailed information on the pancreaticobiliary system, rules out other possible pancreaticobiliary anomalies, and helps to decide on the appropriate surgical strategy; (2) a better physical status is achieved preoperatively, since biliary obstruction can be treated by ERCP; (3) recurrent pain and pancreatitis attacks among PBM patients may be attributed to sphincter of Oddi dysfunction and protein plug incarceration, and ERCP is an ideal method to resolve these problems; (4) for PBM patients without congenital biliary dilatation, a sphincterotomy before cholecystectomy may help reduce the risk of malignancy. According to the ASGE ERCP difficulty grading system, 37.8% (28/74) of our ERCP procedures were evaluated as 3rd or 4th level. This was because our series contained a large proportion of children and of patients who needed interventions for pancreatic diseases. As difficulty increases, the technical success rate decreases and the complication rate increases. Our technical success rate was 97.3%; two cases failed because of a complicated pancreaticobiliary ductal union. The overall frequency of PEP was 4.1%, higher than the 2.6% reported in unselected series, which might be attributable to the higher procedural difficulty of our series. Our study had several limitations.
Firstly, the retrospective and single-centred design increased the likelihood of recall bias and selection bias. Secondly, parameters in our study such as general condition and pain situation were subjective. Thirdly, the small number of patients who underwent EST may limit the reliability of the statistical analysis. In conclusion, ERCP can serve as a transitional step to definitive surgery for PBM patients. It can guarantee pancreaticobiliary drainage and relieve clinical symptoms, with only a low incidence of mild complications. PBM patients with an undilated common channel could benefit from EST as well as patients with a dilated common channel. Further studies with greater sample sizes are warranted."}
+{"text": "Structural analysis of this variant revealed an altered RNA structure that facilitates the interaction with SRSF3, an SR protein family member that promotes pri-miRNA processing. Our results are compatible with a model whereby a genetic variant in pri-mir-30c-1 leads to a secondary RNA structure rearrangement that facilitates binding of SRSF3 resulting in increased levels of miR-30c. These data highlight that primary sequence determinants and RNA structure are key regulators of miRNA biogenesis. MiRNA biogenesis is highly regulated at the post-transcriptional level; however, the role of sequence and secondary RNA structure in this process has not been extensively studied. A single G to A substitution present in the terminal loop of pri-mir-30c-1 in breast and gastric cancer patients had been previously described to result in increased levels of mature miRNA. Here, we report that this genetic variant directly affects Drosha-mediated processing of pri-mir-30c-1. A single variant in mir-30c-1 found in breast and gastric cancer patients leads to increased levels of mature miRNA. Here the authors show that this variant alters the RNA structure of this pri-miRNA leading to enhanced binding of SRSF3 and increased Drosha-mediated processing. MicroRNAs (miRNAs) are short non-coding RNAs that negatively regulate the expression of a large proportion of cellular mRNAs, thus affecting a multitude of cellular and developmental pathways. Due to the central role of miRNAs in the control of gene expression, their levels must be tightly controlled. 
As such, dysregulation of miRNA expression has been shown to result in grossly aberrant gene expression and lead to human disease. Several studies have shown that there is a correlation between the presence of polymorphisms in pri-miRNAs and the corresponding levels of mature miRNAs. Here, we investigate the mechanism by which the pri-mir-30c-1 variant detected in breast and gastric cancer patients results in an increased expression of this miRNA. We found that this genetic variant directly affects the microprocessor-mediated processing of this miRNA. A combination of structural analysis with RNA chromatography coupled to mass spectrometry revealed changes in the pri-miRNA structure that lead to differential binding of a protein factor, SRSF3, that has been previously reported to act as a miRNA biogenesis factor. These results provide a mechanism by which the pri-mir-30c-1 genetic variant results in an increased expression of the mature miR-30c. Altogether these data highlight that primary sequence as well as RNA structure have a crucial role in the post-transcriptional regulation of miRNA biogenesis. We next examined whether the G27-to-A mutation observed in a Chinese population might affect miRNA biogenesis. It was previously shown that this substitution results in an increase in the abundance of the mature miRNA; however, the mechanism that leads to this increased expression is unknown. We found that in vitro-transcribed pri-mir-30c-1 was readily processed in an in vitro reaction in the presence of MCF7 total extracts, rendering a product of \u223c65\u2009nucleotides that corresponds to pre-mir-30c. Notably, the processing of the G/A variant was increased, when compared to the WT version, as was observed in living cells. 
To examine the role of the G27-to-A substitution in RNA structure, we performed structural analysis by selective 2\u2032-hydroxyl acylation analysed by primer extension (SHAPE). In vitro-transcribed RNA comprising 380 nucleotides of pri-mir-30c-1 (either the WT or the G/A variant sequence) was treated with N-methylisatoic anhydride (NMIA), which reacts with the 2\u2032-hydroxyl group of flexible nucleotides. Buried regions are those with more than two consecutive nucleotides having reactivity (R) smaller than the mean of all reactivities, whereas exposed regions are those with more than two consecutive nucleotides having R larger than the mean of all reactivities. We observed that the WT sequence presents two buried regions located between nucleotides +8 to +40 and +9 to +25, as well as two exposed segments between nucleotides \u221225 to +7 and nucleotides +40 to +8 behaved similarly, exclusively affecting the G/A variant. Importantly, we were also able to show that a knock-down of SRSF3 expression affects the processing of the pri-mir-30c-1 G/A variant, as expected; yet it does not compromise the processing of a pri-mir-30c-1 G/A variant that lacks the CNNC motif , or those with tumour suppressor functions, is often dysregulated in cancer. Despite a more comprehensive knowledge of the role of RNA-BPs in the post-transcriptional regulation of miRNA production, there is only circumstantial evidence on how RNA sequence variation and RNA structure impact miRNA processing. There are several reports showing that a single-nucleotide substitution in the sequence of pre-miRNAs can have a profound effect on their biogenesis. Nonetheless, there is limited information about single-nucleotide polymorphisms (SNPs) in the TL region of pri-miRNAs. A bioinformatic approach led to the identification of 32 such SNPs in 21 miRNA loop regions of human miRNAs, including the G27-to-A variant that was found in breast cancer and gastric cancer patients and leads to increased expression of miR-30c . The G27-to-A mutation was generated by a two-step PCR strategy. 
First, pri-mir-30c-1 was amplified with the 30Cmut1 oligo (5\u2032-CCTTGAGCTTACAGCTGAGAG-3\u2032) and 30c1s, and with the 30Cmut2 oligo (5\u2032-CTCTCAGCTGTAAGCTCAAGG-3\u2032) and 30c1a. Both PCR products were purified (Qiagen), pooled and used as a template for amplification with the 30c1s and 30c1a primers. The resulting PCR product was cloned in pGEMt (Promega). The pGEMt G/A plasmid was digested with EcoRI for cloning into pCDNA3.1. The CNNC motifs were subjected to site-specific mutagenesis by PCR amplification. The products were digested with EcoRI (New England Biolabs), purified by agarose gel electrophoresis and ligated to the large EcoRI fragments of pri-mir-30c-1 to produce the desired constructs . All the sequences were confirmed by automatic sequencing. A list of oligonucleotide sequences used in this study is presented as a Supplementary Table. A pri-mir-30c-1 construct was amplified from human genomic DNA by PCR with specific primers 30c1s (5\u2032-CAAGTGGTTCTGTGTTTTTATTG-3\u2032) and 30c1a (5\u2032-GTACTTAGCCACAGAAGCGCA-3\u2032). The PCR product was digested. MCF7 and HEK 293T cells were grown in high glucose Dulbecco's modified Eagle's medium (Invitrogen) supplemented with 10% (v/v) fetal calf serum (Invitrogen) and penicillin-streptomycin (Invitrogen) and incubated at 37\u2009\u00b0C in the presence of 5% CO2. Cells were tested for mycoplasma contamination. MCF7 cells grown in 24-well plates were transfected with either pri-mir-30c-1 (WT or G/A) constructs, in vitro-transcribed RNA (0.33\u2009\u03bcg per 105 cells) or oligonucleotides encoding pre-mir-30c (Sigma Aldrich) (5\u2032-UGUAAACAUCCUACACUCUCAGCUGUGAGCUCAAGGUGGCUGGGAGAGGGUUGUUUACUCC-3\u2032) using Attractene (Qiagen), following the manufacturer's instructions. pCDNA-3, pri-miR-30a or oligo30a (5\u2032-UGUAAACAUCCUCGACUGGAAGCUGUGAAGCCACAAAUGGGCUUUCAGUCGGAUGUUUGCAGC-3\u2032) were used as negative controls in DNA or RNA transfections, respectively. Cell extracts were prepared at 8 or 48\u2009h after RNA/DNA addition by direct lysis using 100\u2009\u03bcl of lysis buffer . For SRSF3 gene silencing/overexpression, MCF7 cells grown in 15\u2009cm dishes were transfected with ON-TARGETplus siRNA (Dharmacon) or pCG. As negative controls, cells were transfected with ON-TARGETplus Non-targeting siRNAs or the empty pCG plasmid, respectively. Cells were split into 24-well dishes 24\u2009h after transfection and 24\u2009h later transfected with different versions of the pri-miRNA constructs. HEK 293T cells were grown to 70% confluency in 6-well plates and then transiently co-transfected with 3\u2009\u03bcg of FLAG-Drosha and 1\u2009\u03bcg of FLAG-DGCR8, or 4\u2009\u03bcg of FLAG-empty vector per well. Cells were expanded for 24\u2009h, then split to 10\u2009cm plates and expanded for a further 24\u2009h before cells were scraped, collected and snap frozen until required. Equal amounts of total protein (determined by Bradford assay) were loaded on 12% NUPAGE gels (Invitrogen) and transferred to cellulose membranes using the IBLOT system (Invitrogen). Identification of SRSF3 was performed with a rabbit polyclonal antibody (dilution 1:500), followed by a secondary horseradish peroxidase-conjugated antibody and ECL detection (Pierce). Other primary antibodies used in this study were: mouse monoclonal anti-PARP-1 antibody ((E-8): sc-74469, Santa Cruz Biotechnology, dilution 1:500); rabbit polyclonal anti-DDX17 antibody ((S-17): sc-86409, Santa Cruz Biotechnology, dilution 1:500); rabbit polyclonal anti-hnRNP A1 antibody ; rabbit polyclonal anti-TLS/FUS antibody . Plasmid templates were linearized with ApaI (New England Biolabs). In addition, shorter pri-mir-30c-1 probes were PCR amplified from pri-mir-30c-1 plasmids for in vitro processing assays; in vitro transcription generated RNAs of 153 nucleotides (CNNC) or 146 nucleotides (\u0394CNNC). Templates were subsequently PCR purified (Qiagen) for in vitro transcription. 
Before RNA synthesis, pri-mir-30c-1 plasmids (WT and G/A variant) were linearized. An aliquot of 1\u2009\u03bcg of DNA or 400\u2009ng of PCR product was in vitro transcribed for 1\u20132\u2009h at 37\u2009\u00b0C using 50\u2009U of T7 RNA polymerase (Roche) in the presence of 0.5\u2009mM ribonucleoside tri-phosphates (rNTPs) and 20\u2009U of RNAsin (Promega). When needed, RNA transcripts were labelled using (\u03b1-32P)-UTP and, following DNAse treatment, unincorporated 32P-UTP was eliminated by exclusion chromatography in TE-equilibrated columns or PAGE gel purification, followed by phenol/chloroform extraction and ethanol precipitation. Purification of SRSF3 from MCF7 cells was performed following transient expression of epitope-tagged SRSF3 . Briefly, cell lysates were centrifuged for 20\u2009min at 4\u2009\u00b0C. After centrifugation the supernatant was loaded onto a chromatography column (Biorad) previously prepared with T7 Tag antibody Agarose (Novagen). The flow-through was collected and loaded a second time. The column was washed two times with 10\u2009ml of lysis buffer. Bound protein was then eluted with 10 serial 0.8\u2009ml volumes of elution buffer and collected in microcentrifuge tubes containing 200\u2009\u03bcl of 1\u2009M Tris pH 8.8, mixed and stored at 4\u2009\u00b0C. The fractions were analysed by SDS\u2013PAGE (Invitrogen) followed by Coomassie blue staining. The eluates containing the protein were dialyzed overnight against BC100 buffer , and stored in aliquots at \u221280\u2009\u00b0C. For processing assays, in vitro-transcribed pri-mir-30c-1 was incubated with 650\u2009\u03bcg of MCF7 total cell extract; where indicated, in vitro processing reactions were supplemented with increasing concentrations of immunopurified T7-SRSF3. In vitro processing reactions were performed in the presence of buffer A (0.5\u2009mM adenosine triphosphate (ATP), 20\u2009mM creatine phosphate and 6.4\u2009mM MgCl2). Reactions were incubated for 1\u2009h at 37\u2009\u00b0C and treated with proteinase K. 
RNA was extracted by phenol/chloroform extraction and ethanol precipitation. Samples were resolved in an 8% 1 \u00d7 TBE polyacrylamide urea gel. An uncropped scan of the corresponding experiment is provided in the Supplementary Information. One-step qRT-PCR was used to calculate pre-miR-30c-1 levels. Specifically, 300\u2009ng of total RNA was used with the SuperScript III Platinum SYBR Green One-Step qRT-PCR Kit on a CFX96 real-time system. Primers located within the precursor sequence of miR-30c or within the primary sequence of miR-30c were used to calculate pre-miR-30c levels. Lysates from pri-mir-30c-1 (WT and G/A) transfected cells were pre-cleared with mouse IgG beads followed by incubation with a polyclonal rabbit anti-SRSF3 antibody (MBL). The complexes were pulled down using protein G beads (Amersham), then treated with proteinase K (Sigma) and RNA was extracted and purified using Trizol (Invitrogen). An alignment of 98 vertebrate sequences of pri-mir-30c-1 was retrieved from the UCSC genome browser. The frequency of each nucleotide was plotted in a heat map. Evolutionary constraint was quantified for individual nucleotide positions as the number of rejected substitutions, as calculated by the GERP++ algorithm. These data were extracted from the UCSC genome browser, where they had been calculated over 36-way mammalian genome alignments. RNA structure models were generated using the RNAstructure web server (http://rna.urmc.rochester.edu/RNAstructureWeb/). Pri-mir-30c-1 RNA was treated with NMIA (Invitrogen) as the modifying agent, using 170\u2009nM RNA in the presence of increasing amounts of purified SRSF3 protein (260 and 500\u2009nM). Then, RNA alone or pre-incubated with SRSF3 was treated with NMIA. RNA was phenol extracted and ethanol precipitated and then subjected to primer extension analysis. Samples were incubated with 1\u2009\u03bcl of the Fe(II)\u2013EDTA complex, 1\u2009\u03bcl of sodium ascorbate and 1\u2009\u03bcl of hydrogen peroxide for 30\u2009s at 37\u2009\u00b0C. 
Fe(II)\u2013EDTA (7.5\u2009mM Fe(SO4)2(NH4)2\u00b76H2O and 11.25\u2009mM EDTA, pH 8.0), 0.3% hydrogen peroxide and 150\u2009mM sodium ascorbate solutions were freshly prepared. As a control, a reaction lacking Fe(II)\u2013EDTA was performed. RNA was precipitated with glycogen, 1\u2009\u03bcl of 3\u2009M NaCl, 2\u2009\u03bcl of 0.5\u2009M EDTA and 2.5 volumes of ice-cold ethanol. RNAs were re-suspended and reverse-transcribed using fluorescent primers as described for SHAPE reactivity. cDNA products were resolved by capillary electrophoresis. Pri-mir-30c-1 RNA was subjected to hydroxyl radical footprinting. Briefly, 1.7\u2009pmol of RNA was denatured and folded in folding buffer. Buried regions are those with more than two consecutive nucleotides having R smaller than the mean of all reactivities, whereas exposed regions are those with more than two consecutive nucleotides having R larger than the mean of all reactivities. SHAPE electropherograms of each RT were analysed using the QuShape software. Pri-mir-30c-1:SRSF3 complexes were assembled as described for SHAPE analysis. After protein incubation, samples were subsequently subjected to primer extension using the fluorescent primer (NED) 5\u2032-CTAGATGCATGCTCGAGCG-3\u2032. RNase-assisted RNA chromatography with RNase A/T1 was performed using in vitro-transcribed pri-mir-30c-1 and total MCF7 cell extracts. The data that support the findings of this study are available from the corresponding author upon reasonable request. How to cite this article: Fernandez, N. et al. Genetic variation and RNA structure regulate microRNA biogenesis. Nat. Commun. 8, 15114 doi: 10.1038/ncomms15114 (2017). Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Supplementary Figures and Supplementary Table"}
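The buried/exposed classification rule stated for the SHAPE analysis above (runs of more than two consecutive nucleotides with reactivity below or above the mean) can be sketched as follows; the function name and this plain-Python implementation are illustrative assumptions, not the authors' actual QuShape pipeline:

```python
def classify_regions(reactivities, min_run=3):
    """Label each nucleotide 'buried', 'exposed', or 'undetermined'.

    Buried: part of a run of more than two consecutive nucleotides with
    reactivity R below the mean; exposed: a run of more than two
    consecutive nucleotides with R above the mean (the rule in the text).
    """
    mean_r = sum(reactivities) / len(reactivities)
    labels = ["undetermined"] * len(reactivities)
    i = 0
    while i < len(reactivities):
        if reactivities[i] == mean_r:
            i += 1
            continue
        side = "buried" if reactivities[i] < mean_r else "exposed"
        j = i
        # extend the run while nucleotides stay on the same side of the mean
        while j < len(reactivities) and reactivities[j] != mean_r and (
            (reactivities[j] < mean_r) == (side == "buried")
        ):
            j += 1
        if j - i >= min_run:  # "more than two consecutive nucleotides"
            labels[i:j] = [side] * (j - i)
        i = j
    return labels
```

Isolated nucleotides above or below the mean stay "undetermined", mirroring the requirement that both region types be defined by consecutive runs.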
+{"text": "Spherical aerogel particles with narrow particle size distributions in the range of 400 to 1500 \u00b5m and a specific surface area of around 500 m2/g are produced. Overall, it can be concluded that the jet cutting method is suitable for aerogel particle production, although the shape of the particles is not perfectly spherical in all cases. However, parameter adjustment might lead to even better shaped particles in further work. Moreover, the biopolymer-based aerogel particles synthesized in this study are tested as humidity absorbers in drying units for home appliances, particularly for dishwashers. It has been shown that over several cycles of absorption and desorption of humidity, the aerogel particles are stable, with an absorption capacity of around 20 wt. %. The aim of this work is to develop a method to produce spherical biopolymer-based aerogel particles which is capable of scale-up in the future. Therefore, the jet cutting method is suggested. Amidated pectin, sodium alginate, and chitosan are used as precursors (a 1\u20133 wt. % solution) for particle production via jet cutting. Gelation is realized via two methods: the internal setting method and the diffusion method . Gel particles are subjected to solvent exchange to ethanol and subsequent supercritical drying with CO2. Biopolymer-based aerogels have become increasingly important for various applications in foods, pharmaceuticals, tissue engineering, catalysis, and cosmetics in the last decades ,2,3,4,5. To produce aerogel particles or beads of different sizes, dripping and emulsion gelation methods are the most promising approaches. In the emulsion gelation method, the aqueous biopolymer solution is dispersed in an oil phase. Resulting emulsions are stabilized by using surfactants. Subsequently, gelation of the droplets (dispersed phase of the emulsion) is induced by addition of a solution of the gelling agent . 
Particle sizes of a few hundred microns up to several millimeters can be achieved with different variations of the dripping method, such as simple dropping and vibrated dropping . So far, the production of biopolymer aerogel particles has been realized only as a batch process on a small scale, and no scale-up approaches have been demonstrated. However, to enable industrial applications, a new and scalable production technology for aerogel particles is required. Whereas, for the production of small aerogel particles (below 500 \u03bcm), the emulsion\u2013gelation method described above seems to be promising ,11 for larger particles the jet cutting method (https://www.genialab.com/production/) is suggested. Particle sizes can be set between a few hundred microns and up to several millimeters with different parameters, such as nozzle diameters, jet velocities, cylinder ratios and cutting frequencies. In the jet cutting method, a liquid jet of biopolymer solution is cut by a rotating cutting disc. The resulting liquid cylinders fall along their trajectory to the ground. During falling, surface tension reshapes the cylinders into spheres, and the liquid drops are collected at the end of the particle trajectory. So far, the jet cutting method has been used for the production of inorganic and organic spherical beads for diverse pharmaceutical, agricultural, cosmetic and cleaning applications. Calcium carbonate was kindly provided by Magnesia GmbH, L\u00fcneburg, Germany. Sodium hydroxide (NaOH), citric acid, and calcium chloride (CaCl2) were purchased from Th. Geyer GmbH & Co. KG, Lohmar, Germany. Denatured ethanol 99.8% and pure ethanol 99.5% were obtained from Carl Roth GmbH and Co. KG, Karlsruhe, Germany, and carbon dioxide (CO2) with a purity of 99.9% was supplied by AGA Gas GmbH, Hamburg, Germany. Sodium alginate (Hydagen 558P) was provided by BASF SE, Ludwigshafen, Germany, and sodium alginate (BioChemica A3249) was purchased from AppliChem GmbH, Darmstadt, Germany. Glacial acetic acid was purchased from VWR Chemical, Langenfeld, Germany. 
Shrimp chitosan was purchased from Sigma Aldrich, Taufkirchen, Germany. All chemicals were used as received. Deionized water was used throughout the study. Amidated pectin (degree of esterification (DE) 29%, degree of amidation (DA) 21%) was kindly provided by Herbstreith und Fox KG, Neuenb\u00fcrg/W\u00fcrtt., Germany. Pectin stock solutions were prepared by dissolution of the biopolymer in deionized water using magnetic stirring at room temperature to reach different concentrations (1\u20133 wt. %). Part of the pectin solutions was further neutralized with 0.5 M sodium hydroxide to pH 7, and 0.18 g CaCO3 per 1 g alginate were added and stirred until a homogeneous dispersion was achieved. Alginate stock solutions of 1\u20133 wt. % were prepared by dissolution of alginate powder in deionized water. Stirring was performed with a dissolver stirring tool mounted on a high torque stirrer . For experiments with the internal setting method, 0.37 g CaCO3 were added. Chitosan stock solutions of 3 wt. % were prepared by dissolution of chitosan in 3 wt. % acetic acid with mechanical stirring. Hydrogel particles were produced with the JetCutter Type S from geniaLab GmbH, Braunschweig, Germany. The extrusion of the biopolymer solution through the nozzle was driven by compressed air from the house supply line. A schematic drawing of the jet cutting process is shown in . After cutting, liquid cylinders fell downwards into the collection bath. During falling, the cylinders formed spherical droplets due to the surface tension of the biopolymer solution. A suitable collection bath was placed at a distance of around 50 or 80 cm below the cutting tool. The composition of the gelation bath was adjusted to enable the gelation of the polysaccharide solutions. The gelation methods and gelation baths used for the biopolymer solutions are shown in . The volume of the gelation bath was at least five times the total volume of the processed biopolymer solution to enable good stirring and particle separation. 
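The stock solutions above are specified in wt. %; a trivial helper for turning a target concentration into weighed-in masses can make the recipe explicit (illustrative only, not a procedure from the paper):

```python
def masses_for_wt_percent(total_mass_g, wt_percent):
    """Polymer mass and water mass for a solution of the given total
    mass and concentration in wt. % (polymer mass per total solution
    mass). Helper is an assumption for illustration, not from the text."""
    polymer_g = total_mass_g * wt_percent / 100.0
    water_g = total_mass_g - polymer_g
    return polymer_g, water_g

# Example: 200 g of a 3 wt. % stock requires 6 g polymer in 194 g water.
```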
After finishing the jet cutting, the content of the collection baths was stirred with a magnetic bar for at least ten more minutes to ensure complete gelation of the particles and to avoid agglomeration. Gelled particles were removed from the gelation bath by filtering. To avoid any loss of particles during collection, filter mesh sizes below the nozzle diameters were chosen. Collected particles were transferred to the solvent exchange. Two ways of solvent exchange (water to ethanol) were performed on collected particles: (1) direct solvent exchange to 100 wt. % ethanol, and (2) stepwise solvent exchange until a final ethanol concentration of 98 wt. % (respectively 98 vol. %) was reached inside the particles. Further, for some samples, an additional washing step with deionized water was performed before starting the solvent exchange to remove remaining components of the gelation bath from the particles, thus avoiding agglomeration during particle collection. After solvent exchange, particles were taken out from the ethanol bath and packed into a filter paper; supercritical drying with CO2 was performed in an autoclave at a constant temperature of 60 \u00b0C and a pressure of 120 bar. A continuous flow of CO2 was maintained until the ethanol was completely extracted. Afterwards, slow depressurization of the autoclave (1\u20132 bar/min) was performed. Dried particles were collected from the autoclave and stored in sealed boxes until analysis. Specific surface areas of aerogel beads were measured via low temperature nitrogen adsorption/desorption (BET). Investigation of the inner structure of the resulting aerogels and particle size determination were done via scanning electron microscope (SEM) analysis . Intact and cut particles were used for the study of the outer and inner structure and the particle size. Cutting of particles was done with a scalpel. Prior to analysis, all samples were sputtered with a thin layer (a few nanometers) of gold to avoid electrostatic charging during measurements. 
Samples prepared in this way were studied with the SEM under high vacuum at an accelerating voltage of 3 kV and magnifications between 1,000- and 100,000-fold. The shape and size of the obtained hydrogel and aerogel particles and beads were examined with an optical microscope (VisiScope TL384H) from VWR International GmbH, Darmstadt, Germany. The humidity absorption/desorption capacity of the aerogel particles was tested at laboratory scale. For the determination of absorption capacities, aerogel samples were kept in a humidity chamber under two conditions: (a) 40 \u00b0C and 100% relative humidity (RH) and (b) 27 \u00b0C and 80% RH, for three hours. Samples were then dried at 60 \u00b0C overnight, and the weight of the aerogel samples was determined after the absorption step and the drying step, respectively. The absorption capacity in percentage was calculated from these weights. In this work, the jet cutting method was combined with two gelation mechanisms: the diffusion and internal setting methods (as described in the introduction). For both methods, the gelation in the collecting bath needs to be optimized, so that the droplets can form a gel without agglomeration or coalescence with other droplets/particles. Therefore, one of the most crucial parameters of the process is the gelation time. A high gelation rate might lead to immediate gelation after the first contact of solution droplets with the gelation bath. Additionally, a low surface tension of the droplets might cause strong deformation when they hit the surface of the gelation bath. Both factors might lead to a situation in which a droplet deformed by the impact on the surface of the bath's liquid is \u201cfrozen\u201d immediately by fast gelation. On the other hand, an inappropriately long gelation time might cause deformation of liquid droplets or their coalescence during stirring in the gelation bath, thus leading to non-uniformity of particle size and shape. 
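The exact absorption-capacity formula did not survive extraction of this text; as an assumption, the standard gravimetric definition (mass of absorbed water relative to the dry aerogel mass) would read:

```python
def absorption_capacity_wt_percent(m_after_absorption_g, m_dry_g):
    """Absorption capacity in wt. %: water taken up during the humidity
    step relative to the dry aerogel mass. This is an assumed standard
    gravimetric definition; the paper's own formula is not shown here."""
    return (m_after_absorption_g - m_dry_g) / m_dry_g * 100.0

# Example: 1.20 g after humidity exposure and 1.00 g after drying gives
# an absorption capacity of about 20 wt. %, in line with the ~20 wt. %
# reported for the dishwasher tests in the abstract.
```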
Depending on the polymer type, slow gelation rates might cause improper gelation within the set gelation time, which results in the collection of only partly gelled particles and leads to dissolution of non-gelled droplets during the washing step before solvent exchange. Therefore, the effects of the pH value of the solution, temperature, viscosity, and biopolymer concentration on the gelation rate should be taken into account to understand their impact on the resulting particle shape. Another important parameter influencing the particle shape is the distance between the cutting tool and the collection bath, which needs to be properly adjusted. In this work, all studied biopolymer solutions were processed with a table-top-sized JetCutter. The flow was induced with pressurized air. Pressures between 1 and 2 bar were sufficient to reach throughputs between 2 and 13 kg/h. Among others, two important parameters of the jet cutting process are the cylinder ratio, which describes the ratio between the height and the diameter of the cut cylinders, and the nozzle diameter, which directly influences the jet diameter. The optimum values of these parameters for the production of spherical particles of a certain size depend on the solution properties, such as viscosity and surface tension. 
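The relation between nozzle diameter, cylinder ratio and final droplet size can be estimated from volume conservation. This is a back-of-the-envelope sketch under assumptions not stated in the text (jet diameter equal to the nozzle diameter, no shrinkage during gelation and drying):

```python
def droplet_diameter_um(nozzle_d_um, cylinder_ratio):
    """Sphere diameter formed from one cut cylinder of diameter d and
    height h = cylinder_ratio * d, assuming volume conservation:
    (pi/4) * d**2 * h = (pi/6) * D**3  ->  D = (1.5 * CR)**(1/3) * d."""
    return (1.5 * cylinder_ratio) ** (1.0 / 3.0) * nozzle_d_um

# For a 500 um nozzle and an assumed cylinder ratio of 5, the estimated
# droplet diameter is roughly 980 um, inside the 400-1500 um particle
# size range reported in the abstract.
```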
Nevertheless, both parameters are not the only ones influencing the particle shape and size during the cutting process; the nozzle diameter is important to set the cylinder diameter and therefore strongly influences the particle size as well. Thus, one of the goals of this work was to find the optimal cylinder ratio in combination with the nozzle diameter, which led to perfect sphericity of the particles and a narrow particle size distribution. Different cylinder ratios (cylinder height/cylinder diameter) were realized by tuning the nozzle diameter (from 250 to 1000 \u00b5m), the throughput, and the cutting frequency, as shown in . The diffusion method of gelation could be applied to both aqueous solutions of pure pectin and pure alginate. Solutions of both biopolymers with different concentrations were jet cut and collected in a gelation bath containing an aqueous 0.5 wt. % CaCl2 solution ,17. Free calcium ions diffuse into biopolymer solution droplets and interact with ionized carboxyl groups and hydroxyl groups of the polysaccharide chains, resulting in the formation of junction zones (\u201cegg-box\u201d model) and thus in gelation of the solution. As can be observed from , a larger nozzle diameter (1000 \u03bcm) results in mainly small, well-shaped particles comparable to those from smaller nozzles, but also in large, deformed particles, indicating that breakdown of the large primary droplets into smaller secondary droplets occurred during collection and stirring. The deformation and breakdown of large primary particles might be explained by the decreased stability of larger particles. An increasing nozzle diameter resulted in an increased particle volume. Due to the spherical shape of the falling droplets, with increasing droplet diameter the volume (and mass) increased with the diameter to the power of 3, whereas the cross-sectional area of the particles increased with the diameter to the power of 2. 
As a result, the ratio between the volume and the cross-sectional area increased with increasing droplet diameter. This change has an impact on the droplet stability when hitting the surface of the collection bath and during stirring inside the bath. Increasing particle mass resulted in an increasing ratio of particle energy to particle surface, leading to lower stability of the droplets. Therefore, larger particles were more likely to break down than smaller particles. On the other hand, for both systems, larger particle diameters and deformed particles were obtained with larger nozzles, indicating deformation of unbroken droplets inside the gelation bath before proper gelation occurred. However, smaller nozzle diameters rarely resulted in deformed particles and gave narrower particle size distributions. This difference might be due to the larger impact of a gelled outer layer for small particles. For smaller particles, the volume per unit of stable outer surface was smaller than for larger particles. Therefore, a stable gelled outer layer was more effective for stabilizing small particles than large ones, and might help to avoid deformation and destruction of particles during stirring. It has been found that high cylinder ratios generally led to deformed and elongated particles. Tails and flattening of the beads were observed, compared with 3 wt. % pectin, with a nozzle diameter of 500 \u03bcm and a cylinder ratio of 12.1 in . It is obvious that the nozzle diameter and cylinder ratio had a highly significant impact on the particle size and shape. Nevertheless, these two parameters are not sufficient to explain particle shape and size. It is likely that the vigorous stirring needed for particle separation also causes damage to not yet gelled particles. More uniform flow fields and less turbulence inside the gelation bath could help to improve this situation. 
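The stability argument above (volume growing with the third power of the diameter, cross-section with the second) can be made explicit; a small sketch, with the linear dependence following directly from the sphere formulas:

```python
import math

def volume_to_cross_section_ratio(d):
    """For a spherical droplet of diameter d, volume (pi/6 * d**3)
    divided by cross-sectional area (pi/4 * d**2) equals (2/3) * d:
    the ratio, and with it the impact load per decelerating area,
    grows linearly with droplet diameter."""
    volume = math.pi / 6.0 * d ** 3
    cross_section = math.pi / 4.0 * d ** 2
    return volume / cross_section
```

Doubling the diameter doubles this ratio, which is consistent with the observation that larger droplets deform and break down more readily on hitting the bath surface.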
Therefore, further studies need to be done to evaluate the impact of these parameters on the particle size and shape. During the collection of pectin particles from a calcium chloride solution, solid needles were observed around the particles . These needles were removed during the washing, solvent exchange, and drying steps. Most likely, these needles resulted from crystallized calcium chloride. Another observation for both polysaccharides is the formation of bubbles inside many of the particles, especially the spherical ones. The gas was slowly released during the solvent exchange before supercritical drying; nevertheless, the generated bubbles resulted in particle inhomogeneity. One possible explanation for this phenomenon is that air was partly dissolved in the biopolymer solution during preparation. At the moment of extrusion through the nozzle, the pressure drop led to degassing of the solution, resulting in gas bubbles in the gelling particle. Another explanation might be that air was included into the droplets during their reshaping from cut cylinders to spherical droplets. For the deformed particles, it was likely that the gas was released during the deformation and breakdown. The gas bubbles led to an inhomogeneous structure of the particles and might influence their flowing and mechanical properties. In the case of the internal setting method, pectin and alginate solutions were mixed with solid CaCO3 particles, and gelation was induced during contact with an acid in the gelation bath. For neutralized pectin solutions , an aqueous solution of citric acid (pH 3\u20134) was used as a collection bath, whereas 30 wt. % acetic acid was used for alginate. The drop of the pH value in the bath induced the dissolution of CaCO3, and ionotropic gelation of pectin and alginate took place as described in Chapter 3.2. In this case, the particle size of the dispersed solid CaCO3 was the limiting factor for processing with the JetCutter. The usable nozzle size was restricted by the requirement that the dispersed particles be small enough to pass the nozzle. In our case, CaCO3 had an average size of ca. 1 \u03bcm. To avoid blocking inside the nozzle due to agglomerates of CaCO3, appropriate homogenization of the dispersions with suitable devices is suggested. Significant differences between pectin and alginate were observed at a low polymer concentration (1 wt. %). Pectin beads re-dissolved during the washing step due to incomplete gelation and weak particles. Further, after collection in filter paper and supercritical drying, particles with low pectin concentration (1 wt. %) were agglomerated. Pectin hydrogel particles from 2 and 3 wt. % solutions and alginate particles were stable and withstood the washing step and solvent exchange without visible damage. Gel particles obtained at different combinations of nozzle diameters and cylinder ratios for the internal setting method are shown in . In the internal setting method, contact with the gelation bath caused a drop of the pH value inside the droplets and subsequent dissolution of the distributed CaCO3 particles. Calcium ions were released and induced gelation of pectin molecules in the whole droplet. Due to the required dissolution of the CaCO3 particles, the gelation process seemed to be slower but led to more homogeneous particles compared to the diffusion method. Nevertheless, it seemed that the slower gelation favored deformation of droplets inside the gelation bath. Shear forces due to stirring of the bath might cause deformation of the droplets, whereas gelled particles might withstand these forces. Generally, the internal setting gelation of pectin seemed to be slower compared to the diffusion method, which resulted in weaker gel particles after the same gelation time. In conclusion, both the diffusion and internal setting methods are feasible to be combined with the jet cutting method to produce spherical-shaped aerogel particles from pectin and alginate solutions. 
However, the diffusion method is simpler in the preparation and handling of the stock solutions in comparison to the internal setting method, where particle production is challenging due to the higher viscosities of the solutions and the blocking of the nozzle with calcium carbonate. Further, especially for pectin, the diffusion method seems to be more favorable in terms of particle deformation and breakdown during the process. Moreover, a clear dependency of the particle shape and size on the cylinder ratio and nozzle diameter was observed. In case of the alginate solution, the internal setting method seems to be more independent of the nozzle diameter regarding the particle shape. Nevertheless, spherical alginate particles could also be produced with the diffusion method. Taking this fact into account, and also regarding the handling of the process, the diffusion method seems to produce spherical hydrogel particles more easily. After solvent exchange and supercritical drying, aerogel particles were obtained. The specific surface areas of the obtained aerogel particles are shown in , together with those of particles produced in pre-studies. Aerogel particles produced in the literature via emulsion gelation combined with the diffusion and internal setting methods showed alginate specific surface areas of around 394 m2/g by the diffusion method and 469–590 m2/g by the internal setting method. Similar specific aerogel surface areas were obtained with both the internal setting and the diffusion method. Due to the promising results for amidated pectin and alginate during the particle production with the jet cutting system, and the understanding of the process parameters, chitosan aerogel particles were produced to demonstrate the high potential for the application in dishwashers.
Produced chitosan aerogel particles are shown in . In order to select the best absorbent aerogel before incorporating them into the dishwasher, the absorption/desorption capacities of the chitosan aerogel particles were characterized under laboratory conditions. Five different chitosan aerogel samples were tested. The aerogel samples were kept in a humidity chamber at 27 °C and 80% RH, which simulated the lowest temperature and RH conditions in a dishwasher during the drying step. The absorption/desorption cycles of the chitosan aerogel sample with the highest absorption capacity are shown in . It was observed that the absorption and desorption capacity was nearly constant over several cycles of absorption and desorption. First prototype tests resulted in absorption capacities consistent with those at laboratory scale. This result is promising for the application of aerogel particles in dishwashers for the reduction of the total energy consumption. Throughout the whole washing cycle in a conventional dishwasher, the drying step itself consumes a considerable amount of energy by heating the water to approximately 65 °C. This energy consumption could be reduced by lowering the heating temperature, or even by eliminating the heating from the drying step completely by using water vapor absorbers instead. The tested chitosan aerogels showed potential for the use as water vapor absorbers for the reduction of energy consumption. Spherical biopolymer-based aerogels from amidated pectin, sodium alginate, and chitosan solutions were successfully produced via the jet cutting method and subsequent supercritical drying with CO2. It was shown that the jet cutting method could be combined with both the internal setting and the diffusion gelation method of polysaccharide solutions to obtain spherical hydrogel particles. One important parameter of the cutting process was identified to be the cylinder ratio, defined as the ratio between the cylinder height and diameter. Cylinder ratios close to one resulted in spherical aerogel particles, whereas much higher cylinder ratios tended to result in deformed or broken particles. Structural analysis of the obtained aerogel particles revealed high specific surface areas, which were comparable to those of monolithic aerogels. Therefore, no negative effect of the jet cutting method on the aerogel properties was observed. For further optimization and understanding of the production process, the impact of the solution flow rate, cutting frequency, and jet velocity during the cutting event on the particle size and shape will be studied in future work. Regarding the application of the produced aerogel particles, it was shown that the obtained chitosan aerogel particles exhibited a high humidity uptake capacity of around 20 wt. % when applied in an industrial prototype of a dishwasher. A spherical particle shape was required for easier handling and application in the prototypes. Therefore, aerogel particles produced via the jet cutting method are promising for industrial application in this field."}
+{"text": "In temperate areas, the main limitation to the use of maize in the food chain is its contamination by B-series fumonisins (FBs) during cultivation. Since the content of this group of mycotoxins may be distributed unevenly after milling, the aim of this study was to compare the distribution of FBs in maize fractions derived from two industrial dry-milling processes, that is, a dry-degermination (DD) system and a tempering-degermination (TD) system. Grain cleaning reduced FBs by about 42%. The germ from the two degermination processes showed an FB content similar to that of the kernel after cleaning. Conversely, the animal feed flour showed an FB content that was two times higher than that of the whole grain before cleaning. A significant FB reduction was observed in the milling fractions of both processes, with a higher reduction in the TD system than in the DD one. The average decontamination with respect to uncleaned kernels in the DD process was 50%, 83% and 87% for maize flour, break meal and pearl meal, respectively, while it was 78%, 88% and 94% in the TD process for small, medium and flaking grits, respectively. Among the milling fractions, the flaking grits, with the largest particle size, showed the highest FB reduction. Milling processes are methods that can be used to transform whole grains into forms suitable for conversion into consumable products. They usually separate the botanical tissues of the grain and reduce the endosperm into flour or grits. From the processing perspective, the maize kernel is composed of four primary structures: the endosperm, germ, pericarp and tip cap, which generally make up 83%, 11%, 5% and 1% of the maize kernel, respectively. Most of the maize used for food is first processed by either wet or dry milling industries. Wet milling fundamentally differs from dry milling in that it is a maceration process in which physical and chemical changes occur in the nature of the basic constituents.
Wet milling produces pure starch for industrial and food uses, and by-products composed of protein, fibre and germ. Dry milling is the main milling procedure adopted in the maize food chain, and it produces refined endosperm products with various particle sizes, together with by-products such as germ and animal feed flour. Considering the increasing role of maize milled products as basic foods, their quality characterization becomes extremely important. From a nutritional point of view, maize and its derived products are good sources of starch, proteins, lipids and different bioactive compounds [8]. However, B-series fumonisins, produced by Fusarium species from the Liseola section, are the most important mycotoxins found in maize grain in temperate areas [10]. As Fusarium growth requires a high water activity (aw > 0.9), these mycotoxins are formed in maize prior to the harvest. B-series fumonisins are fairly heat-stable, and their content is only significantly reduced during processes in which the temperature exceeds 150 °C. The re-distribution of FBs in maize dry milling has already been analysed extensively in other studies [15,16,17]. The present study analysed fumonisin B1 (FB1), fumonisin B2 (FB2) and FBs as the sum of FB1 and FB2 in milling products and by-products obtained from two degermination systems and from the processing of the same maize lots at the industrial commercial scale, in order to obtain a clear comparison of the decontamination levels obtainable by applying different dry-milling processes. The FB contamination of the processed lots amounted to , 1292 µg·kg−1 and 2123 µg·kg−1 in the 2011, 2012 and 2013 growing seasons, respectively. Analysis of variance (ANOVA) showed significant differences, for both FB1 and FB2 and for their sum, between the milling fractions obtained from both of the compared dry-milling processes. Otherwise, the interaction between the milling fraction and the year of production of the processed lots was never significant for FB contamination. It is important to note that each commercial mill is likely to have its own particular set-up, thus giving rise to different percentages of reduction, but a cleaning process is essential to obtain a healthy whole grain for milling. Germ contamination ( and 510 µg·kg−1 in the DD system and in the TD system, respectively) was not significantly different from that of the whole grain after cleaning. No significant differences were observed for the animal feed flour contamination between the compared processes. Several previous scientific studies collected and analysed germ and animal feed flour together, and the results showed a 2 to 3 times higher FB content than the whole grain [20,21]. As a consequence of this unequal distribution of mycotoxins in the milling fractions, the products destined for human consumption were always significantly (p < 0.001) less contaminated than the germ and animal feed flour, and showed a variable re-distribution from the whole grain before cleaning in the different compared processes. The pearl meal, break meal and maize flour from the DD process had mean FB contents of 157, 216 and 623 µg·kg−1, respectively, which resulted in a decontamination, compared to the whole kernel before cleaning, of 87%, 83% and 50%, respectively. As far as the particle size effect is concerned, the FB content in the pearl and break meals was significantly lower (p < 0.001) than in the maize flour. The mean milling yield was 55%, while these fractions only contributed 6% of the total FB contamination of the whole grain. The pearl meal, break meal and maize flour from the DD dry-milling system had a mean yield of 55%, and their FB content amounted to 9% of the total contamination of the kernel.
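The decontamination percentages quoted above are simple relative reductions against the uncleaned whole kernel. As a quick illustrative check (the helper function is ours, not the paper's; the whole-kernel value of 1246 µg/kg is an assumption chosen to reproduce the reported 50% for maize flour):

```python
# Illustrative helper (not from the paper): percent FB decontamination of a
# milling fraction relative to the uncleaned whole kernel.
def decontamination_pct(fraction_ug_kg, whole_kernel_ug_kg):
    return 100.0 * (1.0 - fraction_ug_kg / whole_kernel_ug_kg)

# DD-process fraction contents (µg/kg) from the text; whole-kernel content
# of 1246 µg/kg is an assumed value consistent with the reported figures.
whole = 1246.0
for name, c in (("pearl meal", 157.0), ("break meal", 216.0), ("maize flour", 623.0)):
    print(name, round(decontamination_pct(c, whole)))  # 87, 83, 50
```

The rounded outputs match the 87%, 83% and 50% decontamination values reported for the DD fractions, confirming the three figures are mutually consistent.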
These three products mainly originate from the horny endosperm, but a higher percentage of floury endosperm, which is not completely separated by this process, still remains, especially in the maize flour. Previous studies have overlooked the fumonisin redistribution obtained with different milling processes, whereas the present study has been specifically designed to directly compare the decontamination of FBs obtained through the application of different degermination and milling processes to the same lots of maize grain. When comparing the results of the several scientific studies concerning the effect of maize milling on the distribution of mycotoxins, it should be considered that, as Castelles et al. pointed out, F. verticillioides produces only a few FBs in the germ. Therefore, even though the germ is not the best substrate for FB production, the floury endosperm around the germ could instead be a good place for the fungus that grows in the germ to produce mycotoxins. Moreover, considering the amylose/amylopectin ratio of the maize endosperm, Dombrink-Kurtzman and Knutson concluded that it may influence FB1 biosynthesis, possibly through the uptake of α-1,6-linked glucosides, such as dextrin. Overall, the collected data underline that the floury endosperm was more contaminated than the horny one, and that the different effectiveness of the degermination systems in separating these fractions could lead to a different decontamination capacity in the derived products. Thus, the TD process, which is able to better separate the horny endosperm from the finer fractions, permits a higher decontamination of the derived products.
Considering the European legislation regarding fumonisins (sum of FB1 and FB2), different maximum limits are set for the different maize milling fractions. All the samples collected in this study were under the recommended regulatory levels. The data collected in the present study, from an experiment performed during different growing seasons in an industrial mill through the contemporaneous application of different degermination processes to the same maize lots, underline the important role of the adopted milling process in the re-distribution of the FB content in milling products and by-products. As far as the by-products are concerned, the animal feed flour showed an important increase in contamination, in part because it receives cleaned-out fractions that are highly contaminated, while the germ showed an FB content similar to that of the whole kernel after cleaning. As far as the endosperm fractions are concerned, the FBs in the products derived from the horny endosperm are distributed in a different way from those obtained from the floury endosperm. The former are less contaminated, and thus present a lower health risk in the food chain. Finally, this study has proved, for the first time, that the application of a TD process to dry milling leads to endosperm fractions with a lower health risk for fumonisin contamination than the DD process. The fate of FBs was investigated by sampling and analysing commercial maize lots (>200 t) cultivated in the 2011–2013 period in the same growing area in Northwest Italy. In each year, the sampling was replicated on 3 different lots, for a total number of sampled lots equal to 9. All the maize lots were processed in an industrial mill on two separate dry-milling lines based on different degermination processes, as described in detail in Blandino et al. The first process was based on a dry-milling technology coupled to a dry-degermination (DD) system, and produced maize flour with a maximum limit for the sum of FB1 and FB2 of 2000 µg kg−1; all the other milling fractions had a mean particle size >500 µm and a maximum limit for these toxins of 1400 µg kg−1. The second process was based on a dry-milling technology coupled to a tempering-degermination (TD) system. In both processes, the main by-products were the germ and the animal feed flour, a mixture of impurities, bran and a part of the mealy endosperm. The usual expected yield of these by-products, in comparison to the whole grain before cleaning, was 10% for the germ and 35% for the animal feed flour. A dynamic sampling procedure was planned in which each aggregate sample was the result of the careful blending of 40 incremental samples of 100 g each, collected for 1 h at regular intervals. A sampling lasting 1 h was performed twice for each lot and each dry-milling process in order to obtain two replications. The samples collected for each lot were the whole grain before and after cleaning, and all the products and by-products of both processes (DD and TD), for a total of 216 samples (3 years × 3 lots/year × 12 milling fractions × 2 replications). The sampled products were collected in accordance with Regulation No 401/2006. All the samples were subjected to a further milling step using a hammer mill to provide a homogeneous particle dimension of less than 1 mm. For the fumonisin extraction, 50 g of flour was mixed with 100 mL of methanol/water (v/v) on a mechanical shaker at 100 rpm for 20 min. The extracts were filtered through Whatman no. 1 filters, and 10 mL of the filtered extract was diluted with 40 mL of Phosphate Buffered Saline (PBS, pH 7.8). A second filtration was performed with a Munktell Glass Microfiber filter. The clean-up procedure involved pipetting 10 mL of the filtered extract and passing it completely through a FumoniTest WB® affinity column at a rate of about 1–2 drops/second.
Afterwards, 5 mL of PBS was added and passed through the column, and finally the analyte was recovered with 2 mL of pure LC/MS (Liquid Chromatography/Mass Spectrometry) grade methanol and injected into the LC-MS/MS system, according to the method described below. FB1 and FB2 were quantified by injecting 10 μL of the purified extracts into the LC-MS/MS system. The LC system consisted of a Varian 212-LC chromatographic pump, a reversed-phase Agilent column (Pursuit 5 C18) and a ProStar 410 autosampler. The LC system was coupled with a 310-MS triple quadrupole mass spectrometer equipped with an electrospray ionization (ESI) source. The chromatographic run had a duration of 15 min, with acetonitrile and water acidified with 0.1% acetic acid as the mobile phase. The FBs were identified by using the electrospray ionization source in the positive ion mode. The protonated FB1 molecule (722 m/z) was fragmented into its product ions at 352 m/z (used for identification) and 334 m/z (used for quantification). For FB2, the fragmentation pathway was instead the production of the ions at 318 m/z (used for identification) and 336 m/z (used for quantification) from the protonated FB2 precursor (706 m/z). The quantification was performed on the basis of calibration curves with a linearity range of between 4 and 4000 µg kg−1. For both FB1 and FB2, the limit of detection (LOD) and the limit of quantification (LOQ) were 1 and 4 µg kg−1, respectively. The mean percentage of recovery at two different concentration levels for FB1 and FB2 was 78% and 87% (RSD%: 15%), respectively.
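As an illustration of how calibration-curve quantification with recovery correction works (a hypothetical sketch, not the laboratory's actual data processing; the calibration points and the sample peak area below are invented, and only the 78% FB1 recovery comes from the text):

```python
# Hypothetical sketch of calibration-curve quantification with recovery
# correction; the calibration standards and sample peak area are invented.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept of a linear calibration."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def quantify(peak_area, slope, intercept, recovery):
    """Concentration from a peak area, corrected for the mean recovery."""
    return (peak_area - intercept) / slope / recovery

# Invented calibration standards (µg/kg) and instrument responses:
conc = [4.0, 40.0, 400.0, 4000.0]
area = [10.0, 100.0, 1000.0, 10000.0]
m, b = fit_line(conc, area)
print(round(quantify(250.0, m, b, 0.78), 1))  # → 128.2 (FB1 recovery 78%)
```

Dividing by the recovery inflates the raw calibration result (here 100 µg/kg) to the reported, recovery-corrected value, mirroring the statement that all results were corrected for the recovery rate.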
All the reported results were corrected for the recovery rate. The normal distribution and the homogeneity of variances were verified by performing a Kolmogorov–Smirnov normality test and a Levene test, respectively. The FB contamination was compared by means of an analysis of variance (ANOVA), in which the milling fractions were the independent variables and the year of production of the processed maize lots was the random factor. Since the level of contamination within the maize produced in the same growing season was very similar, the lots cultivated and sampled in the same year were considered as replications. The FB contents were transformed using the following equation:"}
+{"text": "In this study, we created a new model to determine strain fatigue characteristics obtained from a bending test. The developed model consists of comparing the ratio of the stress and strain gradients at the surface for bending and tensile elements. For model verification, seven different materials were examined based on fatigue tests we conducted, or on data available in the literature: 30CrNiMo8, 10HNAP, SM45C and 16Mo3 steel, MO58 brass, and 2017A-T4 and 6082-T6 aluminum alloys. As a result, we confirmed that the proposed method can be used to determine strain fatigue characteristics that agree with the values determined on the basis of a tension–compression test. Estimating the fatigue limit is one of the most important aspects of the strength analysis of structural components. To examine the fatigue limit of various materials, tests must be performed under tension–compression or oscillatory bending, and the stresses and strains occurring in the specimens must subsequently be analyzed. However, the origin of these stresses is not usually considered, although the terms normal or shear stress appear in the analysis of fatigue life. It is important to note that in the case of bending, we always have a linear distribution of the strains, which in the case of elasticity corresponds to the same stress distribution. The situation changes dramatically in the event of plastic deformations. Here, even if the sheet is rolled, the stress distribution is not linear due to the various elastic and plastic deformations in the cross-section. Few studies have paid attention to the differences in fatigue resulting from the type of load [10,11,12]. The effect of the stress gradient, and thus the strain gradient, is rarely directly included in fatigue life estimation models.
The gradient method is one of the methods for forecasting the fatigue limit discussed in the literature, where x is the distance from the bending plane. Few attempts have been made to discuss or use the gradient in the literature. In one of the latest publications, a strain gradient approach was discussed. In this study, seven different materials were used, based on selected fatigue tests available in the literature along with tests conducted by us: 10HNAP, 30CrNiMo8, SM45C, 16Mo3 steel, MO58 brass, and 2017A-T4 and 6082-T6 aluminum alloys. Fatigue life determined from tension–compression fatigue tests indicates values lower than or comparable to the fatigue life obtained under oscillatory bending. The results obtained under oscillatory bending conditions, represented by the determined secant modulus based on the ratio of the stress gradient and the strain gradient at the critical location, i.e., on the surface, are closely similar to the results obtained during tension–compression. Basquin proposed a power relation between the stress amplitude and the fatigue life, which can be presented as σ_a = σ_f′(2N_f)^b (Equation (2)), where N_f is the fatigue life in cycles, a and m are constants in the regression model, 2N_f is the number of loading reversals (semi-cycles), σ_f′ and b are the coefficient and exponent of the fatigue strength, respectively, and ε_f′ and c are the coefficient and exponent of the plastic fatigue strain. The basic fatigue characteristic for tension–compression is the Manson–Coffin–Basquin (MCB) relation [25] (Equation (4)). For tension–compression, the uniaxial distribution of strains and stresses is as presented in . In the case of an elastic body model for bending, the distributions of strains and the corresponding stresses are linear, as presented in , where x is the distance from the bending plane and R is the maximum height. In the literature, no simple model exists for determining strains and stresses according to the elastoplastic body model for specimens without notches in bending.
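For reference, the Basquin and Manson–Coffin–Basquin relations take the following standard forms (standard notation from the fatigue literature, reconstructed rather than quoted from this paper; $2N_f$ is the number of reversals, $\sigma_f'$, $b$ the fatigue strength coefficient and exponent, $\varepsilon_f'$, $c$ the fatigue ductility coefficient and exponent):

```latex
% Basquin relation (stress-life), cf. Eq. (2):
\sigma_a = \sigma_f' \left(2N_f\right)^{b}
% Manson--Coffin--Basquin relation (strain-life), cf. Eq. (4):
\varepsilon_{a,t} = \frac{\sigma_f'}{E}\left(2N_f\right)^{b}
                  + \varepsilon_f' \left(2N_f\right)^{c}
```

The first term of the MCB relation is the elastic strain amplitude (the Basquin term divided by Young's modulus $E$), and the second is the plastic strain amplitude.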
For small strains, the distribution of normal strains in the cross-section for bending was assumed to be linear (Equation (5)). The second condition to be met is a physical one, i.e., the bending moment must be balanced by the normal stresses (Equation (6)). The Ramberg–Osgood equation combines the stress amplitude with the strain amplitude and is described as ε_a,t = σ_a/E + (σ_a/K′)^(1/n′) (Equation (7)), where K′ is the coefficient of cyclic strength and n′ is the exponent of cyclic strengthening. In total, the system of equations consisting of the conditions in Equations (5)–(7) must be satisfied; on this basis, the elastoplastic strains and the corresponding stresses can be determined. The distribution of strains and stresses is shaped as presented in . Using the strain derivative after x from Equation (6), we obtained a derivative for the strains (Equation (8)). In the elastic range, the stress derivative after x for bending has a corresponding form. By using the assumptions in Equations (7) and (8), we obtained the ratio of the stress and strain derivatives (Equation (10)). According to Equations (3) and (4), for a model of the elastoplastic body, we counted the derivative on both sides after x from Equation (10). Assuming that the secant modulus at a given point includes plastic strains, it is defined through the cyclic constants (Equation (16)), where K′ is the coefficient of cyclic strength and n′ is the exponent of cyclic strengthening. Eventually, by dividing the sides of Equations (10) and (16), we obtained Equation (17), where E is Young's modulus and a is the exponent of fatigue stress. We proposed the following relationship between the amplitude for oscillatory bending according to the elastoplastic model and the amplitude including the gradient (Equation (18)). In the proposed model, by substituting Equation (17) into Equation (18), we obtained the gradient-corrected amplitude. The maximum stress is obtained on the surface (Equation (21)). Calculating the maximum stress provided the basis for calculating the strain gradient (Equation (22)); eventually, after introducing Equation (21) into Equation (22), we obtained Equation (23). The analysis was
conducted on 7 materials from different material groups. A part of the research data was obtained from the available literature, and some data were obtained from our own research. The analyzed and tested materials were 10HNAP, based on our own research under tension–compression and under oscillatory bending, and the remaining materials from the literature. Tension–compression tests were performed under standard conditions on solid round specimens. In the fatigue tests, diabolo-type cylindrical specimens with no geometric notch were used, as presented in . For oscillatory bending, in the first stage, the stress amplitudes from the elastic body model were converted into the elastoplastic body model according to the description and Equations (5)–(7). Then, the calculated stress amplitudes were converted into the model proposed in this paper, which includes the stress gradient, according to Equation (21), yielding σ_a,grad, which was the basis for calculating the strain gradient using Equation (23). We interpreted the results by analyzing the fatigue life scatter, which was evaluated with the help of the logarithm (Equation (24)). When comparing the different models, in order to select the one closest to reality, the fatigue life scatter was analyzed for each of the examined materials. By including the strain gradient (grad), we found that the smallest scatter, and the one closest to tension–compression, was produced by the model that included the gradients. When analyzing the subsequent materials, we found that this was the same situation for all the steels and for the brass. Thus, for all the analyzed steels and the brass, satisfactory results were obtained according to the proposed model. For the aluminum alloys, no improvement in the scatter was achieved, but the obtained results were acceptable. The scatter for the 6082-T6 aluminum alloy also remained in the same trend as for the 2017A-T4 alloy, but the difference was even smaller, especially between bending under controlled strain and bending with the gradient.
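The first conversion step can be sketched in code. The following is an illustrative sketch (not the authors' software), assuming the Ramberg–Osgood form of Equation (7); the material constants are placeholders, not the paper's fitted values:

```python
# Sketch (not the paper's code): the Ramberg-Osgood relation, Eq. (7),
# eps_a = sigma_a/E + (sigma_a/K')^(1/n'), and its numerical inversion,
# used when converting elastic-model amplitudes into elastoplastic ones.

def ramberg_osgood_strain(sigma_a, E, K, n):
    """Total strain amplitude for a given stress amplitude (Eq. (7))."""
    return sigma_a / E + (sigma_a / K) ** (1.0 / n)

def stress_from_strain(eps_a, E, K, n):
    """Invert Eq. (7) for the stress amplitude by bisection."""
    lo, hi = 0.0, E * eps_a          # the purely elastic stress is an upper bound
    while hi - lo > 1e-9:
        mid = 0.5 * (lo + hi)
        if ramberg_osgood_strain(mid, E, K, n) < eps_a:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    E, K, n = 210000.0, 1200.0, 0.2   # MPa; placeholder cyclic constants
    eps = ramberg_osgood_strain(400.0, E, K, n)
    print(round(stress_from_strain(eps, E, K, n), 3))  # → 400.0
```

Bisection works here because the total strain is strictly increasing in the stress amplitude, and the purely elastic stress E·ε_a bounds the solution from above.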
The results are presented in . This paper proposes a model that enables the conversion of strain fatigue characteristics obtained on the basis of a cyclic bending test into equivalents that coincide with the characteristics obtained in the tension–compression test. The proposed model is based on the ratio of the stress and strain gradients at the critical location, i.e., on the surface. For the analyzed materials, we found that the strain amplitudes obtained on the basis of the oscillatory bending test with restraint for a given fatigue life were greater than or equal to those obtained in the tension–compression test. We also concluded that the strain amplitudes obtained on the basis of the proposed model during the oscillatory bending test with restraint for a given fatigue life were comparable to those obtained from the tension–compression test, with the exception of 16Mo3 steel. From the scatter analysis, we found that the most reliable calculation results for oscillatory bending were obtained when including the secant modulus considering plasticity, i.e., the ratio of the stress gradient and the strain gradient."}
+{"text": "Obesity has been associated with an increased risk of a range of diseases involving different organ systems of the body. In the present study, we evaluated the hypolipidemic properties of the echinochrome (Ech) pigment in high-fat diet (HFD)-induced hyperlipidemia in rats. After the hyperlipidemic model was set up, the rats were randomly divided into five groups as follows: a normal control group, an HFD group, an Atorvastatin (ATOR) group (80 mg/kg), an Ech group (1 mg/kg) and a combined ATOR + Ech group. The results demonstrated that Ech improved the lipid profile, liver functions, kidney functions and antioxidant markers of obese rats. The findings of the present investigation indicated that Ech possesses hypolipidemic potential in obese rats. Cardiovascular diseases comprise a wide range of serious disorders, for instance atherosclerosis, arteriosclerosis, stroke, and obesity. Although hypolipidemic medications can treat hyperlipidemia, they are limited by a lack of efficacy and safety. Statins, or 3-hydroxy-3-methylglutaryl-coenzyme A reductase inhibitors (HMG-CoA reductase inhibitors), are used clinically for hypercholesterolemic patients with a moderate or high cardiovascular disease risk. The marine environment is an abundant source of novel functional foods, promoting the development of new bioactive molecules with various properties. The present study evaluated the hypolipidemic effect of echinochrome in comparison with statins in a model of high-fat diet-induced hyperlipidemia in male albino rats. Obesity induced by HFD caused a substantial elevation in the adiposity parameters, TL, TC, TG, and LDL-C, while HDL-C decreased, when compared to control rats (Table ).
Obese rats presented a significant elevation (p < 0.05) in the activities of the liver enzymes AST, ALT, and ALP, while total protein and albumin synthesis was inhibited as compared to the control group. These changes in the liver function parameters were improved after the administration of ATOR and/or Ech (Table ). HFD induced kidney dysfunction, which was confirmed by the significant increase (p < 0.05) in the urea, creatinine, uric acid, and CK concentrations. In tissues, the MDA concentration increased, while the GSH, GST, and CAT levels decreased significantly. These parameters showed no differences between the ATOR and HFD groups, while Ech, alone or combined with ATOR, induced a significant increase in the antioxidant markers and reduced the lipid peroxidation marker (MDA) (Tables , 5, 6). The liver sections of control rats were formed of the classic hepatic lobules (Fig. a). Kidney sections of the control groups displayed the normal appearance of the tissue, where the glomeruli appear as dense tufts of capillaries enclosed in the outer layer of Bowman capsules, and many renal tubules were observed (Fig. a). Histological examination of the extensor digitorum longus muscle of control rats showed the normal structure of skeletal muscle (Fig. a). It is well established that dyslipidemia is the most serious risk that high-fat diet (HFD) intake poses to health. In the present study, HFD caused increases in the concentrations of the AST, ALT, and ALP enzymes and decreases in the total protein and albumin concentrations. An increase in liver enzyme activities may be indicative of some liver impairment, or possibly damage. Dyslipidemia also appears to play a pathogenic role in the development of renal diseases. Muscle damage in the present study was manifested in HFD rats by a huge increase in CK and by the histopathological examination.
It has been reported that the uncontrolled consumption of a high-fat diet is associated with skeletal muscle dysfunction and the development of muscle atrophy . MoreoveOur study indicated the development of oxidative stress in the liver, kidney, and muscle of the HFD-fed group. HFD can cause increased lipid peroxidation via progressive and cumulative cell injury resulting from the pressure of the large body weight . In the Overall, our investigation demonstrated the potential of echinochrome pigment in ameliorating hyperlipidemia induced by HFD and normalizing the biochemical and histopathological changes in the liver, kidney, and muscle. Ech pigment has a hypolipidemic property and an antioxidant role, which reduced HFD complications in the liver, kidney, and muscles.Atorvastatin powder, purity 98.7%, was purchased from SIGMA Pharmaceutical Company, Egypt. Carboxymethyl cellulose (CMC) was purchased as a powder from El-Nasr Pharmaceutical Chemicals Company, Egypt.Sea urchins (Paracentrotus lividus) were collected from the Mediterranean shoreline of Alexandria (Egypt) and then shipped to the laboratory packed in ice. The collected sea urchins were quickly shade-dried. After removal of the inner tissues, the shells and spines were washed with cold water, air-dried at 4 °C for 24 h in the dark and then ground. The powders (10 g) were dissolved by gradually adding 20 ml of 6 M HCl. The pigments were extracted 3 times with an equal volume of diethyl ether. The collected ether layer was then washed with NaCl (5%) until the acid was almost completely removed. The ether solution containing the pigments was dried over anhydrous sodium sulfate, followed by evaporation of the solvent under reduced pressure. 
The extract containing the polyhydroxylated naphthoquinone pigment was stored at -30 °C in the dark , 29.Adult male albino rats weighing 150 ± 10 g were used in this investigation. The rats were obtained from the National Organization for Drug Control and Research . They were housed in polyacrylic cages in the well-ventilated animal house of the Zoology Department, Faculty of Science, Cairo University. Rats were maintained on a 12 h/12 h light–dark cycle at room temperature (22–25 °C) with food and water provided ad libitum. They were acclimatized to laboratory conditions for 7 days before the start of the experiment.The animals were fed a high-fat diet (HFD) with an energy density of 6.3 kcal/g, comprising 19% of calories from protein, 35% from fat, and 46% from carbohydrate for four weeks . The eatThirty rats were assigned to five main groups (6 rats/group).Rats were fed normal diets for 4 weeks, then given 0.5% CMC orally for 16 consecutive days.Rats were fed HFD for 4 weeks, and then were given 0.5% CMC orally for 16 days.Rats were fed HFD for 4 weeks, then given ATOR orally (80 mg/kg in 0.5% CMC) daily foRats were fed HFD for 4 weeks, then given Ech orally (1 mg/kg in 0.5% CMC) daily foRats were fed HFD for 4 weeks, then given ATOR orally and after one hour were given Ech orally at the same doses and duration as the ATOR and Ech groups.At the end of the experiment, rats were euthanized with an overdose of sodium pentobarbital (100 mg/kg). Blood was collected from the rats via cardiac puncture and then separated by centrifugation to obtain sera, which were stored at − 80 °C for the biochemical measurements. The liver, kidney, and muscle were removed and immediately blotted using filter paper to remove traces of blood. Portions of these tissues were stored at − 80 °C for biochemical examination. 
Other pieces of liver and kidney tissue were fixed in 10% formal saline for histopathological assessment. The liver, kidney, and muscle tissues were homogenized (10% w/v) in ice-cold 0.1 M Tris–HCl buffer (pH 7.4). The homogenate was centrifuged at 860×g for 15 min at 4 °C and the resultant supernatants were used for the oxidative stress analyses. The serum aspartate aminotransferase (AST) and alanine aminotransferase (ALT) were estimated by the method of Reitman and Frankel , serum aThe liver, kidney and muscle tissues were removed immediately, washed and fixed in neutral buffered formalin (10%) for further processing by the ordinary routine work: dehydration, clearing, and embedding. The paraffin-embedded blocks of the liver and kidney tissues were cut using a microtome into 4 μm-thick tissue sections, which were then stained with hematoxylin and eosin. The tissue sections were assessed under light microscopy independently by 2 investigators in a blinded manner.Data of the current study are presented in tables as mean ± SE and were analyzed by one-way ANOVA followed by Duncan's post-hoc test for multiple comparisons using the Statistical Package for the Social Sciences software (SPSS 20). The differences between means were considered statistically significant when the P value was less than 0.05."}
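The statistical comparison described above (one-way ANOVA with a p < 0.05 cutoff) can be illustrated with a minimal pure-Python sketch. The data below are invented for illustration only, and the Duncan post-hoc step performed in SPSS is not reproduced here.

```python
# Minimal one-way ANOVA sketch (illustrative data, not the study's measurements).
# Only the F statistic is computed; the post-hoc comparison is omitted.

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of groups of values."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group sizes times squared mean deviations
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    msb = ssb / (k - 1)                  # between-group mean square
    msw = ssw / (n - k)                  # within-group mean square
    return msb / msw

# Hypothetical serum values for three groups (e.g. control, HFD, HFD + Ech)
f = one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [6.0, 7.0, 8.0]])
print(round(f, 2))  # 21.0
```

The F statistic would then be compared against the F distribution with (k-1, n-k) degrees of freedom to obtain the p value.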
+{"text": "Glioma is one of the most aggressive malignant brain tumors which is characterized with highly infiltrative growth and poor prognosis. NKAP (NF-\u03baB activating protein) is a widely expressed 415-amino acid nuclear protein that is overexpressed by gliomas, but its function in glioma was still unknown.CCK8 and EDU assay was used to examine the cell viability in vitro, and the xenograft models in nude mice were established to explore the roles of NAKP in vivo. The expressions of NKAP, Notch1 and SDF-1 were analyzed by immunofluorescence analysis. The expression of NKAP and Notch1 in glioma and normal human brain samples were analyzed by immunohistochemical analysis. In addition, CHIP, Gene chip, western blot, flow cytometry, immunofluorescence, ELISA and luciferase assay were used to investigate the internal connection between NKAP and Notch1.Here we showed that overexpression of NKAP in gliomas could promote tumor growth by contributing to a Notch1-dependent immune-suppressive tumor microenvironment. Downregulation of NKAP in gliomas had abrogated tumor growth and invasion in vitro and in vivo. Interestingly, compared to the control group, inhibiting NKAP set up obstacles to tumor-associated macrophage (TAM) polarization and recruitment by decreasing the secretion of SDF-1 and M-CSF. To identify the potential mechanisms involved, we performed RNA sequencing analysis and found that Notch1 appeared to positively correlate with the expression of NKAP. Furthermore, we proved that NKAP performed its function via directly binding to Notch1 promoter and trans-activating it. 
Notch1 inhibition could alleviate NKAP's gliomagenesis effects. These observations suggest that NKAP promotes glioma growth by TAM chemoattraction through upregulation of Notch1, and this finding introduces the potential utility of NKAP inhibitors for glioma therapy.The online version of this article (10.1186/s13046-019-1281-1) contains supplementary material, which is available to authorized users. Glial-derived gliomas account for the vast majority of malignant brain tumors . ResearcDrosophila melanogaster gene CG6066, an NKAP ortholog, led to overproliferation of D. melanogaster neural precursor cells, resulting in lethal tumor formation . The H score, ranging from 0 to 300, represented higher weight for higher-intensity staining in a given sample. In this study, the median H score is 157.Total proteins were extracted using lysis buffer containing 10 mmol/L Tris-HCl (pH 7.4), 1% Triton X-100 and protease/phosphatase inhibitors , separated by 10% SDS-PAGE gel electrophoresis, transferred to polyvinylidene difluoride (PVDF) membranes and probed with primary antibodies. The membranes were subsequently probed with horseradish peroxidase-conjugated secondary antibodies followed by development using an enhanced chemiluminescence detection system . Anti-GAPDH antibody was used to monitor the loading amount. 
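The H score mentioned above is not defined explicitly in this excerpt. The usual convention, which matches the stated 0-300 range and the "higher weight for higher-intensity staining" description, weights the percentage of cells at each staining intensity (1x weak, 2x moderate, 3x strong). A sketch under that assumption:

```python
def h_score(pct_weak, pct_moderate, pct_strong):
    """Immunohistochemistry H score under the common weighting convention:
    1*%weak + 2*%moderate + 3*%strong, bounded to the 0-300 range.
    The three arguments are percentages of cells at each staining intensity."""
    score = 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong
    if not 0 <= score <= 300:
        raise ValueError("percentages must describe at most 100% of cells")
    return score

# Hypothetical sample: 20% weak, 30% moderate, 25% strong staining (25% negative)
print(h_score(20, 30, 25))  # 155
```

With this convention, a sample with 100% strongly stained cells scores the maximum of 300, and a wholly negative sample scores 0.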
M-CSF ELISA were performed according to the manufacturer\u2019s instructions .NKAP Forward 5\u2032-GGATCCTCACTTGTCATCCTTCCCTTTG-3\u2032.Reverse 5\u2032-GAATTCATGGCTCCTGTATCGGGCTC -3\u2032.NOTCH1 Forward 5\u2032-AAGCTGCATCCAGAGGCAAAC-3\u2032.Reverse 5\u2032-TGGCATACACACTCCGAGAACAC-3\u2032.NOTCH2 Forward 5\u2032-GTTACAGCAGCCCTTGCCTGA-3\u2032.Reverse 5\u2032-CCATGGATACAAGGGTTACTTGCAC-3\u2032.NOTCH3 Forward 5\u2032-ATCGGCTCGGTAGTAATGCTG-3\u2032.Reverse 5\u2032-ACAACGCTCCCAGGTAGTCA-3\u2032.NOTCH4 Forward 5\u2032-TGCGAGGAAGATACGGAGTG-3\u2032.Reverse 5\u2032-GGACGGAGTAAGGCAAGGAG-3\u2032.CCND1 Forward 5\u2032-GGGCCACTTGCATGTTCGT-3\u2032.Reverse 5\u2032-CAGGTTCCACTTGAGCTTGTTCAC-3\u2032.CTNNB1 Forward 5\u2032-GAGTGCTGAAGGTGCTATCTGTCT-3\u2032.Reverse 5\u2032-GTTCTGAACAAGACGTTGACTTGGA-3\u2032.DVL2 Forward 5\u2032-GACATGAACTTTGAGAACATGAGC-3\u2032.Reverse 5\u2032-CACTTGGCCACAGTCAGCAC-3\u2032.HES1 Forward 5\u2032-GGACATTCTGGAAATGACAGTGA-3\u2032.Reverse 5\u2032-AGCACACTTGGGTCTGTGCTC-3\u2032.N-cadherin Forward 5\u2032-CTCCTATGAGTGGAACAGGAACG-3\u2032.Reverse 5\u2032-TTGGATCAATGTCATAATCAAGTGCTGTA-3\u2032.Twist1 Forward 5\u2032-AGCTACGCCTTCTGGTCT-3\u2032.Reverse 5\u2032-CCTTCTCTGGAAACAATGACATC-3\u2032.Vimentin Forward 5\u2032-AGATCGATGTGGACGTTTCC-3\u2032.Reverse 5\u2032-CACCTGTCTCCGGTATTCGT-3\u2032.SDF-1 Forward 5\u2032- TCTCCATCCACATGGGAGCCG-3\u2032.Reverse 5\u2032- GATGAGGGCTGGGTCTCACTCTG-3\u2032.GAPDH Forward 5\u2032-GCACCGTCAAGGCTGAGAAC-3\u2032.Reverse 5\u2032-TGGTGAAGACGCCAGTGGA-3\u2032.Trizol reagent was used to extract RNA. The concentration and purity of RNA were determined by measuring the absorbance at 260\u2009nm and the absorbance ratio of 260/280\u2009nm in a Nano-Drop 8000 Spectrophotometer . A PrimeScript RT reagent kit with gDNA Eraser was used to synthesize the cDNA. An ABI 7300 Fast Real-time PCR System and an SYBR Green PCR kit were used for real-time PCR. 
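The qRT-PCR setup above uses GAPDH as the reference gene. The text does not state its quantification formula, but relative expression from such SYBR Green data is conventionally computed with the 2^-ΔΔCt method; a hedged sketch with hypothetical Ct values:

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """2^-ddCt relative quantification (a common convention, not stated in
    the text): normalize the target Ct to the reference gene (e.g. GAPDH),
    then to the control condition."""
    d_ct_sample = ct_target_sample - ct_ref_sample       # dCt in treated cells
    d_ct_control = ct_target_control - ct_ref_control    # dCt in control cells
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** -dd_ct

# Hypothetical Ct values: the target amplifies 3 cycles later after knockdown,
# while GAPDH is unchanged, giving 2^-3 = 0.125 (~87.5% knockdown).
print(relative_expression(28.0, 20.0, 25.0, 20.0))  # 0.125
```

This assumes near-100% amplification efficiency for both primer pairs; efficiency-corrected variants exist when that assumption fails.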
The primer sequences were as follows:Total RNA was extracted using Trizol (Invitrogen) and treated with DEPC water. After RNA quality examination, a total amount of 2 μg RNA per sample was used as input material for the RNA sample preparations. Sequencing libraries were generated using the NEBNext® Ultra™ RNA Library Prep Kit for Illumina® following the manufacturer's recommendations, and index codes were added to attribute sequences to each sample. The RNA concentration of each library was measured using a Qubit® RNA Assay Kit on a Qubit® 3.0 for preliminary quantification and then diluted to 1 ng/μl. Insert size was assessed using the Agilent Bioanalyzer 2100 system , and libraries with a qualified insert size were accurately quantified using a StepOnePlus™ Real-Time PCR System . The clustering of the index-coded samples was performed on a cBot cluster generation system using the HiSeq PE Cluster Kit v4-cBot-HS (Illumina) according to the manufacturer's instructions. After cluster generation, the libraries were sequenced on an Illumina Hiseq 4000 platform and 150 bp paired-end reads were generated. The mRNA sequencing assay was performed by Annoroad Gene Technology Co., Ltd., Beijing, China.Cell proliferation was determined using a Cell Counting Kit-8 (CCK-8) assay kit and a Cell-Light 5-ethynyl-2′-deoxyuridine (EdU) Apollo Imaging Kit . For the CCK-8 assay, U87 and U251 cells were seeded into 96-well plates for 0, 24, 48, and 72 h at a density of 3000 cells per well. Then, 10 μL CCK-8 solution was added to each well and incubated with the cells for 2 h. Absorbance was detected at 450 nm using a microplate reader . EdU immunocytochemistry staining was performed using a Cell-Light™ EdU Apollo In Vitro Imaging Kit at 24 h after the cells were plated into 96-well plates. 
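The CCK-8 absorbance readings described above are typically converted to relative viability against an untreated control after blank subtraction; this is a common convention rather than a step stated in the text. A minimal sketch with hypothetical OD450 values:

```python
def viability_percent(a_sample, a_control, a_blank):
    """Relative viability from CCK-8 absorbance at 450 nm, under the usual
    convention: (sample - blank) / (control - blank) * 100. The blank is a
    medium-only well that corrects for background absorbance."""
    return (a_sample - a_blank) / (a_control - a_blank) * 100.0

# Hypothetical OD450 readings: treated well, untreated control, medium-only blank
print(round(viability_percent(0.65, 1.05, 0.05), 1))  # 60.0
```

In practice, replicate wells would be averaged before this conversion and the result plotted per time point (0, 24, 48, 72 h).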
The EdU-positive cells were visualized under a fluorescence microscope . To assess the migration and invasion ability of glioma cells in vitro, migration and invasion assays were performed using transwell chambers with 8-μm pores . For the migration assay, 1000 transfected cells were suspended in 200 μL serum-free medium and added to the upper transwell chamber. After incubation for 12 h in a humidified atmosphere containing 5% CO2 at 37 °C, the migrated cells that had stuck to the lower surface of the membrane were fixed in 4% paraformaldehyde and stained with 0.1% crystal violet for 5 min. The number of migrated cells was counted in five randomly selected fields at 200× magnification using a microscope. For the invasion assay, the transwell chambers were coated with Matrigel (BD Bioscience), and the same procedures as those for the migration assay were followed.Cells were plated in 48-well plates, transfected with the reporter plasmid pGL2-Notch1 promoter-Luc together with an siRNA-NKAP or control expression vector. Luc activities were determined using a luciferase assay system over a period of 24 h.NOTCH1 promoter 1 Forward 5′-GGCTCCTCCGCTTATTCACAT-3′Reverse 5′-CGCCTGGGACTACTTCTCGT-3′.NOTCH1 promoter 2 Forward 5′-CTATGGCAGGCATTTTGGACT-3′Reverse 5′-GCTGATTTATTTCTCCACCACGA-3′.NOTCH1 promoter 3 Forward 5′-TAGGTCCCTCCCAGCCTTT-3′Reverse 5′-GCTGATTTATTTCTCCACCACGA-3′.U87 cells were cross-linked with 1% formaldehyde and quenched by adding 125 mM glycine. Chromatin was isolated by adding cell lysis buffer , and DNA was sheared into fragments of 300–500 bp by sonication. Lysates were pre-cleared for 1–2 h using Salmon Sperm DNA/Protein A Agarose , after which precipitation was induced using anti-H3K27me3 or anti-NKAP (Abcam). An isotype-matched IgG was used as a negative control. 
To reverse the DNA cross-linking, the precipitates were incubated with pronase for 2 h at 42 °C and at 68 °C for 8 h. The Notch1 promoter DNA in the immunoprecipitates was detected by qRT-PCR and agarose gel electrophoresis using the primers listed above.Transfected cells were detached with trypsin and washed 1–2 times with cold phosphate-buffered saline (PBS). The cells were fixed with cold 70% ethanol at room temperature, and then washed again with PBS. The cells were immediately stained with propidium iodide using a BD Cycletest Plus DNA reagent kit following the manufacturer's protocol. Analyses of cell cycle were performed using a FACS Calibur Flow Cytometer .THP-1 cells were cultured in RPMI-1640 medium with 10% fetal bovine serum and 100 ng/ml Phorbol-12-myristate-13-acetate (PMA) for 72 h. The adherent THP-1 cells induced by PMA were co-incubated with GFP-labeled U87 cells for 48 h. The THP-1 cells were then sorted and harvested by a SONY SH800 Cell Sorter. After being washed with PBS twice, the sorted cells were incubated with Alexa Fluor® 647-conjugated anti-human CD206 and Phycoerythrin-conjugated anti-human CD80 . Multiple-color FACS analysis was performed using a FACS Calibur Flow Cytometer and analyzed with FlowJo software .All experimental animal procedures were conducted strictly in accordance with the Guide for the Care and Use of Laboratory Animals and approved by the Animal Care and Use Committee of the Shandong provincial hospital affiliated to Shandong University. The male BALB/c nude mice were randomly divided into four groups in a blinded manner, each group comprising five 4-week-old nude mice. Two groups were used for the subcutaneous xenograft study, and the other two groups were used for stereotactic intracranial tumor implantation. For the subcutaneous xenograft study, 5 × 10^5 cells were subcutaneously injected into the right flanks of nude mice. 
For stereotactic intracranial tumor implantation, 5 × 10^5 glioma cells were harvested by trypsinization, counted, and resuspended in culture medium. Mice were anesthetized by intraperitoneal administration of ketamine (132 mg/kg) and implanted using a stereotactic head frame at a depth of 3 mm through a bur hole placed 2 mm lateral and 0.5 mm anterior to the bregma. For histopathologic analysis, the mouse brains were cut into 8-μm-thick frozen sections. Slides were incubated overnight at 4 °C with primary antibodies (anti-NKAP diluted at 1:100).For the study of the tumor microenvironment, tissue was minced and digested with trypsin for 20 min at 37 °C. The homogenate was then filtered through a 40 μm filter and prepped using Fixation/Permeabilization solution according to the manufacturer's instructions . Cells were then incubated with FITC-conjugated anti-mouse TMEM119 antibody, APC-conjugated anti-mouse Gr-1 antibody, FITC-conjugated anti-mouse Neutrophil (Ly-6B) antibody and FITC-conjugated anti-mouse CD11b antibody prior to FACS analysis.Quantitative data were expressed as the mean ± standard deviation (SD). Significance was tested by one-way analysis of variance (ANOVA) or two-tailed t-tests among the various groups. For in vivo studies, Kaplan-Meier curves and log-rank analyses were conducted using MedCalc software . P < 0.05 was considered statistically significant. To elucidate the functions of NKAP in gliomas, we first tested the effects of NKAP on glioma cell growth. We infected both U87 and U251 glioma cells with lentiviruses expressing GFP and siRNA against NKAP. Nonspecific lentiviral vectors were used as the negative control. 
qRT-PCR and western blot analysis indicated an approximately 70% decrease in the si-NKAP-infected cells compared with the scrambled siRNA-infected cells co-cultured with NKAP knockdown U87 and U251 cells was much less than those co-cultured with control cells, suggesting that NKAP was involved in the altered polarization of TAMs and M-CSF , also known as C-X-C motif chemokine 12 (CXCL12), has been implicated in the recruitment of monocytes/macrophages to the bulk of tumors. Macrophage colony-stimulating factor (M-CSF), on the other hand, is a secreted cytokine that causes macrophages to differentiate into tumor-associated macrophages (TAM) by binding to the colony stimulating factor 1 receptor (CSF1R). When we looked at the tumor-stromal boundary in the xenografted mice, a decrease in SDF-1 expression was observed in the glioma tissues upon knockdown of NKAP Fig. a. ConsisAMs Fig. d, e. We AMs Fig. f. To eluTo further identify the potential NKAP targets in gliomas, we performed RNA sequencing in triplicate to determine the gene expression profiles of the control and NKAP knockdown cell lines Fig. a. IntereTo verify this hypothesis, qRT-PCR and western blot were performed in the U251 and U87 cell lines to assess the expressions of the genes within the Notch signaling pathway. The results showed that depletion of NKAP significantly inhibited both the mRNA and protein expression levels of Notch1, NICD and Hes1. In contrast, the levels of Notch2, Notch3 and Notch4 were moderately regulated and neutrophils (Ly6B+) were significantly down-regulated has emerged as a critical regulator of glioma progression, within which immune tolerance and suppression are the key regulatory factors. The TME contains many different non-cancerous cell types in addition to cancer cells, such as endothelial cells, pericytes, fibroblasts and immune cells. Among them, the majority of immune cells are macrophages, often comprising up to ~ 30% of the tumor mass . 
HoweverNKAP plays an important role in neural development considering its interesting expression patterns in the neural system. It has been reported that NKAP is expressed at heterogeneous levels in different parts of the brain, with higher expression in the proliferative progenitor cell types in the SVZ region, but lower expression in adult neural cells such as glial cells . These rBy use of tissue collection and immunohistochemical analysis, we observed that the expression of NKAP was significantly upregulated in gliomas. The increase in NKAP expression was positively correlated with the degree of glioma malignancy and inversely correlated with prognosis. More importantly, we detected that cellular proliferation, migration and invasion were significantly inhibited upon NKAP knockdown in glioma cell lines. Furthermore, downregulation of NKAP could reduce the recruitment and polarization of TAMs by decreasing the secretion of SDF-1 and M-CSF. As a corollary, NKAP seemed to be a key regulator of glioma progression and the TME, but its molecular mechanisms still remained unclear. Instead of acting as a repressor component, NKAP transactivated Notch1 in the glioma cells. To go a step further, we carried out a ChIP assay and detected direct binding between NKAP and the Notch1 promoter region. Notch1 inhibition could indeed alleviate the functions resulting from upregulation of NKAP. Overall, our findings provide new insight into the regulatory relationship between NKAP and Notch1 in the tumorigenesis of gliomas.In order to explore the mechanisms of NKAP in gliomagenesis, we performed RNA sequencing analysis to determine differentially expressed genes affected by NKAP. Notch1 was observed as one of the most closely related genes. The regulatory relationship between NKAP and Notch1 was first reported in mammalian T cells. 
Pajerowski reported that NKAP could directly interact and co-localize with the known Notch co-repressors CIR and HDAC3 in the regulation of mammalian T-cell development, resulting in suppression of Notch target genes . NeverthIn the nervous system, mounting evidence has revealed that aberrant Notch signaling is closely involved in the development of gliomas. Among these findings, a critical role of Notch1 in the regulation of the immune-suppressive TME has drawn active attention. According to Ling's study, activating Notch1 signaling could promote the expression of M-CSF in BV2 cells , though In summary, we have identified NKAP as an important oncogenic factor in gliomas, and indicated its ability to promote glioma proliferation and invasion. What's more, we provided unequivocal evidence for the first time demonstrating that NKAP performs its function in part via regulating the glioma immune microenvironment through targeting Notch1. These novel findings provide a new perspective for glioma chemotherapeutic intervention.In this manuscript, we have identified NKAP as an important oncogenic factor in gliomas. What's more, we provided unequivocal evidence for the first time demonstrating that NKAP performs its function in part via regulating the glioma immune microenvironment through targeting Notch1.Additional file 1:Figure S1. Western blot assay was performed to test the knockdown efficiency of NKAP in U87, U251 and GL261 cells. (TIF 906 kb)Additional file 2:Figure S2. GO and KEGG analyses were based on the RNA sequencing profiles resulting from NKAP knockdown. Both cytokine production involved in the immune response and the Notch signaling pathway were significantly affected. (TIF 3542 kb)Additional file 3:Figure S3. Agarose gel electrophoresis of the ChIP assay. It was performed using an antibody against NKAP with primers targeted to the promoter region of Notch1. Isotype-matched IgG was used as a negative control. (TIF 695 kb)Additional file 4:Figure S4. 
Proportion of myeloid-derived suppressor cells (A) and neutrophils (B) were significantly down-regulated in the NKAP depleted gliomas. C, Percentage of TMEM119 positive microglia was not affected by NKAP knockdown. (TIF 560 kb)Additional file 5:Figure S5. Mechanism map of NKAP in the feedback loop between glioma development and tumor immune microenvironment. (PNG 169 kb)"}
+{"text": "Moreover, the nucleoid-associated protein H-NS represses conjugation at non-permissive temperature. A transcriptomic approach has been used to characterize the effect of temperature on the expression of the 205 R27 genes. Many of the 35 tra genes, directly involved in plasmid-conjugation, were upregulated at 25\u00b0C. However, the majority of the non-tra R27 genes\u2014many of them with unknown function\u2014were more actively expressed at 37\u00b0C. The role of HtdA, a regulator that causes repression of the R27 conjugation by counteracting TrhR/TrhY mediated activation of tra genes, has been investigated. Most of the R27 genes are severely derepressed at 25\u00b0C in an htdA mutant, suggesting that HtdA is involved also in the repression of R27 genes other than the tra genes. Interestingly, the effect of htdA mutation was abolished at non-permissive temperature, indicating that the HtdA-TrhR/TrhY regulatory circuit mediates the environmental regulation of R27 gene expression. The role of H-NS in the proposed model is discussed.Conjugation of R27 plasmid is thermoregulated, being promoted at 25\u00b0C and repressed at 37\u00b0C. Previous studies identified plasmid-encoded regulators, HtdA, TrhR and TrhY, that control expression of conjugation-related genes ( Horizontal gene transfer (HGT) is a process of genetic exchange that highly contributes to evolution and adaptation of bacteria to new niches by promoting acquisition of genes coding for different metabolic pathways, toxins, adhesins, or antimicrobial resistances. Plasmid conjugation is one of the main HGT mechanisms responsible for the rapid dissemination of antibiotic resistances between pathogenic strains Bennett, . In ordeSalmonella enterica and Escherichia coli genes, are clustered in two separated regions, Tra1 and Tra2 and non-permissive (37\u00b0C) temperatures was determined. 
Taking into consideration that the HtdA-TrhR/TrhY regulatory circuit described seems to play a crucial role in the environmental control of R27 conjugation, the transcriptomic studies have been extended to a lac derivative AAG1 . M9 minimal media plates with the following composition: 1 x M9 salts, 0.2% lactose, 10 μM thiamine and 1.5% bactoagar, were used to differentiate donor from transconjugant cells in conjugation experiments, as described previously . An intragenic region within the R0009 gene was used as a negative control . PCR amplification of those regions was performed using primer pairs described in EcoRI (forward primers) or BamHI (reverse primers) restriction sites. The PCR-amplified fragments were cloned in plasmid pGEM-T and subsequently in pRS551, either in BamHI or EcoRI-BamHI sites. The resulting constructs were transferred to the attB chromosomal locus of the AAG1 strain using previously described protocols . The RNA was purified using an SV Total RNA Isolation System (Promega) according to the manufacturer's instructions. The RNA was DNase treated with TURBO DNase (Ambion). After concentration using an RNeasy Minielute Clean-up kit (Qiagen), the purity and quality of the purified RNA were tested on a Bioanalyzer 2100 (Agilent Technologies). All samples showed an RNA integrity number (RIN) over 8.0.The RNA used in microarray experiments was purified from three independent cultures grown in LB under shaking conditions. The temperature of incubation was either 25 or 37°C and samples were taken at mid-logarithmic phase , as previously described were obtained after direct ligation of 10 μg of total RNA with CircLigase RNA Ligase (Epicentre). To obtain circRNA samples carrying primary transcripts , prior to ligation, 10 μg of total RNA was treated with phosphatase and subsequently with RNA 5′ polyphosphatase (Epicentre). 
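The conjugation experiments above distinguish donors from transconjugants on selective M9-lactose plates. The text does not give the formula, but conjugation frequency is conventionally reported as transconjugants per donor; a sketch with hypothetical plate counts:

```python
def conjugation_frequency(transconjugants_per_ml, donors_per_ml):
    """Conjugation frequency expressed as transconjugants per donor cell,
    a common convention (some labs normalize per recipient instead).
    Both arguments are CFU/ml from selective plating of the mating mix."""
    return transconjugants_per_ml / donors_per_ml

# Hypothetical plate counts (CFU/ml) from a mating at permissive temperature
print(conjugation_frequency(4.0e4, 2.0e8))  # 0.0002
```

Comparing this ratio between 25 °C and 37 °C matings is what quantifies the thermoregulation of transfer.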
CircRNA samples were retrotranscribed to cDNA using the AMV RT (Promega) with specific primers of the AN operon grown in LB at 25\u00b0C to an ODN operon . cDNA seN operon . The PCRThe 5\u2032RACE was performed using the FirstChoice RLM-RACE kit (Ambion) and following manufacturer's instructions. After cDNA synthesis, two rounds of PCR were performed using the primer pairs outer primer/R1-trhU and inner primer/R2-trhU. The outer and inner primers are supplied by the manufacturer. The amplicons generated were purified and sequenced.To determine cotranscription among the different transcripts generated from the AN operon, cDNA was obtained from total RNA samples using primers R1-0009, R1-trhP, and R1-trhU and the reverse transcriptase AMV RT (Promega). PCR amplification using the following primer pairs was used to detect cotranscription: F1-htdK/R1-0009, F1-0009/R1-trhP, and F1-trhW/R1-trhU. As control of DNA contamination, not retrotranscribed samples were used.The conjugation of the R27 plasmid is tightly regulated by temperature Taylor . Moreovelac derivative of MG1655) carrying the R27 plasmid were analyzed. Cultures were grown in LB at permissive (25\u00b0C) and non-permissive (37\u00b0C) temperatures. Previous studies showed that R27 conjugation occurs more efficiently in cells growing exponentially as compared with cells in stationary phase of growth . Only expression values higher than 100 (arbitrary units of intensity of fluorescence), in at least one of the two conditions compared, were considered as significant expression and thus, the fold-change expression between conditions was calculated. We arbitrarily defined the fold-change threshold for the differential expression of a gene to 2-fold or higher. Therefore, genes with a fold-change \u2265 +2 are induced at 25\u00b0C whereas genes with a fold-change \u2264 \u22122 are induced at 37\u00b0C. A summary of the number of genes with a temperature-dependent expression repressed at 25\u00b0C. 
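The thresholding rule just described (expression above 100 arbitrary units in at least one condition, then a fold-change cutoff of +/-2) can be written out directly. The signed fold-change convention below (positive when higher at 25°C, negative when higher at 37°C) is an assumption consistent with the text:

```python
def classify_gene(expr_25, expr_37, min_expr=100.0, fc_cutoff=2.0):
    """Apply the microarray rule from the text: require expression above
    min_expr in at least one condition, then classify by a signed
    fold-change with a +/-2 cutoff. Returns 'induced_25C', 'induced_37C',
    or 'unchanged'."""
    if max(expr_25, expr_37) <= min_expr:
        return "unchanged"  # below the significant-expression floor
    # Signed fold-change: positive if higher at 25C, negative if higher at 37C
    fc = expr_25 / expr_37 if expr_25 >= expr_37 else -(expr_37 / expr_25)
    if fc >= fc_cutoff:
        return "induced_25C"
    if fc <= -fc_cutoff:
        return "induced_37C"
    return "unchanged"

print(classify_gene(1400.0, 100.0))  # induced_25C (14-fold, like trhA)
print(classify_gene(150.0, 450.0))   # induced_37C (3-fold higher at 37C)
print(classify_gene(90.0, 80.0))     # unchanged (below expression floor)
```

A gene such as trhA, induced more than 14-fold at 25°C, falls squarely into the first class under this rule.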
All tra operons, except for the Z operon coding for the entry-exclusion system, were found to be significantly induced at 25°C. Consistent with the increased conjugation frequency detected at low temperature, several genes from the AC operon, coding mostly for proteins required for mating pair formation, were found among the genes with the greatest induction. For instance, the gene trhA, coding for the major subunit of the putative conjugative pilus, is induced more than 14-fold. The expression of the R operon, coding for the regulators TrhR and TrhY that are required for activation of tra operon expression, was induced at 25°C, consistent with previous transcriptional studies and a protein involved in the turnover of disulphide bonds (R0135) , involved in plasmid replication; tetC (R0083), the tetracycline repressor; insA (R0095) and three insB genes that encode putative transposases of the insertion element IS1; the ORF R0195, encoding an IS2 transposase; and another putative transposase, R0148. Interestingly, the hha gene, coding for a transcriptional regulator that acts in combination with H-NS, is more expressed at 37°C, in concordance with the repressor role in the regulation of the conjugation process at non-permissive temperature described for H-NS/Hha proteins , the mobilization of IS elements is induced, promoting the transfer of material from the plasmid to the chromosome.Altogether, our data clearly indicate that at 25°C there is a higher expression of the tetR, tetA, and tetD), genes involved in partitioning (parA/R0020 and parB/R0019), replication (repHIB) or transposition .Evidently, many R27 genes were not thermoregulated. Some of the R27 genes were highly expressed (higher than 600 arbitrary units of intensity of fluorescence) at both temperatures. These results could be expected, since many of these genes are involved in global processes that may take place at any temperature. 
Among these genes we found the tetracycline operon by counteracting the activation mediated by TrhR/TrhY were performed to monitor the effect of HtdA in the transcriptional expression of the R27 genes at both permissive and non-permissive temperatures . The downregulated gene is a non-tra gene with unpredicted function. Among the tra genes, 33 out of the 35 genes are upregulated in the htdA mutant. Only the R operon, containing the trhR and trhY genes, is not upregulated more than two-fold. This is consistent with previous transcriptional data from a lacZ fusion with the R operon promoter, indicating that the R operon was not regulated by the HtdA protein are derepressed in an htdA mutant strain clearly indicates that HtdA, initially described as a regulator of the tra genes, controls directly or indirectly the expression of most R27 genes. Interestingly, among the genes derepressed in the htdA mutant, we found hns and hha which are also involved in the temperature regulation of R27 conjugation , which have been found induced in R27 (htdA+) under all conditions known to promote R27 conjugation such as low temperature (parA/R0020 and parB/R0019), 2 genes involved in citrate transport (citA and citB), part of the Tc operon and 2 genes involved in UV protection (mucA and mucB) , S4. Amond mucB) , S4. AllhtdA mutation at non-permissive temperature was tested (htdA mutant strain (107 of 161). Only 11 genes were derepressed by the htdA mutation as compared with the 139 genes detected at permissive temperature. On the other hand, at 37\u00b0C, 43 genes have lower expression in the htdA mutant than in the strain harboring the wt plasmid whereas at 25\u00b0C only one gene showed a decrease in the expression. 
Among the tra genes, 4 genes are derepressed in the htdA mutant: htdF from the AN operon, trhZ from the Z operon and the 2 genes from the R operon; whereas 6 genes are repressed: trhH (F operon), traI and R0118 (H operon), trhW (AN operon), R0016 (Z operon) and trhC (AC operon). Among the non-tra genes, only 7 genes were repressed in the htdA mutant, all of them also repressed by HtdA at permissive temperature. Overall, our data suggest that the repressor role of HtdA at permissive temperature is not limited to the tra genes, and that this regulatory role is temperature dependent, vanishing at 37\u00b0C. Considering that HtdA acts by counteracting the activation mediated by TrhR/TrhY, the role of the activators in the temperature-mediated regulation was studied. F operon expression has been systematically used as a reporter to study HtdA-mediated regulation of the tra genes. Fold changes (FC) between 25 and 37\u00b0C of the different genes were analyzed. Genes with no altered expression are defined by having an M value between +1 and \u22121, equivalent to fold changes between \u22122 and +2. A closer look at the gene expression pattern of the tra operons in response to temperature reveals three different patterns. Pattern 1, unresponsiveness, as shown by the Z operon, where the expression of the three genes is not altered by temperature. Pattern 2, operons with induced expression at 25\u00b0C, in which the temperature responsiveness is greater among the proximal genes than among the distal ones. This is the most common pattern, shared by the operons AC, H, R, and F. The decrease in the expression of downstream genes in the same operon is defined as transcriptional polarity and is a common feature among polycistronic operons. Pattern 3, the unusual pattern shown by the AN operon: the 4 proximal genes do not respond to temperature, while the 4 following genes show a pattern similar to that described in pattern 2. 
The genes trhP and trhW are induced at 25\u00b0C (2.9- and 3.7-fold) and the 2 distal genes (trhU and trhN) are again unresponsive to temperature. This expression profile within a polycistronic operon suggests the presence of complex regulatory mechanisms. Previous studies on the transcriptional organization of the tra genes revealed that the AN operon contains 8 genes, spanning from htdA to trhN, which were apparently cotranscribed. Intergenic regions (IR) are found between R0009 and trhP (233 bp), and between trhW and trhU (249 bp). Promoter searches were carried out with the BPROM software. RNA was obtained from cells grown at 25\u00b0C. Circularized RNA (circRNA) was obtained from either the isolated RNA or after 5\u2032 polyphosphatase RNA treatment. The circRNA samples were used as template to obtain cDNA using specific primers. The transcript spanning htdA-htdK was the only transcript detected after polyphosphatase treatment, indicating a triphosphated mRNA and thus a transcript with a real transcriptional start. Transcript #3 was detected from untreated RNA (processed transcript). Remarkably, its 5\u2032 end overlaps with the 3\u2032 end of transcript #2 by 46 bp. A possible explanation of this result is that transcript #3 was generated from a secondary promoter different from the promoter upstream of htdA; once generated, this transcript would be further processed into the final transcript #3 detected. Transcript #4 has a 5\u2032 end located downstream of the 3\u2032 end of transcript #3; therefore, it could be generated from processing of a pre-existing polycistronic transcript. After several trials, a transcript containing the sequences of the distal genes trhU-trhN was not identified by circRNA. However, 5\u2032RACE experiments identified several trhU-trhN transcripts with distinctive 5\u2032 ends (transcript #5). Further studies will be required to identify the exact location of the 3\u2032 end of this transcript. In RT-PCR assays using specific primers, the cotranscription of trhP and trhU was assessed. The three IR and an R0009 intragenic region were used as a negative control. Together with the promoter predicted upstream of R0009, these results suggest E. 
coli are complex, with internal promoters or terminators that generate multiple transcription units. For 43% of operons, differential expression of polycistronic genes was observed, despite being in the same operons, indicating that E. coli operon architecture allows fine-tuning of gene expression.Further extensive experiments will be required to fully describe the transcripts generated from the AN operon and the mechanisms involved in its generation. We would like to highlight that our data identify different mRNA species that provide a feasible explanation to the differential expression from the AN operon genes as revealed by the microarray data. The results obtained suggest that different transcriptional and posttranscriptional events may participate in the control of AN operon expression, such as partial termination, processing and transcription from secondary promoters. As reported by Conway et al. , a largetra genes observed under these conditions .The datasets generated for this study can be found in the ArrayExpress repository, under the accession number MG contributed in the investigation. SP contributed in the investigation and writing the manuscript. CM and CB contributed in the conceptualization, investigation, formal analysis, and writing the manuscript. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
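The microarray cutoff used in the R27 record above (genes counted as unchanged when the M value lies between +1 and −1, i.e. less than a 2-fold change either way) can be sketched as a minimal Python helper. The gene names and intensity values below are illustrative placeholders, not figures from the study.

```python
import math

def m_value(intensity_25, intensity_37):
    """Microarray M value: log2 ratio of expression at 25C vs 37C."""
    return math.log2(intensity_25 / intensity_37)

def classify(m, cutoff=1.0):
    """|M| < 1 corresponds to less than a 2-fold change in either direction."""
    if m >= cutoff:
        return "induced at 25C"
    if m <= -cutoff:
        return "induced at 37C"
    return "unchanged"

# Illustrative intensities (arbitrary fluorescence units), not study data.
genes = {"trhA": (1400, 95), "hha": (200, 520), "tetA": (800, 760)}
for gene, (i25, i37) in genes.items():
    m = m_value(i25, i37)
    print(f"{gene}: M = {m:+.2f} -> {classify(m)}")
```

Note that the symmetric cutoff is natural on the log2 scale: M = +1 and M = −1 both correspond to a 2-fold change, whereas raw fold-change ratios (2 vs 0.5) are asymmetric.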
+{"text": "As we write these lines, the coronavirus-19 (COVID-19) pandemic has reportedly killed over 2'400'000 people, leaving many individuals and families in mourning throughout the world. The current context has put a major strain on people as it has drastically altered our daily lives and caused many societal challenges. We are experiencing much change and multiple losses. In addition to increased unemployment and financial difficulties, COVID-19 has required exceptional sanitary measures such as social distancing, confinement and quarantine, adding a painful sense of isolation to individuals and families in mourning. Internet-based interventions (IBIs) represent a promising avenue to address the treatment gap. They are immediately accessible and can reach a large number of individuals. They also diversify the ways to deliver evidence-based treatments. A recent systematic review and meta-analysis identified studies (N = 1,257) on guided IBIs, all based on CBT, out of over 4,100 studies. Results showed a promising overall effect on grief reduction with significant moderate effect sizes, stable over time from post-treatment to 3-month follow-up assessment. Although IBIs have only recently started focusing on grief-related symptoms, they have shown promising and stable results, demonstrating their feasibility and efficacy. To the best of our knowledge, two IBIs targeting grief-related symptoms have been tested to date in an unguided format; in the first study, Dominick et al. proposed such an intervention. While most of the interventions were developed for people who were bereaved or suffering from PTSD, one study extended this treatment to other types of loss. Indeed, Brodbeck et al. have developed such an intervention. Few psychotherapists are trained in treating complicated grief. The new version of the programme, named LIVIA 2.0, is currently in development. Like its predecessors, it will consist of 10 sessions to be completed over 3 months. 
In order to improve the effectiveness of and adherence to the programme, i.e., promoting the autonomy of the participants completing it and reducing the risk of avoidance and drop-out due to feelings of failure, LIVIA 2.0 will include the following changes. First, guidance on demand will be implemented, as it is a cost-effective alternative to standard guidance and will help better meet the participants' needs and expectations, while addressing the challenge of making the programme as effective as possible and optimizing the use of human resources. In the coming years, we plan to compare the efficacy of LIVIA-FR and LIVIA 2.0. This study is supported by the Swiss National Science Foundation. It is hypothesised that LIVIA 2.0 will require less guidance than LIVIA-FR and be at least as effective. A more refined exploration of the short-term efficacy of each module will be carried out by monitoring the participants' state throughout the programme. We also hope that this study will show that the envisaged improvements are effective and improve not only access but also, and above all, adherence to the programme. Although grief is a natural response to loss, our social context plays a vital role in how we experience these events. Given the circumstances, there is clearly an urgency to offer support to people in mourning. IBIs such as LIVIA are promising means to meet needs that were already present but unmet, and that have been exacerbated by the current sanitary crisis. With such uncertainty and insecurity because of COVID-19, the support of a programme like LIVIA 2.0 can be \u201cthe lifebelt\u201d that helps navigate these turbulent times. Indeed, the current pandemic context has made the grieving process harder. Isolation, social distancing and confinement all have significant effects, as we feel they rob us of relationships crucial to our well-being. 
The lack of relationships may lead to difficulties in coping with the fear of the unknown in an ambiguous crisis situation such as COVID-19. Faced with loneliness, nothing can replace true human contact, but internet-based interventions may serve as an intermediary to build new relationships that may help to overcome mourning. Nevertheless, progress must be made not only in technology but also in the design of programmes, to better target needs and offer relevant help to the greatest number. Traditional psychoeducational programmes are perhaps still too standardised and uniform today to respond to the variety of suffering, and research has the potential to help guide technology in the right direction. And hopefully, we will be better equipped to support ourselves in times of loss as a result of this pandemic. LB and AD conceived the work. LB, AD, and LE performed the literature search. LB drafted the paper. MK, LE, and VP revised the work. All authors provided approval of the version to be submitted. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Immunization during pregnancy has been recommended in an increasing number of countries. The aim of this strategy is to protect pregnant women and infants from severe infectious disease, morbidity and mortality, and is currently limited to tetanus, inactivated influenza, and pertussis-containing vaccines. There have been recent advancements in the development of vaccines designed primarily for use in pregnant women (respiratory syncytial virus and group B Streptococcus vaccines). Although there is increasing evidence to support vaccination in pregnancy, important gaps in knowledge still exist and need to be addressed by future studies. This collaborative consensus paper provides a review of the current literature on immunization during pregnancy and highlights the gaps in knowledge and a consensus of priorities for future research initiatives, in order to optimize protection for both the mother and the infant. Vaccination of pregnant women induces a vaccine-specific immune response in the mothers and the transfer of vaccine-specific antibodies via the placenta and breastmilk to directly protect the infant during the first months of life from the targeted pathogens. The prevention of group B Streptococcus (GBS) infections in infants through maternal vaccination has become a priority and a target for potential new vaccine candidates in trials and development. To optimize the protection offered to mothers and infants by maternal immunization, several factors that can affect this strategy must be better understood. The main aim of this consensus paper is to discuss current knowledge regarding immunization during pregnancy and highlight the gaps that need to be addressed to ensure the highest protection for both the mother and their infants. 
References were identified through searches of PubMed for human studies published in English using the terms \u201cimmunization\u201d or \u201cvaccination\u201d or \u201ctetanus\u201d or \u201ctetanus disease\u201d or \u201ctetanus vaccine\u201d or \u201cpertussis\u201d or \u201cTdap\u201d or \u201cpertussis immunization\u201d or \u201cpertussis vaccination\u201d or \u201cpertussis vaccine\u201d or \u201cTdap vaccine\u201d or \u201cTdap immunization\u201d or \u201cinfluenza\u201d or \u201cinfluenza vaccines\u201d or \u201cinfluenza immunization\u201d or \u201cmaternal influenza vaccination\u201d or \u201cinfluenza vaccines in pregnancy\u201d or \u201cRSV\u201d or \u201crespiratory syncytial virus\u201d or \u201cGBS\u201d or \u201cGBS vaccine\u201d or \u201cGroup B streptococcus\u201d and \u201cpregnancy.\u201d Articles resulting from these searches and relevant references cited in those articles were reviewed. References were also provided by authors. Outcomes assessed were safety, immunogenicity, efficacy, and effectiveness of immunization during pregnancy against tetanus, pertussis, influenza, RSV, and GBS diseases. After the initial review, a meeting was held in Italy to discuss the current literature and knowledge gaps. A consensus on the content was reached after multiple rounds of revision among the authors. Maternal immunization, and the use of medication in pregnancy in general, have been a focus of ethical deliberations for decades. Until recently, the prevailing ethical approach to immunization during pregnancy was based on the precautionary principle, which limits the introduction of new interventions whose ultimate effects are uncertain. This precautionary principle-centered approach, combined with risk aversion among the legal departments of vaccine manufacturers, led to the exclusion of pregnant women from most vaccine trials for decades, leading to gaps in evidence of vaccine safety and efficacy among pregnant women. 
With an increasing focus on maternal immunization, there has been a reconsideration of relevant ethical paradigms, resulting in several recent developments in this area. First, a report of the U.S. National Vaccine Advisory Committee's Working Group on Maternal Immunization recommended that \u201cRelevant regulations, statutes, and policies\u2026should be modified to indicate that pregnant women are not a vulnerable population for the purposes of ethical review\u201d. Globally, progress has also been made in the prioritization of immunization in pregnancy and the inclusion of pregnant women in vaccine trials. The WHO Strategic Advisory Group of Experts on Immunization (SAGE) recommended in 2012 that pregnant women should be highly prioritized for influenza vaccination in countries that consider initiating or expanding seasonal influenza vaccine programs. These and other developments in ethical considerations for maternal immunization are likely to result in a more conducive environment for maternal immunization research and deployment. However, there are a few areas that require further deliberations. Safety of vaccines administered during pregnancy needs to be evaluated for both the mother and her newborn, and is an important consideration for mothers' willingness to receive a vaccine during pregnancy. There is a significant bulk of evidence to support the safety of immunization with tetanus toxoids (TT), the longest-standing vaccine recommended during pregnancy. There is also an increasing body of evidence to support the safety of pertussis and influenza immunization during pregnancy (see the specific sections below). However, continuous assessment and reporting of adverse events after immunization during pregnancy remains important, especially for relatively newly introduced maternal vaccines, as it informs about rare events that might follow immunization. 
In addition, assessment of baseline pregnancy outcomes in unvaccinated women in different world regions and settings will help in establishing baselines against which to assess safety outcomes. Furthermore, there is significant heterogeneity and lack of consensus on adverse event reporting in maternal immunization studies. This is a challenge for comparing and pooling data from different studies. In an attempt to overcome this weakness, WHO and the Brighton Collaboration worked together to provide written guidance on how to conduct safety studies in the field of maternal immunization. The immune system of a pregnant woman is adapted to allow for the survival of the semi-allogeneic fetus. Serum estradiol levels increase up to 500-fold during normal pregnancy. Antibody levels against Bordetella pertussis (B. pertussis) antigens, such as pertussis toxin (PT), 1 month after receipt of Tdap vaccine were not significantly different in pregnant and non-pregnant women and were comparable in both after 1 year. Immunoglobulin G (IgG) is the dominant immunoglobulin isotype that crosses the placenta and contributes to maternally derived passive immunity during early infancy. In healthy pregnant women, IgG transfer across the placenta begins toward the end of the first trimester of pregnancy and increases as pregnancy progresses. IgG concentrations in the fetus are 5\u201310% of the maternal levels at 17\u201322 weeks gestation, 50% at weeks 28\u201332, and usually exceed maternal levels by 20\u201330% at term. A number of factors should be considered when determining the ideal timing of vaccination in pregnancy, including time-dependent safety when administered at different time points in gestation, time-dependent efficiency of transplacental transfer of vaccine-induced antibodies, interference with infants' immune response to vaccination, and clinical efficacy/effectiveness. 
Based on the literature review and consultation among authors, a consensus on priorities for future research related to factors affecting immunization during pregnancy was reached. Vaccines currently recommended and used aim to protect against tetanus, pertussis and influenza diseases. Different vaccine formulations and dosages exist for use in pregnant women in selected countries in Europe, North America, South America, and Asia. The World Health Organization (WHO) recommends that if a pregnant woman has never received a tetanus-toxoid-containing vaccine (TT-CV) or her vaccination status is unknown, she should receive two TT (or Td) vaccine doses 4 weeks apart during pregnancy, with the second dose given at least 2 weeks before delivery. Based on WHO recommendations, five total doses are likely needed for protection throughout the childbearing years, so a third dose is given 6 months after the second dose, and two additional doses are recommended to be given during the next 2 years or during two subsequent pregnancies. Several studies have demonstrated TT-CVs to be safe in pregnancy. Several studies have also shown that following maternal immunization with TT-CVs, anti-TT IgG is actively transferred across the placenta, leading to protective levels in the infant. If a Tdap vaccine in pregnancy is being considered to replace a single dose of TT vaccine in some settings, in order to provide dual coverage for pertussis and tetanus disease, it is important to assess the immunogenicity of Tdap in inducing anti-TT IgG compared with TT or Td formulations. In a small study from Vietnam, vaccination with Tdap in pregnancy resulted in higher cord anti-TT IgG levels compared with vaccination with TT; however, this difference did not persist at 2 months of age. Both maternal and neonatal tetanus were very common in most developing countries even into the 1980s. 
In 1989, the WHO called for the elimination of maternal and neonatal tetanus by the end of the century. At that time, 59 countries reported maternal and neonatal tetanus. As part of the MNTE program, and along with safer birth techniques and effective immunization strategies in children and adults, more than 150 million women were vaccinated against tetanus during pregnancy. Altogether, these practices contributed to the elimination of maternal and neonatal tetanus in 45/59 countries as of the end of 2018 , 85. HowBased on the literature review and consultation among authors, a consensus on priorities for future research related to immunization against tetanus during pregnancy was reached .Data on tolerability and safety of pertussis immunization during pregnancy are reassuring . This haB. pertussis antigens. Antibodies against all B. pertussis antigens included in the Tdap vaccine have been shown to reach peak levels at the end of the second week after Tdap administration in non-pregnant women of childbearing age, and this peak is followed by a rapid decline up to 5 months . HoweverThe optimal timing for maternal influenza immunization has not been established, and recommendations allow administration at any time during pregnancy , 152. ImInfluenza can be a severe disease for pregnant women, neonates and young infants. The severity of infection increases as pregnancy advances, with the greatest maternal risk occurring during the third trimester of pregnancy , 157. YoMultiple studies have shown that administration of an IIV during pregnancy reduces the risk of influenza in pregnant woman by ~35\u201350% , 160\u2013162The efficacy of IIV in pregnancy in the prevention of maternal and infant influenza disease varies depending on the setting as well as the match of the vaccine utilized to circulating influenza strains. The majority of efficacy data are derived from studies performed in LMICs when compared to HICs. 
While influenza disease is seasonal in countries with temperate climates, there is no seasonal pattern in tropical countries. Altogether, current data on the safety, immunogenicity, and efficacy of maternal IIV vaccination for pregnant women and their infants have resulted in pregnancy as a potential indication in the vaccine label by the European Medicines Agency as of July 2019. Based on the literature review and consultation among authors, a consensus on priorities for future research related to immunization against influenza during pregnancy was reached. Immunization with Haemophilus influenzae type b (Hib) polysaccharide or Hib conjugated vaccines in pregnant women was associated with mild inhibition of infants' immune responses to Hib conjugated vaccines. The effect of maternal immunization on immune responses to tetanus-containing vaccines in infancy is of importance in countries where a replacement of the existing tetanus vaccination program by a Tdap vaccination program is being considered. A small study in Vietnam reported higher anti-TT levels after primary immunization with tetanus-containing vaccines in infants born to Tdap-vaccinated pregnant women compared to infants born to TT-vaccinated pregnant women. Several vaccines are conjugated to TT as a carrier protein, and thus vaccine-induced immune responses to these vaccines in infants born to Tdap-vaccinated pregnant women might also be affected. Hib anti-polyribosylribitol phosphate (PRP) levels were higher after primary immunization with Hib TT-conjugated vaccine in infants born to Tdap-vaccinated pregnant women when compared to infants of unvaccinated mothers. One study found no differences between anti-Men C antibody levels after primary immunization with meningococcal C TT-conjugated vaccine in infants born to Tdap-vaccinated when compared to unvaccinated pregnant women. 
More stStudies have shown that Tdap immunization in pregnancy is associated with decreases in humoral immune responses to infants' immunization with acellular pertussis (aP) containing vaccines. Several studies describe significantly lower anti-PT IgG levels in infants born to Tdap-vaccinated pregnant women after the completion of primary immunization, while results were less consistent after booster immunization , 175\u2013177B. pertussis antibody levels at delivery in infants born to unvaccinated women and their anti-B. pertussis antibody levels after wP vaccination should be investigated in clinical trials as theseData on the potential impact of maternal influenza immunization on the immune response of infants to their immunization against influenza are scarce as influenza vaccines are administered in infants older than 6 months, when most maternally-derived antibodies already have waned from infant's circulation. Earlier studies performed to assess immunogenicity of influenza vaccination in infants younger than 6 months old found that post vaccination seroprotection rates (titer \u2265 1:40) were higher in infants who received IIV at 6 months of age when compared to infants who received vaccination during 6\u201312 weeks of age . AnotherMechanism of interference between maternally-derived antibodies and infant's immune responses to subsequent immunizations has not been fully explored . Some prIn utero priming of the fetal immune system after vaccination against influenza in pregnancy has been reported. IgM antibodies against influenza vaccine antigens were detected in nearly 40% of cord blood specimens of newborns born to women vaccinated with IIV in pregnancy . Newborns of mothers colonized with GBS are at higher risk of developing meningitis and sepsis . Althougin vitro are the most studied candidate vaccines . A recenin vitro .Several challenges for the development of GBS vaccines for maternal immunization remain unsolved. 
There are only 10 known GBS serotypes, of which 6 are associated with 98% of all described strains that cause invasive disease, and even a trivalent vaccine would provide coverage for 80% of all global invasive disease cases. Phase 1b/2 clinical trials have shown that vaccination of pregnant women with a trivalent GBS vaccine induces anti-GBS antibodies that are transferred to the newborn at delivery. Finally, the clinical effectiveness of GBS vaccines in pregnant women and neonates has not been determined. Considering the relatively low incidence of invasive GBS disease, especially in HICs, the pathway of licensure of a GBS vaccine targeted at pregnant women with the main objective of protection of their infants against early and late-onset invasive GBS disease is likely to require an alternative approach to conventional efficacy trials. This would include demonstrating the safety of the vaccine in pregnant women, and benchmarking their immune responses against a serological endpoint associated with reduced risk of invasive GBS disease. Studies are currently underway in LMICs and HICs investigating the association of maternally derived serotype-specific IgG (using a standardized assay) and the threshold associated with 80\u201390% risk reduction for invasive GBS disease. As current GBS vaccines under development are conjugated to TT or the DT mutant CRM197, it will be important to investigate whether these vaccines given to pregnant women may result in interference with infant vaccines conjugated to these carrier proteins and given in infancy. Current evidence suggests that CRM197-conjugated GBS vaccine administered in pregnancy did not affect infants' immune responses to PCVs. RSV is the most common cause of severe lower respiratory tract infections (LRTIs) in young children worldwide, with a disproportionately high burden of disease in LMICs. 
Recently, several new vaccines, including live-attenuated, gene-based vector vaccines, and particle-based vaccines, have been developed and found to be safe and well-tolerated in the non-pregnant population. Studies on RSV-F protein vaccines in pregnant women have shown that these vaccines are safe and immunogenic. A phase 3, randomized, placebo-controlled trial including 4,636 pregnant women, conducted in 11 countries with an RSV-F nanoparticle alum-adjuvanted vaccine, showed that protection against RSV LRTI hospitalization was noted, but the primary study endpoint for reduction of medically significant RSV LRTI was not met. Multiple factors could have affected the outcomes measured in this first immunization study of an RSV vaccine in pregnancy. Pregnant women were vaccinated at 28\u201336 weeks gestation, and the efficiency of transfer of anti-RSV antibodies was found to be higher in women vaccinated <30 weeks GA compared with women vaccinated \u226530 weeks GA. In addition, vaccine efficacy varied in different settings, being higher in middle-income countries (compared with HICs). Mathematical modeling can help predict which women and infants are expected to benefit the most from RSV vaccines. This could be achieved by defining women who are expected to deliver in RSV season and the preferred timing of vaccination to optimize protection in those infants. The ideal timing of vaccination could be predicted based on the kinetics of the antibody response in mothers, the efficiency of antibody transfer, the estimated antibody half-life, and the duration of infants' exposure to seasonal RSV. Based on the literature review and consultation among authors, a consensus on priorities for future research related to immunization during pregnancy against GBS and RSV was reached. B. 
pertussis secretory immunoglobulin A (sIgA) antibodies were detected in colostrum and in breast milk up to 8 weeks after delivery from women vaccinated with Tdap during pregnancy. JE: Research support to my institution from Novavax, GlaxoSmithKline, Merck, Chimerix; consultant for Sanofi Pasteur and Meissa; honoraria received from both companies mentioned. MDS is currently, or has previously been, a Chief or Principal Investigator on vaccine trials funded by vaccine manufacturers including GSK, MCM, Sanofi Pasteur, Novartis Vaccines, Pfizer, Novavax, and MedImmune. These studies are conducted on behalf of the University of Oxford, and MDS receives no personal payment for this work. LV has received speaker's fees from GSK, Pfizer, Novartis, Sanofi Pasteur, and MSD in the past 3 years. MO'R: Funding for clinical trials on rotavirus vaccines (GlaxoSmithKline up to 2008), meningococcal B vaccines (GSK and Novartis up to 2017), RSV (MedImmune up to 2018), pneumococcal vaccines (Merck to date). Funding for epidemiology/impact of disease studies: enteric virus impact studies (Takeda Vaccines to date), vaccination acceptance study (Sanofi Pasteur to date). Travel support to present study results received; no speaker's fees or honoraria received. MAS has received grants to support research projects and consultancy fees from GSK, Pfizer, MSD, Seqirus, and Sanofi Pasteur. KF has served on vaccine advisory boards for Sanofi Pasteur and Seqirus in the past 3 years and received honoraria for attending meetings and speaker fees. SM: Institution received grant support in relation to studies on GBS and RSV, including from BMGF, Pfizer, GSK, Minervax, and Novavax; no personal fees received from any of these sources, except an advisory honorarium from BMGF. 
FM-T has received honoraria from GSK, Pfizer, Sanofi Pasteur, Merck Sharp & Dohme, Seqirus, and Janssen for taking part in advisory boards and expert meetings, and for acting as speaker in congresses outside the scope of the submitted work. FM-T has also acted as principal investigator in RCTs of the above-mentioned companies as well as Ablynx, Regeneron, Roche, Abbot, Novavax, and Medimmune, with honoraria paid to his institution. FM-T research activities received support from the Instituto de Salud Carlos III : project ReSVinext ISCIII/PI16/01569/Cofinanciado FEDER and project Enterogen (ISCIII/PI19/01090). LM has received speaker's fees from GSK, Pfizer, Novartis, Sanofi Pasteur and MSD. RD has received grants/research support from Pfizer and Merck Sharp & Dohme; has been a scientific consultant for MeMed, Merck Sharp & Dohme, and Pfizer and a speaker for Pfizer. MC has received honoraria from GSK, Pfizer, Sanofi Pasteur, Merck Sharp & Dohme and Seqirus for taking part in advisory boards and expert meetings, and for acting as speaker in congresses outside the scope of the submitted work. MC has also been the principal investigator in RCTs of GSK, Sanofi Pasteur, and Novavax with honoraria paid to his institution. PD is investigator of vaccine trials for a large number of vaccine manufacturers and institutions for which the university of Antwerp obtains grants. UH is a member of the Global Pertussis Initiative and the Collaboration of European Experts on Pertussis Awareness Generation, CEEPAG . The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
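The timing logic described in the consensus record above (a cord titer set by transplacental transfer efficiency, followed by waning of maternally derived antibody in the infant) can be illustrated with a minimal first-order decay model. The transfer ratio, half-life, and protective threshold used here are assumed illustrative values, not figures from the paper.

```python
import math

def infant_titer(maternal_titer, transfer_ratio, half_life_days, age_days):
    """Cord titer = maternal titer x transfer ratio, then exponential decay."""
    cord = maternal_titer * transfer_ratio
    return cord * 0.5 ** (age_days / half_life_days)

def days_protected(maternal_titer, transfer_ratio, half_life_days, threshold):
    """Infant age (days) at which the titer falls to the protective threshold."""
    cord = maternal_titer * transfer_ratio
    if cord <= threshold:
        return 0.0
    return half_life_days * math.log2(cord / threshold)

# Illustrative values only: transfer ratio 1.2 (cord exceeding maternal levels
# by ~20% at term, as in the text), an assumed ~30-day IgG half-life, and an
# arbitrary-unit threshold.
print(infant_titer(100, 1.2, 30, 60))    # titer at 2 months (two half-lives)
print(days_protected(100, 1.2, 30, 15))  # days until titer reaches threshold
```

Under this kind of model, the window of infant protection grows with the cord-to-maternal transfer ratio and shrinks with earlier seasonal exposure, which is the trade-off the text describes for choosing gestational timing of vaccination.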
+{"text": "Despite significant progress in reaching some milestones of the United Nations Sustainable Development Goals, neonatal and early infant morbidity and mortality remain high, and maternal health remains suboptimal in many countries. Novel and improved preventative strategies with the potential to benefit pregnant women and their infants are needed, with maternal and neonatal immunization representing effective approaches. Experts in immunology, vaccinology, infectious diseases, public health, and vaccine-related social sciences, together with clinicians and industry representatives, convened at the 5th International Neonatal and Maternal Immunization Symposium (INMIS) in Vancouver, Canada, from 15 to 17 September 2019. We critically evaluated the lessons learned from recent clinical studies, presented cutting-edge scientific progress in maternal and neonatal immunology and vaccine development, and discussed maternal and neonatal immunization in the broader context of infectious disease epidemiology and public health. Focusing on practical aspects of research and implementation, we also discussed the safety, awareness, and perception of maternal immunization as an existing strategy to address the need to improve maternal and neonatal health worldwide. The symposium provided a comprehensive scientific and practical primer as well as an update for all those with an interest in maternal and neonatal infection, immunity, and vaccination. The summary presented here provides an update of the current status of progress in maternal and neonatal immunization. 
Maternal and pediatric morbidity and mortality remain at the forefront of the international public health agenda. Vaccines are safe and highly effective at reducing death and disability in young children, but most vaccines are given weeks to months after birth, while the highest pediatric mortality occurs around the time of birth and, specifically, within the neonatal period (the first 28\u2009days). The 5th International Neonatal and Maternal Immunization Symposium (INMIS) convened experts in immunology, infectious diseases, vaccinology, and public health, together with clinicians, industry representatives, and social scientists, in Vancouver, Canada, from 15 to 17 September 2019 to review the most relevant advances in maternal and neonatal immunization. The overarching focus of the conference was to review how best to secure protection for the next generation against potentially preventable infectious diseases via maternal and early-life immunization strategies. Over 250 participants attended the 2.5-day meeting, which included 11 invited expert presentations, 28 oral presentations from submitted abstracts, 101 poster presentations, and 3 expert panel discussions. The meeting opened with an overview keynote address. 
On the first day, there were sessions on the themes \u201cProtecting newborns and infants through maternal immunization\u201d and a \u201cMulti-disciplinary approach to improve maternal vaccine uptake,\u201d including a panel discussion, \u201cOvercoming hurdles to increase maternal vaccination uptake.\u201d The second day was dedicated to the themes \u201cThe mechanistic underpinnings of maternal and neonatal immunization\u201d and \u201cPromoting healthy infant life through optimizing neonatal immunization,\u201d with the former including a panel and audience discussion, \u201cHow does the maternal-newborn immune dyad communicate?\u201d Finally, the third day focused on the theme \u201cThe next generation of neonatal and maternal immunization research\u201d with an audience discussion of \u201cIs the field on the right path? What are we missing?\u201d The closing keynote speech addressed the controversial issue of the ethics of maternal immunization research and implementation. A brief summary of the keynote presentations follows, with subsequent sections each dedicated to one of the themes of the symposium. In light of the COVID-19 pandemic, the current state of knowledge on the impact of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) on the maternal-infant dyad is briefly discussed, to place the output of the meeting in the perspective of the ongoing pandemic and its implications for SARS-CoV-2 vaccine development and the inclusion of special populations, such as pregnant women. Shabir A. Madhi highlighted that despite global progress in reducing under-5-year-old childhood deaths as part of the Millennium Development Goals agenda (1990 to 2015), reductions in mortality rates during the neonatal period lagged behind those for children 1 to 59\u2009months of age. 
Furthermore, there has been limited focus on the prevention of stillbirths, despite the stillbirth rate exceeding the neonatal mortality rate in many low- and middle-income countries (LMICs). United Nations Sustainable Development Goal (SDG) 3.2 aspires to prevent all preventable deaths of newborns and children by 2030, as well as to reduce neonatal mortality to 12 per 1,000 live births. In addition, the Every Newborn Action Plan aims to reduce stillbirth rates from 20 to 10 per 1,000 births by 2035. Dr. Madhi emphasized that quantifying the potential of maternal vaccines, or other prophylactic interventions, in reducing maternal morbidity, stillbirth rates, and early-infancy deaths requires in-depth and objective assessment of the causes of newborn and infant deaths. Recent surveillance using minimally invasive tissue sampling has unmasked the hitherto largely neglected contribution of infection as the dominant immediate cause of stillbirth and neonatal deaths, including deaths usually attributed to premature birth. This approach also has the potential to identify the contribution of pathogens for which there are no current vaccines, for example, group B Streptococcus (GBS) and Gram-negative bacteria. Such data are essential to inform the prioritization of research and interventions aimed at reducing maternal and childhood deaths. The potential of maternal immunization (coupled with other possible changes) in reducing neonatal mortality is manifest in the near elimination of neonatal tetanus globally following the recommendation of routine tetanus vaccine immunization of pregnant women. In recent years, outbreaks of the H1N1 pandemic influenza, Zika, and Ebola viruses have severely and uniquely affected pregnant women and their offspring. 
Ruth Karron underlined that pregnant women must be proactively considered in research agendas and in efforts to deploy vaccines against emerging infectious diseases; these vaccines have rarely been designed or developed with pregnant women in mind. For this and other reasons, pregnant women have in some cases been denied vaccines that would have protected them and their offspring from severe epidemic threats. This has become particularly relevant now amid the COVID-19 pandemic. Recommendations to guide the development and deployment of appropriate vaccines in these situations did not exist prior to the recent Ebola outbreak. To address this need, the Pregnancy Research Ethics for Vaccines, Epidemics, and New Technologies (PREVENT) Working Group was formed. This is a multidisciplinary, international team of 17 experts specializing in bioethics, maternal immunization, maternal-fetal medicine, obstetrics, pediatrics, philosophy, public health, and vaccine research and policy. After consultation with >100 additional experts in ethics, public health, vaccine science, maternal and child health, and regulatory affairs, the group developed a guidance document that put forth 22 recommendations across the domains of epidemic preparedness, vaccine research and development, and vaccine deployment. A key recommendation was the presumption of inclusion, suggesting a change to the traditionally default position, so that pregnant women would be included in vaccine development and deployment unless their exclusion can be justified from a scientific and ethical standpoint. The presumption of inclusion reframes decisions about investments in vaccine research, development, and delivery in ways that are profoundly important for public health and equity, and the principles are similarly applicable to neonates. 
The PREVENT Working Group recommendations and framework will be useful when weighing the potential risks and benefits for pregnant women and their newborns in the development and deployment of vaccines against current and emerging epidemic threats, such as COVID-19. Many of the PREVENT recommendations may also be relevant as the inclusion of pregnant women is more broadly considered in the context of biomedical research. Data were presented on the various challenges and opportunities related to implementation of maternal immunization programs in different settings, including LMICs and high-income countries (HICs). Philipp Lambach highlighted the efforts by the WHO to support maternal immunization platforms in LMICs, using tetanus vaccination in pregnancy as a proof of concept of this approach. The Maternal Immunization and Antenatal Care Situation Analysis (MIACSA) project, a collaborative effort between the WHO Departments of Immunization, Vaccines, and Biologicals (IVB) and Maternal, Newborn, Child, and Adolescent Health (MCA), was undertaken in 2016 to 2019. Deshayne Fell described how electronic health care data, such as insurance claims, health administrative databases, registries, and electronic health records, are being increasingly used to address important research on vaccine coverage and safety and the effectiveness of maternal immunization in HICs. In the United States, studies using the Vaccine Safety Datalink (VSD) have shown the safety of influenza immunization during pregnancy, with no increases in risks of proximal adverse events in pregnant individuals or specific adverse obstetric events, such as hyperemesis, gestational hypertension, gestational diabetes, preeclampsia, or chorioamnionitis. 
The Maternal and Neonatal Immunization Field Guide for Latin America and the Caribbean, published in 2017 by the Pan American Health Organization (PAHO) and WHO, illustrated how researchers and governments need to collaborate closely in the field of maternal and neonatal immunization to ensure that research findings are translated into policy. Alba Vilajeliu presented data from a study of the current state of maternal and neonatal immunization policies, strategies, and practices in Latin America and of the knowledge and perceptions of pregnant women and health workers in this regard, including the importance of integration between immunization and antenatal care services, similar to the goals of MIACSA. The study suggested an important role of health care provider (HCP) recommendations. Penda Johm reported that high levels of acceptance of maternal immunization in The Gambia are based on previous vaccination experiences, sensitization messages from trusted HCPs, and monetary incentives. Further data reinforcing the importance of an HCP recommendation were provided by Neisha Sundaram; recommendations from doctors were highly valued by pregnant women in Bengaluru, India. However, awareness of and access to vaccines other than that for tetanus were limited, highlighting the need for better information about maternal vaccines. Karina Top presented data from an HIC setting, further examining the information provided about vaccines given to pregnant women in product monographs and demonstrating that clear, evidence-based product monographs could support increased vaccine uptake in pregnancy. 
A session on the mechanistic underpinnings of maternal and neonatal immunization was organized to (i) illustrate the potential of systems immunology for the understanding of the immunobiology of the mother-infant dyad, (ii) provide an update on the immunobiology of pregnancy and its relevance for vaccine responses, and (iii) communicate new findings on the rules and mechanisms underlying transplacental transfer of maternal antibodies. A successful pregnancy requires dynamic changes in the maternal and fetal immune systems. After birth, the immune systems of the newborn and young infant develop to meet the challenges of tolerance to commensals and immunity to infectious pathogens. John Tsang discussed the potential of systems biology to discover new components regulating immune responses in pregnancy and infancy and to develop quantitative models of how these components interact at the level of the mother-infant dyad. Systems biology approaches have been successfully applied to the analysis of immune responses to influenza immunization in healthy adults; baseline (preimmunization) and vaccine-induced parameters, including blood cell populations, genes, and serum proteins, are predictors of influenza vaccine responses. Transfer of maternal antibodies across the placenta provides rapid protection to the infant against pathogens to which the mother is immune. Maternal antibodies of different specificities are not transferred equally, but the rules underlying variability in the transfer process remain poorly understood. One presentation reported the results of a randomized trial evaluating the functional properties of antibodies induced by acellular pertussis (aP) versus whole-cell pertussis (wP) vaccines in infants born to mothers who received the Tdap vaccine during pregnancy; wP vaccination induced higher serum bactericidal activity than aP vaccination in vitro. Although maternal immunization was associated with lower titers of infant IgG versus pertussis antigens, its impact on serum bactericidal activity was limited, suggesting that high-quality infant antibodies may be produced even under the cover of maternal antibodies. Studies also suggested that maternal antibodies have a limited impact on infant T cell responses to vaccines. A publication from The Gambia highlighted the global variation in the etiology of community-acquired neonatal sepsis. BCG remains of significant interest because of the potential for nonspecific (heterologous) immunomodulatory effects resulting in protection against a range of infections, as postulated for COVID-19; however, it is not currently known which BCG substrain/formulation offers the best protection. Paul Heath highlighted that preterm infants have lower-than-normal concentrations of maternal IgG, resulting in increased susceptibility to infection. Rebecca Ford explored approaches to protect infants in Papua New Guinea against pneumococcal disease through different vaccination schedules, including neonatal immunization with a pneumococcal conjugate vaccine (PCV), and suggested that there may be a benefit to a pneumococcal polysaccharide vaccine booster dose in such a strategy. Results of a randomized controlled trial in The Netherlands on modulation of infant immune responses after maternal Tdap vaccination were complemented with the opsonizing capacities of the elicited antibodies in infants. The opsonizing capacity of antibodies may help us understand whether this modulation is clinically relevant. The opsonizing capacity was higher in the offspring of women receiving the Tdap vaccine before primary infant vaccination and lower after priming, with similar findings before and after the second-year-of-life booster. 
Differences resolved at 2 years of age. Bahaa Abu Raya reported a meta-analysis of most of the globally available data on infant immune responses to pertussis, tetanus, diphtheria, and pneumococcal vaccines after maternal Tdap vaccination. Although schedules, epidemiologies, and vaccines used differed among countries and trials, the modulatory effect of the Tdap vaccine in pregnancy was observed in most studies for all vaccine antigens. Lieke van den Elsen presented data on the detection of malaria antigens in breastmilk and suggested that such antigens may promote immune defenses against Plasmodium falciparum, leading to direct infant immunization and reduced malaria risk in breastfed infants. Studies conducted in Southeast Asia and sub-Saharan Africa (SSA) have highlighted different contributions of various pathogens to neonatal sepsis, necessitating the development of new vaccines for maternal and neonatal immunization. Kimberly Center presented results of a first-in-human study of a hexavalent GBS conjugate vaccine (GBS6) (NCT03170609). Healthy adults were enrolled into a randomized, placebo-controlled trial of a single dose of hexavalent GBS vaccine containing 5, 10, or 20\u2009\u03bcg capsular polysaccharide of serotypes Ia, Ib, II, III, IV, and V, with or without aluminum phosphate. Mild-to-moderate local and systemic reactions were common, but none led to withdrawal from the trial. The vaccine yielded a robust antibody response, supporting progression to clinical trials in pregnant women. Results of a maternal immunization trial with a respiratory syncytial virus F protein vaccine were also shared. Shigella is an important cause of diarrhea in children; however, there is no licensed vaccine available yet. 
Esther Ndungo presented the results of a 2-year longitudinal study of 100 mothers and infants conducted in Malawi to determine the repertoire, functional capacity, and maternal-infant transfer efficiency of antibodies against Shigella, identifying differences between levels of transfer of all Shigella-specific IgGs and functional antibodies; further understanding of this would be critical for any Shigella vaccine. During the current COVID-19 pandemic, concerns have been raised about the possibility of vertical or perinatal transmission of SARS-CoV-2 and the effect of the infection on the pregnant woman, the fetus, or the infant. Disease severity and complications of COVID-19 appear to be relatively low during pregnancy, although multiple international studies are ongoing. The most common clinical manifestations of COVID-19 in pregnancy have been reported as fever (40%) and cough (39%). Pregnant women are less likely to report fever and myalgia than nonpregnant women of reproductive age. Increased maternal age, high body mass index, chronic hypertension, and prior diabetes have been linked with severe COVID-19 in pregnancy. Regarding obstetric complications, several publications suggest the possibility of adverse obstetric outcomes in women with COVID-19, including Caesarean section, premature birth, low birth weight, and adverse pregnancy events. Vertical transmission of SARS-CoV-2 to the infant is a potential concern, for which initial case series did not show substantial evidence. Maternal and neonatal immunization is a key and effective strategy for reducing death and significant morbidity from infectious diseases globally. While significant progress has been made in the implementation of maternal immunization programs in various regions and settings worldwide, much remains to be done. 
The integration of maternal immunization and antenatal care programs is critical, and local solutions which can adapt to different specific needs and a variety of settings are required. Our increased understanding of the mechanistic underpinnings of maternal and neonatal immunization will enable further vaccine development to be based on a bedrock of scientific evidence. Ongoing surveillance of known and emerging infectious diseases affecting women during pregnancy and infants in early life is required to ensure that the development and implementation of safe new vaccines remain relevant to the prevention of severe disease in these potentially highly susceptible populations. At a time of global challenges to health care systems worldwide due to the SARS-CoV-2 pandemic, ensuring the continuation of vaccination programs for pregnant women and newborns must remain an international priority."}
+{"text": "The coronavirus disease 2019 (COVID-19) pandemic has caused recurring and major outbreaks in multiple human populations around the world. The many clinical presentations of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection have been described extensively; among them, olfactory dysfunction (OD) has been established as an important and common extrapulmonary manifestation of COVID-19. The aim of this protocol is to conduct a systematic review and meta-analysis of peer-reviewed articles which describe clinical data on OD in COVID-19 patients. This research protocol has been prospectively registered with the Prospective Register of Systematic Reviews (PROSPERO). CINAHL, ClinicalTrials.gov, Cochrane Central, EMBASE, MEDLINE and PubMed, as well as the Chinese medical databases China National Knowledge Infrastructure (CNKI), VIP and WANFANG, will be searched using keywords including \u2018COVID-19\u2019, \u2018coronavirus disease\u2019, \u20182019-nCoV\u2019, \u2018SARS-CoV-2\u2019, \u2018novel coronavirus\u2019, \u2018anosmia\u2019, \u2018hyposmia\u2019, \u2018loss of smell\u2019, and \u2018olfactory dysfunction\u2019. The systematic review and meta-analysis will be conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and the Meta-analyses Of Observational Studies in Epidemiology (MOOSE) guidelines. Articles will be screened according to pre-specified inclusion and exclusion criteria to extract studies that include new clinical data investigating the effect of COVID-19 on olfactory dysfunction. Included articles will be reviewed in full; data including patient demographics, clinical characteristics of COVID-19-related OD, methods of olfactory assessment and relevant clinical outcomes will be extracted. 
Statistical analyses will be performed using Comprehensive Meta-Analysis version 3. This systematic review and meta-analysis protocol will aim to collate and synthesise all available clinical evidence regarding COVID-19-related OD as an important neurosensory dysfunction of COVID-19 infection. A comprehensive search strategy and screening process will be conducted to incorporate broad clinical data for robust statistical analyses and representation. The outcome of the systematic review and meta-analysis will aim to improve our understanding of the symptomatology and clinical characteristics of COVID-19-related OD and identify knowledge gaps in its disease process, which will guide future research in this specific neurosensory defect. PROSPERO registration number: CRD42020196202. The online version contains supplementary material available at 10.1186/s13643-021-01624-6. The novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the aetiological agent of the coronavirus disease 2019 (COVID-19) global pandemic, has infected over 102 million people worldwide, accounting for over 2,200,000 deaths as of 27 September 2020. Recent systematic reviews and meta-analyses regarding COVID-19-related OD have found significant discordance between subjective reporting of smell changes and objective quantitation of olfaction. In this systematic review and meta-analysis protocol, we aim to investigate the demographic characteristics of COVID-19 patients presenting with OD, and to ascertain whether there is any age, sex, or ethnic predisposition to COVID-19-related OD. In addition, we will investigate the potential associations between olfactory neurosensory impairments and other otolaryngologic or neurologic disorders in COVID-19 infection. 
Finally, we aim to determine the prevalence of COVID-19-related OD as an isolated symptom, including its onset and duration, and whether OD may be a prognostic indicator for COVID-19 disease severity. This systematic review will include peer-reviewed articles which described clinical data on OD in patients of all ages who were confirmed with SARS-CoV-2 infection by reverse transcription polymerase chain reaction (RT-PCR) tests. The systematic review protocol has been registered on the Prospective Register of Systematic Reviews (PROSPERO). The research progress will be periodically updated on PROSPERO. The systematic review and meta-analysis will be carried out according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and the Meta-analyses Of Observational Studies in Epidemiology (MOOSE) guidelines. For the systematic review, the research group will search CINAHL, ClinicalTrials.gov, Cochrane Central, EMBASE, MEDLINE and PubMed for articles published from 1st January 2020 to the date of completion of data extraction. Search keywords include \u2018COVID-19\u2019, \u2018coronavirus disease\u2019, \u20182019-nCoV\u2019, \u2018SARS-CoV-2\u2019, \u2018novel coronavirus\u2019, \u2018anosmia\u2019, \u2018hyposmia\u2019, \u2018loss of smell\u2019, and \u2018olfactory dysfunction\u2019. Additionally, articles published within this time period will be searched from the following Chinese medical databases: China National Knowledge Infrastructure (CNKI), VIP and WANFANG, to ensure greater scope of representation from different geographical and ethnic populations. The detailed search strings for each database can be found in Supplementary Table S1. Subsequently, the search results will be combined and duplicates will be removed using Excel. Eligible articles will be screened by four authors by the article titles and abstracts, followed by full text examination. 
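The combine-and-deduplicate step is done in Excel in the protocol itself; purely as an illustration, the same logic can be sketched programmatically. The record layout, database exports, and helper name below are hypothetical, not part of the protocol:

```python
# Minimal sketch of merging database exports and removing duplicates,
# assuming each export is a list of (title, doi) tuples (illustrative layout).
def combine_and_deduplicate(*exports):
    """Merge search exports, preferring DOI matches for duplicate detection
    and falling back to a whitespace-normalised title comparison."""
    seen_dois, seen_titles, merged = set(), set(), []
    for export in exports:
        for title, doi in export:
            norm_title = " ".join(title.lower().split())
            if doi and doi.lower() in seen_dois:
                continue  # same DOI already kept
            if norm_title in seen_titles:
                continue  # same title (ignoring case/spacing) already kept
            if doi:
                seen_dois.add(doi.lower())
            seen_titles.add(norm_title)
            merged.append((title, doi))
    return merged

# Hypothetical exports from two databases with one overlapping record
pubmed = [("Olfactory dysfunction in COVID-19", "10.1000/x1"),
          ("Anosmia and SARS-CoV-2", "10.1000/x2")]
embase = [("Olfactory  dysfunction in COVID-19", "10.1000/x1"),
          ("Smell loss in coronavirus disease", None)]
records = combine_and_deduplicate(pubmed, embase)
print(len(records))  # prints 3
```

In practice a reference manager or Excel formula does the same comparison; the DOI-first rule simply avoids false positives from near-identical titles.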
Disagreements will be resolved by another author (T.W.H.C.). Potentially eligible articles will be categorised using Microsoft Excel into three groups according to the article titles and abstracts: (A) articles containing clinical data on COVID-19; (B) epidemiological-modelling studies, animal models and experiments, and laboratory investigations which did not contain sufficient clinical data; and (C) guidelines, editorials, commentaries and review articles that did not contain new clinical data. After initial categorisation, the full text of the articles containing clinical data [under category (A)] will be examined for their eligibility for inclusion. The design of the study selection strategy is summarised in Fig. The inclusion criteria for the systematic review are (1) COVID-19 diagnosis confirmed by SARS-CoV-2 RT-PCR tests; (2) studies which reported clinical data on olfactory disturbances, either qualitatively or quantitatively; and (3) written in English or Chinese. The exclusion criteria are (1) articles which did not report individual clinical data on olfactory disturbances; and (2) articles that did not contain new clinical data. Case reports and case series of insufficient sample size (i.e. <\u200910 patients) will be included in the systematic review, but not the meta-analysis. The methodological quality of studies will be determined using the Newcastle\u2013Ottawa Scale (NOS), with a maximum of nine points (stars) for observational studies. Data will be extracted independently by four authors. Disagreements will be resolved by mutual consensus. 
For included articles, the following data will be extracted: (1) basic information of the articles; (2) patient demographics; (3) disease characteristics; (4) relevant investigation outcomes; (5) the method(s) used to assess olfaction; (6) relevant imaging and endoscopic findings; and (7) any treatment provided. A p value less than 0.05 will be deemed statistically significant. All analyses will be performed using Comprehensive Meta-Analysis version 3. Descriptive statistics will be used for outcomes which are not suitable for meta-analyses. The prevalence of OD in COVID-19 patients will be computed for each of the studies. A pooled estimate of the prevalence of COVID-19-related OD will be calculated using random effects meta-analysis, as the included studies involved different centres, different populations and different tools for olfactory assessment. Analysis of heterogeneity will be performed using the I2 statistic, and publication bias will be assessed using Egger's test. This systematic review and meta-analysis will be the most up-to-date and comprehensive study that evaluates COVID-19-related OD. A meticulous search strategy will be applied to identify all relevant peer-reviewed articles from multiple medical databases, thereby increasing the sensitivity and specificity of the search strategy. One potential limitation of this meta-analysis will be the heavy reliance on observational studies, which may be prone to biases and confounding factors. However, the quality assessment procedures as mentioned will help in the selection of articles. Strict adherence to the PRISMA and MOOSE guidelines will help to improve the reporting quality of the study. Additionally, this research protocol has been prospectively registered on PROSPERO, which aims to maintain transparency throughout the study process. Any amendments made in the process of the systematic review or meta-analysis will be clearly indicated on PROSPERO. 
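The random-effects pooling and heterogeneity analysis described above are run in Comprehensive Meta-Analysis v3 in the protocol; as a sketch of what that computation involves, a minimal DerSimonian-Laird pooling of study prevalences on the logit scale, with Cochran's Q and I2, could look like this. The function and the event/total counts are illustrative, not data from the review:

```python
import math

def pooled_prevalence_dl(events, totals):
    """Pool per-study prevalences with a DerSimonian-Laird random-effects
    model on the logit scale; returns (pooled prevalence, 95% CI, I^2 %)."""
    y, v = [], []
    for e, n in zip(events, totals):
        # 0.5 continuity correction guards against 0% / 100% studies
        if e in (0, n):
            e, n = e + 0.5, n + 1.0
        p = e / n
        y.append(math.log(p / (1 - p)))       # logit prevalence
        v.append(1.0 / (n * p * (1 - p)))     # within-study variance
    w = [1.0 / vi for vi in v]
    k = len(y)
    # Fixed-effect estimate and Cochran's Q
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))
    # DerSimonian-Laird between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights, pooled logit, and standard error
    w_re = [1.0 / (vi + tau2) for vi in v]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    inv = lambda x: 1.0 / (1.0 + math.exp(-x))  # back-transform logit
    return inv(y_re), (inv(y_re - 1.96 * se), inv(y_re + 1.96 * se)), i2

# Hypothetical OD counts from three studies: (patients with OD, total patients)
prev, ci, i2 = pooled_prevalence_dl([34, 59, 12], [86, 114, 60])
print(f"pooled prevalence={prev:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f}), I2={i2:.0f}%")
```

The random-effects weights shrink toward equality as tau^2 grows, which is why this model is preferred when studies use different centres, populations, and olfactory assessment tools.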
The outcome of this systematic review and meta-analysis will be crucial in quantifying the global prevalence and disease burden of COVID-19-related OD and will serve to identify knowledge gaps in understanding its disease course. This article will be instrumental for future research regarding this important neurosensory defect in the COVID-19 pandemic. Additional file 1: Table S1. Search strings according to medical database platforms. PRISMA-P checklist. MOOSE checklist. PROSPERO registration."}
+{"text": "Heterologous pathways are linked series of biochemical reactions occurring in a host organism after the introduction of foreign genes. Incorporation of metabolic pathways into host organisms is a major strategy used to increase the production of valuable secondary metabolites. Unfortunately, simple introduction of the pathway genes into the heterologous host in most cases does not result in successful heterologous expression. Extensive modification of heterologous genes and the corresponding enzymes on many different levels is required to achieve high target metabolite production rates. This review summarizes the essential techniques used to create heterologous biochemical pathways, with a focus on the key challenges arising in the process and the major strategies for overcoming them. Today, incorporation of metabolic pathways into host organisms is a major strategy for increasing the production of valuable secondary metabolites. Heterologous expression began as the introduction of a single foreign gene into the cells of host organisms, termed expression systems, most of which at the time were bacteria. Over the past 40 years, the methodology of heterologous gene expression has significantly evolved, making it possible to introduce both individual genes and entire gene clusters into the genomes of various host organisms. This paper reviews the essential techniques for creating heterologous biochemical pathways in various host organisms, outlines some key challenges arising in the process, and suggests some strategies for overcoming them. Although modern metabolic engineering techniques have permitted us to acquire multiple biologically derived chemicals, there is no single approach yet that would result in successful heterologous expression. The following key steps must be taken to efficiently insert an exogenous metabolic pathway into a heterologous host: 1. Isolation of the necessary metabolic pathway genes for the biosynthesis of the target compound; 2. 
Incorporation of the biosynthetic pathway genes into a suitable stablevector(s);3. Selection of an appropriate host organism; andFig. 1).4. Selection of methods for the maintenance and optimization of the givenmetabolic pathway in the heterologous host[Even if all these conditions are met, it is almost impossible to predict inadvance whether functional heterologous expression of a gene cluster will beachieved. In some cases, the heterologous metabolic pathway works withvirtually no additional modifications, while a lengthy and extensiveoptimization is required for other pathways and organisms-8.in silico models are highly predictivewhen applied to well-investigated metabolic pathways and well-known hostorganisms. Computational models allow researchers to alter gene expression andenzyme production levels in silico and directly observe theireffect on the pathway flux. These models, however, are difficult, if notimpossible, to apply to experimental systems for which many crucial parametersare unknown [Alongside the experimental approaches, computational and modeling methods forthe elucidation of metabolic pathways and their manipulation in host cells havebeen developed. The unknown . A broad unknown ,11.In a2 [In order to incorporate an exogenous metabolic gene cassette into a hostorganism, one must also take into account the complexity of metabolic networksand the necessity to maintain the metabolic balances in the host; i.e., tomonitor the production and consumption of essential metabolites, such as NADH,ATP, and O2 . Various2 , 14, 15.2 ,17.This2 .Table 1.Choosing a suitable expression system for a metabolic pathway is one of themost critical steps in the development of a high-expression process . The mosP. pastoris and S. 
cerevisiae have also been given the status of \u201cgenerally recognized as safe\u201d (GRAS) organisms, as they do not produce any known oncogenic or toxic products. Single-cell eukaryotic microorganisms, yeasts, are widely used as hosts for heterologous expression. In particular, Saccharomyces cerevisiae is a convenient heterologous host, since an extensive methodology has been developed for controlling the expression of heterologous biosynthetic pathways in this organism. To become familiar with the general methods of heterologous expression of metabolic pathways in yeast, as well as successful examples of heterologous biosynthesis of secondary metabolites in S. cerevisiae, see the corresponding review. Pichia pastoris is another widely used yeast host. A vast library of constitutive and inducible promoters with varied expression strengths has been described for this organism, including a methanol-inducible promoter that is activated by the addition of methanol and inactivated by the addition of glucose, glycerol or ethanol. P. pastoris is also amenable to in vivo recombination, and the existence of sequenced and annotated genomes of several P. pastoris strains is beneficial to metabolic engineering. Other types of yeast can also be used as heterologous hosts for metabolic pathways, such as the methylotrophic yeasts Candida boidinii, Hansenula polymorpha, and Pichia methanolica, and the oleaginous yeast Yarrowia lipolytica, which is able to metabolize crude oil. Among various filamentous fungi, Aspergilli are the most commonly used heterologous hosts. Aspergillus species can be extremely convenient hosts for the heterologous expression of fungal gene clusters, since source promoters and terminators can be exploited. For example, a cluster of penicillin biosynthesis genes was successfully transferred to Neurospora crassa and Aspergillus niger. Plants are a promising expression system for the heterologous production of plant natural products. 
When working with plants, it is important to understand that their metabolism varies significantly depending on the species, the tissue and the developmental stage; often the same plant changes its metabolic profile almost beyond recognition during flowering. It is noteworthy that plants can be used as an expression system both in the form of a whole organism and as a cell culture, each having its own advantages: the whole organism is self-sufficient and requires minimal maintenance from the researcher, while the cell culture usually yields higher quantities of target metabolites. Chloroplasts, the semiautonomous organelles in plant cells, serve as biosynthetic sites for various metabolites. These organelles have a double membrane and are characterized by a high concentration of ATP and a variety of low-molecular-weight compounds, which makes them another promising bioengineering target. Studies have shown that localization of the heterologous pathway in the chloroplasts typically significantly increases production of the target metabolite. The disadvantages of plants as heterologous hosts include the relatively high cost of engineering, complex transformation protocols, slow growth and reproduction rates, as well as the negative public attitude towards genetically modified plants. The selection of a vector for transferring the metabolic pathway genes is largely determined by the host organism in which heterologous expression is planned (Fig. 2). A vector must be able to efficiently transduce its target cells, as well as stably replicate in the selected host, either by incorporation into the genome or as extragenomic DNA. Extrachromosomal vectors. Extrachromosomal genetic elements known as plasmids were first developed as a vector system for bacteria over 40 years ago. 
The recent development of the Modular Cloning System and the availability of commercial standard parts have significantly streamlined the engineering of extrachromosomal plasmids for yeast, thus permitting the assembly of both low- and high-copy plasmids with either single or several coding sequences. Integrative vectors. Direct incorporation of biosynthetic gene cassettes into the host genome is an alternative approach to heterologous gene delivery. The main methods of chromosomal integration are based on recombination, transposition, or viral-mediated integration of exogenous genomes into the host DNA. Vectors containing exogenous target genes flanked with the host recombination sites are used for homologous recombination. The endogenous host recombinases promote the site-specific integration of target genes into the chromosome of the heterologous host. However, the efficiency of homologous recombination is greatly dependent on the size of the gene cassette. Therefore, successful integration and expression of a large metabolic pathway might require several sequential recombination steps. Gene delivery based on transposition recruits the so-called \u201cjumping genes\u201d, transposons, and the transposase enzyme, which recognizes the specific flanking sites of the target gene cassette. Longer gene sequences are transposed less efficiently; however, unlike in the case of homologous recombination, the insertion sites of transposons are random, resulting in varying levels of heterologous expression from clone to clone and allowing one to select the clones with superior target metabolite production rates. The viral-mediated gene delivery system is based predominantly on bacteriophage integrases and the corresponding integration sequences: thus, many methods are based on the \u03c6C31 integrase. Irrespective of the chosen DNA delivery method, attention should be paid to the coding sequences being incorporated. 
They may be either directly obtained from the source organisms or chemically synthesized. The latter option is often preferred, because it also makes it possible to optimize the codon content. Heterologous expression of natural product biosynthetic pathways is a multistage process, each stage of which is fraught with difficulties (Fig. 3). Problem accumulation has a strong impact on heterologous gene expression levels, resulting in low amounts or no production of target metabolites. Identification and elimination of metabolic bottlenecks are crucial for successful expression of the heterologous pathway, significantly improving the operation of the entire pathway. Bottleneck elimination depends on the physiological features of the host organisms, as well as on the properties of the metabolic pathway. Product inhibition and metabolite toxic burden. One of the common problems of heterologous expression is metabolic self-inhibition; i.e., the depression of enzyme activity by its own product. In the case of metabolic pathways, enzyme activity may be depressed at several stages, resulting in a measurable decrease in the biosynthesis rate and in product yield. The general solution to this problem is to substitute the feedback-regulated enzymes with inhibition-resistant allelic or mutant forms. Another metabolite-related problem is the toxicity of the heterologous metabolic pathway products to the host cells. Optimization of regulatory sequences. Insufficient heterologous pathway expression may also be caused by the use of non-optimal regulatory sequences. There are two common approaches to promoter selection: whenever possible, the native promoters of the pathway genes are used, or they are replaced with host-specific regulatory sequences. The first approach is generally used when the host and the heterologous pathway source are phylogenetically close and the pathway is active in the source organism. 
Regulatory sequence fine-tuning might also be helpful in obtaining the optimal ratio of metabolic pathway enzymes. GC content and codon-usage problems. As mentioned above, a certain coding sequence can be obtained either directly from the source genome or synthesized chemically; technologies of precise large-scale DNA synthesis have made the latter increasingly practical. Optimization of the pathway enzyme combination. The efficiency of a heterologous pathway does not depend linearly on the number of gene copies. Initially, biosynthetic pathway metabolite production rises with increasing gene dosage; however, overexpression of heterologous proteins leads to a significant drop in the metabolic pathway output, since intracellular accumulation of metabolites can trigger cellular stress responses, and the metabolic efflux to the heterologous pathway cannot be balanced by the host cells, as has been observed, for example, in Yarrowia lipolytica. The most efficient heterologous pathway may comprise enzymes derived from diverse sources, with genes originating from several metabolic pathways or even different organisms. Spatial proximity of the enzymes\u2019 active sites may increase the total rate of heterologous metabolite conversion and reduce the intermediate efflux, and can be achieved by direct protein fusion or scaffolding. The advantage of scaffolds over direct fusions lies in preserving the enzyme amino acid sequences intact, which is generally better for the function of the protein. The three major scaffold types are the DNA scaffold, which is based on plasmids and allows one to easily change the distance between interacting proteins; the RNA scaffold, whose advantage is its small size; and the protein scaffold, a wide range of which is available. Subcellular compartmentalization of heterologous pathway enzymes imposes spatial restriction on metabolite production. 
Fortunately, this issue can be resolved by co-localizing all the enzymes in the same compartment using well-characterized localization tags for the mitochondria, endoplasmic reticulum, vacuole, nucleus, membrane, and peroxisome. Membrane-associated enzymes impose the most stringent requirements on intracellular localization, thus often necessitating co-anchoring of all other metabolic pathway enzymes in the same membrane. As the ultimate aim of heterologous expression of metabolic pathways is the production of valuable secondary metabolites through a chain of enzymatic reactions, the sizes of individual heterologous proteins are irrelevant to the yield of the target product. The size of the expressed protein is characterized by the length of the coding sequence of the heterologous genes, as well as by the spatial restrictions imposed by cellular and subcellular compartmentalization of the heterologous pathways in the host cells. Thus, maximization of heterologous metabolite production is a multidimensional optimization problem in which the contribution of the pathway proteins\u2019 efficiency prevails over their respective amounts and sizes. Metabolic flux and host pathway adjustment. Substrate accessibility may dramatically influence the activity of the whole pathway. A preliminary metabolic flux analysis (MFA) based on NMR, mass spectroscopy, or other metabolomics approaches can facilitate planning of the heterologous pathway augmentation; such analyses can be performed in situ or off-line, using methods based on sample collection. This strategy is implemented by identifying the branch-point metabolite common to both the host and heterologous pathways, and simultaneously upregulating the target compound pathway and downregulating the rival native enzymes, while maintaining the balance between the two in order to preserve host viability. 
The upregulation usually comprises activating or doubling the corresponding genes. This approach helps one to attain several objectives at once: enhance the final metabolite biosynthesis, increase the desired metabolic flux, and reduce the competing effluxes. For example, this method has yielded manifold improvements in the heterologous biosynthesis of alpha-santalene and n-butanol. Precursor accessibility. Another key requirement for sustainable and effective heterologous pathway expression is precursor availability; a deficit of ATP or CoA derivatives can limit the pathway output. It is important to note that metabolite efflux to heterologous pathways can overlap with and amplify the deleterious influence of heterologous products and inhibit the host primary metabolism. Genome editing for heterologous pathway optimization. Modern state-of-the-art genome editing technologies allow for unprecedented large-scale intervention into the host genome, previously unattainable with other approaches. Heterologous pathway expression is assisted by such genome editing tools as RNAi and zinc-finger nucleases, up to de novo synthesis of host genomes containing nontypical sequences. This field is poorly developed for multicellular hosts, but several attempts to synthesize the yeast genome have been successful. Optimization of the cultivation process. When implementation of biotechnological methods has proved unsuccessful, adjustment of host cultivation protocols may yield the desired functioning of heterologous pathways. Adaptation of cultivation methods is a laborious and time-consuming process but may significantly improve heterologous pathway expression. The problems related to host organism cultivation may also be solved by adjusting the host primary metabolism. 
An inspiring example is the recent creation of a novel strain of P. pastoris utilizing CO2 as a carbon source, which switches a heterotrophic organism to autotrophy. The valuable properties of many natural secondary metabolites, combined with their low levels of production in native organisms, translate into the increasing relevance of heterologous expression techniques. This review has analyzed and summarized the common limiting factors impeding heterologous expression in eukaryotic hosts and suggested several important avenues for improvement, which involve applying the most advanced molecular biology tools to each problem. Since heterologous metabolic pathway expression is not a single method but a plethora of various approaches, no universal advice exists for researchers who are taking their first steps in this area. Nevertheless, the numerous encouraging examples of heterologous pathway expression create a high degree of confidence as to the future of the field. Thus, as demand for the heterologous expression of complex metabolic pathways rises, the principal tools and techniques of metabolic engineering examined here may guide researchers in their quest to create successful and productive heterologous expression systems and advance the application of eukaryotic hosts."}
+{"text": "In this study, efficacy and safety of embolization alone and trans-arterial chemoembolization were compared in 265 patients with intermediate stage hepatocellular carcinoma. Trans-arterial chemoembolization was associated with a significant increase of complete radiological response, but without significant impact on overall response, and survival outcomes after propensity score matching. Both techniques showed similar safety profiles. To this day, embolization alone and trans-arterial chemoembolization are two available options in the treatment of intermediate stage hepatocellular carcinoma.p = 0.3905), progression-free survival (p = 0.4478) and transplantation-free survival (p = 0.9020) was observed between TACE and TAE. TACE was associated with a higher rate of complete radiological response but without any impact on overall radiological response, progression-free survival and overall survival compared to TAE.No definitive conclusion could be reached about the role of chemotherapy in adjunction of embolization in the treatment of hepatocellular carcinoma (HCC). We aim to compare radiological response, toxicity and long-term outcomes of patients with hepatocellular carcinoma (HCC) treated by trans-arterial bland embolization (TAE) versus trans-arterial chemoembolization (TACE). We retrospectively included 265 patients with HCC treated by a first session of TACE or TAE in two centers. Clinical and biological features were recorded before the treatment and radiological response was assessed after the first treatment using modified Response Evaluation Criteria in Solid Tumors (mRECIST) criteria. 
Correlation between the treatment and overall, progression-free and transplantation-free survival was assessed after adjustment using propensity score matching. Eighty-six patients were treated by bland embolization and 179 patients by TACE, including 44 patients with drug-eluting beads and 135 with lipiodol TACE; 89.8% of patients were male, with a median age of 65 years. Cirrhosis was present in 90.9% of patients, with a Child Pugh score of A in 84% of cases. After adjustment, no difference in the rate of AEs, including liver failure, was observed between the two treatments. TACE was associated with a significant increase in complete radiological response (odds ratio (OR) = 8.5 (95% CI: 2.8\u201325.4)) but not in the overall response rate (OR = 2.2 (95% CI = 0.8\u20135.8)). No difference in terms of overall survival was observed. Liver cancer is the second cause of cancer-related deaths worldwide and is mostly represented by hepatocellular carcinoma (HCC). Nevertheless, the proper action of chemotherapy is poorly described and its benefit is still debated, as the ischemic action of embolization seems to constitute the major part of the cytotoxic action. Other RCTs have compared TACE versus TAE, and meta-analyses reported that overall survival and therapeutic response were similar between the two intra-arterial treatments. We aim to compare radiological response, treatment-related toxicity and long-term outcomes in a retrospective bicentric study of patients with hepatocellular carcinoma treated by TAE versus TACE. Inclusion criteria were patients presenting HCC diagnosed at histology or using non-invasive imaging criteria based on European Association for the Study of the Liver (EASL) guidelines, considered not resectable or not amenable to percutaneous ablation by a multidisciplinary tumor board. Patients had a Child Pugh score of A or B, a performance status of 0 or 1, and no prior trans-arterial procedure. 
TACE or TAE as a bridge to transplantation was not an exclusion criterion. Exclusion criteria were a trans-arterial procedure performed for the treatment of acute bleeding of HCC and the absence of available pre- and post-procedure imaging to assess the radiological response. We retrospectively included all patients meeting these criteria in two centers in France: 112 patients treated at Jean Verdier University Hospital (Bondy) from 1 December 2007 until 1 November 2013, and 153 patients from Grenoble-Alpes University Hospital from 1 June 2011 until 1 December 2014, for a total of 265 patients. TACE using either drug-eluting beads, doxorubicin or idarubicin was the standard trans-arterial treatment in the Grenoble center. The only exception to the systematic use of TAE for HCC treatment in Jean Verdier Hospital was a period of several months when all patients (n = 26) received TACE before the center switched to TAE as its only trans-arterial technique. Patients were treated with trans-arterial therapy following the standard local protocol. Each indication of TACE or TAE was validated during a multidisciplinary tumor board including at least a hepatologist, an interventional radiologist and a liver surgeon. First, diagnostic arteriography was performed under local anesthesia through the right femoral artery, followed by trans-arterial therapy, as selective as possible according to tumor localization and number. In case of TACE, 50 mg of injectable lyophilized doxorubicin (Pfizer Pharma, New York, NY, USA) or 10 mg of injectable lyophilized idarubicin was either manually emulsified with 5\u201310 mL of iodized oil or loaded on 100 \u00b5m drug-eluting beads, as previously described. This was followed by embolization with an absorbable gelatin sponge (Curamedical, Assendelft, The Netherlands) to obtain an arterial flow stop lasting 10 min, under fluoroscopic control. In case of TAE, 10\u201315 mL of pure iodized oil, without emulsion with chemotherapy, was injected through the catheter as selectively as possible. 
This \u201clipiodolization\u201d was followed by embolization using an absorbable gelatin sponge until complete stasis of the arterial flow. Medical parameters as well as biological data were extracted from the patients\u2019 electronic medical records and independently reviewed. Radiological reviewing was performed for this study blindly to clinical data by two radiologists (Olivier Sutter and Yann Teyssier). All imaging examinations were archived in a picture archiving and communication system. Several clinical variables were recorded at inclusion: performance status, body mass index (BMI), etiology of chronic liver disease and cirrhosis status. Cirrhosis was diagnosed by biopsy or using non-invasive methods (transient elastography or blood tests). Several biological variables were also recorded at inclusion: albumin, prothrombin time, bilirubin, transaminases, gamma glutamyl transferase, alkaline phosphatase, creatinine, platelets and alpha-fetoprotein (AFP). Every patient had pre-operative imaging and post-operative imaging within 3 months after treatment, as recommended, to assess the radiological response. During the post-embolization period, adverse events (AEs) throughout the two months following the treatment were recorded based on clinical examinations, systematic biological follow-up and imaging follow-up. We graded the AEs from grade 1 to 5 according to the Common Terminology Criteria for Adverse Events (CTCAE) v5.0. After the first treatment, all patients were prospectively followed up until death or the last recorded visit, until 30 June 2018. Categorical variables such as tumor response rate and adverse events were compared using exact Chi-square tests. Survival outcomes such as progression-free survival (PFS), overall survival (OS) and liver transplant-free survival (LTFS) were computed using the Kaplan\u2013Meier method, and log-rank tests were used to compare survival rates. 
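The Kaplan–Meier estimator mentioned above can be sketched in a few lines. This is an illustrative pure-Python version (the study itself used Stata); the observation tuples below are hypothetical, not study data.

```python
# Minimal Kaplan-Meier estimator sketch.
# Each observation is (time, event), with event=1 for death and
# event=0 for censoring (e.g. lost to follow-up or study end).
def kaplan_meier(observations):
    """Return [(event_time, survival_probability)] in time order."""
    event_times = sorted({t for t, e in observations if e == 1})
    surv, curve = 1.0, []
    for t in event_times:
        deaths = sum(1 for ot, e in observations if ot == t and e == 1)
        n_at_risk = sum(1 for ot, _ in observations if ot >= t)
        surv *= 1.0 - deaths / n_at_risk  # multiply conditional survival
        curve.append((t, surv))
    return curve

# Hypothetical follow-up times in months:
curve = kaplan_meier([(5, 1), (8, 0), (12, 1), (20, 1), (30, 0)])
```

Censored observations (event=0) never reduce the survival estimate directly; they only shrink the risk set at later event times, which is the defining feature of the method.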
Overall survival was calculated from the date of first treatment to the date of death or last recorded visit, and data were censored at the date of liver transplantation. Progression-free survival was calculated from the date of first treatment to the date of death, date of radiological progression or last recorded visit. Transplantation-free survival was calculated from the date of first treatment to the date of death, date of liver transplantation or last recorded visit. Logistic regressions were used to compare binary variables. Mixed linear models were used to assess differences in the evolution of the Child Pugh score between TAE and TACE. For each analysis, comparisons were performed with and without weighting by a propensity score (inverse probability of treatment weighting, IPTW) to adjust for confounding factors, including age, Child Pugh score, AFP level, the sum of the two main liver nodules and the number of tumors. Statistical significance is expressed with p-values for univariate analyses and odds ratios (OR) with 95% confidence intervals (95% CI) for propensity-score-weighted results. Analyses were performed using Stata version 16.1. A total of 265 patients were included with a median follow-up of 21.7 months: 86 patients were treated by TAE and 179 patients by TACE, including 44 patients with DC Beads and 135 with lipiodol TACE. Chemotherapy used in TACE was doxorubicin and idarubicin in 110 and 25 patients, respectively. The median number of sessions of TACE and TAE during follow-up was 2 (interquartile range (IQR) = 1\u20132) in each group. All patients treated by TAE were treated in Jean Verdier hospital and 86% of patients treated by TACE were treated in Grenoble Hospital. Patients treated by TAE were older (p = 0.0003) and had a higher tumor burden, with fewer BCLC A HCCs, compared to patients treated by TACE. In contrast, patients treated by TAE had a lower Child Pugh score and a lower AFP level compared to patients treated by TACE (p = 0.001). 
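The IPTW adjustment described above can be sketched as follows. This is an illustrative Python fragment, not the authors' Stata code; the propensity values stand in for a fitted logistic model of treatment assignment, and the toy patient records are hypothetical.

```python
# Inverse probability of treatment weighting (IPTW) sketch.
# treated: 1 = TACE, 0 = TAE; propensity: estimated P(treated | covariates);
# response: binary outcome (e.g. complete radiological response).
patients = [
    {"treated": 1, "propensity": 0.8, "response": 1},  # hypothetical values
    {"treated": 1, "propensity": 0.6, "response": 0},
    {"treated": 0, "propensity": 0.3, "response": 0},
    {"treated": 0, "propensity": 0.5, "response": 1},
]

def iptw_weight(treated, propensity):
    """Weight treated patients by 1/p and controls by 1/(1-p)."""
    return 1.0 / propensity if treated else 1.0 / (1.0 - propensity)

for pt in patients:
    pt["w"] = iptw_weight(pt["treated"], pt["propensity"])

def weighted_rate(group):
    """Weighted outcome rate within a (pseudo-)population."""
    total_w = sum(p["w"] for p in group)
    return sum(p["w"] * p["response"] for p in group) / total_w

tace = [p for p in patients if p["treated"]]
tae = [p for p in patients if not p["treated"]]
effect = weighted_rate(tace) - weighted_rate(tae)
```

The reweighting creates a pseudo-population in which treatment is independent of the measured covariates, so the weighted rate difference approximates the adjusted treatment effect.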
Other treatments following TAE and TACE are detailed in the corresponding table. Two hundred and thirty-eight patients (89.8%) were male, with a median age of 65 years. Cirrhosis was present in 241 patients (90.9%), with Child Pugh A in 83.8% of cases. Etiologies of the underlying chronic liver diseases are also detailed there. Adverse events occurring within 2 months following the trans-arterial treatment were observed in 25.7% of patients, including mainly fatigue, pain, biliary complications and liver failure. Without any adjustment, all-grade AEs were more frequent in patients treated by TACE compared to patients treated by TAE, whereas the incidence of grade 3 and 4 AEs was not significantly different. After weighting by propensity score, TACE was no longer significantly associated with a higher rate of all-grade AEs (OR = 2.3 (95% CI: 0.8\u20136.3)). Child Pugh score variation at the first radiological assessment did not differ significantly between groups, with an increase of +0.6 after TAE and +0.4 after TACE (p = 0.2095). Similar results were obtained after weighting by propensity score, with an increase of the Child Pugh score of +0.8 after TAE and +0.4 after TACE (p = 0.2267). When restricting the analysis to cirrhotic patients, no difference in the increase of the Child Pugh score was observed without (p = 0.3345) and with propensity score weighting (p = 0.5032). At imaging, 21.2% of patients had stable disease and 15.5% progressive disease. In the whole population, a significant increase in OS was observed in patients presenting a tumor response (CR and PR), with a median survival time of 32.1 months versus 20.1 months in non-responders (SD and PD) (p = 0.0049). Also, higher PFS (p = 0.00001) and LTFS (p = 0.0009) were observed in responders (CR and PR) versus non-responders (SD and PD). 
Without any adjustment, TACE was associated with a higher ORR (67.4%) compared to TAE (p = 0.044). After propensity score weighting, TACE was no longer significantly associated with a higher overall response rate (OR = 2.2 (95% CI = 0.8\u20135.8)) but remained significantly associated with a higher rate of complete response (OR = 8.5 (95% CI: 2.8\u201325.4)). The median PFS of the whole cohort was 9.3 months. Without weighting, no significant difference in PFS between TACE and TAE was observed, and after weighting by a propensity score, PFS was still not statistically different between patients treated by TACE and TAE (p = 0.4478). The median overall survival of the whole cohort was 27.7 months, with 76%, 54% and 39% survival at 1, 2 and 3 years. Without weighting, median overall survival was longer in patients treated by TACE (32.7 months) compared to patients treated by TAE (p = 0.0009). In contrast, after weighting by a propensity score, no significant difference in terms of overall survival was identified between TACE and TAE. The median transplantation-free survival of the whole cohort was 19.1 months, with 72%, 37% and 22% survival without transplantation at 1, 2 and 3 years. The median transplantation-free survival was not different in patients treated by TACE (18.7 months) compared to patients treated by TAE (p = 0.1512). After weighting by a propensity score, no significant difference in terms of transplantation-free survival was identified between TACE and TAE (p = 0.6607). This study aimed to compare the efficacy and tolerance of TAE and TACE in patients with unresectable and non-ablatable HCC, based on a retrospective analysis of two tertiary centers in France. 
Overall, there was limited selection bias between TACE and TAE in each center, because TACE was systematically performed in patients from Grenoble and TAE was systematically performed in patients from Jean Verdier Hospital. The only exception was a period of 18 months in Jean Verdier Hospital when all patients received TACE, before the center switched to TAE as its only trans-arterial treatment. In terms of radiological response using mRECIST criteria, TACE was more efficient in achieving a complete radiological response, even after adjustment using a propensity score. These data suggest that the adjunction of chemotherapy to embolotherapy could increase the rate of radiological response. All pre- and post-treatment imaging examinations were reviewed by two independent radiologists in order to better characterize the tumor burden and the objective tumor response. However, one of the limits of our study is the absence of assessment of interobserver agreement on the radiological response by the two independent reviewers. While the assessment of radiological response after trans-arterial treatment can be sensitive to interobserver variation, progressive disease is considered more reproducible. In terms of the impact of radiological response on overall survival, radiological response was associated with longer overall survival. TACE was associated with a higher rate of complete response but without increasing overall survival compared to TAE. This absence of benefit on survival outcomes suggests that complete radiological response is not the only determinant of long-term survival, and other parameters, such as treatment toxicity, the ability to achieve partial radiological response or stable disease, and the ability to repeat trans-arterial treatment or initiate subsequent treatment by systemic therapy, should be taken into account. 
Moreover, the absence of difference in terms of progressive disease between the two treatments may explain the absence of difference in overall survival. Besides, TACE was associated with a higher rate of toxicity compared to TAE, but the rate of grade 3 or 4 AEs was not different. Previous data on toxicity and efficacy have also suggested that TACE is associated with a higher radiological response rate and an increased rate of toxicity. The crude difference observed between the two treatments in terms of raw overall survival may be explained by the higher rate of patients receiving liver transplantation in the TACE group compared to TAE, as transplantation-free survival was not different between the two treatments. It is difficult to differentiate the effect of liver transplantation per se from the prognostic impact of the clinical and tumor features of patients amenable to transplantation, who have a lower tumor burden, a younger age and fewer comorbidities. The absence of difference in terms of overall survival after propensity score adjustment suggests that patients\u2019 features play a key role in the initial difference between TACE and TAE. However, we cannot exclude that the absence of difference in terms of OS and LTFS between the two groups after adjustment is due to the small size of each group. These data are important to consider in the therapeutic algorithm of HCC patients, which needs to include parameters such as liver function, the possibility of liver transplantation and the potential treatments available in second line. Our study showed a higher rate of complete response with TACE, while previously published data suggest a higher rate of adverse events and liver deterioration with this technique. In conclusion, TACE was associated with a higher complete radiological response rate than TAE, but without a significant impact on progression-free and overall survival after adjustment, and with a possible lack of statistical power. 
Future studies are needed to properly evaluate the effects of the adjunction of chemotherapy to trans-arterial embolization on survival in patients with HCC."}
+{"text": "In particular, they enable experiments to be conducted that are not practical or feasible to conduct in real world settings; they can capture heterogeneity in agent circumstances, knowledge, behaviour, and experiences; and they facilitate a multi-scale, causal understanding of system dynamics. However, developing detailed, empirically informed agent-based models is typically a time and resource intensive activity. Here, we describe a detail-rich, ethnographically informed agent-based model of a Nepalese smallholder village that was created for the purpose of studying the impact of multiple stressors on mountain communities. In doing so, we aim to make the model accessible to other researchers interested in simulating such communities and to provide inspiration for other socio-ecological system modellers. Specifications table*Method detailsIn this paper, we describe an empirically informed agent-based model (ABM) of a rural Nepalese village. The original purpose of the model was to simulate the impact of multiple stressors on mountain people and households Here, we describe the ABM using the ODD protocol Section 3), it details the model validation (see Section 4) and sensitivity analysis (see Section 5) that was conducted for the original study, and it provides practical guidance for using the model (see Section 6), adapting the model (see Section 7), and analysing the outputs (see Section 8). Additional discussion of the model can be found in Roxburgh In addition to the ODD protocol, the paper discusses the number of replicates necessary when conducting experiments using the model , a wife of a referent, a parent(-in-law) or grandparent(-in-law) of a referent, a daughter or widowed daughter-in-law of a referent, or a still in education son of a referent, will have the household itself as their finance controller. 
In contrast, sons of referents who have completed their education manage their own income and expenditure, along with that of their wife and their children. This approximates how finances are managed in Namsa. Finally, villagers have variables stating the timing of their marriage and the timing of their death, which are either determined at model initialisation (see I4 and I6) or at birth (see S5), and married women have additional variables stating their desired number of children and the timing of their next child's birth (see S5).\u2022E1.2 Households are characterised by their members, fields, polytunnels, livestock, cash, loans, a crop strategy, a referent individual, and potentially a monthly remittance income (see I11). The crop strategy is a list of the number of fields the household is provisionally allocating to each of the available crop types during the year ahead. The referent individual is the youngest adult (defined as being over 18 years of age) male who does not have siblings within the household. Alternatively, in the absence of an adult male without siblings, the referent is the youngest widowed adult female.\u2022E1.3 Chickens are characterised by the household they belong to, their age, their sex, and, in the case of females, the stage they are at in their egg-laying cycle (see P2).\u2022E1.4 Goats, cattle and buffalo are characterised by the household they belong to, their age and their sex.\u2022E1.5 Fields and polytunnels are characterised by the household they belong to and their current usage. E2. Globals. The model includes a number of global variables which are accessible by all agents and processes in the model. These include past and present crop yields (see I12), forecast crop yields (see S6), and produce prices and expense parameters (see Sc). Details of how the parameters were determined are provided in Roxburgh. E3. Scales.
Each field equates to half a ropani of land (254.35 m2), each polytunnel equates to 68.34 m2, and one time-step corresponds to one day. E4. Visualisation. For visualisation purposes, fields are represented by hexagons, households by house symbols, villagers by arrowheads, animals by circles that are coloured according to species, and polytunnels by grey rectangles. Next, a check is performed to determine whether villagers are due to receive a salary, a wage, or a pension on the current time-step. Villagers with salaried jobs are paid at the end of each month (see E1.1 and S2). On the first day of each month, villagers who are attending school also incur an expense of NPR 400, while those who are attending college incur an expense of NPR 800. Should the day be one of the 23 festival days, festival expenses are also incurred. Each villager then assesses their finances (see S11); as part of this assessment, the villager will determine how many portions of meat he and his dependents can afford to consume each week. After these financial matters have been processed, checks are performed for each villager to see whether they are due to die (see S3), to marry (see S4), or to give birth (see S5) on the current time-step. Next, the personal expenditure of each villager \u2013 which is paid for by their finance controller \u2013 is determined. Each day, villagers who are not abroad will incur food expenses and non-specific other living expenses. P2. Chickens. On each time-step, chickens age by one day and consume 32g of maize, 32g of millet and 10g of wheat, the cost of which is deducted from the cash stock of their household. When a female chicken reaches six months of age, it will begin a cycle of laying an egg a day for thirty days, followed by three months of no eggs. These eggs are each sold for NPR 7 by the chicken's household. This money is added to the household's cash stock. Male chickens will be slaughtered when six months old (183 days), while female chickens will be slaughtered when five years old.
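The egg-laying cycle and feed costs just described for chickens (P2) can be sketched as a simple daily update. This is an illustrative Python sketch, not the model's NetLogo code: the feed prices passed in are hypothetical stand-ins for the model's price parameters, and the cycle bookkeeping is one plausible encoding of the thirty-days-on, roughly-three-months-off pattern.

```python
# Illustrative sketch of the daily chicken process (P2): ageing, feed
# costs, and the egg-laying cycle that begins at six months of age.
# Quantities (32 g maize, 32 g millet, 10 g wheat, NPR 7 per egg) come
# from the model description; the feed prices are hypothetical.

START_LAYING_AGE = 183        # six months, in days
LAY_DAYS, REST_DAYS = 30, 91  # thirty days of laying, then ~three months off
EGG_PRICE = 7                 # NPR per egg

def chicken_day(age_days, sex, cycle_pos, household_cash, feed_price_per_g):
    """Advance one chicken by one time-step and return the updated state.

    feed_price_per_g maps 'maize'/'millet'/'wheat' to NPR per gram.
    """
    age_days += 1
    # Daily feed is paid for out of the household's cash stock.
    household_cash -= (32 * feed_price_per_g["maize"]
                       + 32 * feed_price_per_g["millet"]
                       + 10 * feed_price_per_g["wheat"])
    if sex == "female" and age_days >= START_LAYING_AGE:
        cycle_pos = (cycle_pos + 1) % (LAY_DAYS + REST_DAYS)
        if cycle_pos < LAY_DAYS:
            household_cash += EGG_PRICE  # an egg is laid and sold
    return age_days, cycle_pos, household_cash
```

Under these rules a laying hen earns NPR 210 per 121-day cycle (30 eggs at NPR 7) before feed costs.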
It is assumed that the carcass is sold for NPR 1,500 and a replacement 14-day-old chick is purchased for NPR 400. The net income is added to the household's cash stock. In the real world, there is of course more variability in poultry life courses and economics. The processes and parameters set out here are designed to represent what is typical. This is also true for the livestock. P3. Goats. On each time-step, goats age by one day and consume 25g of maize and 25g of millet, the cost of which is deducted from the cash stock of their household. P4. Female Cattle (Cows). On each time-step, cattle age by one day and consume 240g of maize and 240g of millet, the cost of which is deducted from the cash stock of their household. At 913, 1461, 2009, 2557, 3105, 3653, and 4201 days of age, cows will give birth to one calf. The calves are sold for NPR 3,000. Cows will provide milk each day post-pregnancy up until two months before they next give birth or until they reach 4,688 days of age. It is implicitly assumed that some of this milk is consumed within the household, but households with just one or two members are assumed to have a surplus to sell. One-member households receive NPR 135 per day for selling their surplus milk, while households with two members receive NPR 90. This money is added to the household's cash stock. Cows will die upon reaching 18 years of age and are replaced by a 548-day-old calf at a cost of NPR 3,000. P5. Male Cattle (Oxen). On each time-step, cattle age by one day and consume 240g of maize and 240g of millet, the cost of which is deducted from the cash stock of their household. Oxen will die upon reaching 18 years of age and are replaced by a 548-day-old calf at a cost of NPR 3,000. P6. Buffalo. On each time-step, buffalo age by one day and consume 280g of maize and 280g of millet, the cost of which is deducted from the cash stock of their household. At 1642, 2220, 2798 and 3376 days of age, female buffalo will give birth. The calves will be sold immediately for NPR 10,000.
This money is added to the household's cash stock. Female buffalo will provide milk each day post-pregnancy up until three months before they next give birth or are slaughtered. As with the cows, it is implicitly assumed that some of this milk is consumed within the household, but households with just one or two members are assumed to have a surplus to sell. One-member households receive NPR 225 per day for selling their surplus milk, while households with two members receive NPR 150. This money is added to the household's cash stock. Female buffalo will be slaughtered at 10 years of age, and males will be slaughtered at 14 years of age. Their meat will be sold for NPR 26,500 in the case of males, and NPR 19,875 in the case of females. They are immediately replaced by a 548-day-old calf at the cost of NPR 10,000. The net income is added to the household's cash stock. P7. Households. Households begin each time-step by checking whether the current day is a crop plantation or harvesting day (see S6). If it is a plantation day, the particular fields that are to be allocated to the crop are selected randomly from those that are not currently in use, except in the case of rice cultivation, which is only done in the specialist paddy fields. Plantation then takes place (see S7). If the current day is instead a harvesting day, fields will be harvested, and crops will be sold (see S8). The newly harvested fields will then become available for planting once again. Following this, the households check whether the finance controller of each of their members needs updating. Again, the conditions set out earlier for determining the finance controller of villagers determine whether this is the case (see E1.1). After this, should the time-step correspond to the first day of a month, households that are designated as being in receipt of a remittance (see I11) will receive it. This is worth NPR 10,000 per month.
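The ordering of the household time-step in P7 can be summarised as a Python sketch. All names are illustrative, and the crop, referent, fission, livestock, and finance sub-processes (S6 to S11) are left as commented stubs since they are described elsewhere in the text; only the remittance logic is implemented.

```python
from dataclasses import dataclass

MONTHLY_REMITTANCE = 10_000  # NPR per month for designated households (see I11)

@dataclass
class Household:
    cash: float = 0.0
    receives_remittance: bool = False

def household_step(hh, day_of_month, is_plantation_day=False, is_harvest_day=False):
    """One household time-step, in the order described in P7 (a sketch)."""
    if is_plantation_day:
        pass  # S6/S7: randomly allocate unused fields to the crop and plant it
    if is_harvest_day:
        pass  # S8: harvest the fields, sell the crops, free the fields again
    # E1.2 / E1.1: re-check whether the referent and members' finance
    # controllers need updating (stubbed here).
    if day_of_month == 1 and hh.receives_remittance:
        hh.cash += MONTHLY_REMITTANCE  # remittance arrives on the 1st
    # S9: household fission check; S10: livestock purchases/sales;
    # S11: assess finances (all stubbed here).
    return hh
```

The stubs mark where the model's own routines would run; the point of the sketch is the sequencing, not the sub-process detail.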
Next, households check whether any of their members have met the conditions necessary to trigger household fission and then whether their circumstances have changed such that they need to buy or sell livestock and/or poultry (see S9 and S10). Finally, households assess the state of their finances in light of the income and/or outgoings that have taken place earlier in the time-step (see S11). As part of this last process, the household will determine how many portions of meat those with the household as their finance controller can afford to eat each week. Meat is modelled as it represents the main luxury expenditure outside of festival times. After crops have been dealt with, the next step for households is to check whether their referent needs to be updated. The conditions set out earlier for selecting the referent determine whether this is the case (see E1.2). In addition, finance controllers regularly make twelve-month income and expenditure forecasts in order to assess whether they can afford to make immediate debt repayments or buy meat (see S11). C6. Interaction. Households do not interact with one another, and nor do the animals, fields, or polytunnels. Villagers, however, can indirectly affect one another's circumstances through their influence on their household or finance controller. This influence is primarily, though not exclusively, financial in nature. C7. Collectives. There are two main types of collective in the model. Firstly, there are households. Households consist of a collection of villagers, as well as fields, animals, and other assets, and they are responsible for certain group-level decisions such as determining crop strategies. In terms of their implementation in the model, households are notably depicted not only as collections of villagers and assets, but also as agents in their own right. Secondly, villagers are allocated to finance controllers who manage finances on their behalf.
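The finance-controller allocation rules (E1.1) referred to here can be encoded compactly. The dictionary keys and relation labels below are illustrative assumptions, not the model's actual data structure:

```python
def finance_controller(v):
    """Illustrative encoding of the finance-controller rules (E1.1).

    `v` is a dict with hypothetical keys: 'relation' (the villager's
    relationship to the household referent) and 'in_education' (bool).
    Returns 'household' if the household manages this villager's
    finances, or 'self' if the villager manages their own.
    """
    household_managed = {
        "wife", "parent", "parent-in-law", "grandparent",
        "grandparent-in-law", "daughter", "widowed-daughter-in-law",
    }
    if v["relation"] in household_managed:
        return "household"
    if v["relation"] == "son":
        # Sons still in education are managed by the household; sons who
        # have completed their education control their own finances (and
        # those of their wife and children).
        return "household" if v["in_education"] else "self"
    # Assumption: the referent (and anyone not covered above) manages
    # their own finances.
    return "self"
```

The last branch is an assumption made for the sake of a total function; the text only specifies the listed relations explicitly.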
When multiple villagers are allocated to a given finance controller, they essentially become a financial collective. These financial collectives can be the same as a household or they can be a subset of household members. They are not formal social units in Namsa, but they help approximate how real-world finances are managed. C8. Heterogeneity. There is scope for very substantial agent heterogeneity in the model. Villagers in particular have a wide range of state variables which in combination produce circumstances that can be quite particular to them. Households can also differ substantially as a result of differences in members and in assets. There is somewhat less heterogeneity when it comes to animals, as only age, sex, and ownership can differ. Fields, meanwhile, are only distinguished by their ownership, their usage, and whether they can support rice cultivation. C9. Stochasticity. The model has a large number of stochastic components. These are used in the initialisation stage to generate diversity in the initial simulation conditions and also to \u201creproduce variability in processes for which it is unimportant to model the actual causes of the variability\u201d. Villagers also consume a number of portions of meat each week; the particular number depends on the weekly meat consumption variable of their finance controller. Meat consumption is paid for each Sunday. A notable assumption is that market prices for both meat and other foodstuffs will not fluctuate over time. This simplification was made to avoid overcomplicating the model dynamics \u2013 the desired focus of the model at this stage is on the stressors outlined in Sc, of which food price fluctuation is not one. S3. To die. When a villager is scheduled to die, they ask their partner \u2013 should they have one \u2013 to set their widowed status to true. S4. To marry. In line with local tradition, when a female villager marries, she will leave the village and cease to exist in the model.
In addition to this, her parents will incur an NPR 100,000 expense \u2013 the cost for them of providing a dowry. When a male villager who has permanently migrated marries, his parents will similarly incur an NPR 100,000 expense. His role in proceedings will then also be over. When any other male villager marries, a new female villager is created to be his wife. Her current age, her destined age of death, her educational attainment, and her career will be determined using the same methods as those employed in villager initialisation. Her household will be the same as her husband's household. Her desired number of children will be stochastically determined, with the value being drawn from one of the two fertility scenario distributions (see Sc2). The timing of her first child's birth will similarly be stochastically determined. S5. To give birth. When a villager gives birth, a new villager is created. Its age is set to zero and its education is set to pre-school. Its parents are set to the villager who has given birth and to the partner of that villager. Its sex is randomly determined with an equal chance of it being either male or female; more nuanced genders and sexualities are not modelled as information was not forthcoming during the field survey. Its destined age of marriage is determined by drawing a value at random from the sex-appropriate marriage age probability distributions. S6. To update crop strategy. This is a multi-step process. Firstly, households conservatively update their forecast of future grain needs, which means assessing the expected grain consumption of household members, livestock, and poultry for the year ahead (see P2-6 and S2). Secondly, they forecast per-field yields for the coming year for each crop. This is done by applying exponential smoothing to yield data from the previous ten years. S7. To plant crops.
When plantation occurs, the status of the chosen fields is updated to reflect their new usage. Households then incur a cost equal to the seeds required per half ropani for that crop, multiplied by the current market price of the crop, multiplied by the number of fields to be grown, plus the per half ropani cost of fertiliser, multiplied by the number of fields to be grown. For households without male cattle or buffalo, harvests also result in an oxen hire cost of NPR 425 per field planted with the relevant crop. In the actual village, households will often exchange labour when engaging in agricultural tasks. Sometimes this involves money changing hands, but we have decided not to explicitly simulate the hiring of agricultural labour in this version of the model as it would require a number of additional assumptions to be made. Another notable simplification in this part of the model is the assumption that harvests are always sold in their entirety. In Namsa, only the cash crops are typically sold. However, by selling harvests in the model and then buying produce as required, management of household food stocks is greatly simplified. It is assumed that the grain (i.e. the non-cash crop produce) is both sold and bought at the fixed market price. S9. To perform household fission. Prior to the 2015 Nepal earthquake, when there was more than one son in a household the elder son(s) traditionally claimed a share of their family's land and built their own home around the time that they married and had their first child. However, following the earthquake \u2013 which occurred towards the end of the initial data gathering phase \u2013 the cost of construction jumped significantly, forcing a delay in household fission. The youngest remaining son in a household will ultimately inherit his parents\u2019 home and remaining land unless he chooses to migrate. Consequently, he will not need to engage in household fission himself. S10.
To determine animal ownership. During the fieldwork decision-making focus group, rules were established for the number and type of animals a household should keep given its size and land. When the number of animals a household has is below the number it should have given its size and land, the household will purchase additional animals to correct the disparity. Chickens are bought when 14 days old at a cost of NPR 400. Their sex is randomly assigned. Goats are bought when 112 days old at a cost of NPR 5,000 and also have their sex randomly assigned. If the new animal is to be a bovine, it will be bought when 548 days old at a cost of NPR 3,000 if it is an ox or cow, or NPR 10,000 if it is a buffalo. The breed is stochastically determined, with two fifths of bovines designated buffalo, while the remainder are designated cattle. If the household does not already have buffalo or cattle, the animal will be designated female, otherwise it will be designated male. These rules approximate the livestock sex and breed preferences observed in the Namsa fieldwork survey data. S11. To assess finances. Once all income and outgoings have been determined for the households and villagers, finance controllers take stock of the impact the transactions have had on their cash situation. If the transactions during the current time-step have resulted in a finance controller having a negative cash balance, the controller will typically need to take out a loan to cover the shortfall. For the model to provide useful insights into the future evolution of villages like Namsa, the initial model conditions need to be realistic. One option would have been to directly replicate the fieldwork village in virtual form. However, there are two significant ethical reasons why this would be inappropriate.
Firstly, doing so would pose a privacy threat to the research subjects as, in combination, the individual and household attributes that would be included in the code could potentially lead to their identification by third parties. There are a number of obfuscation techniques available, such as randomisation, data swapping, and data desensitisation. The alternative approach, population synthesis, has seen a flurry of interest in recent years as it represents a key stage in the spatial microsimulation process. I1. Population synthesis. Given the limitations of existing methods of population synthesis, it has been necessary to design a bespoke approach \u2013 an approach that can generate realistic households composed of realistic individuals, and that can approximate the composition of household types seen at the fieldsite using just the data we have available. Rather than generating a population of individuals and then allocating them to households, we determine a set of household types and then generate individual household members iteratively to suit the types of household created. The main steps are as follows: 1. The number of households that should be generated is specified. For the purposes of this study, this will always be 14 \u2013 the number of households that resided in Namsa at the time of the fieldwork. 2. Each of the households to be generated is assigned a household type. 3. The next sequence of steps involves populating each household in turn with members. This process starts by selecting the age of the household's referent individual. 4. The next step is to select the age of the referent's partner. 5. If the household type is not a nuclear family plus daughter-in-law, the age and sex of the referent's current children is decided, and the provisional date of the next birth is selected if the current number of children is not equal to the value of the lifetime number of children that was determined in the previous step.
This process involves first deciding the age of the eldest child probabilistically. 6. The next step is to select the age of the referent's mother. 7. Next, the age of the referent's father is selected. 8. If the household is a nuclear household plus daughter-in-law, the referent will not have children, but he may still be living with siblings. The number of children the referent's parents have is decided probabilistically. 9. At this point, just one household type remains to be completed: the complex household. In this instance, the age of the referent's brother is decided. This basic flow is repeated until all of the households have been populated. This bespoke approach allows creation of virtual villages that should be qualitatively similar to Namsa in terms of population and household structure, while preserving the anonymity of the research subjects. It also means that actual individuals are not simulated, so it addresses the core ethical concerns mentioned earlier. Furthermore, it is well suited to the data available and can create small populations without issue. There are, however, certain limitations to the approach. Firstly, the archetypal households are limited to those observed at the fieldsite in 2015 and, secondly, the reference distributions that inform the probabilistic selections are based on small samples. This means that possible outcomes are always quite strongly tethered to the particularities observed in Namsa. I2. Initialise villager educational attainment. Villagers who are less than 2,090 days old will be in pre-school at the time of model initialisation as they will have been less than five years old when the current school year started. Villagers who are older than 2,090 days but younger than 5,740 days will be in school as they will have been between five and 15 years old when the current school year started. Villagers who are older than 5,740 days but younger than 6,570 days will either be undertaking their +2 (i.e.
college) or will have left the education system as they will have been between 15 and 18 years old when the current school year started. Both possibilities are equally likely. Of the villagers who are older than 6,570 days but under 28 years of age, by default a quarter will have attained +2 qualifications, a quarter will have left education after completing their SLC (School Leaving Certificate), and the remaining half will have left school before attaining any qualifications. Villagers who are over 28 years of age but under 45 years of age will have a 25% chance of having attained their SLC. The remainder \u2013 along with villagers who are over 45 years of age \u2013 will have left school before attaining any qualification. These rules have been derived from the data on educational attainment that was collected during the fieldwork household surveys and from discussions during the typical life and young persons\u2019 focus groups. I3. Initialise villager career. Villagers who have completed their education will be allocated to a career pathway at initialisation. The particular career pathway that is selected for a villager is determined using the same principles employed in S1, with the villager's sex, age, and educational attainment all being taken into account. I4. Initialise villager age of marriage. The marriage age of those who are single at the time of model initialisation is determined by drawing a value at random from the viable values. I5. Initialise villager fertility. The desired child count of married women who are under the age of 50 is also determined at initialisation. The process used to select this value is the same as the one outlined in S4, except it takes into account the number of children the women already have, the time since they had their last child, and the time since they married.
Specifically, the number of children that a woman already has determines her minimum desired fertility, women who have been married for more than 9.78 years without having a child are assumed to desire no children, and women whose last child was born more than 9.86 years ago are assumed to desire no more children. The choice of these values is based on data from the Nepal Demographic and Health Survey. The timing of the next birth is then determined as in S4 for women who are yet to have a child, and as in S5 for women who already have at least one child, except that the viable values that are drawn from are constrained by the time that has elapsed since the women were deemed to have last given birth or married. I6. Initialise villager age of death. The age at which the newly initialised villagers are set to die is determined by drawing a value at random from the viable values. I7. Initialise household fields. The number of fields a household is assigned is determined using a stochastic function that takes into account the number of adults in the household. The function is based on a linear regression model which was fitted to data from the Namsa household surveys. I8. Initialise household paddy fields. Two, three, or four of the fields that are assigned to households with 16+ ropani of land will be deemed suitable for rice cultivation when the model is initialised. The precise number of fields allocated is randomly determined, with each option being equally likely. I9. Initialise household polytunnels. Three randomly chosen households who have at least two adults under the age of 60 will be allocated polytunnels. The particular number of polytunnels each household is allocated is randomly determined but will be either one, two, or three. These rules are designed to allocate polytunnels to the kinds of households that had been targeted for support by an international non-governmental organisation (INGO) that ran a tomato cultivation project in the actual village. I10. Initialise household animals.
The number and sex of animals each household has at initialisation is determined using the same deterministic method as that outlined in S10. However, rather than the animals being assigned the default age values for new livestock and poultry, their initialisation ages are randomly determined out of the viable age values. I11. Initialise finances. Finance controllers have their finances initialised using a two-step process. Firstly, they conservatively forecast their income and outgoings for the forthcoming year, while assuming meat consumption of two portions per week and the realisation of expected crop yields (the latter being of relevance only to households). Based on this assessment, the amount of initial cash that they require to meet their consumption and livelihood needs during the coming year without going into debt can be determined. They are then allocated this sum. In addition to this, households may be allocated historical savings or debts. This is done by randomly assigning each household the net finances of one of the households in Namsa (as established in the household surveys), using a sampling without replacement approach. Two households are also assigned a monthly remittance income of NPR 10,000. Finance controllers who are villagers will, meanwhile, be allocated historical savings that take into account their estimated earnings and expenditures since they left education. This is important as it can affect the timing of household fission. I12. Initialise past yields. Ten years of historical crop yield data is generated at initialisation to enable the crop strategy selection procedure to function. This is done by repeating ten times for each crop the procedure that is outlined in Sc3 for stochastically determining a given crop's yield. I13. Initialise future yields. For simplicity, the yield of each crop for each simulation year is predetermined.
This is done in the same way as the above procedure, but for fifteen years\u2019 worth of yields instead of ten (simulations are fifteen years long). I14. Initialise crop status. The initial crop strategy of households is determined using the same method as that outlined in S6. Wheat, cabbages, and cauliflower are scheduled to be planted at the time the model starts, so households assign the applicable number of fields to each of these crop types, following essentially the same process as that set out in S7. In the study that the model was originally designed for, a number of scenario pathways were simulated. Sc1. Earthquake scenarios. The first earthquake pathway is an effort to mimic the impact of the 2015 earthquakes which struck Namsa towards the end of the main fieldwork spell, shortly after the baseline data \u2013 which the initial model conditions are generated from \u2013 was collected: \u2022Five percent of livestock die as a result of collapsed shelters and stress. Households are not able to sell these deceased animals due to the chaotic circumstances and the loss of access to local markets caused by landslides blocking the highway.\u2022There is a fifty percent chance of individuals who live alone leaving the village to reside permanently with family members elsewhere in Nepal. There is also a fifty percent chance of households that are composed of two adults emigrating permanently if one of those adults has a salaried job.
When a household leaves the virtual village, their house, fields, and animals cease to exist.\u2022The yield of the crops that are in the ground between 25 April and 12 June (one month after the 7.3 Mw aftershock) is 30% below what it would otherwise have been due to terrace collapses, soil movements, damage to the village irrigation system, and a reduction in the time that villagers can dedicate to agricultural activities.\u2022The standard cash crop yield permanently declines by 15% as a result of the reduced irrigation capacity of the village following the damage caused to the village irrigation system, which the households are unable to repair.\u2022Off-farm labouring stops for three months, as villagers focus on dealing with the challenges on their own farm and because of the regional economic disruption. However, once this three-month period is over, off-farm labouring opportunities increase by 50% above the baseline rate for twenty-four months as the recovery and reconstruction processes get underway.\u2022On 25 October 2016, eighteen months after the main shock, households take out reconstruction loans to fund the rebuilding of their houses. The size of the loan depends on the number of members the household has at that moment in time. Households of one or two members are issued with a loan of NPR 300,000. This rises to NPR 425,000 for households with three to four members, NPR 550,000 for households with five to six members, and NPR 675,000 for households with seven or more members. The role of reconstruction grants is reflected in the chosen loan sizes. Details of the loan repayment process are given in S11. The second earthquake scenario is a counterfactual in which the earthquakes do not happen and therefore the impacts listed above are not realised. This alternative pathway provides a baseline against which the consequences of the earthquakes for household finances and village demographics can be gauged. Sc2. Fertility scenarios.
The first fertility scenario assumes an average fertility rate of 1.6 children per woman, while the second assumes the slightly higher rate of 2.1 children per woman. Sc3. Crop variability scenarios. The first crop variability scenario assumes a continuation of the status quo in terms of inter-annual yield variability, while the second scenario assumes a slightly higher degree of inter-annual yield variability than is currently reported. This latter scenario is intended to represent the potential consequences of climate change and the increasing threat of pests and diseases for agriculture. Given the stochastic nature of the model, it is necessary to conduct multiple runs for each scenario that is being simulated in order to determine what constitutes typical outcomes and to gauge variability. Model validation is the \u201cprocess of ensuring that there is a correspondence between the implemented model and reality\u201d. Here, we have drawn upon a combination of empirical validation methods and qualitative validation methods to assess the appropriateness of the model, recognising the respective strengths and drawbacks of each approach. In respect of empirical validation, we conducted four hundred one-year-long model runs for the no-earthquake, low crop variability scenarios in an effort to approximate the conditions in the year leading up to the fieldwork. We then calculated the total income and expenditure of the village in each replicate, breaking this down by income and expenditure type. We then compared the mean and the range of the values produced in this simulated data to the household survey data. That the survey data points fall within the range of simulated outcomes in most income and expenditure categories is a positive sign. Whether this comparison is of much evidential value is debatable, however, given the number of strong assumptions that were necessary to make the two data sets comparable and the substantial variability in model outcomes.
This makes the efforts that were made to cross verify the data that informed the design of the model all the more important. Sensitivity analysis (SA) explores the response of a model\u2019s outputs to plausible changes in input parameters. Each parameter group was assigned a central value, C, a feasible upper extreme, +C, and a feasible lower extreme, \u2212C. 200 simulations were first conducted with every parameter group at its central C value, and then a further 200 simulations were conducted for each parameter group at its +C value and then at its \u2212C value. As there are 11 parameter groups, this yielded 4,600 simulations in all. The thematic groups are shown in . Eight summary statistics were calculated for each simulation once the runs were completed: (a) the total households in the village at the end of the simulation; (b) the average household size at the end of the simulation; (c) the villager count at the end of the simulation; (d) the combined number of days that households spent in debt over the course of the simulation; (e) the total number of households who were in debt at some point during the simulation; (f) total household cash at the end of the simulation; (g) total household loans at the end of the simulation; (h) the Gini Index for the village at the end of the simulation. The impact of the +C and \u2212C values on these statistics is shown in . The results of the SA show that the demographic outcomes of the model are by and large insensitive to changes in the parameters that are set out in . Unsurprisingly, the household loans, debt days, and experience of debt metrics tend to increase and decrease together, with household cash going in the opposite direction. However, the strength of the relationship between the metrics appears to differ from one parameter group to another. This suggests that they affect village economics in somewhat different ways. The shifts that are seen in the household Gini coefficient between the parameter groups lend further weight to this idea.
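Summary statistic (h), the Gini Index, is the only one of the eight that is not a simple count or sum. As an illustration of how it can be computed from end-of-simulation household cash balances, here is a minimal Python sketch (the model itself is implemented in NetLogo and R, so this is a reimplementation for illustration, not the authors' code):

```python
def gini(values):
    """Gini coefficient of non-negative values: 0 = perfect equality, near 1 = extreme inequality."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Equivalent to the mean-absolute-difference definition:
    # G = sum_i (2i - n - 1) * x_i / (n * sum(x)), with x sorted ascending and i = 1..n
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1)) / (n * total)

print(gini([100, 100, 100, 100]))      # 0.0  (cash spread equally across households)
print(round(gini([0, 0, 0, 100]), 2))  # 0.75 (one household holds everything)
```

The sorted-rank formulation avoids the O(n²) pairwise-difference loop, which matters little at village scale but keeps the sketch idiomatic.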
Another important observation is that the magnitude of the changes tends to be substantially greater in the case of household loans and the Gini coefficient than in the case of the other metrics. This is the result of certain households becoming trapped in debt spirals whereby the high rate of interest on debts magnifies any initial financial problems they may have had, pulling them deeper and deeper into the red in a non-linear fashion. By contrast, such feedbacks do not play a prominent role in the other metrics. Given that the changes in the statistics are typically of a similar order of magnitude to the parameter variations, the model can be considered unlikely to suffer from highly sensitive non-linearities, and the conclusions drawn from the model can be considered reasonably robust. An important limitation of this SA is that it does not consider interaction effects between parameter groups. The scenario sweep runs each scenario combination (Sc) a designated number of times, each time with a different seed (see below). This can be altered in the model code if necessary. The model code, its associated files, example output, and R scripts for analysing the output are available from the University of Leeds at doi.org/10.5518/962. Prior to running the model in its default form, the following needs to be done:1.The paths to the R scripts must be updated. The paths are set in the \u2018to load-r-scripts\u2019 function of the NetLogo code.2.The path name for the output data needs to be updated. The path is set in the \u2018to prepare-data-log\u2019 function. Data logging can be turned on or off using the \u2018logData?\u2019 switch on the interface.3.The paths at the top of the \u2018simulateMarriage.R,\u2019 \u2018simulateFertility.R,\u2019 and \u2018simulateDeath.R\u2019 scripts need to be updated so that they point towards the data in the data folder.
These scripts can be found in the \u2018r_scripts\u2019 folder.4.If conducting one run at a time, the scenario chooser menus on the NetLogo interface can be used to set the cash crop and subsistence crop half yield frequency parameters, to set the average fertility parameter, and to determine whether or not to simulate an earthquake. If conducting a scenario sweep, this is not necessary.5.To change the other parameters from their defaults, update the values that are set in the \u2018to initiate-parameters\u2019 function and search for other instances of the variables which may require updating. Note that changes made to these parameters on the interface will be reset when the \u2018to initiate-parameters\u2019 function is called on setup, so the changes need to be made directly in the code. Also note that crop yields cannot be set by the user. They are determined by the \u2018initiate-future-yields\u2019 function at initialisation.6.If conducting a scenario sweep, the number of replications that should be run for each scenario combination can be set using the \u2018numberOfRuns\u2019 slider on the interface. 
Each replication uses a different seed.To run a single simulation: Set the parameter values using the methods explained above, click \u2018Setup,\u2019 and then either repeatedly click \u2018Step\u2019 to advance one day at a time or click \u2018Run Simulation\u2019 to run the model for the period of time determined by the \u2018simulationLength\u2019 slider.To run a batch of simulations: Determine the number of runs to conduct for each scenario combination using the method described above and then click \u2018Conduct a Scenario Sweep.\u2019 If a non-standard batch of simulations is desired, adjust the \u2018scenario-sweep\u2019 function and any other parameters as required following the previously stated advice.Other things to note:\u2022A csv file is created for each simulation run if data logging is set to on. By default, the file name shows the earthquake, fertility, and crop variability scenarios that were simulated, as well as the seed. For example, \u2018EQ-21-12-01.csv\u2019 stands for the earthquake scenario, the 2.1 fertility rate, the 12/10 crop variability scenario, and seed #1.\u2022It is advisable to turn off the \u2018view updates\u2019 tick box to speed up the simulations.\u2022To parallelise the simulations, open multiple instances of NetLogo and adjust the \u2018scenario-sweep\u2019 function so that different parameter combinations and/or seeds are run in each instance.\u2022The model is computationally expensive relative to most NetLogo ABMs. It is therefore advisable to run it on relatively powerful devices.\u2022Each output file is around 30 MB. It is therefore advisable to ensure sufficient storage is available before running large batches of simulations.\u2022The land holdings of each household can be highlighted by clicking the \u2018Highlight a HH\u2019 button after setup and then clicking on the centre of a household in the world window.
Click the \u2018Highlight a HH\u2019 button again to return to the normal view. This facility is discussed in Section 8 as it facilitates ready visual identification of suspect model behaviours. The model can potentially be adapted for use in studies besides the one that it was originally designed for. If this is to be done, users should take care to verify that the model behaves as it should after making the intended changes, and they should be aware of the particularities of the fieldsite that was modelled. By default, the model logs a wide range of simulation values to a CSV file, with a separate CSV file being generated for each run. The outputted data includes the scenario details, the annual crop yields, information on significant events that occurred during the simulations, and the status of villager and household agents on each day of the simulation. Example output from the simulations is provided in the \u2018example_output\u2019 folder, in the \u2018data_analysis\u2019 folder of the supplementary material. If you have RStudio, the files can be explored using the \u2018output_viewer.R\u2019 application. To do this:1.Open RStudio and install the \u2018Shiny\u2019 package if it has not already been installed.2.Open the \u2018output_viewer.R\u2019 script in RStudio and click \u2018Run App\u2019.3.When the app opens, click the \u2018Browse\u2026\u2019 button and select a data output file of interest e.g. a file from the \u2018example_output\u2019 folder.4.Once the upload is complete, select one of the tabs to see an overview of the data, bearing in mind that it can take a while to process, even on a relatively high-performance machine. If the seasonal decomposition charts throw an error, try deselecting some households. The data can alternatively be explored using custom R scripts. To do this:1.Open the scripts in RStudio and ensure that the required packages are installed.2.Update the path names so that they point to the data that is to be analysed.3.Run the code.
Examples are provided in the \u2018data_analysis\u2019 folder. By default, the scripts are set up to analyse the 10 example outputs that are included in the supplementary materials. The \u2018village_statistics.R\u2019 script calculates key demographic and financial variables for each of the simulations. The \u2018financial_trajectory.R\u2019 script plots household finances over time for each scenario combination. The model code, its associated files, example output, and R scripts for analysing the output are available from the University of Leeds at doi.org/10.5518/962. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper."}
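The output-file naming convention described above (e.g. \u2018EQ-21-12-01.csv\u2019) can be unpacked programmatically when post-processing large batches. The Python sketch below is illustrative only: it is not part of the model's NetLogo/R code, and the field interpretations (including the non-earthquake token) are inferred from the single documented example:

```python
def parse_run_filename(name):
    """Split an output file name such as 'EQ-21-12-01.csv' into its scenario fields.

    Assumed layout: <earthquake token>-<fertility x 10>-<crop variability>-<seed>.csv
    """
    stem = name.rsplit(".", 1)[0]
    eq, fert, crop, seed = stem.split("-")
    return {
        "earthquake": eq == "EQ",     # 'EQ' marks the earthquake scenario (assumed)
        "fertility": int(fert) / 10,  # '21' -> 2.1 children per woman
        "crop_variability": crop,     # '12' -> the 12/10 crop variability scenario
        "seed": int(seed),            # '01' -> seed #1
    }

print(parse_run_filename("EQ-21-12-01.csv"))
```

A parser like this makes it easy to group the ~30 MB per-run CSVs by scenario combination before analysis.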
+{"text": "Amyloidosis is a relatively rare human disease caused by the deposition of abnormal protein fibres in the extracellular space of various tissues, impairing their normal function. Proteomic analysis of patients\u2019 biopsies, developed by Dogan and colleagues at the Mayo Clinic, has become crucial for clinical diagnosis and for identifying the amyloid type. Currently, the proteomic approach is routinely used at the National Amyloidosis Centre (NAC) and the Istituto di Tecnologie Biomediche-Consiglio Nazionale delle Ricerche (ITB-CNR). Both centres are members of the European Proteomics Amyloid Network (EPAN), which was established with the aim of sharing and discussing best practice in the application of amyloid proteomics. One of EPAN\u2019s activities was to evaluate the quality and the confidence of the results achieved using different software and algorithms for protein identification. In this paper, we report the comparison of proteomics results obtained by sharing NAC proteomics data with the ITB-CNR centre. Mass spectrometric raw data were analysed using different software platforms including Mascot, Scaffold, Proteome Discoverer, Sequest and bespoke algorithms developed for accurate and immediate amyloid protein identification. Our study showed a high concordance of the obtained results, suggesting a good accuracy of the different bioinformatics tools used in the respective centres. In conclusion, inter-centre data exchange is a worthwhile approach for testing and validating the performance of software platforms and the accuracy of results, and is particularly important where the proteomics data contribute to a clinical diagnosis. The term \u201camyloidosis\u201d is applied to a class of protein deposition diseases where misfolded proteins accumulate in the form of insoluble fibrils in the extracellular space of several tissues. These deposits progressively lead to organ dysfunction, most frequently involving the heart, kidneys and central nervous system ,2.
The clinical spectrum of amyloidosis is determined by the type of amyloidogenic protein and the affected organs. Early diagnosis and accurate amyloid typing are crucial since organ dysfunction increases with continuing amyloid deposition. An accurate diagnosis of amyloidosis involves the analysis of a tissue biopsy from the affected organ or, alternatively, using the less invasive procedure of subcutaneous fat aspiration. Tissue biopsies are commonly formalin-fixed paraffin-embedded (FFPE), which is one of the most common methods for storing tissue samples. Collected samples are stained with Congo Red (CR) dye, and amyloid fibrils are detected by the typical birefringence under polarised light. To identify the amyloid protein, immunological staining approaches, such as immunohistochemistry (IHC), have been proven to be the gold standard ,6. More recently, in view of the IHC limitations, some clinical centres have started to rely entirely on mass spectrometry (MS)-based proteomics methods for amyloid typing ,8,9. There are relatively few proteomics platforms dedicated to the analysis of amyloid around the world. The need to define common standard procedures and share experiences on several topics concerning amyloid proteomics and related methodologies led to the formation of the European Proteomics Amyloid Network (EPAN) in 2017. In this context, an inter-centre study focused on LC-MS/MS raw data exchange was carried out at the National Amyloidosis Centre (NAC) in London and the Istituto di Tecnologie Biomediche-Consiglio Nazionale delle Ricerche (ITB-CNR) based in Milan. The NAC proteomics platform operates regularly as a clinical diagnostic test for amyloidosis and also for research into the pathogenesis of the disease. Since 2012, more than 2000 clinical samples, which include various tissue types, have been analysed by MS.
The experience of the NAC in running a UK-accredited amyloid proteomics service to type amyloid, together with the benefits and limitations of the approach, has recently been reported. ITB-CNR has applied gel-free proteomics to study amyloidosis since 2008 in collaboration with Hospital San Matteo (HSM) in Pavia. In particular, it has mainly analysed fat aspirate samples, and liver and cardiac tissues, supplied by HSM. Of note, the analysed samples concern critical cases unsolved by IHC and are prepared without LCM. ITB-CNR developed the \u03b1-value algorithm to diagnose the four main types of amyloidosis, AL lambda and kappa, and TTR and AA, based on a label-free approach. The present work is focused on the comparison of the amyloid proteomics results obtained in the two centres based in London and Milan. We report our experience of exchanging the mass spectrometry raw data for evaluating the quality and the confidence of our results achieved through the use of different software platforms and algorithms for amyloid protein identification. In the context of the EPAN data exchange working group, forty LC-MS/MS raw data files were sent from NAC to ITB-CNR in order to be re-processed with their bioinformatics tools. Mass spectrometer raw data of seven fat aspirates and thirty-three FFPE samples from different tissue types were selected. The results are shown in . There were 3/40 cases where the ITB-CNR and NAC results were not in agreement; two of these were NAC AL (\u03ba) cases. This comparison between the proteomics data obtained in London and Milan was carried out as part of the European Proteomics Amyloid Network (EPAN), and it demonstrated an excellent level of performance of the different bioinformatics tools used by the London and Milan proteomics centres. In a small proportion of NAC MS raw data analysed at ITB-CNR, the results disagreed. In some cases, this arose from a difference in reporting procedures.
At the NAC, we report samples as \u201cno amyloid signature\u201d in cases where only one of the Mayo Clinic\u2019s signature proteins is present. We currently do not include vitronectin as a signature protein even though it has been proposed as a signature protein ,15. Of note, when the \u03b1-value algorithm was updated with additional amyloid proteins, such as lysozyme, insulin and semenogelin, the amyloidosis subtyping results came into agreement with the NAC findings. Although the two centres applied different procedures in terms of search engine platforms and algorithms, the comparison showed very good concordance (>92%). These findings indicate that the MS-based approach is robust, sensitive and less affected by biases than antibody-based methods. The availability of untargeted proteomic profiles permits the re-evaluation of data and the consideration of new subtypes. This is useful for the definition of different panels composed of different biomarkers, leading to a high-precision diagnosis and the eligibility of patients for specific therapeutic treatments, translating basic research into real-life practice and transforming medicine from evidence-based to personalised. This is the first inter-laboratory comparison of amyloid proteomics raw data analysed using different search engines, different analysts and applying the algorithms currently in use at each centre. This approach, which was initiated at the first European Proteomics Amyloid Network meeting in London in 2017, offers a simple and inexpensive model for future accreditation studies. A scheme of the NAC and ITB-CNR proteomics data analysis workflow is shown in , with identifications accepted at p < 0.05.
Proteomics results are linked to the NAC database, and the most likely amyloidogenic protein is displayed by using an algorithm, which has been previously described. FFPE tissue biopsies and unfixed fat aspirates were obtained from patients attending the UK NHS National Amyloidosis Centre and also received from other clinical centres for immunochemical and proteomics characterization. The proteomics analysis procedure has been previously described in detail. In addition, Mascot output data were also analysed and validated by running Scaffold 4.9.0. Scaffold filtering parameters for protein identification were a protein threshold confidence level >99%, with a minimum of two assigned peptides and a probability >95%. LC-MS/MS raw data of thirty-three FFPE and seven fat aspirates were selected from the NAC database in order to be re-analysed by the ITB-CNR centre. MS raw data obtained by NAC were processed by Proteome Discoverer 1.4 software, based on the SEQUEST algorithm. Matches between spectra were only retained if they had a minimum Xcorr of 2.0, 2.5 and 3.5 for the +1, +2 and +3 charge states, respectively; protein rank was fixed to 1, while peptide confidence was fixed to \u201chigh\u201d. In addition, the FDR was set to <5%. For amyloidosis subtyping, which involves evaluating which specific amyloid protein was prevalent in each patient, a parameter, the \u03b1-value, was calculated; this was obtained by normalizing the patient over control ratio (>3) of each biomarker\u2019s spectral count."}
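The charge-state-dependent Xcorr rule described above is simple to express in code. The following Python sketch is only an illustration of that acceptance rule; the function and dictionary names are mine, and neither centre's pipeline is stated to be written this way:

```python
# Minimum Xcorr per precursor charge state, as stated for the SEQUEST-based workflow:
# 2.0 for +1, 2.5 for +2 and 3.5 for +3.
XCORR_MIN = {1: 2.0, 2: 2.5, 3: 3.5}

def retain_match(xcorr, charge):
    """Return True if a peptide-spectrum match passes the charge-dependent Xcorr cut-off."""
    threshold = XCORR_MIN.get(charge)
    if threshold is None:
        # Charge states outside +1..+3 are not covered by the stated rule; reject them here.
        return False
    return xcorr >= threshold

print(retain_match(2.6, 2))  # True  (2.6 >= 2.5)
print(retain_match(3.0, 3))  # False (3.0 < 3.5)
```

In the actual workflow this filter is applied together with the rank-1, "high" peptide-confidence and FDR < 5% criteria, so passing the Xcorr cut-off alone is necessary but not sufficient.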
+{"text": "Lymph node metastases presenting in locally advanced cervical cancer are poor prognostic features. Modern radiotherapy approaches enable dose escalation to radiologically abnormal nodes. This study reports the results of a policy of a simultaneous integrated boost (SIB) in terms of treatment outcomes.Patients treated with radical chemoradiation with weekly cisplatin for locally advanced cervical cancer including an SIB to radiologically abnormal lymph nodes were analysed. All patients received a dose of 45\u00a0Gy in 25 fractions and a SIB dose of 60\u00a0Gy in 25 fractions using intensity modulated radiotherapy/volumetric modulated arc therapy, followed by high dose rate brachytherapy of 28\u00a0Gy in 4 fractions. A control cohort with radiologically negative lymph nodes was used to compare the impact of the SIB in node positive patients. Treatment outcomes were measured by overall survival (OS), post treatment tumour response and toxicities. The tumour response was based on cross sectional imaging at 3 and 12\u00a0months and recorded as local recurrence free survival (LRFS), regional recurrence free survival (RRFS) and distant recurrence free survival (DRFS).Between January 2015 and June 2017, a total of 69 patients with a median follow up of 30.9\u00a0months (23 SIB patients and 46 control patients) were identified. The complete response rate at 3\u00a0months was 100% in the primary tumour and 83% in the nodal volume receiving SIB. The OS, LRFS, RRFS and DRFS at 3\u00a0years of the SIB cohort were 69%, 91%, 79% and 77% respectively. High doses can be delivered to regional pelvic lymph nodes using SIB without excessive toxicity.Using a SIB, a total dose of 60\u00a0Gy in 25 fractions of chemoradiation can be delivered to radiologically abnormal pelvic nodes with no increase in toxicity compared to node negative patients.
The adverse impact of positive nodal status may be negated by high dose deposition using SIB, but larger prospective studies are required to confirm this observation. Cervical cancer is a common cancer and worldwide is the fourth most common cancer for both incidence and mortality in women. For patients with locally advanced cervical cancer, concurrent cisplatin-based chemotherapy with external beam radiotherapy (EBCRT), followed by brachytherapy, is considered to be the standard of care. Alongside developments in diagnostic imaging, there has been rapid progress in the technology used for planning and delivery of radiotherapy including intensity modulated radiotherapy (IMRT), volumetric modulated arc therapy (VMAT) and image guided radiotherapy (IGRT) in the past decade. These new techniques are now commonly used, and they enable accurate delivery of therapeutic doses of radiation and simultaneous integrated boosts (SIB) to high doses in nodal chains for patients receiving cervical EBCRT. There is no consensus on the optimum external beam radiotherapy technique when there are positive pelvic lymph nodes. The aim of this service evaluation study was to assess treating extended nodal volumes with a SIB of up to 60\u00a0Gy to radiologically positive nodes in patients referred for radical chemoradiation, in terms of response, toxicity, recurrence free survival (RFS) and overall survival (OS).Patients with locally advanced cervical cancer were offered radical concurrent chemoradiation incorporating SIB to radiologically positive pelvic or para-aortic lymph nodes. Those treated postoperatively or with atypical histology were excluded. In addition to routine demographic data, nodal site and size, dosimetry, treatment response and toxicity were extracted. Treatment planning followed the EMBRACE planning guidelines, and staging 18FDG-PET was performed where available.
Radiologically abnormal lymph nodes were identified on the planning CT scans and a separate clinical target volume (CTV) defined, which was expanded by 5\u00a0mm globally to form the planning target volume (PTV). This was always inside the 45\u00a0Gy nodal PTV and, using a SIB, all nodes in the node positive patients were boosted to 60\u00a0Gy in 25 fractions. Dose planning constraints are shown in Table . Treatment was standardised, delivering a radiotherapy dose of 45\u00a0Gy in 25 daily fractions using IMRT/VMAT with weekly cisplatin 40\u00a0mg/m2. After completing radiotherapy, patients were assessed prospectively at 4\u00a0weeks, 12\u00a0weeks, 6\u00a0months and 6 monthly thereafter. Treatment outcomes were measured by post treatment tumour response, sites of recurrence, overall survival and toxicities. The tumour response was based on size criteria on CT thorax, abdomen and pelvis at 3 and 12\u00a0months and recorded as local recurrence free survival (LRFS), regional recurrence free survival (RRFS) and distant recurrence free survival (DRFS). Post treatment toxicities were graded using Common Terminology Criteria for Adverse Events (Version 4.0). Toxicity events are presented as the maximum toxicity reported at any follow-up time; acute toxicity was defined up to 12\u00a0weeks and late toxicity from 6\u00a0months onwards.A control group of cervical cancer patients with radiologically negative lymph nodes treated under the same departmental planning and treatment delivery protocols using IMRT/VMAT (45\u00a0Gy in 25 fractions for EBRT and 28\u00a0Gy in 4 fractions for brachytherapy) without SIB was identified from the EMBRACE patients treated at this centre. The control group was matched with the SIB cohort for the length of follow up and histology to provide a ratio of 2 control cases to 1 SIB case.
Demographic and tumour characteristics between the treatment groups were compared using the\u00a0Kruskal\u2013Wallis test\u00a0for continuous variables and the Chi-square test for categorical variables. Overall survival (OS), defined as death from any cause; local relapse free survival (LRFS), defined by relapse in the vagina, cervix, uterus, fallopian tubes or ovaries; regional relapse free survival (RRFS), defined by relapse in pelvic or para-aortic lymph nodes; and distant relapse free survival (DRFS), defined by relapse in the peritoneal cavity, mediastinal or supraclavicular lymph nodes or distant organs including bone, were calculated using the Kaplan\u2013Meier method, and the resulting survival curves compared using the Mantel-Cox log-rank test. For all tests, a P value\u2009<\u20090.05 was considered statistically significant. Statistical analysis was carried out with SPSS version 25.0 software. Between January 2015 and December 2017, there were a total of 23 patients treated with SIB with a median follow up of 31.5\u00a0months (range 2.2\u201352.2). For the control cohort, 46 patients who received IMRT/VMAT without SIB were included, with a median follow up of 30.2\u00a0months (range 1.4\u201379.3). The control group had no radiological evidence of nodal metastases. None of the patients were subject to laparoscopic node evaluation. Demographic details of patients in both groups are shown in Table . Staging included 18FDG-PET, and 5/23 (22%) were treated with SIB to positive para-aortic lymph nodes. 13/23 patients with FIGO I/II were upstaged to FIGO IIIC1 due to the positive nodal disease. Within the SIB cohort, the average Dmean right pelvic nodal PTV dose was 67.8\u00a0Gy (62.4 from EBRT\u2009+\u20095.4 from Brachytherapy) and the average Dmean left pelvic nodal PTV dose was 67.7\u00a0Gy.
(62.4\u00a0from EBRT\u2009+\u20095.3\u00a0from Brachytherapy.) As shown in Table , all treated patients in the SIB cohort showed complete radiological response at the primary site on follow-up CT at 3\u00a0months. When considering the lymph nodes, 83% of them had complete response three months after treatment and 13% showed partial response (>\u200950% regression). There were no recurrences at the primary site; two patients (2/23) with positive nodes relapsed at the treated nodal site; three-year LRFS was 90%. One of the two nodal relapses was in a patient presenting with massive adenopathy measuring 80\u00a0mm in diameter. The OS, LRFS, RRFS and DRFS at 3\u00a0years of the SIB cohort were 69%, 91%, 79% and 77% respectively compared to the control cohort, where these numbers were 77% (p\u2009=\u20090.76), 93% (p\u2009=\u20090.76), 95% (p\u2009=\u20090.10) and 89% (p\u2009=\u20090.30), as indicated and shown in Fig.\u00a0. Acute and late toxicities are summarised in Table . The aim of this study was to review the results of a policy treating radiologically abnormal lymph nodes with a simultaneous integrated boost up to 60\u00a0Gy over 25 fractions in patients referred for radical chemoradiation in the setting of locally advanced cervical cancer. Lymph node involvement has been regarded as an important prognostic factor in reports of conventionally treated cervical cancer patients; more than 50% of our SIB cohort were upstaged in FIGO staging due to positive lymph node involvement , 11. An important result from this study is the finding that toxicity is not increased over that in a control cohort when SIB is used. No\u2009\u2265\u2009grade 3 toxicities in terms of acute and late GI/GU, fatigue, lymphoedema and pelvic fracture were found in our SIB cohort, which is similar to other published studies \u201318. There are a limited number of published studies using SIB for nodal disease.
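The fraction sizes above can be compared on a common scale by converting to equieffective dose in 2-Gy fractions (EQD2) with the standard linear-quadratic formula EQD2 = D\u00b7(d + \u03b1/\u03b2)/(2 + \u03b1/\u03b2), where d is the dose per fraction. The Python sketch below is an illustrative calculation of mine, not a figure from the study (which reports physical dose sums), assuming the conventional tumour \u03b1/\u03b2 of 10\u00a0Gy:

```python
def eqd2(total_dose_gy, fractions, alpha_beta=10.0):
    """Equieffective dose in 2-Gy fractions under the linear-quadratic model."""
    d = total_dose_gy / fractions  # dose per fraction (Gy)
    return total_dose_gy * (d + alpha_beta) / (2.0 + alpha_beta)

# SIB nodal dose: 60 Gy in 25 fractions (2.4 Gy/fraction)
print(round(eqd2(60, 25), 2))  # 62.0
# Elective pelvic dose: 45 Gy in 25 fractions (1.8 Gy/fraction)
print(round(eqd2(45, 25), 2))  # 44.25
```

At \u03b1/\u03b2 = 10 Gy the 2.4 Gy SIB fractions are only modestly more effective per gray than 2 Gy fractions, which is one reason a 60 Gy SIB can be delivered without a disproportionate rise in toxicity.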
The two largest series of 74 and 75 patients respectively report a good oncological outcome and a low toxicity profile , 19. However, the impact of a high dose boost to radiologically abnormal pelvic nodes remains uncertain, and retrospective comparative data fail to confirm improvement in OS and LRFS in patients with locally advanced cervical cancer. Cervical cancer is a cancer which requires a high radiation dose for clinical and radiological remission of the primary tumour, and it would be expected that involved nodes may require a similar radiation dose for sustainable control. In this study the radiation dose to the SIB CTV was 67.8\u00a0Gy. This dose achieved complete remission and sustained nodal control. Dose response data from the EMBRACE study, based on cumulative dose to the primary site combining external beam and brachytherapy doses, has suggested that primary tumours\u2009<\u20093\u00a0cm require a dose of around 84\u00a0Gy EQD2 (\u03b1/\u03b2\u2009=\u200910). Limitations of this study are the small sample size and its retrospective nature. Reflecting the two cohorts with no randomised treatment allocation, there were demographic differences between the treatment groups. The strength of this study is the use of standardised radiotherapy treatment protocols and follow up procedures.This data strongly supports the emerging picture that a high dose can be delivered to regional pelvic lymph nodes without excessive toxicity and with a high probability of local control. Whether this can alter the natural history for such patients and overcome the worse prognosis associated with positive lymph nodes in cervical cancer should be the subject of a multicentre prospective randomised trial to formally evaluate the role of SIB in this group of patients.
In this, it will be important to consider the impact of FDG PET staging, which results in stage migration, with many more node positive patients being found to have systemic metastases. With this in mind, and reflecting on the impact of high radiation doses in releasing immunogenic antigens, combination therapy using high dose radiation to macroscopic nodes together with immunomodulating drugs may be the way forward."}
+{"text": "In our work, the removal of cationic and anionic dyes from water was estimated both experimentally and computationally. We checked the selectivity of the adsorbent, Zn\u2013Fe layered double hydroxide (LDH), toward three dyes. The physical and chemical properties of the synthesized adsorbent before and after the adsorption process were investigated using X-ray photoelectron spectroscopy, energy dispersive X-ray, X-ray diffraction, FT-IR, HRTEM, and FESEM analysis; particle size, zeta potential, and optical and electrical properties were also estimated. The effect of pH on the adsorption process was estimated, and the chemical stability was investigated at pH 4. Monte Carlo simulations were performed to understand the mechanism of the adsorption process and calculate the adsorption energies. Single dye adsorption tests revealed that Zn\u2013Fe LDH takes up the anionic dye methyl orange (MO) more effectively than the cationic dyes methylene blue (MB) and malachite green (MG). From MO/MB/MG mixture experiments, LDH selectively adsorbed in the following order: MO\u2009>\u2009MB\u2009>\u2009MG. The adsorption capacity of a single dye solution was 230.68, 133.29, and 57.34\u00a0mg/g for MO, MB, and MG, respectively; for the ternary solution, the adsorption capacity was 217.97, 93.122, and 49.57\u00a0mg/g for MO, MB, and MG, respectively. Zn\u2013Fe LDH was also used as a photocatalyst, giving 92.2% and 84.7% degradation of MO at concentrations of 10 and 20\u00a0mg/L, respectively. Under visible radiation, the Zn\u2013Fe LDH showed no activity. There are many industries that use dyes, such as the paper, plastics, and leather tanning industries2. The effluent discharge of the textile industry leads to environmental pollution owing to the existence of complex mixtures of malachite green (MG), methyl orange (MO), and methylene blue (MB) as cationic and anionic dyes3 and toxic metal ions in polluted water4.
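Equilibrium capacities like those above are conventionally obtained from the batch mass balance q_e = (C0 \u2212 Ce)\u00b7V/m, and removal efficiency from (C0 \u2212 Ce)/C0. The Python sketch below uses hypothetical numbers chosen for illustration, not the paper's measured values:

```python
def adsorption_capacity(c0_mg_l, ce_mg_l, volume_l, mass_g):
    """Equilibrium uptake q_e (mg/g) from the batch mass balance: q_e = (C0 - Ce) * V / m."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

def removal_percent(c0_mg_l, ce_mg_l):
    """Percentage of dye removed from solution."""
    return (c0_mg_l - ce_mg_l) / c0_mg_l * 100.0

# Hypothetical batch: 100 mg/L initial dye, 10 mg/L at equilibrium, 50 mL solution, 20 mg LDH
print(adsorption_capacity(100, 10, 0.050, 0.020))  # 225.0 (mg/g)
print(removal_percent(100, 10))                    # 90.0 (%)
```

Note that q_e scales with the solution-volume-to-adsorbent-mass ratio, which is why single-dye and ternary-mixture capacities are only comparable when measured under matched conditions.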
The usage of dyes has a hazardous effect on all life forms, that is, humans, plants, and animals; therefore, their effective disposal is necessary5. Thus, different chemical and physical techniques have been applied, such as biodegradation, reverse osmosis, activated sludge, chemical oxidation, and electrochemical methods, involving membrane separation, anaerobic and aerobic microbial degradation, adsorption, and photodegradation8.The textile industry is characterized by the consumption of large quantities of water, much of which contains dyes12. Activated carbon, which is widely used as an adsorbent for wastewater treatment, is expensive and, therefore, uneconomical13. The removal of organic pollutants with the use of layered double hydroxide (LDH) has been of great interest to many researchers in recent years. This is because of its unique features and properties, such as high surface area, low toxicity, low cost, high capacity for anion substitution, recoverability, and high chemical and thermal stability14. Several techniques have been reported for the modification of LDHs, for example, the reconstruction process, the ion exchange process, and coprecipitation in the presence of organics15. Many limitations of LDHs remain, including the inability to be used in highly acidic or basic media. The challenge is the preparation of LDH materials by applying new techniques and using advanced modifications, environmentally friendly methods, and easy operation. In our study, we selected Zn\u2013Fe LDH as a model over other LDHs owing to its high stability constant of nearly 25.27 and low solubility product of 62.5116. Furthermore, increasing amounts of solid adsorbent wastes require the development of new recycling methods. This is a critical requirement around the world17.The adsorption process is thought to be more efficient when compared with other physicochemical wastewater treatment techniques18.
The study of the characteristic optical properties of a material is important in providing data regarding electronic transitions, fundamental gaps, localized states, and trapping levels. Absorption of visible light from the top of the valence band (which is mainly composed of oxygen (O) 2p orbitals hybridized with Fe or Zn 3d orbitals) to the bottom of the conduction band is responsible for the electronic band gap transition19. Many gaps still exist in the science describing the flow of electrically conducting fluids, and such gaps are most evident with regard to multiphase fluids. Finally, despite the availability of many environmental applications, to date there are few to no reported environmental or medical applications involving nano-conducting fluids. With increasing investigation, nanofluids are expected to make a considerable impact in many applications. The dielectric behaviour of solid materials has accordingly been reported and explained using different models. One purpose of this study is to provide an overall account of the dielectric properties of the material; dielectric studies can also aid understanding, at the molecular level, of the basic interactions of the nanoparticles in aqueous systems21. Therefore, after an experimental study of the adsorption process, we examined the LDH as a photocatalyst for the MO dye. In this study, we aim to analyze a multiadsorbate system by studying the selectivity of the main dye in the ternary system and then the interaction and behavior of two model cationic dyes (MB and MG) and one model anionic dye (MO)22 with Zn\u2013Fe LDH applied as an effective adsorbent material. The prepared Zn\u2013Fe LDH was well characterized by FT-IR, XRD, FESEM, HRTEM, UV\u2013Vis spectroscopy, N2 adsorption/desorption, zeta potential, particle size analysis, and XPS. The adsorption mechanism and the electrical behavior were analyzed. 
This study highlights the potential application of Zn\u2013Fe LDH as an efficient adsorbent of anionic and cationic dyes, together with its electrical properties, which extend its scope for application in environmental remediation processes. Zn(NO3)2\u00b76H2O and Fe(NO3)3\u00b79H2O were purchased from Chem-Lab NV, Belgium. Hydrochloric acid was supplied by Carlo Erba Reagents, while NaOH was supplied by Piochem for Laboratory Chemicals, Egypt. MB, MO, and MG powders were purchased from Oxford Laboratory Reagents (India) (Table 2). EDX was used to determine the molar ratio of Zn\u2013Fe LDH. The BET specific pore volume, specific surface area, and pore size distribution of the nano-adsorbents were determined by N2 adsorption using an automatic surface analyzer. For analyzing the elemental composition of the prepared material, X-ray photoelectron spectroscopy (XPS; Kratos, England) with an Al-K\u03b1 monochromatic X-ray source (h\u03c5\u2009=\u20091486.6\u00a0eV) was used. Zeta potential and hydrodynamic particle size were investigated by the Nano-Zetasizer. High-resolution transmission electron microscopy was used to determine the microstructure of the LDH. The procedure of sample preparation for zeta potential measurements was as explained in our previous work14. The formed nitrate-type LDH was characterized by XRD. The accelerating voltage used was 40\u00a0kV with a 30\u00a0mA current, over a scan angle ranging from 5\u00b0 to 60\u00b0 with a scan step of 0.05\u00b0. 
To determine the vibrations of chemical bonds, a Bruker Vertex 70 FTIR-FT Raman spectrophotometer (Germany) covering the frequency range of 400\u20134000\u00a0cm\u221211 was used24. The optical band gap of the sample material was obtained with the Kubelka\u2013Munk (K\u2013M) function using the following equation: F(R)\u2009=\u2009(1\u2009\u2212\u2009R)2/2R\u2009=\u2009K(\u03bb)/S(\u03bb), where F(R), R, K(\u03bb), and S(\u03bb) are the K\u2013M (re-emission) function, the diffuse reflectance of the sample, the absorption coefficient, and the scattering coefficient, respectively26. The absorption coefficient \u03b1 was calculated from the measured absorbance results using the Lambert law, \u03b1\u2009=\u20092.303A/d, where A is the optical absorbance and d is the sample\u00a0thickness.\u00a0The following expression, suggested by Tauc, Davis, and Mott, is then used: \u03b1h\u03bd\u2009=\u2009C(h\u03bd\u2009\u2212\u2009Eg)n, where h, \u03bd, h\u03bd, and C are Planck's constant, the frequency, the incident photon energy, and a proportionality constant, respectively; Eg (eV) is the band gap energy\u00a0of the material, and the index n determines the kind of transition.\u00a0It can be equal to 1/2, 2, 3/2, or 3 for directly allowed, indirectly allowed, forbidden direct, and forbidden indirect transitions, respectively. For the direct transitions of Zn\u2013Fe LDH nanoparticles, the value of n is equal to 1/2. The acquired diffuse reflectance spectrum is converted to the Kubelka\u2013Munk function F(R\u221e), which is proportional to \u03b1; \u03b1 is substituted by F(R\u221e) in the Tauc equation, so the relational expression used in the experiment becomes (F(R\u221e)h\u03bd)2\u2009=\u2009C(h\u03bd\u2009\u2212\u2009Eg). The dielectric properties of the nanoparticles were studied in the form of the dielectric constant, dielectric loss, and ac conductivity (\u03c3ac) at different temperatures, including the effect of gamma irradiation, using a HIOKI 3532 LCR HI-TESTER in the frequency region from 200\u00a0Hz to 5\u00a0MHz. 
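As a concrete illustration of the Kubelka\u2013Munk/Tauc procedure described above, the following Python sketch (not the authors' code; the reflectance data are synthetic) converts diffuse reflectance to F(R), builds the direct-transition Tauc plot, and extrapolates the linear region to recover the band gap:

```python
import numpy as np

def tauc_band_gap(hv, reflectance, fit_window):
    """Estimate a direct band gap (eV) from diffuse reflectance data.

    Kubelka-Munk: F(R) = (1 - R)^2 / (2R), proportional to alpha.
    Direct-allowed Tauc plot: (F(R)*hv)^2 vs hv is linear near the edge;
    the x-intercept of the fitted line is Eg.
    """
    f_r = (1.0 - reflectance) ** 2 / (2.0 * reflectance)
    y = (f_r * hv) ** 2
    lo, hi = fit_window
    mask = (hv >= lo) & (hv <= hi)
    slope, intercept = np.polyfit(hv[mask], y[mask], 1)
    return -intercept / slope  # x-intercept of the fitted line

# Synthetic reflectance for a material with Eg = 1.76 eV (the value reported here)
hv = np.linspace(1.0, 3.0, 400)
alpha = np.where(hv > 1.76, np.sqrt(np.clip(hv - 1.76, 0, None)) / hv, 0.0)
# Invert F(R) = alpha:  (1-R)^2/(2R) = F  =>  R = 1 + F - sqrt(F^2 + 2F)
refl = 1.0 + alpha - np.sqrt(alpha ** 2 + 2.0 * alpha)
print(round(tauc_band_gap(hv, refl, (2.0, 3.0)), 2))  # prints 1.76
```

The fit window is chosen by eye on the linear portion of the Tauc plot, just as the extrapolation is done graphically in the text.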
The nanoparticles were made into pellets, and the surfaces of the samples were coated with a silver paste and placed between two copper electrodes acting as a parallel-plate condenser. The dielectric behaviour of the Zn\u2013Fe LDH samples as a function of frequency was studied in the form of the dielectric constant, dielectric loss, and ac conductivity. The dielectric constant (\u03b5\u2032) of the material is measured using the formula \u03b5\u2032\u2009=\u2009Cd/\u03b50A, where C is the measured capacitance, d is the thickness,\u00a0\u03b50\u00a0is the free-space permittivity, and A is the electrode area. Several experiments were performed to obtain data regarding the influence of the solution pH, adsorbent amount, initial dye concentration, and the selectivity of LDH toward the applied dyes. Falcon tubes (50\u00a0mL) contained 0.05\u00a0g of the synthesized adsorbent and 20\u00a0ppm of dye as a pollutant. The pH of the dye solution was adjusted from 3 to 10 using HCl or NaOH (0.10\u00a0N), and measurements were made with a Metrohm 751 Titrino pH meter. The same adsorption steps were performed for the two other dyes. All experiments took place in the dark, and the Falcon tubes were placed on an orbital shaker (SO330-Pro) for 20\u00a0h at 250\u00a0rpm until reaching equilibrium. Centrifugation was used to separate the adsorbent from the solution. Uptake experiments were performed in batch mode to estimate the effects of the initial concentration of MB and the other competing dyes (MO and MG). The amount of dye removed is estimated by qe\u2009=\u2009(C0\u2009\u2212\u2009Ce)V/m, where C0 and Ce (mg/L) are the initial and equilibrium dye concentrations, V (L) is the solution volume, and m (g) is the adsorbent mass. A UV\u2013Vis spectrophotometer was used to estimate the residual concentration of each dye at wavelengths of 675, 464, and 618\u00a0nm for MB, MO, and MG, respectively. Equilibrium conditions were investigated by isotherm models and discussed in terms of nonlinear equations; the significance of the results was demonstrated using the statistical parameters R2 and \u03c72. Sunlight-driven photocatalytic dye degradation was applied in the experiments. The LED visible source was a Philips model 3PM5 lamp with 14\u00a0W of nominal power. 
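The batch-uptake quantities used above (the equilibrium uptake qe and the percentage of dye removed) reduce to two one-line formulas. A minimal sketch with the batch conditions from this section (50 mL tube, 0.05 g LDH, 20 ppm dye); the residual concentration of 2 mg/L is assumed purely for illustration:

```python
def adsorption_uptake(c0_mg_l, ce_mg_l, volume_l, mass_g):
    """Equilibrium uptake qe = (C0 - Ce) * V / m, in mg of dye per g of adsorbent."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

def removal_percent(c0_mg_l, ce_mg_l):
    """Percentage of dye removed from solution."""
    return 100.0 * (c0_mg_l - ce_mg_l) / c0_mg_l

# 50 mL = 0.050 L, 0.05 g adsorbent, C0 = 20 mg/L, hypothetical Ce = 2 mg/L
print(adsorption_uptake(20.0, 2.0, 0.050, 0.05))  # ~18 mg/g
print(removal_percent(20.0, 2.0))                 # 90.0 %
```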
No cut-off filters were used for irradiation. The photodegradation of methyl orange dye was performed using a photocatalytic glass reactor consisting of a cylindrical glass tube. The experiments were carried out in sunlight between 11 am and 3\u00a0pm, when the sunlight intensity was nearly constant with low variation. The degradation tests were performed by mixing definite amounts of the LDH photocatalyst with the MO dye solution in the dark for about 24\u00a0h to reach the dye adsorption/desorption equilibrium state. After that, photocatalysis was carried out under visible light. After adjusting the test volume to about 50\u00a0mL and the reaction temperature to 35\u00a0\u00b0C, the photodegradation parameters (dosage (10\u00a0mg), concentration (10 and 20\u00a0mg/L), pH (pH 8), and contact time (up to 240\u00a0min)) were examined. At the end of the experiment, the LDH particles were separated from the solutions by centrifugation, and the residual concentration of MO dye was estimated. The chemical stability of Zn\u2013Fe LDH was examined by adding 0.10\u00a0g of adsorbent to 200\u00a0mL of aqueous solution at different initial pH values (2.5\u201311) and shaking for 24\u00a0h. The dissolved Zn2+ ions in the solution were then detected using an atomic absorption spectrophotometer. Also, to investigate the chemical stability of the adsorbents, XRD spectra were recorded after the adsorbents were collected and dried in a dryer at 60\u00a0\u00b0C. After every 15 samples, 3 standard solutions of dye were run to confirm the reliability of the data from the spectrophotometer. All experiments were performed in triplicate to ascertain reproducibility, and the average concentration was estimated by applying the mean and standard deviation (\u00b1\u2009SD) obtained from SPSS version 16. 
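The photodegradation efficiency reported for these tests follows the usual relation, efficiency (%) = 100(C0 \u2212 Ct)/C0. A small sketch; the residual MO concentrations below are back-calculated from the reported efficiencies (92.2% at 10 mg/L, 84.7% at 20 mg/L) purely for illustration, not measured values:

```python
def degradation_efficiency(c0_mg_l, ct_mg_l):
    """Photodegradation efficiency (%) from initial and residual dye concentrations."""
    return 100.0 * (c0_mg_l - ct_mg_l) / c0_mg_l

# Hypothetical residual concentrations consistent with the reported efficiencies
print(round(degradation_efficiency(10.0, 0.78), 1))  # 92.2
print(round(degradation_efficiency(20.0, 3.06), 1))  # 84.7
```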
A p-value of less than 0.05 was taken to be statistically significant. The remaining concentration of the dye in the samples was recorded using a UV\u2013Vis spectrophotometer. The plastic and glassware used in the experiments were cleaned and washed with 5% HCl aqueous solution and then rinsed in bidistilled water. All chemicals used in the experiments were of high grade, and the precision of the dye measurements was verified by consecutively inserting each dye standard solution into the UV\u2013Vis spectrophotometer to obtain a calibration curve (R2\u2009=\u20090.999). The Zn\u2013Fe LDH models were built from the crystal structure of hydrotalcite [Mg3Al(OH)8]. The Mg2+ and Al3+ cations were replaced by Zn2+ and Fe3+ cations, respectively. The cell formula was Zn20Fe5(OH)50(NO3)5, and the cation distribution was adopted as reported by Fan et al.28 for the 4 (M2+/M3+) molar ratio. The cell and the studied dyes were optimized using the Universal forcefield43, and the QEq charge method29 was applied. The optimization was done with the Forcite module, as implemented in the Materials Studio 2017 package, with the convergence tolerance quality set to ultra-fine. The MC simulation was performed with the Adsorption Locator module as implemented in the BIOVIA Materials Studio 2017 package. Two surfaces were cleaved from the optimized constructed cell using the build tool in the Materials Studio package, that is, the LDH (001) and (010) surfaces. A 35\u00a0\u00c5-thick vacuum slab was created above the LDH surfaces, and the two models are shown in Fig.\u00a0. The adsorption of MO, MB, and MG molecules on the Zn\u2013Fe LDH surfaces was carried out using MC simulation with the Adsorption Locator module, which applies the Metropolis MC method to obtain the lowest-energy conformers between the adsorbate and the adsorbent surface and calculates the adsorption energies (Eads). 
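The spectrophotometer calibration described above (a linear absorbance\u2013concentration curve with R2 close to 0.999) can be sketched as follows; the standard concentrations and absorbances are hypothetical, since the actual standards are not given in the text:

```python
import numpy as np

# Hypothetical MB calibration standards at 675 nm (illustration only)
conc = np.array([2.0, 5.0, 10.0, 15.0, 20.0])          # mg/L
absb = np.array([0.041, 0.102, 0.205, 0.304, 0.410])   # absorbance

# Least-squares calibration line and its coefficient of determination
slope, intercept = np.polyfit(conc, absb, 1)
pred = slope * conc + intercept
r2 = 1.0 - np.sum((absb - pred) ** 2) / np.sum((absb - absb.mean()) ** 2)

def absorbance_to_conc(a):
    """Invert the calibration line to get a residual dye concentration (mg/L)."""
    return (a - intercept) / slope

print(r2 > 0.99)  # a usable calibration should give R^2 close to 1
```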
The BET surface area, total pore volume, and average pore size of the sample are 71.61\u00a0m2/g, 0.078\u00a0cm3/g, and 2.61\u00a0nm, respectively. The average pore size is\u2009<\u200950\u00a0nm, and there is extensive spreading of pore size up to 16\u00a0nm. In the XPS spectra, the Fe 2p3/2 and Fe 2p1/2 (725.5\u00a0eV) signals refer to a small positive change33. FESEM images were used to examine the morphology of the synthesized LDH, as shown in Fig.\u00a034. The FTIR and XRD spectra of the Zn\u2013Fe nitrate LDH are presented in Fig.\u00a0. (F(R\u221e)h\u03bd)2 was plotted against h\u03bd using the Kubelka\u2013Munk function, and the direct band gap of the Zn\u2013Fe LDH nanoparticles could be evaluated by extrapolating the linear part of the curve, as shown in Fig.\u00a0. The low absorption coefficient (\u03b1\u2009<\u2009104\u00a0cm\u22121) is representative of the indirect band gap of Zn\u2013Fe LDH as a function of photon energy (h\u03c5): we plot (\u03b1h\u03c5)1/2 and extrapolate the linear portion of the curves to (\u03b1h\u03c5)1/2\u2009=\u20090. The intercepts in Fig.\u00a0 give Eg\u2009=\u20091.76\u00a0eV. The optical or photon properties of the Zn\u2013Fe LDH samples, such as the band gap energy, were identified using UV\u2013Vis spectroscopy, and the resulting spectrum is displayed in Fig.\u00a0. The theory of reflectivity of light was used to calculate other optical parameters, namely the extinction coefficient k and refractive index n, using the following equations6: k\u2009=\u2009\u03b1\u03bb/4\u03c0 and n\u2009=\u2009(1\u2009+\u2009R)/(1\u2009\u2212\u2009R)\u2009+\u2009[4R/(1\u2009\u2212\u2009R)2\u2009\u2212\u2009k2]1/2. The refractive index varies up to \u03bb\u2009=\u2009500\u00a0nm and is almost constant in the 500\u2013800\u00a0nm region. For further investigation of the optical data, several useful relationships can be inferred to link the real and imaginary parts of the dielectric function and the optical constants (n and k). 
The following relationships were used to compute the values of the real part (\u03b5r) and imaginary part (\u03b5i) of the dielectric constant for Zn\u2013Fe LDH4: \u03b5r\u2009=\u2009n2\u2009\u2212\u2009k2 and \u03b5i\u2009=\u20092nk. The variations of the refractive index and the extinction coefficient with energy are shown in Fig.\u00a035. The magnitudes of the real dielectric constant are higher than those of the imaginary dielectric constant, since both depend on the n and k values. The real part of the dielectric constant contains a term that describes the amount by which the material impedes the speed of light, and the imaginary part shows how a dielectric absorbs energy from an electric field because of dipole motion36. The optical conductivity, which is related to the refractive index and absorption coefficient by \u03c3\u2009=\u2009\u03b1nc/4\u03c0, where c is the speed of light in a vacuum, was then determined; its variation with photon energy, that is, the dependence of the optical conductivity on the incident photon energy for the Zn\u2013Fe LDH nanoparticles, is displayed in Fig.\u00a0. At low frequencies (\u03c9\u2009<<\u20091/\u03c4), \u03b5\u2032\u2009=\u2009\u03b5s and the dipoles follow the field. Dipoles start to lag behind the field as the frequency increases (with \u03c9\u2009<\u20091/\u03c4), and \u03b5\u2032 decreases slightly. The dielectric constant drops (relaxation process) when the frequency surpasses the characteristic frequency (\u03c9\u2009=\u20091/\u03c4). Dipoles do not comply with the field at this point, and \u03b5\u2032\u2009=\u2009\u03b5\u221e at extremely high frequencies (\u03c9\u2009>>\u20091/\u03c4). At low frequency, the dielectric constant is very high; it initially diminishes with frequency and then becomes somewhat stabilized. The high value of \u03b5\u2032 at frequencies below 1\u00a0kHz, which increases as the frequency diminishes and the temperature increases, corresponds to the system's bulk effect. 
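The optical relations used above (\u03b5r = n2 \u2212 k2, \u03b5i = 2nk, and \u03c3 = \u03b1nc/4\u03c0) are direct to evaluate; a minimal sketch with illustrative n and k values (not values measured in this work):

```python
import math

C_CM_PER_S = 2.998e10  # speed of light in vacuum (cm/s), for alpha given in cm^-1

def dielectric_parts(n, k):
    """Real and imaginary parts of the dielectric constant from optical n and k."""
    return n * n - k * k, 2.0 * n * k   # eps_r = n^2 - k^2, eps_i = 2nk

def optical_conductivity(alpha_per_cm, n):
    """Optical conductivity sigma = alpha * n * c / (4*pi), Gaussian units (s^-1)."""
    return alpha_per_cm * n * C_CM_PER_S / (4.0 * math.pi)

eps_r, eps_i = dielectric_parts(2.0, 0.5)  # illustrative n = 2.0, k = 0.5
print(eps_r, eps_i)  # 3.75 2.0
```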
The issue of interfacial charge carriers is an important factor for the improvement of dielectric values in this frequency region: the high value of the dielectric constant in the low-frequency area can be attributed to charge carriers obstructed at the electrode. At low frequency, the dielectric loss is extremely high, but it falls rapidly with increasing frequency. The dielectric loss increases with increasing temperature, analogous to the temperature dependence of the dielectric constant, as shown in Fig.\u00a0. Figure\u00a0 shows the ac conductivity \u03c3ac of Zn\u2013Fe LDH as a function of frequency at various temperatures. The conductivity plot has the following characteristics: (i) the conductivity spectra scatter at lower frequencies and converge at higher frequencies with increased temperature; (ii) the conductivity increases with increasing temperature; (iii) in the low-frequency region, frequency-independent conductivity behavior is noticed, but the conductivity becomes frequency-sensitive in the high-frequency region, and the hopping frequency moves to the higher-frequency side with increasing temperature. The conductivity increases in the higher-frequency region because of the hopping of charge carriers in finite clusters. Figure\u00a0 shows that at high pH the solution is rich in OH\u2212, which competes with anionic MO and thereby prevents adsorption equilibrium37. Zeta potential is a technique to study the stability of the prepared material and its dispersion in solution, as observed in Fig.\u00a0. Adsorption isotherms explain how molecules of the adsorbate are distributed between the solid and liquid phases as the adsorption process reaches an equilibrium state. Modeling is crucial for comparing and predicting whether two- or three-parameter isotherm models apply well to the LDH. Two-parameter models are commonly applied owing to their simplicity and ease of fitting; because the two-parameter models fit the data well, the use of a more complex model is not required. 
The adsorption isotherms for MB, MO, and MG are shown in Fig.\u00a038. The Langmuir adsorption isotherm is widely used for modeling homogeneous monolayer adsorption and assumes that the adsorbent surface is uniform and that all sorption sites are identical. The Freundlich isotherm model is suitable for heterogeneous adsorbent surfaces and multilayer adsorption. The Langmuir\u2013Freundlich (L\u2013F) isotherm model covers both heterogeneous and homogeneous distributions at high and low concentrations39. The fitted parameters are given in Table 2; for MB, R2 was 0.996 and 0.993 for the Langmuir and Langmuir\u2013Freundlich isotherm models, respectively, and qe was 133.29\u00a0mg/g. Based upon this result, the Langmuir model was the best model for explaining the adsorption process: homogeneous adsorption occurs on a monolayer, and the surface of LDH is uniform and without interactions between adsorbates. This indicates that the Langmuir model is more suitable for explaining the process of MB adsorption and better represents the experimental data. For MO, qe was 230.68\u00a0mg/g, and these results indicate that multilayer adsorption occurred on heterogeneous surfaces; the Langmuir\u2013Freundlich model was more suitable to describe and explain the process of the adsorption of MO with Zn\u2013Fe LDH. For MG, the correlation coefficient was 0.997 for the Langmuir\u2013Freundlich isotherm model. This suggests that the Langmuir\u2013Freundlich model was better than the other applied models owing to the presence of chemical bonds between metal ions (LDH) and dye, with ion exchange in solution. Overall, the isotherm models explain the adsorption behavior of MB, MO, and MG well when the values calculated from the adsorption isotherms are compared with the experimental values obtained by fitting the data with nonlinear isotherm models (Fig.\u00a0). 
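The nonlinear fitting procedure described above can be sketched as follows (not the authors' code). The three model equations are the standard Langmuir, Freundlich, and Langmuir\u2013Freundlich (Sips) forms; the equilibrium data are synthetic, generated with qm = 133 mg/g (close to the MB capacity reported here) and an assumed KL = 0.05 L/mg:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, kl):
    """Langmuir: qe = qm * KL * Ce / (1 + KL * Ce)  (uniform monolayer)."""
    return qm * kl * ce / (1.0 + kl * ce)

def freundlich(ce, kf, n):
    """Freundlich: qe = KF * Ce**(1/n)  (heterogeneous, multilayer)."""
    return kf * ce ** (1.0 / n)

def langmuir_freundlich(ce, qm, k, m):
    """Langmuir-Freundlich (Sips): qe = qm * (K*Ce)**m / (1 + (K*Ce)**m)."""
    return qm * (k * ce) ** m / (1.0 + (k * ce) ** m)

# Synthetic Langmuir-type equilibrium data (Ce in mg/L, qe in mg/g)
ce = np.array([5.0, 10, 25, 50, 100, 250, 500, 1000])
qe = langmuir(ce, 133.0, 0.05)

# Nonlinear least-squares fit recovers the generating parameters
popt, _ = curve_fit(langmuir, ce, qe, p0=[100.0, 0.01])
print(round(popt[0], 1), round(popt[1], 3))  # qm ~ 133.0, KL ~ 0.05
```

In practice one fits all three models to the measured (Ce, qe) pairs and compares R2 and \u03c72, as done in the text.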
These peaks are consistent with the di-substituted and monosubstituted benzene rings present in MG and confirm its adsorption onto the LDH surface. The interlayer spacing increased from 0.414\u00a0nm in the case of LDH to 0.6933, 0.693, and 0.8990\u00a0nm in LDH/MG, LDH/MB, and LDH/MO, respectively, which revealed highly effective penetration of the dyes into the LDH interlayers40. This increase may be attributed to one of the following reasons: the anion exchange of nitrate molecules, rearrangement of Zn\u2013Fe LDH ions and removal of water molecules, or the adsorption of dye molecules on the surface of LDH via hydrogen bonding, as per the proposed scheme41. In Table\u00a0, the maximum adsorption capacity (qmax, obtained from isotherm model fits) for this LDH is carefully compared with those of other adsorbents. Considering the high adsorption capacity, it seems that the Zn\u2013Fe LDH prepared in this study could potentially be used as a cost-effective adsorbent for dye-polluted aquatic systems. The mechanism of adsorption of dye on the LDH surface can be investigated using FT-IR spectra. The FT-IR spectrum of the Zn\u2013Fe LDH, after the addition of MG, showed peaks in the 800\u2013400\u00a0cm\u22121 fingerprint wavenumber region (Fig.\u00a0). The adsorption capacity (qe) of the dyes is negatively affected when the concentration of each dye in the mixture is increased in the range of 10\u20131000\u00a0mg/L. The decrease in the adsorption capacity of MB and MG is lower than that of MO, which is probably due to the affinity of MO toward the positively charged adsorbent surface; this results in multilayers of MO formed on the surface of the adsorbent30, which agrees with the adsorption results. Under irradiation, the photogenerated species react to form O2\u2212 radicals; these radical groups of \u00b7OH and O2\u2212 will result in the decomposition of MO. 
The effect of the initial MO concentration on the photodegradation efficiency was investigated by varying the initial MO concentration between 10 and 20\u00a0mg\u00a0L\u22121 while keeping the other parameters, such as catalyst concentration, reaction temperature, and pH value, constant; the result is shown in Fig.\u00a052. As is well known, light absorption by the photocatalyst and the separation of the photoelectrons and holes are important factors during the photocatalytic interaction. According to the above experimental data, the proposed photodegradation mechanism of Zn\u2013Fe LDH can be illustrated as follows: LDH can absorb visible light rays because of its narrow band gap of 1.765\u00a0eV. Under solar light irradiation, the electrons in the valence band of LDH can be excited to the conduction band, leaving holes in the valence band. The structure of LDH can effectively restrain the recombination of photoelectrons and holes, improving the photocatalytic activity. The holes left in the valence band of LDH can then more easily induce the formation of hydroxyl radicals (\u00b7OH) from OH groups. To understand the interactions between the dyes and the LDH surface, MC studies were performed using the Zn\u2013Fe LDH (001) and (010) planes. The adsorption energies of MO, MB, and MG on the (010) Zn\u2013Fe LDH surface were \u2212\u2009140.2, \u2212\u2009100.4, and \u2212\u200996.7\u00a0kcal\u00b7mol\u22121, respectively. The electrostatic interactions of MO, MB, and MG with the (001) Zn\u2013Fe LDH surface were \u2212\u20094.51, \u2212\u20093.96, and \u2212\u20093.63\u00a0kcal\u00b7mol\u22121, while those with the (010) Zn\u2013Fe LDH surface were \u2212\u200933.39, \u2212\u200927.27, and \u2212\u200925.43\u00a0kcal\u00b7mol\u22121, respectively. This trend in both the adsorption energies and the electrostatic interactions agrees with the experimental adsorption capacities of the studied dyes on the Zn\u2013Fe LDH. 
The lowest-energy structures of the studied adsorbates on the Zn\u2013Fe LDH (001) and (010) surfaces obtained from the MC simulations are shown in Figs.\u00a0. In this research, a coprecipitation method was applied to synthesise Zn\u2013Fe LDH, and the LDH was then used for dye adsorption in single and ternary systems after the structure of the prepared material had been investigated using physical and chemical methods. For the single system, the maximum adsorption capacities were 230.68, 133.29, and 57.34\u00a0mg/g for MO, MB, and MG, respectively; for the ternary solution, the respective values were 217.97, 93.122, and 49.57\u00a0mg/g. The experimental isotherm data fit well with nonlinear isotherm models. Furthermore, the pseudo-first-order, pseudo-second-order, and Avrami models described the adsorption kinetic data for MO, demonstrating both chemisorption and physisorption properties. The optimum pH was 7, 9, and 6 for MO, MB, and MG, respectively. The adsorption mechanisms of the dyes were investigated through XRD and FT-IR analyses and Monte Carlo simulation. Moreover, LDH proved that it could be applied as a photocatalyst for dye-polluted water. Supplementary Information."}
+{"text": "We prepared single-walledcarbon nanotube (SWNT) suspensions inphosphate buffer solutions containing 1% of a coconut-based naturaldetergent (COCO) or 1% of sodium dodecyl sulfate (SDS). The suspensionsexhibited strong photoluminescence (PL) in the near-infrared region,suggesting that the SWNTs, such as those with and chiralities,were monodispersed. Upon diluting the suspensions with a detergent-freephosphate buffer solution, the PL intensity of the SDS-containingSWNT suspension was significantly lower than that of the COCO-containingSWNT suspension. The COCO-containing SWNT suspension was more stablethan the SDS-containing SWNT suspension. The SWNT concentration ofthe suspensions prepared via bath-type sonication was lower than thatof the suspensions prepared via probe-type sonication. However, near-infrared(NIR) PL intensity of the SWNT suspensions prepared via bath-typesonication was much higher than that of the SWNT suspensions preparedvia probe-type sonication regardless of the detergent. This suggestedthat the fraction of monodispersed SWNTs of the suspensions preparedvia bath-type sonication was larger than that of the suspensions preparedvia probe-type sonication, although the SWNT concentration was low.Our results indicated that COCO favored the fabrication of SWNT suspensionswith stable and strong NIR PL, which are useful for various biologicalapplications. A typical methodinvolves the use of synthesized surfactants, such as sodium dodecylsulfate (SDS) and sodium cholate (SC).30 Appropriate amounts of SWNT powder are added to surfactant aqueoussolutions, and the mixtures are sonicated using probe- rather thanbath-type sonicators to separate the bundled SWNTs. Because surfactantmolecules are adsorbed on isolated SWNT molecules and \u201cwrap\u201dthe surface of SWNTs, the formation of SWNT bundles is prevented evenafter sonication is stopped. Wrapping methods using several organicmolecules have been proposed. 
Single- or double-stranded DNA molecules are commonly used to wrap SWNTs utilized for biological applications.35 The affinity of DNA molecules for the surface of SWNTs is affected by the DNA sequence, and the DNA wrapping manner can be regulated by adjusting the DNA sequence.38 Synthesized polymers, such as carboxymethyl cellulose, have also been widely used for dispersing SWNTs. Raw SWNTs are insoluble in water and can easily form bundles; however, the aforementioned physicochemical properties cannot be observed for bundled SWNTs.43 Therefore, several researchers have proposed new methods for improving the stability of SWNT suspensions using surfactants.49 Although dispersing SWNTs using SDS or SC is convenient and inexpensive, many researchers have demonstrated that dispersing SWNTs using surfactants results in unstable suspensions.50 We wrapped CNTs using eco-friendly green chemicals and subsequently prepared aqueous suspensions of SWNTs containing carboxylic groups (1.0\u20133.0 at. %) and multiwalled CNTs using a bath-type sonicator. However, we could not disperse bare SWNTs fabricated using the HiPco method using the bath-type sonicator. Furthermore, the fabricated SWNTs did not exhibit NIR PL because they were produced using an arc discharge method. SWNTs fabricated using the HiPco method include several chiralities, which exhibit strong NIR PL.52 Chirality, which is defined by the chirality vector, determines the structures and physicochemical properties of SWNTs. 
For example, the SWNTs with the chiralities typically fabricated using the HiPco method exhibit strong NIR PL. Recently, we prepared CNT dispersions using several coconut- and bamboo-derived natural detergents. In this study, we dispersed SWNTs fabricated using the HiPco method with a coconut-based natural detergent (COCO) for the first time. Probe- and bath-type sonicators were used to prepare the SWNT suspensions. Unlike the SDS-wrapped SWNTs, the COCO-wrapped SWNTs exhibited stable NIR PL even at low COCO concentrations.24 The typical method used to solubilize SWNTs with SDS is simple: appropriate amounts of SWNT powder are added to an SDS aqueous solution, and the mixtures are sonicated. Probe-type sonicators have been widely used to achieve good results, although bath-type sonicators have also been used. SDS molecules attach to the surface of the debundled SWNTs during sonication; therefore, SWNTs can be dissolved in aqueous solutions. Thereafter, the samples are typically centrifuged to remove the aggregates, and the supernatants are stored as \u201cSDS-SWNT\u201d hybrid suspensions. We used a similar procedure to solubilize SWNTs utilizing the aforementioned COCO detergent; SWNT solubilization with SDS was performed in parallel for comparison. During our experiments, mixtures of the SWNT and SDS or COCO detergent were sonicated using probe- or bath-type sonicators, and the prepared suspensions are denoted as SDS/probe, SDS/bath, COCO/probe, and COCO/bath, respectively. Photographs of the prepared suspensions before centrifugation are illustrated in Figure S2. 
The spectra of the SDS- and COCO-containing SWNT suspensions were compared to determine the concentration of SWNTs in each sample, not to evaluate the stability of the SWNT suspensions. The UV\u2013vis absorbance spectra of the supernatants of the SWNT suspensions subjected to centrifugation at 1500 rpm for 180 min are presented in Figure S2 (absorbance of 10-times-diluted samples). The spectra represent the average data from three independent experiments. The SWNT concentrations of the suspensions subjected to probe-type sonication were higher than those of the suspensions subjected to bath-type sonication. The PL peaks of the SWNT-containing phosphate buffer solutions were distinct. The PL intensities (Y axis) were converted into those of the original suspensions, although the PL measurements were carried out with the diluted samples; the raw data without conversion are shown in Figure S4. The brightest peak in the PL maps of the SWNT suspensions corresponded to the SWNTs of a particular chirality irradiated with 725 nm light. The cross sections of the PL maps are also shown. The SWNT concentration of the SWNT suspensions prepared via probe-type sonication was higher than that of the SWNT suspensions prepared via bath-type sonication. The PL intensity of monodispersed SWNTs is much stronger than that of bundled SWNTs.11 It seems that the ratio of monodispersed SWNTs was higher in the suspensions prepared by bath-type sonication after centrifugation. In other words, suspensions prepared via bath-type sonication might include bigger aggregates; those aggregates might then be well removed by centrifugation. 
If so, bath-type sonication was effective for preparing SWNT suspensions for PL measurements, although the dispersion efficiency of bath-type sonication was low. On the other hand, after centrifugation to remove aggregates, it seems that most of the SWNTs in the suspensions fabricated via bath-type sonication were monodispersed, although the SWNT concentrations of the suspensions were low. In the numerical analysis of the cross sections, the PL peaks of the COCO-dispersed SWNTs of the two chiralities red shifted by 10\u201315 and 5 nm, respectively. Typically, the dielectric constants of adsorbed molecules affect the PL intensity and wavelengths of SWNTs.57 We hypothesized that the differences in the physicochemical properties of SDS and COCO induced the differences in peak shifts between the SDS- and COCO-dispersed SWNT suspensions. The PL wavelength of the SDS-dispersed SWNTs changed when the suspension was diluted with an SDS-free phosphate buffer solution. This was attributed to the detachment of SDS from the surface of the SWNTs upon dilution, leading to the formation of SWNT aggregates.58 Although we do not have a clear explanation for the difference between SDS and COCO, there is a possible explanation. COCO includes two types of natural surfactants, although detailed information is not provided by the manufacturer. Because COCO forms micelles of mixed anionic and amphoteric ions, the critical micelle concentration of COCO micelles might be lower than that of SDS, which might be helpful for better solubilization.58 On the other hand, Madni et al. suggested that a combination of two different surfactants is effective for improving the solubilization efficiency of CNTs. The suspensions were also evaluated by Raman spectroscopy. All samples showed no significant changes in G/D ratios. 
We think that the SWNT structures were not collapsed by sonication.62 Although we did not quantitatively evaluate the SWNT morphologies in this work, AFM observation in liquids is an attractive research subject for understanding the interactions between SWNTs and organic molecules under adsorption equilibrium. Last, we evaluated each suspension using atomic force microscopy (AFM). The AFM images of the SDS/probe, SDS/bath, COCO/probe, and COCO/bath suspensions are presented. We prepared monodispersed SWNT suspensions using SDS or COCO and probe- or bath-type sonicators. The natural eco-friendly COCO detergent was as effective as SDS for preparing SWNT suspensions. The SWNT suspensions prepared using COCO were more stable than those prepared using SDS. In addition, although the SWNT concentrations of the suspensions prepared via bath-type sonication were lower than those of the suspensions prepared via probe-type sonication, the fraction of monodispersed SWNTs in the suspensions prepared via bath-type sonication was higher than that in the suspensions prepared via probe-type sonication. Our results provide helpful information for developing various biological applications that use the NIR PL of SWNTs. SWNTs, SDS, and COCO were used as received. COCO consisted of 20% sodium alkyl ether sulfate and alkyl betaine detergents. SWNT powder was mixed with 1 mL of a 20 mM phosphate buffer solution (pH 7.0) containing SDS or COCO. The concentrations of SWNTs and detergents in the final mixture were 0.5 mg/L and 1%, respectively. The mixtures were sonicated using a probe-type or a bath-type sonicator. The probe-type sonicated samples were processed for 90 min at 0 \u00b0C, an amplitude of 60%, a frequency of 20 kHz, and a power of 130 W. 
The bath-type sonicated samples were processed for 90 min at 0 \u00b0C, a frequency of 45 kHz, and a power of 100 W. To evaluate the stability of the suspensions, 300 \u03bcL of each sample was diluted with 2700 \u03bcL of a detergent-free phosphate buffer solution in a plastic cuvette. The concentration of SDS or COCO was decreased to 0.1%. The samples were photographed immediately after preparation and then 1 and 7 days later. The remaining samples were centrifuged at 15\u2009000 rpm and 8 \u00b0C for 180 min and, thereafter, 70% of each supernatant was used as the SWNT suspension for the subsequent experiments. For ultraviolet\u2013visible (UV\u2013vis) optical spectroscopy experiments, the centrifuged SWNT suspensions were diluted 10 times with a detergent-free phosphate buffer solution in a two-sided clear quartz cell cuvette. A UV\u2013vis spectrophotometer was used to record the absorbance of the samples in the wavelength range of 400\u20131100 nm.63 The PL profiles of all samples were obtained in the excitation and emission wavelength ranges of 600\u2013800 and 900\u20131400 nm, respectively. A PL spectrometer was used to record the PL spectra of the samples. For the PL measurements, each sample was diluted with detergent-free and detergent-containing phosphate buffer solutions in a quadruple clear quartz cell cuvette until the UV\u2013vis absorbance of each sample at 808 nm was 0.1. Raman spectroscopy was carried out with a microscopic Raman spectrometer. 25 \u03bcL of the original SWNT dispersions was dropped on a glass coverslip and dried in air. Raman spectra were measured at a 532 nm excitation wavelength using a 20\u00d7 objective lens in air.65 AFM experiments were performed using an MFP-3D instrument in a phosphate buffer solution. A BL-AC40TS-C2 cantilever was used for the AC mode measurements. To prepare the samples for the AFM experiments, the SWNT suspensions were diluted with a detergent-free phosphate buffer solution.
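The normalization of each sample to an absorbance of 0.1 at 808 nm follows from the Beer-Lambert law: at a fixed path length, absorbance scales linearly with concentration, so the required dilution is simply the ratio of absorbances. A minimal sketch (the function name and example value are ours, not from the paper):

```python
def dilution_factor(measured_abs, target_abs=0.1):
    # Beer-Lambert: absorbance is proportional to concentration at a
    # fixed path length, so diluting by A_measured / A_target reaches
    # the target absorbance.
    if measured_abs < target_abs:
        raise ValueError('sample is already below the target absorbance')
    return measured_abs / target_abs

factor = dilution_factor(0.45)  # dilute 4.5-fold to reach A(808 nm) = 0.1
```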
The diluted suspensions were dropped onto a mica surface that was pretreated with a 0.01% solution of 3-aminopropyltriethoxysilane (AP-mica). Each stability evaluation experiment was repeated five times, and each suspension was prepared independently. The results of the three middle PL spectroscopy measurements for the SWNTs with chirality were used for data analysis. The PL intensity and PL peak shift of each suspension were measured three times to eliminate fluctuations in the preparation procedures, such as sonication."}
+{"text": "Emissions of NOx at a country level are also shown to vary considerably depending on the mix of vehicle manufacturers in the fleet. Adopting the on-road mix of vehicle manufacturers for six European countries results in up to a 13.4% range in total emissions of NOx. Accounting for the manufacturer-specific fleets at a country level could have a significant impact on emission estimates of NOx and other pollutants across the European countries, which are not currently reflected in emission inventories. Road vehicles make important contributions to a wide range of pollutant emissions from the street level to global scales. The quantification of emissions from road vehicles is, however, highly challenging given the number of individual sources involved and the myriad factors that influence emissions, such as fuel type, emission standard, and driving behavior. In this work, we use highly detailed and comprehensive vehicle emission remote sensing measurements made under real driving conditions to develop new bottom-up inventories that can be compared to official national inventory totals. We find that the total UK passenger car and light-duty van emissions of nitrogen oxides (NOx) are underestimated. At the local scale, estimating the emissions along individual road links is required to understand near-road exposures to air pollution. Equally, at a national scale, establishing total emissions is required to meet international obligations, such as the European National Emission Ceiling Directive (NECD).3 Moreover, environmental conditions, such as the influence of ambient temperature, can also have an effect on road vehicle emissions.5 The road transport sector is arguably a uniquely challenging sector for which to estimate emissions. In the UK alone, there are millions of individual vehicles that move in both space and time, representing a wide range of fuel types, emission standards, vehicle classes, and technologies.
Even nominally identical vehicles may behave differently based on driver behavior, vehicle mileage, and levels of maintenance. Given the wide-ranging impacts of NOx emissions into the atmosphere, it is important that emission estimates are robust and representative of the region being considered. In Europe, over the past decade, there has been substantial focus on how road vehicle emissions of NOx contribute to ambient nitrogen dioxide (NO2) concentrations, which have often exceeded ambient air quality limits.6 Emissions of NOx also play a central role in the formation of O3 and PM2.5, both of which are important pollutants from a direct health impact perspective and in terms of wider environmental damage. Extensive evidence of considerable differences between emissions measured in the laboratory for Type Approval purposes and real driving emissions has been widely reported and is well established.8 However, the incorporation of increasingly available real driving emissions data into emission inventories has not been as extensive. Of particular recent interest has been the emission of NOx from road vehicles.9 In 2018, the NAEI indicated that the transport sector was responsible for 52% of the UK\u2019s NOx emissions, with 31% coming from road transport.10 The NAEI forms the basis of reporting total UK emissions as a part of the National Emissions Ceiling Directive,1 as well as providing an input to local and regional scale air quality models. It is important therefore that the inventory accurately represents the emissions from sectors such as road transport. In the UK, the National Atmospheric Emissions Inventory (NAEI) is the primary inventory that categorizes the emissions of many greenhouse gases and air quality pollutants.
It covers multiple sectors, including industry, agriculture, land use, energy generation, and transport. Like many European emission inventories, the UK NAEI relies heavily on the COPERT emission factor approach for estimating road transport emissions,12 based on recommendations from the European Monitoring and Evaluation Program (EMEP)/European Environment Agency (EEA) Emission Inventory Guidebook.13 Initially, the emission factor development was based entirely on laboratory measurements. More recently, portable emission measurement systems (PEMSs) have been incorporated into the emission factor development. The 2019 EMEP/EEA guidebook notes that a combination of laboratory and on-board measurements is now typically used for emission factor development, with other methods such as vehicle emission remote sensing and tunnel studies being used for validation purposes. Indeed, the literature encompasses studies which have used PEMS,15 vehicle emission remote sensing,17 and even aircraft-based flux measurements18 to independently validate emission inventory estimates.7 Measuring relatively few vehicles using laboratory-based or on-vehicle measurement techniques such as PEMS can provide detailed single-vehicle emission information, but it is challenging to measure many vehicles using these methods due to cost and time constraints. Choosing a representative sample of a country\u2019s vehicle fleet from which to derive emission factors is therefore a potentially important issue. The advantage of remote sensing over other methods is the large sample sizes and comprehensive fleet coverage, which provide a better representation of in-use vehicle fleets. It is known that emissions can vary significantly by vehicle manufacturer and vehicle model, but currently no account is taken of these differences in emission factor or inventory development. A focus on the UK over other European countries for inventory verification is advantageous given that Great Britain is an island.
In countries such as Germany, France, and Belgium, gasoline and diesel fuel sold may not be used within the country itself, leading to some uncertainty in the allocation of fuel use (and hence emissions) to a specific country. Conversely, in the UK close to 100% of road transport fuel sold is used in the UK. This means that robust comparisons can be made between so-called \u201cbottom-up\u201d and \u201ctop-down\u201d inventory methods. Specifically, there is high certainty in the top-down calculations that rely on total fuel sale data. The primary focus of this work is to exploit the comprehensive fleet coverage provided by vehicle emission remote sensing to develop highly detailed and comprehensive bottom-up NOx emission estimates, which have persistently been thought to be underestimated, and to provide a national level quantification of total emissions. A specific focus is to estimate NOx, CO, and NH3 emissions at a UK scale for light-duty vehicles (LDVs). We achieve this aim through the calculation of distance-based emission factors and make direct comparisons with the 2018 UK inventory. Additionally, calculations are made of CO2 emissions to enable a direct comparison with fuel use statistics and provide a means of verifying the methods developed. Finally, for the first time, we consider the influence of different vehicle manufacturer fleet mixes, which can be determined from remote sensing data. By considering different measured vehicle manufacturer proportions in other European countries, we establish how these contrasting manufacturer proportions affect total emissions of NOx and CO2. 2 2.1 Vehicle emission remote sensing has been described in detail in other publications20 but is summarized here. A remote sensing device (RSD) consists of a UV/IR source, multiple detectors, optical speed-acceleration bars, and a number plate camera. A RSD is deployed such that vehicles drive past the set-up unimpeded, with the concentrations of gases in their exhaust plumes and their speed and acceleration being measured remotely via open path spectroscopy.
Spectrometry is achieved using a collinear beam of IR and UV light which, after being absorbed by exhaust plumes, is separated into its two components within the detector. Nondispersive infrared detectors measure CO, CO2, hydrocarbons (HCs), and a background reference. The UV component passes through a quartz fiber bundle and is used to measure NH3, NO, and NO2. The development of and operating principles behind vehicle emission remote sensing have been described in considerable detail in other publications. One hundred measurements are taken in half a second for each vehicle exhaust plume when the rear of the vehicle is detected. From these measurements, the ratio of a pollutant to CO2 is calculated, from which fuel-specific (g kg\u20131) emission factors can be calculated. The further transformation from fuel-specific to distance-specific (g km\u20131) emission factors is described later in the text. Vehicle number plates are recorded alongside emission and speed measurements and are used to obtain vehicle technical data, such as engine size, fuel type, Euro standard, and vehicle manufacturer. In this study, the data were obtained from CDL Vehicle Information Services Ltd., a commercial supplier. CDL retrieved the data from the Driver and Vehicle Licensing Agency and the Society of Motor Manufacturers and Traders Motor Vehicle Registration Information System. Data relating to the total mileage of each vehicle at its last annual technical inspection test were also obtained through CDL for vehicles greater than three years old.21 The measurements were supplemented with data from the University of Denver Fuel Efficiency Automobile Test (FEAT) instrument.22 A total of 304,039 measurements were collected of Euro 2\u20136 vehicles in three key classes of LDVs: diesel light commercial vehicles (LCVs) and diesel and gasoline passenger cars (PCs).
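The plume-ratio to fuel-specific conversion can be sketched as follows. This is a simplified illustration under assumptions of ours, not the paper's exact formulation: all fuel carbon is assumed to leave the tailpipe as CO2 (CO and HC carbon neglected), the fuel carbon mass fraction is taken as 0.86, and NOx is expressed as NO2.

```python
# Molar masses (g/mol)
M_C = 12.011
M_NO2 = 46.005  # NOx conventionally reported as NO2 equivalent

def fuel_specific_ef(nox_to_co2_molar_ratio, carbon_fraction=0.86):
    # mol of carbon (hence, by assumption, CO2) per kg of fuel burned
    mol_c_per_kg = 1000.0 * carbon_fraction / M_C
    # mol NOx per kg fuel, converted to grams as NO2
    return nox_to_co2_molar_ratio * mol_c_per_kg * M_NO2

ef = fuel_specific_ef(0.002)  # illustrative plume ratio, not a measured value
```

With a plume molar ratio of 0.002 this gives roughly 6.6 g NOx per kg of fuel, a plausible order of magnitude for a diesel car.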
A statistical summary of the data set is provided in . Vehicle emission measurements were conducted between 2017 and 2020 at 37 sites across 14 regions in the United Kingdom using two remote sensing instruments\u2014the majority with the Opus AccuScan RSD5000. 2.2 The calculation of distance-specific (g km\u20131) emission factors is required for the \u201cbottom-up\u201d approach to estimating total UK emissions. The vehicle power-based approach used has been previously developed and evaluated24 but is briefly outlined here. The principal steps include (i) the development of a vehicle power-based method to calculate g km\u20131 emissions from remote sensing data, (ii) the development of relationships that enable the prediction of emissions over any 1 Hz drive cycle, and (iii) the application of the g km\u20131 emissions at a UK national scale. Because vehicle emission remote sensing measurements tend to be made under higher engine load conditions than full drive cycle averages, their direct use would tend to overestimate mean exhaust emissions. The method provides a way in which to estimate emissions for typical real-world drive cycles that may have lower average engine loads, for example, for typical urban driving. A physics-based approach to calculating vehicle power is used, accounting for all the main forces acting on a vehicle.
First, instantaneous vehicle power is calculated as the total power to accelerate the vehicle, to overcome the road gradient, to resist both rolling and air resistance, and to power auxiliary devices, adjusted for losses in the transmission. Vehicle specific power, VSP, is calculated as the instantaneous power divided by the vehicle mass. As none of the road load or aerodynamic drag coefficients were known, generic values taken from Davison et al. were used. Relationships between emissions in g s\u20131 and VSP for vehicles with different fuel types, vehicle types, Euro standards, and pollutant species were established using generalized additive models (GAMs), which are flexible enough to consider nonlinear relationships between variables. The mgcv R package26 was used to fit the models. These models were used to predict emissions for 1 Hz drive cycles from PEMS tests obtained from the UK Department for Transport (DfT).27 The PEMS data contained a total of 4,243 km of real-world driving over 58 PEMS routes which included urban, rural, and motorway portions. The maximum VSP value across these drive cycles was 37.2 kW t\u20131, and the GAMs were fit between 0 and 40 kW t\u20131. Emissions from negative VSP conditions were assumed to be zero. The approach is flexible enough that it can be applied to any 1 Hz drive cycle for which VSP is available or can be calculated. Distance-specific emission factors (g km\u20131) can be calculated as the total of all time-specific emissions divided by the total distance. The distance-specific emission factor used for the total UK emission estimation was the mean of all the distance-specific factors from each of the 58 real-world drive cycles. Factors were calculated separately for each of the urban, rural, and motorway conditions.
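The VSP calculation itself is compact. The coefficients below are the widely quoted generic light-duty values (rotating-mass factor, rolling resistance, and drag terms); the paper uses generic coefficients from Davison et al., which may differ, so treat this as an illustrative sketch rather than the paper's exact parameterization:

```python
def vsp_kw_per_t(v, a, grade=0.0):
    # v: speed (m/s), a: acceleration (m/s^2), grade: rise/run fraction.
    # Terms: inertia (rotating-mass factor 1.1), gravity on the gradient,
    # rolling resistance (0.132), and aerodynamic drag (0.000302 * v^3),
    # all already normalized by vehicle mass (kW per tonne).
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v ** 3

vsp = vsp_kw_per_t(15.0, 0.5)  # ~54 km/h with mild acceleration, flat road
```

Values above roughly 40 kW/t would fall outside the fitted GAM range described in the text, and negative-VSP conditions are assigned zero emissions.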
The next step is to apply these emission factors to the corresponding driving activity data in the UK, thus providing a means of estimating total UK emissions. With 1 Hz modeled time-specific emissions, distance-specific emission factors can be derived as described above. In order to apportion this vehicle mileage data into different fuel types, information available in the remote sensing data, such as average mileages by fuel type, was used, as provided in Table S1. Distance-specific emission factors for each vehicle type were used to calculate a bottom-up estimate of total UK emissions through multiplication with UK-wide mileage data. Estimates of the total distance travelled by UK PCs and LCVs per annum were obtained from a publicly available government database. The vehicle mileages are already apportioned into urban, rural, and motorway driving conditions but not by fuel type or Euro standard. To calculate UK totals for the exhaust pollutants, the g km\u20131 emission factors for each combination of pollutant species, vehicle category, Euro standard, and driving condition were multiplied by the corresponding apportioned mileage. While emission inventories themselves are often not reported with the associated uncertainties, the estimates presented here are provided alongside the 95% confidence interval calculated from the original g kg\u20131 measurements. Apportionment into Euro standards is straightforward, simply applying the ratio between the five Euro standards for each of the three vehicle categories\u2014Diesel PC, Gasoline PC, and Diesel LCV\u2014given in . The value of F, the ratio of the bottom-up estimate to the emission reported in the NAEI, is therefore also the factor by which one would multiply the emission reported in the NAEI to arrive at the emission estimated using the vehicle emission remote sensing data.
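The last two steps, turning modeled per-second emissions into a g/km factor and then into a national total, reduce to simple arithmetic. The sketch below uses invented toy numbers purely to illustrate the bookkeeping, not any value from the paper:

```python
def distance_specific_ef(g_per_s, v_m_per_s):
    # g/km over a 1 Hz drive cycle: total emitted mass divided by
    # total distance driven (each sample covers 1 s, so sums suffice).
    total_g = sum(g_per_s)
    total_km = sum(v_m_per_s) / 1000.0
    return total_g / total_km

# Toy 1 Hz cycle (hypothetical values, for illustration only)
nox_g_per_s = [0.004, 0.006, 0.008, 0.006]
speed_m_per_s = [10.0, 12.0, 14.0, 12.0]

ef = distance_specific_ef(nox_g_per_s, speed_m_per_s)  # g/km

# Scaling to a total: multiply by the vehicle km of the matching
# vehicle category, Euro standard, and driving condition, then convert
# grams to kilotonnes (1 kt = 1e9 g). 250e9 km is an invented figure.
total_kt = ef * 250e9 / 1e9
```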
An F of 1 would mean that these two values were the same, F > 1 would mean the emission is under-reported in the NAEI, and F < 1 would mean that the emission is over-reported. The estimated UK totals can be directly compared with the NAEI. The comparison can be expressed through the use of a ratio between the bottom-up estimated emission and the emission reported in the NAEI, here labeled F. The NAEI reports air quality pollutant sources for four driving conditions\u2014urban, rural, and motorway, and a separate cold start contribution.29 In common with most emission inventories, the increased emissions of some pollutants after engine start are considered separately from hot, stabilized emissions. For some pollutants, such as CO and HCs, the cold start emissions can be substantial. In the NAEI, cold start emissions are only considered in urban areas and reflect the estimated number of trips. The potential importance of cold start emissions raises the question about the extent to which vehicle emission remote sensing includes a cold start contribution. Given that the vast majority of emission measurements are made in urban areas, it might be expected that remote sensing data would include some fraction of elevated emissions due to cold starts. However, for gasoline vehicles, the three-way catalyst reaches effective operating temperature within 1\u20132 min of the engine starting. This means that it is highly unlikely that remote sensing measurements include a significant proportion of cold start emissions, given the proximity required of a cold start to the measurement location. Therefore, when urban comparisons are made, the estimates are compared with both the urban value from the NAEI and a combination of the urban and cold start contributions. The NAEI reports CO2 from fossil fuels only, so the figures reported do not include the additional presence of biofuels.
Assuming that diesel in the UK contains up to 3.7% biodiesel and gasoline up to 4.6% bioethanol,30 an adjustment factor can be calculated through the multiplication of the bio-/fossil-fuel ratio by the ratio of fuel CO2 emissions (kg) per liter of the biofuel and fossil fuel.31 The adjustments are therefore 1.032 for gasoline and 1.034 for diesel and are used to uplift the reported NAEI CO2 values. The NAEI is required to report road transport emissions of CO2.4 To investigate the importance of different fleet compositions in European countries, data from the CONOX project were analyzed, which provides a database of European vehicle emission remote sensing measurements.32 These data provide over 700,000 remote sensing measurements for the UK, Sweden, Switzerland, Belgium, France, and Spain. The data usefully contain information on the breakdown of different manufacturers and vehicle models, which can be used to consider the effects on NOx emissions due to different national fleet mixes. An advantage of these data is that they provide a direct, on-road measurement of the vehicle fleet, which accounts for the vehicle km driven by vehicles made by different manufacturers. These data are considered more representative of in-use vehicle fleets than, for example, statistics on new vehicle sales, which would not reflect actual distances travelled by different vehicle types. The data do show strong country-specific characteristics. For example, France is dominated by Renault and Peugeot-Citroen, Sweden by Volkswagen and Volvo, and Switzerland by Volkswagen and, to a lesser extent, Daimler and BMW. We consider the total emissions of CO2 and NOx based on UK mileage data for Euro 5 and Euro 6 diesel PCs but using the fleet mix for each country.
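The biofuel uplift can be reproduced with a one-line calculation. The per-litre CO2 figures below are typical values assumed for illustration (they are not given in the text), and the blend percentages are read as volume fractions; with these assumptions the factors land on the reported 1.032 and 1.034:

```python
def biofuel_adjustment(bio_fraction, co2_per_l_bio, co2_per_l_fossil):
    # Uplift for fossil-only CO2 totals: 1 plus the biofuel fraction
    # multiplied by the ratio of per-litre CO2 emissions (kg/L).
    return 1.0 + bio_fraction * (co2_per_l_bio / co2_per_l_fossil)

# Assumed typical per-litre CO2 values (kg/L), not from the paper:
# bioethanol ~1.51 vs gasoline ~2.19; biodiesel ~2.43 vs diesel ~2.62.
gasoline_factor = biofuel_adjustment(0.046, 1.51, 2.19)
diesel_factor = biofuel_adjustment(0.037, 2.43, 2.62)
```

Multiplying the reported fossil-only NAEI CO2 totals by these factors gives the biofuel-inclusive values used for comparison.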
In this respect, the analysis addresses the question of \u201chow would UK emissions of NOx change if the UK had the fleet of France, Spain, Belgium, Switzerland, or Sweden?\u201d The calculations keep the vehicle km the same between the fuel type used and Euro standard, that is, that of the UK, and simply consider different proportions of manufacturer families according to the fleets in other countries. Manufacturer and engine size-specific emission factors were developed for this purpose using the UK-based data set outlined in the previous section. 3 3.1 The relationship between VSP and emission rate in g s\u20131 for NOx and CO2 is shown in . The models showed the statistical significance (P < 0.05) of VSP in modeling both CO2 and NOx in all three vehicle categories for all five Euro standards considered. Most of the relationships are shown in , which highlights the benefit of expressing emissions as a function of vehicle power demand rather than vehicle speed. Indeed, an inherent problem with speed-dependent emission factors is that as the speed tends to zero, the emissions tend to infinity, which means fitting a model through the data is difficult. All predicted CO2 and NOx emissions and their associated F values are tabulated in . An important first step is to establish whether there is a carbon/energy balance for the detailed bottom-up approach to estimating CO2 at a national scale. The total estimated emissions from this method were 91.3 \u00b1 0.9 Mt CO2. This value is very similar to the NAEI value of 90.0 Mt, giving an F value equal to 1.01. The similarity extends when considering the two fuel types independently\u2014gasoline vehicles were shown to have an F value of 1.00 and diesel vehicles 1.02. When considering diesel PCs and LCVs separately, however, divergence from the NAEI is apparent, with the PCs having an associated F of 1.14 and the LCVs 0.81. The bottom-up calculations therefore suggest a different allocation of diesel fuel use (or CO2 emissions) than is suggested by the NAEI, although the sum of PC and LCV CO2 is in good agreement.
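The country-swap counterfactual holds activity (vehicle km by fuel type and Euro standard) fixed and changes only the manufacturer-family shares. A sketch with hypothetical shares and emission factors (none of these numbers come from the paper):

```python
def total_emissions_g(ef_g_per_km, shares, vehicle_km):
    # Weighted-average emission factor over manufacturer families,
    # multiplied by the (fixed) total activity in vehicle km.
    avg_ef = sum(shares[fam] * ef_g_per_km[fam] for fam in shares)
    return avg_ef * vehicle_km

# Hypothetical manufacturer-family NOx factors (g/km) and fleet shares
ef = {'family_a': 0.5, 'family_b': 0.8}
uk_mix = {'family_a': 0.7, 'family_b': 0.3}
fr_mix = {'family_a': 0.3, 'family_b': 0.7}

uk_total = total_emissions_g(ef, uk_mix, 1e9)
fr_total = total_emissions_g(ef, fr_mix, 1e9)  # same activity, higher total
```

Only the share dictionary changes between scenarios, which is exactly the sense in which the text asks what UK emissions would be with another country's fleet.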
It should be noted that the comparison for gasoline is considered more robust than for diesel fuel because almost all gasoline use in the UK (97%) is for PCs, whereas diesel fuel is used in a wide range of vehicle types including PCs, LCVs, buses, and other heavy-duty vehicles, which introduces some uncertainty in the allocation between diesel-fueled vehicles.33 For NOx, the total UK estimates were 280 \u00b1 6.3 kt NOx. On a UK scale, the NAEI underestimates NOx emissions, with F between 1.24 and 1.32 depending on whether cold start emissions are included or excluded, respectively. These comparisons can be made at a more disaggregated level by considering the vehicle categories individually. Estimated gasoline PC emissions were higher than those reported in the NAEI, with NOx emissions of 29.5 \u00b1 1.5 kt (1.82 < F < 1.95). The NOx predictions for light-duty diesel vehicles were similarly under-reported in the NAEI, being 251 \u00b1 5.0 kt NOx (1.19 < F < 1.27). Of this diesel total, PCs contribute 169 \u00b1 2.9 kt NOx (1.44 < F < 1.54) and LCVs 81.2 \u00b1 2.0 kt NOx (0.88 < F < 0.94). With respect to NO2, F values were between 0.77 and 1.27. Conversely, NOx is shown to have F values between 0.70 and 2.24, with some important variability depending on driving conditions. The comparison between the NAEI and the bottom-up remote sensing estimates is made on a fully disaggregated level, including vehicle category and driving condition, as shown in . A particular interest is the quantification of NOx emissions in urban areas, where exposures to the elevated concentrations of NO2 are the greatest. In total, the NAEI reports 84.0 kt NOx from LDV activity in urban areas and from cold start emissions, with 70.1 kt coming from just urban emissions. Conversely, the new bottom-up estimates suggest total urban NOx emissions of 103 \u00b1 2.5 kt, a difference of 19 kt including cold start emissions or 32.9 kt excluding them.
These results suggest the NAEI may be under-reporting urban emissions by 22\u201347%. As discussed previously, it is considered that the remote sensing measurements comprise a very low proportion of enhanced emissions due to cold start effects. For this reason, the underestimate in urban NOx emissions is considered to be closer to 47% than 22%. The total UK bottom-up estimates for the other air quality pollutants were 537 \u00b1 25.4 kt CO and 9.1 \u00b1 0.5 kt NH3. At the UK scale, the NAEI is seen to consistently underestimate these emissions, with F = 2.86 for CO and F = 2.23 for NH3. The equivalent visualization is shown in Figure S3. We re-calculated the bottom-up emissions using the NAEI fleet composition assumptions and the NAEI allocations of gasoline and diesel fuel use in urban areas. The NAEI assumed a newer vehicle fleet compared with the observation-based values used for the bottom-up calculations. Using these NAEI assumptions resulted in UK-wide LDV emissions with F values of 1.05 for CO2 and 1.06\u20131.13 for NOx, or 1.19 and 1.05\u20131.26 in only urban areas. However, there were some significant disparities at a disaggregated level when using the NAEI fleet assumptions, for example, with F = 1.20 for gasoline CO2 (compared with F = 1.00 using the bottom-up methods). These results strongly suggest that the use of the observation-based fleet information in the bottom-up emission calculations provides a much better explanation of the total UK emissions. On this basis, much of the discrepancy between the NAEI and the bottom-up methods is associated with the vehicle fleet and vehicle activity assumptions rather than the emission factors.
Nevertheless, even adopting the NAEI vehicle fleet assumptions still results in up to a 26% underestimation of NOx emissions compared with the bottom-up calculation in urban areas. It is important to consider the underlying reasons behind the disparity between the bottom-up estimates and the values reported in the NAEI, which could be associated with the vehicle fleet assumptions and/or the emission factors; for this reason, the bottom-up emissions were re-calculated based on the fleet composition assumptions used in the NAEI. 3.2 An inherent benefit of the vehicle emission remote sensing data for use in emission factor and emission inventory development is the comprehensive coverage of a wide range of vehicle manufacturers and models, which is difficult to achieve through laboratory or PEMS studies owing to the large number of vehicles that would need to be tested. Vehicle fleets can vary from smaller city-wide to larger country-wide scales. For example, some cities may tend to have a higher than average proportion of vehicles from a certain manufacturer. NOx emissions vary between different manufacturer groups and engine sizes, revealing considerable differences from the mean levels of emissions for each engine size and vehicle category. In this case, manufacturer \u201cfamilies\u201d have been used, which group similar engine types across different manufacturers.7 For example, the Volkswagen group (VWG) consists of Volkswagen, Audi, Skoda, and Seat. With large databases of vehicle emission remote sensing data, it is possible to disaggregate the data further. For example, an account can be taken of the mandatory and voluntary software and hardware fixes applied to certain VWG vehicles following the dieselgate scandal, which has had an appreciable effect on reducing NOx emissions from certain vehicle models, reducing emissions by between 30 and 36%.36 There are considerable differences in emissions of NOx between different manufacturers and vehicle models. Such differences would not be important if vehicle fleets were uniformly mixed throughout Europe.
However, there are considerable differences between the compositions of vehicle fleets across different countries, which could have important effects on country-level emissions of different pollutants. Emission factor models used throughout Europe do not account for manufacturer-level differences in emissions and instead provide generic factors, for example, for Euro 5 diesel PCs below 2.0 L engine capacity. However, it is clear from the data that emissions of NOx from a French-like fleet of diesel cars are 7.9% higher than from a UK-like fleet, despite the fact that CO2 emission estimates decrease by 12.7%. Conversely, the NOx estimate of a Swedish fleet mix is 5.5% lower despite a 1.2% increase in CO2. The results of the fleet composition analysis are shown in . There is a trade-off between CO2 and NOx in that as CO2 emissions decrease, emissions of NOx tend to increase. The higher emissions of NOx for a French fleet are attributable to two main factors. First, a higher proportion of small diesel-engine PCs, which tend to have higher NOx emissions (see ), with a smaller mean engine capacity than the 2152 cm3 of Switzerland in the CONOX database. Larger diesel-engine vehicles tend to use selective catalytic reduction for NOx control, which is highly effective, rather than Lean NOx Traps, which are not as effective for NOx control.37 Second, France has a higher proportion of manufacturers, such as Renault, that tend to have higher in-use emissions of NOx compared with most other manufacturers.7 In general, there are differences in NOx emissions between the Euro 5 and 6 diesel PC fleets of the different countries, as shown by the comparison of Sweden with France; differences that are not currently reflected in emission factors or inventories. This finding highlights the potential benefits of considering the fine details of vehicle fleets when attempting to estimate emissions.
Given the growing amount of detailed vehicle emission remote sensing data available in Europe and elsewhere,41 the methods adopted in the current work could be used in many other countries. Differences in the magnitude of NOx emissions from current assumptions will likely have several implications. First, they would directly affect the evaluation of urban exposures to concentrations of NO2, with potential impacts on meeting the European Directive annual mean limit of 40 \u03bcg m\u20133. Second, a country-level change in estimated NOx emissions of around 10% compared with current assumptions would have wider air quality implications, especially for regional air quality modeling activities. Furthermore, at a country level, increases or decreases in total NO"}
+{"text": "We describe here the results of a multidisciplinary study on an infant mummy from 16th century Upper Austria buried in the crypt of the family of the Counts of Starhemberg. The macroscopic-anthropological, radiological (whole-body CT scan), histological (skin tissue), and radiocarbon isotope investigations suggested a male infant of 10\u201318 months' age, most likely dying between 1550 and 1635 CE, who presented with evidence of metabolic bone disease with significant bilateral flaring of the costochondral joints resembling the \u201crachitic rosary\u201d of the ribs, along with straight long bones and a lack of fractures or subperiosteal bleeding residues. Although incompletely developed, the osteopathology points toward rickets, without long bone deformation of the upper or lower extremities. The differential diagnosis is vitamin C deficiency (scurvy). As additional pathology, there was significantly enlarged subcutaneous fat tissue along with a histologically enlarged subcutaneous fat layer consistent with infantile adipositas as a coincident disorder. Finally, remnants of lung tissue with pleural adhesion of the right lung indicate possibly lethal pneumonia, a disease with an increased prevalence in vitamin D-deficient infants. Ultimately, the skull presented with extensive destruction of the bones of the base and dislocation of the bones of the skull squama. These changes, however, are most likely post-mortal pseudopathology, the result of burial in a flat, narrow coffin, because there were no bone fractures or residues of bleeding/tissue reaction that would have occurred whilst the patient was alive. Whilst recent anthropological and palaeopathological examination of human remains of past populations provides more and more insight into living conditions, disease, and possibly the cause of death in historic populations, the information on infants and their fate is comparatively sparse.
This is mostly due to the fact that the preservation of infantile human remains is frequently limited due to their smaller size, the significantly higher fragility of the biomaterial, and less care during material recovery. Until now, there exist only isolated case reports or small series on infant mummies from areas with a cultural history of embalming or non-intentional mummification, such as in ancient Egypt and South America; only rare cases of mummified infants have been described from European locations. Our interdisciplinary study reports another case of a well-preserved aristocratic infant mummy from the 1600s CE. The little body survived into present times because it belonged to a member of a high aristocratic family with a burial in a protective crypt setting. This study provides clear evidence of infantile palaeopathology which may have partly escaped detection in cases with only preserved skeletons. The main aim of the study was to obtain relevant information about the potential identification of the infant. The secondary aim was to identify the status and nature of its tissue preservation and any measures necessary for the maintenance of the corpse. The naturally mummified body of the infant comes from the family crypt of the Counts of Starhemberg, one of the oldest aristocratic families in Austria. It is located close to the family residence at Wildberg castle, in the small village of Hellmons\u00f6dt, Upper Austria, some 15 km north of the Upper Austrian capital Linz in the mountain region of the \u201cM\u00fchlviertel\u201d. The study was approved by the Diocese of Linz, Upper Austria. Additionally, oral consent was obtained from the local church authorities and the head of the still-existing family branch. During some restoration work on the crypt, the opportunity was taken to open the infant's coffin and a macroscopic investigation was undertaken. 
The body underwent anthropological measurements as far as possible and was subsequently submitted to a whole-body CT scan. The scan was performed in the supine position with a slice thickness of 0.625 mm, an interval of 0.625 mm, 120 kV, and 200 mA in the standard algorithm as previously described. Further relevant data were obtained from a soft tissue biopsy for radiocarbon dating and histological examination. A small piece (c. 6 x 2 mm) of skin/subcutaneous soft tissue was removed from the lower lumbar region of the mummy with a scalpel. The material was divided into two: a larger piece for radiocarbon dating and a smaller one for histology. The radiocarbon dating was performed after the extraction of skin protein (mainly collagen) according to routine protocols. The histological analysis was prepared with a rehydration procedure, followed by embedding and cutting as previously described in detail. The male infant was found enveloped in a very elaborate long silk coat which included a hood covering the skull. In parallel with this anthropological\u2013paleopathological investigation, the coffin and coat of the infant were investigated and restored by the Department of Art and Heritage Conservation, Diocese of Linz, Austria (E. Biegler-Machow and JW). Whilst the coffin did not provide further useful information, the coat analysis showed a socially high-status fabric made of perfectly woven silk which was excellently preserved. The body was lying in the supine position, the right arm along the right side of the body, the left arm angled at the elbow joint with the left hand resting on the upper abdomen. The skin of the ventral, dorso-thoracic, and abdominal walls was completely intact without any evidence of incisions or other manipulations. It was considerably darkened, but otherwise very well preserved, with, for instance, the finger and toe nails intact. 
The umbilicus was significantly retracted. In contrast to the post-cranium, the face and skull appeared abnormal in that the face seemed to be flattened, with several small skin defects at the chin and nose. The body had a crown-to-heel length of 53 cm. Due to the facial deformity, no further measurements were taken from the skull. The whole-body scan produced 613 axial slices which were also used for subsequent three-dimensional reconstructions. The various skeletal elements of the post-cranium were anatomically aligned. Typical ossification centers were seen at the epiphyses of the long bones, but were absent from the small bones of the hand, wrist, and feet, indicating an individual aged between 12 and 18 months. The post-cranial skeleton showed distinct pathology. The costochondral joints of the rib cage had bilateral knob-like expansions. The skull was morphologically remarkable with an unusually flat face and skin defects. No myocardial remains could be identified in situ. In the upper abdomen, the remains of the liver and, remarkably, the intestinal loops, which appeared expanded and fixed in a net-like arrangement, were present. These organ remnants were anatomically located and without obvious pathological findings. The soft tissue of the subcutis was remarkable in that it seemed considerably thickened. This was most evident at the umbilicus, which was retracted by approximately 1 cm. This strongly suggested significant fat tissue accumulation. Radiocarbon dating showed two possible time segments of the calibration curve within the 95.4% probability range. The measurements gave 360 \u00b1 26 years before present, which means a calibrated death time period either between 1456\u20131529 CE or 1550\u20131635 CE. A very small full-thickness sample of the skin biopsy was used for histopathological evaluation. The superficial epidermis was absent, as expected. 
However, the subcutaneous collagenous soft tissue was excellently preserved, showing the very typical woven bundles. Interspersed were islands of monovacuolar fat cells which were significantly enlarged, particularly when compared to an age- and site-matched control case. The life and living conditions of infants in historical populations have only been investigated to a limited extent. The main reason is the lack of study material, mainly due to the rapid degradation of infantile human remains following burial and the extreme rarity of the practice of embalming infants and preserving the mummies. In populations that practiced artificial embalming, more cases, or even small case series, of infant mummy analyses are available, such as in Egypt and South America. In this report, we describe one of those rare cases where spontaneous mummification of an infant occurred, found in an aristocratic family crypt. This was in a small town in Upper Austria belonging to the Starhemberg family, who lived close by. They are one of the oldest and most renowned aristocratic families in Austria, tracing back to the 11th century. As a first aim, this study was established to hopefully identify the individual. Besides the lack of direct written evidence from the coffin, the parish records from Hellmons\u00f6dt do not start before 1659 CE; therefore, contemporaneous documentation of the identity was not possible. We, therefore, performed radiocarbon dating of a small subcutaneous sample which indicated the periods of death between 1456 and 1529 CE or 1550 and 1635 CE. This relatively large span of dates comes from the flat curve of the calibration dataset that is used to convert the non-calibrated measurements into the expected span of the death period. 
Since, however, the crypt building from the year 1499 CE was modified in the late 16th or the beginning of the 17th century, the first range of dates seems very unlikely, since the cadaver would have had to be kept at another adequate burial place until the crypt was ready. There is no indication that the respective branch of the Starhemberg family had another family crypt at (or before) that time period. Accordingly, the infant most probably died between 1550 and 1635 CE. In addition, the first burial of an adult, Reichard von Starhemberg, in the reconstructed crypt took place in 1613 CE. Trauma can be ruled out, since the costochondral junctions of both sides of the thorax were similarly affected. The findings were classified in accordance with the Istanbul terminological framework in palaeopathology. Rickets is the consequence of a lack of vitamin D, a vitamin that is first intestinally processed from pro-precursors, then further modified in the skin under the non-enzymatic action of sunlight to precursors, and then further modified in the kidney to the active vitamin. The presence of sufficient vitamin D is required to mineralise the non-mineralised osteoid bone matrix into typical bone. Therefore, any interference with this pathway may result in a lack of vitamin D, and result in rickets in the growing skeleton. Rickets was first described by Daniel Whistler (1619\u20131684 CE) in 1645 CE, rapidly followed by the description by Francis Glisson (1597\u20131677 CE) in 1651 CE. The latter had already attributed the disease to nutritional causes. Vitamin C deficiency (scurvy) was first described clinically by M\u00f6ller in 1859 CE and Barlow in 1883 CE. Since the overall appearance of the infant clearly rules out malnutrition by lack of food, the rachitic bone lesions must have come from another disturbance of vitamin D metabolism. It is interesting that in previous times socially highly ranked people avoided sunlight exposure, and particularly darkening of the skin. 
Aristocrats were expected to have white, pale skin, whilst laborers were expected to have suntans. This also applied to small infants, who, like the Starhemberg infant, were at risk of developing rickets due to the lack of ultraviolet rays on their skin. Others have described rickets prevalence rates between 13.7 and 48.1%. The destruction affected the skull but did not affect the rest of the body. It has previously been observed that dehiscence of the coronal suture may occur as an ossification variant without any trauma or other manipulation. The likely conclusion is that the mummy is Reichard Wilhelm, 1625\u20131626 CE, the first son of Erasmus der J\u00fcngere (1595\u20131664 CE). He had a number of pathological findings where the tentative conclusions are that he was overweight in keeping with being very well fed, had vitamin D deficiency from lack of sunlight resulting in rickets, and that the disruption of his skull bones and upper cervical spine was a post-mortem change from being placed in a coffin too flat for the skull. He died aged 10\u201318 months, probably from pneumonia. His body was wrapped in an expensive silk coat in keeping with his aristocratic status. The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author. Concept preparation: AN, SP, JW, and OP. Analysis: AN, SP, CH, and OP. Preparation of the first and final text drafts: AN, SP, JW, CH, and OP. All authors contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. 
Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
+{"text": "Zika virus (ZIKV) diagnostics are crucial for proper antenatal and postnatal care and also for surveillance and serosurvey studies. Since the viremia during ZIKV infection is fleeting, serological testing is highly valuable to inform diagnosis. However, current serology tests using whole virus antigens frequently suffer from cross-reactivity issues, delays, and technical complexity, especially in low- and middle-income countries (LMICs) and endemic countries. Here, we describe an indirect ELISA to detect specific IgG antibodies using the ZIKV envelope domain III (EDIII) protein expressed in Drosophila S2 cells as an immunogen. Using a total of 367 clinical samples, we showed that the EDIII-ELISA was able to detect IgG antibodies against ZIKV with a high sensitivity of 100.0% and specificity of 94.7% when compared to plaque reduction neutralization tests (PRNTs) as the gold standard and using 0.208 as the cut-off OD value. These results show the usefulness of the recombinant envelope domain III as an alternative to standard whole virus proteins for ZIKV diagnostics, as it improves the sensitivity and specificity of the IgG ELISA assay when used as an immunogen. This method should, therefore, be extended to serological diagnostic techniques for other members of the flavivirus genus and for use in IgM diagnostic testing. The first ZIKV outbreak was confirmed on Yap Island in Micronesia in 2007, and throughout the last decade, the virus has been responsible for a number of outbreaks throughout the world. The EDIII IgG ELISA displayed a high sensitivity of 100.0%; this can be explained by the fact that most neutralizing antibodies are correlated with the rise in IgG antibody titers directed against the EDIII protein, which is known to participate in receptor recognition. The test also displayed a high specificity of 94.7% [90.3\u201397.2] compared to the PRNT test. 
This could be explained by the fact that EDIII is one of the most diverse proteins among flaviviruses. The ZIKV strain used in the study was from the epidemic South Pacific lineage; the amino acid sequence of the EDIII is well conserved, and the divergence among the E proteins ranges between ~6% between lineages and ~2% within lineages. Globally, the sensitivity and specificity obtained show a clear improvement in the performance of the ELISA test when using the ZIKV EDIII protein as an immunogen. When compared to the PRNT test, the hands-on time is also far shorter, as it takes around 4 h to obtain a result, whereas it takes 4 to 6 days when using PRNT. Despite being also highly sensitive and specific, PRNT testing does not distinguish between IgM and IgG antibodies in order to diagnose recent or past ZIKV infection, contrary to the EDIII ELISA. The PRNT test is also expensive, labor intensive, and requires the use of a BSL-3 lab and highly trained personnel, whereas the EDIII ELISA is cheap, easy to implement, and can be performed in a standard BSL-2 lab. When compared to the standard IgG ELISA, the advantages of the EDIII assay are its high sensitivity and specificity and, moreover, the fact that no mouse brain antigen is used in the assay, which poses ethical problems due to the presence of animal tissue. Both the sensitivity and specificity results show that the EDIII IgG ELISA could be an ideal tool for serosurvey studies without the need for confirmation by neutralization assay but, more importantly, it might be useful for the differential diagnostics of congenital ZIKV infection during pregnancy or of neonatal cases with microcephaly. In preparation for future outbreaks and emergence events, it will be important to adapt the use of the ZIKV EDIII antigen in IgM ELISA assays for the detection of acute cases or as a vaccine candidate."}
+{"text": "Moreover, analyzing factors influencing the IgG anti-S response, we found that only the type of vaccine affected the antibody titer (p < 0.0001). Only mild vaccine reactions, which resolved within a few days, were reported (40% of subjects), and no severe side effects were reported for either the homologous groups or the heterologous group. Our data support the use of heterologous vaccination as an effective and safe alternative to increase humoral immunity against COVID-19. We evaluated the post-vaccination humoral response of three real-world cohorts. Vaccinated subjects primed with ChAdOx1-S and boosted with the BNT162b2 mRNA vaccine were compared to homologous dosing (BNT162b2/BNT162b2 and ChAdOx1-S/ChAdOx1-S). Serum samples were collected two months after vaccination from a total of 1248 subjects. The results showed that the heterologous vaccine schedule induced a significantly higher humoral response, followed by the homologous BNT162b2/BNT162b2 and ChAdOx1-S/ChAdOx1-S vaccines. The COVID-19 pandemic has severely impacted the world in terms of health, society, and economy, and currently, vaccination is the most effective strategy to counter SARS-CoV-2. Five vaccines have been authorized by the European Medicines Agency (EMA) and the Italian Medicines Agency (AIFA): Comirnaty (Pfizer-BioNTech), Spikevax (Moderna), Vaxzevria (AstraZeneca), COVID-19 Vaccine Janssen (Johnson&Johnson), and Nuvaxovid (Novavax). All except COVID-19 Vaccine Janssen require a two-dose vaccination schedule (primary vaccination), each at different time intervals. 
From December 2020, Italy started a vaccination campaign. The heterologous schedule (group ChAd/BNT) induced the highest antibody titers (2080 [1240\u20132080] BAU/mL), followed by the homologous mRNA vaccine schedule (group BNT/BNT) (1480 [923\u20132080] BAU/mL) and by the homologous adenovirus-based vaccine (group ChAd/ChAd) (267 [127\u2013561] BAU/mL). To assess for asymptomatic SARS-CoV-2 infection, both healthcare workers and university staff were periodically monitored with a rapid antigen test and an anti-nucleocapsid (N) antibody test, respectively. Since anti-N antibodies can only be detected after a natural infection, any positive result may indicate that a vaccinated subject was in contact with the virus. Three subjects tested positive for the antigen test (confirmed by SARS-CoV-2 RNA PCR), all in the 1\u20132 months before the vaccination (November\u2013December 2020). Anti-N IgM and IgG antibodies were found in 11 and 6 subjects, respectively. All subjects positive for IgM or IgG anti-N tested negative for SARS-CoV-2 RNA PCR. Three of the anti-N IgG positive subjects had declared a prior COVID-19 diagnosis. From those who had completely filled out the questionnaire within the Informed Consent Form (n = 175, 62% ChAd/ChAd; n = 32, 11% BNT/BNT; n = 77, 27% ChAd/BNT), we were able to collect a series of supplementary information in addition to the vaccine schedule on age, sex, BMI, smoking, diabetes, cardiovascular diseases, respiratory tract diseases, COVID-19 diagnosis, and vaccine side effects, allowing a logistic regression analysis. The results showed that only the vaccine schedule significantly affected the antibody titers (p < 0.0001). Safety considerations associated with the ChAdOx1-S vaccine have led some European countries to recommend the switch from the homologous booster to a heterologous booster, such as BNT162b2. 
Several studies have assessed the safety and efficacy of various combinations of heterologous prime-boost vaccination in clinical trials. We found that two months after vaccination, the IgG levels of the heterologous ChAd/BNT group were significantly higher than those of the homologous groups (BNT/BNT and ChAd/ChAd) and that those of the BNT/BNT group were significantly higher than those of ChAd/ChAd, in accordance also with recent reports. Moreover, we analyzed which factors could influence the IgG anti-S levels among vaccine schedule, sex, age, BMI, smoking, diabetes, cardiovascular diseases, respiratory tract diseases, COVID-19 diagnosis, and vaccine side effects, in order to identify the role of each factor net of the others. Our results demonstrated that, contrary to previously published papers, only the vaccine schedule significantly impacted the antibody response. This report is subject to some limitations. Firstly, approximately 25% of our subjects had an antibody titer that was approximated to 2080 BAU/mL (the upper limit of the assay); however, since the censored data mainly concerned the heterologous schedule of vaccination, the proposed statistical approach is also more conservative. Secondly, the impact of asymptomatic subjects on anti-S IgG levels is not fully addressed, because we received only partial information about SARS-CoV-2 infection in the period between vaccination and antibody titer analysis, or also prior to vaccination. However, by periodically testing nucleocapsid-specific IgG antibodies in the university staff, and SARS-CoV-2 antigens in the healthcare workers, to check for possible contact with the virus over the course of the study, we minimized the effects of any asymptomatic infection on our results. 
Finally, our study did not account for other mechanisms of immune protection, such as T-cell responses and their role in vaccine protection against severe SARS-CoV-2 infection. Differently from recent randomized and observational clinical studies, our data derive from real-world cohorts. These data assess the short-term (two months) humoral response induced by the primary COVID-19 vaccination, within a 12-month follow-up course. The study will end in June 2022, and the long-term dynamics of SARS-CoV-2 anti-S IgG titers of the heterologous group compared to the homologous groups of vaccinated subjects may be of utmost importance to provide prospective real-life data about immunogenicity."}
+{"text": "Lactobacillus sporogenes and Clostridium butyricum were tested against colon (HT-29 and HCT 116), lung (A549), and liver (HepG2) cancer cell lines, alone or in combination with 5-fluorouracil (5FU). Moreover, the underlying mechanism of PBT and PBT-5FU against the HT-29 cell line was evaluated using Hoechst 33342 staining, revealing characteristic apoptotic modifications, such as chromatin condensation, nuclear fragmentation, and membrane blebbing. Furthermore, an increase in the expression of the pro-apoptotic Bax, Bid, Bad, and Bak proteins and inhibition of the anti-apoptotic Bcl-2 and Bcl-XL proteins were recorded. Collectively, these findings suggest that the two strains of probiotic bacteria, alone or in association with 5FU, induce apoptosis in colon cancer cells and may serve as a potential anticancer treatment. Cancer remains a leading cause of death worldwide and, even though several advances have been made in terms of specific treatment, late-stage detection and the side effects associated with conventional drugs sustain the search for better treatment alternatives. Probiotics are live microorganisms that have been proven to possess numerous health benefits for human hosts, including anticancer effects. In the present study, the in vitro effect of the association of two probiotic strains (PBT), Lactobacillus sporogenes and Clostridium butyricum, was evaluated. Cancer is a leading cause of death globally; the year 2020 marked nearly 10 million fatal cases, the most common being lung (1.8 million), colon and rectum, and liver cancer-related deaths. 
In 2017, the World Health Assembly passed a resolution (WHA70.12) urging governments to take immediate action in order to reach the objectives of the 2030 UN Agenda for Sustainable Development, which aims at the reduction in cancer-related deaths. Probiotics are live microorganisms that in proper amounts may provide health benefits for the host; they have been proven effective in improving the immune system and intestinal health, with Lactobacillus and Bifidobacterium being the most commonly used genera, followed by Bacteroides and Clostridium. Probiotics have revealed their therapeutic benefits in the chemoprevention of cancer or as adjuvants during cancer chemotherapy; most in vitro studies have been conducted on gastric and colon cancer cells, where various probiotics decreased cell proliferation and induced apoptosis. However, similar effects were reported on some systemic cancer cells, mainly leukemia and lymphoma cells; additionally, several clinical studies emphasized probiotics\u2019 efficacy in stopping cancer progression in various cancer patients. 5-Fluorouracil (5FU) is a chemotherapeutic frequently employed in the treatment of different types of cancer, such as colon, breast, stomach, esophageal, skin, and pancreatic cancers. In the current study, two probiotic strains, Lactobacillus sporogenes and Clostridium butyricum, were tested as anticancer agents, alone or in combination with 5FU, against colon, lung, and liver cancer cells, respectively; their efficacy is assessed in a comparative manner in order to identify the potentially different mechanisms against gastrointestinal versus lung and liver cancer. Lactobacillus sporogenes and Clostridium butyricum TO-A were purchased from the American Type Culture Collection and Sigma Aldrich, Merck KGaA. The cell culture media, McCoy\u2019s 5A Medium (ATCC\u00ae 30-2007\u2122), Eagle\u2019s Minimum Essential Medium (EMEM, ATCC\u00ae 30-2003\u2122), and DMEM were purchased from ATCC. 
All the reagents corresponded to analytical standard purity and were applied according to the manufacturers\u2019 recommendations. Phosphate saline buffer (PBS), fetal bovine serum (FBS), penicillin/streptomycin mixture, trypsin-EDTA solution, dimethyl sulfoxide (DMSO), and 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) were used. The bacterial strains were cultured under proper anaerobic conditions in MRS and BHI broth, respectively, following the steps described in the literature: (i) incubation at 37 \u00b0C for 24 h; (ii) centrifugation at 3500 rpm for 10 min; (iii) washing with PBS; and (iv) resuspension in PBS and adjustment of the optical density (OD600) to correspond to 10^7 CFU/mL (colony-forming units per milliliter). Four tumoral cell lines were selected for the current study, namely colorectal adenocarcinoma (HT-29, ATCC\u00ae HTB-38TM), colorectal carcinoma (HCT 116), human hepatocellular carcinoma (HepG2), and human lung carcinoma (A549), which were purchased from ATCC (American Type Culture Collection) as frozen vials. The cell culture involved the following steps: (i) specific media addition\u2014for HT-29 and HCT 116 cells McCoy\u2019s 5A Medium (ATCC\u00ae 30-2007\u2122), for HepG2 Eagle\u2019s Minimum Essential Medium (EMEM\u2014ATCC\u00ae 30-2003\u2122), and for A549 DMEM; (ii) supplementation with 10% FBS and 1% antibiotic mixture (100 U/mL penicillin/100 \u00b5g/mL streptomycin); and (iii) standard conditions\u2014incubation in a humidified atmosphere at 37 \u00b0C and 5% CO2. Cell viability was assessed using the MTT assay, as presented in our previous study. Briefly, to determine the cytotoxic potential of the test samples, a microscopic evaluation of the cells\u2019 morphology and shape was also performed. The cells were observed under bright field illumination and photographed at 24 h after treatment and compared with the solvent (media). The photos were taken using Cytation 1. 
The analysis of the images was performed by means of the Gen5\u2122 microplate data collection and analysis software. The potential toxicity of the samples at the nuclear level was evaluated by using the Hoechst 33342 staining assay protocol according to the manufacturer\u2019s recommendations and to our previous research. cDNA was prepared with a First Strand cDNA Synthesis Kit, and quantitative real-time PCR analysis was performed using the QuantStudio 5 real-time PCR system in the presence of Power SYBR Green PCR Master Mix. Given that, following the cell viability test, the most affected cell line was HT-29, it was decided that the influence of 5FU, PBT, and PBT-5FU on gene expression should be established by applying the RT-PCR method to this cell line. The data were processed as means \u00b1 standard deviation (SD). GraphPad Prism version 6.0.0 for Windows (www.graphpad.com, accessed on 13 July 2022) was used. The differences between the data were compared by performing one-way ANOVA and Dunnett\u2019s multiple comparisons post-test. The statistically significant differences between the data were labeled with *. The samples were tested on the HT-29, HCT 116, HepG2, and A549 cell lines for 24, 48, and 72 h. In all cases, the viability percentages varied in a sample-type manner, PBT displaying an actual anticancer effect only in colorectal adenocarcinoma cells. In HT-29 cells, the cytotoxic activity of PBT alone increased in a time-dependent manner. To evaluate apoptosis, the Hoechst 33342 staining assay was performed on three samples (PBT, 5FU, and PBT-5FU); 5FU at 5 \u00b5M was selected as an indicator for apoptosis. Several apoptotic features were noticed. In HT-29 cells, PBT and 5FU induced chromatin condensation, while PBT-5FU produced chromatin condensation, nuclear fragmentation, and membrane blebbing. With regard to the data obtained in the cell viability assessments on human colorectal adenocarcinoma cells, HT-29 showed an important decrease in cell viability after the sample treatment. 
To obtain more detailed information regarding the mode of action of PBT, 5FU, and PBT-5FU on colorectal adenocarcinoma cells, the expression of certain genes involved in apoptosis was evaluated: Bax, Bid, Bad, and Bak (pro-apoptotic genes) and Bcl-2 and Bcl-XL (anti-apoptotic genes). The gut microbiota as well as the microbiota-derived metabolites have a significant impact on host immune homeostasis at both the local and the systemic level by causing changes in cell and protein expression which influence systemic inflammation and immune homeostasis. Lactobacillus sp. proved to be the most effective in improving the symptoms of inflammatory bowel disease. Certain probiotics act against Fusobacterium, which is strongly associated with colorectal cancer proliferation; moreover, probiotics decreased pneumonia as well as the need for postoperative mechanical ventilation. Lactobacillus plantarum was able to selectively inhibit 5FU-resistant HT-29 cells; however, a similar behavior was also reported in HCT 116 cells, which contradicts the findings of the current study. A possible explanation lies in the presence of C. butyricum in the PBT combination, which has the ability to modulate mucus production and to induce the glycosylation of mucins in HT-29 cells, which contain a glycosylated mucus layer. Among Lactobacillus sp. there is a compositional and structural diversity which significantly influences their antiproliferative activity, such as through the relative proportions of the individual monosaccharides in the produced exopolysaccharides; however, the level of their antiproliferative effect is time-dependent, as was recorded in the current experiment. HT-29, the human colorectal adenocarcinoma cell line, was the first established (1964) colon cancer cell line of human origin used as a model in the study of human colorectal cancers. 
The cells are known to possess specific characteristics: (i) they express functional receptors for hormones and peptides; (ii) they can synthesize the receptor of dimeric immunoglobulin A; (iii) they can be differentiated in culture under the impact of differentiation inducers; (iv) they possess the capacity to express features of enterocytes and mucus-producing cells and to secrete metabolites, growth factors, pro-angiogenic factors, cytokines, and other factors that sustain cellular survival; and (v) they maintain their cellular properties unchanged even after 100 passages. In A549 lung carcinoma cells, the application of PBT revealed modest anticancer effects; similar antagonistic interactions with 5FU were reported for the PBT-5FU combination. Using different probiotic strains (Bifidobacterium sp.), Ahn et al. reported increased cell death in A549 cells; Bacillus polyfermenticus inhibited in vitro cultured A549 cells alongside other cell lines; and the activity of Enterococcus against several cancer cell lines, including A549, was first described by Sharma et al. in 2018. Anticancer activity was also reported for C. butyricum; however, as those outcomes came from an in vivo study, one may assume that the immune response was involved in the anticancer activity of the probiotic. The expression of Bax, Bid, Bad, and Bak (pro-apoptotic genes) and Bcl-2 and Bcl-XL (anti-apoptotic genes) was quantified by means of RT-qPCR on HT-29 cells, which showed the highest cytotoxicity effects during the MTT tests. The outcome of each apoptotic phase is regulated by several genes and their interconnections, with the major contribution of the mitochondria as well as the miRNAs, which act as key factors in the apoptotic process. In conclusion, two probiotic strains, L. sporogenes and C. 
butyricum, were tested on colon, lung, and liver cancer cells, where cytotoxic effects were noticed in particular on the intermediate differentiated HT29 colon cell line. The studies at the cellular level revealed the occurrence of apoptosis under the effect of the probiotic mix, as indicated by the nuclear morphology assessment by means of Hoechst 33342 staining. In addition, at the molecular level, the expression of the pro-apoptotic markers was significantly increased, while the anti-apoptotic markers displayed a decreasing tendency. Moreover, the probiotic mix revealed a cytotoxic activity comparable to the synthetic drug 5FU, an activity which was also validated at the molecular level by the expression of pro- and anti-apoptotic markers. Collectively, the experimental data show that probiotics have the ability to efficiently fight cancer proliferation; the combination of probiotics with 5FU induced additive cytotoxic effects. Therefore, one can conclude that the two strains of probiotic bacteria may serve as a potential anticancer treatment, particularly against colon cancer. Further studies should reveal their efficiency in vivo and eventually in clinical settings.Effective cancer treatment is still a goal only glimpsed and not yet achieved due to the ever-evolving nature of the pathology itself, which poses numerous challenges and requires complex research. Probiotics have showed promising anticancer effects which, combined with their ability to fight the side effects of synthetic drugs, may provide potential useful treatments in the future. Two strains of probiotics,"}
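The RT-qPCR marker quantification described in the record above is conventionally reported as fold change via the 2^-ΔΔCt (Livak) method. The sketch below shows that arithmetic; all Ct values are purely illustrative assumptions, since the record reports no raw Ct data.

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method: normalise the
    target gene Ct to a reference gene in each condition, then compare
    treated vs control."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values: a pro-apoptotic marker (e.g. Bax) amplifying
# two cycles earlier after treatment, relative to a reference gene,
# corresponds to a four-fold up-regulation.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```

A fold change above 1 matches the record's "pro-apoptotic markers significantly increased"; anti-apoptotic markers with a decreasing tendency would yield values below 1.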
+{"text": "Central venous access devices (CVADs) can have high rates of failure due to dressing-related complications. CVADs placed in the internal jugular vein are at particular risk of dressing failure-related complications, including catheter-associated bloodstream infection and medical adhesive-related skin injury. Application of Mastisol liquid adhesive (MLA) may reduce CVAD dressing failure and associated complications by reducing the frequency of dressing changes. The aim of this study is to investigate whether, in an intensive care unit (ICU) population, standard dressing care with or without the addition of MLA improves internal jugular CVAD dressing adherence. This two-arm, parallel group randomised controlled trial will be conducted in three Australian ICUs. A total of 160 patients (80 per group) will be enrolled in accordance with study inclusion and exclusion criteria. Patients will be randomised to receive either (1) \u2018standard\u2019 CVAD dressings (control) or (2) \u2018standard\u2019 dressings in addition to MLA (intervention). Patients will be followed from the time of CVAD insertion to 48\u2009h after CVAD removal. The primary outcome is \u2018dressing failure\u2019, defined as the requirement for the initial CVAD dressing to be replaced prior to seven days (routine replacement). This study will be the first randomised controlled trial to evaluate the clinical effectiveness of MLA in the adult intensive care unit population and will also provide crucial data for patient-important outcomes such as infection and skin injury. Trial registration: Australian New Zealand Clinical Trials Registry ACTRN12621001012864. Registered on 2 August 2021. Central venous access devices (CVADs) are used extensively worldwide to deliver critical treatment and haemodynamic monitoring. 
The placement of CVADs in the jugular vein has been associated with an increased risk of central line associated bloodstream infection (CLABSI) and device failure compared to placement in the subclavian vein. Traditional practice for dressing and securement of internal jugular (IJ) CVADs has been the use of polyurethane transparent dressings with or without additional securement from sutures or commercial sutureless stabilisation devices. However, reported dressing adherence has improved from lower rates (n\u2009=\u20094) to 100% (n\u2009=\u200930) with the use of MLA. Secondary outcomes include: \u2018central line associated bloodstream infection\u2019 and \u2018primary bloodstream infection\u2019, defined as per the National Healthcare Safety Network (NHSN); \u2018local infection\u2019, as defined by the Centers for Disease Control and Prevention (CDC)/NHSN \u2018arterial or venous infection\u2019 criteria; loss of dressing integrity not requiring dressing change (i.e. lifting at edges with/without reinforcement required), assessed for all dressings per patient; dressing dwell time, assessed for all dressings per patient; premature dressing removal; number of dressing changes; device dwell time; serious adverse events; adverse skin events relating to MARSI; cost, as informed by standard diagnosis related groups (DRGs), staff time estimates to apply and remove dressings, and product costs; staff and patient satisfaction on dressing application and removal, assessed at initial application, all dressing changes and final removal; and skin colonization, measured both descriptively (i.e. organism) and quantitatively (i.e. colony forming units). Participants are enrolled prior to or within 12\u2009h of their CVAD insertion, and will continue on the study until 48\u2009h after their CVAD is removed. 
It is anticipated that each CVAD will dwell for an average of 7\u00a0days, resulting in an average enrolment time for each participant of 9\u00a0days (including CVAD dwell whilst in ICU and on the ward). A total of 160 patients will be recruited. At 90% power and a significance level of 0.05, 77 patients per group are required to detect a 25% absolute difference in the primary outcome. Education sessions were conducted at each site prior to study commencement. Each member of the research team at each site will be provided extensive education on the study protocol prior to recruitment commencement, and will be encouraged to notify the project manager of any recruitment difficulties to ensure strategies are in place to overcome these and ensure adequate participant enrolment. Patients will be randomised in a 1:1 ratio to either \u2018standard\u2019 dressing care or \u2018standard\u2019 dressing care in addition to MLA. Randomisation will occur in computer-generated randomly varied block sizes of four and six and will be stratified by patient sex to account for facial hair differences. Randomisation allocation will be concealed until the point of randomisation using a central, web-based randomisation service embedded within the study database. A statistician independent of the research team will generate the randomisation allocation sequence, and this will be uploaded onto the randomisation service without viewing by the research team. Patients will be enrolled and randomised by members of the research team, with intervention allocation assigned as per the randomisation service. Due to the nature of the intervention, blinding of patients/clinicians and research staff to the intervention is not possible. However, the statistician will be blinded for analysis, and microbiology laboratory staff will be blinded when culturing swab growth. The infectious diseases consultant will also be blinded to treatment allocation when apportioning infection outcomes. Not applicable. 
There will be no instances where it would be necessary to unblind the statistician, laboratory staff or infectious diseases consultant. A subset of patients (n\u2009=\u20098 per group) will undergo inter-rater reliability assessments of site and skin complications. Research staff will also collect timing data for dressing changes (n\u2009=\u200910 per group) to inform cost estimates (see Secondary Outcome 13). Patient demographic and CVAD insertion data will be collected by research staff at the time of patient enrolment (see Table). Skin swabs will be collected in a convenience sample (n\u2009=\u200910 per group) to assess skin colonisation under dressings (see Secondary Outcome 15). Patient outcome and adverse event data will be collected at 48\u2009h after CVAD removal. At the time of CVAD removal, research staff will collect procedural data, in addition to complications and treatment summary data. If able, patient reported satisfaction (see Secondary Outcome 14) will also be collected, in addition to a convenience sample of skin swabs for dressing changes over the weekend. Research staff will retrospectively collect as much data as possible from the patient\u2019s medical notes and a study-specific bedside data collection log (documenting number of and reason for dressing changes) to complete the weekend data collection as fully as possible. Data will be entered either on to a hard copy data collection form and then transposed into an online Research Electronic Data Capture (REDCap) database, or entered directly. Patient confidentiality will be maintained at all times. Only research staff at each site will have access to identifiable patient information. All data entered into the REDCap database will be de-identified and only re-identifiable at the recruiting site using local screening and recruitment logs. 
Upon trial completion, only the statistician, project manager and local site principal investigators will have access to the de-identified data once exported from REDCap. Skin swabs will be collected from the area immediately surrounding the CVAD insertion site to assess micro-organism colonisation in a convenience sample of n\u2009=\u200910 per group, occurring at either CVAD removal or dressing change if the CVAD has dwelled for three or more days. To do this, a sterile dry swab moistened with 0.9% saline will be firmly moved in a twisting back and forwards motion across the area immediately surrounding the CVAD insertion site. The swab will then be placed in a sterile container and transported to a nearby microbiology laboratory to be qualitatively and quantitatively cultured as per standard practice. After analysis, the swabs will be destroyed. All randomised patients will be analysed by intention to treat, except for those patients whose CVAD insertion is cancelled/failed or who withdraw consent. P values <\u20090.05 will be considered significant. Continuous data will be reported as means (standard deviation) or medians (interquartile limits), as appropriate. Categorical data will be presented as frequency (percentage). The primary outcome, dressing failure, will be investigated using logistic regression with \u2018treatment\u2019 as the main effect. Incidence rates of dressing failure, skin colonisation and CLABSI with 95% confidence intervals will summarise the effectiveness of each intervention, and Poisson regression will be used to test for group differences. Kaplan-Meier survival curves (with log rank Mantel-Cox test) will compare dressing failure over time. Other secondary clinical outcomes will be compared between groups with appropriate parametric or non-parametric techniques. 
No interim analyses are planned. Inter-rater reliability of the daily check site assessment (site complications and evidence of MARSI) will be completed by two research staff at one time point for each of the selected patients using a specific data collection form. Inter-rater reliability will be measured using proportions of specific agreement and by Cohen\u2019s kappa. Costs between groups will be analysed according to the nursing time taken to conduct dressing changes costed against standard hourly registered nurse wage rates at that site, in addition to the cost of resources used as per hospital stores. Costs of treating complications will be based on standard local Diagnosis Related Groups and published estimates. Prior to analysis, data will be cleaned and attempts at locating missing data will be made. Missing data that cannot be located will not be imputed prior to analysis. In addition to intention-to-treat analyses, per protocol analyses will also be completed to address protocol non-adherence. The study has been prospectively registered with the Australian New Zealand Clinical Trials Registry (ACTRN12621001012864), and the protocol will be published in an open access, peer-reviewed journal before the end of patient recruitment. The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request. The coordinating centre is responsible for concept inception, study design, funding acquisition and ethical conduct of the trial. The coordinating centre comprises the chief investigator and associate investigators, including the project manager. There is no formal steering committee for this study. Not applicable. Enrolled patients will be monitored and treated for untoward medical occurrences in line with standard clinical care. 
Therefore, only adverse events which the treating clinicians believe are associated with the intervention will be reported. In this trial, the following will be considered serious adverse events (SAEs): death, CLABSI and MARSI. All SAEs from randomisation to 48\u2009h after removal of the CVAD will be recorded on REDCap and reported to the coordinating centre within 24\u2009h. The minimum information to report will include: patient study number; nature of the event; commencement and cessation of the event; the principal investigator\u2019s assessment of the relationship between the study intervention and the event; and whether treatment was required for the event and what treatment was administered. It is the responsibility of each site\u2019s principal investigator to inform the chief investigator and project manager of all SAEs which occur at their site. Copies of reports and correspondence to and from the reviewing HREC and research governance will also be sent to the coordinating centre. The project manager will be responsible for reporting all SAEs to the reviewing HREC and alerting other participating sites of the SAE if required. The project manager will undertake quality checks for allocation integrity and monitor 100% source data verification for the first patient per site, consent forms, primary outcome and a random 5% of other data for all patients. The project manager will also conduct regular remote monitoring of the REDCap database and regular data cleaning to ensure the integrity of the study data. Data queries will be compiled and sent to each participating site at regular intervals throughout the study and as part of final data cleaning. The project manager will be responsible for communicating protocol amendments to the reviewing HREC and recruiting sites. The project manager will also be responsible for ensuring amendments and reports are forwarded by research staff to Research Governance at each site. 
The project manager will notify research staff at each recruiting site if amendments or new data have the potential to impact patients, who will then inform all relevant participants. Locally, results will be presented at hospital seminars including the clinical departments which participate in the trial, and at annual hospital symposiums. Results will be published in a relevant peer-reviewed journal with a wide readership. Results will also be disseminated through conference presentations at local and international nursing and medical assemblies. The investigators are members of professional organisations and bodies including infusion nursing and infection prevention and will use their professional networks to further highlight trial results. Authorship will be determined as per the National Health and Medical Research Council Authorship Guidelines. The aim of this trial is to assess the effectiveness of MLA, compared to \u2018standard\u2019 dressing care, in improving dressing adhesion and reducing dressing changes in internal jugular CVADs. This trial has several strengths and limitations. A strength of the study is its randomised design, which minimises bias and confounding factors, thereby increasing the reliability of the results. However, a limitation of this study is the inability to double-blind randomisation allocation due to the nature of the intervention, which may introduce performance bias. Another strength of this protocol is the requirement for daily checks of the central line dressing to ensure accurate data collection and monitoring for serious adverse events. This is particularly relevant as there is very limited pre-existing evidence of skin reactions to and effectiveness of MLA. However, daily checks will not be able to be carried out in person on weekends due to staffing limitations. 
Nonetheless, this study will be the first randomised controlled trial to assess the clinical and cost effectiveness of MLA and, as such, will contribute much needed evidence on strategies to reduce CVAD dressing failure in critically ill patients. Current protocol: Version 1.0, dated 13 May 2021. Date recruitment began: 02 September 2021. Anticipated date of recruitment completion: 01 September 2022"}
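The protocol's sample size statement (90% power, significance level 0.05, 77 patients per group to detect a 25% absolute difference) can be checked with the standard two-proportion formula. The 50% vs 25% dressing-failure rates below are an assumption on my part, not stated in the record; they are chosen as one pairing with a 25% absolute difference that reproduces exactly 77 per group.

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Sample size per group for comparing two proportions
    (classic pooled-variance normal-approximation formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Assumed failure rates of 50% (control) vs 25% (MLA): a 25% absolute
# difference that yields the protocol's 77 patients per group.
print(n_per_group(0.50, 0.25))  # 77
```

Recruiting 80 per group (160 total), as the protocol does, then provides headroom over the minimum of 77 for withdrawals and cancelled insertions.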
+{"text": "The inexpensive and non-toxic H-SiO2 particles imparted delicate lotus leaf inspired hierarchical surface nano-morphologies while the fatty acid modification afforded a suitable drop in surface energy. Comparison studies were carried out to explore the effects of fatty acid chain length and of pipette as opposed to spray coating deposition methods on the coatings' hydrophobicity. It was determined that the longest chain length fatty acid coatings showed enhanced hydrophobic properties due to their extended hydrophobic alkyl chain. A pipette deposited suspension containing H-SiO2 nanoparticles and octadecanoic acid generated a coating with the most favourable average water contact and tilting angles of 142 \u00b1 6\u00b0 and 16 \u00b1 2\u00b0 respectively. Special wettability durable coatings, with average water contact angles exceeding 140\u00b0, have been fabricated utilising inexpensive and non-toxic functionalised hydrophobic-SiO2 (H-SiO2) nanoparticles embedded in fatty acids. 
Neinhuis and Barthlott's work has inspired many biomimetic superhydrophobic surfaces found in the literature, fabricated by routes such as plasma etching.26\u201328 These approaches often involve time-intensive multistep fabrication pathways that are unsuitable for large scale commercial coating production. From this, Lu et al. used a facile method to produce robust paints from nanoscale TiO2 particles and a \u00a36 per gram perfluorooctyltriethoxysilane,20 fluoro-containing SiO2 nanoparticles were synthesised by Wang's research team21 and Liang et al. used a slightly more involved procedure to create alkenyl-functionalized SiO2 particles which were grafted and co-cast with a fluoroalkylsilane.22 Additional work in this field includes: transparent superhydrophobic SiO2 paper generated using octadecyltrichlorosilane functionalised nanoparticles,23 insulating silica aerogels fabricated from the one-step drying of polyethoxydisiloxane/methyltrimethoxysilane based sols24 and a highly water repellent SiO2/polyvinylidene fluoride film fabricated by Shi et al. via spray coating.25 Whilst extremely functional (average water contact angles 140\u2013174\u00b0), each of these surfaces is still flawed, this time by material expense and toxicity. To improve on many existing approaches, extreme wetting regimes should be afforded after one treatment of any substrate using non-fluorinated economically viable coating precursors. 
The hydrophobic properties of the cheaper, more environmentally friendly and non-fluorinated octadecanoic acid (\u00a325 per kilogram) have been explored on chemically etched zinc, aluminium or glass substrates. Wei, Chen and Mittal independently generated hierarchical roughness by immersing their respective surfaces in concentrated HCl. A final coating of octadecanoic acid sufficiently lowered their surface energies. Average water contact angles > 150\u00b0 were achieved in all cases but unfortunately substrate etching substantially reduced the versatility of said methods. 
A facile production of inexpensive, non-toxic water repellent surface coatings involving a one pot method is described herein. Surface structuring functionalised SiO2 nanoparticles were combined with low surface energy fatty acids29\u201331 (C8\u2013C18 carbon chain lengths) to establish the desired lotus-like effect upon curing. Optimising hydrophobic-SiO2 (H-SiO2) particle loading, fatty acid concentration and chain length and coating deposition method afforded comparably high average water contact angles on octadecanoic acid coatings. In addition to the high average water contact angles, the Cassie\u2013Baxter effect32 explained why the coatings also showed relatively low average water tilting angles. This wetting state allowed water to remain suspended on top of an air layer entrapped between surface asperities.33 Subsequently, liquid droplets rolled from the material collecting dust and dirt particles; an action that rendered the surface self-cleaning.20,34\u201336 More recently these single application non-fluorinated coatings37\u201341 have generated interest from the coatings industry as the long chain acids suitably fulfil the low surface energy hydrophobicity requirement, are low cost, have marketable viability and maintain performance. Therefore, fine tuning this facile one-pot method could potentially result in compatibility with commercial self-cleaning products.42,43 
Unrefined SiO2 particles (0.5\u20131.0 \u03bcm diameter) and fatty acids were purchased from Sigma-Aldrich, AEROSIL\u00ae OX50 SiO2 nanoparticles were acquired from Evonik and laboratory solvents were bought from Fisher Scientific. All chemicals were of analytical standard and were used as received. Octanoic (C8H16O2), decanoic (C10H20O2), dodecanoic (C12H24O2), hexadecanoic (C16H32O2) and octadecanoic (C18H36O2) acids (2.00 wt%) were separately stirred in different aliquots of absolute ethanol (88.00 wt%), 40 min at 40 \u00b0C, prior to the addition of innately hydrophilic SiO2 nanoparticles (10.00 wt%). After a further 20 min of stirring, the five SiO2 particle containing suspensions were oven dried at 60 \u00b0C for 120 min. This process afforded hydrophobic-SiO2 (H-SiO2) particles coated in selected non-fluorinated hydrocarbon chains. The H-SiO2 nanoparticles were then sonicated, 60 min at 40 \u00b0C, in their respective octanoic, decanoic, dodecanoic, hexadecanoic or octadecanoic acid/ethanol mixture. In every case, H-SiO2 particles had been treated with the corresponding polymer material in which they were finally dispersed. Optimised particle loadings and acid concentration compositions are tabulated. 
Glass substrates were covered in double sided Scotch tape (25 \u00d7 30 mm) to aid coating adhesion. Pipette application and spray coating were the two methods utilised to deposit hydrophobic slurries onto the taped surfaces. Whilst octadecanoic acid containing samples were dried at 60 \u00b0C for 20 min to prevent recrystallisation, all other coatings were dried overnight at room temperature and pressure. Spray coating was carried out using a BADGER airbrush spray gun and SprayCraft universal airbrush propellant. 
X-ray photoelectron spectroscopy (XPS) was carried out using a Thermo Scientific XPS K-Alpha X-ray Photoelectron Spectrometer with a monochromated Al K\u03b1 X-ray source at 1486.6 eV. Atmospheric pressure thermogravimetric analysis (TGA) was carried out using a Netzsch Jupiter analyser. Fourier transform infra-red (FT-IR) spectroscopy was performed using Bruker alpha platinum-ATR equipment. Transmission electron microscopy (TEM) was completed using 100 kV JEOL CX100 equipment to determine unrefined and functionalised SiO2 particle sizes. Surface topographies were investigated using a JEOL JSM-6301F scanning electron microscope (SEM) with an acceleration voltage of 5 or 10 kV. 
Three water contact angles were measured per coating at ambient temperature. An average value and associated error were calculated for each sample. The tilting angle, defined as the angle at which a water droplet readily slides off a slanted surface (fixed droplet volume of 0.5 mL), was recorded using a digital angle finder. Averages and standard deviations were calculated. A high-speed camera was used to capture methylene blue dyed water droplets bouncing on the functional surfaces to confirm water repellency. Samples were also immersed in vegetable oil (20 s) prior to further water contact angle tests for coating robustness comparison. 
A one pot method was developed to produce superhydrophobic SiO2 coatings from functionalised hydrophobic-SiO2 (H-SiO2) nanoparticles embedded in fatty acids. H-SiO2 particles were produced by stirring SiO2 nanoparticles in a fatty acid/ethanol mixture. The H-SiO2 slurries were prepared by sonicating H-SiO2 particles in their respective octanoic, decanoic, dodecanoic, hexadecanoic and octadecanoic acid stock solutions. 
XPS data was used to determine the chemical environments found in the acid samples containing embedded functionalised SiO2 nanoparticles. Resulting data allowed fatty acid/particle binding method determination. For the octadecanoic acid coating, SiO2 particles were shown to be present at the surface of the sample. The C1s scan closely matched environments identified in the octanoic acid functionalised SiO2 starting material; 284.9 eV (C\u2013O(OH) environment), 286.8 eV (C\u2013OH environment) and 289.1 eV (C\u2013O(OR) environment).44 Peaks in the O1s scan further supported the presence of ester linkages between SiO2 particles and the fatty acid. Consistency in acid/particle linkage was supported by the acid, alcohol and ester environments which were reported in all hydrophobic coatings, irrespective of fatty acid chain length. Furthermore, for SiO2 nanoparticles functionalised with the long chain octadecanoic acid coating, air buoyancy effects in TGA gave rise to a mass percentage greater than 100% at 50 \u00b0C. From this, it was determined that the organic fatty acid mass loss occurred at temperatures between 200 and 600 \u00b0C. It is most probable that the organic material removed from the sample at temperatures nearing 600 \u00b0C was chemically bonded to the nanoparticles' surface as a significant amount of thermal energy was required for removal. Any additional acid material capped the functionalised particles by secondary forces, as represented by the mass loss at lower temperatures. 
All acid precursors, acid functionalised SiO2 nanoparticles and the coatings with H-SiO2 nanoparticles were then compared using FT-IR analysis. CH2 symmetric alkane stretches and C=O carboxylic acid stretches were detected in both the hexadecanoic and octadecanoic acid precursors at around 2850 cm\u22121 and 1700 cm\u22121 respectively. Other peaks at 1060 cm\u22121, 770 cm\u22121 and 760 cm\u22121, originally seen in the H-SiO2 spectra, represented Si\u2013O\u2013Si asymmetric transverse-optical stretching, symmetric Si\u2013O\u2013Si stretching and Si\u2013O\u2013Si bending respectively.45 An absence of the broad O\u2013H stretch at 3000 cm\u22121 was typical of acid dimerization in all samples. Peak positions showed no significant deviation with fatty acid chain length as chemical properties were similar. 
Transmission electron microscope (TEM) images of the as received nanoparticles confirmed SiO2 particle diameters; achieving the final surface morphology required the use of these small scale precursors to ensure some coating nanostructure was achieved. Surface topographies were then assessed using scanning electron microscopy (SEM). 
Initially, functional testing was carried out on untreated/as received SiO2 nanoparticles. This precursor was deemed superhydrophilic in nature as average water contact angles were <5\u00b0. Surface wettability was subsequently determined for the optimised water repellent coatings. Pipette versus spray coated samples were similar. The coating sample containing hexadecanoic acid had average water contact angles of 142 \u00b1 1\u00b0 and 128 \u00b1 23\u00b0 for pipette and spray application respectively. Average tilting angles were found to be indistinguishable within experimental error. 
The largest difference in hydrophobicity was realised when the octadecanoic acid polymer was incorporated into coating slurries; the pipette application generated an average contact angle \u223c80\u00b0 larger and an average tilting angle \u223c40\u00b0 lower than the spray application alternative. In contrast, the functional results were improved by \u223c30\u00b0 on the sprayed short chain decanoic acid coatings. This data confirmed that the use of spray deposition benefited short chain polymer systems by distributing H-SiO2 particles more evenly in the less viscous shorter chain acids; short chain acids showed no sign of crystallising during this process. With that said, pipetting was advantageous for the more bulky octadecanoic acid coatings where a maximum average water contact angle of 142 \u00b1 6\u00b0 was achieved. The longer chain acids, such as octadecanoic acid, had a greater tendency to crystallise during slurry deposition. It was found that crystallisation during pipette application was reduced due to the speed and nature of deposition (reaction temperature was closely maintained throughout). This promoted even surface coverage and likely elevated average water contact angles; unfortunately this was not the case for the spray deposition alternative as the method promoted slurry cooling. In spite of this, the environmentally friendly and cheap \u2018hydrophobic coatings\u2019 are of significance as they have only been marginally outperformed by coatings of much greater toxicity and expense. Sino et al. created a fluoroalkylsilane based emulsion with TiO2/ZnO particles while other work documents the use of TiO2/SiO2 particles combined with fluorinated polymers and epoxy resins.46,47 Crick's work made use of SiO2 particles modified with the expensive and environmentally harmful polydimethylsiloxane (PDMS).48 
A high-speed camera was used to support the average water contact angle data obtained on the shortest and longest fatty acid chain length coatings. The functionality of the SiO2/fatty acid coatings prepared in this work was preserved after the oil immersion test. The SiO2 nanoparticles in the long chain octadecanoic acid coating afforded the highest average water contact angle (\u223c142\u00b0) whereas the short chain octanoic acid with embedded nanoparticles was considerably lower (\u223c111\u00b0). This observation was justified by considering the nonpolar \u2013(CH2)n\u2013 to polar \u2013COOH group ratio; as the carbon chain length increased, so did the net repulsion between the hydrophobic nonpolar aliphatic chain and surface water. The hydrophobic character of the long chain easily dominated, negating the polar influence of the acid functional group that permits hydrogen bonding with water.49 Consistent chemical properties present in all fatty acid coatings resulted in near identical XPS, FT-IR, TEM and SEM data irrespective of carbon chain length. In contrast, differences arose when comparing sample functionality. 
We have successfully generated inexpensive and non-toxic coatings containing functionalised SiO2 particles and long carbon chain length fatty acids. These durable hydrophobic coatings have been achieved using a facile one pot synthesis followed by pipette or spray deposition methods. Trends suggested that water repellency increased with fatty acid carbon chain length; the coating comprising hydrophobic-SiO2 (H-SiO2) particles (6.00 wt%) originally in an octadecanoic acid (3.21 wt%)/ethanol mixture (90.79 wt%) had favourable average water contact and tilting angles of 142 \u00b1 6\u00b0 and 16 \u00b1 2\u00b0 respectively. Further work should be aimed at scaling up this process by making use of dip coating techniques or by incorporating these slurries into commercial products to form the \u2018smartest\u2019 self-cleaning surfaces. There are no conflicts to declare."}
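The Cassie\u2013Baxter relation invoked in the coatings record above, cos θ* = f_s(cos θ + 1) − 1, can be sketched numerically. The flat-surface contact angle of 110° and solid fraction of 0.25 in the example are illustrative assumptions, not values reported in the paper.

```python
from math import cos, acos, degrees, radians

def cassie_baxter(theta_flat_deg, solid_fraction):
    """Apparent contact angle on a composite solid/air surface via the
    Cassie-Baxter relation: cos(theta*) = f_s * (cos(theta) + 1) - 1,
    where f_s is the fraction of the droplet base resting on solid."""
    c = solid_fraction * (cos(radians(theta_flat_deg)) + 1) - 1
    return degrees(acos(c))

# Assumed flat-surface angle of 110 deg and solid fraction 0.25: trapped
# air raises the apparent angle well above the flat-surface value.
print(round(cassie_baxter(110, 0.25), 1))  # ~146.7
```

Lowering the solid fraction (more trapped air between surface asperities) pushes the apparent angle toward 180°, which is the mechanism the record credits for its high contact angles and low tilting angles.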
+{"text": "Introduction: CAD (coronary artery disease) is a leading cause of death and disability in developed nations. Exercise testing is recommended as a first-line diagnostic test for patients with stable angina pectoris. In addition to myocardial strain, high-sensitivity CRP (hs-CRP) can predict the presence of significant coronary artery disease. Aim of work: The purpose of this study was to demonstrate the utility of 2D-speckle tracking at rest and under stress along with hs-CRP for detection of CAD in patients who were referred to the chest pain unit with stable or low risk unstable angina pectoris. Methods: A total of 108 individuals met the inclusion criteria and gave their written consent to participate in this study. Coronary angiography was performed within 48 h after admission to the chest pain unit. Myocardial strain was recorded at rest and during dobutamine administration. Results: Global longitudinal strain at stress appeared to be moderately correlated with the presence of significant coronary artery disease (CAD); r = 0.41. A moderate correlation was also found between global longitudinal strain at stress and the severity of coronary occlusion; r = 0.62, p < 0.0001. With a cut-off value of \u221219.1, global longitudinal strain under stress had a sensitivity of 74.1% and a specificity of 76.7% for detecting significant CAD. Hs-CRP was significantly higher in patients with manifested CAD. Conclusion: Evaluation of longitudinal strain parameters at rest and under stress may predict coronary artery disease in patients with stable angina pectoris. A measurable hs-CRP is a potential marker of coronary stenosis. Strain data could assist in diagnosing CAD severity. Atherosclerosis of the epicardial coronary arteries results in coronary artery disease. 
This reduction in coronary artery flow may occur asymptomatically or symptomatically, may be associated with exercise or rest, or may cause myocardial infarction or angina, depending on the severity of the involved coronary obstruction and the rapidity of its development. One of the established tools for diagnosing coronary artery disease (CAD) is dobutamine stress echocardiography (DSE). Due to relatively low interobserver agreement and the qualitative nature of the diagnosis, it is difficult to achieve high accuracy of visual diagnosis of wall motion abnormalities during DSE. These limitations can be overcome by measuring the global longitudinal strain (GLS) and strain rate of the myocardium. Echocardiographic longitudinal strain reliably indicates regional myocardial deformation and deformation rate. During acute and chronic ischemia, as well as stress-induced ischemia, the motion of the left ventricular wall is accurately depicted. The use of myocardial strain during DSE to assess viability and ischemia has recently been reported. As atherosclerosis progresses, inflammation plays a vital role in plaque stability or rupture. Studies have shown a correlation between elevated high-sensitivity C-reactive protein and the presence and severity of coronary stenosis. In this study, we examined the utility of myocardial strain derived from dobutamine echocardiography under rest and stress in relation to invasive coronary angiography results in patients who presented with acute chest pain. Patients with typical chest pain were admitted to the chest pain unit (CPU) at Coburg Hospital, Germany. Three hundred and ten patients with stable and unstable angina (TIMI risk score 0\u20131) were screened. Among them, 108 individuals matched the inclusion criteria and gave their written consent to be enrolled in this study. 
An elevated troponin level, ST-segment elevation or depression during the admission process at the CPU, a history of coronary artery disease or acute myocardial infarction, coronary artery bypass grafting, chronic total coronary occlusion, significant valvular heart disease, end-stage renal failure, or refusal to give written consent were considered exclusion criteria. Prior to invasive assessment with coronary angiography, a dobutamine stress echocardiogram was performed. Examinations were performed with a digital ultrasonic device system in harmonic mode 2.0/4.3 MHz with the maximal frames-per-second (FPS) count available at the necessary sector width. The range of FPS was from 64 to 112 with a mean value of 83. Conventional echocardiography measurements were also performed, including 2D measurements of the cardiac chambers, continuous wave and pulsed wave Doppler studies, color Doppler studies, Simpson\u2019s method for calculating the ejection fraction (EF%), and analysis of wall motion abnormalities. During DSE, dobutamine was infused at a dose of 10 \u03bcg/kg/min via a peripheral infusion line. The dose was increased at 3-min intervals to 20, 30, and 40 \u03bcg/kg/min, with intravenous atropine up to 2 mg given, if necessary, to augment the heart rate response. Blood pressure and electrocardiogram were monitored continuously. The following criteria were considered for termination of the test: 85% of the age-predicted maximum heart rate response, development of a wall motion abnormality, severe electrocardiographic changes indicative of angina, systolic blood pressure greater than 240 mm Hg, an abnormal blood pressure reaction during stress, or significant arrhythmia. Two experienced echocardiographers recorded images in the lateral decubitus position. 
Standard 2D grayscale images of three standard apical views and parasternal long-axis and parasternal short-axis views at the level of the mitral valve, papillary muscles, and apex were acquired at rest, at a dobutamine dose of 20 \u03bcg/kg/min, at peak stress, and at recovery 1 min after stress. As per protocol, a cine image of one representative cardiac cycle per stage and view was digitally stored for later offline analysis. In order to achieve optimal speckle tracking at high heart rates, each image was optimized for left ventricular analysis, and the picture frame rate was increased to reach a target of 90 frames/s without compromising endocardial border detection. Two experienced echocardiographers, blinded to other results, examined wall motion visually. At the end of enrollment, the operators evaluated 20 random studies again to assess intra- and interobserver variation. Echocardiographic images were obtained prior to coronary angiography. Three uninterrupted cardiac cycles were acquired for each of the three standard apical views and were kept for offline longitudinal strain analysis using EchoPAC software. For assessment of longitudinal strain, we recorded standard 2D ultrasound images with a frame rate between 60 and 90 frames per second (fps) from the standard views. The endocardium was manually traced on selected cineloops of apical view images. A further manual adjustment of the region of interest was applied after visual evaluation. The full image was excluded if >2 segments were poorly tracked. Speckle tracking was carried out on all three apical views of rest images and negative global systolic longitudinal strain was estimated. The average of GLS in the apical four-, three-, and two-chamber images was used for our analysis. p < 0.05 was considered statistically significant. Our statistical analyses were conducted using SPSS 21.0. Continuous variables are expressed as mean \u00b1 standard deviation and categorical variables are expressed as counts and percentages. 
We calculated diagnostic measures including sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). In order to compare the diagnostic performance of myocardial strain between rest and stress conditions, receiver operating characteristic curves were analyzed. The correlation between variables was assessed using Pearson\u2019s correlation coefficient. Intra- and interobserver reproducibility were evaluated by intra-class correlation coefficients (ICC). A total of 108 patients who presented to our chest pain unit with stable and unstable angina pectoris for coronary angiography were examined. Subjects excluded from the study are identified in the study flowchart. Women accounted for 50% of our study cohort (54). A total of 87 of the patients were hypertensive while 52 had already been diagnosed with diabetes. A total of 41 patients of the studied group were tobacco smokers (38%) and 40 patients showed either an elevated lipid profile or were taking lipid-lowering drugs. The average age of our cohort was 64 \u00b1 10 years and the mean body mass index was 25.4 \u00b1 3.8 kg/m2. Troponin levels at admission were 0.081 \u00b1 0.03 ng/mL. Of our subjects, 35 had coronary artery lesions > 70% on coronary angiography. A significant increase in hs-CRP was observed among patients with significant CAD (3.2 \u00b1 1.8 mg/L) as compared to those without CAD (1.9 \u00b1 1.5 mg/L). The -vessel CAD patients showed a mean hs-CRP value of 4.9 \u00b1 1.1 mg/L. To detect significant coronary stenosis, a cut-off value of 2.8 mg/L had 85.7% specificity (95% CI: 75.9\u201392.6) and 67.7% sensitivity (95% CI: 48.6\u201383.3). A total of 31 patients showed segmental wall motion anomalies in response to dobutamine stress (at least in one segment). Additionally, GLS under stress was lower in people with significant CAD (\u221221.9% \u00b1 3.7 vs. \u221220.8% \u00b1 3.3). 
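The four diagnostic measures named in the statistics section follow directly from a 2x2 confusion table. A minimal sketch with illustrative counts (the study's patient-level data are not available in this excerpt, so the specific numbers below are assumptions):

```python
# Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table.
# The counts below are illustrative only -- they are NOT the study's data.
def diagnostic_measures(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # true positives among the diseased
    specificity = tn / (tn + fp)   # true negatives among the non-diseased
    ppv = tp / (tp + fp)           # precision of a positive call
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Hypothetical split of 35 CAD and 73 non-CAD patients
sens, spec, ppv, npv = diagnostic_measures(tp=26, fp=17, fn=9, tn=56)
print(f"sens={sens:.1%} spec={spec:.1%} ppv={ppv:.1%} npv={npv:.1%}")
```

The same arithmetic underlies every reported sensitivity/specificity pair in the results; only the cutoff that produces the 2x2 table changes.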
Patients with CAD had significantly lower global longitudinal strain (GLS) at rest than those without (p = 0.001). GLS at stress was moderately correlated with the presence of significant CAD (r = 0.41, p < 0.0001), whereas WMA under stress showed only a weak correlation with CAD (r = 0.26, p < 0.0001). Under resting conditions, a mean global strain of \u221218.2% \u00b1 2.3 was detected, and a mean global strain of \u221222.3% \u00b1 2.9 was detected under stress in patients with one-vessel CAD. In patients with two-vessel CAD, the mean global strain at rest was \u221215.4% \u00b1 1.5 and the mean global strain under stress was \u221217.7% \u00b1 2.4. For patients with three-vessel CAD, the mean global strain was \u221213.3% \u00b1 1.1 at rest and \u221214.6% \u00b1 1.6 under stress. A cutoff value of \u221219.1 (AUC: 0.754) led to a sensitivity of 74.1% and a specificity of 76.7% for detecting significant CAD, with a positive predictive value of 56.1% and a negative predictive value of 88.06% (Table 2). Hs-CRP correlated with global longitudinal strain parameters at rest and under dobutamine stress (r = 0.4 and 0.5, respectively). According to ICC tests, the interobserver agreement was 0.84 for GLS at rest, 0.83 for GLS under stress, and 0.74 for WMA under stress. For the same values, we calculated intraobserver agreement values of 0.85, 0.87, and 0.79, respectively. A population of patients with suspected CAD presenting in the chest pain unit was investigated for changes in myocardial longitudinal strain at rest and under stress. There is strong evidence that coronary stenosis can affect strain at rest and that longitudinal strain measurements can detect the presence of coronary artery disease. In visual assessment of WMA under stress, a weak correlation with the presence of CAD was observed, without significant correlation with the severity of coronary lesions. 
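The reported cutoff of \u221219.1 comes from an ROC analysis; a common way to derive such a cutoff is to sweep all candidate thresholds and keep the one maximizing Youden's J (sensitivity + specificity \u2212 1). A sketch on simulated stress-GLS values, since the per-patient data are not in this excerpt (group means and spreads below are assumptions):

```python
import numpy as np

# ROC-based cutoff selection by maximizing Youden's J = sens + spec - 1.
# Simulated groups with assumed means/SDs (35 CAD, 73 no-CAD patients).
rng = np.random.default_rng(1)
gls_cad = rng.normal(-17.0, 2.5, 35)     # hypothetical stress GLS, CAD group
gls_nocad = rng.normal(-21.5, 2.5, 73)   # hypothetical, no-CAD group

scores = np.concatenate([gls_cad, gls_nocad])
labels = np.concatenate([np.ones(35), np.zeros(73)])  # 1 = CAD

best_j, best_cut = -1.0, None
for cut in np.sort(np.unique(scores)):
    pred = scores >= cut                 # less negative GLS -> predict CAD
    sens = pred[labels == 1].mean()
    spec = (~pred)[labels == 0].mean()
    j = sens + spec - 1.0
    if j > best_j:
        best_j, best_cut = j, cut

print(f"cutoff={best_cut:.1f}, Youden J={best_j:.2f}")
```

With real data one would additionally report the AUC and confidence intervals, as the authors do; the sweep above only illustrates how a single operating point is chosen.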
Our statistical analysis did not include the assessment of resting WMA because of its established intermediate sensitivity and specificity, as well as the lack of novelty of such an evaluation against stress-induced WMA. In both the rest and dobutamine-induced stress conditions, global longitudinal strain was correlated with the occurrence of CAD and with coronary atherosclerosis severity. Further, a cut-off value for GLS under rest and stress was determined based on the number of diseased coronary arteries. Our prospective study showed that GLS under stress provided better sensitivity and specificity than GLS at rest and than the visual assessment of WMA on dobutamine echocardiography. A few GLS values at rest were lower than \u221214%. In such patients, we assume the presence of an underlying small vessel disease or microvascular dysfunction which could not be detected in the coronary angiography, but a concomitant peri- or myocarditis cannot be ruled out. Researchers found that regional 2D strain was reduced in segments supplied by stenotic coronary arteries, and a number of studies suggested that impaired longitudinal strains could help identify which coronary artery is stenotic. The study by dos Santos et al. looked at the applicability of left ventricular longitudinal strain in the emergency room. The authors enrolled 78 patients with clinically suspected unstable angina pectoris. Coronary cineangiography revealed severe coronary lesions in the vast majority of the 15 patients eligible for 2D-STE. Additionally, the authors noticed a significant reduction in global strains in patients with severe lesions in any epicardial coronary artery, as well as a significant reduction in longitudinal strains in the left ventricular inferior and lateral walls, supplied by the right and circumflex coronary arteries. 
Another Swedish study, which included 296 consecutive patients with clinically suspected stable angina pectoris and normal left ventricular ejection fraction, demonstrated that the global longitudinal peak systolic strain measured at rest is an independent predictor of significant CAD and that it significantly improved the diagnostic performance of exercise testing. Furthermore, 2D strain echocardiography was able to distinguish high-risk patients. In a study published in 2018, Scharrenbroich and coauthors assessed differences in strain obtained by speckle tracking and left ventricular ejection fraction to predict cardiac events in patients after acute myocardial infarction, compared to those with known coronary artery disease. Incorporating endocardial GCS into a regression model alongside baseline characteristics and ejection fraction significantly enhanced the prediction of cardiac events in patients with CAD on ROC analysis. An initial 3D speckle tracking echocardiography was performed on patients with acute coronary syndromes prior to coronary angiography in a cross-sectional study in Bangladesh. Patients with significant stenosis experienced significant reductions in all strain parameters, and receiver operating characteristic curve analysis demonstrated that GLS could effectively identify patients with significant stenosis. GLS with a cutoff value of \u221213.50% showed good sensitivity and specificity for detecting significant stenosis. Clinical decision-making could be doubtful in patients with typical angina pectoris symptoms but without visible regional WMA during pharmacological stress testing. Strain measurements can provide additional diagnostic information for patients undergoing conventional stress echocardiography. Beyond its sensitive role in the diagnosis of infarcted areas in acute myocardial infarction, global longitudinal strain offers important prognostic features. 
Reduction in GLS could suggest an increased risk of mortality, reinfarction, congestive heart failure, or stroke more reliably than changes in EF and wall motion score index. The inflammatory response to early myocardial necrosis is likely to trigger the elevated hs-CRP levels, rather than chronic vascular inflammation. In the REGARDS study, CRP was approved as a prognostic indicator for primary prevention for patients with a high risk of cardiovascular disease, defined as a Framingham coronary risk score \u2265 10% or atherosclerotic cardiovascular disease (ASCVD) risk \u2265 7.5%. In a study of 700 patients with chronic stable angina, serum levels of hs-CRP were strongly associated with the development of cardiac death, non-fatal acute MI, or hospitalization with unstable angina at 1-year follow-up. Variability in heart rate had a negative impact on offline strain rate analysis, and many patients were therefore excluded from the study because the analysis was aborted. Atrial fibrillation patients were also excluded due to the inability of the program to compensate for heart rate variability during the study. Furthermore, the failure to obtain a perfect acoustic window during echocardiography remains a limiting factor among obese patients or patients with hyper-inflated thoracic walls. It can be challenging to capture images with sufficient quality under pharmacological stress. The study is limited by the fact that it is based on only a single center with a small number of participants. A large multicenter study is needed to fully corroborate our results. In most cases, coronary occlusion severity was determined by angiography; only 23 patients were evaluated further using FFR (fractional flow reserve) or IFR (instantaneous wave-free ratio). Study and control group ejection fractions were near-normal. 
Different results might have been obtained in a group with lower left ventricular ejection fractions. The presence of coronary artery disease can be predicted using longitudinal strain indices at rest and under stress in patients with stable angina pectoris. The results can indicate the severity of coronary artery involvement before invasive diagnostic procedures are performed. Sensitivity and specificity were significantly higher for strain parameters under stress than at rest and in comparison to the traditional visual assessment of wall motion abnormalities in stress echocardiography. Patients with stable angina pectoris should undergo a global longitudinal strain measurement prior to invasive coronary investigation. As an inflammatory marker, hs-CRP can be useful in the acute setting to rule out significant coronary artery disease (CAD), and it should be combined with other imaging modalities to enhance its sensitivity."}
+{"text": "Most patients with inherited retinal degenerations (IRDs) have been waiting for treatments that are \u201cjust around the corner\u201d for decades, with only a handful of seminal breakthroughs happening in recent years. Highlighting the difficulties in the quest for curative therapeutics, Luxturna required 16\u00a0years of development before finally obtaining United States Food and Drug Administration (FDA) approval and its international equivalents. IRDs are both genetically and phenotypically heterogeneous. While this diversity offers many opportunities for gene-by-gene precision medicine-based approaches, it also poses a significant challenge. For this reason, alternative strategies to identify more comprehensive, across-the-board therapeutics for the genetically and phenotypically diverse IRD patient population are very appealing. Even when gene-specific approaches may be available and become approved for use, many patients may have reached a disease stage whereby these approaches may no longer be viable. Thus, alternate visual preservation or restoration therapeutic approaches are needed at these stages. In this review, we underscore several gene-agnostic approaches that are being developed as therapeutics for IRDs. From retinal supplementation to stem cell transplantation, optogenetic therapy and retinal prosthetics, these strategies would bypass at least in part the need for treating every individual gene or mutation or provide an invaluable complement to them. By considering the diverse patient population and treatment strategies suited for different stages and patterns of retinal degeneration, gene-agnostic approaches are well poised to favorably impact outcomes and prognosis for IRD patients. For years, seminal breakthroughs to restore vision have been \u201cjust around the corner,\u201d yet most patients with inherited retinal degenerations (IRDs) find themselves continuing to wait. 
More than 2\u00a0decades have passed since the first large animal, Lancelot the Briard dog, was successfully administered gene therapy for Leber\u2019s congenital amaurosis type 2 (LCA2), and his vision was restored. Given the recent advances in our understanding of the mechanisms underlying IRD pathobiology, can we approach IRDs more broadly? Can we use gene-agnostic strategies to identify broadly applicable therapeutics? Several groups have explored this possibility through a wide variety of approaches and ongoing efforts. For example, supplementation with Rod-derived Cone Viability Factor (RdCVF) holds promise for preventing secondary cone demise in primary rod dystrophies. Other strategies deliver optogenetic tools via adeno-associated viral vectors (AAV) to restore vision to patients while bypassing photoreceptors entirely, and antioxidant approaches have been explored for their capacity to slow the progression of retinitis pigmentosa (RP). A well-established antioxidant agent and reactive oxygen species (ROS) scavenger, N-acetylcysteine (NAC) has been studied for over a decade. In the rd1 and rd10 mouse models of retinitis pigmentosa, oral NAC reduced cone cell death and preserved cone function by mitigating oxidative damage, and NAC is now being evaluated clinically (NCT05537220). Other antioxidants under investigation include tetrakis(4-benzoic acid) porphyrin and alpha-lipoic acid. In the rd1 and rd10 mouse models of retinal degeneration, treatment with the anti-inflammatory cytokine transforming growth factor beta (TGF-\u03b2) rescued degenerating cones and protected against loss of visual function. Enhancing proteasomal activity also appears to be a viable strategy. A clinical trial in enhanced S-cone syndrome (ESCS), a condition linked to NR2E3 in which rods are replaced by S-cones that remain preserved for extended periods of time, is underway in an effort to test the safety and initial efficacy of NR2E3-based gene therapy. 
This trial aims not only to treat ESCS patients and dominant forms of RP linked to NR2E3 mutations, but also to harness NR2E3\u2019s potential to promote homeostasis in the degenerating retina in other forms of RP. Multiple clinical trials focusing on stem cell-derived RPE for treating IRDs remain ongoing. Further evidence from emerging trials will be required to demonstrate that this strategy can be successful in practice and yield significant improvements for patients\u2019 vision. In theory, iPSC-derived RPE bears similar potential for clinical application, particularly when differentiation methods generate and maintain the apical-basolateral polarity characteristic of native RPE structure and function. Several properties make these cells suitable for clinical use, including their anti-inflammatory properties derived from extracellular vesicle release. In a non-randomized clinical trial for RP patients, intravitreal injection of autologous bone marrow-derived MSCs improved the best-corrected visual acuity of all participants for several months after the procedure. In the rd1 mouse model, which is characterized by rapid-onset retinal degeneration akin to RP, CiPC transplantation into the subretinal space partially restored the pupillary reflex and visual function, as measured by a light-aversion behavioral paradigm. Ongoing work in this area includes a registered clinical trial (NCT05392751). Activating endogenous retinal stem cell populations for development into photoreceptors would be exciting; however, any genetic defects would persist in this reactivated stem cell population. These cells might mature into photoreceptors and function normally for some time, but the genetic basis for degeneration would remain. Eventually, degeneration would likely occur at the same rate experienced by the patient prior to any intervention. 
For early-onset IRDs, such as LCA2, this strategy may not be viable; however, slower-progressing IRDs, like RP, may be well-suited for treatment by this approach. Further studies will be required to assess the feasibility of repeating treatment to activate retinal stem cells multiple times over a patient\u2019s lifetime. Extended-release formulations could also play a role in long-term activation of patients\u2019 retinal stem cells. Reports of true retinal stem cells in the adult human eye raise the possibility of an endogenous source for cellular regeneration. Optogenetic therapy provides an unparalleled opportunity for restoring vision to patients who have experienced significant photoreceptor cell death. Through AAV-mediated expression of opsins (i.e., channelrhodopsin, ChrimsonR, and Opto-mGluR6) in bipolar cells and retinal ganglion cells, direct stimulation of these secondary and tertiary cell types of the neural retina can bypass photoreceptors while maximizing the activation of the typical visual circuits of the brain. A series of non-human primate studies established the proof of concept for optogenetic gene therapy targeting retinal ganglion cells, and a clinical trial is ongoing (NCT03326336). For patients with severe photoreceptor loss, optogenetic therapy represents hope of regaining some level of vision. Further advances in optogenetic therapy may be capable of improving the best visual acuity that treatment can offer, for example moving beyond vision at the level of object localization within arm\u2019s length. Following severe photoreceptor degeneration, many retinal interneurons remain physiologically and metabolically stable. 
Imbuing bipolar cells with Opto-mGluR6 engages the Go signaling pathway, the G-protein pathway traditionally activated by mGluR6 at the photoreceptor-ON bipolar cell synapse, to integrate this technology with optogenetic approaches. To this end, Vedere Bio (whose assets have now been acquired by Novartis) and Vedere Bio II have initiated recent advances harnessing AAV variants for IRDs. Continued improvement of AAVs toward increasingly efficient transduction of the outer retina and RPE will be fundamental to the future of gene therapy for IRDs. In combination with selective promoters to achieve minimal toxicity, these novel AAV serotypes will also increase treatment accessibility by enabling patients to receive injections in-office as opposed to in the operating room. Multiple surgical steps that carry significant risks, such as the vitrectomy and retinotomy associated with subretinal injections, will also be avoided. For some patients, cytotoxicity may eliminate AAV-mediated gene therapy as an option. A novel method for nano-enhanced optical delivery may alleviate these concerns and serve as a laser-assisted gene therapy alternative. In this approach, rd10 mice remained healthy following optoporation of multicharacteristics opsin (MCO1), a broad-band activatable white-opsin that can be reliably stimulated by ambient light. These promising studies may lead to novel therapies that do not require active stimulation goggles, while nano-enhanced optical delivery may obviate the need for AAV and reduce immune response concerns. Immunogenic risks associated with introducing synthetic opsins will remain, but eliminating the introduction of AAV particles will remove the major exacerbating factor. Developments in artificial vision over the last few decades illustrate significant advances in retinal and cortical prosthetic devices. 
Creative approaches harness reprogramming of other sensory systems for prosthetics as well. The impressive success of several implantable and wearable prosthetic designs includes the Suprachoroidal Retinal Prosthesis and the PRIMA high-resolution photovoltaic retinal prosthetic system. The Alpha AMS implant successfully improved visual performance in multiple participants for up to 24 months. Cortical visual prosthetics, which have been in development for many decades, can also serve patients who lack an intact visual pathway (i.e., optic nerve-lateral geniculate nucleus-visual cortex) due to developmental and neurological disorders or ocular malformations, not IRDs alone. Recent efforts include an ongoing clinical trial (NCT04725760) and expansion efforts in the European Union following Conformit\u00e9 Europ\u00e9enne (CE) Mark approval. Patients without any functional vision could also benefit from tactile-based visual sensory substitution devices. This technology augments visual prosthetic devices and is currently intended for use alongside traditional assistive technologies like the white cane or a guide dog. There are many viable, gene-agnostic strategies for treating IRDs. While most of them are best suited for treating patients at a particular stage of disease progression, in combination they could constitute a powerful arsenal for maintaining or restoring vision across the disease-stage spectrum. For patients who remain early in their IRD diagnosis, oral supplementation with NAC or other antioxidant cocktails remains a very useful therapeutic option. Mounting evidence in ongoing clinical trials suggests good efficacy for this strategy. Given that NAC and many other supplements can be self-administered by patients daily and at home, adherence to these regimens will likely be high. Delivery of RdCVF or proteasomal enhancers to patients with early disease represents a more definitive opportunity to prevent or at least delay otherwise inevitable visual impairment. 
Compliance with these treatments is likely to be high for stand-alone delivery by intravitreal injection, while extended-release formulations or AAV-based approaches may limit the burden of frequent repeat treatments. Gene delivery via a proprietary injection system (ETIS) developed by Eyevensys represents a competing delivery strategy for both gene therapies and pharmacological/neuroprotective approaches to IRDs. Furthermore, improvements in the design of synthetic opsins may eventually support IRD patients\u2019 ability to regain vision without relying on goggles, which are traditionally required by optogenetic therapies. For IRD patients with an intermediate stage of disease progression, combinatorial treatment is even more likely to provide the necessary synergy to restore or protect visual function. In some instances, subretinal surgical implantation of ESCs, iPSCs, or CiPCs may be necessary to augment native photoreceptors. However, surgical and immune rejection risks can be significant, making some patients poor candidates for these procedures. By contrast, intravitreal delivery of a small molecule cocktail is much more accessible for most patients, and activation of a patient\u2019s own retinal stem cells eliminates immune rejection concerns. For patients with early-onset disease that obliterates the function of their own retinal stem cells, optogenetic strategies targeting bipolar cells or retinal ganglion cells remain gene-agnostic without relying on a patient\u2019s degenerating photoreceptors. Advances in AAV design may also allow the field to transition from subretinal to intravitreal treatment delivery, and optoporation or electro-transfection may broaden delivery options further. For advanced-stage IRD patients with significant outer retina loss, vision restoration will focus on bipolar cells, retinal ganglion cells, and the image-forming pathways in the brain. Optogenetic technologies would be ideal for patients whose bipolar cells and retinal ganglion cells remain responsive. 
By contrast, implantable retinal and cortical visual prosthetics are best suited to create artificial vision in patients whose inner retinal function is compromised as well. Creative approaches using tactile devices, like BrainPort Vision Pro, can also expand the population of patients for whom vision restoration is possible.From retinal supplementation and stem cell transplantation to optogenetic therapy and retinal prosthetics, a variety of creative strategies hold promise in the quest to protect or restore vision for a broad population of people living with IRDs. Focusing on gene-agnostic approaches to treating IRDs will expedite the development of meaningful therapeutic solutions for patients. Distinct approaches will be suited to IRD patients at various stages of disease progression. Other aspects of health and financial access may also contribute to the \u201cbest treatment\u201d for a given patient. Gene specific approaches represent the ultimate example of precision medicine and remain highly desirable and critically important to pursue. However, by investigating common targetable disease pathways and putting sufficient parallel emphasis on the development of gene-agnostic IRD therapeutics as well, we can hope to achieve the long-promised \u201cjust around the corner\u201d treatments in time to make a difference for the vast majority of IRD patients."}
+{"text": "During the COVID-19 pandemic, control measures, especially massive contact tracing followed by prompt quarantine and isolation, play an important role in mitigating disease spread, yet quantifying the dynamic contact rate and quarantine rate and estimating their impacts remain challenging. To precisely quantify the intensity of interventions, we build on the mechanism of the physics-informed neural network (PINN) to propose the extended transmission-dynamics-informed neural network (TDINN) algorithm, combining scattered observational data with deep learning and epidemic models. The TDINN algorithm not only avoids assuming specific rate functions in advance but also makes neural networks follow the rules of epidemic systems in the process of learning. We show that the proposed algorithm fits the multi-source epidemic data in the cities of Xi\u2019an, Guangzhou and Yangzhou well, and moreover reconstructs the epidemic development trend in Hainan and Xinjiang from incomplete reported data. We inferred the temporal evolution patterns of contact/quarantine rates, selected the best combination from a family of functions to accurately simulate the contact/quarantine time series learned by the TDINN algorithm, and consequently reconstructed the epidemic process. The selected rate functions based on the time series inferred by deep learning have epidemiologically reasonable meanings. In addition, the proposed TDINN algorithm has been verified on COVID-19 epidemic data with multiple waves in Liaoning province and shows good performance. We find significant fluctuations in the estimated contact/quarantine rates, and a feedback loop between the strengthening/relaxation of intervention strategies and the recurrence of outbreaks. 
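The core PINN idea invoked above, making the network "follow the rules of epidemic systems," amounts to adding an ODE-residual penalty to the usual data-fit loss. A minimal numpy sketch, assuming a simple SIR-type model with removal rate gamma plus quarantine q(t) (the paper's actual compartments and loss are not specified in this excerpt), with finite differences standing in for the automatic differentiation a real PINN would use:

```python
import numpy as np

def tdinn_loss(t, S, I, R, c, q, cases_obs, gamma=0.1, w_phys=1.0):
    """Physics-informed loss: data misfit + residuals of an ASSUMED
    SIR-type model with time-varying contact rate c(t) and additional
    removal via quarantine rate q(t). S, I, R, c, q are arrays on grid t
    (in training they would be neural-network outputs)."""
    dS = np.gradient(S, t)          # finite-difference stand-ins for autodiff
    dI = np.gradient(I, t)
    dR = np.gradient(R, t)
    N = S + I + R
    rS = dS + c * S * I / N                     # dS/dt = -c S I / N
    rI = dI - c * S * I / N + (gamma + q) * I   # dI/dt =  c S I / N - (gamma+q) I
    rR = dR - (gamma + q) * I                   # dR/dt =  (gamma+q) I
    data_loss = np.mean((I - cases_obs) ** 2)   # fit to reported cases
    phys_loss = np.mean(rS**2 + rI**2 + rR**2)  # obey the epidemic ODEs
    return data_loss + w_phys * phys_loss
```

Minimizing such a loss jointly over the network weights fits the data while enforcing the epidemic model, which is how c(t) and q(t) can be inferred without fixing their functional form in advance.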
Moreover, the findings show that there is diversity in the shapes of the temporal evolution curves of the inferred contact/quarantine rates in the considered regions, which indicates variation in the intensity of the control strategies adopted in various regions. When applying a compartment model to simulate disease transmission dynamics, some parameters or particular functions are usually assumed in order to describe the intensity of the control interventions. However, these preset specific functions may not accurately quantify the intervention strategies, which brings great challenges to accurate prediction and evaluation. In this study, we developed an extended transmission-dynamics-informed neural network algorithm by integrating a deep neural network with an epidemic model. Even with insufficient case data, the proposed algorithm can still help us reconstruct the temporal evolution trend of the epidemic and infer unknown parameters. We inferred the time series of the contact rate and quarantine rate for six regions based on the case data, from which reasonable and interpretable functions describing the dynamic variation in the intensity of the control strategies could be successfully selected and determined. The inferred contact/quarantine rates in the various regions show diverse, region-dependent shapes, and hence variation in the intensity of control measures. This suggests that the dynamic zero-case policy exhibited different efficacy in reducing contacts and increasing quarantine and isolation across regions. The COVID-19 pandemic has lasted for three years since the end of 2019. 
Due to the continuous variation of the virus strain and the dynamic adjustment of prevention and control measures, it is a great challenge to propose a dynamic model of infectious diseases to evaluate the effectiveness of non-pharmaceutical interventions (NPIs). Taking the contact rate c(t) and quarantine rate q(t) learned from the TDINN algorithm as observed data, we let \u03b8 and \u03d1 denote the unknown parameter vectors in the candidate rate functions ci(t) and qi(t) (i = 1, 2, 3). The parameters of each candidate function were estimated from the time series inferred by the TDINN algorithm based on the criterion of minimizing the root mean squared error (RMSE), and the functions with the smallest RMSEci and RMSEqi were selected as the best candidates: c2(t) and q2(t) for Xi\u2019an, c3(t) and q2(t) for Guangzhou, c3(t) and q2(t) for Yangzhou, c3(t) and q3(t) for Hainan, and c2(t) and q1(t) for Xinjiang, respectively. Based on these results, we can select the optimal functions to quantify the evolution of the interventions in each region. Considering all possible combinations of the rate functions ci(t) and qj(t) (i, j = 1, 2, 3), we used the average RMSE (ARMSE) as a metric to evaluate the fitting performance of the model on the multi-source data. For each region, the combination listed above leads to the smallest ARMSE value, which indicates that these functions are the optimal choices for quantifying the evolution of the control interventions. 
Based on the optimal rate functions for each region, the epidemic process in each region can be reconstructed. To further illustrate the effectiveness of our proposed method, we also apply the proposed TDINN algorithm to the simulation of multiple waves of COVID-19 infection. To do this, we simulated the dynamics of the epidemic based on daily reported cases in Liaoning province and visualized the simulation results. In fact, as the epidemic initially took off, we observed an increase in the quarantine rate and a decrease in the contact rate due to enhanced intervention measures to mitigate the epidemic. While the outbreak was subsiding, the gradual relaxation of control interventions led the quarantine rate to decline and the contact rate to increase, thereby possibly inducing a resurgence of the epidemic. As a consequence, by comparing the inferred contact rate and quarantine rate with the time series of daily reported cases containing multiple epidemic waves, we can identify this interplay between interventions and outbreaks. During the COVID-19 pandemic, control measures played an important role in mitigating the disease spread. In particular, massive contact tracing followed by prompt quarantine and isolation had a decisive effect in the dynamic clearing of the COVID-19 epidemic in China. Hence, quantifying the dynamic contact rate and quarantine rate and estimating their impacts remain challenging. In this study, we integrated data-driven deep learning and dynamics-driven first-principles modeling, and proposed an extended transmission-dynamics-informed neural network (TDINN) algorithm by encoding an SIR-type compartment model into the neural networks, in order to obtain the time-dependent rate functions of mechanistic models. 
With the developed TDINN algorithm, we simulated the dynamics of COVID-19 infection in Xi\u2019an, Guangzhou, Yangzhou, Hainan, Xinjiang and Liaoning province, by simultaneously inferring the unknown time-independent and time-dependent parameters. The TDINN algorithm enables us to encode the contact rate and quarantine rate derived from the deep neural networks into the compartment model, as well as to integrate the transmission dynamic model into the deep neural networks. It is important to note that the TDINN algorithm overcomes some disadvantages of traditional transmission dynamic models for simulating the development of the COVID-19 epidemic. For example, in the classic compartment model, the contact rate and quarantine rate are usually assumed to be constants or particular time-dependent functions to describe the intensities of the control interventions , 12. In contrast, the TDINN algorithm infers the contact rate c(t) and quarantine rate q(t) directly from the data. The estimated contact/quarantine rates show a region-dependent pattern, in which the contact rate gradually decreases and the quarantine rate gradually increases. In this study, we proposed the TDINN algorithm, which not only extends the traditional transmission dynamic model by embedding the time-dependent functions learned from the deep neural network, but also extends the neural network by embedding the information of the transmission dynamic model. This novel approach enables us to integrate the advantages of the transmission mechanism model and the deep neural network. Compared with traditional dynamic models, the TDINN algorithm has a better ability to learn from data and to infer unknown rate functions. Compared with end-to-end deep learning, our main results are more interpretable due to the incorporation of known propagation mechanisms. 
Furthermore, this method can be easily extended to more complex compartment models to study other aspects of emerging infectious diseases. Our study has some limitations. The transmission dynamic model we consi"}
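The RMSE-based selection of candidate rate functions described in the record above can be sketched as follows. This is a minimal illustration, assuming a hypothetical family of three candidate shapes and a synthetic stand-in for the TDINN-learned contact rate; the paper's actual candidate functions, parameter estimates, and data are not reproduced here.

```python
import numpy as np

# Sketch of the selection step: the time series c(t) "learned" by the TDINN
# is treated as observed data, and each candidate function is fitted by
# minimizing the RMSE. Candidate forms and grids are illustrative assumptions.

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def fit_candidate(f, grid, t, series):
    """Coarse grid search over parameter tuples; returns the best RMSE."""
    best = np.inf
    for params in grid:
        best = min(best, rmse(f(t, *params), series))
    return best

t = np.linspace(0, 60, 121)                 # days
c_learned = 0.2 + 0.8 * np.exp(-0.1 * t)    # synthetic stand-in for TDINN output

# Shared illustrative parameter grid (c0, c1, k).
grid = [(c0, c1, k) for c0 in np.linspace(0.1, 0.4, 7)
                    for c1 in np.linspace(0.4, 1.0, 7)
                    for k in np.linspace(0.02, 0.3, 15)]

# Three hypothetical candidate shapes for c(t).
candidates = {
    "c1_const":    lambda t, c0, c1, k: np.full_like(t, c0 + c1),
    "c2_exp":      lambda t, c0, c1, k: c0 + c1 * np.exp(-k * t),
    "c3_logistic": lambda t, c0, c1, k: c0 + c1 / (1 + np.exp(k * (t - 30))),
}

scores = {name: fit_candidate(f, grid, t, c_learned)
          for name, f in candidates.items()}
best_name = min(scores, key=scores.get)   # candidate with smallest RMSE wins
```

Selecting by the smallest RMSE over a parameter sweep mirrors the criterion described in the text; in practice a proper nonlinear least-squares optimizer would replace the coarse grid search.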
+{"text": "The development of wood-based thermoplastic polymers that can replace synthetic plastics is of high environmental importance, and previous studies have indicated that cellulose-rich fiber containing dialcohol cellulose (ring-opened cellulose) is a very promising candidate material. In this study, molecular dynamics simulations, complemented with experiments, were used to investigate how and why the degree of ring opening influences the properties of dialcohol cellulose, and how temperature and the presence of water affect the material properties. Mechanical tensile properties, diffusion/mobility-related properties, densities, glass-transition temperatures, potential energies, hydrogen bonds, and free volumes were simulated for amorphous cellulosic materials with 0\u2013100% ring opening, at ambient and high (150 \u00b0C) temperatures, with and without water. The simulations showed that the impact of ring openings, with respect to providing molecular mobility, was higher at high temperatures. This was also observed experimentally. Hence, the ring opening had the strongest beneficial effect on \u201cprocessability\u201d (reduced stiffness and strength) above the glass-transition temperature and in wet conditions. It also had the effect of lowering the glass-transition temperature. The results here showed that molecular dynamics is a valuable tool in the development of wood-based materials with optimal thermoplastic properties. 
Wood-derived materials are particularly interesting, due to the natural abundance, biodegradability, and regrowth of wood. All three main components of wood are interesting when developing new bio-based thermoplastics; however, in this study, the focus is on materials originating from cellulose, a resource of high interest for the forest industry and society today. The thermoplastic properties of cellulose-based materials depend on a multitude of factors, including molecular structure, intermolecular interactions, crystallinity, fibril structure, and the hierarchical structure of fibers, as well as the presence of plasticizers and chemical modifications. To facilitate the replacement of fossil-based plastic materials with bio-based cellulose materials, fundamental knowledge about the underlying mechanisms that influence the processing and final properties is required. Dialcohol cellulose, i.e., modified cellulose where the bond between the C2 and C3 carbon atoms in the ring structure is cleaved, shows interesting properties, since this modification has led to increased ductility and decreased glass-transition temperature (Tg).4 This indicates that several thermo-mechanical properties of cellulose materials can be improved by ring opening of the glucose unit. One important goal of this study is to examine how and why the degree of ring opening in cellulose influences the thermoplastic properties of the cellulose material. Atomistic molecular dynamics (MD) computer models of systems with disordered amorphous cellulose were used to assess the effects of ring opening on the molecular behavior of the cellulose, not considering the separate complex effects of changing the fiber morphology and the supramolecular structure of the cellulose in the fiber wall. When replacing a fossil-based plastic material, the new bio-based material should, for economic and practical reasons, have approximately the same characteristics as the former material. 
Ideally, the new material should also be processable in existing processing equipment, to avoid having to develop new, less effective processing techniques; the existing techniques have been optimized over many years.6 In the case of cellulose, the presence of water will also affect the thermoplastic and mechanical properties; it is well known that its plasticizing effect will have an impact on, e.g., Tg.7 For polymeric materials, such as cellulose derivatives, the thermoplastic properties depend on polymer chain interactions, including hydrogen bonds, dispersive and electrostatic interactions, and chain entanglements.9
Experimental methods, requiring preparation and characterization of physical samples, are typically both time-consuming and labor-intensive. This limits the maximum number of samples and thus the number of variables that can be studied. Computer simulations do not have this limitation and can be used as a valuable complement. Molecular dynamics simulation is a useful technique for investigating atomistic interactions in cellulosic and polymeric materials, both in crystalline phases and amorphous systems, since it is relatively fast and often reproduces material property trends accurately. MD simulations are efficient for predicting and explaining molecular interactions and material properties but are still naturally limited mainly by the available computational resources, which constrain the number of atoms and the time scale of the simulations. The limitations of MD can partly be mitigated with periodic boundary conditions, mimicking infinitely large molecular systems. Current time limits are in the nano- to microsecond range, which, however, is sufficient for describing several material properties, such as density or Tg.5 MD simulations involving time-dependent processes, such as tensile testing, are by necessity performed at very high deformation rates, leading, e.g., to higher tensile strengths and moduli than measured experimentally.10 Awareness of the differences between experimental and simulated polymer systems is thus necessary to correctly interpret the results. Simulations have a high potential to accelerate future material research, but experimental verifications of key findings are still necessary. When modeling polymers with MD, the chains are shorter than most real polymer chains, to enable system equilibration within reasonable CPU time, but still sufficiently long to avoid spurious contributions from chain ends, particularly in low-mobility systems.11
The reason for comparing MD with experiments, despite the differences in microstructure/crystallinity, is to see the general trends in, e.g., how the ring opening affects the glass-transition temperature. The MD results were, when possible, compared to the properties of real 100% ring-opened dialcohol cellulose (samples prepared for this study), but the mechanical properties were compared to DMA measurements of dialcohol cellulose with 0\u201340% ring openings (samples from a previous study).14 In this work, fully atomistic MD simulations were performed for amorphous cellulose and dialcohol cellulose systems. The influence of temperature, water content, and degree of ring opening was investigated. Two temperatures (room temperature (23 \u00b0C) and 150 \u00b0C, a temperature that has been used to extrude cellulosic materials) and two water contents (0 and 25 wt %) were used. The water content was chosen to span over the actual water content used for the previously extruded material.13 The degree of modification, i.e., the percentage of cellulose repeat units being converted into ring-opened cellulose, was evenly distributed between 0 and 100% conversion. To examine how the three variables influenced the material, several material properties, including pressure\u2013volume\u2013temperature (PVT), free volume, structural changes, mobility/diffusivity, tensile properties, and electrostatic interactions, were investigated. Since it is difficult to fully encapsulate the complexity of real cellulosic materials, the molecular systems in this study were simplified to avoid higher hierarchical structures, such as fibril structures, and focus on amorphous systems, which can still indicate the behavior of the materials on a grander scale.
2 2.1 2.1.1 Cellulose and dialcohol cellulose repeating units were prepared in Biovia Materials Studio (2016). Cellulose was constructed from d-glucose units, connected between the C1 and C4 carbons using \u03b2-glycosidic bonds. Dialcohol cellulose was then created by starting from a cellulose template, where the bond between the C2 and C3 carbons was removed, and the resulting structure was hydrogenized. The dry polymer systems contained 20 chains, whereas the wet systems, which contained 25 wt % water, had 16 chains. Each chain comprised 50 repeating units, resulting in structures with ca. 20,000 atoms. The number of chains was chosen such that all systems would have approximately the same number of atoms, be sufficiently large, and be reasonably fast to simulate, i.e., not contain too many atoms. Pure cellulose and dialcohol cellulose chains were created using a single type of repeat unit, whereas mixed systems with 25, 50, and 75% ring openings were created using a script that generated chains with a certain fraction of ring openings and randomly placed cellulose and dialcohol cellulose units. Another script was used to convert the Material Studio data files to GROMACS-compatible format. A 21-step decompression method was used to equilibrate the system,5 after which a 10 ns NPT simulation was used to set the system to the desired simulation temperature. The Debyer software (https://github.com/wojdyr/debyer) was used to obtain the X-ray diffraction (XRD) patterns of the simulated systems, using 1/3 of the box length as cutoff, a step size of 0.1 \u00c5, and a wavelength of 1.54 \u00c5. Repeating units of cellulose and dialcohol cellulose are shown in Tables S1 and S2.
2.1.2 To predict the density, specific volume, glass-transition temperature (Tg), and coefficient of thermal expansion (CTE) of the materials, a series of isothermal\u2013isobaric ensemble (NPT) simulations using Parrinello\u2013Rahman pressure coupling were performed at 1 atm pressure. The systems, with and without water, started at 575 K (302 \u00b0C) and 800 K (527 \u00b0C), respectively, going down to 150 K (\u2212123 \u00b0C) in decrements of 25 K. The reason for the lower starting temperature for the systems with water is that the wet systems become unstable around 600 K, due to boiling of the water model. Experimentally, cellulose would degrade at temperatures lower than 600 K, but as the model doesn\u2019t allow bond breaking or degradation, the simulated temperatures are feasible. The molecular systems were equilibrated for 30 ns at the starting temperature, using a time step of 1 fs. Due to the small temperature decrements, a somewhat shorter equilibration time (10 ns) was used for the subsequent temperature steps. The specific volume at each temperature was calculated as the average volume of the simulation box during the final 0.5 ns of the simulation, using the GROMACS built-in function gmx energy. The system densities at 296 K (23 \u00b0C) and 423 K (150 \u00b0C) were evaluated in a similar fashion, using gmx energy, for the final 0.5 ns of 10 ns equilibrations at these temperatures. The precision of the generated PVT data was assessed using three tests: (i) triplicate samples were evaluated for representative systems, (ii) two water models (TIP3P and TIP4P) were compared, and (iii) cooling and heating PVT data were compared. Tg was calculated using broken-stick regression, where two straight lines were fitted to specific volume data in two different temperature regions: one line was fitted in the glassy phase below Tg and one in the rubbery phase above Tg. The intersection between the two lines was defined as Tg. A linear fit using the seven lowest and the seven highest temperatures was used in the broken-stick regression. The coefficient of thermal expansion (CTE) was determined as the reciprocal specific volume of the material multiplied by the volume change with respect to temperature, CTE = (1/v)(\u2202v/\u2202T).
2.1.3 Deformation simulations were performed using a semi-isotropic Parrinello\u2013Rahman pressure coupling. The systems were isotropically coupled in two of the directions and were deformed in the third direction at a rate of 0.001 \u03bcm/ns for approximately 6 ns, or until the system had reached 100% strain. Note that the chain lengths and the system size can affect the yield strength significantly16 and that large systems are recommended. Tensile responses were checked in the X, Y, and Z directions, as is standard practice,15 to make sure, by rotating the simulation box, that we have similar tensile responses in all directions. The (true) stress \u03c3 was defined as the negative pressure tensor in the deformation direction, \u03c3 = \u2212Pz. The pressure in the z-direction (Pz) fluctuates significantly, and to compensate for this, the stress was calculated as the rolling average over strains \u00b12.5% from the current strain. To account for the absence of measurement points before the start of the simulation, the stress was set to zero at these points. The yield strength was determined as the maximum value in the averaged curve between 3 and 97% strain. The engineering strain \u03f5 = (Lt \u2212 L0)/L0 was used, where Lt is the length of the simulation box (in the direction of extension) at time t and L0 is the initial length. The Young\u2019s modulus, E, was defined as the slope of the initial linear part of the stress\u2013strain curve.
2.1.4 The mobility of water molecules and polymer chains was determined from their three-dimensional Brownian motion. The diffusivity DX of species X (water or polymer chain) was computed from the mean square displacement (MSD) of 10 ns canonical ensemble (NVT) simulations, using Einstein\u2019s relation,17 DX = \u27e8|ri(t) \u2212 ri(0)|\u00b2\u27e9/(6t) in the long-time limit, where ri(t) is the center of mass of molecule i at time t. Only the linear or near-linear part of the MSD curves was used, to avoid artifacts from the initial ballistic behavior, the subsequent cage-like diffusion, and the poor statistics at the end of the curve. The diffusivity/mobility of a species is coupled partly to the free volume of the system, i.e., the unoccupied space in the system, which is related to the molecular packing efficiency.18 For penetrant diffusion, the effective free volume depends on the size of the penetrant molecules. The fractional free volume (FFV) is defined as FFV = (Vtotal \u2212 Voccupied)/Vtotal, where Voccupied is the volume occupied by the van der Waals volume of the atoms and Vtotal is the total volume of the system. By inserting spherical probes with different radii in the molecular system, the FFV and the FFV distribution were determined as a function of probe radius. The FFV can be used to predict the diffusivity of penetrant molecules in the system.19 
The free volume fluctuates slightly due to molecular movement and oscillations, and it is affected by the molecular structure and intermolecular forces. If the molecular cohesion is high, the molecules become more tightly packed, and the free volume decreases.
2.1.5 Hydrogen bonds are common in polar hygroscopic polymers and play an important role in the molecular mobility of the polymer and its interactions with water. In the simulations, hydrogen bonds were defined as configurations with donor\u2013acceptor distances <0.35 nm and hydrogen-donor-acceptor angles <30\u00b0. The hydrogen-bond time autocorrelation function CHB(t) was computed from hi(t), a binary function which is 1 if hydrogen bond i exists at time t and 0 otherwise. Hydrogen-bond interactions were computed over a 10 ns interval for each combination of species, e.g., polymer\u2013polymer, polymer\u2013water, and water\u2013water. As the polymer systems were quite immobile and a significant portion of the hydrogen bonds had a much longer lifetime than the simulation time of 10 ns, C(\u03c4) was fitted to a sum of exponential decay terms, where term i with weight Ki corresponded to process i. N = 1 or 2 is usually sufficient for rapid processes like water\u2013water interactions,20 and N = 2 was used when the data was extrapolated to 100 ns. The hydrogen-bond density was computed as the average number of hydrogen bonds over the 10 ns simulation, divided by the volume of the computational box. Integrating CHB over time, using trapezoidal numerical integration, gives an estimate of the average hydrogen-bond lifetime \u03c4HB.
2.2 2.2.1 First, a dialdehyde cellulose solution was prepared, which in turn was converted to dialcohol cellulose. A sodium metaperiodate solution and microcrystalline cellulose, MCC, were mixed at a molar ratio of 1.1 sodium metaperiodate/1,4-anhydro-d-glucose units, by dissolving the sodium metaperiodate in 1500 mL of deionized water in a 2 L Erlenmeyer flask. Isopropanol was then added as a scavenger, after which the pH was adjusted to 3.6\u20134 using acetic acid. An amount of 30 g of MCC, which had been dried in an oven at 50 \u00b0C, was then added to the solution and stirred at 200 rpm at room temperature. The solution was kept in a dark environment until the consumption of sodium metaperiodate, determined by UV\u2013vis spectroscopy at 290 nm, corresponded to a degree of modification of 100%. The solution was then washed repeatedly with distilled (DI) water until the UV absorbance of the washing water was similar to that of the DI water used. This wet dialdehyde cellulose was then stored at 4 \u00b0C until further use. To prepare dialcohol cellulose, the dialdehyde cellulose was resuspended in 200 mL of DI water in a 2 L flask for at least 30 min, after which 0.02 M monobasic sodium phosphate was added. A mass of 15 g of sodium borohydride and 100 mL of DI water were then slowly added dropwise into the suspension. The suspension was stirred at 200 rpm for 4 h at room temperature, after which it was dialyzed for 1 week against DI water, and then dried in an oven at 40 \u00b0C.
2.2.2 2.2.2.1 Thermogravimetric analysis (TGA), using a Mettler Toledo TGA/DSC 1, was performed with a 5 mg sample placed in a 70 \u03bcL alumina crucible. It was heated from 30 to 600 \u00b0C at a heating rate of 10 \u00b0C/min using a N2 purge gas flow of 50 mL/min.
2.2.2.2 Differential scanning calorimetry (DSC) measurements were performed using a Mettler Toledo DSC 1. The sample, with a weight of 5.5 mg, was placed in a 40 \u03bcL aluminum pan having a pierced lid. The temperature was first kept at \u221230 \u00b0C for 5 min, whereafter it was raised to 220 \u00b0C at a heating rate of 20 \u00b0C/min. After 5 min at 220 \u00b0C, the temperature was decreased to \u221230 \u00b0C at a cooling rate of 20 \u00b0C/min and kept at \u221230 \u00b0C for 5 min, before the whole cycle was repeated. The N2 purge gas flow was 50 mL/min. The high heating and cooling rates were chosen to be able to observe the glass transition more clearly.
2.2.2.3 Fourier transform infrared (FTIR) spectroscopy absorbance was measured using a PerkinElmer Spectrum 100 FTIR spectrometer from 600 to 4000 cm\u20131 with a built-in universal ATR. The scanning step was set to 1 cm\u20131, with a resolution of 4 cm\u20131; 16 scans were recorded for each spectrum.
2.2.2.4 A PANalytical X\u2019Pert Pro was used for the XRD measurements, using a Cu K\u03b1 radiation source (wavelength of 1.54 \u00c5) operating at 45 kV and 40 mA.
2.2.2.5 The density measurement was performed using Archimedes\u2019 principle with a Dichtebest Festkoerper FNR 33,360 density testing kit attached to an XR 205SM-DR balance scale. The measurement was performed at room temperature using n-heptane as the liquid. The sample density \u03c12 was calculated as \u03c12 = A(\u03c10 \u2212 \u03c11)/(A \u2212 B) + \u03c11, where \u03c10 is the density of n-heptane (0.6838 g/cm3), \u03c11 is the density of air (0.0012 g/cm3), and A and B are the weights in air and n-heptane, respectively.21 Three replicates were measured.
3 3.1 As mentioned above, DSC revealed that the dialcohol cellulose had a substantially lower glass-transition temperature than cellulose, shown in Figure S3. What was apparent also was that the dialcohol cellulose sample cold-crystallized above the glass-transition temperature (in the approximate region of 120\u2013160 \u00b0C), as observed clearly in the second heating seen in Figure S3. The subsequent melting (the peak at approximately 180 \u00b0C) involved a larger endothermal change in enthalpy than the exothermal peak preceding it, which indicated that the sample contained crystals after the cooling from the first heating. The low endothermal enthalpy change, ca. 17 J/g, indicated, however, a low overall crystallinity. A small crystallization exotherm was observed in the cooling curve. 
The first heating curve was less easily interpreted due to a broad endothermal signal from the evaporation of water. However, the endothermal peak above 200 \u00b0C indicated melting of crystals existing in the pristine material and/or formed in the cold-crystallization process; the former is supported by the XRD data. The simulated densities at 23 and 150 \u00b0C14 were analyzed in more detail. This revealed a linear decrease in density with an increasing degree of ring opening. The simulated density of cellulose (at 23 \u00b0C and 0% water) was close to those previously reported for amorphous or paracrystalline cellulose from simulations24 and experiments.26 According to the literature,27 crystalline cellulose has a density of 1582\u20131630 kg/m3, while simulated cellulose24 has been shown to have a density of 1400\u20131450 kg/m3. However, the density of the simulated (ring-opened) dialcohol cellulose (1320 kg/m3) was lower than that observed experimentally here (1450 kg/m3) at 23 \u00b0C and 0% water. This difference in density is most probably a consequence of the presence of a crystalline or \u201cparacrystalline/semiordered\u201d component in the experimental samples, as indicated by the XRD curve with several narrow peaks, the most prominent occurring around 15\u00b0 (2\u03b8). The simulated specific volume of the dry systems increased with increasing temperature and degree of ring opening.
Two common MD models for water are TIP3P and TIP4P, where TIP3P is optimized for the CHARMM36 force field, which was used in all of the simulations. The two water models were compared in the Tg simulations, but the difference was on average less than 1%. Hence, the computationally cheaper TIP3P was used in the study, as recommended for CHARMM36. To validate that the PVT properties were generated with a \u201csufficiently\u201d relaxed molecular structure, the PVT curves were generated first from low-to-high (LH) temperature and then from high-to-low (HL) temperature. Notably, the most rigid systems (0 and 50% ring opening) showed an upturn in specific volume when heated to the glass-transition region (LH). The difference is expected, considering the very rapid change in temperature in the simulations. Expected was also the significantly lower Tg in the presence of water.
X-ray diffraction (XRD) patterns of the simulated materials were obtained using the Debyer software. The dominant peaks were assigned to lattice planes, including the (002) planes.34 When comparing simulated and experimental XRD spectra for 100% ring-opened dialcohol cellulose, these peaks nearly coincide, although the experimental curve also has some smaller peaks in the range of 21\u201330\u00b0. Experimentally, for pure cellulose, the peak at 22\u00b0 normally dominates,35 whereas for our 100% dialcohol cellulose, the peak at 15\u00b0 dominates. Also in the simulations, the height of the 15\u00b0 peak increases distinctly, whereas the peak around 22\u00b0 decreases, with increasing degree of ring opening. Since the trend of the XRD simulations clearly coincides with the experiments, this is obviously a real consequence of an increased degree of ring opening. This observation could be coupled to experimental findings from the literature, where dialcohol cellulose crystallizes more readily with increasing degree of modification.36
The radial distribution function (RDF) for the C4 carbon was used to further investigate the ordering of the atoms,37 as seen in Figure S5. The height of the first RDF peak, around 5 \u00c5, decreases with increasing degree of ring opening and becomes higher in the presence of water. A comparison with RDF data from Kulasinski9 concluded that although there is some ordering in our simulated ring-opened systems, the simulated structures are clearly not crystalline.
3.2.3 Tensile test simulations were performed to examine the mechanical properties of cellulose with different degrees of ring opening. The tensile test in the molten state can also serve as an indirect assessment of the elongational viscosity in a thermoplastic processing operation. Visualization of a representative dry dialcohol cellulose MD system subjected to 0, 50, and 100% strains reveals that necking (voiding) occurred between 50 and 100% strain, as can be seen in Figure S6. Stress\u2013strain curves were obtained at 23 \u00b0C and 25% water content for pure cellulose and pure dialcohol cellulose, i.e., with 100% ring opening. GROMACS routines for the stress\u2013strain response use stresses on a single face of the computational box. Thus, the response pattern depends on the location of the face and on the void fraction at that position. Since the computational box is periodic, the position of the face can be shifted by translating the atom coordinates in the x-direction.38 The simulated modulus for dry cellulose (8.0 \u00b1 0.5 GPa) is close to previously reported values (from modeling).39 For the wet systems, the modulus decreased with increasing degree of ring opening (RO), but for the dry systems, it rather increased before starting to decrease, showing a maximum around 50\u201375% ring opening. 
The strength showed a similar pattern with respect to the degree of ring opening. It should be noted that the systems are quite different in terms of morphology. The simulated system is 100% amorphous, whereas the experimental systems contain fibers/nanofibrils that have been partly converted to dialcohol cellulose, i.e., still consisting of a fraction of crystalline cellulose. The initial moisture content in the real material is also intermediate to that of the simulated systems. Nevertheless, the trends show some common important features. Within the experimental range of dialcohol cellulose content (0\u201340%) and the simulated range (0\u2013100%), both the simulated and experimental data indicate a clear decrease in modulus/stiffness with increasing degree of ring opening at 150 \u00b0C, whereas the decrease is smaller, or even absent, at room temperature.40 In addition, the wet simulated systems contained a larger amount of water. Finally, it should also be noted that the simulated modulus and strength were determined by considering the actual box cross section, which means that up to the yield point they corresponded to true stress, whereas the experimental data were based on the initial sample cross section, i.e., engineering stress. However, this effect is small at strains up to the yield stress.
3.2.4 The polymer\u2013polymer hydrogen-bond lifetimes (\u03c4HB) were several orders of magnitude longer than those of the polymer\u2013water and water\u2013water hydrogen bonds. The intramolecular energies (stretch, bend, and torsional/dihedral) were affected, in general, less than the Coulombic energy and, in the dry system, also the LJ energy during the deformation. However, it is noteworthy that the bend energy decreased in the dry cellulose (0% ring opening) with increasing strain. 
This is probably due to a relaxation of the moleculesassociated with an increase in volume (voiding) during the deformation.This behavior in bend energy has also been observed earlier for astarch/glycerol system.10The total potential energy in the polymer system was strongly affectedby the applied strain seen in 3.3.3\u20137 cm2/s for the wet cellulose systems and2 orders of magnitude lower (ca. 1 \u00d7 10\u20139 cm2/s) for the dry systems than at 150 \u00b0C. The diffusion coefficientsof these solid materials are higher than what would be seen experimentally,but the trends of increasing diffusivity with ring openings are stillnotable.At 23 \u00b0C, the diffusivity of the polymer was,as expected,lower than that at 150 \u00b0C, but it was still linearly dependenton the degree of ring opening, which can be seen in 5 Therefore,the diffusivity of the polymer systems was assessed from the mostlinear parts of the curve, e.g., between 3 and 7 ns, where the MSDrefers better to the random walk process.The diffusivity was obtained from the linear or near-linearpartof the mean square displacement (MSD) curves using 43 This indicatesthat the material can potentially be a good barrier material withlow diffusivity for many penetrant molecules.44 The fractional free volume distribution of cellulose, with or withoutring opening and water and independent of temperature, was also similarto that of the other polymers, with only polystyrene showing a differentdistribution (S6). In dry conditions, the freevolume at a 0.01 nm probe radius decreased slightly with increasingdegree of ring opening, whereas it remained nearly constant in wetconditions seen in An important factor affecting diffusivity/mobilityis the sizeand size distribution of the free volume in the system. 
In 4Dry and wet amorphous systemscontaining amorphous cellulose anddialcohol cellulose were simulated with molecular dynamics in orderto examine how and why the degree of ring openings influences thethermoplastic and mechanical properties of cellulose and dialcoholcellulose. As complement and validation for the simulations, experimentalmeasurements were performed on such materials. The goal of the studywas to understand how improved thermoplastic bioplastics can be derivedfrom cellulose-containing natural sources like wood.Tg decreased with increasing degree of ringopening, whereas diffusion and tensile simulations revealed more complexpatterns. However, the simulations showed what was observed in theexperiments, that the impact of ring opening toward a more \u201cprocessable\u201dmaterial (lower stiffness and yield strength) was greater at a hightemperature (150 \u00b0C). Hence, the simulations showed that theconversion from cellulose to dialcohol cellulose provided increasedmolecular mobility at conditions where thermoplastic processing normallyis performed (above Tg) but has less effecton the material/mechanical properties at ambient conditions. The findingsin this study reveal trends and molecular mechanisms that are valuableto assess for the development of thermoplastic polymers from, e.g.,wood-based natural resources.The mobility,fractional free volume, and polymer diffusivity ofthe cellulose systems were all affected by the presence of water,the actual temperature, and degree of ring opening. These effectscorrelated with changes in molecular interactions/potential energyand hydrogen-bond density and lifetime, which in turn affected themechanical properties of the materials. As expected, the mobilityincreased with increasing temperature and water content, leading tolower elastic modulus and strength but higher polymer diffusivityand ductility (and consequently higher thermoplasticity). The simulateddensity and"}
+{"text": "Little is known about whether digital competence is related to psychological wellbeing, with most previous research focusing on students and elderly people. There is also limited evidence on seasonal changes in psychological wellbeing, particularly in specific groups. Social housing residents are an underserved and under-researched population. The objectives of this study were to explore associations between digital competence and psychological wellbeing , and to explore seasonal effects, in social housing residents.A repeated survey design was used. The Happiness Pulse questionnaire with a bespoke digital module was sent via post or e-mail at four timepoints between July 2021 and July 2022 to 167 social housing residents in West Cornwall, England. There were 110 respondents in total; thirty completed all four questionnaires and 59 completed an autumn/winter and summer questionnaire. Data were analysed using descriptive and inferential methods including regression, repeated measures analysis of variance and panel analysis.Significant positive associations were found between digital self-efficacy and mental wellbeing, and between digital self-efficacy and life satisfaction. However, there were no significant seasonal changes in psychological wellbeing.The findings extend the existing literature beyond student and elderly populations and suggest that improving digital competence is a potential pathway to improving psychological wellbeing. Surveys with larger samples and qualitative studies are needed to elucidate the mechanisms involved.The online version contains supplementary material available at 10.1186/s12889-023-16875-2. The ubiquitous role of digital technology in everyday life has seen an increasing interest in the impact of technology on health and wellbeing. Evidence from surveys and systematic reviews strongly suggests that using technology is beneficial for mental and social wellbeing , 2. 
Studies of digital competence and psychological wellbeing have been limited, with a narrow focus on specific populations. A small number of recent studies have explored the associations between digital competence and wellbeing in educational settings or with older adults. Wang and colleagues found that (survey-assessed) digital competence was associated with reduced burnout in university students; this was due to the indirect effect of digital competence on reducing cognitive load. Seasonal variation in physical health outcomes is well established, with excess winter deaths reported in medical journals for the last 150 years. Regarding mental health, a large clinical literature exists on seasonal variations in mood, with a higher incidence of mood and affective disorders in autumn and winter. Psychological wellbeing is a complex phenomenon influenced by a range of individual, community and place-based factors. Social housing providers are private not-for-profit organisations that provide rental accommodation at around 50\u201360% of market rates for those who may be excluded from the private market due to health or economic circumstances. Smartline is a collaborative programme of research that explores the opportunities for technology to support social housing residents to live healthier and happier lives in their homes and communities. 
The SmaSeeking to understand the role of digital competence in maintaining wellbeing in social housing residents throughout the year, we proposed three hypotheses:Digital self-efficacy will be positively associated with psychological wellbeing as measured by mental wellbeing and life satisfaction.Seasonal variation in psychological wellbeing will be observed, with significantly higher mental wellbeing and life satisfaction in summer compared to autumn/winter.[if hypotheses 1 and 2 hold true]: Digital self-efficacy will have a protective influence on the observed seasonal related reductions in psychological wellbeing .An exploratory survey design was used, with the survey repeated four times during a year: July 2021, November 2021, March 2022 and July 2022. As the majority of questionnaires were completed online, the CHERRIES Checklist for Reporting Results of Internet E-Surveys was usedAll current participants in the Smartline project (n\u2009=\u2009167 in July 2021) were invited to take part in the survey. The Smartline cohort comprises social housing residents aged 18\u2009+\u2009living in properties managed by Coastline Housing in West Cornwall. The invitation to participate in the survey was addressed to the tenancy holder (i.e. one person per household). To minimise digital exclusion in the research process, both online and postal questionnaires were used. For participants with e-mail addresses, an information sheet , behaviours (e.g. frequency of use) and digital competence (e.g. self-efficacy). Questions on specific technologies and technology in general were included. The digital module was informed by theories of behaviour change , 39 and The psychological wellbeing variables selected for this study were:The short Warwick-Edinburgh Mental Wellbeing Scale (SWEMWBS) mental wellbeing summary score. 
This was a single score comprised of seven individual items: optimism, worth, peace of mind, resilience, competence, autonomy and relationships \u201347.The ONS-4 life satisfaction question , a measuThese two variables were selected as they are widely used and well validated measures of psychological wellbeing .A time variable was used to assess seasonal changes, with each completed questionnaire assigned a number from 1 (July 2021) to 4 (July 2022).The general technology self-efficacy question was selected as the digital competence measure. This was deemed an encompassing measure of digital competence; self-efficacy has been recognised as the ultimate outcome of digital competence .One hundred and ten respondents participated in at least one survey, 30 respondents completed only one questionnaire, 21 completed two questionnaires, 29 completed three questionnaires, and 30 completed all four questionnaires. Twenty four people (approximately 22%) opted to complete paper questionnaires.Simple descriptive statistics (i.e. frequencies and percentages) were calculated for demographic data and digital competency for the 110 participants that completed at least one survey. The SWEMWBS scoring protocol was followed to calculate a summary mental wellbeing score for each individual at each timepoint; the seven individual item scores were summed and then transformed according to national norms . Means aOrdinary least squares (OLS) regression with robust standard errors was used to explore the associations between digital self-efficacy and mental wellbeing, and between digital self-efficacy and life satisfaction was used to explore seasonal changes in mental wellbeing and life satisfaction mental wellbeing and (b) life satisfaction. 
Here we found that people with higher digital self-efficacy reported higher mental wellbeing as assessed by the SWEMWBS score, and this relationship was statistically significant (p\u2009=\u20090.001) when controlling for age, gender and disability status. Of note, the older age group in this sample (65\u2009+\u2009years) had significantly higher mental wellbeing than those aged 35 to 64 years (p\u2009<\u20090.001). People reporting one or more disabilities had significantly lower life satisfaction than those with no disabilities (p\u2009=\u20090.006). Seasonal changes in psychological wellbeing. The mean mental wellbeing and life satisfaction scores by season are presented in the table. The ANOVA analysis found no significant changes in mental wellbeing between all four waves of the survey (F\u2009=\u20091.06, p\u2009=\u20090.372) or between autumn/winter and summer. Similarly, there were no significant changes in life satisfaction between all four survey waves or between autumn/winter and summer, nor in the panel analysis controlling for age, gender and disability. This study explored the associations between digital competence and psychological wellbeing, in addition to considering seasonal changes in wellbeing for social housing residents. We found a significant positive association between digital self-efficacy and mental wellbeing, and between digital self-efficacy and life satisfaction. The measure was selected for reasons of familiarity to participants, consistency and comparability. Although this was a small-scale study, the survey tool and methods may be used as a blueprint for larger population surveys. The questionnaire could be used or adapted by social housing organisations or councils to explore digital competence and wellbeing of residents, and to inform the development and evaluation of interventions. 
The repeated survey approach is recommended to explore seasonal changes and provide stronger evidence of causal links in the relationships between technology and wellbeing , 12.Social housing residents are at high risk of both digital and social exclusion , 58. In In this repeated survey study of social housing residents, we found evidence of a positive association between digital self-efficacy and mental wellbeing, and between digital self-efficacy and life satisfaction. The findings extend the existing literature beyond student and elderly populations and suggest that improving digital competence is a potential pathway to improving wellbeing in social housing residents. Surveys with larger samples and qualitative studies are needed to establish whether the findings may apply to other socio-economically disadvantaged populations, identify any seasonal effects, and elucidate the mechanisms involved. Our findings highlight the value of investing in digital inclusion for social housing organisations, public health practitioners, councils, commissioners, and policymakers, and may also be of interest to technology developers and marketers.Below is the link to the electronic supplementary material.Supporting File 1: Participant information sheetSupporting File 2: The questionnaire including the Happiness Pulse and digital module"}
+{"text": "Penicillium rubens, have been identified in spacecraft, the effect of microgravity on fungal biofilm formation is unknown. This study sent seven material surfaces inoculated with spores of P. rubens to the International Space Station and allowed biofilms to form for 10, 15, and 20 days to understand the effects of microgravity on biofilm morphology and growth. In general, microgravity did not induce changes in the shape of biofilms, nor did it affect growth in terms of biomass, thickness, and surface area coverage. However, microgravity increased or decreased biofilm formation in some cases, and this was incubation-time- and material-dependent. Nanograss was the material with significantly less biofilm formation, both in microgravity and on Earth, and it could potentially be interfering with hyphal adhesion and/or spore germination. Additionally, a decrease in biofilm formation at 20 days, potentially due to nutrient depletion, was seen in some space and Earth samples and was material-dependent.Fungi biofilms have been found growing on spacecraft surfaces such as windows, piping, cables, etc. The contamination of these surfaces with fungi, although undesirable, is highly difficult to avoid. While several biofilm forming species, including Biofilms are cell populations that grow embedded in an extracellular matrix (ECM), which has effects on its ability to adhere to itself and to surfaces, and which changes the interaction between cells and nutrients, quorum-sensing molecules, and its environment in general . For exaOne of the most important phases in biofilm formation is adhesion and depends largely on cell\u2013surface and cell\u2013cell adhesion. Following adhesion, microbes proliferate and form the ECM. The ECM has antimicrobial-resistant properties complicating the removal of fungal cells from surfaces. 
If the process of adhesion is blocked, proliferation and formation of the extracellular matrix will be prevented, providing an ideal way to stop biofilm development. Penicillium chrysogenum is now called Penicillium rubens, and Penicillium has been the most isolated genus of fungi in spacecraft. Because fungal biofilms are challenging to remove from surfaces due to the ECM, fungal biofilms, or molds, which in the context of this investigation are considered equivalent to fungal biofilms, are a concern for human spaceflight for at least two reasons. The first comes from the potential damage to the surfaces (namely equipment) upon which they grow. Fungal biofilms can directly damage surfaces by using the material as a source of energy, or indirectly by degrading it with enzymes and other metabolic byproducts. The second reason why fungal biofilms are a concern to human spaceflight is medical in nature. The ECM and altered cell\u2013cell communication via quorum-sensing molecules can increase antifungal resistance and pathogenicity compared to planktonic cells, and Penicillium rubens has been reported to cause rare but severe cases of esophagitis, endophthalmitis, and invasive pulmonary mycosis among immunocompromised individuals. Here, we report on P. rubens biofilms grown for 10, 15, or 20 days on seven different spacecraft- and nosocomial-relevant materials on board the ISS, and how they compare against their matched ground controls. Separately, we have reported on the original design of this experiment. While these challenges are significant for space stations orbiting Earth, even greater complications can arise in long-term missions. For example, in case of an emergency, such as critical equipment failure or urgent medical needs, the crew can potentially be back on the ground in a matter of hours. 
However, this will not be true for missions beyond the lower Earth orbit (LEO), including trips around the Moon, Earth\u2013Mars transit, or on space habitats on the Moon and Mars . Althougperiment and on tperiment ,30,31. D\u00ae 28089\u2122Penicillium rubens ATCC was chosen given the genus ubiquity in space stations and this specific strain\u2019s presence (and damage caused) in the Mir space station [Aluminum Alloy (Al6061), as it is used in spacecraft structures, thermal control, structures for electronic devices and panels, etc.;Stainless Steel 316 (SS316), as it is used in spacecraft environmental control and life support system (ECLSS) tanks and tubing (including for potable water), and Extravehicular Mobility Unit (EMU) elements, and on Earth in surgical equipment and implants;P. rubens);Quartz, as it is used in spacecraft windows , as it is used in spacecraft structures, antennae, pressure vessels, brackets, fittings, etc., and on Earth in implants;Carbon fiber, as it is used in spacecraft aeroshells and other applications;MIT Nanograss, chosen to be interrogated as a potential solution to fungal biofilm formation in space. This was developed by the Massachusetts Institute of Technology (MIT) and is the substrate (nanoetched silicon wafer) described in the making of a lubricant-impregnated surface (LIS) without the lubricant oil and treated to increase hydrophobicity . station . The expt0) set of samples that included seven replicates per material was also prepared.For each of these materials except SS316, six replicates of each incubation time were launched to ISS to be fixed with 4% paraformaldehyde (PFA) at the end of the incubation period (for the post-flight morphology studies presented in this manuscript). SS316 was set up similarly but had 12 replicates per incubation time to enable more in-depth morphology analyses. 
Additionally, all materials had seven replicates of per incubation time launched to the ISS to be preserved in RNAlater for post-flight differential gene expression analyses (samples available through NASA\u2019s space Biology Biospecimen Sharing Program). This yielded a total of 288 space samples. An equivalent set (288 samples) was prepared at the same time as the Earth control. Additionally, a time zero , unless otherwise specified.P. rubens was rehydrated with sterile water for 30 h, and then used to inoculate potato glucose agar plates. After incubation for eight days at 25 \u00b0C, the plates were flooded with 6 mL of 1X PBS (Sigma Cat. P4417) and colonies were gently rubbed with an inoculation loop to dislodge spores. The spore solution was then used to inoculate new PGA plates (working plates) in a 6 \u00d7 6 grid fashion. The working plates were six days old when used to inoculate the material coupons with spores were shortened to 13 mm height at BioServe Space Technologies, bag washed with 1% Liquinox (m/v) solution in distilled water, rinsed with distilled water, dried at 100 \u00b0C, and autoclaved for 30 min at 121 \u00b0C (dry cycle) before plate assembly . More de2 material coupons to the bottom of the coupons (over the label). The clear side of the tape was then peeled in preparation for spore inoculation for 30 min and allowed to air dry overnight at room temperature; this supplemented the surface with nutrients to promote fungal growth over the coupons. The next day, coupons were carefully handled to stick the cardboard side of 1 cmculation .P. rubens grown on PGA plates 24-well plate a. An inoFour loaded plates (inside a bag) and a temperature and humidity recorder were housed in each of the spaceflight hardware: BioServe Space Technologies\u2019 PHAB . The temThe assembled PHABs were transferred to cold stow (4 \u00b0C), where they remained at this temperature for pre-launch, launch, and ISS stowage until experiment activation. 
The reduction in temperature and the placement of plates inside a plastic bag to reduce gas permeability help prevent activation until samples reach microgravity. Additionally, the PHABs were maintained oriented during pre-launch stowage and during launch so that the coupons were on top to minimize the risk of detachment from the risers.t0 samples did not undergo any cold stowage, but were immediately activated after inoculation by transferring to 25 \u00b0C for 6 h to allow spores to adhere to the surface, then fixed with 4% PFA on Earth. The objective of including these samples was to determine if they showed a discrepancy in spore count between materials at the start of the experiment and to determine if later comparison across material type was valid, or a result of unequal inoculation. The 6-hour incubation was chosen to allow enough time for spore adherence to prevent spores from washing away during staining.The The Earth controls were performed following the same timeline as the flight set, albeit with a 2-hour delayed start, described as step 3 in The start of the experiment began at sample \u201cactivation\u201d when samples were removed from cold stow (after ~12.5 days) and provided with high relative humidity (RH) (> 90%), oxygen availability (ISS\u2019 atmosphere), warm temperature (25 \u00b0C), and darkness, rendering the ideal environment for fungal growth. To activate the samples, the PHABs were opened, the plastic bag around the four plates was removed, and the two pieces of absorbent mat (one on the lid and one on the floor of the PHAB) were wet with three water syringes . Then, the plates were returned to the PHAB, and the PHAB was placed at 25 \u00b0C undisturbed for 10 d 2 h, 14 d 19 h, or 20 d 3.5 h.After the designated incubation period, the samples were \u201cterminated\u201d while in microgravity to fix the spaceflight morphology or preserve the spaceflight transcriptome. 
Termination of the experiment consisted of flooding the wells with either 4% PFA (morphology samples) or RNAlater (transcriptomic samples), and took place inside the Life Sciences Glovebox (LSG). To terminate the samples, the Breathe-Easy membrane was pierced with a needle to inject enough fixative or preservative into each well. The punctured membranes were dried from the outside with sterile gauze, and sterile RNase-free sealing strips were placed over to seal the holes left by the needles. Terminated plates fixed with 4% PFA were stowed at 4 \u00b0C, and plates preserved with RNAlater at \u221295 \u00b0C on orbit; immediately upon receiving them back on Earth, they were stowed at 4 and \u221280 \u00b0C, respectively, to prevent sample degradation.Upon return to Earth, the plates with the fixed coupons were maintained at 4 \u00b0C. When ready for morphology data acquisition, the coupons were transferred (without touching the coupon\u2019s surface with the biofilm) into a new 24-well plate in batches of six or eight (space and Earth control of a respective coupon treated on the same session). Each coupon was stained with a mixture of 400 \u00b5L of Calcofluor White , staining the chitin present in fungal cell walls, and 400 \u00b5L of Biofilm Ruby , marking proteins present in the extracellular matrix, for 30 min protected from light. Then, each coupon was gently dipped in distilled water four times to remove excess stain and glued onto a microscope slide. To prepare the slides for microscopy, 65 \u00b5L of VectaShield anti-fade hardening mounting medium and a cover slip were gently placed on top of the coupon. The coupons then stayed protected from light for 2.5 h to allow the mounting medium to harden. Microscopy images were acquired with a Nikon A1 Confocal Microscope using the 40 \u00d7 0.6NA objective and the NIS Elements 5.21.03 imaging software. 
Two emission filters were used: 425\u2013475 nm for Calcofluor White and 500\u2013550 nm for Biofilm Ruby, with a 405 nm and 561 nm laser, respectively. The 3D structure of the biofilm was captured using a z-stack , and each coupon was imaged in one large field . The coupons were always imaged in the center to avoid potential user bias looking for a particular section of biofilm.After imaging the biofilms, a qualitative analysis was performed on all the sample images using the NIS Elements Viewer 5.21.00 software . The qualitative analysis consisted of describing the morphology of the biofilms to find differences and similarities of the biofilms, namely observations regarding structure and distribution, based on material surface, incubation time, and gravity condition.The images taken with the microscope were stored as .nd2 files, which is an incompatible format for quantitative analysis with COMSTAT2. Thus, all the images were converted to .tiff files using ImageJ v1.48 . To thisFor the analysis process, COMSTAT 2 software v2.1 ,37,38 wa0t samples was produced by COMSTAT2 using the same methodology. Thickness and surface area were not calculated as these samples only contained spores which could be stacked or clumped together, potentially leading to a false statistical significance when comparing amount of spores. Biomass was hence deemed the best parameter to examine for this set of samples as it takes into account the overall volume occupied by the spores.The biomass of each of the p < 0.05). Statistical analysis was performed using R version 4.1.2 in RStudio . The data obtained from the quantitative analysis were first tested for normality and homogeneity to determine if parametric or nonparametric statistical tests would be used. Normality was tested using the Shapiro\u2013Wilk test and homogeneity was tested using Levene\u2019s test. 
The data did not comply with the assumptions of normality and homogeneity (only a few subsets complied); therefore, nonparametric statistical tests were used. The Kruskal\u2013Wallis test and Dunn\u2019s test with Bonferroni correction were used to compare the median values of biomass and surface area coverage of biofilms based on the gravity condition and incubation time per material, as well as to compare the differences between materials. A significance level of 0.05 was used. There were no significant differences in P. rubens biofilm mass or thickness when compared throughout the gravity conditions and incubation times. Nevertheless, there were two statistical differences in surface area coverage under two comparisons. Biofilms formed on Earth showed differences between days 10 and 20, where the surface area coverage had an 85.8% decrease from day 10 to day 20 (p < 0.05). However, after 15 days of growth, the spaceflight samples had 37.8% more surface area coverage than the Earth controls (p < 0.05); such an increase in microgravity was also noticeable in the microscopy images. When comparing the samples grown for 20 days, those grown in space had 58.8% less thickness compared to Earth samples (p < 0.05). There were no significant differences observed in mass or surface area coverage when comparing incubation times and gravity conditions. Such a biofilm reduction (mass and thickness) was not observed in microgravity between 15 and 20 days. Significant changes between space and Earth samples were also observed in biofilm thickness. After 10 days of incubation, samples grown in space were 2.3 times thicker compared to samples on Earth (p < 0.05). 
A similar observation was seen in the samples after 20 days, with samples grown in space being 1.97 times thicker compared to samples on Earth (p < 0.05) (The biofilm mass a and thicoverage b of samp < 0.05) c.Biofilms grown on Silicone, Nanograss, and Aluminum Alloy presented no significant differences when compared by gravity and incubation time regarding biofilm mass, thickness, or surface area coverage.p < 0.05). There were no significant differences in biofilm thickness and mass between materials. There were also no significant differences between materials in biofilm thickness, mass, and surface area for Earth samples grown for 10 and 20 days.After 15 days growing in Earth conditions , the biop < 0.05) (p < 0.01) a and 9% < 0.01) b than Nap < 0.05) (p < 0.01) and 4% (p < 0.05) more biofilm surface area coverage, respectively a. Stainlectively b. There p < 0.05) . There wP. rubens biofilm morphology on seven clinically or spaceflight relevant materials both in space and on Earth. All materials were susceptible to fungal biofilm formation, some more than others, regardless of the gravitational condition. The fungal samples had equivalent initial amounts of spores on all material substrates, as there were no significant differences in biomass after 6 h of incubation. Although the temperature was kept close to 25 \u00b0C during the incubation time across samples, and despite our efforts to provide a stable environment, the humidity presented significant variations. Specifically, the samples incubated for 10 and 20 days on Earth had RH of approximately half that of the rest of the conditions. Since each PHAB contained samples of all material coupons, this does not affect the comparisons across materials, but it could interfere with comparisons across incubation times and between gravitational regimes. The primary aim of this investigation was to characterize P. chrysogenum is 90%, and decreases in fungal growth below this RH have been observed [P. rubens [P. 
rubens cannot restart growth following a desiccation event of RH below 75% [P. chrysogenum (the previous species name for P. rubens) as xerotolerant, meaning it can undergo a complete life cycle at low RH values and is considered resistant to dry environments [P. rubens survives in dry environments should be further examined.One of the primary dependent variables examined was fungal biofilm formation in microgravity compared to Earth\u2019s gravity. Three cases of significantly increased biofilm growth in microgravity were observed: Carbon Fiber at day 15 (surface area increase), Stainless Steel at day 15 (surface area increase), and Titanium Alloy at day 10 and day 20 (thickness increase). However, there were two cases that showed the contrary with significantly decreased biofilm growth in microgravity compared to Earth: Carbon Fiber at day 10 (surface area decrease) and Quartz at day 20 (thickness decrease). These two cases, where fungal biofilms on Earth grew better than in space, are of particular interest because those samples correspond to the conditions (Earth at 10 days and 20 days) that experienced fluctuations of humidity . The ideobserved . Previou. rubens . On occa. rubens . Additioelow 75% . Since tronments . While tWith that in mind, our results suggest mixed effects of microgravity on biofilm growth dependent on the material surface and without a clear trend in time. It is important to note the significant outlier in the Carbon Fiber 10-day sample grown on Earth and the effect of removing this outlier from the statistical test. When removed, the comparison between Earth and space sample groups is no longer significant; however, all of the other observed differences remain the same, which implies mixed effects of microgravity on biofilm growth. Despite the changes in biofilm growth in microgravity, no consistent visual trends in morphological shape across gravity conditions were observed. 
Therefore, in our experiment, microgravity did not impact the shape of the fungal biofilm. The mechanisms behind the changes in fungal biofilm formation in microgravity are still unclear and need further investigation. One potential mechanism could lie in how different fungi sense gravity: the octahedral crystal matrix protein (OCTIN) is the structural protein comprising the vacuolar protein crystals that the fungal family Mucorales uses to sense gravity. After qualitative analysis of the fungal biofilms, no difference in biofilm shape was observed across materials.

The Nanograss material was of particular interest in this investigation as a potential biofilm-resistant material. Nanograss was the most efficient material at reducing fungal biofilm formation, both in space and on Earth. One potential explanation for the reduced biofilm formation is a reduced strength of adhesion. However, spore adhesion was the same across all materials at the 6 h time point. Hydrophobins coat the spores of fungi such as Aspergillus fumigatus, and other hydrophobins have been identified in hyphae. A homologous hydrophobin gene has been identified in P. rubens (Accession No. AM920436.1) using NCBI's BLAST Tool, meaning that P. rubens could also have hydrophobic spores if this gene is expressed. Nanograss is a highly hydrophobic material and therefore should allow the P. rubens spores to adhere strongly. Hydrophobicity is therefore not the factor interfering with spore and hyphal adhesion and preventing biofilm formation on Nanograss. P. rubens biofilms have previously been grown on gypsum, and data on the effect of roughness and surface charge on P. rubens biofilm formation are not available. Additionally, data on the roughness and charge of the materials used in this investigation are not available, as these properties are manufacturer- and sometimes batch-dependent.
Other factors, such as surface roughness and charge, are likely at play in preventing biofilm growth on Nanograss. Future experiments should measure these material properties to examine the effect of different charges and roughness on biofilm growth on hydrophobic materials. Due to the decreased biofilm formation on Nanograss, this material warrants further studies with varying conditions and strains to determine its full potential as an anti-microbial material.

A decrease in biofilm growth was observed as incubation time increased in three cases: Titanium Alloy Earth samples (mass and thickness decreased between day 15 and day 20, and surface area decreased between day 10 and day 20), Quartz space samples (thickness decreased between day 15 and day 20), and Stainless Steel Earth samples (surface area decreased between day 10 and day 20). Since there was only a thin coat of PDB on the surface of the coupons, it is possible the nutrients were depleted as incubation time increased, causing the fungal biofilms to enter a starvation state. When in unfavorable conditions, small groups of fungal persister cells form and remain even as other fungal cells die. Candida albicans, another fungal species known for biofilm formation, can maintain a consistent population under starvation after an initial drop in cell population. If the same holds for P. rubens, persister cells could account for the biofilm remaining at the later time points.

Another possible explanation for the decreased biofilm formation at day 20 under depleted nutrients is the weakening of cell–cell and cell–surface adhesion. Fungi have two types of adhesion proteins vital for cell–cell and cell–surface adhesion: lectin-like adhesins and sugar-independent adhesins. Depleted nutrients may reduce the expression of these adhesins.

Altogether, the results show that microgravity does not have a strong effect on the shape and morphology of P. rubens biofilms up to 20 days of incubation. Even in the few cases where there was increased biofilm formation in microgravity, this effect was not maintained over time. Additionally, due to the reduced fungal biofilm formation both in space and on Earth, Nanograss could be considered a potential anti-microbial material for certain spacecraft equipment. The results presented in this manuscript can contribute to the efforts of fungal biofilm control in space, but long-term tests (>20 days) as well as complementary transcriptomic and proteomic analyses are highly recommended, as microgravity effects were both material- and incubation-time-dependent.