diff --git "a/cluster/551.jsonl" "b/cluster/551.jsonl" new file mode 100644--- /dev/null +++ "b/cluster/551.jsonl" @@ -0,0 +1,40 @@ +{"text": "The airlines of the developing world are being advised by the United Nations Environment Programme (UNEP) to bank their stocks of halons\u2014chemicals vital for extinguishing aircraft fires\u2014as the 2010 deadline to cease production approaches. Most developed nations already have plans for halon recycling and banking systems\u2014registries of who has excess halon to sell. For developing countries, however, the challenges of starting up such systems may leave some airlines grounded. Halons have been used for years in many kinds of fire-extinguishing systems. However, when they escape into the atmosphere, UV light causes them to release highly reactive bromine radicals that deplete the ozone layer. Indeed, halons are thought to be three to ten times more ozone-unfriendly than chlorofluorocarbons. For this reason the Montr\u00e9al Protocol obliged developed nations to cease halon production in 1994, and set a 2010 target date for the developing world. The trouble is that, while replacements for halons now exist for nearly all other applications, these chemicals remain essential for aircraft safety. Jim Curlin, information manager of the UNEP Division of Technology Industry and Economics, OzonAction Branch, explains, \u201cAircraft fire-extinguishing systems must have good dispersion and fire-suppression functions, must work at low temperatures, be of low toxicity to humans for the time that they are trapped in an affected plane and have an excellent weight-to-volume ratio.\u201d Currently, he says, there is no drop-in replacement for halons that has all these characteristics, making halon availability critical to airlines. Even developed countries are not without halon banking problems. 
Developed nations have enough halon 1301\u2014which is used in cargo bay and engine fire-fighting equipment\u2014to last some 25 years, by which time a replacement should be available, explains John O\u2019Sullivan, a member of the UNEP Halons Technical Options Committee and fire representative for the International Air Transport Association in Montr\u00e9al. But there isn\u2019t enough halon 1211, which is used by aircrew in handheld extinguishers. \u201c[Halon 1211] can still be made in developing countries, so at least in this respect [developing nations] should have fewer problems,\u201d says O\u2019Sullivan. \u201cBut European regulations, for example, make it difficult to import. This is a problem we still have to address.\u201d Starting up halon banking systems is certainly in the best interest of developing world airlines. With passenger safety a top priority, aircraft that do not maintain their halon-based systems would eventually fail airworthiness inspections and be banned from flying to many destinations. But how easy will it be for developing countries to start such systems, and where does halon banking figure on their priority list? \u201cMost focus first on economic problems and then on the environment,\u201d says Wilman Rajiman, the Indonesia Halon Bank Project manager at Soekarno-Hatta International Airport in Jakarta. \u201cIn Indonesia we started to discuss a national halon bank in 1995, but due to an economic crisis in 1998 it was not launched until March 2000. The major problems we faced were capital investment, knowledge, training, and local regulations.\u201d For many countries, cash flow will be the major obstacle. Rajiman explains that Indonesia received a grant from the World Bank, but must spend its own money and then ask for reimbursement. 
Poorer nations may find that stipulation difficult, yet airline-servicing companies worldwide must comply strictly with the halon specifications laid down by aircraft manufacturers and foreign aviation authorities. Maintaining proper halon stocks is therefore vital to their business. Flyers may be comforted to know that the Montr\u00e9al Protocol contains a clause that allows developing nations to temporarily restart halon production for critical systems if supplies fail\u2014always supposing the necessary infrastructure exists. \u201cThat\u2019s a situation we all want to avoid,\u201d says Curlin, \u201cand one of the reasons we are encouraging companies and countries to develop halon banks.\u201d"} +{"text": "In 2001, the chief of the United States Agency for International Development (USAID), Andrew Natsios, gave this justification to the US Congress for why the agency opposed giving antiretroviral therapy (ART) to Africans with HIV: \u201cIf we had [HIV medicines for Africa] today, we could not distribute them. We could not administer the program because we do not have the doctors, we do not have the roads, we do not have the cold chain\u2026[Africans] do not know what watches and clocks are. They do not use western means for telling time. They use the sun. These drugs have to be administered during a certain sequence of time during the day and when you say take it at 10:00, people will say what do you mean by 10:00?\u201d The Lancet made a similar argument: \u201c[ART] is not\u2026a technology that most poor people could adhere to\u2026[Further] The use of public funds to subsidise the treatment of patients in the poorest countries who are most able to comply\u2026would be highly inequitable\u201d. Natsios was not the only policy maker to justify withholding ART from Africans on the basis that weak infrastructure, or patients' inability to take tablets, would stymie adherence. 
Senior officials of the World Bank and Thai government made similar claims. Two new systematic reviews prove these speculations were mistaken. The first review (which I coauthored) identified 31 studies from North America and 27 from sub-Saharan Africa examining adherence to ART, and found that adherence was significantly better among the African patients (p is less than 0.001). Some may see this result as surprising. To live in Nairobi means to face so many privations compared to New York that to overcome them and excel seems almost storybook untrue. But privation can cut both ways. People who have been denied the necessities of life, who then receive the gift of medicines and a chance to live, may be more likely to appreciate ART. Although Africans take ART more faithfully than North Americans, there is room for improvement. Here is where the second review is instructive. The authors found only two qualitative studies of barriers and facilitators of adherence among patients in poor countries. In rich countries, the study failed to identify any obvious \u201cbig fix\u201d that could turn non-adherent patients into adherent ones. On the other hand, for developing countries, \u201cfinancial constraints\u201d towered above the other reasons why poor patients may fail to adhere to ART. That is cruelly ironic, because the same international development policy makers who rejected the idea that poor people could adhere to ART also worked for financial donors such as USAID and the World Bank, and their passionate arguments against ART stalled the delivery of the one variable that helps adherence\u2014money. Where is the flaw that allowed speculation to get ahead of evidence in development policy making, and to reach the baseless conclusion that Africans could not adhere to ART, or needed to be commanded paternalistically to adhere to ART, when no such conclusion would be reached for rich people? 
More to the point, how can one recognize when a particular development policy is so baseless and speculative, the better to abandon it? A serviceable answer, I believe, is that one should be highly suspicious whenever development policy makers sound dismissive of the people whom they are hired to help. The central aspiration of development work is helping the poor and sick become richer and healthier. Such an aspiration is incompatible with speculating that certain foreigners are incapable of enjoying the fruits of development. I believe that the views of Natsios and the World Bank and Thai officials, speculating that Africans could not adhere to ART, were dismissive in just this way. Dismissing patients in this way leads to a lower standard of medical care. The medical establishment is more sensitive to the standard of care than is the development establishment, and so the medical establishment must be vigilant\u2014and vocal\u2014against bad development policy. Development policymakers have also freely opined that Africans could not manage to take artemisinin-based combination therapies for malaria, or second-line treatments for tuberculosis. We now know that Africans are capable of all these things\u2014but overcoming the dismissals and excuses took years, during which millions died."} +{"text": "Mutagenesis plays an essential role in molecular biology and biochemistry. It has also been used in enzymology and protein science to generate proteins which are more tractable for biophysical techniques. The ability to quickly and specifically mutate one or more residues in a protein is important for mechanistic and functional studies. 
Although many site-directed mutagenesis methods have been developed, a simple, quick and multi-applicable method is still desirable. We have developed a site-directed plasmid mutagenesis protocol that preserved the simple one-step procedure of the QuikChange\u2122 site-directed mutagenesis but enhanced its efficiency and extended its capability for multi-site mutagenesis. This modified protocol used a new primer design that promoted primer-template annealing by eliminating primer dimerization and also permitted the newly synthesized DNA to be used as the template in subsequent amplification cycles. These two factors, we believe, are the main reasons for the enhanced amplification efficiency and for its applications in multi-site mutagenesis. The reduced parental template also facilitated DpnI digestion after the PCR amplification and enhanced the overall efficiency and reliability. Using our protocol, we generated single-site mutations, multiple single-site mutations and combined insertion/deletion mutations. The results demonstrated that this new protocol imposed no additional reagent costs (beyond basic QuikChange\u2122) but increased the overall success rates. Our modified protocol significantly increased the efficiency of single mutation and also allowed facile large single insertions, deletions/truncations and multiple mutations in a single experiment, an option incompatible with the standard QuikChange\u2122. Furthermore, the new protocol required significantly less parental DNA, which facilitated the ability of DpnI to destroy the parental methylated DNA; the newly synthesized unmethylated mutant DNA was then transformed into E. coli cells, where the nick is ligated by host repair enzymes. The process, while extremely useful and simple, does have some limitations. Primer melting temperatures were calculated as Tm = 81.5 + 16.6 log([K+]/(1+0.7 [K+])) + 0.41(% [G+C]) \u2013 500/(probe length in bases) \u2013 1.0(%mismatch). Tm pp and Tm no were calculated for each primer. 
All primers and their Tm no and Tm pp are detailed in Table 1. Plasmids pDESTSIRV30 and pDESTSIRV33 expressing the SIRV proteins (CAG38830 and CAG38833), pDESTAVRA expressing the MRSA vraR protein (CAG40961) and pDESTFaBH2 (AAG06721) were constructed as described previously. For single-site mutation, deletion or insertion, the PCR reaction of 50 \u03bcl contained 2\u201310 ng of template, 1 \u03bcM primer pair, 200 \u03bcM dNTPs and 3 units of Pfu DNA polymerase. The PCR cycles were initiated at 95\u00b0C for 5 minutes to denature the template DNA, followed by 12 amplification cycles. Each amplification cycle consisted of 95\u00b0C for 1 minute, Tm no -5\u00b0C for 1 minute and 72\u00b0C for 10 minutes or 15 minutes according to the length of the template constructs (about 500 bp per minute for Pfu DNA polymerase). The PCR cycles were finished with an annealing step at Tm pp -5\u00b0C for 1 minute and an extension step at 72\u00b0C for 30 minutes. The PCR products were treated with 5 units of DpnI at 37\u00b0C for 2 hours and then 10 \u03bcl of each PCR reaction was analyzed by agarose gel electrophoresis. The full-length plasmid DNA was quantified by band density analysis against the 1636-bp band of the DNA ladders. An aliquot of 2 \u03bcl of the above PCR products, or of the PCR products generated using QuikChange\u2122 or as described previously, was transformed into E. coli DH5\u03b1 competent cells by heat shock. The transformed cells were spread on a Luria-Bertani (LB) plate containing antibiotics and incubated at 37\u00b0C overnight. The number of colonies was counted and used as an indirect indication of PCR amplification efficiency. Four colonies from each plate were grown and the plasmid DNA was isolated. To verify the mutations, 500 ng of plasmid DNA was mixed with 50 pmole of T7 sequencing primer in a volume of 15 \u03bcl. DNA sequencing was carried out using the Sequencing Service, University of Dundee. For multiple site-directed mutations, deletions and insertions, the PCR was carried out in 50 \u03bcl of reaction containing 10 ng of template, 1 \u03bcM of each of the two primer pairs, 200 \u03bcM dNTPs and 3 units of Pfu DNA polymerase. The PCR cycles, DNA quantification, transformation and mutation verification were essentially the same as described above. HL designed the experiments, carried out the practical work and drafted the manuscript. JHN was involved in the research discussion and helped to finalise the manuscript. All authors read and approved the final manuscript."} +{"text": "MicroRNAs (miRNAs) have been implicated in the regulation of milk protein synthesis and development of the mammary gland (MG). However, the specific functions of miRNAs in these regulations are not clear. Therefore, the elucidation of miRNA expression profiles in the MG is an important step towards understanding the mechanisms of lactogenesis. Two miRNA libraries were constructed from MG tissues taken from a lactating and a non-lactating Holstein dairy cow, respectively, and the short RNA sequences (18\u201330 nt) in these libraries were sequenced by the Solexa sequencing method. The libraries included 885 pre-miRNAs encoding for 921 miRNAs, of which 884 miRNAs were unique sequences and 544 (61.5%) were expressed in both periods. A custom-designed microarray assay was then performed to compare miRNA expression patterns in the MG of lactating and non-lactating dairy cows. Integrative miRNA target prediction and network analysis approaches were employed to construct an interaction network of lactation-related miRNAs and their putative targets. Using a cell-based model, six miRNAs were studied to reveal their possible biological significance. 
A total of 56 miRNAs in the lactating MG showed significant differences in expression compared to non-lactating MG (P<0.05). Our study provides a broad view of the bovine MG miRNA expression profile characteristics. Eight hundred and eighty-four miRNAs were identified in bovine MG. Differences in types and expression levels of miRNAs were observed between lactating and non-lactating bovine MG. Systematic predictions aided in the identification of lactation-related miRNAs, providing insight into the types of miRNAs and their possible mechanisms in regulating lactation. MicroRNAs (miRNAs) are small non-coding RNA molecules that are approximately 22 nucleotides (nt) in length, which negatively regulate specific target genes by mRNA degradation or translational repression. The role of miRNA was first reported in Caenorhabditis elegans, where aberrant expression of lin-4 caused abnormal cell division and proliferation, affecting the timing of cell division and development in larvae. The bovine mammary gland (MG) is a complex organ that grows and develops after calf birth. To obtain miRNA expression profiles and to compare the difference in miRNA expression between periods of lactation and non-lactation, we used next-generation sequencing technology to sequence two miRNA libraries constructed from tissue samples taken during these two periods. Using computational prediction, potential targets for these miRNAs were identified, leading to the construction of an interaction network related to lactation. Our integrative analysis highlights the complexity of gene expression networks regulated by miRNAs in MG during lactation. Hematoxylin-eosin staining (HE) and immunofluorescence (IF) were employed to verify the microstructure differences of the lactating and non-lactating MG tissues used in constructing miRNA libraries. In the lactating MG (Figure 1C), the mRNA level of \u03b1s1-casein, a major milk protein, was measured using real-time PCR. 
As expected, \u03b1s1-casein mRNA was highly expressed during the lactation period and had barely detectable expression during the non-lactation period. Of the 884 unique miRNAs, 544 (61.5%) were expressed in both periods. The unique miRNAs were categorized into three groups based on their hits: 283 miRNAs matched with known bta miRNAs registered in the miRbase database; 96 miRNAs were conserved among other mammals but have yet to be identified in bovines; and 505 miRNAs were mapped to the bta genome, with the extended genome sequences having the potential to form hairpins. All clean reads were then aligned against the bta genome and the miRbase database. The known bta pre-miRNAs accounted for only 38.8% (283/730) of the sequences deposited in miRbase (version 17.0). From the original 884 unique miRNAs, the remaining 523 miRNAs were fabricated on the microarray. Only 304 of the 523 miRNAs were identified by the microarray, including 187 of the known miRNAs, 43 of the conserved miRNAs and 74 of the novel miRNAs. All 885 pre-miRNAs detected in this experiment, including novel miRNAs, were searched by BLAST. It was determined that 800 of the pre-miRNAs (90.4%) matched with the bta genome and that 26 mature miRNAs of pre-miRNA hairpins were located at two or more genomic loci on different chromosomes. The genomic density distribution of bovine pre-miRNAs, i.e., the number of pre-miRNAs per Mbp of individual chromosome, was analyzed (Figure ). In gene studies, genes are clustered to identify co-expressed genes from the same primary transcript or to identify gene clusters that share similar functions. We followed the criteria proposed by miRbase and defined 10 Kbp as the maximum inter-distance for two pre-miRNAs to be considered as clustered. 
There were 230 pre-miRNAs grouped into 55 clusters, accounting for only 28.75% (230 out of 800) of the total pre-miRNAs (Figure ). We predicted the hairpin structures of the pre-miRNAs of all of our mature miRNAs using UNAfold software and found that 104 pairs of known miRNAs and novel candidates share the same pre-miRNA structure and that their pre-miRNAs\u2019 chromosomal locations were identical. Target prediction was then used to identify putative targets. More than 10,000 annotated mRNA transcripts were selected as potential targets, equivalent to approximately 35 targets per miRNA. All targets were then processed by Gene Ontology annotation analysis. It was found that these targets have a wide range of diverse functions, with over half involved in transcriptional activity (Figure ). Based on the results of target prediction, miRNAs and target gene interactions were integrated using EGAN to construct possible regulatory networks to investigate the relationship between miRNAs and lactation (Figure ). We chose SLC35b2, CABP4, GPR, MAPKAP and PRLR. Of these predicted targets, miR-138 is known to inhibit PRLR protein translation by regulating STAT5 and MAPK, thereby suppressing the proliferation and viability of mouse mammary epithelial cells. PRLR causes ductal outgrowth and side branching when grafted into PRLR\u2212/\u2212 epithelium. The STAT5 (NM_001012673.1) mRNA contains a complementary site for the seed region of bta-miR-141, while the 3\u2019UTR of Hexokinases was paired with five miRNAs: bta-miR-500, bta-miR-199a, bta-miR-125b, bta-miR-181a and bta-miR-484. One arm of the miRNA duplex is loaded with Argonaute proteins into the RNA-induced silencing complex (RISC), where it guides the RISC to silence target mRNAs, whereas the other arm goes on to be degraded via the canonical miRNA processing pathway. 
Because mature miRNA is derived from different pre-miRNAs depending on the body\u2019s needs, the amount of the degraded arm will vary under different physical conditions.XDH) gene, a putative target of miR-29, miR-15b and miR-107, is an early event in mammogenesis in vivo and in vitro rather than a terminal component of differentiation [BCAT2 encodes a branched chain aminotransferase found in mitochondria. It has recently been observed that branched chain amino acids can play a signaling role for protein synthesis in addition to serving as substrates [BCAT activity increased in mammary tissue during rat lactation and was 6-fold higher than in virgin rats [PRLR is the putative target of miR-142, miR-23, miR-374b, miR-30a and miR-27b and plays a function in MG development together with prolactin. Prolactin promotes alveolar survival, maintains tight junctions and regulates milk protein and lactose synthesis [STAT5 serves as a common point in the signal transduction pathways of several lactogenic and galactopoietic hormones in the MG [STAT5 is the miR-17/92 cluster [Cav-1) abrogates PRL-induced gene expression by sequestering JAK2. The loss of both Cav-1 alleles results in precocious MG development during pregnancy and the concomitant precocious activation of STAT5. All of the above indicate that although little is known about the exact functions of these miRNAs, the relationships with their respective target genes indicate potential roles in lactation.MiRNAs exert their effects by interacting with target mRNAs. Therefore, target-predicting software (Targetscan) was used to identify putative targets. On the basis of the predicted targets of the 283 known miRNAs, an interaction network composed of these miRNAs and their candidate targets expressed during lactation was constructed by EGAN. Using this network, it was demonstrated that 37 miRNAs interact with a total of 15 targets, which are involved in amino acid, fatty acid and lactose metabolism. 
It has been reported that the increased activity of the xanthine dehydrogenase , analyzed their expression and predicted their putative targets. Our results also demonstrated that miR-141, miR-484 and miR-500, characterized by the miRNA-gene regulatory networks, are probably essential for lactation via the targeting of STAT5 and HK2.The interaction network predicted that K2 3\u2019UTR . The resThe aim of our work was to examine miRNA expression profiles in bovine MG and to evaluate miRNA functions through the identification of differentially expressed miRNAs in lactation and non-lactation MG. Our identification of novel miRNAs highlights the importance of miRNAs with low abundance and less conservation between species. An interaction network of known miRNAs and their target genes relating to lactation was constructed to postulate the functional roles of miRNAs in the MG. This integrated analysis provides important information that may inspire further experimental investigation into the field of miRNAs and their targets during lactation.ad libitum under controlled environmental conditions and were humanely sacrificed as necessary to ameliorate suffering.Experiments were performed according to the Regulations for the Administration of Affairs Concerning Experimental Animals and approved by the Institutional Animal Care and Use Committee at Zhejiang University, Zhejiang, China. Animals were allowed access to food and water Two multiparous dairy cows were used for miRNA library construction. The first was a 6-year-old cow that had been lactating for 2 months, which was used to make the lactation miRNA library, and the second was a 4-year-old non-lactating, non-pregnant cow, which was used to construct the non-lactation miRNA library.In the microarray assay, two other multiparous cows were added to each period, and mixed RNA samples were made. The two additional lactation cows had been lactating for 3 and 4 months and were 4 and 5 years old, respectively. 
The two additional non-lactating, non-pregnant cows were 4 and 5 years old.Bovine MG tissues were collected and immediately stored in liquid nitrogen until further use. Blocks of MG tissue were fixed in 4% formalin for 48 hours, processed and embedded into paraffin blocks according to routine procedures.The paraffin-fixed blocks were serially sectioned into 8 \u03bcm coronal slices and stored at \u221220\u00b0C until further use. For routine histological studies, paraffin sections were stained with hematoxylin and eosin. Hematoxylin-eosin stained sections were analyzed by light microscopy using a Nikon fluorescence microscope .Alpha-casein was detected in frozen sections by immunofluorescence. Sections were fixed with 4% formaldehyde for 10 minutes. The slides were then rinsed 3 times in PBS for 5 minutes each and blocked for 60 minutes. The blocking solution was replaced by primary antibody solution , and the samples were incubated overnight at 4\u00b0C. The next day, slides were rinsed 3 times in PBS for 5 minutes each. FITC-conjugated secondary antibody (1:200) with DAPI was added, and the slides were incubated for 1 hour at 37\u00b0C in the dark, followed by 3 rinses in PBS for 5 minutes each. The specimens were viewed under a fluorescence microscope .Total RNA was extracted using a Qiagen miRNeasy Mini Kit according to the manufacturer\u2019s protocol. Subsequently, the RNA samples were sent to LC Science to construct the small RNA libraries using an Illumina small RNA kit and to be sequenced using Genome Analyzer .http://www.mirbase.org/index.shtml) and the bovine mRNA Rfam, Repbase, genome and EST databases (http://www.ncbi.nlm.nih. gov/projects/genome/guide/cow/ and BTA 4.0: ftp://ftp.ensembl.org/pub/release-57/ fasta/bos_Tau-rus/dna/) were exploited. The sequencing data were first filtered into mRNA using Rfam and Repbase, and then mapped to miRbase. The mapped data were then aligned to genome and EST databases for annotation purposes. 
The remaining unmapped data were mapped to genome and EST data, secondary structures were predicted using UNAFold software [Small RNA reads were processed using Illumina\u2019s Genome Analyzer, and the ACGT101-miR program was then used to process the sequencing data. The mammalian miRbase known miRNAs reported in miRbase; (2) conserved miRNAs sharing highly similar sequences corresponding to their precursors in other mammalian genome assemblies, and (3) bovine novel candidates where reads and the predicted secondary structures are not mapped to the miRNAs or pre-miRNAs in miRbase, but are mapped to the in situ synthesis using PGR (photogenerated reagent) chemistry. Hybridization was performed overnight on a \u03bcParaflo microfluidic chip using a micro-circulation pump (Atactic Technologies) [Total RNA was extracted using a Qiagen miRNeasy Mini Kit . For each stage, equal quantities of total RNA isolated from three individual cows were pooled. A custom-designed microarray assay was performed to analyze miRNA expression patterns in lactating and non-lactating periods by LC Science . The array included probes for 523 miRNA derived from the sequencing data and reported bovine miRNA (from miRbase) with 5S rRNA as a data normalization control. The probes were synthesized by ologies) . Hybridiologies) .The starting point of the miRNA target prediction strategy was the utilization of known miRNAs listed in Additional file GAPDH was used as a gene assay control and bovine S18 rRNA as a miRNA control. Fold changes were determined by the threshold cycle (CT). Fold changes of miRNA expression were calculated using the 2\u2212\u0394Ct method, where \u0394Ct = (Ct target \u2212 Ct control) Sample.The gene expression assay and differentially expressed miRNAs identified using deep sequencing were validated using real-time PCR. Total RNA were extracted from the MG tissues in both periods separately using Trizol reagent . 
The RNA was divided into two portions, one for genetic testing and the other for miRNA detection. Genetic testing started with 500 ng of total RNA, and this RNA was reverse transcribed to cDNA using a SYBR\u00ae PrimeScript\u00ae RT-PCR Kit . For miRNA detection, 2 \u03bcg of total RNA was reverse transcribed to cDNA with a specific stem-loop primer using M-MLV , with incubation for 60 minutes at 42\u00b0C, followed by heating for 10 minutes at 95\u00b0C and storage at 4\u00b0C. These cDNA were then used as templates in a SYBR\u00ae Premix Ex Taq\u2122 kit with specific primers . Cells were maintained in Dulbecco's Modified Eagle's Medium supplemented with 10% (V/V) fetal bovine serum , 100 U/mL penicillin and 100 mg/mL streptomycin. The cells were maintained at 37\u00b0C with 5% CO5 cells/ml/well on the day before the transfection. Mimics of miR-125b, miR-141, miR-181, miR-199a, miR-484 and miR-500 and the antisense inhibitor miR-141 were transfected by Lipofectamine 2000 according to the manufacturer\u2019s protocol. The transfection efficiency was examined using FAM-conjugated siRNA. The mimics were RNA duplexes, the inhibitors were single-stranded, and the negative controls (NC) and inhibitor negative controls (INC) for all miRNA mimics and inhibitors were designed by Invitrogen and had no homology to any bovine genome sequences . Equal amounts of protein lysate were separated by SDS- polyacrylamide gel electrophoresis (PAGE) and then electrophoretically transferred to polyclonal difluoride membranes. Each protein was incubated with a specific antibody and detected with an electrogenerated chemiluminescence (ECL) kit. Beta-actin was used as a loading control. Antibodies for STAT5 and \u03b2-actin were manufactured by Boster (China), and the HK2 antibody was purchased from Santa Cruz (USA). The intensity of the protein fragments was quantified using Imagpro-Plus software. 
All data are from three independently repeated experiments. All data were analyzed using SPSS software. Values in the texts and figures represent the results of at least three separate experiments. Group comparisons were performed using ANOVA with the Student\u2019s t-test. Differences were considered statistically significant at P<0.05. Ago2: Argonaute; CAV-1: Caveolin-1; CV: Coefficient of variation; ECL: Electrogenerated chemiluminescence; EMT: Epithelial-to-mesenchymal transition; FBS: Fetal bovine serum; GO: Gene ontology; HE: Hematoxylin-eosin staining; HK2: Hexokinases; IF: Immunofluorescence; INC: Inhibitor negative control; L: Lactation; MG: Mammary gland; miRNA: MicroRNA; NC: Negative control; NL: Non-lactation; nt: Nucleotides; RISC: RNA-induced silencing complex; SDS-PAGE: SDS-polyacrylamide gel electrophoresis; siRNA: Small interfering RNA; STAT5: Signal transducers and activators of transcription; TGF-\u03b2: Transforming growth factor-beta; XDH: Xanthine dehydrogenase. The authors declare no competing interests. ZL performed the main experiment and wrote the paper. HL participated in the study design and paper revision. XJ was involved in executing the study. LL assisted with the experimental design and was involved in revising the paper. JL designed the study, guided the execution of the study, and revised the paper. All authors have read the manuscript and approved its publication. Table S1. Profile of known bovine miRNAs. Table S2. Profile of conserved miRNAs originating from pre-miRNAs. Table S3. Profile of bovine novel miRNAs. Table S4. Profile of microarray assay. Table S5. Differentially expressed miRNAs. Table S6. Bovine pre-miRNAs with two or more genome locations. 
Cattle pre-miRNAs with two or more genome locations.Click here for fileTable S7. Same pre-miRNA structure and location. Same pre-miRNA structure and location.Click here for fileTable S8. Primer sequences used in the q-PCR experiments. Primer sequences of the q-PCR experiments.Click here for fileTable S9. Small interfering RNA. Small interference RNA.Click here for file"} +{"text": "Fraxinus rhynchophylla and Artemisia capillaris. Our previous study found that FR ethanol extract (FREtOH) significantly ameliorated rats\u2019 liver function. This study was intended to investigate the protective mechanism of ESC in hepatic apoptosis in rats induced by carbon tetrachloride. Rat hepatic apoptosis was induced by oral administration of CCl4. All rats were administered orally with CCl4 twice a week for 8 weeks. Rats in the ESC groups were treated daily with ESC, and silymarin group were treated daily with silymarin. Serum alanine aminotransferase (ALT), aspartate aminotransferase (AST) as well as the activities of the anti-oxidative enzymes glutathione peroxidase (GPx), superoxide dismutase (SOD), and catalase in the liver were measured. In addition, expression of liver apoptosis proteins and anti-apoptotic proteins were detected. ESC significantly reduced the elevated activities of serum ALT and AST caused by CCl4 and significantly increased the activities of catalase, GPx and SOD. Furthermore, ESC significantly decreased the levels of the proapoptotic proteins and significantly increased the levels of the anti-apoptotic proteins (Bcl-2 and Bcl-xL). ESC inhibited the release of cytochrome c from mitochondria. In addition, the levels of activated caspase-9 and activated caspase-3 were significantly decreased in rats treated with ESC than those in rats treated with CCl4 alone. 
ESC significantly reduced CCl4-induced hepatic apoptosis in rats. Esculetin (ESC) is a coumarin that is present in several plants such as Fraxinus rhynchophylla and Artemisia capillaris. In our preliminary study, ESC was demonstrated to inhibit CCl4-induced acute liver injury in rats. However, there is still little information on the effect of ESC on CCl4-induced fibrosis in rats. Although many chronic viral hepatitis patients have been treated with interferon, the results of the therapy have not always been satisfactory, and chronic viral hepatitis remains an important problem in Taiwan. Carbon tetrachloride (CCl4) is extensively used to induce lipid peroxidation and toxicity. CCl4 is metabolized by cytochrome P450 2E1 to the trichloromethyl radical (CCl3−), which is assumed to initiate free radical-mediated lipid peroxidation, leading to the accumulation of lipid-derived oxidation products that cause liver injury. In the present study, we examined the effect of ESC on CCl4-induced changes in serum alanine aminotransferase (sALT), superoxide dismutase (SOD), glutathione peroxidase (GPx) and catalase in the liver. We also examined the effects of ESC on the regulation of the proteins in the mitochondrial-dependent apoptotic pathway in CCl4-induced liver apoptosis. In this study, silymarin was used as a positive control drug. The serum activities of ALT and AST were significantly elevated in the CCl4-treated group (p < 0.001), and ESC (100 and 500 mg/kg BW) significantly decreased the activities of serum ALT and AST (p < 0.001). The effect of supplementation with silymarin was similar to that observed for the ESC-treated group (p < 0.001). CCl4 led to a marked increase in ALT and AST levels. In addition, liver injury after CCl4 treatment was also detected when evaluated by a histological approach. Furthermore, CCl4 was also found to cause a variety of histological changes in the liver, including centrizonal necrosis, portal inflammation, and Kupffer cell hyperplasia. 
These histological changes, as well as the increase in hepatic enzymes, were significantly attenuated by ESC (100 and 500 mg/kg BW). In the CCl4-treated group, sections showed degeneration and necrosis, fibrosis, and hepatocyte infiltration containing neutrophils and mononuclear cells. In addition, central-to-central bridging necrosis was also noted. Moreover, treatment with ESC (100 and 500 mg/kg BW) or silymarin significantly decreased the degree of inflammation, necrosis and fibrosis in CCl4-treated rats. In CCl4-induced hepatic fibrosis rats, it was found that the activities of SOD, GPx and catalase were markedly increased in rats treated with ESC (100 and 500 mg/kg BW). Hepatic GPx activities were significantly increased in rats treated with ESC (100 and 500 mg/kg BW) and silymarin (200 mg/kg BW) (P < 0.001). ESC prevented the CCl4-induced decrease in the activities of anti-oxidative enzymes such as catalase, SOD and GPx. These results suggested that ESC reduced CCl4-induced hepatic fibrosis in rats, probably by exerting a protective effect against hepatocellular fibrosis with its free-radical scavenging ability. Living tissue has a major defense mechanism involving anti-oxidative enzymes that convert active oxygen molecules into non-toxic compounds. Cytochrome c levels were significantly increased in the CCl4-treated group. In addition, activated caspase-9 and activated caspase-3 levels were significantly decreased in ESC-treated groups (100 and 500 mg/kg). CCl4 in the liver is partly involved in the apoptosis pathway in vivo. At least two different apoptosis pathways (the mitochondrial pathway and the death-receptor pathway) lead to caspase activation. Cytochrome c released from mitochondria forms a complex with pro-caspase-9 and its cofactor Apaf-1 (apoptotic protease-activating factor-1). 
Therefore, it is responsible for activating caspase-9, which further activates caspase-3 and executes the apoptotic program. Carbon tetrachloride is a common hepatotoxin used in liver injury research. Bak, t-Bid and Bax levels in the CCl4-treated group were significantly higher than those in the control group, and Bak, t-Bid and Bax levels were significantly decreased in the liver tissue of the ESC-treated groups (100 and 500 mg/kg). Bcl-xL and Bcl-2, but not p-Bad, were significantly increased in ESC-treated groups (100 and 500 mg/kg). Mitochondria are known to be a vulnerable target of various toxins and oxidative stress. The mitochondrial apoptotic pathway is regulated by the Bcl-2 family of proteins, which consists of both anti-apoptotic (such as Bcl-2) and pro-apoptotic (such as Bax) proteins. Our data show that Bax protein content markedly increased in rats receiving CCl4 alone, suggesting that oxidative stress caused by CCl4 administration has activated Bax, Bak and t-Bid. We also found that Bcl-2 protein content markedly increased in rats receiving CCl4 alone. Administration of ESC effectively decreased the levels of Bak, Bax, and t-Bid. In this study, we have also demonstrated that ESC increased the levels of Bcl-2 and Bcl-xL and decreased the levels of Bak, Bax, t-Bid, cytochrome c, activated caspase-9, and activated caspase-3 proteins in rats exposed to CCl4-induced hepatotoxicity. ESC significantly reduced CCl4-induced hepatic fibrosis in rats, probably by exerting a protective effect against hepatocellular fibrosis with its free-radical scavenging ability and by inhibiting the mitochondrial-dependent apoptotic pathway. Apoptosis was induced in five groups of 10 rats by oral administration of 0.5 mL/rat CCl4 in olive oil, twice a week for 8 weeks. The animals received only CCl4, or CCl4 with ESC or silymarin. The ESC and silymarin were given when the CCl4-induced chronic injury model started, and the total drug treatment duration was 8 weeks. After blood was drawn from the rats at the eighth week, the animals were sacrificed at the same time and the livers were quickly taken out. They were then weighed after being washed with cold normal saline and blotted dry. The largest lobe of each liver was divided into two parts: one part was submerged in 10% neutral formalin for the preparation of pathological sections, and the other part was stored at −80 °C. The blood was centrifuged at 3024 g at 4 °C for 15 min to separate serum. The levels of serum alanine aminotransferase (sALT) and serum aspartate aminotransferase (sAST) were assayed spectrophotometrically using clinical test kits. Livers were homogenized in nine volumes of isotonic phosphate buffer, and the prepared liver homogenate was centrifuged at 700 g for 5 min at 4 °C. Catalase was assayed by measuring the destruction of H2O2 at 240 nm according to the method of Aebi, where K (s−1) is the rate constant of the enzyme; activity is expressed as U per mg of protein (U/mg protein). Superoxide dismutase (SOD) activity was determined according to the method of Misra and Fridovich at room temperature. Glutathione peroxidase (GPx) activity was determined according to the method of Flohe and Gunzler at 37 °C; the NADPH extinction coefficient (M−1 cm−1) was used to determine the enzyme activity, and one unit of activity is equal to the mM of NADPH oxidized/min per mg protein. For histopathological examination, the formalin-fixed liver was embedded in paraffin, cut into 4–5 μm thick sections, stained with hematoxylin-eosin, and observed under a photomicroscope. Liver tissue was homogenized in PBS buffer at a ratio of 100 mg tissue/0.5 mL PBS for 5 min. 
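The catalase assay mentioned above follows the first-order rate-constant form of the Aebi method, k = (1/Δt)·ln(A1/A2), with activity then normalized to protein. A minimal worked sketch, using made-up absorbance and protein values (not data from the study):

```python
import math

# Illustrative (made-up) absorbance readings of H2O2 at 240 nm, following the
# first-order rate-constant form of the Aebi catalase assay: k = (1/dt) * ln(A1/A2).
a1, a2 = 0.450, 0.210       # absorbance at the first and second reading
dt_min = 1.0                # elapsed time between readings, minutes
protein_mg = 0.85           # protein in the assayed homogenate volume, mg (assumed)

k = (1.0 / dt_min) * math.log(a1 / a2)   # rate constant, min^-1
activity_u_per_mg = k / protein_mg       # activity expressed as U per mg protein

print(f"k = {k:.3f} min^-1; catalase activity = {activity_u_per_mg:.3f} U/mg protein")
```

The numbers and the one-minute interval are placeholders; the point is only how the rate constant and the U/mg-protein normalization relate.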
The homogenates were placed on ice for 10 min and then centrifuged at 12,000 g for 30 min. Samples containing equal amounts of protein (40 μg) were loaded and analyzed by Western blotting. Briefly, proteins were separated by 12% SDS-PAGE and transferred onto PVDF membranes. Membranes were blocked with blocking buffer for at least 1 h at room temperature and then incubated with primary antibodies in the above solution on an orbital shaker at 4 °C overnight. Following primary antibody incubations, membranes were incubated with horseradish peroxidase-linked secondary antibodies. Data were expressed as mean ± SEM. Statistical evaluation was carried out by one-way analysis of variance (one-way ANOVA) followed by Scheffe’s multiple range tests. P values of less than 0.05 were considered significant. The present study demonstrated that CCl4 induced a marked rise in oxidative stress. Our data suggested that mitochondria-initiated apoptosis triggered by ROS plays an important role in CCl4-induced hepatotoxicity in rats. ESC significantly reduced CCl4-induced hepatic apoptosis in rats, probably by exerting a protective effect against hepatocellular apoptosis with its free-radical scavenging ability and by inhibiting the mitochondrial-dependent apoptotic pathway."} +{"text": "Compared with acute pain that arises suddenly in response to a specific injury and is usually treatable, chronic pain persists over time and is often resistant to medical treatment. Because of the heterogeneity of chronic pain origins, satisfactory therapies for its treatment are lacking, leading to an urgent need for the development of new treatments. The leading approach in drug design is selective compounds, though they are often less effective and require chronic dosing with many side effects. 
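The "mean ± SEM" summaries used in the study above are straightforward to compute; a minimal sketch with placeholder values (not data from the study), using the sample standard deviation with the n − 1 denominator:

```python
import math

# Mean ± SEM for a small sample, as used for the group summaries above.
# Values are illustrative placeholders, not data from the study.
values = [42.0, 45.5, 39.5, 44.0, 41.0]

n = len(values)
mean = sum(values) / n
# Sample standard deviation (n - 1 denominator), then SEM = SD / sqrt(n).
sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
sem = sd / math.sqrt(n)

print(f"mean ± SEM = {mean:.2f} ± {sem:.2f}")
```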
Herein, we review novel approaches to drug design for the treatment of chronic pain, represented by dual-acting compounds that operate at more than one biological target. A number of studies suggest the involvement of the cannabinoid and vanilloid receptors in pain. Interestingly, the cannabinoid system is interrelated with other systems that comprise lipid mediators, such as the prostaglandins produced by the COX enzymes. Therefore, in the present review, we summarize the role of dual-acting molecules (FAAH/TRPV1 and FAAH/COX-2 inhibitors) that interact with the endocannabinoid and endovanilloid systems and act as analgesics by elevating the endogenously produced endocannabinoids and dampening the production of pro-inflammatory prostaglandins. The plasticity of the endocannabinoid system (ECS) and the ability of a single chemical entity to exert activity on two receptor systems have been exploited and extensively investigated. Here, we review up-to-date pharmacological studies on compounds interacting with the FAAH enzyme together with the TRPV1 receptor or the COX-2 enzyme, respectively. Multi-target pharmacological intervention for treating pain may lead to the development of original and efficient treatments. Pain is defined by the International Association for the Study of Pain as an unpleasant sensory and/or emotional experience associated with actual or potential tissue damage (Bonica). Recent studies suggest that inhibiting FAAH will not have as significant an efficacy as expected in chronic pain patient groups. The ECS comprises cannabinoid receptors 1 (CB1) and 2 (CB2), endogenous agonists called endocannabinoids (AEA and 2-arachidonyl glycerol, 2-AG), and enzymes involved in their biosynthesis and degradation. 
Endocannabinoids are produced in injured tissues to suppress sensitization and inflammation through activation of CB1 and CB2. These receptors are responsible for the conduction of pain signaling, regulate neuro-immune interactions and interfere with inflammatory hyperalgesia, thus also playing an important role in nociception. A common approach for the treatment of pain and inflammation is COX inhibition. Cyclooxygenases are enzymes that catalyze the conversion of membrane phospholipids to prostanoids, which include prostaglandins (PG), prostacyclins (PGI), essential for intestine and kidney functioning, and thromboxanes (TXA), responsible for platelet aggregation. The two COX isozymes are COX-1 and COX-2. COX-1 is characterized by constitutive expression in tissues, whereas COX-2 is an inducible isoform with low baseline expression in human tissues that can be induced during a response to extracellular and intracellular stimuli. The increase in COX-2 expression elevates proinflammatory cytokine, mitogen and growth factor levels. The limited efficacy of FAAH inhibition may be associated with activation of alternative AEA biotransformation pathways after complete blockade of the FAAH enzyme, and with the ability of AEA to induce pro-nociceptive signaling through TRPV1 receptors. Inhibition of one enzyme can activate others, leading to the absence of anticipated analgesic effects. The paradigm shift from selective drugs to multi-target compounds has led to promising results in the treatment of pain, which has been summarized herein. This approach may be beneficial for the treatment of pain, which poses a difficult challenge due to the heterogeneity of its origin. 
Compounds acting on more than one molecular target may have higher efficacies and better safety profiles than currently used drugs that act on a single biological target (Fowler et al.). In this review, we have described the complex network between TRPV1 and CB receptors. NM: preparing of the figure and table, writing of the manuscript; KS: review conception, writing of the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The correct names are: Mangesh Vasant Suryavanshi and Shrikant Bhute. The correct citation is: Adebayo AS, Suryavanshi MV, Bhute S, Agunloye AM, Isokpehi RD, Anumudu CI, et al. (2017) The microbiome in urogenital schistosomiasis and induced bladder pathologies. PLoS Negl Trop Dis 11(8): e0005826."} +{"text": "Yellow fever virus (YFV) strains circulating in the Americas belong to two distinct genotypes (I and II) that have diversified into several concurrent enzootic lineages. Since 1999, YFV genotype I has spread outside endemic regions, and its recent (2017) reemergence in non-endemic Southeastern Brazilian states fuels one of the largest epizootics of jungle yellow fever registered in the country. To better understand this phenomenon, we reconstructed the phylodynamics of YFV American genotypes using sequences from nine countries sampled over 60 years, including strains from the Brazilian 2017 outbreak. Our analyses reveal that YFV genotypes I and II followed roughly similar evolutionary and demographic dynamics until the early 1990s, when a dramatic change in the diversification process of genotype I occurred, associated with the emergence and dissemination of a new lineage (modern-lineage). Trinidad and Tobago was the most likely source of the YFV modern-lineage that spread to Brazil and Venezuela around the late 1980s, where it replaced all lineages previously circulating. 
The modern-lineage caused all major YFV outbreaks detected in non-endemic South American regions since 2000, including the 2017 Brazilian outbreak, and its dissemination was coupled to the accumulation of several amino acid substitutions, particularly within non-structural viral proteins. Yellow fever virus (YFV) is the causative agent of yellow fever (YF), a severe acute disease of historical importance that remains a major public health problem in endemic regions of South America2. YFV was probably introduced in the Americas from Africa around 300–400 years ago4, coinciding with the slave trade period, and caused numerous YF urban epidemics in the continent until the early 20th century2. Since 1950, transmission of YFV in the Americas has been mostly maintained in a sylvatic cycle involving New World primates and mosquitoes of the genera Haemagogus and Sabethes. YFV is highly pathogenic for some New World primates, and epizootics occur at rather regular intervals (~5–10 years) in a particular geographic region, sometimes coinciding with sporadic outbreaks of YF in unvaccinated humans living in forested and surrounding rural areas2. Most epizootic/epidemic outbreaks of YFV reported in Brazil during the second half of the 20th century were mainly restricted to the endemic Northern (Amazon) and Central-Western regions; but since 1999 the YFV has spread outside the established endemic regions, affecting an increasing number of humans and non-human primates in the Southeastern and Southern Brazilian regions7. The most recent outbreak of YF outside the endemic region began in December 2016 and affected non-human primates and unvaccinated human populations from rural areas of all four Southeastern Brazilian states, resulting in the largest epizootic/epidemic of jungle YF registered in Brazil over the last 50 years. Between December 2016 and April 2017, a total of 473 epizootics of non-human primates and 623 human cases with 209 deaths were confirmed, mostly concentrated in Southeastern Brazilian states8. This corresponds to 79% of the total number of human confirmed cases of YFV across all Brazilian regions between 1980 and 20159. YFV strains currently circulating in South America branch within two distinct genotypes (I and II) that probably arose around the second half of the 19th century10. These genotypes have diversified into several concurrent enzootic lineages that appear to persist and evolve within distinct geographic areas of Brazil, Bolivia, Peru, Trinidad and Tobago and Venezuela for long time periods15. While in situ evolution appears to be the predominant mechanism of YFV maintenance in South America and the Caribbean, some studies detected occasional YFV migrations between different American countries on long time scales15, and others showed that YFV outbreaks were associated with the emergence of new lineages that replaced those causing previous outbreaks, as observed in the recent Brazilian epidemics (2000–2008)12. These observations suggest that lineage re-introduction and replacement may have been important factors shaping the long-term evolution and emergence of new YFV outbreaks in the Americas, but their relevance could have been underestimated because of the paucity of viral sequences representative of all countries sampled over long time scales. 
Additionally, we reconstructed the non-synonymous substitutions fixed along the entire genome of the ancestral virus from which the new YFV outbreaks occurring in the Americas evolved. The Maximum Likelihood (ML) phylogenetic analysis of YFV prM/E gene sequences revealed that all American isolates (not related to vaccine strains) segregate into two reciprocally monophyletic clusters corresponding to genotypes I (n = 100) and II (n = 37), fully consistent with those previously reported for YFV prM/E sequences of American and/or African origin (Table 1)18. The spatiotemporal reconstruction of YFV genotype I suggests that this American lineage likely originated in the Northern Brazilian region (posterior state probability (PSP) = 0.95) at around 1908 (95% HPD: 1870–1936), from where it spread to other Brazilian regions as well as to other American countries at multiple times. The analysis suggests that during these decades there were also secondary viral disseminations from Venezuela (PSP = 0.45) to Panama, from Trinidad and Tobago (PSP = 0.69) to Ecuador, and from the Brazilian Central-Western to the Brazilian Northern region (PSP > 0.66), although the supporting PSP values of those migrations were relatively low. Until the middle 1990s, several YFV genotype I lineages (designated as old-lineages) co-circulated and diversified while spreading through different American countries and Brazilian regions. The modern-lineage probably arose in Trinidad and Tobago (PSP = 0.72) at 1977 (95% HPD: 1964–1987), where it continued circulating until the most recent survey, performed in 2009. The modern-lineage did not remain contained to Trinidad and Tobago, but was concurrently disseminated to Northern Brazil (PSP = 0.52) and to Venezuela (PSP = 0.83) at 1989 (95% HPD: 1981–1996) and 1992 (95% HPD: 1986–1997), respectively. 
From Northern Brazil, a modern-sublineage rapidly spread southward, reaching the Central-Western region in 1993 (95% HPD: 1987–1998) and non-endemic Brazilian regions at later times. This modern-sublineage was associated with the 2000–2001 Brazilian outbreaks, but there is no evidence of further circulation of this subclade in the country after that time. The modern-lineage strain introduced in Venezuela continued to circulate and evolve in this country until 2010 (the year of the most recent sample available), generating another sublineage that was independently disseminated from Venezuela (PSP > 0.90) into Northern Brazil (originating sporadic human cases) and into Southeastern Brazil. Its first introduction into the Southeastern Brazilian region was estimated at 2005 (95% HPD: 2002–2007), driving the 2008–2009 outbreak that later spread to Southern Brazil and Northern Argentina. An independent dissemination of the modern-lineage from Venezuela into Southeastern Brazil seems to have originated the recent 2017 Brazilian outbreak. The most recent common ancestor (MRCA) of the two Brazilian YFV strains from 2017 was traced to 2016 (95% HPD: 2012–2017). In the middle 1990s, a dramatic change in the genetic diversity of the YFV genotype I occurred with the emergence of a new lineage (designated as modern-lineage) that replaced the old-lineages circulating in the previous decades. The modern-lineage comprises all YFV genotype I sequences isolated in the Americas after 1996, with the exception of one sequence isolated in Colombia in 2000 that branched among old-lineages. Our phylogeographic analysis supports that the modern-lineage probably arose in Trinidad and Tobago. Spatiotemporal reconstruction of YFV American genotype II dissemination suggests that this lineage likely originated in Peru (PSP = 0.96) at 1920 (95% HPD: 1867–1958), from where it was disseminated to other locations at multiple times. Most of those introductions seem to have resulted in dead-end infections, with the exception of a genotype II variant probably introduced into Bolivia at 1973 (95% HPD: 1942–1992) that spread locally and remained circulating in this country until 2006. We detected only one reintroduction of genotype II into Peru, probably from Bolivia (PSP > 0.59). The demographic analysis suggests that YFV genotype I displayed a phase of exponential growth between 1935 and 1950, followed by a stabilization of the effective population size (Ne) between 1950 and 1985, and a subsequent drastic reduction between 1985 and 2010 that roughly coincides with the emergence and dissemination of the modern-lineage through South America. The causes of the restricted spread of YFV genotype II outside Peru remain unclear, but cross-immunity between antigenically similar genotypes may function as a barrier for dissemination of YFV genotype II into geographic areas where YFV genotype I already circulates. The Andean mountains may also pose a physical barrier for the spread of YFV genotype II outside of Peru. Previous studies supported that YFV American genotypes have diversified into several concurrent enzootic lineages that appear to persist and evolve within distinct geographic areas for long time periods12. Lineage replacement and long-distance migrations between countries, by contrast, played a crucial role in the long-term evolution and widespread dissemination of YFV genotype I. Several enzootic YFV genotype I lineages (designated as old-lineages) co-circulated and diversified while spreading through different countries and Brazilian regions until the middle 1990s, following a pattern comparable to YFV genotype II lineages. 
However, we noted a remarkable change in this pattern from the mid 1990s, when a reduction of genotype I genetic diversity occurred, coinciding with the emergence of a new lineage that rapidly disseminated throughout several countries, replacing the old ones. The modern-lineage probably emerged in Trinidad and Tobago at around 1977 and was responsible for subsequent YFV outbreaks occurring in this country as well as in Brazil, Venezuela, Argentina and Colombia from the mid 1990s onwards. Lineage replacement seems to be a common phenomenon in YFV genotype I evolution, as was observed previously during the replacement of the Old Pará lineages after the 1960s13. Consistent with previous findings, we found that enzootic maintenance (in situ evolution) seems to be the main mechanism shaping the evolutionary dynamics of the YFV genotype I modern-lineage circulating in Trinidad and Tobago and Venezuela. The independent clustering of YFV sequences from the 2000–2001, 2008–2009 and 2016–2017 Brazilian outbreaks, by contrast, indicates that continuous sub-clade replacements and long-distance movements seem to be major driving forces of the evolution of the modern-lineage within Brazil. YFV lineage replacement was already reported between the 2000–2001 and 2008–2009 YF Brazilian outbreaks10. Here, we observed that the newly reported YFV sequences from the 2017 Southeastern Brazilian outbreak also probably resulted from the reintroduction of a modern-lineage YFV variant from Venezuela (or from some Brazilian endemic region), and not from the local persistence of modern-lineage variants previously circulating during the 2000–2001 or 2008–2009 Southeastern Brazilian outbreaks. Previous phylogeographic studies suggested that YFV genotype I arose in Brazil at around the second half of the 19th century10 and that the Brazilian Northern region was the major viral source for surrounding regions and countries15. Here, we confirmed the central role played by the Northern Brazilian region in the spread of the YFV genotype I old-lineages between the 1960s and 1990s. From the mid 1990s onwards, however, several different Brazilian regions and countries seem to have contributed to the spread of the YFV genotype I modern-lineage. Trinidad and Tobago was pointed as the primary source of YFV modern-lineage dissemination to Venezuela and Northern Brazil, while secondary disseminations of this YFV lineage were detected from Venezuela to Northern and Southeastern Brazil, from Northern to Central-Western Brazil, from Central-Western to Northeastern, Southeastern and Southern Brazil, and from Southeastern Brazil to Southern Brazil and Northern Argentina. One important limitation of this study is the lack of geographically and temporally balanced YFV datasets. In this sense, the dataset used herein presents a sharp drop in the number and proportion of Northern Brazilian sequences towards the present. Thus, the conclusion that the Northern Brazilian region was not the major source of YFV genotype I in America from the middle 1990s onwards needs to be taken with caution. Similarly, our analysis pointed to a direct viral dissemination from Venezuela to the Southeastern Brazilian region as the source of the 2017 Brazilian outbreak. However, given the remarkably long branch separating the 2017 Brazilian sequences from their closest Venezuelan sequences, we could not rule out the existence of intermediate viral migration steps involving the North and Central-West Brazilian regions that were not recovered because of temporal and geographical gaps in our data. As more YFV Brazilian sequences from both endemic and non-endemic regions become available, especially from recent YFV epizootic episodes, the dissemination pattern of the YFV modern-lineage in South America will be inferred with greater precision. 
Previous estimates of the YFV population dynamics in South America have suggested that genotype I experienced a population growth rate with an extremely low epidemic doubling time (>20 years), while genotype II exhibited a constant population size, both consistent with epidemics dominated by sylvatic transmission. These estimates, however, were based on small sequence datasets covering only the YFV genetic diversity existing in the Americas until the year 2000. According to our reconstructions, the YFV genotype I went through an initial exponential growth phase between 1935 and 1950, followed by a period of Ne stabilization between 1950 and 1985. A subsequent reduction in the Ne of this YFV genotype was observed between 1985 and 2010, which seems to be explained by the replacement of old-lineages by a modern-lineage with a much lower Ne and by the confounding effect of geographic population subdivision as a consequence of the more frequent sampling of epidemiologically linked sequences from recent YFV outbreaks21. The demographic history reconstructed for YFV genotype II evidences a short exponential growth phase between 1955 and 1965, after which the Ne remains roughly stable up to the most recent time, with only small fluctuations between 1980 and 1995 that coincide with the spread of this genotype in Bolivia14. These results support that both YFV American genotypes displayed complex demographic patterns, with significant temporal fluctuations in the Ne over time that were probably driven by both viral disseminations into new areas and viral lineage replacements. It has been suggested that YFV is genetically stable and evolves slowly in comparison to other arboviruses22. Our search for genetic signatures in the ancestral nodes of genotype I, however, detected six amino acid substitutions fixed over an estimated interval of 55 years between the middle 1930s and the late 1980s. Some of these substitutions were located in protein domains associated with important viral functions. The substitutions A79V and K82R are placed in the fourth α-helix of the C protein, which has been associated with RNA binding, dimer formation, protein stability and infectious particle production24. The substitution F2137L is positioned at the N-terminal amphipathic helix of the NS4A protein, and it was shown that a leucine residue at that position of NS4A of Dengue virus type 2 could contribute to protein oligomerization25. Finally, the amino acid substitutions A2736T (located in the guanylyltransferase/methyltransferase domain) and R3341K (located in the RNA-dependent RNA polymerase domain) might modulate NS5 activity, since it was demonstrated biochemically that these domains interact tightly26. We also detected a large number of amino acid mutations (n = 35) that arose during the recent dispersion of the modern-lineage (subclade 1E) in Brazil and Venezuela, which could reflect more frequent viral replication cycles in large susceptible populations of primates from non-endemic regions and/or selective pressures for new viral variants with specific phenotypic characteristics. Of note, most (91%) amino acid substitutions detected in the YFV modern-lineage accumulated within non-structural proteins, which have been pointed to as relevant selection targets in the evolution of Flaviviruses28. The recent 2017 Brazilian outbreak seems to be the result of a new reintroduction of the YFV modern-lineage into the Southeastern region, but more sequences from this outbreak are needed to confirm this hypothesis. It is not clear whether the successful dissemination of the YFV genotype I modern-lineage was driven by stochastic and/or adaptive factors. 
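The substitution labels used above (A79V, K82R, F2137L, A2736T, R3341K) follow the conventional reference-residue/polyprotein-position/alternative-residue notation, which is easy to parse programmatically; a minimal sketch:

```python
import re

# Minimal parser for amino acid substitution labels of the form <ref><position><alt>
# (e.g. A79V), as used above for changes fixed along the YFV genotype I ancestry.
SUB_RE = re.compile(r"^([A-Z])(\d+)([A-Z])$")

def parse_substitution(label: str):
    """Return (reference residue, polyprotein position, alternative residue)."""
    m = SUB_RE.match(label)
    if not m:
        raise ValueError(f"not a substitution label: {label!r}")
    return m.group(1), int(m.group(2)), m.group(3)

# The substitutions named in the text above.
labels = ["A79V", "K82R", "F2137L", "A2736T", "R3341K"]
parsed = [parse_substitution(s) for s in labels]
print(parsed)
```

Positions here are polyprotein coordinates, so a real pipeline would additionally map them onto the mature-protein boundaries (C, NS4A, NS5, and so on) before reporting.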
Several amino acid substitutions within non-structural proteins, however, have been fixed during dissemination of YFV modern-lineage and their potential impact on viral fitness, transmissibility and virulence deserves further investigation.In summary, we describe a dramatic change in the genetic diversity of the YFV genotype I in the Americas during the 1990s, associated with the dissemination of a new viral lineage (modern-lineage) that replaced the old ones in endemic areas and also spread to non-endemic South American regions. Trinidad and Tobago seems to be the most probably source of the YFV genotype I modern-lineage from where it spread to South American countries, originating several YFV outbreaks during the 21www.ncbi.nlm.nih.gov). Noncoding regions were removed from complete genomes, retaining only the complete polyprotein open reading frame for subsequent analyses. American YFV complete genome and prM/E sequences were manually aligned with reference YFV sequences from Africa and with vaccine strains obtained from GenBank and subsequent subjected to Maximum Likelihood (ML) phylogenetic analysis. ML phylogenetic trees were inferred with the PhyML program29, using the best-fit model of nucleotide substitution selected using the jModelTest program30. Heuristic tree search was performed employing the SPR branch-swapping algorithm and the reliability of the phylogenies was estimated with the approximate likelihood-ratio test (aLRT)31. 
Sequences that grouped with vaccine strains or within African genotypes were excluded, and only events supported by at least two methods were considered. We collected all complete YFV genome sequences and prM/E gene sequences (654 nt in length) of American origin with known date of isolation that were available in GenBank. The rate of nucleotide substitution, the time to the most recent common ancestor (MRCA), the spatial diffusion pattern and the demographic dynamics of the YFV genotypes I and II in the Americas were jointly estimated using the Markov chain Monte Carlo (MCMC) algorithms implemented in the BEAST v1.8.3 package42 with BEAGLE43 to improve run-time. The evolutionary and demographic processes were directly estimated for each YFV genotype from the sampling dates of the prM/E sequences using the best-fit nucleotide substitution model, a relaxed uncorrelated lognormal molecular clock model44, and a Bayesian Skyline coalescent tree prior45. Migration events throughout the phylogeny were reconstructed using a reversible discrete phylogeographic model46 with a CTMC rate reference prior47. A discrete state was assigned to each sequence, corresponding to the country or country-region (for Brazilian sequences) of isolation. Comparison between coalescent demographic models was performed using the log marginal likelihood estimation (MLE) based on the path sampling (PS) and stepping-stone sampling (SS) methods48. MCMC chains were run sufficiently long to ensure stationarity and convergence. Uncertainty of parameter estimates was assessed after excluding the initial 10% of the run by calculating the Effective Sample Size (ESS) and the 95% Highest Probability Density (HPD) values, respectively, using the TRACER v1.6 program49. 
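The ESS and HPD diagnostics that TRACER reports can be sketched as follows. This is a simplified illustration: the autocorrelation truncation rule and the i.i.d. toy trace are assumptions for the sketch, not the exact algorithms TRACER implements.

```python
import numpy as np

def effective_sample_size(trace, max_lag=None):
    """ESS = N / (1 + 2 * sum of autocorrelations), with the sum
    truncated at the first non-positive autocorrelation (a common,
    simple rule; real tools use more careful estimators)."""
    x = np.asarray(trace, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = x.var()
    if var == 0:
        return float(n)
    max_lag = max_lag or n // 2
    acf_sum = 0.0
    for lag in range(1, max_lag):
        rho = np.dot(x[:-lag], x[lag:]) / (n * var)
        if rho <= 0:
            break
        acf_sum += rho
    return n / (1 + 2 * acf_sum)

def hpd_interval(trace, mass=0.95):
    """Shortest interval containing `mass` of the sampled values."""
    x = np.sort(np.asarray(trace, dtype=float))
    n_in = int(np.ceil(mass * len(x)))
    widths = x[n_in - 1:] - x[:len(x) - n_in + 1]
    i = int(np.argmin(widths))
    return x[i], x[i + n_in - 1]

rng = np.random.default_rng(0)
trace = rng.normal(loc=5.0, scale=1.0, size=10_000)  # toy "posterior trace"
lo, hi = hpd_interval(trace)
print(effective_sample_size(trace), (lo, hi))
```

For an autocorrelated MCMC trace the ESS falls well below the raw sample count, which is why a minimum ESS (often 200) is the usual convergence rule of thumb.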
The programs TreeAnnotator v1.7.5 and FigTree v1.4.0 were used to summarize the posterior tree distribution and to visualize the annotated Maximum Clade Credibility (MCC) tree, respectively. The rates of nucleotide substitution and times to the MRCA were estimated as described above, and the consensus complete CDS sequence for each key ancestral node was computed using the R package SeqinR51. The MCC tree was reconstructed as previously described. To map amino acid changes that were fixed during evolution of YFV genotype I, complete CDS were reconstructed at key internal nodes of the American YFV complete genome phylogeny using the BEAST v1.8.3 package. Supplementary Information"} +{"text": "A ruptured descending thoracic aortic aneurysm (rDTAA) is a life-threatening condition associated with high morbidity and mortality. Endovascular treatment for rDTAA promotes effective aneurysm exclusion with a minimally invasive approach. The authors report a case of a 76-year-old man with a hemodynamically unstable 9-cm-diameter rDTAA treated with emergency thoracic endovascular aortic repair (TEVAR). On arrival the patient was awake, responsive, and hemodynamically unstable. Laboratory test results showed hematocrit: 32.5%, hemoglobin: 10.6 g/dL, and creatinine: 1.29 mg/dL. The patient was hypertensive, a heavy smoker, and had no known history of aortic aneurysm or coronary artery disease. Transthoracic echocardiography demonstrated aortic insufficiency and pericardial effusion. Computed tomography (CT) performed at the referring center had shown a 9-cm-diameter rDTAA with periaortic hematoma and large amounts of blood in the intra-pleural and mediastinal cavities. A proximal endograft (\u00d7 225 mm; Cook\u00ae Medical) was then advanced and deployed just distal to the left common carotid artery, with intentional occlusion of the left subclavian artery. The procedure was done under general anesthesia. 
Access to the left femoral artery was obtained percutaneously and the right femoral artery was surgically exposed, with insertion of 8 Fr sheaths bilaterally. No systemic heparin was administered. An extra-stiff Lunderquist guidewire was advanced through the right external iliac artery to the ascending aorta and an intraoperative aortography was obtained. A second device, a distal Zenith Alpha\u2122 Thoracic Endovascular Graft (\u00d7 211 mm; Cook\u00ae Medical), was inserted, overlapping the first one. A completion angiography demonstrated both endoprostheses correctly positioned, total exclusion of the descending thoracic aortic aneurysm, no endoleaks, and a patent left common carotid artery. Transesophageal echocardiography revealed aneurysm exclusion and immediate aneurysm sac thrombosis. The entire procedure took 60 minutes and blood loss was trivial. The patient received 3 units of intraoperative blood transfusion. Hemothorax is frequently associated with rDTAA; it may provoke compression of the esophagus and cardiovascular structures and compromise postoperative survival. Piffaretti et al. analyzed fifty-six patients with ruptures of the descending aorta and hemothorax who had been treated with TEVAR, concluding that prompt hemothorax evacuation reduced postoperative mortality in drained patients, who had presented significantly worse pre-operative respiratory parameters. Possible limitations to TEVAR for rDTAA include: no proximal or distal aortic neck, an aortic diameter too wide for commercially available thoracic endografts, and severe aortic calcification and tortuosity. In conclusion, TEVAR is feasible for rDTAA. It can be performed in high-risk patients with adverse anatomical characteristics and represents a good option even in cases of sub-optimal diagnosis of thoracic aortic rupture. 
Hemothorax secondary to rDTAA should be drained."} +{"text": "Behind only Alzheimer\u2019s disease, vascular contributions to cognitive impairment and dementia (VCID) is the second most common cause of dementia, affecting roughly 10\u201340% of dementia patients. While there is no cure for VCID, several risk factors for VCID, such as diabetes, hypertension, and stroke, have been identified. Elevated plasma levels of homocysteine, termed hyperhomocysteinemia (HHcy), are a major, yet underrecognized, risk factor for VCID. B vitamin deficiency, which is the most common cause of HHcy, is common in the elderly. With B vitamin supplementation being a relatively safe and inexpensive therapeutic, the treatment of HHcy-induced VCID would seem straightforward; however, preclinical and clinical data show that it is not. Clinical trials using B vitamin supplementation have reported conflicting results about the benefits of lowering homocysteine, and issues have arisen over proper study design within the trials. Studies using cell culture and animal models have proposed several mechanisms for homocysteine-induced cognitive decline, providing other targets for therapeutics. In this review, we focus on HHcy as a risk factor for VCID; specifically, on the different mechanisms proposed for homocysteine-induced cognitive decline and the clinical trials aimed at lowering plasma homocysteine. Vascular contributions to cognitive impairment and dementia (VCID) are defined as the conditions arising from vascular brain injuries that induce significant changes to memory, thinking, and behavior. VCID is the second leading cause of dementia after Alzheimer\u2019s disease (AD); moreover, there is increasing awareness of the co-morbidity of VCID and AD. While there is no cure for VCID, several studies have identified risk factors that can be modified to reduce the risk of developing VCID. A major, yet underrecognized, modifiable risk factor for VCID is hyperhomocysteinemia (HHcy). 
Defined as elevated plasma levels of homocysteine, a non-protein-forming amino acid, HHcy has been identified as a risk factor for cardiovascular disease, stroke, VCID, and AD. Homocysteine is produced in all cells and is involved in the metabolism of cysteine and methionine. In normal metabolism, methionine is converted to S-adenosylmethionine (SAM). SAM is a methyl donor to several different acceptors and forms S-adenosylhomocysteine (SAH) as a by-product of this methyl reaction. SAH can then be hydrolyzed to form homocysteine. Homocysteine can also go through two different re-methylation processes to form methionine again. In one pathway, folate is reduced to tetrahydrofolate, which is then converted to 5,10-methylenetetrahydrofolate. Methylenetetrahydrofolate reductase (MTHFR) reduces 5,10-methylenetetrahydrofolate to 5-methyltetrahydrofolate. Finally, 5-methyltetrahydrofolate and the essential cofactor vitamin B12 add a methyl group to homocysteine to form methionine again. In an alternative pathway, betaine\u2013homocysteine S-methyltransferase (BHMT) uses betaine synthesized from choline as a methyl-group donor to convert homocysteine back to methionine. Homocysteine can also go through a transsulfuration pathway to form cysteine. Serine can be enzymatically added to homocysteine by cystathionine beta synthase (CBS) and vitamin B6 to form cystathionine. As mentioned above, homocysteine is produced in all cells; however, its conversion to cysteine or back to methionine does not occur in all tissues. The brain lacks both CGL and BHMT, making it dependent on the folate cycle for re-methylation of homocysteine to methionine. Other studies suggest homocysteine induces cellular damage via oxidative stress. As mentioned above, during normal homocysteine metabolism, cysteine is produced. Cysteine is a precursor for glutathione, a tripeptide that ultimately reduces reactive oxygen species. 
Without homocysteine conversion to cysteine, either due to CBS mutations or a diet lacking in vitamin B6, glutathione levels decrease, leading to increased reactive oxygen species and ultimately oxidative stress. Homocysteine metabolism is also regulated by the redox potential in a cell, since several enzymes involved in its metabolism are regulated by the oxidative status. Another proposed mechanism for homocysteine neurodegeneration involves homocysteine\u2019s role as an agonist for AMPA (both metabotropic and ionotropic) and NMDA receptors. Homocysteic acid, an oxidative product of homocysteine that is released in response to excitatory stimulation, acts as an excitatory neurotransmitter by activating the NMDA receptor. CBS\u00b1 heterozygote mice have a 50% lower CBS activity compared to wildtype mice and develop mild HHcy. HHcy can also be induced with diets deficient in B vitamins or enriched in methionine, which increases the conversion of methionine to homocysteine. A combination of these diets, or even a diet of increased homocysteine, can also be used to induce HHcy. Our lab has also recently developed a model of VCID by inducing HHcy in order to investigate the mechanisms of homocysteine-induced cognitive impairment. We placed 3-month-old C57BL6 mice on a combination diet that is deficient in folate and vitamins B6 and B12 and enriched in methionine for 3 months. In addition to the pathologies listed above, we have also shown that astrocytic end-feet are disrupted in the mice on the HHcy diet. While several mechanisms of homocysteine-induced cognitive impairment and neurodegeneration have been proposed and discussed here, it is unlikely that homocysteine acts through only one of these mechanisms. Homocysteine may act through several, if not all, of these mechanisms. It is also unclear whether the high levels of homocysteine or the lack of B vitamins is the main cause behind the cognitive impairment seen in hyperhomocysteinemic patients. 
Discussed next are the clinical implications of HHcy and the potential therapeutics tested in clinical trials to lower homocysteine levels and improve cognition. Extensive clinical data support the role of HHcy as a risk factor for VCID. Given that normal and abnormal values are set by individual clinical laboratories, mild-to-moderate HHcy is only loosely defined by clinical standards. Both genetic mutations and dietary vitamin deficiencies can affect homocysteine levels, resulting in HHcy. Several polymorphisms (notably C677T and A1298C) have been identified in the MTHFR gene in humans, which can induce severe HHcy by limiting the conversion of homocysteine back to methionine. As suggested, clinical mild-to-moderate HHcy is common, especially in elderly patients, with the majority of cases resulting from insufficient B vitamin status. H. pylori infection may also contribute to inadequate B12 absorption. Some trials do support a beneficial effect of B vitamin supplementation on cognition. The FACIT trial showed significant effects of B vitamins on cognition in participants with high plasma homocysteine, while the WAFACS trial showed similarly significant effects in those with inadequate B vitamin status. In all, given the challenges faced by previous trials, further B vitamin supplementation trials are needed. New trials will be most successful if they prescribe a full combination supplement at high dose to at-risk-age participants with elevated plasma homocysteine or inadequate B vitamin status at baseline, adequate omega-3 fatty acid status at baseline, and who do not routinely take aspirin. With the number of people aged over 60 expected to increase worldwide by 1.25 billion by 2050, accounting for 22% of the world\u2019s population, it is crucial to understand the causes of dementia and develop treatments. BP and EW each wrote 50% of the manuscript. 
DW edited for content, checked for accuracy, and provided guidance in the preparation of the content. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The para-Bombay phenotype is characterized by a lack of ABH antigens on red cells, but ABH substances are found in saliva. Molecular genetic analysis was performed for seven Chinese individuals serologically typed as para-Bombay at the Blood Station Center of Ningbo, Zhejiang Province, China, from 2011 to 2014. The RBC phenotypes were characterized by standard serologic techniques. Genomic DNA was sequenced with primers that amplified the coding sequences of the \u03b1-fucosyltransferase genes FUT1 (or H) and FUT2 (or Se), respectively. Routine ABO genotyping analysis was performed. Haplotypes of FUT1 were identified by TOPO cloning and sequencing. A phylogenetic tree of the H proteins of different organisms was built using Mega 6 software. Four mutant alleles, FUT1 547delAG (h1), FUT1 880delTT (h2), FUT1 658T (h3) and FUT1 896C, were identified in this study; FUT1 896C was first revealed by our team. The H-deficient alleles reported here are rare, and the molecular basis of H-deficient alleles is diverse in the Chinese population. In addition, FUT2 was also analyzed; only one FUT2 allele was detected in our study: Se357. The phylogenetic tree of the H proteins showed that H proteins could work as an evolutionary and genetic marker to differentiate organisms. Seven independent individuals were demonstrated to possess the para-Bombay phenotype, and their RBC ABO genotypes correlated with the ABH substances in their saliva. The molecular genetic backgrounds of the seven Chinese individuals are summarized here; sporadic and random mutations in the FUT1 gene are responsible for the inactivation of the FUT1-encoded enzyme activity. The enzymes encoded by FUT1 and FUT2 synthesize the H antigen, the precursor of the A and B antigens. 
Both the FUT1 and FUT2 genes encode \u03b1(1,2)-fucosyltransferases. The para-Bombay phenotype results from a non-functional FUT1 gene accompanied by an active FUT2 gene. The first mutant FUT1 gene was identified in an Indian individual who lacked the H enzyme and had no H antigens on erythrocytes, which is the typical Bombay phenotype. To date, more than 43 silencing or weakening mutations have been described for FUT1 in the Blood Group Antigen Gene Mutation Database of the US National Center for Biotechnology Information. An active FUT2 gene determines the synthesis of H type 1 antigen (and subsequently A/B antigens) that can be adsorbed onto the RBC membrane from the plasma, but the enzyme activity encoded by a deficient FUT1 gene is greatly abated, resulting in lower amounts of H antigen (and A/B antigens) on the surface of RBCs. In this situation, no matter whether the function of the FUT2 gene is normal or not, H antigen (and A/B antigen) is poorly expressed and can only be detected by adsorption-elution tests using the proper anti-H (and anti-A/B) reagents. Anti-H made by para-Bombay individuals usually shows a weaker reaction in the adsorption-elution test than anti-H from individuals with the Bombay phenotype: the latter is usually strongly reactive with a wide thermal range, whereas anti-H from para-Bombay individuals is less reactive and may not react above room temperature. This paper describes the molecular genetic backgrounds of seven such Chinese individuals. Six probands with the para-Bombay phenotype were identified during pre-transfusion testing in the period 2011 to 2014. One proband was a volunteer donor at the Ningbo Blood Station of Zhejiang Province in China, whose erythrocytes showed a rare phenotype with a cell and serum grouping discrepancy, and who was suspected to be a para-Bombay individual. Overall, 5 mL of peripheral blood was collected into ethylenediaminetetraacetic acid dipotassium (EDTA-2K) anticoagulant from each individual. 
Saliva samples were provided by all the suspected para-Bombay individuals as well, and the ABH antigens on erythrocytes and in saliva were examined. All the subjects signed informed consents. Genomic DNA was extracted from whole blood samples using a DNA isolation kit according to the manufacturer\u2019s instructions. DNA from the peripheral blood of 110 randomly chosen Chinese individuals with normal ABO blood group phenotypes was isolated to assess the frequency of mutant H alleles in the natural population. ABO serology was performed with standard serological techniques. The adsorption-elution test was used. Anti-Lea and anti-Leb were placed in tubes and mixed with washed RBCs, respectively. After centrifugation, the results of haemagglutination were observed macroscopically and microscopically. The human anti-A,B was prepared by our laboratory. Preliminary ABO genotypes were determined using a sequence-specific-primer PCR (PCR-SSP) technique designed by our team with Primer Premier 5.0. All primers were synthesized by Life Technologies; the sequences of the primers used for the ABO gene are listed in the corresponding tables. ABO genotypes were assigned according to the nucleotides at the polymorphic ABO positions. All the acquired nucleotide sequences were compared with standard ABO polymorphisms from the dbRBC of NCBI, and each SNP or mutation in the ABO gene was analyzed and documented. Exact ABO genotypes were determined by sequencing exons 6 and 7 of the ABO gene. The reagents and protocols used in the PCR were the same as those used for the sequencing of the ABO gene mentioned in the above section. 
The sequence data were analyzed with FinchTV 1.4 software and all obtained nucleotide sequences were compared with the standard Hh polymorphisms from the dbRBC of NCBI; every mutation in the FUT1 gene was analyzed and each FUT1 genotype was finally assigned. Two DNA fragments covering the entire coding region (1098 bp) were amplified to identify the mutations in the FUT1 gene. In order to analyze the haplotypes, the PCR product of FUT1 was ligated into the plasmid pCRII-TOPO, and competent TOP-10 Escherichia coli cells were then transformed with the recombinant plasmids using a TOPO TA cloning kit according to the manufacturer\u2019s instructions. Colonies on LB plates were selected randomly and screened using colony-PCR for each sample. Plasmid DNA from positive colonies was extracted with a kit and used as template for the sequencing reactions. The PCR products were sent to Shanghai Sunny Biotechnology Co., Ltd, where all the following experimental steps were performed. To determine the genotype of FUT2, the whole coding region (1118 bp) of FUT2 was amplified using the primers listed in the corresponding table. The H protein sequence was pasted as the query sequence into the text area of BlastP, and 52 organisms that express H proteins were retrieved; every protein sequence was downloaded in FASTA format. The evolutionary history was inferred using Mega 6 software. The evolutionary distances were computed using the Poisson correction method. The ABH substances on RBCs could not be detected using direct agglutination, even though all the reagents, polyclonal and monoclonal anti-sera and the lectin Ulex europaeus (anti-H), were used in the experiment. However, three mutant alleles (h1, h2 and h3) were detected in the six individuals with the para-Bombay phenotypes using DNA sequencing based on the entire FUT1 coding region. Heterozygous (h1h3) or homozygous genotypes were identified in the coding region. 
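The Poisson correction used for the evolutionary distances is d = -ln(1 - p), where p is the proportion of differing amino acid sites between two aligned sequences; it compensates for multiple substitutions at the same site. A minimal sketch (the toy sequences below are illustrative, not from this study):

```python
import math

def p_distance(seq_a: str, seq_b: str) -> float:
    """Proportion of amino acid sites that differ (sequences pre-aligned)."""
    assert len(seq_a) == len(seq_b)
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a)

def poisson_correction(p: float) -> float:
    """Poisson-corrected distance d = -ln(1 - p)."""
    return -math.log(1.0 - p)

a = "MKVLATTLLG"
b = "MKVLSTTLLG"          # one difference out of ten sites
p = p_distance(a, b)       # 0.1
print(round(poisson_correction(p), 5))  # -ln(0.9) = 0.10536
```

For small p the corrected distance is close to p itself; the correction grows as sequences diverge.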
The 357C>T variant of FUT2 does not result in an amino acid change and is common in Asian populations. Four non-functional FUT1 alleles, FUT1 547delAG (h1), FUT1 880delTT (h2), FUT1 658T (h3) and a novel FUT1 allele, FUT1 896C, were identified in the seven Chinese individuals with para-Bombay phenotypes, all on the same Se357/Se357 haplotype background; the h1, h2 and h3 mutations have been found mainly in the Chinese population. As the para-Bombay phenotype is rare in the natural population, it may cause problems in clinical blood transfusion, blood typing and so on; this article contributes to the understanding of this special blood group not only in theory but also in practice. Ethical issues have been completely observed by the authors."} +{"text": "However, the possibility that wild-type RNA binding proteins mislocalize without necessarily becoming constituents of cytoplasmic inclusions themselves remains relatively unexplored. We hypothesized that nuclear-to-cytoplasmic mislocalization of the RNA binding protein fused in sarcoma (FUS), in an unaggregated state, may occur more widely in ALS than previously recognized. To address this hypothesis, we analysed motor neurons from a human ALS induced-pluripotent stem cell model caused by the VCP mutation. Additionally, we examined mouse transgenic models and post-mortem tissue from human sporadic ALS cases. We report nuclear-to-cytoplasmic mislocalization of FUS in both VCP-mutation related ALS and, crucially, in sporadic ALS spinal cord tissue from multiple cases. Furthermore, we provide evidence that FUS protein binds to an aberrantly retained intron within the SFPQ transcript, which is exported from the nucleus into the cytoplasm. Collectively, these data support a model for ALS pathogenesis whereby aberrant intron retention in SFPQ transcripts contributes to FUS mislocalization through their direct interaction and nuclear export. 
In summary, we report widespread mislocalization of the FUS protein in ALS and propose a putative underlying mechanism for this process. Genetic discoveries in ALS strongly implicate ubiquitously expressed regulators of RNA processing. Here, we examined a human induced-pluripotent stem cell (iPSC) model, mouse transgenic models and human post-mortem tissue from multiple cases of sporadic ALS. We find that the nuclear-to-cytoplasmic mislocalization of FUS is a more widespread feature of ALS than previously recognized. Furthermore, we present evidence that supports a putative molecular mechanism for this mislocalization through interaction between FUS protein and the aberrant intron-retaining SFPQ transcript. iPSCs were maintained using standard protocols. Motor neuron differentiation was carried out as described previously. The following transgenic mouse lines were used and analysed as different experimental groups: (i) female SOD1G93A mice, postnatal Day 93\u201395 (P93\u201395) (n = 4 mice); (ii) male mice over-expressing mutant human VCPA232E, generated by J. Paul Taylor et al., St Jude Children\u2019s Research Hospital, Memphis, TN, USA, maintained on a C57/B6 background, symptomatic, 9 months old (n = 3 mice); and (iii) wild-type C57BL/6-SJL mixed background mice (Jackson Laboratories), used as controls (n = 4 mice). Mice were bred and maintained at the UCL Institute of Neurology in standard individually ventilated cages with up to three mice per cage, in a temperature- and humidity-controlled environment with a 12-h light/dark cycle, and had access to drinking water and food ad libitum. Cages were checked daily to ensure animal welfare. Body weight was assessed regularly to ensure no weight loss. For tissue collection, animals were injected with terminal anaesthesia and transcardially perfused with 4% paraformaldehyde. 
The lumbar region of the spinal cord was removed, post-fixed with 4% paraformaldehyde and cryoprotected overnight with 30% sucrose; 10 or 20 \u03bcm serial transverse cryosections were cut for immunofluorescence staining. All experiments were carried out following the guidelines of the UCL Institute of Neurology Genetic Manipulation and Ethics Committees and in accordance with the European Community Council Directive of November 24, 1986 (86/609/EEC). Animal experiments were undertaken under licence from the UK Home Office in accordance with the Animals (Scientific Procedures) Act 1986 (Amended Regulations 2012) and were approved by the Ethical Review Panel of the Institute of Neurology. Snap-frozen tissue sections were obtained from the lumbar spinal cords of eight healthy donors and 12 age- and sex-matched sporadic ALS patients. Images were acquired as confocal z-stacks with a z-step of 1 \u03bcm and processed to obtain a maximum intensity projection. For the analysis of the nuclear/cytoplasmic ratio of FUS in iPSC-derived cells, images were analysed using the Columbus Image Analysis System (Perkin Elmer). The animal and post-mortem tissue sections were analysed using Fiji. Motor neurons were identified by choline acetyltransferase (ChAT) immunoreactivity, and the nuclear and cytoplasmic areas were manually drawn based on DAPI and ChAT staining, respectively. For each cell, the average FUS immunoreactivity intensity in each region of interest was measured, background was subtracted, and the ratio between nuclear and cytoplasmic average intensity was calculated and used as the main experimental outcome. For immunocytochemistry and immunohistochemistry, samples were blocked in 10% normal goat serum (NGS) or 10% normal donkey serum (NDS) as appropriate and permeabilized in 0.3% Triton\u2122 X-100 at room temperature for 1 h. 
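The per-cell quantification described above (mean FUS intensity within each region of interest, background subtraction, then the nuclear/cytoplasmic ratio) can be sketched with NumPy. The image, masks and background value below are toy assumptions; this is not the Columbus or Fiji pipeline itself.

```python
import numpy as np

def nc_ratio(image, nuclear_mask, cyto_mask, background):
    """Mean intensity in each ROI, background-subtracted,
    then nuclear / cytoplasmic ratio (one value per cell)."""
    nuc = image[nuclear_mask].mean() - background
    cyt = image[cyto_mask].mean() - background
    return nuc / cyt

# Toy 6x6 "maximum intensity projection" with a bright nucleus.
img = np.full((6, 6), 20.0)            # cytoplasmic signal
img[2:4, 2:4] = 80.0                   # nuclear signal
nuc_mask = np.zeros((6, 6), bool)
nuc_mask[2:4, 2:4] = True
cyt_mask = ~nuc_mask
print(nc_ratio(img, nuc_mask, cyt_mask, background=10.0))  # (80-10)/(20-10) = 7.0
```

A ratio falling over differentiation or disease would correspond to the nuclear-to-cytoplasmic mislocalization reported in the study.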
Immunolabelling was performed with primary antibodies in NGS (5%) and Triton\u2122 X-100 (0.1% in PBS) at 4\u00b0C overnight, followed by species-specific secondary antibodies for 1 h at room temperature and DAPI nuclear counterstain (100 ng/ml) for 10 min at room temperature. For human post-mortem samples, fixation and permeabilization in cold methanol were performed before the immunostaining. Primary antibodies were diluted as follows: goat anti-ChAT 1:100; rabbit and mouse anti-FUS. Images were acquired using either a 710 Laser Scanning Confocal Microscope (Zeiss) or the Opera Phenix High-Content Screening System (Perkin Elmer). For the biochemical fractionation of iPSC-derived neural precursors, the Ambion PARIS kit (Thermo Fisher Scientific) was used following the manufacturer\u2019s instructions. A cytosolic fraction was obtained by lysing the cultures for 10 min in ice-cold cell fractionation buffer. Nuclei were lysed in 8 M urea nuclear lysis buffer containing 50 mM Tris-HCl (pH 8), 100 mM NaCl, 0.1% SDS and 1 mM DTT. Both lysis buffers contained 0.1 U/\u03bcl RiboLock RNase Inhibitor (Thermo Fisher Scientific). RNA was extracted from both fractions using the Promega Maxwell\u00ae RSC simplyRNA cells kit, including DNase treatment, on the Maxwell\u00ae RSC instrument. Reverse transcription was performed using the RevertAid\u2122 First Strand cDNA Synthesis Kit (Thermo Fisher Scientific) with 1 \u03bcg of RNA and random hexamers. Quantitative PCR was performed using the PowerUP\u2122 SYBR\u00ae Green Master Mix (Thermo Fisher Scientific) and the QuantStudio\u2122 6 Flex Real-Time PCR System (Applied Biosystems). Specific amplification was determined by melt curve analysis and agarose gel electrophoresis of the PCR products. 
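Relative transcript abundance between fractions can be estimated from such SYBR Green qPCR runs with the standard 2^-\u0394\u0394Ct (Livak) method. The study does not state its exact quantification scheme, so this is only an illustrative sketch with hypothetical Ct values:

```python
def ddct_fold_change(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Standard 2^-ddCt relative quantification (Livak method);
    assumes ~100% primer amplification efficiency."""
    d_test = ct_target_test - ct_ref_test   # dCt in the test sample
    d_ctrl = ct_target_ctrl - ct_ref_ctrl   # dCt in the control sample
    return 2.0 ** (-(d_test - d_ctrl))

# Hypothetical Ct values: an intron-retaining transcript vs a reference
# transcript, cytoplasmic fraction (test) vs nuclear fraction (control).
print(ddct_fold_change(24.0, 18.0, 27.0, 18.0))  # 2^-(6-9) = 8.0
```

Each Ct unit corresponds to roughly a two-fold difference in template, which is why the result is a power of two here.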
Analysis of intron-retaining transcripts was performed as previously described. Hybridization was performed in buffer containing 10% dextran sulphate, 2 mg/ml UltraPure BSA, and 10 mM vanadyl-ribonucleoside complex, with probes at a final concentration of 1 ng/\u00b5l. Preparations were covered with parafilm and incubated at 37\u00b0C for 5 h, and afterwards washed twice with pre-warmed 2 \u00d7 SSC/10% formamide for 30 min at 37\u00b0C. Finally, the preparations were washed twice with PBS at room temperature and then mounted using 10 \u00b5l ProLong\u00ae Gold Antifade Reagent containing DAPI. The slides were imaged once the mounting medium was fully cured (>12 h). Probes were designed using the Probe Designer software from Biosearch Technologies and were provided by the same vendor. The probes included were designed against the retained SFPQ intron, conjugated to Quasar\u00ae570 (SMF-2037\u20131), and the mature SFPQ transcript, conjugated to Quasar\u00ae670 (sequences of probes available upon request). Quantification of the hybridization signal was performed using a custom spot-intensity detection algorithm in DAPI-segmented cells to separate nuclear and cytoplasmic signal. P-values were obtained by likelihood ratio tests of the full model with the effect in question against the model without the effect in question. We used R and lme4. For human iPSC work, informed consent was obtained from all patients and healthy controls in this study. Experimental protocols were all carried out according to approved regulations and guidelines by UCLH\u2019s National Hospital for Neurology and Neurosurgery and the UCL Queen Square Institute of Neurology joint research ethics committee (09/0272). The human post-mortem spinal cord samples were obtained from the tissue bank NeuroResource, UCL Queen Square Institute of Neurology, London, UK. 
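The likelihood ratio test described above compares the fit of two nested models: twice the difference in log-likelihood is referred to a chi-square distribution with degrees of freedom equal to the number of dropped parameters. A minimal sketch for one degree of freedom (the log-likelihood values are hypothetical, and the lme4 model fitting itself is not reproduced):

```python
import math

def lrt_pvalue(loglik_full: float, loglik_reduced: float, df: int = 1) -> float:
    """Likelihood ratio test: 2*(lnL_full - lnL_reduced) ~ chi-square(df).
    For df=1 the chi-square survival function equals erfc(sqrt(stat/2))."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    if df != 1:
        raise NotImplementedError("sketch handles df=1 only")
    return math.erfc(math.sqrt(stat / 2.0))

# Hypothetical log-likelihoods from a model with and without the
# effect in question (e.g. disease group).
print(round(lrt_pvalue(-120.3, -123.9), 4))  # approximately 0.0073
```

A small p-value here means the richer model fits the data significantly better than chance alone would explain.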
Samples were donated to the tissue bank with written tissue donor informed consent following ethical review by the NHS NRES Committee London\u2013Central and stored under a Research Sector Licence from the UK Human Tissue Authority (HTA). All animal experiments described in this study were carried out under licence from the UK Home Office, and were approved by the Ethical Review Panel of the Institute of Neurology. Data supporting the findings of this study are available from the corresponding author upon reasonable request. In cultures of comprehensively validated and functionally characterized spinal cord motor neurons derived from iPSC lines carrying VCP mutations (VCPR155C and VCPR191Q), we observed a significant (P < 0.001) decrease in nuclear-to-cytoplasmic localization of FUS during motor neuron differentiation. Nuclear-to-cytoplasmic mislocalization also abounded in the VCP mouse model, with a reduction in the FUS nuclear-to-cytoplasmic ratio of \u22124.0176 \u00b1 1.1775. We next examined post-mortem spinal cord tissue from 12 sporadic ALS cases and eight healthy controls. In summary, we found FUS mislocalization, which associates with increased apoptosis, in an in vitro model of ALS, together with orthogonal validation that the intron-retaining SFPQ transcript is exported from the nucleus, in a mouse transgenic model (three VCP mutant and three control mice), and in post-mortem tissue (12 sporadic ALS cases and eight control cases). The pervasive mislocalization of FUS has likely evaded detection thus far as FUS largely remains unaggregated in the cytoplasm, rather than forming part of the TDP-43 aggregates in sporadic ALS cases. Our findings support a model whereby FUS mislocalization from the nucleus to the cytoplasm occurs in the majority of ALS cases, but it generally does not appear to aggregate in the cytoplasm. 
Nuclear loss of FUS protein may impair pre-mRNA splicing, whilst the possibility of a cytosolic toxic gain of function is also noteworthy in light of recent studies. In summary, we report widespread mislocalization of FUS in ALS and propose a putative context-specific mechanism for this through its interaction with the ALS-related aberrantly retained intron 9 in SFPQ transcripts. These findings raise the prospect of targeting the nuclear-to-cytoplasmic mislocalization of unaggregated FUS as a putative therapeutic strategy in ALS. awz217_Supplementary_Data"} +{"text": "Cycling exercise is commonly used in rehabilitation to improve lower extremity (LE) motor function and gait performance after stroke. Motor learning is important for regaining motor skills, suggesting that training of motor skills influences cortical plasticity. However, the effects of motor skill learning in dynamic alternating movements of both legs on cortical plasticity remain unclear. Here, we examined the effects of skillful cycling training on cortical plasticity of the LE motor area in healthy adults. Eleven healthy volunteers participated in the following three sessions on different days: skillful cycling training, constant-speed cycling training, and a rest condition. Skillful cycling training required the navigation of a marker up and down curves by controlling the rotation speed of the pedals. Participants were instructed to fit the marker to the target curves as accurately as possible. Amplitudes of motor evoked potentials (MEPs) and short-interval intracortical inhibition (SICI) evoked using transcranial magnetic stimulation (TMS) were assessed at baseline, after every 10 min of the task, and 30 min after the third and final trial. A decrease in tracking errors was representative of the formation of motor learning following skillful cycling training. Compared to baseline, SICI was significantly decreased after skillful cycling training in the tibialis anterior (TA) muscle. 
The task-induced alterations of SICI were more prominent and lasted longer with skillful cycling training than with the other conditions. The changes in SICI were negatively correlated with the change in tracking error ratio at 20 min after the task. MEP amplitudes were not significantly altered under any condition. In conclusion, skillful cycling training induced long-lasting plastic changes in intracortical inhibition, which corresponded to the learning process in the LE motor cortex. These findings suggest that skillful cycling training would be an effective LE rehabilitation method after stroke. Motor impairments following stroke remain one of the leading causes of long-term disability in daily life. Cycling exercise has been proposed as an effective approach to improve lower extremity (LE) motor function and gait performance in patients with stroke. Motor learning is important for regaining motor skills including gait, and motor skill training may influence cortical plasticity after brain injury. Motor learning of coordinated alternating movements of both legs, such as in cycling, is important to efficiently reacquire gait performance following stroke. Eleven healthy volunteers participated in this study. Sample size was determined based on previous studies investigating the effects of cycling exercise or ankle exercise on intracortical inhibition. The present study employed a randomized crossover design. All participants performed the following sessions on different days: (1) skillful cycling training, (2) constant-speed cycling training, and (3) a rest condition. Participants were comfortably seated on a servo-dynamically controlled recumbent ergometer. Their feet were firmly strapped to the pedals, and a seat belt and an adjustable backrest with a tilt angle of 80\u00b0 were used to stabilize their trunk. 
The ergometer used achieved highly precise load control over a wide range of cycling resistances (0\u2013240 Nm). The ergometer seat and crank heights were set at 51 and 17 cm, respectively. The distance from the seat edge to the crank axis and the height of the pedal axis were adjusted so that the knee extension angle was \u221210\u00b0 during maximal extension. An isotonic mode was utilized with the load set at 5 Nm. Participants performed skillful cycling training, whereby they controlled the movement of a cursor on a computer screen by adjusting the pedaling speed in order to track a marker along target curves. The ergometer settings were identical to those used during skillful cycling training. To control the amount of exercise, a trial required a constant pedaling speed of 40 rpm for 10 min. Using a similar program to the one used during skillful cycling training, participants maintained the appropriate number of rotations while observing a tracking line set at 40 rpm. As a control, a 10-min rest condition was carried out whereby participants sat on the ergometer in the same manner as during the other conditions, but did not engage in cycling. Prior to electrode attachment, the area of skin over the recording area of the target muscle was cleansed with alcohol. Throughout the experiments, skin resistance was kept below 5 k\u03a9. Surface electrodes were placed on the skin overlying the left TA in a bipolar montage (inter-electrode distance of 20 mm). A Neuropack TR electromyography machine was used to record and analyze the EMG data. A band pass filter was applied between 30 Hz and 2 kHz. Signals were recorded at a sampling rate of 5 kHz and stored on the computer for subsequent analysis using LabVIEW software. Participants were seated on the ergometer with a backrest in a relaxed position with 80\u00b0 hip flexion, 80\u00b0 knee flexion, 10\u00b0 ankle plantar flexion, and their feet on the floor. 
TMS was performed using a magnetic stimulator capable of delivering a magnetic field of 2.2 T with a 100 \u03bcs pulse duration through a double cone coil. Each cone had a diameter of 110 mm. The stimulating coil was located 0\u20132 cm posterior to the vertex, placed over the site that was optimal for eliciting responses in the left TA, and oriented so that the current in the brain flowed in a posterior to anterior direction through this site. The rationale for choosing TA as the target muscle was mainly the technical reason that TMS over M1 can induce reliable MEPs from TA. The intensity of single-pulse TMS was set at 120% of the rMT to measure MEPs as an indicator of corticospinal excitability. A total of 10 MEPs were recorded at rest at each time point. Peak-to-peak amplitudes of the 10 MEPs were averaged for each time point, and the mean value and standard error among subjects were calculated. In the present study, we sought to evaluate cortical plasticity by measuring changes in SICI after the cycling training. We compared the total number of pedal rotations during skillful and constant-speed cycling using two-factor repeated-measures analysis of variance (ANOVA) to analyze the effects of \u201ctrial\u201d and \u201ccondition\u201d. Additionally, to compare the degree of arousal between conditions, we compared heart rate data recorded after the skillful and constant-speed cycling using a paired t-test. To confirm the occurrence of motor learning following skillful cycling training, a one-factor repeated-measures ANOVA was performed to analyze the change in the area of error between the three trials. A paired t-test with Bonferroni\u2019s correction for multiple comparisons was used for post hoc analysis if a given ANOVA showed a significant interaction. 
Retrospective power calculations were performed for paired t-tests, with effect size represented by Cohen\u2019s d. To analyze MEP amplitude and SICI, two-factor repeated-measures ANOVA was used to analyze the effects of \u201ctime\u201d and \u201ccondition\u201d and any interaction. One-way ANOVA was performed to compare MEP amplitude and SICI between conditions using T0 as a baseline. When analyzing SICI, in order to confirm that the test MEP did not differ between trials and conditions, we performed two-factor repeated-measures ANOVA using the statistical model described above. To investigate the relationship between plastic changes in SICI and motor learning, we calculated the tracking error ratio and the SICI ratio, and correlations between them were assessed using Pearson\u2019s correlation analysis, after checking for normal distribution of the data with the Shapiro\u2013Wilk test. The tracking error ratio values were calculated by dividing the values of Task 2 and Task 3 by the value of Task 1. The SICI ratio was calculated as the SICI values of T20 and T30 divided by the value of T10 in order to minimize the exercise-induced changes in SICI values at each time point. All statistical analyses were conducted using IBM SPSS Statistics 21 for Windows. Statistical significance was set at P < 0.05 for all tests. Two-factor repeated-measures ANOVA revealed neither a significant interaction (F2,20 = 2.613, P = 0.098) nor any significant main effect. No participants complained of fatigue after cycling in any condition. There were no significant differences in heart rate after training between the skillful and constant-speed cycling conditions. These results indicate that there was no difference in the amount of exercise or arousal between the two conditions or between trials. The average number of rotations of the pedals during the skillful and constant-speed cycling conditions was 444.9 \u00b1 4.0 and 448.4 \u00b1 4.4, respectively (mean \u00b1 standard error). 
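The ratio normalization and correlation analysis described above can be sketched in Python. This is a minimal illustration, not the study's SPSS pipeline; the numeric arrays are hypothetical placeholders, not the study's data:

```python
import math

def ratios(values, baseline_index=0):
    # Normalize a series to one reference time point, mirroring how the
    # study divides Task 2/Task 3 errors by Task 1, and T20/T30 SICI by T10.
    ref = values[baseline_index]
    return [v / ref for v in values]

def pearson_r(x, y):
    # Textbook Pearson correlation coefficient (the study used SPSS).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-participant ratios (illustrative only):
tracking_error_ratio = [0.95, 0.80, 0.70, 0.85, 0.60]
sici_ratio = [1.02, 1.10, 1.25, 1.05, 1.30]
r = pearson_r(tracking_error_ratio, sici_ratio)
```

With data of this shape, a negative r has the same sign as the study's reported r = \u22120.614 after Task 2: larger reductions in tracking error pair with larger changes in the SICI ratio.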
Two-factor repeated-measures ANOVA did not reveal a significant interaction. Post hoc tests revealed that the area of error for Task 2 and Task 3 was significantly smaller than that for Task 1. Additionally, the area of error of Task 3 was smaller than that of Task 2 (P = 0.002). The variance of individual performance was large at baseline, but gradually decreased with skillful cycling training. For MEP amplitudes, two-factor repeated-measures ANOVA did not reveal a significant interaction or any significant main effect. There was no significant main effect in the baseline MEP amplitudes between the three conditions (F2,20 = 1.083, P = 0.358). For SICI, a significant interaction was observed between time and condition, and there were significant main effects of time and condition. Post hoc testing of the temporal change revealed that SICI was decreased at all time points relative to T0 in skillful cycling training, with a significant difference between T10 and T30. In constant-speed cycling training, SICI was significantly decreased at T10 and T20 compared to T60. Comparisons between conditions revealed that SICI was significantly decreased in skillful cycling training compared to the rest condition at T10 and later. Furthermore, at T30 and T60, SICI for skillful cycling training was significantly decreased compared to that for constant-speed cycling training. There was a significant negative correlation between the tracking error ratio and the SICI ratio measured after Task 2 (r = \u22120.614, P = 0.044). However, there was no correlation after Task 3. These findings should be interpreted with caution. In future work, sample sizes will be determined based on power analysis to enhance detection power. Second, the present results showed no differences in the total number of pedal revolutions, indicating that physical conditions were not different. 
However, we did not measure EMG activities to investigate the exercise load differences between skillful cycling and constant-speed cycling training, which could have affected the results. Further study is needed to clarify the effects of exercise load on cortical plasticity. Another limitation is that the present study included only healthy adults; the relationship between decreased SICI and improved performance in spastic patients requires further investigation. To verify the effectiveness of this method, studies on stroke patients are required. Several limitations of this study should be noted. First, the sample size of the current study was relatively small, although similar to prior studies targeting LE muscles. Our study revealed that skillful cycling training, which involves a learning task for both legs, induced a significant reduction in SICI in the LE motor cortex area compared with conventional cycling. The effects lasted for at least 30 min after training. The current findings provide insight into the relationship between cortical plasticity and motor learning in leg performance, which could be applied to improve gait function in patients with stroke. In the future, the efficacy of skillful cycling training should be examined in stroke patients as a means to improve gait disorders. The datasets generated for this study are available on request from the corresponding author. Human Subject Research: The studies involving human participants were reviewed and approved by the Ethics Committee of the Tokyo Bay Rehabilitation Hospital. The patients/participants provided their written informed consent to participate in this study. SaT and TY conceived and supervised the study. TT, SaT, and TY designed the experiments and wrote the manuscript. TT, KM, and KK carried out the experiments. ShT constructed the computer program. TT and KM analyzed the data. 
All authors approved the final version of the submitted manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Interferograms with short wavelength are usually prone to temporal decorrelation in permafrost regions, leading to the unavailability of sufficient high-coherence interferograms for performing conventional time series InSAR analysis. This paper proposes the utilization of temporary scatterers for the stacking InSAR method, thus enabling extraction of subsidence in a permafrost region with limited SAR images and limited high-coherence interferograms. This method is termed temporary scatterers stacking InSAR (TSS-InSAR). Taking the Gonghe-Yushu highway (about 30 km), part of the G214 National Highway in Qinghai province (in a permafrost region), as a case study, the TSS-InSAR approach was demonstrated and implemented in detail. With 10 TerraSAR-X images acquired during the period from May 2015 to August 2015, the subsidence along this highway was extracted. In this case, the lack of a sufficient number of SAR acquisitions precludes other conventional time series InSAR analyses. The results show that the middle part of this highway is in the thermokarst and seasonal frozen soil area, and its accumulated subsidence reaches up to 10 cm in 110 days. Thawing remains the main reason for the instability of the highway. The results demonstrate that the TSS-InSAR method can effectively extract subsidence information in a challenging scenario with limited X-band SAR images and limited high-coherence interferograms, where other time series InSAR-based techniques cannot be applied in a simple way. China is the third largest country in the world in terms of permafrost distribution. 
The proportion of permafrost and seasonal permafrost distribution accounts for 21.5% and 53.5% of the national land, respectively. The subsidence monitoring of highway subgrade in permafrost environments is an important aspect of maintenance work and highway safety. At present, the commonly used subgrade subsidence monitoring methods are divided into two categories: geotechnical measurement methods and remote sensing methods. InSAR, with its characteristics of high precision, high spatial resolution, weather independence and wide coverage, has developed into an important technique to monitor surface displacements, with unique advantages in large-scale surface displacement monitoring. SAR data covering the study area were acquired from May 2015 to August 2015. In this permafrost area the interferometric coherence decreases sharply with the increment of the time interval due to the short wavelength of the TSX SAR images. The limited number of SAR images and insufficient high-coherence interferograms were the biggest challenges in this case. In such a challenging condition, the conventional time series InSAR-based techniques cannot be applied in a simple way. We monitored the highway stability by proposing a method called the temporary scatterers stacking InSAR method (TSS-InSAR), which utilizes temporary scatterers in the stacking InSAR method. The stacking process and proper selection of temporary scatterers can effectively overcome the limitations in this case study. The parameters and model of this method are discussed and analyzed in detail, and we demonstrate how to control the different error sources to use this method in a unique and challenging environment. With this method the subsidence along the highway is extracted and the results are interpreted in relation to the geomorphology. The Gonghe to Yushu highway, part of the G214 national highway (about 30 km), is the main research objective of this study. 
This highway is located in the hinterland of the Qinghai-Tibet plateau, with high altitude and very cold winters; the frozen period lasts up to seven months of the year. Perennial frost exists in the high mountains, and most of the rainfall is concentrated in May to September. The maximum temperature difference between day and night reaches 15 \u00b0C. The highway runs through extremely harsh natural conditions, with high altitude and poor traffic conditions. The cold, hypoxic environment makes the area hard to inhabit, resulting in a sparse population. Therefore, it is costly and inefficient to perform conventional measurements there. This highway covers a wide permafrost area where freezing and thawing phenomena are prominent, which brings great potential hazards to the highway subgrade. InSAR technology has unique advantages in monitoring subsidence in such an area with wide coverage. As the study area is located in a high-altitude permafrost area, the surface subsides in the summer due to thawing and uplifts in the winter due to freezing. Especially in the summer, due to the increase in temperature and rainfall, the subsidence caused by permafrost thawing is obvious. In order to monitor the subgrade subsidence of this highway in the summer (during the thawing season), 10 TSX images were acquired from May 2015 to August 2015. The coverage of the SAR images is shown in the corresponding figure. As data from the latest high-resolution SAR satellite from Germany, TSX images have a high resolution in both the azimuth and range directions, which can support the study of a specific highway. TSX data also have a short wavelength (3.1 cm), which is more sensitive to subsidence than data with longer wavelengths. Even though the impact of decoherence can be reduced by the short revisit cycle of 11 days, decoherence still has a significant influence on interferometric coherence in this area. 
The dates of all the acquisitions and their related parameters are shown in the corresponding table. The stacking method is a simple and effective method in InSAR time series processing, in which a series of unwrapped interferograms are weighted and averaged according to their time spans to estimate the average deformation velocity. It is mainly used to estimate non-periodic (ideally approximately linear) average deformation velocities. Uniform surface displacements at mm/year accuracy can be extracted by the stacking method, which has been confirmed by validation against levelling data in some case studies. As interferograms in permafrost regions easily lose coherence, especially for X-band SAR data, we combine the concept of temporarily coherent points with a stacking procedure to present a method called temporary scatterers stacking InSAR. Temporary scatterers are selected from the interferometric coherence. Assuming L interferograms, the correlation coefficient of each pixel is calculated within an estimation window as \u03b3 = |\u03a3 M\u00b7S*| / \u221a(\u03a3|M|\u00b2 \u00b7 \u03a3|S|\u00b2), where M and S are the complex values of the pixel in the master and slave images, respectively, * denotes complex conjugation, and the sums run over the window. After acquiring the correlation coefficients of a pixel in the L interferograms, the pixel is selected as a temporary scatterer if C(\u03b3 \u2265 \u03b30) \u2265 T, where C(\u00b7) counts the interferograms in which the pixel\u2019s coherence exceeds the coherence threshold \u03b30, and T is a threshold set beforehand (T < L). This method is a little different from the conventional correlation coefficient method: the conventional method sets a threshold on the minimum correlation coefficient, while this method sets a threshold on the minimum number of interferograms in which a pixel exceeds a correlation coefficient threshold. It is more suitable in this case, as the interferograms that can be used are limited. After the temporary scatterers were selected, spatial phase unwrapping was performed only on the temporary scatterers to obtain unwrapped differential interferograms. Interferograms with obvious unwrapping errors were removed. 
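A minimal sketch of this selection rule, assuming per-pixel coherence values for each interferogram are already computed; the function names and default thresholds (coherence 0.4, as used later for interferogram screening) are illustrative assumptions:

```python
import math

def coherence(master, slave):
    # Windowed complex correlation coefficient between master and slave
    # SLC samples: |sum(M * conj(S))| / sqrt(sum|M|^2 * sum|S|^2).
    num = abs(sum(m * s.conjugate() for m, s in zip(master, slave)))
    den = math.sqrt(sum(abs(m) ** 2 for m in master) *
                    sum(abs(s) ** 2 for s in slave))
    return num / den

def is_temporary_scatterer(coherences, gamma0=0.4, T=3):
    # Keep a pixel when at least T of its L interferometric coherences
    # exceed gamma0 (T < L) -- the counting criterion C(.) >= T, rather
    # than a threshold on the minimum coherence itself.
    return sum(1 for g in coherences if g >= gamma0) >= T
```

The counting criterion tolerates a pixel decorrelating in a few interferograms, which is exactly the behaviour expected of temporary scatterers in a freeze\u2013thaw landscape.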
The unwrapped differential phase at each temporary scatterer can be expressed as the sum of a displacement component and residual topographic, atmospheric and noise components. After removing or weakening the other phase components, the unwrapped differential interferometric phase consists mainly of the linear displacement phase, and it is converted into a surface displacement by the following formula: (4) d = \u2212(\u03bb/4\u03c0)\u00b7\u03c6. Finally, the average displacement rate and the accumulated displacements are calculated point-by-point at each temporary scatterer, weighting each interferogram by its time interval: (5) v = \u03a3(\u0394t_i\u00b7d_i) / \u03a3(\u0394t_i\u00b2). The interferograms with high coherence ensure the accuracy of phase unwrapping, and the introduction of the temporary scatterers ensures the stability and consistency of these points in time, which further ensures the accuracy and reliability of the final solution. Therefore, the concept of temporary scatterers in this paper is close to that used in other methods, such as the TCPInSAR method. In the stacking process, it is necessary to analyze each phase component and weaken the related error sources to ensure the accuracy of stacking. Considering that freezing and thawing are common phenomena in permafrost regions, it is necessary to consider non-linear or periodic displacement components in the differential phase. In the stacking method model, only linear displacements are modeled. In order to ensure that there is no periodic displacement in the study area during the observation period, we obtained ground temperature data from January to November 2015 from the Qinghai Weather Station, and the study area and its surroundings were monitored. The results are shown in the corresponding figure. In terms of spatial distribution, combined with the interpretation of optical images, the study area can be roughly divided into three types of geomorphy, i.e., permafrost area, seasonal frozen soil area and thermokarst area. Permafrost is ground, including rock or (cryotic) soil, that remains at or below the freezing point of water, 0 \u00b0C (32 \u00b0F), for two or more years. 
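The phase-to-displacement conversion and the time-span-weighted stacking described above can be sketched as follows; the least-squares weighting v = \u03a3(\u0394t\u00b7d)/\u03a3(\u0394t\u00b2) is the usual stacking estimator and is an assumption about the paper's exact implementation:

```python
import math

WAVELENGTH = 0.031  # TerraSAR-X X-band wavelength in metres (3.1 cm)

def phase_to_displacement(phi, wavelength=WAVELENGTH):
    # d = -(lambda / (4 * pi)) * phi: unwrapped differential phase
    # (radians) to line-of-sight displacement (metres).
    return -wavelength / (4.0 * math.pi) * phi

def stacked_velocity(phases, spans):
    # Time-span-weighted average velocity over L unwrapped interferograms
    # (least-squares stacking estimator; an assumption, see lead-in).
    d = [phase_to_displacement(p) for p in phases]
    num = sum(t * x for t, x in zip(spans, d))
    den = sum(t * t for t in spans)
    return num / den

# Synthetic check: phases generated from a constant velocity of
# 0.01 m per time unit over 11/22/33-day spans are recovered.
spans = [11, 22, 33]
phases = [-4.0 * math.pi / WAVELENGTH * 0.01 * t for t in spans]
v = stacked_velocity(phases, spans)  # ~0.01 (recovered velocity)
```

Longer time spans carry proportionally more weight, which suppresses per-interferogram noise while preserving a linear deformation signal.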
It can be seen from the results that, after area A, the highway entered a region with a combination of permafrost and seasonal frozen soil, with a subsidence of 3 to 8 cm. Temporary scatterers were used in this study, based on the definition described above. Points P1, P2, P3 and P7 were located in the stable area. Many of the interferograms were decoherent: correct phase unwrapping cannot be performed on them and useful information is hardly extracted from them. Therefore, we used only the interferograms with an 11-day temporal baseline and those with coherence higher than 0.4, the number of which is less than 10. For the conventional time series InSAR methods, this small number of interferograms is not enough to form a robust spatial-temporal network for calculation and successful subsidence measurement. With the X-band, the phase signals are hard to maintain in correlation for robust differential phase measurements in permafrost regions. The successful use of the temporary scatterers stacking InSAR method in this study indicates that the conventional time series InSAR methods are not appropriate for all cases. With limited images and limited high-coherence interferograms, other methods, such as the temporary scatterers stacking InSAR method, could be an alternative. Based on X-band SAR images from the German TerraSAR-X satellite, this paper tracks the subsidence along the Gonghe-Yushu highway in a permafrost region. According to the coherence analysis, it was found that with the short-wavelength X-band, a large number of interferograms have very low coherence, leading to the unsuccessful implementation of conventional time series InSAR methods. This paper proposes the use of temporary scatterers for the stacking InSAR method, thus enabling extraction of the subsidence along this highway in a challenging scenario with limited SAR images and limited high-coherence interferograms. 
The basic idea, core processing steps, temporary scatterers selection method and phase model were introduced in detail. We discussed the feasibility of our method, and reference 3D high-resolution DEM data and ground temperature data from a weather observation station were introduced to ensure the precision of the method in this case. The results show that there is an inhomogeneous subsidence distribution along this highway and that the distribution has a strong correlation with the local geomorphy. The analysis of time series subsidence together with ground temperature indicates that the thawing of frozen soil is the dominant factor for the subsidence in this area. There are two severe subsidence areas along the highway. The first is in the middle thermokarst area, with a maximum accumulated subsidence of 10 cm in 110 days. The second is located in the northeast part of this highway, where the subsidence of the seasonal frozen soil is 5 to 8 cm. For these two regions it is necessary to perform long-term monitoring and take measures to ensure the safety of the highway. This case study shows that the temporary scatterers stacking InSAR method can be applied in a challenging scenario with limited SAR images and limited high-coherence interferograms, where other time series InSAR-based techniques cannot be applied in a simple way, and that it could be an alternative InSAR method for some challenging cases."} +{"text": "Both hepatitis B virus (HBV) and hepatitis C virus (HCV) are major sources of morbidity and mortality worldwide; however, their prevalence in key groups in Colombia is not yet known. We aimed to analyse the prevalence of HBV and HCV and their associated factors in key groups who were treated at an institution providing health services in Colombia during 2019. 
This was a multiple-group ecological study that included 2,624 subjects from the general population, 1,100 men who have had sex with men (MSM), 1,061 homeless individuals, 380 sex workers, 260 vulnerable young people, 202 drug users, 41 inmates and 103 people from the lesbian, gay, bisexual and transgender community. The prevalence of infection, with a 95% confidence interval, and its associated factors were calculated for each group. Confounding variables were assessed using logistic regression and SPSS 25.0 software. The prevalence of HBV and HCV in the general population was 0.15% and 0.27%, respectively; 0.27% and 2.09% in MSM; 0.37% and 2.17% amongst homeless individuals; 0.26% and 0.0% amongst sex workers; 0.39% and 0.0% amongst vulnerable youth; and 5.94% and 45.54% amongst injecting drug users. In the multivariate HBV model, the explanatory variables included the study group, city of origin and the type of health affiliation; for HCV they were group, origin, sex, age group, health affiliation, use of drugs and hallucinogen use during sexual intercourse. A high prevalence was evidenced for both viral infections, which was, consequently, much higher within the key groups. The main associated factors identified related to origin and type of health affiliation and demonstrated a double vulnerability, that is, belonging to groups that are discriminated against and excluded from many health policies and living under unfavourable socioeconomic conditions that prevent proper affiliation and health care. Viral hepatitis, especially types B and C, which account for more than 95% of hepatitis deaths, generates a high morbidity and mortality burden worldwide, with figures higher than those from tuberculosis or HIV/AIDS, but with less investment in diagnosis, prevention and treatment. The most concerning data are those related to hepatitis C virus (HCV) and hepatitis B virus (HBV). 
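The per-group prevalence estimates with 95% confidence intervals reported above can be reproduced in outline as follows; the Wald (normal-approximation) interval and the back-calculated case count are illustrative assumptions, since the exact interval method used with SPSS is not stated:

```python
import math

def prevalence_ci(cases, n, z=1.96):
    # Point prevalence with a Wald 95% confidence interval,
    # clipped to the valid [0, 1] range.
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Illustrative back-calculation: HBV at 0.15% of 2,624 general-population
# subjects corresponds to roughly 4 positives (hypothetical count).
p, lo, hi = prevalence_ci(4, 2624)
```

For the very rare outcomes in this study (a handful of positives per group), an exact or Wilson interval would be preferable to Wald, since the normal approximation is weak near 0%.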
In the case of HCV, morbidity has increased, with a worldwide prevalence of 1%. On the other hand, and despite the existence of massive vaccination programmes, HBV continues to be a global public health issue. The following search syntax was used: (ab: (hepatitis c)) AND (ab: (hepatitis b)) AND (colombia), and the following syntax in PubMed: ((hepatitis B [Title/Abstract]) AND hepatitis C [Title/Abstract]) AND Colombia. Despite this fact, several previous studies in the field are worth mentioning. An investigation with 619 subjects from four departments in Colombia, covering Amerindian populations of the Amazon River, female sex workers, doctors and nurses, and displaced people, found the following prevalence levels: HBsAg 5.66%, being statistically higher in Magdalena (8.39%) and without association with age or sex; anti-HBc was 28.43%, with statistical differences according to origin, sex and age, being higher in men (34.36%), people older than 50 years (51.85%) and individuals from the Amazon (31.61%). Reviewers' comments: Reviewer's Responses to Questions. Comments to the Author. 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1:\u00a0Partly********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1:\u00a0I Don't Know********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). 
The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified. Reviewer #1:\u00a0Yes********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1:\u00a0Yes********** 5. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1:\u00a0The study is interesting; however, several studies and data on HBV and HCV infection in the Colombian population are not included. The discussion of the data obtained has to be edited considering the technical limitation of the rapid test and also considering the results of the studies carried out in PID in different cities in Colombia.********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. Attachment: PONE-D-20-11570_reviewer.pdf (submitted filename). 
28 Jul 2020. Medell\u00edn, July 21st, 2020. Dr. Isabelle Chemin, Academic Editor, PLOS ONE. Manuscript Number PONE-D-20-11570, \u201cPrevalence of hepatitis B/C viruses and associated factors in key groups attending a health services institution in Colombia, 2019\u201d. Kind regards. Through this letter, we report the completion of all the changes suggested by the editors. The changes are highlighted in blue. Below we describe the changes made, consistent with each reviewer suggestion. Journal Requirements. Comment 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at. Answer: the change was made in the manuscript and the title sheet, according to the information of the journal. Comment 2. Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include a copy, in both the original language and English, as Supporting Information. Answer: the change was made; we attached Supplementary Material 1, which includes the survey in English and Spanish. In addition, in \u201cData collection\u201d we wrote: The survey items were selected based on the experience of the FAI and a review of the literature. The initial version of the instrument was subjected to a face-validity process to ensure its applicability and acceptability (according to the criteria of the subjects of the study groups), with two physicians, two epidemiologists, an infectologist and five people from each study group. Because this face evaluation generated no changes in the instrument, the validity and relevance of its content for the study population were confirmed. Comment 3. 
We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table 1 in your text; if accepted, production will need this reference to link the reader to the Table.
Answer: the change was made; we now refer to Table 1 in the Study Population section.
Comment 4. Please upload a copy of Supplementary Material 1, which you refer to in your text on page 8.
Answer: the change was made; we attached Supplementary Material 1, which includes the survey in English and Spanish.
Reviewers' comments
Comment 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Partly.
Answer: To improve this aspect, the applied survey was added as supplementary material, in English and Spanish. The description of the way in which the survey was constructed, validated, and applied was expanded. Furthermore, the available evidence (other studies) for Colombia was added to the introduction, and in the discussion the type of screening or diagnostic test used in each study was described in more detail to improve the comparability of results. The conclusion was changed, eliminating the part that reported a high prevalence, which only applied to one study group (as the reviewer pointed out); in the new version it says: "The prevalence of both viral infections evidenced a differential level of risk in the study population, with much higher rates within the key groups…"
Comment 2.
Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: I Don't Know.
Answer: In accordance with this type of epidemiological research, this study applied the statistical analyses required to achieve the objectives; the statistical analysis is also adequate for the types of variables measured. Such analyses include:
• Sociodemographic and health variables and risk factors in each group were described with relative frequencies (proportions).
• The prevalence of HBV and HCV in each study group was determined with a 95% confidence interval.
• The prevalence of both viruses was compared across the sociodemographic and health variables and risk factors with Pearson's chi-square test.
• Variables that could have been confounders were identified with multivariate logistic regression.
• A multivariate logistic regression model was fitted to identify the explanatory variables of the prevalence.
Comment 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #1: Yes.
Answer: does not apply.
Comment 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes.
Answer: does not apply.
Comment 5. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
Reviewer #1: The study is interesting; however, several studies and data on HBV and HCV infection in the Colombian population are not included. The discussion of the data obtained has to be edited considering the technical limitation of the rapid test and also considering the results of the studies carried out in PID in different cities in Colombia.
Answer: the change was made; the available evidence (other studies) for Colombia was added to the introduction. In the discussion we clarify that the detection test used in this study has a sensitivity of 100% and a specificity of 99.4% for HCV and 100% for HBV, which implies that false positive or negative results tend to zero. This implies that the prevalence is neither under- nor overestimated, and therefore the comparison with other similar studies is pertinent. Despite this clarification, the reviewer's comment was added to the limitations of the study, particularly the fact that we did not perform diagnostic confirmation with molecular tests.
Comment 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No.
Answer: does not apply.
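The prevalence estimation with 95% confidence intervals and the Pearson chi-square comparison described in the responses above can be sketched in a few lines of Python; the counts below are hypothetical illustrations, not the study's data.

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval (no continuity correction) for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 12 infections among 90 people in one study group.
lo, hi = wilson_ci(12, 90)
print(f"prevalence {12/90:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")

# Hypothetical 2x2 table (infection vs. a binary risk factor); compare the
# statistic with the 3.841 critical value (df = 1, alpha = 0.05).
stat = chi2_2x2(10, 20, 5, 55)
print(f"chi-square = {stat:.2f}")
```

With small group sizes like 90, the Wilson interval is preferable to the normal-approximation interval because it behaves sensibly near 0% and 100% prevalence.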
Answer: all the changes suggested by the reviewer on the article PDF were made.
• In the introduction: we changed or deleted some words, added some considerations about vaccination and the eligibility criteria of some studies, clarified some data limited to Africa, and added some studies from Colombia.
• In the discussion: we explained the data about the validity of the diagnostic tests; added details about the tests used in the studies cited in this section so that we can explain limitations in some comparisons of our results; added several clarifications about the absence of data on vaccination programs in our study population; and added as limitations not being able to apply detection using molecular tests and the lack of information on HBV and HCV prevalence in the country and in some at-risk populations.
Despite the studies added to this version, it is important to clarify that in the first version of the manuscript we only intended to make it clear that there is heterogeneity, that there are few studies investigating both infections, and that, in general, not many publications on this topic are available in Colombia (as demonstrated by the search syntaxes explained in the introduction). For this addition of studies, consistent with the reviewer's suggestion, several additional syntaxes were applied: (HBV [Title/Abstract]) AND (Colombia [Title/Abstract]) generated 39 results (only 25 in the last 10 years), and (HCV [Title/Abstract]) AND (Colombia [Title/Abstract]) generated 33 (only 22 in the last 10 years). ((HCV [Title/Abstract]) AND (people who inject drugs [Title/Abstract])) AND (Colombia [Title/Abstract]) generated only 3 results.
From the investigations carried out in Colombia, we did not include those involving people with HIV, hepatocellular carcinoma, vaccination effects, genotyping, molecular characterization, conference abstracts, topic reviews, and other research backgrounds not directly related to the topic of this study. We appreciate your prompt evaluation and valuable comments, which significantly improve the quality of our research. We look forward to new suggestions.
Sincerely,
The authors.
Attachment
Submitted filename: Response to Reviewers.docx
21 Aug 2020
Prevalence of hepatitis B/C viruses and associated factors in key groups attending a health services institution in Colombia, 2019
PONE-D-20-11570R1
Dear Dr. Cardona Arias,
We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance.
Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.
Kind regards,
Isabelle Chemin, PhD
Academic Editor
PLOS ONE
Additional Editor Comments:
Reviewers' comments:
8 Sep 2020
PONE-D-20-11570R1
Prevalence of hepatitis B/C viruses and associated factors in key groups attending a health services institution in Colombia, 2019
Dear Dr. Cardona-Arias:
I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.
Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Mrs Isabelle Chemin
Academic Editor
PLOS ONE"}
+{"text": "Early diagnosis of chronic hepatitis B virus (HBV) and hepatitis C virus (HCV) infections is pivotal for optimal disease management. Sensitivity and specificity of 19 rapid diagnostic test (RDT) kits by different manufacturers were assessed on serum samples of 270 Mongolians (90 seropositive for hepatitis B surface antigen (HBsAg), 90 seropositive for hepatitis C antibody (HCV-Ab), 90 healthy subjects). All tested RDTs for detection of HBsAg performed with average sensitivities and specificities of 100% and 99%, respectively. However, overall sensitivity and specificity of RDTs for detection of HCV-Ab were somewhat lower compared to those of HBsAg RDTs.
Specificity of RDTs for detection of HCV-Ab was dramatically lower among HBsAg positive individuals, who were 10.2 times more likely to show false positive test results. The results of our prospective study demonstrate that inexpensive, easy to handle RDTs are a promising tool in effective HBV and HCV screening, especially in resource-limited settings.
With about 1.4 million annual deaths, viral hepatitis is a major problem in global health. For reducing the global burden of hepatitis, identifying those who are infected is crucial. This is especially true for HCV, since in recent years highly effective direct-acting antivirals (DAAs) have become available as a reliable cure against the disease. Especially in low- to middle-income countries like Mongolia, with high HBV and HCV prevalence, reliable screening tools are needed. The aim of this study was to evaluate the diagnostic performance of commercially available RDTs for HBsAg and HCV-Ab detection.
270 participants were prospectively recruited: 90 HBsAg positive, 90 with detectable HCV viral load (HCV-RNA positive), 90 healthy controls. The sample size of 90 participants per group is a compromise between cost and the statistical accuracy with which the reliability of RDTs can be determined.
HBsAg or HCV-RNA positive participants were randomly selected from the screening registry of the Liver Center, Ulaanbaatar, Mongolia. Inclusion criteria: ≥ 18 years, positive tests for HBsAg or HCV-RNA within the year prior to the study. Patients with dual infection were excluded.
Healthy controls were randomly selected among blood donors at the National Center of Transfusion Medicine, Ulaanbaatar, Mongolia. Inclusion criterion: three or more blood donations, ensuring that these participants were confirmed negative for HBsAg, HCV-Ab and other common chronic infectious diseases multiple times.
Ethical approval was obtained from the Ethics Committee of the Ministry of Health, Mongolia.
Each individual gave written informed consent prior to participation. Participants were asked to provide a blood sample at the Liver Center or the National Center of Transfusion Medicine, respectively, between April and July 2015. From each individual, two samples of 5 ml venous blood were collected into vials containing clotting agent. All further processing and testing of blood samples was performed at the Liver Center. Within 4 hours after the blood draw, serum was separated by centrifugation and stored at -80 °C until reference or index testing.
As reference, HBsAg and HCV-Ab status of all serum samples was determined by ELISA (3rd generation) following the manufacturer's instructions. All samples were further checked by fully automated quantitative RT-PCR for quantitation of HBV-DNA and HCV-RNA according to the manufacturer's instructions. For cost reasons, a total of 90 seronegative samples of healthy controls was analyzed by RT-PCR as 3 pooled samples (3×30). This decreases sensitivity by a factor of 30. One test was included upon request of the manufacturer. In all cases, the researcher was aware of the sample type and test he or she was assessing.
Sensitivity and specificity were determined for every test using ELISA (HBsAg and HCV-Ab) results as reference. 95% confidence intervals (95% CI) for sensitivity and specificity were calculated using the Wilson score method without continuity correction (6). Positive and negative likelihood ratios (LR+ and LR-) were calculated based on the values for sensitivity and specificity. To put the results in the context of hepatitis screening activities in Mongolia, we assumed, as previously reported, a prevalence for HBsAg of 11.0% and for HCV-Ab of 8.5% among Mongolian adults (6). These values were used to calculate positive and negative predictive values (PPV and NPV) and diagnostic accuracy (DA).
Reviewers' comments:
Reviewer's Responses to Questions
Comments to the Author
1.
Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Partly
**********
2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: No
**********
3. Have the authors made all data underlying the findings in their manuscript fully available?
PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #1: No
**********
4. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: No
**********
5. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
Reviewer #1: This work shows the results of the comparison of a comprehensive number of rapid diagnostic tests (RDTs) for hepatitis C antibodies on a relatively modest number of samples previously characterized by ELISA (used as gold standard).
MAJOR POINTS
1. L76. In the inclusion criteria, were patients with previous/current antiviral treatment included or excluded? If included, please specify treatment details.
2. The authors use routine serology as the gold standard to evaluate the performance of RDTs. Please provide methods and kit details of the standard assays used for HBsAg and HCV-Ab testing.
3. For the statistical analysis, the authors used the Wilson score for assessing the specificity and sensitivity of the RDTs, and this reviewer interprets that likelihood ratios were also calculated. This is not clarified in the methods, and reference 7 on likelihood tests is not referenced in the main text.
4. In this line, the authors somewhat interchange concepts that are in fact different variables: test performance, sensitivity, and diagnostic accuracy. The performance is the evaluation of all variables. Likelihood ratios and diagnostic accuracy values are not provided (see below).
5. L126 and Table 2. Can the authors provide more data on the studied cohorts, such as ALT, AST, GGT, fibrosis scores/stage of liver disease (if available), other HBV serological markers (e.g. HBeAg), risk factors, etc.?
6. L139-143. Here there is a confusion between performance, sensitivity and accuracy (see above). Please use appropriate terms.
7. Table 3. I would suggest ordering the tests by sensitivity and then by specificity. Add percentages for quotients. Add columns with values for positive and negative predictive values, likelihood ratios and calculated diagnostic accuracy (including 95% CI). See examples: Larrat et al., J. Clin. Virol. 55(3) (2012) 220–225; Cloherty et al., J. Clin. Microbiol. 54 (2016) 265–273.
8. The same applies for Table 4.
9. L158.
\u201c\u2026in HCV-Ab negative, HBsAg positive sera than in sera from healthy\u2026\u201d Do the authors have any explanation for this? Include in the discussion.10. L160. I believe that figure 1 is redundant with the tables. The authors may want to transfer details provided in the caption to the text, or to a new table describing the details for all the discrepant results.11. In the discussion the authors argue about tests performance, the value of diagnostic accuracy, when this was not calculated, etc-\u2026please, re-write the discussion section completely after the lacking variables have been analyzed12. L203-205. Delete paragraph13. L206-209. Rephrase, focus on the importance of tasting in the field with real world conditions on your environment, and what how to identify HCV-Ab positives by RDTs with negative HCV-RNA.14. The list of references is quite scarce, there have been a quite high number of paper published in the topic. For a reference see for instance Peeling et al. BMC Infect Dis. 2017 Nov 1;17(Suppl 1):699MINOR POINTSL43 change \u201cless good\u201d for, e.g. \u201csomewhat lower\u201dL55. \u201c\u2026are diagnosed early.2L63: \u201cTherefore, the performance\u2026.\u201dL65. \u201cThe Aim of\u2026\u201dL 74. \u201c\u2026from the screening registry\u2026\u201dL79. \u201cInclusion criteria\u201dL89. \u201cvenous blood\u201dL92. \u201cblood drawn\u201dL 96 quantitative RT-PCR? If so, for quantitation of HBV-DNA and HCV-RNA levels?Table 1, Orasure: delete (!) sign on distributor detailsL130. \u201c\u2026from the HBsAg positive group, HBV-DNA was negative.L131. \u201c...from the HCV-Ab positive group, HCV-RNA was positive but below the limit of quantitation of the assay (specify LOQ)L149 \u201c\u2026the OraQuick HCV Ab test\u2026\u201dL150. \u201c\u2026for all HCV-Ab\u2026\u201dL151: \u201c\u2026and average specificity was\u2026.\u201dL200. \u201c\u2026HCV-Ab indivduals\u2026\u201dL201. 
\u201c\u2026lower HCV-Ab levels were not included in this study.\u201d**********what does this mean?). If published, this will include your full peer review and any attached files.6. PLOS authors have the option to publish the peer review history of their article digital diagnostic tool, 7 Apr 2020Response to reviewers has been attached as a file.AttachmentResponse to reviewer 07042020.docxSubmitted filename: Click here for additional data file. 9 Jun 2020Sensitivity and Specificity of Commercially Available Rapid Diagnostic Tests for Viral Hepatitis B and C Screening in Serum SamplesPONE-D-19-25400R1Dear Dr. Bungert,We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.https://www.editorialmanager.com/pone/, click the \"Update My Information\" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at onepress@plos.org.If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. 
Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.
With kind regards,
Isabelle Chemin, PhD
Academic Editor
PLOS ONE
Additional Editor Comments:
Reviewers' comments:
Reviewer's Responses to Questions
Comments to the Author
1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.
Reviewer #1: All comments have been addressed
**********
2. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
**********
3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
**********
4. Have the authors made all data underlying the findings in their manuscript fully available?
PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g.
participant privacy or use of data from a third party—those must be specified.
Reviewer #1: Yes
**********
5. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
**********
6. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
Reviewer #1: The authors have addressed the questions raised, and they only need to correct some minor typographical errors / word repetitions in the text.
**********
7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
6 Jul 2020
PONE-D-19-25400R1
Sensitivity and Specificity of Commercially Available Rapid Diagnostic Tests for Viral Hepatitis B and C Screening in Serum Samples
Dear Dr. Bungert:
I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours.
Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.
Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Mrs Isabelle Chemin
Academic Editor
PLOS ONE"}
+{"text": "Then, through coimmunoprecipitation and other techniques, Rab22a-NeoF1 was found to promote osteosarcoma lung metastasis by activating RhoA. The above data indicated that the activation of RhoA by Rab22a-NeoF1 could be a critical determinant boosting metastasis in osteosarcoma. The authors next sought to determine how RhoA is activated by Rab22a-NeoF1. From TAP–MS data and co-transfection results, they showed that Rab22a-NeoF1 constitutively bound to a negatively charged region of SmgGDS-607. Interestingly, this region of SmgGDS has been reported to be crucial for its association with RhoA. Based on that, Kang et al. further revealed that the interaction between SmgGDS-607 and RhoA was notably diminished in the presence of Rab22a-NeoF1, indicating that Rab22a-NeoF1 changes the binding of SmgGDS-607 to RhoA and transfers RhoA into its active form.4 They then focused on some positively charged residues and found that Arg4 and Lys7 of Rab22a-NeoF1 were essential for this interaction during lung metastasis. When specific targeting peptides of Rab22a were used, the interaction between Rab22a-NeoF1 and SmgGDS-607 was abolished, which inhibited lung metastasis and increased survival time, suggesting a potential therapeutic target for osteosarcoma lung metastasis. To further investigate the binding area of Rab22a-NeoF1 and SmgGDS-607, Kang et al. confirmed that amino acids 1–10 of Rab22a-NeoF1 were required for the interaction with SmgGDS-607.
Meanwhile, the author team has already reported that the promoting function of Rab22a-NeoF1 is largely dependent on its Lys7 acetylation in osteosarcoma.5 This research provides experimental evidence and a clinical basis for therapies targeting truncated or fusion proteins against osteosarcoma lung metastasis. Collectively, through comprehensive analysis of different osteosarcoma cell lines, animal models, and patient samples, Kang et al. discovered that the Rab22a-NeoF1/SmgGDS-607/RhoA axis is one of the potential mechanisms driving tumor metastasis. This paper also highlights the importance of understanding genes formed by fusion of exons and introns, which have been ignored by most analyses. With recent advances in deep-sequencing technologies, diverse gene fusions, along with their functions, have been gradually identified and elucidated. More details to understand these fresh fusions are warranted, especially regarding the mechanisms of modification, the downstream factors they target, and the cellular processes they regulate. Previous clinical analyses have demonstrated some fusions in other cancers, such as breast cancer, and clinical trials targeting fusion proteins have been successfully applied in other tumors."}
+{"text": "Water shortage is one of the most concerning global challenges in the 21st century. Solar-inspired vaporization employing photothermal nanomaterials is considered to be a feasible and green technology for addressing the water challenge by virtue of abundant and clean solar energy. 2D nanomaterials have aroused considerable attention in photothermal evaporation-induced water production owing to their large absorption surface, strong absorption across the broadband solar spectrum, and efficient photothermal conversion. Herein, the recent progress of 2D nanomaterials-based photothermal evaporation, mainly including emerging Xenes and binary-enes, is reviewed.
Then, the optimization strategies for higher evaporation performance are summarized in terms of modulating the intrinsic photothermal performance of 2D nanomaterials and designing the complete evaporation system. Finally, the challenges and prospects of various kinds of 2D photothermal nanomaterials are discussed in terms of photothermal performance, stability, environmental influence, and cost. One important principle is that solutions for water challenges should not introduce new environmental and social problems. This Review aims to highlight the role of 2D photothermal nanomaterials in solving water challenges and provides a viable scheme toward practical use in photothermal material selection, design, and evaporation-system building.
The recent progress of two-dimensional (2D) nanomaterials-based photothermal evaporation is reviewed, motivated by the unique advantages of 2D photothermal nanomaterials, such as the large absorption surface, strong absorption across the broadband solar spectrum, and efficient photothermal conversion. This Review aims to highlight the role of 2D photothermal nanomaterials in solving the water challenges through employing abundant and clean solar energy. Much effort has been devoted to this field in order to deliver better solutions.
Since the discovery of graphene, 2D nanomaterials have offered a very large specific surface area (on the m2 g−1 scale), which can tremendously decrease the cost of a solar evaporation device and promote its practical application. Moreover, the large surface area of 2D nanosheets provides an expansive platform for tailoring physicochemical properties and functionalities.
The 2D nanosheets can be further employed for preparing novel functional blocks. Herein, we review the recent development in 2D nanomaterials-based photothermal evaporation beyond graphene-based nanomaterials (Figure 1).
We first consider the different light–matter interaction mechanisms under electromagnetic radiation, by which 2D photothermal nanomaterials can be divided into two kinds: metallic materials with localized plasmonic heating and semiconductors with nonradiative relaxation. Both mechanisms can contribute to efficient photothermal conversion. Investigation of the plasmonic photothermal effect dates back to 2002. For semiconducting materials, strong absorption occurs at wavelengths matching the bandgap energy: when irradiated by light, electron–hole pairs are formed with energies corresponding to the bandgap. The bandgap of 2D photothermal nanomaterials differs between species, and for a given material it varies with the size of the nanosheets, such as the tunable bandgap of phosphorene, which ranges from 0.3 to 2 eV. Numerous investigations have been reported on enhancing photothermal performance through seeking new 2D photothermal nanomaterials or modifying already-known ones. The main 2D nanomaterials beyond graphene include MXenes, tellurene, transition metal dichalcogenides (TMDs), transition metal oxides (TMOs), and 2D layered alloys. The large families of MXenes and TMDs may provide more potential for photothermal water vaporization, but only a few members have been investigated up to now. MXenes, early transition metal carbides and nitrides, derive from the Mn+1AXn phases, where M represents an early transition metal (such as Ti, Ta, Nb, Mo, Zr, or Cr), X represents C and/or N, and A is a group IIIA or IVA element. MXene was first introduced into the 2D family by Gogotsi's group in 2011 with Ti3C2 MXene, followed by Nb2C, Ta4C3, Mo2C, etc.
Excellent photothermal performance has been found in Ti3C2, Nb2C, and Ta4C3, and they have been employed for photothermal cancer therapy in the biomedical field. Ti3C2 MXene was shown to possess both a high internal PTCE of 100% and a photothermal evaporation efficiency of 84% through the reasonable choice of a heat barrier. For the Ti3C2 membrane, it was found that a hydrophobic membrane can avoid the salt-blocking problem and keep the evaporation long and stable (Figure 4a). Rather than using SWCNTs, Guo et al. fabricated a new hybrid of PEGylated MoS2 with cotton cloth for preparing clean water; both a high absorption covering the solar spectrum and high stability were achieved. The MoS2 in both of these reports was obtained through a bottom-up method, whereas in Chou's study MoS2 was fabricated by a top-down method, i.e., chemical exfoliation, to conduct the photothermal evaporation. The combined material delivers a solar evaporation efficiency of 81% under a light intensity of 5.35 kW m−2. These solar evaporation systems prove that MoS2 can be an excellent solar absorber for converting solar energy into heat. Tungsten oxide (WOx) nanosheets have been investigated for their high photothermal conversion ability owing to strong light absorption and LSPR; Ming et al. demonstrated an efficient photoabsorber based on WOx nanosheets for solar steam generation owing to their tunable LSPR effect. Another absorber is BiInSe3-coated carbon foam (BiInSe3@CF), prepared by a clean, simple, and efficient technique without introducing precursors or catalysts. Materials with abundant intermediate band (IB) states have also been considered for photothermal evaporation (Figure 5a). As discussed above, the two main properties for evaluating photothermal performance are absorption and photothermal conversion.
Tremendous effort has been devoted to enhancing absorption, but less work has addressed photothermal conversion; the specific strategies are described in this section. The search for high-performing photoabsorbers with strong absorption and high photothermal conversion efficiency is of particular importance for realizing efficient light utilization. It is expected that the absorber can absorb most of the incident light energy. To evaluate the absorption ability, the extinction coefficient (k) is defined; in Roper's report it is characterized through the Lambert–Beer law, A = kLC, where A is the wavelength (λ)-dependent absorbance, L is the traversing length of light (in cm), and C is the concentration of 2D nanomaterials (in g L−1). The light-induced heat source term QI represents the heat produced by electron–phonon relaxation on the surface of the 2D photothermal nanomaterials under light irradiation: QI = I(1 − 10^(−Aλ))η, where I is the incident light energy, Aλ is the absorbance at a specific wavelength, and η is the PTCE. The heat loss Qloss is a linear function of the temperature of the photothermal system: Qloss = hA(T − Tsurr), where h is the heat transfer coefficient, A is the heat transfer area of the photothermal system, and T and Tsurr are the temperatures of the photothermal system and the surroundings, respectively; Qloss therefore increases with increasing temperature. An effective PTCE (ePTCE) is defined to evaluate the photothermal performance comprehensively, considering both the absorption ability and the photothermal conversion (Table 1). For the monoelemental class of 2D nanomaterials, termed 2D Xenes, the strong absorption of phosphorene is accompanied by low photothermal conversion, while the weak absorption of borophene, antimonene, and tellurene comes with a higher PTCE. Nevertheless, phosphorene achieves the best overall photothermal performance as judged by ePTCE (Table 1).
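The three relations above (Lambert–Beer extinction, Roper's light-induced heat source, and the linear heat loss) can be sketched numerically. The sketch below is illustrative only: the concentration, heat-transfer coefficient, and area are made-up placeholders, not values from the review.

```python
# Illustrative sketch of the photothermal relations cited in the text.
# All numerical inputs are hypothetical placeholders, not measured data.

def extinction_coefficient(absorbance, path_cm, conc_g_per_L):
    """Lambert-Beer law A = k * L * C, rearranged: k in L g^-1 cm^-1."""
    return absorbance / (path_cm * conc_g_per_L)

def heat_source(incident_W, absorbance, ptce):
    """Q_I = I * (1 - 10^-A) * eta: heat from electron-phonon relaxation."""
    return incident_W * (1.0 - 10.0 ** (-absorbance)) * ptce

def heat_loss(h_W_per_m2K, area_m2, T, T_surr):
    """Q_loss = h * A * (T - T_surr): linear in system temperature."""
    return h_W_per_m2K * area_m2 * (T - T_surr)

def steady_state_temperature(incident_W, absorbance, ptce, h, area, T_surr):
    """At steady state Q_I = Q_loss, so T = T_surr + Q_I / (h * A)."""
    return T_surr + heat_source(incident_W, absorbance, ptce) / (h * area)

k = extinction_coefficient(absorbance=1.42, path_cm=1.0, conc_g_per_L=0.05)
print(f"k = {k:.1f} L g^-1 cm^-1")  # 28.4, the value quoted later for MoS2
T = steady_state_temperature(incident_W=1.0, absorbance=1.42, ptce=0.3,
                             h=20.0, area=1e-3, T_surr=298.15)
print(f"steady-state T = {T:.1f} K")
```

The steady-state condition QI = Qloss is what fixes the operating temperature of the absorber: a stronger absorbance or higher η raises T, while a larger heat-transfer coefficient or area lowers it.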
In the MXene family, the photothermal performance depends on the early-transition-metal component (Table 2); the best performance is found for Nb2C MXene, which is composed of an early transition metal of moderate atomic number. For the transition metal dichalcogenides (TMDs), the photothermal performance differs with both the transition metal and the chalcogen (Table 3): moving to heavier elements (e.g., from MoS2 to WS2, or from MoS2 to MoSe2), the extinction coefficient decreases while the PTCE increases, so the overall photothermal performance evaluated by ePTCE stays at a similar level. A strong absorption is the first and foremost key parameter for guaranteeing high photothermal performance, and thus many strategies have been employed to improve the absorbance, including defect modulation. TiO2 and ZnO have wide optical bandgaps of 3.2–3.4 eV, which let them absorb only 4% of the light in the solar spectrum. Fortunately, WO3, MoO3, and V2O5 possess optical absorption tunable from the NIR to the visible region through tailoring of the particle size, by which substoichiometric WOx phases were obtained. Lasers of 808 and 1064 nm are commonly used for photothermal cancer therapy. For efficient photothermal evaporation, designing a single material for high photothermal performance is insufficient; the design of the whole evaporation system is also crucial for enhancing the evaporation performance. Design strategies toward high water flux, low heat loss, stable mechanical properties, excellent environmental compatibility, and low cost are presented below. The water flux of conventional membranes (in kg m−2 h−1) is limited, and therefore novel work has aimed to develop effective membranes with higher water flux. Membrane materials such as inorganic NaA zeolite have been explored, along with a Ti3C2 nanosheet layer modified by trimethoxysilane for photothermal evaporation; a high water flux was obtained with the Ti3C2 membrane at 65 °C due to its large interlayer spacing combined with a hydrophilic surface.
On the contrary, Liu et al. developed hydrophilic MXene membranes for pervaporation desalination; this high-performing pervaporation desalination endows the ultrathin 2D MXene membrane with a bright future in photothermal evaporation applications. The thickness dependence of the SWCNT-MoS2 film has also been investigated: an evaporation saturation state is observed as the thickness increases because of the opposing effects of enhanced absorption and longer steam transport. The water evaporation efficiency can reach up to 91.5% at a power density of 5 kW m−2 by using a SWCNT-MoS2 film with an ultrathin thickness of 6 nm, which is comparable to most localized heating systems with films thicker than 2 µm. The SWCNT-MoS2 film benefits from both its structure and its thermal properties: structurally, the ultrathin and sponge-like SWCNT network leads to an excellent permeability that allows the unobstructed and rapid escape of the generated vapor from the localized heating site. Bacterial cellulose, a polymer matrix secreted by bacteria, can entangle into a close-packed network, which leads to high mechanical strength and porosity. It is well known that, besides the material's intrinsic PTCE, photothermal evaporation performance can be significantly enhanced by two other factors: maximizing light absorption through a rationally designed surface structure and minimizing heat loss to the bulk water through a suitable heat barrier. In the work of Li et al., nonporous polystyrene foam was selected as the heat barrier and attached to the back side of the MXene-membrane solar absorber: as it contains no passageway, it prevents heat transfer to the ambient water environment while allowing water to be drawn up to the solar absorber from the circumjacent sides of the heat barrier, and a high evaporation performance is obtained owing to this rational heat management.
Moreover, this elaborate design helps to concentrate the light-induced heat at the interface between the water and the MXene-membrane solar absorber, resulting in an enhanced water evaporation efficiency. Besides high solar light absorption for the best use of solar energy, good mechanical stability of the supporting membranes also needs to be met, because an ideal photothermal evaporation system should float on the water surface and its membranes should be recyclable while keeping a stable evaporation performance. Various floating materials have been developed, such as air-laid paper and substrates with MoS2 grown in situ through a hydrothermal self-growth process. The excellent photothermal performance of Ti3C2 under near-infrared light (808 or 1064 nm) has been reported, whereas its photothermal conversion ability under visible light has not been investigated. Moreover, the common fabrication technique for MXene materials (etching and sonication) endows them with rich surface properties and a resulting high water flux, as proved for Ti3C2 MXene, and this should be viable for other kinds of MXenes. Therefore, a large space for employing other MXenes in photothermal evaporation remains to be explored owing to their high PTCE and beneficial surface groups. Currently, the photothermal investigation of Te nanostructures is limited, covering mainly Te nanorods together with a few hybrid films and nanotube structures. A family of 2D Xenes stemming from the group-VA layered materials has aroused increasing interest in theoretical work and practical applications, and some of the 2D group-VA elements, including phosphorene, act as photothermal agents. Phosphorene is the first and also the most investigated member of the 2D group-VA family. In 2014, Li et al.
first fabricated phosphorene and built a phosphorene-based field-effect transistor. Notably, the bandgap of phosphorene is layer-number dependent and can be tuned in the range of 0.3–2 eV. To obtain high stability together with performance similar to that of phosphorene, scientists have turned their eyes to the cousins of phosphorene in the same group VA, i.e., arsenene and antimonene; an extinction coefficient for antimonene (in L g−1 cm−1) was given in Tao's report. In photothermal-related applications, antimonene quantum dots (AMQDs) are reported to possess a high PTCE (45.5%), which is even higher than those of MXenes and phosphorene. However, the shortcomings of AMQDs are their weak photostability and low absorption. The degradability of AMQDs was assessed under near-infrared irradiation, and the AMQD solution becomes transparent after tens of minutes of irradiation. This behavior is clearly not suitable for photothermal evaporation, which needs excellent photostability. Through calculations, the stability of antimonene was predicted to be ameliorated through noncovalent functionalization with small molecules. Elemental boron, located in the neighborhood of carbon, has aroused rising attention in its 2D form. Borophene, the lightest 2D material up to now, possesses a series of structures with polymorphism and anisotropy, stabilized by sp2 hybridization arising from metal passivation. Borophene was first demonstrated experimentally by Mannix et al. through growth on Ag(111) under ultrahigh-vacuum conditions, showing anisotropic electronic properties and metallic characteristics. Theoretically, borophene was expected to grow on metal substrates such as Ag(111) and Cu(111). In conclusion, the recent progress of 2D nanomaterial-based photothermal evaporation systems has been presented.
Based on the photothermal mechanisms of the various 2D nanomaterials, solar light absorption can be enhanced through the introduction of atomic vacancies, dual-resonance optical modes, bandgap adjustment, phase transitions, etc. Furthermore, the strategies for an efficient photothermal evaporation system in terms of water flux, heat management, and stability are discussed, and other properties, such as environmental influence and cost from the points of view of environment and commercial application, are also evaluated. It is worth noting that new environmental and social problems should not be introduced in the process of solving the water crisis. The challenges of the different 2D nanomaterials for photothermal evaporation applications are pointed out and corresponding strategies are proposed. In brief, the important role of 2D material-based photothermal evaporation in solving the current water crisis is highlighted, and more attention should be paid to this emerging green technology. Some photothermal agents have low extinction coefficients, such as tellurene (k = 3.6 L g−1 cm−1), antimonene (k = 5.6 L g−1 cm−1), and borophene (k = 2.5 L g−1 cm−1), while others obtain fairly high extinction coefficients, for instance, phosphorene (14.8 L g−1 cm−1), WS2 (23.8 L g−1 cm−1), MoS2 (28.4 L g−1 cm−1), Ti3C2 (25.2 L g−1 cm−1), and Nb2C (36.4 L g−1 cm−1), several times those of antimonene and borophene. However, the PTCE of most 2D photothermal agents lies at a similar level (28–45%): for instance, 28.4% for phosphorene, 30.6% for Ti3C2 MXene, 42.5% for borophene, and 45.5% for antimonene. Thus, there is large room for improving the absorption ability, which is why most research focuses on increasing absorbance rather than photothermal conversion. Currently, different strategies for optimizing absorbance have been employed for the various photothermal agents, including defect modulation, bandgap adjustment, etc.
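For a rough sense of how the quoted numbers trade off, the sketch below ranks a few of the listed materials by the simple product k × PTCE. This product is an assumed stand-in for the review's ePTCE, whose exact definition is not reproduced in this excerpt, so the ranking is illustrative only.

```python
# Illustrative only: rank 2D photothermal agents by a crude figure of merit,
# k * PTCE. The product is an assumed proxy for the review's ePTCE metric.
materials = {
    # name: (k in L g^-1 cm^-1, PTCE as a fraction), values quoted in the text
    "phosphorene": (14.8, 0.284),
    "borophene":   (2.5,  0.425),
    "antimonene":  (5.6,  0.455),
    "Ti3C2":       (25.2, 0.306),
}

def figure_of_merit(k, ptce):
    return k * ptce

ranked = sorted(materials.items(),
                key=lambda kv: figure_of_merit(*kv[1]),
                reverse=True)
for name, (k, ptce) in ranked:
    print(f"{name:12s} k={k:5.1f}  PTCE={ptce:.3f}  k*PTCE={k * ptce:5.2f}")
```

Even under this crude metric, the MXene leads overall while phosphorene leads among the Xenes, consistent with the Table 1 observation quoted earlier that phosphorene's strong absorption outweighs its modest PTCE.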
These strategies can be combined and implemented on one promising photothermal agent. Excellent photothermal performance requires both a strong absorbance and a high PTCE, and a survey of the related reports shows that research on enhancing PTCE needs to be strengthened. The weakness of most photothermal agents is low absorption: GO, for example, has been widely investigated for trapping photons and enhancing the absorption of graphene, yet similar work is lacking for the other 2D photothermal materials. Thus, SPR-enhanced photothermal performance can be further explored, and plasmonic materials could be integrated with some 2D semiconducting materials. Up to now, 2D material-based evaporation systems are at an early stage and focus on materials design; further investigations can move toward structural design. For example, Li et al. elaborately designed a cylindrical vapor generator structure, which keeps the generator at a lower temperature than the surrounding environment and thus facilitates energy absorption from the environment; its photothermal evaporation efficiency is observed to be up to 100%. Binary compounds, such as MXenes and MoS2, are more stable than the 2D Xenes, including tellurene and the group VA-enes phosphorene and antimonene. Thus, the stability problem of the Xenes needs to be highlighted before they can be considered further. Up to now, most protection strategies have been developed for phosphorene; these can be further employed for enhancing the stability of the other unstable photothermal Xenes. Both physical and chemical methods can be used to stabilize unstable 2D materials, including encapsulation by other stable materials, noncovalent functionalization, doping, covalent functionalization, etc. Encapsulation by hydrophilic/hydrophobic polymers is an efficient strategy because it can not only enhance the stability of the photothermal agent but also optimize the surface properties to improve the evaporation water flux.
In the meantime, stability strategies should also consider environmental influence and cost: the fabrication should not introduce toxic elements and should be easy to operate, and the additional materials used to protect the unstable 2D materials should themselves be nontoxic and low cost. Some natural materials, such as cotton cloth, can be a better choice for this need. Besides the much-discussed photothermal performance, the stability and toxicity issues are also crucial, especially in the late-stage research of practical applications, yet comprehensive studies are lacking. Generally, binary compounds with stable properties, such as MXenes and MoS2, exhibit no toxicity in mice, while some elemental materials with unstable properties, such as tellurene, present toxicity. Biocompatibility tests can illustrate the environmental influence to some extent; however, targeted research on the environmental influence of photothermal agents should be conducted with practical use in mind. The toxicity of some photothermal agents is reported to be reduced through encapsulation by nontoxic polymers, but this has been demonstrated only in biological environments and over short periods. Long-term environmental tests, such as in saline seawater, may result in the falling off of the encapsulating polymers and exposure of the toxic photothermal agents, which needs to be further confirmed. A basic and important principle is that new environmental problems should not be introduced while solving the environmental problem of water shortage. So far, very few reports in the photothermal evaporation field focus on this topic. Therefore, besides the above-mentioned photothermal performance and stability issues, the environmental impact of photothermal agents also needs to be highlighted. Some biocompatibility data for photothermal agents, such as MXenes and MoS2, are available from reports on photothermal cancer therapy. Each kind of photothermal material has its own advantages and disadvantages.
Hybridizing materials can be an efficient strategy for overcoming each other's disadvantages. Typical carbon-based materials have many advantages in cost and photothermal performance, but some of them, such as graphene, are reported to be toxic; some natural products, such as mushrooms, can be good carbon sources with low environmental impact. Solar energy utilization is the most promising route to solving the energy challenge. In photothermal evaporation, solar energy harvesting addresses the two main global challenges of energy and water at the same time, thus enhancing the overall utility of the photothermal evaporation system. Gao et al. used plasmonic gold nanoflowers to realize solar vaporization; meanwhile, triboelectricity can be harvested from the process of water condensation. The authors declare no conflict of interest."}
Building health-enabling “HIV competent” households with the capacity to actively stimulate lifestyles that foster health offers a potential strategy to tackle South Africa’s HIV-related challenges. The “Sinako” study is a cluster-randomised controlled trial with two arms. In the control arm, CHWs offer a standard package of support to PLWH during home visits, focused on the individual. The intervention arm includes a focus on both the individual and the household, to enable the patient to self-manage their treatment within an HIV competent household. A longitudinal mixed methods design is adopted to analyse the data. For the quantitative data analysis, methods including latent cross-lagged modelling, multilevel modelling and logistic regression will be used. To assess the acceptability and feasibility of the intervention and to construct a comprehensive picture of the mechanisms underlying the impact on the household and the PLWH, qualitative data (in-depth interviews and focus group discussions) will be collected and analysed. Stimulating HIV competence in households could be a feasible and sustainable strategy to optimise the outcomes of CHW interventions and thus be important for HIV treatment interventions in resource-limited settings. Trial registration: Pan African Clinical Trial Registry, PACTR201906476052236. Registered on 24 June 2019. To date, 76.1 million people have become infected with human immunodeficiency virus (HIV) and the virus has claimed an estimated 35.0 million lives globally. The transition to a chronic care model is, however, placing an immense burden on the resource-limited health system. Recent estimates suggest that 42.7% of health professional posts in South Africa are vacant. Despite the success of the ART programme and task shifting, South Africa is still faced with challenges in terms of prevention as well as treatment.
With respect to prevention, incidence rates remain high, with about 240,000 new HIV infections in 2018. Furthermore, 700,000 PLWH do not know their status and are therefore not enrolled into care or receiving treatment. To provide effective preventive actions and chronic disease care within the climate of human resource shortages, it is not enough to simply shift responsibility for chronic HIV care to the community (type III) and PLWH (type IV) themselves. Future endeavours need to focus on the search for innovative ways to provide social support in response to these scarcities. A potential source of such support already exists at the intermediate level, between the community (type III) and PLWH (type IV), namely PLWH households. In this study, households are defined as a “co-residential unit, usually family-based in some way, which takes care of resource management and the primary needs of its members”. The intermediate household level is often overlooked in the current chronic disease care delivery model. However, PLWH seldom live in isolation, and their home life is generally regarded as the closest and most basic context for individual development. In order to address prevention and treatment challenges within the household context, extensive efforts are required to increase HIV knowledge, reduce stigma, stimulate HIV testing, improve health care-seeking behaviour, and encourage safe sexual practices, described by UNAIDS and other authors as the need for HIV competence. These ideas are rooted in the ecological approach of Kelly et al. Integrating the elements of this theoretical framework, the research team developed the theoretical concept of “HIV competent households” based on qualitative research. However, the road to HIV competence in the household is precarious and prone to obstacles at both the individual and household level.
As a result of HIV-related stigma both outside and inside the household, the development of HIV competence can easily be undermined. Furthermore, a household’s lack of social support or emotional connectedness, discrimination against HIV or misconceptions about the illness can further inhibit the development of HIV competence at the household level and may even produce a health-impeding context. Based on the circumplex model of marital and family systems by Olson and the results of qualitative research by Masquillier et al., the research team developed the positive communication process (P2CP), which delineates four steps in the process of building HIV competence at the household level. A necessary condition for the building of HIV competence in the household is disclosure, as household members can only offer social support related to living with HIV when the patient has shared their status (P2CP step 2). Household members can then act as change agents (P2CP step 3): they create awareness and openness about the illness in their midst and about the need for behaviour change to prevent further transmission to others. These change agents are therefore the motor that prompts the move towards HIV competence at the household level. Moreover, the change agents act as “household health advisors” by translating their knowledge and communication skills into positive HIV-related communication dynamics at the household level, such as safe sex negotiation or a conversation about adherence support. An increase in HIV-related knowledge supports the gradual process of normalisation of HIV in the household, which is required to build an environment that is responsive to HIV treatment and prevention.
Finally, these constructive household dynamics are translated into HIV competence (P2CP step 4), resulting in a household that forms a health-enabling environment in which it is easier for the patient to self-manage their treatment, adhere to ART, and reduce the likelihood of a new HIV infection within the household. The road to HIV competency commences with the recognition of the reality of HIV by the PLWH themselves (P2CP step 1). In this project, the HIV competent household concept will be advanced beyond the merely theoretical and conceptual level. Building on the existing literature and our preparatory work, the current project aims to investigate empirically to what extent and in what way HIV competent households can become sustainable health-enabling contexts that can provide an answer to the HIV prevention and treatment challenges facing South Africa. To this end, this project, the Sinako (“we can” in isiXhosa) HIV and households study, aims to test an evidence-based household intervention delivered by CHWs to: 1) increase HIV competence in PLWH and their households; and subsequently 2) optimise the impact of CHW support on individual antiretroviral treatment outcomes. The trial compares a standard-of-care arm (arm 1) with a P2CP intervention arm (arm 2). CHWs, however, are linked to a health care facility, which creates a risk of contamination when CHWs active in different arms operate from the same facility. The facility was therefore selected as the cluster unit of randomisation. The health care facilities in the study setting were categorised as large or small according to the number of CHWs employed, which also corresponds to the number of patients. Subsequently, 12 facilities were grouped by selected subdistricts and randomly selected from the list of facilities for inclusion in arms 1 and 2, resulting in six facilities or clusters per trial arm.
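The facility-level randomisation described above (stratified by facility size, six clusters per arm) can be sketched as follows. The facility names, stratum sizes, and seed below are hypothetical placeholders, not the trial's actual allocation procedure.

```python
# Illustrative sketch (not the trial's actual code) of facility-level
# stratified randomisation: facilities are stratified by size, then split
# evenly so each arm receives six clusters in total.
import random

def randomise_clusters(facilities, seed=2019):
    """facilities: list of (name, size) with size in {'large', 'small'}.
    Returns a dict name -> arm, balancing arms within each size stratum."""
    rng = random.Random(seed)
    arms = {}
    for stratum in ("large", "small"):
        group = [name for name, size in facilities if size == stratum]
        rng.shuffle(group)
        half = len(group) // 2
        for name in group[:half]:
            arms[name] = 1  # arm 1: standard of care
        for name in group[half:]:
            arms[name] = 2  # arm 2: P2CP intervention
    return arms

# Hypothetical example: 6 large and 6 small facilities.
facilities = [(f"facility_{i}", "large" if i < 6 else "small")
              for i in range(12)]
assignment = randomise_clusters(facilities)
print(sorted(assignment.items()))
```

Stratifying by facility size before allocation keeps the two arms comparable in CHW capacity and patient volume, which is the stated reason the facilities were categorised as large or small.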
Further blinding of the study arms was not possible because of the clear differences between the intervention and the standard of care. The design and reporting of this clinical trial protocol follow the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) statement. In contrast, the population in Khayelitsha is mainly Black African (99%). For the three other subdistricts, race distribution is almost equal between the Coloured and Black African populations, with the Western subdistrict showing a significant presence of the White population (29%). All these subdistricts are confronted with severe social and economic challenges, and poverty is widespread. Unemployment is omnipresent, with an average unemployment rate of 28.6%, ranging from 18% in Western to 38% in Khayelitsha. Of the five subdistricts, Khayelitsha is the most impoverished, with more than half of the households living in informal dwellings (55%) and with 74% of households surviving on a monthly income of R3200 or less (approximately US$218). Conversely, the Western subdistrict appears to be a little less disadvantaged than the other subdistricts, with 15% of households living in informal dwellings and a slightly lower unemployment rate of 18%. These social and economic challenges translate into health-related challenges as a result of limited access to health care and education, intra-partner violence, and transactional sex, among others.
Inclusion criteria for participants include the following: a minimum age of 18 years; having commenced ART within 4 weeks of enrolment, either for the first time or again after previous defaulting; having a household member above 18 years old; not being co-infected with tuberculosis at the time of the test; not having been tested as a result of pregnancy; accessing HIV care and treatment at one of the health care facilities designated for this cluster-RCT; and living in the area of this facility. The CHWs participating in this trial were selected through an application process based on various criteria, including: having experience as a CHW supporting people living with HIV; willingness to learn new skills and embrace different methods for supporting ART adherence; and willingness to work in the community, including locating clients and conducting the intervention in the client’s household. Intensive training workshops have been held to train the CHWs recruited in the intervention arm of the RCT in their specific tasks and to equip them with the skills to deliver the intervention. A sample size of 180 individuals per arm was obtained by sampling 12 clusters in total (six per arm), giving 90% power to detect an increase in ART adherence (the primary outcome; see below) from 68% to 83% (effect size = 15%) postintervention over a period of 12 months. The proportion in arm 1 (the control group) is assumed to be 0.68 under the null hypothesis and 0.83 under the alternative hypothesis. This sample size was calculated for a two-sided Z test (unpooled) at the 5% significance level. The intracluster correlation is 0.0020. The estimated effect size is conservative, since this is a new intervention that has not been rigorously assessed in South Africa. We hope to increase ART adherence levels by 30%; however, we have powered the study on the lower effect size (15%) to avoid a type 2 error.
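As a rough cross-check of the stated power calculation, the sketch below recomputes the per-arm sample size for a two-sided unpooled Z test of two proportions and inflates it by the usual design effect 1 + (m − 1) × ICC. The cluster size m = 30 (180 per arm across six clusters) is an assumption on our part; this is not the authors' own calculation.

```python
# Illustrative recomputation of the cluster-RCT sample size: two-sided,
# unpooled Z test for two proportions, inflated by the design effect.
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.90, icc=0.0, cluster_size=1):
    """Per-arm sample size, with DEFF = 1 + (m - 1) * ICC for clustering."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)        # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)                 # ~1.28 for 90% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)  # unpooled variance term
    n_individual = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    deff = 1 + (cluster_size - 1) * icc       # cluster-design inflation
    return math.ceil(n_individual * deff)

# Protocol parameters: adherence 0.68 vs 0.83, 90% power, ICC = 0.0020;
# cluster size 30 is our assumption (180 per arm / 6 clusters per arm).
print(n_per_arm(0.68, 0.83))                               # 168
print(n_per_arm(0.68, 0.83, icc=0.0020, cluster_size=30))  # 178
```

With these inputs, the individually randomised size is 168 per arm and the design effect brings it to 178, close to the protocol's 180 per arm, so the stated figures are internally plausible under the assumed cluster size.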
The total sample size of 640 individuals (320 per arm), instead of 360 (180 per arm), will be used to allow for loss to follow-up. As a result of this oversampling, CHWs will deliver the intervention to a total of 320 PLWH receiving ART adherence support. As part of the standard procedure in the South African health system, a counsellor tests the individual at the clinic. Upon a positive HIV test, the counsellor opens a patient file and screens the patient against the preliminary eligibility criteria. A second clinic visit is then scheduled, at which the patient is introduced to their CHW and the study. If the patient refuses to participate, the patient stays in the regular South African health system. If the potential participant agrees, the CHW enrols the individual and schedules a first visit at home or at an alternative location they prefer. At this home visit, the CHW provides further information about the study, invites the PLWH to complete an informed consent form, provides reassurance about confidentiality, and explains the importance of an interview with a household member. The CHW records the household members’ names and then schedules the second home visit, at which the individual baseline interview will be conducted by the fieldworker. The household includes all those people who “eat from the same pot” for at least four nights per week over the past month. Based on the household information document, household members will be randomised by the fieldwork team for the household member interview.
If the household member interview is not completed within 30\u2009days of patient enrolment, the household member is not included in the study.Before the start of the intervention visits, the PLWH in both the intervention and control arms receive similar preparatory visits.In the first preparatory visit, the study is introduced by the CHW and consent is asked to collect information about the PLWH and the contact details of the household members. Furthermore, in this first visit an assessment is made of the PLWH\u2019s ART adherence (by doing a pill count) and household context (by doing a home assessment).Between the first and second preparatory visits, a household member will be randomly selected from the information sheet collected by the CHW at the first visit and be contacted for a household interview. If the research team is not able to contact or interview the first randomly selected household member, then a second or third household member will be contacted. The interview with the household member must occur before the first intervention visit. If the research team fails to interview a household member within this time frame, no household member interview will be conducted for that patient.In the second preparatory visit, which is a standard-of-care visit that all patients enrolled in the study receive, the evaluation of ART adherence (pill count) will be repeated. A fieldworker will accompany the CHW during this visit and afterwards conduct a baseline interview.After completion of these initial preparatory visits, the intervention will start. In the control arm, the CHW will offer a standard package of support (i.e. a pill count) focused at the individual level. 
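The fallback selection of a household member described above (first randomly selected member, then a second or third if contact fails) can be sketched as a random contact order. Names, the function and the fixed seed are illustrative, not part of the protocol.

```python
import random

def household_contact_order(household_members, seed=None):
    """Randomly order household members for interview attempts: the first
    member in the returned list is approached first; if they cannot be
    contacted or interviewed, the second is tried, then the third."""
    rng = random.Random(seed)  # a fixed seed makes the selection auditable
    order = list(household_members)
    rng.shuffle(order)
    return order

# hypothetical roster collected by the CHW at the first preparatory visit
roster = ["member_A", "member_B", "member_C"]
print(household_contact_order(roster, seed=42))
```

Shuffling the whole roster once, rather than redrawing after each failed contact, keeps the fallback order fixed and reproducible for the fieldwork team.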
In the intervention arm, in addition to the pill count the CHW activities entail two additional components: 1) an additional individual-level component to stimulate the self-management skills of the HIV patient; and 2) a context-focused component for promoting HIV competent households (Figure\u00a0).Because disclosure is a critical component within HIV competent households and a common thread throughout delivery of the intervention, at the end of each and every intervention visit the PLWH will be encouraged to invite and, where possible, bring a household member or sexual partner to the next visit. If a new household member or sexual partner joins a session, their knowledge about HIV and ART will be assessed and improved by means of a visual HIV fact sheet. The household member/sexual partner will be included in the activities for the remainder of the intervention visit and be invited to join the next visits as well. If the PLWH did not bring a sexual partner or household member to the visit, the CHW will invite the PLWH to do so at the next visit. However, while bringing a household member is highly encouraged, it is not a prerequisite to continue the intervention.The intervention pilot was undertaken with a small subsample of two ART patients in the intervention community from a facility not included in the RCT. The outcomes were incorporated into the final intervention. The feedback from the pilot included the need to allow for the possibility of \u201cin between visits\u201d to catch up on any activities that were not completed during the previous session. The pilot also demonstrated the need for more time than had been planned for CHW training. 
More engagement with the CHWs was therefore included in the training, with more role playing and simulated intervention time rather than time allocated for them to read the manual and self-learn.The CHWs who are part of the intervention arm received 9 training days with a focus on role play and on ensuring that the CHWs understand the intervention and the importance of standardisation of its delivery. At the end of the training, the CHWs were informally assessed to evaluate their grasp of the content and of the technique to deliver the intervention. Furthermore, regular debriefing sessions are planned with the CHWs to monitor compliance with the intended standard of intervention delivery and to improve adherence to the intervention protocols.Various strategies are followed to maximise participant retention in the study. First, potential participants who return for follow-up care and treatment visits after the initial diagnosis visit are recruited into the study. Patients who return for follow-up treatment and care demonstrate commitment to their own health and are as such perceived as patients who are likely to commit to long-term participation in the Sinako study. Second, CHWs have been trained to answer pertinent questions about the study that may be posed by potential participants at every opportunity or as and when requested. Third, tailored strategies are used specific to the individual participant. For instance, a potential participant of the Sinako study may, from time to time, travel out of the province. This poses a risk of attrition from the study. To mitigate this risk and improve retention rates, the CHWs have been trained to check with the enrolled patients regarding travel plans. In addition, the patient is also requested to inform the CHW of such travel, whether planned or occurring as an emergency. 
Finally, the CHW follow-up visits with the patient or the fieldworker data collection appointments with the household member are, as far as possible, scheduled according to the participants\u2019 convenience. This is to ensure that study activities adapt to participants\u2019 availability and are not imposed. This in turn is intended to improve retention.If an individual participant does not want to continue participating in the study, the intervention for this particular respondent will be stopped. In case of a particular adverse event, the principal investigator will make an informed decision whether or not to continue the trial for this particular participant.In order to assess the impact of the intervention, key baseline assessment indicators are considered (Figure\u00a0). These include the primary and secondary outcomes (Table\u00a0). Moreover, we assess the impact of our intervention on a range of secondary process-related outcomes, measuring both individual outcomes and aspects of HIV competence of the households, which should facilitate individual ART adherence (and thus support the primary outcomes). At the individual level, the PLWH questionnaire assesses HIV knowledge, condom use, quality of life, self-management, perceived social support and disclosure. Outcomes measured at the household level include HIV knowledge, condom use, HIV-related stigma, communication about HIV, household functioning, HIV testing and support to a household member living with HIV. These outcomes are included as indications of household comfort with HIV and communicating about HIV and could be considered as proxy measures for HIV competency at the household level.Methodologically, this study aims to analyse unique longitudinal data in a partially mixed concurrent equal status design, which involves \u201cconducting a study that has two phases that occur concurrently, so that the quantitative and qualitative phases have approximately equal weight\u201d. 
The quantitative data collection is performed by experienced fieldworkers who have received additional training to sensitise them to the specifics of the study in order to ensure quality and standardisation of the research process. An innovative feature of this cluster-RCT is that one household member of each PLWH enrolled in the study is also interviewed. This enables collection of data on the level of HIV competence within the household. In order to avoid inadvertent disclosure, and to protect patient privacy, the interview with the household member is held at a different time and will be presented as a general health survey. Furthermore, patient and household interviews will be conducted by different interviewers who have experience with this research method in similar research contexts. Small tokens of appreciation in the form of a shopping voucher (R75 in the control arm and R150 in the intervention arm) are provided to participants who complete the baseline and follow-up data collection.For the baseline data collection, the CHW presents the study during the first pre-intervention visit and asks for written informed consent to obtain the contact information of the household members used for the household interview. For the qualitative data collection, a selection of respondents is invited for an interview. Using a semistructured interview guide, longitudinal in-depth interviews with the selected respondents will be conducted at three different time points: before the start of the intervention (month 0); in the middle of the intervention (month 6); and at the end of the intervention (month 12). Longitudinal qualitative data collection allows assessment of the changing dynamics within and outside the household that influence HIV competency, as well as its impact on PLWH and their (un)infected household members. The topics explored during the qualitative interview include HIV testing, stigma, disclosure, treatment adherence support, household support, and aspects of HIV competence. 
All interviews are conducted in the native language of the respondents.Furthermore, all CHWs delivering the intervention are invited to participate in a focus group discussion to assess the feasibility of the intervention. The perceptions of those delivering the intervention are valuable because they may have important divergent insights into the way in which the intervention works to change HIV competence levels. The focus group discussions are conducted with all CHWs in their preferred language.All quantitative and qualitative data will be anonymised and stored on a secure server. The list with the names of the respondents and their corresponding respondent numbers will be stored safely in a locked cabinet in the office of a School of Public Health or University of Antwerp researcher. The participant list will only be used for the purpose of identifying the follow-up respondents. This list with respondent names will be kept separate from the quantitative and qualitative databases. All these data will be kept for 5\u2009years after the completion of the study.The quantitative data collection is guided by the mobile application Mobenzi. The Mobenzi servers are hosted in private subnets of the Amazon Web Service, where security group filters and network access control lists are utilised within a virtual private network (VPN) environment to ensure data security. Completed quantitative surveys are periodically uploaded and removed from the fieldworkers\u2019 devices once the server acknowledges their receipt. Data are encrypted in transit using Secure Sockets Layer (SSL).The qualitative in-depth interviews with PLWH and the focus group discussions with the CHWs delivering the intervention will be recorded. These data will be captured and analysed so that the anonymity of the respondents is maintained. Each respondent will be given a unique identifier. The coding sheet with all respondent numbers (identifiers) will be stored on a secure campus server. 
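The separation described above — respondent names in one list, study data keyed only by respondent number — can be sketched as a small pseudonymisation step. The function, field names and identifier format are illustrative assumptions, not the study's actual tooling.

```python
import secrets

def pseudonymise(records):
    """Split identifying names from survey data. Returns (key_list, anon_rows):
    the key list (respondent number -> name) is stored separately from the
    anonymised database, mirroring the protocol's separation of the two."""
    key_list, anon_rows = [], []
    for rec in records:
        rid = "R" + secrets.token_hex(4)  # unique respondent identifier
        key_list.append({"respondent_id": rid, "name": rec["name"]})
        anon_rows.append({"respondent_id": rid,
                          **{k: v for k, v in rec.items() if k != "name"}})
    return key_list, anon_rows

key, anon = pseudonymise([{"name": "X", "arm": "control", "adherence": 0.9}])
print(list(anon[0]))  # no "name" field in the anonymised row
```

Only the shared respondent identifier links the two outputs, so neither the anonymised database nor the key list is useful for re-identification on its own.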
The audience will thus not be able to link individual statements to particular focus group participants and interviewees. If any statement would potentially reveal the identity of a respondent (e.g. because the respondent gives information specific to a certain household or patient), the research team will not include this statement, so as to protect their identity.The comparison between the two arms using cluster-specific analysis techniques will allow us to assess the net impact of the household intervention on both the primary and secondary outcomes. First, we will perform an intent-to-treat analysis. In a second step, an analysis based on dose-response data will be conducted.Furthermore, the main relationships between the relevant concepts will be analysed using latent cross-lagged modelling in Mplus, using chi-square difference testing.Data collection and data analysis phases will be alternated to assist subsequent interviews and to assess when data saturation has been reached. After written informed consent is obtained, all interviews will be audiotaped, allowing us to produce a detailed transcript of the interviews. These transcripts ensure accuracy of what is said and serve as the basis for data analysis. The recordings of the interviews will be transcribed verbatim and, when necessary, translated into English. A sample of translations will be back translated into the local language for a quality check. Transcripts will be imported into NVivo. Data will be analysed carefully by reading and re-reading the field notes and transcripts of interviews. Codes for a sample of transcripts will be compared with another researcher\u2019s codes and similarities and differences will be discussed, thus ensuring intercoder reliability. 
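The intercoder comparison described above is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. The protocol does not name a statistic, so kappa and the example codes below are our illustrative assumptions.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders labelling the same transcript segments;
    1.0 = perfect agreement, 0 = no better than chance."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    # observed proportion of segments on which the coders agree
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # chance agreement from each coder's marginal label frequencies
    count_a, count_b = Counter(codes_a), Counter(codes_b)
    p_exp = sum(count_a[c] * count_b[c] for c in count_a) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# illustrative codes for four transcript segments from two researchers
print(cohens_kappa(["stigma", "support", "stigma", "disclosure"],
                   ["stigma", "support", "support", "disclosure"]))
```

A run over a sample of jointly coded transcripts gives a single agreement figure to report alongside the qualitative discussion of coding differences.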
The analysis will be performed in accordance with the Grounded Theory principles described by Strauss and Corbin.When baseline data become available, descriptive analysis and structural equation modelling using Mplus will be conducted. A special data monitoring committee, made up of delegates from both institutions and external institutes, will be informed of the progress of the trial. This committee is independent from the sponsor and funders. Furthermore, the study will be guided by a steering committee, consisting of the two local principal investigators and a postdoctoral fellow.A debriefing and internal monitoring plan will be followed to further monitor the intervention progress and to assess quality of delivery. Adverse events resulting from the intervention are reported on the same day to the principal investigator, who will report these to the ethics committees of both institutions involved in the study (the University of the Western Cape and the University of Antwerp). However, no extreme adverse events are anticipated. In case psychological support is needed, the research team will provide counselling contacts.One key to monitoring and evaluation is establishing whether it is ethical to continue the trial. To limit the potential risks for participants, we will organise a mid-term review of the intervention to assess its initial impact. In the unlikely event that the intervention has a negative impact on the health or mental well-being of the participants, the trial will be stopped immediately. The project partners (the University of the Western Cape and the University of Antwerp) must decide this mutually, in consultation with the ethics committees of both institutions.Before study enrolment, written informed consent of all participants is obtained. The consent forms are available in English, isiXhosa and Afrikaans. 
The purpose of the study and its design, and aspects such as informed consent and confidentiality, are explained in an understandable manner to the respondent in the language of their preference. This information is also distributed by means of an information leaflet, which the participants receive from the fieldworker. Written informed consent is required not only for study enrolment, but also for audio recording and for the publication of the findings. After written informed consent, respondents who agree to be included in the study take part in either a baseline and follow-up survey or a baseline and follow-up interview plus a household intervention.Respondents can withdraw from the study at any time without penalty or loss of benefits to which they are entitled. If the respondent faces issues they do not want to discuss, the researcher will be sensitive to the interests of the participant by not pressing the issue and moving on to the next question.There are essentially three groups of participants: 1) HIV patients on ART; 2) household members; and 3) CHWs providing the intervention. Groups 2 and 3 are not exposed to any risks. These participants will share their views and experiences regarding life in the household (group 2) and their work (group 3), respectively.The PLWH enrolled in the cluster-RCT (group 1) are exposed to two potential risks, for which we have developed strategies to prevent and mitigate possible negative effects. First, the patients on ART receiving the standard treatment (CHW support) are not exposed to any potential negative effect. The Non-Governmental Organisation (NGO) providing the CHW support has been providing this support for several years and is accredited and funded by the provincial Department of Health. These trained CHWs have a standard procedure to protect the person living with HIV from any unintended consequences (e.g. the disclosure of their HIV status to the family/community). 
Patients starting treatment follow counselling sessions at the clinic, where they are introduced to their CHW, who then makes an appointment for follow-up support visits. If the patient does not want the CHW to visit their home, the meetings with the CHW are organised at their preferred location. This standardised procedure has been working for many years and has assisted thousands of HIV patients to commence their treatment. No unintended consequences are therefore expected in this arm. The HIV patients in the intervention arm, however, will be subjected to a household intervention. The intervention is based on the available literature on intervention development and the theoretical frameworks developed in family sociology and psychology.Second, it is possible that some of the participating HIV patients have not yet disclosed their HIV-positive status to their household members. For this reason, the patient interviews and the household interviews will be separated entirely. Both interviews will be executed on different dates and by different fieldworkers. The household interview will be framed as a general health survey in order to protect the privacy of the participating HIV patients.After the completion of the study, the participants will be referred back into the health system. All patients revert to the standard of care delivered by the Department of Health, including facility visits and ART adherence clubs for stable patients.The study results will be presented to the scientific community via journal publications and presentations at international conferences. People who are formally named and linked to the study and others who are directly involved, and who have actively participated in the preparation or writing of the articles, are eligible for authorship. There is no intention to make use of professional writers. 
Furthermore, all relevant stakeholders will be informed of the research results; these include the Western Cape Department of Health, the City of Cape Town, the participating NGO and their CHWs, and patient representatives. The goal is to share the resulting knowledge with the relevant people, who can subsequently adopt the (hopefully) successful interventions to improve CHW support for HIV patients on ART.Despite the success of the ART programme, South Africa still faces both prevention and treatment challenges. To tackle these challenges, stimulating HIV competence at the household level could potentially be a feasible and sustainable strategy to optimise the outcomes of CHW interventions in a resource-constrained context. This paper provides an overview of the Sinako study. The aim of this cluster-RCT in South Africa is to investigate to what extent and how an intervention can: 1) increase HIV competence in PLWH and their households; and subsequently 2) optimise the impact of CHW support on individual ART outcomes. A longitudinal mixed methods design is adopted to analyse the data of the two-arm cluster-RCT Sinako study: 1) a control arm, where CHWs will offer a standard package of support to PLWH during home visits which is only focused on the individual; and 2) an intervention arm, where, during home visits, CHWs will focus on both the individual and the household in order to enable the patient to self-manage their HIV treatment within an HIV competent household.The Sinako study has to date encountered a couple of unexpected delays, stemming from policy changes in the field. In early 2019, the local Department of Health announced an amendment in operational arrangements with regard to NGOs and CHWs. These changes mainly relate to remuneration of CHWs. Originally, CHWs had been mainly employed on a 50% full-time equivalent basis by NGOs and remunerated from non-South African government funding and external sources of aid funding. 
The change in operation implied that the CHWs were now effectively employed by the government and remunerated from government funds, channelled through the NGOs, although administratively NGOs still provide oversight of CHWs. Prior to this change, the CHWs worked part-time for the Department of Health, and would therefore have been able to take on part-time responsibilities for the study. However, the policy change resulted in the recruitment of full-time CHWs for the length of the intervention to work exclusively for the research project. This new strategy required a new recruitment process, which resulted in delays in the roll-out of the RCT.The ethics committee of the University of the Western Cape (June 2019) and the ethics committee for the Social Sciences and Humanities of the University of Antwerp (September 2018) provided ethical approval for this study. Permission from the City of Cape Town was received in July 2019, and permission from the Western Cape Department of Health was granted by September 2019 for all but one facility, for which it was granted in December 2019. In this facility, the data collection started when approval was received.An application for funding was submitted in April 2017 to the Research Foundation\u2014Flanders and in May 2017 to VLIR-UOS for different aspects of this cluster-RCT. It went through thorough external peer review for each funding organisation separately. Funding was granted for 4 years, starting from January 2018. In November 2018, we applied for additional funding for the qualitative research component via a Global Minds scholarship at the University of Antwerp, and in August 2018 for NRF funding. Each funding body reviewed various aspects of the qualitative research component separately. This article is based on the final protocol. Recruitment for the baseline survey and intervention began in year 2 (8 October 2019). We anticipate that recruitment will be completed by year 3 (May 2020). 
The postintervention survey and the longitudinal qualitative work are expected to be finalised by year 3 (October 2020).Additional file 1. Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklist.Additional file 2. Consent forms."} +{"text": "E. coli. This setup enabled us to show that being able to precisely set the production rate of a secretory recombinant protein is critical to enhance protein production yields in the periplasm. It is assumed that precisely setting the production rate of a secretory recombinant protein is required to harmonize its production rate with the protein translocation capacity of the cell. Here, using proteome analysis we show that enhancing periplasmic production of human Growth Hormone (hGH) using the tunable rhamnose promoter-based setup is accompanied by increased accumulation levels of at least three key players in protein translocation: the peripheral motor of the Sec-translocon (SecA), leader peptidase (LepB), and the cytoplasmic membrane protein integrase/chaperone (YidC). Thus, enhancing periplasmic hGH production leads to increased Sec-translocon capacity, increased capacity to cleave signal peptides from secretory proteins and an increased capacity of an alternative membrane protein biogenesis pathway, which frees up Sec-translocon capacity for protein secretion. When cells with enhanced periplasmic hGH production yields were harvested and subsequently cultured in the absence of inducer, SecA, LepB, and YidC levels went down again. This indicates that when using the tunable rhamnose promoter-based setup to enhance the production of a protein in the periplasm, E. coli can adapt its protein translocation machinery for enhanced recombinant protein production in the periplasm.Escherichia coli is widely used for the production of recombinant proteins. Recently, we engineered a tunable rhamnose promoter-based setup for the production of recombinant proteins in E. coli. 
At an A600 of ~0.5, target gene expression was induced by the addition of rhamnose at a concentration optimal for the periplasmic production of hGH: for DsbAsp 100 \u03bcM rhamnose, for Hbpsp 50 \u03bcM rhamnose, for OmpAsp 50 \u03bcM rhamnose, for PhoAsp 100 \u03bcM rhamnose, and for the control 100 \u03bcM rhamnose. When cells were used for another hGH production run, cells were harvested 16 h after the addition of rhamnose, washed three times with plain LB medium, and the washed cells were subsequently used to inoculate a fresh culture with a starting A600 of 0.07. The W3110\u0394 strain was used. To monitor the production of periplasmic hGH, whole-cell lysates were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) followed by immuno-blotting using an AlexaFluor 647 conjugated anti-His antibody (Invitrogen), as described previously. Protein digestion, liquid chromatography\u2013tandem mass spectrometry (LC\u2013MS/MS) analysis, and the protein identification and quantification methodology used to analyze the proteome of cells producing periplasmic hGH using different signal peptides and of control cells with an empty expression vector have been described in detail previously; the data are available via the PRIDE partner repository. Database searching was done against a randomized E. coli K12 W3110 UniProt/Swissprot database with the added amino acid sequence of hGH, using the MaxQuant software (version 1.5.8.3). Recently, we used rhamnose promoter-based production rate screening in combination with four signal peptides (DsbAsp, Hbpsp, OmpAsp, and PhoAsp) to enhance hGH production in the periplasm. With Hbpsp (50 \u03bcM rhamnose), hGH was most efficiently produced in the periplasm. With DsbAsp (100 \u03bcM rhamnose) and OmpAsp (50 \u03bcM rhamnose), hGH was also efficiently produced in the periplasm, but yields were lower than with Hbpsp. Enhancing periplasmic hGH production with PhoAsp (100 \u03bcM rhamnose) resulted in only a minute amount of hGH in the periplasm, and most of the hGH was retained as the precursor protein in cytoplasmic aggregates. Thus, different signal peptides can have a significant impact on the production of hGH in the periplasm. 
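As a minimal illustration of how "considerably changed" proteins can be tabulated from such MaxQuant-style proteome output: the function, the fold-change/q-value thresholds and the demo rows below are hypothetical, not the study's actual criteria or data.

```python
def considerably_changed(proteins, min_abs_log2fc=1.0, max_q=0.05):
    """Select proteins whose abundance change passes both an effect-size and a
    significance cut-off (thresholds are illustrative assumptions)."""
    return [p["name"] for p in proteins
            if abs(p["log2fc"]) >= min_abs_log2fc and p["q"] <= max_q]

# hypothetical summary rows: protein name, log2 fold change vs control, q value
demo = [
    {"name": "protein_A", "log2fc": 1.8, "q": 0.001},
    {"name": "protein_B", "log2fc": -1.2, "q": 0.010},
    {"name": "protein_C", "log2fc": 0.4, "q": 0.300},
]
print(considerably_changed(demo))  # protein_A and protein_B pass, protein_C does not
```

Counting the proteins that pass such a dual threshold, per signal-peptide condition, yields tallies of the kind reported below (e.g. more than 250 versus about 150 considerably changed proteins).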
Enhancing periplasmic hGH production with the different signal peptides had markedly different effects on the proteome. PhoAsp, which leads to the lowest periplasmic hGH production, led, compared to the other three signal peptides, to a less dramatic overall change in the proteome composition. Out of the 509 proteins showing statistically significant changes, for DsbAsp, Hbpsp, and OmpAsp more than 250 were considerably changed. However, for PhoAsp only about 150 proteins were considerably changed.Taken together, it appears that we have exploited E. coli's ability to modulate its protein translocation machinery capacity to enhance the production of recombinant proteins in the periplasm, under conditions that require it to adapt its protein translocation machinery. How does E. coli modulate its protein translocation machinery capacity? Cracking the mechanism(s) behind E. coli's ability to modulate its protein translocation capacity may provide the basis for engineering the next generation of strains for periplasmic recombinant protein production.The mass spectrometry proteomics data generated for this study have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org) via the PRIDE partner repository with the dataset identifier PXD013168.AK, KD, MA-V, AM, and J-WG conceived and designed the experiments. AK, KD, MA-V, AM, and RE performed the experiments. AK, KD, MA-V, AM, RE, SS, DB, KR, and J-WG analyzed the data. AK and J-WG wrote the paper and all others critically read the paper.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Bombus terrestris. Impacts of these honeybee-derived viruses - either injected or fed - on the mortality of B. terrestris workers were, however, negligible and probably dependent on host condition. 
Our results highlight the potential threat of viral spillover from honeybees to novel wild bee species, though they also underscore the importance of additional studies on this and other wild bee species under field-realistic conditions to evaluate whether pathogen spillover has a negative impact on wild bee individuals and population fitness.Pathogen spillover is an important cause of biodiversity decline as well as a risk to human health. Recent examples include emerging pathogens of fire salamanders (Salamandra salamandra) and Ebola virus in humans. The Nosema ceranae microsporidian of the Asiatic honeybee Apis cerana is nowadays an emerging infectious disease (EID) of Apis mellifera, and spillover has also been suggested for the stingless bee Tetragonula hockingsi, the bumblebee Bombus terrestris, and the solitary bee Osmia bicornis, though in these cases the evidence is correlational rather than experimental. A. mellifera, the world's most important commercial pollinator, is a source of pathogens that spill over into wild bee species, particularly where the mite Varroa destructor and deformed wing virus (DWV), which the mite transmits, occur; V. destructor has been introduced, i.e. worldwide excluding Australia. There is mounting correlational evidence that the Western honeybee is a source of pathogen spillover into wild bee species. Bumblebees (Bombus spp.) are widespread wild bee species in northern temperate regions, and viral prevalence in Bombus spp. covaries with that in Apis. Though Bombus spp. are not known to host V. 
destructor, spillover of DWV from honeybees to bumblebees has been inferred from the tight relationship between DWV prevalence in populations of A. mellifera and Bombus spp. and higher prevalence in the former [A. mellifera comprising two main genotypes: the original DWV genotype A (DWV-A) and the more virulent DWV genotype B (DWV-B) [V. destructor parasitism of honeybees, by elevating DWV prevalence and intensity of infection (pathogen load) in honeybees, may help drive pathogen spillover from honeybees to bumblebees [Bumblebees , both of (DWV-B) . A leadimblebees . We notemblebees .et al. [B. terrestris workers led to a significant increase in mortality over 20 days. It is not known whether observed mortality was due to DWV-A, DWV-B, enhanced virulence due to co-infection or an A\u2013B recombinant. Though DWV-A and DWV-B are widespread, have high prevalence in British and US honeybees and often co-occur in the same host [et al. [et al. [B. terrestris fat bodies into conspecific, caged workers and revealed a 50% increase in mortality. In this second study [B. terrestris hosts, to which it had potentially adapted, thus not reflecting a spillover scenario from honeybees to bumblebees. In both studies [B. terrestris and whether it per se, as opposed to a potentially pre-existing pathogen in experimental bees or inoculum, induced elevated mortality.Two studies have to date evaluated the virulence of DWV to bumblebees. Firstly, F\u00fcrst et al. found thame host ,43, A\u2013B ame host and may [et al. . In the [et al. injectednd study , DWV was studies ,45, viraB. terrestris workers with either BQCV, DWV-A or DWV-B derived from honeybees and thereafter quantified host mortality and viral titre. Inoculation of bumblebees was done by injection, so as to determine the capacity of the virus to replicate in a novel host, as well as by feeding, representing the more likely natural route of infection in the field [ad libitum food conditions. 
However, fitness costs of responding to an immune challenge may depend on host nutritional state, as has been shown for bumblebees when diet was restricted. To clarify the potential impact of honeybee-associated viruses on bumblebees, we therefore experimentally inoculated bees under both satiated and starvation conditions. 2.2.1. Commercial B. terrestris colonies were kept in an incubator at 30\u00b0C and 50% relative humidity with ad libitum 50% (w/v) sucrose solution. Every 2\u20133 days, they were fed with fresh-frozen honeybee pollen pellets that had been freshly defrosted. Pollen was UV-irradiated before use to destroy pathogens. Honeybees for experiments and for generating viral inocula were taken from our local apiary, originally purchased as the subspecies Apis mellifera carnica, as is typical for beekeeping in the region. To check that bumblebees (12 source colonies: labelled B1\u2013B12) and honeybees as well as the fresh-frozen pollen pellets were devoid of viral pathogens, we tested them by real-time quantitative PCR (qPCR) for seven common honeybee viral targets and three Microsporidia. Bumblebee and honeybee colonies were largely free of virus, pollen was devoid of virus and Microsporidia were not detected. 2.2.2. To propagate DWV-A and DWV-B for experimental inocula, we used the inocula from Tehel et al.; our BQCV inoculum was prepared following published methods, and viral propagation always used the same procedures for Bombus and Apis. Ultradeep next-generation sequencing (NGS) on an Illumina platform confirmed the identity of our DWV-A and DWV-B inocula. Freshly eclosed honeybee workers were cooled to 4\u00b0C and then injected laterally between the second and third tergite with a dose of viral genome equivalents sufficient to ensure 100% infection of adults, fed ad libitum with 50% (weight/volume) sucrose solution and monitored daily till death, as in McMahon et al. 2.3.2. Viral inocula were tested in freshly emerged B. terrestris workers as follows. Firstly, we marked all workers in our 12 B. terrestris colonies.
Colonies were checked daily and unmarked, newly emerged workers were transferred to autoclaved metal cages (10 \u00d7 10 \u00d7 6 cm), fed ad libitum with 50% (w/v) sucrose solution and held in an incubator at 30\u00b0C. On the next day (i.e. 24\u201348 h after eclosion), workers were inoculated with virus (or control solution), either by injection or orally by feeding, and then kept in groups of 5\u201310 of the same treatment per cage. In an experiment, the number of bees per cage was constant (\u00b1 one bee) for every treatment within any 1 day of infection. This procedure was repeated across 25 days to allow for sufficient replication per experiment. 2.3.2.1. Inoculation of B. terrestris workers by feeding was designed to test the likely route of viral spillover from honeybees at flowers in the field. Freshly emerged (24\u201348 h after eclosion) bumblebee workers were individually fed with 10^9 viral genome equivalents or the equivalent control solution devoid of virus, a quantity inducing an acute infection. Bees were then caged and monitored. 2.3.2.2. Inoculation by injection was designed to test whether B. terrestris is a competent host for each virus. To inject workers, they were cooled on ice till immobile. Viral inoculation then followed that for honeybees; B. terrestris workers were then transferred to autoclaved metal cages in small groups (five to seven bees per cage). Bees were randomly assigned to cages independent of their four source colonies but grouped according to treatment per cage, resulting in n = 404 bees that were recorded daily for mortality. One bee per cage was removed at 10 d.p.i. to quantify viral titre. 2.3.2.3. As B.
terrestris workers did not exhibit elevated mortality over controls following viral inoculation under benign laboratory conditions with ad libitum food (see Results), we ran an additional experiment in which we removed their food to determine whether viral inocula induced mortality under non-benign, starvation conditions. Bees from three colonies were collected over a 14 day period as they eclosed, held in autoclaved metal cages and individually injected as described above. To control statistically for effects of age, bees of approximately the same age were held in the same cage. All bees were injected on the same day. At 13 d.p.i., after the virus had had time to replicate, bees were individually transferred to a plastic cup covered with netting, devoid of sucrose solution but with a small cotton wool ball soaked in water, held at 30\u00b0C and checked every hour for mortality. In this experiment, B. terrestris workers were inoculated by injection, of which 194 survived till 13 d.p.i. and, therefore, entered the starvation part of the experiment. At death, bee size was estimated because size might determine the ability to survive under starvation. Abdomens were homogenized using a plastic pestle, of which 100 \u00b5l were used for RNA isolation. Absolute quantification of viral titre followed the methods used for viral inocula described in Tehel et al. We used generalized linear models (GLMs) with a quasi-Poisson error distribution to test for the effect of treatment or experiment on viral titre. Survival analyses accounted for B. terrestris experiments in which an experiment was initiated across multiple days. To assess the significance of predictors, statistical models including all predictors were compared with null (intercept only) or reduced models (for those with multiple predictors) using likelihood ratio (LR) tests.
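As a hedged, minimal sketch of the model-comparison logic described here (the authors worked in R; the helper names below are hypothetical), a likelihood-ratio test with one degree of freedom and the Bonferroni family-wise adjustment used for post hoc comparisons can be written as:

```python
import math

def lr_test_df1(llf_full, llf_reduced):
    """Likelihood-ratio test of a full model against a null/reduced model,
    assuming a single extra parameter (df = 1)."""
    lr = 2.0 * (llf_full - llf_reduced)
    # Chi-square survival function for df = 1: P(X > x) = erfc(sqrt(x / 2)).
    p = math.erfc(math.sqrt(lr / 2.0)) if lr > 0 else 1.0
    return lr, p

def bonferroni(p_values):
    """Family-wise error-rate control for pairwise post hoc comparisons."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]
```

For df = 1, an LR statistic of about 3.84 corresponds to p close to 0.05, which is a convenient sanity check on the erfc identity used above.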
Pairwise comparisons between factor levels of a significant predictor were performed using pairwise post hoc tests, adjusting the family-wise error rate according to the method of Bonferroni. Survivorship of experimentally inoculated bees was analysed using Cox proportional hazards models with the R package coxme, with 'cage' entered as a random factor and treatment as a fixed factor. For the Bombus experiment under starvation conditions, 'cage' was again retained as a random factor, and treatment together with bee age and bee size entered as fixed factors. The median survival was calculated using the Survfit function in survival. In all survival analyses, bees that died within 1 day (24 h) post-inoculation were eliminated from subsequent analyses as death was probably a consequence of physical damage by injection per se rather than the inoculum. 3.1. All viral inocula, BQCV, DWV-A and DWV-B, resulted in rapid honeybee mortality, significantly faster than controls (BQCV: Exp. (\u03b2) = 562.259, p < 0.001; DWV-A: Exp. (\u03b2) = 2.489, p = 0.006; DWV-B: Exp. (\u03b2) = 4.461, p < 0.001; electronic supplementary material, table S2). BQCV killed honeybees the fastest, followed by DWV-B and DWV-A. Injected virus grew to ca 3 \u00d7 10^13 viral genome equivalents at 10 d.p.i. Honeybees suffered a slight background infection with DWV-B; however, all viral inocula were devoid of contaminating virus, viable and highly virulent in their original host, A. mellifera. In contrast, bumblebees fed viral inocula did not die faster than controls (BQCV: Exp. (\u03b2) = 0.940, p = 0.75; DWV-A: Exp. (\u03b2) = 1.244, p = 0.26; DWV-B: Exp. (\u03b2) = 1.218, p = 0.30; figure\u00a01a; electronic supplementary material, table S2). Though all viruses were detectable in bumblebee abdomens at 18\u201325 d.p.i. (figure\u00a01a), viral titres were at or just below 10^9, the amount administered per bumblebee. Bumblebees were devoid of background infection. This experiment suggests that all three viruses can maintain themselves in B.
terrestris following oral infection, but that they are not virulent when hosts are maintained in the laboratory under benign, satiated conditions. 3.2.2. In contrast with honeybees, bumblebees injected with viral inocula and fed ad libitum did not die any faster than controls (BQCV: Exp. (\u03b2) = 0.623, p = 0.13; DWV-A: Exp. (\u03b2) = 1.240, p = 0.47; DWV-B: Exp. (\u03b2) = 0.923, p = 0.79; figure\u00a01b; electronic supplementary material, table S2). Virus did, though, replicate very well in B. terrestris hosts under laboratory conditions. 3.2.3. When inoculated by injection and then starved from 13 d.p.i., viral treatment again had no effect on B. terrestris mortality (figure\u00a01c). When all treatments were analysed simultaneously through to the death of all bumblebees, statistically significant differences among control or treatments were not seen (BQCV: Exp. (\u03b2) = 1.059, p = 0.87; DWV-A: Exp. (\u03b2) = 1.589, p = 0.10; DWV-B: Exp. (\u03b2) = 1.167, p = 0.57; electronic supplementary material, table S2). However, DWV-A inoculated bees exhibited a subtly shorter lifespan (figure\u00a01c), dying ca 1.6-fold faster than controls, suggesting that DWV-A (but neither DWV-B nor BQCV) might subtly impact B. terrestris longevity. Though smaller worker bumblebees lived longer than larger workers (Exp. (\u03b2) = 1.665, p = 0.03), bee size did not differ between treatments and bumblebee size did not differentially impact mortality across treatments. Viral titres were higher in honeybees than after the equivalent inoculation by injection of bumblebees, for all three viruses and durations of infection across experiments. Notably, inoculation with DWV-B led to a significantly higher viral titre than with BQCV within each experiment with bumblebees.
Another facet of virulence may be the size of the host in relation to viral titre. Honeybee workers are generally smaller than those of bumblebees and, in our experiments, we inoculated each host species with the same viral titre. A direct relationship between host size and inoculum titre could, therefore, account for the higher mortality of honeybees versus bumblebees that we observed. However, viral titres were actually higher in honeybees than bumblebees, arguing against a relationship between host size and inoculum titre that is constant across host bee species. Furthermore, viral titre seems to asymptote after several days in each host species, high in honeybees and lower in bumblebees. Not even under stressful, starvation conditions did we detect a marked effect of BQCV, DWV-A or DWV-B in reducing B. terrestris longevity in the laboratory. Condition-dependent virulence of honeybee viruses in Bombus spp. hosts has been seen for slow bee paralysis virus infecting B. terrestris, in which longevity was compromised only when hosts were starved, and for the gut parasite Crithidia bombi. Laboratory conditions may therefore underestimate the impact of honeybee viruses spilling over into wild bees in the field, where hosts may be exposed to far harsher environmental conditions and limited resources; insecticides, for example, may add sublethal stress and contribute to the decline of wild bees, including Bombus spp. Viral titres were lower, and the impact on host mortality non-existent, when BQCV, DWV-A or DWV-B was injected into B. terrestris versus injected into A. mellifera. These results suggest that virus may be locally adapted to its host, and that A. mellifera may be the reservoir host for all three viruses. The immediate impact of viral spillover from honeybees to bumblebees and other wild bee species might then indeed be low, as we found under our benign laboratory conditions.
But transmission from bumblebee to bumblebee could lead to local adaptation of a virus to a Bombus host, with unknown consequences of pathogen spill-back from bumblebees and other wild bee species to honeybees if viral adaptation to the novel host (Bombus) trades off with a loss of virulence in the original host (Apis). It is unsurprising that we found inoculation by injection to lead to higher viral titres than oral inoculation of bumblebees. Injection of a pathogen into the insect haemocoel gives the pathogen access to the entire host body tissue, whereas oral infection initially gives it access to the gut alone. The former route of transmission, injection into the haemocoel through V. destructor host feeding, is thought to account for the huge increase in viral prevalence and intensity of infection of DWV in honeybees. Similarly, injection of virus into B. terrestris led to systemic infection and rapid host death, whereas oral infection led to infection of the host gut in a dose-dependent manner and with more limited impact on host health. That BQCV was extremely virulent in our honeybee assay is at first sight surprising because BQCV is widespread and highly prevalent in honeybee populations, where it is normally transmitted among A. mellifera workers through its typical faecal\u2013oral route of transmission. We derived our viral inocula from A. mellifera because it is probably the reservoir host of BQCV, DWV-A and DWV-B. Though we recorded little to no virulence of these viruses on B.
terrestris under laboratory conditions, their impact on this and other bee species (and other flower visitors) under field-realistic conditions should be the focus of future studies to evaluate the role of viral spillover in wild bee decline. The Western honeybee is the dominant flower visitor across most terrestrial ecosystems of the world."} +{"text": "The pollution status of polychlorinated naphthalenes (PCNs) in the sediment of the Yangtze River Basin, Asia\u2019s largest river basin, was estimated. The total concentrations of PCNs (mono- to octa-CNs) ranged from 0.103 to 1.631 ng/g. Mono-, di-, and tri-PCNs\u2014consisting of CN-1, CN-5/7, and CN-24/14, respectively, as the main congeners\u2014were the dominant homolog groups. Combustion indicators and principal component analysis showed that emissions from halowax mixtures were the main contributor to PCNs in sediment among most of the sampling sites. The mean total toxic equivalent (TEQ) was calculated to be 0.045 \u00b1 0.077 pg TEQ/g, which indicates that the PCNs in sediments were of low toxicity to aquatic organisms. This work will expand the database on the distribution and characteristics of PCNs in the river sediment of China. Polychlorinated naphthalenes (PCNs) have been synthesized since the 1930s, with 75 congeners based on the number and position of the chlorine(s) in the naphthalene ring system, including 2 mono-chlorinated (CN-1\u2013CN-2), 10 di-chlorinated (CN-3\u2013CN-12), 14 tri-chlorinated (CN-13\u2013CN-26), 22 tetra-chlorinated (CN-27\u2013CN-48), 14 penta-chlorinated (CN-49\u2013CN-62), 10 hexa-chlorinated (CN-63\u2013CN-72), 2 hepta-chlorinated (CN-73\u2013CN-74), and 1 octa-chlorinated (CN-75). PCNs have been found in various media such as soil, sediment, water, air, and biota, even in human breastmilk [13]. 
The Yangtze River, Asia\u2019s longest river and the third longest river in the world, serves as an important resource for drinking water, aquaculture and industrial use. With a rapid increase in population and economic development around the river, there are numerous inputs from industrial wastewater, municipal sewage, atmospheric deposition, as well as agricultural soils containing fertilizers, pesticides, herbicides, and heavy metals. The PCN concentrations in the sediment samples from the Yangtze River basin ranged from 0.103 to 1.631 ng g\u22121. The lowest concentration of PCNs was found at S7, located in the Minjiang River, which served as a water reservoir. The highest concentration of PCNs was found at S15, collected from the Xijiu River in Yixing City, a residential urban area. In the upstream zone (samples S1\u2013S3), a relatively higher PCN concentration was found at S1, located near a famous tourist resort. Among the upper- and middle-stream zone samples, the PCN concentrations of S8 and S5 were relatively high, mainly because S8 was in a large city and S5 was close to an industrial development zone. Within the samples from the middle and lower zones, the concentration of PCNs was relatively high at S10, which was near the outlet of a sewage treatment plant. PCN levels in this study were compared with those reported from other parts of the world. 
Compared with other locations, the PCN concentrations in our study were of the same order of magnitude as in the Venice Lagoon in Italy (0.03\u20131.15 ng/g\u00b7dw), the Gulf of Bothnia in Sweden (0.088\u20131.9 ng/g\u00b7dw), the Liaojie River in Taiwan (0.029\u20130.987 ng/g\u00b7dw), Tokyo Bay in Japan (1.81 ng/g\u00b7dw), the Qingdao Coastal Sea in China (1.1\u20131.2 ng/g\u00b7dw), and Laizhou Bay in China (0.12\u20135.10 ng/g\u00b7dw). PCN homolog profiles for the environmental samples may be of great help to qualitatively identify the sources. To further identify the potential sources of PCNs in the samples, we conducted principal component analysis (PCA) on PCN homologs in the sediment samples of the Yangtze River. Several congeners\u2014such as CN-17/25, -36/45, -39, -35, -52/60, -50, -51, -53, and -66/67\u2014have been identified as combustion indicators (PCNcom). The ratio of \u2211PCNcom to \u2211PCNs is usually calculated to estimate primary sources: \u2211PCNcom/\u2211PCNs < 0.11 suggests emission from the halowax mixture, \u2211PCNcom/\u2211PCNs > 0.5 indicates combustion-related source emissions, and 0.11 < \u2211PCNcom/\u2211PCNs < 0.5 is taken to indicate emissions from both combustion sources and halowax products. In this study, the values of \u2211PCNcom/\u2211PCNs were all lower than 0.11, except for S12 (0.19). Thus, it was speculated that the dominant sources of the PCNs were mainly emissions of the halowax mixture; besides, the concentration of PCNs in S12 was also affected by combustion-related source emissions. Even though PCN mixtures were never historically produced and are not currently in commercial use in China, the historic usage of halowax mixtures in products such as paints and rubber materials could still be a big contributor to PCNs in the sediment of the Yangtze River Basin. 
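The ratio thresholds described here translate directly into a small source classifier; a sketch under the stated 0.11 and 0.5 cut-offs (the function name is hypothetical):

```python
def classify_pcn_source(pcn_com, total_pcns):
    """Assign a likely primary PCN source from the combustion-indicator
    ratio, using the 0.11 and 0.5 cut-offs cited in the text."""
    ratio = pcn_com / total_pcns
    if ratio < 0.11:
        return "halowax mixture emission"
    if ratio > 0.5:
        return "combustion-related emission"
    return "mixed: combustion sources and halowax products"

# Site S12 had a ratio of 0.19, i.e. a mixed signature.
```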
Some PCN congeners have toxic effects similar to those of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in terms of their biological actions in animals. Compared with previous studies, the TEQ values in this study were lower. To investigate the distribution of PCNs in the sediments along the Yangtze River Basin, a series of sediments was collected from 16 typical sites, including rural, urban and industrial areas, chosen on the basis of their surrounding environment and levels of economic development. The sampling map shows only Level I and II tributaries. Following the direction of the river, the samples could also be divided into four major zones: the upstream (Samples S1\u2013S3), the upper and middle-stream (Samples S4\u2013S8), the middle and lower (Samples S9\u2013S13), and the downstream (Samples S14\u2013S16). Samples were spiked with 13C10-labeled PCN internal standards and then mixed with 40 g (dry weight) of diatomaceous earth, and extracted by accelerated solvent extraction at 120 \u00b0C with a mixed extraction solvent of hexane and dichloromethane. The extracts were first cleaned using an acid silica column, followed by a multilayer silica gel column and then a basic alumina column. The elution fraction was then concentrated to 20 \u00b5L by rotary evaporation and a gentle nitrogen gas stream. Finally, 13C10-labeled PCN was spiked for the calculation of recoveries before analysis. The PCNs were analyzed according to our previously described method. Briefly, the GC inlet temperature was set at 270 \u00b0C. The temperature program was initiated at 80 \u00b0C (held for 2 min) and increased to 180 \u00b0C at 20 \u00b0C min\u22121 (held for 1 min), to 280 \u00b0C at 2.5 \u00b0C min\u22121 (for 2 min), and to 300 \u00b0C at 10 \u00b0C min\u22121 (for 5 min). 
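Stepping back from the instrumental settings: the TEQ screening reported earlier is a TEF-weighted sum over congeners. A minimal sketch, with hypothetical congener keys and purely illustrative (not authoritative) TEF values:

```python
def total_teq(conc_pg_g, tef):
    """Total toxic equivalent: sum over congeners of concentration (pg/g)
    times the congener's TCDD-relative toxic equivalency factor (TEF)."""
    return sum(c * tef[name] for name, c in conc_pg_g.items())

# Illustrative values only; real TEFs come from the relative-potency
# literature the text cites.
tefs = {"CN-66/67": 0.002, "CN-73": 0.003}
sample = {"CN-66/67": 10.0, "CN-73": 5.0}
teq = total_teq(sample, tefs)  # 10*0.002 + 5*0.003 = 0.035 pg TEQ/g
```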
The HRMS was tuned and operated at a resolution of approximately 10,000 with 45 eV EI energy. PCNs were analyzed by high-resolution gas chromatography coupled with a high-resolution mass spectrometer. A DB-5 fused silica capillary column was used for the separation of PCN congeners. The injection volume was 1 \u03bcL and the flow rate of helium as carrier gas was 1 mL min\u22121. The instrumental detection limits were estimated at a signal-to-noise ratio of three. PCNs were quantified using the relative response factor of the labeled congener at the same level of chlorination and a similar retention time. A procedural blank sample was evaluated to assess possible contamination and instrumental stability. Only a small amount of mono-chlorinated naphthalenes was detected in the blank samples, at concentrations more than 10% lower than those in the sediment samples; the PCN concentrations in the samples were, thus, not corrected using the values from the blanks. The recoveries of the 13C10-labeled congeners ranged from 62% to 98%. We evaluated the distribution, composition, and ecological risks of PCNs by analyzing 16 sediment samples of the Yangtze River basin from the source to the estuary. Their concentrations and TEQs were less than 2 ng g\u22121 and 0.3 pg TEQ/g, respectively. These levels were lower than those of most previous reports, demonstrating that there was nearly no toxicological concern for aquatic life. In our study, the relatively higher PCN concentrations and TEQs were generally related to frequent human activities and nearby industrial sources. Further research is needed, however, to elucidate the relationship between concentrations of PCN congeners and human activities, providing new insights into the environmental and health risks of exposure to PCNs at low levels."} +{"text": "Xanthomonas oryzae pv. oryzae (Xoo) is one of the most serious rice diseases, causing huge yield losses worldwide. 
Several technologies and approaches have been adopted to reduce the damage; however, these have had limited success. Recently, scientists have been focusing their efforts on developing efficient and environmentally friendly nanobactericides for controlling bacterial diseases in rice fields. In the present study, a scanning electron microscope (SEM), transmission electron microscope (TEM), and a confocal laser scanning microscope (CLSM) were utilized to investigate the mode of action of ginger EOs on the cell structure of Xoo. The ginger EOs caused the cells to grow abnormally, resulting in an irregular form with hollow layers, whereas the dimethylsulfoxide (DMSO) treatment showed a typical rod shape for the Xoo cell. Ginger EOs restricted the growth and production of biofilms by reducing the number of biofilms generated, as indicated by CLSM. Due to the instability, poor solubility, and poor durability of ginger EOs, a nanoemulsion approach was used, and a glasshouse trial was performed to assess their efficacy in BLB disease control. The in vitro antibacterial activity of the developed nanobactericides was promising at the different concentrations tested (50\u2013125 \u00b5L/mL). The efficacy was concentration-dependent, with significant antibacterial activity recorded at higher concentrations. The glasshouse trial revealed that the developed nanobactericides managed to suppress BLB disease severity effectively. Treatment at a concentration of 125 \u03bcL/mL was the best based on the suppression of disease severity index, AUDPC value, disease reduction (DR), and protection index (PI). Furthermore, findings on plant growth, physiological features, and yield parameters were significantly enhanced compared to the positive control treatment. 
In conclusion, the results indicated that ginger essential oil-loaded nanoemulsions are a promising alternative to synthetic antibiotics for suppressing Xoo growth, managing BLB disease, and enhancing rice yield under a glasshouse trial. The bacterial leaf blight (BLB) caused by Xanthomonas oryzae pv. oryzae (Xoo) is of great concern due to its emergence as a major challenge to global rice cultivation. To examine ultrastructural changes, samples were cultured in nutrient broth treated with ginger EOs, antibiotics, and DMSO. After incubation, Xoo cells were collected by centrifugation at 4000 rpm for 10 min and washed thrice with PBS to remove unwanted media and other components. Then, the pellet was prefixed overnight at 4 \u00b0C with improved Karnovsky\u2019s fixative, containing glutaraldehyde and 2% (v/v) paraformaldehyde in 0.05 M sodium cacodylate buffer solution. After 30 min of washing with 0.1 M sodium cacodylate buffer, the samples were post-fixed in osmium tetroxide in 0.2 M PBS for 2 h and then dehydrated with a sequence of graded acetone. The sample was treated with a 1:1 acetone and resin mixture for 4 h, a 1:3 mixture overnight, and 100% resin overnight. Following infiltration, the specimens were embedded in beam capsules with Spurr\u2019s resin. The samples were sliced into ultra-thin pieces using an ultra-microtome and a diamond knife. The pieces were placed on copper grids and stained with 2% uranyl acetate and Reynolds\u2019 lead citrate for 10 min each. To examine the effect of ginger EOs against Xoo biofilm formation using confocal laser scanning microscopy (CLSM), 3 Falcon tubes with a total volume of 25 mL of nutrient broth were used for the experiment. Then, 100 \u00b5L of the standardized Xoo suspension (1 \u00d7 10^6 CFU/mL) was pipetted into each of the tubes. Test bacteria were treated with the minimum inhibitory concentration (MIC) of the ginger EOs (100 \u00b5L/mL) and streptomycin (Sigma-Aldrich (M) Sdn. 
Bhd., Selangor, Malaysia) (15 \u00b5g/mL), while the control tube contained only nutrient broth (BD cat 234000) and Xoo suspension. The control tube and the Falcon tubes with the test materials were incubated for 24 h at 30 \u00b0C. After incubation, the samples were centrifuged at 10,000\u00d7 g for 10 min, and the pellet was suspended in 20 mL of wash buffer (PBS solution) after removal of the supernatant. About 1 mL of this suspension was added to each of the 20 mL of PBS solution contained in the Falcon tubes and incubated at room temperature for 1 h, mixing every 15 min. The samples were pelleted by centrifugation at 10,000\u00d7 g for 10 min, re-suspended in 20 mL of PBS solution and centrifuged again at 10,000\u00d7 g for 10 min. The resulting pellets were re-suspended in separate tubes with 10 mL of PBS solution. The LIVE/DEAD BacLight (L7012) bacterial viability kit (Thermo Fisher Scientific), containing two components, was prepared according to the manufacturer\u2019s instructions at a ratio of 1:1 in a microcentrifuge tube. Then, 3 \u00b5L of the mixed stains were pipetted into each millilitre of sample and incubated at room temperature for 15 min. Next, 50 \u00b5L of the stained suspension was pipetted onto glass slides and covered with cover slips. The stain and stained samples were protected from light during staining to preserve the viability of the stain. The stained slides were viewed on the same day using CLSM at the Agro Biotechnology Institute (ABI)-National Institutes of Biotechnology Malaysia (NIBM). For the in vitro assay, the bacterial suspension was standardized to 1 \u00d7 10^6 CFU/mL. Subsequently, 100 \u00b5L of the suspension was spread on Mueller\u2013Hinton (MH) agar using a sterile glass rod to ensure even distribution of microbial growth. Sterile filter paper discs (Whatman\u2019s No. 
6 mm in diameter) were impregnated with 10 \u00b5L of the nanobactericides at different concentrations ranging from 25 to 125 \u00b5L/mL and then mounted on the surface of the agar test plate at intervals. The positive control discs were saturated with 10 \u00b5L of streptomycin (15 \u00b5g per disc), while the negative control discs were saturated with DMSO. The Petri dishes were then sealed with sterile laboratory parafilm and left for 30 min at room temperature to allow diffusion of the ginger EOs. Plates were finally incubated at 37 \u00b0C for 24 h. The zones of inhibition were used to assess the diameter of growth inhibition in millimeters (mm). The curative experiment was conducted with the MR219 rice cultivar. Seeds were germinated in trays for two weeks. Three blocks were made for each treatment, and there were nine plants per replicate in a randomized complete block design (RCBD). The experiment was divided into six treatments: TA-positive control (Xoo/distilled water), TB-negative control, TC-Xoo/nanobactericides (75 \u03bcL/mL), TD-Xoo/nanobactericides (100 \u03bcL/mL), TE-Xoo/nanobactericides (125 \u03bcL/mL), and TF-Xoo/streptomycin (15 \u03bcg/mL). The plants were grown at 30 \u00b0C and 85\u201395% relative humidity inside a glasshouse. A virulent Xoo suspension was prepared by incubating the bacteria for 24 h at 28 \u00b0C on nutrient agar (NA) medium. The cultures were finally adjusted to 1 \u00d7 10^8 CFU/mL using sterile distilled water. The rice plants were then inoculated with Xoo when the plants reached their tiller stage, 30 days after sowing (DAS), by the clipping method on fully developed leaves. The treatments were sprayed onto rice seedlings with a hand-held sprayer until completely wet in the morning hours (8:00), and were applied at 45, 65, and 75 DAS. 
The disease parameters were evaluated every 10 days after treatment application up to 95 DAS. The plant height, yield parameters, and physiological characteristics of the rice plant were measured using the Standard Evaluation System for Rice. The disease severity was determined as Disease severity index (%) = [\u2211(A \u00d7 B) \u00d7 100]/(N \u00d7 9), where A = class of disease (0 to 5), B = number of seedlings per treatment indicating disease class, N = total number of replications, and 9 = a constant representing the highest evaluation class. The area under the disease progress curve (AUDPC) was calculated as AUDPC = \u2211[(Yi + Yi+1)/2] \u00d7 (ti+1 - ti), where n = number of assessment times, Y = disease incidence, and t = observation times. Similarly, the protection index (PI) was evaluated following published formulae [40]. The photosynthesis rate, stomatal conductance, intercellular CO2 concentration, and transpiration rate were measured at 75 days after sowing (DAS) using an infrared gas analyzer (model Li-6400XT) to determine the phytotoxicity of the nanobactericides on the rice plant. Measurements were taken from young, fully expanded and exposed leaves (third or fourth leaf from the tip) of the rice plant, with three replications from each treatment evaluated between 1000 and 1100 h. The height of the plant was recorded from ground level to the tip of the tallest leaf at 30 days after inoculation. The data were analyzed as mean \u00b1 standard deviation using SAS 9.4 PROC ANOVA, and significant differences between the means were assessed using the least significant difference (LSD) at a 0.05 probability level. The ability of ginger EOs to inhibit the growth of Xoo and break down the biofilm formation confirmed the potency of ginger EOs as a strong antibacterial agent. 
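The disease metrics described above can be sketched numerically. Helper names are hypothetical, and the trapezoidal AUDPC form below is the standard one that the truncated equation in the text is based on:

```python
def disease_severity_index(class_counts, n_total, max_class=9):
    """DSI (%) = sum(A * B) * 100 / (N * max_class), with A = disease class,
    B = seedlings in that class and N = replications, per the text's
    variable definitions (max_class = 9 is the constant given there)."""
    score = sum(a * b for a, b in class_counts.items())
    return 100.0 * score / (n_total * max_class)

def audpc(times, incidence):
    """Area under the disease progress curve via the trapezoidal rule."""
    return sum((incidence[i] + incidence[i + 1]) / 2.0 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))
```

For instance, assessments at 0, 10 and 20 days with linearly rising incidence integrate to the same area as the underlying straight line, which is a quick way to check the implementation.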
The ginger EOs-loaded nanoemulsions (nanobactericides) could be a potential delivery approach for highly volatile compounds and sensitive antibacterial agents. Furthermore, the formulated nanobactericides could be applied for managing BLB disease of the rice plant and enhancing rice yield under a glasshouse trial. The use of effective and environmentally friendly nanobactericides has a direct impact on society, the economy, and the environment. Therefore, it serves as an important tool for achieving a sustainable agricultural system, especially in terms of crop protection practices."} +{"text": "Animals must adapt their behavior to survive in a changing environment. Behavioral adaptations can be evoked by two mechanisms: feedback control and internal-model-based control. Feedback controllers can maintain the sensory state of the animal at a desired level under different environmental conditions. In contrast, internal models learn the relationship between the motor output and its sensory consequences and can be used to recalibrate behaviors. Here, we present multiple unpredictable perturbations in visual feedback to larval zebrafish performing the optomotor response and show that they react to these perturbations through a feedback control mechanism. In contrast, if a perturbation is long-lasting, fish adapt their behavior by updating a cerebellum-dependent internal model. We use modelling and functional imaging to show that the neuronal requirements for these mechanisms are met in the larval zebrafish brain. Our results illustrate the role of the cerebellum in encoding internal models and how these can calibrate neuronal circuits involved in reactive behaviors depending on the interactions between animal and environment. Animals can adjust their behavior in response to changes in the environment when these changes can be predicted. 
Here the authors show the role of the cerebellum in zebrafish that change their swimming as they adjust to long-lasting changes in visual feedback.

The interaction between animals and their surroundings changes constantly, due to changes in the environment and due to processes such as development, growth or injury, which modify the body of the animal. Nevertheless, fine motor control is so important that evolution has provided animals with mechanisms to produce precise behavior in these changing conditions. The task of adapting behavior to the changing environment can be solved in two ways. One way is to react to these changes through a feedback control mechanism, which ensures that the goal of a behavioral act is achieved under a variety of conditions. A second option is for the animal to learn the new environmental conditions, namely the association between its behavior and the sensory feedback, and to adjust its behavioral program in the long term. This second mechanism is only possible if the change in conditions lasts and can therefore be predicted.

Many stimulus-driven behaviors result in the effective cancellation of the stimulus that evoked them. Examples include the optokinetic reflex (OKR)1, in which retinal slip evokes eye motion that sets this slip to zero. If the stimulus is monitored constantly, a feedback control loop with well-tuned parameters may provide an appropriate mechanism for performing the task of setting the stimulus to zero2. This happens online, so feedback controllers are limited by the time delay required for sensory processing, which in the case of visual feedback is estimated to be between 100 and 300 ms7. If the processing of sensory information is long with respect to the duration of the motor action, the current state of the body will change dramatically by the time the feedback signal starts to influence the motor command. As a result, the feedback signal will implement an inappropriate correction based on out-of-date sensory information8.

To overcome this limitation of feedback motor control, the brain can encode internal models of different parts of the body and/or of different aspects of the external world11. These models monitor the sensorimotor transformation performed by the body during movement and learn a forward or an inverse transfer function of this transformation, either to predict the sensory consequences of a motor command (forward models) or to provide an appropriate feedforward command to reach a desired sensory state (inverse models). Such models can predict, for example, that if we lift an arm a certain distance, we will experience a certain proprioceptive feedback of this motion (forward model), or that we need to send a certain motor command in order to lift an arm a certain distance (inverse model). If the transfer function changes in a long-term manner, the model updates, leading to motor learning. It is widely believed that internal models for motor control exist in the central nervous system and that in vertebrates the cerebellum plays a major role in encoding them16, although how exactly this is done is not well understood.

In this study, we investigate the interplay between feedback controllers and internal models and the role of the cerebellum in encoding them. We make use of the larval zebrafish optomotor response (OMR)17, a behavior shared by many animals19, by which they turn and move in the direction of perceived whole-field visual motion. The OMR can be defined in terms of a feedback control mechanism as a locomotor behavior that tries to set the optic flow to zero, thus stabilizing the animal with respect to its visual environment; in this framework, the OMR is similar to the OKR, as both of these behaviors effectively cancel the stimulus that evoked them. When zebrafish, or any other animal, move forward, they experience the visual scene coming towards them20.

As larvae, zebrafish swim in bouts that comprise several full tail oscillations and last around 350 ms, separated by quiescent periods called interbouts. Previous work has shown that larval zebrafish swimming in a closed-loop experimental assay react to perturbations in this visual feedback23. Specifically, if a larva receives less feedback than normally, it swims for longer, as if trying to compensate for this lack of feedback by increasing its bout duration. This reaction happens on the time scale of individual bouts21, so we call this phenomenon “acute reaction”. A hypothesized mechanism of acute reaction is that fish use an internal representation of expected sensory feedback, and if the actual feedback does not meet this expectation, they adapt their behavior to minimize this discrepancy21. This postulates that fish use forward internal models to compute predicted sensory feedback from motor commands during acute reaction. In a subsequent study, it was further proposed that these predictive computations occur in the cerebellum22.

Here, we employ behavioral tests and modeling to demonstrate that acute reaction to unexpected perturbations can be implemented by a simple feedback controller, without internal models. The state of this feedback controller can be adjusted if the animal experiences a long-lasting, and therefore predictable, perturbation in sensory feedback. Crucially, loss-of-function experiments show that an intact cerebellum is necessary for this recalibration but not for the functioning of the feedback controller itself. We used functional imaging in animals performing adaptive optomotor locomotion to determine whether the neuronal requirements of this hypothesis are met in the larval zebrafish brain. Our results illustrate the role of the cerebellum in encoding internal models, which can calibrate existing neuronal circuits according to predictable features of the environment. 
The second condition we call lag; this corresponds to introducing an artificial temporal delay between the behavior of the larva and the reafference it experiences. Any acute reaction to such perturbations should therefore be implemented only after a sensory processing delay. To identify the time that larval zebrafish need to react to unexpected perturbations in reafference, we analyzed the temporal dynamics of the tail-beat amplitude, in the form of bout power, within individual bouts in different reafference conditions. Larvae reacted by increasing the tail-beat amplitude only 220 ms after the bout onset, and the deviation in the respective mean bout power was observed only around 220 ms after the start of the perturbation; perturbations can therefore only affect the tail-beat amplitude during this reactive period (Fig. ).

To confirm that acute reaction is implemented by a feedback controller, we took a simulation approach, which involved designing such a controller and testing its performance under the aforementioned perturbations in reafference. The main rationale of the designed model derives from the definition of the OMR in terms of a feedback control mechanism, as a locomotor behavior that tries to keep perceived optic flow at zero. If optic flow is constant, an animal moving in discrete bouts cannot achieve this goal at all possible points in time. Instead, it can stabilize its position on average by integrating the optic flow in time, estimating displacement with respect to the visual environment over a time window and performing bouts whenever the integrated signal reaches a threshold. Following this reasoning, we designed a feedback controller consisting of three parts: a sensory part, a sensory integration part and a motor output generation part (Fig. ). 
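The three-part controller just described can be sketched as a minimal simulation. This is our own illustrative reconstruction, not the authors' MATLAB model: the leaky-integrator dynamics, the start/stop thresholds, and all parameter values (tau, swim speed, the 220 ms processing delay) are assumptions chosen only to produce bout-like output.

```python
# Minimal sketch (our assumptions, not the authors' exact model) of a
# feedback controller that swims in discrete bouts to cancel integrated
# optic flow: sensory part (delayed, reafference-corrected flow), sensory
# integration part (leaky integrator), motor part (threshold bout trigger).

def simulate(duration_s=5.0, dt=0.005, grating=10.0, gain=1.0,
             swim_speed=20.0, tau=1.0, start_thr=2.0, stop_thr=0.5,
             delay_s=0.22):
    n = int(duration_s / dt)
    delay = int(delay_s / dt)
    flow_hist = [grating] * delay           # sensory processing delay line
    integ, swimming = 0.0, False
    motor = []                              # binary motor output per step
    for _ in range(n):
        own = swim_speed * gain if swimming else 0.0
        flow_hist.append(grating - own)     # reafference-corrected optic flow
        perceived = flow_hist.pop(0)        # percept arrives after the delay
        integ += (-integ / tau + perceived) * dt   # leaky integration
        if not swimming and integ >= start_thr:
            swimming = True                 # bout starts at the threshold
        elif swimming and integ <= stop_thr:
            swimming = False                # bout ends once flow is cancelled
        motor.append(swimming)
    return motor
```

With these illustrative parameters the simulated fish produces repeated bouts separated by interbouts, and because of the delay line any perturbation in reafference can only influence the motor output after the processing delay.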
We fit the model with a set of parameters such that it generated bouts and interbouts of realistic duration in response to a forward-moving grating in the normal reafference condition. In the whole-brain functional imaging experiments, we observed ROIs that increased their fluorescence at the onset of the moving grating. Motor ROIs were located predominantly in the hindbrain and in the nucleus of the medial longitudinal fascicle in the midbrain, and sensory ROIs were mostly present in the hindbrain, midbrain, and diencephalic regions, including the inferior olive, dorsal raphe and surrounding reticular formation, optic tectum, pretectum, and thalamus. As one of the main assumptions of the model is the existence of optic flow-integrating ROIs, we determined whether some of the identified sensory ROIs display properties of sensory integrators (time constants of 2.1 ± 0.2 s and 2.5 ± 0.4 s (Q3); mean ± SEM across larvae; see inset in Fig. ). We conclude that certain regions of the larval zebrafish brain integrate the velocity of the moving grating in time, and could therefore compute the sensory drive that evokes the OMR26. This provides an important substrate for the feedback controller-based mechanism of acute reaction. We then used a transgenic line expressing nitroreductase in all cerebellar Purkinje cells (PCs), which allows targeted pharmaco-genetic ablation of PCs by treating the larvae with metronidazole. Swimming bouts were still present after the ablation of PCs, but the larvae also showed a barely detectable acute reaction. 
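One standard way to test for integrator-like ROIs, sketched below, is to pass the grating velocity through leaky integrators with a range of time constants and keep the time constant whose output best correlates with the ROI's fluorescence trace. This is our assumption about the form of the analysis, not the authors' published code:

```python
# Hedged sketch: identify an ROI's best-fitting integration time constant
# by correlating its trace against leaky-integrated stimulus velocity.
import math

def leaky_integrate(velocity, tau, dt):
    out, acc = [], 0.0
    for v in velocity:
        acc += (-acc / tau + v) * dt    # leaky integrator, Euler step
        out.append(acc)
    return out

def corr(x, y):
    """Pearson correlation of two equal-length traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def best_time_constant(velocity, dff, dt, taus=(0.5, 1.0, 2.0, 4.0)):
    """Time constant whose integrated stimulus best matches the ROI trace."""
    return max(taus, key=lambda t: corr(leaky_integrate(velocity, t, dt), dff))
```

An ROI whose activity is itself a leaky integral of the stimulus with a ~2 s time constant would be recovered as such by this procedure.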
The activity of two example ROIs in several trials sampled from different phases of the experiment is presented in Fig. For each ROI, we computed four criteria:
Criterion 1: how much the bout-triggered response increased in response to unexpected presentation of lagged reafference to a naïve larva.
Criterion 2: how much the response increased during the adaptation phase, while the lag-trained animals were adapting to a novel reafference condition.
Criterion 3: how much the response increased when the reafference condition was switched back to normal.
Criterion 4: how much the response increased during the post-adaptation phase, while the animals were adapting back to the original reafference condition.
After confirming that the long-term adaptation effects were detectable in lag-trained adapting larvae and not in the other experimental groups, we turned to analyzing the activity of ROIs in the cerebellum in lag-trained adapting fish (0.2 ± 0.2% (1.0 ± 0.9 out of 386.3 ± 40.1 ROIs) in lag-trained non-adapting fish and 0.9 ± 0.3% (3.6 ± 1.5 out of 381.8 ± 33.3 ROIs) in normal-reafference control fish; mean ± SEM across larvae; Fig. ). We aimed to find activity profiles that were significantly enriched in lag-trained adapting larvae compared to the other two groups, because these activity profiles might reflect the output of a recalibrating internal model. To this end, we divided all ROIs into clusters based on their barcodes. We found that the only cluster that contained significantly higher fractions of ROIs in lag-trained adapting fish was the 0-0+ cluster. The bout-triggered responses of these ROIs gradually decreased during the adaptation phase (negative criterion 2) and increased back to the original level during the post-adaptation phase (positive criterion 4) (Fig. ). 
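Turning the four criteria into a sign "barcode" such as 0-0+ can be sketched as below. The thresholding scheme is an assumption of ours for illustration (the paper's actual assignment would rest on its own statistics):

```python
# Illustrative sketch (assumed thresholding, not the authors' statistics):
# map the four response-change criteria of an ROI to a sign barcode,
# e.g. "0-0+" for a response that drops during adaptation (criterion 2)
# and rises again during post-adaptation (criterion 4).

def barcode(criteria, threshold=1.0):
    """criteria: (c1, c2, c3, c4) response changes; returns e.g. '0-0+'."""
    signs = []
    for c in criteria:
        if c > threshold:
            signs.append('+')
        elif c < -threshold:
            signs.append('-')
        else:
            signs.append('0')
    return ''.join(signs)
```

ROIs can then be clustered simply by grouping identical barcodes.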
To test whether the activity of 0-0+ ROIs could be explained simply by motor activity of the larvae, which was different across groups by design, we modeled an artificial ROI that linearly encodes motor activity (a motor regressor) for each fish and processed it in exactly the same way as the real ROIs. It is important to note that the transgenic line that we use for whole-brain imaging has very poor expression in the PC layer, which explains the lack of 0-0+ ROIs identified in this experiment, in line with the results of the ablation experiment. Finally, the model explains why perturbations in reafference during early bout segments have more influence over the bout duration, whereas perturbations late in the bout affect the subsequent interbout more.

In order for this theoretical mechanism to work in real larvae, they must be able to integrate the optic flow to compute the sensory drive. Our whole-brain functional imaging experiments revealed that the process of sensory integration of the forward visual motion indeed takes place in several brain regions, including the pretectum. It has been shown that pretectal neurons integrate monocular direction-selective inputs from the two eyes and drive activity in the premotor hindbrain and midbrain areas during optomotor behavior31. Together with recent evidence from different experimental paradigms26, the present study demonstrates that the pretectum is involved not only in the binocular integration of sensory inputs, but also in temporal integration that can underlie accumulation of the sensory drive. Similar integrators have been described in other systems36, including zebrafish larvae38. The velocity integrator can correspond to the pretectum, which receives projections from the contralateral direction-selective retinal ganglion cells40. Finally, the pretectal neurons (velocity integrators) send anatomical projections to premotor areas in the hindbrain and to the nucleus of the medial longitudinal fascicle42. 
Both of these regions displayed motor-related activity in our experiments. All experiments were conducted on larval zebrafish at 6–8 days post-fertilization (dpf) of yet undetermined sex. All animal procedures were performed in accordance with approved protocols set by the Max Planck Society and the Regierung von Oberbayern (protocol number 55-2-1-54-2532-82-2016).

To obtain larvae for experiments, one male and one female adult zebrafish were placed in a mating box in the afternoon and kept there overnight. The embryos were collected the following morning and placed in an incubator that was set to maintain the above light and temperature conditions. Embryos and larvae were kept in 94 mm Petri dishes at a density of 20 animals per dish, in Danieau's buffer solution (including 0.6 mM Ca(NO3)2 and 5 mM HEPES buffer) until 1 dpf and in fish water (adjusted to 600 µS conductivity, with the pH value set to 7.2 using NaHCO3) from 1 dpf onwards. The water in the dish was changed daily.

Purely behavioral experiments were conducted using the wild-type Tupfel long-fin (TL) zebrafish strain or the transgenic Tg(PC:epNtr-tagRFP) line that was used for PC ablation (see below). Efficiency of PC ablation was evaluated using the progeny of Tg(PC:epNtr-tagRFP) zebrafish outcrossed to fish expressing GCaMP6s in PC nuclei and RFP in PC somata (Tg(Fyn-tagRFP:PC:NLS-GCaMP6s))27. This allowed evaluating effects of the ablation protocol on the morphology of both cell nuclei and somata. These larvae were homozygous for the nacre mutation, which introduces a deficiency in the mitfa gene that is involved in the development of melanophores59. As a result, homozygous nacre mutants lack optically impermeable pigmented spots on the skin, which enables brain imaging without invasive preparations. PC functional imaging experiments were conducted using zebrafish that expressed GCaMP6s specifically in PCs (Tg(PC:GCaMP6s))27. 
In both cases, the animals were also homozygous for the nacre mutation. Whole-brain functional imaging experiments were conducted using a transgenic zebrafish strain with pan-neuronal expression of GCaMP6s (Tg(elavl3:GCaMP6f))60, homozygous for the nacre mutation. A Z-stack of the larval zebrafish reference brain used for anatomical registration of the whole-brain functional imaging data had previously been acquired in our laboratory by co-registration of 23 confocal z-stacks of zebrafish brains with pan-neuronal expression of GCaMP6f. For anatomical registration of PC functional imaging data, the red channel of one confocal stack of Tg(Fyn-tagRFP:PC:NLS-GCaMP6s) was used as a reference.

To perform targeted ablation of PCs, we employed the Ntr/MTZ pharmaco-genetic approach that has been successfully used in zebrafish: nitroreductase (Ntr) is expressed in a cell population of interest, which is then treated with the prodrug metronidazole (MTZ). Ntr converts MTZ into a cytotoxic DNA cross-linking agent, leading to death of the cells of interest. To this end, we generated a transgenic line that expressed enhanced Ntr (epNtr)63 under the PC-specific carbonic anhydrase 8 (ca8) enhancer element64. epNtr fused to tagRFP was cloned downstream of the aforementioned enhancer and a basal promoter. This construct (abbreviated as PC:epNtr-tagRFP) was injected into nuclei of single-cell-stage TL embryos heterozygous for the nacre mutation, at a final concentration of 20 ng/µl together with 25 ng/µl tol2 mRNA. Larvae showing strong RFP expression in PCs were raised to adulthood as founders and outcrossed to TL fish to obtain a stable line.

The progeny of PC:epNtr-tagRFP+/− fish outcrossed to a TL fish were screened for red fluorescence in the cerebellum at 5 dpf, and 10 RFP-positive (PC:epNtr-tagRFP+/−) and 10 RFP-negative (PC:epNtr-tagRFP−/−) larvae were kept in the same Petri dish to ensure subsequent independent sampling. 
At 18:00, most of the water in the dish was replaced with a 10 mM MTZ solution in fish water, and larvae were incubated in this solution overnight in darkness for 15 h. The next morning at 9:00, animals were allowed to recover in fresh fish water. The next day, the behavior of 7 dpf MTZ-treated larvae was tested in the respective behavioral protocol. After the experiment, the animals were screened for red fluorescence once again to reassess their genotype after mixing positive and negative larvae in one Petri dish. PC:epNtr-tagRFP−/− and PC:epNtr-tagRFP+/− siblings constituted the treatment control and PC ablation experimental groups, respectively. Ablation-induced changes in behavior were tested using the progeny of a single founder.

Morphological effects of the ablation were evaluated in embryos obtained from a Tg(PC:epNtr-tagRFP) fish outcrossed to a Tg(Fyn-tagRFP:PC:NLS-GCaMP6s) fish, homozygous for the nacre mutation. These larvae underwent the same ablation protocol, and z-stacks of their cerebella in the RFP and GFP channels were acquired under the confocal microscope before and after the ablation. To quantify the structural segregation of the PCs induced by ablation, we first masked the cerebellum in each confocal stack using the pipra software65. Next, we computed the local entropy as a non-reference metric of tissue inhomogeneity66. It was computed across the whole stack in radii of 7 µm using the entropy implementation in scikit-image67. We then averaged all entropy values inside the cerebellar masks to obtain an entropy estimate in bits for each stack. We hypothesized that a maintained anatomical structure implies a high entropy, whereas a structural collapse caused by ablation decreases the entropy. As expected, we observed that Ntr+ fish displayed lower entropy values within the cerebellum compared to Ntr− control fish after metronidazole treatment. 
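The entropy metric can be illustrated with a minimal pure-Python stand-in. The paper used scikit-image's rank entropy with a disk-shaped neighborhood; the sketch below uses a square window and a 2-D image for brevity, but computes the same quantity (Shannon entropy of the local intensity histogram, in bits) and averages it inside a mask:

```python
# Minimal stand-in for the local-entropy analysis described above
# (square window instead of scikit-image's disk; illustrative only).
import math

def local_entropy(img, r):
    """Shannon entropy (bits) of intensities in a (2r+1)x(2r+1) window."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            counts = {}
            for yy in range(max(0, y - r), min(h, y + r + 1)):
                for xx in range(max(0, x - r), min(w, x + r + 1)):
                    v = img[yy][xx]
                    counts[v] = counts.get(v, 0) + 1
            n = sum(counts.values())
            out[y][x] = -sum(c / n * math.log2(c / n) for c in counts.values())
    return out

def mean_entropy(img, mask, r=1):
    """Average local entropy over the masked region (e.g. the cerebellum)."""
    ent = local_entropy(img, r)
    vals = [ent[y][x] for y in range(len(img)) for x in range(len(img[0]))
            if mask[y][x]]
    return sum(vals) / len(vals)
```

A homogeneous (collapsed) region yields low entropy, whereas intact, structured tissue with varied intensities yields higher entropy, which is the direction of the effect reported above.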
All experiments were conducted using head-restrained preparations of 6–8 dpf zebrafish larvae, similar to ref. For behavioral experiments, larvae were embedded in 2.5% agarose in a 35 mm Petri dish. For functional imaging experiments, larvae were embedded in 2.5% agarose in custom 3D-printed plastic chambers, with glass coverslips sealed on the front and left sides of the chamber using grease at the entry points of the frontal and lateral laser excitation beams, and the agarose around the head was removed with a scalpel to reduce scattering of the beams (see “Light-sheet microscopy” section). After allowing the agarose to set, the dish/chamber was filled with fish water and the agarose around the tail was removed to enable unrestrained tail movements that were subsequently used as behavioral readout.

A dish/chamber with an embedded larva was then placed onto the screen of the custom-built behavioral or functional imaging rig (Figs. a and 6a)68. An IR LED illuminating the screen with the chamber was directed from above, and a red-pass filter was used to avoid bleed-through of the green component of the visual stimulus into the light collection optics. Larvae were presented with the grating moving in a caudal to rostral direction at 10 mm/s. Experiments were performed in closed loop (similar to ref. 21), as described below. Before starting an experiment, two anchor points enclosing the tail were manually selected. The tail between the anchor points was automatically divided into eight equal segments, and the angle of each segment with respect to the longitudinal reference line was measured by Stytra in real time. The cumulative sum of the segment angles (measured in radians) constituted the final tail trace.

The first type of perturbation, which has been previously used in the literature22, is called gain change. In the closed-loop experimental assay, the gain parameter was used as a multiplier that converts the estimated swimming velocity of the larva into presented reafference. Therefore, the actual forward velocity of a swimming larva was proportional to gain. 
If the gain was set to zero, the tail movements had no influence over the grating speed, so larvae did not receive any reafference. This reafference condition was therefore referred to as the open-loop condition. If the gain was 1, the median velocity of the larva during a typical bout was 20 mm/s, referred to as the normal reafference condition. The gain values used in the experiments included 0, 0.33, 0.66, 1, 1.33, 1.66, and 2.

The second type of perturbation was called lag, and this corresponds to delaying the reafference with respect to the bout onset. When the lag was greater than zero, normal visual reafference with gain 1 was presented with a certain delay after the bout had started. The lag values used in the experiments included 0 ms lag, 75, 150, 225, 300 ms, and infinite lag. In the shunted lag version of this condition, the reafference was set to 0 upon termination of the bout.

The third type of perturbation was called gain drop, and this corresponds to dividing the first 300 ms of a bout into four 75 ms segments and setting the gain during one or more of these segments to zero. Gain drop conditions were labeled using four-digit barcodes, where each digit represents the gain during the corresponding bout segment. For example, the gain profile 1100 denotes that the gain during bout segments 3 and 4 was set to zero, and during the rest of the bout it was set to one. Gain drop conditions used in the experiment included 1111, 0111, 0011, 0001, 0000, 1110, 1100, and 1000.

Importantly, such a closed-loop assay enables the experimenters to control and manipulate the reafference that animals receive when they swim and hence to study how perturbations in reafference affect behavior. 
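The closed-loop pipeline above can be sketched end to end: the tail trace yields a vigor estimate that is scaled into an estimated swim velocity, and the displayed reafference is derived from that velocity under the three perturbation types. This is an illustrative reconstruction, not the authors' code; in particular, the rolling-standard-deviation definition of vigor and the zero-reafference handling of the lag before the delay elapses are our assumptions.

```python
# Hedged sketch of the closed-loop assay described above (our assumptions).
import statistics

def estimated_velocity(tail_trace, multiplier, window=10):
    """Vigor (assumed: rolling std of the tail trace) scaled into mm/s."""
    if len(tail_trace) < window:
        return 0.0
    return multiplier * statistics.pstdev(tail_trace[-window:])

def reafference_velocity(fish_velocity, t_since_bout_onset_ms,
                         gain=1.0, lag_ms=0.0, gain_drop="1111"):
    """Velocity fed back to the display while the fish swims."""
    if t_since_bout_onset_ms < lag_ms:
        return 0.0               # lag: no reafference until the delay elapses
                                 # (the fish was still before bout onset)
    seg = int(t_since_bout_onset_ms // 75)
    if seg < 4 and gain_drop[seg] == "0":
        return 0.0               # gain drop in this 75 ms bout segment
    return gain * fish_velocity  # gain change scales the reafference
```

For example, with the 1100 gain-drop barcode a fish swimming at 20 mm/s receives no reafference 160 ms into the bout (third segment), while gain 0.5 halves the reafference throughout.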
The reafference perturbations used in this study can be grouped into three distinct categories (Fig. ):
normal reafference;
open-loop (gain 0 or infinite lag);
gains: 0.33, 0.66, 1.33, 1.66, 2 (0 ms lag and gain drop 1111);
lags and shunted lags: 75, 150, 225, and 300 ms (gain 1);
gain drops: 1110, 1100, 1000 (0 ms lag).
No combinations of reafference conditions were used: e.g., if the gain was different from 1, the lag was automatically set to 0 ms, or if the lag was greater than 0 ms, the gain was set to 1, and in both cases the gain drop was set to the normal 1111. The reafference conditions listed above are presented in Fig. 

All experimental protocols used in this study had a similar general structure. Each protocol consisted of trials and included the following phases:
Calibration phase. During this phase, the multiplier defining how vigor is converted into estimated fish velocity was automatically calibrated so that the median velocity during an average swimming bout was 20 mm/s. The reafference condition during this phase was set to normal. This calibration was implemented to equalize velocity estimation across fish. In addition, during this phase, larvae were able to get used to the experimental environment and bring their swimming behavior to a stable level. Parameters recorded during this phase were not analyzed in this study.
Pre-adaptation phase. During this phase, the reafference condition was set to normal. This phase was used to record the baseline level of behavior, before any perturbations in reafference were introduced.
Adaptation phase. During this phase, larvae experienced perturbations in visual reafference.
Post-adaptation phase. During this phase, the reafference condition was again set to normal. This phase was introduced to measure how the adaptation phase affected the baseline behavior. 
Each trial consisted of a 15 s presentation of the grating moving in a caudal to rostral direction at 10 mm/s, preceded and followed by 7.5 s periods of static grating to record spontaneous activity. The protocol consisted of a calibration phase, a pre-adaptation phase and an adaptation phase; the post-adaptation phase was omitted. In four out of six imaged larvae, the calibration phase was omitted because behavior under the light-sheet setup was less consistent than under purely behavioral rigs, and calibration of the swimming velocity often failed due to an insufficient number of bouts performed during this phase. For these larvae, the multiplier defining how vigor converts into estimated swimming velocity was set manually to a value resulting from successful calibration in one of the other two larvae. During the adaptation phase, the reafference condition for each bout was randomly set to either normal or open-loop. In addition, two 350 ms pulses of reverse grating motion at 10 mm/s were presented during each static grating period (5 and 10 s after the grating stopped moving). Responses to these pulses were not analyzed in this study. Furthermore, the difference in bout-triggered responses between the normal reafference and open-loop conditions was also not analyzed.

The long-term adaptation experiment was designed to test if larval zebrafish can adapt their behavior in response to a long-lasting and consistent perturbation in reafference. The adaptation phase of this protocol consisted of 210 trials, and the reafference condition for all bouts performed during this phase was set to 225 ms lag (Fig. ). The experimental protocol used for the PC functional imaging experiment was a modified version of the long-term adaptation experiment (Fig. ).

Analysis of the behavioral data was performed in MATLAB. Recorded tail traces were z-scored and interpolated together with the grating speed traces to a common time array with a sampling period of 5 ms. 
For each swimming bout automatically detected by Stytra during the experiment, individual tail flicks were detected. One tail flick was defined as a section of the tail trace between two adjacent local extrema, with a magnitude greater than 0.14 rad and a duration not greater than 100 ms. Automatically detected onsets and offsets of the bouts were then corrected to coincide in time with the beginning of the first tail flick and the end of the last flick, respectively.

Only bouts that occurred while the grating was moving forward were considered for further analysis. For each bout, its duration and the duration of the subsequent interbout were measured. If a bout was the last in a trial, the corresponding interbout duration was not computed. All bouts that were shorter than 100 ms or had a subsequent or preceding interbout shorter than 100 ms were excluded from the analysis as potential tail tracking artifacts.

If there was at least one block of ten consecutive trials without any bouts, the animal was excluded from analysis, because lack of a reliable optomotor response might have indicated damage caused during handling or the embedding procedure, or severe behavioral or sensory deficits. The final numbers of included animals are listed below:
Acute reaction experiments: 100 TL larvae, 85 Tg(PC:epNtr-tagRFP)−/− larvae, and 83 Tg(PC:epNtr-tagRFP)+/− larvae;
Long-term adaptation experiments:
○   lag-trained animals: 100 TL larvae, 28 Tg(PC:epNtr-tagRFP)−/− larvae (treatment control group), and 39 Tg(PC:epNtr-tagRFP)+/− larvae (PC ablation group);
○   normal-reafference control animals: 103 TL larvae, 85 Tg(PC:epNtr-tagRFP)−/− larvae, and 90 Tg(PC:epNtr-tagRFP)+/− larvae.

To analyze the temporal dynamics of the tail beat amplitude within individual bouts, a parameter termed bout power was computed as described below. 
A 1.1-second-long section of the tail trace was selected for each bout, starting 100 ms before the onset of that bout. The values of the tail trace after the bout offset were replaced with zeros to exclude subsequent bouts that could occur within this time window. In addition, the median baseline value computed for the 100 ms window before the bout onset was subtracted from the section. The resulting sections of the tail trace were then squared and referred to as bout power, measured in squared radians.

To present the results of acute reaction experiments, we averaged the metrics obtained for each bout across bouts within each reafference condition in each larva. Bout and interbout durations are presented as mean ± SEM across larvae. Bout power profiles are presented as medians across larvae. To identify the time points at which mean bout power depended on reafference condition, we used the Kruskal–Wallis test. According to the test results, the bout power curves were then divided into ballistic and reactive periods, and the areas below the curves within these two periods were measured for each bout for each larva.

To present the results of the long-term adaptation experiments, we analyzed the aforementioned parameters only for the first bout in each trial. First bout duration in each trial is presented as mean ± SEM across larvae. To quantify the effects observed in the long-term adaptation experiments, we divided all trials of the protocol into blocks of ten and computed the mean value of the respective parameter within each block. We then computed the differences between two corresponding blocks: acute reaction: first ten adaptation trials minus pre-adaptation trials; reduction of acute reaction: last ten adaptation trials minus first ten adaptation trials; after-effect: post-adaptation trials minus pre-adaptation trials. To determine the statistical significance of the long-term adaptation effects in lag-trained adapting larvae, we used the Mann–Whitney U-test with a two-tailed alternative at a significance level of 5%. 
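The flick criterion and the bout-power computation described above can be sketched as follows. This is our reconstruction (5 ms sampling assumed, matching the interpolation step mentioned earlier), not the authors' MATLAB code:

```python
# Sketch of two analysis steps described above: the tail-flick criterion
# and the bout-power profile (baseline-subtracted, squared tail trace,
# zeroed after the bout offset). Indices are samples at 5 ms by default.
import statistics

def detect_flicks(trace, dt_ms, min_amp=0.14, max_dur_ms=100.0):
    """Pairs of adjacent local-extremum indices that qualify as flicks."""
    ext = [i for i in range(1, len(trace) - 1)
           if (trace[i] - trace[i - 1]) * (trace[i + 1] - trace[i]) < 0]
    return [(a, b) for a, b in zip(ext, ext[1:])
            if abs(trace[b] - trace[a]) > min_amp
            and (b - a) * dt_ms <= max_dur_ms]

def bout_power(trace, onset, offset, dt_ms=5):
    """1.1 s bout-power section starting 100 ms before bout onset."""
    pre = int(100 / dt_ms)                  # 100 ms baseline window
    n = int(1100 / dt_ms)                   # 1.1 s analysis window
    start = onset - pre
    baseline = statistics.median(trace[start:onset])
    out = []
    for i, v in enumerate(trace[start:start + n]):
        t = start + i
        v = 0.0 if t > offset else v - baseline   # zero after bout offset
        out.append(v * v)                         # squared radians
    return out
```

The corrected bout onset would be the start index of the first detected flick and the offset the end index of the last one.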
These quantifications are presented as median and interquartile range across larvae. To determine the statistical significance of the observed differences between experimental groups, we used the Mann–Whitney U-test with a one-tailed alternative (significance level of 5%), because the alternative hypothesis was already known from the main long-term adaptation experiment.

In the long-term adaptation experiments performed under the light-sheet microscope, the lag-trained animals were subdivided into adapting and non-adapting groups based on the reduction of acute reaction. If the first bout duration averaged across the last ten trials of the adaptation phase was less than that for the first ten trials of the adaptation phase by at least 40 ms, the lag-trained larva was considered adapting. Therefore, the more accurate label for the non-adapting group would be “not reacting to changes in visual feedback”; we use the term “non-adapting” for convenience.

To test whether acute reaction can be explained by a simple feedback controller that does not involve computation of expected sensory reafference, we developed a model that does not perform these computations and tested it under the perturbations described above. The model was developed and tested in MATLAB. The input of the model was the current velocity of the moving grating, and the output was a binary variable representing the swimming velocity of the model. For simplicity, we did not set out to model individual tail flicks and approximated the swimming behavior of the zebrafish larvae by a binary motor output that equaled 20 mm/s when the model was swimming and 0 otherwise. This was possible due to the discrete nature of zebrafish swimming behavior at the larval stage. 
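The block-wise quantification and the adapting/non-adapting criterion described above can be sketched directly (our reconstruction; values are first-bout durations per trial, in ms):

```python
# Sketch of the block-of-ten quantification and the 40 ms adapting
# criterion described above (illustrative reconstruction, not the
# authors' MATLAB code).

def block_mean(values):
    return sum(values) / len(values)

def adaptation_metrics(pre, adapt, post):
    """pre/post: pre- and post-adaptation trials; adapt: adaptation trials.
    Returns the three block differences defined in the text."""
    first, last = adapt[:10], adapt[-10:]
    return {
        "acute_reaction": block_mean(first) - block_mean(pre[-10:]),
        "reduction": block_mean(last) - block_mean(first),
        "after_effect": block_mean(post[:10]) - block_mean(pre[-10:]),
    }

def is_adapting(adapt, threshold_ms=40.0):
    """Adapting if the last ten adaptation trials are shorter than the
    first ten by at least threshold_ms."""
    return block_mean(adapt[:10]) - block_mean(adapt[-10:]) >= threshold_ms
```

For a lag-trained larva whose first-bout duration jumps from 300 ms to 500 ms at lag onset and then decays to 400 ms, this yields a positive acute reaction, a negative reduction of acute reaction, and classifies the larva as adapting.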
Since in this study we mainly focused on the duration of bouts and interbouts, this simplification did not limit our ability to compare the model behavior with that of the real larvae. To design the model, we used the results of the acute reaction experiment as a starting point. In total, the model had eight parameters. The input and output of the model, as well as the activity of its nodes in an example trial with normal reafference, are presented in Supplementary Fig.
To evaluate the ability of the model to acutely react to different perturbations in reafference, it was tested in a shorter version of the acute reaction experimental protocol. The protocol was shortened to save the computation time required for fitting the model. One trial consisted of 300 ms of static grating followed by 9.7 s of the grating moving in a caudal-to-rostral direction at 10 mm/s. The reafference condition of the first bout was always normal, and the reafference condition of the second bout was chosen from the list of 18 reafference conditions used in the acute reaction experiment. If the model initiated a third bout, the trial was terminated, and the duration of the second bout and subsequent interbout constituted the final output of the model in that trial. If the model did not initiate a third bout, the final output of the model in that trial was not computed.
The model was fitted (N = 100) using a custom-written genetic algorithm. To obtain the training datasets, 18 arrays of bout durations and 18 arrays of interbout durations were generated for each larva, each array corresponding to one reafference condition. Average values of a randomly selected 50% of each array constituted the training datasets; the remaining 50% were used as test datasets.
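The 50/50 split used to build the training and test datasets for one reafference condition might look like this. This is a sketch under stated assumptions: the function name is ours, and the paper's exact randomization is not specified beyond a random 50% selection.

```python
import random
from statistics import mean

def split_condition(durations, rng=random):
    """Split one array of bout (or interbout) durations for one
    reafference condition into random halves, and return the average
    value of each half: (training value, test value)."""
    idx = list(range(len(durations)))
    rng.shuffle(idx)
    half = len(idx) // 2
    train = [durations[i] for i in idx[:half]]
    test = [durations[i] for i in idx[half:]]
    return mean(train), mean(test)
```

Applied to all 18 bout-duration arrays and 18 interbout-duration arrays of a larva, this yields the per-condition training and test values described above.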
Distributions of the resulting parameter solutions are presented in Supplementary Fig. The parameters of the model were fitted to the training datasets obtained for each larva that participated in the acute reaction experiment, resulting in sets of model parameters, each optimized to fit one larva. To present the results, the mean ± SEM of the final output arrays of the models across all sets of parameters, and of the test datasets across all larvae, were computed.
Functional imaging experiments were employed in this study for two purposes: to test the main assumption of the feedback control model and to test whether the activity of PCs can represent the output of an internal model. Accordingly, there were two types of functional imaging experiments: whole-brain imaging experiments and PC imaging experiments. Both were performed using a custom-built light-sheet microscope (Figs.). For the whole-brain imaging, both a frontal and a lateral scanning beam were used. The piezo, galvanometric mirrors, and camera triggering were controlled by a custom-written Python program. The light-sheets were created by horizontal scanning of the laser beams at 800 Hz. The light-sheets and the collection objective were constantly oscillating along the vertical axis with a sawtooth profile with a frequency of 1.5 Hz and an amplitude of 250 µm. At each oscillation, 35 camera frames were acquired at equally timed intervals, with an exposure time of 5 ms. The resulting volumetric videos had a voxel size of 0.6 × 0.6 × 7 µm and a sampling rate of 1.5 Hz per volume.
For the PC imaging experiments, the frontal scanning arm was removed because the whole cerebellum could be illuminated using only the lateral beam. Larvae were continuously illuminated with the excitation blue light throughout the entire imaging sessions, which could potentially interfere with the behavior.
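As an illustration of the volumetric sampling scheme quoted above (1.5 Hz sawtooth, 35 equally timed frames, 250 µm travel), one can compute the approximate time and depth of each plane. This is only a sketch under stated assumptions; the instrument's exact timing is not described in the source.

```python
def plane_schedule(volume_rate=1.5, n_planes=35, amplitude_um=250.0):
    """Approximate (time, z-position) of each camera frame within one
    sawtooth oscillation of the light-sheet and collection objective."""
    period = 1.0 / volume_rate          # 1 / 1.5 Hz ~ 0.667 s per volume
    dt = period / n_planes              # equally timed frame intervals
    dz = amplitude_um / n_planes        # ~7 um plane spacing
    return [(i * dt, i * dz) for i in range(n_planes)]
```

The resulting plane spacing of 250/35 ≈ 7.1 µm is consistent with the 7 µm axial voxel size quoted in the text.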
However, behavior performed under the light-sheet microscope was overall comparable with behavior performed in the behavioral rigs. For example, normal reafference control larvae performed, on average, 12.1 ± 0.4 bouts per trial in the behavioral rigs and 13.5 ± 3.0 under the light-sheet microscope (mean ± SEM across larvae), and the shape of the bouts was similar.
For motion correction, imaging volumes were aligned using a procedure that emphasized image edges over absolute pixel intensity. Volumes for which the computed shift was larger than 15 voxels were discarded and replaced with NaN values. For subsequent registration of the imaging data to a common reference brain (see below), a new anatomical stack was computed for each animal by averaging the first 1000 frames of the aligned planes.
To analyze the functional imaging data, we first preprocessed them in Python following previously published methods. The imaged volume was segmented into regions of interest (ROIs) matching approximately the size of neuron somata (see the area criteria below); after segmentation, the fluorescence time-trace of each ROI was extracted by summing the fluorescence of all voxels assigned to that ROI. To segment the imaged volume into ROIs, a "correlation map" was computed, where each voxel value corresponded to the correlation between the fluorescence time-trace of that voxel and the average trace of eight adjacent voxels in the same plane. Then, based on the correlation map, individual ROIs were segmented in each plane with the following iterative procedure. Growing of each ROI was initiated from the voxel with the highest intensity in the correlation map among those still unassigned to ROIs, with a minimum correlation of 0.3 (the seed). Adjacent voxels were then gradually added to the growing ROI if eligible for inclusion. To be included, an adjacent voxel's correlation with the average fluorescence time-trace of all voxels assigned to the ROI up to that point had to exceed a set threshold.
The threshold for inclusion was 0.3 for the first iteration and increased linearly as a function of distance to the seed, up to a value of 0.35 at a 3 µm distance. Additional criteria for minimal and maximal ROI area (9–28 µm²) were applied. In addition, to correct for potential slow drift, the drifting baseline of each trace was computed by applying a low-pass Butterworth filter with a cutoff frequency of 3.3 mHz; this baseline was then subtracted from the trace. The traces were then z-scored for subsequent analysis.
Subsequent analysis steps were performed in MATLAB. To de-noise the traces, a low-pass Butterworth filter with a cutoff frequency of 0.56 Hz was applied to each trace. This frequency corresponds to the half-decay time of the calcium indicator GCaMP6s expressed by the imaged larvae; fluorescence oscillations at frequencies higher than this were unlikely to result from biological events.
To classify ROIs as sensory or motor, we first computed the mean values of the average grating-triggered fluorescence within the time window from 0 to 4 s after the grating onset, and of the average bout-triggered fluorescence within the time window from 0 to 2 s after the bout onsets. The obtained values were referred to as sensory and motor scores. To estimate the probability that the observed scores resulted from chance, we divided each trace into 84 sections, each 23 s long, randomly shuffled the sections 1000 times, and computed sensory and motor scores for each shuffling to build null-distributions of the scores. If the actual score was greater than the 95th percentile of the respective null-distribution, it was considered significant. ROIs were defined as sensory if they had a significant sensory score. If, in turn, an ROI had a significant motor score and a non-significant sensory score, it was defined as a motor ROI.
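The shuffle-based significance test for sensory and motor scores can be sketched as follows. This is illustrative only: the function names are ours, and a generic `score_fn` stands in for the grating- or bout-triggered averaging described above.

```python
import random

def score_is_significant(trace, score_fn, n_sections=84, n_shuffles=1000,
                         percentile=95, rng=random):
    """Test whether score_fn(trace) exceeds chance level.

    The trace is cut into n_sections equal sections, which are shuffled
    n_shuffles times; score_fn is applied to each shuffled trace to build
    a null-distribution, and the actual score is compared with the given
    percentile of that distribution."""
    actual = score_fn(trace)
    sec_len = len(trace) // n_sections
    sections = [trace[i * sec_len:(i + 1) * sec_len] for i in range(n_sections)]
    null = []
    for _ in range(n_shuffles):
        rng.shuffle(sections)
        shuffled = [x for sec in sections for x in sec]
        null.append(score_fn(shuffled))
    null.sort()
    cutoff = null[int(len(null) * percentile / 100) - 1]
    return actual > cutoff
```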
This additional criterion for the definition of a motor ROI was introduced because almost all sensory ROIs continued to increase their grating-triggered fluorescence during swimming bouts. As a result, many sensory ROIs had a significant motor score, so this parameter alone could not be used to define motor ROIs.
The second step was aimed at identifying whether some of the sensory ROIs integrate sensory evidence of the forward-moving grating in time. To this end, we fitted a leaky integrator model to the fluorescence traces of sensory ROIs by iterating over a range of time constants from 0.5 to 10 s, in 0.5 s steps (the sampling period duration), and identifying the time constant resulting in the highest correlation between the model and the actual trace. The leaky integrator trace was additionally convolved with a GCaMP6s kernel, modeled as an exponential function with a half-decay time of 1.8 s. Note that testing time constants shorter than 0.5 s was not possible due to the relatively low sampling rate (2 Hz); therefore, if a given ROI had an estimated time constant of 0.5 s, the true value lies between 0 and 0.5 s. ROIs with short time constants (≤1.5 s) were referred to as sensors, whereas ROIs with longer time constants were called integrators.
Affine volume transformations were computed to align the anatomical stacks from each larva to the reference brain. Computed transformations were then applied to each ROI to identify its coordinates in the reference space. To present the final ROI maps, binary stacks with ROIs of a given functional group were summed across larvae, and the maximum projections along the dorsoventral and lateral axes were computed.
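The leaky-integrator fitting procedure can be sketched as follows. This is a simplified discrete-time version under stated assumptions: the exact integrator equation and stimulus encoding used by the authors are not given in the text, and the function names are ours.

```python
import math

def leaky_integrate(stimulus, tau, dt=0.5):
    """Discrete leaky integrator: dy/dt = (-y + stimulus) / tau."""
    y, out = 0.0, []
    for s in stimulus:
        y += dt * (-y + s) / tau
        out.append(y)
    return out

def gcamp_convolve(trace, dt=0.5, half_decay=1.8):
    """Convolve with an exponential GCaMP6s kernel (half-decay 1.8 s)."""
    decay = math.log(2) / half_decay
    kernel = [math.exp(-decay * i * dt) for i in range(int(10 / dt))]
    return [sum(trace[t - i] * kernel[i] for i in range(min(t + 1, len(kernel))))
            for t in range(len(trace))]

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def best_time_constant(stimulus, fluorescence, dt=0.5):
    """Iterate tau from 0.5 to 10 s in 0.5-s steps and return the tau whose
    GCaMP-convolved leaky-integrator trace best correlates with the data."""
    taus = [0.5 * k for k in range(1, 21)]
    return max(taus, key=lambda tau: pearson(
        gcamp_convolve(leaky_integrate(stimulus, tau, dt), dt), fluorescence))
```

With synthetic data generated from a known time constant, the procedure recovers that constant, which is the sanity check one would run before applying it to real traces.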
To compare the location of ROIs assigned to the aforementioned functional groups across larvae and to present the ROIs in the context of gross larval zebrafish neuroanatomy, the imaging data were registered to a common reference brain using the free Computational Morphometry Toolkit. In addition, to identify the anatomical regions with experiment-related activity, the regions annotated in the Z-Brain atlas were registered to our reference brain using the same procedure.
If an ROI was assigned to one of the three aforementioned functional groups (sensors, integrators, or motor ROIs), it was referred to as an active ROI. To determine whether the anatomical location of active ROIs was consistent across larvae, we first formulated a null-hypothesis for each ROI: active ROIs that spatially overlap with this ROI in a population of larvae are equally likely to be sensors, integrators, or motor ROIs. Under this null-hypothesis, the probability of a given active ROI being assigned to any functional group is 1/3. We then tested this null-hypothesis for each ROI against the one-tailed alternative that a given ROI was more likely to be assigned to its actual functional group than to the other two groups, with a significance level of 5%. Rejection of the null-hypothesis can be interpreted as follows: in the brain region corresponding to this ROI, the probability of finding active ROIs of the same functional group in a population of larvae is greater than that of finding active ROIs of the other two groups. To test the null-hypothesis, we first calculated the number of larvae that had any active ROIs overlapping with the original ROI and the number of larvae that had an overlapping ROI assigned to the same functional group as the original ROI. Then, the probability of this observation given the null-hypothesis was inferred using maximum likelihood estimation.
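Under the stated null-hypothesis (each overlapping active ROI belongs to any of the three groups with probability 1/3), the one-tailed test reduces to a binomial tail probability. The sketch below is illustrative; the authors' exact inference procedure may differ.

```python
from math import comb

def consistency_p_value(n_overlapping, n_same_group, p_null=1/3):
    """One-tailed probability of observing at least n_same_group larvae
    with an overlapping active ROI of the same functional group, out of
    n_overlapping larvae, under the null probability of 1/3 per group."""
    return sum(comb(n_overlapping, k) * p_null**k * (1 - p_null)**(n_overlapping - k)
               for k in range(n_same_group, n_overlapping + 1))

def region_is_consistent(n_overlapping, n_same_group, alpha=0.05):
    """Reject the null-hypothesis at significance level alpha (5%)."""
    return consistency_p_value(n_overlapping, n_same_group) < alpha
```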
If this probability was less than 5%, the null-hypothesis was rejected and the anatomical region corresponding to the original ROI was concluded to be consistent across larvae.
PC functional imaging data were pre-processed in Python. Before entering the analysis pipeline, the data were previewed blindly with respect to experimental condition and behavioral performance. Data that showed any sign of drifting during the whole duration of the experiment were discarded, to avoid any confounding effect in the subsequent analysis. After this selection, N = 25 larvae were kept out of the original 50. Lag-trained larvae were further sub-divided into adapting (N = 9) and non-adapting (N = 8) as described in the "Behavioral data analysis" section.
Compared with the whole-brain data, cerebellum imaging data were smaller and PC labeling was sparser, so this dataset was better suited for signal extraction using Suite2p. Suite2p was used for plane-wise alignment of the data and ROI segmentation; after these steps, the raw extracted fluorescence was used in subsequent analyses, bypassing the spike deconvolution part of the Suite2p pipeline. Parameters used for the extraction were the Suite2p default values for 2p detection, except for the expected cell size and temporal resolution, which were adjusted according to the imaging settings. Manual curation was performed on each fish, blindly with respect to experimental group and behavioral performance, to exclude artifactual ROIs segmented from the skin visible in the imaging.
To correct for potential slow drift, each trace was passed through a high-pass Butterworth filter with a cutoff frequency of 3.3 mHz. The traces were then z-scored for subsequent analysis. We then modeled a motor regressor by convolving the binary variable representing whether the fish was performing a bout at each acquisition frame with a GCaMP6s kernel, modeled as an exponential function with a half-decay time of 1.8 s.
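Building the motor regressor by convolving the binary bout trace with a GCaMP6s kernel can be sketched as follows. The frame rate and kernel length are assumptions for illustration; only the 1.8 s half-decay is taken from the text.

```python
import math

def motor_regressor(bout_binary, frame_rate=2.0, half_decay=1.8, kernel_s=10.0):
    """Convolve a per-frame binary swimming variable with an exponential
    GCaMP6s kernel (half-decay 1.8 s) to obtain a motor regressor."""
    decay = math.log(2) / half_decay
    n_k = int(kernel_s * frame_rate)
    kernel = [math.exp(-decay * i / frame_rate) for i in range(n_k)]
    reg = []
    for t in range(len(bout_binary)):
        # Causal convolution: each bout frame adds a decaying exponential.
        reg.append(sum(bout_binary[t - i] * kernel[i]
                       for i in range(min(t + 1, n_k))))
    return reg
```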
Subsequent analysis steps are illustrated in Fig. We used the averaged bout-triggered responses to define four criteria for each trace, representing how much the responses changed during important transitions of the experimental protocol. Criterion 1 was computed as the difference between the first ten trials of the adaptation phase and the ten trials of the pre-adaptation phase. Criterion 2 was computed as the difference between the last ten trials and the first ten trials of the adaptation phase. Criterion 3 was computed as the difference between the first ten trials of the post-adaptation phase and the last ten trials of the adaptation phase. Criterion 4 was computed as the difference between the last ten trials and the first ten trials of the post-adaptation phase.
The obtained criteria were converted into 4-digit ternary barcodes, each digit corresponding to one criterion and taking the value "+", "0", or "−", using the following bootstrapping procedure. For each trace, we computed a null-distribution of criteria under the assumption that the observed changes in bout-triggered responses were not related to transitions in the experimental protocol. To build the null-distributions, the trials were randomly shuffled 100,000 times, and the criteria were computed in each shuffling repetition. If a given criterion was greater than the 97.5th percentile of its null-distribution for that ROI, it was replaced with "+"; if it was less than the 2.5th percentile, it was replaced with "−"; and otherwise with "0". This allowed us to assign a 4-digit barcode to each ROI, and all ROIs were categorized into clusters based on these barcodes.
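The barcode-digit assignment can be sketched with a reduced number of shuffles. This is illustrative only: the trial-response structure and function names are assumptions, and the paper uses 100,000 shuffles rather than the 1000 used here for speed.

```python
import random
from statistics import mean

def criterion(responses, block_a, block_b):
    """Difference between the mean responses of two blocks of trials."""
    return (mean(responses[i] for i in block_a)
            - mean(responses[i] for i in block_b))

def barcode_digit(responses, block_a, block_b, n_shuffles=1000, rng=random):
    """Assign '+', '0', or '-' to one criterion by comparing it against a
    null-distribution built from shuffled trial orders."""
    actual = criterion(responses, block_a, block_b)
    null = []
    shuffled = list(responses)
    for _ in range(n_shuffles):
        rng.shuffle(shuffled)
        null.append(criterion(shuffled, block_a, block_b))
    null.sort()
    lo = null[int(0.025 * n_shuffles)]
    hi = null[int(0.975 * n_shuffles) - 1]
    if actual > hi:
        return "+"
    if actual < lo:
        return "-"
    return "0"
```

Running `barcode_digit` for all four criteria of an ROI yields its 4-digit ternary barcode.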
We verified that the key finding of this experiment (the 0-0+ cluster) was not sensitive to which exact percentiles were used during bootstrapping. To present the final maps of 0-0+ ROIs, we first applied a 3D Gaussian blur with a standard deviation of 1 µm to the binary stacks with these ROIs. The stacks were then summed across larvae, and the maximum projections along the dorsoventral axis were computed. 0-0+ ROIs were detected more frequently in the medial cerebellum. Further information on research design is available in the Supplementary Information. Additional files: Peer Review File; Description of Additional Supplementary Files; Supplementary Movies 1, 2, and 3; Reporting Summary."}
{"text": "Understanding multimorbidity patterns is important in finding a common etiology and developing prevention strategies. Our aim was to identify the multimorbidity patterns of Taiwanese people aged over 50 years and to explore their relationship with health outcomes. This longitudinal cohort study used data from the Taiwan Longitudinal Study on Aging. The data were obtained from wave 3, and the multimorbidity patterns in 1996, 1999, 2003, 2007, and 2011 were analyzed separately by latent class analysis (LCA). The association between each disease group and mortality was examined using logistic regression. Four disease patterns were identified in 1996, namely, the cardiometabolic (18.57%), arthritis–cataract (15.61%), relatively healthy (58.92%), and multimorbidity (6.9%) groups. These disease groups remained similar in the following years. After adjusting for all the confounders, the cardiometabolic group showed the highest risk for mortality. This longitudinal study reveals the trend of multimorbidity among older adults in Taiwan over 16 years. Older adults with a cardiometabolic multimorbidity pattern had a dismal outcome. Thus, healthcare professionals should put more emphasis on the prevention and identification of cardiometabolic multimorbidity.
Taiwan has been an aging society since 2018 and is expected to become a super-aged society in 2025. Multimorbidity has been widely measured by disease numbers or severity. Latent class analysis (LCA) is a statistical procedure used to identify subgroups within populations that often share similar characteristics. However, no similar study has been conducted in Taiwan. Considering that Taiwan is one of the most rapidly aging countries worldwide, understanding its multimorbidity patterns over the years will help us understand the process and impact of aging. This study aimed to identify the disease patterns of Taiwanese people aged over 50 years and to explore their relationship to health outcomes through a population-based longitudinal study.
This longitudinal cohort study used data from the Taiwan Longitudinal Study on Aging (TLSA), which has been conducted by the Health Promotion Administration since 1989. The TLSA involves adults aged above 60 years residing in nonaboriginal townships of Taiwan. The respondents were followed every 3 to 4 years. Two fresh samples were added in 1996 and 2003 to maintain the representativeness of the younger age cohort and extend that of the cohort aged 50 years or more. This trend analysis obtained data from wave 3 and examined the multimorbidity patterns in 1996, 1999, 2003, 2007, and 2011 separately. Initially, 5130 individuals aged above 50 years were involved, and in 2011, 2420 individuals were included in the analysis.
The mortality rate was verified in 2012 using the Death Registration from the Ministry of the Interior in Taiwan.
This study assessed 12 diseases: hypertension, diabetes mellitus, coronary artery disease, stroke, cancer, lung disease, arthritis or rheumatic disease, hepatobiliary disease, renal disease (including stones), gout, hip fracture, and cataracts.
Participants were asked the following question: "Have you ever had the disease…?" If the answer was "No" or "I don't know", they were categorized into the disease-free group. Other variables were age, sex, income level, social participation, self-rated health, health behaviors, admission experience in the past 12 months, disability, and depression.
The level of income was determined by asking "Are you satisfied with your income?" The answer could be good (very satisfied/satisfied), fair, or poor (unsatisfied/very unsatisfied). Individuals who had paid work, did voluntary work, or participated in community activities were considered as having social participation. Moreover, individuals were divided into three groups according to self-rated health: good (very good/good), fair, and poor (poor/very poor). Exercise habits were divided into no exercise, ≤2 times, 3–5 times, and ≥6 times per week.
Activities of daily living were also assessed by asking whether participants could do the following tasks: bathing, taking off and putting on clothes, eating meals, getting up from bed, standing up from and sitting down on a chair, walking indoors, and going to the toilet. If they could not do any one of these tasks, they were considered disabled. In addition, depression was evaluated using the 10-item questionnaire of the Center for Epidemiologic Studies Depression Scale (CES-D). Each question was scored between 0 and 3, and the last two questions were reverse questions. A score above 10 points indicated depression.
Disease patterns were estimated by LCA. We chose the most appropriate number of groups based on lower Bayesian Information Criterion values and descriptively analyzed the demographic and clinical characteristics of each group. Continuous and categorical variables were assessed using analysis of variance and the Chi-square test, respectively.
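The CES-D-10 scoring rule described above can be written out as follows. This is a sketch: the function name is ours, and we assume the two reverse questions are recoded as 3 minus the raw score, which the text does not state explicitly.

```python
def cesd10_score(items):
    """Score the 10-item CES-D: each item is rated 0-3; the last two
    items are reverse questions (assumed recoded as 3 - raw score).
    A total above 10 points indicates depression."""
    if len(items) != 10 or not all(0 <= x <= 3 for x in items):
        raise ValueError("expected ten items scored 0-3")
    total = sum(items[:8]) + sum(3 - x for x in items[8:])
    return total, total > 10
```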
The relationship between disease patterns and mortality was examined by univariate and multivariate logistic regression. In the multivariate analysis, we classified all the covariates into sociodemographic factors, health behavior factors, and health status factors. Each time, we added one group of covariates to the model to observe the effect of adjusting for different dimensions.
The LCA was performed in PROC LCA 1.3.2, which was developed for SAS version 9.4 for Windows by the Methodology Center at Penn State. All the data were analyzed using SAS 9.4. A p-value of less than 0.05 was considered statistically significant.
In 1996, 5130 individuals were involved, with male predominance (53.8%) and a mean age of 66.7 years. We identified four disease patterns, namely, the cardiometabolic, arthritis–cataract, relatively healthy, and multimorbidity groups. These disease patterns remained similar in the following years. The baseline demographic characteristics showed higher rates of poor income satisfaction, poor self-rated health, admission experience, disability, and depression in the multimorbidity group than in the other groups.
In the univariate logistic regression analysis, all the variables, except for betelnut chewing, were significantly associated with mortality. Subgroup analysis of age among the different multimorbidity patterns in relation to mortality was also performed. There was a significant association between the cardiometabolic and multimorbidity groups and mortality among participants younger than 65 years. Gender had no effect on mortality in the different multimorbidity patterns according to the subgroup analysis.
In this population-based longitudinal study with LCA, we determined four disease patterns, namely, the cardiometabolic, arthritis–cataract, relatively healthy, and multimorbidity groups.
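For a binary exposure (e.g., membership in one disease group vs the reference group), the univariate logistic-regression association with mortality reduces to the odds ratio of a 2×2 table. The sketch below computes the odds ratio and its Wald 95% confidence interval; it is illustrative only, not the authors' SAS code, and the counts in the test are synthetic.

```python
import math

def odds_ratio_ci(exposed_dead, exposed_alive, unexposed_dead, unexposed_alive):
    """Odds ratio of mortality for an exposed vs an unexposed group,
    with a Wald 95% confidence interval computed on the log scale."""
    or_ = (exposed_dead * unexposed_alive) / (exposed_alive * unexposed_dead)
    se = math.sqrt(1 / exposed_dead + 1 / exposed_alive
                   + 1 / unexposed_dead + 1 / unexposed_alive)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```

An interval excluding 1.0 corresponds to a significant association at the 5% level, matching the p < 0.05 criterion used in the study.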
Comparing our findings with those of other countries is difficult because the population compositions and socioeconomic statuses are very different. Nonetheless, our findings and those from other countries have some similarities. For instance, the relatively healthy group is the majority, and its percentage is approximately 50–70%. Hypertension, diabetes, coronary disease, and stroke, which constitute the cardiometabolic group, coexist in many studies.
Our study also identified the arthritis–cataract group, which is not frequently seen in other studies. Only a few studies have investigated the relationship between eye diseases and arthritis. One study using data from the Irish Longitudinal Study on Aging found that eye diseases increased the risk of developing arthritis, whereby cataracts were the most significant.
Our stepwise multivariate analysis revealed that advanced age, male sex, smoking, poor self-rated health, admission in the past year, disability, and depression are risk factors for mortality. Interestingly, drinking was a protective factor in our analysis. Previous reports regarding the relationship between alcohol consumption and mortality were inconsistent.
This study is the first to use LCA to evaluate disease patterns in Taiwan. Moreover, it included a large nationwide, representative, and randomly selected population with extremely high response rates. Hence, the results are reliable and thereby applicable to risk stratification by policymakers and the development of effective health interventions.
However, this study has some limitations. First, all the variables obtained were self-reported. Although some questions, such as "Is your disease being diagnosed by doctors or treated with medications?", were added to improve accuracy, no medical records, blood tests, or images were utilized to confirm the diagnoses. Recall bias may also exist.
Second, the relationship between each disease pattern and mortality was surveyed over a long period, and the time-varying effect was not considered in the logistic regression model. Statistical methods accounting for time effects might be used in future studies.
This nationwide study identified four disease patterns in older people: the cardiometabolic, arthritis–cataract, relatively healthy, and multimorbidity groups. The cardiometabolic group showed the highest risk for mortality. Thus, improving prevention strategies for cardiometabolic diseases with proper interventions should be emphasized in the future."}
{"text": "As well as causing a global health crisis, the COVID-19 pandemic has also generated multilevel social changes by damaging psychosocial and economic resources across Iranian society. Therefore, this qualitative study was conducted to examine and explain these social consequences and their impact on the social capital of Iran during the COVID-19 outbreak. Using a content analysis approach, nine experts participated in semistructured, in-depth interviews. Interviews were recorded, transcribed verbatim, and analyzed using Lundman and Graneheim's method. The social impacts of COVID-19 can be summarized into six categories and 32 subcategories. Three categories with both positive and negative aspects emerged from the data analysis: "formation of new patterns of social communications; formation of new patterns of behavior; creation of economic changes." Three entirely negative categories included "creating a climate of distrust; disruption of cultural, social, and religious values; psychosocial disorders." Overall, most findings (27 of 32 subcategories) indicated the destructive effects of the COVID-19 pandemic on social capital, raising concerns about the endangerment of social capital in Iran. However, the positive social impacts can guide policies that strengthen social action and improve social capital.
Not long after the reporting of a severe respiratory illness in Wuhan, China, in December 2019, the world found itself facing a major health shock, with the disease rapidly spreading to many countries. Social capital is built through social participation, social trust, and trust between government and citizens. Iran is a developing country, and the COVID-19 pandemic occurred in the midst of an existing economic crisis in the country. International sanctions against Iran slowed economic growth rates (−4.99%) and increased inflation rates (30.52%) in 2020.
The social effects of past epidemics have been variously considered in previous studies. It should be noted that the main focus of these studies has not been on social capital; rather, its positive role in improving prevention measures in society has been emphasized. A review of studies shows that, at the time of the Ebola outbreak, strong leadership and the strengthening of social capital were among the factors that strengthened bonds in society and trust in the health system, which helped alleviate the shock of Ebola.
A search of the PubMed, Science Direct, and Google Scholar databases up to May 15, 2020, indicated no study that had examined the social effects of the COVID-19 pandemic via a qualitative approach in Iran. Since the social consequences of COVID-19 are wider and more persistent than those of previous epidemics, and tools for quantitative study have not yet been developed, this qualitative study aimed to explore the social implications of COVID-19 in Iran, especially the status of social capital.
This study was carried out using a qualitative content analysis approach in order to explore the views of experts and faculty members from different fields affiliated with the Shiraz University of Medical Sciences, with reference to the social consequences of the COVID-19 outbreak in Iran.
Shiraz University of Medical Sciences, the second-largest university in Iran, with a total of 14 faculties, 780 faculty members, 10,000 students, 11 hospitals, and 50 healthcare centers, can thus be considered a good setting for this study. This research was reported according to the Standards for Reporting Qualitative Research (SRQR).
A purposive sampling method with maximum variation was used to recruit the experts. They were identified based on diversity of age, gender, field of study, and scientific rank. Inclusion criteria were being a faculty member in one of the schools affiliated with the University of Medical Sciences, having at least three years of work experience, and willingness to participate in the study. Exclusion criteria were refusal to participate in the interview and dissatisfaction with the interview being recorded. In this study, semistructured, in-depth interviews were used as the data collection strategy. The face-to-face interviews were conducted at the participants' workplaces according to their preferences. The first author, a doctoral student in health promotion, conducted the interviews and then transcribed them verbatim (M. MJ). The second author, an associate professor of health promotion, reviewed and audited the interview process (MH. K). Both authors were actively involved in data analysis and the extraction of codes and categories.
During the interviews, participants first answered an open-ended question: "What do you think are the social consequences of COVID-19 prevalence?" A follow-up probing question was also used to clarify the participants' answers: "Could you please explain more about your response?" In total, each interview lasted approximately 45 minutes, and participants gave their permission for audio recording. Field notes were taken, and interviews were audio-recorded and transcribed verbatim. Data analysis was performed immediately after each interview.
Interviews continued until operational and theoretical data saturation was reached. Operational data saturation meant that most codes were obtained in the first interviews, and the number of new codes showed a decreasing trend in subsequent interviews.
Guba and Lincoln's criteria, known as credibility, dependability, transferability, and confirmability, were utilized to establish trustworthiness.
The present study is the result of an approved research project at the University of Medical Sciences. In this study, ethical and fiduciary principles were observed in the use of resources and data collection. Written informed consent was obtained for the interview and its recording, and no compulsion was applied to participants to continue in the study. The confidentiality of the information was also ensured.
In the present study, nine experts' views on the social consequences of the COVID-19 outbreak were explored.
This category is divided into positive and negative aspects of new communication patterns. The positive side involves strengthening the virtual communication network, and the negative aspect is the disruption of interpersonal and social relationships and partnerships.
The COVID-19 outbreak drew people's attention to the capacities of information and communication technology. People increasingly used these technologies, especially social media, for social interaction and economic transactions. One of the participants said: "…In the Corona epidemic, staying at home led to more use of cyberspace, and now the source of information for Iranian families has become virtual communication networks..." (P2)
According to most interviewees, this epidemic has reduced the quantity and quality of interpersonal relationships.
Social isolation, disintegration of groups, reduced participation, and reduced teamwork are other factors that have damaged the communication network. \u201c\u2026the result of this situation is the reduction of people's presence in society, reduction of their social participation, and even disintegration of social groups such as elderly or young people's groups in a neighborhood...\u201d (P1) This category consists of two parts: a negative aspect, the emergence of abnormal social behaviors, and a positive aspect, the emergence of self-sacrificial behaviors. Experts stated that increased irrational mass behavior, social stigma, individualistic behaviors, and an increased risk of delinquency are abnormal social behaviors. \u201c\u2026We're getting to a point which probably creates a group excitement toward shopping, patients, quarantined locations, and so on.\u201d (P8) \u201c\u2026you can walk 100 meters from the college, and more than a hundred disposable gloves and paper napkins have been dropped without any protection...\u201d (P6) \u201c\u2026consider a person whose job is a food driver, and restaurants have been closed these days.
This person, who owns only a motorcycle, has no other choice but to commit delinquency...\u201d (P7) The emergence of self-sacrificing behaviors is another new pattern of behavior that appeared during the COVID-19 outbreak. \u201c\u2026I recently saw a clip of a woman in Lorestan taking a bottle of alcohol and a cloth in her hand and disinfecting an ATM near her place of residence...\u201d (P2) This category also includes two parts: its positive part is the prosperity of virtual business, and its negative aspect is the creation of economic anomalies. Increasing online sales and services, as well as giving and receiving scientific and technical consulting services, is a sign of virtual business prosperity: \u201c\u2026because of being at home, the amount of online shopping has increased, and banking transactions are done through mobile apps...\u201d (P1) \u201c\u2026These days, people prefer to use online and telephone counseling services in various fields of education, technology, health...\u201d (P3) Poverty, unemployment, an increasing burden on society's resources, and a climate of looting and hoarding are among the economic anomalies created by the COVID-19 outbreak. \u201c\u2026What is going to happen to those who are unemployed and have no economic support?\u201d (P8) \u201c\u2026People are afraid to buy; they are afraid of getting infected. It has caused a lot of damage, especially to middle-class people...\u201d (P5) \u201c\u2026We are faced with an unreasonable demand of bulk-buying toilet paper, masks, soap, and disinfectant.
Undoubtedly, the government will be in trouble providing hospital space and medical supplies...\u201d (P8) According to experts, one of the social effects of the COVID-19 outbreak is the creation of a climate of mistrust among citizens. They acknowledged that this climate of distrust stems from distrust of authorities' honesty, the mass media, and service guilds, as well as skepticism of people toward each other. \u201c\u2026I think the creation of a climate of uncertainty among the people is because of uncertainty toward the officials...\u201d (P6) \u201c\u2026People distrust mass media because of internal media's dishonesty and foreign media's misrepresentation...\u201d (P3) Participant No. 1 also believed: \u201c\u2026there are uncertainty and suspicion in the interactions between people and organizations and service units. For example, people are skeptical about whether they regularly disinfect the restaurant's work environment and tools...\u201d (P1) From the experts' points of view, some of the problems following the COVID-19 outbreak are the creation of generational, social, religious, and structural gaps, as well as the disruption of society's cultural and historical ceremonies, which can reduce social convergence. \u201c\u2026only doubt about a person's sickness may be an excuse to reduce contact with her or him, which is harmful to social convergence and social integrity...\u201d (P8) Participant No. 8 also said about the generation gap: \u201c\u2026These days, the parental obsession towards hygiene adherence, and the characteristics of teenagers who don't feel afraid of anything, can cause parent-and-child challenges...\u201d One participant described a religious gap: \u201c\u2026Some religious people agree that people should currently leave religious mass ceremonies based on scientific reasons. Other religious groups insist on holding mass ceremonies.\u201d (P7) Another participant mentioned an example of a structural gap: \u201c\u2026in parliament, a bag of gloves and disinfectants was given to the deputies, while at the same time, the people did not have these supplies; I think this caused a deeper gap between the officials and the people...\u201d (P9) Another concern of experts is the disruption of cultural and historical ceremonies. As participant No. 4 said: \u201c\u2026because of Corona, we do not have the excitement of the New Year, we do not have family gatherings, and we do not have goldfish on the Eid-e Nowruz table...\u201d According to experts, social phobia, stress, low self-efficacy, and obsession are psychosocial disorders that people experience following the COVID-19 epidemic. \u201c\u2026When a disease spreads, people experience generalized anxiety and phobia. One of the reasons is that people are afraid about who should take care of them well\u2026\u201d (P2) \u201c\u2026feelings of inefficiency are one of the psychological consequences due to the contradiction of the messages. Obsession can also be caused by overuse of disinfectants, gloves, and masks...\u201d (P7) The current study results showed a range of positive and negative social consequences of the COVID-19 outbreak; in comparison, the frequency of negative and destructive outcomes outweighs the positive ones. The higher proportion of the disease's negative consequences also raises doubts about the endangerment of social capital. According to the results, the COVID-19 epidemic has damaged social networks and interpersonal relationships and reduced participation and teamwork in Iranian society.
Since social capital refers to aspects of social structure that facilitate engagement and collaboration for mutual benefit and the achievement of group goals \u201343, it can be concluded that this damage threatens the social capital of Iranian society. The formation of new patterns of behavior is also one of the most interesting social consequences of the COVID-19 outbreak. Self-sacrificial behaviors are an example of the changing valuable behaviors that Iranian society has witnessed. On the other hand, abnormal social behaviors, such as increased irrational mass behaviors, social stigma, individualistic behaviors, and increased delinquency, have also become more prevalent following the outbreak of COVID-19 in Iran and have damaged social capital. Ling and Ho reached the same conclusion: during the COVID-19 pandemic, most people behave selfishly and opportunistically to gain maximum personal benefit, even if it endangers others. Positive economic changes, such as the prosperity of virtual business in the form of online sales and services and the giving and receiving of scientific and technical consulting services, were able to improve the financial situation of some Iranian people during the economic crisis and the COVID-19 outbreak. On the other hand, the majority of Iranian people are caught in a circle of poverty and unemployment, while looting, hoarding, and panic-buying have caused a decrease in public trust and social cohesion. Since the components of social capital, i.e., trust, relationships, and social networks, can be transformed into economic capital, these economic anomalies also erode social capital. One of the findings of this study is the emergence of a climate of mistrust following the COVID-19 epidemic in Iran, which occurred due to a lack of transparency and ill-considered strategies in the government and mass media.
Trust in governments' honesty and in interpersonal networks plays an essential role in building social capital and in encouraging health-protective behaviors among people , 49, 50. According to the findings, the generational, social, structural, and religious gaps and the disruption of society's cultural and historical ceremonies are also among the social consequences of the COVID-19 outbreak, which has led to the breakdown of sociocultural values. Different risk perceptions of COVID-19 between younger and older Iranian people, as well as some religious people's opposition to the cancellation of ceremonies and gatherings, have resulted in reduced solidarity and social support and, eventually, in damaged social capital. Likewise, the inequitable availability of more facilities for officials and social conflict between tribes and communities have added to these deleterious consequences. Several researchers, in accordance with this finding, have stated that damage to social capital is due to damage done to the reciprocal norms, values, and attitudes that establish the importance of citizenship, civilization, and civic ethics , 52. Social phobia and stress following COVID-19 are above the Iranian people's tolerance level, as they had already faced high levels of pressure due to international sanctions, inappropriate policies, and political and social crises over the previous years. These crises have disrupted society's mental structure and caused reductions in people's self-efficacy and self-control. Indeed, decreasing resilience, adaptation, and social support in a community can lead to social isolation, reduced participation, and finally decreased social capital. Overall, this study concludes that the COVID-19 outbreak has a variety of social consequences. A reflection on the higher rates of negative outcomes across social capital components indicates that the COVID-19 outbreak could jeopardize the social capital of the Iranian people.
Of course, factors such as cultural differences, social and organizational capacities, management and leadership patterns, and social media's performance can moderate this epidemic's effects on social capital over time. It is possible to address these challenges and problems through continuous training and acculturation, transparency of organizations, increasing trust, and a sense of belonging to the community. Considering the unfavorable social results of the COVID-19 outbreak in Iran and its destructive effects on social capital, it is recommended that a comprehensive document of realistic solutions be developed and implemented with the participation and cooperation of officials and healthcare providers. Positive social impacts can guide policies that strengthen social action and, consequently, improve social capital. It is also suggested that the impact of the social outcomes of the COVID-19 outbreak on social capital be determined, in order to adopt correct preventive policies and strategies and to improve social development processes. The overall findings of this study seem to be relatively generalizable to developing countries in conflict or crisis situations. However, more accurate judgments will require additional comparative research. There may be limitations to the generalizability of these findings, as the social and economic conditions of Iran at present are unique and may differ from those of other countries. In addition, the participants were not selected randomly, which can limit the scope of the generalizability of the findings."} +{"text": "Social distancing restrictions for COVID-19 epidemic prevention have substantially changed the field of youths\u2019 social activities. Many studies have focused on the impact of epidemic-preventative social distancing on individual physical and mental health.
However, under social distancing for epidemic prevention, how do youths\u2019 interpersonal resources and interactions change their anti-epidemic actions and states? Responding to this question by studying the impact of the elements of social capital on youths\u2019 anti-epidemic actions and states could help identify an effective mechanism for balancing social distancing for effective epidemic prevention with sustainable social-participation development among youth. Bourdieu\u2019s field theory holds that the elements of social capital change with a change in the field. Therefore, we introduced the specific elements of social capital as independent variables and used a multinomial logistic model to analyze and predict the levels of youth anti-epidemic action through an empirical investigation of 1043 young people in Guangdong Province, China. The results show that, first, the level of social distancing for epidemic prevention differs by occupation status and income level and correlates with social support. Second, social support and social norms play positive roles in promoting youth participation in anti-epidemic activities when social distance is held constant. Third, social capital has a significant positive effect on youth social satisfaction and core relationships; however, social trust has a significant negative effect on youth physical and mental health. This study emphasizes that social distancing for epidemic prevention is a special social situational state: a field where social capital shapes the differential changes in the public-participation actions and habitus of youth.
With COVID-19\u2019s characteristics of strong infectivity, potential asymptomatic infection, and high variability, staying at home and social distancing have become the main strategies to reduce the risk of human-to-human transmission during the epidemic. During the anti-epidemic period in China, the strict enforcement of maintaining \u201csocial distance\u201d brought great challenges to people\u2019s everyday living conditions. People had to change their daily habits, especially concerning interpersonal communication, and adapt to new social norms. Youth in China cooperated with the government\u2019s anti-epidemic policy in various unique ways of interpersonal interaction, which has garnered widespread interest. As a generation of active Internet users, the youth can find epidemic information online, including through various social media platforms, to make suggestions for COVID-19 prevention in their communities, enrich community and rural life using network videos and social platforms, and become propagandists and advisers for middle-aged and older adults. By changing their own \u201chabitus,\u201d such as by giving up frequent outdoor activities, eliminating group gatherings, and adapting to a new form of Internet learning, young people have altered their time and space needs for epidemic prevention in China. In the epidemic context, this includes cooperating with and responding to the government\u2019s anti-epidemic policy, which can be regarded as youth social-participation action. However, what are the factors that allow young people to cope with the changes brought about by social distancing? Under the social distancing rules, what effects do their interpersonal resources and interaction patterns have on their anti-epidemic state?
Responding to these questions by studying the factors that influence youths\u2019 anti-epidemic actions and anti-epidemic status could help identify an effective mechanism for balancing social distancing for effective epidemic prevention with sustainable social-participation development among youth. Bourdieu (1986) and Coleman (1988) successively advanced the concept of social capital in the 1980s, while Granovetter (1973), Lin (2001), and Burt (1992), respectively, developed the concept from the perspectives of relationship strength, relational resources, and social network structure [3]. Hypothesis\u00a01\u00a0(H1). Social distancing is significantly associated with social capital among youth in the epidemic context. Bourdieu believes that capital is the key for actors to compete in a field, and that the quantity and structure of actors\u2019 capital have crucial effects on their position and role in that field. Through the concept of \u201cepidemic prevention social capital,\u201d Bian and his colleagues (2020) discussed the impact of changes in cohesive and external social capital on the epidemic prevention effect under social distancing conditions, emphasizing that, under effective isolation, the higher a family\u2019s epidemic prevention social capital, the better its performance of epidemic prevention social behaviors, and the better the anti-epidemic effect. Hypothesis\u00a02\u00a0(H2). Social capital has a significant positive impact on anti-epidemic action among youth. Social capital is multidimensional, with different types, and can produce positive and negative externalities ,15,16. \u201cSocial norms,\u201d \u201csocial trust,\u201d \u201csocial support,\u201d and \u201csocial connection\u201d are the core elements of the concept of social capital ,18.
Although social organizations that incorporate some formal or informal relationships can gain access to key social resources, these social networks are not built spontaneously but are constructed through investment strategies oriented toward the institutionalization of group relationships. Many studies have focused on the impact of epidemic-preventative social distancing on social connections. Hypothesis\u00a03\u00a0(H3). Different elements of social capital have a significant positive impact on youth anti-epidemic state. In this study, the research group members invited young people aged 15\u201335 in Guangdong Province to fill out questionnaires, which included 28 questions, through group chats and the Moments function of WeChat from June 4 to June 11, 2020. We used a simple random sampling method to collect data online via WeChat. Respondents who voluntarily took part in the questionnaire survey gave consent for their data to be used in the research when they participated in the study. In total, 1043 online questionnaires were collected, with an average response time of 5 min (308.49 s). Questionnaires with intentionally wrong or random answers were screened out, and 858 valid questionnaires were retained, a recovery rate of 82.5%. The National Bureau of Statistics of China regards \u201cyouth\u201d as individuals between 15 and 35 years of age. We measured social distancing according to the social distancing strategy of the \u201cSix Sets of Guidelines on Disease Prevention: For General Use, Tourism, Households, Public Places, Public Transport and Home Observation.\u201d For this study, we developed a scale of social participation, which was divided into \u201cgeneral participation\u201d and \u201cspecial participation\u201d.
This scale, developed by Diener and later revised, was used to measure satisfaction. The Social Capital Assessment Tool (SCAT) is the earliest systematic tool for measuring social capital; some scholars have improved upon it, and the new version is called the Adapted SCAT (A-SCAT). This study refers to Putnam\u2019s tool for measuring macro social capital, which measures social capital along the dimensions of social network, norms, and trust. For occupation status, this study referred to the occupation indicator of individual socioeconomic characteristics in Bian\u2019s study. First, we used confirmatory factor analysis (CFA) to test the theoretical construction dimensions of social capital in the field of epidemic-preventative social distancing. Second, this study explored the association between social distance and several elements of social capital among youth with various individual characteristics by using correlation analysis to test hypothesis 1. Third, we used a multinomial logistic regression model and the stepwise method to test hypothesis 2, exploring the impact of social capital on youths\u2019 anti-epidemic actions in the field of epidemic-preventative social distancing. Finally, the study tested hypothesis 3 through multiple linear regression, investigating the impact of social capital on youths\u2019 anti-epidemic states. SPSS 21.0 was used for the descriptive statistical analysis, analysis of variance, correlation analysis, and regression analysis; the CFA was conducted using SPSS AMOS 22.0. To test the fit between the overall measurement model of social capital and the sample data, AMOS was used for confirmatory factor analysis. The parameters of the four-factor model showed that the model\u2019s goodness of fit was as follows: chi-squared = 208.591, p = 0.000 < 0.05; CFI = 0.995 > 0.9, TLI = 0.986 > 0.9, RMSEA = 0.030 < 0.05 ,37). Most youth engaged in general participation (N = 572, 67%), which was in response to the government\u2019s call to stay at home and cooperate with anti-epidemic action. Social distance was mainly measured by the maximum duration of staying at home and the frequency of going out, with an average value of 7.377, a high level, indicating that young people could maintain a relatively long period of social distancing during the epidemic. Regarding social capital, the mean values for social norms (13.89), social connection (16.14), and social trust (10.78) were high, and there were no significant differences among individuals, indicating that youth could observe the social norms of epidemic prevention, such as wearing a mask, washing their hands frequently, and not spreading rumors. Interactions with family at home and online communication with core relationships were more centralized and stable, and youth reported a higher level of trust as epidemic information was shared between family members and friends, which is consistent with the strong-relationship culture in China. However, the level of social support was relatively low (mean = 3.73), and the individual difference was large (SD = 1.892), which will be discussed in more detail below. We analyzed the association between social capital and social distance among youth with various individual features by using correlation analysis. Social distance was significantly correlated with social support (p = 0.045) and was not significantly correlated with other elements of social capital. Social distance also showed a significant correlation with gender, age, occupation status, and income level. Social support and social trust under social capital showed significant correlations with educational level.
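The correlation step reported here can be illustrated with Pearson's r computed from scratch. The respondent scores below are made-up stand-ins for the survey's social-distance and social-support variables, not the study's data:

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation coefficient (no external dependencies)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five respondents: the toy social_support values
# are an exact linear function of social_distance, so r comes out as 1.0.
social_distance = [7, 8, 6, 9, 5]
social_support  = [3, 4, 2, 5, 1]

r = pearson_r(social_distance, social_support)
print(round(r, 3))  # → 1.0
```

In practice a library routine that also returns the p-value (as reported in the article, p = 0.045) would be used; the sketch only shows the coefficient itself.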
The p-value of the likelihood ratio test was less than 0.05, and the significance test of the regression equation was passed, which shows that the model was reasonable. The dependent variable of Models 2 to 9 was the effect of youth anti-epidemic action, that is, youth anti-epidemic state, measured along four dimensions: studying and working state, physical and mental health state, social satisfaction, and core relationship state. A group of nested models was used for each dependent variable to test whether social capital had an independent effect on youth anti-epidemic state in the field of social distance. The results of the multinomial logistic regression analysis for Model 1 showed that the likelihood-ratio test indices for social distance (p = 0.046), social support, educational level, and occupation were significant, indicating that these four variables added to the model had an effect on youth anti-epidemic action. The regression equations were constructed using the stepwise method; the variables of age and monthly income were deleted, indicating that they were not suitable for the equation model. Equation (1) shows that, compared to the general participation level among youth, the lower the social support, social norms, educational level, and occupation state, the greater the chance that young people would not participate in anti-epidemic action. Social connection and social trust increased the probability of nonparticipation by 0.085 and 0.068 units, respectively; however, the difference was not statistically significant.
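A multinomial logistic model of the kind used for Model 1 assigns each respondent a probability for every participation level (nonparticipation as the reference, general, special) via a softmax over one linear score per category. A minimal sketch of that probability form, with invented coefficients and feature values rather than the study's fitted estimates:

```python
import math

def multinomial_probs(x, betas):
    """Softmax over one linear score per outcome category.

    x     : feature vector, e.g. [social_support, social_norms, 1.0] (1.0 = intercept)
    betas : one coefficient vector per category; the reference category is an
            all-zero vector, as in a standard multinomial logit.
    """
    scores = [sum(b * xi for b, xi in zip(beta, x)) for beta in betas]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical coefficients for the three levels of anti-epidemic action:
# nonparticipation (reference), general participation, special participation.
betas = [
    [0.0, 0.0, 0.0],     # reference category
    [0.20, 0.10, 0.5],   # general participation
    [0.40, 0.15, -0.3],  # special participation
]
x = [4.0, 14.0, 1.0]     # [social_support, social_norms, intercept term]

probs = multinomial_probs(x, betas)
assert abs(sum(probs) - 1.0) < 1e-9   # probabilities sum to one
print([round(p, 3) for p in probs])
```

With these toy coefficients, higher social support raises the score of the special-participation category fastest, mirroring the direction of effect the article reports.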
When social distance and social capital were constant, the higher the educational level and occupation status, the higher the probability that youth would not participate in anti-epidemic action at a general participation level, for which educational level was statistically significant. From Equation (2), when social distance, educational level, and occupation status were constant, obtaining social support, abiding by social norms, and maintaining social connections could increase the probability of youth moving from general participation to special participation by 0.198, 0.026, and 0.052 units, respectively. Among these, social support was statistically significant. Social trust reduced the probability of the corresponding special participation action by 0.003 units, which was not statistically significant. Similarly, the higher the educational level and occupational status, the higher the probability of special participation, for which occupational status was statistically significant. Model 1 partly supported Hypothesis 2, and the results showed that maintaining social distance during the epidemic increased the probability of general participation in anti-epidemic actions and of special social participation. When social distance was constant, the effect of social capital on youth anti-epidemic participation varied across its elements. Social support and social norms had significant positive effects on youth action from nonparticipation to general participation, with the effect of social support on the probability of special participation being especially significant. Compared to general participation, social connection increased the probability of youth not participating in anti-epidemic action and of participating in special anti-epidemic action; however, this was not significant.
In addition, regarding individual characteristics, educational level had a significant positive effect on promoting youth from nonparticipation to general participation, while occupational status, as an individual characteristic of social stratification with a stable positive correlation with social capital, had a significant positive effect on increasing the probability of special participation among youth. Thus, under the social distancing conditions of the epidemic, different elements of social capital had differing effects on youth anti-epidemic action. To identify a mechanism that would not only ensure the effects of epidemic prevention but also allow youth to maintain a healthy social life, we used Models 2 to 9 to explore the role that social capital played in the normalization of the epidemic situation by analyzing the influence of social distance, social capital, and anti-epidemic action on youth anti-epidemic state. In addition, monthly income and educational level had an independent impact on the effect of youth anti-epidemic action. The higher the monthly family income, the better the working and learning state, as well as the physical and mental health state, among youth. Notably, the higher young people\u2019s educational level was, the lower their social satisfaction with \u201cprevention and control measures in the community,\u201d \u201carrangements for stopping classes,\u201d and \u201carrangements for resuming work and classes.\u201d Bourdieu believes that analysis of an actor and their behavior not only needs to start from the macrolevel social environment but also needs to understand the actor\u2019s field and its capital and habitus. Analyzing the habitus of the actor in the field can clearly show how various forms of capital contend with each other and can also explore the reasons behind the actors\u2019 behavior.
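Models 2 to 9 relate each anti-epidemic-state dimension to the predictors by linear regression. A one-predictor least-squares sketch of the underlying fit, using made-up scores rather than the study's data:

```python
def ols_slope_intercept(x, y):
    """Simple least-squares fit y ≈ a + b·x (one predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    b = num / den          # slope: covariance over variance
    a = my - b * mx        # intercept through the means
    return a, b

# Hypothetical data: social support (x) vs. social satisfaction (y).
# The toy y is exactly 2x, so the fit recovers intercept 0 and slope 2.
x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]

a, b = ols_slope_intercept(x, y)
print(a, b)  # → 0.0 2.0
```

The actual study fits several predictors at once (multiple linear regression) and examines each coefficient's significance; this sketch shows only the single-predictor core of that machinery.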
Therefore, this study aimed to empirically explore the social distancing field of anti-epidemic action and how different social capital factors affect youth anti-epidemic behavior and habitus. According to the analysis of the survey data, young people in the main cities of Guangdong could maintain social distancing for a long time during the epidemic period, and social distancing among youth showed significant social class differences in occupational status and income. Regarding social distancing for epidemic prevention, the main form of young people\u2019s anti-epidemic action was general participation in staying at home and cooperating with anti-epidemic policies. Youth demonstrated an overall good anti-epidemic state; however, there were some individual differences. By introducing the specific elements of social capital, we found that social support and social norms had significant positive effects on young people\u2019s participation in the anti-epidemic campaign, while their habitus of life was also influenced by different elements of social capital during the epidemic period. Bourdieu posits that the amount and structure of the capital that actors own can determine their position in the field, and that the rank of that capital will vary with changes in the field. Moreover, our analysis found that in the case of epidemic social distancing, in addition to occupation status, social support and social norms play a positive role in promoting youth participation in anti-epidemic activities. When young people can obtain information about the epidemic situation from various channels of formal networks, such as the government and community, they can cooperate with home epidemic prevention, social distancing, and other anti-epidemic actions. Additionally, as the main channel of \u201chuman resources,\u201d strong ties in informal networks have personal inclusiveness.
Specifically, they show understanding and tolerance of social distancing and reduced social interaction during the epidemic period but still provide relevant human resources. The regression analysis of youths\u2019 anti-epidemic state in this study is, in effect, an analysis of actors\u2019 habitus in the social distancing field; by exploring the effects of social capital and anti-epidemic action on the anti-epidemic state, it can clearly show how various actors\u2019 social capital contends and can comprehensively explain the mechanism of youth anti-epidemic action. Based on the analysis results, the discussion points are as follows. First, social capital had a significant positive effect on social satisfaction and core relationship state among youth. The results showed that the social norms of \u201cwearing a mask, washing hands frequently, and not spreading rumors,\u201d social support from family and friends, and social trust built through family interactions, online communication, and the sharing of epidemic information could enhance social satisfaction and the harmony of core relationships among young people. Owing to the popularization of information network technology in China and the timely disclosure of epidemic information by the government, information resources were disseminated broadly through family interactions and social links in online exchanges within informal networks (core relationships). This improved young people\u2019s social trust in epidemic information and motivated them to adhere more closely to social norms, thereby enhancing the effect of epidemic prevention. However, the long duration of staying home for epidemic prevention meant young people spent more time with their families, although they could not work or study normally, and increased their online contact with relatives and friends.
In this case, social distancing generally did not affect social contact, which not only promoted young people\u2019s cooperation with the government\u2019s epidemic prevention policies but also strengthened their core relationships; therefore, the epidemic prevention effect improved. This is in line with the positive role of social capital in general: the intake and mobilization of social resources to enhance effective behavior and obtain better social support. It shows that during the epidemic period, strong ties may have been more inclusive: they help maintain harmonious relationships without social activities such as meetings and gatherings, while also providing social support for young people to fight the epidemic. It can be seen that China\u2019s strong-relationship culture played an important role during the COVID-19 epidemic. Second, social trust within social capital had a significant negative impact on the physical and mental health of young people. In this study, social trust was mainly measured by youths\u2019 trust in epidemic information conveyed between family and friends, and epidemic information may cause people to experience a certain degree of panic, anxiety, and other negative emotions, which is in line with previous studies ,39. Third, youth anti-epidemic action had a significant negative impact on physical and mental health. Young people\u2019s participation in anti-epidemic activities was generally manifested as staying home for epidemic prevention, cooperating with epidemic prevention and control measures, and other general participation activities. Although young people spent more time interacting with their families, they also reduced the time spent in normal social interaction, which inevitably had a negative impact on their physical and mental health, such as anti-epidemic fatigue, decreasing anti-epidemic action, and depressive symptoms.
The effects of anti-epidemic action on the studying and working state and on social satisfaction among youth were not significant. Owing to online teaching and remote work, promoted through the government's advocacy of "suspended classes, ongoing learning" and the "orderly resumption of work and production," young people could not only stay at home to prevent the spread of the epidemic but also achieve a reasonable balance with work and study. The emergence of social distancing in epidemic prevention can be seen as a change in the original activity field of youth, which changes not only the geographical field but also social network links and relationships, rules of action, and resource support. This study emphasized that social-class differences in social distancing reflect not only differences in youth social capital but also the prominent role of different elements of social capital in this field. The social distancing restrictions of home epidemic prevention substantially changed the original field of youth social activities and should have posed a major challenge for youth social interaction. However, our research showed that youth cooperation with, and participation in, home epidemic prevention was at very high levels. The long-term social support model dominated by strong ties played an important role in youths' anti-epidemic action and social satisfaction. The anti-epidemic actions and the evenly distributed access to epidemic information had varying degrees of negative effects on youths' physical and mental health.
However, strict and effective epidemic prevention guidelines and the reconstruction of social order through social norms and Internet-based social connections can compensate for the discomfort brought by social distancing and anti-epidemic actions. Therefore, social distancing for epidemic prevention is a special social, situational state, and a field in which social capital shapes the differential changes in youths' public participation actions and habitus. This study helps further explain the behavioral choices of youth in combating the epidemic, and even in participating in public policy. This study discussed how social capital in the specific field of "social distancing" affected youths' anti-epidemic action and its state against the background of Chinese anti-epidemic policy. Some limitations of this study should be taken into account when interpreting our findings. First, owing to the strict anti-epidemic policy requirements in China at the time, it was difficult to control the structure of the sample collected through online questionnaires via social media. Second, because of the differential implementation of epidemic prevention and control measures among cities in Guangdong Province, many other factors may have operated at the individual and regional levels. Future research can track what changes will take place in these social capital elements under the new normalization of epidemic prevention and control, and what impact they will have on youths' value judgments, social participation, action strategies, and life habitus."} +{"text": "Left ventricular hypertrophy (LVH) is defined as an increase in left ventricular (LV) mass, which may be secondary to an increase in wall thickness (concentric LVH), an increased cavity size (eccentric LVH), or both.1 The presentation of the hypertrophied LV depends mainly on the underlying disease, with concentric LVH resulting in most cases from LV pressure overload (hypertension or aortic stenosis), while eccentric LVH mainly results from LV volume overload and dilated cardiomyopathies. Other causes of LVH include ventricular septal defects, hypertrophic cardiomyopathy, and physiological changes associated with athletic training.2 The presence of LVH is clinically meaningful because it is associated with an increased incidence of heart failure, ventricular arrhythmias, peripheral vascular insufficiency, aortic dilatation, cerebrovascular events, and sudden death, or death after myocardial infarction.3 The ECG is a useful but imperfect tool for detecting LVH; its usefulness is mainly due to its low cost and universal availability, and it is routinely performed in cardiac evaluations. Echocardiography is more expensive, but not unreasonably so, and has also become widely available; to assess ventricular mass, the most accessible techniques of the method are used.
In a few situations, cardiac magnetic resonance imaging may be necessary, only when technical conditions make echocardiographic assessment unfeasible.4 LVH can be diagnosed by electrocardiogram (ECG) or echocardiogram, the latter being the procedure of choice because it has much greater sensitivity than the ECG. The calculation of left ventricular mass by echocardiography can be performed using different techniques (one-dimensional, two-dimensional, or three-dimensional), but always with the aim of quantifying the myocardium of that chamber, based on common fundamentals and, therefore, with similar results. Standards of normality are recommended by the international associations of echocardiography5 and endorsed by most authors.6 Thus, echocardiography shows uniformity of LVH results, based on the few parameters studied.6 In electrocardiography, the situation is the opposite. As early as 1969, Romhilt et al.7 described 33 electrocardiographic criteria for diagnosing LVH, all of which showed low sensitivity.7 Over the years, some criteria have solidified as the most used in clinical practice for diagnosing LVH on the ECG, but there is still no consensus on this selection. In a recent article, Wang et al.8 studied the performance of seven ECG criteria in Chinese patients with LVH on echocardiography. They found a sensitivity of 15%-31.9% and a specificity of 91.6%-99.2% in the overall sample, with better sensitivity in concentric LVH. The best LVH descriptors in this research8 were the Sokolow-Lyon voltage, Cornell voltage, Cornell product, and R-aVL voltage criteria. Povoa et al.,9 in a publication in this journal, studied 13 electrocardiographic criteria for LVH in 2458 hypertensive patients who underwent echocardiography, classified by age group and subjected to rigorous statistical analysis. Among patients aged ≥ 80 years, the Perugia criterion and the (Rmax + Smax) × duration criterion performed best.
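The voltage criteria named in this editorial are simple sums of lead amplitudes. As a hedged sketch: the functions below use the standard textbook cutoffs (Sokolow-Lyon > 3.5 mV, i.e. 35 mm; Cornell voltage > 2.8 mV in men and > 2.0 mV in women), and the example amplitudes are hypothetical, not values from the studies cited here.

```python
def sokolow_lyon(s_v1_mv: float, r_v5_mv: float, r_v6_mv: float) -> bool:
    """Sokolow-Lyon voltage: S in V1 + larger R in V5/V6 > 3.5 mV (35 mm)."""
    return s_v1_mv + max(r_v5_mv, r_v6_mv) > 3.5

def cornell_voltage(r_avl_mv: float, s_v3_mv: float, male: bool) -> bool:
    """Cornell voltage: R in aVL + S in V3; cutoff 2.8 mV (men) / 2.0 mV (women)."""
    return r_avl_mv + s_v3_mv > (2.8 if male else 2.0)

# Hypothetical patient amplitudes (in mV)
print(sokolow_lyon(1.8, 2.0, 1.5))           # 1.8 + 2.0 = 3.8 > 3.5 -> True
print(cornell_voltage(1.1, 1.5, male=True))  # 2.6 <= 2.8 -> False
```

The low sensitivity reported above follows from how strict these fixed thresholds are: many echocardiographically confirmed LVH cases never reach the voltage cutoff, while those that do are almost always true positives (high specificity).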
In patients aged < 80 years, in addition to the indices mentioned above, the Narita criterion, described in 2019,10 also performed well. In this research, traditional indices had lower diagnostic sensitivity: Sokolow-Lyon voltage > 35 mm, with 12%-15.7% across the age groups, and Cornell voltage, with 17.3%-21% sensitivity.9 In conclusion, we understand that the electrocardiogram remains an important tool in daily cardiology practice, quite valuable when it indicates LVH, but still of modest diagnostic sensitivity despite new research in this area."} +{"text": "A 14-year-old patient presented to our outpatient clinic with traumatic mydriasis and cataract. He reported that two years earlier he had suffered a severe perforating injury of the left eye from a pine cone, with loss of a large part of the iris, which had received primary repair elsewhere. Since then, his vision had progressively deteriorated owing to the partial aniridia and increasing opacification of the lens. At the time of first presentation, the subjective burden was mainly aesthetic and less due to increased glare. An iris-print contact lens had already been fitted, but the patient was satisfied with neither the aesthetic result nor the tolerability of the contact lens. The endothelial cell count of the left eye was reduced, whereas that of the right eye was normal at 3039 cells/mm2. Contrast sensitivity, measured with the Pelli-Robson chart, was 0.45 log units for the left eye. The patient was asked to rate the subjective cosmetic impairment and the subjective impairment from glare on a scale from 1 to 10, where 1 denotes mild and 10 very severe impairment.
The patient rated the subjective impairment from glare as 2 and the subjective cosmetic impairment as 6 (Fig.). Best-corrected visual acuity in the affected left eye was 0.05 decimal with a subjective refraction of -3.0/plano/-, while the fellow eye had an uncorrected visual acuity of 1.25 decimal. On clinical examination, the left eye showed, in addition to the traumatic mydriasis and the iris defect from 5 to 7 o'clock, a corneal scar of the lower corneal hemisphere and a traumatic cataract. Funduscopy of the left eye showed peripheral argon-laser coagulation scars with otherwise normal findings, although the view was reduced because of the cataract. No lentodonesis was detected. The right eye showed a quiet anterior segment with an age-appropriately clear lens and normal fundus findings. Intraocular pressure was within the normal range in both eyes. In the left eye, the endothelial cell count was reduced at 1803 cells/mm2. A combined procedure was favored in view of the good functional and aesthetic results and the lower surgical risk compared with a two-stage operation. The patient was informed in detail about the possible complications, including corneal decompensation up to the need for keratoplasty, and the development of glaucoma. Given his level of distress, the patient opted for the proposed procedure. The patient was also informed that although cataract surgery on the left eye could increase visual acuity, the prognosis is limited because of possible (subclinical) retinal changes.
The option of a combined implantation of an intraocular lens (IOL) with an artificial iris (AI) was discussed with the patient and his parents, particularly with regard to the possibility of dispensing with the iris contact lens while achieving, with the means currently available, a good functional and aesthetic outcome. Implantation of the IOL, a capsular tension ring, and the previously custom-manufactured AI (ArtificialIris, HumanOptics, Erlangen, Germany) into the capsular bag (in-the-bag technique) was performed in a single procedure. Alternative implants include one that is available only in black, or the iris implant model C1/F1 from Ophtec, which is manufactured in roughly 120 different designs. Both implants are available with an integrated optic but have the disadvantage that, owing to the limited color selection and the lack of individualization, the aesthetic result is not comparable to that of the implant used in this case. Moreover, they cannot be implanted into the capsular bag. In summary, implantation of an AI together with an intraocular lens and a capsular tension ring into the capsular bag is an elegant method for restoring both function and aesthetics in traumatic aniridia with cataract, even in very young patients, in a single operation. Especially in young patients, the long-term risk of complications must also be considered, since the implant will remain in the eye for many years. At first glance, the iris-print contact lens appears to be a good treatment option for traumatic aniridia.
However, its use is complex and expensive. Even with a supposedly good appearance on direct view (Fig. c, d), ... In the case of an additionally existing traumatic cataract, treatment with iris-print contact lenses alone is no longer expedient. In the context of a cataract operation that has to be performed anyway, the aniridia can also be treated with comparatively little additional effort. The operation carries a particularly low risk when the artificial iris and the IOL can be implanted together into the capsular bag."} +{"text": "Morphine is a widely used opioid analgesic. However, standard morphine formulations and administration routes have a short half-life and pose a risk of respiratory depression. Sustained-release microspheres can prolong efficacy and reduce side effects. We present a new controlled-release morphine gelatine microsphere (MGM) prepared by an emulsification-crosslinking strategy. The gelatine microsphere design improves the bioavailability of morphine, and its gradual, sustained release increases both the clinical analgesic efficacy and the safety of clinical medication. We describe the preparation, release behavior, pharmacodynamics, and pharmacokinetics of MGMs, as well as the drug metabolism pathway, and we calculate the release rate of morphine from the plasma morphine concentration-time course and the pharmacokinetic parameters. The optimized manufacturing process gives the analgesic effect of MGMs a longer duration, and the analgesic effect is dose dependent. After administration, MGMs released morphine more slowly: the peak concentration was reduced and the relative bioavailability improved, reaching 88.84%. The pharmacokinetic process was consistent with a two-compartment model with first-order absorption. MGMs therefore deliver sustained-release, long-acting pharmacokinetics.
This meets the design goals of improving drug bioavailability, prolonging drug residence time in vivo, and maintaining a stable blood drug concentration.

As a classical opioid analgesic, morphine has been widely used in postoperative analgesia and the treatment of chronic cancer pain. Currently, two kinds of morphine sustained-release preparations are widely used in the clinic. First, oral sustained-release preparations, such as Methcontin, were invented and applied in clinical practice, benefiting many cancer pain patients. However, oral morphine is associated with gastrointestinal symptoms. A rectal sustained-release suppository has also been used; it can alleviate not only cancer pain but also the gastrointestinal symptoms mentioned above. It acts as a solid suppository inserted into the rectum that melts rapidly at body-cavity temperature, and its analgesic efficiency is acceptable. But the drug-containing matrix lacks adhesion, so part of the drug flows out or migrates deeper and is absorbed through the colonic endothelium, producing a first-pass effect.

It is well known that one way to improve drug bioavailability is to apply the drug locally or directly to the affected area, which can enhance the local therapeutic effect and reduce both the amount of drug entering the systemic circulation and the loss to the first-pass effect. In recent years, more and more studies have investigated sustained-release systems with polymers such as PLA/PLGA as the carrier, which have a longer degradation time and cause more foreign-body reactions at the injection site. Gelatine was therefore chosen as the carrier in this study.

2.1. Materials: morphine hydrochloride injection, medical gelatine, Span 85, isopropanol (IPA); precision balance, LC-10Avp high-performance liquid chromatograph, KQ-50 DB ultrasonic cleaner (Kunshan Ultrasonic Instruments Co. Ltd), RW20 electric mixer (IKA), and an electrically heated constant-temperature water bath.
2.2. Sixteen New Zealand rabbits with an average weight of 2.71 ± 0.28 kg were used in this study.
The experimental rabbits were of "clean" grade and were provided by the Experimental Animal Centre of the Chinese PLA Postgraduate Medical School. They lived in a soundproof animal laboratory at a room temperature of 24-25 °C with a 12 h light/dark cycle. They could eat and drink freely and were acclimatized to the environment for one week, and they fasted for 12 h before drug administration. The Institutional Animal Care and Use Committee and the PLA Ethics Committee approved the animal study (approval No. 2018-X14-10).
2.3. According to the orthogonal experimental design, an appropriate amount of gelatine was weighed and dissolved in distilled water. Simultaneously, a proper amount of morphine was weighed and dried for three hours at a constant temperature (105 °C). Next, the morphine was added to the gelatine solution.
Orthogonal experiment: based on a preliminary test and single-factor inspection, we chose four target variables: gelatine concentration, feed ratio, Span 85 concentration, and stirring speed. Three levels were set for each factor.
3.1. Microspheres were coated on a slide, dispersed with an appropriate amount of double-distilled water, observed, and imaged at a magnification of 400× using a scanning electron microscope. D90, D50, and D10 denote the particle diameters below which 90%, 50%, and 10% of all the microspheres fall, respectively; the span was calculated as (D90 - D10)/D50.
3.2. We investigated sample stability by placing samples under stress conditions, such as intense light, high temperature, and high humidity.
Samples were then re-examined for changes in content and morphology.
High-temperature experiments: morphine gelatine microspheres (MGMs) were spread as a 5 mm thin layer in a petri dish and placed in an electric thermostatic incubator at 20 °C, 30 °C, 40 °C, 50 °C, and 60 °C for ten days.
High-humidity tests: the test products were placed in a closed constant-humidity container at 25 °C and a relative humidity (RH) of 90 ± 5% for ten days. Samples were taken on the 5th and 10th days for testing; the tested parameters included hygroscopic weight gain. Constant humidity can be achieved with a constant temperature and humidity chamber or by placing a saturated salt solution in a closed container; depending on the required humidity, a saturated solution of NaCl or KNO3 was selected.
Illumination experiments: the samples were placed in a lightbox or another suitable container for ten days under an illuminance of (4500 ± 500) lx. Samples were taken on days 1, 3, 5, 7, and 10 for testing.
3.3. Assessment of microspheres was based on the sum value (S) of the span (S1), yield (S2), encapsulation rate (S3), and drug loading (S4) of the microspheres. The calculation method for the span (S1) is given above under the morphology section.
3.4. MGMs' in vitro release curves were fitted with the zero-order, first-order, Higuchi, and Ritger-Peppas models. Here Mt/M∞ is the cumulative fraction of drug released at time t, t is the release time, k is a release constant, and n is the release exponent: zero-order, Mt/M∞ = k·t; first-order, ln(1 - Mt/M∞) = -k·t; Higuchi, Mt/M∞ = k·t^(1/2); Ritger-Peppas, Mt/M∞ = k·t^n.
4.1. A total of 24 New Zealand rabbits were randomly divided into six groups, with four rabbits in each group. Before the experiment, they fasted for 12 h and drank water freely. A 4 × 4 cm2 square was shaved on one side of the midline of their backs; the grouping is shown in the table. The dosage of morphine hydrochloride injection was calculated following its label instructions for animal tests.
In contrast, the dose of morphine gelatine microspheres was chosen based on their entrapment efficiency. According to earlier pilot studies, the encapsulation efficiency of the produced microspheres was around 20.94%; we therefore increased the dose of the morphine gelatine microspheres to equalize the morphine content between the morphine hydrochloride injection and the microspheres.
4.2. This study adopted a hind-plantar incision model [24].
4.3. After establishment of the plantar pain model, the rabbits were subcutaneously injected, while awake, with morphine or with a suspension of blank or morphine microspheres in the shaved areas.
4.4. The rabbit model of incisional pain received the drug one hour after surgery. Pain behavior was assessed with a cumulative pain score [25].
4.5. The rabbits were placed on a shelf in a special cage before each preoperative and postoperative administration, and the pain threshold was measured 5, 10, 20, 30, 40, 50, 60, 90, 120, 180, 240, 300, 360, 420, and 480 min after administration. The laser pulse duration was fixed at 25 ms, and the power was increased gradually. The pain threshold was determined by placing the pain meter against the soles of the feet and irradiating both soles; the power value at which the animal lifted its foot in avoidance or vocalized was recorded, and the bilateral mean was taken as the pain threshold at that time point.
5. Experimental scheme: sixteen New Zealand rabbits were randomly divided into four groups, with four rabbits in each group. Before the experiment, the rabbits fasted for twelve hours with free access to water, and one side of the midline of the back was shaved (4 cm × 4 cm). The following groups were designed: A (morphine hydrochloride, 1 mg/kg), B (morphine hydrochloride, 3 mg/kg), C (morphine microspheres, 5 mg/kg), and D (morphine microspheres, 15 mg/kg).
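The random allocation described above (16 rabbits into four equal groups) can be sketched as a seeded permutation; the seed and rabbit IDs below are arbitrary choices for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)   # fixed seed so the allocation is reproducible
rabbits = np.arange(1, 17)        # 16 animals, IDs 1..16
shuffled = rng.permutation(rabbits)

# Slice the shuffled IDs into four groups of four, labeled per the design
groups = {g: sorted(shuffled[i * 4:(i + 1) * 4].tolist())
          for i, g in enumerate("ABCD")}
print(groups)
```

Seeding the generator is what makes the allocation auditable: re-running the script reproduces exactly the same group assignment.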
1 mL of blood was taken from the marginal ear vein of each rabbit after treatment, treated with heparin anticoagulant, and centrifuged for 10 min at 2000 rpm. Plasma was collected and stored at -20 °C.
5.1. The microspheres' pharmacokinetic data were analyzed with the 3P97 software. Other statistical analyses were performed with SPSS (Statistical Package for the Social Sciences, version 17.0; SPSS Inc., Chicago, IL, USA). Measurement data are expressed as mean ± standard deviation (SD). One-way analysis of variance (ANOVA) was used to compare differences between groups, and two independent-sample t-tests were applied to analyze differences from the control group. The significance level (α) was set at 0.05.
6.1. In our study, New Zealand rabbits were selected for a preliminary investigation of the pharmacokinetics of morphine microspheres in vivo. In our pharmacodynamics experiments, we used the plantar incision pain model established by Brennan in 1996. The encapsulated morphine dose designed for this study was 5-15 mg, based on the clinical injection dose of morphine.
6.2. (1) Emulsifier concentration: with increasing emulsifier content, the microspheres' particle size shrank. Synthesis of microspheres with a large share of particle sizes between 50 µm and 80 µm was possible at emulsifier concentrations of 1% to 1.5%; however, too high an emulsifier concentration causes microsphere adherence, so a concentration of 1% was selected.
(2) Feed ratio (morphine:gelatine): different feeding ratios yielded microspheres with different burst-release effects, drug loading, and encapsulation efficiency. The reported results are mean values from the in vitro dissolution trials of each data set (Table 3).
(3) Gelatine concentration: gelatine content is the critical factor affecting microsphere particle size.
At higher gelatine concentrations the particle size was larger, and at lower concentrations the opposite. A large share of microspheres with a particle size of 50-80 µm could be generated at a gelatine concentration of 15%-25% (w/v).
(4) Stirring speed: as the stirring speed increased, the average particle size shrank and the drug-encapsulation efficiency improved marginally. A large share of microspheres with particle sizes of 50-80 µm could be generated at 700-800 rpm. The drug-loading rate was higher in the 50-80 µm microspheres, and they provoked less foreign-body rejection.
6.3. For each feeding ratio, we made at least 3-5 batches of microspheres to evaluate each index; the drug loading, encapsulation rate, and burst-release effect of the microspheres with the different feeding ratios are given in the table. The microspheres with a feed ratio of 1:1.5 had the highest burst release, while those with a feed ratio of 1:2.5 had a burst release similar to the 1:3 microspheres but relatively higher drug loading, so the 1:2.5 feed ratio gave the better overall parameters. In the optimization, Yi is the experimentally measured value of each of the three indexes; Ymax and Ymin are the maximum and minimum acceptable values of Yi, respectively; di is the optimization index of a single index; and DF is the total optimization index of the three indexes (Table 4). The optimal process conditions for morphine gelatine microspheres were A1B2C1D3, i.e., a gelatine concentration of 20%, a feed ratio of 1:2.5, an emulsifier concentration of 1%, and a stirring speed of 700 rpm.
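The di and DF indexes defined above are not printed as equations in this text; one common construction, assumed here, is a linear desirability per index combined by a geometric mean. The Ymin/Ymax limits and batch readings below are invented for illustration.

```python
def desirability(y: float, y_min: float, y_max: float) -> float:
    """Linear desirability of one index, scaled to [0, 1] (larger is better)."""
    return (y - y_min) / (y_max - y_min)

def overall_desirability(ds: list[float]) -> float:
    """Total optimization index DF as the geometric mean of the single indexes."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Invented yield / encapsulation-rate / drug-loading readings for one batch
d1 = desirability(85.0, 60.0, 95.0)   # yield, %
d2 = desirability(86.3, 50.0, 95.0)   # encapsulation rate, %
d3 = desirability(20.9, 10.0, 25.0)   # drug loading, %
print(round(overall_desirability([d1, d2, d3]), 3))
```

The geometric mean penalizes any batch in which one index is poor, which is why a composite DF is preferred over a simple average when ranking orthogonal-design runs.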
The influencing order of the factors affecting the quality of morphine gelatine microspheres was A > D > B > C. In the preparation of microspheres, each step affects the final properties, and the properties are interdependent. For example, we found that particle size affects the encapsulation efficiency and the release rate. Because of the low stirring speed during oil-phase volatilization, the particle size of the microspheres produced in an earlier batch was large, so the release rate and encapsulation rate of that batch were poor. We later increased the stirring rate, which improved both the encapsulation and release rates; the same findings have been reported in the literature [10]. Different concentrations of gelatine were also used in our previous experiments. The higher the gelatine concentration, the larger the particle size: a higher gelatine concentration in the organic phase leads to faster deposition at the oil-water interface, which causes rapid solidification of the microsphere surface and increases the particle size. In other words, a porous structure forms on the surface and affects the drug-release rate [11].
7.1. MGMs appeared as a light-yellow powder. Under scanning electron microscopy (SEM, 100×), the MGMs were round, with a smooth surface, little or no adhesion, good flowability, and a uniform distribution; their average particle size was 63.11 ± 9.61 µm (Figure).
7.2. HPLC determined the morphine content in the microspheres, and the drug loading and encapsulation rate were calculated according to the formulas above. The average drug loading was 20.94%, and the average encapsulation rate was 86.27%.
The above results are better than the experimental results under the orthogonal conditions, which indicates that the microspheres prepared under these conditions are stable and reproducible.
7.3. The in vitro release curve of the microspheres prepared by the optimized process is shown in the figure. The drug-release data were fitted with the zero-order kinetic equation, first-order kinetic equation, Higuchi equation, and Ritger-Peppas exponential model. Drug release from microspheres is a complex process that can occur in many ways, such as surface erosion, skeleton diffusion, hydration expansion, dissociation diffusion, disintegration, and desorption, and many factors influence it. In this study, morphine microspheres released the drug slowly after administration: the peak time was significantly prolonged while the peak concentration was reduced. The plasma concentration-time diagram indicated a burst-release process of the microspheres in rabbits, after which release was maintained at a relatively low concentration. With increasing dose, the sustained-release time and the drug concentration in vivo increased correspondingly. Based on the in vitro release characteristics of the microspheres, we speculate that drug molecules adsorbed on the surface are released rapidly, so the blood drug concentration rises quickly after the microspheres enter the body; the encased drug is then slowly released by diffusion as the microspheres dissolve. The drug-release curve was most consistent with the Higuchi equation. Microsphere release rarely follows an ideal release environment or a single release mode, so it is difficult to describe the release process with one equation; it is a composite process.
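The Higuchi comparison described above reduces to regressing cumulative release on the square root of time. A minimal sketch, with release data invented for illustration (not the paper's measurements):

```python
import numpy as np

# Invented cumulative-release data (time in h, release as a fraction)
t = np.array([0.5, 1, 2, 4, 8, 12, 24], dtype=float)
q = np.array([0.14, 0.20, 0.28, 0.40, 0.57, 0.69, 0.98])

# Higuchi model: Q = k * sqrt(t) -> linear least squares through the origin
sqrt_t = np.sqrt(t)
k = float(sqrt_t @ q / (sqrt_t @ sqrt_t))

# Coefficient of determination for the through-origin fit
pred = k * sqrt_t
ss_res = float(np.sum((q - pred) ** 2))
ss_tot = float(np.sum((q - q.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot
print(f"k = {k:.3f}, R^2 = {r2:.3f}")
```

Fitting each candidate model this way and comparing the R² values is one common way to decide which equation (zero-order, first-order, Higuchi, or Ritger-Peppas) best describes the release curve.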
The drug release of most microspheres presents a three-stage release mode: (1) the drug adsorbed on the surface is released rapidly, which is the leading cause of burst release; (2) the polymer begins to degrade, its molecular weight decreases continuously, and the drug is released slowly, although the whole system remains insoluble; (3) when the molecular weight of the polymer decreases to a certain value, a large amount of water penetrates the system, leading to collapse of the skeleton and release of the drug in large quantities [31]. This burst-release effect should be minimized during the fabrication of microspheres. By controlling the particle size of the microspheres and the gelatin concentration of the organic phase, we controlled the density of pores on the surface of the microspheres and reduced the amount of burst release, so that the encapsulated drug could be released continuously at a more uniform rate. Furthermore, the polymer grade and drug loading also affected the release rate of the microspheres [27].

7.4. We randomly selected three batches of microspheres with a feeding ratio of 1:2.5 to test their stability under different conditions. High temperature: under the light microscope (400×), we found that the burst effect, color, and other characteristics of the microspheres at 30 °C were not significantly different from those at 20 °C. At 40 °C, the color of the microspheres deepened slightly, and the other indexes showed no significant change. At 50 °C, the color, shape, and burst effect of the microspheres changed, and some microspheres aggregated. The changes were even more evident at 60 °C.
After 1 to 10 days of illumination, the area under the degradation peak of the microspheres' chromatogram increased, as determined by HPLC. The pharmacokinetic groups were: A (morphine hydrochloride group, 1 mg/kg), B (morphine hydrochloride group, 3 mg/kg), C (morphine microsphere group, 5 mg/kg), and D (morphine microsphere group, 15 mg/kg). The t1/2α and t1/2β of the MGMs were prolonged, AUC (area under the concentration-time curve) increased, Tmax (peak time) increased, and Cmax (peak concentration) decreased. The relative bioavailability (F) of MGM capsules versus morphine injection was calculated as follows:

F = (AUC_t × D_r) / (AUC_r × D_t) × 100%

Here D is the administration dose, the subscript t denotes the experimental (test) preparation, and the subscript r denotes the control (reference) preparation. F was calculated to be 88.84%. The data show that the plasma concentration in the morphine injection group reached its peak value 30 min after administration and then decreased rapidly until 4 h after administration, when the concentration became undetectable. The plasma concentration in the microsphere group peaked at 50 min after administration, a peak generated by the burst release of the microspheres. The plasma concentration then decreased slowly and remained low until 24 h. Statistical analysis showed that the peak concentration of the microsphere group was significantly lower than that of the control group (P < 0.05); t1/2 in the microsphere group was considerably longer than that in the control group (P < 0.01). Bioavailability (BA) is the degree and rate at which a drug's active ingredient is released from the product and absorbed into the systemic circulation.

8.
8.1.
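The relative-bioavailability value reported above follows the standard dose-normalized AUC ratio. A minimal sketch of that arithmetic (the AUC inputs are hypothetical; only the 88.84% figure comes from the text):

```python
# Standard relative bioavailability: F = (AUC_test * D_ref) / (AUC_ref * D_test) * 100%.
def relative_bioavailability(auc_test, dose_test, auc_ref, dose_ref):
    """Dose-normalized AUC ratio, expressed as a percentage."""
    return (auc_test * dose_ref) / (auc_ref * dose_test) * 100.0

# Hypothetical AUC values (ng·h/mL) and doses (mg/kg), chosen so the
# result matches the paper's reported F = 88.84%.
F = relative_bioavailability(auc_test=444.2, dose_test=5.0,
                             auc_ref=100.0, dose_ref=1.0)
print(round(F, 2))  # → 88.84
```

Dose normalization is what makes the 5 mg/kg microsphere group comparable with the 1 mg/kg injection group.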
The groups were divided as follows: group A, group B (blank microsphere control group), group C (morphine injection group, 1 mg/kg), group D (morphine injection group, 3 mg/kg), group E (morphine microsphere group, 5 mg/kg), and group F (morphine microsphere group, 15 mg/kg). Twenty-four New Zealand rabbits were randomly divided into six groups of four rabbits per group. A 4 × 4 cm area on the midline side of the back was shaved.

8.2. New Zealand rabbits in group A moved as usual, with both hind paws touching the ground and bearing weight. The New Zealand rabbits in group B had upturned feet that could not bear weight and only occasionally touched the ground on the surgically operated side. The cumulative pain scores are shown in the figure. There was no significant difference in cumulative pain scores between groups C and E or between groups D and F; the cumulative pain scores in groups D and F were significantly lower than those in groups C and E (P < 0.01).

8.3. The pain threshold of group A was significantly higher than that of group B at all time points. The pain thresholds of groups C and D began to increase 5 minutes after subcutaneous injection, peaked at 30 minutes, and lasted until 120 minutes after injection. The pain thresholds of groups E and F began to increase 30 min after subcutaneous injection, peaked at 90 min, and lasted until 360 min after injection. The pain thresholds of groups D and F were significantly higher than those of groups C and E, while the analgesic effect of the morphine gelatin microspheres had a later onset and lasted longer (30-480 min after administration). These characteristics make morphine injection more suitable for early, immediate analgesia in acute pain, such as intraoperative analgesia under general anesthesia or auxiliary analgesia for wound debridement under local anesthesia.
Morphine microspheres are more suitable for postoperative local analgesia. In general, when a patient has just been extubated and returned to the ward, some analgesic effect of opioids remains in the body, because the anesthetic is not completely metabolized until about 2 hours later. After the drug is completely metabolized, however, the patient develops hyperalgesia in the surgical area, which becomes more obvious after 3-12 hours [38]. At present, many surgeons are accustomed to administering morphine hydrochloride injection into the joint cavity at the end of arthroscopic surgery. It is well known that its analgesic effect peaks at 30 minutes and is greatly weakened after 2 hours. In other words, by the time the patient returns to the ward, morphine injection analgesia has begun; within 2 hours the patient's anesthetic is completely metabolized and the analgesic effect of the morphine injection has essentially ended. Although patients feel a double analgesic effect in the early stage, their hyperalgesia is aggravated after the analgesia ends. Hypothetically, if the surgeon instead applied morphine microspheres into the joint cavity, the analgesic effect of the microspheres would take effect before the patient had completely metabolized the anesthetic (50 minutes after the end of surgery), and would last for 6 hours or even longer. These characteristics of morphine microspheres can make the blood concentration of morphine more stable, improve the analgesic effect, and reduce addiction. They also make patients more comfortable after surgery, improve the early postoperative exercise rehabilitation rate, and help prevent joint ossification. In addition, the analgesic effect of morphine microspheres was enhanced as the dose of morphine increased, and the release time and drug concentration increased accordingly.
However, care must still be taken not to exceed safe doses.

9. The gelatin morphine microspheres prepared in this study have better sustained release and a longer analgesic time than the same dose of morphine hydrochloride injection. The trial found that the blood concentration of morphine was more stable, the analgesic effect was better, the bioavailability of morphine was improved, and addiction was reduced."} +{"text": "Individuals with fragile X syndrome (FXS) have significant delays in cognition and language, as well as anxiety, symptoms of autism spectrum disorder, and challenging behaviors such as hyperactivity and aggression. Biological mothers of children with FXS, who are themselves FMR1 premutation or full mutation carriers, are at elevated risk for mental health challenges in addition to experiencing stress associated with parenting a child with significant disabilities. However, little is known about fathers in these families, including the ways in which parental well-being influences the mother-father relationship and the impact of child characteristics on paternal and couple functioning. The current study examined features of, and relationships between, parental well-being, couple well-being, and child functioning in 23 families of young boys with FXS. Mothers and fathers independently completed multiple questionnaires about their individual well-being, couple functioning, and child behavior. One parent per family also completed an interview about the child's adaptive skills. Results suggest that both mothers and fathers in these families experience clinically significant levels of mental health challenges and elevated rates of parenting stress relative to the general population. Findings also indicate that the couples' relationship may be a source of strength that potentially buffers against some of the daily stressors faced by these families.
Additionally, parents who reported less parenting stress had higher couples satisfaction and dyadic coping. Finally, parents of children with less severe challenging behaviors exhibited fewer mental health challenges, less parenting stress, and higher levels of both couples satisfaction and dyadic coping. Parents of children with higher levels of adaptive behavior also reported less parenting stress and higher couples satisfaction. Overall, this study provides evidence that families of children with FXS need access to services that not only target improvements in the child's functioning, but also ameliorate parental stress. Family-based services that include both mothers and fathers would lead to better outcomes for all family members.

FXS is an X-linked disorder that results from an expansion of a cytosine-guanine-guanine (CGG) repeat sequence in the promoter region of the FMR1 gene, located at Xq27.3, from the typical 35 or so repeats to greater than 200 repeats.

The majority of past studies on parenting in FXS have focused exclusively on the mother-child dyad. In doing so, these studies have neglected to consider the role that fathers play in child development or how features of the broader family environment may influence maternal or paternal behavior and child outcomes. The current study was designed to examine the broader family environment in families of young children with FXS, with a focus on maternal and paternal well-being, features of the mother's and father's relationship as a couple, and relationships between child characteristics and parent and couple well-being.
A better understanding of parent and couple well-being in families of children with FXS, as well as the ways in which child characteristics influence these domains, will provide the foundation for developing interventions and services focused on improving outcomes for all family members.

In FXS, the full mutation silences the FMR1 gene, causing a deficiency in, or absence of, the gene's associated protein, FMRP. FMRP is critical for early brain development, including synaptic protein synthesis and plasticity, as well as experience-dependent learning [12]. Because it is inherited, the presence of FXS in a family has far-reaching intergenerational effects, offering a unique opportunity to investigate the ways in which multiple family subsystems influence child outcomes. Nearly all males with FXS have ID. Most mothers of children with FXS carry the FMR1 premutation, although some also have the full mutation which causes FXS. Women with the full mutation are at an increased risk for experiencing mental health challenges, including anxiety and depression, as well as social deficits, including avoidance and withdrawal [41]. Very little else is known about fathers of children with FXS given that the majority of past studies have focused on the mother-child dyad. However, including both mothers and fathers in behavioral therapies and health care services positively contributes to a child's success, especially for young children [47].

The current study was designed to examine multiple features of the family environment, including maternal and paternal mental health, stress associated with parenting, aspects of couple functioning, and relationships between child characteristics and these parental domains. We have four main aims.

Aim 1: Examine mental health challenges and parenting stress in biological mothers of children with FXS.
We hypothesized that these mothers, compared to the general population, would report elevated levels of mental health challenges and parenting stress [6, 48].

Aim 2: Examine mental health challenges and parenting stress in fathers of children with FXS and compare paternal and maternal mental health challenges and parenting stress. We hypothesized that fathers of children with FXS, compared to the general population, would report elevated levels of mental health challenges and parenting stress given the difficulties associated with parenting a child with significant challenges.

Aim 3: Examine relationships between aspects of the couple relationship and mothers' and fathers' mental health challenges and parenting stress. We hypothesized that couples satisfaction and dyadic coping would be negatively related to mental health challenges and parenting stress for both mothers and fathers.

Aim 4: Examine relationships between child characteristics and parental individual well-being and couple well-being. We hypothesized that children with higher levels of behavior problems and ASD symptoms, and lower levels of adaptive behavior, would have parents who endorsed lower levels of individual well-being and couple well-being.

Mothers reported their FMR1 premutation or full mutation status if available. Medical reports were required to confirm the child's diagnosis of the FMR1 full mutation, but verbal confirmation was accepted for the mother's genetic status. The study was approved by the Institutional Review Board at the University of California, Davis in advance of recruitment, and both parents provided informed consent electronically via REDCap (Research Electronic Data Capture). Participants included 23 fathers, 23 biological mothers, and 23 male children with FXS. Only families of male children were recruited because virtually all males with FXS have ID and language delays, whereas intellectual functioning and language abilities are more variable among females with FXS.
Most of the mothers were carriers of the FMR1 premutation; two were carriers of the FMR1 full mutation, and one had not been tested, so her genetic status was unknown. A majority of both mothers and fathers in the study had at least a bachelor's degree, and parent-reported household income indicated that most families were relatively well-resourced. All families resided in North America, with 13 United States states and two Canadian provinces represented. Data were collected between December 2019 and July 2021; therefore, the majority of families were tested during the COVID-19 pandemic. Only two families completed their participation in the study prior to the first community-diagnosed case in California on February 23, 2020. Participant characteristics are presented in the accompanying table.

Parents independently completed questionnaires via REDCap about their individual well-being, including the Symptom Checklist-90-Revised (SCL-90-R) and the Parenting Stress Index, Fourth Edition Short Form (PSI-4-SF). The SCL-90-R is a 90-item self-report measure of psychological symptoms. The PSI-4-SF is a 36-item self-report measure of parenting stress. The CSI-32 is a 32-item measure of couples satisfaction. The DCI [56] is a measure of dyadic coping. The ABC-2 is a 58-item informant-report measure of challenging behaviors. The SRS-2 is a 65-item informant-report measure of ASD symptoms. One parent per family completed the Vineland-3 interview via a secure teleconferencing platform. The Vineland-3 measures adaptive behavior and is a norm-based instrument with a mean standard score of 100 and a standard deviation of 15. The Adaptive Behavior Composite score as well as the Communication, Daily Living Skills, and Socialization domain standard scores were used in analyses. The Vineland-3 interview takes approximately 1 to 2 h to complete. The SCL-90-R, PSI-4-SF, CSI-32, DCI, ABC-2, and SRS-2 are traditionally paper-and-pencil measures; they were modified so that they could be completed in packages as online surveys via REDCap during different days of the study.

Analyses were conducted using Stata 14.2. All variables were visually inspected to check for model assumptions of normality and homoscedasticity of the residuals. Tests for skewness and kurtosis were also examined.
Transformations and nonparametric alternatives were considered for any data that did not meet parametric assumptions. To address the first and second aims, descriptive summaries of mothers' and fathers' mental health challenges and parenting stress (the outcome variables) were reported and compared to levels reported in the general population and to each other. Then, interspousal correlations were calculated to determine the degree of correspondence between mothers' and fathers' ratings of mental health challenges and parenting stress. Comparisons of mothers' and fathers' mean scores on the SCL-90-R and PSI-4-SF were also conducted. To address the third aim examining aspects of the couple relationship and parental mental health challenges and parenting stress, descriptive summaries of the outcome variables were reported and mean scores for mothers and fathers were compared. Interspousal correlations were then calculated to determine the degree of correspondence between mothers' and fathers' ratings of couples satisfaction and dyadic coping. Comparisons of mothers' and fathers' mean scores on the CSI-32 and DCI were also reported. To address the fourth aim examining relationships between child characteristics and parental and couple functioning, descriptive summaries of the predictor variables and interspousal correlations were reported. Comparisons of mothers' and fathers' mean scores on the ABC-2 and SRS-2 were also conducted. Because each couple constitutes a cluster with an N of 2, multilevel models were used. A new variable was also created in which ranges of the CSI-32 score were given a value of 1 to 8. Intraclass correlation coefficients (ICCs) were then calculated to estimate the proportion of the total variation in the dependent variables that exists between versus within couples for Aims 3 and 4. The dependent variables for Aim 3 included couples satisfaction and dyadic coping.
The dependent variables for Aim 4 included couples satisfaction and dyadic coping, as well as mental health challenges (SCL-90-R GSI T-score) and parenting stress. Next, multilevel models were specified to examine the outcomes for Aims 3 and 4. For Aim 3, separate models for couples satisfaction and dyadic coping were conducted. The strong and significant association between the variables for parenting stress and mental health challenges did not allow both to be included in the models for Aim 3; the parenting stress measure was chosen because it was more strongly associated with both couples satisfaction and dyadic coping than the measure of mental health challenges for both mothers and fathers. As an example, the model for couples satisfaction (CS) was specified as follows, with parenting stress (PS) and parent sex (sex) set as predictors at Level 1. Covariates included parent age (age) and parent education (edu). Parenting stress, parent age, and parent education were continuous predictors. In this example, random effects were not included at Level 2 for parenting stress, parent sex, parent age, or parent education; therefore, the effects of these predictors on the outcome (CS) are fixed. However, a family-level random effect for the intercept was included at Level 2:

CS_ij = β0j + β1(PS_ij) + β2(sex_ij) + β3(age_ij) + β4(edu_ij) + e_ij, where β0j = γ00 + u0j,

with i indexing parents within family j, u0j the family-level random intercept, and e_ij the within-couple residual.

Aim 1 examined mental health challenges and parenting stress in biological mothers of children with FXS. Aim 2 examined mental health challenges and parenting stress in fathers of children with FXS and compared maternal and paternal mental health challenges and parenting stress. Paired-samples t-tests and Wilcoxon signed-ranks tests (when appropriate) confirmed that there were no statistically significant differences between mothers' and fathers' standardized scores on either the SCL-90-R or the PSI-4-SF.
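The random-intercept specification and the ICC decomposition described above can be re-expressed outside Stata; the sketch below uses Python's statsmodels with simulated stand-in data for 23 couples (the variable names and effect sizes are assumptions for illustration, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: 23 couples, 2 parents each, mirroring the study's design.
rng = np.random.default_rng(42)
n_fam = 23
fam = np.repeat(np.arange(n_fam), 2)
u = rng.normal(0, 5, n_fam)[fam]                 # family-level random intercept
ps = rng.normal(90, 15, 2 * n_fam)               # parenting stress (Level-1 predictor)
sex = np.tile([0, 1], n_fam)                     # 0 = mother, 1 = father
age = rng.normal(38, 5, 2 * n_fam)
edu = rng.integers(12, 21, 2 * n_fam)            # years of education (covariate)
cs = 150 - 0.4 * ps + u + rng.normal(0, 6, 2 * n_fam)  # couples satisfaction

df = pd.DataFrame(dict(family=fam, cs=cs, ps=ps, sex=sex, age=age, edu=edu))

# Random-intercept model: fixed effects at Level 1, family intercept at Level 2.
m = smf.mixedlm("cs ~ ps + sex + age + edu", df, groups=df["family"]).fit()

# ICC: share of total variance attributable to between-couples differences.
var_between = float(m.cov_re.iloc[0, 0])
var_within = float(m.scale)
icc = var_between / (var_between + var_within)
```

With the simulated negative effect of parenting stress on satisfaction, `m.params["ps"]` comes out negative, mirroring the direction of the reported finding.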
On the PSI-4-SF, scores that fall between the 16th and 84th percentiles are considered within the normal range, scores between the 85th and 89th percentiles are considered high, and scores at the 90th percentile and above are within the clinically significant range. To determine the degree of correspondence between mothers' and fathers' ratings of mental health challenges and parenting stress, interspousal correlations were calculated.

Aim 3 examined relationships between aspects of the couple relationship and mothers' and fathers' mental health challenges and parenting stress. The ICC for couples satisfaction indicated that 76.6% of the variation was due to between-couples factors whereas 23.4% was due to within-couple factors. For dyadic coping, 42.1% of the variation was due to between-couples factors whereas 57.9% was due to within-couple factors. Paired-samples t-tests confirmed that there were no statistically significant differences between mothers' and fathers' scores on the CSI-32 or the DCI. Overall, only six mothers (26%) and four fathers (17%) reported notable relationship dissatisfaction on the CSI-32. Additionally, on the DCI, five mothers and five fathers (22%) reported below average levels of dyadic coping, 13 mothers (57%) and 14 fathers (61%) reported average levels of dyadic coping, and five mothers (22%) and four fathers (17%) reported above average levels of dyadic coping. Unlike the measures of mental health challenges and parenting stress, interspousal correlations indicated that there were significant correspondences between mothers' and fathers' scores on both the CSI-32 and DCI, with mean scores indicating average levels of couples satisfaction and dyadic coping for both mothers and fathers. In the multilevel model, there was a significant main effect of parenting stress on couples satisfaction (p = 0.010), such that higher levels of parenting stress predicted reduced couples satisfaction.
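The paired-samples comparisons and interspousal correlations used throughout these results can be sketched with standard SciPy calls (the paired ratings below are simulated, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical paired ratings: one score per parent, paired within 23 couples.
rng = np.random.default_rng(1)
mothers = rng.normal(100, 10, 23)
fathers = mothers + rng.normal(0, 5, 23)  # correlated within couples, no true mean shift

# Paired-samples t-test (used when the paired differences are roughly normal).
t_stat, t_p = stats.ttest_rel(mothers, fathers)

# Wilcoxon signed-ranks test (nonparametric alternative for skewed scores).
w_stat, w_p = stats.wilcoxon(mothers, fathers)

# Interspousal (Pearson) correlation: correspondence between partners' ratings.
r, r_p = stats.pearsonr(mothers, fathers)
```

The t-test and Wilcoxon test ask whether mothers and fathers differ on average, while the interspousal correlation asks a separate question: whether partners within a couple agree with each other.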
There were no significant main effects of parent sex, parent age, or parent education on couples satisfaction. For dyadic coping, there was a significant main effect of parenting stress (p < 0.001), such that higher levels of parenting stress predicted poorer dyadic coping. There was also a significant main effect of parent education on dyadic coping (p = 0.031), such that higher levels of education predicted poorer dyadic coping. There was no significant main effect of parent sex on dyadic coping, but there was a marginally significant main effect of parent age on dyadic coping (p = 0.083), such that older age predicted poorer dyadic coping.

Aim 4 examined the contributions of child challenging behaviors, ASD symptoms, and adaptive behavior to parental individual well-being and couple well-being. The ICC for mental health challenges indicated that 3.1% of the variation was due to between-couples factors whereas 96.9% was due to within-couple factors. For parenting stress, 27.0% of the variation was due to between-couples factors whereas 73.0% was due to within-couple factors. Visual inspection of the variables and tests for kurtosis and skewness indicated that several of the ABC-2 subscale scores and one of the SRS-2 subscale scores were not normally distributed. Paired-samples t-tests and Wilcoxon signed-ranks tests (when appropriate) confirmed that there were no significant differences between mothers' and fathers' subscale and total scores on these measures except for the ABC-2 Hyperactivity subscale, t(22) = 2.11, p = 0.046, and the SRS-2 RRB subscale, Z = 2.27, p = 0.023. For both of these subscales, mothers endorsed higher scores than fathers. Furthermore, according to the SRS-2 Total Score guidelines, scores can be classified as within normal limits (T-scores ≤ 59), in the mild range (T-scores = 60 to 65), in the moderate range (T-scores = 66 to 75), or in the severe range (T-scores ≥ 76).
Interspousal correlations indicated significant correspondences between mothers' and fathers' scores on the Hyperactivity and Inappropriate Speech subscales of the ABC-2, but not for the other four subscales. On the SRS-2, there were significant correspondences between mothers' and fathers' scores on the SCI and RRB subscale T-scores as well as the Total T-score. Additionally, there was a significant correlation between mothers' SRS-2 scores and the Vineland-3 Adaptive Behavior Composite (r = −0.72, p = 0.001) and a marginally significant correlation between fathers' SRS-2 scores and the Vineland-3 Adaptive Behavior Composite. Given the significant correlations between the ABC-2 and SRS-2 scores for both mothers and fathers, the ABC-2 Total Score, Vineland-3 Adaptive Behavior Composite, child age, and parent sex were included as predictors in the MLMs for Aim 4, as was the interaction between the ABC-2 Total Score and parent sex.

As expected, there was a significant main effect of child challenging behaviors on parental mental health challenges (p = 0.001), such that higher levels of child challenging behaviors predicted elevated levels of mental health challenges. However, there were no significant main effects of child adaptive behavior, child age, or parent sex on parent mental health challenges, nor was there a significant interaction between child challenging behaviors and parent sex. Model diagnostics suggested that a linear regression model would be sufficient for predicting mental health challenges; the results of a linear regression model were similar to the results of the multilevel model.

As expected, there was also a significant main effect of child challenging behaviors on parenting stress (p < 0.001), such that higher levels of challenging behaviors predicted elevated parenting stress. There was also a significant main effect of child adaptive behavior on parenting stress, with higher levels of adaptive behavior predicting reduced parenting stress (p = 0.045).
There were no significant main effects of child age or parent sex on parenting stress, nor was there a significant interaction between child challenging behaviors and parent sex. Model diagnostics suggested that a linear regression model would also be sufficient for predicting parenting stress; the results of a linear regression model were similar to the results of the multilevel model.

As expected, there was a significant main effect of child challenging behaviors on couples satisfaction (p = 0.002), such that higher levels of child challenging behaviors predicted reduced couples satisfaction. There was also a significant main effect of child adaptive behavior on couples satisfaction (p = 0.015), with higher levels of adaptive behavior predicting greater couples satisfaction. There were no significant main effects of child age or parent sex on couples satisfaction, nor was there a significant interaction between child challenging behaviors and parent sex.

As expected, there was a significant main effect of child challenging behaviors on dyadic coping (p = 0.006), with higher levels of child challenging behaviors predicting poorer dyadic coping. There was also a marginally significant main effect of parent sex on dyadic coping (p = 0.091); in reference to the overall mean, fathers reported lower levels of dyadic coping compared to mothers. There were no significant main effects of child adaptive behavior or child age on dyadic coping, nor was there a significant interaction between child challenging behaviors and parent sex.

The pattern of parenting stress scores suggested that parents' perceptions of the child's behavior were contributing more to their stress than their adjustment to parenting or their relationship with their child. This profile of parenting stress is consistent with past research on mothers of children with FXS [65].
Despite experiencing challenges with mental health and parenting stress, most mothers and fathers reported moderate to high levels of couples satisfaction and dyadic coping, with very few parents reporting notable relationship dissatisfaction and a majority of parents reporting dyadic coping in the average or above average range. In these families, higher levels of couples satisfaction and dyadic coping may be protective against the daily stressors that the parents are experiencing.

Mothers and fathers also reported independently on their child's challenging behaviors and ASD symptoms. Interspousal correlations indicated high degrees of correspondence between mothers' and fathers' scores on the SRS-2, but not the ABC-2. On average, mothers and fathers reported moderate levels of challenging behaviors that were similar to the ABC scores reported in Sansone et al. There were also some interesting differences in the correlations between maternal and paternal measures. For mothers (but not fathers), there were strong and significant correlations between the ABC-2 and the other measures of individual, couple, and child functioning. However, for fathers (but not mothers), the SRS-2 was strongly correlated with every measure except the DCI. This finding may be due to differences in parental experiences of challenging behaviors and ASD symptoms; that is, mothers may be experiencing and managing more challenging behaviors compared to fathers, and fathers may be more concerned about or influenced by the child's ASD symptoms compared to mothers. In particular, paternal parenting stress was associated with the child's ASD symptoms, whereas maternal parenting stress was not. However, consistent with past research, parenting stress for both mothers and fathers was related to child challenging behaviors [65].
Additionally, parenting stress was associated with both couples satisfaction and dyadic coping, with no significant differences found between mothers and fathers. We also found a negative association between parent education and dyadic coping, which was unexpected and should be explored in future studies. Parents with higher levels of education may experience more work-related stress that could negatively affect their individual well-being and their relationship with their partner. Additionally, child challenging behavior was associated with parental mental health challenges, parenting stress, couples satisfaction, and dyadic coping. Surprisingly, no significant differences were found between mothers and fathers across these analyses, including any differences between mothers and fathers based on child challenging behaviors. Perhaps future investigations with larger and more diverse samples would find differences between parents. Child adaptive behavior was also associated with couples satisfaction and parenting stress. These findings emphasize the importance of early intervention for children with FXS focused not only on communication and socialization skills, but also on daily living skills that promote independence.

Interventions focused on reducing parenting stress in these families could also have a positive impact on parents' individual well-being and the couples' relationship. One potential intervention that could be beneficial for parents of children with FXS is Mindfulness-Based Stress Reduction (MBSR). MBSR is an established and empirically supported stress-reduction intervention that has been shown to reduce parental stress, depressive symptoms, and parent-reported child behavior problems in families of children with developmental disabilities [75]. Another relevant line of work has examined intervention delivered via telehealth on problem behaviors in young boys with FXS.
Parent-implemented interventions focused on teaching parents strategies for managing child challenging behaviors and engaging in responsive interactions may also benefit parental and couple well-being in families of children with FXS. A recent study by Hall et al. examined one such approach, functional communication training (FCT). Children with FXS often engage in problem behaviors that serve different communicative functions, including gaining access to attention or a highly preferred item or escaping a demanding task or situation. The focus of FCT is to ensure that these problem behaviors are no longer reinforced by the caregiver while simultaneously teaching the child alternative and appropriate ways to communicate their preferences and needs. The FCT intervention conducted by Hall et al. led to reductions in problem behavior. Hall and colleagues' FCT study, along with other parent-implemented intervention studies conducted in the past several years with families of children with FXS [e.g., 79], supports the promise of involving parents directly in intervention.

There are some notable limitations to this study, including the relatively small sample size and the lack of diversity in the sample. FXS research studies focused on parent-child relationships tend to have small samples in which a majority of participants are two-parent households who identify as white, are highly educated, and have household incomes in the middle to high range. Therefore, future studies should attempt to reduce barriers to participation in research for FXS families from underrepresented groups. These barriers include age of diagnosis, lack of information about research opportunities, time commitment for participation in research, and low household income [84].

One notable strength of the study is the inclusion of both mothers and fathers as independent informants, given that the majority of past research in FXS has focused on the mother-child dyad.
Fathers have been historically underrepresented in research on child and adolescent development, both in the general population and in families that include children with disabilities, despite the fact that fathers have a unique and independent role in parenting compared to mothers and may differentially affect the child\u2019s development [87]. Additionally, parents provided the measures of child ASD symptoms and challenging behaviors, as opposed to these behaviors being rated by an independent informant. Future studies should incorporate multiple distinct assessments of both parent and child functioning to ensure accurate measurement within various domains of behavioral and psychological functioning, given that self-report measures can be biased. Furthermore, biological markers of stress were not collected, nor were any measures of IQ; future studies could benefit from including these variables. Another limitation was the focus on concurrent associations as opposed to longitudinal ones. Future studies should examine relationships between parent and child functioning over time to develop a better understanding of how these relationships fluctuate as the child develops. Further, information was collected from families regarding the types of services and therapies being provided to the child/family. Nearly all families were receiving a combination of developmental services. Future studies should gather information regarding the quality of these services and the extent to which service provision affects parental and couple functioning. Finally, given that the majority of families participated in the study during the COVID-19 pandemic, the data reported in the current study may not reflect family functioning in families of children with FXS during more typical historical periods.The findings from the current study indicate the importance of considering the entire family system in families affected by FXS. 
Both mothers and fathers are in need of greater support to reduce their mental health challenges and parenting stress, which would likely benefit not only parental well-being, but also the couples\u2019 relationship and the relationships between each parent and the child. The results of the current study also provide evidence that child challenging behaviors and limited adaptive functioning influence the couples\u2019 relationship as well as individual parent functioning. Early intervention for children with FXS, parent-implemented interventions focused on managing challenging behaviors, and parent interventions focused on reducing stress are likely to benefit these families. Moreover, although many parents reported experiencing significant mental health challenges and parenting stress, this was not true for all mothers and fathers. Additionally, for many families, the couples\u2019 relationship may be a source of strength that potentially buffers against some of the daily stressors faced by these families. Future studies should seek to identify protective factors in these families that support parent and family well-being and continue to investigate the complex dynamics between mothers, fathers, and children in families affected by FXS.The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.The studies involving human participants were reviewed and approved by the Institutional Review Board at the University of California, Davis. Written informed consent to participate in this study was provided by the participants\u2019 legal guardians/next of kin.The submitted research study was part of SP\u2019s dissertation research project, which she completed under the guidance of LA. SP, AS, and LA were responsible for the initial conceptualization of the study. DH provided support with planning and conducting the analyses. 
SP wrote the first draft and made subsequent edits to the manuscript based on feedback from DH, AS, and LA. All authors contributed to the interpretation of the data and approved the final manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Grumixama fruits (Eugenia brasiliensis Lam.) are red-colored due to the presence of anthocyanins. In this paper, anthocyanin-rich extracts from grumixama were submitted to different temperatures and light irradiations, with the aim of investigating their stabilities. The thermal stability data indicated that a temperature range from 60 to 80 \u00b0C was critical to the stability of the anthocyanins of the grumixama extracts, with a temperature quotient value (Q10) of 2.8 and activation energy (Ea) of 52.7 kJ/mol. The anthocyanin-rich extracts of grumixama fruits showed the highest stability during exposure to incandescent irradiation (50 W), followed by fluorescent irradiation (10 W). The t1/2 and k were 59.6 h and 0.012 h\u22121 for incandescent light, and 45.6 h and 0.015 h\u22121 for fluorescent light. In turn, UV irradiation (25 W) quickly degraded the anthocyanins (t1/2 = 0.18 h and k = 3.74 h\u22121). Therefore, grumixama fruits, and their derived products, should be handled carefully to avoid high temperature (>50 \u00b0C) and UV light exposure in order to protect the anthocyanins from degradation. 
Furthermore, grumixama fruits showed high contents of anthocyanins that can be explored as natural dyes, for example, by the food, pharmaceutical and cosmetic industries. In addition, the results of this study may contribute to the setting of processing and storage conditions for grumixama-derived fruit products.Eugenia brasiliensis Lam., commonly known as grumixama and Brazilian cherry, is a tree from the Brazilian coastal forests that belongs to the genus Eugenia, one of the largest genera in the family Myrtaceae, with about 350 species. Anthocyanins are secondary metabolites that protect plants against various biotic and abiotic stresses; chemically, they are flavonoids with a C15 skeleton based on a C6-C3-C6 core structure.Anthocyanins are widely studied as bioactive compounds to manage and/or prevent the onset/development of several chronic degenerative diseases, including cardiovascular diseases, cancers, type-2 diabetes mellitus, neurodegenerative diseases, and dyslipidemias. Regarding trends in the food color industry, the use of natural pigments has increased in foods and beverages as replacements for synthetic colorants, mainly due to the health benefits of natural compounds as compared with synthetic ones. The search for new sources of natural colorants, as well as for knowledge concerning the stability of anthocyanins, is of great relevance, as this stability can vary depending on the plant matrix. To the best of our knowledge, no information regarding the thermal and light stability of anthocyanin extracts of grumixama fruits is available in the literature. 
Therefore, for the first time, anthocyanin-rich extracts of grumixama were submitted to temperatures ranging from 30 to 100 \u00b0C, and exposed to different light irradiations, namely fluorescent, incandescent and ultraviolet, to investigate the stability kinetics of anthocyanins, with a view to future applications of these anthocyanins in food processing and storage. Data obtained herein may contribute to the definition of practical conditions that would ensure the stability of the studied anthocyanins. Consequently, anthocyanin extracts of grumixama could be properly handled, which might help in maintaining their technological and functional properties.Fully ripe (red-colored peel) and morphologically perfect grumixama fruits (Eugenia brasiliensis Lam.) (5 kg) were manually collected in Salvaterra, Maraj\u00f3, Par\u00e1, Brazil (0\u00b0 45\u2032 32\u2033 S and 48\u00b0 30\u2032 44\u2033 W). The selected fruits were stored in an isothermic container and transported to the laboratory, where they were washed with running water and sanitized in a sodium hypochlorite solution (100 mg/L) for 10 min, followed by rinsing with water to remove excess chlorine. The fruits were then placed into a vacuum pack and stored at \u221218 \u00b0C until use.Moisture, total protein, total lipids extracted with petroleum ether (n\u00b0 920.39), total ash (n\u00b0 940.26), pH (n\u00b0 981.12), total soluble solids, and total titratable acidity were determined according to the Association of Official Analytical Chemists. Polyphenoloxidase (PPO) and peroxidase (POD) activities were determined by spectrophotometry at 415 and 435 nm, respectively. For anthocyanin extraction, approximately 0.1 g of sample (pulp + peel) and 50 mL of 95% ethanol: HCl 1.5 M (v/v) were homogenized for 90 s. The mixture was filtered on Whatman n\u00b0 1 filter paper and the residue was washed repeatedly with acidified ethanol until it became colorless. The total monomeric anthocyanins (TMA) were determined by spectrophotometry according to the pH differential method described by Wrolstad et al. 
Total phenolic compounds (TPCs) were also determined by spectrophotometry at 765 nm, according to the methodology proposed by Singleton and Rossi. The antioxidant capacity was determined by the ABTS radical method described by Re et al.The temperature/time and light irradiation/time binomials chosen for the stability assays of anthocyanins were set according to a preliminary study.For the thermal stability experiments, the anthocyanin-rich extracts (according to item 2.3) were placed in test tubes (glass tubes) closed with lids, and covered with aluminum foil to block light incidence. The thermal stability assays were performed at the following temperatures: 30 \u00b0C, 60 \u00b0C, 80 \u00b0C and 100 \u00b0C (\u00b12 \u00b0C). Samples were collected at time 0 for all the experiments; every 24 h for testing at 30 \u00b0C; every 1 h for testing at 60 \u00b0C; three times every 20 min and then every 1 h for testing at 80 \u00b0C; and every 10 min in the first 30 min and then every 1 h for testing at 100 \u00b0C.For the light stability experiments, the anthocyanin extracts were placed in test tubes and transferred to a light irradiation box (30 \u00d7 30 \u00d7 25 cm) with air exhausting.The thermal and light stability data were submitted to mathematical fitting using zero order (Equation (1)), first order (Equation (2)), second order (Equation (3)) and Weibull (Equation (4)) models, where C0 and C are the anthocyanin contents at time zero and time t, respectively; k is the degradation rate constant; t is time; and n is the form factor. The half-life time (t1/2) for anthocyanin degradation was estimated using Equations (5)\u2013(8). The activation energy (Ea) for the process was calculated from the angular coefficient of the linear regression of ln k versus 1/T. 
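For reference, the kinetic models named here have standard textbook forms; the following sketch assumes the paper's Equations (1)-(8) follow these conventional parameterizations (some papers write the Weibull model with a scale parameter instead of k):

```latex
% Kinetic models fitted to the stability data (C = anthocyanin content):
C = C_0 - k t                          % zero order
C = C_0 \, e^{-k t}                    % first order
\frac{1}{C} = \frac{1}{C_0} + k t      % second order
C = C_0 \exp\!\left[-(k t)^{n}\right]  % Weibull, n = form factor

% Corresponding half-life times, obtained by setting C = C_0/2:
t_{1/2} = \frac{C_0}{2k}, \qquad
t_{1/2} = \frac{\ln 2}{k}, \qquad
t_{1/2} = \frac{1}{k\,C_0}, \qquad
t_{1/2} = \frac{(\ln 2)^{1/n}}{k}
```

Note that only the first-order half-life is independent of the initial concentration C0, which is why t1/2 values are most directly comparable across studies when first-order (or Weibull) kinetics apply.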
The temperature quotient value (Q10) for the degradation of anthocyanins was calculated using Equation (10). The effect of temperature on the degradation of anthocyanins was assessed by an Arrhenius-like equation (Equation (9)), k = k0e\u2212Ea/RT, where k0 is the pre-exponential factor, Ea is the activation energy for the degradation process (kJ/mol), R is the gas constant (J/mol.K) and T is the process temperature (K).Analytical procedures were performed at least three times and the data are presented as mean and standard deviation. Data obtained in the degradation studies were submitted to one-way ANOVA, followed by Tukey\u2019s test (p \u2264 0.05). The zero order (Equation (1)), first order (Equation (2)), second order (Equation (3)) and Weibull (Equation (4)) models were fitted by nonlinear estimation to the thermal stability data using the Statistica 7.0 program. Least squares were used to estimate the model parameters, considering the Levenberg\u2013Marquardt algorithm. The quality of the mathematical models was evaluated by the coefficient of determination (R2), relative mean deviation (P) (Equation (11)) and root mean square error (RMSE) (Equation (12)), where Ypre is the predicted value, Yexp is the experimental value, and N is the number of experimental measurements.The content of TMA was 2590.99 mg CGE/100 g dw in grumixama fruit. Ara\u00fajo et al. reported TMA contents for related fruits (12.02 mg CGE/100 g dw) and for pitanga (E. uniflora) fruits (5004.44 mg CGE/100 g dw). Flores et al. identified the anthocyanins of E. brasiliensis extracts, the major compounds being delphinidin 3-glucoside and cyanidin 3-galactoside (12.2% and 76.5% of TMA, respectively), followed by cyanidin (6.1%) and delphinidin (3.4%). Other minor anthocyanins were identified with percentages less than 1.5%. Teixeira et al. and Silva et al. also identified anthocyanins in grumixama fruits. Ara\u00fajo et al. further reported TPC values for Eugenia stipitata (782.74 mg GAE/100 g dw) and grumixama (75.08 mg GAE/100 g dw). 
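The Arrhenius and temperature-quotient relations referenced as Equations (9) and (10) presumably take the standard forms below (a sketch; the exact parameterization used in the paper is assumed):

```latex
k = k_0 \, e^{-E_a/(R T)}
\quad\Longrightarrow\quad
\ln k = \ln k_0 - \frac{E_a}{R}\cdot\frac{1}{T}

% E_a follows from the slope (-E_a/R) of the linear regression
% of ln k versus 1/T across the tested temperatures.

Q_{10} = \left(\frac{k_{2}}{k_{1}}\right)^{\tfrac{10}{T_{2}-T_{1}}}
```

In this form Q10 reduces to the simple ratio k2/k1 when the two temperatures are exactly 10 degrees apart, which matches its interpretation as a reaction acceleration factor per 10-degree increase.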
The content of TPC and the antioxidant capacity were 2340.36 mg GAE/100 g dw and 728 \u00b5mol TE/100 g in grumixama fruit, respectively. In turn, regarding the antioxidant capacity, the same authors observed values of 99.5 \u00b5mol TE/100 g for grumixama fruit and 77.7 \u00b5mol TE/100 g for uvaia fruit, which were much lower than those found in our study.Anthocyanins showed the highest thermal stability at 30 \u00b0C, at which the TMA decreased by 52.8% after 168 h. In contrast, the exposure of anthocyanins to temperatures >30 \u00b0C resulted in their degradation by 50% after 10 h at 60 \u00b0C, 1.9 h at 80 \u00b0C and 0.96 h at 100 \u00b0C. The high rates of degradation could be linked to the temperature increase, which can break the glycosidic linkages of the sugars in anthocyanins, yielding the derived aglycones. Aglycone moieties are more susceptible to the effect of high temperature and, as a consequence, pigment degradation occurs faster. Similar observations have been reported by Das et al., Liu et al. and Peng et al.The R2, RMSE and P were used to evaluate the quality of the fits. The parameter values found for the zero-order, first-order, second-order and Weibull models indicated a good fit of all these models to the experimental data for the degradation of the anthocyanins. However, the highest R2 values, in combination with the lowest RMSE and P values, suggested that the Weibull model provided the best fit for the experimental anthocyanin degradation data and presented predictive capacity. In addition, this model is highly recommended due to its mathematical simplicity (number of parameters) and its great flexibility, which favors its use for practical purposes. 
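To illustrate the kind of rate-constant estimation described above, the sketch below fits a first-order model by linear regression of ln C versus t on synthetic data. This is a simplified stand-in, not the paper's procedure (the authors used nonlinear Levenberg-Marquardt fitting in Statistica 7.0 and found the Weibull model superior); the rate constant and sampling times are invented for illustration.

```python
import math

def fit_first_order(times, concentrations):
    """Estimate k for C = C0 * exp(-k * t) by least squares on ln C = ln C0 - k * t."""
    ys = [math.log(c) for c in concentrations]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(times, ys)) / sum(
        (x - mx) ** 2 for x in times
    )
    k = -slope                     # first-order degradation rate constant
    return k, math.log(2) / k      # (k, half-life t1/2 = ln 2 / k)

# Synthetic first-order decay with an assumed k = 0.07 h^-1, i.e. t1/2 of
# roughly 9.9 h -- the same order as the ~10 h 50%-loss time reported at 60 C.
k_true = 0.07
times = [0, 2, 4, 6, 8, 10, 12]
conc = [100 * math.exp(-k_true * t) for t in times]
k_est, t_half = fit_first_order(times, conc)
```

On noiseless synthetic data the regression recovers k exactly; with real spectrophotometric data the nonlinear fit and model comparison the authors describe would be needed.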
The thermal degradation curves of anthocyanins in the grumixama extract, obtained by the Weibull model for the different temperatures, and the kinetic parameters obtained by the model, showed a form factor n < 1.Thermal degradation of anthocyanins can result in a variety of degradation products and intermediate compounds, depending on the conditions of heating and the processing time. The mechanisms of anthocyanin degradation are not fully elucidated, but it is known that the degradation rates of this pigment increase during processing and storage as temperature increases.The t1/2 is the time required to decrease the concentration of the monitored compounds to half of the initial value. The data showed that the increase in temperature favored a decrease in the time required for anthocyanin degradation. For comparison, k values of up to 12.25 h\u22121 (100 \u00b0C) have been reported for the thermal degradation of black rice anthocyanins; these values were much higher than those observed in our study for the same temperatures, which corroborated the greater thermal stability of grumixama anthocyanins.The factor Q10 is the reaction acceleration factor as a function of temperature and expresses, therefore, how much the rate of a given change depends on temperature. Higher values of Q10 indicate a greater influence of temperature on degradation processes, or a greater acceleration of these processes with temperature increase. The highest Q10 value was observed in the 60\u201380 \u00b0C range (2.82), which indicated that, within this range, the degradation of anthocyanins was strongly affected by the process temperature.The activation energy Ea is the minimum energy required to start a given reaction, indicating its sensitivity to temperature. Therefore, Ea is a valuable parameter to characterize the temperature dependence of anthocyanin degradation. 
A high Ea value was found (52.67 kJ/mol), which meant high susceptibility of the anthocyanins of grumixama when exposed to high temperatures. However, the anthocyanins of grumixama were less susceptible to degradation when compared to the study by Martynenko and Chen, who observed an Ea of 61.98 kJ/mol and a Q10 value of 1.85 for the temperature range of 70 to 80 \u00b0C for anthocyanin degradation of blueberry in the hydrothermodynamic process. In studies on the anthocyanin degradation of ju\u00e7ara and \u201citalia\u201d grapes, Peron, Fraga and Antelo reported Q10 values for a temperature range of 50 to 70 \u00b0C.The mathematical models were not fitted to the TPC data, because the TPC degradation presented different behaviors as a function of temperature. Better behavior for TPC degradation was observed at 30 \u00b0C and 60 \u00b0C, requiring 168 h to reduce the TPC content to 63% at 30 \u00b0C and 9 h to reduce the TPC content to 51% at 60 \u00b0C.Ultraviolet light degraded the anthocyanins very quickly (k = 3.74 h\u22121), as it took less than 10 min for the anthocyanin content to be reduced by half. Additionally, the stability of anthocyanins was superior under fluorescent and incandescent lights when compared to most of the thermal treatments (60\u2013100 \u00b0C).The data on the degradation of anthocyanins due to light exposure were submitted to the same model fitting; the R2, RMSE and P values for the zero-order, first-order, second-order and Weibull models indicated that the Weibull model presented the best fit to predict the degradation of anthocyanins submitted to fluorescent, incandescent and ultraviolet light. 
The lower value of k observed for the anthocyanins after incandescent light exposure resulted in a higher t1/2 value (59.63 h) when compared to the t1/2 value of 45.61 h obtained under fluorescent light. In turn, ultraviolet light showed the highest k value and, consequently, the smallest t1/2 value (0.18 h).Chist\u00e9 et al. investigated the light stability of anthocyanins in another plant matrix and observed that the type of light had a marked effect on the k and t1/2 values, with t1/2 being 597 h, 306 h, 177 h and 100 h for fluorescent, incandescent, ultraviolet and infrared lights, respectively. The anthocyanins of grumixama, compared to those of the aforementioned authors, were more susceptible to degradation, according to the lower t1/2 values. The differences observed could be explained by the structural configuration of anthocyanins in different plant matrices, and also by the presence of other compounds interacting through intra- and intermolecular stacking, such as acylation, copigmentation and self-association, which may protect anthocyanins from degradation factors. The kinetic orders for the anthocyanin degradation of grumixama extracts also differed from those reported in the literature, which normally indicate that anthocyanin degradation follows zero- or first-order kinetics.The light degradation data for TPC were not fitted to the models. Extracts of Anemopsis californica evaluated at 25 and 50 \u00b0C, combined with exposure to light, showed higher TPC values than those observed for the light degradation of TPC in grumixama extracts. Related factors affecting stability have also been discussed by Zhang et al., among others.The grumixama fruits showed high contents of TMA and TPC, and promising antioxidant capacity. The results showed that the temperature range from 60 to 80 \u00b0C was critical to the stability of grumixama anthocyanins, and that temperatures below 50 \u00b0C could effectively preserve these anthocyanins. 
Incandescent and fluorescent lights weakly affected the anthocyanin stability, while anthocyanins were quickly degraded by UV light. Therefore, anthocyanin extracts from grumixama fruits should be handled at temperatures lower than 50 \u00b0C and protected from UV light exposure. Further studies are needed to evaluate the stability of the anthocyanins of grumixama fruits under different food processing conditions, such as pasteurization, or even studies concerning the gastrointestinal digestion of grumixama fruits."} +{"text": "Tuberous root formation and development is a complex process in sweet potato, which is regulated by multiple genes and environmental factors. However, the regulatory mechanism of tuberous root development is unclear.In this study, the transcriptomes of fibrous roots (R0) and tuberous roots in three developmental stages were analyzed in two sweet potato varieties, GJS-8 and XGH. A total of 22,914 and 24,446 differentially expressed genes (DEGs) were identified in GJS-8 and XGH respectively, and 15,920 differential genes were shared by GJS-8 and XGH. KEGG pathway enrichment analysis showed that the DEGs shared by GJS-8 and XGH were mainly involved in \u201cplant hormone signal transduction\u201d \u201cstarch and sucrose metabolism\u201d and \u201cMAPK signal transduction\u201d. Trihelix transcription factor (Tai6.25300) was found to be closely related to tuberous root enlargement by the comprehensive analysis of these DEGs and weighted gene co-expression network analysis (WGCNA).A hypothetical model of the genetic regulatory network for tuberous root development of sweet potato is proposed, which emphasizes that some specific signal transduction pathways like \u201cplant hormone signal transduction\u201d \u201cCa2+ signal\u201d and \u201cMAPK signal transduction\u201d and metabolic processes including \u201cstarch and sucrose metabolism\u201d and \u201ccell cycle and cell wall metabolism\u201d are related to tuberous root development in sweet potato. 
These results provide new insights into the molecular mechanism of tuberous root development in sweet potato.The online version contains supplementary material available at 10.1186/s12864-022-08670-x. Sweet potato (Ipomoea batatas L.) is a dicotyledonous plant of the family Convolvulaceae, growing in tropical, subtropical, and temperate regions; it is the most important rhizome crop after potato and cassava, and one of the most important food crops in the world.In order to verify the accuracy of the RNA-Seq results, we randomly selected 6 genes for qRT-PCR analysis. The results showed that the expression patterns of these 6 differential genes were similar to those of RNA-Seq. Genes such as WOX4 were significantly upregulated during tuberous root development, which is consistent with the results of previous studies.A series of studies have shown that the initiation and induction of root/tuber formation is affected by the environment. For potatoes, photoperiod is essential for tuber formation, and photoperiod also affects the development of the Rehmannia glutinosa tuberous root.Moreover, genes detected in the roots may also be transcribed in the leaves and then transported to the root. For example, after being transcribed in leaves, potato StBEL5 mRNA was transported through the phloem to the stolon tip for translation into protein, thereby promoting the formation of storage organs.Hormones are important signals in plant root development. In Rehmannia glutinosa and Callerya speciosa, the expression of auxin-related genes was significantly up-regulated during the tuberous root expansion stage. 
The results showed that cytokinin was involved in the proliferation and development of cambium cells, and its expression reached the highest level in the rapid growth stage of the tuberous root, which was related to the development and formation of the tuberous root/tuber. Ethylene is a key regulator of rhizome induction and development.Cellular processes involved in a series of signaling pathways are usually triggered by specific stimuli and hormones. Phospholipid signals play an important role in root growth, cell division, and hormone regulation, and the phospholipid\u2013calcium signal system was shown to regulate potato tuber formation.Calcium is one of the main nutrients and is involved in almost the whole process of plant growth, including the control of cell division, differentiation, and stress response as a second messenger. Ca2+ concentration and calcium signal-related genes were significantly up-regulated during tuberous root formation in Rehmannia glutinosa.Transcription factors play an important role in the regulation of plant growth and development and secondary metabolism. Many transcription factors have been identified to play key roles in organ development, including MADS, bHLH, MYB, NAC, and GRAS. In this study, we identified 29 transcription factors that were significantly up-regulated during the tuberous root expansion stage in the two varieties, and their expression levels increased successively (Fig.). Among them, Trihelix transcription factors have been reported to be regulated by Ca2+-dependent phosphorylation/dephosphorylation.MYBs are involved in cell cycle regulation, plant morphogenesis, cell wall synthesis, secondary metabolism, xylem/phloem differentiation, root radial pattern formation, and so on; similar roles have been reported in Callerya speciosa. 
To sum up, these results suggest that transcription factors may drive root/stem growth through cell cycle regulation, cell division, and secondary wall strengthening. The TFs revealed in this study may be important candidate genes for breeding high-yield sweet potato in the future.Sucrose and starch accumulation occurs during the bulking of storage roots; these are considered to be among the most important carbohydrates and play an important role in the formation of storage organs. Sucrose invertase and sucrose synthase are involved in the introduction and accumulation of sucrose in storage roots.The accumulation of starch occurs at the same time as the expansion of storage organs. It has been shown that the expansion of potato and lotus root tubers is highly coordinated with the accumulation of starch, and similar results have been reported in Panax notoginseng.GJS-8 and XGH are two varieties with different anthocyanin contents; GJS-8 has a higher anthocyanin content than XGH. Anthocyanins are water-soluble pigments and an important class of flavonoids. We found that a large number of genes differed significantly between the two varieties during tuberous root development. KEGG enrichment analysis showed that these DEGs were significantly enriched in phenylpropanoid biosynthesis (sot00940), flavonoid biosynthesis (sot00941), and the starch and sucrose metabolism pathway (sot00500). Phenylpropanoid biosynthesis and flavonoid biosynthesis are also significantly enriched in the process of anthocyanin biosynthesis. In addition, IbMYB1 controls the biosynthesis of anthocyanins in sweet potato, and related enrichment has been observed during Aronia melanocarpa fruit development.Tuberous root development is a complex regulatory process, which is affected by many factors. 
In this study, through transcriptome analysis combined with previous research results, a hypothetical model of the sweet potato tuberous root development regulatory network is proposed (Fig.).Integrated transcriptomic and WGCNA analyses were performed in the study; there were 15,920 differential genes shared by XGH and GJS-8. GO and KEGG pathway enrichment analysis revealed that these DEGs were mainly involved in plant hormone signal transduction, starch and sucrose metabolism, MAPK signal transduction, light signaling, phospholipid signaling, calcium signaling, transcription factors, the cell wall, and the cell cycle. Furthermore, WGCNA and qRT-PCR analysis suggested that Tai6.25300 plays an important role in tuberous root development in sweet potato. A hypothetical model of a genetic regulatory network associated with tuberous roots in sweet potato is put forward. The tuberous root development of sweet potato is mainly attributed to cell differentiation, division, and expansion, which are regulated and promoted by certain specific signal transduction pathways and metabolic processes. These findings not only provide novel insights into the molecular regulation mechanism of tuberous root expansion, but also provide a theoretical basis for the genetic improvement of sweet potato.Two sweet potato varieties, GJS-8 and XGH, were used in this study. They were planted in the experimental farm of the Hepu Institute of Agricultural Science in Beihai, Guangxi. At 90\u2009days after planting, samples were collected following the method of Ku et al.; fibrous roots (R0) and tuberous roots at three developmental stages were sampled.A conventional Trizol method was used to extract RNA from the samples. The concentration and purity of total RNA were determined by a NanoPhotometer\u00ae spectrophotometer. RNA integrity was assessed using the RNA Nano 6000 Assay Kit of the Bioanalyzer 2100 system. 
Sequencing libraries were generated using the NEBNext\u00ae UltraTM RNA Library Prep Kit for Illumina\u00ae. 3\u2009\u03bcg total RNA from each sample was used as the input material; fragmentation was carried out using divalent cations under elevated temperature in NEBNext First Strand Synthesis Reaction Buffer (5X). First strand cDNA was synthesized using random hexamer primers and M-MuLV Reverse Transcriptase (RNase H-). Second strand cDNA synthesis was subsequently performed using DNA Polymerase I and RNase H. Remaining overhangs were converted into blunt ends via exonuclease/polymerase activities. After adenylation of the 3\u2032 ends of the DNA fragments, NEBNext Adaptors with hairpin loop structure were ligated to prepare for hybridization. In order to select cDNA fragments of preferentially 250\u2009~\u2009300\u2009bp in length, the library fragments were purified with the AMPure XP system. Then 3\u2009\u03bcl USER Enzyme was used with size-selected, adaptor-ligated cDNA at 37\u2009\u00b0C for 15\u2009min followed by 5\u2009min at 95\u2009\u00b0C before PCR. PCR was then performed with Phusion High-Fidelity DNA polymerase, universal PCR primers and Index (X) Primer. At last, PCR products were purified (AMPure XP system) and library quality was assessed on the Agilent Bioanalyzer 2100 system. Clean reads were obtained by removing reads containing adapters, reads containing poly-N and low-quality reads from the raw data. The clean reads were then aligned to the sweet potato genome (ipoBat4; http://public-genomes-ngs.molgen.mpg.de/cgi-bin/hgGateway?hgsid=9052&clade=plant&org=Ipomoea+batatas&db=ipoBat4). Genes with P-value <\u20090.05 and | log2 (FoldChange) |\u2009>\u20091 obtained by DESeq2 were considered DEGs.Gene Ontology (GO) enrichment analysis of the DEGs was implemented using the clusterProfiler R package, and the gene length bias was corrected during this process. 
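The DEG criterion just described (DESeq2 P-value < 0.05 and |log2 fold change| > 1) amounts to a simple filter; a minimal sketch follows, with invented gene names and values (not data from the study).

```python
import math

def is_deg(p_value, fold_change):
    """DEG criterion used in the study: P-value < 0.05 and |log2(FoldChange)| > 1."""
    return p_value < 0.05 and abs(math.log2(fold_change)) > 1

# Invented example values: (P-value, fold change) per gene.
results = {
    "geneA": (0.001, 4.0),  # significant, |log2 FC| = 2       -> DEG
    "geneB": (0.20, 3.0),   # not significant                  -> not a DEG
    "geneC": (0.01, 1.5),   # |log2 FC| ~ 0.58, below cutoff   -> not a DEG
    "geneD": (0.03, 0.3),   # down-regulated, |log2 FC| ~ 1.74 -> DEG
}
degs = [gene for gene, (p, fc) in results.items() if is_deg(p, fc)]
```

Note that the absolute value makes the cutoff symmetric: a fold change of 0.3 (down-regulation) passes just as a fold change above 2 would.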
KOBAS software was used to test the statistical enrichment of the DEGs in KEGG pathways.The DEGs detected with DESeq2 were combined and the TPM values for the 24 samples were determined. Each TPM value was increased by 0.01 and further transformed by a log10 calculation. The converted data were analyzed with the R package WGCNA (version 1.66), with a power value of 9.For qRT-PCR, RNA was extracted with Trizol\u00ae Reagent and then reverse transcribed into cDNA with HiScript III SuperMix for qPCR (+gDNA wiper). The primers of the selected genes were designed using Primer 5 software (Table S), and UBI was used as the reference gene. qRT-PCR was carried out using the SYBR Premix Ex Taq II Kit on a Bio-Rad iQ5 Real-time PCR System. Ten \u03bcl of reaction solution contained 5\u2009\u03bcl SYBR Green I Master, 1\u2009\u03bcl specific primer, 1\u2009\u03bcl cDNA sample, and 3\u2009\u03bcl RNase-Free H2O. A one-third dilution of the cDNA sample was used, and the reaction conditions were: 30s at 95\u2009\u00b0C followed by 40\u2009cycles of 30s at 95\u2009\u00b0C and 30s at 60\u2009\u00b0C. Each sample had three biological replicates with three technical replicates for each biological replicate. The relative expression level was calculated by the 2^-\u0394\u0394Ct method.Additional file 1."} +{"text": "Road infrastructure is one of the most vital assets of any country. Keeping the road infrastructure clean and unpolluted is important for ensuring road safety and reducing environmental risk. However, roadside litter picking is an extremely laborious, expensive, monotonous and hazardous task. Automating the process would save taxpayers money and reduce the risk for road users and the maintenance crew. This work presents LitterBot, an autonomous robotic system capable of detecting, localizing and classifying common roadside litter. We use a learning-based object detection and segmentation algorithm trained on the TACO dataset for identifying and classifying garbage. We develop a robust modular manipulation framework by using soft robotic grippers and a real-time visual-servoing strategy. 
This enables the manipulator to pick up objects of variable sizes and shapes even in dynamic environments. The robot achieves greater than 80% classified picking and binning success rates across all experiments, which was validated on a wide variety of test litter objects in static single and cluttered configurations and with dynamically moving test objects. Our results showcase how a deep model trained on an online dataset can be deployed in real-world applications with high accuracy through the appropriate design of a control framework around it. Roadside litter poses a severe safety and environmental risk for road users, wildlife and the maintenance crews who clean it up (see Fig.). Litter generally refers to any misplaced solid waste. It appears in different formats including, but not limited to, sweet wrappers, drinking containers, fast food packaging, cigarette ends and small bags. Many countries around the world have seen an increase in roadside litter. There are two key challenges with garbage disposal. First, the item has to be collected from the disposed location and then sorted for its appropriate recycling process. Traditional garbage cleaning is performed by paid workers, organisations, volunteers and charities sent on-site to pick up items alongside the road. Manual picking is a tedious, boring and repetitive task, and it exposes workers to hazards from passing traffic. To automate the cleaning process and improve the safety of workers, litter detection algorithms and robotic systems have been developed over the past few years. Some of the first works used novel sensing technologies for waste identification. For example, an automatic trash detection algorithm using an ultrasonic sensor was proposed, and sorting systems have also been explored. One of the first implementations of a robotic device for litter picking, the ZenRobotics recycler robotic system, uses machine learning and a robotic manipulator to pick recyclable objects from a conveyor belt. 
This paper proposes a cost-effective strategy for litter picking using a robotic manipulator. Our contributions are as follows: \u2022 A modular approach to robotic development to minimise costs and development time. The robot is comprised of inexpensive or off-the-shelf components which are improvable over time. \u2022 Distinct from other works, we simplify and improve the robustness of our manipulation system by using soft robotic grippers and a real-time visual-servoing controller, requiring only a 2D colour camera for picking and binning objects of variable sizes and shapes even in dynamic environments. \u2022 We use a learning-based object detection and segmentation algorithm trained on the online TACO dataset for identifying and classifying garbage, making our framework easily transferable to additional objects and classes by simply retraining the object detection network. Our results indicate a high grasp success rate and good recycling accuracy. The rest of the paper is organized as follows. The roadside litter-picking robot (LitterBot) is shown in Fig. The Robotics Operating System (ROS) is employed for the software architecture running on a control laptop. Control of the manipulator is done through the built-in motion and kinematic controllers. The soft gripper employs two Fin Ray fingers driven by a single servo motor. These structures have several advantages compared to other soft grippers, such as ease of use, minimal actuation and the capability to grasp a wide variety of objects. The \u201cV\u201d shape formed between the two fingers helps centre grasped objects. The material used for the Fin Ray fingers is Dragon Skin 30 silicone. The mould for casting was 3D printed. The fingers are attached to the 3D-printed PLA gripper base (see Fig.). The vision system uses the Detectron2 version of Mask Region-based Convolutional Neural Network (Mask R-CNN) for litter instance segmentation and classification. This outputs both masks and bounding boxes. 
To minimise the development cost of the modular integration, we deploy the ResNet50 backbone due to its trade-off between high average-precision performance and inference time on the COCO dataset compared to other pre-trained network weights available. The network was trained on the Trash Annotations in Context (TACO) dataset, which consists of annotated images of litter taken in diverse environments. Principal component analysis (PCA) is used on the masks to determine the angular orientation of the objects. The principal axis corresponds to the long-ways orientation of the target mask. The covariance matrix of the segmented grey-scale mask data is used to find the first eigenvalue \u03bb, which in turn is used to solve for the corresponding eigenvector v. The angle of the target object is obtained by taking the inverse tangent of the eigenvector, constrained between \u221290 and 90\u00b0. The robot employs a velocity-based eye-in-hand visual-servoing scheme for litter picking (see Fig.). Given a target pixel coordinate xt, the pixel error e = xt \u2212 xr to the reference pixel coordinates xr, corresponding to the centre of the gripper, is multiplied by a proportional gain Kp. This forms the end-effector Cartesian velocity control input to the robot, U(t) = Kp e. Both Kp and the reference pixel coordinates were found empirically. Pixel target coordinates are taken as the centre of the target\u2019s Detectron2 bounding box. The robot picks the target based on the detected object with the largest mask. Detectron2 runs at a frequency of 7\u2009Hz, hence a low proportional gain is implemented to retain control stability. The control input U(t) is given to the UR10 in-built function speedl to achieve closed-loop control. Once the error is sufficiently small, the visual-servoing process is terminated; a pixel error e < 2 was empirically found to give reasonable grasping accuracy. The robot then proceeds to grab the object. For moving to the object, the final X and Y Cartesian end-effector positions are taken; however, the Z position is assumed to be fixed and estimated beforehand. 
The angle estimated by PCA is then used to re-orient the gripper as it drops, such that the thinnest width of the object corresponds to the mouth of the Fin Ray gripper. A value of Kp = 0.0005 was used for picking and binning litter. The picking process is broken down into three distinct steps. A slight drop in performance was observed when the view is cluttered, which suggests a larger dataset might be required for real-world deployment. The success rate of picking and binning is consistently at or above 80% across the various experiments. The use of an underactuated, compliant and adaptable gripper allows for the robust grasping of arbitrarily shaped objects while requiring minimal control. Pixel-based visual-servoing also has several advantages over open-loop control, such as being less sensitive to frame-transformation noise and the ability to track dynamic objects, even allowing for the re-picking of previously dropped objects. In this paper, we introduce the LitterBot, a roadside litter-picking robot prototype that is economically and computationally cost-effective. The robot uses the off-the-shelf Mask R-CNN (Detectron2) network for litter instance segmentation trained on the relatively small TACO dataset. Instance segmentation not only allows for the localisation of the detected objects within the image scene but also inexpensive pose estimation using PCA. When augmented with real-time 2D pixel visual-servoing using the localised information and a soft-robotic gripper, the robot is highly successful in picking up and correctly binning a wide variety of objects with drastically different weights, geometries, materials and recyclability. Unlike prior works, which require 3D point-cloud images, the LitterBot requires only 2D images for planning. The deliberate use of modular components has the advantage of being improvable over time, and is easily extendable to more complex mechanisms and algorithms. 
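The orientation-estimation and servoing steps above can be sketched in a few lines. The following is a minimal illustrative numpy sketch, not the authors' implementation: the gain Kp = 0.0005 and the pixel-error threshold of 2 are taken from the text, while the function names, the use of arctan2, and the image-axis conventions are our assumptions.

```python
import numpy as np

def mask_orientation(mask):
    """Angle (degrees, constrained to [-90, 90]) of the principal axis of a
    binary segmentation mask, via PCA on the pixel coordinates.

    The eigenvector of the covariance matrix with the largest eigenvalue
    points along the long axis of the object, as described in the paper.
    """
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs, ys], axis=0).astype(float)
    coords -= coords.mean(axis=1, keepdims=True)   # centre the point cloud
    cov = np.cov(coords)                           # 2x2 covariance matrix
    _, eigvecs = np.linalg.eigh(cov)               # eigenvalues ascending
    v = eigvecs[:, -1]                             # principal eigenvector
    angle = np.degrees(np.arctan2(v[1], v[0]))
    # constrain to [-90, 90] as in the paper
    if angle > 90:
        angle -= 180
    elif angle < -90:
        angle += 180
    return angle

def servo_step(xt, xr, kp=0.0005):
    """One proportional visual-servoing step: U = Kp * (xt - xr).

    xt: target pixel (centre of the detected bounding box, assumed name);
    xr: reference pixel (centre of the gripper). Returns the XY velocity
    command and whether the pixel error is below the paper's threshold of 2.
    """
    e = np.asarray(xt, float) - np.asarray(xr, float)
    u = kp * e                          # Cartesian end-effector velocity
    done = np.linalg.norm(e) < 2.0      # termination condition from the text
    return u, done
```

In the real system the velocity command would be forwarded to the UR10 `speedl` interface each Detectron2 frame; here the loop logic is only sketched.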
The robot, however, is still a prototype and has scope for improvements before it is viable for real-world deployment. Although the gripper is adaptable, it is limited to objects that can fit within its grasp. One solution would be to incorporate an additional suction-cup gripper so that larger objects, such as pizza boxes, can also be grasped. Future versions of the Fin Ray gripper will also use stiffer materials to increase the maximum graspable weight. A slight underperformance was also observed when multiple objects were present in the camera view. A larger dataset will be beneficial in the future for increasing the robustness of the vision system. Environmental factors such as variation in weather and lighting conditions, as well as background and scenery, will also be addressed in future work through more advanced and complex computer vision algorithms. Other networks, such as YOLACT (used in prior work), will also be evaluated. Further future work includes mounting the robot on a mobile robotic platform such that it can be autonomously deployed in the field. Two control schemes will also be considered, compared and evaluated. The first is the \u201cstop and bin\u201d approach, which is already implemented in this work. The second is one in which the robot can move dynamically whilst picking and binning. Algorithms for obstacle avoidance and picking order will also be developed such that safety and energy efficiency can be further improved, and to account for moving cars and pedestrians. Depth information will also be included in the servoing approach so that the robot can grasp objects on inclines and non-planar surfaces. The improved robot will then be field-tested to fully evaluate the efficacy of the LitterBot. Overall, the simple yet robust and inexpensive control framework for the LitterBot performs well in cluttered and dynamic environments, thus showing promise for deploying autonomous systems for roadside litter-picking. 
This can greatly reduce roadside pollution as well as reduce the costs, risks and hazards faced by road users and maintenance crews."} +{"text": "This study aimed to investigate the influence of intraocular lens (IOL) weight on long-term IOL stability in highly myopic eyes. A total of 205 highly myopic cataract eyes of 205 patients implanted with the MC X11 ASP or 920H IOL were included in this retrospective study. Eyes were divided into 3 subgroups according to the IOL power: low (\u2265-5 to <5 D), medium (\u22655 to <14 D), and high (\u226514 D) IOL power. At 3 years after surgery, IOL decentration and tilt, high-order aberrations, and the anterior capsular opening (ACO) area were measured. The influence of IOL weight on long-term IOL stability was evaluated. Group B had a significantly greater IOL weight than Group A (28.31 \u00b1 2.01 mg vs. 25.71 \u00b1 4.62 mg, P < 0.001). Correspondingly, Group B presented significantly greater overall and inferior decentration than Group A, especially for low and medium IOL power. In both groups, overall and vertical decentration was significantly correlated with IOL weight. Group B showed a significantly greater ACO area than Group A (P < 0.05). Multivariate analysis showed that decentration in Group A was affected by IOL weight, while decentration in Group B was affected by IOL weight and AL. Higher IOL weight may lead to greater long-term IOL decentration in highly myopic eyes, while the haptic design may play a role in anterior capsular contraction. The world is going myopic. Intraocular lens (IOL) malposition is a common post-operative complication of highly myopic cataract eyes. 
The purpose of this study was to investigate the influence of IOL weight on long-term IOL stability by comparing the 3-year post-operative IOL stability between two monofocal IOLs (MC X11 ASP and 920H) in highly myopic eyes. This retrospective study was conducted at the Eye and Ear, Nose, and Throat Hospital, Fudan University, Shanghai, in accordance with the tenets of the Declaration of Helsinki. All procedures were approved by the institutional review board of the Eye and Ear, Nose, and Throat Hospital. Signed informed consents were obtained from all participants before cataract surgery for the use of their clinical data. The study was affiliated with the Shanghai High Myopia Study (www.clinicaltrials.gov, accession number: NCT03062085). We included highly myopic cataract patients who underwent uneventful cataract surgery and implantation of MC X11 ASP or 920H IOLs (between January 2017 and April 2018) and completed the 3-year post-operative follow-up. The exclusion criteria were the following: (1) presence of any other oculopathy, such as intraoperative floppy iris syndrome, pseudoexfoliation, corneal diseases, strabismus, uveitis, or glaucoma; (2) prior intraocular procedures or trauma; (3) severe intraoperative or post-operative complications, such as posterior capsule rupture or failure of continuous circular capsulorhexis; (4) pupil diameter of <6 mm after sufficient dilation. A total of 205 highly myopic eyes of 205 patients were included in the study, which were divided into the following groups: 86 eyes in Group A (MC X11 ASP) and 119 eyes in Group B (920H). 
For both groups, eyes were further classified into 3 subgroups according to the implanted IOL power: \u2265-5 to <5 D (low IOL power), \u22655 to <14 D (medium IOL power), and \u226514 D (high IOL power). Prior to surgery, all patients underwent complete ophthalmic examinations including assessment of uncorrected visual acuity and best-corrected visual acuity, slit-lamp examination, corneal topography, AL measurements, fundoscopy, B-scan ultrasonography, and optical coherence tomography. IOL power was calculated using the Haigis formula. The weights of the two IOLs were obtained from the corresponding manufacturers. All surgeries were performed by a single, experienced surgeon (Prof. YL) following a standard procedure. A 2.6 mm temporal clear corneal incision was made, which was followed by a 5.5 mm continuous curvilinear capsulorhexis, hydrodissection, and phacoemulsification. The IOL was inserted into the capsular bag and aligned with the center. After thorough removal of the viscoelastic from above and below the IOL, the IOL position was reconfirmed, and then the incision was hydrated. As post-operative treatment for all patients, topical prednisolone acetate and levofloxacin were prescribed 4 times daily for 2 weeks, and pranoprofen eyedrops 4 times daily for 4 weeks. Three years after surgery, all patients underwent ophthalmic examinations, including assessment of UCVA and BCVA, OPD-scan examination, and slit-lamp anterior segment photography. IOL decentration and tilt were obtained with the OPD-Scan III aberrometer after the pupil was dilated until the edge of the IOL optics was visible, following the method described in our previous study. To measure the area of the anterior capsular opening (ACO), images were taken using a Topcon slit lamp connected to a digital camera. The ACO area was measured with ImageJ. 
In brief, the ACO region was circled manually, and the ACO area was calculated with a scale set according to each patient's corneal diameter, which was measured by Pentacam HR pre-operatively. All statistical analyses were performed using SPSS version 22. Continuous data are presented as mean \u00b1 SD. Comparisons of continuous variables between two groups were assessed using Student's t-test. Categorical variables were compared using the \u03c72 test. Spearman's correlation analyses were used to analyze relationships between discontinuous variables. Backward stepwise multivariate linear regression analysis was performed to identify the factors that influenced overall and vertical decentration for Groups A and B, with age, sex, eye laterality, AL, IOL weight, and ACO area as independent factors and IOL decentration as the dependent factor, with adjustment for the interaction between AL and ACO or IOL weight. P-values <0.05 were considered statistically significant. The characteristics of all included patients are shown in Table, with no significant between-group differences at baseline (P > 0.05). Significantly greater IOL weights were found in Group B compared to Group A (P < 0.001). Post-operative UDVA and CDVA did not show significant differences between the two groups at 3 years after surgery (both P > 0.05). In terms of intraocular HOAs, the total HOAs at both 4 and 6 mm pupil diameters were significantly greater in Group B than in Group A (4 mm pupil: P = 0.04; 6 mm pupil: 0.96 \u00b1 0.89 \u03bcm vs. 0.72 \u00b1 0.92 \u03bcm, P = 0.048), as were other higher-order aberration components (4 mm pupil: P = 0.047; 6 mm pupil: 0.57 \u00b1 0.47 \u03bcm vs. 0.36 \u00b1 0.32 \u03bcm, P = 0.044). At 3 years after surgery, no significant differences were identified between the two groups for horizontal decentration and tilt (both P > 0.05); however, Group B presented significantly greater overall and vertical decentration than Group A. 
While in the high IOL power subgroup, Group A showed slightly greater weight than Group B (P < 0.05), but no between-group difference was found for overall and vertical decentration (both P > 0.05). Moreover, no significant difference was found for horizontal decentration or tilt between the two IOL groups, regardless of the IOL power range. In both Group A and Group B, overall decentration was positively correlated with IOL weight (r = 0.471, P < 0.001 and r = 0.192, P = 0.037, respectively), and vertical decentration was negatively correlated with IOL weight (r = \u22120.312, P = 0.003 and r = \u22120.2, P = 0.03, respectively), indicating inferior displacement with heavier IOLs. As to the ACO area, Group B presented a significantly larger ACO area than Group A (P < 0.05). In Group A, both greater overall and inferior decentration were significantly associated with greater IOL weight. In Group B, greater overall and inferior decentration were associated with greater IOL weight and longer AL after adjustment for the interactions between AL and ACO or IOL weight. Capsular tension rings (CTRs) are commonly used to maintain the post-operative integrity and stability of the capsular bag in order to stabilize the IOL position. Moreover, haptic design may partly affect long-term IOL stability. Our previous study showed that capsular contraction syndrome occurred more frequently in the highly myopic population. Worse IOL stability can lead to higher intraocular HOAs, which consequently affects patients' visual quality. Notably, this study proposed an interesting topic: to explore the potential relationship between IOL weight and long-term IOL decentration in highly myopic eyes. Since this article is a retrospective study, prospective, randomized, and multicentered trials with controlled confounding factors would be needed to better investigate this topic. In terms of the long-term clinical performance of the two different IOLs, the significant difference lies in HOAs rather than post-operative visual acuity. 
A follow-up study on subjective perceptions could be an excellent direction for further research. In conclusion, heavier IOLs may lead to greater long-term decentration as well as higher HOAs post-operatively in highly myopic eyes. Our results may provide helpful advice for cataract surgeons on IOL selection among highly myopic patients, especially extremely myopic patients, and thus improve their long-term visual satisfaction. The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. This retrospective study was conducted at the Eye and Ear, Nose, and Throat Hospital, Fudan University, Shanghai, in accordance with the tenets of the Declaration of Helsinki. All procedures were approved by the Institutional Review Board of the Eye and Ear, Nose, and Throat Hospital. Signed informed consents were obtained from all participants before cataract surgery for the use of their clinical data. The study was affiliated with the Shanghai High Myopia Study (www.clinicaltrials.gov, accession number: NCT03062085). The patients/participants provided their written informed consent to participate in this study. This work was supported by the Science and Technology Innovation Action Plan of the Shanghai Science and Technology Commission, the Clinical Research Plan of the Shanghai Shenkang Hospital Development Center, the Double-E Plan of the Eye & Ear, Nose, and Throat Hospital (SYA202006), the Shanghai Municipal Key Clinical Specialty Program (shslczdzk01901), and the Fudan University Outstanding 2025 Program. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. 
Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "The success of blockchain technology in cryptocurrencies reveals its potential in the data management field. Recently, there is a trend in the database community to integrate blockchains and traditional databases to obtain security, efficiency, and privacy from the two distinctive but related systems. In this survey, we discuss the use of blockchain technology in the data management field and focus on the fusion system of blockchains and databases. We first classify existing blockchain-related data management technologies by their locations on the blockchain-database spectrum. Based on the taxonomy, we discuss three types of fusion systems and analyze their design spaces and trade-offs. Then, by further investigating the typical systems and techniques of each type of fusion system and comparing the solutions, we provide insights of each fusion model. Finally, we outline the unsolved challenges and promising directions in this field and believe that fusion systems will take a more important role in data management tasks. We hope this survey can help both academia and industry to better understand the advantages and limitations of blockchain-related data management systems and develop fusion systems that meet various requirements in practice. Blockchain technology has come into people\u2019s view with the release of the Bitcoin white paper\u00a0 in 2008.Decentralization. There is no central node in a blockchain system and every node in the network holds a replica of the data. In this way, the blockchain eliminates the risks that come with a centralized storage schema in traditional databases, i.e., malicious or failed central storage may cause the loss of data.Immutability. 
Once data are appended to the blockchain and confirmed by the majority of the chain\u2019s participants, they can never be replaced or reversed, as the records are linked one after another with hash values. This marks blockchains as different from regular databases, in which information can be easily edited or deleted. Tamper-Proof. When mining a new block, metadata of current system states and corresponding proofs are generated and distributed to the network with the replication of the block. Since the proof is guaranteed by cryptographic methods, any tiny alteration to the data will lead to a failure of validation. If there is any conflict during block validation, a participant can immediately recognize that the block has been tampered with and can refuse the block to protect the security of the data. Provenance. Given the immutability of the blockchain, the only accepted way to modify what is already on the chain is to create a new log and append it to the chain to declare the invalidity of the previous data. This mechanism ensures that every modification of a data entry is recorded as a trail, from which one can clearly obtain the historical status of the data. Despite the strong guarantees in data security, blockchain is still far from an ideal data management system. It suffers from low performance, high resource consumption, and potential privacy concerns. Performance. With its underlying chain structure, a blockchain has to process each transaction serially. Moreover, other participants validate a received block by replaying the transactions in it, which is also a sequential process. These two linear transaction-processing steps have a significant impact on the blockchain\u2019s performance. It is reported that Bitcoin, as a representative blockchain system, only achieves a throughput of 7 transactions/second. In contrast, a commercial database system can easily process 2000 to 56,000 transactions in one second. Resource Consumption. 
On one hand, as transactions accumulate, the append-only ledger consumes more and more storage, which is a burden for devices with limited storage capacity such as smartphones or even personal computers. On the other hand, the mining procedure requires participants to compete with others to solve a specific problem, while only one of them wins the right to append a block, which wastes massive energy and computing resources. Privacy Issues. Every participant in a blockchain network holds a full copy of the data due to the need for verification. However, this comes at the cost of privacy concerns. In real-world business applications, companies never want collaborators or customers to access their sensitive information, whereas this goal can be easily achieved by leveraging views in databases. Apparently, blockchain technology has both strengths and weaknesses, and neither it nor a database can perfectly undertake all the requirements of modern data management tasks. Fortunately, blockchains and databases share so many similar technical concepts and solutions that it is possible to combine the strengths of security, efficiency, and privacy from both sides. For example, transactions in both systems result in state changes and should hold ACID properties to ensure their reliability. Smart contracts in blockchains correspond to stored procedures in databases, as both aggregate transactions. Moreover, both systems adopt indexes to satisfy various requirements, i.e., tamper-proofness and verifiability for blockchains, and efficient queries for databases. The huge success of blockchain technology raises people\u2019s interest in applying it to the data management field. A blockchain is essentially a novel data management system, which is maintained by multiple participants (or nodes). 
Compared to traditional database systems, there may be some participants behaving unexpectedly, but blockchains hold some promising properties under such circumstances to protect the integrity of data. We have noticed that there are massive efforts trying to integrate blockchain and database technologies to develop a fusion system that protects data integrity and processes transactions effectively at the same time. Though the integration of blockchains and databases has attracted more and more attention, there are few discussions about it. At present, most of the surveys about blockchains concentrate on blockchain technologies themselves. Difference with Existing Works. Existing surveys only focused on some specific aspects of blockchains in the data management field, such as storage, query processing, and sharding, and did not systematically cover the fusion of blockchains and databases. The trend of fusion between blockchains and databases has also been noticed and analyzed in other works. Based on comparisons between blockchains and distributed databases, Ruan et al. discuss the design spaces of such systems. Contributions. In this paper, we conduct a comprehensive survey on the integration of blockchains and databases in the data management field. To sum up, we make the following contributions. We propose the blockchain-database spectrum, a framework to analyze the works about blockchains in the data management field, and recognize the trend of integrating blockchains and traditional databases. We further identify three typical models of the fusion, namely database-oriented blockchains, blockchain-oriented databases, and hybrid systems, and conduct a comprehensive comparison of the three types of systems in their design spaces and trade-offs. 
Besides, we summarize and evaluate the techniques used in each model, which provides insights into each fusion model. We review each of the representative systems of the three categories. Based on exhaustive research and analysis of existing works, we discuss the limitations of existing methods for blockchain-related data management systems and provide future research directions. The rest of this paper is organized as follows. First, the preliminaries are provided in Sect.; we then discuss database-oriented blockchains, blockchain-oriented databases, and hybrid systems in Sects. We begin this section with some basic information about blockchains and databases to provide a primary impression of the two different but relevant technologies. Blockchain is an innovative data storage and management technology that integrates a variety of established technologies, including high-performance data storage, peer-to-peer networks, cryptography, consensus protocols, etc. The concept of blockchain originated from Bitcoin, which was proposed by Satoshi Nakamoto, and most blockchain systems inherit its design. Data layer. To efficiently organize various data in the blockchain, the data layer contains elements such as the data structure, transaction model, index data, state data, and the persistent storage scheme. Network layer. To enable communication between nodes in a decentralized blockchain network, the P2P protocol plays an important role in the network layer. The content transmitted between nodes mainly consists of transaction data and block data. Consensus layer. Unlike centrally governed databases, a blockchain uses a distributed consensus algorithm to ensure that nodes in the network that do not trust each other can agree on the same ledger. The use of consensus algorithms improves the blockchain\u2019s ability to cope with crash faults or Byzantine faults, giving the blockchain a higher level of security than traditional databases. Contract layer. 
Containing various scripts, algorithms and smart contracts, it is the foundation of blockchain programmability. Application layer. Users can easily develop new decentralized and cryptographically secure blockchain-based applications using the APIs provided by the blockchain. For a clear understanding of the blockchain hierarchy, we abstract the blockchain into 5 layers in Fig. Permissionless and Permissioned. Blockchains can be broadly classified into two categories: permissionless blockchains and permissioned blockchains. Permissionless blockchains are a type of blockchain in which anyone can participate in the network without any prior approval or authorization. They are often referred to as public blockchains, as the network is open to the public. Examples of popular permissionless blockchains include Bitcoin and Ethereum. Permissioned blockchains are blockchains that require permission to join and participate in consensus. Hyperledger Fabric is a representative permissioned blockchain. Innovations on Blockchain. Blockchains have achieved great success and promoted many developments in different fields. However, traditional blockchain systems still suffer from problems of low throughput and high latency. There have been several innovations in consensus algorithms and transaction concurrency to address these issues. The consensus algorithm is one of the core technologies of blockchain, which describes how the peers reach an agreement on the state of the world. The efficiency of the consensus algorithm impacts the performance of the entire blockchain system. Here, we introduce some BFT-based protocols; for example, Castro et al. proposed the Practical Byzantine Fault Tolerance (PBFT) protocol. The purpose of concurrency control is to optimize transaction processing, which involves improving the efficiency of transaction validation, execution, and confirmation on blockchains. 
Take Hyperledger Fabric for instance: although it executes transactions in parallel in the execution phase, the throughput cannot improve further, especially when there is high contention among transactions. To be more specific, though all the conflicting transactions are simulated in the execution phase, only one of them can eventually be submitted in the final validation phase, and the others have to be aborted. The solution is to reduce the abort rate; Fabric++ uses reordering to this end. SQL databases. As one of the most widely used database families supporting the relational model, SQL databases usually provide full ACID guarantees and are accessed through SQL statements. NoSQL databases. To achieve better horizontal scalability, many databases abandon the relational model and support for SQL statements, replacing them with support for semi-structured and unstructured data. These databases are called NoSQL databases. Unlike relational databases, NoSQL databases come in multiple types: key-value databases (e.g., LevelDB), column-family databases, document databases, graph databases, time series databases, and so on. NewSQL databases. This new type of database management system (DBMS) is designed to provide a NoSQL system\u2019s high scalability and performance while retaining the ACID transactional characteristics of a traditional relational database management system (RDBMS). NewSQL systems can use both relational and non-relational data models. Mainstream NewSQL systems include Google Cloud Spanner, CockroachDB, TiDB, and Amazon Aurora. These NewSQL systems aim to combine the advantages of both worlds. In general, blockchains and databases are different data management technologies with different features and application scenarios. Blockchains have the advantage of security for applications requiring it, while databases have the advantage of performance and usability for large-scale data processing and highly concurrent access. Database technology has been developed for decades. 
Unlike blockchain, it supports features such as ACID properties, complex queries, low transaction latency, high throughput, and scalability. Mainstream databases are divided into three categories: SQL databases, NoSQL databases, and NewSQL databases.

Though blockchains and databases are essentially designed for different goals, both systems have the capability to manage data. From this point of view, we present our blockchain-database spectrum in Fig. In this framework, blockchains lie at the security end of the spectrum, while databases are at the other, performance end. Besides both ends, there are also systems located in the middle parts of the spectrum. These systems are fusions of blockchains and databases to varying degrees, and can be further classified into three major types, namely database-oriented blockchains, blockchain-oriented databases, and hybrid systems. In this survey, we focus on these fusion systems.

Database-Oriented Blockchains. The database-oriented blockchains are at the blockchain side of the blockchain-database spectrum. Like blockchains, they retain the essential chain-like structure of ledgers, which keeps track of data modifications and ensures data security. Besides security, database-oriented blockchains also pursue features that provide a better experience in real-world practice, just as databases do, such as easy-to-use APIs, higher throughput, lower resource consumption, and assurance of secret data's privacy.
To sum up, database-oriented blockchains are a collection of systems that are built on top of blockchains and integrated with database features. As revealed in the spectrum, the most straightforward and widely used solution is to equip the systems with mature techniques from databases, including sharding [53-59].

Blockchain-Oriented Databases. Opposite to the database-oriented blockchains, the blockchain-oriented databases are closer to databases. Such systems pay more attention to processing performance and usually support more complicated data models, such as the relational model. Some of them also support SQL-like interfaces, making them more convenient for application developers. To achieve such a goal while keeping a basic security guarantee, blockchain-oriented databases are built upon an existing database instance while learning the lessons of the hash chain from blockchains; that is, they usually contain a blockchain layer or a blockchain middleware. We review representative blockchain-oriented databases and introduce the technical details in Sect. We also notice that there is another way to build a database system that supports verifiable data processing, which results in the so-called ledger databases [-89]. However, such ledger databases typically rely on a centralized trusted party, so we do not regard them as blockchain-oriented databases and ignore them in the rest of this paper.

Hybrid Systems. Such systems are located around the very center of the spectrum, which means they reach a balance between security and performance. Note that this can be interpreted in two ways. The ideal one is to achieve decentralized data security as blockchains and high throughput as commercial databases at the same time; however, this is an unreachable target at present, and no system has been recognized to provide a perfect solution to this problem. On the other hand, equally combining blockchains and databases into a single system is also a way to reach the balance [-46], and we use hybrid systems to refer to the latter systems in the rest of this survey.
The efforts to explore the data management possibilities of blockchains have come a long way. At the early stage of the exploration, many researchers tried to adopt blockchains in real application scenarios, which led to the earliest database-oriented blockchains. For example, MedRec is an Ethereum-based medical record management system, and other early systems target scenarios such as cognitive computing, IoT [40], decentralized applications [-96], MOOC [9], and COVID-19 data. However, the aforementioned systems take only exiguous steps toward databases in the spectrum. Their successors propose prototype systems or protocols to manage general data with integrated database systems, mainly focusing on data integrity; for example, Gaetani et al. design a blockchain-based database to ensure data integrity in cloud computing environments.

In general, database-oriented blockchains aim to equip pure blockchain systems with the ability to manage general data while reaching the goals of high throughput, low resource consumption, easy-to-use APIs, and privacy of secret data. Such systems usually modify several components of blockchains, including: (1) the index, (2) the protocol, e.g., sharding and consensus, (3) the API and data models, and (4) the ledger arrangement, as shown in Fig.

Index. Recently, researchers have designed various indexes for database-oriented blockchains. Besides the original data, researchers also index metadata to support a broader range of queries, and some works add concurrency to the indexes to support parallel updates. In databases, an index is a structure that sorts the specified values and aims to boost query processing and data updates. However, indexes in blockchains usually take on an additional task: proving the integrity of data, as an authenticated data structure (ADS) does. For example, Ethereum uses the Merkle Patricia Trie (MPT) to index the states of each account and protect the data. However, such an index has poor performance, since it has to fetch data from LevelDB whenever it visits a node in the MPT.
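To make the ADS role concrete, here is a self-contained sketch (illustrative Python, a plain binary Merkle tree rather than the MPT itself) of the simplest authenticated index: a light client holding only the root hash can verify a logarithmic-size membership proof for any leaf.

```python
# Minimal Merkle tree as an authenticated data structure (ADS): a client that
# stores only the root can verify that a given leaf belongs to the set.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1                 # sibling differs in the last bit
        proof.append((level[sibling], index % 2))  # (sibling hash, node is right child?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, node_is_right in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

leaves = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)
print(verify(root, b"tx2", proof))   # True
print(verify(root, b"bad", proof))   # False
```

The MPT plays this same role for Ethereum's account states, with the extra cost that visiting each trie node may require a LevelDB read, which is the performance issue noted above.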
Thus, recent works try to develop indexes that fit the batched data in the blockchain environment and thereby improve index performance; the efforts around indexes are summarized in Table. The authors of SEBDB identify the importance of such indexes, and AuthQX [70] runs authenticated queries with the help of a trusted execution environment. One design keeps part of the data in a Merkle tree, controlling the size of the Merkle tree, while the rest is arranged in a balanced binary tree: when processing queries, the system first searches an approximate range from the top level according to the balanced binary tree search algorithm, and then traverses the Merkle tree to fetch the specified data. Each transaction of SE-Chain is maintained in a similarly combined tree structure, and Yan et al. designed a further index structure for blockchain data. With the help of specifically designed indexes for basic blockchain operations, SEBDB further improves query processing.

The structures of most blockchains' indexes depend not only on the items stored in the index but also on the update history. The authors of ForkBase, however, extract a structurally invariant index whose shape does not depend on that history, and LineageChain [75] supports efficient provenance queries over blockchain data. There are also light nodes in a blockchain network that store only block headers, and they usually represent a user; it is important for them to verify the integrity of query results. Xu's team successively proposed systems to support authenticated queries for light nodes [68]; among them, the GEM^2-tree supports gas-efficient authenticated queries over hybrid storage. Other follow-ups design new authenticated index structures, and Fang et al. focus on this problem as well.

Protocol. In blockchains, protocols are a set of rules that allow participants to communicate and share data. Though the existing blockchain protocols ensure relatively secure communication, their fully replicated and serial nature lowers the whole system's performance, which hinders the further application of blockchains in the data management field. In this survey, we focus on two of the promising solutions, namely sharding and concurrency. In addition, the consensus algorithm is orthogonal to the two approaches and can be combined with them according to actual needs.
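Before surveying concrete systems, the idea of sharding can be shown with a toy sketch (illustrative Python; the hash-based placement rule and the shard count are assumptions, not taken from any surveyed system): keys are deterministically mapped to shards, and any transaction whose keys span several shards becomes a cross-shard transaction, the expensive case that sharded designs try to minimize.

```python
# Toy hash-based sharding: deterministic key placement plus detection of
# cross-shard transactions, which require extra coordination.
import hashlib

NUM_SHARDS = 4

def shard_of(key: str) -> int:
    # Deterministic placement: first byte of SHA-256(key) modulo the shard count.
    return hashlib.sha256(key.encode()).digest()[0] % NUM_SHARDS

def classify(tx_keys):
    # A transaction is intra-shard iff all of its keys land on the same shard.
    shards = {shard_of(k) for k in tx_keys}
    if len(shards) == 1:
        return ("intra-shard", shards.pop())
    return ("cross-shard", sorted(shards))

# Intra-shard txs can proceed in parallel on their shards, while cross-shard
# txs need a coordination protocol (e.g., a 2PC-style commit across shards).
workload = [["alice"], ["bob", "carol"], ["dave"]]
for tx in workload:
    print(tx, "->", classify(tx))
```

A single-key transaction is always intra-shard by construction; balancing keys across shards (and minimizing the cross-shard fraction) is exactly the workload problem the sharded systems below address.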
We provide an overview of the surveyed works in Table.

Sharding. Sharding is originally a database technique to expand storage capacity and reach higher throughput: the huge dataset is divided into multiple subsets stored on different nodes, so that transactions on different nodes can be processed in parallel. Many database-oriented blockchains also benefit from this method to improve their data processing capability. Elastico is the first sharding-based blockchain, and its successors introduce and combine further techniques to reduce the coordination overhead. Although transactions within the same shard can be executed efficiently in sharded blockchains, cross-shard transactions usually become the bottleneck. The authors of BrokerChain point out the problem of imbalanced workloads across shards and reduce cross-shard transactions through account segmentation, while Meepo provides efficient cross-shard mechanisms for consortium blockchains.

Since data are fully replicated in a primitive blockchain network, sharding can also reduce the storage overhead of every single machine. BFT-Store [55] is such a system: assuming there are n nodes in the network and the system can tolerate at most f faulty nodes, a Reed-Solomon (RS) engine encodes the blocks across the nodes so that the per-node storage overhead drops from O(n) to O(1). The read engine handles read requests and responds with the target block: when the target block is local to the node, it is returned directly; otherwise the node sends a query request to the target node, and if the request is not answered before a timeout, the node broadcasts a decoding request to reconstruct the block from the coded fragments. Section-Blockchain also tries to reduce storage in this way, and SlimChain adopts a stateless design that keeps transaction execution and states off-chain.

Concurrency. Many works have revealed that the serial execution of transactions is one of the bottlenecks that encumber the performance of blockchains, as it does not fully exploit the concurrency of modern multiprocessors. How to enable blockchains with concurrency to improve transaction execution efficiency has therefore been a hot topic in recent years, and the key lies in how to ensure that the results of concurrent schedules are the same on all nodes. As a typical blockchain system, Hyperledger Fabric adopts an execute-order-validate architecture that simulates transactions in parallel. The stateless design of SlimChain naturally supports further transaction concurrency, and SChain introduces concurrency into the processing pipeline as well. A parallel execution engine, PEPP, has also been proposed. Recently, new hardware has been introduced to blockchain systems, and it is important to design suitable concurrency mechanisms for such systems; SEFrame [64] proposes one such framework.

API and data models. Existing blockchain platforms are far from convenient compared to traditional databases, as they lack the capability to model complex real-world tasks, and their cumbersome interfaces prevent further use in business. To solve this problem, many works aim to equip blockchains with richer semantics and easy-to-use APIs. Since the relational model is widely used in business, many researchers and engineers try to implement relational semantics on database-oriented blockchains; SEBDB adds relational semantics to blockchain systems, and it is not the only such attempt. As for the cumbersome interfaces, BlockchainDB exposes simple key-value style interfaces on top of existing blockchains, and SQL-Middleware provides a SQL interface for blockchains.

Ledger. The ledger of a blockchain records either the account states or the operations on the data in plain text, which, as introduced in Sect., threatens the privacy of secret data. The first solution is to encrypt the ledger; Adkins et al. designed such an encryption-based scheme. Instead of cryptographic methods, LedgerView adds access-controlled views on top of Hyperledger Fabric, and CAPER is a novel permissioned blockchain that keeps each application's internal transactions private while supporting cross-application transactions.

We make the following observations about database-oriented blockchains. First, techniques from traditional databases benefit current database-oriented blockchains a lot, since they have been examined and proved efficient over the past decades; it remains important to draw lessons from mature optimization techniques. Second, several database-oriented blockchains aim to improve the components unique to blockchains, such as the chain-like ledger and the Byzantine-resistant consensus protocol.
Given the difference between blockchains and databases, such components play key roles in the functionality of database-oriented blockchains, and experiments show that the corresponding improvements can greatly improve system performance. Last, more and more database-oriented blockchains adopt multiple technical routes to enhance functionality and improve performance. As the previous part of this section shows, these technical routes improve the system in different aspects, so combining them is a wise and promising way to develop future database-oriented blockchains. Summing up these observations on the representative systems above, database-oriented blockchains satisfy various needs of modern data management, and their development and improvement have a wide prospect.

In a word, the blockchain-oriented databases take off from the database end of the blockchain-database spectrum and aim to equip an efficient and easy-to-use data management system with blockchain-powered security guarantees. They are usually extended from mature database systems, and even from already-running database instances. The key point of designing a blockchain-oriented database is to efficiently implement the algorithms and protocols of the blockchain while minimizing the impact on the base system. There are two mainstream technical routes to satisfy these requirements, namely the blockchain middleware and the blockchain layer. In Fig., we sketch the architectures of blockchain-oriented databases and highlight the mainly modified components.

Blockchain middleware. Though it is mainstream in the database community to build a blockchain from the very beginning and add database features to it, the attempt to leverage existing relational databases, with their rich features and transactional processing capabilities, to build a blockchain has also been noticed. Nathan et al. studied how to build a blockchain platform upon existing relational databases. Since then, more and more researchers have developed various blockchain middleware to equip databases with security guarantees in different aspects. Lian et al. leverage the immutability of blockchain ledgers to develop a tamper-proof detection middleware for relational databases, named TRDB. In TRDB, each tuple is augmented with extra attributes, including signatures, a table signature (pubkey), and a bit flag indicating whether the transaction is a deletion (del). The two signatures serve as the hash pointer of the blockchain, while the others are similar to the corresponding fields in block headers. Note that these attributes are calculated and inserted into the augmented tuple implicitly whenever the table is modified, which means they are transparent to the users. Given a transaction T, the system compares the recomputed values against the stored ones to detect tampering. Beirami et al. propose another middleware-based solution.

Blockchain layer. Different from the simple additions of the middleware solutions, a blockchain layer means stepping inside the underlying database and modifying its existing components. Though it may require more effort, such a solution allows researchers to adjust the inner workflow and improve the performance of the whole system. Blockchain PG adds such a layer to PostgreSQL, BigchainDB is a complete blockchain database built on top of MongoDB, HBasechainDB adopts a similar design on HBase, and chainifyDB [81] turns existing heterogeneous database instances into a blockchain network.

In this section, we review representative works on blockchain-oriented databases and identify two mainstream technical routes to implement them. We can conclude from the analysis that the hash-chain feature of blockchains, along with the multi-node consensus and backup mechanism, becomes an important reinforcement of traditional databases' data integrity protection measures; the integration of blockchain features helps databases further complete their functionality. However, we also notice that the two technical routes have pros and cons, and there exists a trade-off between flexibility and performance. In particular, building a blockchain middleware is easy and less intrusive.
It can also bridge heterogeneous database instances with the same data model, which provides better portability and suits inter-organization collaboration scenarios. On the other hand, designing a blockchain layer for a specific database instance makes it possible to further optimize the components and provides higher performance. To sum up, the blockchain middleware is more friendly to legacy systems, while the blockchain layer is more efficient. We further compare the works in Table.

Hybrid systems locate in the center of the blockchain-database spectrum. Different from the other fusion systems, which focus on either security or performance, they are equal combinations of blockchain and database and reach a balance between the two aspects. Though hybrid systems are indeed less competitive than the other fusion systems in most scenarios, their balanced and comprehensive functionality enables them to cope with basic secure data management tasks and focus on more complex requirements. In fact, many hybrid systems are designed to handle complicated problems in practical scenarios, such as graph data and conflicts between personal rights and public interests. Hybrid systems usually rely on middleware to connect existing blockchain and database instances, and we present the abstract architecture of hybrid systems in Fig.

Graph data. Instead of representing relationships in abstract attributes, graph databases directly store and process the relationships of entities in the form of vertexes and edges. Native graph storage and processing give graph databases superior traversal performance; however, the plain key-value model prevents blockchains from processing such complicated data as graph databases do.
To enable a verifiable audit trail of data integrity and modifications for information stored in a graph database, Ermolaev et al. combine a blockchain with a graph database. In the situation of personal data management, such as student data and medical records, there exists a conflict between personal privacy and public interests: stakeholders want to claim ownership of their personal data and restrict third parties from accessing it, while such a restriction may hinder the third parties from making use of these data in governance or innovation. Though blockchain and smart contracts seem to be a promising solution to this problem, they are not practical on their own, since current blockchains cannot store and process such a massive amount of data; Bertram et al. therefore combine a blockchain with a database for this scenario.

Hybrid systems are also used to simultaneously manage data in blockchain and database platforms. This stems from the observation that each platform has its solid advantage at the current stage, so the most practical way is to build a combined system that inherits both the resistance to data modification from the blockchain and the query speed from the distributed databases. ChainSQL is such an implementation, integrating a blockchain with distributed databases, while the authors of MOON hold another view on how the two platforms should cooperate.

Although there have been numerous studies and applications of the other fusion systems, we can draw from the above analysis the unique value of hybrid systems: by directly integrating blockchain and database instances, they acquire a balanced and sufficient ability of integrity protection and fast data processing from both sides, which can satisfy the needs of most application scenarios. With such a solid foundation, we can further explore complicated problems in secure data processing, which may be the most important application scenario of hybrid systems. We summarize the hybrid systems in Table.

We also observe that the direct integration of several instances results in a bloated and redundant system that requires more resources than the database-oriented blockchains or blockchain-oriented databases. For example, the manipulation logs on the actual data are stored both as on-chain data on the blockchain instance and as WAL logs in the database instance. How to minimize this redundancy and make full use of the instances is a promising direction for hybrid systems in the future.

The database-oriented blockchains suit the scenarios that value security most and want to improve the data processing capability, while blockchain-oriented databases are the best choice to satisfy the security needs of efficiency-first applications. Hybrid systems provide a balance between data management capabilities and blockchain benefits, making them a viable option for many use cases. Thus, the three systems are of equal importance, since they satisfy different urgent demands in the data management field. Based on the above analysis, we compare the three fusion systems along different dimensions in Table. We also observe an increasing research trend toward database-oriented blockchains, which shows that people pay more attention to data security; we therefore suggest further studies of this aspect and expect more competitive systems based on the massive work in this field.

Numerous cases have shown that blockchains and databases each have strengths and weaknesses in the data management field, so integrating both systems to better undertake the task has become a promising direction in the database community. However, it is not an easy way.
In this section, we present our observations on the research challenges and future opportunities in the integration of blockchains and databases, i.e., the fusion systems, from the aspects of performance, privacy, data description ability, new hardware, learning-based optimization, and applications.

Performance. Performance is the feature of a data management system most directly perceived by users, and consequently one of the most critical indicators for evaluating such a system. However, there is still a huge performance gap between the fusion systems and mature commercial databases. This is because the linear nature of the ledger, one of the fundamental features of blockchain, hinders the transaction processing rate of blockchains and of the succeeding fusion systems. To solve the problem, there are two parallel-but-associated targets, which affect the two main operations (query and modification) of a data management system, respectively.

One is to build efficient indexes on the target data to accelerate data access. This is relatively easy for the off-chain part, since database indexes already achieve satisfying performance. For the on-chain data, however, practice shows that new block storage structures and corresponding indexes can effectively improve query functionality and performance. An index on on-chain data can serve real-time transaction verification [67, 72], among other purposes.

The other is to improve the consensus mechanism, which is the key to ensuring the consistency of transaction execution among the participants and thus has a great impact on the overall performance and applicability of blockchains and fusion systems. It is important to reach a balance between efficiency and consistency, and there are two main drawbacks, and corresponding opportunities, in current blockchain consensus mechanisms.
First, the serial execution of transactions does not fully exploit the concurrency of modern multiprocessors, so improving the concurrency of transactions is a good idea [63, 64].

Privacy. In the blockchain environment, all data and transactions need to be replicated to all nodes to reach consensus. As a consequence, sensitive data may be accessed by unauthorized third parties and therefore cannot be managed in the blockchain environment; the fusion systems face the same problem. The key lies in the access control of private data. However, directly applying database access control methods to the blockchain will result in block hash values that no longer correspond to the underlying data, so users cannot verify whether the data on the chain have been tampered with. This difficulty remains to be solved. Fortunately, with the help of database techniques, several promising solutions can be further studied: for example, the system can store sensitive data off-chain in databases, which have better support for data privacy, or leverage cryptographic methods such as ledger encryption.

Data description ability. The development of Internet applications has spawned a variety of data forms, such as graph data and document data. Many existing studies on blockchain and database fusion systems support and extend the key-value model [40] and the relational model. For example, graph databases are a rapidly evolving field with many active research directions; graph mining and analysis extracts useful information and insights from large-scale graph data, and by integrating graph databases with blockchains, transactional relationships can be analyzed in a secure and decentralized manner.

New hardware. Recently, there has been notable development of various types of hardware related to blockchains. For example, the success of Bitcoin has led to the emergence of dedicated hardware such as Field-Programmable Gate Arrays (FPGAs) and GPUs, which have greatly increased the efficiency of hash computing.
In turn, how to make full use of this emerging hardware in the fusion systems to better manage data is an interesting topic for the database community. Here we present several observations. A trusted execution environment (TEE) provides isolated memory that resists outside corruption and ensures secure computing at the hardware level, which lowers the required security assumptions to a certain extent. Thus, there is an opportunity to improve other aspects of blockchains, especially in terms of performance [69, 70].

Learning-based optimization. Machine learning has been extensively studied over the past decades. It simulates human learning behaviors with high computing power to acquire new knowledge or skills, and has been widely applied in database optimizations such as cost estimation, join order selection, and end-to-end optimizers. We believe fusion systems can benefit from it as well. For example, the data distribution in sharding blockchains can greatly affect the efficiency of data access, yet current sharding systems usually adopt a naive rule, such as prefix/suffix-based placement, which may not suit the real data distribution. Machine learning-based rules can capture the pattern and boost data access; other applications of learned optimization in fusion systems remain to be explored.

Applications. The collectively maintained and tamper-resistant public ledger of blockchain systems ensures the security and reliability of the data stored in a distributed network. In addition to general-purpose data management, blockchain-database fusion systems can also bring new solutions to many specific domains. We notice that more and more people combine their original business systems with blockchains to form domain-specific fusion systems in various fields. There is a trend of leveraging blockchain characteristics to solve the drawbacks of the business system while improving the shortcomings and limitations of the blockchain system itself. However, applications in various fields also pose more challenges. Take finance as an example.
The processing capacity of the blockchains is not enough to replace the existing centralized trading systems; therefore, it is important to improve the consensus mechanism to adapt to high-throughput financial transaction applications. As for the supply chain, it is necessary to equip the system with a traceability model that fits the industrial supply chain scenario to promote verifiable data sharing in supply chain management. Other applications, such as intellectual property management, asset delivery, and medical data management, also place different requirements on the fusion systems.

In this survey, we present the integrating trend of blockchains and traditional databases, and propose a blockchain-database spectrum to analyze the work related to fusion systems in the field of data management. First, we classify the fusion systems into database-oriented blockchains, blockchain-oriented databases, and hybrid systems, and present a high-level comparison according to the different directions of their integration. Then, we review the representative fusion systems of each category: we review representative database-oriented blockchains from the perspectives of index, protocol, data model, and ledger; we analyze the blockchain middleware and blockchain layer schemes of blockchain-oriented databases; and we demonstrate the combination approaches and targeted research fields of different hybrid systems. Finally, we present a high-level comparison between the three fusion systems and our observations on the challenges and future work. We believe that this survey demonstrates the current status and limitations of existing blockchain-related data management research and provides insight for researchers to conduct in-depth research in this area."}
{"text": "Adherence to evidence-based standard treatment guidelines (STGs) enables healthcare providers to deliver consistently appropriate diagnosis and treatment. Irrational use of antimicrobials significantly contributes to antimicrobial resistance (AMR) in sub-Saharan Africa (SSA), so the best available evidence is needed to guide healthcare providers on adherence to evidence-based implementation of STGs. This systematic review and meta-analysis aimed to determine the pooled prevalence of adherence to evidence-based implementation of antimicrobial treatment guidelines among prescribers in SSA.

The review followed the JBI methodology for systematic reviews of prevalence data. CINAHL, Embase, PubMed, Scopus, and Web of Science databases were searched with no language or publication-year limitations, and STATA version 17 was used for the meta-analysis. Publication bias and heterogeneity were assessed using Egger's test and the I2 statistic, and were validated using Duval and Tweedie's nonparametric trim-and-fill analysis under the random-effects model. The summary prevalence and the corresponding 95% confidence interval (CI) of healthcare professionals' compliance with evidence-based implementation of STGs were estimated using a random-effects model. The review protocol has been registered with PROSPERO (code CRD42023389011), and the PRISMA flow diagram and checklist were used to report included and excluded studies and their corresponding sections in the manuscript. Twenty-two studies with a total of 17,017 study participants from 14 countries in sub-Saharan Africa were included.
The pooled prevalence of adherence to evidence-based implementation of antimicrobial treatment guidelines in SSA was 45%. The most common clinical indications were respiratory tract infections (35%) and gastrointestinal infections (18%). Prescriptions were reported for both inpatient and outpatient wards, and only 391 prescribers accessed standard treatment guidelines while prescribing antimicrobials. Healthcare professionals' adherence to evidence-based implementation of STGs for antimicrobial treatment was low in SSA. Healthcare systems in SSA must make concerted efforts to enhance prescribers' access to STGs through optimization of mobile clinical decision support applications. Innovative, informative, and interactive strategies must be put in place by the healthcare systems in SSA to empower healthcare providers to make evidence-based clinical decisions informed by the best available evidence and patient preferences, ultimately improving patient outcomes and promoting appropriate antimicrobial use. The online version contains supplementary material available at 10.1186/s40545-023-00634-0.

The World Health Organization (WHO) declared antimicrobial resistance (AMR) a growing global health security and development threat that undermines the effectiveness of antimicrobial agents, threatening the ability to treat common microbial infections. If preventative measures are not taken, the threat of AMR will persist and result in a depletion of resources and an increase in morbidity and mortality on a global scale [5]; low- and middle-income countries are disproportionately affected. To combat inappropriate antimicrobial use, the development of standard treatment guidelines (STGs) has been included as part of the WHO's Global Action Plan initiative; with this implementation, the WHO aims to set guidelines for the purchasing and prescription of antimicrobial medicine [9].
Studies have shown that when STGs are adhered to, mortality, morbidity, and the costs of health services related to the corresponding illness are reduced [15]. Reasons for lack of adherence to STGs include a lack of skilled human resources, the cost of drugs, the quality of the STGs, lack of accessibility of the drugs, lack of access to STGs, and inadequate training of prescribers [25]. A scoping review that analyzed the overuse of medications in low-resource settings found that only 10 out of 139 studies reported drivers of non-adherence to specific antimicrobial treatment guidelines [29], leaving a gap in region-specific evidence.

Therefore, this systematic review and meta-analysis aimed to determine the pooled prevalence of adherence to evidence-based implementation of antimicrobial treatment guidelines among prescribers in sub-Saharan Africa. The pooled data obtained from this review serve as region-specific and up-to-date evidence that contributes comprehensive insights into gaps in the implementation of STGs at the point of care and provides actionable recommendations for improvement; it complements and enhances the knowledge gained from previous reviews by offering a more detailed and context-specific analysis.

This review was conducted in accordance with the JBI methodology for systematic reviews of prevalence data, and the protocol was registered with PROSPERO (CRD42023389011). The database search targeted both published and unpublished studies, with no language or publication-year restrictions. A three-step search strategy was used in this review. First, an initial search of PubMed and CINAHL was undertaken, followed by an analysis of the titles, abstracts, and index terms of the articles. Second, all published and unpublished literature was searched using the identified keywords; the detailed search strategy is provided in an Additional file. Following the search, all identified citations were collated and uploaded into EndNote 20, and duplicates were removed. Descriptive observational and cross-sectional studies were included.
Literature was eligible for inclusion if it reported adherence to STGs among prescribers in SSA. Studies that reported the prevalence of healthcare providers\u2019 adherence to STGs as the main outcome were included. Literature that reported the clinical indications for which antimicrobials were prescribed, as well as access to, availability of, and frequency of use of STGs, was included. This review included studies conducted in both public and private health facilities in SSA. Protocols, systematic reviews, meta-analyses, randomized controlled trials, and studies conducted in high-income countries were excluded. Titles and abstracts were assessed by two independent reviewers (MTB and VS) against the inclusion criteria. The full texts of potentially relevant studies were retrieved and the citation details were imported into the JBI System for the Unified Management, Assessment, and Review of Information (JBI SUMARI). Evidence-based implementation refers to the systematic and rigorous application of established clinical recommendations for the use of antimicrobial agents in the treatment of infectious diseases. Adherence refers to compliance with standard treatment guidelines (STGs) for antimicrobial treatment at the point of care, provided that consistently correct diagnoses and treatments that limit the irrational use of medicines and their negative health consequences are in place. The data extraction tool was prepared by MTB using an Excel spreadsheet; it includes variables such as author name, publication year, study design, data collection period, sample size, study area, and the prevalence of adherence to standard treatment guidelines (STGs) among healthcare providers. In addition, the tool captures data on the clinical indications and on access to, availability of, and frequency of use of STGs. MTB and VS extracted the data.
YS and SM cross-checked the extracted data for validity and cleanliness. Any disagreements between the reviewers were resolved through discussion with a third reviewer. Authors of the papers were contacted to request missing or additional data as required. Two independent reviewers critically appraised eligible studies for methodological quality using the JBI critical appraisal checklist for studies reporting prevalence data. Heterogeneity was assessed using the I2 test. A random-effects model using the double arcsine transformation approach was used. Sensitivity analyses were conducted to test decisions made regarding the included studies. Publication bias was examined by visual inspection of funnel plot asymmetry. During full-article screening, 110 articles were excluded. Accordingly, 43 studies were eligible for quality assessment. Finally, 22 studies were included in this meta-analysis. The distribution of STG prescribers according to profession (Table) was: public health officers (1616), nurses (731), medical doctors (196), and community health workers (151). Only three studies reported the frequency of STG use by prescribers. The pooled prevalence of adherence to evidence-based implementation of antimicrobial treatment guidelines was 45.23% (95% CI 32.75\u201358.01%). The most common clinical indications were respiratory tract infections (35%) and gastrointestinal diseases (18%); these are highly treated clinical indications in SSA, which could be attributed to their significant burden due to easy transmissibility and environmental factors. This systematic review and meta-analysis involved cross-sectional studies, which come with limitations related to causality, selection bias, heterogeneity, and the inability to capture temporal and dynamic trends.
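The pooling procedure described here (random-effects model on double-arcsine-transformed proportions, with heterogeneity quantified by the I2 statistic) can be sketched in Python. This is a minimal illustration assuming the DerSimonian-Laird estimator and a simple sin^2 back-transformation; these are assumptions on our part, not the authors' reported code:

```python
import math

def freeman_tukey(events, n):
    # Double arcsine transform stabilizes the variance of a proportion.
    return 0.5 * (math.asin(math.sqrt(events / (n + 1.0))) +
                  math.asin(math.sqrt((events + 1.0) / (n + 1.0))))

def pooled_prevalence(events, sizes):
    """DerSimonian-Laird random-effects pool of transformed proportions.

    Returns (pooled proportion, I^2 heterogeneity in percent).
    """
    k = len(events)
    y = [freeman_tukey(e, n) for e, n in zip(events, sizes)]
    v = [1.0 / (4.0 * n + 2.0) for n in sizes]   # within-study variance of the transform
    w = [1.0 / vi for vi in v]                   # fixed-effect (inverse-variance) weights
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)           # between-study variance
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    w_re = [1.0 / (vi + tau2) for vi in v]       # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return math.sin(y_re) ** 2, i2               # crude back-transform to a proportion
```

With, say, two hypothetical studies each observing 45% adherence (45/100 and 90/200), the pooled estimate returns approximately 0.45 with negligible heterogeneity.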
To overcome these limitations and obtain a more comprehensive understanding of adherence to implementation of evidence-based STGs, future research could consider incorporating other study designs, such as longitudinal studies or randomized controlled trials, in addition to cross-sectional data. Healthcare professionals\u2019 adherence to evidence-based implementation of standard treatment guidelines for antimicrobial treatment was low in sub-Saharan Africa. Healthcare systems in sub-Saharan Africa must make concerted efforts to enhance prescribers\u2019 access to standard treatment guidelines through the implementation of mobile clinical decision support applications to optimize compliance with standard treatment guidelines. Innovative, informative, and interactive strategies must be put in place by healthcare systems in sub-Saharan Africa to empower healthcare providers to make clinical decisions informed by the best available evidence and patient preferences, to ultimately improve patient outcomes and promote appropriate antimicrobial use. The implementation of evidence-based clinical practice guidelines for antimicrobial treatment involves the systematic integration of the best available evidence into clinical decision-making and patient care. Additional file 1: Appendix I: Search strategy. Appendix II: PRISMA 2020 Checklist."}
{"text": "Here, we employed Caenorhabditis elegans to investigate the combined effects of PS-50 (50 nm nanopolystyrene) and PS-500 (500 nm micropolystyrene) at environmentally relevant concentrations on the functional state of the intestinal barrier.
After long-term treatment (4.5 days), coexposure to PS-50 (10 and 15 \u03bcg/L) and PS-500 (1 \u03bcg/L) resulted in more severe toxicity than single-exposure to PS-50 (10 and 15 \u03bcg/L) or PS-500 (1 \u03bcg/L), decreasing locomotion behavior, inhibiting brood size, and inducing intestinal ROS and autofluorescence production. Additionally, coexposure to PS-50 (15 \u03bcg/L) and PS-500 (1 \u03bcg/L) remarkably enhanced intestinal permeability, but no detectable abnormality of intestinal morphology was observed in wild-type nematodes. Lastly, the downregulation of acs-22 or erm-1 expression and the upregulated expression of genes required for controlling oxidative stress served as a molecular basis that strongly explains the intestinal toxicity caused by coexposure to PS-50 (15 \u03bcg/L) and PS-500 (1 \u03bcg/L). Our results suggested that combined exposure to microplastics and nanoplastics at the predicted environmental concentration causes intestinal toxicity by affecting the functional state of the intestinal barrier in organisms. The possible toxicity of nanoplastics or microplastics to organisms has been extensively studied. However, the unavoidable combined effects of nanoplastics and microplastics on organisms, particularly intestinal toxicity, remain largely unclear. Here, we employed Caenorhabditis elegans. Due to insufficient recycling and reuse systems, large quantities of plastics have been randomly released into ecosystems, including terrestrial, marine, and freshwater ecosystems. Caenorhabditis elegans is a well-established, wonderful animal model owing to its typical properties. C.
elegans exhibits enhanced sensitivity to various environmental toxicants or stresses, which makes it a successful model in toxicological evaluation and signaling studies. Some beneficial sublethal endpoints were evaluated as indicators of the potential toxicity of multiple toxicants. Herein, our aim was to examine the combined effect of nanopolystyrenes and micropolystyrenes at the predicted environmental concentration on the functional state of the intestinal barrier in Caenorhabditis elegans. In this study, the 50 nm nanopolystyrene (PS-50) and 500 nm micropolystyrene (PS-500) were chosen as the test materials. We measured locomotion behavior, brood size, intestinal reactive oxygen species (ROS) production, and intestinal autofluorescence, and used scanning electron microscopy (SEM) to characterize PS-50 and PS-500. Finally, we hypothesized that coexposure to micro- and nanoplastics at estimated environmentally significant concentrations could induce a more severe deterioration in the functional state of the intestinal barrier than single-exposure to micro- or nanoplastics. The associated cellular or molecular basis for this combined effect was also presented. The 50 nm nanopolystyrene (PS-50) and 500 nm micropolystyrene (PS-500) were obtained from Janus New-Materials Co., Ltd., Nanjing, China. Dynamic light scattering (DLS) analysis further indicated that the sizes of the examined PS-50 and PS-500 were 49.46 \u00b1 2.3 nm and 502.55 \u00b1 3.1 nm, respectively. Images from the scanning electron microscope (SEM) showed the spherical morphology of PS-50 and PS-500. C. elegans were acquired from the School of Medicine, Southeast University, and cultured on nematode growth medium plates containing Escherichia coli OP50, the food source, at 20 \u00b0C without light. These worms were then used for the exposures, before E.
coli OP50 (~4 \u00d7 10^6 colony-forming units (CFUs)) was added as food. During the exposure period, the suspensions were replenished daily. Several endpoints were then used to detect the potential combined toxicity between PS-50 and PS-500 at the predicted environmental concentration in nematodes. The estimated environmental PS-50 concentrations were chosen as 5, 10, and 15 \u03bcg/L. Locomotor behavior, including head thrashing and body bending, was employed to assess motor neuronal operative status as previously described. Brood size was used to assess reproductive capacity. Intestinal ROS synthesis was assessed as reported earlier: C. elegans were exposed to 1 \u03bcM 5\u2032,6\u2032-chloromethyl-2\u2032,7\u2032-dichlorodihydro-fluorescein diacetate (CM-H2DCFDA), prior to a 3 h incubation without light. The tested organisms were rinsed thrice in K-medium and then mounted on a 2% agar pad to evaluate intestinal fluorescent ROS production, using a fluorescence microscope with an excitation wavelength of 488 nm and an emission filter of 510 nm. The relative fluorescence intensity representing intestinal ROS production was semiquantified in relation to the intestinal autofluorescence. Overall, 40 animals were assessed per group, and each group was tested three times. Intestinal autofluorescence, brought on by lipofuscin-mediated lysosomal deposition, reflects the aging process in worms. Intestinal permeability, reflecting the functional state of the intestinal barrier, was routinely assessed with erioglaucine disodium staining to reveal intestinal damage induced by environmental pollutants. The expressions of the targeted genes (act-5, pkc-3, acs-22, erm-1, hmp-2, sod-1, sod-2, sod-3, sod-4, sod-5, mev-1, isp-1, clk-1, gas-1, ctl-1, ctl-2, and ctl-3) were recorded using the StepOnePlus\u2122 real-time PCR system. All gene expressions were normalized to tba-1 (a reference gene). All experiments were conducted three times.
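The normalization to the tba-1 reference gene described above is typically computed with the Livak 2^-ddCt method; the paper does not name its quantification formula, so the approach and the Ct values below are illustrative assumptions:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt fold change of a target gene versus a reference gene.

    ct_target / ct_ref: mean threshold cycles in the treated group.
    ct_target_ctrl / ct_ref_ctrl: mean threshold cycles in the control group.
    """
    d_ct_treated = ct_target - ct_ref            # normalize treated sample to reference
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control sample to reference
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the target amplifies one cycle earlier relative to
# tba-1 in treated worms, i.e. roughly 2-fold upregulation.
fold = relative_expression(24.0, 18.0, 25.0, 18.0)  # -> 2.0
```

A fold change above 1 indicates upregulation relative to control; below 1, downregulation.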
The employed primer sequences are provided in the supplementary materials. Total RNAs were prepared with the help of Trizol. Then, cDNA was synthesized in a Mastercycler gradient PCR system, and the relative gene expressions were determined. Intergroup assessments utilized one-way or two-way ANOVA with multiple-factor comparisons followed by a post hoc test. A probability level of 0.01 (**) was set as the significance threshold. PS-500 (0.1 and 1 \u03bcg/L) exposure showed no difference in brood size or locomotion behavior. After prolonged exposure, we first examined intestinal morphology to detect potential toxicity to intestinal structure. Neither coexposure nor single-exposure to PS-50 and PS-500 caused remarkable changes in the intestinal lumen. According to our erioglaucine disodium staining method, under normal conditions, the blue dye stained only the intestinal lumen. Single-exposure to PS-50 or PS-500 (0.1 and 1 \u03bcg/L) did not influence intestinal act-5, pkc-3, acs-22, erm-1, and hmp-2 contents. Using intestinal autofluorescence as an endpoint, PS-500 did not significantly affect intestinal autofluorescence production. Reactive oxygen species (ROS) are a byproduct of the mitochondria-based electron transport chain reaction, and they positively modulate cellular senescence and organ dysfunction by inducing oxidative damage to DNA, proteins, and lipids. Superoxide dismutases and catalases (CTL-1\u20133) are critical contributors to the antioxidation-based defense response in nematodes, and mitochondrial complexes participate in oxidative stress activation.
To date, substantial evidence from previous studies has shown that prolonged exposure to micro- or nanoplastics results in severe multiorgan toxicity in environmental organisms, including neurotoxicity, reproductive toxicity, and immunotoxicity. The broadened intestinal lumen is frequently used as an indicator of abnormality in intestinal morphology. Intestinal permeability assays are usually performed to assess the functional state of the intestinal barrier. Coexposure to PS-50 (15 \u03bcg/L) and PS-500 (1 \u03bcg/L) remarkably decreased the intestinal acs-22 and erm-1 expressions, consistent with the observed enhancement in intestinal permeability. To explore the underlying intracellular mechanism required for the PS-50 and PS-500 coexposure-mediated modulation of intestinal toxicity, we examined oxidative stress and intestinal autofluorescence in this study. Intestinal autofluorescence is generated via lysosomal deposition of lipofuscin, which accumulates over time in aging nematodes. Oxidative stress has likewise been implicated in nanoparticle toxicity in nematodes, diminishing locomotion behavior and enhancing intestinal ROS synthesis. In C. elegans, SODs, catalases, and mitochondrial complex components act as crucial regulators to maintain the oxidative stress balance. Upregulation of sod-3, clk-1, and gas-1 expressions occurred in worms exposed to PS-50 (15 \u03bcg/L). More interestingly, coexposure to PS-500 (1 \u03bcg/L) and PS-50 (15 \u03bcg/L) further dramatically enhanced the sod-2, sod-3, isp-1, clk-1, gas-1, and ctl-3 expressions. SOD-2 and SOD-3 cooperate with CTL-3 to modulate the oxidation\u2013antioxidation system in nematodes.
Noteworthily, based on the above observations, the combined effects of PS-50 and PS-500 were only observed at the upper concentration range for both plastic particles. However, the estimated environmental nanoplastic concentration for 50 nm plastic particles is speculated to be 1 pg/L\u201315 \u03bcg/L, and \u22641 \u03bcg/L for 500 nm plastic particles. Taken together, coexposure to PS-50 and PS-500 was conducted to confirm our hypothesis, with the following conclusion: coexposure to micro- and nanoplastics at estimated environmentally significant concentrations could induce more severe deterioration of the functional state of the intestinal barrier than single-exposure to micro- or nanoplastics. In wild-type nematodes, cotreatment with PS-50 (15 \u03bcg/L) and PS-500 (1 \u03bcg/L) did not damage the intestinal morphology but enhanced intestinal permeability. Induction of intestinal ROS synthesis and intestinal autofluorescence production acts as a cellular mechanism that explains the intestinal toxicity caused by coexposure to PS-50 (15 \u03bcg/L) and PS-500 (1 \u03bcg/L). Meanwhile, the downregulation of acs-22 or erm-1 expression and the upregulated expressions of oxidative stress-related genes serve as a molecular basis that strongly explains the intestinal toxicity caused by coexposure to PS-50 (15 \u03bcg/L) and PS-500 (1 \u03bcg/L). Our results suggested that combined exposure to microplastics and nanoplastics at the predicted environmental concentration notably causes intestinal toxicity by affecting the functional state of the intestinal barrier in organisms."}
{"text": "This study uses two existing data sources to examine how patients\u2019 symptoms can be used to differentiate COVID-19 from other respiratory diseases.
One dataset consisted of 839,288 laboratory-confirmed, symptomatic, COVID-19-positive cases reported to the Centers for Disease Control and Prevention (CDC) from March 1, 2019, to September 30, 2020. The second dataset provided the controls and included 1,814 laboratory-confirmed, symptomatic influenza-positive cases and 812 cases with symptomatic influenza-like illnesses. The controls were reported to the Influenza Research Database of the National Institute of Allergy and Infectious Diseases (NIAID) between January 1, 2000, and December 30, 2018. Data were analyzed using a case-control study design. The comparisons were done using 45 scenarios, with each scenario making different assumptions regarding the prevalence of COVID-19, influenza, and influenza-like illnesses. For each scenario, a logistic regression model was used to predict COVID-19 from 2 demographic variables and 10 symptoms. The 5-fold cross-validated Area under the Receiver Operating Curve (AROC) was used to report the accuracy of these regression models. The value of various symptoms in differentiating COVID-19 from influenza depended on a variety of factors, including (1) the prevalence of the pathogens that cause COVID-19, influenza, and influenza-like illness; (2) the age of the patient; and (3) the presence of other symptoms. The model that relied on a 5-way combination of symptoms and the demographic variables age and gender had a cross-validated AROC of 90%, suggesting that it could accurately differentiate influenza from COVID-19. This model, however, is too complex to be used in clinical practice without relying on a computer-based decision aid. The study results encourage development of a web-based, stand-alone artificial intelligence model that can interview patients and help clinicians make quarantine and triage decisions. It is increasingly clear that COVID-19 is becoming an endemic disease, and clinicians will need to accurately differentiate it from seasonal influenza and influenza-like illnesses.
A number of existing published studies have contrasted the differential diagnosis of COVID-19 and influenza in patients who present at the hospital and for whom laboratory data are available. When a new infection emerges, it is important to quickly clarify its signature presentation and the symptoms that can help differentiate it from other diseases. The U.S. Centers for Disease Control and Prevention (CDC) has repeatedly changed its guidance on which symptoms can be used to diagnose COVID-19. At the time of publication of this paper, the CDC listed common symptoms of COVID-19 but provided no guidance on how to weigh these symptoms, either individually or in clusters, and no guidance on how to differentiate COVID-19 from influenza or influenza-like illness. This study aims to clarify how COVID-19 may be differentiated from influenza based on the symptoms of patients presenting in the community, using data collected at home or in other settings (e.g., clinics), but referring to symptoms present prior to any hospitalization. There are considerable variations in the prevalence of respiratory illnesses. During the year 2020, while social distancing was implemented, there were few influenza or influenza-like-illness cases, necessitating reliance on data from the years prior to the emergence of COVID-19. When a pandemic emerges, reliance on existing data sources can accelerate identification of signature symptoms of the new infection. This study relied on two different existing data sources; the first dataset was obtained from the U.S. Centers for Disease Control and Prevention (CDC). To be included in the study, both COVID-19 cases and influenza/influenza-like illness cases must have reported at least one of the following symptoms: (1) cough, (2) fever, (3) chills, (4) diarrhea, (5) nausea and vomiting, (6) shortness of breath, (7) runny nose, (8) sore throat, (9) myalgia, and (10) headache.
Therefore, the study findings are only generalizable to symptomatic COVID-19 patients. In the CDC data, the majority of the COVID-19 patients were either asymptomatic or their symptoms were not reported. Of the 3.5 million laboratory-confirmed positive COVID-19 cases, 839,288 cases (24%) had reported at least one symptom and hence were included in our analysis. In the influenza and influenza-like illness databases, all listed cases had at least one symptom reported. For patients with at least one symptom, if additional symptoms were missing at random, the missing values were assumed to be absent (the mode for the responses). Symptoms reported in one but not the other database could not be used in the regression equations. It has been noted that COVID-19 presents with non-respiratory symptoms as well (e.g., loss of smell or taste). In those situations, influenza or influenza-like illnesses were not suspected. Only patients presenting with common respiratory infection symptoms across the two databases were included in the analysis. We constructed models for differentiating COVID-19 from influenza/influenza-like illness under 45 scenarios. These scenarios were constructed from different assumptions about the prevalence of COVID-19, influenza, and influenza-like illness co-occurring during the same season. We assumed that the future prevalence of COVID-19 would be 2%, 4%, or 6% of the population; that of influenza would be 0.01%, 3%, 6%, 9%, or 12%; and that of influenza-like illness would be 1%, 3.5%, or 7%. The combination of these assumptions produced 45 different scenarios. In each scenario, to differentiate COVID-19 from influenza or influenza-like illness, we used ordinary logistic regressions. In these regressions, the dependent variable was the laboratory-confirmed COVID-19 test result.
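The 45 scenarios follow directly from crossing the three assumption sets (3 COVID-19 prevalences x 5 influenza x 3 influenza-like illness); a minimal enumeration:

```python
from itertools import product

covid_prev = [0.02, 0.04, 0.06]               # assumed COVID-19 prevalence
flu_prev = [0.0001, 0.03, 0.06, 0.09, 0.12]   # assumed influenza prevalence
ili_prev = [0.01, 0.035, 0.07]                # assumed influenza-like-illness prevalence

# Each scenario is one (covid, flu, ili) prevalence triple.
scenarios = list(product(covid_prev, flu_prev, ili_prev))
print(len(scenarios))  # 3 * 5 * 3 = 45
```

One regression model is then fit per triple, with case/control sampling weighted to match the assumed prevalences.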
The independent variables were age, gender, and the 10 symptoms shared across the two databases. Regressions were done with both linear and interaction terms. Interaction terms were organized among age above 30, gender, and binary symptoms, taken in pairs, 3-way, 4-way, and 5-way. In total, 5,519 combinations of interaction terms were possible. To reduce the possibility of modeling noise, the models were constructed and tested using 5-fold cross-validation. The diagnostic value of different symptoms (the regression coefficients associated with the symptoms) changed across the 45 scenarios. The full list of coefficients of the regression models is presented in the supplementary material. Furthermore, the regression coefficients for different interaction terms suggested that the differentiation of COVID-19 and influenza/influenza-like illness was impacted by age. In 52.65% of scenarios, the impact of a symptom combination reversed when the age group was switched from 20\u201329 to 50\u201359 years old. For example, consider the scenario in which the prevalence of COVID-19 is 4%, influenza is 9%, and influenza-like illness is 3.5%. In this scenario, it is informative to look at the diagnostic value of a combination of fever and sore throat on the odds of having COVID-19. In the age group of 20\u201329 years old, that combination of symptoms reduced the odds of having COVID-19 (odds of 0.18). In individuals aged 50 to 59 years old, the same set of symptoms increased the odds of having COVID-19 (odds of 2.02). These findings are consistent with others in the literature, suggesting that COVID-19 presentation differs across age groups. There are a number of limitations in this study which should be considered before evaluating the findings. This study has focused on COVID-19 cases that present with respiratory symptoms. Not all SARS-CoV-2 infections have presented with symptoms. Furthermore, not all symptomatic COVID-19 patients present with respiratory symptoms.
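The age-dependent reversal in the fever and sore throat example can be illustrated with a toy logistic model. The coefficients below are hypothetical values chosen so that a 3-way age-symptom interaction reproduces odds near 0.18 and 2.02; they are not the study's fitted coefficients:

```python
import math

# Hypothetical coefficients, for illustration only (not fitted values).
b = {
    "intercept": -2.0,
    "fever": -1.0,
    "sore_throat": -0.7,
    "age_50_59": 0.3,
    "fever*sore_throat*age_50_59": 2.4,  # 3-way interaction drives the reversal
}

def covid_odds(fever, sore_throat, age_50_59):
    # Linear predictor of a logistic model; exp() converts log-odds to odds.
    z = (b["intercept"] + b["fever"] * fever + b["sore_throat"] * sore_throat
         + b["age_50_59"] * age_50_59
         + b["fever*sore_throat*age_50_59"] * fever * sore_throat * age_50_59)
    return math.exp(z)

# Odds ratio for fever + sore throat, within each age group (intercept and
# age main effect cancel out of each ratio).
young = covid_odds(1, 1, 0) / covid_odds(0, 0, 0)  # age 20-29: exp(-1.7) ~ 0.18
older = covid_odds(1, 1, 1) / covid_odds(0, 0, 1)  # age 50-59: exp(0.7)  ~ 2.01
```

The same symptom pair thus lowers the odds of COVID-19 in younger patients but raises it in older ones, purely through the interaction term.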
The models constructed and validated in this study would not be applicable to asymptomatic individuals or those without respiratory symptoms. Another limitation of this study is that it relied on two sources of data collected at different time periods. The COVID-19 cases used in our analysis occurred in 2019 and 2020. Adherence to masking and social distancing, especially during the early days of the COVID-19 pandemic, had reduced the number of influenza cases in 2020 to nearly zero. This study was done before the emergence of the Omicron and Delta variants of SARS-CoV-2. At the time of the publication of this paper, there were reports that different variants of the novel coronavirus may present with different symptoms. Despite these limitations, the study illustrates the complexity of differentiating COVID-19 from influenza/influenza-like illness. Symptom screening should focus on clusters and not individual symptoms. Models based on pairs of symptoms were less accurate (AROC = 0.69) than models based on 5-way interactions between symptoms (AROC = 0.90), suggesting the importance of clusters of symptoms. Clinicians cannot rely on simple rules for diagnosing COVID-19 and would need to examine combinations of symptoms. To differentiate COVID-19 from influenza and influenza-like illness, symptom screening should consider the prevalence of the pathogens. The impact of symptoms on the diagnosis of COVID-19 changed under different scenarios. In the majority of scenarios, the impact of at least one symptom cluster reversed: if the symptom was originally indicative of COVID-19, under different assumptions about the prevalence of pathogens it reversed direction and instead ruled out COVID-19. The 5-fold cross-validated AROC associated with differentiating COVID-19 from influenza based on the associated symptoms was high (90%). This level of accuracy is high enough to be clinically relevant.
At the same time, the models constructed in this study are complex and present challenges for applicability in clinical practice. Currently, the CDC\u2019s website states that it is not possible to differentiate COVID-19 from other respiratory diseases based on symptoms alone \u201cbecause some of the symptoms of flu, COVID-19, and other respiratory illnesses are similar.\u201d The sheer number of symptom combinations, assumptions about the prevalence of pathogens, and age-symptom combinations greatly exceeds the number of items that any clinician can keep in mind. The complexity of the inference task suggests the need for a decision aid that can assist clinicians in making COVID-19 diagnoses more accurately and allow for better symptom screening in the community. Such a tool could automatically account for the prevalence of COVID-19, influenza, and influenza-like illness based on the geographic location of the user; select the logistic regression model appropriate for the location of the individual; and predict the individual\u2019s odds of having COVID-19. Ideally, such a web tool should report the probability of COVID-19 using real-time spatial-temporal prevalence of respiratory infections, and report directly to patients and, through patients, to their clinicians. Other investigators have developed real-time access to forecasts of respiratory infections. The models developed in this study establish how one can differentiate COVID-19 from influenza, albeit only if a computerized decision aid could interview the patient and calculate the probability of a likely infection.
Such a method of assessment and triage would be helpful if access to at-home COVID-19 tests were limited, as experienced in the United States for some time. S1 File (ZIP). S1 Table (DOCX). S2 Table (DOCX). S3 Table (DOCX). S1 Data (TXT). S2 Data (TXT)."}
{"text": "It has long been suspected that the sensory cells responsible for the major CNS contribution to this so-called respiratory CO2/H+ chemoreception are located in the brainstem\u2014but there is still substantial debate in the field as to which specific cells subserve the sensory function. Indeed, at the present time, several cell types have been championed as potential respiratory chemoreceptors, including neurons and astrocytes. In this review, we advance a set of criteria that are necessary and sufficient for definitive acceptance of any cell type as a respiratory chemoreceptor. We examine the extant evidence supporting consideration of the different putative chemoreceptor candidate cell types in the context of these criteria and also note for each where the criteria have not yet been fulfilled. By enumerating these specific criteria we hope to provide a useful heuristic that can be employed both to evaluate the various existing respiratory chemoreceptor candidates, and also to focus effort on specific experimental tests that can satisfy the remaining requirements for definitive acceptance. An interoceptive homeostatic system monitors levels of CO2.
Since t2; 3) cell activity in vivo tracks pH or PCO2; 4) CO2/H+ modulation of cell activity is a direct effect, at least in part; and 5) interfering with the specific molecular mechanism(s) by which a cell senses CO2/H+ inhibits the normal hypercapnic ventilatory response actiresponse . ConditiThe retrotrapezoid nucleus (RTN) was first identified as a group of cells near the ventral surface of the rostral medulla, inferior to the facial motor nucleus and posterior to the trapezoid bodies, that project to the dorsal respiratory group (DRG) and ventral respiratory group (VRG) in the brainstem . The anaSlc17a6 (VGlut2); they can be differentiated from other nearby Phox2b-expressing populations, like C1 neurons and motoneurons, by the absence of tyrosine hydroxylase (TH) and choline acetyltransferase (ChAT) expression (2P background K+ channel TASK-2 (encoded by Kcnk5) ; as disc the RTN . For thein vivo and ex vivo.Several different methods have been used to obtain activation and inhibition of RTN neurons, and these manipulations in turn activate or inhibit respiration in both conscious and anesthetized animals. Inhibition (acute) or ablation (chronic) of the RTN also blunts/abolishes the HCVR, both 2-evoked breathing stimulation at birth (+ neurons), either by targeted bilateral injection of saporin-conjugated substance P in rats or viral-mediated Cre-dependent expression of caspase in Nmb-Cre mice, reduces baseline breathing and nearly completely abolishes the HCVR decreases phrenic nerve activity, often to the point of apnea , and thiat birth . Moreovethe HCVR (Souza ethe HCVR .E) through effects on both tidal volume and frequency and occludes further activation by CO2. 
These effects are observed in both conscious and anesthetized animals, and ChR2-mediated increases in VE depend on glutamatergic transmission from the RTN blunts phrenic nerve discharge intensity and frequency at baseline as well as during an acute hypercapnic challenge in an ex vivo brainstem-spinal cord preparation (Transient activation of RTN neurons via photoactivation of channelrhodopsin 2 (ChR2) expressed in RTN neurons under the control of a Phox2b-responsive promotor (PRSx8) increases minute ventilation , and it seems certain that the sampling was diluted by recording from the multiple other neuronal subtypes present in the general parafacial region or, in the early postnatal period (P0-P2), the parafacial respiratory group (pFRG), and are most likely early precursors to the RTN , were also found to retain their CO2/H+ sensitivity or SB269970 (5-HTR7 antagonist) can block RTN activation by exogenous 5-HT in vitro , along with additional modulators from various other cell groups , can enhance baseline activity of RTN neurons and thereby facilitate their response to CO2/H+ .+ current , only 56% of those GFP+ RTN neurons are pH-sensitive in TASK-2 deleted mice; the pH-sensitive background K+ current is reduced in pH-sensitive cells from these TASK-2 global knockout mice, and eliminated in \u223c44% of cells that emerged as pH-insensitive after TASK-2 deletion (2 is strongly reduced (by \u223c60% at 8% CO2) while baseline respiration is unaffected is also reduced in GPR4 global knockout mice while activation of caudal raphe neurons is unaffected by GPR4 deletion and selective re-expression of GPR4 in the RTN alone restores CO2-induced Fos expression in RTN neurons and rescues the respiratory defects observed in GPR4 global knockout animals (2) , approximals (2) . 
It is also worth noting that RTN neurons fire action potentials in a steady pacemaker-like pattern both in vitro and in vivo, when other respiratory-related inputs are eliminated. Whatever the ionic mechanisms underlying these pacemaker properties, it is clear that RTN neurons are intrinsically sensitive to CO2/H+, and that this intrinsic sensitivity is imparted by expression of both TASK-2 and GPR4.

The raphe nuclei comprise the dorsal raphe (DR), median raphe (MnR), raphe magnus (RMg), raphe pallidus (RPa), raphe obscurus (ROb), and the parapyramidal (PPy) cell groups; they contain all the serotonergic neurons in the CNS, along with other non-serotonergic neurons. Among the serotonergic raphe neurons there is a wide diversity of neuronal subtypes as defined by both developmental origin and molecular phenotype.

Elegant early studies showed that microinjection of CO2-equilibrated aCSF into different raphe nuclei increases respiration and/or arousal. Raphe neurons project to other chemosensory and respiratory nuclei, including the RTN, and pharmacological activation or inhibition of raphe neurons correspondingly alters respiratory discharge and frequency; these reciprocal connections position raphe neurons to influence respiratory outflow. Loss or inhibition of serotonergic neurons impairs CO2-induced arousal, along with the ability to regulate body temperature during a thermal challenge.

More recent recordings from caudal raphe neurons in conscious animals identified cells whose firing increased with CO2 in parallel with increases in tidal volume (VT) and respiratory output; conversely, this CO2 sensitivity was diminished during sleep, even though that is a period when respiration is strongly dependent on chemoreceptor input. In addition, the activity of all CO2-sensitive caudal raphe neurons increased during motor activity (treadmill locomotion), consistent with a general role in motor function. Although the neurochemical phenotype of these recorded neurons was not definitively established, the physiological, pharmacological, and functional characteristics, together with their anatomical location, indicate that they were likely serotonergic.
Overall, these early data suggested that only a subset of serotonergic raphe neurons is CO2 sensitive, consistent with the recent recognition of multiple genetically, developmentally, and functionally diverse subgroups of serotonergic raphe neurons. In other studies in rats and mice, however, neurons recorded in medullary raphe nuclei were generally insensitive to increases in inspired CO2, including serotonergic neurons identified by post hoc tryptophan hydroxylase (TPH) immunostaining. Additional recordings of identified serotonergic neurons, perhaps incorporating GCaMP-enabled fiber photometry or cell imaging, would be particularly helpful in resolving the question of the chemosensitivity of serotonergic raphe neurons in freely behaving animals in vivo. The latter was recently attempted: miniscope recordings of GCaMP6s-expressing serotonergic neurons in RMg and RPa of conscious mice uncovered multiple types of CO2-dependent responses, with a graded response to CO2 observed in some cells (8/26).

Whatever the ambiguities of the in vivo studies described above, it is abundantly clear from extensive experiments that medullary raphe neurons are directly activated by CO2/H+ in vitro; this has been repeatedly demonstrated in the acute slice, in slice culture and, importantly, under conditions of fast synaptic blockade and/or in dissociated neurons where indirect activation is precluded. This chemosensitivity is restricted to a subpopulation of the raphe cells recorded in vitro, and recent work indicates that the property appears to be specific to the Egr2-Pet1-expressing subset of serotonergic neurons.

The molecular substrate(s) for modulation of serotonergic raphe neuron activity by CO2/H+ remain uncertain. TASK-1/TASK-3 channels contribute to a pH-sensitive background K+ current in these cells in vitro, but genetic deletion of TASK channels does not blunt the HCVR, suggesting either that they are not the relevant sensor in vivo or that another mechanism provides the sensitivity and/or compensates for loss of TASK channels in knockout mice. One potential alternative is GPR4, one of the proton detectors in RTN neurons that is also expressed in serotonergic raphe neurons; however, the inhibition of the HCVR in GPR4-deleted mice is fully rescued by GPR4 re-expression limited only to the RTN. A Ca2+-dependent nonselective cation channel has also been implicated in raphe neuron chemosensitivity in vitro, but its molecular identity and relevance in vivo remain to be established.
In the context of respiratory control by CO2/H+, astrocytes have been implicated largely through purinergic signaling: CO2/H+ evokes ATP release from ventral medullary astrocytes; ATP can excite nearby chemosensory neurons and stimulate breathing; and inhibiting purinergic signaling blunts respiratory output. Together, these observations have motivated direct tests of astrocytes as candidate chemoreceptors.

Studies of the effect of exogenous activation of astrocytes have focused on the astrocytes in the region near the RTN and/or the preBötC. Parapyramidal astrocytes have not been specifically targeted for exogenous activation or inhibition experiments, and for a number of experiments, spatial delineation was not fine enough to enable distinction between different astrocyte populations.

Astrocyte activation is typically assayed using fluorescent probes that report increases in intracellular calcium. Hypercapnic challenge of an ex vivo preparation (likely containing the RTN area) drives astrocytic calcium transients, and Ca2+ imaging of "ventral surface astrocytes" in vivo in the anesthetized rat and ex vivo in the acute horizontal slice during an acute pH challenge (HEPES 7.45 → 7.25) reveals a marked increase in astrocytic Ca2+ throughout the ventral surface of the brainstem, in regions including the RTN and PPy groups of astrocytes. Comparable measurements in conscious animals, whether of astrocytic Ca2+ or even of ATP release with cellular sensors, will be required to better satisfy criterion 3.

Three molecular mechanisms have been proposed for astrocytic CO2/H+ sensing: a) direct activation of connexin 26 (Cx26) by molecular CO2 in astrocytes of the parapyramidal region; b) activation of a Na+/HCO3− cotransporter (NBCe1) by CO2-mediated intracellular acidification of preBötC and RTN astrocytes; and c) direct inhibition of Kir4.1/5.1 by intracellular H+, leading to depolarization of astrocytes adjacent to the RTN. These mechanisms can also shape the extracellular milieu; for example, HCO3− uptake due to activation of NBCe1 by intracellular acidification or depolarization can remove buffering equivalents from the extracellular space and further accentuate extracellular acidification.
Note that VMS astrocytes are unique compared with other CNS populations in that they induce vasoconstriction, as opposed to dilation, of nearby vessels during hypercapnia, likely via a P2Y2-dependent mechanism.

For Cx26, which binds CO2 directly via a lysine carbamylation event, expression of a dominant-negative construct (dnCx26) reduced the VT component of the HCVR at 6% CO2; this effect was noted only at the level of the cPPy, not rostrally near the RTN or more caudal to the cPPy, and did not persist for all timepoints tested. The effects of dnCx26 expression on CO2-evoked ATP release in the cPPy were not reported.

For NBCe1, the associated Na+ influx is proposed to drive reverse-mode Na+/Ca2+ exchange (NCX), providing the Ca2+ uptake required for vesicular release of ATP. Aside from initiating ATP release, the uptake of HCO3− can remove buffering equivalents from the extracellular space, potentially exacerbating local acidification. Consistent with this mechanism, the CO2-dependent Ca2+/Na+ signal in astrocytes is completely blocked in vitro by the NBCe1 inhibitor S0859 and partially blocked by inhibition of NCX. Notably, there is disagreement over the necessity for purinergic stimulation in CO2/H+ activation of RTN neurons, but it seems likely that engagement of P2 receptors plays some role.

The approaches used to inhibit astrocyte signaling in the preBötC support a contribution to the HCVR, but those same manipulations also affect respiratory stimulation by a number of other stimuli. Moreover, they have not yet been applied in the ventral medullary regions where astrocytes were proposed to regulate the CO2/H+ sensitivity of nearby RTN neurons. For Kir channels, the available data are either not astrocyte-specific or do not support a necessary role; studies that eliminate their function specifically in astrocytes would be helpful. It is also possible that these mechanisms are redundant during hypercapnia in vivo, and that simultaneous inhibition of more than one mechanism is necessary to uncover some more prominent role. Finally, alternative molecular mechanisms for proton sensing by astrocytes may yet be uncovered.

There is abundant evidence that astrocytes can modulate the activity of nearby respiratory neurons.
Although brainstem astrocytes display CO2/H+ sensitivity, the most powerful of the new technical advancements in neuroscience have not yet been applied to addressing the significant gaps in fulfilling the criteria that would be necessary for acceptance as bona fide central respiratory chemoreceptors.

The locus coeruleus (LC) is a brainstem structure located in the rostral pons, lateral and ventral to the fourth ventricle; it comprises ∼3,000 noradrenergic neurons in mouse or rat, providing the primary noradrenergic innervation throughout the central nervous system. Its activation in the ex vivo brainstem-spinal cord preparation can increase C4 burst frequency, albeit by a small amount. In vitro, large-conductance Ca2+-activated K+ channel (BK) activity acts as a brake on CO2-activated firing in LC neurons. Although mechanisms of CO2/H+ regulation of LC neuron activity have been examined in vitro, their role in initiating or supporting the whole-animal HCVR is relatively unknown. As mentioned earlier in the discussion of astrocytes, global genetic deletion of Kir5.1 can blunt the HCVR, but it is not possible to attribute this effect to an action on the LC. It has been demonstrated that microinjection of paxilline, a BK channel inhibitor, into the LC of the adult rat can augment the HCVR via effects on tidal volume, presumably by removal of the oscillatory brake described above.

The lateral hypothalamus (LH) is a highly heterogeneous region which contains a large proportion of the orexin-producing neurons within the CNS. The orexin system has been a focus of recent research on arousal state, cardiorespiratory control, and environmental stress response. The orexinergic neurons in the LH have a broad range of targets throughout the brain, including the RTN, LC, raphe, and preBötC. Orexin neurons are activated by hypercapnia, but, as with most of the other cell groups reviewed here, there have been no direct measures of this CO2-mediated neuronal activation in freely behaving animals.
There is good evidence that the orexinergic system can provide a general excitatory drive to respiratory circuits, likely via orexin signaling and in an arousal state-dependent manner. It also seems certain that orexin neurons in the LH can be activated by CO2/H+ in vitro, likely directly, but the cellular and ionic mechanisms so far suggested for intrinsic chemosensitivity of those neurons have not held up to experimental scrutiny, at least in the context of CO2-regulated breathing. Thus, better satisfying a number of these criteria, especially identifying and manipulating a relevant molecular CO2/H+ sensor, will be crucial to support a role for these cells as chemosensors.

Central respiratory chemoreceptors sense CO2/H+ and drive the respiratory circuits that adjust ventilation to correct deviations from normal physiological set points for PaCO2 and tissue acid-base balance. As cellular candidates have emerged, there have been additional efforts to use various technical advances to define those cell types with greater phenotypic clarity, seek molecular substrates for their CO2/H+ sensitivity, and validate their physiological role in respiratory chemosensitivity. To formalize evaluation of these ongoing efforts, we have enumerated a set of increasingly stringent criteria that we believe are necessary and, for the final criterion, sufficient to declare a candidate a bona fide respiratory chemoreceptor.

In the case of the developmentally and biochemically defined RTN neurons, experimental modulation of their activity has the expected effects on respiratory output, and they are directly responsive to CO2/H+ via two molecular pH sensors, TASK-2 and GPR4, that are both required for full elaboration of the HCVR. The CO2/H+ modulation of RTN neurons in vivo remains to be directly observed in unanesthetized animals, and the genetic elimination of TASK-2 and GPR4 was global and did not disrupt the pH-sensing mechanism per se. Nonetheless, both RTN ablation and combined TASK-2/GPR4 knockout eliminate the HCVR nearly completely in conscious animals, consistent with a particularly prominent role for both RTN neurons and their molecular pH sensors. The effect of RTN ablation also suggests that these neurons may be a point of convergence for inputs from other presumptive chemoreceptors. Indeed, RTN neurons are modulated by several transmitters and peptides from those other cell groups, and such a convergent action may support the more pronounced CO2/H+ sensitivity of RTN neurons in vivo, by comparison to in vitro.

The other chemoreceptor candidates that have accrued the most experimental support are the serotonergic raphe neurons and brainstem astrocytes. For raphe neurons, recent elegant intersectional approaches have revealed remarkable molecular and functional diversity within the serotonergic system, and focused attention specifically on the Egr2-Pet1 subset of caudal raphe neurons as potential respiratory chemoreceptors. These particular neurons are directly CO2/H+ sensitive in vitro, an observation not yet verified in vivo, and inhibition of this subset of serotonergic cells blunts the HCVR. To date, TASK-1/TASK-3 channels are the only molecularly identified pH sensors in serotonergic raphe neurons, but genetic deletion of those TASK channels has no effect on the HCVR in mice. For astrocytes, there is good evidence that they are activated by CO2/H+ to mobilize intracellular Ca2+, but this has not been validated in conscious animals. Optogenetic activation of VMS astrocytes evokes ATP release and stimulates local RTN neurons and respiration via a P2Y receptor mechanism; conversely, inhibition of gliotransmitter release and ATP signaling in preBötC neurons blunts the HCVR, along with various other respiratory reflexes. It remains to be clarified whether there is a specific site for astrocytic modulation of CO2-dependent respiratory output, and the molecular specializations proposed to support CO2/H+ sensing by astrocytes have not yet been clearly linked to the HCVR. For LC and orexin neurons, which can modulate respiratory output and may indeed be CO2/H+ sensitive in vitro, there is much less direct evidence for the various criteria.

If this set of criteria can be fulfilled by one or more of these cell types and molecular sensors, then it will also be important to quantify their relative contributions and determine whether they function together in series, in parallel, or both. Our current working model holds that respiratory chemoreception and the HCVR are primarily subserved by a multicellular sensory apparatus. In particular, we see the RTN as both a direct CO2/H+ sensor and a principal integrative center that transduces local environmental variations in CO2/H+ and neuromodulatory input from the other presumptive chemosensory cell groups for onward transmission to the respiratory rhythm and pattern generator circuits. These inputs modulate the excitability of RTN neurons, increasing their CO2/H+ sensitivity and input-output gain. To the extent that those other cell groups encode CO2/H+ in vivo, their inputs may confer a secondary CO2/H+ signal to RTN neurons while imparting their own chemosensitivity onto other elements of the respiratory control and output networks. Many predictions of this working model have not been directly tested; those predictions, together with the chemoreceptor criteria we outlined here, can hopefully serve as a guide for future experiments. Regardless of whether any of these cell groups fulfills all the listed criteria for bona fide respiratory chemoreceptors, it is clear that they each provide important modulatory influences on downstream respiratory networks that enhance how changes in CO2 are ultimately translated into an effective homeostatic ventilatory response. Finally, it is also important to recognize that these cell groups could serve chemoreceptor functions for other non-respiratory effects of CO2.