{"text": "The activities of 6-phosphogluconate dehydrogenase and glucose-6-phosphate dehydrogenase have been measured in squamous epithelial cells of the uterine cervix from normal patients and cases of cervical intraepithelial neoplasia (CIN). A biochemical cycling method, which uses only simple equipment and is suited to routine use and to automation, was applied to cells separated by gradient centrifugation. In addition, cells were examined cytochemically, and the intensity of staining in the cytoplasm of single whole cells was measured using computerised microcytospectrophotometry. Twenty per cent of cells in samples from normal patients (n=61) showed staining intensities above an extinction of 0.15 at 540 nm, compared to 71% of cases of CIN 1 (n=14), 91% of cases of CIN 2 (n=11) and 67% of cases of CIN 3 (n=15). The cytochemical data do not allow definitive distinctions to be made between different grades of CIN whereas the biochemical assay applied to cell lysates shows convincing differences between normal samples and cases of CIN. There are no false negatives for CIN 3 (n=14) and CIN 2 (n=10) and 11% false negatives for CIN 1 (n=9) and 14% of false positives for normal cases (n=21). The results of this preliminary study with reference to automation are discussed [corrected]."} {"text": "The activities of 6-phosphogluconate dehydrogenase and glucose-6-phosphate dehydrogenase have been measured in squamous epithelial cells of the uterine cervix from normal patients and cases of cervical intraepithelial neoplasia (CIN). A biochemical cycling method, which uses only simple equipment and is suited to routine use and to automation, was applied to cells separated by gradient centrifugation. In addition, cells were examined cytochemically and the intensity of staining in the cytoplasm of single whole cells was measured using computerised microcytospectrophotometry. 
Twenty per cent of cells in samples from normal patients (n = 61) showed staining intensities above an extinction of 0.15 at 540 nm, compared to 71% of cases of CIN 1 (n= 14), 91% of cases of CIN 2 (n= 11) and 67% of cases of CIN 3 (n= 15). The cytochemical data do not allow definitive distinctions to be made between different grades of CIN whereas the biochemical assay applied to cell lysates shows convincing differences between normal samples and cases of CIN. There are no false negatives for CIN 3 (n = 14) and CIN 2 (n = 10) and 11% false negatives for CIN 1 (n = 9) and 14% of false positives for normal cases (n = 21). The results of this preliminary study with reference to automation are discussed."} {"text": "Objective: We undertook a microbiological study of purulent specimens from women with symptomatic breast abscesses.Methods: Fifty-one purulent samples were collected in 2 periods (December 1991\u2013April 1992 and January 1994\u2013June 1994) from nonpuerperal breast abscesses in 44 patients attending our hospital.Results: One of the most frequently isolated microorganisms was Proteus mirabilis , present as a pure culture in all but 1 specimen (isolated together with Peptostreptococcus spp.). Staphylococcus aureus was isolated in 10 specimens, 6 of which were post-tumorectomy abscesses. Polymicrobial anaerobic flora were isolated in 11 specimens (21.5%); Staphylococcus epidermidis in 4 (8%); and Streptococcus milleri, Alcaligenes sp., and mixed aerobic-anaerobic flora in 1 specimen each. The 7 remaining samples (13.7%) were negative bacteriological cultures.Conclusions: We draw attention to the frequent isolation of P. mirabilis in recurrent and torpid breast abscesses in 4 women in whom surgery was necessary in addition to antibiotic treatment."} {"text": "Surveillance in the context of malaria elimination will needs to shift from measuring reductions in morbidity and mortality to detecting infections (with or without symptoms). 
The malaria elimination surveillance research and development agenda needs to develop tools and strategies for active and prompt detection of infection. The capacity to assess trends and respond without delay will need to be developed, so that surveillance itself becomes an intervention. Research is needed to develop sensitive field tests that can detect low levels of parasitaemia and/or evidence of recent infection. Examples of recent work on surveillance and response issues in several African countries will be discussed to illustrate approaches in active case detection and case investigations, cell phone reporting and response, and strategies to access mobile populations."} {"text": "This case demonstrates very late neurological deterioration due to a pseudarthrosis in the fusion mass after scoliosis surgery. Though not the first case in the literature, it is the first case in which pre-operative magnetic resonance imaging revealed that the compression was due to a cyst arising from the pseudarthrosis. Twenty-two years after a successful correction and fusion for scoliosis, a 38-year-old Caucasian man presented with progressive numbness and significant weakness. As revealed by imaging, a cyst relating to an old pseudarthrosis was compressing the spinal cord. This was removed, and the cord decompressed, resulting in resolution of all symptoms. Lifetime care of patients with scoliosis is required for very late complications of surgery. Asymptomatic pseudarthroses have the potential to cause symptoms many years after surgery. This case report highlights a case of spinal cord compression and clinical neurological deterioration caused by a degenerative cyst arising from a pseudarthrosis in an area of previous scoliosis fusion 22 years after the index surgery.
The case highlights the need for long-term follow-up of patients with scoliosis and for patients to have access to scoliosis services to address problems that may arise many years after the original surgery. The patient was a Caucasian man who was born in 1974 and who was first referred to the Spinal Deformity Service in 1977. He was initially treated with a brace for his mild scoliosis and followed up over many years until the age of 15, when his scoliosis had progressed, despite bracing, to a 44\u00b0 curve between T8 and L3. Quite a rapid deterioration had occurred through the adolescent growth spurt, and consequently the patient underwent a posterior Harrington-Luque spinal fusion from T7 to L3. Bone graft was taken from the right iliac crest to create the fusion. The patient went on to lead a full and active life and was a family man bringing up young children. In the summer of 2011, the patient, then 38, fell while playing paintball and injured his shoulder. The injury was musculoskeletal and settled reasonably quickly. However, in the two weeks that followed, he started to note increasing left leg weakness that was associated with some reduced sensation. The paraparesis progressively worsened to the point that he was unable to walk without sticks or a wheelchair. There had not been any disturbance in bowel or bladder function. The patient now reported altered sensation down the left side of his body from the L1 dermatome. In a neurological examination of his left leg, his motor power was significantly reduced: he had, at best, 3/5 power, but most myotomes were of strength between 1/5 and 2/5 on the Medical Research Council grading scale. There was also blunting of sensation from the L1 dermatome distally. Spinal radiography at this time showed a good position of the Harrington rod and Luque wires as placed at the index surgery and a reasonable spinal alignment in both planes. Whole-spine magnetic resonance imaging (MRI) revealed a large cervicothoracic syrinx without Arnold-Chiari malformation.
This was assumed to be old and not relevant to the current presenting complaint. The MRI also showed, within the area of instrumentation, a large cyst-like structure causing significant cord compression, and this was presumed to be the cause of the recent neurological deterioration (Figures 2 and 3). The patient underwent urgent surgery to decompress the spinal cord. Instrumentation using a modern pedicle screw system was placed to bridge the level of the proximal pseudarthrosis to provide stability. At the level of the compression on the cord, the fusion mass and pseudarthrosis were then burred away to reveal the spinal canal and cyst, the latter of which was removed and sent for a histological examination. The pseudarthrosis was repaired by using an iliac crest bone graft from the left posterior superior iliac spine (the side opposite that of the index procedure). After surgery, there was a wound breakdown with infection. This was managed through multiple debridements and vacuum-assisted closure therapy followed by a musculocutaneous flap to close the defect, with a good result. The histological examination of the cyst revealed a degenerative cyst with no signs of malignancy or infection. At the last follow-up, six months after surgery, the patient had full recovery of neurological function with no motor weakness or sensory loss and was returning to a more active life. Post-decompression MRI scans are unreadable because of the metal artifact of the Harrington rod and pedicle screw instrumentation over the area in question (Figure); a CT scan was therefore used to assess the area. This patient had, for the time, a well-performed operation with good correction of deformity and a stable construct, which allowed him to return to a full and active life for 20 years without problems. It is worth noting that, although there were two separate pseudarthroses in the original fusion mass, both were asymptomatic until a reasonably minor trauma in the summer of 2011.
Whether the degenerative cyst from the upper pseudarthrosis would have been present anyway or was stimulated to form after this trauma will never be known. The histology report revealed nothing more than degenerative and inflammatory changes. Neurological compromise secondary to pseudarthrosis within a previous posterior Harrington rod fusion at some time after the index surgery has been reported previously. However, this is the first reported case in which pre-operative MRI revealed that the compression was due to a cyst arising from the pseudarthrosis. The underlying cervicothoracic syrinx is undoubtedly related to why the spinal deformity occurred in the first place but is thought to be unrelated to the acute neurological compromise seen in this case, as it was a lower-limb-only problem with no symptoms in the upper limbs and the changes seen on the MRI appear to be longstanding. This case highlights the need for long-term scoliosis care for patients who have had scoliosis correction, fixation, and fusion in the past. Pseudarthrosis may not be apparent on plain X-ray, and the combination of CT scanning and MRI may be required to make the diagnosis given the difficulties in imaging older implants in an MRI scanner. The degenerative cyst that formed in this case was secondary to the micromovement of the pseudarthrosis, and its location in the spinal canal then caused neurological compromise. Although an operation on an asymptomatic pseudarthrosis may not be appropriate, patients need to be aware of the potential complications of leaving a pseudarthrosis alone, given the late complication demonstrated in this case. Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal. CT: Computed tomography; MRI: Magnetic resonance imaging. The author declares that he has no competing interests."} {"text": "Exopolysaccharides (EPSs) produced by a diverse group of microbial systems are rapidly emerging as new and industrially important biomaterials.
Due to their unique and complex chemical structures and many interesting physicochemical and rheological properties with novel functionality, microbial EPSs find a wide range of commercial applications in fields such as the food, feed, packaging, chemical, textile, cosmetics and pharmaceutical industries, agriculture, and medicine. EPSs are mainly associated with high-value applications, and they have received considerable research attention over recent decades owing to their biocompatibility, biodegradability, and compatibility with both the environment and humans. However, only a few microbial EPSs have achieved commercial use because of their high production costs. The emerging need to overcome economic hurdles and the increasing significance of microbial EPSs in industrial and medical biotechnology call for the elucidation of the interrelations between metabolic pathways and the EPS biosynthesis mechanism in order to control, and hence enhance, microbial productivity. Moreover, a better understanding of the biosynthesis mechanism is important for improving product quality and properties and for the design of novel strains. Therefore, a systems-based approach constitutes an important step toward understanding the interplay between metabolism and EPS biosynthesis and toward further enhancing metabolic performance for industrial application. In this review, the microbial EPSs, their biosynthesis mechanisms, and important factors for their production will first be discussed. After this brief introduction, recent literature on the application of omics technologies and systems biology tools for the improvement of production yields will be critically evaluated. Special focus will be given to EPSs with high market value such as xanthan, levan, pullulan, and dextran.
Biopolymer is used as a term to describe polymers produced by biological systems, i.e., polymers that are not synthesized chemically but are derived from biological starting materials such as amino acids, sugars, and natural fats. In nature, EPSs have a significant role in protection of the cell, adhesion of bacteria to solid surfaces, and participation in cell-to-cell interactions; strains able to produce two different polysaccharides have also been reported. Monosaccharides such as glucose, galactose, N-acetylglucosamine, N-acetylgalactosamine, or glucuronic acid (GlcA) form the repeating units of heteropolysaccharides, occasionally together with non-carbohydrate substituents such as phosphate, acetyl, and glycerol. Homopolysaccharides and heteropolysaccharides also differ in their synthetic enzymes and site of synthesis. Biosynthesis of homopolysaccharides requires specific substrates like sucrose, while the residues of heteropolysaccharides are produced intracellularly and precursors are carried across the membrane by isoprenoid glycosyl carrier lipids for extracellular polymerization. Culture conditions such as the carbon-to-nitrogen ratio, oxygenation rate, and carbon source can impact EPS production, and EPS composition can be altered by such changes (differing monosaccharides or monosaccharide molar ratios), for instance as a function of the carbon source. Due to their unique and complex chemical structures that offer beneficial bioactive functions, biocompatibility, and biodegradability, microbial EPSs have found a wide range of applications in the chemical, food, pharmaceutical, cosmetics, and packaging industries, agriculture, and medicine, in which they can be used as adhesives, absorbents, lubricants, soil conditioners, cosmetics, drug delivery vehicles, textiles, high-strength materials, emulsifiers, viscosifiers, suspending agents, and chelating agents.
In recent years, several novel bacterial EPSs have been isolated and identified; however, only a few of them have achieved significant commercial value because of high production costs. Lactic acid bacteria (LAB) are also known as polysaccharide producers, and the producer of the commercial EPS dextran, Leuconostoc mesenteroides, is a LAB; however, low production yields prevent LAB species from being exploited commercially. Besides, lactobacilli are GRAS bacteria and their EPSs could be utilized in foods. The enzymes involved in EPS biosynthesis can be divided into several groups. The first group comprises enzymes of sugar uptake and phosphorylation, such as those converting glucose to glucose-6-phosphate (Glc-6-P); they are also involved in other cellular metabolism. The second group is required to catalyze the conversion of sugar nucleotides. Uridine-5\u2032-diphosphate (UDP)-glucose pyrophosphorylase, which catalyzes the conversion of Glc-1-P to UDP-Glc, one of the key molecules in EPS synthesis, can be given as an example of this class of enzymes. Another enzyme group is the glycosyltransferases (GTFs), which are located in the cell membrane. The sugar nucleotides are transferred by GTFs to a repeating unit attached to a glycosyl carrier lipid. The enzymatic functions, the structures, and the identification of the genes that encode GTFs have been investigated intensively, and on the basis of amino acid sequence similarities more than 94 GTF families have been reported in the Carbohydrate-Active EnZymes (CAZy) database. Four general mechanisms are known for EPS production: the Wzx/Wzy-dependent pathway, the ATP-binding cassette (ABC) transporter-dependent pathway, the synthase-dependent pathway, and extracellular synthesis by use of a single sucrase protein. Inside the cell, precursor molecules are transformed by enzymes into activated sugars/sugar acids in the first three mechanisms. Alternatively, in the extracellular production pathway, the polymer strand is elongated by direct addition of monosaccharides obtained by cleavage of di- or trisaccharides. EPS biosynthesis and export have been reported to occur via Wzx/Wzy-independent and Wzx/Wzy-dependent pathways; in the Wzx/Wzy-independent (ABC transporter-dependent) pathway, polymerization occurs at the cytoplasmic side of the inner membrane.
The genes, which are required for high-level polymerization and surface assembly, are described as wza (encoding an outer-membrane protein), wzb (encoding an acid phosphatase), and wzc (encoding an inner-membrane tyrosine autokinase). In most Gram-negative bacteria, the synthase-dependent pathway secretes complete polymers such as curdlan [\u03b2-(1-3)-linked glucose monomers] or bacterial cellulose [\u03b2-(1-4)-linked glucose units] (Rehm). In extracellular synthesis, the polymerization reaction occurs as transfer of a monosaccharide from a disaccharide to a growing polysaccharide chain in the extracellular environment. This type of EPS production is uncomplicated and independent of central carbon metabolism, but there is limited variation in structure. Extracellular EPS synthesis can occur for homopolysaccharides via extracellular GTFs. Heteropolysaccharides, in contrast, are built from regular repeating units formed from sugar nucleotide precursors, which are also involved in the biosynthesis of several cell wall components and can therefore be considered essential for growth. The GTFs that catalyze heteropolysaccharide biosynthesis act in numerous intracellular steps, and only the last step, the polymerization of the repeating units, is extracellular. Depending on the substrate type, uptake of sugars is achieved through a passive or an active transport system in the first step. Subsequently, the substrate is catabolized in the cytoplasm through glycolysis, sugar nucleotides are formed, and the activated precursors, which are derived from phosphorylated sugars, are biosynthesized. Finally, the EPS is secreted to the extracellular environment; secretion from the cytoplasm through the cell membrane without compromising its critical barrier properties is a challenging process. Levansucrases (EC 2.4.1.10) and inulosucrases (EC 2.4.1.9) of the transfructosidase class produce levan- and inulin-type fructans. Ftf genes are induced under stress conditions, and the enzymes catalyze sucrose hydrolysis and fructosyl transfer onto the growing fructan chain, or the synthesis of tri- or tetrasaccharides.
In fructans, glucose is the non-terminal reducing residue (G-Fn), and in contrast to classical Leloir-type GTFs these enzymes utilize sucrose as donor substrate instead of nucleotide sugars. The transfer of monosaccharides from activated molecules to an acceptor molecule, generating a glycosidic bond, is catalyzed by these enzymes. Energy released by degradation of sugars is used to catalyze the transfer of a glycosyl residue onto the forming polysaccharide. According to the product of biosynthesis, the enzymes can be differentiated into transglucosidases (EC 2.4.1.y) and transfructosidases (EC 2.4.1.y or 2.y). The transglucosidase class includes dextransucrase, mutansucrase, and reuteransucrase (EC 2.4.1.5), which are high-molecular-weight extracellular enzymes that catalyze hydrolysis of sucrose to glucose and fructose and glucosyl transfer onto carbohydrate or non-carbohydrate compounds. EPS structures can be varied by the intervention of different enzymes, and the synthesis of each polysaccharide is catalyzed by a specific GTF; strains carrying two GTF genes can therefore produce two different polysaccharides. Systems biology offers valuable application areas in the molecular sciences, medicine, pharmacy, and engineering, such as pathway-based biomarkers and diagnosis, systematic measurement and modeling of genetic interactions, systems biology of stem cells, identification of disease genes, drug design, strain development, and bioprocess optimization. Industrially important biopolymers and building blocks produced by microbes include polyhydroxyalkanoates (PHAs), polylactic acid (PLA), polysaccharides, carboxylic acids, and butanediols.
Systems biology approaches and genome-scale metabolic model-guided metabolic engineering strategies have been successfully employed to enhance the productivity of useful biopolymers and their precursors. For example, production of the copolymer P(3HB-co-LA) by direct fermentation of metabolically engineered strains has been reported; Pseudomonas putida was investigated using a genome-scale metabolic model of this microorganism, and survival under anaerobic stress was achieved by introducing the ackA gene from Pseudomonas aeruginosa and Escherichia coli. Cai et al. reported the genome of the PHA-synthesizing Halomonas sp. TD01. In this study, several genes relevant to PHA and osmolyte biosynthesis were analyzed, providing invaluable clues for understanding evolution and gene transfer and strategic guidance for the genetic engineering of halophilic Halomonas sp. TD01 for co-production of PHA and ectoine. Genome analysis of H. maura strain S-30 was performed to identify the eps gene cluster in this strain. Three conserved genes (epsA, epsB, and epsC) and a wzx homolog (epsJ) were found, indicating that mauran is formed by a Wzx/Wzy-dependent system. It was also reported that the eps gene cluster reaches maximum activity during stationary phase in the presence of high salt concentrations (5% w/v), as investigated by transcriptional expression assays using a derivative of H. maura S-30 carrying an epsA::lacZ transcriptional fusion. In silico genome-scale metabolic analysis can also identify which pathway is required for a target product. For xanthan, a genome-scale metabolic model was reconstructed from the genome data of Xanthomonas campestris pv. campestris (Xcc), manually curated, and further expanded in size. The impact of xanthan production was studied in vivo and in silico and compared with a gumD mutant strain. This verified metabolic model is also the first model focusing on bacterial EPS synthesis, and it can be used for detailed systems biology analyses and synthetic biology reengineering of Xcc. Moreover, a draft genome of X.
campestris B-1459, the strain used in pioneering studies of xanthan biotechnology, has been reported recently and can be used to analyze the genetic basis of xanthan biosynthesis. Levan is a homopolysaccharide of fructose with \u03b2-(2-6) linkages between the fructose rings. It is synthesized by the action of a secreted levansucrase (EC 2.4.1.10) that directly converts sucrose into the polymer. A genome-scale metabolic modeling study related to the levan biosynthesis mechanism has also been performed. The findings identified mannitol as a significant metabolite for levan biosynthesis, which was further verified experimentally. In a previous study, a levan yield of 1.844 g/L from stationary-phase bioreactor cultures using a defined medium containing sucrose as sole carbon source, an almost fourfold increase in levan production, was reported. Genome annotation of the levan producer also revealed genes related to osmoprotectants. Genes related to EPS biosynthesis and intracellular PHA biosynthesis were detected: the Hs_SacB gene encoding the extracellular levansucrase enzyme (EC 2.4.1.10), the Pel polysaccharide gene cluster, an alginate lyase precursor (EC 4.2.2.3), and \u201cAlginate biosynthesis protein Alg8\u201d genes were predicted. The genome information and metabolic model will have a significant role in levan research, since they will be utilized to improve levan production by metabolic engineering strategies and medium optimization. Recently, Diken et al. performed such a genome-wide analysis of the levan producer. Pullulan is produced by the fungus Aureobasidium pullulans; key biosynthetic enzymes include phosphoglucomutase, UDP-glucose pyrophosphorylase, and glucosyltransferase. Nitrogen is a major medium component for the cultivation of A. pullulans. Strain N3.387 was subjected to ethyl methane sulfonate (EMS) and ultraviolet (UV) mutagenesis to improve pullulan biosynthesis, which yielded a mutant that could produce more pullulan than the wild-type strain. Kang et al. studied A. pullulans to understand the effect of different concentrations of (NH4)2SO4, which would be useful to optimize industrial pullulan production.
The proteomic studies demonstrated the expression of antioxidant-related and energy-generating enzymes and the depression of enzymes involved in amino acid biosynthesis, glycogen biosynthesis, glycolysis, protein transport, and transcriptional regulation under nitrogen limitation, which resulted in conversion of metabolic flux from the glycolysis pathway to the pullulan biosynthesis pathway; Sheng et al. performed these analyses. The genome sequence of Aureobasidium pullulans AY4 was determined, and genome analysis revealed the presence of genes coding for commercially important enzymes such as pullulanases, dextranases, amylases, and cellulases. Dextran is synthesized extracellularly from sucrose through the use of specific enzymes like glucansucrases. Various genome sequence and genome analysis studies have been carried out with dextran-producer strains. For instance, the genome of the LAB Leuconostoc lactis EFEL005 has been announced, and genomic analysis was performed to understand its probiotic properties as a starter for fermented foods (Moon et al.). In some strains the glucansucrase gene is part of a large EPS cluster, while dsrA (LEGAS_1012) is located as a single gene in the chromosome. Strain LBAE C39-2 was also found promising for in situ production of dextran in sourdoughs (Amari et al.). In microbial EPS production, a better understanding of the biosynthesis mechanism is important for optimization of production yields, improvement of product quality and properties, and the design of novel strains. As most novel bacterial EPSs with unique properties have high production costs, and these economic hurdles need to be overcome, this information about biosynthesis is also important for lowering costs. More information on the genomes of EPS-producer microorganisms will enable the development of additional strategies to enhance EPS production rates and to engineer polymer properties by modifying composition and chain length.
Since a genome-scale reconstruction includes every reaction of the target organism through integrating genome annotation and biochemical information, a systems-based metabolic modeling approach constitutes an important step toward understanding the interplay between metabolism and EPS biosynthesis. Since microbial biopolymer biosynthesis is the result of a complex system of many metabolic processes, systems-based approaches are needed to control and optimize production in order to improve previously reported yields. Furthermore, genome-scale metabolic models based on genome sequences will have the capacity to incorporate gene expression, metabolomics, and proteomics data to obtain accurate predictions under different environmental conditions. The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Bacteria produce a wide range of exopolysaccharides which are synthesized via different biosynthesis pathways. The genes responsible for synthesis are often clustered within the genome of the respective production organism. A better understanding of the fundamental processes involved in exopolysaccharide biosynthesis and of the regulation of these processes is critical for genetic, metabolic and protein-engineering approaches to produce tailor-made polymers. These designer polymers will exhibit superior material properties targeting medical and industrial applications. Exploiting the natural design space for the production of a variety of biopolymers will open up a range of new applications. Here, we summarize the key aspects of microbial exopolysaccharide biosynthesis and highlight the latest engineering approaches toward the production of tailor-made variants with the potential to be used as valuable renewable and high-performance products for medical and industrial applications.
Polysaccharides produced by microbes can be generally classified by their biological functions into intracellular storage polysaccharides (glycogen), capsular polysaccharides which are closely linked to the cell surface, and extracellular bacterial polysaccharides that are important for biofilm formation and pathogenicity. This article will focus on the latter, also termed EPS, which are secreted into the surrounding environment and therefore can be efficiently harvested from cell-free culture supernatant in a continuous and cost-effective manufacturing process. At present, four general mechanisms are known for the production of these carbohydrate polymers in bacteria: (i) the so-called Wzx/Wzy-dependent pathway; (ii) the ATP-binding cassette (ABC) transporter-dependent pathway; (iii) the synthase-dependent pathway; and (iv) extracellular synthesis by use of a single sucrase protein. The precursor molecules, which are necessary for the stepwise elongation of the polymer strands, are produced by various enzymatic transformations inside the cell, and the first three biosynthesis pathways follow in principle the same concept of producing activated sugars/sugar acids. For extracellular production, the polymer strand is elongated by direct addition of monosaccharides obtained by cleavage of di- or trisaccharides. In the Wzx/Wzy-dependent pathway, individual repeating units, which are linked to an undecaprenol diphosphate anchor (C55) at the inner membrane, are assembled by several glycosyltransferases (GTs) and translocated across the cytoplasmic membrane by a Wzx protein. In a next step, their polymerization occurs in the periplasmic space by the Wzy protein before they are exported to the cell surface. In the ABC transporter-dependent pathway, transport across the inner membrane is performed by an ABC transporter (Figure 1). CPSs produced via this pathway all carry a conserved glycolipid at the reducing terminus composed of phosphatidylglycerol and a poly-2-keto-3-deoxyoctulosonic acid (Kdo) linker.
This represents one of the main differences between the Wzx/Wzy- and the ABC-dependent pathways. Just recently, novel insights into the early steps in CPS biosynthesis were provided by new discoveries concerning the structure and biosynthesis of this conserved lipid terminus. The third pathway is the synthase-dependent pathway, which secretes complete polymer strands across the membranes and the cell wall and is independent of a flippase for translocating repeat units. The polymerization as well as the translocation process is performed by a single synthase protein, which in some cases is a subunit of an envelope-spanning multiprotein complex; examples of synthase-dependent systems are given in Table 1. Most of the enzymatic steps for exopolysaccharide precursor biosynthesis take place inside the cell, while polymerization/secretion is localized in the cell envelope. But there also exist some examples of extracellularly synthesized polysaccharides, such as dextran or levan. Their biosynthesis occurs via GTs, which are secreted and covalently linked to the cell surface. The genes involved in the different biosynthesis pathways encode various types of GTs, polymerizing and branching enzymes, as well as enzymes responsible for the addition of substituents or modification of sugar moieties. Not all steps in the various pathways are currently understood, and sometimes the differences between the pathways become less defined. The genes encoding these enzymes can be found in most of the EPS-producing microbes clustered within the genome or on large plasmids. Even though many gene clusters responsible for EPS biosynthesis have been known for several years, the function and mode of action of most of the genes and proteins is not completely clarified.
An overview of the most relevant commercially available EPSs, including the biosynthesis pathway by which each is produced, is given in Table 1. In alignment with the various EPS biosynthesis pathways, the chemical structure and material properties of the final polymers are quite variable. One engineering strategy is to enhance the carbon flux toward the final polymer; in particular, genes of precursor biosynthesis were overexpressed. This strategy was demonstrated to be successful for some EPS producers, but failed in some cases. Another strategy for engineering EPS biosynthesis aims at tailor-made variants with desirable material properties for medical and industrial applications. Here the aim is to alter the molecular structure and therefore the behavior and material characteristics of the final polymer. For example, these modifications can be based on deleting substituents or monomeric sugars from the side chain. On the other hand, new or additional substituents might be attached to change the ratio of decoration. Most efforts concerned engineering the degree of acetylation and pyruvylation of various polymers in order to control their rheological behavior, using in vivo as well as in vitro approaches or even process parameters applied during the production process. Xanthan consists of a backbone of [\u03b2-(1-4)-linked glucose] and a side chain made of two mannose units and one glucuronic acid. Xanthan is produced from the two precursors glucose-6-phosphate and fructose-6-phosphate, the key intermediates of the central carbohydrate metabolism. At the moment, five different genomes of X. campestris pv. campestris are available, and recently a further genome was published which might further enhance the insights into the conserved xanthan biosynthesis pathway. The genes for precursor biosynthesis are not located within the gum cluster. In detail, the assembly of the pentasaccharide repeating unit starts with the transfer of the first glucose unit to the phosphorylated lipid linker (C55) anchored to the inner membrane via the priming GT GumD.
In a next step, the cytosolic GT GumM attaches the second glucose unit by a β-(1-4)-bond to the first glucose. Catalyzed by GumH, the first mannose unit is linked by an α-(1-3)-glycosidic bond, followed by the cytosolic glycosyltransferase activity of GumK, which adds a β-(1-2)-linked glucuronic acid. Finally, the repeating unit is completed by the action of GumI, attaching the terminal mannose via a β-(1-4)-bond. In general, most of the GTs involved in biosynthesis of EPS following the Wzx/Wzy pathway appear to be monofunctional, and the same applies specifically to xanthan biosynthesis, in which the repeat unit is still linked to the C55 anchor; this anchor might play an important role in the targeted transport of the repeating unit. The general topology of Wzx proteins shows several transmembrane helices (TMHs), 10 in the case of GumJ. A putative adaptation mechanism toward repeat-unit length, as well as acceptance of repeating units with modified side chains, was observed for some Wzy proteins, characterizing them as well suited to accept tailored repeating units as obtained by genetic engineering. In the X. campestris genome, the gum genes are placed on a highly conserved operon; the genes necessary for the other nucleotide sugar precursors are randomly distributed within the genome. Different heteropolysaccharides with closely related chemical structures, but strongly differing material properties, belong to the family of sphingans, as produced by several Sphingomonas strains. The next steps are different in gellan, welan and diutan. Gellan represents the unbranched version of the sphingans, which only shows substituent decorations of glycerol and acetyl at the second of the two glucose units of its backbone. Welan, in contrast, carries an L-rhamnose side chain attached to the first glucose residue of the growing repeating unit. 
The genes involved in the incorporation of the side chains of welan and diutan have not been exactly functionally assigned at present, but the genes urf31.4, urf31, and urf34, which are labeled as "unknown reading frames", are assumed to be involved. Welan carries an α-L-linked rhamnose or mannose, in a ratio of 2:1, at the first glucose of the repeat unit. Whether the addition of the side-chain sugars and non-sugar substituents occurs as the final step of repeat-unit assembly or at the nascent repeat unit remains speculative up to now, but as observed for xanthan, it can be assumed that already decorated repeat-unit intermediates might reduce the activity of the GTs involved in assembly of the repeat unit. Succinoglycan (SG) is an acidic EPS produced by several Rhizobium, Agrobacterium, Alcaligenes, and Pseudomonas strains. Pyruvate is present in stoichiometric ratio, whilst succinate and acetate decoration depends on strain origin and culture conditions. The genes exoA, exoL, exoM, exoO, exoU, and exoW code for the GTs that subsequently elongate the octasaccharide repeating unit by addition of a glucose unit. Further genes were identified to be involved in SG biosynthesis: R. meliloti strains carrying mutated exsA variants (high similarity to an ABC transporter) showed an altered ratio of high-molecular to low-molecular SG, indicating involvement of exsA without further knowledge of the detailed function, and the ExsB gene product was shown to have a negative influence on SG biosynthesis, resulting in lowered product titers. A second gene cluster is localized on the same megaplasmid, but more than 200 kb away. Colanic acid (CA), also known as the M antigen, is likewise described to be an EPS. 
The Wzx protein was identified within the CA gene cluster by its typical transmembrane segments and the large periplasmic loop (Figure 3). WcaD is predicted to span the inner membrane with nine transmembrane segments, and to polymerize the repeat units of CA. Curdlan is a water-insoluble β-(1-3)-glucan (glucose homopolymer) without any substituents, produced by, e.g., Agrobacterium strains. Four genes are involved in curdlan biosynthesis. The curdlan synthase (CrdS) is the key enzyme of curdlan biosynthesis, showing a high similarity to cellulose synthases. The cellulose synthase (celS) genes from Acetobacter xylinum (acsABCD) have been described; the bacterial cytoplasmic membrane cellulose synthase (Bcs) proteins also belong to the GT2 family and are composed of three subunits. A β-glucanase gene and orf2, encoding the "cellulose completing protein", are placed upstream of the cellulose synthase operon. The alternating orientation of glucose monomers in cellulose can be explained for the first time by the steric environment presented by the preceding glucose and the β-(1-4)-linkage, thus reversing the direction of terminal glucose rotation with every additional glucose monomer. Alginate is composed of (1-4)-linked mannuronic and guluronic acid; these comonomers are arranged in blocks of continuous mannuronic acid residues (M-blocks), guluronic acid residues (G-blocks), or as alternating residues, as in A. vinelandii alginates. However, alginates derived from pseudomonads do not contain G-blocks. Because the natural producers of hyaluronic acid (HA), streptococci strains, cause β-hemolysis during cultivation, recombinant HA production was of high priority even in the early stage of commercialization. Polymerization of dextran is realized outside the cell by dextransucrase, the key enzyme of dextran synthesis. 
Dextransucrases belong to the enzyme class of transglucosidases, which are part of the glucosyltransferases, and are classified as glycoside hydrolase family 70. Levansucrases are widely distributed among Gram-positive bacteria, and several plant pathogens carry more than one enzyme. Polyfructans contain two types of glycosidic bonds, α-(2-6) and α-(2-1). Levan, which is produced by levansucrases, mainly consists of the former with occasional α-(2-1) branches. Inulin-type polyfructan is obtained from inulosucrase and shows the opposite arrangement, an α-(2-1) chain with α-(2-6) branches. Reports on σ-factor-dependent and cyclic di-GMP-dependent regulatory mechanisms are available and will be summarized here. Curdlan biosynthesis displayed up-regulation of GGDEF-protein-encoding genes under nitrogen-limited conditions, and EPS production was lowered (57%) by knocking out c-di-GMP synthases. The curdlan gene cluster also includes nifR, whose function is unknown; its knock-out results in a 30% reduction of curdlan production, whereas knock-out experiments of the σ-factor rpoN result in a 30% increased curdlan biosynthesis. A further knock-out resulted in a 70% decrease in curdlan production. Energy storage via polyphosphate influences curdlan biosynthesis and also maintains the intracellular pH. CrdR is essential for curdlan production, operating as a positive transcriptional regulator of the curdlan operon in ATCC31749; the potential binding region of CrdR is located upstream of the crdA start codon. CA biosynthesis, regulation, as well as its own operon seem to be mainly controlled by the Rcs (regulation of capsule synthesis) proteins. One of the main differences of CA regulation is the absence of CA production in wild-type strains grown at 37°C, whereas cultivation at lowered temperatures seems to induce CA biosynthesis. 
Regulation of succinoglycan biosynthesis occurs mainly at the transcriptional and posttranscriptional level. The exoS-encoded membrane sensor, together with the gene product of chvI (a response regulator), constitutes a two-component regulatory system which controls the expression of the exo genes. ExoR acts as a negative regulator, directly acting on the transcription and translation levels of most of the exo genes, with the exception of exoB. Approaches implementing domain shuffling of GTs revealed high potential for enhancing the portfolio of EPS variants. Further options include in vivo as well as in vitro modification of EPS. The latter strategy was successfully applied to alginate and succinoglycan, both having in common the secretion of enzymes for EPS modification. Utilization of these secreted enzymes might allow tight control of the material properties. Epimerases of A. vinelandii have been employed to modify alginate, producing a range of material properties. Alternatively, protein engineering can be applied to EPS-modifying enzymes. Only recently, an innovative bi-enzymatic process was reported for the production of short-chain fructooligosaccharides and oligolevans from sucrose. This system was based on an immobilized levansucrase and an endo-inulase, resulting in a highly efficient synthesis system with a yield of more than 65% and a productivity of 96 g/L/h. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "This is a PLOS Computational Biology Education paper. Research from educational psychology suggests that teaching and learning are subject-specific activities. These tips will be useful to anyone teaching programming at any level and to any audience. A larger list aimed primarily at K–12 audiences can be found at [Guzdial]. The most powerful evidence for this comes from Patitsas et al. 
Beliefs such as this are known to have powerful effects on education outcomes. One-on-one tutoring is perhaps the ideal form of teaching: all of a teacher's attention can be focused on one student, and they can completely customise their teaching for that person and tailor individual feedback and corrections based on a two-way dialogue with them. In realistic settings, however, one teacher must usually teach several, tens, or even hundreds of students at once. How can teachers possibly hope to clear up many learners' different misconceptions in these larger settings in a reasonable time?
The best method developed so far for larger-scale classrooms is called peer instruction. Originally created by Eric Mazur at Harvard, it has been studied extensively and works as follows:
1. The instructor gives learners a brief introduction to the topic.
2. The instructor then gives learners a multiple choice question that probes for misconceptions rather than simple factual recall. (A programming example is given in Code 1 that relates to integer comparison and loops.) The multiple choice question must be well designed. There is no point asking a trivial question that all students will get right or one with meaningless wrong answers that no student will pick. The ideal questions are those for which 40%–60% of students are likely to get the right answer the first time.
3. Learners then vote on the answer to the question individually, thus formalising their initial prediction.
4. Next, learners are given several minutes to discuss those answers with one another in small groups, and they then reconvene and vote again.
5. Then, the instructor can act on the latest answers. If all the learners have the right answer, the instructor can move on. If some of the wrong answers remain popular after group discussion, the instructor can address those specific misconceptions directly or engage in class-wide discussion.
Peer instruction is essentially a way to provide one-to-one mentorship in a scalable way. 
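Instructors preparing such misconception-probing questions can check the intended answer by simply running the code. Below is a quick Python transcription of the Code 1 loop question; the loop bounds (0 through 9) are an assumption inferred from the answer options, not stated in the original:

```python
# Misconception-probing question: "How many times is 'Yes' printed?"
# Python transcription of the Code 1 loop; bounds 0..9 are assumed.
count = 0
for i in range(10):
    if i < 3 or i >= 8:   # holds for i = 0, 1, 2, 8, 9
        print("Yes")
        count += 1
print(count)  # 5
```

Running it confirms the intended answer is 5; the distractors (10, 4, and 3) catch learners who misread the loop bounds or the `or`/`>=` operators.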
Group discussion significantly improves learners' understanding because it forces them to clarify their thinking, which can be enough to call out gaps in reasoning. Repolling the class then lets the instructor know if they can move on or if further explanation is necessary. While it significantly outperforms lecture-based instruction in most situations, it can be problematic if ability levels differ widely (as they often do in introductory programming classes because of varied prior experience). Pair programming (Tip 5) can be used to mitigate this.
Rather than using slides, instructors should create programs in front of their learners. This practice is known as live coding, and it is effective for several reasons:
1. It enables instructors to be more responsive to "what if?" questions. While a slide deck is like a highway, live coding allows instructors to go off-road and follow their learners' interests or answer unanticipated questions.
2. It facilitates unintended knowledge transfer: students learn more than the instructor consciously intends to teach by watching how instructors do things. The extra knowledge may be high level or fairly low level.
3. It slows the instructor down: if the instructor has to type in the program as they go along, they can only go twice as fast as their learners, rather than 10-fold faster as they could with slides, which risks leaving everyone behind.
4. Learners get to see how instructors diagnose and correct mistakes. Novices are going to spend most of their time doing this, but it's left out of most textbooks.
5. Watching instructors make mistakes shows learners that it's alright to make mistakes of their own.
Live coding does have some drawbacks, but with practice, these can be avoided or worked around:
1. Instructors can go too slowly, either because they are not good typists or by spending too much time looking at notes to try to remember what they meant to type.
2. 
Instructors can spend too much time typing in boilerplate code that is needed by the lesson but not directly relevant to it (such as library import statements). Not only does this slow things down, it can distract learners from the intended thrust of a lesson. As Willingham says, "Memory is the residue of thought."
Note that live coding does not always have to start with a blank screen: instructors may give students some starter code that relies solely on concepts they have already mastered and then extend it or modify it with live coding. Instructors who use live coding should ensure that learners have reference material available after lectures, such as a textbook, but should also recognize that students of all ages increasingly turn to question and answer sites such as Stack Overflow for information.
When instructors are using live coding, they usually run the program several times during its development to show what it does. Surprising research from peer instruction in physics education shows that learners who observe a demonstration do not learn better than those who did not see the demonstration.
The key to making demonstrations more effective is to make learners predict the outcome of the demonstration before performing it. Crucially, their prediction should be in some way recorded or public, e.g., by a show of hands, by holding up cue cards marked with A, B, C, or D, or by talking to their neighbour. We speculate that the sting of being publicly wrong leads learners to pay more attention and to reflect on what they are learning; regardless of whether this hypothesis is true, instructors should be careful not to punish or criticise students who predicted wrongly but rather to use those incorrect predictions as a spur to further exploration and explanation.
Pair programming is a software development practice in which 2 programmers share 1 computer. One person does the typing, while the other offers comments and suggestions. The two switch roles several times per hour. 
Pair programming is a good practice in real-life programming and also a good way to teach. Both parties involved in pair programming learn while doing it. The weaker gets individual instruction from the stronger, while the stronger learns by explaining and by being forced to reconsider things that they may not have thought about in a while. When pair programming is used, it is important to put everyone in pairs, not just the learners who may be struggling, so that no one feels singled out. It's also important to have people switch roles within each pair 3 or 4 times per hour so that the stronger personality in each pair does not dominate the session.
Learning to program involves learning the syntax and semantics of a programming language but also involves learning how to construct programs. A good way to guide students through constructing programs is the use of worked examples: step-by-step guides showing how to solve an existing problem.
Instructors usually provide many similar programming examples for learners to practice on. But since learners are novices, they may not see the similarity between examples: finding the highest rainfall from a list of numbers and finding the first surname alphabetically from a list of names may seem like quite different problems to learners, even though more advanced programmers would recognise them as isomorphic.
Margulieux and Morrison et al. have shown that learners perform better when worked examples are broken down into named steps, or subgoals (an example of subgoal labelling is given in Code 2).
A principle that applies across all areas of education is that transference only comes with mastery.
Guzdial et al. found that having learners manipulate images, audio, and video in their early programming assignments increased retention in 2 senses: learners remembered more of the material when retested after a delay and were more likely to stay in computing programs.
A classic question in computing (and mathematics) education is whether problems are better with context or without. Bouvier et al. 
examined this question empirically.
One caution about choosing context is that context can inadvertently exclude some people while drawing others in. For example, many educators use computer games as a motivating example for programming classes, but some learners may associate them with violence and racial or gender stereotypes or simply find them unenjoyable. Whatever examples are chosen, the goal must be to move learners as quickly as possible from "hard and boring" to "easy and exciting".
To help students accomplish a visible and satisfying result quickly, instructors can provide some prewritten software libraries or source code that starts students closer to the end goal. The idea that students must start from scratch and write all the code they need themselves is the relic of a bygone era of home microcomputers (and it was not true even then). Pick the task that you actually want the students to engage in and provide everything else premade.
This principle is tautological, but it is easily forgotten. Novices program differently than experts and need different kinds of support. Novices may need to spend time thinking about an algorithm on paper. They may need to construct examples in guided steps. They may struggle to debug. Debugging usually involves contrasting what is happening to what should be happening, but a novice's grasp on what should be happening is usually fragile.
Novices do not become professionals simply by doing what professionals do at a slower pace. We do not teach reading by taking a classic novel and simply proceeding more slowly. We teach by using shorter books with simpler words and larger print. So in programming, we must take care to use small, self-contained tasks at a level suitable for novices, with tools that suit their needs and without scoffing.
Our final tip for teaching programming is that you don't have to program to do it. Faced with the challenges of learning syntax, semantics, algorithms, and design, examples that seem small to instructors can still easily overwhelm novices. 
Breaking the problem down into smaller single-concept pieces can reduce the cognitive load to something manageable.
For example, a growing number of educators are including Parsons Problems in their pedagogic repertoire. Rather than writing programs from scratch, learners are given the lines of a working program in jumbled order and must rearrange them so that the program runs correctly.
The 10 tips presented here are backed up by scientific research. Like any research involving human participants, studies of computing education must necessarily be hedged with qualifiers. However, we do know a great deal and are learning more each year. Venues like SIGCSE (http://sigcse.org/), ITiCSE, and ICER present a growing number of rigorous, insightful studies with immediate practical application. Future work may overturn or qualify some of our 10 tips, but they form a solid basis for any educational effort to the best of our current knowledge.
Finally, programming should not be used for gatekeeping. If you are teaching someone to program, the last thing you want to do is make them feel like they can't succeed or that any existing skill they have (no matter when or how acquired) is worthless. Make your learners feel that they can be a programmer, and they just might become one.
Code 1. An example multiple choice question probing learners' understanding of loops and integer comparisons
for (int i = 0; i < 10; i++) {
    if (i < 3 || i >= 8) {
        System.out.println("Yes");
    }
}
How many times will the above code print out the word 'Yes'?
a) 10
b) 5
c) 4
d) 3
Code 2. An example of subgoal labelling
Conventional Materials
1. Click on "AccelerometerSensor1"
2. Drag out a when AccelerometerSensor1.AccelerationChanged block
3. Click on "cowbellSound"
4. Drag out call cowbellSound. 
Play and connect it after when AccelerometerSensor1.AccelerationChanged
Subgoal-Labelled Materials
Handle Events from My Blocks
1. Click on "AccelerometerSensor1"
2. Drag out a when AccelerometerSensor1.AccelerationChanged block
Set Output from My Blocks
3. Click on "cowbellSound"
4. Drag out call cowbellSound. Play and connect it after when AccelerometerSensor1.AccelerationChanged"} {"text": "American Muslim women are an understudied population; thus, significant knowledge gaps exist related to their most basic health behaviors and indicators. Considering this, we examined American Muslim women's contraception utilization patterns. Self-reported data were analyzed from women who identified as Muslim, were at least 18 years old, were sexually active, and were current residents of the United States (n = 224), thus meeting the inclusion criteria. Convenience sampling was employed. Multivariate logistic regression models estimated associations between demographics, marital status, ethnicity, nativity, health insurance, religious practice, and contraception use. Identifying as Muslim, in general, was significantly associated with greater odds of using contraception in general and condoms compared to American Muslim women who identify as Sunni. Identifying as Shia was associated with greater odds of using oral contraceptive pills relative to Sunni respondents. 
South Asian ethnicity was associated with higher odds of using oral contraceptive pills compared to those of Middle Eastern or North African ethnicity. Findings suggest American Muslim women's contraception utilization patterns share certain similarities with both American women in general and disadvantaged racial and ethnic minority groups in the United States, implying that the factors that influence American Muslim women's use of contraceptives are possibly countervailing and likely multifaceted. More research is needed to accurately identify associates of contraceptive use in this population. This work serves as a starting point for researchers and practitioners seeking to better understand reproductive health decisions in this understudied population.
There is little scientific data available on the health-related knowledge, behaviors, and attitudes of American Muslim women. This knowledge gap is not surprising, considering American Muslim women are part of a religious minority population that experiences ongoing stigma; some American Muslim women are immigrants, and some are racial and ethnic minorities. To address this knowledge gap, we collected self-reported survey data from American Muslim women across the United States. We then statistically analyzed contraception utilization patterns of our respondents. Our study design and the variables included were informed by previous research on minority populations in the United States and by social theory. We found that being a Shia Muslim or a Muslim in general (no sect declared) was significantly associated with higher odds of using contraception. Respondents identifying as South Asian had higher odds of using oral contraceptive pills compared to Middle Eastern and North African respondents. Our findings suggest these American Muslim women's contraception utilization patterns share similarities with American women in general and with disadvantaged minority groups. 
More research is needed to better understand barriers and facilitators of contraceptive use in this population.
Muslim women in the United States are an understudied population; thus, significant knowledge gaps persist pertaining to their most basic health indicators and behaviors. Not only does contraceptive utilization vary across nations, it fluctuates widely across racial and ethnic groups in the United States. Theorists have examined factors which influence contraceptive use, and their frameworks suggest there are three behavioral factors that significantly influence the use of and type of contraceptive method selected: 1) the autonomous decision-making authority of the woman, 2) her ability to negotiate with her health care provider about contraceptive preferences, and 3) the influence of cultural subjective norms directly and indirectly related to contraception. Nativity may also play a role in the decision to use contraceptives. The healthy migrant effect asserts that foreign-born individuals are healthier and more resilient than their American-born peers; this is due to a selection bias in that only the healthiest individuals from a given population immigrate to the United States.
The Muslim Women's Health project was conducted for three months between September 2015 and December 2015. The goal of this study was to collect self-reported, exploratory data on a range of health behaviors and outcomes from American Muslim women. The primary outcomes for this particular sub-study related to contraceptive use, as defined below. Respondents were recruited through online social networks, email requests, and postings made to online Muslim communities, which directed them to an online portal where they could complete the survey or learn more about this study. Information on the Muslim Women's Health project, the funding institution, principal investigator, and ethical approval was available at this portal. 
Respondents were not compensated for their participation. A total of N = 373 women responded. Our analysis of contraceptive utilization was limited to respondents who reported being sexually active (N = 224). Women who self-identified as Muslim, were at least eighteen years old, and were current residents of the United States were eligible to participate. Every respondent was asked to answer the same questions in the same sequence -- question delivery was not randomized -- and respondents were able to skip any question after the eligibility screening. Average time to complete the survey was about fifteen minutes due to the inclusion of a number of skip patterns allowing respondents to skip full sections that were not applicable. For example, if a respondent answered that she had never used any form of contraception, she was not asked questions about types of contraceptive used. Although online surveys have limitations, such as sampling bias, one major benefit is the ability to engage difficult-to-reach populations, including stigmatized populations, minority enclaves, and groups fearing persecution.
Our primary outcomes were use of contraception (any method), oral contraceptive pills, condoms, and the withdrawal method. Respondents were first asked if they had used contraception, and then the type of contraceptive method was assessed by a 10-item multi-response question inquiring about the type of contraception used, including: withdrawal method, condoms, oral contraceptive pills, intrauterine device, sponge, female condom, vaginal ring, diaphragm, contraceptive patch, and subdermal implant. Respondents who answered "yes" to contraceptive use were coded as 1 while respondents who answered "no" were coded as 0. Similarly, use of oral contraceptive pills, condoms, and the withdrawal method were assessed with binary variables. 
Respondents who answered "yes" were coded as 1 and responses of "no" were coded as 0. To examine the relationship between individual characteristics and contraception use, we included independent measures of personal demographics, socioeconomic status, health insurance, and religion. Demographic characteristics were assessed with four variables: age, marital status, ethnicity, and nativity. Age was operationalized with a continuous variable, while marital status was assessed by a binary variable: married or not married. Ethnicity was operationalized with a series of binary indicators including Middle Eastern or North African, South Asian, and other (referent). Nativity was assessed with a single dummy variable, coded as 1 if the respondent was born in the United States and 0 if the respondent was born elsewhere. Socioeconomic status was assessed with two variables: annual household income, grouped into four categories: under $24,999 (referent), $25,000–74,999, $75,000–99,999, and $100,000 and over; and education, coded into four categories: high school education (referent), some college or vocational school, college graduate, and graduate or professional school. Health insurance was assessed with three categories: no insurance (referent), private insurance, and public insurance. Religion included two variables: religious sect and worship attendance. Religious sect was assessed with three categories: general Islam (no sect declared), Shia, and Sunni. Worship attendance included five categories: less than once a month (referent), once a month, multiple times a month, once a week, and multiple times a week.
Frequencies described sexually active American Muslim women's contraceptive use, demographics, socioeconomic status, religion, and health insurance status. Logistic regressions were used to estimate the relationship between demographics, socioeconomic status, religion, and the use and type of contraception. 
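The associations from these logistic regressions are reported below as odds ratios. With one binary predictor and a binary outcome, the odds ratio reduces to a simple ratio of odds from a 2x2 table; the sketch below illustrates the computation with invented counts (not data from this study):

```python
# Odds ratio for a binary exposure vs. a binary outcome.
# The counts below are hypothetical, chosen only to illustrate the calculation.
def odds_ratio(exposed_yes, exposed_no, referent_yes, referent_no):
    """(a/b) / (c/d): odds of the outcome in the exposed group
    divided by the odds in the referent group."""
    return (exposed_yes / exposed_no) / (referent_yes / referent_no)

# Example: 70 of 90 in group A report the outcome (odds 70/20 = 3.5),
# 50 of 90 in referent group B do (odds 50/40 = 1.25).
print(round(odds_ratio(70, 20, 50, 40), 2))  # 2.8
```

An odds ratio above 1 means the outcome is more likely, in odds terms, in the comparison group than in the referent group; multivariate models adjust these ratios for the other covariates.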
All analyses were conducted using Stata version 13.0. Ethical approval was obtained from the University of Alabama at Birmingham's Institutional Review Board. Informed consent was collected via an IRB-approved online form.
The analytic sample comprised sexually active respondents (N = 224). The majority of respondents reported using contraceptives (79.5%). Approximately 66% used oral contraceptive pills (65.6%), 66.1% used condoms, and 32.1% used the withdrawal method. Average age was 32.2 years and ranged from 18 to 49 years. Two thirds of the respondents were married (66.5%). With respect to ethnicity, 44.2% of the respondents identified as having South Asian ethnicity, 25.9% identified as having Middle Eastern or North African ethnicity, and 29.9% identified as having an ethnicity other than South Asian or Middle Eastern/North African. About 38% of the respondents were born in the United States (38.4%), 53.9% attended graduate or professional school, and 43.3% reported an annual household income of over $100,000. Most respondents carried private insurance (83.1%); only 5.6% were uninsured. About 38% (38.4%) identified as Sunni, 46.4% as Shia, and 15.2% identified as Muslim, in general (neither Shia nor Sunni). About 42% of the sample attended worship once a week or more frequently (42.3%). Results are presented in Tables 1 and 2.
As to religious characteristics, respondents identifying as Shia had 2.8 times greater odds of contraceptive use relative to respondents identifying as Sunni. Respondents identifying as Muslim, in general, had 5.5 times greater odds of using any form of contraception and 3.7 times greater odds of using a condom compared to respondents identifying as Sunni. Worship attendance of multiple times a week was associated with lower odds of use of oral contraceptive pills, compared to respondents who reported worship attendance of less than once a month. 
Respondents who reported worship attendance of once a month had 6.0 times higher odds of using withdrawal than respondents who reported worship attendance of less than once a month. Certain non-statistically significant but directionally relevant trends emerged as well. Nativity did not have a statistically significant effect; however, American-born Muslim women had lower odds of using any contraception, oral contraceptive pills, and condoms. This reversed for the withdrawal method, with American-born women having higher odds of using withdrawal compared to their foreign-born peers. Education, although also not significant, did, for the most part, function as anticipated. Respondents with a graduate level of education had higher odds of using any contraception, oral contraceptive pills, and the withdrawal method than respondents with a high school or lower level of educational attainment. Condom use was an outlier, with all higher levels of education having lower odds of condom use compared to the high-school referent category, possibly suggesting a bias against condom use in this particular population. Compared to the general public, racial and ethnic minorities have repeatedly been shown to have lower rates of contraceptive utilization and frequently use lower-efficacy methods than non-minority women [13, 14]. In contrast to the high rates of contraceptive utilization, we found a limited range of methods used: oral contraceptive pills, condoms, and the withdrawal method. We found minimal utilization of the contraceptive patch, injectable contraception, and implants (statistics not reported). This gap between contraceptive methods employed by minority and non-minority women is growing and may allude to the impact of social and community circumstances that, with respect to American Muslim women, are unique to their culture and beliefs pertaining to reproduction [38]. 
Religion and Islamic sect may influence the acceptability, perceptions, and attitudes toward contraceptive use and may strongly influence the individual decision-making process. About 38% of our sample identified as Sunni Muslims. Shia respondents and respondents who identified as ‘general Islam’ (a category we suspect may include those who are religiously more secular but culturally still identify as Muslim) had higher odds of using contraception relative to Sunni Muslims. Considering Islam as a social modality, precise cultural criteria may define the reproductive practices within these societies and heavily influence the contraceptive choices acceptable to American Muslim women. Limitations should be considered when applying these findings. Although our sample size is larger than other studies on Muslim health, it is still small compared to the population of Muslim women residing in the United States. Selection bias existed: respondents were more educated and wealthier than Americans in general, and we suspect even in comparison to the wider American Muslim population. Yet it is noteworthy that these high-income, highly educated American Muslim women had contraceptive utilization rates that outpaced contraceptive use found in the highest quintile of Muslim women in nations from which respondents migrated, such as Pakistan. Given the complexities of delivering reproductive health care and increasing access to contraceptives within secular societies, it is likely that additional confounding arises when considering the cultural preferences of Muslim Americans."} {"text": "Scientific Reports 6: Article number: 38529; doi: 10.1038/srep38529; published online: 06 December 2016; updated: 17 May 2017. In Figure 1, the latitudes ‘80.5 N’ and ‘81.0 N’ were incorrectly given as ‘81.0 N’ and ‘81.5 N’, respectively. 
In addition, the scale between 0 km and 40 km was incorrectly given as between 0 km and 80 km. The correct Figure 1 appears below."} {"text": "Domains of functional performance and cognition have been shown to independently predict both comfortable and fast gait speed. Walking speed reserve (WSR) reflects an individual’s ability to increase their walking speed when needed to adjust to environmental demands. Increasing speed requires a higher demand of neuromuscular control and cognitive resources. Few studies have investigated WSR; however, poorer cognitive status has been shown to be associated with a smaller WSR. Understanding mechanisms in WSR could help with intervention development. The purpose of this study was to investigate the interplay of functional performance and cognition on WSR. Sixty-seven community-dwelling older adults completed assessments including global cognition, leg strength (30-Second Chair Stand), balance, executive function, simple reaction time, processing speed (Digit Symbol Substitution Test), and gait speed (comfortable and fast) using the GAITRite® system. WSR (fast minus comfortable gait speed) was the dependent variable. Independent variables in the regression model were selected using Pearson correlation p-values <0.05 (leg strength and processing speed). Linear regression revealed that strength and processing speed explained 16.3% of the variance (adjusted r²) of WSR and were both unique predictors of WSR, indicating individuals with greater strength and faster processing speed have a greater capacity to alter their gait speed. Further, the interdependence between cognition and physical performance emphasizes the importance of interprofessional care for older adults."} {"text": "Here we address four issues associated with BBB models: cell source, barrier function, cryopreservation, and matrix stiffness. 
We reproduce a directed differentiation of brain microvascular endothelial cells (dhBMECs) from two fluorescently labeled human induced pluripotent stem cell (hiPSC) lines and demonstrate physiological permeability to Lucifer yellow over six days. Microvessels formed from cryopreserved dhBMECs show expression of BBB markers and maintain physiological barrier function comparable to non-cryopreserved cells. Microvessels displaying physiological barrier function are formed in collagen I hydrogels with stiffness matching that of human brain. The dilation response of microvessels was linear with increasing transmural pressure and was dependent on matrix stiffness. Together these results advance capabilities for three-dimensional (3D) tissue-engineered BBB models, which recapitulate key features of the brain microvasculature in vivo.

Microvessels of the blood-brain barrier (BBB) separate the bloodstream from the central nervous system (CNS). Brain microvascular endothelial cells (BMECs) form tight junctions, which restrict paracellular transport, and express an array of efflux pumps and transporters, which regulate transcellular transport into the brain. The functional BBB prevents entry of the majority of pharmaceutical agents into the brain, particularly those of high molecular weight, hindering treatment strategies for CNS disease [2], while a dysfunctional BBB is implicated in neurological disease [4]. BMECs generated via differentiation of human induced pluripotent stem cells (hiPSCs) are a promising cell source for in vitro studies of the BBB [8]. However, most models of the BBB are constructed in two dimensions (2D) and are therefore unable to recapitulate the shear stress, cell-ECM interactions, and cylindrical geometry characteristic of the in vivo brain microvasculature [9]. Three-dimensional (3D) models of the BBB that incorporate these complexities and achieve physiological barrier function have recently been developed [12]. 
However, multiple challenges limit the application of such models. (1) The spontaneous differentiation of iPSCs remains a time-consuming step in microvessel fabrication. The differentiation generates a sub-population of BMECs which is purified by subculture on collagen IV and fibronectin-coated surfaces [7]. This purification step reduces cell adhesion and has restricted formation of microvessels to stiff collagen I hydrogels [13]. However, the use of cross-linkers to increase stiffness can limit the ability to co-culture cells in the surrounding matrix. As an alternative, the directed differentiation of dhBMECs [5] does not involve a purification step and hence could increase the efficiency of microvessel fabrication and expand the range of matrix materials. Furthermore, extending the repertoire of cell lines to include fluorescently labeled hiPSCs is critical to enabling live-cell imaging in 3D tissue-engineered models. (2) Mimicking physiological barrier function in tissue-engineered models is critical for studying clinically relevant processes such as BBB opening and drug delivery [15]. While recent advances have led to in vitro models with physiological barrier function, the ability to maintain stable barrier function within tissue-engineered models has not been widely reported [10]. This limits the utility of models for studying processes which occur over days to weeks. (3) dhBMECs do not maintain their phenotype following passaging; therefore, freshly differentiated dhBMECs are required for most experiments. Recent studies have reported that cryopreserved dhBMECs obtained by spontaneous differentiation [16] show barrier properties similar to freshly differentiated cells. Cryopreservation has not been validated for 3D BBB models, but would increase fabrication efficiency. (4) As the human brain is highly cellular (70–85% by volume), selection of an appropriate extracellular matrix (ECM) for 3D models is challenging [17]. 
As a proxy for the cellular components, 3D models of the BBB commonly utilize ECM proteins not present in the brain, including collagen I [22] and fibrin [23]. In previous work we demonstrated physiological barrier function in a tissue-engineered microvessel model within a stiff collagen matrix [13]. However, the role of matrix stiffness on the structure and function of 3D blood-brain barrier microvessels has not been further explored. Tissue-engineered models are indispensable for studies of the BBB, as they integrate spatial and temporal information on cell behavior and barrier function, similar to two-photon microscopy approaches.

In this paper we generate tissue-engineered 3D models of the human BBB which address these key challenges: (1) cell source – we reproduce a recently reported directed differentiation of dhBMECs [5] using fluorescently-tagged hiPSCs which enable live-cell imaging; (2) barrier function – we demonstrate stable and physiological permeability of solutes over six days; (3) cryopreservation – fresh and cryopreserved dhBMECs display similar barrier function in 2D and 3D models; and (4) the role of matrix stiffness in BBB structure and function – the solute permeability of microvessels remains low across matrix stiffness ranging from 0.3–3.3 kPa, while the structural stability and dilation response are stiffness-dependent. Together, these advances in cell culture and live-cell imaging establish tissue-engineered microvessels as a versatile platform for studies of BBB function/dysfunction and highlight the importance of matrix stiffness in BBB microvessel behavior.

We used a previously reported protocol [5] for the directed differentiation of two fluorescently-labeled human induced pluripotent stem cell lines (hiPSCs), BC1-green fluorescent protein (BC1-GFP) and C12-red fluorescent protein (C12-RFP), into brain microvascular endothelial cells (dhBMECs). 
The directed differentiation uses a multistep protocol over the course of eight days. On day eight of the differentiation, dhBMECs were singularized and seeded onto Matrigel-coated Transwells to monitor TEER. BC1-GFP dhBMECs maintained TEER values above 2,000 Ω cm² for seven days, and C12-RFP dhBMECs maintained TEER values above 1,500 Ω cm² for two days. Retention of physiological barrier function is critical for studies requiring long incubation periods, such as studies of neurodegenerative protein aggregate accumulation.

To determine whether cells properly express characteristic BBB endothelial markers, we stained confluent monolayers for tight junction markers, nutrient transporters, and efflux transporters. The relative contributions of endothelial markers and tight junction proteins to barrier function are not well understood; however, claudin-5 is thought to be critical to achieving physiologically high TEER values [31].

First, we assessed microvessels formed in 7 mg mL⁻¹ collagen gels [13]. This approach relies on the isolation of brain endothelial cells from a heterogeneous cell population through selective adhesion, achieved by sub-culturing terminally differentiated cells onto collagen IV and fibronectin-coated dishes prior to seeding into microvessels. To assess the barrier function of dhBMECs from the directed differentiation, we formed functional microvessels using a templating method reported previously, with perfusion at physiological shear stresses of 1–4 dyne cm⁻² [37]. To further evaluate barrier function, microvessels were simultaneously perfused with two fluorescent probes of different size: Lucifer yellow (LY) and 10 kDa dextran [38]. 
BBB microvessels expressed claudin-5 localized at cell-cell junctions. Lucifer yellow permeability values were similar to those obtained for BC1-derived dhBMECs and Madin-Darby canine kidney (MDCK) cells in 2D Transwells, suggesting that 3D culture is not required to achieve physiological barrier function [39]. The permeability of 10 kDa dextran in both BC1-GFP and C12-RFP dhBMEC microvessels was below our detection limit of approximately 1 × 10⁻⁷ cm s⁻¹, which arises from photobleaching of hydrogel autofluorescence resulting from the cross-linker [11]. Studies in zebrafish have shown that 10 kDa dextran does not enter the brain [40]; however, there remain discrepancies across in vivo studies concerning quantification of BBB permeability [9].

In 2D, TEER values for BC1-GFP dhBMECs remained above 2,000 Ω cm² for seven days. To confirm that 3D microvessels retained barrier function over time, we repeated permeability measurements of BC1-GFP microvessels on day six. Lucifer yellow permeability in BC1-GFP microvessels was 5.02 ± 0.94 × 10⁻⁷ cm s⁻¹ on day six, not statistically different from day two. In addition, the permeability of 10 kDa dextran remained below the detection limit, indicating preservation of barrier function. These results suggest that barrier function can be maintained in 3D models without the need for co-culture with supporting cell types or additional factors. Recently, we found that the incorporation of pericytes into BBB microvessels does not reduce solute permeability [41].

A working stock of cryopreserved dhBMECs enables scale-up and higher throughput, and can reduce batch-to-batch variability between experiments. BC1-GFP dhBMECs, which maintained higher TEER values than C12-RFP dhBMECs, were cryopreserved in liquid nitrogen for 11–120 days. 
In tissue engineering, cell cryopreservation can eliminate the need to perform unique differentiations before each experiment. Cryopreserved dhBMECs have been shown to maintain BBB phenotype when thawed in media containing the Rho-associated protein kinase inhibitor Y27632 (ROCK inhibitor) [16]. The phenotype and barrier function of cryopreserved BC1-GFP dhBMECs were assessed and compared to non-cryopreserved controls using 2D and 3D functional assays.

The morphology of thawed dhBMECs was indistinguishable from newly differentiated cells upon seeding onto glass surfaces. Cryopreserved dhBMECs achieved peak TEER values three days after seeding into Transwells, while fresh cells typically peaked two days after seeding. Confluent monolayers of cryopreserved dhBMECs exhibited TEER values above 1,000 Ω cm² for more than ten days. Cryopreserved dhBMECs also retained expression and localization of critical tight junction, efflux, and transporter proteins comparable to non-cryopreserved controls.

Cryopreserved dhBMECs were seeded into 7 mg mL⁻¹ collagen I gels cross-linked with genipin at the same density as microvessels formed with non-cryopreserved cells (20 × 10⁶ cells mL⁻¹). Microvessels with freshly differentiated dhBMECs reached confluence one to two days after seeding, whereas the time to confluence was two to three days with cryopreserved cells, possibly due to reduced cell proliferation. However, the resulting microvessel morphology was indistinguishable from that of microvessels formed with non-cryopreserved dhBMECs. Barrier function was assessed from permeability measurements following microvessel formation. The permeability of Lucifer yellow in cryopreserved dhBMEC microvessels was 3.22 ± 0.77 × 10⁻⁷ cm s⁻¹, and the permeability of 10 kDa dextran was below the detection limit. 
The ECM of the brain is comprised of hyaluronic acid, lecticans, proteoglycan link proteins, and tenascins. Thus, to accurately model the brain ECM, either large volumes of cells must be incorporated or a relatively inert matrix material must be selected to provide sufficient structural support [9]. As a result, 3D models of the BBB are commonly generated using ECM proteins not found in the brain [23]. In previous work we used genipin cross-linked 7 mg mL⁻¹ collagen I gels to form microvessels [13].

Previous measurements of brain tissue have reported elastic moduli between 0.5 and 4.5 kPa [49]. The origins of this broad range are thought to be due to differences in brain region [47], testing methods [50], and preparation techniques [51]. Therefore, to enable direct comparison of our models and in vivo conditions, we obtained elastic moduli for mouse brain and hydrogels from stress–strain curves in compression [53]. We tested four matrix conditions: (1) 7 mg mL⁻¹ collagen I cross-linked with genipin, (2) 7 mg mL⁻¹ collagen I, (3) 5 mg mL⁻¹ collagen I, and (4) 3 mg mL⁻¹ collagen I. The Young's modulus of the bulk gels increased with increasing collagen density: 0.8 ± 0.2 kPa for 7 mg mL⁻¹ gels and 0.3 ± 0.2 kPa for 5 mg mL⁻¹ gels. 3 mg mL⁻¹ collagen I hydrogels were not sufficiently stiff for unconfined bulk mechanical testing. Genipin cross-linking increased the stiffness of 7 mg mL⁻¹ gels by about 4-fold (p < 0.001), similar to previous reports [54]. For comparison, we measured the stiffness of resected mouse brain using the same method. The Young's modulus for mouse brain was 2.1 ± 0.4 kPa, between the values for 7 mg mL⁻¹ and 7 mg mL⁻¹ genipin cross-linked collagen I. 
Based on the elastic moduli, 7 mg mL⁻¹ and 7 mg mL⁻¹ genipin cross-linked collagen I are reasonable choices for emulating the stiffness of the brain.

Changes in brain stiffness, as measured using magnetic resonance elastography (MRE), are possible biomarkers for disease and aging. Decreases in stiffness are observed during Alzheimer's disease, but remain between the values for 7 mg mL⁻¹ cross-linked and non-crosslinked collagen I (2.40–2.51 kPa) [48]. The effect of changes in brain stiffness on blood-brain barrier structure and function is not known; however, changes in the basement membrane are thought to alter brain function [56].

To assess the adhesion and barrier function of dhBMECs over a range of ECM stiffness, microvessels were formed from non-cryopreserved BC1-GFP dhBMECs. In previous work, we used 7 mg mL⁻¹ collagen I gels cross-linked with genipin to form microvessels [13]. However, the use of genipin poses multiple challenges for tissue engineering, including the difficulty of culturing cells within highly cross-linked matrices and the cytotoxicity of unreacted cross-linker [57]. Reliable formation of BBB microvessels without the use of chemical cross-linkers would expand model complexity. Microvessels formed in the softest (3 mg mL⁻¹) collagen I gels did not remain perfusable due to collapse of the endothelium.

To assess the dilation response, for the stiffest (cross-linked 7 mg mL⁻¹) and softest (5 mg mL⁻¹) hydrogels in which stable microvessels were formed, the transmural pressure was increased by increasing the height difference between the fluid inlet and outlet reservoirs. We observed the response to 2-fold and 3-fold increases in transmural pressure above the baseline head of 5 cm H₂O (approximately 0.5 kPa). 
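The hydrostatic pressure of a gravity-driven head follows from P = ρgh, which is also why "5 cm H₂O" corresponds to roughly 0.5 kPa. A minimal sketch (standard water density and gravitational acceleration, not values stated in the paper):

```python
# Hydrostatic pressure of a fluid column: P = rho * g * h.
RHO_WATER = 1000.0  # kg/m^3, density of water (media approximated as water)
G = 9.81            # m/s^2, gravitational acceleration

def head_to_pressure_pa(height_m: float) -> float:
    """Pressure (Pa) generated by a water column of the given height (m)."""
    return RHO_WATER * G * height_m

p5 = head_to_pressure_pa(0.05)    # baseline 5 cm head -> ~490 Pa (~0.5 kPa)
p15 = head_to_pressure_pa(0.15)   # 15 cm head -> a 3-fold pressure increase
```

Because P is linear in h, raising the inlet from 5 cm to 10 cm or 15 cm produces exactly the 2-fold and 3-fold transmural pressure increases described above.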
For microvessels in cross-linked 7 mg mL⁻¹ gels, a 3-fold increase in transmural pressure resulted in a ~6% increase in diameter; the dilation response was dependent on matrix stiffness.

Neurovascular coupling, the process by which cerebral blood flow is matched to neuronal metabolic demand, is controlled by dilation of arterioles directly upstream from capillaries [59]. Relaxation and constriction of smooth muscle cells (SMCs) mediate these changes in the brain. Physiological arteriolar dilation ranges from about 10–30%; however, the diameter of these arterioles is typically less than 20 μm [61]. Autoregulation, the process by which tissue perfusion is maintained constant, is also controlled by changes in arterial diameter [63]. Autoregulation is maintained through multiple mechanisms, including myogenic responses in which vasoconstriction (mediated by SMCs) occurs in response to increases in transmural pressure (and vice versa) [64]. In arterioles of rat skeletal muscle, a 3-fold decrease in perfusion pressure results in ~20% dilation (an increase in diameter from ~85 μm to ~100 μm). However, when the SMC response is abolished (by treatment with Ca²⁺-free solution), a 3-fold decrease in perfusion pressure results in ~25% constriction (a decrease in diameter from ~135 μm to ~100 μm) [65]. Additionally, in the absence of SMCs, changes in diameter were observed to be roughly linear in the same fold-change regime as studied here [65]. These studies suggest that our 5 mg mL⁻¹ microvessels display dilation behaviors similar to in vivo arterioles devoid of SMCs. 
Incorporation of SMCs into tissue-engineered microvessels could recapitulate myogenic autoregulation of flow in response to transmural pressure changes.

We address four key challenges associated with fabrication of tissue-engineered BBB models: cell source, barrier function, cryopreservation, and matrix stiffness. Functional microvessels formed from dhBMECs obtained from the directed differentiation of two iPSC lines showed expression of BBB markers and maintained physiological barrier function. Similarly, functional microvessels with physiological barrier function were formed from cryopreserved dhBMECs, eliminating the need to perform a differentiation prior to model fabrication. Microvessels with physiological barrier function were formed in gels with stiffness ranging from 0.3–3.3 kPa, spanning the stiffness of mouse brain (~2 kPa). The dilation response of microvessels was linear with transmural pressure and was dependent on matrix stiffness, enabling simulation of the cardiac cycle.

hiPSCs were maintained on Matrigel-coated tissue-culture-treated six-well culture plates (Corning) in mTeSR1 medium (Stem Cell Technologies). Two iPSC lines were used in this study: the BC1-GFP [66] line was derived from a 46-year-old male and the C12-RFP [67] line from a newborn male. Since the research does not involve humans or animals, and the iPS lines are de-identified, no IRB approval was required. hiPSCs were maintained between passages P50–P70 and were passaged using Versene (ThermoFisher). Prior to differentiation, hiPSCs were singularized using warm Accutase (Invitrogen) and plated onto Matrigel-coated six-well culture plates in mTeSR1 supplemented with 10 μM Rho-associated protein kinase inhibitor Y27632 at a density between 10,000 and 50,000 cells cm⁻². 
Directed differentiation of hiPSCs was adapted from a previously reported protocol [5]. Briefly, hiPSC colonies were expanded for three days in mTeSR1, then treated with CHIR99021 (Selleckchem) at concentrations between 1 and 6 μM, 10 μM all-trans retinoic acid (Sigma), and 1× B27. The medium was not changed for 48 hours.

hiPSC cultures were imaged daily using phase contrast and fluorescence microscopy with FITC or Texas Red filters. 10.2 mm × 7.65 mm images were acquired on an inverted microscope (Nikon Eclipse TiE) using a 4× objective (Nikon) with epifluorescence illumination provided by an X-Cite 120LEDBoost (Excelitas Technologies).

On day eight of the differentiation, cells were singularized using warm Accutase for one hour on a shaker at 100 rpm. Corning® Transwell® polyester membrane cell culture inserts (0.4 µm pore size) were seeded at 1 × 10⁶ cells cm⁻², while eight-chambered borosilicate cover glass wells (Lab Tek) were seeded at 0.8 × 10⁶ cells cm⁻². All surfaces were coated overnight with 100 μg mL⁻¹ growth factor-reduced Matrigel (Corning) in DMEM/F12. Cells were maintained in two-dimensional culture assays using hECSR1 for 24 hours, then switched to hECSR2 (hESFM supplemented with 1× B27). Transwell cultures were maintained for 14 days without further media changes. TEER was recorded daily as previously described [6].

For immunostaining, cells were fixed using ice-cold methanol for 15 minutes, then blocked for 30 minutes in PBS with 10% normal goat serum and 0.3% Triton X-100 (Millipore Sigma). Primary antibodies were diluted in blocking buffer and incubated on cells overnight at 4 °C.

Microvessels were formed around a template rod, which was removed to leave a channel in the collagen gel. Collagen solutions were neutralized on ice and then gelled for 30 minutes at 37 °C. 
Microvessels were formed within a rectangular channel in a polydimethylsiloxane (PDMS) housing with dimensions 1 cm (length) × 1.75 mm (width) × 1 mm (height), formed within an aluminum mold. The PDMS housing was attached to a glass slide following plasma treatment and treated with trimethoxysilane (Sigma) prior to gelling to reduce bubble formation between the PDMS and collagen gel. Neutralized rat tail type I collagen gel (Corning), ranging from 3 to 7 mg mL⁻¹, was cast around the template rod. Prior to the removal of the template rod, the addition of 2% agarose to both sides of the collagen gel prevented delamination. To increase stiffness, 7 mg mL⁻¹ gels were crosslinked using 20 mM genipin (Wako Biosciences) for two hours [54]; devices were then perfused with PBS for at least eight hours to remove excess genipin. Before seeding with dhBMECs, channels were coated overnight with 100 μg mL⁻¹ growth factor-reduced Matrigel in DMEM/F12. Derived hBMECs were seeded at 20 × 10⁶ cells mL⁻¹ into collagen channels using hECSR1 media supplemented with 10 μM ROCK inhibitor Y27632 and allowed to adhere under static conditions for 30 minutes. ROCK inhibition was required to maintain cell adhesion, as previously reported [13]. After 24 hours, ROCK inhibitor was removed and microvessels were continually perfused with hECSR2. After cell seeding, microvessels were maintained under constant perfusion using a gravity-driven flow system (inlet pressure of 5 cm H₂O); the average shear stress on the endothelium was approximately 1 dyn cm⁻², as calculated from Poiseuille's law.

Permeability was measured using two solutes of different molecular weight: 200 μM Lucifer yellow and 2 μM Alexa Fluor-647-conjugated 10 kDa dextran (Thermo Fisher). Phase contrast and fluorescence images were acquired every two minutes (NIS Elements) before and after solute perfusion. 
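The Poiseuille estimate of wall shear stress in a cylindrical channel is τ = 4μQ/(πR³). A minimal sketch with illustrative values (the viscosity, vessel radius, and flow rate below are assumptions for the example, not parameters reported in the paper):

```python
import math

def wall_shear_stress(q_m3_s: float, radius_m: float, mu_pa_s: float) -> float:
    """Wall shear stress (Pa) for laminar flow in a cylindrical channel,
    from Poiseuille's law: tau = 4 * mu * Q / (pi * R^3)."""
    return 4.0 * mu_pa_s * q_m3_s / (math.pi * radius_m ** 3)

# Assumed illustrative values: media viscosity ~0.7 mPa*s,
# vessel radius 75 um, flow rate ~2.8 uL/min.
mu = 0.7e-3           # Pa*s
radius = 75e-6        # m
q = 2.8e-9 / 60       # m^3/s (2.8 uL/min)

tau_pa = wall_shear_stress(q, radius, mu)
tau_dyn_cm2 = tau_pa * 10.0   # 1 Pa = 10 dyn/cm^2; here ~1 dyn/cm^2
```

With these assumed inputs the estimate lands near the ~1 dyn cm⁻² average shear stress quoted in the text; because τ scales with 1/R³, modest changes in vessel diameter shift the result substantially.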
At every time point, the protocol obtained six images: (1) a phase contrast image of the top of the microvessel (located and maintained by autofocus), (2–5) phase contrast and fluorescence images of the microvessel midplane, and (6) a phase contrast image of the bottom of the microvessel. Filter cubes (Chroma 39008 and Chroma 41008) were used to capture Lucifer yellow (20 ms exposure) and Alexa Fluor-647-conjugated dextran (200 ms exposure), respectively. Images were collected as ten adjacent frames using a 10× objective, resulting in a total image area of 8.18 mm × 0.67 mm.

ImageJ (NIH) was used to measure fluorescence intensity profiles over 36 time points for each of the ten adjacent frames. Permeability was calculated as P = (d/4)(1/ΔI)(dI/dt)₀, where d is the diameter of the vessel, ΔI is the initial increase in total fluorescence intensity, and (dI/dt)₀ is the rate of increase in total fluorescence intensity [69]. The rate of intensity increase (dI/dt) was measured over sixty minutes for Lucifer yellow and thirty minutes for 10 kDa dextran. Images were segmented into 10 adjacent regions of interest (ROIs), and the value of the ROI with minimum permeability is reported, to exclude artifacts from interstitial dye entering the ECM from the inlet and outlet ports. The permeability detection limit was 1 × 10⁻⁷ cm s⁻¹, as previously reported [11]. 
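The permeability formula P = (d/4)(1/ΔI)(dI/dt)₀ can be sketched as follows, with (dI/dt)₀ taken as the slope of a linear fit to the intensity-time trace. The numbers below are synthetic illustrative data, not measurements from the paper:

```python
import numpy as np

def solute_permeability(t_s, intensity, diameter_cm):
    """Apparent permeability (cm/s) from total fluorescence intensity I(t):
    P = (d/4) * (1/dI) * (dI/dt)0, where dI is the step increase in
    intensity as dye fills the lumen and (dI/dt)0 is the subsequent
    rate of increase as dye crosses into the surrounding matrix."""
    t = np.asarray(t_s, dtype=float)
    i = np.asarray(intensity, dtype=float)
    delta_i = i[0]                 # luminal filling step (baseline-subtracted)
    didt = np.polyfit(t, i, 1)[0]  # slope of intensity vs time
    return (diameter_cm / 4.0) * didt / delta_i

# Synthetic example: 150 um vessel, luminal step of 1000 a.u.,
# then a slow linear rise sampled every 2 min for 1 h.
t = np.arange(0, 3600, 120)        # s
i = 1000.0 + 2.0e-3 * t            # a.u.
p = solute_permeability(t, i, diameter_cm=0.015)   # ~7.5e-9 cm/s
```

Applying this per-ROI and reporting the minimum, as described above, suppresses artifacts from dye entering the matrix at the ports.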
Phase contrast images were acquired at each pressure to record average microvessel diameter.

Human iPSC-derived BMECs were cryopreserved and stored in liquid nitrogen on day eight as previously described [16]. dhBMECs were singularized in Accutase and then resuspended in 60% hECSR1, 30% fetal bovine serum (Sigma), and 10% dimethyl sulfoxide (Sigma). Cryovials were frozen in an isopropanol-filled freezing container at −80 °C for 24 hours, then moved into liquid nitrogen for long-term storage. Cells were rapidly thawed in a water bath (~37 °C), centrifuged and resuspended in fresh media, and then seeded onto Transwells or glass, or seeded into microvessels as previously described. 10 μM ROCK inhibitor Y27632 was supplemented for the first 24 hours after thawing.

Hydrogels: collagen hydrogels were fabricated by casting collagen I solutions between two glass plates separated by 2 mm. Collagen I solutions were gelled for 30 minutes at 37 °C. Some hydrogels were crosslinked with genipin and/or equilibrated with PBS as previously summarized. Hydrogel samples 10 mm in diameter and 2 mm in thickness were punched out and compressed at a rate of 0.25 mm s⁻¹ using a tensile/compression tester (MTS Criterion). From the obtained stress–strain curve, the Young's modulus (Pa) was calculated as the best-fit slope of the initial linear region (~5–12% strain). Compression testing could not be accurately conducted on 3 mg mL⁻¹ collagen I hydrogels, as they were physically unstable when unconstrained and collapsed by ~40% in height. Hydrogels of the other conditions were stable, and their heights varied by less than 10%. 
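Extracting the Young's modulus as the best-fit slope of the initial linear region of a stress-strain curve can be sketched as below. The curve here is synthetic (a perfectly linear ~0.8 kPa gel), purely for illustration:

```python
import numpy as np

def youngs_modulus(strain, stress_pa, fit_range=(0.05, 0.12)):
    """Young's modulus (Pa) as the best-fit slope of the stress-strain
    curve over an initial linear region (5-12% strain by default,
    matching the compression protocol described above)."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress_pa, dtype=float)
    mask = (strain >= fit_range[0]) & (strain <= fit_range[1])
    slope, _intercept = np.polyfit(strain[mask], stress[mask], 1)
    return slope

# Synthetic stress-strain curve for a ~0.8 kPa gel (illustrative only):
strain = np.linspace(0.0, 0.15, 151)
stress = 800.0 * strain            # Pa; linear, so the fit recovers 800 Pa
E = youngs_modulus(strain, stress)
```

Restricting the fit to a stated strain window keeps the estimate comparable across samples, since collagen gels and brain tissue both stiffen nonlinearly at larger strains.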
Mouse brain: 9–12 week old male and female BALB/c mice were euthanized, and their brains were harvested within one hour of the compression tests. A coronal section of the whole brain was taken by cutting at the anterior and posterior coronal planes proximal to the pituitary gland (~3 mm thickness). Once the sample was sectioned, it was immediately staged on the compression tester for measurement. The compression plate was lowered to achieve initial contact with the flat tissue surface with minimum compressive force applied. The initial thickness of the tissue was then measured as the distance between the top and bottom (stage) plates for strain calculation. The sample was allowed to stabilize for one minute before compression testing. The tissue sample was then compressed at a rate of 2 mm min−1, and the Young's modulus was calculated from the obtained stress-strain curve (~5–10% strain). Statistical testing was performed using Prism ver. 8 (GraphPad). TEER, permeability and Young's moduli measurements are reported as mean ± standard error of the mean (SEM). A Student's unpaired t-test or an analysis of variance (ANOVA) was used for comparisons between two or more than two groups, respectively. Reported p-values were multiplicity adjusted using a Tukey test. Analysis of covariance (ANCOVA) was used to compare linear regression slopes. Differences were considered statistically significant for p < 0.05, with the following thresholds: *p < 0.05, **p < 0.01, ***p < 0.001. Supplementary Information"} {"text": "A sharp, intense band at 1028 cm−1 was observed in the Raman spectra of the nanoparticles. The shift of this band in comparison to AB itself agrees well with the theoretical model. AB in the nanoparticles was identified by means of electrochemistry and NMR spectroscopy.
The sizes of the Au crystallites measured by XRPD were about 9 and 17 nm for the nanoparticles obtained in pH 7.4 and 3.6, respectively. The size of the particles as measured by TEM was 24 and 30 nm for the nanoparticles obtained in pH 7.4 and pH 3.6, respectively. The DLS measurements revealed stable, negatively charged nanoparticles. The aim of our work was the synthesis and physicochemical characterization of a unique conjugate consisting of gold nanoparticles (AuNPs) and a pharmacologically active anticancer substance, abiraterone (AB). The direct coupling of AB with gold constitutes an essential feature of the unique AuNPs–AB conjugate that creates a promising platform for applications in nanomedicine. In this work, we present a multidisciplinary, basic study of the obtained AuNPs–AB conjugate. Abiraterone (AB) is administered as an acetate ester prodrug which is rapidly converted in vivo to abiraterone. Challenges related to modern drug form technology consist, among other things, in using techniques and technologies that allow us to deliver the drug directly to the drug target, extend the time of the API's activity in the drug target, and influence the API's distribution. In the light of the above, nanotechnology gains primary importance [3,4,5]. Theoretical modeling based on the density functional theory (DFT) predicted that, in the (1+) conjugate, the geometry where the NH terminal is located nearby the Au5 cluster is energetically more favourable than the geometry with the OH terminal in front of Au5.
However, the interaction energy shows an opposite trend, i.e., the binding energy of Au5 in the [Au5–(NH)abiraterone](1+) conjugate is weaker than in the [Au5–(OH)abiraterone](1+) conjugate. The interaction energies, also called the binding energies (BE), fall into an interval of about 20 ± 5 kcal/mol (84 ± 21 kJ/mol), which is close to the values obtained in the course of similar investigations of the Au8 and Au20 clusters with alanine and tryptophan. A diffractogram of the lyophilized mixtures obtained in pH 3.6 and 7.4 is characterized by sharp peaks in the range of 5–35° and broad as well as sharp peaks in the range of 35–85°. Upper window: diffractograms of the mixtures, Au and NaCl peaks are indicated. Below: a simulated diffractogram of abiraterone form I. Insert: a magnification of the low intensity peaks range. Diffractograms of the two AuNPs–AB conjugates obtained in pH 3.6 and pH 7.4 are compared in the figure. In the theoretical spectra of the Au13–(N)AB and Au13–(OH)AB model nanoconjugates in the range of 1800–1000 cm−1, one can see that substantial differences are visible in the ranges of 1660–1580 cm−1 and 1100–1000 cm−1. In the theoretical spectra of the AB molecule and both model nanoparticles, the band at 1740 cm−1 comes from the stretching vibration of the C=C (B ring) bond. In the spectra of the AB molecule and the Au13–(OH)AB model nanoconjugate, the bands at 1667, 1640, and 1612 cm−1 originate mainly from the C=C (D ring) bond stretching vibrations and the pyridine ring stretching vibrations. However, in each band the participation of the individual vibrations of the C=C (D ring) bond and the pyridine ring is different, which has been summarized in the table. In the spectrum of the Au13–(N)AB model nanoconjugate, three characteristic bands are observed in the range 1660–1580 cm−1: 1664, 1641, and 1617 cm−1. The first two bands originate mainly from the C=C (D ring) bond vibrations and the pyridine ring stretching vibrations. These bands are much more intense than their counterparts in the AB and Au13–(OH)AB spectra. Just as in the AB and Au13–(OH)AB spectra, the third band at 1617 cm−1 has small intensity and originates from the vibration of the entire AB molecule. In the range of 1100–1000 cm−1, two isolated bands at about 1080 and 1040 cm−1 are observed in the spectra of AB and the Au13–(N)AB model nanoconjugate. The first one comes from the whole AB molecule vibrations in both spectra. The second band comes mainly from the pyridine ring breathing vibrations and the steroid moiety vibrations in the spectrum of the Au13–(N)AB model nanoconjugate. This band is much more intense than its counterpart in the AB and Au13–(OH)AB spectra. In the spectrum of the Au13–(OH)AB model nanoconjugate, three bands at 1039, 1052, and 1076 cm−1 are visible. These bands originate from the whole AB molecule vibrations (1076 cm−1), steroid moiety vibrations (1052 cm−1), and mainly the pyridine ring with the steroid moiety vibrations (1039 cm−1). The description of the spectra has been summarized in the table. A significant increase in band intensities in the Au13–(N)AB model nanoconjugate spectrum can indicate that the interactions between the Au13 cluster and the AB molecule occur via the N atom. According to our studies, abiraterone appears in three polymorphic forms: I, II, and III. The Raman spectrum of the nanoparticle obtained in pH 7.4 is characterized by broad bands and one characteristic intense narrow band at 1028 cm−1. The Raman spectrum of the nanoparticle obtained in pH 3.6 is characterized by broad bands at about 1658, 1590, 1565, 1534, 1509, 1463, 1415, 1355, 1295, 1260, 1242, 1223, 1186, 1160, 1096, and 1057 cm−1 and one characteristic intense narrow band at 1028 cm−1. This comparison shows that more broadened bands are present in the spectrum of the nanoparticle obtained in pH 7.4.
This corresponds well to the smaller Au crystallites estimated by XRPD. Raman spectra from the nanoparticles and the Au13–(N)AB model nanoconjugate, as well as from the three abiraterone forms, in the ranges of 1660–1580 cm−1 and 1100–1000 cm−1, have been collected. One can see that the bands in the nanoparticle spectra in the range of 1660–1580 cm−1 are shifted towards lower wavenumbers in comparison to the Raman spectra of forms II and III. However, the band at 1028 cm−1 visible in the nanoparticle spectra is shifted towards higher wavenumbers in comparison to its counterpart in the spectra of forms I, II, and III. The shift towards lower wavenumbers in the range of 1660–1580 cm−1 and the presence of an intense band at 1028 cm−1 in both nanoparticle spectra can suggest that the interactions between the AB molecule and the AuNPs occur via the N atom. These studies also prove that the pH changes influence the Au crystallite size in the nanoparticles and the band broadening in the AuNPs–AB spectra. The amount of AB on the AuNPs surface was measured using thermogravimetric analysis (TGA). Initially, the 1H NMR spectrum of the abiraterone acetate nanoparticles was obtained. Signals appeared at approx. 4.43, 5.38, 6.10, 7.33, 7.78, 8.41, and 8.57 ppm. They fit very well with the signals recorded for abiraterone acetate: 4.40 (CH-3), 5.37 (CH-6), 6.10 (CH-16), 7.34 (CH-24), 7.75 (CH-25), 8.40 (CH-23), and 8.56 (CH-21) ppm. This proves that particles of abiraterone acetate are present in the tested nanoparticle sample. The 1H NMR spectra of gold nanoparticles combined with abiraterone were recorded for various synthesis methods. The spectra were obtained with signals, among others, at approx. 5.29, 6.10, 7.33, 7.75, 8.40, and 8.55 ppm. This matches very well the data obtained for abiraterone measured under similar conditions, where the following signals were obtained: 5.29 (CH-6), 6.10 (CH-16), 7.33 (CH-24), 7.75 (CH-25), 8.55 (CH-23), and 8.56 (CH-21) ppm.
This clearly proves that abiraterone is present in the tested samples of the nanoparticles. It is worth noting that, in the observed range of chemical shifts, there is a difference in the chemical shift of the proton of the CH-6 group between the abiraterone nanoparticles (5.29 ppm) and the abiraterone acetate nanoparticles (5.38 ppm). Electrochemical experiments were carried out using differential pulse voltammetry (DPV) to confirm the formation of gold nanoparticles modified with abiraterone. Electrochemical tests were carried out in a DMSO solution containing 0.1 M TBAHFP. Oxidation of cholesterol in the cyclohexanol structure was manifested by the presence of a peak at the potential of 1.9 V in the acetonitrile solution using a GCE electrode. We have reported a novel method for the successful conjugation of abiraterone and gold nanoparticles, creating a unique nanoconjugate with expected anticancer properties and biomedical applications. The study comprised the following:
-synthesis optimization and purification of the AuNPs–AB conjugates that would afford desirable-sized AuNPs,
-theoretical modeling of the interactions of the Au clusters with AB,
-development of the analytical methods for the AB identification in the AuNPs–AB conjugates and the AB quantitation in the supernatants,
-development of analytical methods to study the nanoparticle formation mechanism,
-development of the quantitative methods to estimate the covering of the gold nanoparticles by the AB substance,
-development of an analytical methodology for the physicochemical characterization of the obtained nanoparticles at pH 3.6 and pH 7.4.
Roller-like abiraterone or abiraterone acetate molecules were predicted to form conjugates with small Au clusters at the N-side that would be more stable than those formed at the OH/C=O-sides by about 10 kcal/mol (42 kJ/mol). The binding energy in the conjugates of small Au clusters with abiraterone is predicted to be about 20 kcal/mol (84 kJ/mol) when abiraterone faces Aun at the N-side. The binding energy is smaller, of about 10 kcal/mol (42 kJ/mol), when abiraterone or abiraterone acetate faces Aun at the OH/C=O side. AB in the gold nanoparticle was identified by electrochemistry and NMR spectroscopy. The obtained nanoparticles were characterized by a negative zeta potential from −11 to −13 mV, which can suggest their high stability in water and likely low toxicity for normal cells. The XRPD technique proved the presence of the AuNPs in the lyophilized mixtures as well as in the nanoparticles obtained in pH 3.6 and 7.4. The average size of the Au crystallites estimated from the Scherrer formula was about 9 and 17 nm for the nanoparticles obtained in pH 7.4 and 3.6, respectively. A comparison of the AuNPs–AB precipitates obtained for two different pH values suggests a higher stability of the pH 7.4 conjugate as well as a smaller nanoparticle size as measured by TEM, but the measurements performed by the DLS technique proved that the diameter of the nanoparticle obtained in pH 7.4 is higher than that of the nanoparticle obtained in pH 3.6.
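The Scherrer estimate behind the quoted crystallite sizes can be illustrated with a short sketch; the Cu K-alpha wavelength default and the shape factor K = 0.9 are common assumptions, not values stated in the text:

```python
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    fwhm_deg      : peak full width at half maximum (degrees 2-theta)
    two_theta_deg : peak position (degrees 2-theta)
    wavelength_nm : X-ray wavelength (Cu K-alpha by default, assumed)
    K             : dimensionless shape factor (~0.9, assumed)
    """
    beta = math.radians(fwhm_deg)            # peak breadth in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))
```

For the Au(111) reflection near 38.2° 2θ, a FWHM of roughly 1° yields a size in the ~9 nm range reported for the pH 7.4 sample.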
In pH 7.4, the nitrogen atom participates in the hydrogen bonds and aggregation may take place; this, however, requires further research. The AuNPs–AB conjugates are considered to be candidates for further study and possible application in anticancer therapy."} {"text": "Molecular surveillance of newly diagnosed HIV infections is important for tracking trends in circulating HIV variants, including those with transmitted drug resistance (TDR), to sustain ART efficacy. Dried serum spots (DSS) are received together with the statutory notification of a new diagnosis. 'Recent infections' (<155 days), classified by a 'recent infection test algorithm', are genotyped in HIV protease (PR), reverse transcriptase (RT) and integrase (INT) to determine the HIV-1 subtype, to calculate the prevalence and trends of TDR, to predict baseline susceptibility and to identify potential transmission clusters for resistant variants.
Between January 2013 and December 2016, 1,885 recent infections were analysed regarding the PR/RT genomic region, with 43.5% of these also being subjected to the analysis of INT. The proportion of HIV-1 non-B viruses increased from 21.6% to 36.0%, particularly the subtypes A (5.0% to 8.3%) and C (3.2% to 7.7%; all ptrends < 0.01). These trends in therapy-naïve individuals impact current first-line regimens and require awareness and vigilant surveillance. HIV infection is still a major public health concern in EU/EEA countries, with approximately 30,000 new cases reported each year. To predict TDR in cART-naïve patients, the World Health Organization (WHO) defined relevant resistance mutations selected by protease inhibitors (PIs), nucleoside reverse transcriptase inhibitors (NRTIs) and non-NRTIs (NNRTIs). These are summarized in the WHO surveillance drug resistance mutations (WHO SDRM) list from 2009. The surveillance of transmitted resistance to integrase strand transfer inhibitors (INSTIs) is of particular interest. Raltegravir was approved in Europe in 2007, followed by elvitegravir (2012) and dolutegravir (2014). The general tolerance and the low likelihood of resistance selection due to a high genetic barrier have led to the increasing use of INSTIs. In 2013, a molecular surveillance program was initiated in Germany that is based on the examination of viral sequences from the recently infected among newly diagnosed HIV cases. According to the "Protection Against Infection Act" (IfSG) of 2001, diagnostic laboratories in Germany are obligated to report newly diagnosed HIV infections anonymously to the German public health institute (RKI); according to §13 of IfSG, the RKI is authorized to receive blood residuals from diagnostics for surveillance purposes. For surveillance programs, a network of approximately eighty diagnostic laboratories (https://www.rki.de/DE/Content/InfAZ/H/HIVAIDS/Studien/InzSurv_HIV/beteiligte_Labore.html) was established that sends, along with the report form, residual serum from newly diagnosed HIV cases spotted onto a filter card (Whatman 903 filter paper) as dried serum spots (DSS), including results from the BED IgG Capture EIA and clinical data from the HIV-notification database. HIV-1 genotypes from the protease (PR) and reverse transcriptase (RT) genomic region (2013–2016) and the integrase (INT) genomic region (2014–2016) were generated according to previously published protocols. The HIV-1 subtype was determined from the pol-sequence. In cases where a subtype or circulating recombinant form (CRF) could not be assigned, a maximum-likelihood tree with bootstrap (IQ-TREE 1.5.5) was calculated using the HIV-1 subtype reference panel from the Los Alamos HIV sequence database. Only subtype classifications based on bootstrap values of >70% in the tree topology were taken into account; otherwise sequences were classified as unique recombinant forms (URF). The prevalence of TDR was calculated from the number of persons infected with viral variants carrying at least one mutation included in the WHO SDRM list. Transmitted INSTI mutations were assessed using the Stanford surveillance worksheet (https://hivdb.stanford.edu/pages/SDRM.worksheet.INI.html; updated in June 2016). Phenotypic resistance was predicted using the Stanford HIV Drug Resistance Database 8.4 algorithm (Stanford HIVdb). Drug resistance mutations present in a proportion of ≥ 0.5% in the dataset were defined as 'frequent mutations' and were used for trend analysis. In addition, sequences carrying one of the 'frequent mutations' were subjected to phylogenetic analysis to allow the spread of resistance mutations within transmission networks to be mapped.
For this purpose, sequences were aligned with 33 reference sequences from the Los Alamos database and trimmed to 1026 base pairs. Maximum-likelihood (ML) phylogenies were reconstructed in IQ-TREE using the integrated model selection algorithm to select the optimal tree model and the ultrafast bootstrap approximation with 10,000 replicates. A grouping script (https://github.com/kavehyousef/code) was used to identify transmission clusters. Statistical analyses were performed using STATA (version 14.2). Continuous variables were analyzed using the median and interquartile range (IQR). The chi2 test was used for bivariate comparisons, and logistic regression was used to assess odds ratios (OR) and 95% confidence intervals (CI). Changes in prevalence over time were analyzed using the chi2 test for trend of odds. Between 2013 and 2016, a total of 10,643 DSS of newly diagnosed HIV cases were submitted to the RKI along with the anonymous report, and 3,380 (31.8%) were classified as recent infections. From these, we were able to obtain 1,885 (55.8%) HIV-1 genotypes of the PR and RT genomic regions. The median plasma viral load was 140,500 copies/ml, the median CD4 cell count was 454 cells/μl (IQR 304–612) and the median age of the newly diagnosed individuals was 36.3 years. Baseline patient characteristics are shown in the table. HIV-1 genotypes of the INT genomic region were available for 820 of the 1,885 DSS (43.5%) from newly diagnosed cases in 2014, 2015 and 2016. The proportional distribution in subgroups was congruent between the total study population and the subset (n = 820). The proportion of non-B subtypes increased over the study period (ptrend = 0.001). This trend is primarily based on the two above-mentioned subgroups and can also be observed in non-Germans, and primarily affected MSM and persons of German origin. Among the non-B subtypes, the most prevalent were subtype A (8.2%), CRF02_AG (5.7%) and subtype C (5.0%). Persons infected with subtype C (nC = 40) and CRF02_AG (nAG = 48) largely originated from Sub-Saharan Africa (nC = 26 and nAG = 21). The increase was significant for subtypes A (ptrend = 0.008) and C (ptrend = 0.002).
The prevalence of TDR was stable at 11.0% over the study period (ptrend = 0.68). Resistance to NRTIs showed no identifiable trend (ptrend = 0.47), while NNRTI resistance increased within the study period (ptrend = 0.06); this increase was significant between 2014 and 2016 (ptrend = 0.02). PI resistance did not show any tendency to increase or decrease between 2013 and 2016 (ptrend = 0.99). Significantly higher proportions of TDR were found in MSM compared to HET (p = 0.03) and in Germans compared to non-Germans (p < 0.01). However, the prevalence was also high in persons of American and African origin, although the total number of cases with these origins was low. The highest level of TDR (14.9%) was identified in subtype A. The thymidine analogue mutations (TAMs) M41L, K219Q and T215 revertants, as well as the non-TAM M184V, were among the most frequently transmitted NRTI resistance mutations (≥ 0.5%). The 'total K103N' increased over the study period (ptrend = 0.07), with this increase being significant between 2014 and 2016 (ptrend = 0.03). The steep slope of the 'total K103N' between 2014 and 2016 was quite congruent in the subgroups of MSM, persons of German origin and subtype B infections. Transmitted INSTI mutations were present in only one case, namely the major primary mutation T66I, resulting in high-level resistance to elvitegravir and low-level resistance to raltegravir. According to predictions from the Stanford HIVdb, phenotypic INSTI resistance was identified in 0.7% (6/820) of cases. Sequences that had one or more of the most frequent NRTI, NNRTI and PI resistance mutations or the polymorphic E138A (listed in the table) were subjected to phylogenetic cluster analysis; the proportion of cluster-linked sequences did not differ significantly (p = 0.67). Among the 'not cluster-linked' sequences carrying the K103N mutation, the diversity of HIV-1 subtypes as well as the origins of the infected persons were much higher. The proportion of individuals not linked to K103N clusters was generally higher in 2015/2016 than in 2013/2014 (p = 0.01), although the sampling was more dense in 2015/2016.
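The ptrend values above come from a chi-square test for trend in proportions; the study used STATA's trend-of-odds test, so the from-scratch Cochran-Armitage sketch below (with equally spaced year scores and made-up counts in the test) is an analogous illustration, not the exact routine:

```python
import math

def chi2_trend(cases, totals, scores=None):
    """Cochran-Armitage test for a linear trend in proportions.

    cases  : events per ordered group (e.g. resistant samples per year)
    totals : samples per group
    scores : ordinal group scores (defaults to 0, 1, 2, ...)
    Returns (z, two_sided_p).
    """
    k = len(cases)
    scores = list(range(k)) if scores is None else scores
    N, R = sum(totals), sum(cases)
    p_bar = R / N
    # Score-weighted deviation of observed events from their expectation
    T = sum(s * (r - n * p_bar) for s, r, n in zip(scores, cases, totals))
    var = p_bar * (1 - p_bar) * (
        sum(n * s * s for s, n in zip(scores, totals))
        - sum(n * s for s, n in zip(scores, totals)) ** 2 / N
    )
    z = T / math.sqrt(var)
    return z, math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal p-value
```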
Especially the group of 'male persons', and here particularly those for whom the transmission route was not reported, was significantly higher in 2015/2016 (p = 0.008) (Fig 5). In this study we analysed 1,885 recent HIV-1 infections newly diagnosed between 2013 and 2016. The proportions of HIV-1 subtypes and primary resistances in the total population and within subgroups were stratified according to the year of diagnosis to allow changes over time to be analysed. In Germany, subtype B continues to be the predominant subtype (68.9%), although the proportion of non-B subtypes significantly increased between 2013 and 2016, particularly subtype A and CRF02_AG, which peaked in 2015 (11.7% and 7.1%), and subtype C, which doubled from 3.6% in 2013/2014. Among the frequently transmitted mutations were the PI resistance mutations M46I and V82L. Nevertheless, the proportions of transmitted NRTI and PI resistance remained stable during the study period. NRTI resistance resulted mainly from the high prevalence of persisting TAMs. Many studies have shown by phylogenetic analysis that onward transmission among drug-naïve patients is the major reason for the maintenance of stable NRTI resistance levels. Therefore, the predicted resistance to the currently recommended first-line regimens was low. In 2017, tenofovir-containing pre-exposure prophylaxis (PrEP) was introduced in Germany. HIV infection despite PrEP, due to subclinical drug levels or infection with resistant variants, might occur and fuel the emergence of TDR. So far, resistance mutations selected by tenofovir and emtricitabine have been rare (below 1%). However, MSM transmission networks have been identified as a major source for the spread of HIV resistance, by others and by us. For the first time, we analysed to which extent INSTI resistance was transmitted following the introduction of the first-generation INSTI raltegravir in Germany in 2007.
So far, the proportion of transmitted INSTI mutations is low, with only one case identified. Phenotypic resistance to raltegravir and elvitegravir according to Stanford HIVdb is predicted to be 0.7%, and resistance to the second-generation INSTI dolutegravir was not detected at all. Primary drug resistance to INSTIs was also described to be rare in studies of European patients diagnosed in 2013 or 2015. One limitation of the present study is the relatively short study period. Changes over time should therefore be interpreted carefully. Furthermore, integrase genotyping only started in 2014, resulting in lower PCR success rates due to RNA degradation on DSS. A combined prediction including all drug classes was only possible for 43.0% of cases; therefore, resistance to integrase was analysed separately in this dataset. Another limitation is that the SDRM list has not been updated since 2009, and resistance mutations to the newest drugs or drug classes (e.g. integrase inhibitors) had to be analysed with different mutation lists. Some of the SDRMs are only relevant to older drugs rarely used in today's first-line regimens in EU/EEA countries. Despite effective cART, TDR is present at an overall stable proportion in Germany (11%). In particular, viruses carrying resistance mutations with low fitness cost are spread continuously by onward transmission, and German MSM are generally driving the spread in Germany, both within transmission networks and outside of networks, as shown for the K103N mutation. Intensified HIV screening in these groups followed by early treatment with cART, including pre-treatment resistance testing as recommended in the current guidelines, should help reduce the spread of resistant viruses in the ART-naïve population.
Due to the increasing use of INSTIs in first-line regimens and of tenofovir/emtricitabine for PrEP, it is important to monitor TDR for public health efforts and in order to maintain the effectiveness of cART. S1 Text (DOCX). S2 Text (DOCX)."} {"text": "To evaluate the effect of intramedullary nail and locking plate in the treatment of proximal humerus fracture (PHF). China National Knowledge Infrastructure (CNKI), Chinese Scientific Journals Database (VIP), Wan-fang database, Chinese Biomedicine Database (CBM), PubMed, EMBASE, Web of Science, and Cochrane Library were searched until July 2018. In all eligible references, the control group used locking plates to treat PHF, while the experimental group used intramedullary nails. Two reviewers independently retrieved and extracted the data. Review Manager 5.3 was used for statistical analysis. Thirty-eight retrospective studies involving 2699 patients were included. Meta-analysis results show that intramedullary nails in the treatment of proximal humeral fractures are superior to locking plates in terms of intraoperative blood loss, operative time, fracture healing time, postoperative complications, and postoperative infection. However, there were no significant differences in Constant score, neck angle, VAS, external rotation, anteflexion, intorsion pronation, abduction, NEER score, osteonecrosis, additional surgery, impingement syndrome, delayed union, screw penetration, or screw back-out. The intramedullary nail is superior to the locking plate in reducing the total complications, intraoperative blood loss, operative time, postoperative fracture healing time and postoperative humeral head necrosis rate of PHF.
Due to the limitations in this meta-analysis, more large-scale, multicenter, and rigorously designed RCTs should be conducted to confirm our findings. PROSPERO CRD42019120508. PHF is the third most common limb fracture, accounting for 4 to 5% of total body fractures. Due to their good biomechanical properties and exact clinical efficacy, the locking plate and the intramedullary nail have become the main treatments. A literature search was carried out in eight databases from their inception to July 2018: CNKI, VIP, Wan-fang database, CBM, PubMed, EMBASE, Web of Science, and Cochrane Library. Search terms including "Proximal humerus fracture," "Intramedullary nail," "Locking plate," and "Internal fixation" were used individually or in combination. The publishing language was restricted to Chinese and English. The inclusion criteria are as follows: (i) internal fixation of displaced proximal humeral fractures; (ii) included both locking plates and intramedullary nails; (iii) a minimum of 6 months of follow-up; (iv) a minimum of 21 patients for a given study; and (v) clinical outcomes during follow-up included at least one of the following: intraoperative blood loss, operative time, fracture healing time, postoperative complications and postoperative infection, Constant score, neck angle, VAS, external rotation, anteflexion, intorsion pronation, abduction, NEER score, osteonecrosis, additional surgery, impingement syndrome, delayed union, screw penetration, and screw back-out. The exclusion criteria are as follows: (i) non-proximal humeral fracture; (ii) treatment with neither locking plate nor intramedullary nail; (iii) non-clinical research, basic research, and review articles, as well as case reports and theoretical discussions; (iv) improper statistical methods or defective data; (v) genetic research; (vi) grey literature; and (vii) letters to the editor. Two investigators (Xiaoqing Shi and
Hao Liu) independently extracted and screened the data according to the inclusion criteria. We extracted the general details, such as patients' characteristics, interventions, and outcomes, and a cross-check was done. Any disagreements were resolved through discussion or verification by a third investigator (Runlin Xing). The quality of the non-randomized controlled trials was assessed by the MINORS instrument, and trials with MINORS scores > 12 were included in the study. Revman 5.3 software was employed to pool the effect sizes. Mean difference (MD) or standardized mean difference (SMD) and 95% confidence intervals (CIs) were used for continuous variables. For dichotomous data, we used the OR (odds ratio)/RR (risk ratio) and 95% CIs as the efficacy analysis statistics. Heterogeneity was evaluated statistically using the χ2 test and the inconsistency index statistic (I2). If substantial heterogeneity existed (I2 > 50% or P < 0.05), a random-effects model was applied; otherwise, we adopted a fixed-effects model. Sensitivity analyses were also performed. A total of 506 articles were initially obtained through the search strategy. After excluding 298 duplications, the remaining articles were screened based on their titles and abstracts, and 148 records were removed. By reading the full texts, 14 articles that did not meet the inclusion criteria were excluded. Finally, 38 trials were included. There were a total of 2699 patients (1238 in the locking plate group and 1461 in the intramedullary nail group) enrolled in our studies. More details of the included studies are presented in the table. For intraoperative blood loss, I2 = 96%, P < 0.00001, and the heterogeneity was high. Therefore, the random-effects model was used to calculate the combined effect.
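The heterogeneity screen just described (Cochran's Q and I², which decides between fixed- and random-effects pooling) can be sketched as follows; the function name and the toy inputs are illustrative assumptions rather than the RevMan implementation:

```python
import numpy as np

def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 inconsistency index across k studies.

    effects   : per-study effect sizes (e.g. SMDs)
    variances : their sampling variances
    """
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * y) / np.sum(w)             # fixed-effect estimate
    Q = np.sum(w * (y - pooled) ** 2)              # Cochran's Q
    df = len(y) - 1
    I2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0
    return Q, I2
```

An I² above 50% would trigger the random-effects model, matching the criterion stated above.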
The results showed that the intramedullary nail in the treatment of PHF is statistically significant, as its intraoperative blood loss is less than that of the locking plate [SMD = −2.67, 95% CI], as reported in twenty-two studies. Operative time was reported in 26 studies, with a total of 878 cases in the experimental group and 1055 cases in the control group: I2 = 92%, P < 0.00001, and the heterogeneity was high. Therefore, the random-effects model was used to calculate the combined effect. The results showed that intramedullary nailing for the treatment of PHF was statistically significant in reducing operative time compared with locking plates [SMD = −1.59, 95% CI]. Fracture healing time was reported in twenty studies: I2 = 92%, P < 0.00001, and the heterogeneity was high. Therefore, the random-effects model was used to calculate the combined effect. The results showed that intramedullary nailing for the treatment of PHF was statistically significant in reducing fracture healing time compared with locking plates [SMD = −0.68, 95% CI]. Complications were reported in 29 studies: I2 = 0%, P = 0.52, and there was no heterogeneity. Thus, the fixed-effects model was used to calculate the combined effect. The results showed that intramedullary nailing for the treatment of PHF was better than the locking plate in the incidence of complications [OR = 0.75, 95% CI]. We also analyzed other outcome indicators; detailed information is shown in the table. For the methodological quality and risk of bias of RCTs, we used the Cochrane Handbook for Systematic Reviews of Interventions 5.2.0 for evaluation. The results showed that no studies used double blinding. On the other hand, for non-RCT studies, we used MINORS to assess the methodological quality of the included studies. The results showed that the score interval was 13–18 points.
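When heterogeneity is high, a standard random-effects pooling is the DerSimonian-Laird estimator; the sketch below is our own illustration of that estimator (not RevMan's code), with arbitrary example inputs:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird) with a 95% CI.

    effects   : per-study effect sizes (e.g. SMDs)
    variances : their sampling variances
    """
    k = len(effects)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    Q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (Q - (k - 1)) / c)             # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

The between-study variance tau² widens the confidence interval relative to a fixed-effects pool, which is why heterogeneous outcomes above use this model.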
Specifically, 13 points for five studies, 14 points for nine studies, 15 points for ten studies, 16 points for three studies, 17 points for seven studies, and 18 points for four studies. In general, this meta-analysis has qualitative limitations: most of the included studies had a high risk of bias and low methodological quality.

To further confirm the stability of the above outcomes, we replaced the fixed effect model with the random effect model and excluded the most and least weighted trials. Compared with the previous results, the outcomes exhibited no obvious differences, indicating that our study was robust and reliable. We mainly assessed the publication bias of overall complications (Fig.).

In recent years, the intramedullary nail and the locking plate have been the main choices of internal fixation for PHF. Edwards et al. provided a biomechanical analysis of the two implants. The results of this meta-analysis show that (1) in the treatment of PHF, the intramedullary nail is superior to the locking plate with respect to intraoperative blood loss, operation time, fracture healing time, postoperative complications, and postoperative infection; (2) there were no significant differences between the intramedullary nail and the locking plate in Constant score, neck-shaft angle, VAS, external rotation, anteflexion, internal rotation, pronation, abduction, Neer score, osteonecrosis, additional surgery, impingement syndrome, delayed union, screw penetration, or screw back-out; (3) for two-part fractures, the screw back-out rate of the locking plate was better than that of the intramedullary nail, whereas for four-part fractures, the shoulder anteflexion angle of the intramedullary nail was better than that of the locking plate.

In terms of the follow-up Constant score, the intramedullary nail was not superior to the locking plate, and the results were not statistically different. Some studies concluded that this may be related to surgical techniques.
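The sensitivity analysis described above, which re-pools the effect after excluding the most- and least-weighted trials, can be sketched as follows (the per-study SMDs and variances are hypothetical stand-ins, not data from the included trials):

```python
def inverse_variance_pool(effects, variances):
    """Fixed-effect inverse-variance pooled estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

def drop_extreme_weights(effects, variances):
    """Exclude the most- and least-weighted trials, then re-pool."""
    weights = [1.0 / v for v in variances]
    hi, lo = weights.index(max(weights)), weights.index(min(weights))
    keep = [i for i in range(len(effects)) if i not in (hi, lo)]
    return inverse_variance_pool([effects[i] for i in keep],
                                 [variances[i] for i in keep])

# Hypothetical per-study SMDs and variances
effects = [-2.1, -2.8, -3.0, -1.5]
variances = [0.04, 0.06, 0.05, 0.08]
full = inverse_variance_pool(effects, variances)
trimmed = drop_extreme_weights(effects, variances)
```

If `full` and `trimmed` remain close (and on the same side of zero), the pooled conclusion is considered stable, which is the sense in which the text calls the study robust.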
Previous studies have suggested that there is no difference in the time of fracture healing between the two internal fixations. Jiang pointed out that this bias may be related to insufficient research in this area. In terms of the incidence of screw penetration, Konrad et al. reported relevant findings. There was a statistically significant difference in the overall risk of postoperative complications between the two groups, which differs from previous evidence-based studies.

The limitations of this study are as follows: (1) this study cannot examine the surgical instruments used by the various subjects or evaluate the skill level and proficiency of the surgeons, which may cause clinical heterogeneity and affect the reliability of the meta-analytic strength and conclusions; (2) most included studies were retrospective analyses, with a risk of selection bias that may affect the authenticity and reliability of the results; (3) clinical randomized controlled studies are lacking, so the level of evidence is not high; (4) the surgical procedures were not completely standardized across surgeons, introducing some clinical heterogeneity; and (5) across trials, the manufacturers of the intramedullary nails and locking plates differ, and their quality is not the same.

Clinical outcomes have the potential to change over time because the rates of the postoperative indexes can change with time. Despite the shortcomings of this study, we tried to avoid the risk of bias during the analysis and performed subgroup analyses. Sensitivity analysis showed that the study has good stability and clinical reference value.

Additionally, few reliable randomized controlled trials were included in this article, which reduces the level of evidence and makes it difficult to control bias or confounding factors effectively. The evaluation efficiency may be reduced, and there may be publication bias, selection bias, implementation bias, and measurement bias.
The inverted funnel plot shows that the included studies fall largely within the 95% CI. The article therefore has reference value, but its results and applications should be treated with caution. If more clinical randomized controlled trials become available in this area, a more reliable conclusion can be drawn.

For the treatment of proximal humeral fractures, intramedullary nailing is superior to locking plate treatment with respect to intraoperative blood loss, operation time, fracture healing time, overall complications, and postoperative infection. In the treatment of proximal humeral fractures, the intramedullary nail and the locking plate are both mature techniques. Given proficiency in the technique, treating proximal humeral fractures with an intramedullary nail can bring effective results, such as reducing surgical trauma, protecting the blood supply of the fracture ends, promoting fracture healing, and reducing the occurrence of postoperative complications, especially postoperative infection. The author believes that intramedullary nail treatment is the better choice under strict control of the surgical indications. However, because the quality of the literature included in this study varies, there is a risk of bias. This conclusion needs to be confirmed by more well-designed, high-quality, large-sample, multi-center, randomized, double-blind controlled clinical trials.
In addition, further study and discussion of the related complications will help to obtain more rigorous and objective clinical evidence."} {"text": "On January 26, 2021, this report was posted online as an MMWR Early Release. On December 7, 2020, local public health officials in Florida county A were notified of a person with an antigen-positive SARS-CoV-2 test. An estimated 1,700 in-person school days were lost as a consequence of isolation and quarantine of patients and contacts during this COVID-19 outbreak. The American Academy of Pediatrics interim guidance for return to sports specifically recommends against mask wearing during wrestling because of the choking hazard that face coverings could pose. At the time of the tournament, the 14-day cumulative COVID-19 incidence in county A, home to seven of the 10 participating high school teams, was 363 per 100,000 persons; 7.7% of tests for SARS-CoV-2 had positive results.
Nelfinavir is the active ingredient of Viracept, an FDA-approved inhibitor of human immunodeficiency virus (HIV) aspartyl protease that is used to treat AIDS. It is not effective against single-round HAdV infections. Here, we show that nelfinavir inhibits lytic cell-free transmission of HAdV, indicated by the suppression of comet-shaped infection foci in cell culture. Comet-shaped foci occur upon convection-based transmission of cell-free viral particles from an infected cell to neighboring uninfected cells. HAdV lacking ADP was insensitive to nelfinavir but gave rise to comet-shaped foci, indicating that ADP enhances but is not required for cell lysis. This was supported by the notion that HAdV-B14 and -B14p1 lacking ADP were highly sensitive to nelfinavir, although HAdV-A31, -B3, -B7, -B11, -B16, -B21, -D8, -D30, and -D37 were less sensitive. Conspicuously, nelfinavir uncovered slow-growing round HAdV-C2 foci, independent of neutralizing antibodies in the medium, indicative of nonlytic cell-to-cell transmission. Our study demonstrates the repurposing potential of nelfinavir with postexposure efficacy against different HAdVs and describes an alternative nonlytic cell-to-cell transmission mode of HAdV.

Adenovirus (AdV) was first described in 1953 by Rowe and coworkers as a cytopathological agent isolated from human adenoids. Nelfinavir is an effective inhibitor of HAdV lytic egress in vitro. The procedure leading to the identification of nelfinavir is described in another study using an imaging-based, high-content screen of the Prestwick Chemical Library (PCL) comprising 1,280 mostly clinical or preclinical compounds. Nelfinavir had a 50% toxic concentration (TC50) of 25.7 μM, as determined by cell impedance measurements using xCELLigence. The therapeutic index (TI50) of nelfinavir was 27.1, based on the TC50 (10.01 μM) and the effective concentration yielding 50% inhibition (EC50) of fluorescent-plaque formation (EC50 = 0.37 μM).
The data indicate that nelfinavir is an effective, nontoxic inhibitor of HAdV-C2 multicycle infection. A recent paper describes a full-cycle, image-based screen of 1,278 out of 1,280 PCL compounds against HAdV-C2-dE3B-GFP, where clopamide and amphotericin B were excluded due to precipitation during acoustic dispension into the screening plates.

We first tested if nelfinavir affected viral protein production. HAdV-C2-dE3B-GFP-infected A549 cells were analyzed for green fluorescent protein (GFP) under the control of the immediate early cytomegalovirus (CMV) promoter and the late protein hexon expressed after viral DNA replication at 46 h postinfection (hpi). The results indicate that nelfinavir had no effect on GFP or hexon expression at the tested concentrations, while the formation of fluorescent plaques was completely inhibited. We next probed the processing of the pVI/VI and pVII/VII proteins using previously characterized antibodies. There was no evidence for an increase of pVI or pVII in HAdV-C5 from nelfinavir-treated cells, in contrast to temperature-sensitive 1 (ts1) particles, which lack the L3/p23 protease due to the point mutation P137L in p23. Deletion of ADP robustly reduced the number of dead cells and strongly reduced the number of infected cells at up to 100 PFU/well. Remarkably, the ADP-deleted virus had strongly reduced TI50 values compared to the parental virus, for example, 2.1 versus 66.8 with A549 cells, 8.9 versus 61.0 with HeLa cells, and 4.6 versus 55.2 with human bronchial epithelial cells (HBECs). Finally, we performed immunofluorescence experiments with HAdV-C2-dE3B-GFP-infected A549 cells at 44 hpi.

Viruses are transmitted between cells by three major mechanisms: cell free through the extracellular medium, directly from cell to cell, or in an organism by means of infected motile cells or fluid flow in blood or lymphoid vessels.
This can result in far-reaching or mostly local virus dissemination. To determine which transmission modes occurred in regular HAdV-C2-dE3B-GFP infections, we analyzed A549 cells infected with <1 PFU per well in 160 wells up to 8 dpi. Thirty-three wells developed a single plaque. Twenty-four of them were fast-emerging comet-shaped plaques, of which the donor cell (indicated by the pink arrows) disappeared between 2 and 3 dpi. The TI50 values of nelfinavir were heterogeneous for different HAdV types, as determined in A549 cells. The HAdV-C types showed high TI50s (>10), ranging from 12.22 (HAdV-C1) to 71.09 (HAdV-C2). Members of HAdV species A and D and most of the HAdV-B types showed intermediate (2 to 10) to low (<2) nelfinavir susceptibility, notably HAdV-B7 and -B11 with TI50s of <1. MAdV-1 and -3 also showed low susceptibility; MAdV-3 was tested in mouse rectum carcinoma CMT93 cells. Noticeably, a high susceptibility of HAdV-C was consistently observed in human lung epithelial carcinoma (A549) cells, human epithelial cervix carcinoma (HeLa) cells, immortalized primary normal human corneal epithelial (HCE) cells, as well as normal HBECs. The corresponding TI50 values were in the same range as those for herpes simplex virus 1 (HSV-1), for which nelfinavir was reported to be an egress inhibitor. To balance statistical significance and automated plaque segmentation, we first determined the optimal amount of inoculum and duration of infection for each virus and cell line. Viruses with high TI50 values formed exclusively comet-shaped plaques. Viruses with low TI50 values, such as A31, B11, or D37, had a high fraction of round plaques even when infected with >1 PFU/well. This demonstrates that the slowly growing round infection foci observed by fluorescence microscopy gave similarly shaped lesions due to cytotoxicity, akin to the lytic comet-shaped foci.
We conclude that HAdV types employ lytic cell-free and nonlytic cell-to-cell transmission modes and give rise to different plaque phenotypes. We finally examined the plaque morphologies in nonperturbed infections by immunofluorescence staining of the late proteins VI and hexon as well as by microscopic analyses of crystal violet-stained dishes for classical plaques.

A phenotypic screen of the PCL identified nelfinavir as a potent postexposure inhibitor of HAdV-C2-dE3B-GFP plaque formation in cell culture. Here, we demonstrate that nelfinavir inhibits the egress of HAdV particles without perturbing other viral replication steps, including entry, assembly, and maturation. Morphometric analyses of fluorescent plaques indicated that HAdV-C propagates by two distinct mechanisms, lytic and nonlytic. Lytic transmission led to comet-shaped convection-driven plaques, whereas nonlytic transmission gave rise to symmetric round plaques. Nelfinavir specifically suppressed the lytic spread of HAdVs, most prominently the HAdV-C types and -B14, but not other HAdVs such as A31 or D37. Incidentally, HAdV-C and -B14 replicate to considerable levels in Syrian hamsters, whereas other HAdV types do not.

The molecular mechanisms underlying cell lysis in AdV infection are not well understood, largely due to the lack of specific assays and inhibitors. Single-cell analyses combined with machine learning have started to identify specific features of lytic cells, such as increased intranuclear pressure compared to nonlytic cells.

The virus was generated by the exchange of the viral E3b genome region with a reporter cassette harboring enhanced green fluorescent protein (GFP) under the control of a constitutively active cytomegalovirus (CMV) promoter.
It was grown in A549 cells and purified by double-CsCl-gradient centrifugation. HAdV types A31, B7, B11, B14a, B16, B34, C1, C6, D8, D30, and D37 were kindly provided by the late Thomas Adrian and were verified by DNA restriction analysis.

A549 cells, HeLa cells, and HBECs were obtained from the American Type Culture Collection (ATCC). HCE cells were obtained from Karl Matter. CMT93 (mouse rectum carcinoma) cells were obtained from Susan Compton, Yale School of Medicine. A549, HeLa, HCE, and CMT-93 cell cultures were maintained in high-glucose Dulbecco's modified Eagle's medium (DMEM) containing 7.5% (vol/vol) fetal calf serum (FCS), 1% (vol/vol) l-glutamine, and 1% (vol/vol) penicillin-streptomycin and subcultured biweekly following phosphate-buffered saline (PBS) washing and trypsinization. HBECs were maintained in endothelial-basal medium and passaged 1:1 weekly following PBS washing and trypsinization. Cell cultures were grown under standard conditions, and the passage number was limited to 20. The respective supplemented medium is referred to as supplemented medium.

Nelfinavir mesylate (CAS number 159989-65-8) powder was obtained from MedChemExpress LLC and Selleck Chemicals. The compound was dissolved in dimethyl sulfoxide (DMSO) at 100 mM and kept at −80°C or −20°C for long-term or working storage, respectively.

Impedance-based assays were performed in duplicates under standard incubation conditions (37°C, 5% CO2, and 95% humidity). The 16-well E plates have a gold-plated sensor array embedded in their glass bottom by which the electrical impedance across each well bottom is measured. The impedance per well, termed the cell index (CI), is recorded as a dimensionless quantity. The background CI was assessed following the addition of 50 μl supplemented medium to each well and equilibration in the incubation environment.
After 30 min of equilibration, 9,000 A549 cells in 50 μl supplemented medium were added per well, and measurement was started. Impedance-based assays were performed using the xCELLigence system (Roche Applied Science and ACEA Biosciences) as described previously.

For the quantification of nelfinavir toxicity, 50 μl of the supernatant was removed 18 h later and replaced with a 2-fold-concentrated nelfinavir or DMSO solvent control dilution in supplemented medium; the control was supplemented medium. Impedance was recorded every 15 min over 5 days. The TC50 indicates the concentration of nelfinavir that caused a 50% impedance reduction compared to the solvent-treated cells. The TC50 was calculated by nonlinear regression of the solvent-normalized CI over the concentration of nelfinavir.

For the quantification of nelfinavir effects on the cytopathogenicity of HAdV-C2-dE3B-GFP compared to HAdV-C2-dE3B-GFP-dADP infection, 50 μl of the supernatant was removed 18 h later and replaced with nelfinavir- and virus-supplemented medium. Twenty-five microliters of 4-fold-concentrated nelfinavir or the corresponding DMSO solvent control dilution in supplemented medium, or supplemented medium only, was added to 50 μl of medium containing cells. Additionally, 25 μl of a 4-fold-concentrated virus stock dilution was added (viral particles [VP]/well of HAdV-C2-dE3B-GFP and 2.68 × 10^6 VP/well of HAdV-C2-dE3B-dADP, corresponding to ∼30 PFU/well). The delay of infection-induced cytotoxicity was calculated as the time point at which the CI of the infected cells had decreased by 50% relative to its maximum. Data analysis was performed using GraphPad, and curve fitting was performed using three-parameter inhibitor concentration-versus-response nonlinear regression. Cells were fixed with paraformaldehyde (PFA) and 4 μg/ml Hoechst 33342 in PBS.
Cells were washed three times with PBS and stored in PBS supplemented with 0.02% NaN3 for infections with viruses harboring a GFP transgene. For wild-type (wt) viruses, cells were quenched in PBS supplemented with 50 mM NH4Cl, permeabilized using 0.2% (vol/vol) Triton X-100 in PBS, and blocked with 0.5% (wt/vol) bovine serum albumin (BSA) in PBS. Cells were incubated with 381.7 ng/ml mouse anti-HAdV hexon protein antibody and subsequently stained using 2 μg/ml goat anti-mouse-Alexa Fluor 594. Plates were imaged on either an IXM-XL or an IXM-C automated high-throughput fluorescence microscope using a 4× objective in wide-field mode. Hoechst staining was recorded in the 4′,6-diamidino-2-phenylindole (DAPI) channel, the fluorescein isothiocyanate (FITC)/GFP channel was acquired for viral GFP, and the tetramethyl rhodamine isothiocyanate (TRITC)/Texas Red channel was acquired for hexon immunofluorescence staining.

Per 96-well plate, 15,000 A549 cells, 10,000 HeLa cells, 30,000 HBECs, 30,000 HCE cells, or 30,000 CMT-93 cells were seeded in 100 μl of the respective supplemented medium and allowed to settle for 1 h at room temperature (RT) prior to cell culture incubation at 37°C with 5% CO2. The EC50 (infected and treated cells), the TC50, as well as the corresponding standard errors (SE) were determined using curve fitting in GraphPad using three-parameter inhibitor concentration-versus-response nonlinear regression. The mean TI50 was calculated as the TC50/EC50 ratio of the means. The TI50 SE was calculated by error propagation.

The infection phenotype for each well was quantified using Plaque2.0. Infection, HAdV hexon immunofluorescence staining, and imaging were performed in technical quadruplicates, as described above for the microscopic plaque assay. Single nuclei (Hoechst) were segmented using CellProfiler, and median nuclear intensities were used for classification. For electron microscopy, samples were postfixed with OsO4 and 0.25 mg/ml ruthenium red for 1 h.
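The TC50/EC50 determination and the TI50 error propagation can be sketched as follows. The grid search is a crude stand-in for GraphPad's three-parameter nonlinear regression, the dose-response data are synthetic, and the standard errors are hypothetical; only the means TC50 = 10.01 μM and EC50 = 0.37 μM, which reproduce the quoted TI50 of about 27.1, come from the text:

```python
import math

def three_param_response(x, top, bottom, c50):
    """Three-parameter [inhibitor]-vs-response curve (Hill slope fixed at -1)."""
    return bottom + (top - bottom) / (1.0 + x / c50)

def fit_c50(conc, resp, top, bottom, grid):
    """Grid-search the c50 minimizing squared error, with top/bottom fixed
    (a simple stand-in for true nonlinear regression)."""
    def sse(c50):
        return sum((three_param_response(x, top, bottom, c50) - y) ** 2
                   for x, y in zip(conc, resp))
    return min(grid, key=sse)

def ratio_with_se(a, se_a, b, se_b):
    """Ratio a/b with first-order (delta-method) error propagation."""
    r = a / b
    return r, r * math.sqrt((se_a / a) ** 2 + (se_b / b) ** 2)

# Synthetic impedance data generated at a true C50 of 10 μM
conc = [0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0]
resp = [three_param_response(x, 100.0, 0.0, 10.0) for x in conc]
tc50 = fit_c50(conc, resp, 100.0, 0.0, [0.5 * i for i in range(1, 101)])  # → 10.0

# TI50 from the reported means; the SEs (1.2 and 0.05) are hypothetical
ti50, ti50_se = ratio_with_se(10.01, 1.2, 0.37, 0.05)  # ti50 ≈ 27.1
```

The delta-method formula scales the relative uncertainties of numerator and denominator, which is the standard way to propagate SEs through a ratio of means.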
Cells were fixed at 4°C in 0.1 M ice-cold cacodylate buffer (pH 7.4) supplemented with 2.5% (vol/vol) glutaraldehyde and 0.5 mg/ml ruthenium red for 1 h. Cells were washed with 0.1 M cacodylate buffer (pH 7.4) and postfixed at RT in 0.05 M cacodylate buffer (pH 7.4) supplemented with 0.5% (vol/vol) OsO4. Following washing with 0.1 M cacodylate buffer (pH 7.36) and H2O, the samples were incubated in 2% (vol/vol) uranyl acetate at 4°C overnight. The samples were dehydrated in acetone and embedded in Epon as described previously. Slices were prepared as described previously.

HAdV-C5 was amplified in medium containing 0, 1.25, or 3 μM nelfinavir for 4 days. Cells were harvested and disrupted by three freeze-thaw cycles. The cell debris was removed by freon extraction, and mature full HAdV virions were purified by two rounds of CsCl gradient ultracentrifugation.

Double-CsCl-gradient-purified HAdV particles were adhered to collodion and 2% (vol/vol) amyl acetate film-covered grids. Viral particles were negatively stained with 2% (vol/vol) uranyl acetate and viewed on a transmission electron microscope at 100 kV. Images were acquired using a charge-coupled-device (CCD) camera.

Purified HAdV particles (±nelfinavir stocks) and a size standard were size separated on a 12% acrylamide gel under reducing conditions and transferred to a polyvinylidene difluoride (PVDF) membrane. HAdV proteins were detected using primary antibodies, including 1:10,000 R72 rabbit antifiber.

For confocal imaging, a 40× objective was used in confocal mode (62-μm pinhole). The DAPI channel was acquired for nuclear Hoechst staining, the FITC/GFP channel was acquired for viral GFP, the TRITC/Texas Red channel was acquired for immunofluorescence ADP staining, and the Cy5 channel was acquired for the ester signal. Thirty z steps with a 0.5-μm step size were acquired for each channel, and maximal projections were calculated. Image analysis was performed using CellProfiler.
Cells were inoculated with double-CsCl-purified HAdV-C5 ±nelfinavir stocks in 100 μl ice-cold supplemented medium and kept on ice for 30 min. Following a 15-min entry phase under standard cell culture conditions, the cells were fixed, and the nuclei were stained for 1 h at RT by the addition of 33 μl 16% PFA and 4 μg/ml Hoechst 33342 in PBS. Following the above-described immunofluorescence staining procedure, the cell-bound HAdV virions were stained using 9C12 mouse antihexon and goat anti-mouse secondary antibodies.

Cells were inoculated with ±nelfinavir stocks at 50 to 0.001 pg/well of BCA-determined viral protein and incubated under standard cell culture conditions. Cells were fixed at 52 hpi, stained for HAdV hexon expression, and imaged according to the procedure described above for the image-based plaque assay. Images were quantified using Plaque2.0.

Four hundred eighty thousand A549 cells were seeded per 6-well dish, inoculated with 1,100 PFU HAdV-C2-dE3B-GFP/well for 1 h at 37°C, washed with PBS, and detached by trypsin digestion. Infected cells were centrifuged and resuspended in fresh medium to remove any unbound input virus. Cells were seeded at 180,000 cells/12-well plate in medium supplemented with 1.25, 3, or 10 μM nelfinavir or the respective DMSO solvent control. Viral progeny in the cell monolayer and supernatant was harvested at the indicated times postinfection by three freeze-thaw cycles. The lysates were cleared by centrifugation and stored at 4°C until titration on naive A549 cells. PFA-fixed, Hoechst-stained cells were imaged at 44 hpi using a 4× objective on an IXM-XL epifluorescence microscope. GFP-positive infected cells were classified based on the median nuclear GFP intensity using automated image analysis by CellProfiler.
Infection was performed as described above for the microscopic plaque assay. Cells were incubated with an inoculum ranging between 10 and 2,560 PFU/well HAdV-C2-dE3B-GFP for 1 h at 37°C. Cells were washed with PBS and incubated in 100 μl phenol-free DMEM supplemented with 1% penicillin-streptomycin, 1% l-glutamine, 7.5% fetal bovine serum (FBS), 1% nonessential amino acids, 1% 100 mM sodium pyruvate, 0.25 ng/ml Hoechst 33342, and 1 μg/ml propidium iodide (PI). Plates were imaged at the indicated times postinfection on an IXM-C automated high-throughput fluorescence microscope using a 40× objective in confocal mode (62-μm pinhole). The DAPI channel was acquired for nuclear Hoechst staining, the FITC/GFP channel was acquired for viral GFP, and the Cy5 channel was acquired for the PI signal. Thirty z steps with a 0.5-μm step size were acquired for each channel, and maximal projections were calculated.

Plaques were segmented in Plaque2.0, and plaques with a centroid located 600 px from the well rim were considered, to exclude spatial limitations. Plaque roundness was calculated as 1 − eccentricity; circularity was computed as (4π × area)/perimeter². Statistical analysis was performed in GraphPad using the nonparametric Kolmogorov-Smirnov test.

Cells were incubated with the 87–101 antibody for 5 min on ice. The supernatant and washing PBS were collected, and cells were pelleted by centrifugation at 16,000 × g for 5 min at 4°C. Lysates were scraped off and used to resuspend the pelleted cells. Following another centrifugation step, the supernatant was collected and stored at −20°C. Samples of 15 μl of the lysate were supplemented with SDS-containing loading buffer. Samples were denatured at 95°C for 5 min, and proteins were separated on a denaturing 15% acrylamide gel.
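The two plaque shape measures mentioned above, roundness as 1 − eccentricity and the circularity ratio (4π × area)/perimeter², can be sketched in a short helper; the circular-plaque example below is hypothetical:

```python
import math

def circularity(area, perimeter):
    """(4*pi*A)/P^2: equals 1 for a perfect circle and decreases
    for elongated, comet-shaped regions."""
    return 4 * math.pi * area / perimeter ** 2

def ellipse_eccentricity(major, minor):
    """Eccentricity of a fitted ellipse: 0 for a circle, approaching 1
    as the region elongates."""
    return math.sqrt(1 - (minor / major) ** 2)

# A hypothetical circular plaque of radius 5 px
r = 5.0
c = circularity(math.pi * r ** 2, 2 * math.pi * r)
roundness = 1 - ellipse_eccentricity(5.0, 5.0)
```

For a circle both measures evaluate to 1, whereas a comet-shaped plaque would have an elevated eccentricity (roundness near 0) and a circularity well below 1, which is what separates the lytic and nonlytic plaque phenotypes quantitatively.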
Proteins transferred to a PVDF membrane were detected with a 1:1,000 dilution of a rabbit anti-HAdV-C2-ADP78–93 antibody. Where indicated, cells were kept in supplemented DMEM containing a 1:12 dilution of HAdV-C2/5-neutralizing dog serum.

Plaque shapes were also assessed by a conventional crystal violet-stained plaque assay performed on A549 cells in liquid-supplemented DMEM. All infections were performed at 37°C with 95% humidity and a 5% CO2 atmosphere. At the indicated times postinfection, cells were fixed and stained for 60 min with a PBS solution containing 3 mg/ml crystal violet and 4% PFA added directly to the medium from a 16% stock solution. Plates were destained in H2O, dried, and imaged using a standard 20-megapixel phone camera under white-light illumination."} {"text": "Heart failure (HF) is the final stage of various cardiac diseases with poor prognosis. The integrated traditional Chinese medicine (TCM) and western medicine therapy has been considered a prospective therapeutic strategy for chronic heart failure (CHF). There have been small clinical trials and experimental studies demonstrating the efficacy of Shenfu Qiangxin Pills (SFQX) for treating CHF; however, there is still a lack of further high-quality trials. This paper describes the protocol for the clinical assessment of SFQX in CHF (heart-kidney Yang deficiency syndrome) patients. A randomized, double-blind, parallel-group, placebo-controlled, multi-center trial will assess the efficacy and safety of SFQX in the treatment of CHF. 352 patients with CHF (heart-kidney Yang deficiency syndrome) from 22 hospitals in China will be enrolled. In addition to their standardized western medicine, patients will be randomized to receive treatment with either SFQX or placebo for 12 weeks. The primary outcome is the plasma N-terminal pro-B-type natriuretic peptide level, which will be measured uniformly by the central laboratory.
The secondary outcomes include composite endpoint events, echocardiography indicators, grades of the New York Heart Association (NYHA) functional classification, the 6-minute walk test (6MWT) results, the Minnesota Living With Heart Failure Questionnaire, and TCM syndrome scores. The integrated TCM and western medicine therapy has developed into a treatment model in China. The rigorous design of the trial will assure an objective and scientific assessment of the efficacy and safety of SFQX in the treatment of CHF. Chinese Clinical Trial Registry: ChiCTR2000028777.

Over the last 3 decades, improvements in treatments and their implementation have improved survival and reduced the hospitalization rate in patients with heart failure with reduced ejection fraction (HFrEF). According to the guidelines for chronic heart failure (CHF) treatment, angiotensin receptor neprilysin inhibitors, angiotensin-converting enzyme inhibitors or angiotensin receptor blockers, beta-blockers, aldosterone receptor antagonists, diuretics, digitalis, and vasodilating agents are standard treatments for heart failure. However, the outcomes often remain unsatisfactory. Traditional Chinese medicine (TCM) has the characteristics of multi-target, multi-function, and multi-pathway action in the prevention and treatment of HF. The integrative treatment of western medicine and TCM for HF is associated with increased quality of life, a low re-admission rate, and favorable prognosis. Heart failure (HF), as the final stage of various cardiac diseases, is an abnormality of cardiac structure or function. Some small-scale clinical studies indicated that SFQX had the advantages of enhancing cardiac contractility and improving heart function. The China guidelines for the diagnosis and management of heart failure published in 2014 and 2018 both cited evidence from clinical studies of TCM, so more attention has been paid to TCM, and it has been widely used in clinical practice.
Shenfu Qiangxin Pills, a patent drug made of ginseng (Renshen), Radix aconiti carmichaeli (Fuzi), rhubarb (Dahuang), the root bark of white mulberry (Sangbaipi), Polyporus umbellata, Semen lepidii (Tinglizi), and grifola (Zhuling), has the effects of nourishing Qi, warming Yang, strengthening the heart, and promoting diuresis. The pills have been commonly used in TCM for the integrative treatment of patients with CHF who are also diagnosed with heart-kidney Yang deficiency syndrome, characterized by palpitations, breathlessness, chest tightness, a puffy face, and swollen limbs. Experimental research in animal models has shown that SFQX can reduce water-sodium retention, correct electrolyte disturbance, and protect renal and cardiac function via attenuating autophagy and apoptosis. However, the explicit role of SFQX in preventing and treating cardiovascular disease remains unclear due to a lack of sound scientific evidence. Currently available randomized controlled trials on SFQX are flawed, with small sample sizes, making it difficult to draw definite conclusions on its actual benefits and harms. Hence, further rigorously designed randomized controlled trials are warranted to assess the effect of SFQX in patients with CHF so as to provide high-quality evidence for clinical practice. We would like to test the hypothesis that patients with CHF will benefit from SFQX and to evaluate its safety through a high-quality clinical trial.

2
2.1
The main objective of this study is to evaluate the safety and efficacy of SFQX compared to placebo for treating CHF, so as to provide high-quality research evidence for its clinical practice.
2.2
This study is designed as a randomized, double-blind, parallel-group, placebo-controlled, multi-center trial. There are 22 participating hospitals located in different areas of China.
A total of 352 stable patients with CHF who fulfill the inclusion and exclusion criteria will be randomized into either an experimental or a control group in a 1:1 ratio. The general design of this study is demonstrated in the figure. 2.3.1 Recruitment of the participants began in January 2020 and is expected to be finished in December 2021. Recruitment may be extended depending on registration completion. All participants will be given an informed consent form with a detailed explanation before enrollment, and notified that they may withdraw from the trial at any point without penalty. Participants can be recruited directly or openly during clinical practice, or through screening the database. 2.3.2 (1) Western medicine diagnostic criteria: refer to the 2018 guidelines for the diagnosis and treatment of heart failure in China. (2) TCM syndrome diagnostic criteria: refer to the 2014 expert consensus on diagnosis and treatment of TCM in CHF and the 2016 expert consensus on diagnosis and treatment of integrated traditional and western medicine in CHF. 2.3.3 (1) Participants who meet the diagnostic criteria of CHF and have had a history of CHF or clinical symptoms of HF for more than 3 months. (2) Participants with heart-kidney Yang deficiency syndrome based on differentiation of syndromes and treatment. (3) Aged 18 to 80 years. (4) Left ventricular ejection fraction (LVEF) <40%. (5) New York Heart Association (NYHA) class II to IV. (6) N-terminal pro-B-type natriuretic peptide (NT-proBNP) ≥450 pg/ml. (7) Participants who have received standardized western medicine treatment at the optimal therapeutic dose for at least 2 weeks. (8) Participants who have not used TCM for heart failure within 2 weeks before enrollment. (9) Submitted informed consent. 2.3.3.1 (1) Participants who have undergone coronary revascularization or cardiac resynchronization therapy within 12 weeks, or who have cardiac resynchronization therapy planned within 12 weeks. (2) Participants with
severe primary diseases of, for example, the liver, kidney or hematopoietic system; alanine transaminase, aspartate transaminase, alkaline phosphatase, and/or serum creatinine values more than 2 times the upper limit of normal; serum potassium >5.5 mmol/L; or tumors, severe neuroendocrine diseases or mental illnesses. (3) Participants with left ventricular outflow tract obstruction, acute myocarditis, hypertrophic cardiomyopathy, restrictive cardiomyopathy, aortic aneurysm, arterial dissection, congenital heart disease, severe arrhythmia, or unrepaired heart valve disease with significant hemodynamic changes. (4) Participants with uncontrolled hypertension (systolic blood pressure ≥180 mmHg and/or diastolic blood pressure ≥100 mmHg). (5) Participants with severe peripheral arterial disease, an acute attack of chronic obstructive pulmonary disease, or pulmonary vascular disease such as primary pulmonary hypertension or pulmonary hypertension due to autoimmune disease. (6) Participants who are pregnant, preparing for pregnancy, or lactating. (7) Participants with an allergic constitution or known sensitivity to the study drugs or their ingredients. (8) Participants in other clinical trials within 1 month. 2.4 Participants who provide written informed consent and meet the inclusion and exclusion criteria will be selected at the screening visit, namely visit 1. Eligible participants will be informed via telephone call and asked to visit the trial center within 1 week (visit 1). At visit 1, the investigator will record demographic information, other medical history and medication, vital signs, and physical examination findings, and subjects will be randomized to the experimental or control group. The dosage used in this study is 5.4 g (2 bags) of SFQX or placebo 3 times daily for 12 weeks.
Tests and assessments will be performed according to the following schedule: visit 2 (TCM syndrome scores, composite endpoint events, complete blood count, routine urine test, liver function test, renal function test, serum electrolytes, and 12-lead electrocardiogram); visits 3 and 4 are the same as above. Detailed assessment schedules are outlined in the figure. 2.5 The shape and appearance of SFQX and placebo are identical: tan herbal pills. Each bag of pills is manufactured at a dosage of 2.7 g, with a shelf life of 18 months. The manufacturing company follows the regulations for good clinical practice and controls the quality of the products using its own standards and testing methods. The principal investigator will receive the products used in this trial from the company and supply them to the drug administrator. The drugs for each subject come in a box consisting of three labelled packages (1 package for each visit period). A package, which includes 168 bags for 28 days plus 18 spare bags, will be prepared at 4-week intervals from visit 1 to visit 3. All products used in the trial will be recorded, including the amount of medication, the date of delivery, and the date of return. After randomization, the participants will begin taking 5.4 g (2 bags) of SFQX or placebo orally, combined with standardized western medicine treatment, 3 times daily after meals for 12 weeks. 2.6 Central randomization is used in this trial: an interactive web response system (IWRS) performs the central randomization and conceals the assignment scheme. A block dynamic randomization method is used for allocation. Randomized code generation and drug blinding will be implemented independently of the data. The trial is double-blind; the placebo is identical to SFQX in shape, color, smell, and other properties.
Both the participants and the investigators will be blinded until completion of the trial. 2.7 The sample size calculation is based on the proportion of patients with a decrease in NT-proBNP level of at least 30%. According to the reference, 47.95% of the patients in the qili qiangxin capsule group had reductions in NT-proBNP levels of at least 30%, compared with 31.98% of patients in the placebo group. We therefore assume that the proportion of patients demonstrating a decrease in NT-proBNP level of at least 30% in the treatment group would be 48%. Given a type I error rate of α = 0.05 and a power of 80% (type II error rate of β = 0.20), the sample size of each group is 146 cases, calculated with PASS 11 software. Moreover, considering a dropout rate of approximately 20% for randomized patients, a total of 352 patients is required to be randomized for the efficacy analysis. 2.8 P < .05 is considered statistically significant. SAS 9.4 will be used for the data analysis. Measurement data will be analyzed by paired t test, analysis of variance, or rank sum test; enumeration data by chi-square test or Fisher exact test; and ranked data by Ridit analysis or the Cochran-Mantel-Haenszel test. The analysis data sets comprise the full analysis set, the per-protocol set and the safety analysis set.
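As a cross-check of the figures in Section 2.7, the standard normal-approximation formula for comparing two proportions reproduces the stated numbers. This is a minimal sketch; the protocol itself used PASS 11, whose routine may differ slightly in rounding conventions.

```python
from math import ceil, sqrt
from statistics import NormalDist

def two_proportion_n(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two proportions
    (classical normal-approximation formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# proportions quoted in the protocol: 48% (treatment) vs. 31.98% (placebo)
n_per_group = two_proportion_n(0.48, 0.3198)   # 146 per group
total = 2 * ceil(n_per_group * 1.2)            # ~20% dropout inflation -> 352
```

With these inputs the formula gives 146 per group, and inflating each group by roughly 20% for dropout yields the 352 randomized patients stated in the protocol.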
To account for the influence of baseline values, analysis of covariance or logistic regression will be used to adjust for baseline factors. All statistical tests in this study are 2-sided. 2.9.1 The primary outcome is NT-proBNP. Efficacy assessment of the primary outcome is based on either the change in NT-proBNP level or the proportion of patients whose NT-proBNP level decreased by at least 30% from baseline to 12 weeks. 2.9.2 The secondary outcomes include composite endpoint events (CCEs), echocardiography indicators, grades of NYHA functional classification, 6MWT results, MLHFQ scores, and TCM syndrome scores. Efficacy assessment of secondary outcomes is based on the changes in echocardiography indicators, grades of NYHA functional classification, 6MWT results, MLHFQ scores and TCM syndrome scores from baseline to 12 weeks, and on the proportion of patients with composite endpoint events at the end of the trial. 2.10 The safety assessment is based on spontaneous reports of adverse events, vital signs, and laboratory tests. Vital signs include temperature, blood pressure, heart rate and breathing. Laboratory tests include complete blood count, routine urine test, liver function test, renal function test, serum electrolytes and 12-lead electrocardiogram. Adverse events will be recorded at any time during the trial. If a serious adverse event occurs, the investigator shall immediately provide appropriate treatment for the subject and report to the medical ethics committee of the primary research institution and the sponsor. 2.11 Data collection and management consist of 2 parts: the Case Record Form (CRF) and the ResMan public platform for Electronic Data Capture (EDC). Medical information obtained in this trial will be recorded in each patient's CRF, checked against the subject's original medical records, and kept confidential. Each researcher is assigned an individual EDC user name and password for data entry.
Once the data of each center are complete and all queries resolved, the project manager and the principal researcher review the contents of each case again. The data administrator then creates a database lock list and saves the data management documents as required. All research data, including confirmation of all subjects, original written informed consent forms, CRFs, and detailed records of drug distribution, will be kept for 5 years. 2.12 A specialized monitor will be responsible for supervising the entire trial process, regularly reviewing and verifying that the trial is conducted and documented according to the plan, standard work guidelines, and relevant regulations. 2.13 This trial protocol was approved by the Ethics Committee of Xiyuan Hospital of China Academy of Chinese Medical Sciences on 12 December 2019 (2019XLA062-4). Any protocol deviations will be approved by the same committee. The trial has been registered with the Chinese Clinical Trial Registry. All participants or authorized surrogates will be given a detailed explanation of the trial along with the informed consent form, and appropriate time to decide on consent or assent. 3 HFrEF is characterized by defects in cardiac contraction. Although the conventional therapeutic approaches to HFrEF management have improved survival rates, the prognosis remains poor. The efficacy of integrative treatment of CHF with TCM and western medicine has gradually been accredited, and integrative treatment is considered an alternative therapeutic strategy for CHF with fewer side effects. Some clinical reports have indicated that western medicine combined with SFQX can further improve cardiac function, TCM syndrome and quality of life, show a clinical curative effect, improve LVEF and narrow the left ventricular end-diastolic diameter.
They may work by reducing plasma brain natriuretic peptide and atrial natriuretic peptide, exerting anti-inflammatory effects, inhibiting oxidative stress, protecting vascular endothelial function, suppressing the renin-angiotensin-aldosterone system, and reversing ventricular remodeling to inhibit myocardial fibrosis. However, there is still a demand for high-quality, larger-scale clinical research to prove its efficacy and safety in the treatment of CHF. As a post-marketed medicine, the composition of SFQX is clear and its quality is assured. The study design of this trial has 3 main points. 1. To assess the results of the treatment effectively, a standardized outcome measure is required. We select NT-proBNP as the primary outcome. Plasma will be collected in the various hospitals and transported to the central laboratory by cold chain for NT-proBNP detection. Plasma NT-proBNP levels will be measured with dedicated kit-based NT-proBNP assays (Roche Diagnostics). 2. CHF patients with heart-kidney Yang deficiency syndrome will be included on the basis of syndrome differentiation and treatment, which reflects both the advantages and characteristics of TCM and the efficacy evaluation of western medicine. Patients with a syndrome characterized by palpitations, shortness of breath or wheezing, and fear of cold can be diagnosed. 3. The study protocol was designed with reference to international clinical trial principles. Three-level quality control measures are taken and data are monitored through the EDC system. Quality control supervisors will ensure that investigators adhere to the study protocol in each center. A dynamic randomization method with multiple blinding across sub-centers is used for group allocation, so as to manage data and reduce deviation in the research.
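The responder definition used for the primary outcome (a decrease in NT-proBNP of at least 30% from baseline to week 12) can be written as a simple predicate. The patient values below are hypothetical illustrations, not trial data:

```python
def nt_probnp_responder(baseline_pg_ml, week12_pg_ml, threshold=0.30):
    """True if NT-proBNP fell by at least `threshold` (default 30%)
    from baseline to week 12."""
    decrease = (baseline_pg_ml - week12_pg_ml) / baseline_pg_ml
    return decrease >= threshold

# hypothetical patient values in pg/mL (not trial data)
responder = nt_probnp_responder(1200.0, 800.0)        # ~33% decrease
non_responder = nt_probnp_responder(1200.0, 1000.0)   # ~17% decrease
```

The trial analyzes both this binary responder proportion and the continuous change in NT-proBNP, so the same measurements feed both endpoints.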
The rigorous design of the trial will ensure an objective and scientific assessment of the efficacy and safety of SFQX in the treatment of CHF. We also thank the following coordinating hospitals and directors: (1) Xiyuan Hospital of China Academy of Chinese Medical Sciences, Xiaochang Ma; (2) Fuwai Hospital, Chinese Academy of Medical Sciences, Jian Zhang/Mei Zhai; (3) Xuanwu Hospital, Capital Medical University, Qi Hua; (4) First Teaching Hospital of Tianjin University of TCM, Jingyuan Mao; (5) Second Affiliated Hospital of Tianjin University of TCM, Yingqiang Zhao; (6) Shuguang Hospital Attached to Shanghai TCM University, Xiaolong Wang; (7) The First Affiliated Hospital to Changchun University of Chinese Medicine, Yue Deng; (8) Shengjing Hospital of China Medical University, Zhijun Sun; (9) Xinjiang Uygur Autonomous Region Hospital of Traditional Chinese Medicine, Xiaofeng Wang; (10) Tianjin Chest Hospital, Shutao Chen; (11) Guangdong Hospital of TCM, Weihui Lv; (12) Affiliated Hospital of Jiangxi University of TCM, Zhongyong Liu; (13) Affiliated Zhongshan Hospital of Dalian University, Qin Yu; (14) Shijiazhuang No.1 Hospital, Xitian Hu; (15) Handan First Hospital, Xianzhong Wang; (16) Wuxi Traditional Chinese Medicine Hospital, Shu Lu; (17) Huangshi Central Hospital, Daoqun Jin; (18) Luoyang Hospital of TCM, Yanling Sun; (19) TCM-integrated Hospital of Southern Medical University, Yiye Zhao; (20) Yuncheng Central Hospital, Xia Wang; (21) Xingtai People's Hospital, Hebei Medical University Affiliated Hospital, Limei Yao; (22) Taiyuan Iron and Steel (Group) Co., Ltd., General Hospital, Qing Ji. All hospitals above are listed irrespective of ranking. Conceptualization: Lijun Guo, Dawu Zhang, Xiaochang Ma. Supervision: Xiaochang Ma, Jian Zhang, Qi Hua, Keji Chen. Writing – original draft: Lijun Guo, Hui Yuan. Writing – review & editing: Lijun Guo, Hui Yuan, Xiaochang Ma.

Cervical cancer (CC) is the third most common gynecological malignancy around the world.
Cisplatin is an effective drug, but cisplatin resistance is a vital factor limiting its clinical usage. Enhancer of mRNA decapping protein 4 (EDC4) is a known regulator of mRNA decapping and is related to genome stability and drug sensitivity. This research investigated the mechanism by which EDC4 affects cisplatin resistance in CC. Two human cervical cancer cell lines, HeLa and SiHa, were used to investigate the role of EDC4 in cisplatin resistance in vitro. Knockdown or overexpression of EDC4 or replication protein A (RPA) in HeLa or SiHa cells was performed by transfection. Cell viability was analyzed by MTT assay. The growth of cancer cells was evaluated by colony formation assay. DNA damage was measured by γH2AX (a sensitive DNA damage response marker) immunofluorescent staining. The binding of EDC4 and RPA was analyzed by immunoprecipitation. EDC4 knockdown in cervical cancer cells (HeLa and SiHa) enhanced cisplatin sensitivity and cisplatin-induced cell growth inhibition and DNA damage. EDC4 overexpression reduced the DNA damage caused by cisplatin and enhanced the growth of cervical cancer cells. EDC4 could interact with RPA and promote RPA phosphorylation. RPA knockdown reversed the inhibitory effect of EDC4 on cisplatin-induced DNA damage. The present results indicated that EDC4 is responsible for cisplatin resistance in cervical cancer, partly through interacting with RPA and alleviating DNA damage. This study indicated that EDC4 or RPA may be novel targets to combat chemotherapy resistance in cervical cancer. Cervical cancer (CC) is the third most common gynecological malignancy around the world. About one-third of new cases are found in China, with the incidence among young patients and early-stage cases increasing every year. Enhancer of mRNA decapping protein 4 (EDC4) is a known regulator of mRNA decapping that functions in the mRNA P-bodies within the cytoplasm.
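The MTT viability readout described above is summarized as an IC50 (the cisplatin concentration inhibiting cell proliferation by 50%). As a minimal sketch, an IC50 can be estimated from a dose-response series by log-linear interpolation; the study itself computed IC50 values with SPSS, and the viability values below are hypothetical:

```python
import math

def ic50_log_interp(concs, viability):
    """Estimate the IC50 by log-linear interpolation between the two
    tested concentrations that bracket 50% viability.
    concs: ascending concentrations; viability: surviving fractions."""
    points = list(zip(concs, viability))
    for (c1, v1), (c2, v2) in zip(points, points[1:]):
        if v1 >= 0.5 >= v2:
            t = (v1 - 0.5) / (v1 - v2)   # fraction of the bracketing interval
            log_ic50 = math.log10(c1) + t * (math.log10(c2) - math.log10(c1))
            return 10 ** log_ic50
    raise ValueError("50% viability is not bracketed by the dose range")

# hypothetical cisplatin dose-response (concentration in uM -> viable fraction)
doses = [1, 3, 10, 30]
viable = [0.90, 0.70, 0.45, 0.20]
ic50 = ic50_log_interp(doses, viable)   # falls between 3 and 10 uM
```

Fitting a full four-parameter logistic curve (as statistics packages do) is more robust than interpolation, but the bracketing idea is the same.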
Hazir Rahman found that EDC4 is an interacting partner of mTORC1, a rapamycin-sensitive complex involved in energy synthesis, translation, transcription, and lipid biosynthesis. EDC4 knockdown was performed with two independent short hairpin RNA (shRNA) sequences (shEDC4#1 and shEDC4#2), and the knockdown efficacy was confirmed by RT-PCR and Western blot (p < 0.01 vs. control) in HeLa and SiHa cells; sequence 2 was more potent for EDC4 knockdown. EDC4 depletion decreased the IC50 of cisplatin in HeLa (from 9.728 μM to 5.226 μM) and SiHa cells (from 25.29 μM to 9.423 μM), indicating that the cancer cells became more sensitive to the drug in terms of cell survival. In the colony formation assay, cisplatin decreased the number of colonies (p < 0.05 for HeLa and p < 0.01 for SiHa), and EDC4 knockdown exhibited a more potent inhibitory effect on colony formation than cisplatin alone (shNC+DDP) (p < 0.01 for HeLa and p < 0.05 for SiHa). DNA damage was measured by γH2AX (a sensitive DNA damage response marker) immunofluorescent staining: cisplatin increased the number of γH2AX-positive HeLa and SiHa cells, and EDC4 knockdown obviously increased the γH2AX intensity. Conversely, EDC4-overexpressing cells were established, and the transfection efficacy was confirmed by RT-PCR and Western blot. Cisplatin decreased the number of colonies of cancer cells, but this was reversed by EDC4 overexpression. EDC4 overexpression alone did not induce DNA damage, as the number of γH2AX-positive cells showed no obvious change; cisplatin increased the number of γH2AX-positive cells, while EDC4 overexpression inhibited this effect. As EDC4 was related to cisplatin-induced DNA damage, we further studied whether it interacted with RPA, a protein that binds ssDNA and is responsible for DNA metabolism.
To achieve this goal, the physical EDC4-RPA interaction was confirmed by co-immunoprecipitation assay. In accordance with the above results, EDC4 overexpression induced more colony formation of HeLa cells compared with the negative control, while RPA1 or RPA2 knockdown reversed the effect of EDC4 overexpression on colony formation. Cervical cancer (CC) is the third most common gynecological malignancy around the world. Cis-dichlorodiamineplatinum (cisplatin) is a commonly used and one of the most effective drugs for the treatment of advanced or recurrent CC. Enhancer of mRNA decapping protein 4 (EDC4) is a known regulator of mRNA decapping that functions within the cytoplasm and is related to drug resistance. For example, researchers found that EDC4 deficiency in HeLa cells leads to hypersensitivity to DNA interstrand crosslinking drugs and PARP inhibitors, which also indicates an effect of EDC4 on DNA damage repair in the nucleus and on chemo-sensitivity. The mechanism of cisplatin resistance is complex. Previous reports have demonstrated that modulating the cellular response to DNA replication stress is a key factor in cisplatin resistance in cancer treatment. In conclusion, the present results indicated that EDC4 is responsible for cisplatin resistance in cervical cancer, partly through interacting with RPA and alleviating DNA damage. This study indicated that EDC4 or RPA may be novel targets to combat chemotherapy resistance in cervical cancer. Two human cervical cancer cell lines, HeLa and SiHa, were purchased from the American Type Culture Collection and cultured in DMEM medium supplemented with 10% fetal bovine serum in a humidified incubator (5% CO2). Short hairpin RNA (shRNA) was used to establish the EDC4 knockdown cells. Human cervical cancer cell lines (HeLa or SiHa) were seeded in 6-well plates and transfected with EDC4 shRNA or control shRNA using Lipofectamine 2000 according to the manufacturer's instructions.
To evaluate the effect of EDC4 overexpression, cells were transfected with an EDC4 overexpression plasmid or the negative control (Vector) using Lipofectamine 2000 following the manufacturer's protocol. The sequences of the shRNAs were as follows: Sh-EDC4 #1: GGTGATAGTACCTCAGCAAAC; Sh-EDC4 #2: GCCACCCATTAACCTGCAAGA. RPA knockdown was performed with small interfering RNA (siRNA) transfected into cells with Lipofectamine 2000. Briefly, HeLa or SiHa cells were loaded into 6-well plates and cultured to 80% confluence. The premixed lipofection reagent and plasmids were then added to the wells and incubated for 24 h. The siRNA sequences for RPA1 and RPA2 were taken from a previous report. Cell viability was analyzed by MTT assay. Briefly, cells (EDC4 knockdown) were treated with cisplatin for 48 h, MTT solution (5 mg/mL) was added, and the plates were incubated for another 4 h. DMSO was used to dissolve the formazan crystals, and the absorbance at 570 nm was measured with a microplate reader. The IC50 was defined as the cisplatin concentration required to inhibit cell proliferation by 50% and was calculated with SPSS. For the colony formation assay, transfected or cisplatin-treated cells were seeded in plates. After incubation for 14 days, cells were washed twice with pre-cooled PBS and stained with crystal violet. The colony formation efficiency was expressed as the percentage of colonies relative to the number of seeded cells. Experiments were repeated independently in triplicate. Immunoprecipitation was performed with a commercial immunoprecipitation kit according to the manufacturer's protocol. Briefly, cells from the different groups were trypsinized and collected, lysed and homogenized in lysis buffer, and centrifuged at 12,000 rpm to collect the supernatant as cell lysate. For Co-IP experiments, cell lysate was incubated with purified antibody against the target protein overnight at 4 °C.
Then, Protein G-conjugated beads or IgG-conjugated magnetic beads were added to the lysates and incubated for 3 h. Beads were pulled down magnetically and washed with IP buffer and PBS. After elution from the beads, the bound proteins were analyzed by Western blot. For immunofluorescence, cells were seeded on coverslips, fixed with 4% paraformaldehyde and permeabilized with Triton X-100 (1%). After blocking with 5% goat serum, cells were incubated with primary antibody and then with secondary antibodies conjugated to Alexa Fluor 488. Finally, the nuclei were labeled with DAPI. Immunofluorescence images were captured using FV10-ASW viewer software. For Western blot, cells were lysed in RIPA buffer containing a protease inhibitor cocktail and quantified by the BCA method. The protein samples were mixed with loading buffer and heat-denatured at 94 °C. Equal amounts of protein were separated by 10% SDS-PAGE and transferred to PVDF membranes. The membranes were blocked with BSA, incubated with primary antibodies (Abcam, ab109394) overnight at 4 °C, and finally incubated with the appropriate HRP-conjugated secondary antibodies. Protein levels were detected with an ECL chemiluminescent system, and blot densitometry was analyzed with ImageJ software. Total RNA was extracted and purified from cells with the RNeasy Plus Universal Mini Kit and reverse transcribed to cDNA in a 20 μL volume. Quantitative real-time PCR analysis was performed using a SYBR Green qPCR kit. The levels of the different mRNAs were normalized to GAPDH. The primer sequences were taken from a published paper. Statistical analysis was performed using SPSS software. Data were expressed as means ± SD, and a p value < 0.05 was considered statistically significant. Significance was analyzed using the two-tailed Student's t test.

Rapid economic and societal development increases resource consumption.
Understanding how to balance the discrepancy between economic and social water use and ecological water use is an urgent problem, especially in arid areas. The Heihe River is the second-largest inland river in China, and this problem is notable there. To ensure downstream ecological water use, the "Water Distribution Plan for the Mainstream of the Heihe River" ("97 Water Diversion Scheme") controls the discharge of Yingluo Gorge and Zhengyi Gorge, while the "Opinions on applying the strictest water resources control system" ("Three Red Lines") restricts water use. With the development of the economy and agriculture in the midstream, Zhengyi Gorge's discharge cannot meet the Heihe River's downstream ecological water demand. Working under the constraints of the "97 Water Diversion Scheme" for the Heihe River and the "Three Red Lines" total water use control index for Zhangye County, we constructed a water resource allocation model for the midstream of the Heihe River to reasonably allocate water resources between the midstream and downstream. The model has three parts: establishing the mathematical equations, simulating water consumption under different inflow conditions, and ensuring each water user's demand is met. The results showed that if total water consumption in the midstream is not confined, then even with reasonable allocation of water resources, actual water use and consumption in the middle Heihe River will exceed the "97 Water Diversion Scheme" and "Three Red Lines" limits. If water consumption is confined, both will remain within those limits while still meeting the downstream ecological water demand of the Heihe River.
Besides, under the premise of satisfying economic water use and the downstream ecological water demand of the Heihe River, returning farmland to wasteland and strengthening water-saving measures will improve water efficiency and be more conducive to allocating water resources. Water resources are essential to all life; they are primary natural resources, strategic economic resources, and ecological control factors. The Heihe River is the second-largest inland river in China and an essential water source in Northwest China; it is also the study area's primary water source. The runoff from Yingluo Gorge is the main water source of the midstream of the Heihe River, and the discharge from Zhengyi Gorge determines the ecological water supply downstream. If the discharge cannot meet the downstream ecological water consumption, the Heihe River's ecological environment will be affected. With the rapid development of the economy and agriculture in the midstream, water demand has increased significantly. To effectively curb the excessive development and utilization of water resources, China's government established the "Three Red Lines" system, which includes total water use control, water efficiency, and limits on the pollution capacity of water functional areas. With the continued development of the economy and increasing demand for water, problems in controlling total water use under the "97 Water Diversion Scheme" gradually appeared.
In this paper, we constructed a water resource allocation model for the Heihe River based on the "97 Water Diversion Scheme," simulated water consumption in the middle reaches under different inflow scenarios, and proposed a water resources regulation strategy to meet the ecological water needs of the lower reaches. The Heihe River is the second-largest inland water body in Northwest China, with the midstream located at 38.6° N–39.8° N, 99.5° E–100.8° E. The Heihe River originates from the northern foothills of the Qilian Mountains: the upstream lies above Yingluo Gorge, the midstream between Yingluo Gorge and Zhengyi Gorge, and the downstream below Zhengyi Gorge. The "97 Water Diversion Scheme" strictly stipulates the inflow of water from Yingluo Gorge and the discharge of Zhengyi Gorge under different inflow conditions. The midstream of the Heihe River includes three districts and counties and 13 irrigated areas. The Liyuan River is a tributary of the Heihe River; the Liyuan River irrigated area is irrigated by the Liyuan River, and the other 12 areas are irrigated by the Heihe mainstream. Because of the development of the economy and agriculture in the midstream, groundwater demand is also great, so groundwater is extracted through mechanical wells for agricultural irrigation. At the wettest stipulated inflow level, the Zhengyi Gorge discharges 1.32 billion m3. When the Yingluo Gorge's 25% guaranteed rate of incoming water is 1.71 billion m3, the water discharged from Zhengyi Gorge is 1.09 billion m3. When the 75% guaranteed rate of incoming water is 1.42 billion m3, the discharged water will be 760 million m3.
When Yingluo Gorge's 90% guaranteed rate of incoming water is 1.29 billion m3, the discharge volume of Zhengyi Gorge will be 630 million m3. The Ministry of Water Resources approved the "Water Distribution Plan for the Mainstream of the Heihe River," which allocates the water volume of the Heihe River in wet and dry years. In January 2013, the General Office of the State Council issued the most stringent "Three Red Lines" indicators for water resources management to all provinces, autonomous regions, and municipalities directly under the central government. In November, the general office of the Gansu Provincial People's government issued water resources management control indicators for 2015, 2020, and 2030 for the prefecture-level administrative regions of Gansu Province. According to the city water resources management target, the Heihe water diversion scheme, the water resources allocation scheme, and water use practice in each county, Zhangye City issued total water consumption targets of 2.011 billion m3 for 2020 and 2.71 billion m3 for 2030. For 2015, 2020, and 2030, respectively, the total water consumption control indexes were 779 million m3, 681 million m3, and 702 million m3 for Ganzhou District; 464 million m3, 406 million m3, and 418 million m3 for Linze County; and 389 million m3, 340 million m3, and 350 million m3 for Gaotai County. We obtained meteorological data from six weather stations in the Heihe River watershed from 1956 to 2017 from the China National Meteorological Information Center (http://cdc.cma.gov.cn, accessed on 8 July 2020).
We obtained data for irrigated area, average flow rate, and the closing time of the river in each section from the Zhangye Water Conservancy Annals (http://www.zhangye.gov.cn, accessed on 8 July 2020) and the Gansu Statistical Yearbook (http://tjj.gansu.gov.cn, accessed on 8 July 2020). The Cold and Arid Region Scientific Data Center provided annually observed runoff discharge (1956–2017) and the Digital Elevation Model (DEM). The coordination coefficient in Equation (2) is an empirical parameter, determined after many trials and combined with the Heihe water resources deployment; the remaining parameters were set according to the main soil type (loam) in the midstream of the Heihe River Basin. Based on the groundwater module of the water resources allocation model for the midstream of the Heihe River Basin, we established an objective function using the historical three-year average water withdrawal data, the multiyear average precipitation, the multiyear average water surface evaporation, and the predetermined parameters of each irrigation area in the middle reaches (m = 23 irrigation area units, including subirrigation units), and optimized the calculation of the pending parameters; the numerical values of the undetermined parameters were obtained from these optimization calculations. The model simulated the groundwater level of each irrigation area and the flow of each river section from 2005 to 2012; the fits of the main section flows and the simulated groundwater levels were good. We used the water allocation model to simulate the completion of a long series of annual water diversion schemes under current water demand conditions. Downstream water was owed annually; in more abundant years, 126 million m3 of water was owed to the downstream. In dry and normal flow years, the discharge target of Zhengyi Gorge could be met.
In 2017, the midstream of the Heihe River needed 1.394 billion m3 of water. The average annual groundwater extraction volume was 598 million m3, which was greater than the permitted average annual extraction volume of 480 million m3 in the middle reaches of the Heihe River. Comparing the Zhengyi Gorge’s drainage index with its simulated discharge under different incoming water conditions, the simulated discharge is less than the drainage index, and midstream water consumption is greater than the water consumption index. Under such circumstances, the ecological water in the downstream will be insufficient, affecting the downstream ecological environment. Total water withdrawal (including the Liyuan River) in the midstream is also greater than the midstream water consumption index. As water consumption in the study area increases, the extraction of water from the Heihe River will increase and the discharge from Zhengyi Gorge will decrease, which is more detrimental to the ecological environment of the lower reaches of the Heihe River. In each inflow year, the total water intake was greater than the total water withdrawal index (1.63 billion m3), and the total water consumption also exceeded the midstream water consumption index. Therefore, under current water demand conditions, if the total water consumption is not taken into account, water intake and consumption exceed the middle-reach indexes of the “97 Water Diversion Scheme” even with reasonable allocation of water resources. As a result, the ecological water available to the lower reaches of the Heihe River decreases, which affects the ecological development of the lower reaches. When water consumption was considered, the total water intake in each inflow year was still greater than the total water withdrawal index (1.63 billion m3), but the total water consumption was basically equivalent to the consumption indicators.
Therefore, considering water consumption, under current water requirements, water allocation could keep the water intake and consumption in the middle reaches of the Heihe River within the “Three Red Lines” intake and consumption indicators. Comparing the Zhengyi Gorge’s drainage index with its simulated discharge under different incoming water conditions, the simulated discharge is less than the drainage index, but the difference is not significant. Total water withdrawal (including the Liyuan River) in the midstream is also less than the midstream water consumption index. Under these conditions, the discharge volume of Zhengyi Gorge can satisfy the downstream ecological water demand. However, the water shortage in the middle reaches was significantly greater than without considering the water consumption index: in years when the incoming water frequency was 10%, 25%, 50%, 75%, and 90%, the water shortage was 1.64, 1.61, 1.55, 1.58, and 1.56 million m3, respectively. In summary, when water consumption is considered, the downstream ecological water demand can be satisfied through rational allocation of water resources, but at the cost of a larger midstream shortage, so additional measures should be taken: (1) return farmland so that the cultivated area is reduced to 1200 km2; and (2) use advanced technologies to save water and improve the utilization efficiency of water resources, raising the utilization coefficient of irrigation water to 0.68. Such water-saving measures can reduce water consumption and improve water efficiency. According to the results of water resources allocation, we obtained different outcomes under different inflow conditions: if the total water consumption limit was not taken into account, the water consumption in the middle reaches of the Heihe River exceeded the water consumption target.
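The effect of measure (2), raising the irrigation water utilization coefficient to 0.68, follows from the standard relation gross withdrawal = net field requirement / utilization coefficient. The net requirement and the baseline coefficient below are assumed values for illustration, not figures from the study; only the 0.68 target comes from the text.

```python
# Hypothetical illustration of water savings from a higher irrigation
# water utilization coefficient (volumes in 10^8 m3).
net_requirement = 7.0    # assumed net crop water requirement

def gross_withdrawal(net, coefficient):
    """Gross water that must be diverted to deliver `net` to the field."""
    return net / coefficient

current = gross_withdrawal(net_requirement, 0.58)    # assumed current coefficient
improved = gross_withdrawal(net_requirement, 0.68)   # target stated in the text
saving = current - improved                          # diversion avoided
```

With these assumptions the same crop requirement is met with roughly 1.8 fewer units of gross diversion, which is the mechanism by which the coefficient target reduces pressure on the Zhengyi Gorge discharge.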
If the total water consumption limit was taken into account, the water shortage in the middle reaches significantly increased. We identified two solutions. (1) Return farmland to 1200 km2, with a corresponding water demand of 1.023 billion m3 (Heihe Recent Governance Plan); our simulation showed about 10% over-extraction of groundwater, and the water consumption in the middle reaches of the Heihe could be controlled within the “97 Water Diversion Scheme”. The increase in the cultivated land area has been the main factor driving water demand, and reducing midstream water demand can achieve the water diversion target in wet years, so the cultivated land area in the middle reaches should be held at 1200 km2. (2) Strengthen water saving in the middle reaches to alleviate the contradiction of water use there. Under water-saving conditions, the water demand in the middle reaches of the Heihe River was 1.15 billion m3; with farmland additionally returned to 1200 km2, the corresponding water demand was 1.046 billion m3, and our simulation showed about 14% over-extraction, with water consumption in the middle reaches again controllable within the “97 Water Diversion Scheme”. According to our results, if water consumption and water use were controlled, farmland was returned, and water-saving measures were applied, the indicators of the middle reaches of the Heihe River could be met. The over-exploitation of groundwater could be categorized as mild, the water shortage was not significant, and water resources allocation in the middle reaches of the Heihe River was well satisfied.
Thus, returning farmland and applying water-saving measures are important for water resources allocation in the Heihe River basin, and controlling the scale of farmland is the most important measure. On the basis of our simulation, we drew the following conclusions. The “97 Water Diversion Plan” and the “Three Red Lines” control the consumption and use of water, respectively, in the middle reaches of the Heihe River. Even as water use efficiency improved and water use targets were not exceeded, water consumption still increased, which affected the ecological water of the lower reaches of the Heihe River. We therefore constructed a water resource allocation model to satisfy the “97 Water Diversion Plan” and address the shortage of ecological water in the lower reaches. Without considering the total water consumption index, even with reasonable allocation of water resources, water intake and consumption in the middle reaches of the Heihe River exceeded the intake and consumption indicators; when water consumption in the middle reaches was considered in the allocation, the intake and consumption could be controlled within the indicators. After the rational allocation of water resources, it is necessary to control the farmland area to 1200 km2 and implement water-saving measures; in particular, controlling the farmland area is of the greatest significance for the Heihe River’s water resources."} {"text": "In 2018, the most recent year for which data are available, dog bites ranked as the 13th leading cause of nonfatal emergency department visits in the United States. As dog ownership spirals upwards in the United States, it is important to continue to monitor the epidemiology of dog bite injuries. This study provides contemporary data on the incidence of dog bite injuries in the United States and in New York and profiles individuals who have been treated for dog bites in emergency departments.
The study also examines the demographic correlates of the rate of injuries at the neighborhood level in New York City and maps the rate in each neighborhood. At the national level, the study examines longitudinal data on dog bite injuries from 2005 to 2018 gathered by the Centers for Disease Control and Prevention. For New York, the study analyzes data for 2005–2018 collected by the New York State Department of Health. A negative binomial regression analysis was performed on the state data to measure the simultaneous effects of demographic variables on the incidence of dog-related injuries. A thematically shaded map of the rate of dog bite injuries in New York City’s neighborhoods was created to identify neighborhoods with higher-than-average concentrations of injuries. In both the United States and New York, the rate of dog-bite injuries increased from 2005 to 2011 and then underwent a significant decline. Injuries due to dog bites, however, still remain a sizable public health problem. Injuries are more prevalent among school-age children, inhabitants of less-densely populated areas, and residents of poorer neighborhoods. In New York City, poorer neighborhoods are also associated with fewer dogs being spayed or neutered. To reduce the rate of dog bite injuries, prevention programs – particularly those which center on teaching the dangers of canine interactions with humans – should be targeted at children. Dog bite injuries tend to be clustered in identifiable neighborhoods. Dog bite prevention programs and stricter enforcement of dog laws can target these neighborhoods. While the appellation attached to dogs is “man’s best friend,” dog bite injuries are a common occurrence.
Data covering the years 2001–2003 revealed that approximately 4.5 million individuals in the United States were bitten by dogs each year. The aims of this study are: (1) to provide contemporary data on the incidence of dog bites in the United States and in New York; (2) to furnish a detailed profile of individuals who have been treated for dog bites in New York to describe those most at risk; (3) to present the socio-demographic correlates of the rate of dog bite injuries at the neighborhood level in New York City, which can help to identify the characteristics of neighborhoods with a higher incidence of dog bite injuries; (4) to map the incidence of dog bite injuries at the local level, which can be used to target neighborhoods with a disproportionately large number of dog bite injuries; and (5) to provide data on the changing composition of the dog-owning population to help explain the epidemiological findings. The analyses conducted in this study rest principally on data collected from ED visits in the United States and New York. The national-level data are derived from the Web-based Injury Statistics Query and Reporting System (WISQARS), maintained by the Centers for Disease Control and Prevention. The state-level data come from the Statewide Planning and Research Cooperative System (SPARCS), which is under the auspices of the New York State Department of Health. This study also draws upon data gathered by New York City’s Department of Health and Mental Hygiene (DOHMH). Injury Code. For both the national and state data sets, identification of patients who were treated for a dog bite was based on two separate injury codes. The International Classification of Diseases, Ninth Revision (ICD-9) External Cause of Injury code (E-code) E906.0 – Dog Bite – was utilized for the years prior to 2015. Both the ICD-9 E-code E906.0 and the ICD-10CM E-code W54.0XXA – Bitten by dog – were utilized for the year 2015. Just the ICD-10CM E-code W54.0XXA was used for the years 2016–2018. Sociodemographic Characteristics.
Both the WISQARS and SPARCS data sets furnished information about the age and gender of patients. The SPARCS data sets also included two separate variables about the race and ethnicity of patients. A typology was created from these two variables with the following five values: “white, non-Hispanic,” “black, non-Hispanic,” “Asian, non-Hispanic,” “other, non-Hispanic,” and “Hispanic.” Importantly, the SPARCS database included the patient’s county of residence and his/her 5-digit zip code. To measure the combined effects of year, background characteristics, and geographic location on the incidence of dog bites, we conducted a negative binomial regression analysis using the patient records from New York. A negative binomial regression analysis was performed instead of a Poisson regression due to overdispersion of the data. The population-based counts of both the number of outpatients and inpatients who were bitten by a dog served as the dependent variable in this analysis. The predictor variables comprised the year, geographic location, and the demographic characteristics of the patients. Year was measured as an interval-level variable ranging in values from 1 (corresponding to the year 2005) to 14 (corresponding to the year 2018). To capture possible curvilinear effects of year on the incidence of dog bites, a multiplicative term created by squaring the year variable was also incorporated into the analysis. Geographic location was a dichotomous variable with a value of 1 indicating New York City and a value of 0 indicating New York State omitting New York City. Gender was also a dichotomous variable with a value of 1 indicating male and a value of 0 indicating female. The age variable consisted of 7 categories: under 5, 5 to 9, 10 to 14, 15 to 19, 20 to 44, 45 to 64, and 65 and older.
The racial-ethnic background of patients was made up of 5 groups as mentioned above: non-Hispanic white, non-Hispanic black, non-Hispanic Asian, non-Hispanic other, and Hispanic. Since it can be assumed that the risk of being bitten by a dog varies with population size, an offset variable was introduced into the analysis. The offset variable was created in two steps. First, population counts were tallied for each combination of year, geographic location, gender, age group, and racial-ethnic category. So, for example, one count might comprise non-Hispanic Asian females between the ages of 10 to 14 living in New York City in 2014. Altogether, this step yielded 1960 different counts. Next, natural log transformations were carried out on each of these counts. To measure the demographic correlates of the rate of dog bite injuries at the county level in New York State (N = 62), a three-step process was undertaken. First, the number of both outpatients and inpatients were combined for each county for the year 2018 (the most recent year for which data are available). Second, these figures were divided by the population of each county to obtain an injury rate. Finally, the rates were correlated with an array of socio-demographic variables at the county level derived from the American Community Survey 2014–2018 (5-Year Estimates) (U.S. Census Bureau). A similar procedure was conducted to examine the socio-demographic correlates associated with dog bite injuries at the neighborhood level in New York City: the number of outpatients and inpatients were combined for each 5-digit zip code (N = 179), and these figures were aggregated up to the United Health Fund (UHF) level (N = 42) and divided by the population of each UHF district to obtain an injury rate. These rates were then correlated with the same set of socio-demographic variables described above, calculated for each UHF district.
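The count-regression-with-offset design described above can be sketched on synthetic data. The study fitted a negative binomial model; the IRLS loop below fits the closely related Poisson model, whose offset mechanics are identical (the negative binomial adds a dispersion parameter). All strata, coefficients, and counts here are invented stand-ins, not the study's data.

```python
import numpy as np

# Synthetic stand-in for the design: injury counts per demographic stratum,
# with log(population) entering as an offset so that coefficients are
# log incidence-rate ratios.
rng = np.random.default_rng(42)
n = 500
year = rng.integers(1, 15, n).astype(float)      # 1 = 2005 ... 14 = 2018
male = rng.integers(0, 2, n).astype(float)
population = rng.integers(1_000, 50_000, n).astype(float)

beta_true = np.array([-7.0, 0.05, 0.30])         # intercept, year, male (assumed)
X = np.column_stack([np.ones(n), year, male])
offset = np.log(population)
counts = rng.poisson(np.exp(X @ beta_true + offset))

beta = np.zeros(3)
for _ in range(25):                              # IRLS / Newton iterations
    eta = X @ beta + offset
    mu = np.exp(eta)
    W = mu                                       # Poisson working weights
    z = eta - offset + (counts - mu) / mu        # working response, offset removed
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

rate_ratios = np.exp(beta)                       # incidence-rate ratios
```

Because the offset is subtracted from the working response, the fitted coefficients describe rates per unit population rather than raw counts, which is exactly why the study log-transformed its 1960 population counts.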
For this analysis, a thematically shaded map of the rate of dog bite injuries in the UHF district in which each patient resided was created. A Global Moran’s I was computed to assess whether the spatial distribution of the residences of the patients was geographically clustered or dispersed. The rates of dog bite-related injuries by age and sex for the period 2005–2018 are presented in Table (P = .046 for the curvilinear relationship). Noteworthy is that the relationship between the incidence of dog bite injuries and age group varies over time (Figure). The results of the negative binomial regression analysis examining the simultaneous effects of time and key demographic variables on the incidence of dog bites treated in an ED are displayed in Table. The effects of year and the multiplicative term of year squared are both significant. A graphic display of these terms indicates that from 2005 to 2012 the frequency of dog bite injuries increased and then from 2013 to 2018 decreased, controlling for the other variables in the analysis. The same general pattern emerges if the injury rate of just individuals who were admitted as inpatients serves as the dependent variable. Both trends mirror the results observed at the national level. As expected, age is a major determinant of the risk of injury from a dog bite. Compared with patients who are 65 and older (the reference category), patients aged 5 to 9 are 2.7 times more likely to incur a dog bite injury and patients aged 10 to 14 are 2.3 times more likely to sustain an injury. Individuals in the other age categories are also significantly more likely to be injured by a dog bite than those in the reference category. Finally, the data reveal that non-Hispanic Asians are considerably less likely to be treated in an ED for a dog bite than Hispanics (the reference category).
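The Global Moran's I used here has a simple closed form. A minimal sketch, with binary contiguity weights and toy data (no significance testing, which the study's software would add via permutation or a normal approximation), is:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for one value per areal unit and a spatial weights
    matrix (weights[i, j] > 0 when units i and j are neighbours)."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                       # deviations from the mean rate
    num = n * np.sum(w * np.outer(z, z))   # neighbour cross-products
    den = w.sum() * np.sum(z ** 2)
    return num / den

# Toy example: four districts on a line (each neighbours the adjacent ones),
# with a smoothly increasing rate, so similar values cluster spatially.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rates = np.array([1.0, 2.0, 3.0, 4.0])
i_stat = morans_i(rates, w)                # positive => spatial clustering
```

A positive statistic, as reported for the UHF districts, means neighbouring districts tend to have similar injury rates; values near zero would indicate a spatially random pattern.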
The odds ratios for the other racial-ethnic groups are not statistically significant. The table shows the socio-demographic correlates of both the percent of dogs which were spayed/neutered and the percent of dogs which were pit bulls in the 42 UHF districts. Of the breeds identified in the data set (84.6%), pit bulls were the most numerous (33.6%), followed in order by Shih Tzu (5.3%), Chihuahua (5.2%), German Shepherd (4.1%), and Yorkshire Terrier (3.1%). This finding is consistent with previous research showing that pit bulls are responsible for more bites than any other dog breed (McReynolds). Coinciding with expectations, the rates of dog-bite injuries are not uniformly distributed across the UHF districts. A choropleth map of the rates shows that the Hunts Point-Mott Haven neighborhood in the Bronx, the East Harlem neighborhood in Manhattan, the Sunset Park neighborhood in Brooklyn, and the Port Richmond and Stapleton-St. George neighborhoods in Staten Island have notably higher rates than other UHF districts (see Fig.). The Moran’s I statistic was significant (p < .001), indicating a pattern of spatial clustering. Surveys of dog owners during the last decade reveal significant changes in their demographic characteristics (Table). This study has found that the rate of dog bite injuries has been declining in recent years. This decline is in evidence at both the national and state levels of analysis. The decline has been most visible among those under the age of 19 – particularly children under the age of 9. One explanation for this downward trend might be that it is simply an artifact of the methodology employed in this study. Most of the findings contained in this study are based on dog bite injuries treated in emergency room departments. It may be the case, though, that in recent years individuals bitten by dogs have increasingly sought treatment in other venues such as private physicians’ offices or urgent care centers. While this may be a factor associated with the downward trend in dog bite injuries, another finding uncovered in this study -- a decline in the number of inpatients treated for dog bites in New York -- does not lend support to this explanation. A second explanation for the recent decline in dog bite injuries centers on the change in the profile of dog owners and the characteristics of the dogs themselves. Survey data presented in this study indicate that there has been a decline in the presence of young children in dog-owning households over the past decade. Since young children are the most likely age group to be bitten by dogs and the overwhelming majority of injuries in the United States occur in the home, the reduction in the number of younger-aged children living at home would help to explain the drop-off in dog-related injuries. Going forward, it will be important to monitor the frequency of dog-bite related injuries to see if this positive trend persists. Though dog bite injuries have declined in recent years, the extent of these injuries still constitutes a major health problem. Young children especially are vulnerable to being bitten by a dog. Prevention programs – particularly those which center on teaching the dangers of canine interactions with humans – should be targeted at this age group. This study also has noted that residents of poorer neighborhoods in urban areas are more susceptible to being injured than residents of more affluent neighborhoods. Future research needs to be conducted to increase our understanding of why there is a negative association between a neighborhood’s socioeconomic status and injury rates from dog bites. Hopefully this greater understanding will lead to a reduction in the disparity of these rates."} {"text": "Dog bites are a significant health concern in the pediatric population.
Few studies published to date have stratified the injuries caused by dog bites based on surgical severity to elucidate the contributing risk factors. We used an electronic hospital database to identify all patients ≤17 years of age treated for dog bites from 2013–2018. Data related to patient demographics, injury type, intervention, dog breed, and payer source were collected. We extracted socioeconomic data from the American Community Survey. Data related to dog breed were obtained from public records on dog licenses. We calculated descriptive statistics as well as relative risk of dog bite by breed. Of 1,252 injuries identified in 967 pediatric patients, 17.1% required consultation with a surgical specialist for repair. Bites affecting the head/neck region were most common (61.7%) and most likely to require operating room intervention (P = 0.002). The relative risk of a patient being bitten in a low-income area was 2.24, compared with 0.46 in a high-income area. Among cases where the breed of dog responsible for the bite was known, the dog breed most commonly associated with severe bites was the pit bull. The majority of injuries did not require repair and were sufficiently handled by an emergency physician. Repair by a surgical specialist was required <20% of the time, usually for bites affecting the head/neck region. Disparities in the frequency and characteristics of dog bites across socioeconomic levels and dog breeds suggest that public education efforts may decrease the incidence of pediatric dog bites. With over 4.5 million dog bite injuries reported each year in the United States, dog bites continue to be a significant public health concern. Many studies have identified trends in pediatric dog-bite injuries and interventions. Orange County, CA, where our institution resides, is the sixth largest county by population in the US, with many low-income and affluent communities in close proximity to one another.
Our academic pediatric trauma center is the only pediatric hospital serving this diverse population of over three million, making our institution an ideal setting for an investigation of the etiology and treatment of pediatric dog-bite injuries. In this study, we describe our five-year experience and aim to characterize the settings in which a surgeon is required for the treatment of pediatric dog-bite injury. We also collected information from public records and healthcare databases to evaluate external risk factors that may increase risk for dog bites, such as socioeconomic status and breed of dog. Delineating the injury patterns in this high-risk population may both streamline care and guide future prevention efforts. This was a retrospective cross-sectional study of all children aged 0 to 17 years treated for dog-bite injury during the period from 2013–2018 at our institution. The inclusion criteria were all pediatric patients presenting to the pediatric emergency department (ED) during the study period and identified in the electronic health record (EHR) as having an acute dog bite injury, coded per the International Classification of Diseases, Ninth Revision and Tenth Revision, Clinical Modification (ICD-9 E906.0 and ICD-10-CM W54.0). Exclusion criteria were bite wounds that had already received a procedure at another institution and were transferred to our institution for delayed reconstruction, patients who presented >24 hours after the injury, and any subsequent visits related to the same initial injury. Two unblinded abstractors were uniformly trained to use a pilot-tested, standardized, online data abstraction form with coding rules. Data abstraction was routinely monitored to ensure systematic data collection, including refresher training and review of coding rules. We did not exclude records with missing data; missing values for categorical variables were documented as unknown. We abstracted demographic variables, clinical variables, and information on the dog.
Wound depth was categorized as superficial, deep (full-thickness skin wounds without trauma to underlying tissue), and complex. Information on the dog breed, the patient’s relationship to the dog, and the location where the injury occurred were first abstracted from the provider notes in the EHR and then cross-referenced with information included in the Animal Bite Human Reporting Form sent to the county health department. Socioeconomic data such as median income were extracted from the American Community Survey (ACS). We calculated the relative risk of being bitten by a specific breed of dog, the relative risk of being bitten in a lower-income area, and the relative risk of sustaining a severe, rather than moderate or mild, dog-bite injury. The relative risk of being bitten by a specific breed of dog was calculated using dog population data collected by the animal shelters of our county, which collect data for all licensed dogs in the county. We ranked dog breeds according to relative risk of bite, compared to the risk of being bitten by any member of the dog population in the county. The relative risk of dog bite was mapped onto each breed in the phylogenetic tree. If no bite data was observed for a specific dog breed, the relative risk was set to one. We calculated P-values using the chi-square test for cell size >100 and Fisher’s exact test for cell size <100. In this study, the Fisher’s and chi-square P-values measured the distribution of a given variable after stratification by another categorical variable, in comparison to the distribution of all other categories summed. For continuous measures such as bite diameter, a Wilcoxon rank-sum test was used to measure the difference in distribution. We used the R programming language to conduct these analyses.
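The relative-risk and test-selection logic just described can be sketched in Python (the study used R). All counts below are invented; the 100-count threshold mirrors the rule stated in the text.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def relative_risk(bites_breed, dogs_breed, bites_all, dogs_all):
    """Risk of bite from one breed relative to the overall dog population."""
    return (bites_breed / dogs_breed) / (bites_all / dogs_all)

# Hypothetical counts: 60 bites among 4,000 licensed dogs of breed A,
# versus 400 bites among 100,000 licensed dogs overall.
rr = relative_risk(60, 4_000, 400, 100_000)      # (0.015 / 0.004) = 3.75

# 2x2 table: breed A vs all other dogs, bitten vs not bitten.
table = np.array([[60, 3_940],
                  [340, 95_660]])
if table.min() > 100:
    _, p, _, _ = chi2_contingency(table)         # large cells: chi-square
else:
    _, p = fisher_exact(table)                   # small cells: Fisher's exact
```

With a smallest cell of 60, the rule routes this table to Fisher's exact test; a breed with no observed bites would, per the text, simply be assigned a relative risk of one.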
Income and dog-bite frequency were mapped using the Choroplethr package. From 2013 to 2018, 967 pediatric patients at our children’s hospital were identified as victims of a dog bite. The mean and median ages of pediatric patients who sustained dog-bite injuries were six years and five years, respectively; the mode of the age variable was three years. After stratification into age categories of 1–5 years, 6–10 years, and >10 years of age, the 1–5 age group made up the greatest proportion of those bitten (53.4%). The risk for dog-bite injury was inversely correlated with age, with a Pearson correlation coefficient of −0.76. Our analysis of the sociodemographic data collected revealed that the racial distribution of pediatric patients who sustained dog-bite injuries was similar to the racial make-up of the community, with 64.6% of patients in the study identifying as White/Caucasian. It should be noted that patient families identifying as Latino were disproportionately represented: the 2017 ACS reported that 34.2% of the residents in the county identified as Latino, while 55.2% of the patient population in this study identified as Latino (with only 1.16% of study participants declining to answer this question). It should also be noted that a large proportion of the patient families included in this study were covered by Medicare (22.4%) or Medicaid (29.5%); 41.4% were covered by private insurance, and the remaining 6.6% were self-pay. Most injuries did not require specialist or OR services; 71.8% of bites did not require wound repair, while 17.1% of patients required specialist consultation for wound repair in the ED or the OR.
The distribution of bite severity mirrored this pattern, with 70.5% of bites classified as “superficial”; 21.1% classified as “deep” (full thickness without trauma to underlying structures); and 8.5% classified as “complex”. Analysis of the data to determine which anatomical area was most commonly affected revealed that 61.7% of bites were inflicted on the head or neck, 20.6% on the hands or arms, and 13.0% on the feet or legs. When we investigated the relationship between anatomical site of injury and type of intervention, we found that head and neck injuries were significantly more likely to require repair (P < .0001). When stratifying injuries by different levels of repair, there were statistically significant differences in the proportion of observed injuries across different anatomic sites. The largest difference in proportion was observed in head and neck injuries, which contributed 41.2% of cases not requiring repair, and 86.2%, 69.6%, and 88.5% of cases requiring repair by EPs, by surgical specialists in the ED, and by specialists in the OR, respectively. This association persisted even when “no repair” patients were removed from the dataset (P = 0.002). These data are presented in Table. Most complex wounds were treated in the OR; 9.8% of complex wounds were treated by a specialist in the ED, and 1.9% were repaired by a general EP. This pattern contrasted with that observed for deep wounds (full thickness without trauma to underlying structures), for which the majority (79.4%) were treated by an EP. The majority of superficial wounds (76.3%) required no repair. We used ZIP codes to map city-level reports of median income from the ACS. The ZIP code was used to approximate the economic status of a patient family to evaluate the association between economic status and the frequency of bites.
According to the 2017 ACS, the median income in the county is $89,000. Analysis of the study data showed that 67.9% of patients lived in areas with median annual income greater than $42,000, and 32.1% of patients lived in areas with median income of $42,000 or less. The difference in bite frequency by income area was statistically significant (P < .0001), as was the difference by Medicaid versus other payer status (P < .0001). In 61.4% of cases included in the study, the breed of the dog that had bitten a particular patient was unknown. Among the cases where the breed of the dog responsible for the injury was reported, representation was as follows: Chihuahua mix, 7%; pit bull mix, 7.6%; German shepherd mix, 3.3%; other or mixed breed, 20.4%. No significant relationship was found between dog breed and anatomical site of injury, or between dog breed and median income in the area where the dog bite occurred. There was, however, a significant association between breed and the requirement for surgical treatment by a specialist. Dog breed was a significant predictor of bite severity (P < .0001) and of bite diameter (P < .0001). Pit bull bites were found to be significantly larger, deeper, and/or more complex than the average dog bite included in this study. We constructed a phylogenetic tree of dog breeds to identify clades with an increased relative risk of bite, compared to the general dog population. Dog bite injuries continue to be prevalent in the pediatric population, especially among young children, and, similar to previous studies, we found the youngest children most affected. A previous study by our group of dog-bite injuries in the county showed that 60% of dog bites in adult patients received no intervention.
This trend is consistent with the findings of a study by Ruiz-Casares et al, which demonstrated that children in low-income families are the most vulnerable to unintentional injury. In our analysis, insurance type was used as an index for socioeconomic status. Our study shows that children in families with Medicaid or self-pay status were more likely to experience a dog-bite injury, but less likely to have their injuries repaired by specialists in the OR. It is unclear whether the difference in service utilization between private insurance payers vs Medicaid or self-payers reflects systemic obstacles or, rather, a parental preference for ED intervention based on financial concerns. Essig et al showed that the surgical management of pediatric facial dog-bite injuries by specialists in either the ED or OR had no significant effect on the risk for surgical-site infection or reoperation. Many studies have attempted to elucidate the role of dog breed in bite injuries. In the literature, the dog breeds most commonly associated with pediatric dog-bite injuries include the pit bull, Rottweiler, German shepherd, terrier, and mixed breeds. It should be noted that aggressive canine behavior is multifactorial, with genetic as well as human interference-related contributing factors. There were several limitations to our study. The socioeconomic data that we extracted from the ACS were not a true measure of family income, as these pooled data represent neighborhood-level rather than individualized patient information. The data presented in this analysis are specific to a high-volume, academic healthcare institution that serves a large and diverse community. The findings may, therefore, not be generalizable to all institutions and populations. We did not stratify the data used for analysis based on surgical subspecialty or type of dog-bite injury. Not all bites could be attributed to a specific breed or mixed breed of dog.
As a result, the relative risk of bite in some breeds may have been under-reported. Additional bias may arise for breeds with small reported populations in the community; relative-risk estimates for these breeds may be unstable because small samples may not be representative of a given dog breed. Additional studies will be designed to elucidate whether plastic surgeons, otolaryngologists, or general surgeons are more frequently involved with certain types of pediatric dog-bite injuries. Such an investigation would help to streamline workflow and to increase the use of a multidisciplinary approach in pediatric EDs. With interest, we continue to monitor and study how trends in the etiology and management of pediatric dog-bite injuries may change as social distancing alters the way that children interact with their environments. Our findings support previous reports that pediatric dog-bite injuries occur more frequently in children aged 1\u20135 years. Most dog-bite injuries in this study were caused by encounters with large dogs, and bites from pit bulls were associated with significantly more severe injury. The anatomical site affected most commonly was the head and neck region. The dog-bite injuries that most frequently require subspecialist surgical intervention are those affecting the head and neck region and those involving extensive soft tissue damage. Low socioeconomic status may increase the risk of dog-bite injury. Pediatric patients with private health insurance were more likely than others to receive surgical intervention for dog-bite injuries."} {"text": "Wearable robotic devices are designed to assist, enhance or restore human muscle performance. Understanding how a wearable robotic device changes human biomechanics through complex interaction is important to guide its proper design, parametric optimization and functional success.
The present work develops a human-machine-interaction simulation platform for closed-loop dynamic analysis with feedback control, and uses it to study the effect of soft-robotic wearables on human physiology. The proposed simulation platform incorporates the Computed Muscle Control (CMC) algorithm and is implemented using the MATLAB-OpenSim interface. The framework is generic and will allow incorporation of any advanced control strategy for the wearable devices. As a demonstration, a Gravity Compensation (GC) controller has been implemented on the wearable device, and the resulting decrease in the joint moments, muscle activations and metabolic costs during a simple repetitive load-lifting task at two different speeds is investigated. Exoskeletons and exosuits are terms used interchangeably to refer to a class of wearable assistive devices which work in tandem with the human body to provide assistance. The assistance provided by these human joint force amplifiers can augment, reinforce or even restore human performance, with applications such as muscle strength augmentation for workers or soldiers. In recent literature, Zhang et al. employed an inverse dynamics and optimization-based technique using the Anybody Inc. software to observe the effect of an assistive device on human biomechanics. In this paper, the position and velocity data of the joint from the forward dynamics are fed back, so that the inverse dynamics-based motor command generator can compensate for the error terms. The present simulation framework is implemented on the open-source platform OpenSim, and the motor command generator comprises four blocks: 1) inverse dynamics, 2) musculoskeletal model, 3) static optimization, and 4) activation dynamics. The muscle excitations are combined with the exosuit\u2019s actuator signal that is generated by the controller implemented in MATLAB. The combined signal is then sent as an input to the forward dynamics block in OpenSim, which generates the human motion and computes the metabolic cost. The joint angles from the motion are fed back to the controller to close the loop.
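The closed-loop exchange described above (a motor command computed from trajectory error, a forward-dynamics plant, and fed-back joint state) can be sketched with a toy one-degree-of-freedom system standing in for the OpenSim forward-dynamics block. The gains, inertia, and dynamics below are illustrative assumptions, not the authors' implementation.

```python
def forward_dynamics(theta, omega, torque, dt=0.001, inertia=0.1):
    # Toy plant standing in for OpenSim forward dynamics: inertia * theta'' = torque
    alpha = torque / inertia
    omega = omega + alpha * dt
    theta = theta + omega * dt
    return theta, omega

def controller(theta_des, theta, omega, kp=50.0, kv=5.0):
    # PD-style motor command generator using fed-back joint position and velocity
    return kp * (theta_des - theta) - kv * omega

theta, omega = 0.0, 0.0
for _ in range(2000):                       # 2 s of simulation at dt = 1 ms
    torque = controller(1.0, theta, omega)  # desired joint angle: 1 rad
    theta, omega = forward_dynamics(theta, omega, torque)
print(round(theta, 3))
```

In the actual framework the controller runs in MATLAB and the plant is the OpenSim musculoskeletal model; the loop structure, however, is the same.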
This closed-loop system involves continuous exchange of information between MATLAB and OpenSim and is made possible by the OpenSim application programming interface (API) commands. The present simulation framework creates a digital model of the human musculoskeletal system integrated with an external assistive device. The overall system architecture is divided into the brain computer interface (BCI), musculoskeletal biomechanics, actuator and controller modules. This overall Computed Muscle Control (CMC) architecture is presented in the accompanying figure. The functional blocks within these modules are the following. The purpose of the MCG is to use the error between the desired and actual trajectories to calculate the required muscle excitations such that the trajectory error is minimized. The MCG also uses the position and velocity feedback from the simulated output trajectory. The inverse dynamics simulator calculates the joint moments from the input kinematics, followed by static optimization to solve the muscle redundancy problem and obtain the muscle activations. Finally, the equations involved in activation dynamics are used to calculate the corresponding muscle excitation values. The inverse dynamics and musculoskeletal models within the MCG use the OpenSim API commands, while the static optimization and activation dynamics are carried out entirely in MATLAB. The MCG functionality is explained as follows. Inverse Dynamics: The desired joint trajectory, together with the error between the simulated and desired trajectories, is used to calculate the joint moments; the trajectory error is compensated through proportional and derivative feedback gains (kp and kv). Empirically, tuning the kp and kv values provided the best error compensation, especially for the simulations where the exosuit provided assistance. Musculoskeletal Model: The musculoskeletal model comprises a forward simulation of the arm26 musculoskeletal model, which is well documented in the open literature.
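The static-optimization step (minimizing the sum of squared activations while reproducing the required joint moment) has a simple closed form in a two-muscle case, sketched below. The force-times-moment-arm coefficients are invented for illustration; they are not values from the arm26 model.

```python
def two_muscle_static_opt(moment, c1, c2):
    """Minimize a1^2 + a2^2 subject to c1*a1 + c2*a2 = moment,
    where c_i = Fmax_i * moment_arm_i is each muscle's moment contribution
    at full activation. Lagrange multipliers give a_i = moment * c_i / (c1^2 + c2^2)."""
    denom = c1**2 + c2**2
    return moment * c1 / denom, moment * c2 / denom

# Hypothetical: a required elbow moment of 6 N*m shared by two elbow flexors
a1, a2 = two_muscle_static_opt(6.0, c1=40.0, c2=20.0)
print(round(a1, 3), round(a2, 3))  # → 0.12 0.06
```

The stronger muscle (larger c) receives proportionally more activation, which is exactly the load-sharing behavior the quadratic criterion is known to produce; the full problem over all muscles is solved numerically in MATLAB.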
The objective function J is chosen as the sum of squared activations, J = sum_i a_i^2, where a_i is the activation corresponding to the i-th muscle; previous literature suggests that this quadratic criterion provides a reasonable to good estimate of the muscle activations. The optimization is carried out in MATLAB. Activation Dynamics: The forward dynamics simulation in the system framework requires the muscle control signals (excitations) as input. The individual muscle excitations are calculated from their activation values using the activation dynamics equation da/dt = (u - a)/tau(a, u), where a and u are the muscle activations and excitations, respectively, and tau is the time constant whose magnitude depends upon whether the muscle activation is increasing or decreasing. The forward dynamics uses OpenSim API commands to determine the joint reaction forces and exosuit-human interaction forces, along with the error between the desired and output joint trajectories, thus refining the accuracy of the estimated muscle excitations. A gravity compensation (GC) based control strategy is implemented in the assistive device. In the present study, the assistive moment at the joint is directly proportional to the angle between the forearm and the direction of gravitational force at the elbow joint. In the assisted condition (actuator ON) the exosuit provides assistance to the elbow joint, while in the unassisted condition (actuator OFF) the exosuit doesn\u2019t provide any assistance to the elbow joint. The GC controller compensates for the gravity-dependent component of the moment acting at the elbow joint. This module is used to calculate the forces in the extensor and flexor cables such that the desired moment can be transmitted to the elbow joint. In the simulation, these forces are directly generated using the path actuators (OpenSim API).
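The first-order activation dynamics described above can be sketched as a forward-Euler update. The rise and decay time constants below are typical textbook values assumed for illustration, not values taken from the paper.

```python
def activation_step(a, u, dt=0.001, tau_act=0.010, tau_deact=0.040):
    """One Euler step of da/dt = (u - a) / tau(a, u): the time constant is
    shorter when activation is rising (u > a) than when it is falling."""
    tau = tau_act if u > a else tau_deact
    return a + dt * (u - a) / tau

# Apply a step excitation u = 1 for 100 ms and watch activation approach 1
a = 0.0
for _ in range(100):
    a = activation_step(a, 1.0)
print(round(a, 3))
```

Inverting this relation (activation to excitation) is what the MCG's activation-dynamics block does before handing control signals to the forward-dynamics simulation.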
The actuator module uses the current values of the moment arms of the flexor-extensor cables, together with the magnitude and direction of the desired moment to be transmitted to the elbow joint, to compute the tension required in the cables. In the simulation, these forces are applied directly through the path actuators in the forward dynamics block. For an actuator moment > 0, the tension is applied through the flexor cable; for an actuator moment < 0, it is applied through the extensor cable. The functionality of the BCI module in the simulation environment is to test and validate different methods of desired-motion estimation in sync with the other exosuit modules, such as the controller and actuator, by taking data from a pre-recorded EEG dataset. In the current simulation study, though, the control system uses gravity compensation control, which only requires the joint angle data as input, and the BCI module comprises a ready-made reference trajectory as input to the simulation. The algorithms for the controller and actuator modules used MATLAB for calculations. A sampling rate of 121\u00a0Hz was used for the joint angle data from 0 to 1\u00a0s. The simulation presents robust results for a range of integration step-sizes starting at 0.004 s. We developed a MATLAB based platform in the present study to access the OpenSim functionalities using the OpenSim API, and to perform the forward and inverse dynamics of the musculoskeletal system. MATLAB provided the requisite functionalities for the calculations involving the controller, actuator and muscle activations. The simulation platform is versatile and provides integration with multiple software packages; other utilities, such as a brain machine interface and finite element analysis for the study of force interaction, can subsequently be interfaced into the simulation. The simulation framework involved the OpenSim Inverse Dynamics Solver as well as the Forward Integrator (using a Runge-Kutta-Merson integrator) by default.
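A minimal sketch of the gravity-compensation moment feeding the actuator module's cable-tension rule: the sign of the desired moment selects the flexor or extensor path actuator, and the tension is the moment magnitude divided by that cable's moment arm. The mass, lever arm, moment arms, and the "angle from vertical" convention are assumed values for illustration only.

```python
import math

def gc_moment(theta_rad, mass=1.5, g=9.81, l_com=0.15):
    # Gravity-dependent elbow moment for a forearm whose center of mass sits
    # l_com from the joint, at angle theta from the gravity direction (assumption)
    return mass * g * l_com * math.sin(theta_rad)

def cable_tensions(moment, r_flexor=0.03, r_extensor=0.025):
    """Return (flexor_tension, extensor_tension): the sign of the desired
    moment selects the cable; tension = |moment| / moment arm."""
    if moment > 0:
        return moment / r_flexor, 0.0
    if moment < 0:
        return 0.0, -moment / r_extensor
    return 0.0, 0.0

M = gc_moment(math.pi / 2)  # forearm horizontal: maximum gravity moment
print(round(M, 3), [round(t, 1) for t in cable_tensions(M)])
```

Because the cables can only pull, only one of the agonist-antagonist pair carries tension for a given moment direction, mirroring the two moment-sign cases in the text.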
The static optimization carried out in MATLAB uses a large-scale interior-point algorithm. The physical exosuit comprises 1) wearable fabric and straps, 2) an actuator unit, 3) a controller unit, 4) cable routing, and 5) a battery pack. For its digital counterpart, we used a right upper body model \u201carm26\u201d as the musculoskeletal model, representing the digital human. As a part of the exosuit, the CAD designs of the exosuit straps were imported and attached to their appropriate locations over the digital human model. The force transmission system from the motor to the anchorage points is modeled directly as force generating elements between the two straps in the digital model. The force generating elements are path actuators, which apply equal and opposite forces on the segments to which they are attached. These path actuators are attached agonistically and antagonistically between the upper-arm and forearm straps. Using a prescribed controller, control signals were sent to the path actuators to create the desired level of tension. All the geometrical input parameters of the model are given in the accompanying table. The simulation was repeated for two conditions of elbow flexion, 1) a fast and 2) a slow flexion motion, with and without assistance from the exosuit. In the current simulation study, we constrained our analysis to the elbow joint only, and locked the shoulder degree of freedom (at a 0\u00b0 orientation). The following parameters were varied in the different simulation iterations. External load in hand: an external mass was attached at a distance of 29\u00a0cm from the elbow joint. Reference trajectory: the elbow joint of the simulation model undergoes motion from an initial angular position of 0\u00b0 to a final position of 90\u00b0, following a minimum jerk trajectory path for a specified time duration.
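The 0° to 90° minimum-jerk reference can be sketched with the standard fifth-order polynomial, theta(t) = theta_f (10 s^3 - 15 s^4 + 6 s^5) with s = t/T, which starts and ends with zero velocity and acceleration. Using this particular closed form is an assumption consistent with, but not quoted from, the paper.

```python
def min_jerk_angle(t, T=1.0, theta_f=90.0):
    """Minimum-jerk angle (degrees) at time t for a move of duration T."""
    s = min(max(t / T, 0.0), 1.0)   # normalized time, clamped to [0, 1]
    return theta_f * (10 * s**3 - 15 * s**4 + 6 * s**5)

print(min_jerk_angle(0.0), min_jerk_angle(0.5), min_jerk_angle(1.0))  # → 0.0 45.0 90.0
```

The "fast" and "slow" flexion conditions then correspond simply to different values of the duration T.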
Metrics of evaluation: we calculated the joint moment, activation and metabolic cost parameters from the simulation platform and observed the changes in magnitudes for different parametric variations within the simulation, i.e., varying speeds, external loads and actuator assistance (ON/OFF). The metabolic cost for the activity has been calculated as the sum of the rate of heat liberated from the body and the rate at which work is done to maintain the desired trajectory. Unlike a rigid exoskeleton, the soft exosuit envisioned in the present work does not have any mechanical joint; in this situation, the antagonistic muscles (triceps group) present relatively higher activations to distribute the joint reaction forces. Interestingly, the simulation results illustrate that there is a reduction in the joint reaction force when external assistance is provided. The interaction force between the exosuit strap and the human limb is an important parameter in the design of the exosuit. The dimensions of the straps and the padding material to be used in the strap can be calculated based on the force applied by the strap to the body, in order to bring the contact pressure within the required tolerance. The limitations of the present study include 1) the incorporation of only six muscles about the elbow joint, omitting the muscles at the shoulder and the forearm, and 2) metabolic cost reduction values that are calculated considering these six muscles only. We have used the arm26 musculoskeletal model in the present study, whereas a more realistic simulation should consider a full-body model with all muscle definitions. However, the MATLAB-OpenSim framework provides easy access to and utilization of multiple musculoskeletal models. Further, the force interaction between the actuator straps and the human twin may be studied with the addition of an FEA module. The simulation framework described in this paper utilizes the best features of OpenSim and MATLAB to develop a system that acts as a digital model of the human physiology.
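The metabolic-cost bookkeeping described above (total metabolic rate taken as heat-liberation rate plus mechanical work rate, integrated over the motion) can be sketched as a simple time integration. The sample rates below are illustrative numbers only.

```python
def metabolic_cost(heat_rates, work_rates, dt):
    """Integrate (heat rate + work rate) over the motion.
    heat_rates, work_rates: per-timestep rates in watts; dt: timestep in seconds.
    Returns energy in joules."""
    return sum((h + w) * dt for h, w in zip(heat_rates, work_rates))

# Three hypothetical samples spanning 0.3 s of a lifting motion:
cost = metabolic_cost([30.0, 32.0, 31.0], [5.0, 7.0, 6.0], dt=0.1)
print(round(cost, 2))
```

Comparing this integral between the actuator-ON and actuator-OFF conditions is what yields the reported metabolic cost reduction.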
Overall, the reduction in effort of the human muscle may be estimated using the framework, thereby indicating the efficiency of an exosuit implementing the proposed control strategy, as illustrated by the metabolic cost plots presented in the figures."} {"text": "Changes in speech can be detected objectively before and during migraine attacks. The goal of this study was to interrogate whether speech changes can be detected in subjects with post-traumatic headache (PTH) attributed to mild traumatic brain injury (mTBI) and whether there are within-subject changes in speech during headaches compared to the headache-free state. Using a series of speech elicitation tasks uploaded via a mobile application, PTH subjects and healthy controls (HC) provided speech samples once every 3 days, over a period of 12\u2009weeks. The following speech parameters were assessed: vowel space area, vowel articulation precision, consonant articulation precision, average pitch, pitch variance, speaking rate and pause rate. Speech samples of subjects with PTH were compared to HC. To assess speech changes associated with PTH, speech samples of subjects during headache were compared to speech samples when subjects were headache-free. All analyses were conducted using a mixed-effect model design. Longitudinal speech samples were collected from nineteen subjects with PTH, who were an average of 14\u2009days (SD\u2009=\u200932.2) from their mTBI at the time of enrollment, and thirty-one HC. Regardless of headache presence or absence, PTH subjects had longer pause rates and reductions in vowel and consonant articulation precision relative to HC. On days when speech was collected during a headache, there were longer pause rates, slower sentence speaking rates and less precise consonant articulation compared to the speech production of HC.
During headache, PTH subjects had slower speaking rates yet more precise vowel articulation compared to when they were headache-free. Compared to HC, subjects with acute PTH demonstrate altered speech as measured by objective features of speech production. For individuals with PTH, speech production may have been more effortful, resulting in slower speaking rates and more precise vowel articulation during headache vs. when they were headache-free, suggesting that speech alterations were related to PTH and not solely due to the underlying mTBI. Individuals with migraine report changes in speech during migraine attacks, and several studies have documented speech difficulty during the aura phase of the attack as well as prior to and during the attack. Post-traumatic headache (PTH) due to mild traumatic brain injury (mTBI) commonly has symptoms that are similar to those of migraine. The overarching goal of this study was to determine whether objective features measured from speech samples obtained from individuals with acute PTH could provide a surrogate measure of headache burden, which could have utility in the future for tracking headache persistence and recovery. Study questionnaires: subjects with PTH completed a detailed headache symptom questionnaire. All subjects completed the Ohio State University TBI Identification Method, a standardized questionnaire assessing the lifetime history of TBI for an individual (available at www.brainline.org). This study received approval from the Mayo Clinic IRB in 2019. All subjects completed written informed consent. Subjects had to be native English-speakers aged 18\u201365\u2009years. All subjects were required to have a mobile device with capability for downloading an application used to collect speech and had to be willing and able to provide a speech sample once every 3 days over a period of 3 months.
Subjects with PTH were eligible for enrollment starting on the day of mTBI and until 59\u2009days post-mTBI. Subjects had to meet criteria for acute PTH attributable to mTBI in accordance with the ICHD-3 criteria. Subjects also completed the Beck Depression Inventory (BDI), the symptom assessment of the Sport Concussion Assessment Tool (SCAT-5), and the Rey Auditory Verbal Learning Test (RAVLT). For the RAVLT, a list of 15 words is read out loud by the examiner and the examinee is asked immediately afterward to recall as many words as they can. The list is read five times and each time, the examinee is asked to recall as many words from the list as they can, in any order. Next, a distractor list is read out loud, and the participant is asked to recall only the words from the distractor list. Afterward, the participant is then asked to recall only those words from the first list, which was read 5 times. After a delay of about 20\u2009min, the examinee is asked again to recall as many words as possible from the first list. Only the delayed recall z-scores, which are a measure of episodic memory performance, were included in this study. At the first study visit, subjects were taught to download the speech application to their mobile devices. The study coordinator modeled the completion of speech elicitation tasks and the correct procedure for using the speech app. This included selecting a time and place that is comfortable, without distractions and with minimal background noise. All subjects were asked to submit a speech sample every 3 days, beginning on the day of the first study visit and continuing over a period of the subsequent 12 weeks. As it was assumed that subjects with PTH would show the most significant speech changes during the acute phase of mTBI, only the speech samples submitted during the first 30\u2009days were used for comparison between subjects with PTH and healthy controls.
When comparing subjects with PTH during headache to the headache-free phase, speech samples submitted over the first 90\u2009days were used to increase the number of available samples. The speech application was specifically developed for the objective evaluation of the following measures: sentence speaking rate, average pitch, pitch variance, vowel space area, vowel and consonant articulation precision, and the spontaneous pause rate. As part of the speech app, subjects were asked to read out loud five sentences (sentence reading task) and to use spontaneous speech to describe activities of the previous day (spontaneous speaking task). The entire speech elicitation task took approximately 3\u2009min to complete. Prior to starting the speech task, all subjects indicated whether they currently had a headache or whether they were headache free. If a current headache was reported, then individuals were prompted to rate their headache intensity on a scale ranging from 1 (mild headache) to 10 (most severe headache imaginable). The methodology for extracting and normalizing speech features is shown in the figure. First, the total speaking time in a sentence audio sample was detected by using a Voice Activity Detection (VAD) algorithm that identifies the voiced segments of the recording. For example, suppose a subject read the sentence \u201cthe supermarket chain shut down because of poor management\u201d in 4.01\u2009s. As there are a total of 15 syllables in the sentence: \u201cthe su-per-mar-ket chain shut down be-cause of poor man-age-ment\u201d, the speaking rate was calculated as 15/4.01\u2009=\u20093.74 syllables/second. The REAPER (https://github.com/google/REAPER) pitch estimator was used to extract the pitch contour from the raw audio waveform for calculating the average pitch and pitch variance.
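The worked example above can be reproduced directly, and the pitch statistics are just the sample mean and variance of the extracted contour. The pitch-contour values below are made up to illustrate that step; only the 15-syllable/4.01 s figures come from the text.

```python
import statistics

def speaking_rate(n_syllables, speaking_time_s):
    """Sentence speaking rate in syllables per second."""
    return n_syllables / speaking_time_s

# "the su-per-mar-ket chain shut down be-cause of poor man-age-ment":
# 15 syllables voiced over 4.01 s
print(round(speaking_rate(15, 4.01), 2))  # → 3.74

# Average pitch and pitch variance as sample mean/variance of a pitch contour
# (hypothetical contour values in Hz):
contour = [118.0, 121.5, 119.2, 123.8, 120.4]
print(round(statistics.mean(contour), 2), round(statistics.variance(contour), 2))
```

In the app these statistics are computed over the REAPER-estimated contour rather than a hand-written list, but the arithmetic is the same.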
The average pitch was estimated by calculating the sample mean from the pitch contour; similarly, the pitch variance was estimated by calculating the sample variance from the pitch contour. The sentence reading audio files and corresponding sentence texts were evaluated using the goodness of pronunciation (GOP) score evaluation algorithm to generate the vowel and consonant articulation precision measures. All five sentence reading samples were concatenated into a continuous audio stream, and the vowel space area was estimated using an extraction algorithm. The VAD algorithm was used to detect the timepoints during which the participant was speaking. The total speaking time was measured as the period from the speech start point to the speech stop point; the pause time was measured as the non-speech periods during spontaneous speech. The spontaneous pause rate was then calculated as the ratio of pause time over speaking time. For example, if a subject provided a spontaneous speech sample lasting 10.82\u2009s, and paused for 2.13\u2009s during the task, then the spontaneous pause rate was calculated as 2.13/10.82\u2009=\u20090.197. The pause rate was measured from the spontaneous speaking task. Because speech features are known to vary with age and sex, feature normalization was used to control for these potential confounding variables. Speech features were normalized by subject age and sex using the Mozilla common voice English database, a large open-source corpus for speech data, which served as the reference population. To normalize the features of a study participant, a nonparametric estimate of the cumulative distribution function (CDF) was computed from a subset of age/sex matched individuals in Mozilla. The features were converted to percentiles relative to this CDF and the normalized percentiles were then used as features. Differences on cohort demographics were assessed via two-sided t-tests or Fisher-exact tests, as appropriate. Speech patterns of subjects with PTH and healthy controls were compared using a mixed-effects model with random (unique) intercepts for each participant, controlling for age and sex.
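The pause-rate example and the empirical-CDF normalization can be sketched as follows. The reference sample of pause rates is invented for illustration; only the 2.13 s/10.82 s figures come from the text.

```python
def pause_rate(pause_time_s, speaking_time_s):
    """Spontaneous pause rate: ratio of paused time to speaking time."""
    return pause_time_s / speaking_time_s

print(round(pause_rate(2.13, 10.82), 3))  # → 0.197, as in the worked example

def percentile_normalize(value, reference_sample):
    """Nonparametric CDF estimate: the fraction of reference values <= value,
    i.e. the percentile of `value` within an age/sex-matched reference sample."""
    ref = sorted(reference_sample)
    count = sum(1 for r in ref if r <= value)
    return count / len(ref)

# Hypothetical pause rates from age/sex-matched reference speakers:
reference = [0.10, 0.12, 0.15, 0.18, 0.20, 0.22, 0.25, 0.30]
print(percentile_normalize(0.197, reference))  # → 0.5
```

These percentile values, rather than the raw feature values, enter the mixed-effects models as the "normalized" features.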
The effect of group (healthy controls vs. PTH) was tested on each speech measure. Age, sex, and group were treated as fixed effects. Speech patterns of subjects with PTH during headache were compared to speech patterns of subjects with PTH when they were headache-free using a mixed-effects model with random (unique) intercepts for each participant and a random (unique) slope for headache status. Age, sex, and headache status were modeled using fixed effects. The use of random slopes tests not only whether there was a mean difference in the metrics when subjects had a headache compared to when they were headache-free, but also the extent to which participants differed in their changes on the speech measures. Therefore, a significant p-value may indicate mean differences in a measure when headache is present versus absent, or indicate that participants vary in terms of how their scores differ when they have a headache or are headache-free. Given the limited sample size, the models for some speech metrics did not converge. Non-convergence occurs when the model is too complex, the sample size is too small, or the model is not supported by the data, which results in the model being unable to reach a stable solution. Therefore, only the models that converged are reported. There were no significant differences between groups for age (p\u2009=\u20090.32) or sex (p\u2009=\u20090.55); see the Subject Demographics table. Among those with PTH, 9 had mTBI that were due to motor vehicle accidents, 3 that were due to falls, 5 that were due to sports-related injuries, and 2 that were due to hitting their head in home-related accidents. Fifteen subjects with PTH had no prior mTBI, 1 subject had one prior mTBI, and 3 subjects had two prior mTBIs. Twelve subjects reported no loss of consciousness, and 7 had loss of consciousness. As part of completing the headache questionnaire, individuals reported medication use for treating headache.
Nine individuals reported treating headache with NSAIDs, and ten patients did not treat headache with medication. There were significant differences between groups on the symptom assessment of the SCAT-5, with individuals with PTH reporting more severe symptoms, and individuals with PTH had significantly lower delayed recall z-scores compared to healthy controls. Symptoms of aura, including difficulty with speech, were reported by only one individual with PTH. Although there were significant group differences on raw scores of the BDI, the mean raw scores of both groups were in the \u2018normal, non-depressed\u2019 range. On average, subjects with PTH were seen two weeks post-mTBI. Nineteen subjects with PTH and 31 healthy controls participated. Regardless of headache presence or absence, individuals with PTH had significantly reduced consonant precision (raw: p\u2009=\u20090.008; normalized: p\u2009=\u20090.0015) and vowel precision, and longer pause rates (p\u2009=\u20090.0098), relative to healthy controls. On days when PTH subjects had headache, subjects had significantly longer pause rates (p\u2009=\u20090.0043), slower sentence speaking rates and less precise vowel and consonant articulation compared to healthy controls. The table reports the p-values for each speech measure, the mean values for the two groups, and the difference between the two groups. A significant p-value indicates that the mean speech measure differed significantly between the control and PTH groups. The means and differences are based on the sample cohorts and not on the mixed-effects model, and are provided in order to give context to the p-values and to evaluate the directionality of the effect. During headache, PTH subjects had significantly slower sentence speaking rates (raw: p\u2009=\u20090.002; normalized: p\u2009<\u20090.0001) but more precise vowel articulation compared to when they were headache-free. A second table reports the p-values for the differences between the headache states, the mean values for the two headache states, and the mean differences between the two headache states.
The means and differences are based on the raw scores of the sample and not on the mixed-effects model, and are provided in order to give context to the p-values and to evaluate the directionality of the effect. Two sets of p-values are provided: p-values for the random-intercepts models and p-values for the random-intercepts-random-slopes models. As previously explained, a significant p-value in the random-intercepts model indicates that the mean speech measure differed significantly between the headache states, while a significant p-value in the random-intercepts-random-slopes model indicates significant mean differences and between-participant variability in the differences between the two states. Speech changes have previously been documented in individuals with chronic back pain. Compared to healthy controls, individuals with acute PTH demonstrated alterations in speech rate and rhythm. There are emerging data that individuals with PTH have difficulty understanding and performing cognitive-linguistic tasks and have difficulty understanding and processing rapid speech. Although not the focus of this study, participants with acute PTH did show significantly worse performance on a delayed word recall task and more cognitive, behavioral and mood related symptoms (SCAT-5). Therefore, pause rates in individuals with PTH could be an indication of word-finding difficulties and may serve as a proxy for cognitive function in individuals with PTH. However, future studies are needed that specifically relate post-mTBI symptoms, including cognitive function, to changes in speech. In the current study, individuals with PTH during headache also showed alterations in the precision of articulation, specifically reduced vowel space area relative to healthy controls. Vowel space area is an acoustic metric commonly used for measuring articulatory function.
Previous data have shown reduced vowel space area in patients with motor speech disorders. The disruption in speech pattern in subjects with PTH might be a result of brain structural or functional changes in auditory and language pathways such as the posterior thalamic fasciculus and the superior and inferior longitudinal fasciculus. However, the neural underpinnings of speech changes will need to be further interrogated by associating brain structural and functional data with speech features in subjects with PTH. Individuals with PTH had more precise vowel articulation during headache compared to when they were headache-free. It may be hypothesized that during headache, when speech production requires more effort (hence resulting in slower speaking rates), individuals need to pay more attention to the production of speech and thus paradoxically produce more precise vowel articulation. It is possible that several factors may have influenced individuals\u2019 speech patterns and introduced variance to our results, including 1) the mTBI mechanism, or 2) the number of previous mTBIs. Additionally, future studies are needed that compare speech features in subjects with mTBI without headache to those of subjects with PTH, to specifically disentangle speech changes due to mTBI from speech changes due to headache. Future studies are also needed to isolate speech alterations in individuals with PTH without a history of PTSD from individuals who suffer from PTSD without a history of mTBI. In the current study, the model for pause rate did not converge in the within-subject analysis, which is likely due to the relatively small sample size in the study.
We posit that a larger study, with more speech samples captured during periods of headache and no-headache per individual, would further show that changes in pause rate are apparent in the within-subject analysis as well. Our results indicated changes in speech rate and rhythm and alterations in precision of articulation in individuals with PTH due to mTBI relative to healthy controls, as well as a reduction in sentence speaking rate and alterations in vowel articulation precision when individuals with PTH had a headache compared to when they were headache-free, potentially suggesting that PTH-related pain can modify healthy speech patterns. Currently, there is not a way to predict when and whether an individual with PTH will recover. The current results indicate that speech detection using a speech application downloaded on a mobile device might be a practical, objective, and early rapid screening tool for assessing headache-related burden and may have potential for predicting headache recovery in subjects with acute PTH. Additionally, the recognition of speech changes in individuals with acute PTH could be important for identifying those individuals at \u2018high risk\u2019 for developing persistent post-traumatic headache and may allow physicians to begin headache treatment early, when it might be most effective, in order to prevent headache chronification. Relative to healthy controls, individuals with acute PTH show aberrations in objective speech features. Speech changes are exacerbated in PTH subjects during headache. Speech pattern analysis might have utility for assessing headache burden and recovery."} {"text": "We present SATORI, a Self-ATtentiOn based mOdel to detect Regulatory element Interactions. Our approach combines convolutional layers with a self-attention mechanism that helps us capture a global view of the landscape of interactions between regulatory elements in a sequence.
A comprehensive evaluation demonstrates the ability of SATORI to identify numerous statistically significant TF-TF interactions, many of which have been previously reported. Our method is able to detect higher numbers of experimentally verified TF-TF interactions than existing methods, and has the advantage of not requiring a computationally expensive post-processing step. Finally, SATORI can be used for detection of any type of feature interaction in models that use a similar attention mechanism, and is not limited to the detection of TF-TF interactions. Deep learning has demonstrated its predictive power in modeling complex biological phenomena such as gene expression. The value of these models hinges not only on their accuracy, but also on the ability to extract biologically relevant information from the trained models. While there has been much recent work on developing feature attribution methods that discover the most important features for a given sequence, inferring cooperativity between regulatory elements, which is the hallmark of phenomena such as gene expression, remains an open problem. High-throughput sequencing techniques are producing an abundance of transcriptomic and epigenetic datasets that can be used to generate genome-wide maps of different aspects of regulatory activity. The complexity and magnitude of these data have made deep neural networks an appealing choice for a variety of modeling tasks, including transcription factor (TF) binding prediction and chromatin accessibility prediction. An existing approach, Deep Feature Interaction Maps (DFIM), uses a network attribution method called DeepLIFT to score interactions between features in the input sequences (for caveats and architecture choices that affect this ability, see the original work). In this work we propose SATORI. We test SATORI on several simulated and real datasets, including data on chromatin accessibility in 164 cell lines in all human promoters and genome-wide chromatin accessibility data across 36 samples in Arabidopsis.
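The core of the approach described above is standard scaled dot-product self-attention over the per-position feature vectors produced by the convolutional layers. As a rough illustration (not the authors' implementation; the function names and shapes here are ours), a single attention head can be sketched as:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H, Wq, Wk, Wv):
    """One attention head over a sequence of feature vectors.

    H: (L, f) matrix of per-position features (e.g., from a CNN layer).
    Returns the attended features (L, d_v) and the (L, L) attention
    matrix whose entries score pairwise interactions between positions.
    """
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)
    return A @ V, A
```

The (L, L) matrix A is the object that an attention-based interaction method mines: large off-diagonal entries mark pairs of positions, and hence pairs of active filters/motifs, that the model treats as interacting.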
To compare our method to DFIM, we incorporated their Feature Interaction Scores (FIS) into our pipeline. We present a self-attention based deep neural network to capture interactions between regulatory features in genomic sequences. An input sequence of length L is transformed into a matrix of size 4 × L, where each position in the sequence is represented by a column of the matrix with a single non-zero element corresponding to the nucleotide at that position. The first component of our model is a one-dimensional CNN layer in which a set of filters is scanned against the input matrix. Formally, we can express the result of one-dimensional convolution as a matrix X′ defined by X′(i, j) = Σ_b Σ_l (ω_j)(b, l) · X(b, i + l − 1), where X is the input matrix, i is the position at which convolution is performed, j is the index of the filter, ω_j is the weight matrix of filter j (the sums run over the input channels b and the positions l within the filter), and B is the number of input channels (four in the case of one-hot encoded input DNA sequences). After the convolution operation, we apply the Rectified Linear Unit activation function (ReLU), given by ReLU(x) = max(0, x). Following the CNN layer we use an optional RNN layer. The output of the multi-head attention layer is combined with its input through addition and normalized by the mean and the standard deviation of the result. Empirically, we find that this final step is not only computationally efficient but also leads to better model accuracy in comparison to flattening the attention layer output. For explicit details, please refer to the Supplementary Material. The final fully connected read-out layer outputs the model's prediction: either binary or multi-label classification, depending on the experiment. For binary classification, we use the standard cross entropy loss function. For multi-label classification, we use the binary cross entropy with logits loss, ℓ = −Σ_c [y_c log σ(z_c) + (1 − y_c) log(1 − σ(z_c))], where σ is the sigmoid function, z_c is the logit, and y_c is the label for class c. For model selection and optimization, we employ a random search algorithm to tune the network's hyperparameters. For the convolutional layers we considered filter size, number of filters, and the size of the window over which pooling is performed.
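The one-hot encoding and first convolutional layer described above can be sketched directly (a minimal illustration with our own function names; real implementations use vectorized library routines):

```python
import numpy as np

def one_hot(seq):
    """Encode a DNA string as a 4 x L matrix: one column per position,
    with a single 1 in the row of that position's nucleotide."""
    row = {"A": 0, "C": 1, "G": 2, "T": 3}
    X = np.zeros((4, len(seq)))
    for i, nt in enumerate(seq):
        X[row[nt], i] = 1.0
    return X

def conv1d_relu(X, filters):
    """Valid 1D convolution of (J, 4, k) filters over a (4, L) input,
    followed by ReLU(x) = max(0, x), mirroring the first CNN layer."""
    J, B, k = filters.shape
    L = X.shape[1]
    out = np.zeros((J, L - k + 1))
    for j in range(J):
        for i in range(L - k + 1):
            # Inner product of filter j with the length-k window at i.
            out[j, i] = np.sum(filters[j] * X[:, i:i + k])
    return np.maximum(out, 0.0)
```

A filter whose weights match a short motif (say "AC") produces its largest activation at positions where that motif occurs, which is what makes the filters interpretable as sequence motifs.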
For the multi-head attention layer we tuned the dimensionality of the generated features and the size of the layer's output. Details are provided in the Supplementary Material; the model is implemented in PyTorch. To interpret the deep learning model, we extract sequence motifs from the weight matrices (filters) of the first convolutional layer, similarly to previously described methodology. We collapse the attention matrices of the k heads to a single d × d matrix by taking the maximum at each position, so that sufficient positional information is maintained to accurately detect active filters. The max-pooling operation was useful for reducing the length of the resulting sequences and thereby the computational overhead. Finally, for each identified filter-filter interaction, we generate the attention profile across all testing examples. For a given pair, this profile consists of a vector of its attention values at positions where the corresponding filters were active. An interaction pair is discarded if its maximum attention value is below a certain threshold; by default we used the value 0.10, and in the human promoter data we used 0.08 to increase sensitivity. Filter-filter interactions are then translated to TF interactions by picking the most significant TomTom hits in the appropriate TF database. Filters often learn redundant motifs and, as a consequence, a single TF-TF interaction can be captured by multiple distinct filter-filter interactions. Nevertheless, we observe that for a given TF-TF interaction, the corresponding interacting filters have very similar attention scores. The non-parametric Mann-Whitney U test is used to calculate the significance of interactions, and P-values are adjusted for multiple hypothesis testing using the Benjamini-Hochberg method. As mentioned above, to test the statistical significance of regulatory interactions, we need to compare them to a background.
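The head-collapsing and thresholding steps described above amount to a few lines; a sketch under our own naming, using the default 0.10 cutoff from the text:

```python
import numpy as np

def collapse_heads(attn):
    """Reduce k per-head attention maps (k, d, d) to one (d, d) map by
    taking the elementwise maximum, preserving positional information."""
    return attn.max(axis=0)

def candidate_pairs(A, threshold=0.10):
    """Return position pairs whose collapsed attention exceeds the
    cutoff; pairs below the threshold are discarded."""
    return [(int(i), int(j)) for i, j in zip(*np.where(A > threshold))]
```

Taking the maximum rather than the mean keeps an interaction visible even when only one head attends to it, at the cost of admitting more candidates, which the downstream significance test then filters.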
We use a biologically relevant background depending on the experiment. For binary classification problems, the negative test set is used as the background. For multi-label, multi-class, or regression problems we generate a background set by shuffling the test set sequences while preserving their di-nucleotide frequencies. Next, in the shuffled sequences, we randomly embed motifs that are generated from the CNN filters, interpreted as probability distributions, taking into account the number of times a filter is active above a given threshold in the original test sequences, using the same threshold used for motif extraction. To infer interactions between motifs using the FIS method, we closely follow the previously described strategy. To quantify interactions using SATORI or the FIS-based approach, we use the high-confidence predictions of the model. For binary classification, we pick all positive examples that are assigned prediction confidence above a specified threshold; we use a threshold of p = 0.70 in our experiments. For the background examples, we pick all the negative test examples that score below 1 − p. In the case of the multi-label classification problem, we pick our test examples based on the precision of the model's prediction probabilities: for a test example to qualify, the precision value, calculated using the given labels and their model-assigned probabilities, must be above a specified threshold (default precision threshold = 0.50). We note that for FIS scoring of multi-label classification problems, we only use the attribution values of the true positive predictions; these values are summed and used in calculating the final FIS score.
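For intuition about the background generation, one simple way to shuffle a sequence while exactly preserving its dinucleotide counts is to swap two interior bases only when their flanking bases match, since each such swap exchanges whole dinucleotides. This is a toy illustration of the idea, not necessarily the shuffling algorithm used in the pipeline:

```python
import random
from collections import Counter

def dinuc_counts(seq):
    """Multiset of overlapping dinucleotides in the sequence."""
    return Counter(seq[i:i + 2] for i in range(len(seq) - 1))

def dinuc_preserving_shuffle(seq, n_tries=2000, seed=0):
    """Attempt random swaps of interior bases s[i], s[j] (|i - j| >= 2),
    accepting a swap only when s[i-1] == s[j-1] and s[i+1] == s[j+1];
    under that condition the dinucleotide counts are unchanged."""
    rng = random.Random(seed)
    s = list(seq)
    for _ in range(n_tries):
        i = rng.randrange(1, len(s) - 1)
        j = rng.randrange(1, len(s) - 1)
        if abs(i - j) >= 2 and s[i - 1] == s[j - 1] and s[i + 1] == s[j + 1]:
            s[i], s[j] = s[j], s[i]
    return "".join(s)
```

Each accepted swap replaces the pairs (s[i-1], s[i]) and (s[i], s[i+1]) with the pairs previously found around position j, and vice versa, so the overall dinucleotide multiset is invariant by construction.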
The enrichment of the interactions detected by SATORI was significant, with a P-value of 5.98 × 10−4 using the hypergeometric test. Among those 93 TFs, our model identified 123 unique pairs of motifs that interact; we also found support for nine of the 123 interactions in the HIPPIE database of human protein-protein interactions. The 15 most frequent interactions are shown in the corresponding figure. In this work we only analyzed interactions between motifs of known TFs. Not all filters can be mapped to characterized regulatory proteins: in this dataset, TomTom returned no significant matches for 80 out of the 200 CNN filters with motif information content >3.0. The interactions of these filters require further investigation to discover the regulatory molecules associated with them. As mentioned above, the motif matching results returned by TomTom are noisy and imperfect; for example, some of the statistically significant matches are clearly incorrect. In the next experiment we evaluated the ability of SATORI to detect interactions on a genome-wide scale. For this task we chose to focus on regions of accessible chromatin in Arabidopsis, in a manner similar to our experiment in human promoter regions. More specifically, we predict chromatin accessibility from sequence across 36 Arabidopsis samples from recently published Arabidopsis DNase I-Seq and ATAC-Seq studies (GEO accession numbers are provided in the Supplementary Material). We then investigated genome-wide regulatory interactions in those regions of open chromatin. The trained network yielded 189 filters with information content above 3.0, and we obtained 100 unique matches for those filters in the DAP-Seq Arabidopsis TF database. To compare our model to DFIM, we incorporated their Feature Interaction Scores into our pipeline. Next, we compared the computation times for the two methods.
As discussed earlier, unlike the FIS method, SATORI does not require re-calculation of the gradients to estimate the interactions, leading to much faster computation times: it processed all motif interactions 8–20 times faster than FIS (see the corresponding figure). In summary, we find a very high level of overlap between the results of the two methods, which use very different approaches. This is important in view of the relatively small number of experimentally verified interactions that are available. Further wet-lab validation is needed to test the quality of the interactions reported by the two methods. High-frequency interactions consistently detected by both methods can be used as the most promising candidates for experimental follow-up. In this work we presented SATORI, a method for extracting interactions between the learned features of an attention-based deep learning model. Unlike existing methods, it only requires minimal post-processing and uses the sparsity of the attention matrix to infer the most salient interactions. We compared SATORI to the FIS interaction estimation method and reported a 10× speed-up in its computation time in most cases. Furthermore, the top predictions made by both methods show very high overlap, suggesting such interactions as promising targets for follow-up biological experiments. This high overlap, despite the large difference in approach, provides good evidence for their potential biological relevance. The proposed method can be extended in several ways. In this work, we focused on globally scoring interactions between TFs with known PWMs. This is in contrast to feature attribution methods that score the contribution of features in genomic regions of interest. We believe that the sparsity of the attention matrix could make it useful as an attribution method as well, but further experiments are required in order to validate that.
SATORI is able to detect interactions between filters, even if they do not correspond to known TFs. Furthermore, the proposed methodology is flexible enough to be applied to deep networks that integrate multiple data modalities, and has potential applications outside of computational biology. For example, it can allow discovery of interactions between different characteristics of chromatin structure, providing a better understanding of the relationship between epigenetic markers such as histone modifications, DNA methylation, and nucleosome positioning and their contribution to the regulation of gene expression. The source code for SATORI and the processed data and results are available at https://github.com/fahadahaf/satori."} {"text": "The present study examined the sustained effects of acute resistance exercise on inhibitory function in healthy middle-aged adults. Seventy healthy middle-aged adults (mean age = 46.98 ± 5.70 years) were randomly assigned to exercise or control groups, and the Stroop test was administered before, immediately after, and 40 min after exercise. The resistance exercise protocol involved two sets of seven exercises performed for a maximum of 10 repetitions, with 60 s between sets and exercises. Acute resistance exercise resulted in higher Stroop test performance under the incongruent (inhibition) and interference conditions immediately post-exercise and 40 min post-exercise. Furthermore, the difference in scores after 40 min was significant. The findings indicate that a moderately intensive acute resistance exercise can facilitate Stroop performance and has a beneficial effect on sustaining cognition involving executive control for at least 40 min. Cognitive ability is important for daily life as a main component of health-related quality of life.
However, the most impactful change in cognition with increasing age is declining executive function (EF), with cognitive impairments severe enough to compromise everyday functional abilities. According to meta-analyses of exercise and inhibitory control, acute aerobic or resistance exercises have small to moderate effects on inhibitory control in middle-aged populations. Despite these initial inquiries into the effect of acute exercise on inhibitory aspects of executive control, little research has examined the time course of the cognitive benefits. Therefore, a rationale arises for grounding our current theoretical understanding of the acute exercise intensity-cognition interaction in arousal-performance interaction theory, a prominent theoretical framework in this literature. The purpose of this study was to examine the immediate and sustained effects of acute resistance exercise on inhibition at 5 and 40 min post-exercise. A previous meta-analysis reported that the largest effects of exercise on cognitive performance could generally be observed during a period of 11–20 min after exercise. A total of 70 community-dwelling healthy adults, aged 40–60 years, were initially recruited in Taipei, Taiwan, and randomly assigned to an exercise group (n = 35) or a control group (n = 35) by drawing lots. Detailed characteristics of the participants are presented in the corresponding table. The potential participants were included if they met the following criteria: (1) they met the requirements of the physical activity readiness questionnaire (PAR-Q), to ensure their safety when performing a single bout of exercise, and (2) they achieved a score of more than 26 on the Chinese version of the mini-mental state examination (MMSE), verifying that they may be considered cognitively normal. The Stroop test was administered using the Vienna Test System.
The Vienna Test System is a platform for computerized executive function assessment: digital tests can be administered with automatic and comprehensive scoring, and it includes a series of computerized tasks. Perceptual motor speed is measured when reading color words and naming the color that the word is (or is not) written in. The latency of correct responses is recorded as reaction time. The test has 128 trials, with 5 trials for practice. Trials present the four types of Stroop task, namely reading words and naming colors under the congruent (baseline) condition, and reading and naming words under the incongruent condition; the difference in reaction time between the conditions is the interference tendency and is referred to as the Stroop effect. A positive value indicates an increased interference tendency, whereas a negative value is characteristic of a reduced interference tendency. In the congruent condition, the reading-word and naming-color stimulus is displayed in the color matching the meaning of the word. In the incongruent condition, the reading-word stimulus is displayed in a color matching the meaning of a different stimulus. The participants must read the word aloud and disregard the color the word is written in; alternatively, with the naming-color stimulus, the participant must state the color of the word and disregard the word's meaning. The stimuli for each condition were displayed on a 15-inch laptop screen, and the test length was approximately 8 to 10 min. The validity and reliability of the Stroop test have been extensively reported. A Polar watch was worn by each participant to measure heart rate (HR) during the resistance exercise stage, with the HR data from the monitor being recorded at 1-min intervals. HR data were assessed as an indicator of the physiological arousal induced by the resistance exercise.
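The interference tendency defined above is simply the difference between mean incongruent and mean congruent reaction times; as a minimal sketch (our own helper, with times in milliseconds):

```python
def stroop_interference(rt_congruent, rt_incongruent):
    """Stroop effect: mean incongruent RT minus mean congruent RT.
    A positive value indicates an increased interference tendency;
    a negative value indicates a reduced one."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_incongruent) - mean(rt_congruent)
```

For example, mean RTs of 510 ms (congruent) and 620 ms (incongruent) give an interference tendency of 110 ms.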
Four HR variables were identified: pre-exercise HR, treatment HR, immediately post-exercise HR, and 40-min post-exercise HR. Pre-exercise HR was determined 60 s before the performance of the first Stroop test (pre-exercise); the treatment HR was the average HR during the moderate-intensity or control treatment; and the immediately post-exercise HR and 40-min post-exercise HR were assessed 60 s before the participant performed the Stroop tests immediately post-exercise and 40 min post-exercise, respectively. Each participant completed the rating of perceived exertion (RPE), defined as their perception of their own level of effort during the exercise, according to the original scale by Borg. The participants were instructed to avoid engaging in any other resistance exercise training or any other physical activities such as jogging, running, yoga, dance, tennis, or table tennis for a week before the experimental session. The 10-repetition maximum (RM) represents the maximum weight an individual can successfully lift in 10 repetitions and approximates 70% of the 1-RM. After stretching, the participants were asked to warm up with light resistance exercise for 10 min, and were then instructed to change the load and continue the testing process until they reached a load level that they would be able to lift for a maximum of 10 repetitions. Generally, the instructor was able to adjust the loads such that the 10-RM could be measured within four testing sets. The following seven muscle exercises were selected: the bench press, shoulder press, dumbbell rows, alternating bicep curls, triceps pushdowns, leg extensions, and leg curls. The participants were requested to visit the Sport Pedagogy Laboratory on two separate testing days at least 48 h apart. Trained laboratory researchers administered questionnaires regarding the inclusion criteria, the 10-RM tests for each of the seven muscle groups, and the Stroop tests.
On Day 1, each participant was invited to visit the laboratory at the confirmation stage and was presented with a brief introduction to the experiment. The informed consent form, MMSE, PAR-Q, and medical health history questionnaire were given to them to read and complete. After completing the questionnaires on Day 1, each participant was asked to sit quietly, individually, on a comfortable sofa in a dimly lit room for 15 min and instructed to attach the HR monitor. After the 15-min period, the HR baseline was recorded, and the participant's 10-RM for each of the muscle groups was determined. The participants were told to avoid the use of stimulants such as caffeine the day before and the day of the tests. On Day 2, each participant was again asked to sit quietly, individually, on a comfortable sofa in a dimly lit room for 15 min. Then, pretest scores for the Stroop test were collected, and the participant was asked to press the correct color button on the keyboard for each stimulus in each condition. In the exercise group, the participants then warmed up for 10 min and conducted the resistance exercises for 25–30 min. At the start of each session, the participants were fitted with a Polar Cybex 770T's CardioTouch HR monitor to assess their physiological arousal and the effects of the selected intensities on cardiovascular responses. Subjective RPE was collected after each set of exercises using a category-interval rating scale ranging from 6 to 20. In the control group, the participants were instructed to sit quietly in a well-lit room and read exercise-related magazines for 30 min.
Each of the participants was given US$20 as compensation for participating in the study and was debriefed by a member of the research team. To ensure homogeneity of potential confounders between the control and exercise groups, an analysis of independent samples was applied using a t-test or a χ2 test to compare demographic data and pre-exercise variables with continuous and discrete scales, respectively, between the groups. The analyses indicated no significant differences between the groups for age, height, weight, BMI, years of education, resting HR, or MMSE score. Furthermore, no differences were observed for sex ratio. Descriptive data are summarized in the corresponding table. One-way repeated-measures ANOVA was used to analyze HR in the exercise group, and the results revealed a significant time effect (F = 1027.75, p < 0.0001, partial η2 = 0.99), indicating that the treatment HR was significantly higher than the immediately post-exercise HR and the follow-up HR, which was also significantly higher than the pre-exercise HR. The average RPE value during the resistance exercise was 15.25 ± 1.46. Descriptive data for the exercise manipulation check are summarized in the corresponding table. For the congruent condition of reading words and naming colors, the 2 × 3 mixed ANOVA revealed a significant Time effect (Fs = 16.14 and 13.13, p < 0.0001, partial η2 = 0.33 and 0.28) and significant interactions of Group with Time (Fs = 5.06 and 4.66, p < 0.01, partial η2 = 0.13 and 0.12). However, no significant Group effect was observed (Fs = 0.41 and 0.01, p > 0.05). Follow-up decompositions indicated significant Time effects for the congruent condition of reading words and naming colors in the exercise group (Fs = 13.21 and 11.44, p < 0.001, partial η2 = 0.45 and 0.41), wherein pairwise comparisons revealed that the participants exhibited faster reaction times (RTs) in the immediately and 40-min post-exercise tests than in the pre-exercise test. No RT difference was observed between the two post-exercise tests.
As expected, no significant Time effects were observed for the congruent condition of reading words and naming colors in the control group (Fs = 2.96 and 1.92, p > 0.05). The results under the congruent condition are presented in the corresponding figure. To test the hypothesis regarding the effects of acute resistance exercise on the incongruent condition of reading words and naming colors, we conducted a 2 × 3 mixed ANOVA. The results revealed a significant Time effect (Fs = 17.06 and 13.80, p < 0.0001, partial η2 = 0.34 and 0.29) and significant interactions of Group with Time (Fs = 9.67 and 10.88, p < 0.0001, partial η2 = 0.22 and 0.25). The results also indicated a significant Group effect (Fs = 4.22 and 4.22, p < 0.05, partial η2 = 0.06 and 0.06). The exercise group exhibited faster RTs than the control group did in the incongruent condition of reading words and naming colors immediately post-exercise and 40-min post-exercise (t(68) = −3.04, −3.96, −3.02, and −3.07, p < 0.01). Follow-up decompositions indicated significant Time effects for the exercise group under the incongruent condition of reading words and naming colors (Fs = 20.72 and 20.64, p < 0.001, partial η2 = 0.56 and 0.56), wherein pairwise comparisons revealed that the participants again exhibited faster RTs immediately and 40-min post-exercise than at pre-exercise, and no significant difference was noted between the post-exercise scores. As expected, no significant Time effects were observed for the incongruent condition of reading words and naming colors in the control group (Fs = 1.33 and 0.24, p > 0.05). The RTs under the incongruent condition are presented in the corresponding figure. For the interference tendency, the 2 × 3 mixed ANOVA revealed significant Time effects (Fs = 5.04 and 3.49, p < 0.01, partial η2 = 0.13 and 0.10) and significant interactions of Group with Time under the interference conditions (Fs = 6.36 and 4.01, p < 0.05, partial η2 = 0.11 and 0.11).
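The partial eta squared values reported alongside each F statistic follow the standard conversion η2p = F·df_effect / (F·df_effect + df_error); a small helper illustrates it (the degrees of freedom are not given in the text, so the numbers in the example are purely illustrative):

```python
def partial_eta_squared(F, df_effect, df_error):
    """Effect size for an ANOVA term, computed from its F statistic
    and the effect and error degrees of freedom."""
    return (F * df_effect) / (F * df_effect + df_error)
```

For instance, F = 10 with 1 effect and 10 error degrees of freedom corresponds to a partial eta squared of 0.5.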
A significant Group effect was also observed (Fs = 26.07 and 47.19, p < 0.001, partial η2 = 0.27 and 0.41). Notably, the exercise group had shorter RTs than the control group immediately and 40-min post-exercise (t(73) = −5.17, −5.29, −5.04, and −6.07, p < 0.001). Follow-up decompositions indicated significant Time effects for the exercise group (Fs = 13.52 and 6.38, p < 0.01, partial η2 = 0.45 and 0.28), wherein pairwise comparisons revealed that the participants exhibited faster RTs in the post-exercise tests than in the pre-exercise test; no significant RT difference was observed between the two post-exercise tests. As expected, no significant Time effects were observed for the control group (Fs = 1.17 and 0.08, p > 0.05). The results suggested that the responses of the exercise group were quicker both immediately post-exercise and 40-min post-exercise compared with those pre-exercise. They were also significantly quicker than those of the control group. The RTs for the interference tendency are presented in the corresponding figure. Overall, the 2 × 3 mixed ANOVA for the interference tendency indicated significant Time effects, consistent with facilitated inhibitory control performance. This study had limitations that warrant caution with respect to the interpretation of its results and future research. Nonetheless, based on the findings of the present study, further research efforts in this field are suggested. First, the outcome of the study may have been affected by the small size and diversity of the sample; however, given the significance and size of the effects, we believe that a larger sample size would not have significantly altered the outcome.
Second, an additional limitation of these data is the inability to gain a mechanistic understanding of the effects of acute resistance exercise on inhibitory control. That is, despite the interesting and positive sustained impacts on RT observed herein, little is known regarding how accuracy under the interference condition and the interference tendency on the Stroop test were influenced by an acute resistance exercise bout. Third, future studies should be designed to further our understanding of the measurement of intra-subject variation in RTs. Intra-subject variation in RTs is a measure of a subject's consistency in responding to congruent and incongruent stimuli, often quantified as the standard deviation and coefficient of variation across a task period; higher intra-subject variation, reflected in larger standard deviations, is associated with greater variability, or inconsistency, of responses. The recording of ERPs could be used to explore stimulus and response conflicts within the processes underlying the standard deviation and coefficient of variation of RTs. Whether these measures are related to localized brain regions is unknown, and neuroimaging could provide clarification. Finally, resistance exercise training and arousal-induced alterations in lateral prefrontal cortex neural activity and biochemical responses remain to be investigated. Our findings suggest that moderate-intensity resistance exercise promotes EF in healthy middle-aged adults. In summary, this study has extended the literature by providing evidence that acute resistance exercise has positive, sustained impacts on multiple cognitive functions, as assessed by the Stroop test, in healthy middle-aged adults.
Furthermore, the results suggest that moderate-intensity resistance exercises have positive impacts and sustained effects on particular types of EF in middle-aged adults, with those impacts being larger under task conditions that place greater demands on inhibitory control cognition.The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.The studies involving human participants were reviewed and approved by the University of Taipei\u2019s Institutional Review Board (IRB-2020-011). The patients/participants provided their written informed consent to participate in this study.C-CC and C-JH contributed to conception and design of the study and wrote sections of the manuscript. M-CH and Y-HC organized the database. W-YW and M-YH performed the statistical analysis. C-CC, M-CH, Y-HC, W-YW, MY-H, and C-JH wrote the first draft of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "Ongoing hemorrhage from hepatobiliary and pancreatic injuries continues to daunt even the most experienced surgeon. Despite the widespread centralization of elective hepatopancreatobiliary (HPB) surgery to high-volume centers, HPB trauma remains relatively common and requires a rapid and thoughtful approach . 
Although blood flow within the pancreatic gland is impressive, the dominant source of hemorrhage associated with pancreatic trauma remains the mesenteric venous structures that surround it. More specifically, the superior mesenteric, portal, inferior mesenteric, and splenic veins remain the prime sources. Although the anatomy of the portal and superior mesenteric veins is relatively constant, the insertion point of the inferior mesenteric vein can vary dramatically . It is also generally stated that the portal vein does not possess any branches arising from its anterior surface . As with many dogmatic anatomic comments, this \"rule\" is frequently broken. The presence of large venous tributaries from the portal vein into the head and uncinate of the pancreas is absolute however. The largest of these are the gastroepiploic trunk and the first jejunal venous branch. Hemorrhage from these structures can be torrential and unforgiving.Although portal venous injuries in the retropancreatic location are notoriously difficult to access, human anatomy has provided us with a powerful temporizing maneuver. More specifically, digital pressure from the front of the gland is often adequate to temporarily control venous hemorrhage via simple compression. In cases where this fails, a rapid Kocher maneuver is required to allow concurrent anterior and posterior digital pressure and therefore occlusion of the portal vein and distal SMV. This maneuver provides the surgeon time to add a second suction device, call for experienced assistance, notify the anesthesiologist for predicted massive blood loss, and have the nursing staff ready all vascular instrumentation and suture selections that the surgeon will require. If the hemorrhage appears to arise from the bottom of the pancreas , it is helpful to rapidly mobilize the right colon to provide improved inferior exposure and eventually control. 
Inspection of the right lateral small bowel mesentery will also provide insight into possible hemorrhage caudal to the pancreatic head/uncinate. As with most massive bleeding, the importance of an educated assistant who can expose the venous injury (both suctioning of blood and retraction of adjacent organs) cannot be overstated. Excellent help is typically the difference between an efficient, smooth repair and flailing with massive blood loss and patient demise. If pressure and packing do not persistently control hemorrhage from the retropancreatic portion of the portal vein, then rapid exposure and subsequent ligation/repair of the vessel may be required. This can be achieved by dividing the neck/body of the pancreatic gland. This maneuver is discussed significantly more often in the literature than it is actually performed in the real world. It also carries with it a substantial risk of inadvertently enlarging the venous injury. If this maneuver is triggered, however, rapidly place 4 retraction sutures (3-0 Prolene on MH needles) through the pancreatic neck in a figure-of-eight manner immediately lateral and medial to the portal vein (at both the top and bottom of the gland). There is significant risk of ligating the hepatic artery at the top, so you must be extremely accurate. This is not the time to flail or lose focus. These 4 sutures will provide significant retraction from both sides of the pancreatic neck, allowing the surgeon to use high-voltage Bovie electrocautery to transect the pancreas quickly (in combination with adequate ongoing suction). Remember that the vein is not typically dissected off the back of the pancreas, so you need to slow down as you move closer to the posterior margin of the pancreas. Repairs to the portal or superior mesenteric veins themselves are generally performed with a 5-0 or 6-0 RB-1 Prolene. When necessary, venous tributaries from the portal vein can generally be ligated. 
Even the portal vein itself can be ligated in a damage control scenario. Interestingly, these patients display superior survival when the literature is evaluated as a whole. This likely reflects the varying comfort level among surgeons attempting to address these difficult injuries. Another possible solution is to place a temporary intravascular shunt (TIVS) in continuity for major portal venous injuries. Portal veins are best shunted with a 22F to 26F chest tube, or a large nasogastric feeding tube for small women. Once inserted into the vessel (in-line), the TIVS can be locked into place via silk ties or double vessel loops that are tightened/locked with clips. If silk ties are selected, it must be remembered that the vessel itself will need to be trimmed back proximal to the silk to ensure there is no ischemia at the time of the reconstruction. This may become a problem in areas where every bit of vessel length is critical. Ongoing hemorrhage from the splenic vein is much less treacherous. Bleeding to the anatomic left of the portal vein can be solved via a rapid distal pancreatectomy/splenectomy with bulk ligation of the splenic artery and vein. Energy instrumentation and staplers can make this endeavor simple and efficient. More specifically, divide the short gastric and gastroepiploic vessels, mobilize the transverse and left colon, and finally free the spleen with an energy instrument that works well within pools of blood. A multitude of staplers can then be effectively utilized to divide the pancreatic body concurrent with ligation of the splenic artery and vein. The TX-30v linear stapler (Ethicon) is a workhorse for the elective HPB surgeon and is superb for this indication. Alternatively, in the context of a soft gland, a laparoscopic stapler can also be used to divide these structures en masse. The dominant risk is mistaking the hepatic artery for the proximal splenic artery and dividing it. 
A quick test clamp of the splenic artery with a large bulldog clamp eliminates this potential disaster. Prior to placing either stapler, however, the surgeon must rapidly dissect around the pancreatic body and place a vessel loop or umbilical tape for complete control of the gland. This is often best done with a single well-educated finger. Remember that the retroperitoneum at this location is generally spared from hemorrhage and easily accessible from the bottom once the transverse colon is mobilized caudally by approximately 1 cm. The splenic vein will remain stuck to the underside of the elevated pancreas, and the splenic artery can be palpated. Although this short series of maneuvers may sound challenging, it becomes much easier when rapidity is in demand. The most extreme damage control maneuver for a splenic venous injury remains bulk ligation with a large suture, followed by packing. Although it is beyond the aims of this article, the dominant postoperative complications surrounding pancreatic injuries remain leaks from preceding pancreatoduodenal closures and/or anastomoses. Critically injured patients rarely tolerate the physiologic consequences of uncontrolled leaks. Pancreatic juices are also highly dangerous in the context of a fresh vascular repair, anastomosis, or TIVS. As a result, generous closed suction drainage must be considered to control any potential pancreatic leaks after the ongoing hemorrhage has been stopped. The dominant challenge with hepatic trauma generally surrounds the management of the hemodynamically unstable patient with a bleeding, high-grade liver injury. More specifically, these injuries can be difficult to expose, temporize, and/or repair for any surgeon who does not make his or her living in this region of the upper abdomen. These patients often present in physiologic extremis and therefore require damage control resuscitation techniques. 
Early recognition of their critical condition, as well as immediate hemorrhage control, is essential. Unlike the spleen and kidney, the liver cannot generally be resected on a rapid, on-demand basis. Regardless of your training, these injuries will engage all of your senses, test your technical skills, require the utmost focus, and demand great teamwork from you and your colleagues. Patients with major injury as a result of either blunt or right upper quadrant penetrating trauma must undergo an immediate Extended Focused Assessment with Sonography for Trauma examination in the trauma bay to confirm the presence of large-volume intraperitoneal fluid. This examination is repeatable and should be used to reevaluate patients in urban centers who present immediately following their injuries. Massive transfusion protocols as part of a damage control resuscitation must be initiated early during the patient assessment process. If the patient rapidly stabilizes their hemodynamics, they should undergo an emergency computed tomography (CT) scan of their torso. If they remain clinically unstable, they must be transferred to the operating theater without delay. Hemorrhage control is the dominant driver limiting survival. Collateral issues such as optimal intravenous access, imaging of other areas, and fracture fixation are secondary problems. Thankfully, not all patients with liver injuries are actively dying secondary to hemorrhage. More specifically, in hemodynamically stable patients without CT evidence of a hepatic arterial blush, admission and close observation are warranted. In hemodynamically stable patients with a hepatic arterial blush, immediate transfer to the interventional angiography suite (or hybrid operating room) is recommended. Hepatic angiography and/or portography with selective embolization is indicated with either autologous clot or absorbable embolization medium. 
In persistently hemodynamically unstable patients, however, an immediate laparotomy is essential. More to the point, early recognition of a patient with ongoing hepatic hemorrhage and immediate transfer to the operating theater are crucial. Delays will lead to the loss of life. The patient should be rapidly prepared and draped with available access from the neck to the knees. Vascular instruments and balloons must be open and at the ready. A midline laparotomy from xiphoid process to pubic bone should be performed with 3 passes of a sharp scalpel. The peritoneal cavity should be packed in its entirety with laparotomy sponges for patients with blunt liver injuries. Although the ligamentum teres can be ligated, the falciform ligament may be left intact. This offers a medial wall against which to improve packing pressure. The right upper quadrant should be evaluated prior to any potential intraperitoneal packing for penetrating injuries. If hemorrhage continues, an early Pringle maneuver (clamping of the porta hepatis with a vascular clamp) is recommended. This is both diagnostic and potentially therapeutic. If bleeding continues despite application of a Pringle clamp, a retrohepatic inferior vena cava (IVC) or hepatic venous injury is likely (assuming that a replaced left hepatic artery is not the source of inflow occlusion failure). Critically injured patients in physiologic extremis do not tolerate extended Pringle maneuvers to the same extent as patients with hepatic tumors undergoing elective hepatic resection. Forty minutes represents the upper limit of viability. If the liver responds to packing but bleeding resumes when the packs are removed, the patient should be repacked and transferred to the ICU with an open abdomen once damage control of concurrent injuries is complete. Cover the liver with a plastic layer of sterile x-ray cassette material to avoid capsular trauma upon eventual unpacking. 
It should be reemphasized that all damage control procedures should be completed in less than 1 hour. Return to the operating suite in patients with packed abdomens should occur in 48–72 hours. If liver hemorrhage control is dependent on maintenance of a Pringle maneuver despite packing, call for senior assistance, mobilize the right lobe, and suture the IVC or hepatic veins with 4-0 Prolene on SH needles. These patients may also require total vascular exclusion/occlusion (TVE) of the liver. This technique involves complete occlusion of the infrahepatic IVC, suprahepatic IVC, porta hepatis (Pringle maneuver), as well as an aortic cross-clamp within the abdomen. If TVE is pursued without concurrent clamping of the aorta, the patient will often arrest due to a lack of coronary perfusion. Prior to performing TVE of the liver, it is imperative to allow the anesthetic team to resuscitate the patient to the best of their ability to facilitate IVC clamping. We prefer to obtain suprahepatic IVC control within the abdomen in patients with a normal length of IVC inferior to the diaphragm. An alternate approach involves accessing the IVC immediately prior to its entry into the heart. This 2-cm length of IVC is easily accessible by opening the pericardial sac following division of the central tendon of the diaphragm. Alternatively, it can also be accessed from the thorax if a thoracotomy has already been performed. Control of the infrahepatic IVC can be rapidly gained by opening the overlying peritoneum and bluntly encircling the IVC cephalad to the right renal vein. Veno-venous bypass is also a theoretical option in some very specific scenarios but is rarely required if the patient can be adequately resuscitated to allow for IVC clamping. 
Furthermore, a lack of transplantation training in most trauma/general surgeons precludes expeditious use of this bypass. In the case of central hepatic gunshot wounds or deep central lacerations where access and exposure are difficult, ongoing hemorrhage should be stopped with balloon occlusion. Either a Blakemore esophageal balloon or a variant (red rubber catheter with overlying Penrose drain and 2 silk occlusion ties) is exceptional at stopping ongoing bleeding at the bottom of deep central hepatic injury tracts (including retrohepatic IVC injuries). Foley catheters of varying sizes are also helpful. These should be deflated approximately 72 hours after the initial placement. If hemorrhage continues, they should be reinflated and left in situ for 3 additional days. Another excellent damage control option for major IVC disruption, portal vein injuries, and combined portal venous/hepatic arterial trauma is the use of a TIVS. Although a large variety of tubes can be utilized as a TIVS, they do not need to be heparin bonded. More specifically, TIVS typically fails for 1 of 3 reasons: (1) selection of a tube that is too small for the caliber of the disrupted vessel, (2) kinking of the tube itself, and (3) inadequate concurrent outflow. IVC injuries in adults are usually best approximated with a 32F to 36F chest tube. Portal veins are best shunted with a 22F to 26F chest tube, or a large nasogastric feeding tube for small women. Hepatic arteries are best served by inserting pediatric nasogastric or feeding tubes. These TIVS may be locked into place with either silk ties or double-looped vessel loops and locking clips. 
As previously mentioned, the surgeon should consider the latter method in scenarios where preserving vessel length is critical, because the vessel will need to be further trimmed back beyond the silk ties when reconstruction is eventually attempted. Vascular reconstruction following insertion of a TIVS should ideally involve an experienced HPB surgeon. The timing of this reconstruction will depend entirely upon the physiological and biochemical recovery of the patient. As soon as this is achieved in the critical care suite, the patient should return to the operating theater for repair. The surgeon must also ensure that a wide range of potential conduits is available and ready. One superb conduit choice for IVC reconstruction following TIVS removal is bovine pericardium (or biologic mesh) that is fashioned into a tube of the appropriate size. This conduit performs quite well in leaking/infected traumatic fields. Although TIVS has revolutionized damage control trauma scenarios, the traditional damage control option for vascular trauma, ligation, remains relevant. It is clear, based on a literature review of portal venous and superior mesenteric venous trauma, that ligation of these vessels, rather than reconstruction, is often superior. This observation is likely multifactorial but almost certainly relates to surgeon unfamiliarity with these vessels in anatomically hostile regions. Similarly, ligation of the IVC is also well recognized as a successful damage control maneuver. If the IVC is ligated, wrapping the patient's legs with compression garments, elevation of the patient's lower extremities above the heart, and judicious fluid management for 5 postoperative days are critical. Although unusual, patients with penetrating injuries to the hepatic artery will present as critically ill and may require ligation. Portal vein injuries should ideally be repaired with 5-0 or 6-0 Prolene once control is obtained. 
Clamps above and below the injury are essential for visualization. Alternate damage control options include TIVS with a small chest tube conduit or ligation (assuming the hepatic artery is intact). If an atrial–caval shunt is contemplated, 2 experienced surgical teams (1 for the chest and 1 for the abdomen) are essential to ensure both rapidity and efficiency. The decision to pursue this shunt must be made early in the exploration process. Unfortunately, these shunts rarely result in patient salvage in even the most experienced trauma centers. If a center and/or surgical team considers this maneuver to be part of their armamentarium for treating ongoing hemorrhage from retrohepatic injuries, a prestocked kit with all the necessary items must be readily available. Similar to utilizing TIVS and occlusion balloons, demanding these instruments in the wee hours of the morning among a stressed clinical team for a decompensating patient is likely to fail. Remember that Allis clamps are also excellent for the initial control of most venous hemorrhage. In conclusion, massive ongoing hemorrhage associated with pancreatic trauma is typically compressible with a well-educated hand/finger. A detailed knowledge of anatomy and a talented assistant will make the difference between a huge save and a long presentation at a morbidity and mortality conference. Although the published history of hepatic trauma is littered with descriptions of technical maneuvers ordered in a hierarchical scheme, very few are relevant in the context of modern trauma care. Packing of hepatic hemorrhage controls the vast majority of ongoing bleeding in critically ill patients. Selective use of vessel ligation, parenchymal resection, and hepatic transplant remain less common strategies. Ongoing hemorrhage from major hepatic injuries remains the most challenging of all intraperitoneal injuries due to issues with exposure, blood flow, and difficult technical repairs. 
Initiate damage control resuscitation and massive transfusion protocols early in your assessment. Rapid completion of damage control procedures is essential (<1 hour). Flailing and indecision lead to prolonged operative times and patient demise. If diagnosis and therapy are rapid, patients who present in physiologic extremis as a result of major hepatic hemorrhage have a good chance of survival in the context of a prolonged hospital stay. Elective liver surgeons can be of superb assistance when available. Dr. Ball wrote and edited all components of this manuscript. Dr. Ball has no conflicts of interest to declare. There was no funding for this article. No patient data were included in this manuscript. No ethics approval was obtained."} {"text": "Some studies have revealed a close relationship between metabolism-related genes and the prognosis of bladder cancer. However, the relationship between metabolism-related long non-coding RNAs (lncRNAs), which regulate the expression of genetic material, and bladder cancer remains unexplored. Accordingly, we developed and validated a prognostic model based on metabolism-associated lncRNAs to analyze the prognosis of bladder cancer. Gene expression data, lncRNA sequencing data, and related clinical information were extracted from The Cancer Genome Atlas (TCGA), and metabolism-related gene sets were downloaded from the human metabolism database. Differential expression analysis was used to screen differentially expressed metabolism-related genes and lncRNAs between tumors and paracancerous tissues. We then obtained metabolism-related lncRNAs associated with prognosis by correlation analysis, univariate Cox analysis, and least absolute shrinkage and selection operator (LASSO) regression. A risk scoring model was constructed based on the regression coefficients of the selected lncRNAs calculated by multivariate Cox analysis. According to the median risk score, patients were divided into a high-risk group and a low-risk group. 
Then, we developed and evaluated a nomogram including risk scores and clinical baseline data to predict prognosis. Furthermore, we performed gene-set enrichment analysis (GSEA) to explore the role of these metabolism-related lncRNAs in the prognosis of bladder cancer. By analyzing the extracted data, our research screened out 12 metabolism-related lncRNAs. There were significant differences in survival between the high- and low-risk groups divided by the median risk score, with the low-risk group having a more favorable prognosis than the high-risk group. Univariate and multivariate Cox regression analysis showed that the risk score was closely related to the prognosis of bladder cancer. We then established a nomogram based on the multivariate analysis; after evaluation, the model showed good predictive efficiency and clinical application value. Furthermore, the GSEA showed that these lncRNAs affected bladder cancer prognosis through multiple links. In summary, a predictive model was established and validated based on 12 metabolism-related lncRNAs and clinical information. Bladder cancer, as a common tumor of the urinary system, has always been a focus of research. Recent studies have shown that the metabolism of glycogen, lipids, amino acids, and other substances is closely related to the diagnosis and prognosis of tumors. LncRNAs, a class of non-coding RNAs longer than 200 nucleotides with no protein-coding function, play a crucial role in transcriptional regulation, epigenetic gene regulation, and disease. For bladder cancer, immune-associated and autophagy-associated lncRNAs have been identified as markers for early diagnosis and prognosis. It is, however, a nonnegligible fact that metabolism-related genes and lncRNAs are involved in tumor prognosis. 
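The median-split step described above (risk score per patient, then high/low grouping at the median) can be sketched in a few lines. The coefficients and expression values below are hypothetical placeholders, not the fitted bladder-cancer model:

```python
# Minimal sketch of the risk-score median split described above.
# Coefficients and expression values are hypothetical, not the paper's model.

def risk_scores(coefs, expression):
    """Risk score per patient: sum of coefficient_i * expression_i."""
    return [sum(c * e for c, e in zip(coefs, patient)) for patient in expression]

def median_split(scores):
    """Label each patient 'high' or 'low' relative to the median score."""
    ordered = sorted(scores)
    n = len(ordered)
    med = (ordered[n // 2 - 1] + ordered[n // 2]) / 2 if n % 2 == 0 else ordered[n // 2]
    return ["high" if s > med else "low" for s in scores]

coefs = [0.8, -0.5, 0.3]                    # hypothetical Cox coefficients
expr = [[1.0, 2.0, 0.5], [2.5, 0.5, 1.0],   # hypothetical lncRNA expression,
        [0.2, 3.0, 0.1], [1.5, 1.0, 2.0]]   # one row per patient
scores = risk_scores(coefs, expr)
groups = median_split(scores)
```

A positive coefficient marks a lncRNA whose higher expression raises the score (worse predicted prognosis), a negative one the reverse.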
At present, research on metabolism-related lncRNAs is not abundant, which is the starting point of this study. It is of great underlying value to study metabolism-related lncRNAs to accurately predict the prognosis of bladder cancer. Hence, to address these clinical needs, we tried to establish a nomogram containing metabolism-related lncRNAs and clinical data and to explore the possible functions of these metabolism-related lncRNAs through GSEA. The RNA-Seq data, lncRNA sequencing data, and related clinical information of bladder cancer were extracted from TCGA (https://portal.gdc.cancer.gov), and metabolism-related gene sets were downloaded from the GSEA database (https://www.gsea-msigdb.org/gsea/index.jsp). Taking |log2FC| > 0.5 and P < 0.05 as standards, the “edgeR” package was used to construct the volcano map and distinguish the differentially expressed metabolism-related genes and lncRNAs between tumors and paracancerous tissues. The correlation between metabolism-related genes and lncRNAs was measured by the Pearson correlation coefficient; metabolism-related genes and lncRNAs whose correlation coefficients satisfied |R2| > 0.5 and P < 0.05 were considered related and used for further analysis. The nomogram was evaluated with the calibration plot and decision curve analysis (DCA). The differentially expressed metabolism-related genes between the high-risk and low-risk groups were analyzed for gene enrichment to explore what biological functions or pathways these differentially expressed genes might be involved in. All statistical computations were conducted using R software, version 4.0.2 (http://www.R-project.org), and P < 0.05 was considered statistically significant. Patients were divided into a training group and a validation group (131). In the training group, the risk score model was determined by multivariate Cox regression analysis to calculate the coefficients for the selected lncRNAs (P < 0.05). 
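The two screening filters in the methods passage above (differential expression at |log2FC| > 0.5 with P < 0.05, then correlation screening of gene–lncRNA pairs) can be illustrated with a small sketch. In the actual pipeline the statistics come from edgeR and R's correlation tests, so the p-values and expression vectors here are toy placeholders:

```python
# Sketch of the two screening filters described above. Thresholds follow
# the text; the p-values are placeholders (edgeR would supply them).

import math

def passes_de(log2fc, pvalue, fc_cut=0.5, p_cut=0.05):
    """Keep a gene/lncRNA only if it is differentially expressed."""
    return abs(log2fc) > fc_cut and pvalue < p_cut

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy usage: a lncRNA tracking a metabolic gene across 5 samples.
gene = [1.0, 2.0, 3.0, 4.0, 5.0]
lncrna = [1.1, 1.9, 3.2, 3.8, 5.1]
r = pearson_r(gene, lncrna)
keep = passes_de(log2fc=0.8, pvalue=0.01) and abs(r) > 0.5
```

Only pairs surviving both filters would be carried into the univariate Cox and LASSO steps.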
In both the training group and the validation group, the AUC was greater than 0.71. We performed functional annotation and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis on the differentially expressed metabolism-related genes between the high- and low-risk groups, and presented 9 KEGG molecular pathways in which these genes were enriched. Worldwide, bladder cancer is characterized by high incidence, poor prognosis, and high economic burden; accurate prognostic prediction and new therapeutic targets are urgently needed in the clinic. Previous studies have found that metabolism-related genes and lncRNAs affect the prognosis of bladder cancer in different respects. Although there are few studies on the prognostic relationship between metabolism-related lncRNAs and bladder cancer, prediction models based on other types of lncRNAs have performed well in predicting the prognosis of bladder cancer. In tumor immunity, an 8-lncRNA immune-related classifier could predict prognosis and immunotherapeutic response. From the Kaplan-Meier survival curves we drew according to subgroups, we could see that this model still has good predictive performance in most subgroups (age > 65, female, male, T1-2, T3-4, N0, and M0). In the remaining three subgroups, no significant differences were observed, and insufficient sample size was considered the main reason. Our study developed a risk score model with good reliability, reconfirming that the selected metabolism-related lncRNAs are dysregulated in bladder cancer. LncRNA DUXAP8 is located on chromosome 20q11 and is 2307 bp in length. We found through GSEA that different metabolism-related pathways were significantly enriched in both the high- and low-risk groups. Galactose metabolism and amino sugar and nucleotide sugar metabolism were enriched in patients with high-risk scores, which might indicate a poor prognosis in these patients. 
Consistent with our research, overexpression of the galactose transporter and galactose-binding lectin may contribute to tumor progression. We were the first to build a nomogram based on metabolism-related lncRNAs; after verification and evaluation, the model can accurately predict the prognosis of bladder cancer. Using GSEA, we initially explored the potential functions of the 12 metabolism-related lncRNAs. Admittedly, there are also some limitations to our study. First, further experiments in cells and animals are needed to verify the functions of these lncRNAs. Second, the available clinical data lack some important variables, such as comorbidities, therapies, smoking, and the cause of death. In conclusion, we identified 12 metabolism-related lncRNAs associated with the prognosis of bladder cancer, and these lncRNAs, which affect bladder cancer prognosis through multiple links, could be the focus of future research. From this, the first nomogram based on metabolism-related lncRNAs and clinical information was established and validated. The original contributions presented in the study are included in the article. JTH, JLH, and JK conceived and designed the study, participated in the collection of data and data analysis, and drafted the manuscript. CL and ZS assisted in the design of this research and project development. HY, JL, WX, and HS analyzed the data and reviewed the article. All authors contributed to the article and approved the submitted version. This study was supported by the Science and Technology Planning Project of Guangdong Province (Grant No. 2020A1515111119) and the Guangdong Provincial Clinical Research Center for Urological Diseases (Grant No. 2020B1111170006). 
The funders played no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "Many studies have shown that metabolism-related lncRNAs may play an important role in the pathogenesis of colon cancer. In this study, a prognostic model for colon cancer patients was constructed based on metabolism-related lncRNAs. Both transcriptome data and clinical data of colon cancer patients were downloaded from the TCGA database, and metabolism-related genes were downloaded from the GSEA database. Through differential expression analysis and Pearson correlation analysis, long non-coding RNAs (lncRNAs) related to colon cancer metabolism were obtained. CRC patients were divided into a training set and a verification set at a ratio of 2:1. Based on the training set, univariate Cox regression analysis was used to identify prognostic differentially expressed metabolism-related lncRNAs. The optimal lncRNAs were obtained by LASSO regression analysis, and a risk model was built to predict the prognosis of CRC patients. Meanwhile, patients were divided into high-risk and low-risk groups, and a survival curve was drawn accordingly to determine whether survival differed between the two groups. At the same time, subgroup analysis evaluated the predictive performance of the model. We combined clinical indicators with independent prognostic significance and risk scores to construct a nomogram. 
The C index, calibration curve, DCA clinical decision curve, and ROC curve were obtained as well. The above results were all verified using the validation set. Finally, based on the CIBERSORT analysis method, the correlation between the lncRNAs and 22 types of tumor-infiltrating lymphocytes was explored. By differential expression analysis, 2491 differential lncRNAs were obtained, of which 226 were metabolism-related lncRNAs. Based on the Cox regression and LASSO results, a multi-factor prognostic risk prediction model with 13 lncRNAs was constructed. Survival curve results suggested that patients with high scores have a poorer prognosis than patients with low scores (P<0.05). The areas under the ROC curve (AUC) for 3-year and 5-year survival were 0.768 and 0.735, respectively. Cox regression analysis showed that age, distant metastasis, and risk score can be used as independent prognostic factors. A nomogram including age, distant metastasis, and risk score was then built; its C index was 0.743, and the AUCs for 3-year and 5-year survival from the ROC curve were 0.802 and 0.832, respectively. These results indicated that the nomogram has a good predictive effect. KEGG pathway enrichment analysis revealed that the differential lncRNAs may be related to chemokines, amino acid and sugar metabolism, NOD-like receptor and Toll-like receptor activation, as well as other pathways. Finally, analysis based on the CIBERSORT algorithm showed that the lncRNAs used to construct the model had a strong correlation with the polarization of B cells, CD8+ T cells, and M0 macrophages. In summary, 13 metabolism-related lncRNAs affecting the prognosis of CRC were screened by bioinformatics methods, and a prognostic risk model was constructed, laying a solid foundation for research on metabolism-related lncRNAs in CRC. 
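The C index reported above summarizes pairwise concordance between predicted risk and observed survival: over all usable patient pairs, how often the patient with the higher risk score dies earlier (0.5 is chance, 1.0 is perfect). A compact sketch of Harrell's version, with hypothetical follow-up data, is:

```python
# Compact sketch of Harrell's concordance index (C index). A pair (i, j) is
# usable when patient i has an observed event strictly before time j; it is
# concordant when the earlier death also has the higher risk score.

def c_index(times, events, scores):
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                usable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    concordant += 0.5   # ties count as half-concordant
    return concordant / usable

times = [5, 10, 15, 20]        # follow-up times (hypothetical)
events = [1, 1, 0, 1]          # 1 = death observed, 0 = censored
scores = [2.0, 1.5, 0.5, 0.2]  # higher score -> expected earlier death
```

With these toy values every usable pair is concordant, so the index is 1.0; real models such as the 0.743 nomogram above sit between chance and perfection.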
Colorectal cancer (CRC) is one of the most common gastrointestinal malignancies in the world, and its morbidity and mortality are increasing year by year. In 2018, CRC ranked third in incidence and second in mortality among malignant tumors in China, with the number of new cases and deaths as high as 376,000 and 191,000, respectively. The colon cancer-related data and metabolism-related data in this study were obtained from The Cancer Genome Atlas (TCGA) database and the Gene Set Enrichment Analysis (GSEA) database, respectively. Pearson correlation analysis was performed between lncRNAs and metabolism-related genes, and lncRNAs with R2>0.5 and p<0.05 were defined as metabolism-related lncRNAs. The colon cancer patients were divided into a training set and a validation set at a ratio of 2:1. Based on the training set, single-factor Cox regression analysis of the metabolism-related lncRNAs was performed to obtain the lncRNAs related to overall survival; lncRNAs with P<0.05 were considered statistically significant and were included in the LASSO regression analysis to determine the closely related lncRNAs. Multi-factor Cox regression generated risk coefficients, and a risk regression model was then constructed. The risk scores of different patients were calculated; the prognostic risk score was defined as the sum of each selected lncRNA's expression level weighted by its regression coefficient. The survival package in R was applied for subgroup analysis of the lncRNA risk model combined with the clinical subgroup characteristics of patients, such as age, gender, and TNM staging, to assess the ability of the risk model to distinguish high- and low-risk patients in different subgroups. Univariate and multivariate Cox regression analyses were used to analyze risk scores and clinical factors, including age, gender, tumor stage, and TNM stage, and to screen independent prognostic factors. We built a nomogram based on the results of the multivariate Cox regression, including risk scores. The 3-year and 5-year OS for each patient were predicted based on the nomogram. 
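The coefficient-weighted risk score described above takes the standard form used by such multivariate Cox models, with $\beta_i$ the regression coefficient of the $i$-th selected lncRNA and $n$ the number of lncRNAs in the model (the symbols here are the conventional ones, not notation from the text):

```latex
\text{Risk score} = \sum_{i=1}^{n} \beta_i \cdot \mathrm{expr}(\text{lncRNA}_i)
```

Each patient's score is thus a linear combination of the selected lncRNAs' expression levels, which is what makes a single median cutoff sufficient to define the high- and low-risk groups.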
At the same time, the C index, calibration curve and ROC curve were generated to evaluate the predictive performance of the model. All of the above results were verified in the validation set to confirm their stability. KEGG analysis of the selected metabolism-related prognostic lncRNAs was performed with the clusterProfiler package in R and GSEA software to identify functionally enriched gene clusters (P<0.05). CIBERSORT is a tool for deconvolving the expression matrix of human immune cell subtypes based on the principle of linear support vector regression; the CIBERSORT method was used here to estimate immune cell infiltration. A total of 12 pairs of CRC tissues and noncancerous adjacent tissues were collected from patients who had undergone surgical resection at the Second Hospital of Hebei Medical University. All patients signed informed consent. This study was approved by the Ethical Review Committee of the Second Hospital of Hebei Medical University and was conducted in accordance with accepted ethical guidelines. Total tissue RNA was extracted using Trizol reagent following the manufacturer's protocols. RNA samples were then reverse transcribed with the Hiscript III Reverse Transcriptase kit, and the corresponding RNA expression was evaluated by qRT-PCR with the ChamQ Universal SYBR qPCR Master Mix kit; GAPDH acted as the internal reference for normalization. The analysis in this study was completed with R software version 3.6.2, in which the limma package was used for differential gene acquisition, the clusterProfiler and org.Hs.eg.db packages were used for functional enrichment analysis, and the survival package was used to perform Kaplan-Meier survival analysis. The expression data of 473 colon cancer tissues and 41 normal colon tissues were downloaded from the TCGA database, and the metabolism-related genes were obtained from the FerrDb database.
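The qRT-PCR normalisation against GAPDH mentioned above is commonly quantified with the 2^(-ΔΔCt) method. The paper does not state its exact quantification scheme, so the following is a generic sketch of that standard method, with invented Ct values.

```python
def fold_change(ct_target_tumor, ct_gapdh_tumor, ct_target_normal, ct_gapdh_normal):
    """Relative expression by the standard 2^(-ΔΔCt) method."""
    dct_tumor = ct_target_tumor - ct_gapdh_tumor      # ΔCt in tumour tissue
    dct_normal = ct_target_normal - ct_gapdh_normal   # ΔCt in adjacent normal tissue
    ddct = dct_tumor - dct_normal                     # ΔΔCt
    return 2 ** (-ddct)

# Example (invented Ct values): the target lncRNA amplifies two cycles
# earlier relative to GAPDH in tumour than in normal tissue,
# corresponding to a 4-fold up-regulation.
fc = fold_change(24.0, 18.0, 26.0, 18.0)  # -> 4.0
```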
A total of 279 differentially expressed metabolism-related genes were obtained, of which 151 were down-regulated and 128 were up-regulated. The heat map showed the expression levels of the 13 lncRNAs in the high-risk and low-risk groups. However, there was no statistical difference in OS between the high-risk and low-risk groups in the N0, M1, and stage I-II subgroups. The nomogram was constructed based on the independent prognostic factors determined by the multivariate Cox regression, namely age, M stage, and risk score. KEGG pathway enrichment analysis of the differentially expressed metabolism-related lncRNAs showed that they were mainly related to the following pathways: chemokine signalling, amino acid and sugar metabolism, and NOD-like and Toll-like receptor activation. The CIBERSORT-based analysis clarified the infiltration level of 22 immune cell types in colon cancer patients, and the results showed that the lncRNAs used to construct the model had a strong correlation with the polarization of B cells, CD8+ T cells and M0 macrophages. The expression levels of the selected metabolism-related lncRNAs were further evaluated and validated in tissues. How to predict the prognosis of colon cancer is an urgent problem for gastrointestinal surgeons. With the development of clinical diagnosis and treatment, some prognostic factors have been discovered, including tumor size, tumor grade and stage. High-throughput biological technology has been widely used to predict cancer recurrence and tumor metastasis by detecting lncRNA or gene changes. In recent years, many studies have shown that changes in the expression patterns of metabolism-related genes are closely related to the occurrence and prognosis of colon cancer [13].
In-depth study of metabolism-related lncRNAs is expected to provide guidance for clinical decision-making. Colon cancer is a common malignant tumor of the digestive system that seriously affects patients' survival. This study identified metabolism-related lncRNAs associated with the prognosis of CRC patients through bioinformatics methods, namely LINC01703, LINC01559, AC083880.1, AC027796.4, AL139384.1, LINC00858, LINC01876, AC008760.1, AC006329.1, AL590483.1, AC073283.1, CASC9, AP001469.3, CAPN10-DT, ZKSCAN2-DT, AP006621.4, AC074117.1, LINC01133, AL161729.4 and AC010973.2, thereby building a risk model for predicting prognosis. Combined with clinicopathological analysis, this model can be used as an independent predictor of CRC prognosis. Among them, LINC01703 has been reported to be of great significance in the diagnosis of lung adenocarcinoma; Wang et al. found that LINC01703 enhanced the invasiveness of NSCLC cells by modulating miR-605-3p/MACC1. Pathway enrichment analysis revealed that the metabolism-related lncRNAs were mainly related to pathways such as chemokine signalling, metabolic pathways, and NOD-like and Toll-like receptor activation. A large number of studies have shown that there is a significant correlation between tumor cell metabolism and immune cell infiltration in tumor tissues. Further in vitro and in vivo studies are still needed to clarify the role and specific mechanisms of metabolism-related lncRNAs in the occurrence and development of colon cancer. In summary, the relationship between metabolism-related lncRNAs and the prognosis of CRC was preliminarily explored through bioinformatics methods, a model based on 13 metabolism-related lncRNAs was established to predict the prognosis of CRC, and the stability of the model across data sets was verified, providing a new direction for the study of metabolism and colon cancer.
The original contributions presented in the study are included in the article. XZ designed the experiments. CL performed the analysis. CL and QL analyzed the TCGA data. CL, YS and WW wrote and reviewed the manuscript. All authors contributed to the article and approved the submitted version. We would like to acknowledge TCGA for free use. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Cowpea (Vigna unguiculata L.) is particularly sensitive to waterlogging stress during the reproductive stage, with a consequent decline in pod formation and yield. However, little is known about the critical processes underlying cowpea's responses to waterlogging during the reproductive stage. Thus, we investigated the key parameters influencing carbon fixation, including stomatal conductance (gs), intercellular CO2 concentration, chlorophyll content, and chlorophyll fluorescence, of two cowpea genotypes with contrasting waterlogging tolerance. These closely related genotypes have starkly contrasting responses during and after 7 days of waterlogging stress (DOW). In the intolerant genotype ('EpicSelect.4'), waterlogging resulted in a gradual loss of pigment and decreased photosynthetic capacity, with a consequent decline in shoot biomass. On the other hand, the waterlogging-tolerant genotype ('UCR 369') maintained CO2 assimilation rate (A), stomatal conductance (gs), biomass, and chlorophyll content until 5 DOW.
Moreover, there was a highly specific downregulation of the mesophyll conductance (gm), maximum rate of Rubisco carboxylation (Vcmax), and photosynthetic electron transport rate (Jmax) as non-stomatal limiting factors decreasing A in EpicSelect.4. Exposure of EpicSelect.4 to 2 DOW resulted in the loss of PSII photochemistry through downregulation of the maximum PSII quantum yield (Fv/Fm), photochemical efficiency (ΦPSII), and photochemical quenching (qP). In contrast, we found no substantial change in the photosynthesis and chlorophyll fluorescence of UCR 369 in the first 5 DOW. Instead, UCR 369 maintained biomass accumulation, chlorophyll content, and Rubisco activity, enabling this genotype to maintain nutrient absorption and photosynthesis during the early period of waterlogging. However, compared to the control, both cowpea genotypes could not fully recover their photosynthetic capacity after 7 DOW, with a more significant decline in EpicSelect.4. Overall, our findings suggest that the tolerant UCR 369 genotype maintains higher photosynthesis under waterlogging stress attributable to higher photochemical efficiency, Rubisco activity, and less stomatal restriction. After recovery, the incomplete restoration of A can be attributed to the reduced gs caused by severe waterlogging damage in both genotypes. Thus, promoting the rapid recovery of stomata from waterlogging stress may be crucial for the complete restoration of carbon fixation in cowpeas during the reproductive stage. Waterlogging stress limits crop yields in about 16% of global cultivated areas, and the problem is exacerbated in poorly drained soils.
Waterlogging first depletes oxygen in the soil, since the diffusion rate of gases in water drops by roughly four orders of magnitude relative to air. The resulting hypoxia leads to the reduction of soil electron acceptors such as Mn4+ and Fe3+, elevated CO2 concentrations in the plant root zone, and a decrease in hydraulic conductance, resulting in the rapid closure of stomata. Collectively, these changes impair enzymatic activity, plant growth, and ultimately yield. Previous work has shown that the CO2 assimilation rate (A) of cowpea declines rapidly under waterlogging. A decrease in A under waterlogging conditions can thus lead to a decline in plant energy reserves, indicating the existence of a common metabolic pattern. The factors affecting A are primarily divided into two distinct categories: stomatal and non-stomatal limitations. Owing to limited oxygen under waterlogging conditions, plants close their stomata to maintain plant water status, causing a decline in stomatal conductance (gs) and inhibiting the exchange of CO2 required by the plant's basic processes. Decreased gs eventually leads to a corresponding decrease in A through the decreased intercellular CO2 concentration (Ci) under waterlogged conditions. Another factor affecting A under submergence is the alteration in mesophyll conductance (gm), the diffusion of CO2 from the intercellular space to the carboxylation site in the chloroplast stroma. Non-stomatal limitation of A under waterlogging in legumes is associated with the maximum rate of Rubisco carboxylation (Vcmax), the ribulose-1,5-bisphosphate (RuBP) regeneration capacity mediated by the maximum electron transport rate (Jmax), photosystem II (PSII) activity, Rubisco activity, and the loss of pigments related to leaf senescence. The decline in A varies with crop genotype and with the duration and severity of waterlogging stress, ranging from a significant decline in sensitive genotypes to little or no inhibition in tolerant genotypes. An important characteristic of plant responses to waterlogging is the alteration in shoot physiology, especially photosynthesis.

Furthermore, decreased gs in response to waterlogging may enhance the sensitivity of the photosynthetic apparatus to high irradiance, leading to photodamage of PSII through overproduction of reactive oxygen species (ROS). Under waterlogging, the maximum quantum yield (Fv/Fm) of PSII, the actual photochemical efficiency of PSII (ΦPSII), and the photochemical quenching (qP) of mungbeans were downregulated, whereas non-photochemical quenching (NPQ) was significantly increased. Conversely, for common bean cultivars with different waterlogging tolerances, Fv/Fm was not affected by waterlogging, while the NPQ of susceptible cultivars increased with the duration of waterlogging treatments. Of the leguminous crops, cowpea is among the most sensitive to waterlogging. The objectives of this study were to investigate the relative responses of stomatal factors (gs and transpiration rate, E) and non-stomatal factors affecting carbon fixation in cowpea genotypes during waterlogging and the recovery period. We hypothesized that cowpea's growth and physiological responses during and after waterlogging might differ with respect to waterlogging tolerance. This hypothesis was tested in two contrasting cowpea genotypes exposed to 7 days of waterlogging (DOW) and 7 days of recovery (DOR) at the R2 growth stage. Generally, 7 DOW caused a reduction in plant growth of both genotypes. RWC was measured to determine leaf water loss in the cowpea genotypes during and after the waterlogging period. At 3 DOW, the mean RWC of waterlogged UCR 369 was statistically at par with the control.
However, waterlogged 'EpicSelect.4' showed a drastic reduction in SPAD values after 2 DOW, indicating early leaf senescence. UCR 369 maintained its stomatal conductance (gs) until 6 DOW, while the gs of EpicSelect.4 significantly declined by 41% at 1 DOW relative to the control. Chlorophyll fluorescence is critical for evaluating the PSII photochemical efficiency in stressed plants: Fv/Fm reflects PSII's internal light energy conversion efficiency under maximum light. Measurements of Fv/Fm over 7 DOW showed that the maximum quantum yield decreased over time, but in a genotype-dependent manner during 7 DOW and 7 DOR. Chlorophyll fluorescence further indicated that the PSII acceptor side of waterlogged EpicSelect.4 became more oxidized from 2 DOW to the end of the experiment compared to the control plants. When thylakoid membranes are energized, they generate a pH gradient across the thylakoid, resulting in alterations of NPQ. Absorbed light energy is partitioned into ΦPSII, ΦNPQ, and ΦNO, which sum to one. Waterlogging reduced the ΦPSII of cowpeas, with the extent of the decline dependent on cowpea genotype and the duration of treatment. Evaluated gas exchange and chlorophyll fluorescence parameters were highly correlated in Pearson's correlation analysis. SPAD was positively correlated with most photosynthetic and chlorophyll fluorescence traits, indicating that increased stay-green leaf area was associated with higher photosynthetic performance of cowpea genotypes under waterlogging. Similarly, most photosynthetic traits under waterlogging treatments were significantly and positively correlated with Fv/Fm, qP, ETR, ΦCO2, and ΦPSII, but negatively associated with NPQ, ΦNO, ΦNPQ, and 1-qL. However, the correlation coefficients of Ci with most parameters were in the range considered moderate to weak, suggesting that Ci acts as a non-stomatal factor affecting the photosynthetic efficiency of cowpeas under waterlogging. Waterlogging is a type of abiotic stress that can affect plant growth and development.
Waterlogging significantly affects cowpeas' shoot biomass, but the effect depends on growth stage and genotype. Relative to the responses of cowpeas and related crops in previous studies, the waterlogging-induced reductions recorded in the current study were lower; most earlier evaluations were done at the vegetative stage, with over 50% biomass reduction. A was maintained in UCR 369 by sustaining E up to 5 DOW, while A and E were drastically reduced in EpicSelect.4 by 2 DOW. It is also imperative to note that UCR 369 does not achieve waterlogging tolerance via water conservation; instead, it was better able to maintain plant water status compared to EpicSelect.4 under waterlogging. This difference was more pronounced in the gas exchange response after 5 DOW. Although a contrasting pattern of photosynthetic damage was demonstrated between the tolerant UCR 369 and sensitive EpicSelect.4 genotypes, waterlogging significantly reduced photosynthesis in both, and neither genotype fully restored photosynthetic capacity after 7 DOW, suggesting that higher Ci limits Rubisco activity and prevents restoration of photosynthetic capacity during the recovery period. EpicSelect.4 also showed an increase in WUE under waterlogged conditions, suggesting that sensitive genotypes tended to gain less carbon per unit of water lost; an analogous pattern of increased WUE has been observed in waterlogging-sensitive legumes in previous studies, reflecting restricted CO2 intake compared to the tolerant UCR 369 genotype. Healthy Fv/Fm values vary from 0.75 to 0.83, and a reduction from these values indicates damaged PSII. In the current study, the Fv/Fm of the EpicSelect.4 genotype was significantly lowered at 3 DOW and fell below 0.75 from 6 DOW to the end of the experiment. The mechanisms underlying these responses under waterlogging conditions in contrasting cowpea genotypes can be further explored.
Plants were held at a temperature of 30/20 °C (day/night) with a 16/8 h photoperiod. The average relative humidity during the experiment was 63%, 64%, and 70% for October, November, and December 2021, respectively. Two cowpea genotypes (EpicSelect.4 and UCR 369) with contrasting waterlogging tolerance, as determined by Olorunwa et al., were selected. The cowpea seeds were inoculated before sowing with Bradyrhizobium japonicum at the rate of 141 g per 22.68 kg of seeds. Four inoculated cowpea seeds of each genotype were planted into one-gallon pots filled with Pro-Mix BX soilless medium and watered daily. Twice a week, the plants were fertigated with a 5-15-29 water-soluble nutrient solution at the rate of 100 ppm. Plants were thinned to one plant per pot at 14 days after sowing (DAS). At 45 DAS, cowpea plants at the R2 growth stage were subjected to two experimental treatments consisting of waterlogging and control treatments. Cowpea plants were waterlogged by placing 6 pots of each cowpea genotype into five replicated 15-gallon containers. To simulate 7 DOW treatments, each container was filled with tap water to a height of 2–3 cm above the substrate surface. Pots containing control cowpea plants were maintained at optimal field capacity. After 7 DOW, the pots were removed from the water-filled containers, and plants were allowed to recover for an additional 7 days. A, gs, Ci, and E were measured in situ together with chlorophyll fluorescence at the North Mississippi Research and Extension Center (10:00–14:00 CST) using an LI-6800 portable photosynthesis system. Measurements were allowed to match the chamber environment before the values were recorded. The chamber environment was set to match the growth chamber, with 1500 µmol m−2 s−1 of light intensity, 415 ppm of CO2 concentration in the air (Ca), and 50% relative humidity.
Measurements were conducted on five representative plants of each cowpea genotype subjected to waterlogging and non-waterlogging treatments during waterlogging and recovery. Parameters related to gas exchange were measured on the second-most fully expanded trifoliate at 1 to 7 DOW and 1 to 7 days of recovery (DOR). The ratio A/gs was used to calculate intrinsic water use efficiency (WUE). Additionally, CO2 response curves (A/Ci) were measured using the auto program settings on the LI-6800 at 7 DOW and 7 DOR. To measure the steady-state response of A/Ci, the leaf chamber settings were fixed at 50% relative humidity and 1500 µmol m−2 s−1 light intensity, with the temperature set to maintain ambient greenhouse temperature (28–30 °C). Using the built-in program on the LI-6800, measurements were taken at 50, 100, 200, 300, 400, 500, 600, 800, 1000, 1200, and 1500 ppm CO2, with early matching enabled and wait times of 60–90 seconds between measurements. A/Ci analyses were performed according to Sharkey et al. (tools available at http://landflux.org/Tools.php). Representative individual curves were fitted separately, and the extracted parameters were averaged across replicates for each treatment. Following Bernacchi et al., the A/Ci response curves were further utilized to calculate the maximum rate of Rubisco carboxylation (Vcmax), the maximum rate of photosynthetic electron transport (Jmax), and mesophyll conductance (gm). The minimal fluorescence (Fo) was measured on the second-most fully expanded leaf using a measuring light (0.005 µmol m−2 s−1). The maximal fluorescence (Fm) was quantified using a 1-second saturating pulse at 8000 µmol m−2 s−1 in dark-adapted leaves. The leaves were then continuously illuminated for 20 min with actinic light (1400 µmol m−2 s−1) to record the steady-state yield of fluorescence (Fs).
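The A/Ci fitting referenced above rests on a Farquhar-type model, in which the Rubisco-limited assimilation rate depends on Ci, Vcmax and the Rubisco kinetic constants. The sketch below uses typical literature values for the constants near 25 °C, not the fitted values from this study, and is intended only to show why a waterlogging-induced drop in Vcmax directly depresses A.

```python
# Minimal sketch of Rubisco-limited assimilation in a Farquhar-type model:
# Ac = Vcmax * (Ci - Γ*) / (Ci + Kc * (1 + O/Ko)) - Rd
# Constants are typical values near 25 °C (assumed for illustration).

def rubisco_limited_A(Ci, Vcmax, Rd=1.0,
                      gamma_star=42.75,  # CO2 compensation point (µmol mol-1)
                      Kc=404.9,          # Michaelis constant for CO2 (µmol mol-1)
                      Ko=278.4,          # Michaelis constant for O2 (mmol mol-1)
                      O=210.0):          # ambient O2 (mmol mol-1)
    Km = Kc * (1.0 + O / Ko)             # effective Michaelis constant
    return Vcmax * (Ci - gamma_star) / (Ci + Km) - Rd

# A halved Vcmax (as seen in the sensitive genotype under waterlogging)
# lowers A at the same intercellular CO2 concentration.
A_control = rubisco_limited_A(Ci=280.0, Vcmax=100.0)
A_stressed = rubisco_limited_A(Ci=280.0, Vcmax=50.0)
```

In the same framework, RuBP-regeneration-limited assimilation depends on Jmax, which is why the fitted Vcmax and Jmax together diagnose non-stomatal limitation.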
Maximal light-adapted fluorescence yield (F′m) was determined with a saturating pulse at 8000 µmol m−2 s−1. The actinic light was then turned off, and the minimal fluorescence yield in the light-adapted state (F′o) was determined after 5 s of far-red illumination. The difference between the measured values of Fm and Fo is the variable fluorescence (Fv). The chlorophyll fluorescence parameters were calculated using standard formulas, where Fv/Fm is the maximal photochemical efficiency of PSII, ΦPSII is the actual photochemical efficiency of PSII, ΦNPQ is the quantum yield of energy dissipated via ΔpH- and xanthophyll-regulated processes, ΦNO is the quantum yield of non-regulated energy dissipation in PSII, and qP and NPQ are the photochemical and non-photochemical quenching, respectively. The electron transport rate (ETR) was calculated according to Genty et al. The LI-6800, using pulse-amplitude modulated (PAM) fluorometry with a Multiphase Flash Fluorometer, was used to measure chlorophyll fluorescence at 1 to 7 DOW and 1 to 7 DOR; the minimal fluorescence of the functional leaves was measured during predawn hours (3:00–5:00 CST). The chlorophyll content index (CCI) was measured at 1 to 7 DOW and 1 to 7 DOR using a SPAD analyzer. The relative CCI of each leaf, represented by the SPAD value, can be used to study the effect of waterlogging on leaf yellowing in cowpea genotypes associated with nitrogen remobilization and leaf senescence. Three readings were collected from each cowpea genotype's top-most fully expanded trifoliate and averaged. Five representative cowpea plants (from each treatment/replication/genotype) were harvested at 3 DOW, 7 DOW, 3 DOR, and 7 DOR to obtain growth data on the effects of waterlogging stress. The fresh mass (FM) of the plant components was measured using a weighing scale for all plants. Plant FM samples were lyophilized using a FreeZone 2.5 L freeze dryer to determine the dry mass (DM) and percent dry mass (%DM).
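The parameters named above follow from the standard quenching-analysis formulas (Genty-style), which can be sketched directly from the raw fluorescence yields; the numerical values below are invented for illustration.

```python
# Standard chlorophyll-fluorescence parameters from raw yields
# (Fo, Fm: dark-adapted; Fs, F'm, F'o: light-adapted).

def fluorescence_params(Fo, Fm, Fs, Fm_p, Fo_p, ppfd=1400.0):
    Fv_Fm = (Fm - Fo) / Fm             # maximal PSII photochemical efficiency
    phi_PSII = (Fm_p - Fs) / Fm_p      # actual PSII photochemical efficiency
    qP = (Fm_p - Fs) / (Fm_p - Fo_p)   # photochemical quenching
    NPQ = (Fm - Fm_p) / Fm_p           # non-photochemical quenching
    # ETR assumes 50 % of photons partitioned to PSII and 84 % leaf absorptance
    ETR = phi_PSII * ppfd * 0.5 * 0.84
    return {"Fv/Fm": Fv_Fm, "PhiPSII": phi_PSII, "qP": qP, "NPQ": NPQ, "ETR": ETR}

# Invented yields for a light-adapted leaf under 1400 µmol m-2 s-1 actinic light.
p = fluorescence_params(Fo=500.0, Fm=2500.0, Fs=900.0, Fm_p=1500.0, Fo_p=450.0)
# A healthy dark-adapted leaf gives Fv/Fm of roughly 0.75-0.83.
```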
The cowpea\u2019s relative water content (RWC) was determined as per the method of Barrs and Weatherley with minp \u2264 0.05 were employed to test the differences between the interactions of factors for measured parameters. The standard errors of the mean were calculated using the pooled error term from the ANOVA table and presented in the figures as error bars. Diagnostic tests, such as Shapiro\u2013Wilk in SAS, were conducted to ensure that treatment variances were statistically equal before pooling. A Pearson correlation analysis was utilized to study the relationship between the studied parameters. Graphs were plotted with GraphPad Prism 9 .The experiment was a randomized complete block design with two waterlogging treatments, two cowpea genotypes, five replications, and twelve plants in a factorial arrangement. In total, 240 plants (5 replicates \u00d7 2 waterlogging treatments \u00d7 2 cowpea genotypes \u00d7 12 plants) were utilized in this study. SAS was used to perform a statistical analysis of data. A three-way analysis of variance (ANOVA) using the generalized linear mixed model (PROC GLIMMIX) was used to assess the effects of factors , along with their interactions, on the replicated values of CCI, gas exchange, and chlorophyll fluorescence parameters. The experiment\u2019s fixed effects consist of treatment, genotypes, and duration, where the replication (5 levels) was treated as a random effect. The responses of FM, DM, and RWC values were analyzed by a two-way ANOVA with \u2032genotype\u2019 and \u2032treatment\u2019 as the main factors. Fisher\u2019s protected least significant difference tests A was mainly driven by decreased gs and gm, with no biochemically limiting declines in Vcmax and Jmax, as well as chlorophyll fluorescence parameters. 
However, the sensitive EpicSelect.4 showed a significant decrease in A at 2 DOW, with a corresponding reduction in gs, gm, Vcmax, Jmax, Fv/Fm, qP, ETR, and ΦPSII under waterlogged conditions, indicating that both stomatal and non-stomatal limitation of photosynthesis takes place when this genotype is waterlogged. These waterlogging-induced photosynthetic changes are consistent with the rapid leaf chlorosis in the cowpea genotypes based on the SPAD values and chlorophyll fluorescence data. In this study, gas exchange and chlorophyll fluorescence parameters were evaluated to reveal the key factors influencing leaf carbon fixation and the adaptive mechanisms of cowpea genotypes under waterlogging stress. After 7 DOW and 7 DOR, the tolerant UCR 369 genotype exhibited superior plant growth and photosynthetic efficiency compared to the waterlogged sensitive genotype, EpicSelect.4. This study confirmed that the ability of UCR 369 to develop adventitious roots and maintain biomass accumulation is critical for waterlogging tolerance. Moreover, the analysis of gas exchange traits revealed that the photosynthetic response to waterlogging differed between the tolerant and sensitive cowpea genotypes. The downregulation of ΦPSII, ΦNPQ, and qP at 2 DOW indicated that sensitive EpicSelect.4 could not use the absorbed energy for photochemical reactions, resulting in a damaged photosynthetic apparatus. At the same time, the elevated values of NPQ, 1-qL, and ΦNO in EpicSelect.4 compared to UCR 369 may partly contribute to photoinhibition and decreased photochemical efficiency during waterlogging.
Further studies evaluating carotenoid and chlorophyll content are needed to understand the light-dependent response mechanisms in tolerant and sensitive cowpea genotypes.

We simulate two recent matrix-isolation experiments at cryogenic temperatures, in which a nitrene undergoes spin crossover from its triplet state to a singlet state via quantum tunnelling. We detail the failure of the commonly applied weak-coupling method in describing these deep-tunnelling reactions. The more rigorous approach of semiclassical golden-rule instanton theory in conjunction with double-hybrid density-functional theory and multireference perturbation theory does, however, provide rate constants and kinetic isotope effects in good agreement with experiment. In addition, these calculations locate the optimal tunnelling pathways, which provide a molecular picture of the reaction mechanism. The reactions involve substantial heavy-atom quantum tunnelling of carbon, nitrogen and oxygen atoms, which unexpectedly even continues to play a role at room temperature. Due to their role as versatile reactive intermediates in several important organic reactions, carbenes and nitrenes are molecules of high interest. The feature that makes the reactions cited above amenable to established theoretical methods is that the rate-determining step takes place adiabatically on a single electronic state, which allows the Born–Oppenheimer approximation to be used.
However, due to the two additional non-bonded electrons on the carbon or nitrogen atom, both carbenes and nitrenes may exist either in their singlet or triplet state, and the spin-crossover process is nonadiabatic. Three recent studies have presented convincing evidence for nitrene reactions in which the spin crossover is the rate-determining step. They additionally found that this process is accompanied by tunnelling of hydrogen atoms. Theoretical investigations of spin crossovers commonly start by locating the minimum-energy crossing point (MECP) of the two spin states. Based on knowledge of the MECP and the reactant minimum, nonadiabatic transition-state theory (NA-TST) rate constants can be computed, for instance with the weak-coupling (WC) method. As the name of the WC method suggests, it is based on the assumption of weak spin–orbit coupling between the two states, as in Fermi's golden rule. We will demonstrate that the golden-rule assumption itself is valid for the nitrene reactions under investigation. However, the WC method additionally relies on a crude linear approximation of the potential-energy surfaces (PESs) around the MECP. While this approximation would be valid for shallow tunnelling at high temperatures, where the reaction proceeds at an energy only slightly below the MECP, its applicability to the description of deep tunnelling at energies close to the reactant zero-point energy (ZPE) cannot be rigorously justified. In this work, we will show that the WC method fundamentally breaks down and leads to unphysical predictions for the deep tunnelling exhibited by the two nitrene reactions.
It is hence evident that new theoretical methods are needed to provide reliable insight into the tunnelling mechanism underlying the experimental results. For adiabatic reactions, semiclassical instanton theory has become a well-established method, since it finds an excellent balance between a rigorous theoretical foundation and high computational efficiency. The experiments for the two reactions under consideration were carried out at cryogenic temperatures, at which excited vibrational states are not thermally accessible. Hence, the only mechanism for the reaction to proceed is via nuclear tunnelling out of the vibrational ground state, giving rise to a temperature-independent plateau of the rate constant in the low-temperature limit. In this work we investigate nonadiabatic tunnelling in spin-crossover processes by considering two specific examples from nitrene chemistry. Although heavy-atom tunnelling is conventionally thought of as being restricted to cryogenic temperatures, we here unveil the significance of such effects even at room temperature, implying that heavy-atom tunnelling may surprisingly be relevant under typical reaction conditions of synthetic chemistry. Specifically, we simulate the cyclization reaction of a 2-formylaryl nitrene and an isomerization reaction. In the previous studies, the WC method was employed for these systems. The failure of the WC method to capture the physical behaviour of the rate arises from the inherent linear approximation of the potentials around the MECP, which clearly breaks down close to the reactant minimum. The method cannot therefore be expected to give a reasonable description of reactions at very low temperature, where tunnelling takes place dominantly from the vibrational ground state.
Moreover, the WC approach is not able to predict possible changes in the reaction mechanism due to multidimensional tunnelling effects such as corner cutting. In order to gain well-founded theoretical insight into heavy-atom tunnelling in the two nitrene reactions, we hence need to go beyond the approximations of previous studies. We pay particular attention to the two main aspects of any practical molecular simulation: i) the accuracy of the electronic structure; ii) the validity of the assumptions underlying the rate theory. The crucial importance of the electronic structure arises from the sensitive dependence of rate calculations on the quality of the underlying PESs, on which the nuclear dynamics take place. As detailed in the Supporting Information, our investigations of the two systems under consideration showed that an accurate description of dynamic correlation is of particular importance in these reactions, which rules out the validity of the complete active space self-consistent field (CASSCF) method alone.
In this work, we therefore employ state-of-the-art double-hybrid density-functional theory (DFT). We go beyond the WC approximation of the rate using semiclassical golden-rule instanton theory, which provides an accurate description of nuclear quantum effects such as ZPE and multidimensional tunnelling [59, 62]. The imaginary time spent on the singlet state is τ and on the triplet state is βℏ−τ, such that the overall time is related to the inverse temperature, β = 1/k_BT. Together, these trajectories define the optimal tunnelling pathway, known as the "instanton". The key computational step is the optimization of this instanton pathway (over τ), which is facilitated by discretizing the trajectories in the form of a ring polymer. The expression for the rate constant is then given by Equation (1), where Δ is the spin–orbit coupling measured at the hopping point, S is the instanton action, given as the sum of the classical actions of the individual states, and Z‡ is the instanton partition function, which like the reactant (triplet) partition function Z_T contains translational, rotational and vibrational contributions for each degree of freedom. Note that the vibrational component of Z‡ is computed from the second derivatives of the action with respect to the ring-polymer beads and the imaginary time τ. The action enters the rate through the factor e^(−S/ℏ), and the multidimensional treatment typically increases the rate by accounting for tunnelling effects.

In the figures showing the instanton pathways, it can be seen for the cyclization reaction that the oxygen and nitrogen atoms tunnel toward one another in order to complete the isoxazole five-membered ring.
The oxygen atom contributes 48 % of the squared mass-weighted tunnelling path length (SMWTPL), closely followed by nitrogen with 35 %.

The instanton pathway for the isomerization reaction reveals that the bottleneck is the cleavage of the C−C bond, which is overcome by means of heavy-atom tunnelling. After emerging on the product side of the barrier, the NCO group shifts over to form the C−N bond of the isocyanate product. In this reaction the dominant contributions to the SMWTPL come from the carbon and nitrogen atoms in the NCO group, with 43 % and 34 %. The second carbon and the oxygen atom contribute 7 % and 9 % to the SMWTPL, while the fluorine atoms account for only 7 % in total. It had previously been proposed that the CF3 group was responsible for the tunnelling; however, the shift of the CF3 group from the carbon to the nitrogen atom takes place after the barrier crossing.

From the knowledge of the instanton pathways, the rate constants can now be computed using Equation (1). However, in order to effectively account for missing multiconfigurational effects not captured by DFT, we first scaled the potential energies (relative to the reactant minimum) of the MECP and along the MEPs and instanton pathways by the ratio of the MRMP2 and DFT barrier heights. This is a common trick to improve the energetics when higher-level methods are too computationally expensive for structure optimizations [85, 86].

In the figure comparing the computed rate constants, at high temperatures the WC rate constants are in good agreement with the instanton results; in fact, formally both WC and instanton theory have the same correct classical limit, equal to NA-TST. However, at temperatures below 300 K, the WC method overestimates the rate by orders of magnitude compared to experiment and instanton theory.
Note that our WC results differ from those reported previously. The one-dimensional Wentzel–Kramers–Brillouin (WKB) approximation accounts for tunnelling along the MEPs [87, 88].

We can leverage the accuracy of instanton theory to study the reactions at temperatures where the nitrenes react too quickly to isolate them and measure a rate. While it is expected that tunnelling is the key mechanism for a reaction to proceed at low temperatures, we surprisingly find that even at 300 K nuclear quantum effects continue to speed up the cyclization and isomerization by factors of 10 and 160 compared to the (fully) classical case. Further comparison to the NA-TST rate (which includes ZPE but not tunnelling effects) reveals that heavy-atom tunnelling alone accounts for speed-ups of 4 and 60.

For adiabatic reactions, the importance of tunnelling is commonly estimated from the crossover temperature T_c = ℏω_b/(2π k_B), which depends on the curvature ω_b of the barrier top. For nonadiabatic reactions, an analogous tunnelling factor of the form exp[(T_o/T)³] can be derived, with T_o³ = ℏ²/(24 m k_B³) · [κ_T κ_S/(κ_T − κ_S)]², defined in terms of the slopes κ_T and κ_S of the electronic states at the crossing point. One can interpret T_o as an onset temperature below which tunnelling starts to become important: due to the cubic dependence on T_o/T inside the exponential of the tunnelling factor, the significance of nuclear tunnelling rapidly increases below this temperature.

Although hydrogen-atom tunnelling is not particularly unusual, and a number of examples of heavy-atom tunnelling have been reported at cryogenic temperatures [89, 90], for the cyclization and isomerization reactions we obtain onset temperatures of 434 K and 514 K, implying that carbon, nitrogen and oxygen atoms can tunnel even above room temperature.
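The onset-temperature expression is garbled in this extraction; the following is a minimal sketch of its most plausible reading, T_o³ = ℏ²/(24 m k_B³)·[κ_Tκ_S/(κ_T − κ_S)]², evaluated with hypothetical slope magnitudes (the paper's actual gradients and effective masses are not reproduced here):

```python
# Sketch of the nonadiabatic tunnelling onset temperature, assuming the
# reading T_o^3 = hbar^2/(24 m kB^3) * (kT*kS/(kT - kS))^2 of the garbled text.
HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J / K
AMU = 1.66053906660e-27  # kg

def onset_temperature(mass_kg, kappa_t, kappa_s):
    """Onset temperature T_o (K) from the magnitudes of the slopes (J/m)
    of the two electronic states at the crossing point."""
    to_cubed = (HBAR**2 / (24.0 * mass_kg * KB**3)
                * (kappa_t * kappa_s / (kappa_t - kappa_s)) ** 2)
    return to_cubed ** (1.0 / 3.0)

# Hypothetical slopes of roughly 1-2 eV/Angstrom and a nitrogen-atom mass,
# chosen only to show that T_o comes out at a few hundred kelvin.
to = onset_temperature(14.0 * AMU, 2.0e-9, 1.0e-9)
print(f"T_o = {to:.0f} K")
```

With slopes of this typical molecular magnitude the formula indeed yields onset temperatures of a few hundred kelvin, consistent with the 434 K and 514 K quoted in the text.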
This is in stark contrast to adiabatic reactions, for which the crossover temperature is rarely much higher than 300 K for hydrogen-tunnelling reactions and typically much lower for heavy-atom rearrangements, implying that heavy-atom tunnelling is not significant at room temperature. Note however that the simple tunnelling factor stated above can only be used for a rough assessment of whether tunnelling plays a role in a given reaction, due to the linear approximation also inherent in the WC method. As shown in the figure, in the normal regime (with a peaked intersection) the gradients κ_T and κ_S have opposite signs, whereas in the inverted regime (which has a sloped intersection) the gradients are parallel and hence have the same sign. This implies that the onset temperature, which depends inversely on κ_T − κ_S, will typically be higher in the inverted regime, and therefore that it is more likely to find heavy-atom tunnelling at room temperature there. Additionally, the tunnelling effects in the inverted regime tend to be larger because the instanton action associated with the propagation on the product state contributes with a negative sign, leading to a reduced value of S, while in the normal regime both actions are positive, leading to a slower rate; significant tunnelling can nonetheless occur in the normal regime provided the slopes κ_T/S are large enough.

Although this analysis makes it clear that tunnelling is more likely to be important for nonadiabatic reactions than for adiabatic reactions, it is still a rather surprising finding that there is significant tunnelling of heavy atoms under typical laboratory conditions, especially as these reactions are in the Marcus normal regime, whereas tunnelling is known to be common in the inverted regime from the related field of electron transfer [93, 94]. We computed the 14N/15N KIE for both reactions using two independent instanton calculations. Here we discuss only the low-temperature limit; predictions at higher temperatures can be found in the Supporting Information.
Our result of 1.35 for the isomerization reaction is in excellent agreement with the range of experimental values of 1.18–1.44, and we predict a 14N/15N KIE of 1.4 for the cyclization reaction, which could be verified by future experiments. It is common to measure KIEs as a powerful experimental approach to obtain insight into tunnelling reactions. Our analysis of the instantons above revealed that the NCO-carbon in the isomerization and the oxygen atom in the cyclization contribute even more to the tunnelling pathway than the nitrogens. We hence also computed the 12C/13C and 16O/18O KIEs in the low-temperature limit for the respective reactions and predict even larger values of 1.8 and 2.4. In this case we used a simple approximate scheme, assuming that the instanton pathway would not change significantly upon isotopic substitution.

We have studied the effects of heavy-atom tunnelling on low-temperature spin-crossover reactions of two nitrenes and obtained quantitative agreement with experimental rate constants. To achieve this level of accuracy, it was necessary to employ MRMP2 calculations on top of double-hybrid DFT in order to obtain an adequate description of the PESs. However, even with an accurate description of the electronic structure, meaningful results can only be attained with a state-of-the-art rate theory such as the golden-rule instanton formalism. Our results highlight the shortcomings of the commonly used WC method [43, 44].

O bond. Meanwhile, the core level spectrum of N 1s can be deconvoluted into three subpeaks at 398.6, 399.9 and 400.9 eV, as shown in the figure [37]. The presence of pyridine-like and graphitic N is thought to promote the catalytic activity [26]. Furthermore, weak signals from the P and S elements were also detected.
The core level spectra of P 2p and S 2p are shown in Fig. S2b and c [21, 22]. The formation of C–S–C and P–C bonds will also promote the catalytic performance [21, 22]. Combining the results of XPS with those from the XRD and EDS characterizations, it can be confidently concluded that the N, P and S elements were successfully doped into the porous carbon matrix. The estimated atomic contents of N, P and S are about 7.9, 1.2 and 1.2%, respectively. Doping carbon materials with ternary heteroatoms benefits the catalytic activity in the triiodide reduction process [29–31]. The chemical composition of the TPC material was investigated with energy-dispersive spectroscopy (EDS), and typical EDS mapping images are shown in the corresponding figure.

CV measurements were carried out with LiI and I2 as the redox couple in the supporting electrolyte. Two pairs of oxidation/reduction peaks were observed for the Pt electrode, whereas one typical pair of I3− oxidation/reduction peaks was present for the TPC carbon electrode within the scanning range. During the operation of dye-sensitized solar cells, the produced I3− ions must be efficiently reduced to I− ions at the CE interface [1–6]. Thus, the reduction peak of triiodide is the focus of the CV analysis [18, 20]. The cathodic peak potential for the TPC electrode is very close to that for the Pt electrode. As expected, the cathodic peak current density of the TPC electrode is much larger than that of the Pt electrode, owing to the large surface area of the TPC electrode compared with the Pt CE. These results imply that the as-prepared TPC electrode can effectively catalyze the I3−/I− redox couple, similar to the Pt electrode. The CV curves for the two electrodes were recorded over 100 cycles to check the electrochemical stability. The changes in cathodic and anodic current density for the two electrodes are summarized in Fig.
S3, indicating good stability in the I3−-based electrolyte, surpassing that of the Pt electrode. The catalytic activity of the as-obtained TPC carbon towards triiodide reduction was evaluated by cyclic voltammetry (CV), in comparison with that of the Pt electrode [24, 25, 39].

The Nyquist plots for the TPC and Pt electrodes are displayed in the corresponding figure [19, 24, 25], and the equivalent circuit used for fitting the experimental results is given in Scheme S1. The charge transfer resistances (Rct) for the Pt and TPC electrodes are 0.42 and 0.54 Ω cm², respectively. The two electrodes exhibited nearly identical Rct, well below the 10 Ω cm² needed for highly efficient dye-sensitized solar cells [19]. These results indicate that TPC could be used as an efficient counter electrode for dye-sensitized solar cells. The electrochemical characteristics of the CE were also investigated with electrochemical impedance spectra recorded on a symmetric sandwich device with two identical electrodes.

Consequently, TPC electrodes of different thicknesses and N3-sensitized TiO2 photoanodes were used to assemble solar cells. For comparison, a DSC containing a conventional Pt CE was also fabricated as a reference. The corresponding photocurrent density-voltage curves of the devices show an open-circuit voltage (Voc) of 0.750 V, a short-circuit photocurrent density (Jsc) of 15.64 mA cm−2, and a fill factor (FF) of 0.668. The optimized photovoltaic performance of the TPC-based device could be comparable to that of the conventional Pt-based device [19, 39–42]. EIS of the DSCs with TPC-2 and Pt CEs was measured under light illumination, and the corresponding Nyquist plots are presented in the figure: the high-frequency arc corresponds to the charge transfer resistance at the CE/electrolyte interface (RCT1), the arc at middle frequency is attributed to the charge transfer resistance at the interface of N3-sensitized TiO2/electrolyte (RCT2), and the low-frequency semicircle is ascribed to the diffusion resistance of the redox couple within the electrolyte (ZN).
In most cases, RCT2 overlaps with ZN owing to the use of a liquid-state electrolyte [19, 38]. The fitted curves with the equivalent circuit show that RCT1 for the TPC-2- and Pt-based devices is 1.58 and 1.52 Ω, respectively. The almost equal values of RCT1 for both devices confirm that the as-prepared CE (TPC-2) can catalyze the I3−/I− redox couple as efficiently as the Pt CE. The corresponding values of RCT2 are 18.21 Ω for the TPC-2 CE and 16.81 Ω for the Pt CE; the DSC with a TPC-2 CE showed a slightly larger RCT2 than the Pt-based solar cell. The power conversion efficiency of a device depends on its total resistance [19, 38]; therefore, the RCT1 and RCT2 of the TPC-2 electrode lead to a slightly lower FF for TPC-2-based devices compared to the device with a Pt CE. The high performance of the device containing the TPC CE can be ascribed to the high surface area and well-defined porosity promoting electrolyte diffusion within the electrode, as well as the heteroatom-doping-induced electrocatalytic activity toward I3− reduction. The photovoltaic properties of devices with different CEs were investigated using electrochemical impedance spectra (EIS).

Porous carbons were prepared via a pyrolysis approach using fish waste as the raw material in an inert atmosphere. The N, P and S elements contained in the fish waste were doped simultaneously into the porous carbon matrix during the pyrolysis process. The resultant porous carbons possessed both a large surface area and highly graphitized nanostructures, and thus presented excellent catalytic activity toward triiodide reduction. The optimized DSC with a TPC-2 CE exhibited a power conversion efficiency of 7.83%, which is comparable to that of the device with a Pt CE (8.34%). The results indicate that porous carbon derived from fish waste can catalyze the triiodide reduction in a DSC as efficiently as noble-metal Pt.
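The quoted efficiency can be cross-checked from the J-V parameters reported above. A minimal sketch, assuming the standard AM1.5 input power density of 100 mW cm⁻² (not stated explicitly in this excerpt):

```python
# Power conversion efficiency from J-V parameters: PCE (%) = Voc * Jsc * FF / Pin.
def power_conversion_efficiency(voc_v, jsc_ma_cm2, ff, pin_mw_cm2=100.0):
    """voc_v in V, jsc_ma_cm2 in mA cm^-2, pin_mw_cm2 in mW cm^-2 (AM1.5 assumed)."""
    return voc_v * jsc_ma_cm2 * ff / pin_mw_cm2 * 100.0

# Reported TPC-2 device parameters: Voc = 0.750 V, Jsc = 15.64 mA cm^-2, FF = 0.668
pce = power_conversion_efficiency(0.750, 15.64, 0.668)
print(f"PCE = {pce:.2f} %")  # ~7.84 %, consistent with the reported 7.83 %
```

The small difference from the published 7.83% is within rounding of the individual J-V parameters.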
The idea of "making waste profitable" reported here could be suitable for exploring low-cost non-noble-metal catalysts in a wide variety of applications. Ternary heteroatom-doped porous carbons were thus successfully prepared. There are no conflicts to declare."} {"text": "P < 0.05); comprehensive examination showed that 39 patients had obstruction at the ureter-bladder anastomosis, 13 cases had rejection, 10 cases had perirenal hematoma, 5 cases had renal infarction, and 22 cases had no complications; the diagnostic sensitivity, specificity, accuracy, and consistency of the observation group were higher than those of the control group (P < 0.05). In the control group, the sensitivity, specificity, and accuracy in the diagnosis of complications after renal transplantation were 66.5%, 84.1%, and 78.32%, respectively; in the observation group, they were 67.8%, 86.7%, and 80.6%. The DRSA-U-Net denoising algorithm can clearly display the information of MRI images of the kidney, ureter, and surrounding tissues, improve diagnostic accuracy for complications after renal transplantation, and has good clinical application value. This study explored the diagnostic value of magnetic resonance imaging (MRI) optimized by a residual segmentation attention dual-channel network (DRSA-U-Net) in the diagnosis of complications after renal transplantation, to provide a more effective examination method for the clinic. 89 patients with renal transplantation were selected retrospectively, and all underwent MRI. The patients were divided into a control group and an observation group (MRI image diagnosis based on DRSA-U-Net). The accuracy of the MRI images in the two groups was evaluated against the comprehensive diagnostic results.
The root mean square error (RMSE) and peak signal-to-noise ratio (PSNR) of DRSA-U-Net on the T1WI and T2WI sequences were better than those of U-Net and dense U-Net. The development of renal transplantation has gone through a long process, and it has now become the leading procedure in the field of peripheral organ transplantation. At present, the process of renal transplantation has been standardized; the new triple-suppression regimen exerts the greatest immunosuppressive effect with the least drug toxicity; the first-year survival rates of patients and grafts are more than 96% and 91%, respectively; and the quality of life of patients is significantly improved. U-Net was proposed by Marticorena Garcia et al. This study was aimed at exploring the diagnostic value of optimized MRI based on DRSA-U-Net in the diagnosis of complications after renal transplantation and at providing a more effective examination method for the clinic.

In this study, 89 patients who underwent renal transplantation in hospital from March 2020 to March 2021 were included and examined by MRI at 2 weeks after the operation. The patients were randomly divided into the control group and the observation group (MRI image diagnosis based on DRSA-U-Net). There were 59 males and 30 females, aged 27-46 years, with a mean age of 42 years. 76 transplanted kidneys were located in the right iliac fossa and 13 in the left iliac fossa; 45 patients developed tenderness 30 days after surgery; 37 had anuria 9 days after surgery; and 7 had postoperative fever and abdominal pain.
This study was approved by the ethics committee of the hospital, and the patients' families signed the consent form. Inclusion criteria: all patients received renal transplantation; patients followed the doctor's advice and actively cooperated with the treatment. Exclusion criteria: a history of contrast-medium allergy; allergy to the drugs used; other types of serious disease; dysfunction of the heart, liver, spleen, or other important organs.

A superconducting MRI scanner was used; the gradient was 32 mT/m, the switching rate was 132 T m−1 s−1, and a flexible phased-array circular polarization coil was used. Examination order and parameters: referring to the clinical symptoms, laboratory tests, and ultrasound results of each patient, all patients underwent routine renal MRI and magnetic resonance urography (MRU) with different sequences. Sequence and main parameters: axial spin echo (SE) T1WI, TR/TE 112~124 ms/4.85 ms; fast spin echo (FSE) fat-suppressed T2WI, TR/TE 2,121~2,406 ms/132 ms; coronal TRUFI, TR/TE 5.1 ms/2.56 ms; slice thickness 7 mm; spacing 2.1 mm; number of signal averages 2-4; acquisition matrix 258 × 258.
T1WI was acquired without fat suppression, and T2WI was scanned three times with and without fat suppression; the field of view (FOV) was 34 cm × 38 cm. Renal MRU examination: single-shot fast spin-echo (SSFSE) thick-slab T2WI, TR/TE infinite, slab thickness 82 mm, interval 0, FOV 4,002 mm × 402 mm, acquisition matrix 314 × 258; thin-slice T2WI, TR/TE 1,123 ms/567 ms, flip angle 152, slice thickness 5 mm, interval 0, FOV 352 mm × 352 mm, acquisition matrix 258 × 158.

The r-Softmax attention mechanism is used to extract features, and the extracted features and the output of each branch are multiplied and added to obtain a feature layer with the same dimension as the input. In the parallel spatial-channel squeeze-and-excitation module, DRSA-U-Net adds an scSE module after each skip connection in U-Net to compress and excite the T1WI and downsampled T2-weighted MRI images; this extracts the effective information in each channel and spatial location and enhances the learning efficiency of the network. In conventional optimizers the learning rate remains unchanged and parameter updates are driven only by the gradient; reparameterizing the weights as w = b·ŵ equivalently achieves an adaptive learning rate, with the global learning rate denoted Ψ.

DRSA-U-Net is based on the traditional U-Net structure. The left side is composed of three encoders. There is a 65 × 4 × 4 convolution layer before the first encoder, which expands the two-channel input to 65 channels. The first encoder is composed of three independent attention-residual modules and average pooling.
The second encoder is composed of four independent attention-residual modules and average pooling, and its output has 65 channels; the output image is then downsampled. The third encoder is composed of five independent attention-residual modules and average pooling, with 256 output channels. The decoder side is composed of three decoders, each containing an interpolation module, and the decoder corresponding to each encoder is connected through a skip connection to a convolution layer. A squeeze-and-excitation attention module is also added to the second and third decoding units. The interpolation module upsamples the image by nearest-neighbour interpolation, the squeeze-and-excitation attention module extracts channel and spatial information, and the convolution layer fuses the feature channels. The output channels after each decoder are 129, 65, and 17. At the end of the network, the outputs are merged into one channel through a convolution layer.

The examination results of all patients were analyzed by two independent reviewers, and a consensus was reached on controversial results after discussion. Observations: whether there was an abnormal signal in the transplanted kidney, whether the corticomedullary boundary was clear, whether the ureter was unobstructed, whether there was an abnormal perirenal signal, whether the transplanted renal vessels were unobstructed, and whether the cortical enhancement density was uniform. Statistics were completed using SPSS 16 software; measurement data were expressed as mean ± standard deviation, and P < 0.05 was considered statistically significant.

The image quantification indexes of synthetic T2WI under different inputs of the U-Net, Dense-U-Net, and DRSA-U-Net networks were compared; the data are the mean ± standard deviation over the synthetic images of all layers in the test set.
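The squeeze-and-excitation attention described above follows the general scSE pattern (concurrent spatial and channel recalibration). A minimal NumPy sketch of that pattern, using random stand-in weights (the paper's learned parameters and exact fusion rule are not given here, so this is illustrative only):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scse(feat, w_sq, w_ex, w_spatial):
    """Concurrent spatial & channel squeeze-and-excitation on a (C, H, W) map.

    w_sq (C//r, C) and w_ex (C, C//r) are the channel-excitation weights;
    w_spatial (C,) plays the role of a 1x1 spatial-squeeze convolution.
    All weights are hypothetical stand-ins for learned parameters.
    """
    # Channel excitation: global average pool -> two FC layers -> sigmoid gate
    z = feat.mean(axis=(1, 2))                              # (C,)
    gate_c = sigmoid(w_ex @ np.maximum(w_sq @ z, 0.0))      # (C,) in (0, 1)
    cse = feat * gate_c[:, None, None]
    # Spatial excitation: 1x1 conv across channels -> per-pixel sigmoid gate
    gate_s = sigmoid(np.tensordot(w_spatial, feat, axes=1)) # (H, W) in (0, 1)
    sse = feat * gate_s[None, :, :]
    return np.maximum(cse, sse)  # elementwise max fusion of the two paths

rng = np.random.default_rng(1)
x = rng.random((8, 4, 4))  # toy feature map: 8 channels, 4x4 spatial
out = scse(x, rng.random((2, 8)), rng.random((8, 2)), rng.random(8))
print(out.shape)  # (8, 4, 4)
```

Because both gates lie in (0, 1), the module only rescales (never amplifies) the non-negative toy features while preserving the feature-map shape, which is what lets it sit after each skip connection without altering the U-Net tensor dimensions.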
When 1/4-downsampled T2WI was added to the input to synthesize T2WI, the mean peak signal-to-noise ratio (PSNR) of the DRSA-U-Net-synthesized T2WI was enhanced by about 0.5 dB and 0.9 dB compared with U-Net and Dense-U-Net, respectively, and the mean root mean square error (RMSE) was reduced by about 0.03 and 0.02. When 1/8-downsampled T2WI was added to the input, the PSNR of the DRSA-U-Net-synthesized T2WI was enhanced by about 1.6 dB and 1.9 dB compared with U-Net and Dense-U-Net, and the RMSE was reduced by about 0.02 relative to both. Therefore, regardless of the downsampling rate of the T2WI added to the input, the fidelity of the T2WI synthesized by DRSA-U-Net was good (P < 0.05); its PSNR and RMSE were the best. In terms of visual effect, the T2WI synthesized by the U-Net, Dense-U-Net, and DRSA-U-Net networks was very similar to the real T2WI, but the degree of blurring differed. To observe the details, a selected part of the composite image (blue box) was magnified, revealing differences in detail and texture between the composite images. The T2WI synthesized by the DRSA-U-Net network was closest to the real image, with the least blurring.
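The PSNR and RMSE figures quoted above follow the standard definitions. A minimal sketch, assuming image intensities normalized to [0, 1] (the paper's exact normalization is not stated in this excerpt):

```python
import numpy as np

def rmse(ref, test):
    """Root mean square error between two images of equal shape."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return float(np.sqrt(np.mean((ref - test) ** 2)))

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 20*log10(data_range / RMSE)."""
    e = rmse(ref, test)
    return float("inf") if e == 0 else 20.0 * float(np.log10(data_range / e))

# Toy demonstration: a "ground-truth" slice plus mild Gaussian noise.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
noisy = np.clip(truth + rng.normal(0.0, 0.05, truth.shape), 0.0, 1.0)
print(f"RMSE = {rmse(truth, noisy):.3f}, PSNR = {psnr(truth, noisy):.1f} dB")
```

Note that a 0.5-0.9 dB PSNR gain at fixed data range corresponds directly to the small RMSE reductions reported, since PSNR depends only logarithmically on RMSE.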
The results of comprehensive examination of the 89 patients were as follows: 39 cases of hydronephrosis of the transplanted kidney, with hydronephrosis of the renal pelvis, ureteral dilatation, and obstruction at the bladder; 13 cases of rejection, with blurred corticomedullary demarcation, slightly increased parenchymal signal intensity on T2WI, significantly increased fat-suppressed signal on the T2 sequence, blurred corticomedullary structure, and abdominal effusion; 10 cases of perirenal hematoma formation, with hematoma around the transplanted kidney, ureteral compression, and hydronephrosis; and 5 cases of vascular occlusion of the transplanted kidney with local renal infarction 2 days after the operation. There were no complications in 22 patients.

The sensitivity, specificity, and accuracy of conventional MRI images (control group) in the diagnosis of complications after renal transplantation were 66.5%, 84.1%, and 78.32%, respectively; those of the MRI images based on DRSA-U-Net (observation group) were 67.8%, 86.7%, and 80.6%.

Rejection of the transplanted kidney involves a series of cellular and humoral immune reactions of the recipient to graft antigens and can occur in up to 91% of patients. There are few reports of vascular occlusion and regional renal infarction in the transplanted kidney; in this study, 5 patients were diagnosed with vascular occlusion and regional renal infarction by MSCTA after renal transplantation, which was confirmed by surgery. Perirenal bleeding after renal transplantation, forming a periureteral hematoma that compresses the ureter and causes hydronephrosis, is a rare complication that requires definitive diagnosis and urgent surgical treatment [21].
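The sensitivity, specificity, and accuracy percentages above follow the usual confusion-matrix definitions. A small sketch with hypothetical counts (the paper reports only percentages, not the underlying 2×2 table):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from 2x2 confusion-matrix counts.

    tp/fn: complication cases correctly/incorrectly classified;
    tn/fp: complication-free cases correctly/incorrectly classified.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Illustrative counts only; they are not the study's actual data.
sens, spec, acc = diagnostic_metrics(tp=45, fn=22, tn=19, fp=3)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, accuracy={acc:.1%}")
```

Consistency between two readers or two methods would additionally require an agreement statistic such as Cohen's kappa, which is not reproduced here.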
In conclusion, the DRSA-U-Net denoising algorithm can clearly show the information of MRI images of the kidney, ureter, and surrounding tissues and improve the diagnostic accuracy of complications after renal transplantation, giving it good clinical application value. However, the sample size is small, and clinical trials should be conducted in multicenter hospitals with large sample sizes rather than in a single or small area. To exploit the spatial information of three-dimensional data, new multimodality images with strict interlayer registration could be acquired to further verify the relationship between multilayer input and interlayer registration. Network performance also needs to be validated and optimized on multiple different data sets to improve the generalization capability of the network."} {"text": "However, these batteries following the Li2CO3-product route usually deliver low output voltage (<2.5 V) and energy efficiency. Besides, Li2CO3-related parasitic reactions can further degrade battery performance. Herein, we introduce a soluble binuclear copper(I) complex as a liquid catalyst to achieve Li2C2O4 products in Li–CO2 batteries. The Li–CO2 battery using the copper(I) complex exhibits a high electromotive voltage of up to 3.38 V, an increased output voltage of 3.04 V, and an enlarged discharge capacity of 5846 mAh g−1, and it shows robust cyclability over 400 cycles with the additional help of a Ru catalyst. We reveal that the copper(I) complex can easily capture CO2 to form a bridged Cu(II)-oxalate adduct, which is subsequently reduced during discharge. This work increases the output voltage of Li–CO2 batteries to higher than 3.0 V, paving a promising avenue for the design and regulation of CO2 conversion reactions. Li–CO2 batteries following the Li2CO3-product route suffer from low output voltage and severe parasitic reactions.
Herein, a soluble binuclear copper(I) complex is introduced as a liquid catalyst to achieve Li2C2O4 products in Li–CO2 batteries, which increases their output voltage to higher than 3.0 V.

Varieties of CCU technologies that can convert CO2 into value-added chemicals, such as methane dry reforming [8], hydrogenation [10], electrochemical reduction [12], and photocatalytic reduction [14], have been developed. In the past decade, the Li–CO2 battery was proposed as a kind of energy storage device, offering an attractive tactic to utilize CO2 and produce electrical energy [17]. A typical Li–CO2 battery is composed of a lithium metal anode separated by an aprotic electrolyte from a porous CO2 cathode. The typical reversible reaction at the cathode involves the reduction of CO2 to form Li2CO3 and carbon on discharge, and the process is reversed on charge; this chemistry is gaining increasing attention in the field of CO2 utilization. However, practical Li–CO2 batteries usually present discharge voltages of around 2.5 V, sometimes even lower than 2.0 V in previous reports [23]. Generally, the quality of electrical energy is determined by the voltage supplied, and an output voltage lower than 3.0 V leads to low-quality electrical energy [24]. Obviously, the actual output voltage of Li–CO2 batteries is far lower than even the theoretical value, which is itself not high. Apart from the thermodynamics of the reaction, the voltage that the battery can provide depends on the catalytic activity of the catalysts and the transport properties of charge and mass in the bulk and between phase boundaries. It should be explained here that the catalytic characteristics of solid catalysts make it difficult to raise the battery discharge voltage, as illustrated in the figure.
Specifically, three solid phases (the solid catalyst, Li2CO3, and carbon) coexist on the cathode surface, while the liquid phase contains Li ions and dissolved CO2 in the electrolyte. CO2 reduction during discharge occurs at the catalyst/electrolyte interface, and the effect of the catalytic reaction partially depends on the catalytic surface area of the solid catalyst particles on which CO2 is reduced. The sluggish kinetics of charge transfer and mass transport across the multiphase interfaces aggravate the large voltage hysteresis. What's more, the active sites of solid catalysts become occupied by insulating and insoluble Li2CO3 products, leading to their deactivation [32].

On this basis, much effort with solid catalysts, including carbon allotropes [26], noble metals [28], and transition metal oxides [30], has been exerted in raising the discharge voltage and reducing the charge voltage. Although they can remarkably reduce the charging overpotential, they have minimal effect in increasing the discharge voltage. It is worth mentioning that the discharge products also affect the charging performance of the next cycle. In accordance with previous reports, electrochemical decomposition of Li2CO3 itself usually occurs irreversibly during the charging process of Li–CO2 batteries [33, 34] and causes severe parasitic reactions. The gradual accumulation of irreversible byproducts threatens the stability of batteries. Thus, a catalyst designed with the strategy of a Li2CO3-free pathway might be a good choice.

Liquid catalysts (redox mediators, RMs) can participate in the CO2 reduction process, which is effective in reducing the discharge overpotential. The reported liquid catalysts, including 2,5-di-tert-butyl-1,4-benzoquinone [35], 2-ethoxyethylamine [36], and tris-dichloro-ruthenium(II) [37], can promote the discharge potential to a certain extent. However, batteries involving these catalysts still follow the Li2CO3 pathway. It might be better for the battery to discharge without taking the Li2CO3 path at all. As depicted in the figure, dissolved CO2 and catalytic RMs mix at the molecular level in the liquid phase of the electrolyte. The typical discharge process contains two steps: first, RM molecules capture CO2 to form RM–CO2 species.
The newly formed molecules then gain electrons at the cathode and are reduced back to the original RM and the corresponding products (such as Li2C2O4). The liquid catalyst has fuller contact with CO2 at the molecular level in the liquid phase, which effectively improves the reaction kinetics. The electrochemical redox process of RM–CO2 at the electrode replaces the direct electrochemical reduction of CO2. This process allows the battery's output voltage to be adjusted to above 3.0 V by selecting and designing RM molecules; moreover, using a liquid catalyst rather than a solid one can reduce the number of phases involved in the CO2 reduction (Fig. 2).

Apart from the abovementioned conventional liquid catalysts, some soluble metal complexes can also catalyze the electrochemical reduction of CO2 to oxalate chemicals [38]. This condition inspires us to introduce the catalytic effect of metal complexes into the design of Li–CO2 batteries. Moreover, Li2C2O4 as an electrochemical product can take the battery out of the troublesome Li2CO3 pathway. Herein, we introduce a binuclear copper(I) complex (denoted as Cu(I) RM) as the liquid catalyst in Li–CO2 batteries and study the battery performance, including the discharge potential, capacity, and cycle performance, in detail. In addition, we use a variety of spectroscopic analysis techniques, such as Raman spectroscopy and differential electrochemical mass spectrometry (DEMS), to explore the Li2CO3-free path experienced by the discharge process of the cathode. Furthermore, we employ an additional catalyst containing Ru nanoparticles to reduce the charge overpotential synergistically. This study increases the output voltage of Li–CO2 batteries to more than 3.0 V, which strongly promotes the practical application of this electric energy storage system.

The binuclear Cu(I) RM was synthesized by the reaction of the disulfide ligand with two equivalents of [Cu(CH3CN)4]BF4 in dry acetonitrile (MeCN) [39] (details of the preparation procedure are provided in the Methods section).
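The two-step, RM-mediated discharge described above can be sketched as the following scheme; the oxalate stoichiometry shown is the standard one for CO2-to-oxalate coupling and is given here as an illustration rather than as the authors' own notation:

```latex
% Step 1: the redox mediator captures CO2 in the electrolyte
\mathrm{RM} + \mathrm{CO_{2}} \longrightarrow \mathrm{RM\text{--}CO_{2}}
% Step 2: reduction at the cathode regenerates RM and forms the oxalate
2\,\mathrm{RM\text{--}CO_{2}} + 2\,\mathrm{Li^{+}} + 2\,e^{-}
  \longrightarrow 2\,\mathrm{RM} + \mathrm{Li_{2}C_{2}O_{4}}
% Net cathode reaction (Li2CO3-free):
2\,\mathrm{CO_{2}} + 2\,\mathrm{Li^{+}} + 2\,e^{-}
  \longrightarrow \mathrm{Li_{2}C_{2}O_{4}}
```

In principle, charging then needs only to reverse the oxalate chemistry rather than decompose insulating Li2CO3.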
1H nuclear magnetic resonance spectroscopy (1H NMR) and electrospray ionization mass spectrometry (ESI-MS) were performed first to verify the molecular structure of the prepared ligand. As shown in the Supplementary Information, a mass-to-charge (m/z) ratio of 545.25 matches well with that calculated for [C30H37N6S2]+ of the ligand. The binuclear Cu(I) RM was then synthesized by the reaction of the disulfide ligand with two equivalents of [Cu(CH3CN)4]BF4 in MeCN: the [Cu(CH3CN)4]BF4 was added to the MeCN solution containing the ligand in an Ar glove box to acquire the Cu(I) RM. A prominent signal at an m/z ratio of 335.05 in the Supplementary Information matches that calculated for [Cu2C30H36N6S2]2+, confirming the successful synthesis of the target complex.

Ru@Super P was prepared in accordance with our previous study [28]: RuCl3·xH2O (50 mg) was dissolved in 100 mL of ethylene glycol. Super P carbon (80 mg) was then added to the solution, and the suspension was stirred for 3 h at 170 °C in an oil bath. After cooling to room temperature, the mixture was filtered and washed several times with deionized water and ethanol. The final product was dried at 120 °C under vacuum for 12 h.

1H NMR (Bruker DRX500) was applied to analyze the molecular structure of the ligand, and ESI-MS (Agilent 6460) was performed to collect information on the ligand and the Cu(I) RM. All samples were transferred between characterization instruments by using an air-tight sample module.

The working electrode for CV tests was commercial glassy carbon (Φ 3 mm) that was polished elaborately prior to use. The counter electrode was obtained by rolling a mixture of LiFePO4, Super P carbon, and polytetrafluoroethylene (PTFE) binder (W:W:W = 80:10:10) into a film (1.0 × 1.2 cm) and pressing it onto a stainless steel (SS) current collector. The pre-charged LixFePO4 (x = 0.9) electrode was applied as the reference electrode, which had a stable potential of 3.45 V versus Li/Li+.
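As a quick consistency check, the two quoted m/z assignments can be reproduced from monoisotopic atomic masses; the short script below is our illustration, not part of the original methods:

```python
# Sanity-check the quoted ESI-MS assignments from monoisotopic atomic masses.
# Targets from the text: m/z 545.25 for the ligand cation [C30H37N6S2]+
# and m/z 335.05 for the binuclear complex dication [Cu2C30H36N6S2]2+.
MONO = {
    "C": 12.0,        # 12C (exact by definition)
    "H": 1.007825,    # 1H
    "N": 14.003074,   # 14N
    "S": 31.972071,   # 32S
    "Cu": 62.929598,  # 63Cu
}
E_MASS = 0.000549  # electron rest mass in u


def mz(formula: dict, charge: int) -> float:
    """m/z of a cation: summed atomic masses minus the removed electrons, per charge."""
    mass = sum(MONO[el] * n for el, n in formula.items())
    return (mass - charge * E_MASS) / charge


print(round(mz({"C": 30, "H": 37, "N": 6, "S": 2}, 1), 2))           # 545.25
print(round(mz({"Cu": 2, "C": 30, "H": 36, "N": 6, "S": 2}, 2), 2))  # 335.05
```

Both calculated values agree with the reported signals to within 0.01, which is what confirms the formula assignments.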
The galvanostatic discharge-charge measurements were conducted in Swagelok batteries, comprising a Super P carbon (or Ru@Super P) cathode, a pre-charged LixFePO4 anode (Φ 12 mm), a glass fiber separator, and a gas chamber. The cathode was prepared by rolling a mixture of Super P carbon (or Ru@Super P) and PTFE binder (W:W = 85:15) into a film and pressing it onto a SS mesh. The mass loading of the electrode was 0.5 ± 0.2 mg cm−2. The thickness of the glass fiber separator was 675 μm. All the electrodes were dried at 120 °C under vacuum for at least 12 h before assembly. LiClO4 (0.1 M) in MeCN, with or without Cu(I) RM (0.5 mM), was employed as the electrolyte, and the amount of electrolyte in each battery was about 300 μL. A three-electrode glass cell with the glassy carbon working electrode was first used to conduct CV tests. Galvanostatic tests were performed on LAND 2001A Battery Testing Systems at 25 °C under an Ar or CO2 atmosphere. The batteries were discharged and charged at a specific current of 100 mA g−1 with potential cut-offs of 2 V and 4.8 V. Galvanostatic discharge/charge cycling tests were conducted at a constant current density of 100 mA g−1 or 200 mA g−1 and a fixed capacity of 1000 mAh g−1. All current densities and capacities were normalized by the mass of active material on the cathode. The specific energy based on the active substance on the cathode was the product of specific capacity and output voltage. CV measurements were carried out on an electrochemical workstation at 25 °C under an Ar or CO2 atmosphere. FTIR measurements were conducted on an FTIR spectroscope over a wavenumber range of 4000–450 cm−1 with a resolution of 1.0 cm−1.
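The specific-energy convention stated above amounts to a one-line calculation; the sketch below (ours, with illustrative numbers drawn from the cycling conditions and voltage targets discussed in the text) makes the unit handling explicit:

```python
# Specific energy of the cathode active material:
# (mAh/g) x (V) = mWh/g, divided by 1000 to give Wh/g.
def specific_energy_wh_per_g(capacity_mah_g: float, voltage_v: float) -> float:
    return capacity_mah_g * voltage_v / 1000.0


# Fixed cycling capacity of 1000 mAh/g at the >3.0 V output targeted here:
print(specific_energy_wh_per_g(1000, 3.0))  # 3.0
# The same capacity at a typical ~2.5 V discharge of conventional cells:
print(specific_energy_wh_per_g(1000, 2.5))  # 2.5
```

The comparison illustrates why raising the output voltage from 2.5 V to above 3.0 V increases the delivered specific energy by 20% at identical capacity.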
The states of surface elements on the cathodes were characterized through XPS (Thermo Fisher Scientific Model K-Alpha spectrometer) equipped with Al Kα radiation (1486.6 eV) at a working voltage of 12 kV and a current of 10 mA. The morphology of cathodes was observed by SEM (Hitachi SU8010). The microstructure was further characterized by TEM (FEI TF20), and the SAED pattern was collected with a Gatan charge-coupled device camera. To evaluate the interaction between Cu(I) RM and CO2 in the electrolyte, ESI-MS and UV–vis absorption spectra were collected; the UV–vis spectra were recorded on a UV–vis spectrophotometer. XRD analysis was performed to analyze the crystalline structure of the catalyst by employing a Bruker D8 Advance diffractometer with Cu–Kα radiation (λ = 1.5406 Å) at a scan rate of 0.064° s−1. TG was carried out on an SDT Q600 TA instrument over a temperature range of 25–800 °C in O2 gas, with a heating rate of 5 °C min−1.

The discharged and recharged electrodes were washed with MeCN and dried sufficiently before characterization. The components of electrodes and electrolytes during different reaction stages were recorded by Raman spectroscopy with the excitation light of an air-cooled He–Ne laser at 633 nm through a 50× long-working-distance lens (Leica Microsystems Inc.). To obtain clear signals on the spectra and avoid degradation of the carbon cathode, the acquisition time was set to 120 s at 10% laser power. The resolution of the Raman spectroscopy was around 1.0 cm−1.

With regard to the chemical reaction in the in situ DEMS measurements, a CO2/Ar mixture (V:V = 9:1) was purged continuously to eliminate residual air first, after which 0.5 mM Cu(I) RM in MeCN (1 mL) was injected into the sealed vessel, and the remaining gas was purged into the mass spectrometer chamber (PrismaPro QMG 250 M2). The sealed vessel was connected via two PEEK valves to the purge gas system.
The electrochemical reaction was conducted in a home-made Li–CO2 battery mold with two PEEK valves connected to a quadrupole mass spectrometer with a turbomolecular pump (Pfeiffer Vacuum). During the charge process, ultrapure Ar was employed as the carrier gas at a flux of 0.5 mL min−1. The DEMS battery was likewise cycled on LAND 2001A Battery Testing Systems. In situ DEMS measurements were thus performed for both chemical and electrochemical reactions."} {"text": "The mutual influences of social epidemiology and ideas of justice, each on the other, have been seminal in the development of public health ethics and law over the past two decades, and to the prominence that these fields give to health inequalities and the social—including commercial, political, and legal—determinants of health. General and political recognition of injustices in systematised health inequalities has further increased given the crushingly unequal impacts of the COVID-19 pandemic, including impacts of the legal and policy responses to it. However, despite apparent attention from successive UK governments to injustices concerning avoidable inequalities in health opportunities and outcomes, significant challenges impede the creation of health laws and policy that are both effective and ethically rigorous. This article critically explores these points. It addresses deficiencies in a UK health law landscape where health care contexts and medico-ethical assumptions predominate, to the great exclusion of broader social and governmental influences on health. The article explains how a public health framing better serves analysis, and engages with a framework of justice-oriented questions that must be asked if we are to understand the proper place and roles of law and regulation for the public’s health.
The links between social justice and health inequalities are empirical as well as critical philosophical matters. A forceful representation of the challenges of structural causes of (ill) health, and of the problems of relative powerlessness to respond to them simply through enjoining individual responsibility or personal choice, is found in a table produced by David Gordon and colleagues in the University of Bristol’s Townsend Centre for International Poverty Research.

In this article, our aim is to contribute to critical discourses on the place of law in the context of such debates in the UK, and in particular England. As already indicated, this spans:

- the practical impacts (for better and worse) of legal forms of regulation;
- the broader concept of law itself as an overall social phenomenon and source of normative ideas and ideals; and
- the contributions of legal scholarship to practical agendas concerning health inequalities.

The article adds to the growing academic field of public health law. In what follows, we therefore look to the question of securing the critical underpinnings to claims about how laws, law, and legal scholarship do, can, and should address health inequalities in the UK (in particular with a focus on England). More negatively, this involves a challenge to narrow, medico-ethical and medico-legal framings, and to the predominance of paradigms that pretend to ethical neutrality and/or perpetuate impossible demands on individual responsibility through a fixation on civil and political rights to non-interference by the state. More positively, our analysis involves a practical representation of how philosophically-driven approaches from public health may be combined with practical questions in critical legal theory and philosophy to assist engagement and, with the right political will, the achievement of a fairer society.
Section II of the article explains why health inequalities are wrongly addressed as a question for medical law and indicates how insights from public health literatures lend perspectives that otherwise could not be drawn. Section III expands on this, identifying in greater and critical detail what it means to take a public health approach. Section IV, with reference to practical questions of justice that have been seen in the context of the COVID-19 pandemic and government responses to it, then introduces a framework that is designed to promote a better marrying of the empirical and critical questions that must be engaged if we are to understand and respond to health inequalities as a problem of law and social justice.

English medical law, by design, presents conceptual and analytical frameworks that are blind to inequalities. Given the tightness—even the ‘symbiotic’ nature—of its relationship with medical ethics, English medical law centres on the individual patient and her values, beliefs, wishes, and feelings, however eccentric or irrational these may be. With that said, it is right to acknowledge that the National Health Service Act 2006 provides an obligation on the Secretary of State for Health and Social Care to ‘have regard to the need to reduce inequalities between the people of England with respect to the benefits that they can obtain from the health service’. Against that framing, however, ‘the system’ more widely, and trends regarding different groups or communities within it—including patterns related to health inequalities—are simply irrelevant. ‘Treating patients right’ within English medical law means drawing from a library of contextually-contained rights that would apply at the point of receiving health care. It would, therefore, be odd to imagine that one could or should generalise from principles governing clinical interactions to all other areas of interpersonal and political morality.
Nevertheless, the idea of the libertarian person found in the rights-holding medical patient carries a great deal of weight more widely in bioethical thought in the UK, both within and beyond the biomedical sphere. Yet an historical imperative to give greater recognition to patients’ rights to non-interference need not imply a writ-large endorsement of narrowly libertarian systems of rights, duties, and state powers more generally. Even so, arguments are advanced on the basis, essentially, of medico-ethical norms driving public policy more widely, rather than things working the other way around: norms of and for medicine are given as the starting point for questions regarding health, where the starting point should cover the whole of the contexts embraced by politics and political decision-making. What is remarkable in this is not the focus that has been given to patients’ negative rights. Rather, it is the unnecessary and unargued affirmation it gives to wholesale political theories and, for instance, a general rejection of paternalism, or a general assumption of empowerment being assured through the securing of negative rights.

Population [health] interventions … that focus on providing advice, guidance, and encouragement rely heavily on individuals being able and motivated to engage with this advice, guidance and encouragement.
These types of interventions have been described as highly “agentic”: recipients must use their personal resources, or “agency,” to benefit.

However, just as a great deal of bioethical scholarship may draw too quickly—or without adequate analytical scrutiny—from works in liberal political philosophy, so it is the case that public health agendas, and critical analyses of public health responsibilities in England, are advanced in the shadow of such works. Policy measures—including the UK Government’s public health plans for England following the onset of the COVID-19 pandemic—have tended to rest on just such agentic approaches. A challenge for health law scholarship that aims to look beyond clinical encounters is therefore to revisit the foundational questions of social theory and political philosophy that secure assumptions about what is impermissible, permissible, to be encouraged, or outright mandated. To take seriously concerns about health inequalities, and to be able to frame these as questions of health (in)justice, we should not start from medico-ethical norms. Equally, we need to be prepared to engage with and potentially challenge the libertarian norms more generally that support such policy approaches, including through empirical evidence that undermines the concepts on which their normative conclusions are based.

One of the leading scholars on health justice in the UK and globally, Sridhar Venkatapuram, has made significant contributions given epidemiological research on the socially-determined influences on health that we highlighted in the introduction to this article:

If social factors are identified as determining such significant aspects of human well-being as mortality and morbidity, the moral responsibility for ill health and health inequalities expands beyond the individual to include social institutions and processes.
Venkatapuram’s position may be seen to raise arguments that run in two directions in such an exercise. His arguments rest on the matter of demonstrable, empirical fact that individuals alone are not empowered to account for or respond to all of the impacts and influences on their health. The question of practical responsibility for health does not therefore move wholesale away from asking whether, why, and how individuals can and should be responsible for their own health. But it also calls into the framing—and morally implicates—other actors and institutions. What this means for ultimate moral, political, and legal responsibility is a separate question. There is a difference between identifying regrettable consequences of our social and political systems and identifying moral failures in political and social responsibility. But crucially, we should not accept philosophical arguments that hold that individual responsibility is sufficient to address responsibility for health where they do so on the basis that individuals alone can determine their health outcomes: to quote again from Gordon and colleagues’ ‘Top Ten Tips for Health’, a person cannot simply choose, for example, not to be poor.

To conclude this section, we therefore observe that UK medical ethics and law have developed with predominant assumptions and framings that are ill-suited to addressing health inequalities and associated questions of justice. In the next section of the article, we explain and show how public health approaches are, by contrast, well equipped to problematize questions of health inequalities, and to help identify solutions better to address them through justice-oriented law and policy. As we have argued here, medical law’s contained focus does not allow the development and application of assumptions that may straightforwardly carry into questions of policy writ large.
They may even, problematically, be taken without due analysis to affirm and endorse the general soundness and applicability of normative assumptions, for instance, concerning the meaning and scope of individual responsibility for health. In its more doctrinal senses, medical law does not ask, and thus cannot answer, the greater questions concerning health inequalities. And in its more critical and philosophical aspects, it also fails in this regard. Biomedical ethics may include regard to questions of justice; justice is, for example, one of the canonical four principles of biomedical ethics.

As we will explain, a public health perspective presents various (broadly) unifying conceptual and normative themes. However, to begin to understand how a public health framing may better serve health law analyses, it is important to appreciate that the term ‘public health’ covers multiple, quite distinct ideas, professional identities, areas of policy, scientific approaches, and indeed ideological perspectives. Similarly, in practice and policy, there are core identifiable functions and related government powers that are centrally public health in nature; for example, functions in monitoring and responding to outbreaks of infectious diseases under powers provided in the Public Health (Control of Disease) Act 1984. In relation to infectious disease, the potential reach of these powers is well represented too by the Coronavirus Act 2020. But public health concerns are far more extensive still, and span across government departments and sectors; for instance, education, employment, environment, housing, town-planning, and transport—to name just some that we could list—all draw in salient responsibilities regarding the public’s health.
Equally, health features as an important consideration when evaluating the rationales for, and proportionality of, public policy; notably as an express consideration given in qualifications to legally protected human rights such as the right to respect for private and family life.

A single, preclusive characterisation of public health cannot, therefore, be given. It is, though, possible to discern particular features of ideas of public health that sit at the intersection of different understandings of what it means. These in turn circumscribe particular scientific (broadly conceived) approaches and matters of practical concern. Robert Beaglehole and colleagues capture this very effectively with the pithy definition of public health as ‘[c]ollective action for sustained population-wide health improvement’. As explained in Geoffrey Rose’s seminal paper ‘Sick Individuals and Sick Populations’, public health sciences look to what we learn when we make observations about health by studying populations. And just as observations and understanding may differ when we look at a population level, so may our interventions when ‘treatment’ is of the ‘population as a patient’. In the phrase of Richard Horton, editor of The Lancet, ‘public health is the science of social justice’. The question is not whether values are at play, but which moral values should inform the core of public health and from there come to direct policy and practice. Section II of this article has shown how and why medico-legal framings are ill equipped for this task. Exploring the question from a cross-societal, cross-sector, population perspective allows us to see how studies in public health ethics and law may much better engage with questions of inequality and injustice. In its core senses, public health is therefore unavoidably political, intertwining concerns for scientific rigour with ideas about ethical values and social equity.
This has direct implications for the roles and remits of public institutions and the communities that they serve. As Kathryn MacKay argues, it is sometimes the case that public health ethics is represented as espousing a blunt and monistic, maximising moral system (often presented as utilitarianism). In line with MacKay’s observation and its underlying concerns, the ethics paper that supports the Public Health Skills and Knowledge Framework for the UK’s public health workforce explains how public health research and practice are widely recognised as resting on a mission to address two particular sources of moral concern. Ideas of equity or fairness—of distributive justice—are foundational within the ethics ‘of’ public health. As put in Fair Society, Healthy Lives:

To reduce the steepness of the social gradient in health [i.e. to provide greater health equality across distinct points of social position], actions must be universal, but with a scale and intensity that is proportionate to the level of disadvantage. We call this proportionate universalism.
Greater intensity of action is likely to be needed for those with greater social and economic disadvantage, but focusing solely on the most disadvantaged will not reduce the health gradient, and will only tackle a small part of the problem.

However, that maximising ethic is complemented—and sometimes constrained—by a distinct, egalitarian ethic to prioritise addressing avoidable, systematised health inequalities. These ideas capture the ‘moral mandates’ of public health:

First, health opportunities and outcomes are to be maximised, both through proportionate preventive measures to defend against disease, illness, and injury, and through proportionate health-promotion interventions to sustain and enhance general levels of health (and on many counts well-being); and

Secondly, systematised, avoidable, and unfair inequalities in health (opportunities) must be addressed: social architecture that supports or creates differential enjoyment of health rests on poor foundations, and priority should thus be given to protecting and promoting the health of groups and communities who face greater disadvantage.

In relation to both of these moral mandates, and recognising that they may, at times, stand in tension with one another, insights from a public health perspective take us through considerations of what instances of poor health should be of concern, and where responsibility for addressing them should lie. As explained in Improving the Health of the Public by 2040:

In terms of health, prevention involves a range of interventions aimed at reducing risks or threats to health. Primary prevention aims to prevent disease or injury before it occurs, for example by immunisation, health education and preventing exposure to hazards. Secondary prevention aims to reduce the impact of a disease or injury which has already occurred, for example by detecting, diagnosing and treating as soon as possible as well as taking steps to prevent reoccurrence.
Regular screening programs, such as mammograms for detecting breast cancer, are an example. Tertiary prevention aims to reduce the impact of a disease or illness which is ongoing and has long-term effects, by helping people to manage often complex health problems and injuries to maximise their quality of life and life expectancy. Rehabilitation and support programs are forms of tertiary prevention.

Effectively assuring conditions in which people can enjoy good health, and addressing avoidable inequalities in health, in public health parlance includes a great concern for looking to (often complex networks of) ‘upstream causes’. Given the practical understandings that public health research gives us of the causes of avoidable ill health and health inequalities, why do they persist so forcefully? Why do we not have better systems of primary prevention? The ideas of assuring better and fairer health opportunities and outcomes are, after all, constructed around scientific evidence bases that explain both how and why we find incidences of poorer health, and what practical measures would address these.

As explained in Section III of this article, advocacy to respond to health inequalities is a longstanding concern in public health and public health ethics. But within public discourses consequent to the onset and impact of the COVID-19 pandemic, there has been a renewed recognition of health inequalities as a problem of social justice. While everyone has been affected by COVID-19 and the measures put in place against it, the harms and burdens have not been equally spread out within societies or dissociated from pre-existing structural determinants of unequal enjoyment of health.
Rather, they have fallen along racialised and other structural lines. Because health inequalities are avoidable through changes to social norms, structures, and institutions, their very existence makes them a question of social justice. If we engage these literatures, rather than consider questions of health inequalities with blunt and exclusive reference, for instance, to liberal conceptions of autonomy and the need for defences against state regulations and professional hegemonies, we see causation within a ‘matrix of domination’. It is at such points that insufficient explanations and framings for the existence of health inequalities become more stark, and the substantiation of claims that they are unfair may crystallise. Through that process, we may also identify what it means to address them head on if law (and policy) are to respond to questions of health injustice, rather than be complicit in their causal structures. Some explanations for inequalities during the COVID-19 pandemic have been marked by wilful ignorance of systemic factors. For example, the UK Government, through a report published by its Commission on Race and Ethnic Disparities on 21 March 2021, dismissed state-sanctioned racism as a contributor to health inequalities.

In conclusion, while COVID-19 has brought health and other inequalities into sharp relief, it would be mistaken to assume that if there had not been a pandemic then the inequalities would not have existed, and/or that health inequalities would disappear after the pandemic ‘ends’. The circumstances pre-existing the COVID-19 pandemic, such as a decade of harsh austerity measures and deep welfare cuts, directly created the context for the inequities, and many of the avoidable harms, that have resulted from the coronavirus and responses to it.
For example, the use of food banks, including by people in full-time employment, had already grown substantially in the years before the pandemic.

In the final section of this article before the conclusion, we take the practical points regarding inequalities as exemplified against narratives that have come to the fore during the COVID-19 pandemic, and indicate how approaching these through a public health perspective, and as a matter of social justice, can work. This is not with a view to being comprehensive in representation of ideas of justice, or to advancing or defending a single or preclusive idea of justice; rather, it is to show the sorts of prior questions that must be asked, and to indicate the critical scope that any response to them must have.

To begin a critical re-evaluation of law’s place in relation to health inequalities, we would promote research agendas that take an adapted version of a framework of four questions concerning social injustices, health inequalities, and COVID-19, which was developed by one of this paper’s authors as part of the UK Pandemic Ethics Accelerator project. One of these questions asks whose care (broadly conceived) is constructed as the type of care that can consistently be left waiting. This question does not prompt us to challenge the idea of prioritisation questions per se; in any resource-limited system, there will always be the need to prioritise due to finite resources. The question instead prompts us to consider systemic neglect and care-lessness aimed at communities. Scholarship on health justice needs to pay attention to the serial disregard for particular groups and communities.

First, who does not get to breathe? The summer of 2020 was filled with chants of ‘I can’t breathe’ in protests that erupted after the killing of George Floyd through police brutality.

Secondly, whose voices are not being heard? This is a basic as well as a central question in any policy evaluation.
At the heart of justice is listening to voices, particularly those that are more likely to be erased.

Thirdly, what outcomes are truly (not) inevitable? This question helps us to scrutinise explanations as to why things are the way they are. Sometimes health inequalities are seen as inevitable because, for example, their causes are framed as a manifestation of a culture of a particular population. A status quo bias, or a hearkening to the priority of what has been ‘normal’, requires as strong a defence in terms of justice as any challenges to it.

Fourthly, whose care (broadly conceived) is constructed as the type of care that can consistently be left waiting?

Health inequalities indicate problems of social injustice. Specifically, they are problematic insofar as they are unfair and avoidable through means that themselves are morally-mandated. That point bears emphasis because, as we have spelled out, these properties of unfairness and avoidability require us to bite political bullets. Wherever we sit in political philosophical terms—whether in a more libertarian or a more collectivist or communitarian camp—questions of squaring sweeping public health measures with justice have divided people in the UK. The pandemic has seen the implementation of sweeping and draconian statutory measures, both under the Coronavirus Act 2020 and through secondary legislation made under the Public Health (Control of Disease) Act 1984, with consequent (and ongoing) reviews by Parliament. The public health ethics literatures provide excellent examples of works that take on this task, as does the Lancet-O’Neill report on the legal determinants of health. And recognising the growing legal scholarship that engages and contributes to these, we may point to three interrelated senses in which law matters for the public’s health. First, we may look at law as a key structural aspect within the social determinants of health: laws practically contribute to the materialisation of better or worse, and more or less just, health outcomes. This applies to laws that, for example, empower government to implement health protection measures, or to apply taxes to products such as sugary drinks or tobacco to discourage their consumption.
It applies to laws that govern interactions of and between private individuals, organisations, corporations, and so on, from public health rationales within torts such as negligence, to areas such as employment law duties. And it applies to criminal law measures around particular forms of harmful behaviours and practices. Secondly, there is law as a constraint to guard against undue interference with individual and commercial freedoms: for example, human rights and equality protections, or wider protections of basic constitutional and common law rights. Notably, within this function too we see the importance of philosophical commitments that are embedded within the idea of law writ large: for example, the rule of law. Thirdly, we have law as a normative system whose study provides its own important measures of critical evaluation in value-based standards of justice. These may come from standards that are internal to (ideas of) law, such as human rights norms or the rule of law. And they also come from critical moral and political theories that help us, as legal scholars, to evaluate, critique, and make practical proposals in relation to law. Such work draws on (inter alia) the health sciences and political sciences to give empirical accounts of laws' practical effects and influences, and of their potential, through litigation, legislation, and distinct methods of implementation. The necessary multi- and transdisciplinarity of studies in law and public health needs equally to extend in the directions of philosophy and critical social theory. Works such as the Lancet-O'Neill report explicitly note the relevance of the normative questions that would be clustered under our third point in the above list.
But much greater attention to and incorporation of such questions within legal scholarship is necessary if we are adequately to frame and respond practically to the legal determinants of health injustice. Leading scholarship in public health law and legal epidemiology has been particularly attentive to the first and second of these. Many libertarian, and more 'narrowly liberal', theories, ideas, and norms feature in and underpin normative assumptions, both in UK medical law and political morality. This at once perpetuates notions of self-reliance, atomised individualism, and so on, and gives rise to a wariness of the state advancing positions on the ethical values that matter (most), or acting with a paternalistic goal of defining and serving people's interests. These problems manifest for laws, in critiques of the idea of law, and as routine assumptions within UK health law scholarship. Nevertheless, we also see sustained, and perhaps growing, concerns about the stark realities of health inequalities, and the structures that contribute to their (worsening) existence. Laws are a fundamental part of the social and institutional architecture within which health inequalities are created and sustained. They may thus also be part of the response to them. Individual laws, law writ large, and the work of legal scholars have a key part to play in identifying what is meant by health injustice, and how it may be addressed. In this article, we have sought to explain why medical law provides the wrong starting point and what a public health perspective brings to health law scholarship, and to outline four critical, framing questions that must be addressed within such scholarship.
The combination of the continued impacts of austerity, the COVID-19 pandemic, the predominance of libertarian ethico-political norms in health policy, and the development and implementation of a new policy agenda underscores the urgency of the project of addressing the legal determinants of health (in)justice."} {"text": "Wearable Internet of Things (IoT) devices can be used efficiently for gesture recognition applications. The nature of these applications requires high recognition accuracy with low energy consumption, which is not easy to achieve at the same time. In this paper, we design a finger gesture recognition system using a wearable IoT device. The proposed recognition system uses a light-weight multi-layer perceptron (MLP) classifier which can be implemented even on a low-end micro controller unit (MCU), with a 2-axes flex sensor. To achieve high recognition accuracy with low energy consumption, we first design a framework for the finger gesture recognition system including its components, followed by system-level performance and energy models. Then, we analyze system-level accuracy and energy optimization issues, and explore the numerous design choices to finally achieve energy–accuracy aware finger gesture recognition, targeting four commonly used low-end MCUs. Our extensive simulation and measurements using prototypes demonstrate that the proposed design achieves up to 95.5% recognition accuracy with energy consumption under 2.74 mJ per gesture on a low-end embedded wearable IoT device. We also provide the Pareto-optimal designs among a total of 159 design choices to achieve energy–accuracy aware design points under given energy or accuracy constraints. Gesture recognition is among the most popular topics in human–machine interface applications. In particular, hands can move most accurately with relatively little energy, compared to other body parts.
Thus, hand gesture recognition is used as an efficient interface for human–computer interaction (HCI). An alternative method of implementing gesture recognition is to use wearable sensors such as inertial measurement units (IMUs), electromyography (EMG) sensors, flex sensors, and pressure sensors. Among various wearable sensors such as IMUs, EMG, and flex sensors, we focus on using a state-of-the-art flex sensor which can stably measure bending on two axes. The main contributions are as follows:
- Provide the full design for a finger gesture recognition system using a single flex sensor.
- Explore the design choices of a finger gesture recognition system in terms of performance, accuracy, and energy consumption using the constructed performance and energy consumption models.
- Demonstrate the functionality and feasibility of the proposed designs by implementing the prototypes using four commonly used low-end embedded MCUs.
- Show the energy–accuracy aware design which achieves up to 95.5% accuracy with an energy consumption of 2.74 mJ per gesture.
- Provide the energy–accuracy aware Pareto-optimal designs among a total of 159 design choices to find energy–accuracy aware design points under given energy or accuracy constraints.
In this paper, we design a light-weight finger gesture recognition system that can be implemented in low-end embedded devices using a single flex sensor. To this end, we first design a framework for a finger gesture recognition system that recognizes 17 finger gestures. The framework consists of data collection, preprocessing filters, and a light-weight multi-layer perceptron (MLP)-based classifier. Then, we construct performance and energy models to find optimal design choices efficiently. We analyze and discuss the energy–accuracy aware system-level design issues, and explore the design choices of finger gesture recognition by considering computation requirements and memory resources, targeting four types of low-end micro controller units (MCUs).
Finally, the functionality and feasibility of the proposed work are verified by implementing prototypes. The contributions of this paper are summarized in the list above. The rest of this paper is organized as follows. This section describes the background of this work, which consists of the existing work related to gesture recognition and the basics of the flex sensor used in this work. An IMU sensor which embeds micro electro mechanical systems (MEMS) accelerometers, gyroscopes, and magnetometers has been popularly used because it can capture a wide range of body movements. An IMU sensor can even be attached to a cane to detect falls in the elderly. EMG sensors are used for body movement recognition as well. Instead of directly measuring the physical movements of the body, the sensor alternatively measures the biomedical signals using specially made probes attached to the skin surface. EMG sensors can detect the very fine movements of the body that cannot be detected by physical movement measuring sensors alone. Conventional flex sensors based on conductive ink, fiber-optic, or conductive fabric technologies are used for various wearable IoT applications such as embedded device-based health care and sign language recognition. In general, data collected from the wearable flex sensor for body movement recognition requires time-domain data analysis using machine learning (ML) techniques such as dynamic time warping (DTW) and hidden Markov models. Flex sensors measure the amount of bending or deflection. There are three types of commonly used flex sensors. Sensors made with an optical fiber support high accuracy and high durability. However, a light source and a detector are required, and only unidirectional sensing is possible.
This means that power-hungry analog-to-digital converters (ADCs) are not necessary, which is good for wearable IoT devices. The advanced flex sensor introduced in the previous subsection is made with a silicone elastomer layered with a conductive and non-conductive material. This sensor not only measures the bending degree of two axes stably with a single sensor, but also has the advantage of being flexible and stretchable thanks to the silicone material. As mentioned, this sensor is not a simple variable resistor but a sensor module that embeds a low-power integrated analog front end, resulting in much less noise over time compared with the other sensors. In addition, it generates digital data through an inter-integrated circuit (I2C) standard communication interface. For each explored MLP model, we perform an independent training and testing process. The exploration in detail will be described with the system-level optimization. Output Layer: The number of nodes in the output layer is generally determined by the number of recognized gestures. In this work, the number of gestures is set to 17. Thus, we design the output layer to have 17 nodes. Each node in the output layer uses a Softmax activation function to generate a probability value for each gesture so that the gesture with the highest probability is selected as the final result. In terms of the design components, the proposed system consists of data collection, preprocessing filters, and an MLP-based classifier. At the same time, in terms of hardware components, the system mainly consists of a flex sensor and an MCU board. Thus, management of these hardware components is a practical issue of the implementation. For example, activation/deactivation scheduling of the MCU and the sensor module is tightly coupled with the performance and energy consumption of the system. The MCU can be in a standby state synchronized with the operating frequency of the sensor.
When the preprocessing and MLP classification tasks are executed in the MCU, the sensor can enter a standby state to minimize the power consumption of the sensor. To address these issues, we first build timing models of gesture recognition. The time taken per single gesture recognition depends on the I2C configuration when running at 400 kHz. Note that the sensor is always in the active state during data collection. Looking at the data collection process, which accounts for most of the time spent on gesture recognition, the MCU repeats the sensor data read at the sampling frequency. The energy consumption per single gesture recognition is modeled accordingly: in the data collection task, the MCU operates periodically at the sampling frequency, and the energy for executing the preprocessing and classification is modeled in the same way. As mentioned, the sensor is in the active state only during data collection. There are numerous design choices where the energy and accuracy are in a trade-off relation in general. This means that maximizing recognition accuracy while simultaneously minimizing energy consumption is not easy to solve. Thus, we first define accuracy- or energy-constrained objective functions, with which we can analyze and explore the design choices. In designing preprocessing filters, a simple design choice is whether each filter is adopted. We use a segmentation filter and a reshape filter for all design choices because they are indispensable, while noise and normalization filters are optional. In designing a segmentation filter, the larger the segment, the higher the accuracy but the larger the energy consumption, and the maximum achievable accuracy is also limited. In designing an MLP classifier, finding the optimal number of parameters used in the MLP is important to find an energy–accuracy aware design.
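The constrained design-space exploration described above can be sketched as follows. This is an illustrative toy model, not the paper's actual timing/energy equations: the sampling frequencies, hidden-layer sizes, power figures, and accuracy values below are assumed placeholders.

```python
# Sketch of energy-constrained design-choice exploration: enumerate
# (sampling rate, MLP size, optional filters), score each with toy
# energy/accuracy models, and keep the most accurate design within
# an energy budget. All numeric constants are assumptions.

from itertools import product

SAMPLING_HZ = [20, 50, 100]      # assumed sensor sampling-rate choices
HIDDEN_NODES = [8, 16, 32, 64]   # assumed hidden-layer sizes
USE_FILTERS = [False, True]      # optional noise/normalization filters

def energy_mj(f_hz, hidden, filters):
    """Toy per-gesture energy: collection + filters + MLP inference."""
    t_collect = 1.0                           # s of data per gesture (assumed)
    e_collect = 0.8 * t_collect * (f_hz / 50) # mJ, scales with sampling rate
    e_filter = 0.2 if filters else 0.0        # mJ for optional filters
    e_mlp = 0.005 * hidden                    # mJ, scales with MLP size
    return e_collect + e_filter + e_mlp

def accuracy(f_hz, hidden, filters):
    """Toy accuracy model, saturating in sampling rate and MLP size."""
    base = 0.70 + 0.10 * (f_hz > 20) + 0.05 * (hidden >= 32)
    return min(base + (0.08 if filters else 0.0), 0.955)

# Energy-constrained objective: maximize accuracy s.t. energy <= budget.
budget_mj = 2.0
feasible = [(accuracy(f, h, p), energy_mj(f, h, p), f, h, p)
            for f, h, p in product(SAMPLING_HZ, HIDDEN_NODES, USE_FILTERS)
            if energy_mj(f, h, p) <= budget_mj]
best = max(feasible)  # tuple comparison: highest accuracy wins
print(best)
```

Swapping the constraint (fix a minimum accuracy, minimize energy) gives the accuracy-constrained variant, and collecting the non-dominated points over all 159 real design choices would yield the Pareto front reported by the authors.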
The higher the number of parameters, the higher the achievable accuracy but the larger the energy consumption. This section introduces experimental setups including the prototypes we implement to verify the energy–accuracy aware design points. Then, the results of design choice exploration and the Pareto-optimal energy–accuracy aware design points are presented with some findings and discussions. The flex sensor is connected through the I2C interface to the MCU board. We consider four commonly used low-end MCUs for targeting low-end embedded devices. To demonstrate the feasibility of the proposed designs, we implemented an in-house prototype tiny enough to wear on the body. The prototypes are used for two purposes—data collection and design verification—through real-time gesture recognition. In data collection, the raw data collected are directly sent to the PC so that the data are used for training and for testing the MLP classifier. The prototypes are also used to provide the timing information to the energy models defined above. In total, 17 types of gestures are defined as continuous motions. MLP training is performed in the PyTorch environment. The hyper-parameters used for training are 0.0075 and 500 for the learning rate and epoch, respectively. No significant performance change is observed after epoch 500, so the maximum epoch is fixed at 500. For comparison purposes, we build one gated recurrent unit (GRU) model and two tiny ML models generated using Google's TensorFlow and Neuton's commercially available AutoML. As expected, recognition accuracy is highly correlated with the number of MLP parameters. The accuracy for the single-hidden-layer and double-hidden-layer MLPs shows different behaviors depending on whether the preprocessing filters are applied. When preprocessing filters are not applied, the double-hidden-layer MLP shows better performance over most of the explored range.
Based on comparisons of the four configurations, we conclude that the single-hidden-layer MLP with preprocessing is more suitable for devices that have limited resources. We explored the design choices of the proposed finger gesture recognition system in terms of accuracy as well as energy consumption by analyzing a total of 159 designs with varying design choices. Atmega2560 has the worst energy–accuracy efficiency. We found that Atmega2560 is based on an 8-bit RISC architecture, and the computation requirements during the preprocessing and forward propagation operations in the MLP need more active time of the MCU, which increases energy consumption. A confusion matrix is useful for analyzing the patterns of mispredictions. In this paper, we implemented a finger gesture recognition system based on a light-weight MLP-based classifier using a low-end MCU and a 2-axes flex sensor. In order to find energy–accuracy aware design points, we first designed a full process of finger gesture recognition and its system-level performance and energy models. Then, we analyzed system-level design issues including the sensor operating frequency and the size of the MLP classifier. Finally, we explored the numerous design choices based on accuracy and energy constraints. Considering four commonly used MCUs, a total of 159 design points were determined according to the configuration of the sensor operating frequency, the presence of preprocessing filters, and the size of the MLP classifier. As a result of the Pareto fronts, the proposed design achieved up to 95.5% accuracy with an energy consumption of 2.74 mJ, which is up to 10% higher accuracy than previous studies with similar approaches. In this work, we do not address the effect of using AI accelerators such as digital signal processors (DSPs), FPGAs or application-specific integrated circuits (ASICs).
Since these accelerators will greatly affect performance as well as energy efficiency, considering these components will be our future work to find energy–accuracy aware design choices for wearable IoT devices."} {"text": "This article forms part of a series looking at the management of patients with tooth wear. Articulated study casts can be essential in assisting the clinician to plan and communicate proposed treatment to the dental technician and patient. Their production is often seen as straightforward, but a lack of attention to detail can quickly lead to articulated casts that do not replicate the patient's clinical presentation. This in turn will lead to inaccurate planning and potentially a suboptimal treatment outcome. This article discusses the collection of the clinical records needed to produce accurate articulated study casts, which can be utilised for tooth wear planning. It also aims to present the evidence base for the recommendations outlined. Articulated study casts can form an integral part of the treatment planning process for patients with complex tooth wear. The stages to produce accurately mounted study casts include recording impressions to achieve high-quality study casts, recording a facebow, recording a centric relation record and lab mounting. Each stage needs to be carried out with the greatest of care to ensure consistent results. After the records have been collected and processed by the laboratory, it is essential that the mounted study casts are verified for accuracy. Accuracy is by no means a guarantee and errors can occur at any point. This can result in mounted study casts that do not replicate the tooth contacts found clinically. The use of analogue techniques to produce articulated study casts is well documented. Articulated study casts serve as a medico-legal record of the patients' pre-treatment presentation.
They can be used to examine the static and dynamic occlusal contacts without muscular guarding or the visual limitation of the soft tissues. Duplicated study casts can be modified with mock equilibrations, tooth removal, or additive waxing, to simulate proposed treatment plans. They are also useful in the production of guides and stents. The production of articulated study casts comes at significant expense and must provide added value. They are not always necessary but are invaluable when executing complex tooth wear cases. The stages to produce accurately mounted study casts include:
- Recording impressions to achieve high-quality study casts
- Recording a facebow
- Recording a centric relation record (CRR)
- Lab mounting and verification.
This article will aim to outline a simple and evidence-based summary of how to execute the stages above to a high standard. Alginate impression material in a universal stock tray will provide sufficient accuracy to produce high-quality study casts. Alginate provides a cheap and well-tolerated impression material. However, an understanding of the material's limitations is essential if it is to be used consistently. Alginate is most commonly bought in bulk. After opening, the loose alginate is stored in an airtight container. Setting times of alginates can vary considerably between manufacturers.8 Most setting times are based on 22 °C tap water and average 2-3 minutes. By increasing or decreasing the temperature of the water, the setting time can be manipulated. There is some evidence to suggest that higher temperature water results in less dimensionally accurate impressions. For the best results, cold water is recommended.9,10 By altering the powder-to-water ratio when mixing alginate, it is possible to manipulate the consistency. It is best to maintain the volume of powder and vary the amount of water.
Altering the powder-to-water ratio by up to 25% does not affect the accuracy of the impression.11 When using alginate, universal plastic stock trays provide adequate rigidity. The tray is the foundation of an accurate impression. Classically, metal rim-lock trays were recognised to be the gold standard. This was born out of concerns about the rigidity of newer plastic stock trays, which were thought to flex and then rebound on impression removal. In reality, the studies that have investigated this have been concerned with high-viscosity silicone putty, not alginate. Sizing the tray appropriately is essential. There should be good material support around the full arch and 3-5 mm clearance from the teeth around all aspects of the tray to allow adequate alginate thickness. Tray perforations through the alginate can lead to inaccurate impressions and ideally should be avoided, particularly in key areas.12 Special attention needs to be paid to large edentulous saddles or high-vaulted palates. Alginates do not self-support well and can sag before the gelation phase is complete. The tray is best augmented with compound or putty in these areas to prevent this.12 The use of a tray adhesive is recommended; in vitro studies have demonstrated that it can improve the dimensional accuracy of alginate impressions.13,14 It is essential that pooled adhesive is not left in the tray as this can act as a lubricant. Alginate is dimensionally unstable, being prone to both loss and absorption (imbibition) of water. Special attention has to be paid to impression disinfection, storage, transportation and the time delay until pouring, to prevent suboptimal performance.16,17 The disinfection of an alginate impression can be completed by either immersion in a disinfectant bath or soaking with a disinfectant spray. Both have been shown to be effective and to achieve cross-infection control. The effect of disinfection on dimensional accuracy has been extensively researched.
A recent critical review of the literature demonstrates that there are negligible effects on the dimensional accuracy of the impression as long as the manufacturer's instructions are followed.18 It is commonly quoted that distortion of an alginate impression can start to develop in as little as 12 minutes after removal.23 This has led to the belief that an alginate impression needs to be cast as soon as possible after capture. This has been disputed in other publications and largely depends on the alginate being used.19,20 There have been huge advances in alginate in recent years, with some manufacturers claiming their alginate can maintain dimensional stability for as many as seven days. The evidence would support the dimensional stability of these newer extended-pour alginates.21 That said, we cannot expect standard alginates to perform in this way. It is essential that these are poured as soon as possible.22 If there are likely to be periods of over 24 hours between recording and pouring of your impression, an extended-pour alginate or addition-cured silicone should be considered. The importance of the facebow transfer can be conceptually difficult to grasp. A facebow allows us to record the spatial orientation of the maxilla to the terminal hinge axis clinically and then transfer this information via the mounting process to an articulator (Fig. 8). Aesthetically, it allows any incisal and occlusal cants, relative to the horizontal reference plane being utilised, to be recorded. Although this can be achieved in other ways, a facebow is quick and predictable. There are many different articulator manufacturers on the market, each with their own specific facebow. It is essential that the correct facebow for the articulators used in your chosen laboratory is selected. Facebows rely on two posterior reference points and one anterior reference point when positioning their upper member.
Confusingly, these reference points change depending on the brand of facebow utilised. It is therefore essential that the manufacturer's instructions are followed.24 Most facebows utilise the external auditory meatus (EAM) as a convenient, but arbitrary, posterior reference point to represent the terminal hinge axis (THA). The THA is an imaginary line drawn between the patient's condyles around which the mandible rotates in early opening and late closing. Because of this, they are more appropriately named 'earbows'. The EAM does not, however, coincide exactly with the THA. Manufacturers have gone to great lengths to construct the upper member of their facebow in order to account for the average discrepancy between the anatomical position of the THA and the EAM. But there is no doubt that this 'one size fits all' approach will lead to inaccuracy. The impact of this has been studied by researchers and it has been concluded that even a 5 mm discrepancy between the EAM and the THA will result in only a 0.2 mm discrepancy in mandibular position on closure when a 3 mm thick inter-occlusal record is used.25 This discrepancy is unlikely to significantly impact most cases and is a small price to pay for the incredible ease with which the 'earbow' can be used clinically. Greater accuracy can be achieved but involves advanced equipment, such as pantographs and fully adjustable articulators. When recording the facebow, a non-flexible material, such as silicone bite registration material or beauty wax, should be used to record the tooth positions on the bite fork. It should be possible to stabilise the study cast on the bite fork with three well-distributed points of contact. If this is not possible due to the distribution of the teeth, then a wax record block will be necessary. Ideally, this should be constructed on the model to be mounted to improve accuracy of fit. Centric relation (CR) is a maxillomandibular relationship, independent of tooth position. In CR, the mandible is restricted to a purely rotational movement.
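The quoted facebow error (a 5 mm EAM-THA discrepancy producing only about 0.2 mm of closure error on a 3 mm record) can be checked with simple rotation geometry. The 80 mm hinge-axis-to-incisor radius below is an assumed average for illustration, not a value from the article:

```python
# Back-of-envelope sketch: opening onto a 3 mm thick record rotates the
# mandible through a small angle (thickness / radius); an error in the
# assumed hinge-axis position displaces the closure arc by roughly
# (axis error) * (rotation angle). Radius is an assumed average.

record_thickness_mm = 3.0   # inter-occlusal record thickness
radius_mm = 80.0            # assumed hinge-axis-to-incisor distance
axis_error_mm = 5.0         # EAM vs THA discrepancy

angle_rad = record_thickness_mm / radius_mm      # small-angle rotation
closure_error_mm = axis_error_mm * angle_rad
print(round(closure_error_mm, 2))                # on the order of 0.2 mm
```

The estimate is insensitive to the exact radius: doubling it halves the error, so the conclusion that the discrepancy is clinically small holds across plausible anatomies.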
Diagnostic casts are mounted on the articulator in CR using a CRR.26 But as clinicians, we recognise CR as the familiar feel of 'passive rotation' of the mandible. An accurate CRR can be time-consuming to achieve depending on the degree of muscle guarding the patient has. It is best to test the degree of muscle guarding by gently manipulating the mandible up and down before recording the CRR.32 It is important that a rigid PVS bite registration material is used, ideally one with a hardness of Shore D or above. Pink or silver wax are not suitable to record a CRR as they are not rigid at room temperature. It is essential that the CRR has a minimum of tripod contacts on the remaining teeth. If the patient has multiple missing posterior teeth, there may be a need to include wax registration rims as part of the CRR. Silicone bite registration material records more surface detail than alginate. The high degree of accuracy with which the silicone captures the fissure pattern of the teeth prevents the occlusal record from fully seating on the study casts produced from the alginate impressions. After the records above have been collected and processed by the laboratory, it is essential that the mounted study casts are verified for accuracy. Accuracy is by no means a guarantee and errors can occur at any point. This can result in mounted study casts that do not replicate the tooth contacts found clinically.33,34,35,36 Verification can be completed in different ways. The recording of multiple CRRs allows the use of a Centri-Check.37 More practically, the initial tooth contact in CR can be marked with articulating paper clinically. Occlusal indicator wax can also be used to record and retain this tooth contact. An occlusal sketch could be used for the same purpose. The occlusion should already have been examined in detail clinically.
When the models are returned from the laboratory, the initial tooth contact on the study casts should replicate that found clinically. This can be marked with articulating foil and confirmed with the occlusal indicator wax or occlusal sketch. The production of accurate articulated study casts is a time-consuming process. Each stage in the process must be executed to the highest standard if we are to be successful. Alginate provides a cheap and accurate impression material but must be handled with care to achieve the best results. The use of a facebow is not always necessary, particularly if not increasing the occlusal vertical dimension. But when needed, the earbow can provide adequate accuracy for most tooth wear cases, even accepting its limitations relative to a pantograph with a fully adjustable articulator. The recording of a CRR can be challenging, particularly in patients with extensive muscle guarding. But by using a staged approach to deprogram the patient's muscles, and selection of the correct inter-occlusal material, an accurate record can be achieved. Even with excellent clinical techniques and care, it cannot be assumed that the lab mounting process will progress without fault. The verification of mounting accuracy in comparison to the patient's clinical tooth contacts is essential for success."} {"text": "After comprehensive consideration of the genetic and ultrasound results, the two gravidas decided to undergo elective termination and molecular investigations of multiple tissue samples from the aborted fetus and the placenta. The results confirmed the presence of true fetoplacental mosaicism, with levels of trisomy 9 mosaicism ranging from 76% to normal in various tissues. These two cases highlight the necessity of genetic counseling for gravidas whose NIPT results strongly suggest a chromosome 9 abnormality, to ascertain the occurrence of mosaicism.
In addition, the comprehensive use of multiple genetic techniques and biological samples is recommended for prenatal diagnosis to avoid false-negative results. It should also be noted that ultrasound results of organs with true trisomy 9 mosaicism can be free of structural abnormalities during pregnancy. Chromosomal mosaicism remains a perpetual diagnostic and clinical dilemma. In the present study, we detected two prenatal trisomy 9 mosaic syndrome cases by using multiple genetic testing methods. The non-invasive prenatal testing (NIPT) results suggested trisomy 9 in two fetuses. Karyotype analysis of amniocytes showed a high level (42%–50%) of mosaicism, and chromosomal microarray analysis (CMA) of uncultured amniocytes showed no copy number variation (CNV) except for large-fragment loss of heterozygosity. Ultrasound findings were unremarkable except for small-for-gestational-age measurements. In Case 1, further umbilical blood puncture confirmed 22.4% and 34% trisomy 9 mosaicism by CMA and fluorescence in situ hybridization (FISH), respectively. Trisomy 9 is an uncommon chromosomal abnormality that can occur in a mosaic or non-mosaic state. Sample preparation, maternal plasma DNA sequencing, and bioinformatics analysis for NIPT were carried out using the BGI platform as previously described. The gravida in Case 1 had no family history of chromosomal abnormalities and a sign of spontaneous abortion during early pregnancy. She underwent maternal serum screening at 12 weeks of gestation, which revealed a risk for Down syndrome of 1 in 200. Due to concerns about the unavoidable risks of invasive prenatal diagnosis methods, she chose NIPT for fetal autosomal aneuploidy screening at 16+1 weeks of gestation after genetic counseling.
NIPT suggested trisomy 9, and CMA using a 750 K array revealed an 18.54 Mb loss of heterozygosity at 9p24.3p22.1, arr [GRCh37] 9p24.3p22.1 × 2 hmz. Ultrasound examination showed a head circumference of 26.1 cm (14.7th centile), an abdominal circumference of 21.8 cm (2nd), a humerus length of 4.7 cm (14.7th), a femur length of 5.2 cm (14.7th), and an estimated fetal weight of 1040 g (5.5th). The gravida chose to terminate the pregnancy at 29+4 weeks of gestation. Following post-test genetic counseling for the NIPT results, the gravida had agreed to receive amniocentesis for further analysis at 22 weeks of gestation. The genetic amniocentesis test revealed a karyotype of mos 47,XX,+9[25]/46,XX[25], indicating a level of 50% trisomy 9 mosaicism. A subsequent autopsy of the aborted fetus confirmed the prenatal ultrasound finding of no structural anomalies. Copy number variation sequencing (CNV-seq) analysis of the maternal and fetal center of the placenta verified trisomy 9 with levels of 80% and 81% mosaicism, respectively. The levels of trisomy 9 mosaicism were 6% in the DNA of uncultured amniocytes following CMA testing, 23% in uncultured umbilical blood, and 21% in the umbilical cord. The 750 K array showed no CNV on chromosome 9 except for 24.96 Mb and 19.22 Mb losses of heterozygosity at 9p23p13.1 and 9q33.1q34.3: arr [GRCh37] 9p23p13.1 × 2 hmz and arr [GRCh37] 9q33.1q34.3 × 2 hmz. The pregnancy was terminated at 21+5 weeks of gestation upon the request of the parents. A 31-year-old woman was referred to the Hunan Provincial Maternal and Child Health Care Hospital because of the high-risk NIPT results. NIPT using the MGISEQ-2000 platform at 14 gestational weeks showed that the Z-score of chromosome 9 was outside the normal range, suggesting a high risk of trisomy 9.
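The Z-score criterion mentioned above can be sketched as follows. This is a generic illustration of how NIPT pipelines standardise a chromosome's read fraction against euploid references; the reference mean, SD, and the |z| > 3 cutoff are common choices from the literature, not values reported by this study:

```python
# Illustrative NIPT Z-score sketch (all numbers are hypothetical):
# compare the sample's chromosome 9 read fraction with the mean and
# SD of a euploid reference cohort; a large |z| flags over- or
# under-representation of that chromosome.

def nipt_z(observed_fraction, ref_mean, ref_sd):
    """Standardise the observed chr9 read fraction against references."""
    return (observed_fraction - ref_mean) / ref_sd

# Hypothetical numbers: chr9 reads slightly over-represented.
z = nipt_z(observed_fraction=0.0412, ref_mean=0.0400, ref_sd=0.0003)
high_risk = abs(z) > 3   # "outside the normal range"
print(round(z, 1), high_risk)
```

Note that in mosaic (and confined placental) cases the placental trisomic fraction dilutes the signal, which is why a high-risk Z-score still requires confirmatory invasive testing, as in these two cases.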
There were no signs of miscarriage during early pregnancy, and all common laboratory parameters were within the normal reference ranges. The gravida underwent amniocentesis at 18 weeks of gestation. Genetic amniocentesis analysis revealed the existence of trisomy 9 mosaicism, mos 47,XY,+9[21]/46,XY[29], i.e., a 42% level of trisomy 9 mosaicism. Upon approval from the parents, an autopsy of the aborted fetus was performed, and the result showed no significant structural anomalies. FISH analysis of uncultured oral mucosal cells with chromosome 9p-ter and 9q-ter FISH probes detected trisomy 9 in 16 (8.2%) of the 196 cells examined.

NIPT has the advantage of detecting genome-wide chromosomal anomalies. Mosaicism can also be introduced by in vitro cell culture, which is an especially common issue in the prenatal diagnosis of mosaic trisomy 9.

The clinical significance of uniparental disomy (UPD) lies in its ability to produce either aberrant patterns of imprinting or homozygosity for recessive mutations. The only paternally imprinted gene on chromosome 9 is implicated in neonatal diabetes and pancreatic development by autosomal recessive inheritance instead of imprinting. According to the UPD database (https://cs-tl.de/DB/CA/UPD/0-Start.html), only 53 cases have been referred to chromosome 9, and limited prenatal clinical significance of UPD 9 has been reported in the literature, which makes the clinical phenotype analysis of UPD 9 more difficult.

Chromosomal mosaicism presents a major interpretative dilemma in prenatal genetic counseling. Genetic counseling for the clinical outcomes of chromosomal mosaicism in pregnant women needs to be carried out on a case-by-case basis and should comprehensively consider multiple factors, including the timing of the initial event, the gene-phenotype associations of the chromosome involved, and the ratio and distribution of normal/abnormal cells in tissues.
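The mosaicism levels quoted for the karyotypes and FISH counts above are simple cell-count percentages. A minimal sketch of that arithmetic (the helper function name is ours, not from the study):

```python
def mosaicism_level(trisomic_cells: int, total_cells: int) -> float:
    """Percentage of trisomic cells among all cells scored."""
    if total_cells <= 0:
        raise ValueError("total_cells must be positive")
    return 100.0 * trisomic_cells / total_cells

# Counts reported above: karyotypes mos 47,XX,+9[25]/46,XX[25] and
# mos 47,XY,+9[21]/46,XY[29], and FISH with 16 trisomic of 196 oral mucosal cells.
case1_pct = mosaicism_level(25, 25 + 25)   # 50.0
case2_pct = mosaicism_level(21, 21 + 29)   # 42.0
fish_pct = mosaicism_level(16, 196)        # ~8.2
print(case1_pct, case2_pct, round(fish_pct, 1))
```

The same ratio is what the karyotype bracket notation encodes: trisomic-cell count over total cells counted.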
The placental findings from NIPT and the trisomic rescue observed by SNP-array and karyotyping suggest that our cases are fetoplacental trisomy 9 mosaicism, which was further confirmed by CNV-seq testing of the aborted placental and fetal tissues. Prenatal clinical features of mosaic trisomy 9 are often complicated with intrauterine growth retardation and/or "small" size, which is not specific. Live births with trisomy 9 mosaicism may present with characteristic phenotypic features, such as craniofacial abnormalities, cardiac abnormalities, feeding and breathing difficulties, cryptorchidism, hip dysplasia, seizures, and developmental delay.

In conclusion, the findings in the present study suggest that attention should be paid to the possibility of fetal mosaicism and placental mosaicism in gravidas with positive NIPT results for trisomy 9, and the comprehensive use of multiple genetic techniques and biological samples is strongly suggested for the diagnosis of trisomy 9 mosaicism. Genetic counseling for clinical outcomes of trisomy 9 mosaicism during pregnancy should be provided on a case-by-case basis by comprehensive consideration of the prenatal results and the fact that true fetal trisomy 9 mosaicism can be free of structural anomalies."} {"text": "The Y2O3 phase has specific orientation relationships with the α-Ti phase, with (002)Y2O3 // (…)α-Ti, and the (TiZr)6Si3 silicide has an orientation relationship with the β-Ti phase: (02…)(TiZr)6Si3 // (011)β-Ti, [0…](TiZr)6Si3 // [04…]β-Ti. The 0.1 wt.% Y composition alloy has the best high-temperature oxidation resistance at different temperatures. The oxidation behaviors of the alloys follow the linear-parabolic law, and the oxidation products of the alloys are composed of rutile-TiO2, anatase-TiO2, Y2O3 and Al2O3.
To improve the heat resistance of titanium alloys, the effects of Y content on the precipitation behavior, oxidation resistance and high-temperature mechanical properties of as-cast Ti-5Al-2.75Sn-3Zr-1.5Mo-0.45Si-1W-2Nb-xY alloys were systematically investigated. The microstructures, phase evolution and oxidation scales were characterized by XRD, laser Raman, XPS, SEM and TEM. The properties were studied by cyclic oxidation as well as room- and high-temperature tensile testing. The results show that the microstructures of the alloys are of the Widmanstätten type with typical basket-weave features, and the prior β grain size and α lamellar spacing are refined with the increase of Y content. The precipitates in the alloys mainly include the Y2O3 phase and (TiZr)6Si3 silicides. The room-temperature and 700 °C UTS of the alloys decrease first and then increase with the increase of Y content; the 0.1 wt.% Y composition alloy has the best room-temperature mechanical properties, with a UTS of 1012 MPa and an elongation of 1.0%. The 700 °C UTS and elongation of the alloy with 0.1 wt.% Y are 694 MPa and 9.8%, showing an optimal comprehensive performance. The UTS and elongation of the alloys at 750 °C increase first and then decrease with the increase of Y content; the optimal UTS and elongation of 556 MPa and 10.1% are obtained in the 0.2 wt.% Y composition alloy. Cleavage and dimple fractures are the primary fracture modes for room- and high-temperature tensile fracture, respectively.

High-temperature titanium alloys are widely used in aerospace and military fields due to their excellent properties, such as low density, high specific strength, high specific stiffness and excellent high-temperature creep properties.
The rare earth element Y has strong chemical activity and a deoxidizing effect, which can purify the titanium alloy matrix and improve the high-temperature mechanical properties and oxidation resistance of the alloy. Y mainly affects the microstructures of titanium alloys by forming the Y2O3 phase, thereby improving their strength and toughness. However, the precipitation behavior of the Y2O3 phase, as well as the interaction between Y and other alloying elements and the mechanism of their impact on the high-temperature properties of the alloys, are still unclear.

In the present work, the effect of Y content on the precipitation behavior, oxidation and mechanical properties of as-cast Ti-Al-Sn-Zr-Mo-Si-W-Nb series high-temperature titanium alloys was systematically investigated. In addition, the corresponding oxidation and high-temperature fracture mechanisms were also discussed.

Ti-5Al-2.75Sn-3Zr-1.5Mo-0.45Si-1W-2Nb-xY alloys, developed for application at 700 °C based on traditional design methods for near-α titanium alloys, were prepared by the vacuum electromagnetic levitation melting technique. Pure Ti, Y, Al, Zr, Sn, Mo, Nb and Si were added in elemental form, and W was added in the form of a Ti-90W master alloy. Melting stocks weighing 1 kg were melted in a water-cooled copper crucible. Before melting, the sealed chamber was washed three times with 99.999% high-purity argon and evacuated to below 10−3 Pa. The melting process was conducted in a dry high-purity argon (99.999%) atmosphere maintained at 5 × 103 Pa. After the raw materials were melted uniformly, the melt was held for 2 min and then cooled to room temperature in the crucible. Each ingot was melted four times to keep the composition homogeneous. The actual compositions of the prepared titanium alloys were analyzed by an X-ray fluorescence spectrometer, and the results are given in the corresponding composition table. (Kroll's reagent used for etching: HF : HNO3 : H2O = 1 : 3 : 7, vol.%.)
The specimens for microstructure characterization, oxidation and mechanical performance tests were cut by wire-cut electrical discharge machining (WEDM). TEM observation specimens were ground to 40 μm and then punched into Φ3 mm discs for ion thinning. The phases and oxidation products of the as-cast alloys were characterized by a Smartlab (9 kW) X-ray diffractometer with a scanning speed of 10°/min. The microstructure, oxidation morphology and tensile fractures of the alloys were observed with a Quanta 450-FEG scanning electron microscope equipped with an energy-dispersive X-ray spectrometer. The precipitates were characterized by a Tecnai G2 F30 transmission electron microscope. The oxidation mechanisms were analyzed by LabRAM HR Evolution laser Raman spectroscopy and ESCALAB 250 Xi X-ray photoelectron spectroscopy. Specimens for SEM observation were etched with Kroll's reagent. Cyclic oxidation tests were conducted at 650 °C, 700 °C and 750 °C, and the dimensions of the oxidation specimens were 10 × 10 × 3 mm.

The room-temperature tensile tests were performed on an Instron-5848 tensile testing machine at a tensile rate of 1 mm/min. According to the ASTM E8/E8M-16 standard, a 15 mm gauge-length extensometer was used to measure the strain to exclude the influence of the elastic strain of the grips. The high-temperature tensile tests were carried out on a CMT-5205 tensile testing machine in air at 700 °C and 750 °C at a tensile rate of 0.5 mm/min. The tensile test was repeated three times under each condition. The dimensions of the tensile specimens for room- and high-temperature tests are shown in the corresponding figure.

The addition of the Y element to the alloys reduces the solid solubility of Si in the β phase and promotes the precipitation of (TiZr)6Si3 silicides.
It can also be seen that the (TiZr)6Si3 silicide has an orientation relationship with the β-Ti phase: (02…)(TiZr)6Si3 // (011)β-Ti, [0…](TiZr)6Si3 // [04…]β-Ti.

The oxidation kinetics can be expressed by Equation (1): ΔW^n = k_n·t, where ΔW is the oxidation weight gain, n is the reaction index, k_n is the oxidation reaction rate constant, and t is the oxidation time (s). Equation (1) is subjected to logarithm processing to obtain Equation (2): lg(ΔW) = (1/n)·lg t + (1/n)·lg k_n. This demonstrates that there is a linear relationship between lg(ΔW) and lg t, and that n and k_n can be used as parameters to characterize the oxidation behaviors of materials. The fitting parameters of the alloys at each oxidation temperature are provided in the corresponding table.

The XRD patterns after oxidation show oxidation products composed of rutile-TiO2, Al2O3 and Y2O3. The lattice constants of α-Ti after oxidation were calculated according to the XRD data, and the results are exhibited in the corresponding table. The laser Raman spectra display rutile-TiO2, anatase-TiO2 (143 cm−1), Y2O3 (312 cm−1) and Al2O3 (378 cm−1). Anatase-TiO2 was not found in the XRD analysis, probably due to its low content. The XPS spectra can be fitted with Ti4+, Al3+, O2− and Y3+. O2− with binding energies of 529.88 eV and 531.99 eV corresponds to oxygen in titanium oxide and alumina, respectively, and Ti4+ with binding energies of 458.62 eV and 464.62 eV corresponds to titanium in anatase-TiO2 and rutile-TiO2, respectively. The fitting results of XPS are consistent with those of the laser Raman and XRD analyses. Therefore, it can be considered that the oxidation products of the alloys are composed of rutile-TiO2, anatase-TiO2, Y2O3 and Al2O3.

The surfaces of the oxidized alloys are covered with TiO2, short-rod Y2O3 and Al2O3 particles. The oxide particles on the surface of the alloys oxidized at 650 °C are smaller, and no serious cracks are observed on the oxidized surfaces. As the oxidation temperature exceeds 700 °C, the number and size of the oxide particles increase significantly; stress therefore arises between the oxide particles and the matrix, causing cracks.
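The log-linear form of Equation (2) means n and k_n can be recovered by an ordinary least-squares line through the points (lg t, lg ΔW). A self-contained sketch on synthetic weight-gain data (generated here with n = 2 and k_n = 0.04; purely illustrative, not the paper's measurements):

```python
import math

# Synthetic weight-gain data obeying dW**n = k_n * t with n = 2, k_n = 0.04.
times_h = [5, 10, 20, 40, 60]
dW = [math.sqrt(0.04 * t) for t in times_h]   # weight gain, e.g. mg/cm^2

# Equation (2): lg(dW) = (1/n)*lg(t) + (1/n)*lg(k_n),
# so a least-squares line through (lg t, lg dW) recovers n and k_n.
xs = [math.log10(t) for t in times_h]
ys = [math.log10(w) for w in dW]
m = len(xs)
xbar, ybar = sum(xs) / m, sum(ys) / m
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

n_fit = 1.0 / slope                  # reaction index
k_fit = 10 ** (intercept * n_fit)    # rate constant
print(round(n_fit, 3), round(k_fit, 4))
```

With noisy experimental data the same regression applies; the fitted n near 1 indicates linear, and near 2 parabolic, oxidation kinetics.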
These cracks provide a large number of channels for the inward diffusion of oxygen atoms, which is not conducive to the formation of a uniform and dense oxide film. When the oxidation temperature exceeds 750 °C, the integrity of the oxide layer is seriously damaged. Thus, the oxidation resistance of the alloys decreases with increasing oxidation temperature. The number and size of oxidation products increase with the increase of Y content, further indicating that the 0.1 wt.% Y composition alloy has the best high-temperature oxidation resistance.

To further explore the evolution of the oxide layer at different oxidation stages, the surface morphologies of the as-cast alloys with different Y contents after oxidation for 60 h at different temperatures were characterized, and the results are presented in the corresponding figure.

To further study the composition of the oxidation products, a thermodynamic analysis was carried out. The oxidation reaction of a titanium alloy follows the van't Hoff isotherm, as specified in Equation (3): ΔG = ΔG0 − RT·ln(P_O2), where ΔG0 is the standard free energy change, R is the gas constant, T is the Kelvin temperature, and P_O2 is the equilibrium oxygen partial pressure. As described above, TiO2 and Al2O3 are the main oxidation products. The ΔG values of TiO2 and Al2O3 at 650 °C, 700 °C and 750 °C were calculated according to Equation (3), and the results are listed in the corresponding table. Al2O3 and TiO2 can coexist on the oxide surface because their ΔG values are negative. The stability of Al2O3 is also higher than that of TiO2; however, TiO2 forms more readily owing to the much higher content of Ti than Al.

Y2O3 is mainly distributed in the outermost layer of the oxide film, the inner layer of the oxide film is mainly composed of TiO2 and a small amount of Al2O3, and the oxide layer is mainly TiO2.
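Equation (3) turns standard free-energy data into oxide-stability estimates at the test temperatures. A sketch of that calculation, assuming rough Ellingham-style linear ΔG0(T) fits of our own; the coefficients below are illustrative placeholders, not the paper's tabulated values:

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Assumed linear fits dG0(T) = A + B*T (J per mole O2 consumed);
# rough Ellingham-style placeholders for illustration only.
DG0 = {
    "TiO2":  lambda T: -941_000 + 178.0 * T,    # Ti + O2 -> TiO2 (assumed fit)
    "Al2O3": lambda T: -1_120_000 + 214.0 * T,  # 4/3 Al + O2 -> 2/3 Al2O3 (assumed fit)
}

def delta_G(oxide: str, T_K: float, p_O2: float = 0.21) -> float:
    """Equation (3)-style isotherm, dG = dG0 - R*T*ln(p_O2), per mole O2."""
    return DG0[oxide](T_K) - R * T_K * math.log(p_O2)

for T_C in (650, 700, 750):
    T_K = T_C + 273.15
    values = {ox: round(delta_G(ox, T_K) / 1000.0, 1) for ox in DG0}
    print(T_C, values)
```

With these placeholder coefficients, both ΔG values come out negative over 650-750 °C and Al2O3 is the more negative, consistent with the text's conclusion that the two oxides can coexist and that Al2O3 is the more stable.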
This shows that in the early stage of oxidation, Ti and Al elements diffuse outward and oxidation reactions occur; the generated oxides cover Y2O3 to form an oxide film with a loose structure. With the extension of oxidation time, oxygen atoms can diffuse inward through the gaps and cracks in the oxide film and react with Ti atoms inside the matrix to form an oxygen-rich layer. Owing to the different types of oxidation products in the oxide surface and oxide layer, the resulting stress concentration prevents the oxide film and the oxide layer from bonding tightly, so the oxide layer peels off above 750 °C.

The size and distribution of the precipitated phases affect the mechanical properties of the alloys. As discussed above, the Y2O3 in the 0.1 wt.% Y composition alloy is small and dispersed, which provides a certain dispersion-strengthening effect in the alloy. With the increase of Y content, the size of Y2O3 increases, and a large-size brittle particle phase forms locally. When the distribution of large-size precipitates is uneven, it leads to more severe stress concentration and reduces the strength during tensile deformation of the alloy. This is also the reason for the large-size cleavage planes and tearing ridges in the 0.2 wt.% Y composition alloy. When the Y content increases to 0.4 wt.%, Y2O3 is distributed more uniformly; at this point, Y2O3 forms a network structure along the prior β grain boundaries, which can improve the ability of the prior β grain boundaries to hinder deformation and significantly reduce the weakening of the grain boundaries at high temperatures. Therefore, the mechanical properties of the alloys are affected by the distribution of the Y2O3 phase, the silicide, and temperature factors.
In addition, the silicide in the microstructure changes the stress at the interface of the α/β laths. A schematic illustration of the fracture mechanisms of the alloys with different Y contents is shown in the corresponding figure.

(1) The as-cast alloys with different Y contents are mainly composed of the α-Ti phase and a small amount of the β-Ti phase. With the increase of Y content, both the α-Ti lattice parameters "a" and "c" increase.

(2) The microstructures of the alloys are all of the Widmanstätten type, with typical basket-weave features. The prior β grain size and α lamellar spacing decrease with the increase of Y content.

(3) With the increase of Y content, the number of Y2O3 phases increases, and the morphologies of the Y2O3 phases change from short rod to long strip. The orientation relationships between the Y2O3 phase and the α-Ti phase in the 0.1 wt.% Y composition alloy are established, with (002)Y2O3 // (…)α-Ti.

(4) The (TiZr)6Si3 silicide is precipitated from the β-Ti phase. It has an orientation relationship with the β-Ti phase: (02…)(TiZr)6Si3 // (011)β-Ti, [0…](TiZr)6Si3 // [04…]β-Ti.

(5) The oxidation resistance of the alloys decreases with the increase of oxidation temperature and time. The 0.1 wt.% Y composition alloy has the best high-temperature oxidation resistance at different temperatures. The oxidation behaviors of the alloys conform to the linear-parabolic law, and the oxidation products of the alloys are composed of rutile-TiO2, anatase-TiO2, Y2O3 and Al2O3.

(6) The room-temperature UTS of the alloys decreases first and then increases with the increase of Y content; the 0.1 wt.% Y composition alloy has the best room-temperature mechanical properties, with a UTS of 1012 MPa and an elongation of 1.0%. The UTS of the alloys at 700 °C decreases first and then increases with the increase of Y content, whereas the elongation shows the opposite trend. The UTS and elongation of the alloy with 0.1 wt.% Y are 694 MPa and 9.8%, showing an optimal comprehensive performance.
The UTS and elongation of the alloys at 750 °C increase first and then decrease with the increase of Y content. The optimal UTS and elongation, 556 MPa and 10.1%, are obtained in the 0.2 wt.% Y composition alloy. Cleavage and dimple fractures are the primary fracture modes for room- and high-temperature tensile fracture, respectively."} {"text": "In this investigation, we apply the improved Kudryashov, the novel Kudryashov, and the unified methods to demonstrate new wave behaviors of the Fokas-Lenells nonlinear waveform arising in birefringent fibers. Through the application of these techniques, we obtain numerous previously unreported novel dynamic optical soliton solutions in mixed hyperbolic, trigonometric, and rational forms of the governing model. These solutions encompass periodic waves with W-shaped profiles, gradually increasing amplitudes, rapidly increasing amplitudes, double-periodic waves, and breather waves with symmetrical or asymmetrical amplitudes. Singular solitons with single and multiple breather waves are also derived. Based on these findings, we can say that our implemented methods are more reliable and useful when retrieving optical soliton results for complicated nonlinear systems. Various potential features of the derived solutions are presented graphically.

In the telecommunications industry, solitons are one of the fastest-growing study fields. Without the idea of a solitary wave, it is not easy to understand how fiber optics and telecommunication systems work.

For the improved Kudryashov method, the auxiliary solution of the reduced ordinary differential equation is assumed as a finite series in an auxiliary function, where ρ, H, and G ≠ 0 are real parameters. There are nine possible solutions to the auxiliary equation, grouped into three families: hyperbolic functions (for ϑ < 0), trigonometric functions (for ϑ > 0), and rational functions. Balancing the highest powers of U3 and U″ (3N = N + 2) yields N = 1. Making use of Eqs. (4.1) and (4.2) together with the phase δ = −kx + wt + p then produces the solution sets.
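Kudryashov-type expansions hinge on an auxiliary function solving a simple first-order ODE. As a minimal numerical illustration (the model-specific auxiliary equations may differ), the classical Kudryashov function Q(ς) = 1/(1 + e^ς) satisfies the prototypical auxiliary ODE Q′ = Q² − Q, which can be checked by finite differences:

```python
import math

def Q(s: float) -> float:
    """Classical Kudryashov auxiliary function Q = 1/(1 + e^s)."""
    return 1.0 / (1.0 + math.exp(s))

def dQ(s: float, h: float = 1e-6) -> float:
    """Central-difference numerical derivative of Q."""
    return (Q(s + h) - Q(s - h)) / (2.0 * h)

# Verify the auxiliary ODE Q' = Q**2 - Q at sample points of the wave variable.
for s in (-3.0, -1.0, 0.0, 0.5, 2.0):
    assert abs(dQ(s) - (Q(s) ** 2 - Q(s))) < 1e-8
print("Q' = Q^2 - Q holds numerically")
```

The same finite-difference check can be pointed at any candidate auxiliary function before it is substituted into a series expansion.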
For the novel Kudryashov method, the auxiliary solution of the suggested nonlinear structure is assumed as a finite series U(ς) = Σ a_i R^i(ς). There are five possible solutions to the auxiliary equation, grouped into three families: hyperbolic functions (for M > 0), trigonometric functions (for M < 0), and rational functions (for M = 0). Balancing U3 and U″ yields N = K + 1; if K = 1, then N = 2. Using Eqs. (5.1)-(5.5) together with the phase δ = −kx + wt + p produces the corresponding solution sets.

For the unified method, the auxiliary solution of the suggested nonlinear structure is assumed in symbolic series form with real parameters L and M. Balancing U3 and U″ yields K = 1. Making use of Eqs. (6.1)-(6.4) together with the phase δ = −kx + wt + p gives further families of solutions. According to Eqs. (6.5), the solutions Q26 and Q27 produce W-shaped periodic waves; we depict only Q26.

Please include the following items when submitting your revised manuscript: your response letter, uploaded as a separate file labeled 'Response to Reviewers'; a marked-up copy of your manuscript that highlights changes made to the original version, uploaded as a separate file labeled 'Revised Manuscript with Track Changes'; and an unmarked version of your revised paper without tracked changes, uploaded as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. For laboratory protocols, see https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Boris Malomed
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. Please update your submission to use the PLOS LaTeX template. The template and more information on our requirements for LaTeX submissions can be found at http://journals.plos.org/plosone/s/latex.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g. participant privacy or use of data from a third party, those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
Reviewer #1: Dear Editor,

After reading this paper in detail: the authors studied the improved Kudryashov, the novel Kudryashov, and the unified methods to demonstrate new wave behaviors of the Fokas-Lenells nonlinear waveform. They derive the ordinary differential structure of the model via a parametric transformation. The above-mentioned techniques are then applied to the governing model with Maple-18 software. As a result, various novel dynamic optical solitons of mixed trigonometric, hyperbolic, and rational form solutions are obtained. Periodic waves with W-shaped waves, amplitudes increasing gradually, amplitudes increasing rapidly, double-periodic waves, and breather waves with symmetrical or asymmetrical amplitudes appear in the soliton solutions. Singular solitons with single and multiple breather waves are also derived. Various potential features of the derived solutions are presented graphically.

This paper should be revised as follows:

1- This paper has a 22% similarity index according to the iThenticate program, so it needs to be reduced a little more.
2- References [15-20]: these papers should be described in detail.
3- "The dimensionless of Fokas-Lenells PDE": here, the meaning of "dimensionless" should be explained.
4- Eq. (3.2) should be explained in a little more detail. Why is "exp(iδ)" selected by them?
5- In Eq. (5.3), they need to write the trigonometric terms properly and carefully.
6- \"8 Conclusion\" should be extended by comparing their novelties via existing works such as \"Exact traveling wave solutions for (2+1)-dimensional Konopelchenko-Dubrovsky equation by using the hyperbolic trigonometric functions methods; New analytical solutions and modulation instability analysis for the nonlinear (1+1)-dimensional Phi-four model;Analytic solution of fractional order Pseudo-Hyperbolic Telegraph equation using modified double Laplace transformmethod;Levenberg-Marquardt backpropagation neural network procedures for the consumption of hard water-based kidney function; Instabilitymodulation and novel optical soliton solutions to the Gerdjikov\u2013Ivanov equation with M-fractional\".7-Reference papers should be rewritten according to journal format, properly.After these modifications, it may be accepted.Best regardsReviewer #2:\u00a0Review ReportNew wave behaviors of the Fokas-Lenells model using three integration techniquesThe authors used three different mathematical techniques namely: improved Kudryashov, the novel Kudryashov, and the unified methods to obtain various soliton solutions for the Fokas-Lenells nonlinear wave form. The idea of this paper is appreciable and interesting. However, I suggest the following issues should be resolved before it can be considered for publication. My comments are as follows:1. A professional proof-reading is required for the whole manuscript.2. The authors should explain the limitations of this work in the introduction section.3. The whole manuscript should be checked for typos and grammatical errors. There are various types of errors in the manuscript. An overall review is needed for fixing the grammatical and typos errors in the manuscript.4. The Abstract is meaningless. So the authors must improve it. The abstract contain answers to the following questions: What problem was studied and why is it important? What methods were used? What are the important results? 
What conclusions can be drawn from the results? What is the novelty of the work, and where does it go beyond previous efforts in the literature?

5. The authors should explain why the study is useful, with a clear statement of novelty or originality, by providing relevant information in the introduction and conclusion sections.
6. The authors should add some more discussion of the figures and numerical simulation in the conclusion and introduction sections.
7. Looking through the manuscript, the authors should present the physical motivation for the nonlinear waves of the governing equation. Why did the authors consider this equation?
8. The introduction needs to be improved with the recent developments in the field of soliton theory as well as its applications. For this purpose, the authors can add the following references to enrich the introductory section:
• Dynamical behaviors of various optical soliton solutions for the Fokas–Lenells equation
• Abundant different types of exact-soliton solutions to the (4+1)-dimensional Fokas and (2+1)-dimensional Breaking soliton equations
• Abundant exact solutions for the deoxyribonucleic acid (DNA) model
• Symbolic computation and novel solitons, traveling waves and soliton-like solutions for the highly nonlinear (2+1)-dimensional Schrödinger equation in the anomalous dispersion regime via a newly proposed modified approach
• Some specific optical wave solutions and combined other solitons to the advanced (3+1)-dimensional Schrödinger equation in nonlinear optical fibers
• On the dynamics of optical soliton solutions, modulation stability, and various wave structures of a (2+1)-dimensional complex modified Korteweg-de-Vries equation using two integration mathematical methods
• Dynamical behavior of analytical soliton solutions, bifurcation analysis, and quasi-periodic solution to the (2+1)-dimensional Konopelchenko-Dubrovsky (KD) system
• Newly generated optical wave solutions and dynamical
behaviors of the highly nonlinear coupled Davey-Stewartson Fokas system in monomode optical fibers
• Newly formed center-controlled rogue wave and lump solutions of a generalized (3+1)-dimensional KdV-BBM equation via a symbolic computation approach

9. The authors should comment on the figures in detail in the conclusion section. The conclusion section must also contain the new findings of the paper.
10. The authors should provide the future scope of the work in the conclusion section.
11. Because the authors did not create these applied methods/procedures, the authors must cite them. The authors must give the graphical and physical explanation of the obtained solutions.
12. The authors will gain some additional exact solutions, because we know that these applied methods can yield various exact soliton solutions.
13. The authors have to verify the governing model from equations 1 to 3.5. Recheck the governing equation.

Reviewer #3: Editor, PLOS ONE

Re: PONE-D-23-23981
Title: New wave behaviors of the Fokas-Lenells model using three integration techniques
Authors: M.S. Ullah, H.R. Roshid and M.Z. Ali

The paper presents the generation of wave solutions of the Fokas-Lenells equation. Technically, the results are sound. However, the solution method is based on several assumptions, for which no physical justification is provided. Here are a few.

1. The choice of Eq. (3.1) means that the authors are looking for a specific family of solutions. Are they of any physical significance? Why this choice?
2. The choice made in the line preceding Eq. (3.5) means that the family of solutions discussed by the authors is further reduced, with no discussion of the physical significance, or justification.
3. The choice of Eqs. (4.1)-(4.2) is not explained or justified, neither in the present paper, nor in Ref. [21].
4. The same criticism applies to the choice of Eqs. (5.1)-(5.2) and Ref. [22] and to Eqs.
(6.1)-(6.2).

Even if my reservations are satisfactorily answered, my inclination is to recommend submission of the paper to a more specialized/technical journal, rather than PLOS ONE, which aims at a far more general readership.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous, but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Sachin Kumar
Reviewer #3: No

**********

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment: Submitted filename: PONE-D-23-23981review.pdf

17 Aug 2023

Response to reviewers' comments

Reviewer #1: Dear Editor,

After reading this paper in detail: the authors studied the improved Kudryashov, the novel Kudryashov, and the unified methods to demonstrate new wave behaviors of the Fokas-Lenells nonlinear waveform. They derive the ordinary differential structure of the model via a parametric transformation. The above-mentioned techniques are then applied to the governing model with Maple-18 software.
As a result, various novel dynamic optical solitons of mixed trigonometric, hyperbolic, and rational form solutions are obtained. Periodic waves with W-shaped waves, amplitudes increasing gradually, amplitudes increasing rapidly, double-periodic waves, and breather waves with symmetrical or asymmetrical amplitudes appear in the soliton solutions. Singular solitons with single and multiple breather waves are also derived. Various potential features of the derived solutions are presented graphically.
Answer: Thanks for your review and constructive comments.
This paper should be revised as follows:
1- This paper has a 22% similarity index according to the iThenticate program, so it needs to be reduced a little more.
Answer: We reduced the similarity index to 6%.
2- References [15-20] should be described in more detail.
Answer: Detailed citations are provided for all references.
3- The meaning of "dimensionless" in "the dimensionless Fokas-Lenells PDE" should be explained.
Answer: The word "dimensionless" refers to a mathematical transformation that removes the units of measurement from the variables and parameters involved in the equation. Since no units of measurement are attached to the variables and parameters of our investigated model, it is called the dimensionless Fokas-Lenells PDE.
4- Eq. (3.1) should be explained in a little more detail. Why is "exp(iδ)" selected?
Answer: We explain equation (3.1) in more detail. Furthermore, a transformation with exp(iδ) can explicitly separate the phase (δ) and the magnitude of the solution.
5- In eq. (5.3), the trigonometric terms need to be written properly and carefully.
Answer: We rewrite the trigonometric terms properly.
6- "8 Conclusion" should be extended by comparing the novelties with existing works such as "Exact traveling wave solutions for (2+1)-dimensional Konopelchenko-Dubrovsky equation by using the hyperbolic trigonometric functions methods; New analytical solutions and modulation instability analysis for the nonlinear (1+1)-dimensional Phi-four model; Analytic solution of fractional order Pseudo-Hyperbolic Telegraph equation using modified double Laplace transform method; Levenberg-Marquardt backpropagation neural network procedures for the consumption of hard water-based kidney function; Instability modulation and novel optical soliton solutions to the Gerdjikov–Ivanov equation with M-fractional".
Answer: We include some relevant references and cite them in the text.
7- Reference papers should be rewritten properly, according to the journal format.
Answer: We rewrite all references in the journal format.
After these modifications, it may be accepted.
Answer: We remain thankful for your support in enhancing the article's quality.

Reviewer #2: Review Report
New wave behaviors of the Fokas-Lenells model using three integration techniques
The authors used three different mathematical techniques, namely the improved Kudryashov, the novel Kudryashov, and the unified methods, to obtain various soliton solutions for the Fokas-Lenells nonlinear waveform. The idea of this paper is appreciable and interesting. However, I suggest the following issues should be resolved before it can be considered for publication.
Answer: Thanks for your review and constructive comments.
My comments are as follows:
1. A professional proof-reading is required for the whole manuscript.
Answer: We tried our best.
2.
The authors should explain the limitations of this work in the introduction section.
Answer: The main limitation of this work is that our employed methods cannot find the exact solutions to the governing model when the coefficient of nonlinear dispersion, denoted as s, takes on nonzero values.
3. The whole manuscript should be checked for typos and grammatical errors. There are various types of errors in the manuscript. An overall review is needed to fix the grammatical and typographical errors in the manuscript.
Answer: We checked the whole manuscript again and again.
4. The Abstract is meaningless, so the authors must improve it. The abstract should contain answers to the following questions: What problem was studied and why is it important? What methods were used? What are the important results? What conclusions can be drawn from the results? What is the novelty of the work and where does it go beyond previous efforts in the literature?
Answer: We modified the abstract section according to the reviewer's suggestion.
5. The authors should explain why the study is useful, with a clear statement of novelty or originality, by providing relevant information in the introduction and conclusion sections.
Answer: We explain the novelty of our work in the introduction and conclusion sections.
6. The authors should add some more discussion of the figures and numerical simulation in the introduction and conclusion sections.
Answer: We added some discussion of the figures and numerical simulation in the introduction and conclusion sections.
7. Looking through the manuscript, the authors should present the physical motivation for the nonlinear waves of the governing equation. Why did the authors consider this equation?
Answer: We include the physical motivation of the governing model. The governing FL model is considered in our manuscript because of its practical utility.
8. The introduction needs to be improved with the recent developments in the field of soliton theory as well as its applications.
For this purpose, the authors can add the following references to enrich the introductory section:
• Dynamical behaviors of various optical soliton solutions for the Fokas–Lenells equation
• Abundant different types of exact-soliton solutions to the (4+1)-dimensional Fokas and (2+1)-dimensional Breaking soliton equations
• Abundant exact solutions for the deoxyribonucleic acid (DNA) model
• Symbolic computation and Novel solitons, traveling waves and soliton-like solutions for the highly nonlinear (2+1)-dimensional Schrödinger equation in the anomalous dispersion regime via newly proposed modified approach
• Some specific optical wave solutions and combined other solitons to the advanced (3+1)-dimensional Schrödinger equation in nonlinear optical fibers
• On the dynamics of optical soliton solutions, modulation stability, and various wave structures of a (2+1)-dimensional complex modified Korteweg-de-Vries equation using two integration mathematical methods
• Dynamical behavior of analytical soliton solutions, bifurcation analysis, and quasi-periodic solution to the (2+1)-dimensional Konopelchenko-Dubrovsky (KD) system
• Newly generated optical wave solutions and dynamical behaviors of the highly nonlinear coupled Davey-Stewartson Fokas system in monomode optical fibers
• Newly formed center-controlled rouge wave and lump solutions of a generalized (3+1)-dimensional KdV-BBM equation via symbolic computation approach.
Answer: Thanks. We include some recent works and then cite them in the text.
9. The authors should comment on the figures in detail in the conclusion section. The conclusion section must also contain the new findings of the paper.
Answer: We comment on the figures in detail to address this.
10. The authors should provide the future scope of the work in the conclusion section.
Answer: We include the future scope of the work in the conclusion section.
11.
Because the authors did not create these applied methods/procedures, the authors must cite them. The authors must provide the graphical and physical explanation of the obtained solutions.
Answer: We cite the applied methods in the text. Furthermore, we include the graphical and physical explanation of the obtained solutions.
12. The authors will gain some additional exact solutions, because we know that these applied methods can yield various exact soliton solutions.
Answer: For the convenience of the paper, we skipped these types of solutions.
13. The author has to verify the governing model from equations 1 to 3.5. Recheck the governing equation.
Answer: We recheck it. Once again, we are grateful for your assistance in enhancing the article's quality.

Reviewer #3: Editor, PLOS ONE
Re: PONE-D-23-23981
Title: New wave behaviors of the Fokas-Lenells model using three integration techniques
Authors: M.S. Ullah, H.R. Roshid and M.Z. Ali
The paper presents the generation of wave solutions of the Fokas-Lenells equation. Technically, the results are sound.
Answer: Thanks for your review and constructive comments.
However, the solution method is based on several assumptions, for which no physical justification is provided. Here are a few.
1. The choice of Eq. (3.1) means that the authors are looking for a specific family of solutions. Are they of any physical significance? Why this choice?
Answer: The selected transformation Q = U(ς) exp(iδ) for tackling the nonlinear Fokas-Lenells model is purposeful, as it separates the complex quantity into U(ς) and exp(iδ), leading to simplified analysis, the isolation of physical properties in U(ς), and a clearer examination of phase effects (δ).
2. The choice made in the line preceding Eq. (3.5) means that the family of solutions discussed by the authors is further reduced, with no discussion of the physical significance, or justification.
Answer: Our governing equation involves three dispersion terms.
However, when seeking exact solutions, it's important to note that the mentioned model is non-integrable due to the non-zero coefficient of nonlinear dispersion, denoted as 's'. To ensure the integrability of the system, we set 's' equal to zero. In our subsequent endeavor, we aimed to determine the exact solution for the governing model in cases where 's' takes on non-zero values.
3. The choice of Eqs. (4.1) - (4.2) is not explained or justified, neither in the present paper, nor in Ref. [21].
Answer: Nonlinear PDEs can be complex and difficult to solve directly. By assuming a trial solution with an auxiliary equation, one can simplify the problem by solving algebraic equations instead of the original differential equation, which is often more manageable. Furthermore, upon substituting the obtained solutions into the governing equation, the equation is satisfied.
4. The same criticism applies to the choice of Eqs. (5.1) - (5.2) and Ref. [22] and to Eqs. (6.1) - (6.2).
Answer: As above: by assuming a trial solution with an auxiliary equation, one can simplify the problem by solving algebraic equations instead of the original differential equation, which is often more manageable. Furthermore, upon substituting the obtained solutions into the governing equation, the equation is satisfied.
Even if my reservations are satisfactorily answered, my inclination is to recommend submission of the paper to a more specialized/technical journal, rather than PLOS ONE, which aims at a far more general readership.
Answer: Please accept our sincere thanks for your valuable suggestions and constructive comments that have significantly improved the quality of our article.

22 Aug 2023
New wave behaviors of the Fokas-Lenells model using three integration techniques
PONE-D-23-23981R1
Dear Dr. Ullah,
We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.
An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.
If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.
Kind regards,
Boris Malomed
Academic Editor
PLOS ONE

Additional Editor Comments:
Reviewers' comments:
Reviewer's Responses to Questions
Comments to the Author
1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.
Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: I Don't Know
Reviewer #3: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5.
Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
Reviewer #1: (No Response)
Reviewer #2: The authors amended and corrected the manuscript based on the suggested modifications. So, I would like to suggest that this revised work be published in your journal.
Reviewer #3: See attached documents: Review of paper and a few examples of Mathematica computations of some simple solutions.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: Yes: Sachin Kumar
Reviewer #3: No

**********

1 Sep 2023
PONE-D-23-23981R1
New wave behaviors of the Fokas-Lenells model using three integration techniques
Dear Dr. Ullah:
I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.
If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org.
Thank you for submitting your work to PLOS ONE and supporting open access.
Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Prof. Boris Malomed
Academic Editor
PLOS ONE
Through further statistical modeling, bacterial microbiota, and specifically Neisseria, a potential airway pathobiont,Neisseria in their airways experienced a greater respiratory symptom burden in response to occupational pollution. This is accompanied with unique changes in the fungal microbiota in particular an increase in Aspergillus, the commonly observed fungi in the environment.A bi\u2010directional mediation analysis revealed a mediation role of the airway microbiome between exposure and respiratory health, where distinct entities of the microbiota were found to mediate the effects of different exposures. Specifically, bacterial microbiota were found to mediate the influence of smoking on the lung function, where 4A signature for healthy airway microbiome could serve as a baseline to assess dysbiosis in respiratory diseases. To this end, an airway microbiome health index (AMHI) was developed, by simultaneously integrating bacterial and fungal taxa to assess an individual's respiratory health status. AMHI was declined in airway diseases compared with healthy individuals, declined in individuals with respiratory symptom, and continuously declined in individuals experiencing accumulated exposures. AMHI further interacts with exposure on its effect on respiratory health, together suggesting the potential utility of this microbiome\u2010based scoring system in assessing an individual's respiratory healthy status and susceptibility to exposure. A continuous decline of AMHI was observed from healthy individuals, individuals \u2018at\u2010risk\u2019 for COPD (pre\u2010COPD), to those diagnosed with COPD. In concert, there was a continuous expansion in the microbial interaction network from healthy, pre\u2010COPD to COPD. Among all exposures, smoking was associated with the greatest network perturbation. 
Microbiome taxa that drive the network perturbation were found to contribute to functional shifts from healthy, pre\u2010COPD and COPD, suggesting the interactome changes could be implicated in early development of COPD. AMHI and the interactome have the potential to assist in quantifying the individualized effects of exposure on respiratory health and early development of COPD.5Neisseria in the airways could be more vulnerable to the effects of the occupational pollution on the development of respiratory symptoms. Identifying this subgroup of population is important, as preventative measures can be taken to limit exposure for these individuals. The formulation of AMHI may further allow for a personalized scoring system to assess an individual's risk of developing chronic airway diseases as a result of exposure. Second, in light of the recently renewed interest on early\u2010stage COPD,Collectively, we showed an airway microbial ecosystem that can both be influenced by exposure and in turn modulate its impact on the respiratory health. Together, they highlight the central role of the airway microbiome in shaping an individual's respiratory health and response to environmental exposure (you are what you breathe). The implications of these results are several. First, it may be possible to assess an individual's susceptibility to exposure based on the composition of the airway microbiome. For instance, individuals with enrichment of The authors declare no conflict of interest."}