diff --git "a/cluster/319.jsonl" "b/cluster/319.jsonl" new file mode 100644--- /dev/null +++ "b/cluster/319.jsonl" @@ -0,0 +1,60 @@ +{"text": "The AJCC cancer staging atlas is the official publication of the American Joint Committee on Cancer, the world's foremost authority on cancer staging information.This is the first edition of this book, created as a compendium to the AJCC Cancer Staging Manual and Handbook, now in their sixth editions.This is an impressive and extremely valuable small book, which is easily transportable due to its small size, 20 \u00d7 12.5 cm (Figure It contains over 400 illustrations to facilitate an understanding for the stage of a tumour, and can easily be referenced for individual patients. Each chapter is extremely detailed, and indexed at the page margin, so one can easily access the pertinent chapter in this way rather than going to the index or table of contents each time Figure .The atlas is divided into eight parts, covering all the major systems/sites of cancer, with an introductory chapter on the principles and purposes of staging. It is packed full of useful illustrations.One of the most useful aspects of this atlas is the summary of changes for staging at the beginning of each system chapter Figure .I would recommend this publication to all health care professionals actively involved in the treatment of cancer patients, as the illustrations and text make the concept of the disease and its staging very simple to understand. It would also appeal to research students and post-doctoral scientists.From a training perspective, the book is probably more suited to postgraduate oncology or surgery trainees."} +{"text": "Accurate staging of rectal cancer is essential for selecting patients who can undergo sphincter-preserving surgery. It may also identify patients who could benefit from neoadjuvant therapy. Clinical staging is usually accomplished using a combination of physical examination, CT scanning, MRI and endoscopic ultrasound (EUS). Transrectal EUS is increasingly being used for locoregional staging of rectal cancer. The accuracy of EUS for the T staging of rectal carcinoma ranges from 80-95% compared with CT (65-75%) and MR imaging (75-85%). In comparison to CT, EUS can potentially upstage patients, making them eligible for neoadjuvant treatment. The accuracy to determine metastatic nodal involvement by EUS is approximately 70-75% compared with CT (55-65%) and MR imaging (60-70%). EUS guided FNA may be beneficial in patients who appear to have early T stage disease and suspicious peri-iliac lymphadenopathy to exclude metastatic disease. Approximately 41,000 new cases of rectal cancer will be diagnosed in the year 2006 with an estimated 8,500 deaths . In thesCurrently available methods for assessment of rectal tumors include digital rectal examination, rigid proctoscopy, computer tomography (CT) scan, magnetic resonance imaging (MRI), and endorectal ultrasound (EUS). Digital examination allows for assessment of size and degree of fixation of rectal tumors. It has limited value because of its subjective nature and dependence on the examiner's experience . Rigid pTransrectal endoscopic ultrasound has emerged as the diagnostic modality of choice for clinical staging of rectal tumors. Because EUS can delineate the layers of the rectal wall -14, it iThe most widely used EUS endoscopes are available in two different designs: radial and curved linear array. 
Radial echoendoscopes produce a 360° image in a plane perpendicular to the long axis of the endoscope's insertion tube. Linear devices, on the other hand, produce sector-shaped images in a plane parallel to the long axis of the insertion tube. Linear imaging is used for interventional EUS-guided fine needle aspiration (FNA). The EUS sonographic layers of the wall of the GI tract have been correlated with histopathological layers in several studies. Patients who undergo rectal EUS should receive an oral lavage preparation, typically consisting of polyethylene glycol electrolyte solution. Tumor staging and lymph node detection are performed by filling the echoendoscope ultrasound balloon with water, advancing the scope above the tumor and slowly withdrawing to the anal sphincters. The depth of wall invasion, invasion into the perirectal fat or adjacent organs, and the presence of perirectal lymph nodes are carefully assessed (Figure). Prognosis of rectal cancer depends on its local, nodal, and distant tumor status. Rectal cancer is staged using the Tumor-Node-Metastasis (TNM) staging system, which is similar to that for colon cancer (Table). One of the limitations of EUS is the under-staging of T3 tumors, which is caused by the inability to detect microscopic cancer infiltration owing to the limits of its resolution. Other factors influencing the accuracy of tumor staging include operator experience [21,22]. The accuracy of EUS in determining metastatic nodal involvement is approximately 70-75% [19,28], compared with CT (55-65%) and MR imaging (60-70%). Harewood et al. evaluated the role of EUS-guided FNA of perirectal lymph nodes in the preoperative assessment of a cohort of 80 patients with newly diagnosed, nonmetastatic rectal cancer. EUS-guided FNA does not improve the preoperative staging of rectal cancer in most patients. The overall accuracy of EUS, CT scan and MRI in the staging of rectal cancer compared with surgical pathology is summarized in the Table. The routine utilization of EUS for assessment and determination of tumor penetration into the bowel wall is essential for rectal cancer. This has become the standard of care, as it allows for identification of patients in whom to administer preoperative neoadjuvant therapy. The accuracy of EUS for staging rectal cancer after radiation therapy is markedly decreased due to post-radiation edema, inflammation, necrosis, and fibrosis. Studies suggest that the T-stage accuracy after radiation is 50%, with a 40% overstaging rate [34]. The local recurrence rate after surgery alone for advanced rectal cancer is approximately 25% and decreases to 10% after radiation. The use of preoperative EUS is an accurate modality for clinical staging of rectal cancer to guide neoadjuvant treatment. EUS-guided FNA may be beneficial in patients who appear to have early T stage disease and suspicious peri-iliac lymphadenopathy. Whether the accurate staging ability of EUS and EUS-guided FNA translates into improved outcomes in terms of reduced recurrence rates and ultimately prolonged survival remains uncertain. At this juncture, the utility of EUS-guided FNA for evaluation of metastatic disease remains unclear. The author(s) declare that they have no competing interests. AAS conceived of the idea of the review article and helped to draft the manuscript. YF helped to draft the manuscript. SH coordinated and helped to draft the manuscript.
All authors read and approved the final manuscript."}
+{"text": "Caspases belong to a class of cysteine proteases which function as critical effectors in apoptosis and inflammation by cleaving substrates immediately after unique sites. Prediction of such cleavage sites will complement structural and functional studies on substrate cleavage as well as the discovery of new substrates. Recently, different computational methods have been developed to predict the cleavage sites of caspase substrates with varying degrees of success. As the support vector machines (SVM) algorithm has been shown to be useful in several biological classification problems, we have implemented an SVM-based method to investigate its applicability to this domain. A set of unique caspase substrate cleavage sites was obtained from the literature and used for evaluating the SVM method. Datasets containing (i) the tetrapeptide cleavage sites, (ii) the tetrapeptide cleavage sites augmented by the two adjacent P1' and P2' amino acids, and (iii) the tetrapeptide cleavage sites with ten additional upstream and downstream flanking residues (where available) were tested. The SVM method achieved an accuracy ranging from 81.25% to 97.92% on independent test sets. The SVM method successfully predicted the cleavage of a novel caspase substrate and its mutants. This study presents an SVM approach for predicting caspase substrate cleavage sites based on the cleavage sites and the downstream and upstream flanking sequences. The method shows an improvement over existing methods and may be useful for predicting hitherto undiscovered cleavage sites. These results suggest that the SVM trained with the (-D) datasets may be useful for identifying hitherto undiscovered cleavage sites while circumventing the problem of overtraining due to the high percentage of \"XXXD\" cleavage sites in the training datasets. The results also provided further evidence for the suggestion that the P1', P2' and residues further upstream and downstream of the cleavage site may influence substrate cleavage, and that by accounting for these flanking sequences, the SVM performance can be improved. It was also shown that the SVM method can be extended to predict cleavage sites with residues other than the canonical aspartate (D) at P1. While the occurrence of the non-canonical cleavage sites remains proportionately small, it does imply that the sampling space is not limited to the XXXD motif for cleavage sites. Consequently, the ability to predict these non-canonical cleavage sites will be a useful complement to existing computational methods which assume the consensus XXXD motif as the basis for their algorithms. Our analysis of the training and test datasets indicated a large percentage of cleavage sites with the XXXD motif (~98%) and a very small percentage of cleavage sites with a non-canonical XXXE motif (~2%). While the experimental cleavage site specificities reported in Thornberry et al. support the canonical XXXD motif, the non-canonical sites show that cleavage is not restricted to it. As the GraBCas method can only be applied to potential cleavage sites with aspartate (D) at the P1 position, we scored the positive sequences in the P4-P1 training dataset with the GraBCas matrix values for the different caspases, selected the highest score and checked the percentage of correctly predicted cleavage sites against a series of cut-off scores (Table). The same procedure was applied to the P4-P1 test dataset.
As there were no recommended cut-off scores for predicting the cleavage sites, we chose the cut-off score of 0.1, which was used for granzyme B cleavage site prediction as reported in Backes et al. The SVM method successfully predicted the cleavage of a novel caspase substrate and its mutant sequences as reported in Yan et al., based on the cleavage site and the P1' and P2' positions and residues further upstream and downstream of the cleavage site. In addition, the SVM method may be useful for predicting the non-canonical cleavage sites lacking aspartate (D) at the P1 position, such as those found in DIAP1 and other proteins reported in the literature. To encapsulate the sequence information into a format suitable for SVM training and testing, the sequences were transformed into feature vectors xi with corresponding labels yi ∈ {+1,-1}. Each amino acid was represented by a 20-dimensional binary vector with a single non-zero position, e.g., alanine as (1,0,...,0) and cysteine as (0,1,0,...,0). Therefore, for the P4-P1 dataset, each sequence was represented by an 80-dimensional vector. Sequences in the P4-P2' and P14-P10' datasets were represented by 120- and 480-dimensional vectors respectively. For SVM implementation, we used the freely downloadable LIBSVM package by Chang and Lin. To classify the data, the SVM trains a classifier by mapping the input samples, using a kernel function in most cases, onto a high-dimensional space, and then seeking a separating hyperplane that differentiates the two classes with maximal margin and minimal error. The decision function for new predictions on unseen examples is given as f(x) = sgn(Σi αi yi K(xi, x) + b), where K(xi, xj) is the kernel function, and the parameters αi are determined by maximizing W(α) = Σi αi - (1/2) Σi Σj αi αj yi yj K(xi, xj) under the conditions 0 ≤ αi ≤ C and Σi αi yi = 0, where C serves as the regularization parameter that controls the trade-off between margin and classification error. As the efficacy of the SVM prediction system is dependent on the type of kernel used, we explored various kernels commonly implemented in biological problems on our datasets. We chose the widely used radial basis function (RBF) kernel, K(xi, xj) = exp(-γ||xi - xj||²), as it was found to be most effective (data not shown). Two parameters are required for optimizing the SVM classifier: the variable γ, which determines the capacity of the RBF kernel, and the regularization parameter C. To optimize γ and C, we applied 10-fold cross-validation on each of the training datasets using various combinations of γ and C. In 10-fold cross-validation, the training dataset was split into 10 subsets, where one of the subsets was used as the test set while the other subsets were used for training the classifier. The trained classifier was tested using the test set. The process was repeated 10 times using a different subset for testing, hence ensuring that all subsets were used for both training and testing. The SVM parameters γ and C were stepped through combinations of 0.01, 0.1, 1, 10, 100 for γ, and 1, 10, 100 and 1000 for C in a grid-based manner. The best combinations of γ and C obtained from the optimization process were used for training the SVM classifier using the entire training dataset. The SVM classifier was subsequently used to predict the test datasets.
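Since the encoding and grid-search procedure just described lends itself to a short illustration, a minimal sketch follows. It uses scikit-learn's SVC, a wrapper around the LIBSVM library named above; the toy peptides, the reduced 2-fold cross-validation and all identifiers are ours, not the study's, and the metrics computed at the end anticipate the definitions given next.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import confusion_matrix, matthews_corrcoef

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def encode(peptide):
    # Sparse orthonormal encoding: 20 bits per residue, so a P4-P1
    # tetrapeptide becomes an 80-dimensional binary vector.
    v = np.zeros(len(peptide) * 20)
    for i, res in enumerate(peptide):
        v[i * 20 + AA.index(res)] = 1.0
    return v

# Toy peptides (illustrative only, not the curated dataset):
pos = ["DEVD", "DQTD", "IETD", "LEHD"]    # cleavage sites, label +1
neg = ["AAAA", "KRKR", "GSGS", "PPPP"]    # non-sites, label -1
X = np.array([encode(p) for p in pos + neg])
y = np.array([+1] * len(pos) + [-1] * len(neg))

# RBF-kernel SVM; gamma and C stepped over the grid quoted above.
# cv=2 only because the toy set is tiny; the paper uses 10-fold CV.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"gamma": [0.01, 0.1, 1, 10, 100],
                     "C": [1, 10, 100, 1000]},
                    cv=2)
grid.fit(X, y)

pred = grid.predict(X)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
se = tp / (tp + fn)                        # sensitivity
sp = tn / (tn + fp)                        # specificity
ac = (tp + tn) / (tp + tn + fp + fn)       # accuracy
mcc = matthews_corrcoef(y, pred)           # Matthews correlation
print(se, sp, ac, mcc)
```

With the real datasets, cv would be set to 10 to match the paper's protocol; everything else maps directly onto the grid of γ and C values listed above.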
Various quantitative variables were obtained to measure the effectiveness of the SVM method: (i) TP, true positives – the number of correctly classified cleavage sites; (ii) FP, false positives – the number of incorrectly classified non-cleavage sites; (iii) TN, true negatives – the number of correctly classified non-cleavage sites; and (iv) FN, false negatives – the number of incorrectly classified cleavage sites. Using the variables above, a series of statistical metrics were computed. Sensitivity (SE) and specificity (SP), which indicate the ability of the prediction system to correctly classify the cleavage and non-cleavage sites respectively, were calculated as SE = TP/(TP + FN) and SP = TN/(TN + FP). To provide an indication of the overall performance of the system, we computed the accuracy (AC), the percentage of correctly classified sites, AC = (TP + TN)/(TP + TN + FP + FN), and the Matthews correlation coefficient, MCC = (TP·TN - FP·FN)/√((TP + FP)(TP + FN)(TN + FP)(TN + FN)). The SVM trained using the P14-P10' (-D) dataset was used to predict the cleavage of Livin [Swiss-Prot:Q96CA5] and its various deletion mutants, based on the prediction of the caspase cleavage sites, as reported in Yan et al. 24-amino-acid windows centred on the P1 residue of the reported Livin cleavage site (DHVD52) were extracted from both wild-type and mutant Livin sequences. Mutants used in this study are: LE Δ52-61, Δ53-55, Δ55-57, Δ57-59, Δ60-62, Δ52-61, Δ53-61, Δ52 and Δ51-53. In mutants with Asp-52 deleted, the peptide windows were centred on the subsequent residue occupying position 52. As the CasPredictor method is unavailable from the published website, it was not tested. The performance of GraBCas was compared with the SVM method using the current datasets. As the GraBCas scoring matrices are specific for the tripeptide P4-P3-P2 and assume that P1 is an Asp (D) residue, the GraBCas matrices were used to score only the positive sequences (cleavage sites) from the P4-P1 training dataset. As GraBCas scores for the different caspases were available, only the highest scores were recorded. The percentage of correctly predicted cleavage sites was calculated as mentioned earlier. The P4-P1 test dataset was tested in the same manner and the SE score was obtained at a GraBCas cut-off of 0.1. LJKW conceived the application of SVM for prediction of caspase substrate cleavage sites. TWT contributed with ideas on the experimentation and SR finalized the manuscript. All authors read and approved the final manuscript. Dataset of caspase substrate cleavage sites (for training and cross-validation): list of caspase substrate cleavage sites used for cross-validation and training of the SVM. Dataset of caspase substrate cleavage sites (for independent out-of-sample testing): list of caspase substrate cleavage sites used for independent out-of-sample testing of the SVM method."}
+{"text": "Synthetic glycolipids prepared by esterification of various sugars and sorbitol, and containing various numbers of saturated or unsaturated fatty acid residues, as well as bacterial lipid A and lipopolysaccharide, were tested for mitogenicity of splenic cells of Fischer rats and Swiss mice and for the augmentation of the humoral immune response against sheep red blood cells in these species. Subsequently, a few of the humoral immune-response-enhancing glycolipids were compared with non-enhancers in their anti-tumour activity against 13762 rat mammary carcinoma in inbred Fischer 344 rats and Ehrlich tumour in Swiss mice.
They were given systemically after tumour inoculation, and intratumourally in squalene and Tween emulsion after intradermal MAC tumour development. It was observed that certain structural characteristics of the glycolipids, with respect to the type of sugar and the type and number of fatty-acid residues, were needed for their adjuvant action on the humoral arm of the immune response. Although the humoral immune-response enhancers were somewhat superior to the non-enhancers in their anti-tumour activity, the correlation coefficient demonstrated a lack of significant concordance. It is concluded that glycolipids selected for their ability to augment humoral immune responses against standard antigens need not be suspect as tumour-enhancers on the grounds that they would elicit blocking antibodies in vivo against tumour-associated antigens."}
+{"text": "To eliminate the unwanted decoherence effect in cavity quantum electrodynamics (cavity QED), the transfer function of the Rabi oscillation is derived theoretically using the optical Bloch equations. In particular, decoherence arising from atomic spontaneous emission is considered. A feedback control strategy is proposed to preserve the coherence through Rabi oscillation stabilization. In the scheme, a classical feedback channel for quantum information acquisition is constructed via quantum tomography, and a compensation system based on root locus theory is put forward to suppress the atomic spontaneous emission and the associated decoherence. The simulation results prove its effectiveness and superiority for coherence preservation. The enormous potential of quantum information has attracted widespread attention in the scientific community and has become an important research focus. Among the hardware implementations proposed for quantum computing, such as cavity QED, ion traps, nuclear magnetic resonance, quantum dots, and superconducting systems, cavity QED is one of the most promising. However, all the advantages of cavity QED depend on the coherence of the system. The loss of coherence in quantum mechanical superposition states limits the time for which quantum information remains useful. Similarly, it limits the distance over which quantum information can be transmitted. Cavity QED investigates the interaction of single atoms with single electromagnetic field modes, defined, for example, by a pair of mirrors as illustrated in the Figure. The atom-photon coupling rate (g) is proportional both to the dipole moment of the atom and to the electric field of the photon at the atom's location. When the coupling rate (g) is larger than the dissipation arising from the loss of photons (at rate κ) or from emission from the atom into other modes (at rate γ), that is, g ≫ κ, γ, the excited atom will periodically release and absorb a photon with a certain frequency, a phenomenon known as vacuum Rabi oscillation. In the case of resonant light, ω = ω0, the optical Bloch equations degenerate into constant-coefficient differential equations. For the special initial conditions ρ11(0) = 1 and ρ22(0) = 0, and since the populations are real (ρ11 = ρ11*, ρ22 = ρ22*), a general solution for ρ22 can be given (Equation (10)). When an external coherent laser field is applied, the vacuum-field-induced coherence effects are replaced by the field-induced coherence effects. The decoherence effect caused by spontaneous emission in the system can then be suppressed by introducing control of the laser field.
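To make the damping behavior concrete, here is a minimal numerical sketch of the resonant optical Bloch dynamics. It assumes the standard Bloch-vector form (u, v, w) with w = ρ22 − ρ11 and radiative damping γ; this is our illustration of the textbook equations, not the authors' code, but for γ = 0 it reproduces the undamped vacuum Rabi oscillation ρ22 = sin²(Ωt/2), and for γ > 0 the damped oscillations discussed in the simulation section below.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

OMEGA = 2.0  # Rabi frequency Omega (arbitrary units, assumed value)

def bloch_rhs(t, s, gamma, omega):
    # Resonant optical Bloch equations for the Bloch vector s = (u, v, w),
    # with w = rho22 - rho11 and spontaneous-emission rate gamma.
    u, v, w = s
    return [-(gamma / 2.0) * u,
            -(gamma / 2.0) * v - omega * w,
            omega * v - gamma * (w + 1.0)]

t = np.linspace(0.0, 10.0, 2000)
for gamma in [0.0, 0.1 * OMEGA, 0.25 * OMEGA, 0.5 * OMEGA, OMEGA]:
    # Start in the ground state: rho11 = 1, rho22 = 0, i.e. w = -1.
    sol = solve_ivp(bloch_rhs, (t[0], t[-1]), [0.0, 0.0, -1.0],
                    t_eval=t, args=(gamma, OMEGA))
    rho22 = (1.0 + sol.y[2]) / 2.0   # excited-state population
    plt.plot(sol.t, rho22, label=f"gamma = {gamma / OMEGA:.2f} Omega")

plt.xlabel("time")
plt.ylabel("rho22 (excited-state population)")
plt.legend()
plt.show()
```

For γ = 0 the population oscillates with constant amplitude; increasing γ visibly damps the oscillation toward the steady-state value Ω²/(2Ω² + γ²), which is the effect the compensation below is designed to undo.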
Furthermore, the method for implementing the decoherence suppression is to change the Rabi oscillation frequency. According to the previous strategy, the transfer function of the system is constructed and the decoherence suppression is then realized through compensation of the transfer function. We can infer from Equation (10) that the damped Rabi oscillation has the form of the unit step response of an underdamped (ξ < 1) second-order system with open-loop transfer function G(s) = ωn²/(s² + 2ξωn·s + ωn²). Hence, the unit step response of the open-loop system is y(t) = 1 - e^(-ξωn·t)·sin(ωd·t + φ)/√(1 - ξ²), where ωd = ωn·√(1 - ξ²). Comparing Equations (10) and (14), the damping ratio ξ and natural frequency ωn of the equivalent system can be expressed in terms of the spontaneous emission rate γ and the Rabi frequency Ω. The compensation then proceeds in two steps. (1) Add a pole at s = -p on the negative real axis, where p > ξωn. After this step, the root locus of the system is as shown in the Figure. (2) Since the compensated system is sensitive to the open-loop gain after step (1), add a zero at s = -z so as to place the asymptotes of the root locus in the right side of the imaginary axis. Let Δx > 0 be the intersection of the asymptotes and the real axis. To meet the requirement on the asymptotes, (-p - 2ξωn + z)/2 = Δx, we get z = p + 2ξωn + 2Δx, where Δx should be chosen moderately to avoid making the open-loop gain too large. After this step, the root locus of the system is as shown in the Figure. If we put the open-loop transfer function into a unity-feedback loop, then according to control theory the closed-loop characteristic equation after the previous two steps is 1 + K(s + z)/((s + p)(s² + 2ξωn·s + ωn²)) = 0. Substituting s = iy into Equation (16), the gain that places the closed-loop poles on the imaginary axis, and hence yields a sustained oscillation, can be determined. In the feedback control of the quantum system, the information of the density matrix cannot be measured directly because of the characteristics of the quantum system. One of the challenges is how to access the quantum information and feed it back to the input; in other words, how to construct the negative feedback channel for the quantum second-order system. In our work, to solve the problem of information acquisition of the quantum state, a quantum tomography scheme is designed to reconstruct the density matrix. ρ is reconstructed from the output; thus, the quantum information is transformed into classical information and fed back to the input. In order to realize the compensation strategy of the Rabi oscillation based on the transfer function analysis, a compensation circuit is designed using an active phase-lead compensator and a double-integral A/D converter. The quantum density matrix information is transformed into a classical voltage signal for driving the light beam, which can be used to stabilize the Rabi oscillation. The details are described in the following sections. The physical implementation of the proposal is shown in the Figure. Quantum tomography is an indirect method to determine quantum system parameters. The basic idea is to construct multiple copies of the photon from the system output and determine the density matrix elements of the output photon through optical operations on the photon copies: (1) let the N identical copies of the output photon pass through the horizontal-polarization wave plate and record the number n0; (2) let the N identical copies pass through the vertical-polarization wave plate and record the number n1; (3) let the N identical copies pass through the left-rotation wave plate and record the number n2; (4) let the N identical copies pass through the right-rotation wave plate and record the number n3.
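The four counts n0-n3 fix the populations and one coherence quadrature of a 2 × 2 density matrix. The sketch below shows one way to assemble ρ from them; it is an assumption-laden illustration, not the authors' reconstruction formula (their cited equations are not reproduced in this text). In particular, we set Re(ρ12) = 0, which is consistent with the resonant Bloch dynamics above when that quadrature starts at zero, and the sign of Im(ρ12) depends on the left/right circular-basis convention.

```python
import numpy as np

def reconstruct_rho(n0, n1, n2, n3):
    """Assemble a 2x2 density matrix from the four wave-plate counts.

    n0/n1: counts behind the horizontal/vertical polarization plates;
    n2/n3: counts behind the left/right rotation (circular) plates.
    Assumption: Re(rho12) = 0, since it is not fixed by these four
    projections; the sign of Im(rho12) follows one L/R convention.
    """
    rho11 = n0 / (n0 + n1)                    # population of level 1
    rho22 = n1 / (n0 + n1)                    # population of level 2
    im12 = (n3 - n2) / (2.0 * (n2 + n3))      # coherence quadrature
    rho12 = 0.0 + 1j * im12                   # Re part assumed zero
    return np.array([[rho11, rho12],
                     [np.conj(rho12), rho22]])

# Example with hypothetical counts from N = 1000 copies per setting:
print(reconstruct_rho(520, 480, 300, 700))
```

A complete single-qubit tomography would add a diagonal/antidiagonal measurement to pin down Re(ρ12); the scheme above trades that for the assumption stated in the docstring.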
The relationship between the counts ni and the system output ρout can be described as in [11,12], and the density matrix of the output photon can be reconstructed accordingly. Due to the typical phase-lead compensation form D(s) = (s + z)/(s + p), with p > z, the implementation of the active phase-lead compensation network is as shown in the Figure. Due to the 1/|Ω|² in the expression of K′, a double-integral A/D converter can be used to realize the reciprocal operation of |Ω|², as shown in the Figure. Once the density matrix element is reconstructed by the tomography scheme, a voltage signal proportional to it can be produced to drive the compensation network. Through the previous analysis, a coherence-preserving solution in cavity QED has been presented using quantum tomography and Rabi oscillation compensation. In the following, simulation results are analyzed to evaluate the strategy. As stated above, the counts ni can be calculated and used to reconstruct the output density matrix. The probability of finding the atom in the excited state is plotted for various ratios of γ to the Rabi frequency Ω, with the initial conditions ρ11(0) = 1 and ρ22(0) = 0. In the simulation, we have chosen the following values: γ = 0, γ = 0.1Ω, γ = 0.25Ω, γ = 0.5Ω, and γ = Ω. As can be seen from the figure, when γ = 0 the oscillation has equal amplitude and fixed frequency; as γ increases, the damping effect becomes more and more significant. The main objective of our design is to overcome the damping effect by compensation; that is to say, when the spontaneous emission γ exists, the process will still be a coherent process with equal amplitude and fixed frequency. According to the strategy described above, with γ = 0.3, |Ω| = 2, p = 1.5, and Δx = 0.1, the unit step responses of the uncompensated and compensated systems are shown in the Figures. It is clear that the uncompensated system approaches a constant value after the damping process, while the compensated system maintains a sustained oscillation with a constant frequency and amplitude, which proves the effectiveness and superiority of our design. For the coherence preservation in cavity QED, the model of the damped Rabi oscillation in the form of a transfer function is derived from the optical Bloch equations. The transfer function of the damped Rabi oscillation is compensated using the root locus technique from classical control theory to suppress the atom's spontaneous emission. Finally, a physical implementation is put forward to keep the coherence in cavity QED. The strategy provides a theoretical basis for entanglement preparation in cavity QED. The research has theoretical significance and practical value. The simulation results showed that the compensated system can maintain a sustained oscillation with a constant frequency and amplitude, meaning the process is a coherent, reversible one, which is an ideal environment for quantum information processing. However, this work is based on the semiclassical optical Bloch equations; in other words, only the atom is quantized and the field is treated as a definite function of time rather than as an operator. To obtain more rigorous results, further work will focus on the Jaynes-Cummings model, in which the radiation field is also quantized, and meanwhile an appropriate quantum feedback channel is expected."}
+{"text": "
Lions Quest has become recognized as an evidence-based program for preventing alcohol and drug use through the development of social skills and the promotion of meaningful engagement in the school community (Lions Clubs International, Overview of Skills for Adolescence Lions Quest). Deductive and inductive analysis of interview transcripts clearly underscored that the positive perceptions of those early adolescents on the quality of their relationships with friends outweigh the negative impression that can be created by peer pressures at this age. It is through such a filter that these adolescents came to appreciate the impact of Lions Quest. Their need to be part of a circle of friends also comes to the fore as a crucial component of a sense of school belonging (Faircloth and Hamm). The purpose of this qualitative study is to pave the way for the establishment of healthy interpersonal relations by facilitating an understanding of the impacts of Lions Quest Skills for Adolescence as perceived by adolescents and teachers who took part in it. Social workers must continually re-examine their roles and tasks based on a dual responsibility that consists of making decisions on solid findings and evaluating the effects of their actions. We first discuss social prevention, risk and protective factors, and the sense of school belonging. We will describe the methodology by which we investigated the perspective of concerned actors. We will offer key findings showcasing the views of adolescents who participated in Lions Quest. In the field of social prevention geared toward adolescent problem behaviors, research has seen numerous advances in recent decades (Jenson). On the other hand, programs that proved to be more successful sought to help adolescents to acquire social skills, make responsible decisions, develop higher self-esteem and resist negative peer pressures. Lions Quest seeks to prevent or delay adolescents' alcohol and drug use by helping them to develop a constructive commitment to their family, school, and community through the development of social skills and competencies whereby they: (1) develop a capacity for self-assertion and for resisting negative peer pressures; (2) acquire a sense of responsibility; (3) develop conflict resolution skills; and (4) learn how to intervene with peers who start to become substance users. Through a series of 30 one-hour sessions, participants meet in classrooms during regular hours, typically during health or physical education periods. They are conducted by the teacher of the class involved, with occasional support offered by a social worker from the social services agency that initiated the project. In Eastern Ontario, the program brought together students primarily from Grade 7, with several from Grade 8, all between 12 and 14 years of age. This age group was the focus of evidence-based research conducted by Eisen et al. Self-confidence emerged as a dominant theme in the interviews held with school staff. The five teachers related, often and with enthusiasm, how important this trait is for young adolescents. One teacher described a girl in her class: She was insecure. She often wanted to change schools. She had problems with other girls and got into arguments. She was turning in on herself. She decided to take the situation in hand. How did the program help her through these troubles? Certainly in the area of self-confidence, I was able to offer other tools, and this helped her to make appropriate decisions. (Teacher 1) The interviews with teachers confirmed that Lions Quest helped the young adolescents to better assert themselves and defend their opinions.
One teacher cited the case of a tall, strong boy who had been intimidating other students and, thanks to the program, was able to gain a healthy form of self-respect and to express himself in a more sociable fashion. The same teacher spoke as well of another boy who had been the scapegoat of his group but learned to speak up for himself: He was being picked on, all the time. Now it's not like that any more because he can defend himself. He does it verbally, by saying, 'Stop it, I don't like this!' (Teacher 1) According to the teachers, Lions Quest played a pivotal role in bringing about such changes. One said: It takes a lot of courage for some of them to take a stand outside of their group. (…) The discussion groups we conducted [through the Lions Quest program] gave them the confidence to speak up for their own beliefs, and they felt like, 'Hey, this is how I feel, and I may be for or against something, but I've got my reasons and it's OK if they're different from others.' I think the students developed their skills considerably. (Teacher 3) The teachers also noticed a difference in relationships among the students under their daily supervision and offered several examples. One spoke of an aggressive boy who learned to verbalize feelings instead of making threatening gestures when angry: He had a serious behavioral problem. While we were conducting the 'Lions Quest' program, there was a moment when everything clicked into place, and he changed. (…) He gives his all when he discusses something, by expressing himself with words instead of acting up when he's mad. (Teacher 2) Another indicated that not only did the program help the teens to develop citizenship values through participating in voluntary activities; it became a drawing card through the interest it generated in civic spirit. The teachers appreciated the relevance of Lions Quest to issues that fall within the daily reality of 12- to 14-year-olds, the chance to realize immediate results, and the improvements in peer relationships among the students, this last point being a constant concern of young persons of that age group. The relational dimension, notably ties among classmates and friends, was clearly placed center stage through the statements of the adolescents we met. They recounted that, within the scope of the program, relationships among classmates improved: But you know, by the end [of Lions Quest program], everybody was beginning to say: 'You know what, she really is fun to be with, let's go and hang out with her'. Ever since 'Lions Quest,' after those activities you'd have to say there's less bullying at school. It appears that the program activities had a favorable influence on cohesion within groups, a theme readily identifiable in the students' comments: We're all close in our class, the whole 8th grade, we're all one big family. (…) We don't judge anybody (…) we're always there for each other. (…) Someone's accepted for who they are. Although only a minority of students addressed the topic of atmosphere at school, these short yet thought-provoking comments leave the impression that the program indeed had an influence on the daily lives of many students and on the social climate within groups. We found our class fun because the teacher was organizing lots of activities and we learned by doing fun things. Now we're more confident in ourselves.
The theme of confidence in oneself appears frequently in the analysis of the interviews with adolescents, mirroring much of what was revealed in the teacher interviews. A range of activities guided them toward recognizing their strengths and talents and feeling at ease about revealing these attributes to others. Asked 'What is it that makes you have more respect now for your own opinions?', one 13-year-old student responded without hesitation: Probably because I'm more sure of myself. (…) And I know I have a good idea about what I'm doing. These new capabilities inspired the students to surpass themselves: I was starting to flunk math and French. (…) I wanted to get some help even if maybe they'd laugh at me, so I took the risk anyway. It became apparent that the adolescents drew much from Lions Quest in terms of life lessons, given that it presented them with strategies for better self-expression while respecting others at the same time. As for process, although the manner whereby the strategies were applied did not in some cases achieve the desired level of maturity, they nonetheless generated a higher degree of self-awareness: I usually don't say it in the well-behaved manner that everybody says you should. I usually use a loud voice. We used to have all these programs where it's, like, telling you how to solve these conflicts and it was always, take time to calm down and then express it to the person calmly. According to the adolescents interviewed, growth in self-confidence goes hand in hand with an increase in self-assertion: (…) I try to solve the problem, then find out what I'm doing that he doesn't like, because I don't wanna lose my friend. Indeed, the influence of Lions Quest in the area of conflict resolution varied according to the individual teens being interviewed. Some said they would react in the same fashion if a similar conflict situation presented itself, whereas others attested that Lions Quest played a role in improving their conflict resolution skills: I wasn't thinking about what I was saying, so just about anything came out, but now I know how others feel if I let loose like that. I've learned to be nice. I've decided I should think before I speak. The perceptions of the students indicated that they also developed certain skills in conflict resolution, their comments suggesting that such ability is often tied in with relationships with peers. They gave numerous examples of previous conflicts they had experienced with classmates, including disputes between friends and incidents of bullying. They related anecdotes about how they learned to calm down and better resolve discord, seek the advice of an adult, reach a compromise through discussion, as well as apologize. Primarily, what the students from all three schools retained from the program were the elements calling for reflection upon their relationships with friends. From this angle, their perspectives follow the same direction as those of their teachers.
It has become common for the peer group to occupy such a predominant place in adolescent life that, in the case of 12- to 14-year-olds, it surpasses that of parents, from whom the offspring strive to distance themselves in order to shape their own identity. These statements clearly fall in line with the research of Faircloth and Hamm. The teens discussed during the interviews that the frequency of social contacts was an important aspect of the stability of their friendships: seeing their closest friends often at school, during extracurricular activities, or outside the school altogether. These adolescents insisted on the qualities they considered to be important in their peers, specifying repeatedly that they sought friends of respectful character. As an example, one 14-year-old girl stated: It's cool because the whole gang's like me, and I'm the same as them. We like everybody a lot, we have fun together, we're all smart the same way I am and we're really close. We're a family. In the same vein, those adolescents appreciate being able to feel at ease in their immediate environment, thereby benefiting from a freedom from intimidation. Many of them indicated that they and their peers accept and respect differences among themselves, and are thus able to spend time together in their group dynamic in a spirit of mutual understanding. A large number of teens reinforced the viewpoint that a circle of friends provides mutual support. They stressed the fact that young people need a positive social network and underscored the importance of looking out for each other's welfare. They cited many examples whereby a friend who was living through a difficult situation or experiencing negative emotions received support from the others, whose presence brought comfort, distraction, or much-needed laughter. These findings flesh out the conviction that such mutual assistance and acceptance among peers is the foundation of a positive social network. Apart from that well-being, the young adolescents place considerable value upon trust in relationships. It is important to be able to open up to their friends and have their privacy respected. Many affirmed that they share secrets with trusted friends and in return are taken into similar confidence on a one-for-one basis. To quote one 14-year-old boy: I can tell him something and he won't turn around and blab it all around. And he can tell me stuff too, like give me his opinions and I tell him mine. In essence, respect, trust in each other, and mutual support are the fine points that young adolescents hold in high regard among their friends. When these traits bind a group of friends, they lead to shared ties through participation in activities; Brown and Larson make the same point. The purpose here was to understand the influence of Lions Quest on 12- to 14-year-olds, as well as to determine whether any aspects of their sense of school belonging contributed to the achievement of the actual program objectives. The verbatim indicate that, beyond the expected capacity to resist peer pressures as originally set forth by the program, the development of social skills and engagement in their school community that the young adolescents carried forth from this exercise encouraged them to think about the quality of their relationships with friends, and even more so, to improve them.
Interestingly, they listened more attentively when adults addressed this issue, which they considered to be crucial in the context of their own frame of reference and their day-to-day experiences. We therefore come face to face with their need to be part of a milieu, of a niche, and of a circle of friends, the latter having moved to the forefront as a crucial component of a sense of school belonging. Students with a sense of belonging (1) have a tangible affiliation with members of that group; (2) benefit from a feeling of self-esteem stemming from involvement with the group; and (3) believe that they are appreciated by the other members and take pride therein. Through the program, the young adolescents in this study entered into a phase of reflection vis-à-vis their friendships. Notwithstanding all these considerations, it should be specified that even when adolescents experience well-being thanks to supportive friends, such benefits can be compromised if the support offered by adults is weak, even more so if the younger ones are passing through a time of personal difficulty (Buchanan and Bowen). To conclude, these young adolescents had begun to feel that, through Lions Quest, they were becoming a valued part of a larger “whole” with talent, potential, and ventures to pursue. In their comments, they enshrined what researchers in social prevention advance: get involved in a positive fashion, at least at school; develop skills and solid ties; and benefit from encouraging positive reinforcement from various actors within one's environment, from positive friends and prosocial adults (Brooks). This study sought to bring about an understanding of the influence of participation in a program geared toward the positive development of adolescents. The Lions Quest program aims to prevent or delay adolescents' alcohol and drug use by helping them to develop a constructive engagement in their school community through the development of social skills and competencies. The study offers distinctive insights from the perspective of those concerned by the implementation of the program in question. Nevertheless, it would be interesting to conduct a longitudinal study to assess the effect of Lions Quest over time. As established earlier, there is little research that focuses on the views of young adolescents, despite calls for more such studies. In fact, we propose that it is logical, and perhaps imperative, to consider the views of the very individuals for whom interventions are developed and implemented (Gallé and Lingard)."}
+{"text": "The only anatomical variation of the superior turbinate defined in the literature is concha bullosa. Determination of anatomical variations of the intranasal structures is important to perform safe endoscopic sinus surgery and avoid complications. We report a case of an unusual anatomical variation of the superior turbinate in a 55-year-old Turkish man with nasal obstruction and headache. Anatomical variations of the superior turbinate are very rare. Variations of intranasal structures can easily be detected with coronal plane paranasal sinus computed tomography. Although there are many anatomical variations in the nasal cavity, those related to the superior turbinate are extremely rare. The only variation of the superior turbinate defined in the literature so far is concha bullosa. The determination of anatomical variations in the superior nasal cavity is very important to perform safe endoscopic sinus surgery and avoid complications.
Anatomical variations in this region can sometimes cause significant sinonasal symptoms such as nasal obstruction, smell disorders and migraine-like headache. Only a coronal plane paranasal sinus computed tomography (CT) scan provides detailed information about this inaccessible area of the superior nasal cavity. Therefore, careful evaluation of a paranasal sinus CT scan before surgery is very important in these cases. Here we report a case of an unusual anatomical variation of the superior turbinate in a 55-year-old Turkish man. A 55-year-old Turkish man was admitted to our clinic with complaints of nasal obstruction and headache lasting for years. Anterior rhinoscopic examination revealed a nasal mass of unclear origin in front of his left middle turbinate, extending nearly down to the inferior turbinate. The endoscopic examination revealed a mass extending from his superior nasal cavity toward his inferior turbinate on the left side. The opposite nasal cavity was obstructed by the deviated septum. The findings are shown in the Figures. Nasal obstruction and headache are common complaints of patients presenting to otorhinolaryngology clinics. Nasal septal deviation and unilateral nasal masses are two common causes of nasal obstruction. The differential diagnosis of unilateral nasal masses, which may be congenital, inflammatory, neoplastic or traumatic, is important and should be done thoroughly. The rhinogenic causes of headache are twofold. The first is acute rhinosinusitis and the second is any anatomic variation within the nose. These anatomic variations or anatomical abnormalities can cause headache in and of themselves or as a result of causing sinusitis because of blockage of the osteomeatal complex. These anatomic variations include deviated nasal septum, in particular a spur which may contact either the middle or inferior turbinate (this is the most common cause of rhinogenic headache); congestion of the turbinates; nasal neoplasm; pneumatized agger nasi cells; unusual deflections of the uncinate process; paradoxically bent middle turbinate; and variations of the ethmoid bulla. Changes in turbinate skeletal structure or increased respiratory mucosa volume may constrict the nasal passage, causing a negative effect on paranasal sinus ventilation and on the mucociliary cleaning of the middle meatus, which is thought to play a role in the development of sinusitis. Christmas et al. and Clerico et al. have suggested that such anatomical variations can be a cause of rhinogenic headache. Anomalies of the superior turbinate are very rare. Paranasal sinus CT can easily identify an unusual anatomical variation of the superior turbinate, which should be evaluated with great caution by otorhinolaryngologists and radiologists preoperatively. Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal. CT: Computed tomography. The authors declare that they have no competing interests. TS made the diagnosis and wrote the case report. EG reviewed and modified. BE supervised. All authors read and approved the final manuscript."}
+{"text": "Pneumatization of the intranasal turbinates or concha bullosa is an anatomic variation of the lateral nasal wall. Concha bullosa is defined as the presence of air cells in turbinates. It can be best diagnosed with paranasal sinus computed tomography. Concha bullosa is a possible etiologic factor for recurrent sinusitis due to its negative effect on paranasal sinus ventilation and mucociliary clearance.
Concha bullosa is most commonly seen in the middle turbinate and less frequently in the inferior or superior turbinate. Pneumatization of all turbinates is very rare. To our knowledge, there are only two publications about a case with concha bullosa in all turbinates in the current literature. Here, we present a woman with bilateral pneumatization of all three intranasal turbinates. Intranasal turbinates are required structures for the maintenance of normal nasal functions such as humidification, hydration, lubrication of the upper respiratory system, olfaction, filtration, and thermoregulation. The reason for pneumatization of the intranasal turbinates is still unknown. Although mostly asymptomatic, overpneumatized turbinates may exert a mass effect and, in these cases, as a result of impaired ventilation and drainage of the osteomeatal region, can lead to sinusitis. A 20-year-old woman was admitted to our clinic with nasal obstruction, postnasal drip, and intermittent headache for 3 years. Anterior rhinoscopy and nasal endoscopic evaluation revealed that the nasal septum was deviated to the left, the inferior and middle conchae were bilaterally hypertrophied, and the nasal mucosa was normal. A paranasal sinus computed tomography (CT) scan with coronal plane views revealed that the inferior, middle, and superior turbinates were pneumatized bilaterally, with a moderate nasal septum deviation to the left with a spur formation and inflammatory mucosal thickening in the right maxillary sinus (Figures 1 and 2). After informing the patient about the surgical procedure, we performed endoscopic surgery including lateral marsupialisation for both middle turbinate pneumatizations, a limited septoplasty, and crushing and outfracture of the inferior turbinates. For both inferior turbinate pneumatizations, the bullous structures were reduced by crushing and outfracture. The superior turbinates were left intact since they had no visible clinical significance. She was discharged from the hospital 1 day after the operation and no postoperative complications were seen. Follow-up evaluation indicated that the patient's symptoms had improved dramatically and she had no further complaints. Concha bullosa (CB) is the most common anatomical variation of the osteomeatal complex region. Its incidence has been reported as between 13% and 53%. Although the majority are asymptomatic, due to its negative effect on paranasal sinus ventilation and the mucociliary cleaning of the middle meatus, CB can cause the development of maxillary or ethmoidal sinusitis. The severity of symptoms resulting from CB is closely associated with the degree of pneumatization. A hypertrophied turbinate and a pneumatized one can only be distinguished by paranasal sinus CT. In our patient, the anterior rhinoscopic and endoscopic views of the nasal cavity were not distinctive for pneumatization of the turbinates, but paranasal sinus CT imaging allowed us to diagnose the pathology. To our knowledge, there have been two reports about a case with CB in all turbinates [4,5]; we present a further case. The treatment of CB is endoscopic partial middle turbinate resection. The widespread use of imaging techniques such as CT has provided us with detailed information about the nasal cavity and paranasal sinuses. This helps surgeons be aware of the anatomic variations of the osteomeatal complex region. Surgical resection of CB during endoscopic sinus surgery requires careful protection of the medial lamella and resection of only the lateral half of the turbinate.
The extent of turbinate pneumatization is evaluated on paranasal sinus CT scans, and this allows the surgeon to anticipate points of safe entry into the lumen of the CB. Careful evaluation of paranasal sinus CT scans before surgery is important in these cases. Bilateral triple CB is a very rare anatomical variation. The presence of such anatomical variations of the intranasal turbinates can lead to complications during endoscopic sinus surgery. The anatomical variations of the intranasal turbinates can be determined with paranasal sinus CT. Therefore, it is especially important that otolaryngologists and radiologists are well aware of such anatomical variations of the osteomeatal complex region when evaluating patients with sinonasal symptoms."}
+{"text": "To evaluate the PTPN22 C1858T polymorphism and the risk of endometriosis, a meta-analysis of 10 published case-control studies (from four articles), with a total sample of 971 cases and 1,181 controls, was performed. We estimated the risk of endometriosis associated with the C1858T polymorphism. A significantly increased risk of endometriosis was found for the variant T allele in all genetic models. The analysis without the study whose controls deviated from the Hardy-Weinberg equilibrium exacerbated these effects in the homozygous and recessive models. In the Italian subgroup, a significant risk association was found in the homozygous and recessive models. The associations observed between PTPN22 (C1858T) and the risk of endometriosis suggest this polymorphism might be a useful susceptibility marker for this disease. Endometriosis is a condition in which a tissue that is histologically similar to the endometrium, with glands and/or stroma, grows outside the uterine cavity. It presents multisystem involvement affecting several organs, most commonly in the peritoneum and pelvis, especially the ovaries, and less often in the recto-vaginal septum [2]. This results in pelvic pain, dysmenorrhea, and infertility [3]. Numerous hypotheses have already been put forward to explain the presence of ectopic endometrial tissue and stroma. However, none of them could explain all implantation sites and symptoms, leading researchers to search for new theories which, alone or together with the hypotheses already proposed, could better explain the etiology of endometriosis [5]. Exposure to estrogen is one of the major endocrine risk factors for endometriosis. In contrast, progesterone is somewhat protective against the development of endometriosis; disruption of the local differentiation factors whose production it regulates promotes the ability of refluxed menstrual endometrial fragments to invade the peritoneal surface, interact with vessels, and establish endometriosis. Because of the powerful anti-inflammatory effect of progesterone, reduced sensitivity to this steroid could contribute to the autoimmune nature of endometriosis [7]. In this context, hypotheses addressing immunological predispositions as well as genetic factors have been considered [8], and polymorphisms in genes associated with autoimmune diseases have emerged as possible candidates for endometriosis development [4]. Although the etiology of autoimmune diseases is unknown, they are characterized by genetic and environmental factors in their development.
Just as in autoimmune diseases, in endometriosis similar immunologic alterations occur, such as an increase in the number and cytotoxicity of macrophages, a polyclonal increase in the activity of B lymphocytes, abnormalities in the functions and concentrations of B- and T-lymphocytes, and a reduction in the number or activity of natural killer cells. Furthermore, the presence of specific antiendometrial and antiovary antibodies has been found in both endometriosis and infertility. Lyp is a protein tyrosine phosphatase encoded by the protein tyrosine phosphatase non-receptor 22 (PTPN22) gene, located on 1p13.3-13.1, and is involved in the regulation of T-cell receptor signaling [9]. The PTPN22 gene shows a missense single-nucleotide polymorphism at nucleotide 1858 (C>T), which causes a substitution of an arginine at codon 620 (CGG) for a tryptophan (TGG) (W620 variant) associated with autoimmune disorders [10]. The variant does not bind kinases well and appears to encode a gain-of-function enzyme, which has been suggested to increase the inhibition of T-cell-receptor signaling, affecting thymic deletion of autoreactive T-cells or the development or function of peripheral regulatory T-cells [11,12]. The Newcastle-Ottawa Score (NOS) quality assessment scale [17] was used to assess the methodological quality of the studies included. These studies were judged based on three broad perspectives: selection, comparability, and exposure (case-control studies) or outcome (cohort studies), by a 'star' rating system with a score ranging from zero stars (worst) to nine stars (best). A score of seven stars or greater indicated that a study was of high quality. Data were analyzed using Review Manager 5.1. We estimated the odds ratio (OR) of association with the variant TT genotype compared with the wild-type CC genotype. To evaluate the importance of the heterozygous genotype, dominant and recessive genetic models were also applied. Thus, we examined the contrast of TT versus TC + CC genotypes as well as TT + TC versus CC genotypes; these contrasts correspond to recessive and dominant effects of the T allele. To compare effects on the same baseline, we used raw data for genotype frequencies to calculate study-specific estimates of the OR. Pooled ORs were obtained using either the fixed [18] (in the absence of heterogeneity) or random [19] (in its presence) effects models. Heterogeneity between studies was addressed in a number of ways. First, it was estimated using the χ2-based Q test [20]. Recognizing the low power of this test [21], the significance threshold was set at p = 0.10. Second, it was explored using subgroup analysis [20] with population as the variable. And third, it was quantified with the I2 statistic, which measures the degree of inconsistency among studies [21]. Sensitivity analysis, which involved omitting one study at a time and recalculating the pooled OR, was also used to test the robustness of the summary effects. Significance was set at a p value of ≤0.05 throughout, except in heterogeneity estimation. One article [13] had zero homozygous and recessive datasets in cases and controls, which rendered them non-estimable, hence the total number of studies was nine for these genetic models. When the number of studies is lower than ten [22], qualitative and quantitative tests for publication bias become less sensitive, obviating investigation of publication bias. Non-zero data in the dominant and co-dominant models placed the overall total number of studies at 10, warranting tests of publication bias in these models.
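The pooling and heterogeneity machinery described here is compact enough to sketch. The following is an illustrative implementation of inverse-variance (fixed-effect) pooling, Cochran's Q, I², and DerSimonian-Laird random effects; it is not the Review Manager 5.1 computation, and the example counts are hypothetical, not the study data.

```python
import numpy as np

def pooled_or(carrier_case, n_case, carrier_ctrl, n_ctrl):
    """Fixed- and random-effects pooled odds ratios with Q and I^2.

    Arguments are per-study counts (assumed non-zero; a continuity
    correction would be needed for zero cells).
    """
    a = np.asarray(carrier_case, float)       # variant carriers, cases
    b = np.asarray(n_case, float) - a         # non-carriers, cases
    c = np.asarray(carrier_ctrl, float)       # variant carriers, controls
    d = np.asarray(n_ctrl, float) - c         # non-carriers, controls

    log_or = np.log(a * d / (b * c))          # per-study log OR
    var = 1/a + 1/b + 1/c + 1/d               # Woolf variance
    w = 1.0 / var                             # inverse-variance weights

    fixed = np.sum(w * log_or) / np.sum(w)    # fixed-effect pooled log OR
    q = np.sum(w * (log_or - fixed) ** 2)     # Cochran's Q
    df = len(log_or) - 1
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0

    # DerSimonian-Laird between-study variance, then random effects.
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (var + tau2)
    random = np.sum(w_re * log_or) / np.sum(w_re)
    return np.exp(fixed), np.exp(random), q, i2

# Hypothetical three-study example (counts are illustrative only):
print(pooled_or([20, 15, 30], [100, 80, 150], [10, 12, 20], [110, 90, 160]))
```

In line with the text, one would report the fixed-effect OR when Q is non-significant at p = 0.10 and switch to the random-effects OR in the presence of heterogeneity.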
In this case we used the regression asymmetry test by Egger et al., as well as the diagnostic by Begg et al. (nonparametric rank correlation). For both tests, we used the web-based software WINPEPI (PEPI for Windows). The epidemiological and clinical characteristics of the included articles, all of which examined the PTPN22 (C1858T) polymorphism with endometriosis, are outlined in the accompanying table. Eight studies from two articles had Italian subjects (132 cases/528 controls), while one study had Brazilian subjects and another had Polish participants, with 140 cases/180 controls and 171 cases/310 controls, respectively. One study had controls whose frequencies deviated from the HWE, leaving nine studies from three articles in HWE. Control frequencies of the variant allele in the Italian studies ranged from 0.02 to 0.06, while those in the Brazilian and Polish populations were 0.09 and 0.12, respectively. With a sample of over 1,624 for the PTPN22 (C1858T) polymorphism, our meta-analysis showed overall increased risk associations of up to 5.6-fold in endometriosis, significant in all genetic models. The HWE analysis did not materially alter the overall findings, other than exacerbating susceptibility up to 9.5-fold in the homozygous model. Interpreting such an increase should, however, be treated with caution given the unusually wide CI that accompanied the pooled effects; wide CI margins tend to heighten uncertainty and hence lessen confidence in the interpretation of results. Because there was only one each of the Brazilian and Polish studies and several Italian studies, we compared these single-ethnic populations with the pooled Italian effects. By and large, the Italian effects were significant, heterogeneous, and carried the widest CIs in the entire body of results: the significant Italian effects increased up to OR: 11.12 in the homozygous model, accompanied by wide confidence intervals (95%CI: 2.44-50.71), driven largely by one study with a study-specific OR of 122.69 (95%CI: 13.36-1126.84). By comparison, the increased risk effect in the Brazilian population was up to 2.4-fold, and a comparably modest 1.8-fold in the Polish population. In contrast to the Italian pooled effects, the study-specific ORs were moderate in the Brazilian (OR: 1.95-2.35) and Polish (OR: 1.08-1.84) populations. Sensitivity treatment did not materially alter the overall and HWE results, although it did alter the Italian findings, thus indicating robustness of the summary effects (data not shown). All 12 pooled ORs were heterogeneous, high in the dominant and co-dominant models (Pheterogeneity<0.00001, I2=76-88%), and less so in the homozygous and recessive models.
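Returning to the publication-bias diagnostics mentioned at the start of this section: the following sketch is not WINPEPI, only the core of Egger's regression asymmetry test (regressing the standardized effect on precision and testing the intercept), applied to hypothetical log ORs and standard errors:

```python
import numpy as np
from scipy import stats

# Hypothetical log odds ratios and their standard errors (illustration only).
log_or = np.array([0.65, 0.40, 1.10, 0.25, 0.80, 0.55, 0.30, 0.90, 0.45, 0.70])
se = np.array([0.30, 0.22, 0.45, 0.18, 0.38, 0.28, 0.20, 0.41, 0.25, 0.33])

# Egger's test: regress z = y/se on precision = 1/se; a non-zero intercept
# suggests funnel-plot asymmetry (possible publication bias).
z, precision = log_or / se, 1.0 / se
result = stats.linregress(precision, z)
n = len(log_or)
t_intercept = result.intercept / result.intercept_stderr
p_two_sided = 2 * stats.t.sf(abs(t_intercept), df=n - 2)
print(f"Egger intercept = {result.intercept:.3f}, p = {p_two_sided:.3f}")
```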
These differences among the three populations may be associated with the following factors: (i) the minor allele frequencies differed between the three populations; (ii) controls in the Polish study were matched to cases, compared to none in the Italian subjects. Thus, it may be one or a combination of genetic and/or epidemiological (matching of subjects) features that rendered differences in summary effects among the three populations. However, the following features of the studies account for the advantages of the non-Italian subgroup: (i) the homozygous and recessive effects in this population were obtained with zero heterogeneity; (ii) although composed of only two studies, their combined sample size (n=801) accounts for more than one-third (37.2%) of the total 2,152; (iii) findings in this population had the narrowest CIs, boosting precision of the findings. Given these features of the non-Italian studies, it may be that these values are closer to the true values of association. More studies are needed to confirm our findings. Nevertheless, the observed associations between PTPN22 and risk of endometriosis suggest this polymorphism might be a useful susceptibility marker for this disease. The PTPN22 gene encodes the human lymphoid tyrosine phosphatase, an enzyme with restricted expression in hematopoietic cells. Lymphoid tyrosine phosphatase is a critical regulator of signaling through the T-cell receptor, and in T-cells it forms a complex with the kinase Csk. The autoimmune-associated PTPN22 C1858T variant does not bind kinases well, and appears to encode a gain-of-function enzyme. The mechanism of action of PTPN22 in autoimmunity remains unclear. However, increased inhibition of T-cell receptor signaling caused by the PTPN22 C1858T polymorphism could predispose towards autoimmunity, either by affecting thymic deletion of autoreactive T-cells or by affecting the development or function of peripheral regulatory T-cells. Indeed, PTPN22 was recently found to be among the gene targets of FoxP3 in CD4+CD25+ regulatory T-cells. The PTPN22 polymorphism may cooperate with clinical and genetic factors to influence the course of disease and immune reactions. These cooperative interactions could result in a statistical association between PTPN22 and endometriosis. Further investigations are needed to clarify the possible role of PTPN22 and other polymorphic systems in the clinical course of endometriosis.
In subjects with endometriosis, PTPN22 may contribute to the development of autoimmune phenomena in the presence of peculiar circumstances. Given the multifactorial nature of endometriosis, the analysis of genetic factors would be proper when considering the synergy with environmental influences along with epistatic interactions. Limitations of our study include the following: (i) predominant heterogeneity of the body of results, indicating variance of the component studies, which may have been offset by our adjustment for this variance with the use of the random-effects model; (ii) deviation of one study from the HWE, which may have biased summary outputs and pointed to methodological weaknesses such as biased selection of subjects, genotyping errors, and population stratification; however, omitting this study followed by reanalysis did not materially alter the significance and direction of association, underpinning the stability of our overall findings; (iii) the homozygous and recessive effects were characterized by unusually wide CIs in the overall analysis, which got even wider in the modifier and subgroup analyses, translating to reduced precision of the pooled ORs and eliciting less confidence in the findings; (iv) there was no mention of matching in all but one of the component articles. Additionally, a possible limitation could be (v) the heterogeneity of the Control Groups, which comprised healthy men and women, anonymous healthy adults, and healthy blood donors. Only one study had as Control Group fertile and non-menopausal women who had undergone tubal ligation for family planning reasons and had no sign of endometriosis in their clinical history; the absence of symptoms in women does not exclude endometriosis, given that 16% of patients with endometriosis are fertile and asymptomatic. Yet, despite these limitations, the following strengths boost confidence in our findings: (i) all studies were population-based, easing extrapolation of results to the general population; (ii) all tissue sources were blood; (iii) endometriosis diagnoses were all made by laparoscopic intervention and histopathological confirmation; (iv) all studies used a combination of polymerase chain reaction (PCR) and RFLP with enzyme restriction. These items add to the epidemiological and clinical homogeneity of the studies: there was consistency of increased risk effects in the entire body of results; sensitivity analysis demonstrated that the entire body of results was robust, supporting the reliability of the findings; and no publication bias was detected, indicating that the dominant and co-dominant body of results may be unbiased. The findings we report here highlight the utility of modifier analyses, which provide a more comprehensive profile of an association of a polymorphism with the disease (endometriosis). Such meta-analytical treatments tend to uncover new insights into factors that retain or alter the stability or robustness of a pooled odds ratio. The synthetic approach to the individual profiles of each included study could be used to form biologically plausible subgroups. It is conceivable that the effect of any locus related to endometriosis will be small, because gene-gene as well as gene-environment interactions are likely to operate.
Additional well-designed studies, based on sample sizes commensurate with detection of small genotypic risks, should allow conclusions that are more definitive as to the association of the PTPN22 (C1858T) polymorphism with endometriosis."} +{"text": "Molecular nitrogen exhibits one of the strongest known interatomic bonds, while xenon possesses a closed-shell electronic structure: a direct consequence of which renders both chemically unreactive. Through a series of optical spectroscopy and x-ray diffraction experiments, we demonstrate the formation of a novel van der Waals compound formed from binary Xe-N2 mixtures. At 300\u2009K and 5\u2009GPa Xe(N2)2-I is synthesised and, if further compressed, undergoes a transition to a tetragonal Xe(N2)2-II phase at 14\u2009GPa; this phase appears to be unexpectedly stable at least up to 180\u2009GPa even after heating to above 2000\u2009K. Raman spectroscopy measurements indicate a distinct weakening of the intramolecular bond of the nitrogen molecule above 60\u2009GPa, while transmission measurements in the visible and mid-infrared regime suggest the metallisation of the compound at ~100\u2009GPa. Under high compression, molecular nitrogen exhibits a rich polymorphism. The direct reaction of N2 and Xe would seem unlikely due to the relative inertness of both materials. Nevertheless, a recent theoretical study predicts the formation of novel xenon nitride compounds above 146\u2009GPa with stoichiometry XeN6, and Xe-N2 mixtures have been explored experimentally at low pressures to investigate mutual solubility. It is known that at high pressures both xenon and nitrogen exhibit (semi-)conducting phases. Xenon has been shown to transform to a metallic state at pressures between 130\u2013150\u2009GPa, giving it the lowest pressure of metallisation amongst the rare gas solids, and combining nitrogen with xenon drastically reduces the metallisation pressure of the compound. Here, we report the synthesis and characterisation of a Xe-N2 van der Waals compound through x-ray diffraction, Raman and transmission spectroscopies. We show that two inert condensed gases form a Xe(N2)2 compound at pressures as low as 5\u2009GPa at room temperature. When the novel compound is formed in a xenon medium, it becomes metallic at around 100\u2009GPa, whilst Xe(N2)2 with an abundance of nitrogen demonstrates metallic behaviour above ~140\u2009GPa. Mixtures of Xe and N2 at various concentrations were loaded into diamond-anvil cells (DAC) using a combination of cryogenic and high-pressure gas-loading techniques (see Methods section). Compressing the mixture above 2\u2009GPa leads to the formation of a xenon single crystal surrounded by liquid N2, as seen visually and in x-ray diffraction measurements. From both the structure type and unit-cell dimensions at 5.6\u2009GPa we determine the stoichiometry as Xe(N2)2, designated Xe(N2)2-I herein, in excellent agreement with the calculated equation-of-state data for Xe\u2009+\u20094N2 and resulting in the best fit to the data (see table in SM for more details on the structure refinement). The structure of this phase can be considered as a diamond-type host lattice of Xe atoms with four rotationally disordered N2 molecules forming a tetrahedron within each vacancy. The N-N site distance of 3.2655(1) \u00c5 implies a N\u2026N closest-contact distance of 2.1655(1) \u00c5.
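One way to read the two quoted distances is that the closest contact equals the site-to-site separation minus one intramolecular N-N bond length (half a bond on each molecule). A tiny worked check, where the 1.10 \u00c5 bond length is an assumption consistent with the quoted numbers rather than a value stated in the paper:

```python
# Closest N...N contact from the N2 site separation in Xe(N2)2-I.
site_distance = 3.2655   # A, N2 site-to-site separation (quoted above)
n2_bond = 1.1000         # A, assumed intramolecular N-N bond length

# Subtract half a bond length on each of the two facing molecules.
closest_contact = site_distance - 2 * (n2_bond / 2)
print(f"closest N...N contact ~ {closest_contact:.4f} A")  # ~2.1655 A
```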
Raman spectroscopy measurements of the formed single crystal at 2\u2009GPa reveal the appearance of a weak vibrational mode, lower in frequency than the fluid N2 mode. At 5\u2009GPa the spectrum consists of overlapping modes of Xe(N2)2-I, as determined by x-ray diffraction, and pure N2, which increasingly separate in frequency space at higher pressure. In Raman measurements of the surrounding media, we observe the complete transformation of the sample, evident through only the low-frequency vibrational mode and no evidence of excess N2 (see SM). It should be noted that the presence of this N-containing dopant does not significantly affect the measured unit-cell volume, which agrees with the literature to within experimental error. Above pressures of 14\u2009GPa, we observe a phase transition from the low-pressure Xe(N2)2-I to a high-pressure Xe(N2)2-II phase. This transition pressure corresponds approximately to the critical pressure of the \u03b4 to \u03b5 transition in pure N2. Xe(N2)2-II adopts a body-centered tetragonal cell with a\u2009=\u20095.7228(3), c\u2009=\u20099.2134(10) \u00c5 at 18.7\u2009GPa. The origin of this transition lies in the ordered orientation of N2: in contrast to the spherically-disordered N2 molecule model of phase I, the molecular site lies displaced by 0.52(2) \u00c5 from an inversion centre, resulting in four ordered N2 molecules aligned along the c-axis. Final Rietveld agreement factors are Rp\u2009=\u20090.015 and R\u2009=\u20090.094. Shortest N\u2026N interatomic distances are now 2.5238(1) \u00c5 and 2.610(12) \u00c5 at 18.7\u2009GPa. Recalling that the shortest N\u2026N interatomic distance at 5.6\u2009GPa was 2.1655(1)\u2009\u00c5 in phase I, the alignment of N2 molecules relieves unfavourable close N\u2026N contacts while maintaining the same coordination number for each N2 molecule. The transition is accompanied by an expansion (+3.6%) along c and a reduction of \u22120.519(1) \u00c5 (\u22128.1%) along tetragonal a, corresponding to \u2329110\u232a in phase I (see table in SM for more details on the structure refinement). We tracked unit-cell dimensions for Xe(N2)2-II up to 58\u2009GPa, fitting equations of state for the Xe(N2)2 phases (see methods section). At pressures of 38\u2009GPa and above there were clear signs of the incipient high-pressure hcp phase of Xe, accompanied by strong diffuse scattering and increased background at d-spacings overlapping with a significant number of Xe(N2)2 reflections, and above 58\u2009GPa unit-cell dimensions could not be reliably extracted from the data. However, the low-angle (101) reflection could be observed up to 103\u2009GPa. The vibrational behaviour of Xe(N2)2 deviates greatly from that of pure N2: at 178\u2009GPa, the vibrational modes of Xe(N2)2 are 2161\u2009cm\u22121 and 2212\u2009cm\u22121, considerably lower frequencies than those of either \u03ba-N2 (2376\u2009cm\u22121) or \u03bb-N2. Interestingly, at 178\u2009GPa we observe the persistence of molecular nitrogen, which is above the pressure at which pure N2 is claimed to become non-molecular (\u03b7-N2). Although this is a lower-frequency N2 molecular mode than that just before \u03b6-N2 transforms to the non-molecular amorphous \u03b7 phase in pure N2, there is no evidence that the N2 molecules in Xe(N2)2 dissociate to form Xe-N bonding, although there is a clear reduction in the intensity of the N2 vibrational modes at the highest pressures.
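Stepping back to the compression data: the methods section below quotes V0, K0 and K\u2032 for both phases, without naming the functional form of the fit. As an illustration only, the following sketch evaluates a third-order Birch-Murnaghan P(V) form (a common choice for such fits, here an assumption) with the phase-I parameters:

```python
def birch_murnaghan_p(v, v0, k0, k0_prime):
    """Third-order Birch-Murnaghan pressure P(V) in GPa (V in A^3)."""
    eta = (v0 / v) ** (1.0 / 3.0)
    return 1.5 * k0 * (eta ** 7 - eta ** 5) * (
        1.0 + 0.75 * (k0_prime - 4.0) * (eta ** 2 - 1.0))

# Parameters quoted in the methods for Xe(N2)2-I (uncertainties omitted).
V0, K0, KP = 873.0, 47.0, 1.0

for v in (800.0, 700.0, 600.0):
    print(f"V = {v:6.1f} A^3  ->  P = {birch_murnaghan_p(v, V0, K0, KP):6.2f} GPa")
```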
Samples formed in a xenon medium appear to exhibit metallic behaviour, evident from the sharp rise in absorption in the near-IR, which shifts with pressure. By 120\u2009GPa, no detectable transmission was observed in the visible; the sample became shiny in appearance and reflected red laser light (see photomicrographs in the SM). Samples of Xe(N2)2 with higher N2 concentrations do exhibit absorption, but only at higher pressures. By tuning the concentration of N2, we are thus able to tune the conductive properties of Xe and lower the pressure of metallisation. Our results demonstrate that xenon can form compounds not only with chemically reactive gases such as hydrogen or oxygen, but also with unreactive nitrogen. That such a compound forms at low pressure, exhibits metallic properties, and is stable to both high-pressure and high-temperature conditions will no doubt stimulate further research into the reactivity of xenon, an element which now appears to be substantially less inert than previously thought. We have studied the formation conditions and stability of xenon and nitrogen compounds up to pressures of 180\u2009GPa in a diamond anvil cell (DAC). In total 12 DAC loadings were performed. 200\u2009\u03bcm culet flat diamonds were used for experiments under 50\u2009GPa, while 60\u2009\u03bcm and 150\u2009\u03bcm culets were used for higher pressure experiments. In all experiments rhenium foil was used as the gasket material. The loading of the Xe-N2 mixtures consisted of two stages. Solid Xe (99.9% purity) was initially cryogenically loaded into a DAC under a N2 atmosphere. Loading was confirmed initially through comparisons of white light transmission spectra, owing to the change in refractive index between the empty sample chamber and the loaded sample. Thorough mapping of the sample with Raman spectroscopy was carried out to ensure no impurities were present in the sample, and further confirmation was obtained through x-ray diffraction analysis. N2 gas (99.9% purity) was then loaded into the cell at a pressure of 20\u2009MPa using a high-pressure gas loading system, displacing some of the pre-existing Xe gas. The volume ratio was estimated by the phase separation of Xe and N2 in the fluid state. Using a combination of varying pressure and temperature, single crystals of the Xe-rich mixture were grown. We have used 514 and 647\u2009nm as excitation wavelengths in the Raman spectroscopy measurements using a custom-built micro-focussed Raman system. Pressure was determined through both ruby fluorescence (P\u2009<\u2009100\u2009GPa) and the Raman edge of the stressed diamond. Powder x-ray diffraction data were collected at several beamlines: BL10XU at SPring-8 (Japan), IDB PETRA-III (Germany), ID09 at the European Synchrotron Radiation Facility (France), and ID-BMD of HPCAT at APS (USA). Incident beam energies in the range 25\u201330\u2009keV were used. Intensity vs. 2\u03b8 plots were obtained by integrating image plate data in various formats using DIOPTAS. Equation-of-state parameters for Xe(N2)2-I: V0\u2009=\u2009873(90) \u00c53, K0\u2009=\u200947(56) GPa, K\u2032\u2009=\u20091(5). Equation-of-state parameters for Xe(N2)2-II: V0\u2009=\u2009526(158) \u00c53, K0\u2009=\u20099(14) GPa, K\u2032\u2009=\u20094.5(11), K\u2033\u2009=\u2009\u22120.51131\u2009GPa\u22121. How to cite this article: Howie, R. T. et al. Formation of xenon-nitrogen compounds at high pressure. Sci.
Rep. 6, 34896; doi: 10.1038/srep34896 (2016)."} +{"text": "Acute renal failure (ARF) develops due to various causes and may lead to significant morbidity and mortality among pediatric patients. The study was conducted to determine the incidence, etiology, outcome of treatment and clinical presentation of ARF in pediatric patients in Somalia. Comprehensive case histories of 39 pediatric patients below 12 years of age, admitted with renal diseases in four tertiary care hospitals in Hargeisa and Borama cities in Somalia during December 2015 to November 2016, were collected. They were subjected to clinical investigation and laboratory test analysis based on the inclusion criterion of renal insufficiency characterized by a serum creatinine level of more than 1.5 mg/dl. ARF was most commonly found in the five to 12 years age group (53.8%) compared to infant (zero to one year) and pre-school (one to five years) children (23.08% each). The mean age of presentation was 6.14 years, and the male to female ratio in this study was 1.2:1. The most common presenting clinical features in our study were oliguria (97.43%), fever (84.1%), swelling (69.2%), and abdominal pain and nausea-vomiting (41.02%). Common clinical signs were edema (66.66%), altered sensorium (51.28%), hematuria (48.71%) and hypertension (38.46%). Snake bite and acute post-streptococcal glomerulonephritis were the two most common causes of ARF in children in our study. Common complications were hypertension (38.46%), anemia (35.89%), hyperkalemia (25.64%) and infection (20.51%), all of which were within the previously reported ranges. The factors which correlated positively with increased mortality and morbidity were female sex, age below one year, etiologies such as septicemia and systemic lupus erythematosus (SLE), high peak serum creatinine concentration, and complication by disseminated intravascular coagulation (DIC). Many causes of ARF are preventable, and it should be possible to reduce mortality and morbidity due to ARF through purposive preventive measures and availability of better medical facilities. Acute renal problems constitute a large proportion of hospital admissions and outpatient attendance in the pediatric population. ARF in children, compared with adults, is often reported to show speedy recovery without residual effects or complications. Among hospitalized patients, however, ARF may often lead to life-threatening conditions and is regarded as one of the major causes of significant morbidity and mortality among pediatric patients. Five to 10% of patients in the pediatric intensive care unit (PICU) have evidence of renal insult, and ARF occurs in 8% of neonates in the neonatal intensive care unit (NICU). Despite major advances in PICU tools and renal replacement therapy (RRT) methods, the reported mortality rate among inpatients with ARF is 29% to 46%. ARF is an abrupt cessation of kidney function with life-threatening consequences. According to AKIN, an abrupt (within 48 hours) reduction in kidney function is characterized by i) an absolute increase in serum creatinine of more than or equal to 0.3 mg/dl from baseline, or ii) an increase in serum creatinine of more than or equal to 50% (1.5-fold from baseline), or iii) a reduction in urine output (documented oliguria of less than 0.5 ml/kg/hour for more than six hours). But, in most of the cases, the baseline creatinine is not available.
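As a worked illustration of the three AKIN thresholds just listed, the following is a minimal sketch using a hypothetical helper (not part of the study's methods); in practice the missing-baseline problem noted above must be handled before such a rule can be applied:

```python
def meets_akin_criteria(creat_baseline_mg_dl, creat_current_mg_dl,
                        urine_ml_kg_h=None, oliguria_hours=0):
    """Return True if any AKIN criterion quoted above is met (48 h window assumed)."""
    # i) absolute rise in serum creatinine >= 0.3 mg/dl from baseline
    absolute_rise = (creat_current_mg_dl - creat_baseline_mg_dl) >= 0.3
    # ii) relative rise >= 50% (i.e. >= 1.5-fold of baseline)
    relative_rise = creat_current_mg_dl >= 1.5 * creat_baseline_mg_dl
    # iii) documented oliguria < 0.5 ml/kg/h for more than six hours
    oliguria = (urine_ml_kg_h is not None
                and urine_ml_kg_h < 0.5 and oliguria_hours > 6)
    return absolute_rise or relative_rise or oliguria

# Example: baseline 0.5 mg/dl rising to 0.9 mg/dl meets criteria (i) and (ii).
print(meets_akin_criteria(0.5, 0.9))  # True
```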
ARF may be non-oliguric or even polyuric in 10% to 15% of cases, which may lead to misdiagnosis on clinical assessment if we rely on daily urine volume only. Children aged 12 years or less suffering from ARF symptoms were enrolled between December 2015 and November 2016 in the pediatric medicine departments of four tertiary care hospitals in Hargeisa and Borama cities in Somalia: Hargeisa Group Hospital, Garger Hospital and Emirates Hospital in Hargeisa, and Al-Hayat Hospital in Borama. The subjects were included in this study based on the inclusion criterion of renal insufficiency characterized by a serum creatinine level of more than 1.5 mg/dl, and were excluded if they showed a history of any previously reported renal disease. Distinct parameters were characterized, such as incidence, etiology, presentation, complications and clinical outcome, and the data collected from these cases were then analyzed. All ARF patients in this study were examined clinically, along with proper and complete history recording, and subjected to laboratory investigations such as serum levels of urea, creatinine, sodium, potassium, cholesterol and triglycerides, together with complete blood count and routine and microscopic urinary examination. Other special investigations such as ultrasonography (USG), antinuclear antibody (ANA), anti-double-stranded deoxyribonucleic acid (anti-dsDNA), anti-streptolysin O titer (ASO), complement component 3 (C3) level and kidney biopsy were performed whenever necessary. A prospective observational study was undertaken to evaluate the incidence of ARF among children hospitalized for all causes, the differential etiology of ARF, commonly presented clinical features, and the age and sex distribution among children with ARF, and also to delineate the clinical complications and prognosis of treatment and management of cases. In the studied period, a total of 1520 children were admitted to the pediatric medicine ward for different illnesses, and 39 cases were confirmed as ARF. Therefore, the incidence of ARF among hospitalized children in our hospitals was 2.6%. The most common cause of ARF in children in our study was snakebite, followed by post-streptococcal glomerulonephritis. Three cases were due to pneumonia and two cases each were due to gastroenteritis, sepsis, systemic lupus erythematosus (SLE), pyelonephritis and posterior urethral valve (PUV). One case each was due to falciparum malaria, nephrocalcinosis, rapidly progressive glomerulonephritis, non-Hodgkin lymphoma, neuroblastoma, hemolytic uremic syndrome, pyogenic meningitis, atypical minimal change nephrotic syndrome and tubulointerstitial nephritis. Nine cases (23.08%) were admitted in the zero to one year age group (age up to one year included). Similarly, nine cases (23.08%) were admitted in the one to five years age group, and 21 cases (53.84%) in the five to 12 years age group (Table). Twenty-one cases (53.85%) were male and 18 cases (46.15%) were female children in our study, with a male to female ratio of 1.2:1. The most common cause of ARF in male patients was post-streptococcal glomerulonephritis, followed by snakebite and posterior urethral valve, with one case each due to ten other different etiologies. In female patients, the most common cause was snakebite, followed by acute post-streptococcal glomerulonephritis, systemic lupus erythematosus and pneumonia, with one case each due to six other etiologies (Table). The mean age of male patients was 6.12 years and that of female patients was 6.15 years.
Therefore, in male and female cases, the mean age was almost the same. The mean age of each etiological group is given in the table. In this study, the most common presenting symptom was oliguria (97.43%). Oliguria was present in all cases except one, which was a non-oliguric renal failure due to tubulointerstitial nephritis. Other common presenting symptoms were swelling (69.2%), fever (84.1%), abdominal pain and nausea or vomiting (41.02%). Common presenting clinical signs were edema (66.66%), altered sensorium (51.28%), hematuria (48.71%), hypertension (38.46%), hepato-splenomegaly and others; the different presentations of ARF are given in the table. The mean duration of oliguria was less than 10 days in most etiological groups. Mean oliguria of more than 10 days was present in three etiological groups: the atypical minimal change nephrotic syndrome patient had oliguria for 17 days, the mean oliguria of the two SLE patients was 18 days, and the non-Hodgkin lymphoma (NHL) child had oliguria for one month. The most common complication during the course of acute illness from ARF in the children was hypertension (38.46%). Other complications reported were anemia (35.89%), hyperkalemia (25.64%), pneumonia, seizure, disseminated intravascular coagulation (DIC) (7.14%), meningitis, arrhythmia and pleural effusion (3.57%). Eleven cases were reported without any complications. Among the 14 cases that developed anemia as a complication, five patients (35.7%) completely recovered and three cases (21.4%) died. Hyperkalemia developed in 10 cases of ARF; fifty percent of hyperkalemia patients completely recovered, three patients (30%) died and two patients (20%) partially recovered. Two patients developed disseminated intravascular coagulation (DIC), and both of them died. Of the various infective complications, six patients (75%) developed pneumonia and two patients (25%) developed meningitis. Other complications in ARF patients and their outcomes are given in the table. A total of 39 cases were included, and follow-up was done for a short period of two weeks. Among them, 24 cases (61.53%) recovered from ARF completely, and eight cases (20.51%) recovered partially, with some persistent biochemical abnormality at discharge and follow-up. Unfortunately, seven cases (17.94%) died during the acute stage of disease. Out of the total of nine cases in the snake envenomation group, seven cases (77.8%) recovered fully, one patient (11.1%) died during the acute phase due to DIC, and another child (11.1%) had persistent biochemical abnormality during follow-up. Among the total of eight cases of acute renal failure following PSGN, seven patients (87.5%) recovered completely. One child (12.5%) in this group, however, had persistent biochemical abnormality, though the child was improving until the last follow-up visit; no patient died in this group. The outcomes in the different age groups, in the different sex groups, and in relation to the mean peak creatinine concentration during acute illness are depicted in the figures. Out of the 24 cases in the completely recovered group, 14 cases (58.3%) had one or more complications. Among the seven patients who died during their acute illness, six cases (85.7%) had complications, and all the patients (100%) in the partially recovered group had some complications. ARF is not uncommon in the pediatric population and has significant morbidity and mortality.
The emphasis on presentation remains cardinal, but early detection and appropriate treatment can also provide complete recovery in a large proportion of ARF in children, which is the major goal of ARF therapy. The incidence of ARF in NICUs reported in some previous studies ranges from 6% to 24% of newborns. In this study, out of a total of 39 cases, 21 were males and 18 were females, and the male-female ratio (M:F) was 1.2:1. This male preponderance was consistent with previous studies, in which a higher incidence in males has always been reported. The most common cause of ARF in this study was snake envenomation (23.07%), followed by acute post-streptococcal glomerulonephritis (20.5%). Other causes included pneumonia (7.7%), SLE (5.1%), PUV (5.1%), acute gastroenteritis (AGE) (5.1%) and hemolytic uremic syndrome (HUS) (2.6%). In developing countries, volume depletion from diarrhea continues to be the most common cause of ARF. HUS was one of the common etiologies of ARF in children in many previous studies (12% to 24%), along with acute tubular necrosis and glomerulonephritis; in our study, however, HUS accounted for only 2.6% of cases. Snake envenomation has contributed a substantial bulk to the etiology of ARF, as has been shown in other tropical countries of Asia and Africa. The second most common etiological factor responsible for ARF in this study was acute post-streptococcal glomerulonephritis (APSGN), which contributed 20.5% and was consistent with previous reports. The overall outcome in our study was recorded in three categories: completely recovered, partially recovered, or died during the acute stage of the disease. The overall mortality in our study (17.9%) was lower than in many previously published reports. Mortality in infants (zero to one year age group) was higher (33.3%) than in the other two groups, which was consistent with other studies. The precise incidence of acute renal failure in pediatric patients is difficult to define, as it largely depends on referral patterns, proximity to a pediatric renal unit and expertise within the unit. It also depends on the population studied and the geographic location of the study. A large number of pediatric patients in our country, mainly from rural areas, are not adequately managed because of poverty and lack of facilities for dialysis. Many causes of ARF in our environment are preventable, and it should be possible to reduce mortality and morbidity due to ARF through purposive preventive measures and availability of better medical facilities. Our study was comparatively of short duration with a small number of cases, and the follow-up was done for a very short period of time. Therefore, it was very difficult to draw conclusions about incidence, long-term mortality, morbidity and other variables from the present study. So, it is essential to conduct a large multi-centric, long-term follow-up assessment of children's clinical condition and renal function."} +{"text": "Niemann-Pick disease type C1 (NPC1) is characterized by abnormal accumulation of unesterified cholesterol and glycolipids in late endosomes and lysosomes. Common signs include neonatal jaundice, hepatosplenomegaly, cerebellar ataxia, seizures and cognitive decline. Both mouse and feline models of NPC1 mimic the disease progression in humans and have been used in preclinical studies of 2-hydroxypropyl-\u03b2-cyclodextrin (2HP\u03b2CD), a drug that appeared to slow neurological progression in a Phase 1/2 clinical trial.
However, there remains a need to identify additional therapeutic agents. High-throughput drug screens have been useful in identifying potential therapeutic compounds; however, current preclinical testing is time and labor intensive. Thus, development of a high-capacity in vivo platform suitable for screening candidate drugs/compounds would be valuable for compound optimization and prioritizing subsequent in vivo testing. Here, we generated and characterized two zebrafish npc1-null mutants using CRISPR/Cas9-mediated gene targeting. The npc1 mutants model both the early liver and later neurological disease phenotypes of NPC1. LysoTracker staining of npc1 mutant larvae was notable for intense staining of lateral line neuromasts, thus providing a robust in vivo screen for lysosomal storage. As a proof of principle, we were able to show that treatment of the npc1 mutant larvae with 2HP\u03b2CD significantly reduced neuromast LysoTracker staining. These data demonstrate the potential value of using this zebrafish NPC1 model for efficient and rapid in vivo optimization and screening of potential therapeutic compounds. This article has an associated First Person interview with the first author of the paper. Summary: A zebrafish genetic model of Niemann-Pick disease type C1 is suitable for performing in vivo screening of candidate therapeutic compounds by examining LysoTracker staining intensity in neuromasts. Niemann-Pick disease type C (NPC) is a rare autosomal recessive disease caused by the accumulation of cholesterol and glycolipids in late endosomes/lysosomes. It is estimated to affect 1 in 90,000 individuals. Mutations in either NPC1 or NPC2 can cause NPC; mutations in NPC1 are reported in 95% of NPC patients, with mutations of NPC2 accounting for the remaining cases. NPC1 is highly conserved among many species, and both murine and feline NPC1 models manifest pathological and clinical findings similar to those observed in NPC1 patients. Cholesterol is carried in low-density lipoprotein (LDL) particles entering cells via LDL-receptor-mediated endocytosis; LDL particles are then disassembled in late endosomes/lysosomes, releasing cholesterol esters. Although the murine and feline models have been invaluable for preclinical studies, they are not well suited to high-capacity in vivo drug screening. Thus, to facilitate drug screening and optimization, we developed a genetic zebrafish NPC1 model utilizing CRISPR/Cas9-mediated gene targeting to mutate npc1. The npc1-null zebrafish manifests both liver and nervous system pathology, thus providing a model for both the peripheral and central nervous system (CNS) defects found in NPC1 patients. Furthermore, we demonstrate the utility of this novel model system as a rapid, high-capacity, in vivo screen of candidate therapeutic drugs. The development of CRISPR/Cas9-mediated gene targeting technology has greatly facilitated the use of zebrafish to model human genetic diseases. Zebrafish have a single npc1 gene (NCBI Gene ID: 553330), which maps to chromosome 2. The zebrafish npc1 gene consists of 25 coding exons encoding Npc1, a 1276 amino acid, lysosomal transmembrane protein. To generate npc1-null mutants, CRISPR/Cas9-mediated gene targeting was used to induce double-strand DNA breaks and error-prone repair in wild-type zebrafish.
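The next paragraphs describe the targeting strategy in detail. As a sketch of the kind of site search such a strategy involves (this is not the authors' pipeline), the following fragment scans a sequence for 20-nt protospacers followed by an NGG PAM, the standard SpCas9 convention; the example sequence flanks are invented, built around the exon 7 protospacer quoted in the methods below:

```python
import re

def find_sgrna_sites(seq):
    """Return (start, protospacer, PAM) for 20-nt targets followed by an NGG PAM."""
    seq = seq.upper()
    sites = []
    # Overlapping scan of the forward strand; a full design tool would also
    # scan the reverse complement and score off-targets.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
        sites.append((m.start(), m.group(1), m.group(2)))
    return sites

# Hypothetical fragment containing the exon 7 protospacer quoted below
# (CCATCAGAGTTTAAGGAGTG) followed by an AGG PAM; flanks are invented.
fragment = "TTCTTGACAG" + "CCATCAGAGTTTAAGGAGTG" + "AGG" + "TCAGCCC"
for start, protospacer, pam in find_sgrna_sites(fragment):
    print(start, protospacer, pam)
```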
Two independent sites located within exon 2 and exon 7 were selected as the single guide RNA (sgRNA) targeting sites to increase the chance of generating mutations that would disrupt the Npc1 protein near the N-terminus and give rise to a non-functional npc1 allele. Wild-type zebrafish embryos (F0) were injected with npc1-specific sgRNA and cas9 mRNA at the 1-cell stage to induce somatic and germline mutations. The resulting F0 zebrafish were raised to adulthood and outcrossed to individual wild-type adults to obtain F1 embryos. The F1 embryos were screened for germline transmission of npc1 mutations by PCR and fragment analysis. F0 adults carrying potential mutations in the germline were selected and outcrossed to individual wild-type adults. F1 embryos obtained from this second outcross were raised to adulthood and individually screened for npc1 mutations. Two npc1 frameshift mutant alleles, y535npc1 and hg37npc1, were identified; genotyping for the hg37npc1 allele using the dCAPS assay is shown in Fig.\u00a0S1. Consistent with being null alleles, the mutant phenotype was similar for both hg37npc1 and y535npc1. Subsequent data reported in this paper correspond to the hg37npc1 allele. A previous study using morpholino knockdown of npc1 mRNA to reduce Npc1 protein expression in zebrafish showed defects in epiboly movement, notochord and somite development, as well as a defect in platelet formation. In contrast, our npc1 mutant embryos developed with gross morphology essentially identical to their wild-type (+/+npc1) and heterozygous (+/mnpc1) siblings. However, fewer than expected of the surviving animals were homozygous npc1 mutants. These data suggest significant mortality of m/mnpc1 zebrafish between late larval and adolescent stages. Early loss of m/mnpc1 occurred between 8 and 12\u2005dpf; mutants that were viable at 14\u2005dpf typically survived until adult stages. Phenotypically, the surviving npc1 mutants were significantly smaller than +/+npc1 and +/mnpc1 siblings at 7\u2005weeks of age. Furthermore, none of the npc1 mutant larvae had clear livers, and liver size was significantly increased in the 7\u2005dpf m/mnpc1 larvae. We also observed abnormal cholesterol accumulation associated with the yolk, raising the question of whether it localized to the yolk syncytial layer (YSL) or in the yolk itself. The YSL is a transient extraembryonic tissue connecting the yolk to the embryonic tissues, serving as a center for early embryonic patterning as well as the nutrient transportation intermediate from the yolk. To answer this question, npc1 was expressed in the YSL, which reduced the abnormal staining, suggesting that the accumulation occurs in the YSL. The npc1 mutants showed a significantly stronger LysoTracker Red signal compared to wild-type larvae, and the areas with the most intense LysoTracker Red signal resembled the distribution of neuromasts, with a higher number of stained neuromasts compared with +/+npc1 larvae (42.3\u00b12.6). The ability to identify and select m/mnpc1 embryos at 3\u2005dpf, combined with an easily recognizable neuromast cell phenotype in viable larvae, suggested that the LysoTracker Red staining could be used as a primary readout for drug efficacy in an in vivo drug screen. As a proof of principle, we tested the feasibility of using m/mnpc1 larvae to screen for potential drugs using 2HP\u03b2CD, which has previously been shown to be effective in both mouse and feline models of NPC1 and in a human Phase 1/2 trial. An enriched population of m/mnpc1 larvae was obtained by isolating olfactory placode LysoTracker-Red-positive larvae. These larvae were then treated with either 2.5\u2005mM 2HP\u03b2CD or vehicle (ddH2O) for 3\u2005days.
After 3\u2005days of treatment we double-stained the larvae with LysoTracker Red and YO-PRO-1. At 6\u2005dpf, the LysoTracker Red staining of the neuromast cells was significantly reduced in 2HP\u03b2CD-treated mutant larvae compared to vehicle-treated larvae. Also, consistent with the lack of tissue penetration by 2HP\u03b2CD, the survival rate at 2\u2005weeks was similar in vehicle- and 2HP\u03b2CD-treated m/mnpc1 zebrafish. To develop an in vivo NPC1 model system that would be amenable to high-capacity screening of potential therapeutic compounds, we utilized CRISPR/Cas9-mediated gene targeting to mutate zebrafish npc1; although a morpholino npc1-knockdown model has been reported, such knockdown is transient and thus not suited to screening applications. Although the neonatal NPC1 liver disease can be fatal, the natural history of NPC1 is that the cholestatic liver disease often appears to resolve. Neurological signs/symptoms then manifest later in childhood or adolescence. Neurological manifestations typically include progressive vertical supranuclear gaze palsy, cerebellar ataxia and dementia. Progressive loss of cerebellar Purkinje neurons and axonal spheroids are pathological hallmarks of NPC1 disease. The mortality of our npc1 mutants between late larval and adolescent stages may reflect liver disease or CNS dysfunction impairing feeding. 2HP\u03b2CD enters cells via fluid-phase endocytosis. Two independent target sites, located within exon 2 and exon 7 (5\u2032-CCATCAGAGTTTAAGGAGTG-3\u2032), were chosen for sgRNA recognition. Each corresponding sgRNA was injected into wild-type zebrafish embryos at the 1-cell stage together with cas9 mRNA. Embryos from wild-type EKW and TAB-5 zebrafish were used for targeting exon 2 and exon 7, respectively. For mutation screening, sgRNA-injected F0 embryos were raised to adulthood and outcrossed to either EKW or TAB-5 wild-type adults to obtain F1 embryos. PCR and fragment analysis using genomic DNA from 16 randomly selected F1 embryos were performed to identify potential F0 adults carrying mutations (founders). Additional F1 embryos from the F0 founders were raised to adulthood and screened for mutations by PCR and fragment analysis. Two npc1 frame-shift mutant alleles were identified during the screening, namely y535npc1 (NM_001243875.1: c.194_201delinsCTGTGCCTC) in exon 2 and hg37npc1 (NM_001243875.1: c933delinsATCAG) in exon 7. All zebrafish lines were maintained in the aquatic animal facility at 28.5\u00b0C according to our Animal Use Protocol (AUP), approved by the Institutional Animal Care and Use Committee (IACUC) of the Eunice Kennedy Shriver National Institute of Child and Human Development, MD, USA. To genotype npc1 mutants, genomic DNA was extracted from either whole embryos/larvae or the caudal fin of adults with DNA extraction buffer. Genomic DNA was further diluted 20-fold and then used as the template for genotyping PCR. For the y535npc1 allele, genotyping PCR was carried out using a forward primer npc1Ex2F1 (5\u2032-CCAGCACTGTATCTGGTACGG-3\u2032) and a reverse primer npc1Ex3R1 (5\u2032-ACCAGTCTCGGACACAGCTC-3\u2032). The PCR conditions were: 94\u00b0C for 2\u2005min; 35 cycles of 94\u00b0C for 30\u2005s, 63\u00b0C for 30\u2005s, 72\u00b0C for 30\u2005s; and 72\u00b0C for 5\u2005min. For the hg37npc1 allele, primers used for genotyping PCR were designed with the dCAPS Finder 2.0 online tool to generate an artificial restriction enzyme cutting site on the PCR product.
The PCR was carried out using a forward primer Znpc1 Ex7-AvaII-F (5\u2032-TTCTTGACAGCAATCAGCCCCGGTC-3\u2032) and a reverse primer Znpc1 Int7-8-R2 (5\u2032-GAGGGTGTCTGCAGGTTTCACC-3\u2032). The PCR conditions were 94\u00b0C for 2\u2005min; 45 cycles of 94\u00b0C for 30\u2005s, 63\u00b0C for 30\u2005s, 72\u00b0C for 30\u2005s; and 72\u00b0C for 5\u2005min. PCR products from both y535npc1 and hg37npc1 were digested with restriction enzyme AvaII at 37\u00b0C for 8\u2005h, and the final digestion products were resolved on 2% agarose gels. Live zebrafish larvae or adults were anesthetized by adding 0.16\u2005mg/ml tricaine to the egg or system water prior to imaging. After larvae or adults were anesthetized, they were mounted in a layer of 3% methylcellulose solution on a glass depression slide. Images of live zebrafish were obtained using a Leica MZ16F stereomicroscope with an AxioCam HRc CCD camera. At 7\u2005dpf, larvae were fixed with 4% paraformaldehyde in 1\u00d7 PBS at 4\u00b0C overnight. After rinsing with 1\u00d7 PBS 3 times, fixed larvae were incubated sequentially with 85% and 100% propylene glycol for 10\u2005min at room temperature. Larvae were transferred to ORO staining solution (0.5% ORO in 100% propylene glycol) and placed on a platform rocker for overnight staining at room temperature. Stained larvae were destained with a gradual transition from 100% propylene glycol to 1\u00d7 PBS and eventually transferred to glycerol for imaging. Images were obtained using a Leica MZ16F stereomicroscope (Leica Microsystems) with an AxioCam HRc CCD camera (Carl Zeiss). Larval (7\u2005dpf) or adult (9-week-old) zebrafish were fixed with 4% paraformaldehyde in 1\u00d7 PBS for 24\u2005h. After fixation, samples were dehydrated sequentially and eventually stored in 70% ethanol at \u221220\u00b0C. Paraffin embedding and microtome sectioning were performed to obtain tissue sections for H&E staining, and stained sections were imaged using an Axioplan 2 compound microscope with an AxioCam 105 Colors CCD camera (Carl Zeiss). A rabbit monoclonal anti-Npc1 antibody was used for both immunofluorescence (1:100) and western blot (1:1000). For immunofluorescence, zebrafish larvae were fixed with 4% paraformaldehyde in 1\u00d7 PBS at 4\u00b0C overnight. Fixed larvae were rinsed extensively with 1\u00d7 PBT (0.5% Triton X-100 in 1\u00d7 PBS) and subsequently blocked with blocking solution at room temperature for 1\u2005h. Larvae were incubated with the primary antibody diluted in the blocking solution at 4\u00b0C overnight. After rinsing the larvae 3 times with 1\u00d7 PBT, they were incubated with the secondary antibody (goat anti-rabbit IgG\u2013Alexa-Fluor-594) at 4\u00b0C overnight. Stained larvae were rinsed 3 times with 1\u00d7 PBT and 1\u00d7 PBST (0.1% Tween-20 in 1\u00d7 PBS). Immunofluorescence images were taken using a Zeiss Observer Z1 inverted compound fluorescence microscope with a Calibri.2 LED lighting system and a CCD camera (Carl Zeiss). For western blot analysis, livers from three 9-week-old adult zebrafish of the same genotype were dissected and pooled together in RIPA buffer for protein extraction. Total protein concentration was determined by BCA assay. SDS-PAGE separation was done by running 40\u2005\u03bcg of total protein per well on NuPage 4-12% Bis-Tris Protein Gels. Proteins were then transferred to nitrocellulose paper using an iBlot 2 transfer apparatus (Thermo Fisher Scientific).
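Referring back to the AvaII-based dCAPS genotyping described at the start of this section: a minimal sketch of how the expected cut/uncut fragment pattern can be anticipated in silico is shown below. The amplicon is hypothetical; only the AvaII recognition site G^GWCC (W = A or T) is a known fact here:

```python
import re

def avaii_fragments(amplicon):
    """Predict AvaII digestion fragment lengths; AvaII cuts G^GWCC (W = A/T)."""
    amplicon = amplicon.upper()
    # The cut falls after the first G of each recognition site.
    cut_positions = [m.start() + 1 for m in re.finditer(r"GG[AT]CC", amplicon)]
    bounds = [0] + cut_positions + [len(amplicon)]
    return [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]

# Hypothetical 60-bp amplicon with one AvaII site (GGTCC) starting at index 24;
# a "cut" allele yields two bands, an "uncut" allele a single full-length band.
amplicon = ("ATGCTAGCTAGGATCGATCGATCG" + "GGTCC"
            + "ATCGATCGTAGCTAGCTAGCATCGATCGTAG")
print(avaii_fragments(amplicon))  # [25, 35]
```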
Blots were stained with Ponceau S as a loading control before the incubation with primary antibody at 4\u00b0C overnight. Secondary antibody incubation was done by incubating blots with horseradish peroxidase (HRP)-conjugated goat anti-rabbit IgG at room temperature for 2\u2005h after they were rinsed several times with 1\u00d7 TBST. For signal detection, Clarity Western ECL Substrate was used to develop luminescence on the blot. Embryos/larvae were fixed with 4% paraformaldehyde in 1\u00d7 PBS at 4\u00b0C overnight. After extensive rinsing with 1\u00d7 PBS, fixed embryos/larvae were then stained with filipin staining solution containing 0.5\u2005mg/ml filipin and 1% goat serum in 1\u00d7 PBS for 2.5\u2005h at room temperature. Stained embryos/larvae were rinsed and stored in 1\u00d7 PBST before imaging. Images were taken using a Zeiss Observer Z1 inverted compound fluorescence microscope with a Calibri.2 LED lighting system and a CCD camera (Carl Zeiss). For DNA microinjection, 50\u2005pg of pCS2+-EGFP or pCS2+-npc1 plasmid containing the full-length EGFP or npc1 cDNA driven by a CMV promoter was injected into the YSL of embryos at 3.5\u2005hpf. Injected embryos were collected and fixed at 2\u2005dpf for filipin staining. A total of 20\u2005mM TopFluor-cholesterol dissolved in DMSO was injected into the yolk of embryos at the 1-cell stage to label the cholesterol distribution in live zebrafish larvae. Each embryo was injected with 20\u2005pmol of TopFluor-cholesterol. As a control, FITC-BSA was injected into the yolk of embryos at the 1-cell stage. Injected embryos were raised to 7\u2005dpf and the live larvae were then imaged using an MZ16F stereomicroscope (Leica Microsystems) and an AxioCam HRc CCD camera (Carl Zeiss). LysoTracker Red DND-99 was used to stain lysosomes and other acidic organelles in live zebrafish larvae. Zebrafish larvae at 3-7\u2005dpf were rinsed with fresh egg water twice before bathing in egg water containing LysoTracker Red DND-99 (1:1000 dilution). Larvae were incubated with the dye for 1\u2005h in the dark. After the staining, larvae were rinsed 3 times with fresh egg water. YO-PRO-1 iodide was used for labeling neuromast hair cell nuclei (1:500 dilution). Images were obtained using a Leica MZ16F stereomicroscope (Leica Microsystems) equipped with an AxioCam HRc CCD camera (Carl Zeiss). The mean intensity of LysoTracker Red in the lateral line neuromasts was quantified per individual larva via ImageJ particle analysis with the following constraints: size (inch2)=0.001-0.1, circularity=0.11-1.00. 2-hydroxypropyl-beta-cyclodextrin (2HP\u03b2CD) was dissolved in ddH2O as a 100\u2005mM stock solution. For the treatment, 15-20 zebrafish larvae were placed in 60\u2005mm-diameter glass Petri dishes with egg water. The 100\u2005mM 2HP\u03b2CD stock solution was then diluted to the working concentration for each treatment group. Larvae were incubated in the egg water with 2HP\u03b2CD for 3\u2005days at 28.5\u00b0C. All graphs were plotted as mean\u00b1standard deviation (s.d.). Genotype distribution of the +/m npc1 intercross was analyzed by chi-square test. Other differences between experimental groups were analyzed by two-tailed Student\u2019s t-test. P-values <0.05 were considered statistically significant."}
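The neuromast quantification above was done in ImageJ; the following is not the authors' macro but a rough scikit-image analogue of the same idea, applying area and circularity gates to labeled particles in a pre-thresholded image. The pixel-area bounds are placeholders that would need calibrating against the paper's inch-squared size filter:

```python
import numpy as np
from skimage import measure

def neuromast_mean_intensity(intensity, mask, min_area_px=50, max_area_px=5000,
                             min_circ=0.11, max_circ=1.00):
    """Mean intensity over particles passing the area/circularity gates."""
    labels = measure.label(mask)
    means = []
    for region in measure.regionprops(labels, intensity_image=intensity):
        if region.perimeter == 0:
            continue
        # Circularity as defined by ImageJ: 4*pi*area / perimeter^2.
        circ = 4.0 * np.pi * region.area / region.perimeter ** 2
        if min_area_px <= region.area <= max_area_px and min_circ <= circ <= max_circ:
            means.append(region.mean_intensity)
    return float(np.mean(means)) if means else 0.0

# Toy example: one bright square "particle" on a dark background.
img = np.zeros((100, 100)); img[40:60, 40:60] = 200.0
print(neuromast_mean_intensity(img, img > 50))  # ~200.0
```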
Therefore, the objective of the present study was to examine the association of polymorphisms in the UCP3 gene were chosen to be genotyped using matrix-assisted laser desorption-ionization time-of-flight mass spectrometry in meat-type chicken populations with 724 birds in total. Body weight at 49 (BW49) and 70 days of age (BW70) and feed intake (FI) in the interval were collected, then body weight gain (BWG) and feed conversion ratio (FCR) were calculated individually.Six single nucleotide polymorphisms (SNPs) of the UCP3 was significantly associated with BWG and FCR (p<0.05), and that rs13997811 had significant effects on BW70 and BWG (p<0.05). Rs13997812 of UCP3 was strongly associated with BW70, FI, and FCR (p<0.05). Furthermore, individuals with AA genotype of rs13997809 had significantly higher BWG and lower FCR (p<0.05) than those with AT genotype. The GG individuals showed strongly higher BW70 and BWG than AA birds in rs13997811 (p<0.05). Birds with the TT genotype of rs13997812 had significantly greater BW70 and lower FCR compared with the CT birds (p<0.05). In addition, the TAC haplotype based on rs13997809, rs13997811, and rs13997812 showed significant effects on BW70, FI, and FCR (p<0.05).One SNP with a low minor allele frequency (<1%) was removed by quality control and data filtering. The results showed that rs13997809 of UCP3 polymorphisms in growth and feed efficiency that might be used in meat-type chicken breeding programs.Our results therefore demonstrate important roles for In modern poultry production, feed represents approximately 60% of the total raising costs especially in developing countries. In addition, low feed efficiency chickens produce too much excreta which may result in environment pollution . TherefoUCP3) is mainly expressed in skeletal muscle and plays a vital role in regulating energy metabolism at transcriptional and posttranslational levels [https://www.animalgenome.org/cgi-bin/QTLdb/GG/index) [The uncoupling proteins (UCPs) located in the mitochondrial inner membrane are members of anion carrier protein superfamily and regarded as crucial regulators of energy homeostasis . Brieflyl levels . In chicG/index) ,8.UCP3 gene is mainly expressed in skeletal muscle and plays crucial roles in energy metabolism, limiting reactive oxygen production, fatty acid metabolism, and body weight (BW) [UCP3 gene is significantly associated with obesity susceptibility [UCP3 gene is significantly correlated with partial body measurement traits including withers height and chest depth in Chinese Qinchuan cattle [UCP3 in the low feed efficiency group is significant higher than that in the high feed efficiency group in beef cattle [It is well documented that UCP3 protein decreases metabolic efficiency via uncoupling substrate oxidation in mitochondrial from ATP synthesis by mitochondrial respiration chain, thus dissipating mitochondrial proton gradient by mediating inducible mitochondrial proton leak to regulate energy metabolism . In poulght (BW) ,10,11. Itibility . It was n cattle . Anotherf cattle . In addif cattle .UCP3 gene polymorphisms and growth, feed efficiency and energy metabolism disorders have been performed in mammals and humans [UCP3 gene with growth and feed efficiency traits in chickens. 
The objective of the present study was to investigate the association of polymorphisms in the UCP3 gene with BW, body weight gain (BWG), feed intake (FI), and FCR in meat-type chickens, which may be helpful as molecular markers in future breeding programs to improve growth and feed efficiency of meat-type chickens. All animal experiments were performed in accordance with the Regulations for the Administration of Affairs Concerning Experimental Animals and approved by the Institutional Animal Care and Use Committee of Anhui Agricultural University (permit number: IACUC-20101020). All experimental protocols were carried out according to relevant regulations and recommendations by this committee. All efforts were made to minimize suffering in our birds. Two yellow meat-type chicken strains were used (generations G20 to G22). The two strains were selected based on appearance, growth and carcass traits within every generation and chosen as valuable genetic sources for a local chicken breeding program. All chickens were derived from a single hatch, wing-banded at day of hatch and raised indoors, and all birds were given the same experimental diet. The analysis of all the SNPs was performed with the BLAST program at the NCBI database and DNAMAN 7.0 to guarantee that none of the SNP sequences was homologous to other genome sequences. Finally, six SNPs in the UCP3 gene were selected for further analysis. All blood samples were collected from the wing vein by standard venipuncture and stored in acid citrate dextrose anticoagulant at \u221220\u00b0C before DNA isolation. Genomic DNA was extracted from whole blood samples using the NRBC Blood DNA Kit in accordance with the manufacturer\u2019s procedures. DNA quality was tested by 1.5% agarose gel electrophoresis and concentration was quantified using the NanoDrop 2000 spectrophotometer. The final DNA concentrations were 2 to 10 ng/\u03bcL, and samples were stored at \u221220\u00b0C for further analyses. Two PCR primers and an extension primer were designed using the software Assay Design 3.1 for each SNP, as listed in the table. Genotyping of 724 chickens was carried out by matrix-assisted laser desorption-ionization time-of-flight mass spectrometry on the MassARRAY iPLEX Platform. Single nucleotide polymorphisms with a genotype call rate <90% and minor allele frequency <1% were discarded across all birds. The linkage disequilibrium (LD) between SNPs within the gene was determined by the Haploview program, using Lewontin's D\u2032 and r2, where r2>1/3 indicates sufficiently strong LD. Haplotypes were constructed with SIMWALK2 (http://watson.hgen.pitt.edu/docs/simwalk2.html). The frequencies of SNP genotypes and Hardy-Weinberg equilibrium were analyzed by the FREQ procedure and chi-square fitness test of SAS version 9.4, and SNPs that deviated from the Hardy-Weinberg equilibrium were removed from further analysis. The associations between SNPs or haplotypes and growth and feed efficiency traits were analyzed using the generalized linear mixed model procedure of SAS 9.4 with the following model: Yijk = \u03bc + Li + Gj + Pk + eijk, where Yijk is the observed value of the traits, \u03bc is the overall population mean, Li is the fixed effect of line, Gj is the fixed effect of SNP genotype or haplotype, Pk is the random effect of family, and eijk is the random error.
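The analysis itself was run in SAS; as a rough Python analogue of the mixed model just defined (line and genotype as fixed effects, family as a random intercept), the sketch below uses statsmodels with hypothetical column names and toy-sized data, purely to show the model structure:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical phenotype/genotype table; column names are illustrative only.
df = pd.DataFrame({
    "FCR":      [2.8, 3.1, 2.6, 3.0, 2.9, 2.7, 3.2, 2.5],
    "line":     ["S1", "S1", "S1", "S1", "S2", "S2", "S2", "S2"],
    "genotype": ["AA", "AT", "AA", "AT", "AA", "AT", "TT", "AA"],
    "family":   ["f1", "f1", "f2", "f2", "f3", "f3", "f4", "f4"],
})

# Y_ijk = mu + L_i + G_j + P_k + e_ijk: fixed effects for line and genotype,
# random intercept for family (real use requires the full 724-bird cohort).
model = smf.mixedlm("FCR ~ C(line) + C(genotype)", data=df, groups=df["family"])
fit = model.fit()
print(fit.summary())
```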
In the present study, six SNPs of the UCP3 gene were chosen and screened in yellow meat-type chicken strains. One SNP with a very low minor allele frequency was removed by quality control and data filtering. The remaining five SNPs were further tested as polymorphic with genotype call rate >90% and minor allele frequency >1% , and that rs13997811 was strongly associated with BW at 70 days of age (BW70) and BWG (p< 0.05). SNP rs13997812 had significant effects on BW70, FI, and FCR (p<0.05). Furthermore, LD analysis indicated a high linkage block among rs13997809, rs13997811, and rs13997812 of the UCP3 gene (UCP3 gene plays an important role in determining growth and feed efficiency traits in our chicken populations. Previous research had demonstrated that an A/G polymorphism in intron 3 of the UCP3 gene is significantly associated with average daily gain, partial efficiency growth, and FCR in beef cattle [UCP3 gene has pivotal roles in uncoupling thermogenesis in beige adipocytes and might be a crucial mediator of thermogenesis in cold-tolerant pigs [In the current study, association analysis revealed that three SNPs in the CP3 gene and contCP3 gene . The resf cattle . It was ant pigs .UCP3 gene had crucial effects on energy expenditure and homeostasis, and their polymorphisms were significantly associated with fat metabolism, obesity and diabetes in humans and mice [UCP3 gene were significantly associated with total cholesterol levels and genetic mutation in intron 1 may be correlated with relative expression levels of the UCP3 gene in dogs [UCP3 gene might be associated with obesity [UCP3/BglIpolymorphism was significantly associated with growth traits, and genotype AB was superior over AA or BB genotypes in Nanyang cattle [UCP3 in chicken growth and feed efficiency.Previous studies demonstrated that the and mice ,23. Ther in dogs . In huma obesity . Anotherg cattle . TherefoUCP3 gene were significantly associated with some growth and feed efficiency traits, which might be used as potential genetic markers in yellow meat-type chicken breeding programs. Further studies are necessary to properly investigate associations of UCP3 polymorphisms with feed efficiency traits in large populations with different chicken strains. Additionally, it is necessary to further investigate the molecular mechanisms of identified SNPs or haplotypes of the UCP3 gene contributing to growth and feed efficiency of chickens.In summary, three SNPs and the TAC haplotype of the"} +{"text": "Noncovalently bound excited states of anions have led to the development of resonant photoelectron spectroscopy with rich vibrational and dynamical information. \u03bc > \u223c2.5 D) can support dipole-bound excited states below the detachment threshold. These dipole-bound states (DBSs) are highly diffuse and the weakly bound electron in the DBS can be readily autodetached via vibronic coupling. Excited DBSs can be observed in photodetachment spectroscopy using a tunable laser. Tuning the detachment laser to above-threshold vibrational resonances yields vibrationally enhanced resonant photoelectron spectra, which are highly non-Franck\u2013Condon with much richer vibrational information. This perspective describes recent advances in the studies of excited DBSs of cryogenically cooled anions using high-resolution photoelectron imaging (PEI) and resonant photoelectron spectroscopy (rPES). The basic features of dipole-bound excited states and highly non-Franck\u2013Condon resonant photoelectron spectra will be discussed. 
The power of rPES to yield rich vibrational information beyond conventional PES will be highlighted, especially for low-frequency and Franck-Condon-inactive vibrational modes, which are otherwise not accessible from non-resonant conventional PES. Mode-selectivity and intra-molecular rescattering have been observed during the vibrationally induced autodetachment. Conformer-specific rPES is possible due to the different dipole-bound excited states of molecular conformers with polar neutral cores. For molecules with μ ≪ 2.5 D or without dipole moments, but with large quadrupole moments, excited quadrupole-bound states can exist, which can also be used to conduct rPES.

When the dipole moment of the neutral core of a valence-bound anion exceeds ∼2.5 D, it can bind an electron in a diffuse dipole-bound orbital below the detachment threshold. Fermi and Teller first predicted a minimum dipole moment of 1.625 D for a finite dipole to bind an electron when studying the capture of negative mesotrons in 1947. Direct evidence of DBSs came from photodetachment experiments of the enolate anion, which revealed sharp peaks in the photodetachment spectra attributed to the existence of dipole-supported excited states.

The Wang group first reported high-resolution rPES via vibrational autodetachment from dipole-bound excited states of cryogenically cooled C6H5O−. The DBS of C6H5O− was found to be 97 cm−1 below the detachment threshold. Mode-specific autodetachment from eight vibrational levels of the DBS was observed, yielding highly non-Franck-Condon resonant photoelectron (PE) spectra, due to the Δv = −1 vibrational propensity rule. The DBS binding energies measured to date range up to 659 cm−1, depending on the dipole moments of the neutral cores. The small binding energies confirm the weakly bound nature of the DBSs, which have been probed by high-resolution PEI using a third-generation electrospray ionization (ESI)-PES apparatus equipped with a cryogenically cooled Paul trap. Resonant PES via vibrational autodetachment has been shown to be a powerful technique to resolve rich vibrational features, especially for low-frequency and Franck-Condon (FC) inactive vibrational modes, as well as conformation-selective and tautomer-specific spectroscopic information. Additionally, a DBS of the cluster anion C2P− was observed, revealing that the weakly dipole-bound electron is not spin-coupled to the core electrons of C2P.

In this perspective, we first discuss the experimental methods in Section 2. We then present the DBSs of C6H5O− and C6H5S− in Section 3, illustrating some basic features of the DBSs, such as the small binding energies of the DBSs, structural similarities between an anion in the DBS and its corresponding neutral, and vibrational autodetachment following the Δv = −1 propensity rule. Section 4 presents several applications of rPES in resolving vibrational information by resonant enhancement, from the vibrational origin of the CH3COO radical to the low-frequency and FC-inactive vibrational features of the deprotonated uracil radical. Intramolecular inelastic rescattering, which lights up low-frequency FC-inactive vibrational modes, will also be discussed. In Section 5, we present isomer-specific rPES via DBSs of two conformers of m-HO(C6H4)O− and two tautomers of deprotonated cytosine anions. The first observation of a quadrupole-bound excited state of cryogenically cooled NC(C6H4)O− anions will be described in Section 6. Finally, in Section 7, we give a summary and provide some perspectives for the study of noncovalent excited states and rPES of cryogenically cooled anions.
This section describes the experimental techniques that we have developed to study excited DBSs of anions. The principle of rPES via vibrational autodetachment from DBSs will be discussed, illustrating the differences between rPES and conventional PES. Photodetachment spectroscopy used to search for DBS resonances of anions will also be discussed, and we will briefly present our current third-generation ESI-PES apparatus.

Conventional anion PES is done at a fixed laser wavelength: an anion (M−) is detached by a laser beam. When the laser photon energy (hv) exceeds the binding energy of the electron in the anion, i.e., the electron affinity (EA) of the corresponding neutral, photoelectrons (e−) can be ejected with various kinetic energies (KEs) depending on the resulting final neutral states (M). Conventional PES is governed by the FC principle, only allowing vibrational modes with significant FC factors to be observed, though anomalous PES intensities can be observed in slow-electron velocity-map imaging at certain detachment photon energies.

Resonant PES, by contrast, involves excitation through a DBS (denoted DBS−) and comprises two processes. The first is resonant excitation, which has a high absorption cross section, from the anion ground state to the DBS vibrational levels. For below-threshold DBS vibrational levels, a second photon is required to detach the DBS electron. For above-threshold DBS vibrational resonances, vibronic coupling can induce autodetachment from the DBS vibrational levels to neutral levels via transfer of vibrational energy to the weakly bound electron. The vibrational autodetachment follows the Δv = −1 propensity rule under the harmonic approximation, which was extended from the autoionization of molecular Rydberg states. The Δv = −1 propensity rule, which is also related to the fact that the potential energy curves of the DBS and the neutral are almost identical, implies that only one quantum of vibrational energy is allowed to transfer to the DBS electron.

Details of the third-generation ESI-PES apparatus and the improvements relative to the first- and second-generation apparatuses have been described previously. Anions are thermalized in the trap by collisions with a buffer gas, which is shown empirically to exhibit optimal thermal cooling effects. The electron energy resolution is a few cm−1 for electrons with 55 cm−1 kinetic energy and about 1.5% (ΔKE/KE) for kinetic energies above 1 eV; the narrowest line width achieved was 1.2 cm−1 for 5.2 cm−1 electrons. The third-generation ESI-PES apparatus has allowed the study of weakly bound non-covalent excited states of anions, including both dipole-bound and quadrupole-bound states.

Due to the small binding energies of the DBS electron, it is critical to cool the anions to low temperatures to allow high-resolution PDS and rPES and to facilitate spectral assignments of complex anions by eliminating vibrational hot bands. In 2005, the Wang group developed the first version of a cryogenically cooled Paul trap. Using cryogenically cooled C60−, the most accurate EA of C60 was measured to be 2.6835(6) eV, and sixteen fundamental vibrational frequencies were resolved for the C60 molecule. The cryogenic Paul trap has also been adapted by several groups to study cold ions and ionic clusters by vibrational spectroscopy.

The first anion for which we observed an excited DBS and performed rPES in 2013 was the phenoxide anion (C6H5O−). The C6H5O radical possesses a large dipole moment of 4.06 D. Eight DBS vibrational resonances were found by manually scanning the detachment laser.
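Energies in this field are quoted in cm−1, while electron affinities are often given in eV; a two-line helper keeps the conversions consistent. The conversion factor is standard physics, not specific to this work, and the example values are the ones quoted in the surrounding text.

```python
# Convert wavenumbers to electron volts: E[eV] = E[cm^-1] * h*c / e.
CM1_TO_EV = 1.239841984e-4  # h*c/e expressed in eV per cm^-1 (CODATA)

def cm1_to_ev(wavenumber_cm1: float) -> float:
    """Photon or electron energy in eV for a given wavenumber in cm^-1."""
    return wavenumber_cm1 * CM1_TO_EV

print(cm1_to_ev(18173))  # C6H5O- detachment threshold -> ~2.253 eV
print(cm1_to_ev(97))     # C6H5O- DBS binding energy   -> ~0.012 eV (12 meV)
```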
Recently, a more complete photodetachment spectrum was obtained for C6H5O−, revealing a total of eighteen vibrational resonances across the detachment threshold at 18,173 cm−1. These above-threshold peaks correspond to excited vibrational levels of the DBS of C6H5O−, i.e., vibrational Feshbach resonances.

The non-resonant PE spectrum of C6H5O− at 480.60 nm, obtained from the corresponding PE image, displays a vibrational progression of mode ν11 up to the fifth quantum. After shifting the photodetachment spectrum by 97 cm−1 to line up peak 000 in the PE spectrum with peak 0 in the photodetachment spectrum, the positions and relative intensities of the ν11 progression in the two spectra are perfectly matched. This comparison vividly demonstrates the structural similarity between the molecular core in the DBS of C6H5O− and the neutral C6H5O radical. Since the peak width in the photodetachment spectrum is mainly limited by rotational broadening, the measured frequencies are in general more accurate than those obtained from the PE spectrum, where the spectral resolution depends on the photoelectron kinetic energies. In addition, much richer vibrational features are revealed in the photodetachment spectrum due to resonant enhancement via the DBS. Hence, in comparison with conventional non-resonant PES, rPES in combination with PDS is more powerful in resolving vibrational information for dipolar neutral radicals by probing DBS resonances.

The autodetached electrons display characteristic p-wave angular distributions. Autodetachment from the nth vibrational level of a DBS mode yields the (n−1)th level of the same mode in the neutral; i.e., one quantum of the vibrational energy of that mode is transferred to the dipole-bound electron during autodetachment, and the final neutral peak in the PE spectrum corresponding to the (n−1)th level is highly enhanced. For instance, resonant PE spectra were recorded at the 11′n DBS levels for n = 1 to 5. Vibrational autodetachment from these DBS levels results in significant enhancement of peaks 000, A (111), B (112), C (113) and D (114), respectively, following the Δv = −1 propensity rule. In these autodetachment processes, one vibrational quantum of mode ν11 (519 cm−1) is transferred to the DBS electron (BE = 97 cm−1), yielding an autodetached electron with a KE of 422 cm−1 in all five cases. In addition, peaks A (111) and B (112) are slightly enhanced via Δv = −2 and Δv = −3 autodetachment processes. This violation of the Δv = −1 propensity rule indicates anharmonicity at the higher vibrational levels.
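The 422 cm−1 kinetic energy quoted above follows directly from energy conservation in the Δv = −1 autodetachment step; written out with the numbers from the text:

```latex
\mathrm{KE}(e^-) \;=\; \tilde{\nu}_{11} - \mathrm{BE}_{\mathrm{DBS}}
\;=\; 519~\mathrm{cm}^{-1} - 97~\mathrm{cm}^{-1}
\;=\; 422~\mathrm{cm}^{-1} \;\approx\; 52~\mathrm{meV}
```

using 1 cm−1 ≈ 0.124 meV. The result is independent of which 11′n level is excited, which is why all five resonant spectra place the enhanced peak at the same electron kinetic energy.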
Autodetachment from a combinational vibrational level of the DBS is more complicated. When all the vibrational frequencies of the modes involved are larger than the binding energy of the DBS, both neutral final levels, each reached by removing one quantum from one of the modes, are expected to be enhanced. For example, peaks A (111) and d (181) are highly enhanced because of autodetachment from the combinational DBS level 11′118′1, following the Δv = −1 propensity rule. However, excitation to the DBS combinational level 9′111′1 mainly enhances peak A (111), which means that mode ν9 is more strongly coupled with the dipole-bound electron, indicating mode selectivity in vibronic coupling. Even more complicated cases involve autodetachment from overlapping vibrational levels of the DBS, such as 9′111′2 and 10′111′220′1. The enhancement of the two peaks A (111) and k (91111) is due to autodetachment from the DBS level 9′111′2, while that of peak h (112201) is due to autodetachment from the 10′111′220′1 DBS level. Both mode-selectivity and anharmonic effects are observed in these autodetachment processes from DBS vibrational levels to neutral levels.

The thiophenoxide anion (C6H5S−) is another relatively simple example that illuminates the basic features of DBSs and rPES. Owing to the dipolar neutral core of the thiophenoxy radical (C6H5S), an excited DBS was observed in the photodetachment spectrum of C6H5S−. Resonant PE spectra recorded at the DBS levels 11′1, 11′2 and 11′3 show enhancements obeying the Δv = −1 propensity rule for autodetachment. The enhancement of peak (101) is due to mode-specific autodetachment from the combinational level 10′111′1: strong vibronic coupling is only observed for mode ν10, similar to the case of C6H5O−. Excitation to a further combinational resonance enhances peaks b, c (111201) and A (111); the autodetachment to peaks b and c follows the Δv = −1 propensity rule, while that to peak A involves Δv = −2 of the lowest-frequency bending mode ν20.

One of the most prominent examples is the deprotonated uracil radical anion, [U−H]−. By scanning the laser wavelength up to ∼1700 cm−1 above the detachment threshold, a total of forty-six DBS vibrational levels were observed. The resonant PE spectra reveal peaks b (261), c (272), and e (251), which are symmetry-forbidden in the non-resonant spectra and become observable through vibronic (Herzberg-Teller) coupling. In addition, intramolecular inelastic rescattering is observed: electrons autodetached from DBS levels involving the ν20 mode can be rescattered by the molecular core and lose energy to the bending modes ν27 (113 cm−1), ν26 (150 cm−1), and ν25 (360 cm−1), corresponding to peaks a, b and e, respectively. Especially pronounced rescattering effects were observed for autodetachment from the 16′1 DBS level of [U−H]−. This observation is not well understood currently and would deserve careful theoretical consideration.

One interesting application of rPES is to obtain conformer-selective spectroscopic information for dipolar species, because different conformers have different DBSs. If multiple conformers are present in the ion beam, a non-resonant PE spectrum is a mixture of the species. However, enhancement of vibrational features for a specific conformer, i.e., conformer-selective rPES, can be achieved when the detachment laser is tuned to the DBS vibrational levels of that conformer.

The 3-hydroxyphenoxide anion has two nearly degenerate conformers, syn- and anti-m-HO(C6H4)O−, due to the different orientations of the hydrogen atom on the −OH group. Peaks S000 and A000, with binding energies of 18,850 cm−1 and 18,917 cm−1, represent the EAs of the syn- and anti-m-HO(C6H4)O radicals, respectively, and peak A (S231) is a vibrational feature of mode ν23 of syn-m-HO(C6H4)O. With dipole moments of 3.10 D and 5.34 D for the syn- and anti-radicals, respectively, both conformers of the anion support excited DBSs. The larger DBS binding energy of anti-m-HO(C6H4)O− is consistent with the larger dipole moment of its neutral radical.
A complicated detachment spectrum was observed, with DBS resonances from both conformers: peaks A1-A17 are due to anti-m-HO(C6H4)O−, peaks S1-S8 are due to syn-m-HO(C6H4)O−, and peaks AS1-AS5 are due to overlapping vibrational levels of both conformers.

Hence, by tuning the detachment laser to the DBS levels of a specific conformer, conformer-selective resonant PE spectra can be obtained. When the detachment laser is tuned to the DBS vibrational levels S30′1 and S28′1 of syn-m-HO(C6H4)O−, the resonant PE spectra display major enhancement of the S000 peak, while the A000 peak is negligible. When the laser is tuned to the DBS levels A27′1 and A24′1 of anti-m-HO(C6H4)O−, the A000 peak is greatly enhanced, while the S000 peak becomes negligible. In addition, peaks B (S231) and C (A211) are enhanced due to autodetachment from the DBS levels S23′130′1 and A21′2, respectively. Such conformer-selective resonant PE spectra have been obtained from every DBS resonance observed for this anion.

Tautomerism of nucleic acid bases plays an important role in the structure and function of DNA. For example, the deprotonation of cytosine can produce many tautomeric negative ions, among which the cKAN3H8a− and tKAN3H8b− tautomers were observed. Peaks C0 and T0 represent the 0-0 detachment transitions and yield the EAs of cKAN3H8a and tKAN3H8b as 3.047 eV and 3.087 eV, respectively, in excellent agreement with the calculated EAs. The relative intensity of T0 versus C0 is consistent with the computed relative stabilities of the two anionic tautomers. Hence, both tautomers are present experimentally, even under our low-temperature conditions. At 400.22 nm, two vibrational features, A (C301) and B (C302), are observed. The cKAN3H8a and tKAN3H8b radicals are calculated to have dipole moments of 3.35 D and 5.55 D, respectively, so both anions can support excited DBSs. Resonant excitation to the C29′130′3 DBS level followed by Δv = −3 autodetachment, breaking the Δv = −1 propensity rule, was also observed. The resonant PE spectra show enhancement of the T0 peak due to autodetachment from the DBS vibrational levels T27′1, T17′1 and T23′1 of tKAN3H8b−, whereas the C0 peak from the cKAN3H8a− tautomer is negligible.

Long-range charge-quadrupole interactions can form quadrupole-bound anions (QBAs). A small cluster anion was first suggested to be a QBA and was found to show a relatively high electron binding energy; (NaCl)2 and (KCl)2, as well as a series of complex organic molecules with vanishing dipole moments but large quadrupole moments, have also been proposed to form QBAs, including the trans-isomer of 1,4-dicyanocyclohexane, which has no dipole moment.

The 4-cyanophenoxide anion NC(C6H4)O− provides an opportunity to probe an excited quadrupole-bound state (QBS). Its neutral core, NC(C6H4)O, has two dipolar centers (C≡N and C−O) pointing in opposite directions, resulting in a small dipole moment of 0.30 D but a large quadrupole moment. The dipole moment is much smaller than the 2.5 D critical value needed to form an excited DBS, but the large quadrupole moment may allow a QBS. Photodetachment spectroscopy of NC(C6H4)O− indeed revealed many resonances across the detachment threshold at 24,927 cm−1, as well as a peak 0 below the detachment threshold observed via resonant two-photon detachment. Since NC(C6H4)O− cannot support a DBS, peak 0 should represent the ground vibrational level of the QBS.
The continuous baseline above the threshold represents the non-resonant detachment signals, while the seventeen peaks, labeled 1-17, are vibrational resonances of the QBS of NC(C6H4)O−. The measured EA of the NC(C6H4)O radical is consistent with previous results. The autodetachment processes via the QBS are found to be the same as those via the DBS, following the Δv = −1 propensity rule. Seventeen resonant PE spectra were obtained, which together with the photodetachment spectrum yielded ten fundamental vibrational frequencies for the NC(C6H4)O radical.

There are many interesting questions that can be investigated using PDS and rPES, as well as experimental challenges. For all the anionic systems we have studied, the DBS binding energy does not simply track the dipole moment of the neutral core. The neutral core of HO(C6H4)2O− has the largest dipole moment, 6.35 D, among all the anions that we have investigated thus far, with a DBS binding energy of 659 cm−1 below the threshold, while the DBS of C6H5O− (neutral dipole moment 4.06 D) is found to have a binding energy of 97 cm−1. Yet the DBS in syn-m-HO(C6H4)O− has a larger binding energy of 104 cm−1 although its neutral core has a smaller dipole moment of 3.10 D. This indicates that molecular structures and polarizability play important roles in the electron binding in DBSs. Thus, it would be interesting to investigate how the DBS binding energies depend on the magnitude of the dipole moment for different classes of molecular species, from small anions such as the deprotonated uracil anion (C4H3N2O2−) to larger aromatic systems.

Another interesting question is whether an anion can support a second bound DBS below the detachment threshold and, if so, what critical dipole moment would be required for the neutral core. Early theoretical studies of a fixed dipolar system predicted that a critical dipole moment of 9.64 D is required to support a second bound DBS, larger than that of any neutral core we have investigated thus far. Resonant PES via noncovalent excited states remains unique in its ability to resolve low-frequency vibrational modes for the relatively small aromatic systems discussed here.

There are no conflicts to declare."}
{"text": "KRAS is a GTPase that activates pathways involved in cell growth, differentiation and survival. In normal cells, KRAS activity is tightly controlled, but with specific mutations, the KRAS protein is persistently activated, giving cells a growth advantage resulting in cancer. While a great deal of attention has been focused on the role of mutated KRAS as a common driver mutation for lung adenocarcinoma, little is known about the role of KRAS in regulating normal human airway differentiation.

To assess the role of KRAS signaling in regulating differentiation of the human airway epithelium, primary human airway basal stem/progenitor cells (BC) from nonsmokers were cultured on air-liquid interface (ALI) cultures to mimic the airway epithelium in vitro. Modulation of KRAS signaling was achieved using siRNA-mediated knockdown of KRAS or lentivirus-mediated over-expression of wild-type KRAS or the constitutively active G12V mutant. The impact on differentiation was quantified using TaqMan quantitative PCR, immunofluorescent and immunohistochemical staining analysis for cell-type-specific markers. Finally, the impact of cigarette smoke exposure on KRAS and RAS protein family activity in the airway epithelium was assessed in vitro and in vivo.

siRNA-mediated knockdown of KRAS decreased differentiation of BC into secretory and ciliated cells with a corresponding shift toward squamous cell differentiation.
Conversely, activation of KRAS signaling via lentivirus-mediated over-expression of the constitutively active G12V KRAS mutant had the opposite effect, resulting in increased secretory and ciliated cell differentiation and decreased squamous cell differentiation. Exposure of BC to cigarette smoke extract increased KRAS and RAS protein family activation in vitro. Consistent with these observations, airway epithelium brushed from healthy smokers had elevated RAS activation compared to nonsmokers. Together, these data suggest that KRAS-dependent signaling plays an important role in regulating the balance of secretory, ciliated and squamous cell differentiation of the human airway epithelium, and that cigarette smoking-induced airway epithelial remodeling is mediated in part by abnormal activation of KRAS-dependent signaling mechanisms. The online version of this article (10.1186/s12931-019-1129-4) contains supplementary material, which is available to authorized users.

The RAS protein family are a class of small GTP-binding proteins that function as signal transduction molecules to regulate many cellular processes including proliferation, differentiation, and apoptosis [1-4]. Based on the knowledge that KRAS signaling regulates a diverse array of cellular pathways relevant to differentiation [12-15], we investigated its role in regulating differentiation of the human airway epithelium.

Nonsmoker primary basal cells (BC) were purchased from Lonza. All cultures were seeded at 3000 cells/cm2 into T75 flasks and maintained in Bronchial Epithelial Growth Media before differentiation on air-liquid interface (ALI) as described [33, 34]. To initiate ALI culture, cells were seeded (on the order of 10^5 cells/cm2) onto 0.4 μm pore-sized Transwell inserts pre-coated with human type IV collagen. The initial culture medium consisted of a 1:1 mixture of DMEM and Ham's F-12 Nutrient Mix containing 5% fetal bovine serum, 1% penicillin-streptomycin, 0.1% gentamycin and 0.5% amphotericin B. The following day, the medium was changed to 1:1 DMEM/Ham's F12 (including the antibiotics described above) with 2% Ultroser G serum substitute. Once the cells had reached confluence 2 days post seeding, the media was removed from the upper chamber to expose the apical surface to air and establish the ALI (referred to as ALI Day 0). The ALI cultures were then grown at 37 °C, 8% CO2, with fresh media changed every 2 to 3 days. Following 5 days on ALI, the CO2 level was reduced to 5% until harvest of the cultures at the desired time point. For histological analysis, ALI trans-well inserts were fixed for paraffin embedding and sectioning. For general histology, sections were stained using standard protocols for hematoxylin and eosin (H&E) or Alcian blue.

Basal cells were used without transfection or were transfected with 5 pmol of control siRNA or KRAS-specific siRNA using Lipofectamine RNAiMAX Reagent and Opti-MEM media (both from Life Technologies) at the time of seeding cells for ALI culture.

The cDNA sequences of human wild-type KRAS and the constitutively active G12V mutant were PCR amplified using specific primers and cloned into the TOPO® TA subcloning vector (Invitrogen) using the manufacturer's protocol. The KRAS inserts were then subcloned into the multiple cloning site of the pCDH-MSCV-MCS-EF1α-GFP lentiviral vector via the NheI and BamHI restriction sites. The resulting plasmids were sequenced to verify the integrity of the KRAS open reading frame.
Recombinant replication-deficient lentiviruses were generated by transient co-transfection of 293A cells with the KRAS lentiviral vectors and the appropriate packaging plasmids pGal-Pol and pMD.G (VSVg envelope) as previously described. Virus titers were determined before use.

Total RNA was extracted using TRIzol (Invitrogen) and purified using the RNeasy MinElute RNA purification kit. Double-stranded cDNA was synthesized from 1 μg of total RNA using TaqMan Reverse Transcription Reagents. Gene expression was assessed using TaqMan quantitative PCR, and relative expression levels were determined using the ΔCt method with 18S ribosomal RNA as the endogenous control [33, 34].

Western analysis was performed as described [33, 34]. Briefly, cells were lysed in buffer containing a phosphatase inhibitor cocktail. The protein concentration was then quantified using the Bradford Assay, and an appropriate volume of 4X NuPAGE LDS sample buffer (Invitrogen) containing 200 mM dithiothreitol (DTT) was added to each sample. The cellular lysates were then boiled for 5 min, and equal amounts of total protein for each sample were analyzed using NuPAGE 4-12% Bis-Tris gradient gels (Invitrogen) and subsequently transferred onto nitrocellulose membranes with a Bio-Rad semidry apparatus before Western analysis. The primary antibodies used were against KRAS and GAPDH.

Immunohistochemical and immunofluorescent staining was performed on normal human bronchus tissue or ALI cross-sections as described [33, 34]. Images were analyzed with ImageJ software (https://imagej.nih.gov/ij/, Version 1.45s, National Institutes of Health, Bethesda, MD). For quantification of BC differentiation at the histological level via immunostaining using cell-type-specific markers, a minimum of 15 images equally distributed between both ends of the sectioned membrane were acquired and a minimum of 500 total cells counted for each individual experiment. Epithelial thickness of ALI cultures was quantified on H&E-stained cross-sections. For each cross-section, 20 images equally distributed between both ends of the sectioned membrane were acquired, and three measurements were made at one-quarter, one-half, and three-quarter intervals with ImageJ.

Cigarette smoke extract (CSE) was made from the smoke of one Marlboro Red commercial cigarette bubbled through 12.5 ml of differentiation medium that was then filtered through a 0.2 μm pore filter, as described. To ensure consistency, each CSE preparation was standardized between experiments.

Levels of activated KRAS were quantified using the KRAS activation kit according to the manufacturer's protocol. This kit uses a GST-Raf1-RBD (Rho binding domain) fusion protein to bind the activated form of GTP-bound RAS, which can then be co-immunoprecipitated with glutathione resin from cell lysates. For each sample, 500 μg of total protein lysate were used per assay. Levels of total (input) and activated (elution following co-immunoprecipitation) KRAS were determined by Western analysis using a rabbit polyclonal KRAS antibody. Levels of activated KRAS were quantified using ImageJ software, with the untreated cells set as the reference for 100% activity.

To quantify the effect of CSE exposure on RAS protein family activity, the 96-well RAS Activation ELISA Kit was used according to the manufacturer's protocol. The kit uses the Raf1 RBD (Rho binding domain) attached to a 96-well plate to selectively pull down the active form of RAS (GTP-bound) from cell lysates.
The captured GTP-RAS is then detected by a pan-RAS antibody and an HRP-conjugated secondary antibody, with the absorbance read on a spectrophotometer at a wavelength of 450 nm. For each sample, 10 μg of total protein lysate were used per well. Levels of activated RAS in the untreated and nonsmoker cells were set as the reference for 100% activity. Healthy nonsmokers (n = 5) and smokers (n = 5) were recruited under IRB-approved protocols for collection of airway epithelium by bronchoscopic brushing.

siRNA treatment markedly reduced KRAS mRNA levels at ALI day 0 relative to control siRNA, and KRAS-knockdown cultures did not remain viable through day 28. Therefore, all the analyses to characterize the effect of KRAS knockdown were performed at ALI day 14, when the epithelium was still viable. Treatment of cells with control siRNA had no significant effect on the expression of KRAS mRNA and protein compared to untreated cells. In support of these findings, qPCR analysis of the proliferation marker MKI67 demonstrated no significant (p > 0.8) difference in expression between untreated and siRNA control treated cells. However, compared to control siRNA treated cells, knockdown of KRAS decreased expression of MKI67 (−4.4 fold), suggesting that suppression of KRAS decreases proliferation. To further characterize these differences, ALI day 14 cultures from each group were analyzed by qPCR for expression of cell-type-specific markers relevant to mucociliated differentiation. Knockdown of KRAS suppressed expression of secretory cell markers (p < 0.05) and of ciliated cell markers, with a decrease in the number of SCGB1A1-positive cells (6.6% untreated vs 7.2% siRNA control vs 0.02% siRNA KRAS) compared to siRNA control treated cells.

To activate KRAS signaling, BC were infected with lentivirus expressing wild-type (WT) KRAS or the constitutively active G12V mutant (activated) during ALI culture. As a control, BC were infected with control lentivirus (empty vector). To confirm over-expression of WT and activated KRAS throughout the differentiation process, cells were harvested at ALI day 0, 7, 14 and 28 for analysis of KRAS expression at the mRNA level by qPCR. Compared to lentivirus control infected cells, KRAS was significantly over-expressed in WT and activated lentivirus infected cells at all time points. qPCR analysis of the proliferation marker MKI67 demonstrated no significant difference in expression between control lentivirus and lentivirus expressing WT KRAS infected cells. However, over-expression of activated KRAS increased expression of MKI67 compared to control (3.3 fold) and WT KRAS (3.3 fold) lentivirus infected cells, suggesting that activation of KRAS increases proliferation. To further characterize these differences in histology and quantify the impact of KRAS signaling on BC mucociliated differentiation, ALI day 7, 14, and 28 cultures from each group were analyzed by qPCR for expression of cell-type-specific markers. In support of the histological data, no significant difference in expression of BC (KRT5 and TP63), secretory, ciliated (FOXJ1 and DNAI1) and squamous (KRT6B and IVL) cell markers was observed between cells infected with control lentivirus and lentivirus expressing WT KRAS at any time point. However, activated KRAS significantly increased expression of secretory cell markers at all time points; at day 28, the magnitude of increase for key genes included 7.4-fold for MUC5AC, 6.0-fold for MUC5B, and 2.9-fold for SCGB1A1. In ciliated cells, the differentiation-driving transcription factor FOXJ1 was up-regulated 1.7-fold at day 7 (p < 0.001), and in parallel, the cilia structural gene DNAI1 showed significant upregulation of 3.7- and 2.0-fold at days 7 and 14. By day 28, ciliated cell marker gene expression was higher than in WT KRAS and control samples, but the difference was no longer significant.
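The fold changes and activation percentages quoted in these experiments come from two simple formulas: relative expression by the ΔCt method and activity normalized to a 100% reference. A minimal sketch with hypothetical values; the 2^(−ΔCt) form assumes, as usual, near-100% PCR amplification efficiency.

```python
# Relative expression by the delta-Ct method with 18S rRNA as endogenous control,
# and percent RAS activation with the untreated sample defined as 100%.

def relative_expression(ct_target: float, ct_18s: float) -> float:
    """2^-(Ct_target - Ct_18S); assumes ~100% PCR amplification efficiency."""
    return 2.0 ** -(ct_target - ct_18s)

def percent_activation(a450_sample: float, a450_reference: float) -> float:
    """ELISA A450 of a sample relative to the untreated reference (=100%)."""
    return 100.0 * a450_sample / a450_reference

# Hypothetical example values:
expr_control = relative_expression(ct_target=28.1, ct_18s=12.3)
expr_cse     = relative_expression(ct_target=26.9, ct_18s=12.2)
print(f"fold change: {expr_cse / expr_control:.2f}")
print(f"RAS activity: {percent_activation(0.91, 0.52):.0f}%")
```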
At the same time, squamous cell markers were downregulated in cultures expressing activated KRAS at day 28 compared with cultures expressing WT KRAS or control lentivirus infected cells (−1.8-fold KRT6B and −4.2-fold IVL). Differentiation changes observed at the mRNA level were subsequently validated histologically by staining of ALI sections at day 28. Immunofluorescent staining of the BC marker KRT5 demonstrated no significant difference in the number of positive cells between control, WT and activated KRAS over-expressing cultures. However, significant differences were observed in the numbers of Alcian blue positive (9.1% lenti control vs 7.6% WT KRAS vs 17.3% in activated KRAS-expressing cultures), MUC5B positive (8.3% lenti control vs 8.5% WT KRAS vs 20.9% in activated KRAS-expressing cultures) and SCGB1A1 positive secretory cells.

KRAS signaling regulates multiple cellular processes in the human and murine lung via modulating expression of the SOX family transcription factors (SOX2 and SOX9) [41-43] and via interactions with the NOTCH pathway. However, analysis of SOX family and NOTCH pathway gene expression in our cultures (Additional file 1: Figure S1) revealed no consistent changes accompanying the differentiation phenotypes. These data suggest that KRAS-dependent regulation of normal BC differentiation into a mucociliated epithelium involves downstream signaling mechanisms independent of SOX2, SOX9 and the NOTCH signaling pathway.

Exposure of BC to cigarette smoke extract (CSE) under non-differentiating culture conditions significantly (p < 0.001) increased KRAS activation. Forty-eight hours post CSE exposure, the cells were lysed and the activated form of GTP-bound KRAS quantified by co-immunoprecipitation (co-IP) and subsequent elution using the GST-Raf1-RBD (Rho binding domain) fusion protein and glutathione resin. Western analysis of the input cell lysates used for the co-IP showed equal amounts of total KRAS protein in untreated and CSE-treated cells, whereas the level of activated KRAS was increased by CSE. CSE treatment during differentiation on ALI likewise significantly (p < 0.001) increased levels of activated RAS at ALI day 7 (83% increase), day 14 (32% increase) and day 28, and RAS activation was elevated in the airway epithelium of smokers compared to nonsmokers.

KRAS is a member of the RAS protein family, a class of small GTP-binding proteins with intrinsic GTPase activity that function as molecular switches for multiple cellular processes [1-5]. To assess the role of KRAS signaling in regulating differentiation of the human airway epithelium, BC were cultured on ALI, and KRAS signaling was either suppressed during differentiation via siRNA-mediated knockdown of KRAS expression or activated via over-expression of the constitutively active G12V KRAS mutant.
Suppression of KRAS expression resulted in production of a leaky epithelium that failed to regenerate a fully differentiated mucociliated epithelium and did not survive for 28 days. However, analysis of differentiation following 14 days of culture, when the cells were still viable, demonstrated a thinner epithelium following KRAS knockdown, with decreased numbers of non-mucus-producing secretory (SCGB1A1+) cells, complete loss of mucus-producing secretory (Alcian blue+ and MUC5B+) cells and ciliated (β-tubulin IV+) cells, and a corresponding increase in squamous (IVL+) cells. Conversely, constitutive KRAS activation via over-expression of the G12V KRAS mutant had the opposite effect and produced a thicker epithelium with increased numbers of secretory and ciliated (β-tubulin IV+) cells and a decreased number of squamous (IVL+) cells. Interestingly, over-expression of wild-type KRAS had no effect on BC differentiation, suggesting that feedback mechanisms exist to tightly control KRAS signaling in the presence of elevated protein levels. Overall, these data demonstrate that KRAS signaling is critical for regulating differentiation of BC into a mucociliated epithelium: suppression of KRAS signaling increases squamous cell differentiation at the expense of secretory and ciliated cell differentiation, whereas activation of KRAS signaling promotes secretory and ciliated cell differentiation at the expense of squamous cells.

KRAS-dependent signaling involves multiple downstream signaling pathways, including those mediated by RAF/MEK/ERK and PI3K/AKT, to regulate multiple cellular functions and processes [12-15, 41]. In the murine lung, KRAS activity contributes to patterning along the proximal (SOX2+) and peripheral (SOX9+) axis of the lung [42, 43], and aberrant activation leads to expansion of SOX9+ BC in the proximal airways and subsequent disruption of normal morphogenesis. However, as noted above, the differentiation effects observed in our human BC cultures appear to be independent of these SOX-mediated mechanisms.

Based on the knowledge that KRAS activation plays a critical role in regulating BC differentiation into a mucociliated epithelium and that cigarette smoking is a major driver of airway epithelial remodeling, we assessed the effect of cigarette smoke exposure on KRAS activation. Short-term exposure of BC to cigarette smoke extract (CSE) in vitro resulted in increased KRAS activity compared to untreated cells. Furthermore, CSE treatment of BC during differentiation on ALI culture had the same effect and increased activation of the RAS protein family. These in vitro findings were then validated in vivo using airway epithelial brushings isolated via bronchoscopy from the airways of nonsmokers and asymptomatic healthy smokers, which demonstrated increased RAS activation in the airway epithelium of smokers. Based on these observations, we conclude that cigarette smoke exposure increases KRAS (and RAS protein family) activation in the human airway epithelium. Upon exposure to cigarette smoke, the airway epithelium becomes progressively disordered, which impairs its structure and function [44-49]. In addition to contributing to the process of airway epithelial remodeling, cigarette smoke-induced activation of KRAS may play an important role in oncogenic transformation of the airway, a possibility supported by a recent study by Vaz et al.

In summary, our data demonstrate that KRAS-dependent signaling plays a critical role in regulating the balance of secretory, ciliated and squamous cell differentiation of the human airway epithelium. Suppression of KRAS signaling increased squamous cell differentiation at the expense of secretory and ciliated cell differentiation, whereas activation of KRAS signaling promoted secretory and ciliated cell differentiation at the expense of squamous cells. Furthermore, cigarette smoke exposure results in increased KRAS activity in the airway both in vitro and in vivo. Therefore, development of airway epithelial remodeling in smokers may, in part, be regulated by cigarette smoke-mediated activation of KRAS-dependent signaling in BC.

Additional file 1: Figure S1. Effect of constitutive KRAS activity on expression of SOX family transcription factors and NOTCH pathway genes. (PDF 239 kb)"}
{"text": "Physical processes in the quantum regime possess non-classical properties of quantum mechanics. However, methods for quantitatively identifying such processes are still lacking. Accordingly, in this study, we develop a framework for characterizing and quantifying the ability of processes to cause quantum-mechanical effects on physical systems. We start by introducing a new concept, referred to as quantum process capability, to evaluate the effects of an experimental process upon a prescribed quantum specification.
Various methods are then introduced for measuring such a capability. It is shown that the methods are adapted to quantum process tomography for implementation of the process capability measures and are applicable to all physical processes that can be described using the general theory of quantum operations. The utility of the proposed framework is demonstrated through several examples, including processes of entanglement, coherence, and superposition. The formalism proposed in this study provides a generic approach for the identification of dynamical processes in quantum mechanics and facilitates the general classification of quantum-information processing.

Physical processes in quantum mechanics attract considerable interest on account of their unusual characteristics and potential applications. Investigating how and why these processes cannot be explained using classical physics provides an important insight into the fundamentals of quantum mechanics. By definition, engineering-oriented procedures are physical processes, and such purely physical investigation inspires the question as to how quantum-mechanical effects can be harnessed to perform practical tasks [4]. Moreover, feasible techniques for fully exploring the possibilities and limitations of these tasks based on quantum mechanics are still lacking [5]. The effort to address these issues has revolutionized the conventional methods for engineering physical systems and has greatly advanced the development of quantum technology [8].

Quantum information processing [9] provides a new paradigm for the emerging generation of quantum technologies, such as quantum computation [10] and quantum communication [11]. The underlying manipulations of quantum systems are all derived from dynamical processes in quantum mechanics, and range from gate operations [12] to information storage and protection against the effects of noise [13], from the creation of entanglement to teleportation [15] and entanglement swapping [16]. Identifying the elementary classes of quantum dynamical processes, therefore, is not only significant in its own right, but is also a fundamental goal in uniting work on quantum information theory [9].

Considerable progress has been made in understanding quantum dynamical processes, in particular in identifying the quantum properties of the output states [24]. However, a fully comprehensive analogue for dynamical processes has yet to be found. Moreover, despite the success of theoretical methods in describing the dynamics of quantum systems, such as the quantum operations formalism [3], the problem of characterizing the prescribed quantum-mechanical features of dynamical processes in a quantitatively precise manner has yet to be resolved. As a result, it is presently intractable to quantitatively distinguish the different processes in quantum mechanics.

Motivated by this problem, and driven by the desire to ultimately identify all quantum dynamical processes, we present herein a method for quantifying the extent to which a process behaves quantum mechanically and affects a system. We commence by introducing a new concept, referred to as the quantum process capability, for evaluating the prescribed quantum-mechanical ability of a process. We then introduce two capability measures and a task-oriented capability criterion, which demonstrate that such process evaluation can be quantitatively determined through experimentally feasible means.
We show that with the proposed tools, it is possible, for the first time, to quantitatively identify several fundamental types of dynamical process, including processes of entanglement, coherence and superposition.

We begin by systematically characterizing the physical process acting on a system using the quantum operations formalism. In doing so, we assume that the system of interest and its environment are initially in a product state. Furthermore, it is supposed that, after the physical process, the density matrices of the output system states can be measured. These experimentally measurable quantities, conditioned on different initial system states, are obtained by quantum process tomography. In such a way, the physical process acting on the system can be fully described by a positive Hermitian matrix, referred to hereafter as the process matrix, χexpt, which specifies the manner in which a system evolves from an arbitrary input state.

When a process has the ability to show the quantum-mechanical effect on a system prescribed by the specification, the process is defined as capable; otherwise it is denoted as incapable. An incapable process is an operation χI that cannot produce the prescribed effect, and the set of incapable processes is closed under composition and mixing: (P1) if a process is composed of two cascaded incapable processes, the result is incapable; (P2) if a process is a linear combination (convex mixture) of incapable processes, it is incapable as well. These properties imply that manipulating incapable processes inevitably results in another incapable process.

To place the basic definitions of capable and incapable processes into a wider context, we now introduce a measurable property of a process called the quantum process capability, which provides a quantitative understanding of how well a process realizes the prescribed quantum-mechanical effect. The quantum process capability of a process can be quantitatively evaluated using different tools according to the type of specification or the subsequently used experimental process. The following discussions propose two methods for evaluating the quantum process capability, namely the capability measures and the task-oriented capability criterion.

We desire to have a tool that can faithfully reflect the features of capable and incapable processes and reliably show how large the quantum process capability of a process is. Such a capability measure should satisfy: (MP1) it vanishes if and only if the process is incapable; (MP2) since incapable processes consist of only incapable ingredients by definition [see (P1) and (P2)], one cannot increase the capability of a process by incorporating additional incapable ones, i.e., (2a) the capability measure does not increase under composition with incapable processes and (2b) it reflects the non-increasing capability of a process under stochastic incapable operations; (MP3) the capability measure is convex, meaning that it does not increase under probabilistic mixing of processes.

To show how the quantum process capability can be concretely quantified, we now introduce two different types of capability measure, which both satisfy properties (MP1)-(MP3) defined above (see Methods section). M1. Capability composition α: the minimum weight of a capable process required in any decomposition of the experimental process; in the practical examples presented later in this study, α is evaluated by semidefinite programming. M2. Capability robustness β: the minimum amount of noise that must be admixed to the experimental process to render it incapable. In practical cases, both measures can be computed directly from the process matrix. When the specification is task-oriented, comparison of the process matrix with a target process via the process fidelity provides a task-oriented capability criterion [11].
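The defining equations for (P1), (P2) and the two measures did not survive extraction. The LaTeX block below is a plausible reconstruction based on the surrounding prose and on the standard composition/robustness conventions in the resource-theory literature; it is an assumption, not a verbatim restoration of this paper's formulas.

```latex
% Closure of the incapable set \mathcal{I}:
\text{(P1)}\quad \chi_{\mathcal{I}'} \circ \chi_{\mathcal{I}} \in \mathcal{I},
\qquad
\text{(P2)}\quad \sum_i p_i\, \chi_{\mathcal{I}_i} \in \mathcal{I},
\quad p_i \ge 0,\ \sum_i p_i = 1.

% M1, capability composition: minimum capable weight in a two-part decomposition,
\alpha \;=\; \min_{a,\ \chi_{\mathcal{C}},\ \chi_{\mathcal{I}}}
\Big\{\, a \;\Big|\; \chi_{\mathrm{expt}} = a\,\chi_{\mathcal{C}} + (1-a)\,\chi_{\mathcal{I}} \,\Big\}.

% M2, capability robustness: minimum noise admixture that makes the process incapable,
\beta \;=\; \min_{\chi'}
\Big\{\, b \ge 0 \;\Big|\; \frac{\chi_{\mathrm{expt}} + b\,\chi'}{1+b} \in \mathcal{I} \,\Big\}.
```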
Let the following capable processes be used to demonstrate the formalism described above.

E1. Non-classical dynamics. As defined in ref. 30, a non-classical dynamics is a dynamical process that cannot be explained using a classical picture: the initial system is considered a physical object with properties satisfying the assumption of classical realism, and the system evolves according to classical stochastic theory. In the present context of quantum process capability, non-classical dynamics is capable of going beyond this generic model of classical dynamics, in contrast to incapable classical processes. Non-classical dynamics can be quantified and used as the requirement for the non-classical manipulation of a system [30], e.g., the fusion of entangled photon pairs [31]. The quantifications of non-classical processes introduced in ref. 30 reveal the capability of non-classical dynamics, and include the capability composition α and the capability robustness β.

E2. Entanglement generation. Creating entanglement is a crucial dynamical process in both quantum mechanics and quantum-information processing [23], and is the main power behind quantum technology [7,10]. However, the means to quantify the ability of a process to generate the entanglement of two qubits has remained unclear. Let entanglement creation be defined as a capable process. An incapable process then guarantees that if the input states are separable states, the output states are separable states as well; in other words, the PPT criterion [33] stipulates that an output state obtained from a separable input must have a positive partial transpose.

E3. Coherence creation and preservation. Quantum coherence is one of the main features of quantum systems, and coherence creation and preservation are essential in performing state preparation and manipulation in quantum engineering [13]. They represent two different abilities of capable processes. If the density matrix of a state is diagonal in a prescribed orthonormal basis, the state is incoherent [21]; a mixture consisting only of the basis states is of this form. Regarding the capability of coherence creation, an incapable process maps incoherent input states only to incoherent output states. Regarding the capability of coherence preservation, an incapable process destroys coherence altogether: all of the output states are incoherent states, regardless of the input state.

E4. Superposition of quantum states. The superposition principle of quantum mechanics [34] describes how any two or more quantum states can be superposed together to form another valid quantum state (and vice versa). The superposition of quantum states therefore generalizes the capability of coherence creation in the orthonormal basis [24]. For a capable superposition process, the capability measure has a positive value.

We now provide an explicit example illustrating how a dynamical process of interest can be identified upon the prescribed quantum specification. We consider a composite system consisting of two qubits. Let the qubits be coupled via an interaction, equivalent to the quantum Ising model, whose Hamiltonian generates a composite two-qubit evolution representable by a unitary transform equivalent to a controlled-Z gate. Moreover, assume that one of the qubits is depolarized at a rate γ; the depolarization of a qubit can be represented in operator-sum (Kraus) form. For γ = 0.02, the computed capability measures oscillate with a period of π in the evolution time, peaking where the evolution realizes the entangling Z-gate operation.

Two aspects of the measures deserve emphasis. The capability composition α indicates whether the experimental process can be described by the model of an incapable process at all, whereas the capability robustness β indicates how close the process is to an incapable one, in the sense of how large the minimum amount of admixed noise must be to erase the capability. In addition, superposition can be performed either by purely unitary evolution or by the quantum jumps of depolarization.
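A toy numerical sketch of this example is easy to set up. Assumptions beyond the text: the Ising coupling is taken as H = g σz⊗σz with an arbitrary coupling strength g, depolarization is applied as a simple mixing channel on one qubit, and the l1-norm of coherence of the output is used as a rough diagnostic of superposition; this is an illustration, not the authors' calculation.

```python
import numpy as np
from scipy.linalg import expm

# Two coupled qubits: H = g * sz (x) sz (assumed Ising-type form); qubit 2 depolarized.
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
g = np.pi / 4                       # assumed coupling strength
U = expm(-1j * g * np.kron(sz, sz)) # unitary evolution for unit time

def depolarize(rho, gamma):
    """Depolarize qubit 2: mix toward (Tr_2 rho) (x) I/2 with weight gamma."""
    rho4 = rho.reshape(2, 2, 2, 2)                 # axes (i1, i2, j1, j2)
    rho_q1 = np.trace(rho4, axis1=1, axis2=3)      # partial trace over qubit 2
    return (1 - gamma) * rho + gamma * np.kron(rho_q1, I2 / 2)

def l1_coherence(rho):
    """Sum of absolute off-diagonal elements (l1-norm coherence)."""
    return np.abs(rho).sum() - np.trace(np.abs(rho)).real

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.kron(np.outer(plus, plus), np.outer(plus, plus))   # |+>|+> input

rho_t = depolarize(U @ rho0 @ U.conj().T, gamma=0.02)        # gamma from the text
print(l1_coherence(rho0), l1_coherence(rho_t))
```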
The latter phenomenon, superposition generated by depolarizing quantum jumps, can be seen in the computed time evolution of the measures. Finally, since superposition can be implemented using incoherent operations, the capability of superposition does not show how efficiently the coherence of the input states is converted by the process.

The quantum process capability framework introduced above possesses several important features for practical utility and extensions, as described in the following.

(D1) Since constructing the process matrix is experimentally feasible, the process capability can be readily quantified in various present experiments. The quantum operations formalism underlying the process matrix is a general tool for describing the dynamics experienced by closed or open quantum systems in a wide variety of physical scenarios. Our formalism is therefore applicable to all physical processes described by the general theory of quantum operations, including, but not limited to, the fundamental processes postulated in quantum mechanics [9], the dynamics of energy transfer systems [37], the task-oriented processes associated with quantum information [31], and the atomic-physics and quantum-optics experiments on a chip [38].

(D2) In addition to the capable processes illustrated above, the present framework can also be used to explore other types of quantum process capability, including the creation of genuine multipartite entanglement [39], Einstein-Podolsky-Rosen steering [40], Bell non-locality [20], and genuine multipartite quantum steering [42].

(D3) Process identification and classification. As distinct process capabilities are given, dynamical processes can be identified and classified accordingly using the capability measures, such as the capability robustness. This is helpful in uniting existing works on quantum information under a given type of quantum process capability, for example, the preservation of quantumness [30] or the coherence of quantum information [13].

(D4) Benchmark for process engineering. The task-oriented capability criterion sets a benchmark level that engineered processes should exceed [44]. Alternatively, the capability of entanglement generation also provides an understanding of the efficiency of coherence conversion. The framework proposed herein thus facilitates a more comprehensive understanding of the characteristics of quantum dynamics.

(D5) Insightful description of processes. A given quantum process capability may be linked in some way with other concepts. For example, the capability of non-classical dynamics can be used to reveal the non-Markovianity of dynamical processes [46].

(D6) Maximum extraction of resources from processes. The quantum effects of a process on the system inputs determine the maximum amount of resources, such as entanglement and coherence [22], that can be extracted from a given process.

In conclusion, we have developed a novel formalism for performing the quantitative identification of quantum dynamical processes. The concept of quantum process capability and its various quantifications have been introduced to evaluate the prescribed quantum-mechanical features of a dynamical process. The ability to quantify a process in terms of its quantum process capabilities makes it possible to discriminate quantitatively between different dynamical processes.
Overall, the rigorous framework of quantum process capability proposed in this study provides the means to go beyond the usual analysis of state characteristics and to approach the goal of uniting work on quantum information theory. Future studies will aim to improve the performance and scalability of the process tomography [47] underlying the proposed formalism, including scenarios where the measurement outcomes are continuous and unbounded, e.g., as for nanomechanical resonators [48]. It is anticipated that the enhanced framework will thus facilitate the novel recognition and classification of physical processes with quantum process capability.

In the Methods, the capability composition measure α is shown to possess the three properties of capability measures. (MP1) follows directly from the definition of capability composition. For (MP2), property (2a) can be proven from properties (2b) and (MP3), so one starts by proving (2b): considering a process consisting of the experimental process composed with an incapable one, χexpt∘χI, any optimal decomposition of χexpt carries over to the composed process, since each component composed with an incapable process remains, respectively, capable or incapable; the capable weight therefore cannot increase. (MP3), convexity, follows because a probabilistic mixture of decompositions is itself a valid decomposition. From (2b) and (MP3), property (2a) follows. The properties (MP1)-(MP3) also hold for the time evolution under the noise effect considered in the two-qubit example.

The capability robustness measure satisfies the same properties. (MP1) follows from the definition of capability robustness. For (2b), since the quantum process capability cannot be increased by applying additional incapable processes, the minimum amount of noise that must be added to the composed process χexpt∘χI to render it incapable can be no larger than that required for χexpt itself. (MP3) follows from the convexity of the noise admixture, and (2a) then follows from (2b) and (MP3). Thus, the capability robustness measure β has the three properties of capability measures, (MP1)-(MP3).

The idea underlying the capability composition measure highlights whether a process can be described by the model of an incapable process at all; such a feature indicates that this measure is unable to reveal the distinction between processes in general. By contrast, the idea of how close a process is to an incapable process, which underlies the capability robustness measure, makes the difference between processes visible.

We next present the constraint sets defining the incapable processes for each specification, for a two-qubit system.

E1. Non-classical dynamics. In general, a classical process can be described by its classical states and their evolution. The classical states of the input systems satisfy the assumption of realism and can be represented by realistic sets specifying the ith physical property of the jth classical object. The evolutions of these states are described by transition probabilities μ, from which the final states can be reconstructed as density operators; this yields the constraint set for the incapable processes of non-classical dynamics [30].

E2. Entanglement generation. To derive the constraint set for the incapable processes of entanglement generation, note that all separable input states must be mapped to separable output states. The constraints follow from the PPT criterion [33]: the first criterion ensures that the process matrix describes a physical process, while the remaining constraints ensure that the output state obtained from any separable input state has a positive partial transpose. To construct the process matrix using process tomography, we first prepare the input states; the above specification is also applicable to superposition.
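Such constraint sets translate directly into semidefinite programs. As a toy illustration, and anticipating the coherence-creation specification discussed next, the sketch below estimates the robustness β of a Hadamard unitary under the simplifying assumption that an incapable process is one whose Choi matrix maps the computational-basis states to diagonal (incoherent) states. This is an illustrative construction, not the authors' code, and the modeling of the incapable set is deliberately reduced.

```python
import numpy as np
import cvxpy as cp

# Choi matrix (input system first) of the Hadamard unitary: J = sum_ij |i><j| (x) H|i><j|H.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
J = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        E = np.zeros((2, 2)); E[i, j] = 1.0
        J += np.kron(E, H @ E @ H)

# Robustness: min b such that (J + M)/(1+b) is an incapable Choi matrix,
# where M = b * J' for some CPTP J' (so M >= 0 with block traces b on the diagonal).
M = cp.Variable((4, 4), hermitian=True)
b = cp.Variable(nonneg=True)
S = M + J  # the (unnormalized) noisy process

def block_trace(X, i, j):   # trace of the 2x2 output block for inputs |i><j|
    return X[2 * i, 2 * j] + X[2 * i + 1, 2 * j + 1]

constraints = [M >> 0,
               block_trace(M, 0, 0) == b, block_trace(M, 1, 1) == b,
               block_trace(M, 0, 1) == 0, block_trace(M, 1, 0) == 0,
               # "incapable" here: basis inputs |0>, |1> must give diagonal outputs
               S[0, 1] == 0, S[2, 3] == 0]

prob = cp.Problem(cp.Minimize(b), constraints)
prob.solve(solver=cp.SCS)
print("robustness beta ~", b.value)
```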
E3. Coherence creation and preservation. To construct the process matrix using process tomography, we first prepare a complete set of input states. The basis states can be decomposed into linear combinations of these input states, so the output states of the basis states, and hence the constraints on the incapable process, can be expressed in terms of the tomographically measured outputs. The constraint set for the incapable process of coherence creation requires that every incoherent input be mapped to an incoherent output. By contrast, the set of constraints for the incapable process of coherence preservation requires that all output states be incoherent, regardless of the input. The number of variables that needs to be optimized by SDP is related to the dimension of the output states. For the quantum Ising model shown in the example above, both capabilities can be tracked throughout the evolution.

E4. Superposition of quantum states. The superposition-free states are the basis states themselves, and mixtures of basis states remain superposition-free [24]. To construct the process matrix using process tomography, we again prepare the input states, decompose the basis states into linear combinations of the input states, and express the output states of the basis states accordingly; the constraints on the incapable process follow in the same manner.

In order to evaluate the coherence conversion efficiency of a process, the coherence of the output states [22] can be compared with the capability measures. For the quantum Ising model, we note that these values are exactly the same as those for the entanglement generation robustness. Notably, however, the superposition of the output states does not simply track α and β as they decrease. As a result, the capability of superposition cannot be used to examine the coherence conversion efficiency; the conversion efficiency from coherence to superposition for a process must be assessed separately. Combining the defining equations, we conclude that the capability measures quantify the maximum extraction of resources from a process."}
{"text": "Attention Deficit/Hyperactivity Disorder (ADHD) is the most common neurodevelopmental alteration in childhood. The dopamine transporter gene DAT1 has long been implicated, because modifications in the expression and/or function of this gene may well lead to ADHD symptoms (Bannon). Furthermore, the DAT1 gene has a 40 bp variable number tandem repeat (VNTR) in the 3'-untranslated region, which exists in 3-13 copies; the 9- and 10-repeat alleles are the most common. Although not all literature agrees, the 10-R allele has often been implicated in the presentation of ADHD.

It should be considered that the fidelity of maintenance of CpG methylation across cell divisions has been found to be very high in hemi-methylated DNA; on the other hand, the two strands need not carry identical methylation. We selected 14 ADHD patients for which we assessed the other strand, in addition to the previously assessed one. These new data suggest that not only M1, M2, and M6, located on the same strand of the gene, are important: the M1-cos and M6-cos sites located on the opposed strand are also crucial. While increased M1 (or M2\M6) methylation on the strand of the gene is associated with ADHD (or its relief), very intriguingly the same sites tend to be demethylated on the opposed strand. This is the first time (at least to our knowledge) that the opposed strand is claimed as important for the role of a given gene within a given pathology. Whereas a CpG residue is claimed important when its methylation is increased, the corresponding CpG residue on the opposed strand might be claimed important when its methylation is conversely decreased.

We reasoned that, if crucial CpG positions exist, these shall be ON (methylated) on one (gene) strand and moreover OFF (demethylated) on the other (opposed) strand. The expected frequency of CpG dinucleotides in any DNA tract is 1/16, based on simple probability: it is unlikely that all of them are equally crucial. We propose that the identification of a crucial CpG, able to cause a given phenotype, may exploit its negative correlation with its complementary opposite. For the first motif CGGCGGCGG the best candidates are M1 and M2; for the second motif CGCG the best candidate is M6.
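In practice, this screening amounts to a correlation test across patients: a crucial CpG should show methylation on the gene strand that tracks ⟨100 − methylation⟩ of its complementary CpG on the opposed strand. A minimal sketch with synthetic per-patient percentages; the column names are hypothetical, and real values would come from bisulfite sequencing of both strands.

```python
import pandas as pd

# Synthetic methylation percentages for 14 patients (hypothetical values).
df = pd.DataFrame({
    "M2":     [78, 64, 81, 70, 59, 88, 73, 66, 75, 69, 84, 62, 71, 77],
    "M1_cos": [20, 35, 17, 28, 40, 11, 25, 33, 22, 30, 14, 37, 27, 21],
})

# A crucial site should be ON (methylated) on the gene strand and OFF on the
# opposed strand, i.e. M2 should correlate with (100 - M1_cos).
r = df["M2"].corr(100 - df["M1_cos"])
print(f"Pearson r between M2 and (100 - M1_cos): {r:.2f}")
```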
By doing this kind of reasoning, we realized that the relevance of M2 and M1-cos, suggested by their being highly correlated, is further supported when demethylation, i.e., the quantity (100 - methylation), is considered for any CpG-cos.

WA, EP and CD'A hold the following patent applications: WA, Laviola G, EP, CD'A, Patent Application in ITALY at no. 102016000129938 (22-December-2016); European Patent Application at no. 17830021.6 (21-December-2017). Granstrem O, WA, Laviola G, Porfirio MC, Curatolo P, "Biomarkers for validation of ADHD (Attention Deficit and Hyperactivity Disorder) diagnosis and monitoring of therapy efficacy", full patent PN810701WO, Int. Application PCT/EP2013/066845, Publication International Number WO/2014/023852 (10-August-2013). The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "The olive tree (Olea europaea L.) was one of the first plant species in history to be domesticated. Throughout olive domestication, gene expression has undergone drastic changes that may affect tissue/organ-specific genes. This is an RNA-seq study of the transcriptomic activity of different tissues/organs from adult olive tree cv. “Picual” under field conditions. This analysis unveiled 53,456 genes with expression in at least one tissue, 32,030 of which were expressed in all organs and 19,575 were found to be potential housekeeping genes. In addition, the specific expression pattern in each plant part was studied. The flower was clearly the organ with the most exclusively expressed genes, 3529, many of which were involved in reproduction. Many of these organ-specific genes are generally involved in regulatory activities and have a nuclear protein localization, except for leaves, where there are also many genes with a plastid localization. This was also observed in stems to a lesser extent. Moreover, pathogen defense and immunity pathways were highly represented in roots. These data show a complex pattern of gene expression in different organs, and provide relevant data about housekeeping and organ-specific genes in cultivated olive.

The olive tree (Olea europaea L.) was one of the first plant species to be domesticated in history. The demand for olive oil and extra virgin olive oil (EVOO), which is the main product obtained from this crop, is continuously increasing, and its health benefits are well-established. Among the different varieties, the EVOO from cultivar “Picual” has exceptional organoleptic properties and marked oxidative stability due to its high content of polyphenolic compounds.

The 435 genes showing an expression above the basal level only in fruit, and below the basal level in the other studied plant organs, were analysed; they included genes related to oxidation and cutin biosynthesis, AAE ligase genes, and a gene encoding the beta-hydroxyacyl-acyl carrier protein (ACP) dehydratase FabZ, which catalyzes the dehydration of short- and long-chain beta-hydroxyacyl-ACPs to produce unsaturated fatty acids. Among the flower-specific genes, CALS12, two CALS11, and three MS1 genes appeared. MS1 genes encode the plant homeodomain (PHD) finger protein MALE STERILITY 1; these transcription factors activate anther and post-meiotic pollen development and maturation, and control tapetal development. CALS genes encode callose synthases, which are involved in sporophytic and gametophytic development.
They are required for forming the callose wall by separating the tetraspores of the tetrad during pollen formation. The largest number of organ-specific genes was found in flowers, as 3529 genes showed an expression above the basal level in flowers and below the basal level in the other analyzed plant organs. Most of these genes are related to reproductive development.

In leaves, 690 specific genes were found. A DFRA gene was found that encodes dihydroflavonol 4-reductase, involved in flavonoid biosynthesis. A LCY1 gene, which encodes a chloroplastic lycopene beta cyclase involved in β-carotene and β-zeacarotene biosynthesis, was also detected in this set of leaf-specific genes. There was also a set of leaf-specific genes implicated in the response to external stimulus. In this case, two genes involved in leaf circadian rhythm were included in the leaf-specific gene set. One of them is an EARLY FLOWERING 3 (ELF3) gene that encodes a circadian clock-regulated nuclear protein in Arabidopsis, and the other is a CRY1 gene that encodes a cryptochrome/DNA photolyase, a FAD-binding domain protein. Cryptochromes are blue-light photoreceptors involved in the circadian clock in plants and animals.

The selection of meristem-specific genes produced a set of 768 genes.

A total of 971 root-specific genes were found. Defense genes were heavily represented among them, including several MLP146, three MLP28, two MLP423, one MLP31, and one MLP43 genes. These are defense genes, and MLP43 confers drought tolerance in Arabidopsis. Additionally, this set included further MLP genes and the disease-resistance RPP8 gene.

The stem had the fewest organ-specific genes, represented by only 229 genes with log2 RPKM > 1, an 8-fold change above the basal level. Among them were a gene involved in freezing tolerance and cold acclimation, and a MYB113 gene, which is likely implicated in regulating the development process or stress response.

It can be concluded that the genetic and metabolic regulatory processes that control floral development require the involvement of at least 3529 genes together, including three MS1 genes and 12 sporophytic and gametophytic development CALS genes, which are specifically expressed in flowers. In fruit, there were fewer specific genes (435), with some genes involved in fatty-acid biosynthesis and also in pigmentation. In general, many of these organ-specific genes are involved in regulatory activities and show nuclear protein localization, except for leaves, in which the coded proteins of many genes were located at the plastid, which also happens in stems to a lesser extent. In leaves, 690 tissue-specific genes were found, including some genes involved in the biosynthesis of flavonoids and carotenes with a marked presence in olive leaves, as well as genes involved in the circadian rhythm. Very few stem-specific genes with a moderate or high expression were found, including a PKS1 gene that could play a role in controlling stem phototropism and/or anti-gravitropism. Pathogen defense pathways were very well represented in roots. Interestingly, the genes that negatively regulated the PTI were specifically expressed in roots, which suggests that the roots of healthy plants had at least partially repressed the PTI, but had also partially induced root-specific defense genes. Meristems presented a wide range of processes in the 768 tissue-specific genes, which is consistent with being a very undifferentiated tissue.
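The specificity criterion used throughout these results, expression above the basal level (log2 RPKM > 1) in exactly one organ and below it in all others, can be screened mechanically. A minimal sketch in Python/pandas, assuming a hypothetical gene-by-organ matrix of log2 RPKM values; the 8-fold (+3 in log2) margin is one reading of the stated criterion.

```python
import pandas as pd

BASAL = 1.0  # basal expression level: log2 RPKM > 1, as defined in the text

# Hypothetical matrix: rows = genes, columns = organs, values = log2 RPKM.
expr = pd.read_csv("log2_rpkm.csv", index_col="gene")
organs = ["fruit", "flower", "leaf", "meristem", "root", "stem"]

specific = {}
for organ in organs:
    others = [o for o in organs if o != organ]
    # Above basal (by an 8-fold margin) in this organ, below basal elsewhere.
    mask = (expr[organ] > BASAL + 3) & (expr[others] <= BASAL).all(axis=1)
    specific[organ] = expr.index[mask].tolist()

# Housekeeping candidates: expressed above the basal level in every organ.
housekeeping = expr.index[(expr[organs] > BASAL).all(axis=1)]

for organ, genes in specific.items():
    print(organ, len(genes))
print("housekeeping candidates:", len(housekeeping))
```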
A full list of all the gene profiles is provided, together with the expression level data for all the genes. These data might be an extremely useful tool for seeking out future breeding genetic markers.The RNA-seq transcriptomic analysis of the fruit, flower, leaf, meristem, root, and stem samples from adult healthy plants revealed the expression of 53,456 unique genes, which means that there are nearly 25,000 genes in the genome that might be expressed in other developmental stages in response to biotic or abiotic stresses, or simply silenced. A full complex gene expression pattern is provided. Of these 53,456 unique genes, many possible housekeeping genes were found as they were expressed at moderate or high levels in the six analyzed plant organs. Additionally, organ-specific genes were defined. Flowers showed a significantly large number of specific genes (3529), many of which were related to reproduction, including three male sterility"} +{"text": "The main challenge of advertising is to catch consumers\u2019 attention and evoke in them positive attitudes to consequently achieve product preference and higher purchase intentions. In modern advertising, visual metaphors are widely used due to their effects such as improving advertising recall, enhancing persuasiveness, and generating consumers\u2019 positive attitudes. Previous research has pointed out the existence of an \u201cinverted U-curve\u201d that describes a positive relationship between the conceptual complexity of metaphors and consumers\u2019 positive reactions to them, which ends where complexity outweighs comprehension. Despite the dominance of visual metaphors in modern advertising, academic research on this topic has been relatively sparse. The inverted U-curve pattern has been validated regarding ad appreciation, ad liking, and purchase intention by using declarative methods. However, at present, there is no evidence of consumers\u2019 neurophysiological responses to visual metaphors included in advertising. Given this gap, the aim of this research is to assess consumer neurophysiological responses to print advertisements that include visual metaphors, using neuroscience-based techniques. Forty-three participants (22W\u201321M) were exposed to 28 stimuli according to three levels of visual complexity, while their reactions were recorded with an electroencephalogram (EEG), eye tracking (ET), and galvanic skin response (GSR). The results indicated that, regardless of metaphor type, ads with metaphors evoke more positive reactions than non-metaphor ads. EEG results revealed a positive relationship between cognitive load and conceptual complexity that is not mediated by comprehension. This suggests that the cognitive load index could be a suitable indicator of complexity, as it reflects the amount of cognitive resources needed to process stimuli. ET results showed significant differences in the time dedicated to exploring the ads; however, comprehension doesn\u2019t mediate this relationship. Moreover, no cognitive load was detected from GSR. ET and GSR results suggest that neither methodology is a suitable measure of cognitive load in the case of visual metaphors. Instead, it seems that they are more related to the attention and/or emotion devoted to the stimuli. Our empirical analysis reveals the importance of using neurophysiological measures to analyze the appropriate use of visual metaphors and to find out how to maximize their impact on advertising effectiveness. 
Marketing scholars and practitioners are continuously facing the challenge to find out how to enhance advertising effectiveness. The decrease of traditional advertising media such as TV and newspapers, the rise of new ones like mobiles or videos, and the growth of interactive and targeted advertising represent a huge limitation to print advertising, characterized by a static image. Given this limitation, graphic print advertising must focus on seeking the most optimal design to catch consumers\u2019 attention and evoke positive attitudes, in order to trigger a higher preference for the products and, consequently, higher purchase intentions.Advertisers and academics have analyzed the key factors that influence effective print advertisement such as element location , advertiImages themselves can be extremely complex, as they are \u201ccapable of representing concepts, abstractions, actions, metaphors and modifiers\u201d , p. 253.A rhetorical figure is an artful deviation relative to audience expectation , which cAmong the visual rhetoric figures, metaphors are the most commonly used because, according to the theory, they can formulate, sustain, or modify the attention, perceptions, attitudes, or behaviors of their audiences ; they alScholar research defines metaphors as comparisons between two things that are originally different in nature but have something in common , and wheConsumer studies have concluded that advertisements with complex layouts evoke positive attitudes , 2014, hAs mentioned, previous studies have consistently reported that advertisements with complex layouts result in audiences\u2019 more positive attitudes than advertisements based on stand-alone images , 2014. OIt seems that decoding the message increases the subject\u2019s sense of pleasure and decreases the sense of tension, leading to the enhancement of the subject\u2019s attitude toward the ad (Aad) and to iAccording to The pleasure evoked by complex visual images used in advertising has been studied from different perspectives. Previous studies suggest that if metaphors demand too much or too little cognitive processing effort, consumers may opt out, and appreciation will decrease; thus, advertisement appreciation follows the pattern of the aforementioned inverted U-curve . In the Despite of the valuable findings of the aforementioned research, empirical evidence for the inverted U-curve is still relatively scarce, and its validity has not yet been proven regarding Aad, a concept extensively examined that reveals consumers\u2019 precise perceptions and impressions toward advertisement designs . NeitherHypothesis 1:The effects of metaphors on (a) Aad, (b) purchase intention, and (c) preference follow the inverted U-curve pattern according to which there is a positive relationship between complexity and positive feelings until a tipping point is reached where complexity exceeds comprehension.According to In cognitive psychological terms, elaboration \u201cindicates the amount, complexity, or range of cognitive activity occasioned by a stimulus\u201d . p 39. W;Jeong, 2008;Chang and Yen, 2013), where it is stated that more complex visual figures lead to more cognitive elaboration. 
Such higher elaboration is a consequence of comprehension efforts, and it manifests as an enhanced memory of the ad , and especially those based on the recording of event-related brain potentials (ERPs)\u2013an EEG methodology that offers great insights into processing mechanisms with millisecond precision \u2013suggest In the same line, Pileliene and Grigaliunaite (2016) analyzed the allocation of attentional resources to process advertising with complex layouts , through the use of P300, a component that provides information about the neural activity of cognitive operations . The obtMoreover, regarding temporal and spatial studies of the brain, the EEG study developed by On the other hand, studies using eye tracking (ET) revealed that the time devoted to exploring the stimuli could be indicative of the cognitive processes involved in comprehending the metaphors, as experimental studies showed that extra time is needed to comprehend more complex metaphors . BesidesFinally, some authors have suggested that galvanic skin response (GSR) seems to be a suitable tool for measuring cognitive activity and have pointed out the correlation between GSR features and cognitive functions and more specific cognitive workload .Based on the preceding discussion, we can identify a positive relationship between complexity and elaboration that can be represented as a higher cognitive load as metaphors increase their complexity. According to literature, the cognitive load can be measured with EEG, analyzing oscillations of alpha\u2013theta bands; with ET, analyzing the time devoted to exploring the visual metaphor; and by analyzing GSR features. Hence, it is hypothesized:Hypothesis 2:To the extent that a visual metaphor increases its difficulty, the subject will have a higher cognitive load. This situation will be reflected in (a) a longer time to explore the advertisement, (b) a higher index of EEG cognitive load, and (c) a higher index on GSR activation.Forty-three undergraduate students voluntarily participated in the study in June 2018. The mean age was 23.3 years with a standard deviation of 2.8 years. The participants were recruited using convenience sampling. All participants were right-handed, healthy people with normal or corrected-to-normal vision and were free of any hearing problems. All participants provided signed consent before participating and received monetary compensation at the end of the session.The EEG is a measurement of the whole sphere of brainwave activity emerging in various cortical areas, which helps to understand the way the brain responds to various stimuli. EEG is a non-invasive instrument that provides information from areas underneath the cortex and, combined with other instruments, may provide very accurate results on a subject\u2019s response to a marketing stimulus .Cerebral activity was recorded using the Bitbrain Versatile EEG with 16 channels at a sampling rate of 256 Hz, while impedances were kept below 5 k\u03a9. For the experiment, we used 12 electrodes placed by following the International 10\u201320 system.This biometric technique is based on the relationship between human eye movements, visual attention, and information acquisition, with the latter two both being closely related to higher-order cognitive processes . ET has Consumers\u2019 behavior is measured with an ET technique by recording either the number of fixations or dwell time of the eyes during an individual or group exposure to external stimuli. 
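Dwell-time metrics of this kind reduce to counting gaze samples that fall inside a region of interest. A minimal sketch in Python/pandas, assuming a hypothetical export of screen-coordinate gaze samples; the column names and the rectangular AOI are illustrative.

```python
import pandas as pd

SAMPLING_RATE = 60  # Hz, the gaze rate of the Tobii X2-30 used in this study

def time_in_aoi(gaze: pd.DataFrame, aoi: tuple) -> float:
    """Seconds of gaze spent inside a rectangular AOI (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    inside = gaze["x"].between(x0, x1) & gaze["y"].between(y0, y1)
    return inside.sum() / SAMPLING_RATE

# Hypothetical usage: one recording per participant-stimulus pair.
gaze = pd.read_csv("gaze_samples.csv")            # columns: x, y (pixels)
print(time_in_aoi(gaze, (400, 300, 1520, 780)))   # AOI over the metaphor
```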
The specific ET device used in the present study was a Tobii X2-30 Eye-Tracker Compact Edition, a screen-based eye tracker capturing gaze data at 60 Hz.The GSR is defined as a change in the electro-physiological properties of the skin due to sweat gland function. GSR provides an indication of changes in the human sympathetic nervous system (SNS) and is wNeurophysiological measurements comprise cognitive load, time in AOI and arousal. These measures are described following and the instruments used to measure them are related in The EEG technique can be used to obtain many different psychological metrics such as cognitive workload. Studies on the area have found that EEG power in the theta and alpha frequency range is related to cognitive performance . In factBased on previous studies , in the The ET technique has been used as a direct measure of attention by analyzing the number of fixations on specific areas, the viewing time, or the time that users take to reach each area and alsoTaking into account that there is evidence regarding \u201ca sufficiently close connection between time spent fixating on display items and the amount of cognitive processing\u201d , p. 1237To get the time in AOI, we previously defined the AOIs on each image by selecting the area of the metaphor. In order to avoid the bias derived from having a different number of images according to the type of metaphor, we defined as \u201carea of interest\u201d the same zone (same size and shape) for all four conditions in each set.Previous studies have found that the GSR signal represents a suitable measure for detecting emotional responses but also for differentiating between stress and cognitive load . BesidesA computer-based questionnaire was also applied, through an Internet platform, to obtain declarative Aad, purchase intention, preference, and perceived complexity see .(1)Aad was measured adopting (2)Purchase intention was measured as in previous studies , on a se(3)Participants ranked their preference from 1 (the most preferred stimulus) to 4 (the least preferred stimulus).(4)Perceived complexity was measured as in A within-subjects research was conducted to assess consumer responses to print advertisements that include visual metaphors. Twenty-eight advertisements were developed in total, seven sets of four print advertisements for seven different product categories and a control condition. Each set corresponded to one of the product categories selected on the basis of a focus group performed with eight people from 18 to 35 years, in which age range were found high levels of familiarity with categories of food, house, and personal care.All images used were full-color, and product images were previously used in real printed or Internet ads. However, in order to avoid familiarization bias , althougTaking into account that metaphorical content could have multiple interpretations a pre-teThe study was performed at the Laboratory of Neuromarketing of Complutense University of Madrid, and its total duration was 60 min, including both blocks to be described in the following paragraphs. All 43 participants were right-handed, healthy people with normal or corrected-to-normal vision and were free of any hearing problems. All participants provided signed consent before participating and received monetary compensation at the end of the session.The experimental procedure had two phases, which entailed the use of neurophysiological (Block 1) and declarative (Block 2) methods. 
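Before turning to the two experimental blocks, the theta/alpha workload findings cited above suggest a concrete operationalization. Since the exact index is not spelled out here, the sketch below assumes one common variant, frontal theta power over parietal alpha power, estimated with Welch's method.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

FS = 256  # EEG sampling rate used in the study (Hz)

def bandpower(signal, fs, f_lo, f_hi):
    """Band power estimated from a Welch power spectral density."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return trapezoid(psd[band], freqs[band])

def workload_index(frontal, parietal, fs=FS):
    """Assumed workload index: frontal theta over parietal alpha."""
    theta = bandpower(frontal, fs, 4.0, 8.0)    # theta band
    alpha = bandpower(parietal, fs, 8.0, 12.0)  # alpha band
    return theta / alpha

# Illustrative usage with synthetic 5 s signals (the stimulus duration).
rng = np.random.default_rng(0)
frontal = rng.standard_normal(5 * FS)
parietal = rng.standard_normal(5 * FS)
print(workload_index(frontal, parietal))
```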
In Block 1, after briefing the protocol to the participants, they were sat in front of the computer screen where the ET was installed and were affixed with the EEG and GSR devices for collecting their brain electrical activity and skin conductance. The screen used was 21 inches with full HD resolution .To calibrate the ET, subjects were instructed to follow the points appearing on the screen with their sight without moving their heads. Once the ET was calibrated, the researcher checked that the signal of all three devices was good and started running the experiment. It is important to highlight that in order to measure the workload, the emotional activation, and the time spent exploring each ad, we used simultaneous EEG, GSR, and ET measurements during the whole experiment. The software used to present stimuli and simultaneously record data was SensLab, developed by Bitbrain.In this block, the 43 participants were exposed to the 28 aforementioned stimuli (7 product categories \u00d7 4 complexity modification designs), presented individually and randomly. First, a fixed cross was presented in the middle of the screen, followed by the stimulus for 5 s, and then a black slide, with the word \u201cRest,\u201d so participants could take a 2 s rest see .During Block 2, which entailed the declarative test phase, participants had to answer the questionnaire while they were visualizing the stimuli individually. The respondents initially answered the Aad scale for the 28 advertisements. The second scale, with the same structure, asked participants about their purchase intention. The third task was to rank the four advertisement modulations from 1 to 4, to assess their preference. Finally, the last declarative section asked about their perceived complexity. In this second block, the exposure time depended on the subject\u2019s response time instead of being standardized as in Block 1.All raw data coming from the neurophysiological techniques were provided by SennsLab.Raw EEG data were first filtered using a band-pass filter between 1 and 25 Hz with a four-order Butterworth filter. After that, a filtering pipeline was implemented. First, an ASR (artifact subspace reconstruction) filter was used to remove big amplitude artifacts . Then ICSkin conductance data are usually characterized by a sequence of overlapping phasic SCRs overlying a tonic component . The extThe post-processed EEG and GSR signals, next to the ET information about time in AOI were subsequently analyzed by using SPSS. The declarative information also was analyzed using that statistical software.t-test analysis was performed. The second stage was oriented to find differences in the complexity continuum. To get those results, we applied a repeated measures ANOVA. In the third stage, all the metrics (implicit and declarative) were regressed onto conceptual complexity derived from metaphorical content, with perceived complexity as a mediating variable. Finally, a Sobel test was used to statistically investigate the effect of the proposed mediator on the predictor\u2013outcome relationship.The data analysis was performed in three stages. First, we focused on the differences between advertisements with and without metaphors. To obtain those differences, a F = 39.42, p = 0.000. More specifically, differences were found between juxtaposition (M = 6.1) and replacement and between fusion (M = 5.9) and replacement .The first step to analyze Hypothesis 1 was to validate that the complexity continuum was well constructed. 
For this purpose, a repeated measures ANOVA was performed. It showed that the perceived complexity of the levels of the ads was statistically and significantly different, Note that in order to maintain the negativity of the scales to the left, subjects rated the images difficult to understand as 1 and those easy to understand as 7. t-test was conducted. Results showed that there was a significant difference in the scores for ads without metaphors and ads including metaphors (M = 5.64 SD = 0.71); t(42) = \u22122.71, p = 0.010. It means that ads without metaphors were perceived as easier to understand than those with metaphorical content.To compare the perceived complexity between ads with and without metaphors, a paired-samples In order to obtain more accurate results of the self-reported measures, we performed two types of analysis: (1) a comparison between advertisements with and without metaphors and (2) a comparison between the three levels of complexity included in the experiment.t-test was conducted. Results revealed a significant difference between advertisements without metaphors and those including metaphors ; t(42) = \u22129.235, p = 0.000.Regarding the analysis of Aad with and without metaphors, a paired-samples F = 75.877, p = 0.000. Regarding the level of complexity, post hoc tests using the Bonferroni correction revealed significant differences between juxtapositions (M = 4.8) and fusions (M = 5.4) and between fusions and replacement metaphors (M = 3.9). However, there were no significant differences between juxtaposition and replacement metaphors described as an inverted U-curve in the literature, a mediation analysis was performed. Statistically, mediation is often analyzed through path analytic models with one X variable, one mediator M, and one outcome variable Y . In the a = \u22120.62, p = 0.000) and that perceived complexity was a significant predictor of Aad . These results support the mediational hypothesis. Ad complexity was not a significant predictor of Aad after controlling for the mediator (comprehension), c\u2019 = 0.16, p = 0.179. The standardized indirect effect (B) was a (\u22120.62) \u00d7 b (0.51) = \u22120.31 .The results of mediation analysis indicated that complexity of metaphors was a significant predictor of perceived complexity . The results revealed that comprehension/perceived complexity is a significant mediator of the relationship between advertisement complexity induced by metaphors and Aad; thus, the inverted U-curve pattern is validated, and consequently, results support Hypothesis 1a.A Sobel test was also conducted to validate the effect of the mediator, finding full mediation and advertisements including metaphors ; t(42) = \u22123.395, p = 0.020.Regarding purchase intention, a paired-sample F = 41.742, p = 0.000. Post hoc tests using the Bonferroni correction revealed significant differences between all three levels of complexity: juxtapositions (M = 4.8), fusions (M = 5.4), and replacement metaphors (M = 3.9).In order to test Hypothesis 1b, a repeated measures ANOVA also determined that the different complexity levels of the advertisement produced statistically significant differences in purchase intention, a = \u22120.62, p = 0.000) and that perceived complexity was a significant predictor of purchase intention . Advertisement complexity induced by metaphors was no longer a significant predictor of purchase intention after controlling for the mediator , in consistence with a full mediation. 
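The Sobel statistic used for these mediation checks has a closed form. A minimal sketch with scipy, reusing the reported path coefficients for Aad; the standard errors are assumed for illustration, as only significance levels are reported.

```python
import numpy as np
from scipy.stats import norm

def sobel(a, sa, b, sb):
    """Sobel z-test for the indirect effect a*b in an X -> M -> Y model."""
    indirect = a * b
    se = np.sqrt(b**2 * sa**2 + a**2 * sb**2)
    z = indirect / se
    p = 2 * norm.sf(abs(z))  # two-sided p-value
    return indirect, z, p

# a = -0.62 (complexity -> comprehension), b = 0.51 (comprehension -> Aad);
# sa and sb are illustrative assumptions, not the study's estimates.
print(sobel(a=-0.62, sa=0.08, b=0.51, sb=0.07))
```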
The standardized indirect effect was a (\u22120.62) \u00d7 b (0.43) = \u22120.27 . Thus, it can be concluded that comprehension mediated the relationship between advertisement complexity induced by metaphors and purchase intention. Consequently, the inverted U-curve pattern is validated for this construct, and the hypothesis 1b is supported, as displayed in Lastly, the Sobel test found full mediation (2(2) = 40.812, p = 0.000. A post hoc analysis with a Wilcoxon signed-rank test was conducted with a Bonferroni correction, resulting in a significance level set at p < 0.000. There were significant differences between the three levels of complexity.To test the preference hypothesis (H1c), due to the ordinal nature of the preference metric, a Friedman test was conducted as the non-parametric alternative to the one-way repeated measures ANOVA. The Friedman test revealed a statistically significant difference in preference depending on the complexity of the images visualized, \u03c7a = \u22120.062, p = 0.000). To investigate how conceptual complexity of metaphors could influence the preference of ads with metaphorical content, an ordinal logistic regression analysis was conducted. The conceptual complexity was found to contribute to the model . The estimated odds ratio shows an inverse relationship between complexity and preference. It suggests a decreasing probability of improving the preference level with increasing complexity level of ads .To examine the effect of comprehension of different metaphors on preference, we performed a path analysis, described in X2(1) = 16.536, p = 0.000]. The coefficient shows that when increasing the perceived complexity, there is a predicted increase of 0.659 in the log-odds of being in a higher level of preference . These results support the mediational hypothesis. The standardized indirect effect was a (\u22120.62) \u00d7 b (0.659) = \u22120.4085.On the other hand, the analysis showed that comprehension was a significant predictor in the model [Z = \u22124.059, p = 0.000) and (2) ads not including metaphorical images and fusions . However, the analysis shows that there were no differences between ads including replacements and ads not including metaphorical images .Finally, a Wilcoxon signed-rank test was conducted to analyze the differences between ads with and without metaphorical content. Results showed statistically significant differences between (1) ads not including metaphorical images and ads including juxtapositions is a suitable indicator of comprehension and cognt-test was conducted to compare the time in AOI for ads including and not including metaphorical images. Results yielded a significant difference in the time in AOI between ads without metaphors and ads including metaphors (M = 4.5 SD = 0.62); t(42) = \u22123.200, p = 0.003. Besides, a repeated measures ANOVA determined that the different complexity levels of the ads produced statistically significant differences in time in AOI, F = 11.609, p = 0.000. Post hoc tests using the Bonferroni correction revealed significant differences in time in AOI between juxtapositions (M = 4.1) and fusions (M = 4.8) and between fusions and replacements (M = 4.4).First, a paired-sample a = 0.62, p = 0.000), perceived complexity was not a significant predictor of time in AOI , so perceived complexity did not mediate the relationship between ad complexity induced by metaphors and time in AOI . 
These results were also supported by results of the Sobel test .As shown in M = 4.8) than ads with metaphors (M = 4.5); (2) the time that participants spent exploring an ad did not always increase when the ad increased in complexity ; and (3) perceived complexity was not a significant predictor of time in AOI.Regarding the time spent exploring the ads, obtained results provide three interesting findings: (1) participants spent more time exploring ads without metaphors and ads including metaphors ; t(41) = \u22120.503, p = 0.617. However, a repeated measures ANOVA determined that cognitive load measured through EEG yielded statistically significant differences as a function of the different complexity levels of the ads, F = 3.102, p = 0.050. Post hoc tests using Bonferroni revealed differences in cognitive load between juxtaposition (M = 26.1) and replacement (M = 28.4).In order to determine the cognitive load required by ads including and not including metaphorical images, a paired-sample Regarding these results, it is interesting to remark that EEG results reflect a linear relationship between complexity and cognitive load, as shown in a = \u22120.61, p = 0.000), perceived complexity was not a significant predictor of cognitive load , so perceived complexity did not mediate the relationship between ad complexity induced by metaphors and cognitive load .The mediation analysis showed that comprehension did not mediate the relationship between ad complexity induced by metaphorical content and cognitive load. Although ad complexity was a significant predictor of perceived complexity (M = 27.6) than ads without metaphors (M = 27.1), (2) there is a linear relationship between complexity and cognitive load, and (3) perceived complexity is not a significant predictor of cognitive load. Since Hypothesis 2b stated that higher levels of difficulty could be reflected on a higher index of EEG cognitive load, we can state that Hypothesis 2a was supported.According to the obtained results: (1) ads with metaphors had a higher cognitive load index and ads including metaphors ; t(38) = 0.372, p = 0.712. Nor was there any significant difference in activation as a function of the three levels of complexity, F = 0.288, p = 0.636. However, it is interesting to note that the means obtained reflect the aforementioned inverted U-curve pattern, as displayed in Again, the first step was to conduct a M = 0.06) than when they visualized ads with metaphors (M = 0.05) and (2) the participants\u2019 arousal did not always increase when the ad increased in complexity . On the basis of the obtained results and since Hypothesis 2c predicted that higher levels of difficulty and, consequently, a higher index of cognitive load could be reflected by higher levels of arousal, we can state that Hypothesis 2c was not supported. Regarding the arousal, obtained results provide two findings: (1) participants had more arousal when they were exposed to ads without metaphors could be suitable to measure the cognitive load derived from complex tasks or contents , 2012. TIn advertising, visual metaphors are widely used to draw individuals\u2019 attention and entice them to buy the product . HoweverVisual metaphors are creative resources that try to provide differentiation and recognition of the brand. 
Our results highlight the importance of assessing the perceived complexity of ads in order to find the optimal level of complexity of the visual metaphors to achieve the desired brand awareness and advertising effectiveness .Finally, the present study provides evidence on how useful the use of neuroscientific techniques can be to have an objective measure of consumer reactions to marketing stimuli and to find new and useful insights that cannot be detected with declarative methodologies . Our resWhile this study provides theoretical and practical implications, some limitations must be acknowledged, mainly regarding the sample size, similarity of participants, and forced exposure to the stimuli.Although previous studies have demonstrated that consumer neuroscience studies with small sample sizes can produce predictive results and significant insights , the useBesides, the sample used is very similar because it is formed by students, and their age, education level, cultural context, and lifestyle are pretty similar. Since previous studies have proven significant cross-cultural variations in the interpretation of advertisements , it woulOn the other hand, the present study was conducted in a laboratory, and subjects were forced to visualize the stimuli contrary to how they behave in real life, where they devote their attention voluntarily. The degree of control exerted over potential extraneous variables determines the level of internal validity of a study . AlthougFinally, on the basis of previous findings according to which advertisement appreciation follows an inverted U-curve pattern , we tookThe datasets generated for this study are available on request to the corresponding author.Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.All authors listed contributed to the design of the experiment, data collection, data analysis, literature review and writing and reviewing of this manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "The collection of phenotypes related to livestock methane emissions is hampered by costly and time-demanding techniques. In the present research, a laser methane detector was used to measure several novel phenotypes, including mean and aggregate of methane records, and mean and number of methane peak records, considering Simmental heifers as a case study. Phenotypes showed satisfactory repeatability and reproducibility for log-transformed data. The number of emission peaks had great variability across animals and thus it is a promising candidate to discriminate between high and low emitters.4) emissions represent a worldwide problem due to their direct involvement in atmospheric warming and climate change. Ruminants are among the major players in the global scenario of CH4 emissions, and CH4 emissions are a problem for feed efficiency since enteric CH4 is eructed to the detriment of milk and meat production. The collection of CH4 phenotypes at the population level is still hampered by costly and time-demanding techniques. 
In the present study, a laser methane detector was used to assess repeatability and reproducibility of CH4 phenotypes, including mean and aggregate of CH4 records, slope of the linear equation modelling the aggregate function, and mean and number of CH4 peak records. Five repeated measurements were performed in a commercial farm on three Simmental heifers, and the same protocol was repeated over a period of three days. Methane emission phenotypes expressed as parts per million per linear meter (ppm × m) were not normally distributed and, thus, they were log-transformed to reach normality. Repeatability and reproducibility were calculated as the relative standard deviation of five measurements within the same day and 15 measurements across three days, respectively. All phenotypes showed higher repeatability and reproducibility for log-transformed data compared with data expressed as ppm × m. The linear equation modelling the aggregate function highlighted a very high coefficient of determination (≥0.99), which suggests that daily CH4 emissions might be derived using this approach. The number of CH4 peaks was particularly diverse across animals and is therefore a potential candidate to discriminate between high and low emitting animals. Results of this study suggest that the laser methane detector is a promising tool to measure bovine CH4 emissions in field conditions.

Anthropic activities related to the primary and secondary sectors are responsible for the majority of GHG released into the atmosphere. Methane measurement or prediction is an emerging research field of global interest, and for decades the scientific community has focused on different strategies and approaches aiming at reducing ruminant CH4 emissions. Until now, CH4 production in cattle has been estimated with respiration chambers, mid-infrared spectroscopy, archaeol, sulphur hexafluoride tracers, CO2 to CH4 ratio methods, and sniffer-based systems. The laser methane detector (LMD) has been proposed as an alternative instrument to directly measure CH4 emissions. The sensitivity and accuracy of LMD have been assessed in controlled conditions by comparing data acquired through LMD with those measured through the respiration chamber (correlations up to 0.97) and GreenFeed (R2 = 0.64). However, little is known about the repeatability and reproducibility of LMD measurements in field conditions. This research question was investigated by assessing repeatability and reproducibility of different CH4-related phenotypes, including mean and aggregate of CH4 records, slope of the linear equation modelling the aggregate function, and mean and number of CH4 peak records. For this purpose, CH4 emissions were measured on Simmental heifers as a case study.

Procedures used in this study are excluded from the authorization of the animal welfare committee. Methane emission measurements were performed in September 2019, in a commercial dairy farm located in Padova province, on three pregnant Simmental heifers: heifer 1, heifer 2, and heifer 3. Animals were housed in the same open-aerated barn. Heifers received the same diet containing wheat straw, corn silage, meadow hay, protein mix, corn meal, and mineral/vitamin mix, distributed through a total mixed ration. Protein, neutral detergent fiber, acid detergent fiber, starch, fat, and ashes content, calculated on a dry matter basis, were 12.39%, 57.12%, 36.25%, 5.30%, 2.62%, and 7.32%, respectively. Methane emissions were measured through the Laser Methane Mini detector, and CH4 was expressed as parts per million per linear meter (ppm × m).
Each measurement was performed by pointing the laser at the nostril of a single animal for 5 min, at a distance of 3 m, according to the protocol proposed by Chagunda et al. The LMD records one CH4 emission every 0.5 s, for a total of 600 records for each measurement (5 min). Every single measurement was forwarded via Bluetooth from the laser device to a Lenovo Tab E7 tablet, equipped with an Android operating system and the Gas Viewer application, saved as a .csv file, sent to a dedicated e-mail box, and downloaded to a computer workstation to allow for the local storage of data. Each heifer was measured five times per day over three days.

The distribution of CH4 emissions expressed as ppm × m deviated from normality, so CH4 emissions were loge-transformed (lnCH4) to achieve normality and homogeneity of variances. Editing of the dataset (lnCH4, n = 27,000) resulted in 26,856 records available for subsequent analysis.

Phenotypes considered in the present study were: (i) mean of CH4 and lnCH4 records, (ii) aggregate of CH4 and lnCH4 records, (iii) slope of the linear equation modelling the aggregate function, (iv) mean of CH4 and lnCH4 peak records, calculated on the last decile of the distribution, and (v) number of CH4 and lnCH4 peak records. Repeatability of the previously mentioned phenotypes was calculated as the relative standard deviation (RSDr) of five consecutive measurements carried out within the same day and within the same animal. Similarly, reproducibility of phenotypes was calculated as the relative standard deviation (RSDR) of 15 measurements collected across three days of analyses and within the same animal, as proposed by Chagunda et al. and later LMD studies.

The methanogram obtained from a single measurement on a single heifer featured a baseline signal, including the majority of records, which is likely due to environmental CH4 and to basal eructation activity. The plot also highlighted clear emission signals, a minority of records associated with peaks of CH4 eructation. Methane emissions averaged 105.48 and 98.26 ppm × m, with standard deviations of 77.92 and 58.02 ppm × m, for the pre-edited and post-edited datasets, respectively. The loge-transformation of CH4 produced a much more normal trait (lnCH4): visual inspection and the Shapiro-Wilk test (p > 0.05) supported normality in both the pre-edited and post-edited datasets, and skewness and kurtosis were close to zero. Logarithmic transformations were proposed also by Ali and Shook, and by later authors, to achieve normality of similarly skewed traits.

The precision of LMD for determining the mean of CH4 emissions was assessed through RSDr and RSDR. The lnCH4 means highlighted notable improvements in terms of RSDr and RSDR. Repeatability varied from 8.93% (heifer 1 in day 3) to 14.85% (heifer 1 in day 1), whereas reproducibility ranged from 11.98% (heifer 2) to 15.35% (heifer 3). Still, such repeatability and reproducibility values remain greater than values from other studies describing the precision of analytical methods carried out under controlled conditions.
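Both precision indexes reduce to a coefficient of variation over the appropriate grouping. A minimal sketch in Python/pandas, assuming a hypothetical table of per-measurement phenotype values keyed by heifer and day.

```python
import pandas as pd

def rsd(x: pd.Series) -> float:
    """Relative standard deviation, in percent."""
    return 100 * x.std() / x.mean()

# Hypothetical input: columns heifer, day, measurement, ln_ch4_mean.
df = pd.read_csv("lmd_means.csv")

# RSDr: five consecutive measurements within the same animal and day.
rsd_r = df.groupby(["heifer", "day"])["ln_ch4_mean"].apply(rsd)

# RSDR: fifteen measurements within the same animal across the three days.
rsd_R = df.groupby("heifer")["ln_ch4_mean"].apply(rsd)

print(rsd_r, rsd_R, sep="\n")
```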
The aggregate values of CH4 and lnCH4 emissions, and the slopes of the linear equation modelling the aggregate function, are reported per heifer and day. The aggregate of CH4 emissions showed the lowest value for heifer 3 in day 1, being equal to 153,393 ppm × m and 11,212 lnCH4. The greatest aggregate value was observed for heifer 1 in day 2. The slopes of the linear equation mirrored the tendency of the aggregate values, being lower and greater concurrently with lower and greater aggregates. Although the agreement between aggregates and slopes may support and reinforce the significance of these traits, it can be argued that considering both phenotypes is redundant, since their biological meaning is likely the same. Overall, the R2 of the aggregate functions of lnCH4 (0.999) was greater than the R2 of the aggregate functions of CH4 (0.989 to 0.997). Such high accuracy suggests that the linear equation modelling the aggregate function might be used in the future to estimate long-term or daily CH4 emissions.

The precision of LMD for determining peaks of CH4 and lnCH4 emissions was assessed through RSDr and RSDR. Peaks of lnCH4 emissions showed greater precision compared with the same indexes calculated for means of CH4 emissions. This translated into a relatively low RSDr, ranging from 17.58% to 20.15% for CH4 and from 4.50% to 5.34% for lnCH4, and RSDR, which was always lower than 20% and 5% for CH4 and lnCH4, respectively. The average values for peaks of CH4 emissions did not vary much across different animals, which suggests that this phenotype might not be adequate to discriminate between high and low CH4 emitters. Such low variability was somewhat expected: peaks of CH4 emissions were defined as records belonging to the highest decile of both datasets, which leads to a considerable decrease in variability. For this reason, the number of peaks emitted by every single animal is more informative, because it is more differentiated across heifers. A similar approach was adopted by Bobbo et al.

The present research assessed the repeatability and the reproducibility of phenotypes related to CH4 emissions, measured through LMD using Simmental heifers as a case study. The distribution of emission events expressed as ppm × m showed a significant deviation from the normal distribution, but the logarithmic transformation of the data led to normality. Repeatability and reproducibility were much better for lnCH4 than for CH4. The coefficient of determination of the linear equation modelling the aggregate function showed high precision. Such results are promising, since these equations might be used to estimate daily or long-term CH4 emissions. Peaks of CH4 emissions were rather different across animals in terms of the number of events but were homogeneous in terms of average values. For this reason, the number of peaks may be an interesting phenotype to discriminate between high and low emitting animals. Overall, results of the present study indicate that measures carried out through LMD are fairly repeatable and reproducible. Therefore, in terms of accuracy, LMD may be considered a promising tool enabling the measurement of bovine CH4 emissions in field conditions at relatively low cost. Future studies will focus on the application of LMD in large-scale studies to assess sources of variation of CH4 emissions."}
+{"text": "A number of medicines are currently under investigation for the treatment of COVID-19 disease including anti-viral, anti-malarial, and anti-inflammatory agents.
While these treatments can improve patient's recovery and survival, these therapeutic strategies do not lead to unequivocal restoration of the lung damage inflicted by this disease. Stem cell therapies and, more recently, their secreted extracellular vesicles (EVs), are emerging as new promising treatments, which could attenuate inflammation but also regenerate the lung damage caused by COVID-19. Stem cells exert their immunomodulatory, anti-oxidant, and reparative therapeutic effects likely through their EVs, and therefore, could be beneficial, alone or in combination with other therapeutic agents, in people with COVID-19. In this review article, we outline the mechanisms of cytokine storm and lung damage caused by SARS-CoV-2 virus leading to COVID-19 disease and how mesenchymal stem cells (MSCs) and their secreted EVs can be utilized to tackle this damage by harnessing their regenerative properties, which gives them potential enhanced clinical utility compared to other investigated pharmacological treatments. There are currently 17 clinical trials evaluating the therapeutic potential of MSCs for the treatment of COVID-19, the majority of which are administered intravenously with only one clinical trial testing MSC-derived exosomes via inhalation route. While we wait for the outcomes from these trials to be reported, here we emphasize opportunities and risks associated with these therapies, as well as delineate the major roadblocks to progressing these promising curative therapies toward mainstream treatment for COVID-19. Betacoronavirus. In general, coronavirus is the name of viruses that belong to the family Coronaviridae. These are classified into four categories: Alphacoronavirus, Betacoronavirus, Gammacoronavirus, and Deltacoronavirus. Alpha- and betacoronaviruses infect mammals, gammacoronaviruses infect avian species, and deltacoronaviruses infect both mammalian and avian species discovered in Wuhan, China, in December 2019, has caused a global disruption and COVID-19 pandemic. SARS-CoV-2 is classified as cies Li, . CoronavCoronaviruses have characteristic clove-shape spikes (\u201ccorona\u201d) protruding from the surface. The spikes are protein complexes that virus uses to bind to a receptor (receptor-binding subunit S1) and mediate entry into host cells (a membrane-fusion subunit S2). Upon binding virus fuses with the human cell membrane, allowing the genome of the virus to enter the cell and begin replication. The receptor-binding domain of SARS-CoV-2 spikes are closely related to those of SARS-CoVs (73.8\u201374.9% amino acid identity) and SARS-like CoVs (75.9\u201376.9% amino acid identity) have demonstrated potent and broad immunomodulatory and anti-inflammatory capacity , Middle East Respiratory Syndrome (MERS), and now, COVID-19. The severity of these infections also varies widely between age-groups and different ethnic populations. Older people are at increased risk of acquiring the infection and are likely to develop severe symptoms culminating in death.Interestingly, there appears to also be gender disparity in the number of acquired cases of COVID-19 with higher percentage of men (~60%) than women being infected, as was first reported in China . A recent study presented a deleterious effect on the cells Currently, there are 17 clinical trials investigating the therapeutic potential of MSCs in COVID-19 patients that are registered on clinicaltrials.gov website; most of these trials are either recruiting patients or have not yet started the recruitment. 
The vast majority of the trials are selecting patients with COVID-19 and pneumonia, and utilizing allogeneic bone-marrow or umbilical cord-derived MSCs transplanted intravenously on three different occasions . Approxi6 per kg) and 3 patients in the placebo controlled group albeit consisting of all female patients, showed significant improvement in pulmonary function and symptoms as well as substantial reduction in inflammation compared to the placebo controlled group and follows strict regulations prior to being approved for the use in humans. The use of unauthorized and unapproved stem cell therapies not validated through stringent multicenter clinical trials should be strongly discouraged. At the same time current findings imply that there is a need for globally coordinated approach and support to conduct multicenter clinical trials to demonstrate safety and effectiveness of various types of stem cells to treat COVID-19-induced health complications. It also suggests that there is a need in biomedical research and development to establish the most effective stem cell types that are ideally suited for the treatment of aforementioned complications. These developments will also require (a) GMP compliant technologies to mass produce stem cells, and (b) testing platforms that mimic human pathophysiology to allow high-throughput screening and rapid testing of stem cells safety and efficacy.Extracellular vesicles (EVs) are emerging as an attractive alternative to the whole cell\u2013based therapy. EVs have several advantages compared to the whole cell therapy including lower risk of tumorigenic effects, lower susceptibility to damage by hostile disease microenvironment and possibility for long-term storage. The long-term storage is fundamental to make the treatment accessible in developing countries and it circumvents the need to have expensive GMP cell manufacturing facilities on-site. Nevertheless, production of EVs must follow the same strict guidelines that apply to stem cells and any EV-based therapy needs to be approved by the governing bodies after being tested in clinical trials to demonstrate the safety and efficacy.in vivo environments on the integrity of these cells. Stem cells can be environmentally preconditioned by certain stimuli such as hypoxia or ischemia, which can induce certain signaling pathways to improve engraftment, survival, and function of transplanted cells in harsh environments within the injured lung will potentially be a paradigm shift. In the light of a recent study that demonstrated efficacy of inhaled stem cell-derived therapy in both The original contributions presented in the study are included in the article/supplementary materials, further inquiries can be directed to the corresponding author/s.All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "In this report, we present a detailed comparison of the lipid composition of human milk (HM) and formula milk (FM) targeting different lactation stages and infant age range. We studied HM samples collected from 26 Polish mothers from colostrum to 19 months of lactation, along with FM from seven brands available on the Polish market . Lipid extracts were analysed using liquid chromatography coupled to high-resolution mass spectrometry (LC\u2013Q-TOF\u2013MS). 
We found that the lipid composition of FM deviates significantly from the HM lipid profile in terms of qualitative and quantitative differences. FM had contrasting lipid profiles mostly across brands and accordingly to the type of fat added but not specific to the target age range. The individual differences were dominant in HM; however, differences according to the lactation stage were also observed, especially between colostrum and HM collected in other lactation stages. Biologically and nutritionally important lipids, such as long-chain polyunsaturated fatty acids (LC-PUFAs) containing lipid species, sphingomyelines or ether analogues of glycerophosphoethanoloamines were detected in HM collected in all studied lactation stages. The observed differences concerned all the major HM lipid classes and highlight the importance of the detailed compositional studies of both HM and FM. Human m infants . However infants . HM lipiThe World Health Organization (WHO) and United Nations Children\u2019s Fund (UNICEF) recommends exclusive breastfeeding for six months after birth and continued breastfeeding along with appropriate complementary foods for up to two years of age or beyond, while human milk is a source of a wide range of different valuable components . HoweverMany studies have shown that human milk metabolite composition differs significantly from the composition of milk formula due to the dynamic nature of HM and unique chemical structure of HM lipids ,14,15,16Lipidomics is now widely used in the comprehensive analysis of the lipidomes of various biological samples and is a valuable tool to analyse human milk ,28,29,30Besides these studies, data concerning lipid composition differences between HM and FM, especially regarding advanced lactation (HM after 1 year postpartum) and corresponding growing-up formula, are remarkably scarce. Filling this knowledge gap is necessary to further determine the impact of these components on infant health, growth, and development, as well as to provide suitable nutrition for all infants.In this study, we performed comparative LC\u2013MS-based lipidomic analysis of the HM and FM samples to evaluate the differences between HM at different lactation stages, including HM samples at advanced lactation (>12 months postpartum) and the corresponding FM. The lipid fingerprints were compared and analysed using statistical and/or chemometric methods in terms of differences in the lipid composition: (i) within the HM samples collected in various stages of lactation; (ii) within the FM samples with various age range targets and fat sources; (iii) between the HM and FM samples.To our best knowledge, this study is the first report to comprehensively describe differences in HM and FM lipidomes regarding the lactation stage, including longitudinal changes in HM samples collected after 1 year postpartum.LC\u2013MS-grade methanol, 2-propanol, and HPLC-grade chloroform were purchased from Merck . Chemicals\u2014ammonium formate (99.9% purity), formic acid and ammonium hydroxide solution (28%)\u2014were purchased from Sigma-Aldrich . All aqueous solutions were prepared with ultra-pure water produced by an HLP5 system . The lipid standards 1-pentadecanoyl-2-oleoyl(d7)-sn-glycero-3-phosphoethanolamine (15:0-18:1-d7-PE) and 1,3-dipentadecanoyl-2-oleyol(d7)-glycerol (15:0-18:1-d7-15:0 TG) were purchased from Sigma-Aldrich .n = 11); samples collected between 0 and 6 months (n = 10); samples collected between 6 and 12 months (n = 8); samples collected after 12 months (n = 16). 
Colostrum samples were obtained by a qualified midwife at the Obstetric Clinic, University Clinical Centre of the Medical University of Gda\u0144sk. All volunteering mothers were given oral and written instructions for the standardised collection of milk samples. Written informed consent was obtained from each participant. The milk samples were obtained by the full expression of one breast using an electronic breast pump in the morning and/or in the evening. After collection, 10 mL of HM was transferred to a polypropylene laboratory tube and kept frozen at \u221220 \u00b0C until delivery to the laboratory. Samples were stored at \u221280 \u00b0C until analysis for a maximum of three months. Human milk samples were donated by 26 healthy women living in Pomeranian Voivodship, Poland, who had delivered healthy full-term neonates and had met the criteria of inclusion to the study. The starting formula , the follow-on formula (age range target 6\u201312 months of life), and the growing-up formula (age range target >12 months of life) of seven brands were included in the study. The detailed characteristics of all analysed samples are presented in the accompanying table. Research ethics approval was obtained from the Human Research Ethics Committee of the Medical University of Gda\u0144sk, Poland. Sample preparation was performed using a previously developed extraction method based on the LLE and SPE techniques. A total of 225 \u03bcL of the milk sample was transferred to a borosilicate-glass tube with a PTFE cap, followed by the addition of 950 \u03bcL of a chloroform/methanol mixture (v/v) and vigorous mixing for 10 s. After that, 310 \u03bcL chloroform and 310 \u03bcL deionised water were added. Next, the samples were mixed for another 30 s and centrifuged for 10 min at 4300 rpm to separate the aqueous and organic phases. The upper phase was discarded and the lower phase was transferred to a new borosilicate glass tube by a Pasteur pipette. Next, 20 \u03bcL of the obtained extract was diluted by adding 980 \u03bcL methanol; this diluted LLE extract was later used to dissolve the SPE extract. For the SPE step, internal standard (1) 15:0-18:1-d7-PE was added. The contents were mixed for 30 s at 2000 rpm, followed by centrifugation for 10 min at 10,000 rpm. After that, 900 \u03bcL of the supernatant was loaded onto a HybridSPE-Phospholipid cartridge and washed in sequence with 2 \u00d7 1 mL methanol and 2 \u00d7 1 mL 2-propanol. Finally, the phospholipids were eluted with 2 mL 5% ammonia in methanol. The extract was evaporated under a nitrogen stream at 35 \u00b0C. Next, 90 \u03bcL of the diluted LLE extract was used to dissolve the dried phospholipid extract. Additionally, 10 \u03bcL of internal standard (2) 15:0-18:1-d7-15:0 TG was added to the final extract, which was then transferred to a 1.5-mL chromatographic vial and subsequently analysed by LC\u2013Q-TOF\u2013MS.
Lipid analysis was conducted using an LC\u2013Q-TOF\u2013MS system\u2014an Agilent 1290 LC system equipped with a binary pump, an online degasser, an autosampler and a thermostated column compartment coupled to a 6540 Q-TOF\u2013MS with a dual electrospray ionisation (ESI) source. Lipid extracts were injected into a reversed-phase column with a 0.2-\u03bcm in-line filter. The column was maintained at 60 \u00b0C. The mobile phase comprised component A (a methanol\u2013water mixture) and component B (2-propanol), pumped at a flow rate of 0.3 mL/min. The gradient elution program was initiated with 20% component B, which was ramped to 40% from 0 to 20 min, then from 40% to 60% from 20 to 40 min and finally from 60% to 100% from 40 to 45 min. The column was then equilibrated with the starting conditions for 10 min. The total run time was 55 min, and the injection volume was set to 0.5 \u03bcL. The data were collected in the positive ion mode using the SCAN acquisition mode in a range from 100 to 1700 m/z in the high-resolution mode (4 GHz). MS analysis was carried out using the following parameters: capillary voltage, 3500 V; fragmentation voltage, 120 V; nebulising gas, 35 psig; drying gas temperature, 300 \u00b0C. MS/MS analysis was performed using identical chromatographic and ion source conditions. The collision energy was set to the following values: 35 V and 80 V. The two most abundant peaks were selected for fragmentation and excluded for the next 0.3 min. The MS/MS spectra were acquired in the m/z range of 50\u20131700. Lipid extracts were injected randomly, with one Quality Control (QC) sample injected after every five real samples for LC\u2013MS stability control. The LC\u2013MS batch started with the extraction blank and five subsequent QC samples to equilibrate the chromatographic column. The lipid extracts were kept in the autosampler at 10 \u00b0C during the batch run. Lipid identification was carried out using a two-step procedure: (1) an automated search of a custom HM database based on accurately measured m/z values and (2) manual interpretation of the obtained MS/MS spectra of milk samples. Identification resulted in the determination of the lipid class, the number of carbon atoms, and the number of unsaturated bonds in fatty acid residues, as well as the presence of ether bonds instead of ester bonds in the lipid structure. The fatty acid composition was evaluated based on MS/MS spectra interpretation. Lipid species with ether-linked substituents were not differentiated regarding ether and vinyl ether bonds in position sn-1. The position of fatty acyl substituents and the position of double bonds were not evaluated. The fragmentation patterns for TGs, DGs, PEs, PCs and SMs were previously published: m/z 184.0726 for confirmation of the SM and PC identity; neutral loss of 141.02 Da for confirmation of the PE identity; neutral loss of 185.01 Da for confirmation of the PS identity; and m/z 264.27 for confirmation of the C18 sphingoid base backbone. The Molecular Feature Extraction (MFE) algorithm, implemented in the Agilent MassHunter Workstation Profinder 10.0, was used to extract the total molecular features (MFs) from the raw LC\u2013MS data using the following parameters: ion threshold, >1000 counts; ion type, [M+H]+; isotope model, common organic; charge state range, 1\u20132; MFE score, \u226570. Next, the peak areas of the identified lipids were extracted, filtering on a peak height of 1000 counts. In both data pre-treatment approaches, the .cef files were exported and imported to Mass Profiler Professional 15.1 software for data alignment and filtration. Missing values were exported as missing. The alignment parameters were set as follows: alignment slope = 0.0%; alignment intercept = 0.4 min; mass tolerance slope = 20.0 ppm; intercept = 2.0 mDa. Filtration was based on frequency (the MFs remained in the dataset if they were present in 80% of the samples in at least one specified group) and the QC %RSD. The MFs that were present in the extraction blank with an average peak volume higher than 10% of the average peak volume in the real samples were removed.
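The accurate-mass lookup and the filtration rules just described can be illustrated compactly. The following Python sketch mirrors that logic under stated assumptions: it is not the vendor software actually used (MassHunter Profinder and Mass Profiler Professional), the 20 ppm tolerance is borrowed from the alignment settings above, the %RSD cut-off is an assumed placeholder because the exact QC threshold is not given, and all function and column names are hypothetical.

```python
# Minimal sketch of the accurate-mass database lookup (step 1 of the
# identification) and the frequency/QC/blank filters described above.
# This is NOT the vendor software used in the study; names are hypothetical.
import pandas as pd

PPM_TOL = 20.0  # assumption: reuse of the 20.0 ppm mass tolerance quoted above

def match_database(mz_measured: float, database: pd.DataFrame) -> pd.DataFrame:
    """Return candidate lipids whose theoretical m/z lies within PPM_TOL."""
    ppm_error = (database["mz_theoretical"] - mz_measured).abs() \
        / database["mz_theoretical"] * 1e6
    return database[ppm_error <= PPM_TOL]

def filter_features(peaks: pd.DataFrame, groups: dict,
                    qc_cols: list, blank_col: str,
                    rsd_max: float = 30.0) -> pd.DataFrame:
    """Apply the three filters: present in >=80% of samples of at least one
    group, acceptable QC %RSD (rsd_max is an assumed placeholder), and not
    dominated by the extraction blank (blank < 10% of the sample average)."""
    sample_cols = [c for cols in groups.values() for c in cols]
    keep = []
    for _, row in peaks.iterrows():
        # frequency rule across the defined sample groups
        freq_ok = any(row[cols].notna().mean() >= 0.80
                      for cols in groups.values())
        # QC rule: relative standard deviation across the QC injections
        qc = row[qc_cols].dropna().astype(float)
        rsd_ok = len(qc) > 1 and (qc.std() / qc.mean()) * 100.0 <= rsd_max
        # blank rule: extraction-blank signal below 10% of the sample mean
        blank_ok = row[blank_col] < 0.10 * row[sample_cols].mean()
        keep.append(freq_ok and rsd_ok and blank_ok)
    return peaks[pd.Series(keep, index=peaks.index)]
```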
The statistical analyses and fold change calculations were conducted using Mass Profiler Professional 15.1 software. The parameters in the statistical tests were as follows: p \u2264 0.01; multiple testing correction: Benjamini\u2013Hochberg; p-value computation: asymptotic; missing values excluded from the fold change and p-value calculations; corrected p-value cut-off, 0.01; post-hoc test: Tukey HSD (for ANOVA). Statistical tests and fold change calculations were conducted using the average peak area of the samples within each defined group. The Metaboanalyst 4.0 web tool was used for chemometric analysis and visualisation of the data.
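A minimal sketch of this comparison, assuming a simple two-group layout (for example, colostrum versus mature HM): per-lipid tests with Benjamini-Hochberg correction at the corrected cut-off of 0.01 and a fold-change filter of >= 2, matching the settings above. Welch's t-test stands in for the software's asymptotic test, so this is an approximation rather than the exact Mass Profiler Professional implementation; the array and variable names are hypothetical.

```python
# Per-lipid two-group comparison with Benjamini-Hochberg correction and a
# fold-change filter, approximating the Mass Profiler Professional settings.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def differential_lipids(group_a, group_b, lipid_names,
                        alpha=0.01, min_fc=2.0):
    """group_a, group_b: (n_samples, n_lipids) peak-area arrays with NaN for
    missing values (missing values are excluded, as described above).
    Returns (lipid, fold_change, corrected_p) for significant species."""
    pvals = []
    for j in range(len(lipid_names)):
        a = group_a[:, j][~np.isnan(group_a[:, j])]
        b = group_b[:, j][~np.isnan(group_b[:, j])]
        # Welch's t-test as a stand-in for the asymptotic test in MPP
        pvals.append(stats.ttest_ind(a, b, equal_var=False).pvalue)
    # Benjamini-Hochberg correction at the corrected cut-off of 0.01
    _, p_corr, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    hits = []
    for j, name in enumerate(lipid_names):
        mean_a = np.nanmean(group_a[:, j])   # group-average peak area
        mean_b = np.nanmean(group_b[:, j])
        fc = max(mean_a, mean_b) / min(mean_a, mean_b)  # symmetric fold change
        if p_corr[j] <= alpha and fc >= min_fc:
            hits.append((name, fc, p_corr[j]))
    return hits
```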
The analytical procedure adapted from our previous studies allowed us to profile the HM and FM lipidomes. First, we studied the detailed composition of human milk according to the lactation stage. We analysed 90 lipid extracts of 45 samples of human milk collected from 26 women. The highest number of molecular features (MFs) was detected in the colostrum samples. The lipidome coverage of both colostrum and mature HM comprised lipid species belonging to the classes of triacylglycerols (TGs), diacylglycerols (DGs), monoacylglycerols (MGs), sphingomyelins (SMs), glycerophosphoethanolamines (PEs), glycerophosphocholines (PCs), glycerophosphoinositols (PIs) and glycerophosphoserines (PSs). We also detected ether analogues of PE, PC, and TG containing one ether-linked fatty acid in the structure apart from the acyl-linked FA. The percentage relative amount of many lipid species within all detected lipid classes was diversified between individual mothers and lactation stages, with the highest difference between colostrum and HM from other lactation stages. PCA was used to visualise, in an unsupervised manner, the differences in the lipid pattern among the collected HM samples. To further study the difference between the colostrum samples and mature milk samples, we performed statistical tests using the dataset containing only the identified lipids (n = 215). A subset of these lipids was statistically significantly changed (p < 0.01) with a fold change (fc) \u22652 between colostrum and mature HM samples, with the peak area of 76 lipid species higher in mature milk and that of 40 lipid species higher in colostrum; the detailed list of the lipid components that were statistically different between colostrum and mature milk is given in the accompanying table. Fifty-five percent of the lipid compounds included in the test were higher in mature HM (mainly medium-chain triacylglycerols (MCTGs) and long-chain triacylglycerols (LCTGs) with a low level of unsaturation (1\u20133 double bonds), in HM 0\u20136 months and HM >12 months) than in colostrum. However, the content of some TGs was significantly higher in colostrum samples than in mature HM, corresponding to TGs containing long-chain polyunsaturated fatty acids (LC-PUFAs), such as TG 58:5, which contains a 22:4-18:0-18:1 fatty acyl composition (fc = 3.5). Interestingly, the fold change of many TG species between colostrum and HM 6\u201312 months was lower than that between colostrum and HM 0\u20136 months or HM >12 months. Several alkyldiacylglycerols (captioned here as TG-O), including TG-O-52:2, TG-O-52:1 and TG-O 50:1, were upregulated in colostrum samples (p < 0.01). The colostrum samples also had a higher content of TGs containing fatty acyl substituents with more than 20 carbon atoms, such as TG 62:3 and TG 62:4, than mature milk (average fc >20). Compared with mature HM, colostrum also contained more PE-O 38:5 (fc = 2.0), which contains the LC-PUFA eicosatetraenoic acid (C20:4) in its structure, PC-O 34:1 (fc = 1.8), and more of the saturated-fatty-acid-containing species PC34:0 (fc = 2.2), PC32:0 (fc = 2.8) and PC30:0 (fc = 2.7). We observed an increased content of PI38:4 (fc = 4.0) and PS36:1 (fc = 3.1), as well as some species of SM, in colostrum compared with mature milk. However, the content of many other lipid species of these classes was higher in mature milk. Further exploration of HM lipid composition dynamics, by visualising the variance in HM-specific lipids according to the four lactation stages on whisker and box plots, showed that the lipid composition pattern is even more ambiguous. Next, we investigated the lipid composition of the milk formulas available on the Polish market. We studied infant formulas from seven brands, including starting formulas (0\u20136 months), follow-on formulas (6\u201312 months), and growing-up formulas (>12 months). The lipid fraction of infant formulas was based on various sources of fat, including caprine whole milk, vegetable oils, fish oil, and microorganism lipids (e.g., from Crypthecodinium cohnii or Mortierella alpina). It was not always possible to determine whether FM contained isolated DHA or ARA from oils or contained them in the bound form in other lipid molecules, such as TGs. One of the formulas (FM.4.1.MFGM) was claimed to be enriched in milk fat globule membranes. The detailed characteristics of the milk formulas and the fat sources they contain are presented in the accompanying table. Generally, we did not observe grouping of the lipid fingerprints based on the age range target; however, the lipid fingerprints of the growing-up formulas of brands one, two and six form a subgroup within the group of soy-lecithin-based FM, likely associated with the absence of coconut oil compared with the starting and follow-on formulas from the same brands. The lipid fingerprints of one of the studied FMs were positioned on the PCA score plot far from the two groups of FM samples and correspond to the FM enriched in milk fat globule membranes. Some of the statistically significant differences (p < 0.01) between the FMs classified into two groups (soy-lecithin-supplemented FM and caprine milk FM), as indicated by PCA, are presented in the accompanying table. Detailed insight into the variance in the lipid composition among the studied FM samples with respect to different brands and specific age targets was obtained by heatmaps.
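The unsupervised overview referred to above (PCA score plots and heatmaps of the lipid fingerprints) was produced with the Metaboanalyst 4.0 web tool; the sketch below only mirrors that idea with scikit-learn and matplotlib on log-transformed, autoscaled peak areas. It is an illustration under those assumptions, not the workflow actually used, and the variable names are hypothetical.

```python
# Sketch of the unsupervised overview used above: PCA of log-transformed
# peak areas, colouring points by sample group (e.g. HM lactation stage or
# FM brand). The study used Metaboanalyst 4.0; this only mirrors the idea.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_scores_plot(X: np.ndarray, labels: list, out_png: str) -> None:
    """X: (n_samples, n_lipids) peak-area matrix; labels: group per sample."""
    X = np.log10(X + 1.0)                  # stabilise the dynamic range
    X = StandardScaler().fit_transform(X)  # unit-variance scaling per lipid
    scores = PCA(n_components=2).fit_transform(X)
    for group in sorted(set(labels)):
        idx = [i for i, g in enumerate(labels) if g == group]
        plt.scatter(scores[idx, 0], scores[idx, 1], label=str(group))
    plt.xlabel("PC1"); plt.ylabel("PC2"); plt.legend()
    plt.savefig(out_png, dpi=150)
```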
The differences were observed in the content (measured by peak area fold change) of many MCTGs and LCTGs: FM with caprine whole milk contained a significantly (p < 0.01) higher content of many MCTG and LCTG species with a low unsaturation level and an odd total number of carbon atoms in the FAs compared with FMs not supplemented with caprine milk. However, soy-lecithin-supplemented FMs contained a higher level of some LC-PUFA TGs than caprine-based FMs, and FMs supplemented with soy lecithin were richer in phospholipids than FMs not containing this ingredient. For SM species, no specific trend of differences was observed: the content of some SM molecules was higher, and that of others lower, in the caprine whole milk-based FM samples than in the soy-lecithin-supplemented FM. The observed results show that the lipid composition is generally brand-specific and depends strongly on the fat source used. Next, we compared the lipid composition of the HM and FM samples to explore the differences between them. For the statistical testing of differences at various lactation stages between HM and FM, all FM samples dedicated to a particular age target range were considered as one group without differentiation based on the fat source. We considered colostrum and HM samples collected in the period of 0\u20136 months separately to evaluate in detail the differences between HM samples obtained in this lactation stage and starting FM, because infants who cannot be breastfed directly after birth with colostrum are fed with FM dedicated to infants in the age range of 0\u20136 months. The short-list of lipids significantly different (p < 0.01) between the HM and FM samples is presented in the accompanying table. The differences included MCTGs, whose content was higher in formula milk than in HM, especially saturated TGs with fewer than 40 carbon atoms in the fatty acid substituents. Additionally, HM samples contained a higher content of TGs containing LC-PUFAs, such as DHA C22:6, DPA C22:5 (in TG58:7) and ALA C18:3 (in TG50:4), compared with FM. These differences were the highest between colostrum and FM dedicated to infants aged 0\u20136 months, but also between HM collected after 12 months postpartum and the corresponding FM . In further lactation stages, the largest difference between HM and FM was observed in the content of TGs containing both MCFAs and LCFAs in their structure. Moreover, a difference was observed in the fatty acid substituents of TG 50:2 between the HM and FM samples. Different lipid patterns were also observed for SMs, with statistically significant (p < 0.01) differences between the studied groups of FM and HM samples concerning the lactation stages. In particular, a higher peak area fold change was observed between samples of growing-up FM and HM samples collected after 1 year. Considering differences between colostrum and FM, the greatest differences corresponded to SMs with an LCFA residue attached to the d18:1 sphingoid base, such as SMd38:1 (fc = 40), SMd44:2 (detected only in HM), and SMd42:2 (fc = 24). In HM, the most abundant SM was SMd42:2 (15\u201334%), while it comprised, on average, only 2\u20134% of the total SMs in FMs. SMd39:1, comprising 11\u201315% of the SM fraction in soy-supplemented FM, was detected in HM samples in a much lower amount (0.05\u20130.69%). These lipid species were also statistically significantly different. A diversified lipid pattern was also observed for PCs. In HM samples, the percentage relative amounts of PC34:1 and PC32:0 were highest in colostrum and decreased in further stages of lactation, along with an increase in the percentage relative amounts of PC36:2 and PC34:2. In FM samples, the percentage contribution of these components was almost equal in all FMs regardless of the age range target.
Regarding these lipids separately, the content of some PCs was statistically significantly higher in FM than in HM, whereas that of others was lower. The FM samples contained lipid species not detected in HM, such as TG66:18 or PI34. On the other hand, HM contained lipids that FM lacks or has at very low levels, such as PE-O36:5 and PE-O38:5, ether analogues of TGs, and some LC-PUFA TGs, SMs and PIs. Human milk is considered an ideal source of nutrients and bioactive compounds for a new-born, and it remains a high-value liquid in further stages of lactation. FMs are designed to mimic human milk composition; therefore, various sources of lipids are added to FM to mimic the chemical composition and nutritional function of HM, such as vegetable, fish and microorganism oils (e.g., Schizochytrium sp.), but also whole caprine milk. The detailed lipid compositions of some of these oils, in terms of the contributions of various lipids within specific lipid classes, are not known in the literature, and predicting the FM lipid composition is not possible based on the list of ingredients. This raises a question about the similarity of the lipid composition of FM, its bioavailability, fate in the digestive system, and biological functions compared with HM, and how it affects infants. Evaluation of the impact of feeding type on an infant\u2019s growth and development should be performed using knowledge about the chemical composition of the food. Therefore, the knowledge gap regarding the molecular composition of HM and FM should be filled. This should also concern lipid composition, because the biological roles of these lipid components are of great importance for proper infant growth and development. Analysis of the lipid composition of the HM samples collected at various stages of lactation showed the most significant difference between colostrum samples and the remaining HM samples. Samples collected from the first to the 19th month of lactation showed no clear pattern according to child age as designated by FM dedication, namely 0\u20136 months, 6\u201312 months and >12 months. Individual differences were dominant among the analysed samples. Based on statistical tests, specific lipid species within a lipid class undergo different trends during lactation; however, we observed a contrasting trend for other molecules. Additionally, individual HM samples were characterised by a higher concentration of specific compounds than the other samples. This is meaningful when we consider the individual nutritional requirements of growing and developing infants. The intra-population variance of the HM samples can be determined by many factors, including maternal diet; the influence of maternal diet on the HM fatty acid composition has been very well documented. Especially alarming is the distinct lipid composition of FM compared with colostrum and with mature milk obtained during advanced lactation (after 1 year postpartum), as well as the lack of some lipid compounds in FM. This can possibly lead to a loss of intake of biologically important lipids by infants fed with starting formula milk, and at ages after 12 months, when only a small number of children are still breastfed. Our results also proved that, at advanced lactation (after 1 year postpartum), human milk is still rich in lipid species.
This includes easily digested and accessible sources of energy\u2014triacylglycerols\u2014and also lipid species containing LC-PUFAs, such as arachidonic acid, linolenic acid, docosahexaenoic acid, docosapentaenoic acid, eicosatetraenoic acid and adrenic acid (C22:4). LC-PUFAs are fatty acids known for their great nutritional value because they are important components of the brain and retina; for example, the concentration of DHA in an infant\u2019s brain increases until at least 2 years of age. On average, the most abundant TGs in the collected HM samples were as follows: TG52:2 (18:1-16:0-18:1), TG 52:3 (18:1-16:0-18:2), TG 50:2 (18:1-16:0-16:1) and TG 54:3 (18:1-18:1-18:1). This is a slightly different composition from that reported in other studies of the composition of TGs in HM. In contrast to HM, differences in the lipid composition within the FM samples were generally brand-specific and linked to the fat source contained. The analysed FM samples contained a distinct TG pattern compared with human milk. All of the studied FMs contained a higher content of MCTGs and SCTGs than HM; however, milk formula with the addition of whole caprine milk was characterised by a higher content of these compounds than FM with other sources of fat. MCTGs and SCTGs are added to formula milk to directly provide energy because of the lower level of lipase in new-born babies. Another important aspect of HM and FM compositional differences is the positional distribution of fatty acids in triacylglycerols: in HM, palmitic acid is located primarily at the sn-2 position and unsaturated fatty acids primarily at the sn-1,3 positions in the TG structure, enhancing fat and calcium absorption. One of the fat sources detected in FM, described as rich in DHA, is oil derived from Crypthecodinium cohnii. The composition of LC-PUFAs in TGs and the level of specific TGs in FM are distinct from those observed in HM; therefore, the bioavailability and digestibility, as well as the nutritional properties, of TGs contained in FM should be investigated. We also detected ether analogues of TGs in the HM samples (TG-O). Their content was statistically significantly higher in the colostrum samples than in the mature HM samples. Importantly, these compounds were not detected in many milk formula samples; if they were, their content was statistically significantly lower than that in HM. Information about these components in the literature is scarce. Recently, it was shown by Yu et al. that alkylglycerol-type (AKG-type) ether lipids are specific lipid signals of breast milk that are essential for healthy adipose tissue development. We also observed that formula milk is poor in the ether analogues of glycerophosphoethanolamines. In many formulas, the concentration was below the detection limit; in other formulas, the content was significantly lower than that in HM. The PE-Os detected in HM contain LC-PUFAs in their structure, especially 20:4. This is in accordance with previous studies by Moukarzel et al. showing that human milk plasmalogens are highly enriched in long-chain PUFAs. The different positional distribution of LC-PUFAs between TGs (sn-1 and sn-3 positions) and phospholipids (LC-PUFA in the sn-2 position) raises speculation about a possibly dissimilar role of LC-PUFAs in these two categories of lipids. Our study showed that HM also contains a higher concentration of other glycerophospholipids containing LC-PUFAs than FM, which in turn contains a higher content of saturated phospholipids than HM.
Although these LC-PUFAs are provided to the child by FM in the TG structure, TGs and phospholipids might not be equivalent dietary sources of PUFAs, given their different metabolic FA handling. In particular, it was shown that brain accretion of AA and DHA was more effective for dietary phospholipids containing AA and DHA than for TGs. Notably, we observed that phospholipids containing LC-PUFAs are present in HM at all lactation stages. Interestingly, the content of many lipids known to be involved in brain and neurological system development, such as PCs and SMs, was higher in mature milk obtained during the advanced stage of lactation (after 1 year postpartum) than at earlier stages of lactation. For these compounds, the difference in content between HM and FM was higher after the age of 1 year, because the content of these compounds in FM was almost the same across the various age targets. The higher content of SMs in HM in the advanced lactation stage may be associated with the decreased number of HM feeds after complementary food is introduced, so that a sufficient amount of these lipid compounds is still provided with HM. Differences in the content of phospholipids\u2014not only GPs but also SMs\u2014between FM and HM may relate to the differences in neurocognitive development between formula-fed and breastfed infants. In this study, we employed a lipidomic approach to comprehensively and semi-quantitatively compare the lipid composition of HM at various lactation stages and of FM with different age range targets. The results of our study clearly showed that human milk and formula milk differ within all milk lipid classes. Whereas HM lipid composition varies individually between mothers and with the stage of lactation, FM lipid content differs according to the fat source and brand used. Human milk lipid components change over the course of lactation; however, no single trend can be indicated because this process is lipid species-specific. We also observed a higher content of lipids related to neurodevelopment in samples collected after 1 year postpartum than at the earlier lactation stages. We did not observe similar changes in formula milk samples. HM contains different contributions of lipid species within specific lipid classes than FM and contains a higher content of many biologically important lipids, such as ether analogues of glycerophosphoethanolamines. Detailed knowledge about changes in the lipid composition of HM and FM, including a long-term perspective, is required to investigate the impact of lipid components on child health and development and to move from standardised nutritional protocols to tailored, individualised nutrition in infants."}
+{"text": "Nonalcoholic fatty liver disease (NAFLD), with pathogenesis ranging from nonalcoholic fatty liver (NAFL) to the advanced form of nonalcoholic steatohepatitis (NASH), affects about 25% of the global population. NAFLD is a chronic liver disease associated with obesity, type 2 diabetes, and metabolic syndrome, and is the fastest-growing cause of hepatocellular carcinoma (HCC). Although advanced progress has been made in exploring the pathogenesis of NAFLD and potential therapeutic targets, no therapeutic agent has been approved by the Food and Drug Administration (FDA) in the United States. Gut microbiota-derived components and metabolites play pivotal roles in shaping intrahepatic immunity during the progression of NAFLD or NASH.
With the advance of techniques such as single-cell RNA sequencing (scRNA-seq), each subtype of immune cells in the liver has been studied to explore its role in the pathogenesis of NAFLD. In addition, new molecules involved in gut microbiota-mediated effects on NAFLD have been found. Based on these findings, we first summarize the interaction of diet-gut microbiota-derived metabolites with the activation of intrahepatic immunity during NAFLD development and progression. Treatment options targeting the gut microbiota and important molecular signaling pathways are then discussed. Finally, ongoing clinical trials are selected to present the potential application of treatments against NAFLD or NASH. Nonalcoholic fatty liver disease (NAFLD) is the most common chronic liver disease worldwide, affecting about 25% of the global population. Factors such as lipotoxicity and inflammation can drive NAFLD progression to NASH and end-stage liver disease. Diet plays an important role in modulating the gut microbiota and metabolic pathways in the development of NAFLD. Consumption of certain diets can increase the abundance of Lactobacillus and Bifidobacterium spp., resulting in amelioration of adipose tissue inflammation in obesity and NAFLD. Changes in intestinal or hepatic metabolites impact intrahepatic immune cell profiles, as well as the expression of proinflammatory cytokines and chemokines in the fatty liver. Innate immunity plays an essential role in NAFLD and NASH pathogenesis. For example, the frequency of macrophages was increased in the NASH liver of amylin liver NASH (AMLN) diet-fed mice compared with that in standard chow diet-fed mice. However, the role of metabolites derived from the diet-gut microbiota interaction in modulating the intrahepatic immune response remains to be explored. A better understanding of the underlying molecular mechanisms would help to find new therapeutic targets for NAFLD or potential diagnostic markers. For this purpose, a search was conducted in PubMed, Web of Science, Google Scholar, and Embase with keywords including NAFLD or NASH, metabolite, gut microbiota, signaling pathway, and immune response over the last five years. The originally retrieved publications were independently reviewed by two authors. The inclusion criteria were (1) the study contained at least three keywords, and (2) animal or human studies. Exclusion criteria were (1) abstracts or unpublished studies, and (2) studies with findings similar to another included study. All remaining studies were carefully reviewed by the authors, and representative findings from the last five years were selected. A few supporting studies prior to this period were added to explain the underlying mechanisms. In this review, we first summarize the latest findings on metabolites implicated in the development of NAFLD, as well as the progression to NASH. Then, we investigate the underlying cellular and molecular mechanisms of these metabolites in hepatic immunity in animal models of NAFLD or NASH and in clinical samples. Finally, we summarize the currently ongoing clinical trials evaluating potential therapeutic agents that target key molecules or proteins for NAFLD and NASH treatment. LPS, a major component of the Gram-negative bacterial cell membrane, plays a pivotal role in the pathogenesis of mouse and human NAFLD via the Toll-like receptor 4 (TLR4) signaling pathway. Plasma amino acids (AAs), such as glutamate and valine, are shown to increase in NAFLD patients with or without obesity compared to non-NAFLD controls.
Bile acids (BAs), consisting of primary and secondary BAs, play important roles in NAFLD pathogenesis by modulating hepatic lipid and glucose metabolism. Choline can be metabolized by the gut microbiota to trimethylamine (TMA), which is absorbed in the liver and further converted to trimethylamine N-oxide (TMAO) by flavin-containing monooxygenase 3 (FMO3). Excessive consumption of alcohol causes alcoholic fatty liver disease (AFLD), but endogenous ethanol produced by the gut microbiota can also impair mitochondrial function and promote NAFLD development. Gavage of high-alcohol-producing bacteria (Klebsiella pneumoniae) to mice can increase ethanol production, increase liver injury, and impair mitochondrial function, indicating a causative factor for NAFLD. Thus, endogenous ethanol contributes to NAFLD. Dietary fibers (DF) consist of carbohydrate polymers resistant to digestive enzymes in the small intestine, which can be digested by bacteria in the large intestine. Fermentation of DF can impact the diversity of the gut microbiota. For example, a meta-analysis revealed that DF intervention can increase the abundance of the Bifidobacterium and Lactobacillus genera compared to placebo or low-fiber consumption, which is associated with a high concentration of butyrate in feces. SCFAs, consisting of acetate, propionate, and butyrate, are produced by the gut microbiota from dietary fibers and starch. They play important roles in energy metabolism, tissue homeostasis, and immune regulation. Here, we discuss their roles in the pathogenesis of NAFLD. Oral administration of branched-chain amino acids (BCAAs), including leucine, isoleucine, and valine, significantly increased the abundance of gut Ruminococcus flavefaciens and the portal acetic acid concentration, resulting in a reduction in hepatic fat accumulation. A randomized controlled trial showed that dietary supplementation with inulin, which is mainly metabolized into acetate in the colon, increased intrahepatocellular lipid. In contrast, dietary supplementation with inulin-propionate ester is designed to deliver propionate to the colon and to attenuate the acetate-mediated increase in intrahepatocellular lipid. Supplementation with grape polyphenols reduced Western diet (WD)-induced adiposity and hepatic steatosis in mice by increasing the abundance of Akkermansia muciniphila and butyrate, and sugar expenditure in the distal intestine. Overall, dietary metabolites and metabolites derived from the gut microbiota impact the progression of NAFLD and NASH. The intrahepatic immune response plays an essential role in the progression of NAFLD/NASH. Gut microbiota-derived metabolites and components circulating in the portal vein system can enter the liver and modulate intrahepatic immunity to impact NAFLD. This process involves complicated communication among different liver non-parenchymal cells, including macrophages, monocytes, T cells, B cells, neutrophils, and HSCs. The composition of liver macrophages was altered in mice fed a high-fat high-sucrose diet (60% fat and 10% sucrose), with a decrease in liver-resident macrophage Kupffer cells (KCs) and an increase in monocyte-derived macrophages (MdMs) detected by single-cell RNA sequencing (scRNA-seq).
A subset of these MdMs displayed a distinct phenotype in the NASH liver. The number of natural killer (NK) cells was increased in a methionine- and choline-deficient diet (MCD)-induced mouse NASH liver via C-X-C motif chemokine ligand (CXCL)10/chemokine receptor (CXCR)3 signaling. CD56bright NK cells decreased among intrahepatic lymphocytes in NAFLD patients, while CD56dim NK cells increased compared with healthy controls, indicating the complex roles of each subtype of NK cells in NAFLD. Certain NK cell subsets can inhibit the progression of NASH and liver fibrosis by suppressing the expression of profibrogenic genes as well as the M2 polarization (anti-inflammatory phenotype) of liver macrophages. Activation of invariant natural killer T (iNKT) cell subsets was shown in choline-deficient L-amino acid-defined HFD (CDAHFD)-induced murine NASH, accompanying the accumulation of plasmacytoid dendritic cells (pDCs). In addition, Clostridium spp.-induced secondary bile acids (sBAs) activated liver sinusoidal endothelial cells (LSECs) to produce the chemokine CXCL16, attracting the accumulation of hepatic CXCR6+ NKT cells. CD1d-deficient mice, which lack NKT cells, have been used to probe the overall contribution of NKT cells to NASH. Neutrophils are among the first responder cells recruited to the injury site to participate in the inflammatory response and tissue repair. Neutrophil depletion with the antibody 1A8 (200 \u03bcg/mouse per week, four times) reduced body weight gain and attenuated liver lipid accumulation, with activation of lipid \u03b2-oxidation, in HFD-fed mice compared with mice treated with an isotype control. Different subtypes of CD4+ T cells play different roles in NAFLD pathogenesis. Fatty acid composition can modulate the frequency of CD4+ T cell profiles in PBMCs of NAFLD patients, with an increase in CD25+CD45+CD4+ T cells and a decrease in PD1+CD4+ T cells. Obesity increased the accumulation of inflammatory hepatic CXCR3+CD4+ T helper 17 (Th17) cells and the concomitant expression of IL-17a, interferon (IFN)-\u03b3, and TNF-\u03b1, resulting in NAFLD progression. IL-17+CD4+ T cells were significantly increased in the liver during NAFL-to-NASH progression. Hepatic infiltration of Tregs was increased in CD62L-deficient mice, which was associated with less hepatic lipid accumulation, reduced liver fibrosis, and improved insulin resistance. Liver CD8+ T cells were increased in obese patients with NASH, which was associated with the expression of \u03b1-SMA, a marker of HSC activation. Depletion of CD8+ T cells reduced hepatic macrophages and \u03b1-SMA expression in obesity- or hyperlipidemia-induced NASH mice, but not in lean mice. RNA-seq has been used to characterize intrahepatic CD8+ T cells in mice with NASH, and loss of perforin increased CD8+ T cell accumulation and activation with the expression of proinflammatory cytokines, but not of CD4+ T cells and NK cells. Ex vivo studies revealed that microbiota-derived extracts from NAFLD-HCC patients can induce an immunosuppressive phenotype in human PBMCs, characterized by a suppression of CD8+ T cells and an expansion of Tregs. NAFLD thus promotes aberrant CD8+ T cell activation and suppresses their cytotoxicity toward tumor cells by inducing immune tolerance. Fecal microbiota transplantation (FMT) of gut microbiota from human NAFLD patients into recipient mice can accelerate NASH progression via inducing the accumulation and activation of liver B cells.
ScRNA-seq has further characterized these intrahepatic B cells in NASH. Activation of HSCs, the major cells contributing to liver fibrosis, is mediated by the activation of intrahepatic immunity during NASH. For example, proinflammatory cytokines such as TNF-\u03b1, transforming growth factor (TGF)-\u03b21, and IL-1\u03b2 expressed by intrahepatic macrophages can activate HSCs to promote the progression of liver fibrosis and NASH. In contrast, CD8+ T cells can trigger apoptosis of activated HSCs via Fas/FasL-mediated signaling. Therefore, intrahepatic immune cells can both drive and restrain fibrogenesis. The recruitment of immune cells into the fatty liver plays a critical role in the pathogenesis of NAFLD/NASH. Chemokines and their receptors are the key factors involved in the recruitment process. For example, the CCL2/CCR2 and CXCL9/10/CXCR3 signaling pathways are involved in the migration of myeloid cells and T cells. Hepatic accumulation of integrin \u03b14\u03b27+CD4+ T cells was positively associated with hepatic steatosis, inflammation, and fibrosis via its ligand mucosal addressin cell adhesion molecule 1 in Western diet (WD)-fed mice. A soluble form of L-selectin/CD62L was dramatically increased in the liver of patients with NASH, whereas CD62L-deficient mice showed dampened NASH features compared with wild-type mice, including less hepatic lipid accumulation, reduced liver fibrosis, and improved insulin resistance. Hepatic infiltration of macrophages in HFD-induced NAFLD mice was associated with an increase in hepatic Runx2 expression. Many treatment agents have been tested in preclinical animal studies for the treatment of NAFLD or NASH with promising effects, including modulation of the gut microbiota, FXR modulators, targeting of chemokines and their receptors, anti-inflammatory or antioxidant agents, and modulation of fibroblast growth factors (FGFs) and microRNAs (miRNAs). A prospective cohort in Japan showed that the prevalence of NAFLD and NASH was 82.4% and 77.5%, respectively, in morbidly obese patients; BS treatment is therefore a relevant option in this group. FMT has been tested as a therapeutic strategy to prevent and treat different diseases associated with gut microbiota dysbiosis. FMT is a procedure to transfer healthy donor stool into the gastrointestinal tract of a patient in order to restore the balance of the gut microbiota. For example, FMT is an effective and safe treatment for the recurrence and reduction in severity of Clostridium difficile infection (CDI) induced by gut dysbiosis. In mice, FMT increased the abundance of Christensenellaceae and Lactobacillus and the intestinal tight junction protein ZO-1, and reduced hepatic lipid accumulation, proinflammatory cytokines, and the NAFLD activity score (NAS) in HFD-fed animals. Still, more clinical evidence is needed. Treatment with probiotics significantly ameliorated HFD-induced NAFLD in rats by decreasing the abundance of pathogenic bacteria and upregulating the bile acid receptor FXR/FGF15 signaling pathway. Administration of a probiotic mixture of Lactobacillus acidophilus (ATCC B3208), Bifidobacterium lactis (DSMZ 32269), Bifidobacterium bifidum (ATCC SD6576), and Lactobacillus rhamnosus (DSMZ 21690) for 12 weeks reduced liver injury and yielded a higher percentage of normal liver sonography compared with placebo treatment. Bacteroides uniformis (CBA7346), a strain isolated from the healthy human gut, can ameliorate liver injury, inflammation, and lipid accumulation in HFD-induced NAFLD mice via improving insulin resistance and regulating de novo lipogenesis-related proteins, such as fatty acid synthase (FAS) and peroxisome proliferator-activated receptor-gamma (PPAR\u03b3).
Clifford et al. showed that FXR activation in both mice and humans can specifically decrease the levels of monounsaturated fatty acids (MUFA) and polyunsaturated fatty acids (PUFA) in the liver; FXR agonists are therefore being explored for NASH. Chemokines and their receptors, such as CCL25 and CCR9, play important roles in the hepatic infiltration of macrophages and other immune cells in NAFLD/NASH, and blocking these axes reduced hepatic macrophages and the expression of the inflammatory cytokines IL-1\u03b2 and TNF-\u03b1. Treatment with aldafermin, an engineered analog of FGF19, markedly reduced serum BAs, specifically hydrophobic BAs such as DCA, lithocholic acid (LCA), glycodeoxycholic acid (GDCA), glycochenodeoxycholic acid (GCDCA), and glycocholic acid (GCA), in NASH patients. Natural polyphenols such as resveratrol, with anti-inflammatory and antioxidant properties, show potential efficacy against NAFLD. Administration of a hydro-alcoholic extract of spinach reduced the expression of the proinflammatory cytokine TNF-\u03b1 and enhanced the expression of PPAR-\u03b3 in the livers of NAFLD rats in both prevention and treatment phases. MicroRNAs (miRNAs) play important roles in regulating cell apoptosis, migration, and lipid metabolism during the development of NAFLD, which makes them potential therapeutic targets. In addition, adoptive transfer of anti-inflammatory M2c macrophages to NAFLD mice increased the serum level of high-density lipoprotein (HDL) and decreased the total NAFLD pathological score by reducing liver inflammation, cell death, and fibrosis. The treatment options discussed above are summarized in a figure. Many treatment agents have been tested in clinical trials for the treatment of NAFLD or NASH with promising effects, including chemokine receptor antagonists, FXR agonists, modulation of FGF, PPAR agonists, diet intervention, anti-inflammatory or antioxidant agents, and modulation of the gut microbiota. Representative trials with clinical results in the last 5 years before 11 October 2021 were selected. NAFLD is the most common chronic liver disease in the global population, and its incidence increases with the prevalence of obesity and T2D. Currently, NAFLD is the fastest-growing factor inducing primary liver cancer, HCC. However, there are no currently available FDA-approved treatments for NAFLD. Gut microbiota-derived metabolites and components play pivotal roles in the development and progression of NAFLD. Preclinical studies and clinical trials have evaluated potential treatment options for NAFLD and NASH, including synbiotics, omega-3, CCR2/5 antagonists, FXR agonists, and so on. A combined approach, such as medical treatment plus physical activity, could reduce the treatment time and improve the outcome. Although preclinical animal studies show the effects of pre-/probiotics and FMT, more clinical trials are needed to verify the efficacy of balancing the gut microbiota profile in patients with NAFLD/NASH. In the future, meta-omics, including metabolomics with bioinformatic analysis, should be applied to search for early diagnostic markers and therapeutic targets for NAFLD and NASH."}
+{"text": "Non-alcoholic fatty liver disease (NAFLD) is one of the most frequent causes of chronic liver disease in the Western world, probably due to the growing prevalence of obesity, metabolic diseases, and exposure to some environmental agents. In certain patients, simple hepatic steatosis can progress to non-alcoholic steatohepatitis (NASH), which can sometimes lead to liver cirrhosis and its complications, including hepatocellular carcinoma.
Understanding the mechanisms that cause the progression of NAFLD to NASH is crucial to controlling the advancement of the disease. The main hypothesis considers that it is due to multiple factors acting together on subjects genetically predisposed to NAFLD, including insulin resistance, nutritional factors, gut microbiota, and genetic and epigenetic factors. In this article, we discuss the epidemiology of NAFLD, and we provide an overview of several topics that influence the development of the disease from simple steatosis to liver cirrhosis and its possible complications. NAFLD has become, in recent years, one of the most common liver diseases in the world. It covers different liver disorders characterized by the accumulation of fat (hepatic steatosis) in more than 5% of hepatocytes, primarily in the absence of significant alcohol consumption. The term NAFLD has undergone an evolution throughout history, as knowledge about the disease and diagnostic methods have advanced. Recently, a change of nomenclature has been proposed. NAFLD can present in different forms, from a simple accumulation of fat (a metabolic disorder that does not present symptoms) to symptomatic non-alcoholic steatohepatitis (NASH). NAFLD is asymptomatic in most cases and is associated with obesity and characteristics of metabolic syndrome (MS), as mentioned before. NASH, in contrast, involves inflammation and hepatocyte injury and can progress to fibrosis and cirrhosis. The prevalence of NAFLD evolves in parallel with obesity and varies among countries and ethnicities. Globally, it has been estimated at around 25% in the general population, increasing in groups with metabolic risk factors. The number of patients in the general population with NAFLD who develop NASH is unknown, but it is greater than 10% of the overall NAFLD population. Complex interactions among environmental factors, metabolism and demography, genetic variants, and gut microbiota are involved in the pathogenesis of NAFLD. Currently, different studies have shown a strong association between NAFLD and each of the risk factors associated with MS, especially obesity. Obesity is considered the main risk factor for NAFLD, since body mass index (BMI) and waist circumference correlate positively with both the presence of NAFLD and disease progression. Nevertheless, a proportion of patients with NAFLD and a normal body mass index represent the already recognized lean or non-obese NAFLD. This condition lacks the complications associated with obesity, so it might be expected to be less severe than obese NAFLD, but the data are not conclusive. Lipotoxicity and glucotoxicity play a crucial role in the development of simple steatosis in the liver and its progression to NASH. In this regard, a diet high in fat and carbohydrates, which obese patients are more prone to follow, favors fat deposition in the liver through different mechanisms, including mitochondrial defects and endoplasmic reticulum and oxidative stress, as mentioned. MS has multiple definitions, but some of its agreed manifestations are increased waist circumference, hyperglycemia, dyslipidemia, and systemic hypertension. In MS patients, the capacity of insulin to inhibit glucose production is reduced, resulting in mild hyperglycemia, which in turn stimulates insulin secretion, causing a state of hyperinsulinemia. Insulin normally decreases the amount of VLDL, by suppressing its hepatic production or by inhibiting lipolysis of adipose tissue; in patients with insulin resistance, these brakes are lost, favoring hepatic lipid accumulation. T2DM has a strong relationship with the progression of NAFLD; in fact, more than 50% of patients with T2DM have NAFLD.
Diabetes and NAFLD thus frequently coexist and aggravate each other. IR, on the other hand, is considered one of the critical cellular abnormalities that cause the development of both NAFLD and T2DM, and it has been recognized as an integral component of NAFLD pathogenesis, worsening as the disease progresses. It has been increasingly recognized that inflammatory pathways are critically involved in IR development; however, it is still unclear when the inflammatory processes begin. The human gut microbiome is made up of 10\u2013100 trillion microorganisms, mainly bacteria, a number about ten times higher than the number of eukaryotic human cells. The predominant bacterial phyla are Bacteroidetes and Firmicutes, while the predominant prokaryotic (archaeal) microorganisms are Euryarchaeota. Recent studies have shown that the gut microbiota affects hepatic carbohydrate and lipid metabolism and also influences the balance between pro-inflammatory and anti-inflammatory effectors in the liver, which directly affects NAFLD and its progression to NASH. Several experiments based on manipulation of the microbiome provide the most important evidence on the role of the gut microbiome in obesity and NAFLD in mice and humans. In an interesting study, germ-free mice fed a high-fat diet exhibited lower levels of liver lipids compared with conventionally housed mice. NAFLD and NASH in humans tend to coincide with the existence of obesity and poor dietary habits, which makes it difficult to differentiate the effects of diet from those produced by the altered microbiome, and from the metabolic changes that accompany both in liver disease. The abundance of some bacterial taxa in humans, such as Proteobacteria or Bacteroides, has been associated with NAFLD. For example, the microbiota of obese children with NAFLD was enriched in Gammaproteobacteria (phylum Proteobacteria) and Prevotella compared with the microbiota of obese children without NAFLD. It is notable that disease progression has been associated with an increase in Proteobacteria and a decrease in Firmicutes, suggesting that the gut microbiome may not be stable during disease progression. Taken together, these studies show that there may be correlations between intestinal bacterial composition and NAFLD or NASH. However, these observations are limited by the lack of reproducibility of the studies and the absence of a mechanism to explain their effects on NAFLD and NASH. Furthermore, in most studies, the microbiome is sampled from stool samples, and the bacterial composition differs from the communities present in the more proximal areas of the intestine. The interaction between the genetic status and environmental factors may explain part of the inter-individual variability observed in the manifestation of the phenotype and severity of NAFLD. Unbiased genetic and epidemiological studies show strong evidence for the heritability of characteristic traits of NAFLD and have identified gene loci associated with the progression of the disease. Several clinical studies using family members demonstrate that first-degree relatives are at higher risk for NAFLD, suggesting a genetic predisposition to the disease. Epidemiological studies describe inter-individual differences in NAFLD prevalence depending on ethnicity: Hispanic Americans have the highest prevalence of NAFLD, followed by Americans of European descent, with African Americans having the lowest prevalence.
The first genome-wide association study (GWAS) performed in NAFLD showed that the contribution of ancestry to the observed differences in the accumulation of hepatic fat content and susceptibility to NAFLD can be partly explained by the genetic status of the individuals. Since then, different variants of genes implicated in the cellular metabolism of lipids in the liver have been defined as genetic risk factors for NAFLD. The most relevant loci affecting NAFLD are PNPLA3, TM6SF2, GCKR, MBOAT7, and HSD17B13. PNPLA3 encodes an enzyme that is expressed mainly in adipose tissue, the retina, and the liver. PNPLA3 has been widely studied in NAFLD since the missense variant rs738409 C>G, encoding the I148M allele of PNPLA3, was first discovered in 2008 by GWAS and reported to explain most of the genetic contribution to hepatic triglyceride accumulation and the tendency to NAFLD in patients of different ethnicities. Functional in vitro studies to characterize the biological relevance of the PNPLA3 I148M variant in NAFLD showed that PNPLA3 I148M is a gene variation that causes loss of function. However, studies with knockout (KO) mice revealed that lack of the PNPLA3 protein did not affect the accumulation of fat in the liver, IR, or the levels of liver enzymes. Interestingly, knock-in and overexpression of the variant PNPLA3 I148M in the liver of mice on a high-fat diet reproduced the NAFLD phenotype observed in humans. TM6SF2 is a protein that is mainly expressed in the liver and small intestine, where it regulates the intracellular trafficking and secretion of VLDL and cholesterol. The relevance of this locus in NAFLD was established in 2014 by an exome-wide association study that identified the missense variant rs58542926 C>T, which encodes the mutant TM6SF2 E167K and is associated with hepatic steatosis. GCKR is expressed in the liver of vertebrates and encodes a protein that acts as an allosteric inhibitor of glucokinase (GCK), an enzyme responsible for blood glucose homeostasis. GCK is activated by increased levels of glucose in the portal vein and catalyzes the beginning of the glycolytic pathway by phosphorylating the glucose that enters the cell. The loss-of-function variant rs1260326 C>T SNP, which encodes the P446L protein of GCKR, is related to steatosis in the liver and risk of NAFLD, even in obese children and adolescents, and to NASH. Mechanistic studies using recombinant human GCKR wild-type and P446L proteins showed that physiological levels of F6P failed to activate P446L GCKR, resulting in increased activity of GCK; the consequence is enhanced glycolysis and hepatic lipogenesis. MBOAT7 gene expression yields the enzyme lysophosphatidylinositol (LPI) acyltransferase, an endomembrane protein involved in the metabolism of lipids in the liver that catalyzes the acyl-chain remodeling of phosphatidylinositols. A GWAS first identified the MBOAT7 variant rs641738 C>T as a risk locus in the pathogenesis of cirrhosis driven by alcohol; later studies associated this variant with NAFLD. HSD17B13 encodes a retinol dehydrogenase, which is involved in steroid hormone signaling as well as bile acid and lipid metabolism.
Recently, it has been described that several HSD17B13 polymorphisms, including rs72613567, rs143404524, and rs62305723, have a protective role in liver injury, as they have been associated with reduced serum ALT and AST levels, a diminished risk of liver damage, progression toward HCC, and liver-related mortality in alcoholic and nonalcoholic liver disease. Although the causal role of certain loci in NAFLD pathogenesis is undeniable, it is becoming clear that the rising prevalence of NAFLD cannot be explained exclusively by the contribution of environmental and genetic factors. Epigenetics, which constitutes the reversible and heritable change in gene expression without modification of the underlying nucleotide sequence, serves as a mechanistic bridge in this phenomenon. We can find examples of epigenetics underlying NAFLD as early as embryo development: maternal obesity, diabetes, or Western diet consumption can lead to epigenetic changes that predispose the offspring to fatty liver. The relationship between DNA methylation and other metabolic diseases has been extensively studied. Although the net effect seen in NAFLD is hypomethylation, several interesting examples of hypermethylated genes with reduced expression can be found. Insulin-like growth factor-binding protein (IGFBP)-2 is often repressed via methylation in patients with NAFLD and NASH. Another prominent example of a hypermethylated gene in NAFLD is Peroxisome Proliferator-Activated receptor gamma coactivator (PGC)-1\u03b1, a master regulator of various aspects of energy metabolism, especially fatty acid oxidation and mitochondrial biogenesis, which are features involved in the pathogenesis of fatty liver. Patients with NAFLD have decreased PGC1\u03b1 expression due to promoter methylation, which correlates with mitochondrial defects and IR. Unlike DNA methylation, histone modifications are less well understood in the context of fatty liver disease. Sirtuin 1 (SIRT1), on the other hand, is a histone deacetylase that has traditionally been related to hepatic metabolic regulation. miRNAs are known to regulate multiple biological pathways involved in the pathogenesis of NAFLD, such as lipid uptake, de novo lipogenesis, lipid oxidation and hepatic lipid export, apoptosis, cell proliferation, and fibrosis. miR-33a and miR-33b, for example, negatively regulate the levels of ATP-binding cassette transporter 1 (ABCA1), which controls high-density lipoprotein biogenesis, thereby promoting high levels of circulating VLDL and triglycerides. NAFLD and aging are known to be strongly correlated, with increasing age being one of the strongest epidemiological factors for NAFLD, NASH, and fibrosis. Unlike what happens with age, the influence of sex on the prevalence and course of NAFLD is not so clear. Available studies are divided between both sexes: some have shown a female preponderance, while others have shown a male preponderance in the prevalence of NAFLD. In recent years, NAFLD has become a biological marker of social affluence and a sedentary lifestyle. Numerous studies have suggested that dietary composition may predispose people to NAFLD. In developed societies, human nutrition has undergone drastic changes in recent decades that, combined with a decrease in the population\u2019s physical activity, have triggered a marked imbalance between energy consumption and energy expenditure. Leslie T. et al.
One of the underlying mechanisms that could lead to the onset of NAFLD in this scenario might be epigenetics, as mentioned before; this mechanism could also be responsible for the IR that this population suffers [217]. Several recent studies have reported a relationship between smoking and accelerated disease progression and advanced fibrosis in NAFLD [228,229]; Zein et al. demonstrated this association between cigarette smoking and advanced hepatic fibrosis in patients with NAFLD. According to some epidemiological studies, exposure to environmental particles is positively associated with increased morbidity and daily mortality caused by diseases closely related to life habits, including ischemic heart disease and T2DM. PM2.5 can affect NAFLD in different ways, and long-term exposure to fine particulate matter (PM2.5) may be an important risk factor for NAFLD progression. PM2.5 may also prompt the expression of proinflammatory factors in adipocytes [243,244]. In 2009, Sun et al. found that mice exposed to PM2.5 for more than 30 min showed an abnormal liver insulin signal, with altered expression of endothelial NOS and protein kinase C. More recently, Yin et al. demonstrated that PM2.5 exposure could inhibit the expression of PPARα and PPARγ, inducing hepatic steatosis, inflammation, and IR. According to numerous studies, oxidative stress is the main cause of the cellular damage induced by PM2.5, acting either directly on the structure and function of biological macromolecules such as DNA, lipids, and proteins, or indirectly through the activation of intracellular oxidant pathways [250,251]. Despite these results on the inflammatory and oxidative properties of air pollutants, the scarcity of data supports the need for more studies on the effects of environmental factors, notably air pollution, on NAFLD. NAFLD is a growing health problem worldwide. Recently, an international expert group agreed to change the name of NAFLD to metabolic (dysfunction) associated fatty liver disease (MAFLD) [253]."}
+{"text": "Non-alcoholic fatty liver disease (NAFLD) is a fast-spreading epidemic across the globe and has serious implications far beyond that of a “benign” liver condition. It is usually an outcome of ectopic fat storage due to chronic positive energy balance leading to obesity and is associated with multiple health problems. While the association with cardiovascular disease and hepatocellular cancer is well recognized, it is becoming clear that NAFLD carries with it an increased risk of cancers of extrahepatic tissues. Studies have reported a higher risk for cancers of the colon, breast, prostate, lung, and pancreas. Fatty liver is associated with increased mortality; there is an urgent need to understand that fatty liver is not always benign, and not always associated with obesity. It is, however, a reversible condition, and early recognition and intervention can alter its natural history and associated complications. Non-alcoholic fatty liver disease (NAFLD) is one of the commonest indicators of ill-health in the modern era of obesogenic lifestyle, next to obesity itself [2]. There are multiple causes for fatty liver, but the term NAFLD is conventionally restricted to that related to nutritional causes, obesity, and metabolic syndrome. Causes of fatty liver can also be genetic, drug-induced, or surgery-related. Epidemiology: The prevalence of NAFLD is disturbingly high across the world, varying between 25% and 35% globally. 
The highest prevalence is reported from South America (31%) and the Middle East (32%), followed by Asia (27%) and the USA (24%), while the prevalence is lowest in Africa (14%) [14]. Recent figures suggest that more than 50% of the Omani population is overweight, and 30% have a body mass index (BMI) of more than 30 kg/m². Surprisingly, in a study of HCC diagnosed in Oman between 2008 and 2015, of 284 patients, only two of 227 patients with cirrhosis (0.9%) were reported to have a cryptogenic etiology. The relationship between hyperinsulinemia and NAFLD is a complex, “chicken and egg” situation. While the events that spark the inflammation that converts steatosis to steatohepatitis have been comprehensively reviewed, what causes them remains incompletely understood. Insulin is an anabolic, fat-storage hormone, and hyperinsulinemia results in varying degrees of obesity depending on the fat storage capacity of the individual. It is clear now that hyperinsulinemia is one of the earliest manifestations of modern-day chronic calorie excess/adiposity and appears much before the onset of insulin resistance and T2DM. The hyperinsulinemic syndrome has other manifestations as well. NAFLD and extrahepatic cancers (EHC): It was conventionally thought that the earliest manifestation of NAFLD, the fatty liver (NAFL), is a benign condition and becomes a medical problem only when it progresses to NASH, and later to fibrosis, cirrhosis with decompensation, and/or HCC. However, recent evidence suggests that the finding of hepatic steatosis is not benign and has significant health consequences. In a prospective study from Rochester County, Minnesota, Allen et al. followed a large cohort of NAFLD patients, and in a long-term population-based cohort study from Sweden of 8,892 patients with biopsy-proven NAFLD, EHC was assessed; a meta-analysis by Mantovani et al. of 10 cohort studies and another meta-analysis of 26 studies have confirmed the association, and a decade-long study from China also showed an increased EHC risk. What is interesting is that obesity is not an essential requirement for EHCs associated with NAFLD. Obesity-related cancers (ORCs) are a recognized entity, yet in Allen's study the increased cancer risk was seen in NAFLD irrespective of obesity. Association is, of course, not causation, and the larger question is: does fatty liver directly cause cancer? Causality is difficult to establish, and there could be another proximate cause for both fatty liver and cancer. Reversal of NAFLD leads to a drop in the incidence of cancers, the most effective way being bariatric surgery. Mechanisms of carcinogenesis: The molecular links between obesity and cancer have been speculated on. The liver itself secretes a vast repertoire of hepatokines [70], some of which may mediate these effects. The mechanism(s) is much more than of academic interest, as blocking the proliferative/mutagenic signals should uncouple NAFLD from its remote effects such as EHCs. This should be an area of research priority, as it is unlikely that efforts to curb the incidence of NAFLD will bear fruit in the near future. Implications: Effect on Natural History: It is an intriguing question whether the presence of NAFLD alters the natural history of cancer. Some studies suggest that fatty liver increases the risk of liver metastasis (in lung cancer), but others have not confirmed this. Effect on Drug Response: Another evolving issue is the altered dynamics of immune checkpoint inhibitors (ICIs) in patients with NAFLD. 
ICIs themselves have been reported to cause NAFLD and alsoHigh-Risk WarningAt the general practitioner level, there should be increasing awareness that hepatic steatosis is not a benign condition; it should not be dismissed as an incidental finding in the ultrasound scan of the abdomen, as is done for renal or liver cysts. Firstly, about 20% of NAFL progress to NASH, and 40% of the latter go on to develop fibrosis; the risk of progression to NASH and fibrosis can be rapid, with NAFL progressing to fibrosis in an average of 14 years while NASH can in half that duration , as doctRecommendations for Screening?It has been suggested that patients with NAFLD should be candidates for targeted screening for EHCs . Three oManagementReversal by Weight LossReversal of NAFLD can either be by weight loss or by specifically targeting NAFLD. Weight loss through lifestyle modification (diet and exercise) is the most cost-effective intervention to reverse NAFLD ,89 but dTargeted ReversalSpecific treatment of NAFLD is in its nascent stages . It is cAn epidemiological study from China gives inSeveral agents that directly reverse NAFLD such as controlled-release mitochondrial protonophore (CRMP) are being studied . Two ratFinally, research is essential to discover the mechanisms linking NAFLD to EHCs so as to enable intervention. It is unlikely that we will achieve any form of control over the burgeoning obesity and NAFLD rates in near future . The preChronic positive energy balance overwhelms safe fat storage depots in subcutaneous WAT, leading to ectopic fat storage in unsafe sites such as VAT and liver. This causes insulin resistance, metabolic syndromes, and, as is increasingly clear, EHC. Clinicians and their patients should be made aware that fatty liver is just the tip of an iceberg; NAFLD is an indicator of an underlying plethora of metabolic disturbances. Aggressive measures must be instituted early to reverse it. Obesity and NAFLD will continue to rise and will overwhelm the health care system in a not-too-distant future. Current advice on the control of obesity such as lifestyle changes is a clear failure. It would be prudent to investigate the causative link(s) between NAFLD and EHCs with the ultimate aim of discovering methods to uncouple them."} +{"text": "Non-alcoholic fatty liver disease (NAFLD) is the most common chronic liver disease. Its worldwide prevalence is rapidly increasing and is currently estimated at 24%. NAFLD is highly associated with many features of the metabolic syndrome, including obesity, insulin resistance, hyperlipidaemia, and hypertension. The pathogenesis of NAFLD is complex and not fully understood, but there is increasing evidence that the gut microbiota is strongly implicated in the development of NAFLD. In this review, we discuss the major factors that induce dysbiosis of the gut microbiota and disrupt intestinal permeability, as well as possible mechanisms leading to the development of NAFLD. We also discuss the most consistent NAFLD-associated gut microbiota signatures and immunological mechanisms involved in maintaining the gut barrier and liver tolerance to gut-derived factors. Gut-derived factors, including microbial, dietary, and host-derived factors involved in NAFLD pathogenesis, are discussed in detail. 
Finally, we review currently available diagnostic and prognostic methods, summarise latest knowledge on promising microbiota-based biomarkers, and discuss therapeutic strategies to manipulate the microbiota, including faecal microbiota transplantation, probiotics and prebiotics, deletions of individual strains with bacteriophages, and blocking the production of harmful metabolites. Non-alcoholic fatty liver disease (NAFLD) is characterised by an excessive intrahepatic fat accumulation, i.e., steatosis, without significant alcohol consumption. Liver steatosis is defined as fat accumulation, in >5% of hepatocytes . NAFLD mThe first stage of alcoholic liver disease (ALD) is also characterised by hepatic steatosis. However, unlike NAFLD, the primary trigger of ALD, i.e., excessive alcohol consumption, is known and the disease is preventable. Ethanol probably does not play a prominent role in NAFLD pathogenesis but is discussed as one of the possible contributing factors. A detailed discussion of the role of the gut microbiota in ALD pathogenesis is beyond the scope of this review and has been discussed elsewhere .NAFLD is closely associated with many features of metabolic syndrome, including obesity, insulin resistance, hyperlipidaemia, and hypertension ,4 and inNAFLD is the most common chronic liver condition in the USA and Europe. Its global prevalence is rapidly increasing and is currently estimated at 24%. The highest rates are reported from the Middle East (32%) and South America (31%) and the lowest from Africa (14%) . The estNAFLD pathogenesis is complex and not fully understood. The current understanding is that NAFLD is caused by a complex interplay of environmental factors mostly dietary, gut microbiota disturbances, and host factors.This review discusses the involvement of the gut microbiota in the pathogenesis of NAFLD, focusing on factors that modulate microbiota composition and intestinal permeability. In addition, NAFLD-associated microbiota signatures, immunological mechanisms behind liver tolerance to gut-derived antigens, and the gut\u2013liver axis will be explained. Finally, we review advances in microbiota-based biomarkers and therapeutic interventions.Enterobacteriaceae family, a subgroup of Proteobacteria phylum, is frequently observed in many immune-mediated and metabolic diseases including NAFLD [Liver diseases, including non-alcoholic fatty liver disease (NAFLD), alcoholic liver disease (ALD), cirrhosis, and hepatocellular carcinoma are assong NAFLD ,12. The ng NAFLD .Diverse gut microbiota of each individual may endow the host with unique metabolic apparatus and the ability to adapt to changing environment and substrate availability. With decreasing microbial diversity during urbanisation/industrialisation this adaptability was partially lost as human gut microbiota gained new abilities aimed at sugar and xenobiotics processing ,15.Dysbiosis might be caused by host-derived factors such as genetic background, health status , and lifestyle habits or, even more importantly, by environmental factors such as diet , xenobiotics , or hygienic environment.Profound shifts in gut bacterial and fungal microbiota can be quickly achieved with shifts in macronutrients. These changes have significant physiological consequences, as diets rich in animal protein or simple sugars worsen the intestinal inflammation induced by dextran sulphate sodium. However, while the former increases proinflammatory tuning in gut monocytes the latter worsens the gut barrier function. 
In both cases, however, interactions between diet and microbiota are necessary for this deleterious effects as they fail to appear in germ-free condition or after transfer of the microbiota to naive mice ,17.The effect of food additives on gut microbiota has been long overlooked, but recently, several groups, including ours published data demonstrating that some human gut microbes are highly susceptible to food preservatives and thatHost-derived factors modifying gut microbiota load and composition are bactericidal fluids produced by gastric glands and the liver, i.e., gastric acid and bile, and antimicrobial molecules, such as defensins, lysozymes, and antibacterial lectins (Reg3\u03b3) produced by Paneth cells or SIgA produced by plasma cells .Dysbiotic microbiota can influence the host immune and metabolic systems and mucosal integrity via various mechanisms. Immune system-modulating mechanisms include the modulation of inflammasome signalling through microbial metabolites, the modulation of Toll-like receptor (TLR) and NOD-like receptor (NLR) signalling, the degradation of secretory IgA (SIgA), the shifting of the balance between regulatory and proinflammatory T cell subsets, direct mucolytic activity and others . MetabolWhether dysbiosis is a direct cause of NAFLD or merely reflects disease-associated changes in the host\u2019s immune and metabolic systems remains unclear. However, there is accumulating evidence from both preclinical and clinical studies suggesting that gut microbiota dysbiosis plays a key role in the initiation of the disease and its maintenance.The gut microbiota alterations associated with NAFLD are dependent on the clinical stages of the disease . The mosEnterobacteriaceae (family), Escherichia, Bacteroides, Dorea, and Peptoniphilus (genus) and decreased Rikenellaceae, Ruminococcaceae (family), Faecalibacterium, Coprococcus, Anaerosporobacter, and Eubacterium (genera) , (genera) . The NAFirrhosis as well irrhosis , T2DM [3irrhosis , or in iirrhosis or IBS [irrhosis . F. prauoperties . Anotherfibrosis , is assofibrosis . Interestococcus , in the Candida overgrowth [Unlike patients with ALD who have fungal dysbiosis characterised by decreased diversity and ergrowth , patientStaphylococcus and Acinetobacter can be cultured from venous blood of cirrhosis patients [Evidence of viable liver or circulating microbiota in NAFLD patients is limited. Nevertheless, bacteria of genera patients . These cpatients .The gut\u2013liver axis is a bidirectional communication through the biliary tract, portal vein, and systemic circulation. Liver-derived factors, such as bile acids, influence gut microbiota composition and function, and gut-derived products, either dietary or microbial, regulate bile acid synthesis and as well as glucose and lipid metabolism in the liver. The disruption of the gut\u2013liver axis by, for example, environmental factors inducing gut dysbiosis and/or increased intestinal permeability leads to proinflammatory changes in the liver, and its failure to regulate gut microbiota results in further disease progression . A comprThe key function of the barrier is to protect tissues and organs from harmful luminal contents, such as parasites, microorganisms, MAMPs, microbial metabolites, dietary antigens, or toxins while preserving nutrient absorption. The intestinal barrier consists of several functional elements. The physical barrier consists of commensal bacteria, mucins secreted by goblet cells, and the intestinal epithelium sealed with tight junction proteins. 
The immunological barrier includes components of cellular and humoral immunity. Humoral factors, such as antimicrobial peptides and SIgA, control the load and composition of the microbiota in the lumen. The major antimicrobial peptides are defensins, cathelicidins, resistin-like molecules, and lectins produced by specialised epithelial cells and Paneth cells. SIgA is secreted by plasma cells of the lamina propria, activated by antigen-presenting cells, and transported into the intestinal lumen by epithelial cells after binding to the polymeric Ig receptor (pIgR) . SIgA biThere is no consensus as to which factors are major contributors to increased intestinal permeability, however, there is accumulating evidence that environmental factors, especially an unhealthy diet characterised by low fibre, high sugar and HFCS content, and some food additives play a significant role. For example, chronic fructose consumption is associated with tight junction disruption and incrThe liver is evolutionarily programmed to tolerate low-level exposure to innocuous dietary and microbial antigens delivered via the portal vein. Liver tolerance is maintained by hepatic antigen-presenting cells (HAPCs), which include dendritic cells, liver sinusoidal endothelial cells (LSECs), Kupffer cells, and hepatic stellate cells ,57. AntiGut-derived factors involved in pathogenesis of NAFLD might originate in the diet, be a product of gut microbiota, or be host-derived. Dietary factors, such as ethanol, fructose, or choline, might act directly or after processing by microbiota. Microbiota-derived factors comprise microbial components, such as LPS, and products of microbiota metabolism. Host-derived factors are, for example, primary bile salts or mucin.The exposure of the liver to whole bacteria and/or their components is under normal/healthy conditions insignificant. However, if the gut barrier is disrupted due to the direct effects of dietary factors, such as ethanol or fructose, or indirectly due to gut dysbiosis, the liver is exposed to a significant microbial load. The increased exposure to microorganisms and their products, such as LPS, peptidoglycan, viral or bacterial DNA, or fungal beta-glucan, leads to the induction of proinflammatory changes. These components, collectively labelled microbe-associated molecular patterns (MAMPs), are then recognised by liver innate immune cells . The activation and long-term maintenance of inflammation leads to fibrosis, cirrhosis, or even HCC.The metabolites which are exclusively produced by gut microbiota were identified by comparing metabolomic profiles of germ-free or antibiotic-treated mice with conventional mice. The microbiota-derived metabolites implicated in NAFLD pathogenesis include the choline metabolite trimethylamine (TMA), the secondary bile acids deoxycholic acid (DCA) and lithocholic acid (LCA), and SCFA ,68,69.Fructose is a monosaccharide that is naturally present in fruits and honey. However, fructose is also a major component of high-fructose corn syrup (HFCS), a ubiquitous sweetener made from corn starch, and sucrose, a glucose-fructose disaccharide. Human exposure to fructose-containing sugars was historically very low. 
Two hundred years ago, the yearly per capita sugar consumption in Europe and the United States was about 8 kg; by 1900, the consumption had increased to almost 40 kg, and the current consumption of fructose-containing sweeteners including HFCS is close to 70 kg per person per year [Abundant evidence both from preclinical and clinical studies suggests a major role for fructose in the pathogenesis of NAFLD and also NASH . FructosImportantly, fructose also has direct and harmful effects on the liver. The unique metabolism of fructose which is distinct from glucose metabolism leads to ATP depletion ,74,75, tFor humans and many animals, choline is an essential nutrient with many functions. It is required for the synthesis of phosphatidylcholine, which in turn is required for the synthesis of cellular membranes and VLDL, which are responsible for the transport of triglycerides out of the liver . CholineHowever, under certain conditions, choline may play a negative role in NAFLD pathogenesis because specific subsets of gut bacteria can convert it into trimethylamine (TMA) ,86 whichCholine conversion by microbiota seems to be involved in NAFLD pathogenesis by at least two mechanisms. Firstly, it decreases choline liver bioavailability, which leads to the inefficient export of VLDL particles out of the liver, lipid accumulation, and liver inflammation . SecondlSCFAs are defined as fatty acids with fewer than six carbons and the most abundant SCFAs are acetate, propionate, and butyrate. They are generated by gut microbiota mainly from dietary nondigestible starch and fibre in the colon and both composition of individual\u2019s microbiota and source of nondigestible fibre determine which SCFAs are produced . They plOn the other hand, butyrate is a major source of energy for colonocytes, and the supplementation of tributyrin, a butyrate prodrug, as well as acetate and propionate, was shown to protect against diet-induced obesity, hepatic steatosis, and insulin resistance ,95.Published data on the immunomodulatory effects of SCFAs are mostly positive, i.e., favouring the induction of anti-inflammatory T cells. However, the evidence that SCFAs might, under certain conditions, induce proinflammatory T cells, such as Th1 and Th17 exists ,97. InteObviously, further research is needed to determine how significant the contribution of SCFA in NAFLD pathogenesis is and to discriminate whether the net effect is positive or negative. Additionally, it is important to discriminate between circulating and faecal SCFA levels as circulating SCFA are more directly linked to metabolic health .Klebsiella pneumoniae, are significant alcohol producers and that the NAFLD phenotype is transferable to experimental mice using FMT [Even healthy subjects with no alcohol consumption have low levels of blood ethanol , and gutsing FMT .NAFLD has been associated with increased luminal and serum levels of ethanol and its metabolites, acetate, and acetaldehyde. Additionally, children with an inflammatory form of NAFLD, NASH, have increased serum levels of ethanol compared to healthy children . HoweverIncreased levels of endogenous ethanol and its metabolites fuel the progression of the disease by several mechanisms. In the gut, ethanol and its metabolite, acetaldehyde, increase intestinal permeability by stimulating the production of inflammatory cytokines, decreasing the production of AMPs, and disrupting tight junctions. 
This leads to gut barrier dysfunction, the translocation of microbiota-derived molecules into the liver resulting in increased production of inflammatory cytokines and the induction of lipogenesis. In the liver, ethanol disrupts lipid metabolism by inducing fatty acid uptake and de novo lipogenesis, impairing fatty acid oxidation and inhibiting VLDL export . ContinuInterestingly, gut microbiota can also metabolise ethanol and protect the liver against alcohol-induced liver injury. This was well documented by a study in germ-free mice showing that the absence of microbiota leads to increased liver exposure to ethanol, increased expression of ethanol metabolising enzymes, and exacerbation of hepatic steatosis . TherefoThe primary bile acids (BAs), i.e., cholic and chenodeoxycholic (CDCA), are organic molecules synthesised in a multistep process from cholesterol in the pericentral hepatocytes and secreted in the form of glycine/taurine conjugated bile salts via the bile duct together with other bile components such as bilirubin phospholipid, cholesterol, amino acids, porphyrins, and xenobiotics, into the duodenum . The BAsSeveral preclinical and clinical studies associate the disturbance of bile acid metabolism with NAFLD. There are two well-studied mechanistic pathways implicated in the NAFLD pathogenesis. Firstly, a decreased signalling via the bile acid receptor farnesoid X receptor (FXR) due to an increased ratio of secondary to primary BAs (DCA/CDCA) leads to dysregulation of lipid and glucose metabolism ,108. AddThe primary source of SIgA in the intestine is intestinal plasma cells. However, plasma cells capable of producing microbiota-specific SIgA are also present in the liver . The livThe most common diagnostic and prognostic methods for NAFLD are blood tests, imaging methods, and liver stiffness assessment methods. Blood tests are used to detect elevated liver enzymes such as alanine aminotransferase (ALT) and aspartate aminotransferase (AST). Such laboratory abnormalities are often the first sign of liver dysfunction. The most common tests used to visualise the liver include ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI). MRI is the most sensitive test for imaging hepatic steatosis and can be used to calculate the amount of fat in the liver. Liver stiffness or elasticity could be measured with ultrasound or magnetic resonance elastography (MRE). The low frequency vibrations are visualised to create a map, an elastogram, that shows the stiffness and elasticity over the liver. Elastography methods can detect the early stages of liver fibrosis and may eliminate the need for an invasive test, i.e., liver biopsy ,115.An increased knowledge of gut microbiota, MAMPs, and microbiota-derived metabolites\u2019 involvement in the NAFLD pathogenesis could be exploited in novel diagnostic and therapeutic approaches. NAFLD is a heterogeneous group of conditions, which consist of several subtypes driven by various combinations of the above-mentioned factors, and this knowledge should be reflected in both diagnosis and therapy.Veillonellaceae with liver fibrosis in non-obese NAFLD patients and suggest Veillonellaceae as a diagnostic marker [Currently, a liver biopsy is still required for the diagnosis and monitoring of disease progression. Therefore, noninvasive and reliable markers for NAFLD evaluation are urgently needed. 
Recent progress in gut microbiota research suggests that some microbiota members and metabolites might be useful as diagnostic and prognostic markers. For example, Loomba et al. found that a panel of gut microbiota consisting of 37 bacterial strains can be used to accurately diagnose advanced fibrosis in patients with NAFLD, and Lee et al. proposed Veillonellaceae as a diagnostic marker. There are several microbiota-derived metabolites that could be used as NAFLD biomarkers. Some of the most promising are succinate, phenylacetic acid, and 3-(4-hydroxyphenyl) lactate. Interestingly, NAFLD patients often have a decreased microbial gene richness, resulting in an alteration in aromatic amino acid and branched-chain amino acid metabolism. For example, the above-mentioned 3-(4-hydroxyphenyl) lactate, which is associated with liver fibrosis, is a product of aromatic amino acid metabolism. Serum succinate, produced by members of Bacteroidaceae and Prevotella, was found to be associated with the disease as well. The deep knowledge of gut microbiota could be exploited for therapeutic purposes on several levels. Firstly, whole microbiota communities might be restored using FMT. Secondly, single strains or collections of beneficial strains (probiotics) could be introduced into the gut microbiota to deliver missing functionality, while harmful or undesirable strains might be deleted using antibiotics, antimycotics, or bacteriophages. Lastly, microbial metabolic pathways could be targeted to decrease or block the production of harmful metabolites or to stimulate the production of those that are beneficial. Data on the efficacy of FMT in the treatment of NAFLD are limited. To date, FMT has been used successfully in cirrhotic patients with hepatic encephalopathy and in alcohol-associated liver disease. Probiotics, prebiotics, and synbiotics have also been used in NAFLD treatment. Probiotics are nonpathogenic live microorganisms that, when consumed, can have a positive effect on host health. The therapeutic strategy of targeting a single strain, specifically cytolytic E. faecalis, with a bacteriophage was shown to be effective in ethanol-induced liver disease in humanised mice. The therapeutic approaches aimed at manipulating microbiota to increase the production of protective metabolites or to block/decrease the production of harmful metabolites are also promising. For example, the conversion of dietary choline into TMA by microbial TMA lyases can be blocked by 3,3-dimethyl-1-butanol, which is a structural analogue of choline. Increasing evidence suggests that the gut microbiota plays a critical role in the pathogenesis of NAFLD. Metabolites derived from the gut, whether beneficial or harmful, are transported via the portal vein directly to the liver for further processing, including energy production, detoxification, and synthesis. Problems arise when the liver becomes overloaded with harmful, toxic, or proinflammatory molecules due to unhealthy dietary habits, dysbiosis of the gut microbiota, and/or increased gut permeability. These molecules originate from either the diet or the microbiota and negatively affect liver physiology. The dysfunctional liver cannot effectively control the gut microbiota via bile acids and other microbiome-regulating factors, resulting in gut dysbiosis and gut barrier dysfunction. This reinforcing loop has deleterious effects on the liver, gut, and human health in general. The major disruptors of the gut microbiota are environmental factors, particularly diet and medications. 
Genetic factors provide the playing field for NAFLD pathogenesis but may not be responsible for the steady increase in NAFLD incidence due to their relatively stable nature.Several clinical studies have uncovered promising NAFLD-associated microbiome signatures that could be used for noninvasive NAFLD diagnosis or monitoring of disease progression. However, the usefulness of the signatures needs to be further validated in well-designed large cohort longitudinal studies that take into account important confounding factors such as comorbidities, including T2DM and obesity, ethnic background, medication and diet. The diagnostic and prognostic value of microbiota signatures could be further enhanced by combining them with microbiota-derived metabolites detected in blood/plasma, urine, or faeces. Such an approach will allow more accurate differentiation of NAFLD subtypes and more precise evaluation of the efficacy of therapy.A comprehensive understanding of the interactions between microbiota and liver will enable us to develop effective microbiota-based therapies. There will be two main approaches that can be used in combination. The first will be direct manipulation of the gut microbiota by removing unwanted strains, introducing beneficial strains or replacing the dysbiotic microbiota with FMT. The second approach will be based on exploiting microbial metabolites by either inducing or blocking their production.In summary, a growing body of evidence suggests that NAFLD is triggered and driven by a gut microbiota distorted by environmental factors. A deep knowledge of the interactions between microbiota and liver will allow us to introduce new noninvasive diagnostic and prognostic methods based on NAFLD-specific microbiota signatures and metabolites, and also to develop effective microbiota-based therapies, such as probiotic interventions, FMT, or metabolite manipulations."} +{"text": "Nonalcoholic fatty liver disease (NAFLD), the most common cause of chronic liver disease, ranges from simple hepatic steatosis to nonalcoholic steatohepatitis (NASH), which is a more aggressive form characterized by hepatocyte injury, inflammation, and fibrosis. Increasing evidence suggests that NASH is a risk factor for hepatocellular carcinoma (HCC), which is the fifth most common cancer worldwide and the second most common cause of cancer-related death. Recent studies support a strong mechanistic link between the NASH microenvironment and HCC development. The liver has a large capacity to remove circulating pathogens and gut-derived microbial compounds. Thus, the liver is a central player in immunoregulation. Altered immune responses are tightly associated with the development of NASH and HCC. The objective of this study was to differentiate the roles of specific immune cell subsets in NASH and HCC pathogenesis. Clarifying the role of specific cells in the immune system in the transition from non-alcoholic fatty liver disease (NAFLD) to liver cancer will help to understand disease progression and may open avenues towards new preventive and therapeutic strategies. NAFLD is the most common chronic liver disease. Growing evidence suggests that its most aggressive form, non-alcoholic steatohepatitis (NASH), can promote the development of liver cancer, the second most common cause of cancer deaths worldwide. 
Chang-Woo Lee and colleagues at Sungkyunkwan University, Suwon, South Korea review the immunological distinction between NASH and liver cancer, focusing on the levels and activities of six key types of immune system cells. Chronic inflammation mediated by the immune system can create conditions for NAFLD, NASH and liver cancer to develop and worsen. HCC is the sixth leading cause of cancer-related deaths globally and is expected to become the third leading cause of liver cancer-related deaths by 20302. Such changes in HCC incidence are affected by obesity, type 2 diabetes, and nonalcoholic fatty liver disease (NAFLD), which is the most common liver disease3. Although NAFLD has a spectrum of liver pathologies similar to those of alcohol-induced fatty liver damage, NAFLD can occur in patients even in the absence of alcohol abuse4. NAFLD is characterized by a steatosis or the accumulation of triglycerides in lipid droplets inside hepatocytes (hepatic steatosis)5. Such accumulation of lipids is closely associated with metabolic syndromes such as obesity, type 2 diabetes, hypertension, and dyslipidemia6. NAFLD is highly prevalent on every continent. The global prevalence of NAFLD was ~25%. The Middle East has the highest prevalence rate of 32%, followed by South America (31%). Africa has the lowest prevalence at 14%7. NAFLD can progress to a more severe form called nonalcoholic steatohepatitis (NASH). NASH is marked by abnormal fat accumulation in the liver and immune cell infiltration into the liver due to chronic hepatitis and inflammation. In addition, it seems that most NASH patients develop progressive fibrosis7. NASH can cause liver diseases such as cirrhosis and HCC and is also associated with an increased risk of cardiovascular disease8.Hepatocellular carcinoma (HCC) is the most common type of liver cancer and accounts for 70\u201385% of all liver cancer cases9. NASH is the fastest increasing cause of HCC in the United States10. As such, the incidences of NAFLD and NASH increase each year. Patients with these disorders are highly likely to have more than one metabolic syndrome. These individuals are at high risk of developing HCC12. The incidence of NAFLD/NASH-released HCC has continuously increased in many ethnic groups, including in the United States13 Europe16, South Korea17, and Japan18, over the past decades. A study released in 2010 stated that NAFLD/NASH (59%) was the most common etiological risk factor in the United States, followed by diabetes (36%) and hepatitis C virus (22%)19. Given recent advances in anti-hepatitis C virus (HCV) therapy, NASH is highly likely to become a major cause of progressive liver disease within the next three decades.The prevalence of NASH among NAFLD patients in the United States has been estimated to be 21% . The prevalence of NASH in the United States accounts for ~3\u20134% of the entire population20. To overcome this growing burden of NASH and NAFLD/NASH-HCC, it is crucial to understand the factors associated with NASH and HCC to develop preventive and therapeutic strategies.Thus, the epidemiology of NASH-associated HCC is continuously changing as the number of patients with metabolic syndrome surges yearly. Compared to patients with other causative factors, patients with NASH-associated HCC are more prone to complications such as diabetes, obesity, dyslipidemia, and hypertension. These factors can exacerbate the clinical complexity of patients and eventually result in a difficult situation for clinical management. 
Additionally, although patients with lesions caused by HCV or HBV can be partially treated because of the development of treatments, effective treatment is currently unavailable for NASH-associated HCC patients22. In pathological liver injury, these cells are part of a complex proinflammatory and fibrogenic background, and hepatocyte death occurs, promoting disease progression. Various pathobiological factors, including proinflammatory cytokines (such as interleukin (IL)-6 and tumor necrosis factor (TNF)-\u03b1), leptin, hyperinsulinemia, the gut microbiota, bile acid, and free fatty acid, can interact with components in the liver microenvironment. These factors may cause inflammation, fibrosis, and lipotoxicity as a result of interactions with the liver microenvironment. In the long term, the interactions of these factors with the liver microenvironment may lead to the progression to NASH and increase the possibility of HCC development21.Recent studies have shown that the liver microenvironment may play a crucial role in NAFLD/NASH and HCC progression. The liver provides a unique proinflammatory microenvironment that is composed of a variety of immunologically active cells, including Kupffer cells (KCs), T cells, antigen-presenting cells (APCs), and hepatic stellate cells (HSCs)23. NASH may trigger the death of liver cells by inducing metabolic stress in hepatocytes. The generation of damage-associated molecular patterns (DAMPs) occurs after an influx of activated immune cells and is called sterile chronic inflammation. HCC is an inflammation-related cancer. A chronic inflammatory state can trigger the initiation and development of cancer24. Altered immune responses in NASH with chronic inflammation are also associated with the development of HCC27.A proinflammatory microenvironment created by toxic lipid-induced hepatocyte injury (lipotoxicity), which may occur under NASH conditions, has a significant effect on the deterioration of NASH28. Thus, it is crucial to understand that the microenvironment of the liver plays a key role in the pathogenesis of NAFLD, NASH, and HCC. The pathogenesis of NASH and HCC is affected by various factors within the microenvironment. However, as inflammation is the most prominent feature of NASH and HCC, the present study reviews the function of immune cells in the respective microenvironments of NASH and HCC.Various factors and microenvironmental changes have been reported in HCC patients. In particular, cancer-related microenvironmental components such as immune cells, fibroblasts, endothelial cells, and extracellular matrix (ECM) can facilitate tumor cell differentiation, proliferation, and invasion29. NK cells play crucial roles in NAFLD and NASH -\u03b3 and the formation of a proinflammatory environment35.Natural killer (NK) cells are a group of innate immune cells that show cytolytic activity against cells under stress, such as tumor cells and virus-infected cellsASH Fig. . The num36. In addition, it seems that NK cell function is decreased by myeloid-derived suppressor cells (MDSCs) during the development of HCC cells and CD8+ cytotoxic T cells. Th cells can be classified into Th1 and Th2 cells. Th1 cells produce IL-2 and IFN-\u03b3 and are involved in cellular immunity by regulating macrophages to regulate neutrophils39 diet can trigger the loss of CD4+T lymphocytes and promote the development of HCC in liver-specific MYC oncogenic transgenic mice. 
In the same context, obesity-induced lipid accumulation can result in the selective loss of CD4+ T lymphocytes and promote disease progression from NAFLD to hepatocellular carcinoma [43]. These CD4+ T cells are biased toward Th1 and Th17 subtypes in NASH conditions [44]. Th1 cells can promote the differentiation of macrophages into M1 macrophages, which play a proinflammatory role, by secreting IFN-γ and TNF-α [46]. Th17 cells secrete IL-17, which exacerbates hepatic steatosis and inflammation and induces the transition from simple steatosis to steatohepatitis [48]. Th17 cells also produce neutrophil-attracting chemokines to recruit neutrophils and lymphocytes in NASH [49]. Fibrosis progression is further induced through HSCs [50]. In contrast, the number of hepatic regulatory T cells (Tregs) is decreased in NASH [51]. In mice with HFD-induced steatosis, Treg apoptosis can be induced by increased oxidative stress, whereas liver inflammation is markedly reduced by adoptive transfer of Tregs [52]. In fact, the ratio of Th17 cells/Tregs is higher in NASH patients than in NAFLD patients and normal controls, indicating that the Th17/Treg ratio may play a crucial role in the NASH environment [53]. The number of Tregs is significantly increased in HCC patients [55], and this increase is mainly observed around tumor sites. Tregs extracted from the tumor site contribute to the uncontrolled growth of HCC cells by inhibiting the proliferation of CD8+ cytotoxic T cells, which play a cytolytic role by producing granzyme B (GrB); GrB production might be interrupted by the increased numbers of Tregs in tumors [60]. NKT cells are unconventional cells that express NK and T cell surface markers and secrete various cytokines, such as IFN-γ and IL-4, to control innate and adaptive immunity. NKT cells account for 30% of hepatic nonparenchymal cells [64]. Based on differences in T cell receptor (TCR) usage, NKT cells can be classified into two types: Type I and Type II. Type I NKT cells express invariant TCRα chains that are readily detectable by α-galactosylceramide (α-GalCer)-loaded CD1d tetramers, while Type II NKT cells express a broader TCR repertoire [68]. However, studies on Type II NKT cells in the liver are insufficient because of the absence of a specific marker; thus, Type I NKT cells, called invariant NKT (iNKT) cells, have been the most studied in patient tumors. Macrophages that circulate in the blood or reside in various organs and tissues are the first barrier against any disease. These cells play the most important role in innate and acquired immunity. Macrophages are generally divided into two phenotypes: classically activated (M1) macrophages and alternatively activated (M2) macrophages [74]. M1 macrophages produce inflammatory cytokines such as IL-1β, TNF, and IL-6; they participate in Th1 responses and mediate resistance to intracellular parasites and tumors. In contrast, M2 cells are characterized by an IL-12(low), IL-23(low), IL-10(high), TGF-β(high) phenotype in vitro. M2 cells participate in polarized Th2 responses, allergies, parasite clearance, dampening of inflammation, tissue remodeling, angiogenesis, immunoregulation, and tumor promotion [73]. In the liver, hepatic resident macrophages are called KCs and are involved in inflammation signaling and metabolic changes. The level of KCs decreases in NASH, while the infiltration of Ly6C+ monocyte-derived macrophages is increased in the early stage of NASH in an MCD diet model [75]. 
In NASH, toll-like receptor (TLR) 4 in KCs is more highly expressed than other TLRs76. When LPS binds to TLR4, KCs are activated, and the production of proinflammatory cytokines such as TNF-\u03b1, IL-1\u03b2, IL-2, IL-6, IL-10, and IFN-\u03b3 is enhanced, triggering the recruitment of lymphocytes and other leukocytes77 and promoting the activation of NF-\u03baB, MAPK, ERK1, p38, JNK, and IRF378. Additionally, the expression of hepatic inflammatory genes is increased by oxidized low-density lipoprotein (LDL) trapped in KC lysosomes in a NASH model79. Recent studies have shown an increase in endolysosomal lipid accumulation in KCs during the progression from NAFLD to NASH80, indicating that KCs play a crucial role in the progression of NASH. Along with KCs, infiltrated Ly6C+ macrophages produce cytokines such as TNF-\u03b1 and IL-1\u03b2, thus promoting inflammation and activating HSCs81.In NASH, various metabolic syndromes and insulin resistance promote the accumulation of free fatty acids (FFAs) in hepatocytes and blood, triggering the innate immune response by stimulating lipotoxin and LPS. In a healthy liver, the KCs primarily function as the body\u2019s frontline defense against phagocytosis and pathogens from the portal vein and arterial circulation82. Indeed, KCs can secrete profibrogenic cytokines such as transforming growth factor-\u03b2 (TGF-\u03b2) and platelet-derived growth factor (PDGF)78. In other words, KCs contribute to inflammation and fibrosis progression through various processes, exacerbating NASH 84.KCs act as a major mediator in fibrosis Fig. . In vivoASH Fig. . In addi85 -induced HCC model, hepatocellular carcinogenesis was attenuated when the activation of KCs was mitigated by deletion of the proinflammatory myeloid cell surface receptor triggering receptor expressed on myeloid cell-1 (TREM-1), which is expressed by KCs. It has been revealed that the activation of KCs is crucial for tumor development in the early stage of chemical-induced carcinogenesis86. Furthermore, IL-1a secreted either by oncogene-induced senescent hepatocytes or by apoptotic hepatocytes in the DEN model can induce the production of IL-6 in KCs and trigger compensatory proliferation and tumor progression, which are essential for HCC development22. Once a primary tumor is established, liver-infiltrated macrophages play a more prominent role than KCs. Macrophages infiltrate near tumors. Macrophages present in the TME are called tumor-associated macrophages (TAMs)87. A tumor with highly infiltrated TAMs leads to a worse prognosis in HCC88 of HCC have low expression of costimulatory molecules but high expression of coinhibitory molecules. Moreover, due to the presence of MDSCs, the antitumor function of KCs is suppressed85 Fig. . Recentl91. Macrophages that are transformed into the M2 phenotype can exacerbate HCC. M2-polarized macrophages enable epithelial\u2013mesenchymal transition (EMT) and the migration of HCC cells via the TLR4/signal transducer and activator of transcription (STAT) 3 signaling pathway92. Moreover, increased stability of HIF-1\u03b1 in hypoxic conditions in HCC increases the secretion of IL-1\u03b2 and promotes EMT and metastasis of tumor cells93. TAMs can also secrete IL-6 and promote the expansion of cancer stem cells and tumor formation through IL-6/STAT3 signaling94 , a TNF superfamily member, is secreted by adipocytes. BAFF regulates the maturation and development of B cells101. NASH patients have higher levels of BAFF than control patients with simple steatosis. 
BAFF is associated with the degree of ballooning hepatocytes and hepatic fibrosis102. In BAFF-knockout mice, the number of mature B cells is decreased. Recent studies have shown that deletion of BAFF in the NAFLD model attenuates hepatic fat accumulation, inhibits inflammation and fibrosis in VAT, improves insulin resistance, and weakens liver steatosis. These results indicate that BAFF plays a role in exacerbating NAFLD and NASH103. BAFF-knockout mice show reduced systemic inflammation104. In response to LPS stimulation, hepatic B cells produce more IFN-\u03b3, IL-6 and tumor necrosis factor (TNF)-\u03b1 but less IL-10 than those from secondary lymphoid tissue, indicating that hepatic B cells promote inflammatory responses105. Increased TGF-\u03b2 levels in NASH patients106 trigger the transformation of IgM-expressing B cells into IgA-expressing B cells with regulatory activity. The level of serum IgA is increased in NASH patients. A high level of IgA is associated with the state of fibrosis108. Additionally, the accumulation of liver-resident IgA+PDL1+ cells that secrete IL-10 concurrent in NASH can promote tumor growth by inhibiting the function of CD8+ T cells107.B cells are immune cells that play a role in antibody secretion, antigen presentation, T cell costimulation, and cytokine secretion. B cells play an immunomodulatory role and contribute to autoimmunity and disease pathogenesis109. An increased level of tumor-infiltrating B cells can lead to better clinical outcomes. However, intratumor infiltration of B cells is impaired during the progression of HCC110. B cells secrete immunoglobulin with a direct antitumor effect that is beneficial for HCC patients111 is also associated with hepatic cholesterol accumulation, inflammation, and fibrosis. MPO enhances macrophage cytotoxicity and induces neutrophil activation116. Additionally, neutrophil-derived human neutrophil peptide (HNP)-1 induces hepatic fibrosis through HSC proliferation and exacerbates NASH117 . These cells serve as mediators of inflammation119) expelled by neutrophils are increased in sera samples from NASH patients. NETs play a key role in the formation of chronic inflammatory liver microenvironment in NASH that promotes the progression to HCC. The formation of NETs has been observed in livers of STAM mice , followed by an influx of monocyte-derived macrophages, the formation of inflammatory cytokines, and progression to HCC120.Neutrophils also play a crucial role in the transition from NASH to HCC Fig. . The lev122. Increased levels of IL-17 in HCC promote the secretion of CXC chemokines to recruit neutrophils around the peritumoral stromal regions and promote the growth and metastasis of malignant cells that are activated by GM-CSF, which is secreted by hepatoma cells in the HCC microenvironment125. Taken together, these findings indicate that neutrophils play a protumorigenic role in HCC progression, growth, and metastasis.Neutrophils are enriched mainly around the peritumoral stromal region in HCC. The frequency of neutrophil infiltration in the liver is an indicator of poor survival in HCC126. Since DCs induce the adaptive immune response, the importance of DCs during hepatic inflammation has emerged. The hepatic DC population becomes expanded and mature in the NASH liver127. When DCs are depleted, intrahepatic fibrosis-associated inflammation is markedly exacerbated. 
Hence, in NASH, DCs can limit CD8+ T cell expansion and restrict Toll-like receptor expression and cytokine production in innate immune effector cells, including KCs, neutrophils, and inflammatory monocytes127. DCs can capture HCC-related antigens, process antigens, and activate antigen-specific T cells to remove tumors. However, in advanced HCC, the functions of DCs and IL-12 production are impaired. Thus, antigen-specific T cells cannot be activated properly128.Other myeloid cells also have various roles in NASH and HCC pathogenesis. Particularly, dendritic cells (DCs) are the most powerful APCs and can induce adaptive immune responses and generate tolerance to self-antigensintLy6Chi monocytes appear to be the predominant source of TNF\u03b1 production at a later time point in MCD diet-induced disease, leading to NASH progression129. These infiltrated monocytes have potent proinflammatory activities. In addition, together with KC, infiltrated monocytes can propagate lipid accumulation induced by the MCD diet mainly through the production of TNF\u03b1129. In HCC patients, PD-L1+ monocytes are highly enriched in the peritumoral stroma. These monocytes are activated to suppress tumor-specific T cell immunity. In fact, high infiltration of these monocytes is associated with poor survival of HCC patients130.Monocytes play a pivotal role in inflammation and metabolic stresses. The infiltration of monocytes is an important feature of NASH. CD11b24. As mentioned above, NAFLD and NASH are major causes of the increased prevalence of HCC. Additionally, NASH-related HCC mainly occurs in the context of cirrhosis. Although NASH-related HCC cases occur mainly in cirrhotic patients, an increasing number of HCC cases have been reported in NAFLD and NASH patients with little or no cirrhosis131. While many studies on the respective microenvironments of NASH and HCC have been conducted, such studies on changes in the progression from NASH to HCC are insufficient. Since various immune cells have different functions that can affect the development of NASH and HCC, it is critical to develop an experimental model to identify the transition from NASH to HCC.Acute inflammation is a useful reaction to achieve tissue recovery by promoting regeneration. Conversely, chronic inflammation is maladaptive and provides an environment that is conducive to the development of NASH and HCC. Chronic injury triggers the secretion of significant amounts of proinflammatory molecules, including IL-1, IL-6, TNF-\u03b1, and lymphotoxin-\u03b2, that can facilitate HCC development132. Thus, patients with NASH are at increased risk of adverse liver-related outcomes, with the degree of fibrosis contributing most significantly to this increased risk. However, medications have not yet been approved by the Federal Drug Administration (FDA) or European Medicines Agency (EMA) for the treatment of NASH. Many agents are currently being studied in clinical trials Phase 2 clinical trials have been for the TLR4 antagonist JKB-121 , which targets cells expressing TLRs, such as KCs, and solithromycin , which inhibits TNF-\u03b1/CXCL8 production and MMP9 activity in monocytic cells. Obeticholic acid (OCA) is a derivative of the primary human bile acid chenodeoxycholic acid (CDCA) and functions as an agonist of farnesoid X receptor (FXR). There was a randomized placebo-controlled, phase IIb trial and a large phase III trial to evaluate the safety and efficacy of OCA for the treatment of NASH and fibrosis133. 
Elafibranor is an agent currently in a phase III trial for the treatment of NASH . Elafibranor, a dual peroxisome proliferator-activated receptor (PPAR)-\u03b1/\u03b4 agonist133, and cenicriviroc (CVC), a dual CCR2/CCR5 chemokine receptor antagonist that has been shown to play key roles in hepatic inflammation and fibrosis134, are being investigated in current phase III clinical trials in patients with NASH and fibrosis, respectively . Additionally, the immune checkpoint inhibitors nivolumab , pembrolizumab , and tremelimumab are undergoing phase III clinical trials in patients with HCC.NASH with advanced cirrhosis is currently the primary etiology for liver transplantation and is estimated to become the leading indication for liver transplantation135. Furthermore, instead of using only checkpoint inhibitors, combined therapies targeting immune cells involved in the progression of HCC or targeting cells participating in angiogenesis with checkpoint inhibitors are now presenting promising outcomes137. However, combinational therapy that controls steatosis, chronic inflammation, and fibrosis could be the most efficient therapeutic strategy for NASH treatment. In addition, overcoming the hypocellular hepatic microenvironment is the most important condition for inducing efficient drug effects. Thus, developing effective drug candidates that can reverse the functions of immune cells in the respective microenvironments of NASH and HCC is expected to become a novel therapeutic strategy.Various mechanisms and microenvironmental changes are involved in the onset and progression of NASH and HCC, raising a fundamental question on whether one targeted treatment would be effective. Indeed, most of the abovementioned drug candidates did not show expected therapeutic efficacies in phase II or phase III clinical trials. Importantly, a combination therapy that uses more than two types of targeted drugs to provide additional and/or synergistic effects could be a more effective strategy for NASH and HCC treatment. For instance, antidiabetic drugs or antifibrotic drugs with an FXR agonist, a well-known drug for NASH, have shown notable effects in clinical trials"} +{"text": "Non-alcoholic fatty liver disease (NAFLD) is a leading cause of liver cirrhosis and hepatocellular carcinoma. NAFLD is associated with metabolic disorders such as obesity, insulin resistance, dyslipidemia, steatohepatitis, and liver fibrosis. Liver-resident (Kupffer cells) and recruited macrophages contribute to low-grade chronic inflammation in various tissues by modulating macrophage polarization, which is implicated in the pathogenesis of metabolic diseases. Abnormalities in the intestinal environment, such as the gut microbiota, metabolites, and immune system, are also involved in the pathogenesis and development of NAFLD. Hepatic macrophage activation is induced by the permeation of antigens, endotoxins, and other proinflammatory substances into the bloodstream as a result of increased intestinal permeability. Therefore, it is important to understand the role of the gut\u2013liver axis in influencing macrophage activity, which is central to the pathogenesis of NAFLD and nonalcoholic steatohepatitis (NASH). Not only probiotics but also biogenics (heat-killed lactic acid bacteria) are effective in ameliorating the progression of NASH. 
Here we review the effects of hepatic macrophages/Kupffer cells, other immune cells, intestinal permeability, and intestinal immunity on NAFLD and NASH, and the impact of probiotics, prebiotics, and biogenics on these diseases. Nonalcoholic fatty liver disease (NAFLD) is one of the most common chronic liver disorders worldwide, and its prevalence is increasing [1,2]. Several cross-sectional clinical studies have focused on the pathogenesis of NASH. In addition, mouse models of NASH mimic the pathogenesis of diet-induced obesity and its resulting metabolic disturbances, including NAFLD and NASH [7,8,9,10]. Innate immune responses in NAFLD and NASH involve resident Kupffer cells, recruited macrophages derived from bone marrow cells, neutrophils, and natural killer T cells. These cells contribute to the progression of NASH by inducing inflammation and by promoting the production of cytokines, chemokines, eicosanoids, nitric oxide, and ROS. In addition, excessive free fatty acids (FFAs) and cholesterol cause hepatic lipotoxicity and stimulate macrophage activation and the production of proinflammatory cytokines. Liver macrophages comprise several populations and play a key role in liver immune homeostasis and the pathogenesis of liver disease. Kupffer cells and recruited macrophages regulate liver immune homeostasis and the development of liver diseases. Kupffer cells recruit additional immune cells, including neutrophils and lymphocyte antigen 6C-high (Ly6Chi) inflammatory blood monocytes. The latter differentiate into CD11b+F4/80+ inflammatory macrophages (M1-type), which have phagocytic activity and secrete ROS and proinflammatory cytokines such as TNF-\u03b1, IL-6, and IL-1\u03b2. An increase in Ly6ClowF4/80+ macrophages (M2-type) with an immunosuppressive, pro-fibrogenic phenotype is observed in the reparative phase of NAFLD and NASH. M2-type macrophages secrete high levels of IL-13 and transforming growth factor-\u03b21 (TGF\u03b21), resulting in progressive fibrosis. Circulating monocytes include CCR2+Ly6Chi and CX3CR1+Ly6C- subsets. Under inflammatory conditions, CCR2+Ly6Chi monocytes transmigrate and differentiate into M1 macrophages, whereas in the steady state, CX3CR1+Ly6C- monocytes differentiate into anti-inflammatory M2 macrophages and mediate tissue repair. The increased prevalence of NAFLD may be linked to increased energy intake caused by dietary changes, such as increased intake of carbohydrate, fat, and fructose. Moreover, the increased use of corn syrup or fructose as sweeteners and of sucralose as a non-caloric artificial sweetener may affect the development of NAFLD [62,63]. The liver and gut are impacted by nutrients and the microbiome via the biliary tract, portal vein, and systemic mediators. Liver damage caused by disruption of the gut microbiome, its derived metabolites, and the gut immune system is implicated in the pathogenesis of obesity-induced insulin resistance and NAFLD. The liver is exposed to portal system products, such as pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs), and is strongly influenced by diet-induced dysbiosis. PAMPs and DAMPs induce an inflammatory response in hepatocytes, Kupffer cells, and hepatic stellate cells (HSCs) through a Toll-like receptor (TLR) cascade, enhancing the release of cytokines and chemokines and resulting in liver damage. 
Mice fed a HF or choline-deficient diet and patients with NAFLD have increased intestinal permeability [65,66]. Several clinical studies have suggested a link between the gut microbiota and the pathogenesis of NAFLD, but causality has not been established. Shotgun metagenomic sequencing has associated Escherichia coli and Bacteroides vulgatus with advanced fibrosis in patients with NAFLD, and Escherichia abundance was higher in obese children with NASH compared to those with only obesity. Moreover, 16S amplicon sequencing showed that Bacteroides and Ruminococcus were significantly increased, and Prevotella decreased, in patients with NASH (stage \u2265 2 fibrosis) compared to those without NASH. This finding is consistent with the fact that Bacteroides and Prevotella are competitors in the gut microbiota, depending on dietary composition. Intestinal immune cells contribute to the establishment of the intestinal mucosal barrier. These cells are classified as intraepithelial and lamina propria (LP) cells. Intraepithelial cells include intestinal intraepithelial lymphocytes (IELs), encompassing several T cell receptor (TCR)-positive and -negative subsets. TCR+ and TCR\u2212 IELs exhibit different subtypes depending on the developmental conditions: induced TCR+ IELs arise after antigens are encountered, natural TCR\u03b1\u03b2+ IELs undergo thymic agonist selection, TCR\u03b3\u03b4+ IELs differentiate either intrathymically or extrathymically, and the development of TCR\u2212 IELs is similar to that of peripheral innate lymphocytes [75]. The LP contains T lymphocytes (CD4+), NK T lymphocytes, dendritic cells, macrophages, ILCs, IgA+ plasma cells, IgG+ and IgM+ plasma cells, and B lymphocytes [77]. CD4+ T cells in the LP comprise primarily T helper (Th) 17 cells and regulatory T (Treg) cells. Th17 cells release IL-17A, IL-17F, and IL-22, preventing bacterial dissemination by inducing the expression and secretion of antimicrobial peptides [81]. Many pharmacotherapeutic strategies have been used for NASH, which can progress to cirrhosis. Insulin sensitizers, such as metformin or pioglitazone, have been studied for the treatment of NASH. In addition, vitamin E, a food ingredient with antioxidant properties, has also been studied for its therapeutic effect on NASH [84]. Studies have investigated the ability of functional foods, such as lactic acid bacteria, to improve the gut microbiome in NASH and NAFLD. Functional foods are classified as probiotics, prebiotics, and biogenics based on their mechanisms of action. Probiotics include bacteria of the genera Lactobacillus, Streptococcus, Lactococcus, Enterococcus, Bifidobacterium, Bacillus, and Clostridium. These probiotics promote an anti-inflammatory environment and intestinal epithelial growth and survival and counteract pathogenic bacteria by modulating immunity. Prebiotics were defined by Gibson et al. in 1995 as \u201cA non-digestible food ingredient that beneficially affects the host by selectively stimulating the growth and/or activity of one or a limited number of bacteria in the colon, and thus improves host health\u201d. Prebiotics include oligosaccharides and dietary fibers. Oligosaccharides encompass fructooligosaccharides and galactose-based sugars such as lactulose, which have highly selective availability to bifidobacteria and promote their growth. 
The term \u201csynbiotic\u201d refers to a combination of probiotics and prebiotics, as defined by Gibson et al. Biogenics, a term proposed by Mitsuoka et al., applies to food ingredients that contain biologically active peptides and immunopotentiators produced directly or indirectly by modulation of the intestinal microflora. Several animal model studies and clinical trials have reported evidence of benefits. For example, VSL#3 and modified VSL#3 are mixtures of several probiotic bacteria of the genera Lactobacillus, Bifidobacterium, and Streptococcus, or of Lactobacillus alone. VSL#3 protected against insulin resistance and NAFLD by inhibiting inflammatory signaling, such as c-Jun N-terminal kinase (JNK) and nuclear factor-\u03baB (NF-\u03baB), and by restoring the reduced number of hepatic natural killer T cells caused by a HF diet [93]. Lactobacillus plantarum NA136, isolated from fermented food, suppressed body weight gain and decreased fat tissue mass in HF diet- and fructose-fed mice (a NAFLD model); lipid, AST, and ALT levels were also reduced. L. plantarum NA136 decreased de novo lipogenesis and increased fatty acid oxidation by activating the AMPK pathway to phosphorylate ACC and suppress SREBP-1/FAS signaling in a NASH model. Furthermore, L. plantarum NA136 reduced oxidative stress in the liver by activating the AMPK/NF-E2-related factor 2 (Nrf2) pathway in a NAFLD model. Through these effects, L. plantarum NA136 attenuated NAFLD. Lactobacillus paracasei decreased the expression of TLR-4, CCL2, and TNF-\u03b1 and attenuated hepatic steatosis; it also decreased the proportion of M1 Kupffer cells and increased that of M2, leading to an M2-dominant shift in the liver of a NASH animal model. The effect of a four-strain probiotic preparation (containing Lactobacillus acidophilus NCIMB 30175, Lactobacillus plantarum NCIMB 30173, Lactobacillus rhamnosus NCIMB 30174, and Enterococcus faecium NCIMB 30176) on the composition of the human intestinal microbiota was studied using the M-SHIME\u00ae system, an in vitro model of the human intestine. The probiotics showed colonization and growth in the luminal and mucosal compartments of the proximal and distal colon, which increased proximal and distal colonic lactate concentrations. Lactate stimulated the growth of lactate-consuming bacteria, resulting in increased short-chain fatty acid (SCFA) production, especially butyrate. Additionally, the probiotics exerted immunomodulatory effects, such as increased production of anti-inflammatory cytokines (IL-10 and IL-6) and decreased production of proinflammatory chemokines (e.g., MCP-1). Prebiotics may have a beneficial effect on NAFLD and NASH. In animal models, prebiotics altered the gut microbiota composition and increased the plasma glucagon-like peptide-2 (GLP-2) level, improving gut barrier function. Moreover, prebiotics reduced liver inflammation and improved metabolic disorders in obesity and diabetes. Furthermore, prebiotics increased Faecalibacterium prausnitzii and Bifidobacterium and reduced the plasma endotoxin level by increasing GLP-1 secretion as well as the GLP-2 trophic effect on gut barrier integrity. In humans, beneficial effects of prebiotics on gut barrier integrity have also been reported. 
Heat-killed lactic acid bacteria used as biogenics, which are easier to handle than live lactic acid bacteria, have been studied in NAFLD and NASH. Heat-killed Lactobacillus reuteri GMNL-263 (Lr263) reduced fibrosis in the liver and heart through TGF-\u03b2 suppression in HF diet-fed mice. Similarly, heat-killed Lactobacillus plantarum L-137 (HK L-137), which is isolated from fermented fish and rice dishes, attenuated adipose tissue and hepatic inflammation in DahlS.Z-Leprfa/Leprfa rats as a model of metabolic syndrome. Moreover, Lactobacillus pentosus strain S-PT84, isolated from Kyoto pickles (shibazuke), reportedly enhances splenic natural killer activity and interferon-\u03b3 production in mice [109]. By contrast, Pediococcus pentosaceus LP28 (LP28), isolated from longan fruit (Euphoria longana), reduced body weight gain and liver triglyceride and cholesterol levels in HF diet-fed mice, but heat-killed LP28 did not prevent metabolic syndrome. Probiotic, prebiotic, and biogenic treatment of NAFLD and NASH is new and under development, and these agents regulate the gut microbiota and immunity. By contrast, probiotics do not regulate the intestinal environment or improve the symptoms of acute pancreatitis or Crohn\u2019s disease. For instance, administration of Lactobacillus plantarum 299v for at least 1 week preoperatively and during the postoperative period in elective surgical patients did not influence bacterial translocation, gastric colonization, or the incidence of postoperative septic morbidity [113,115]. NAFLD is a common chronic liver disease worldwide, and its prevalence is increasing. Moreover, its association with obesity, type 2 diabetes mellitus, insulin resistance, metabolic syndrome, and progression to cirrhosis and hepatic carcinoma increases its clinical importance. The pathogenesis of NASH and NAFLD is complex, involving not only hepatic immune mechanisms (monocyte and macrophage polarization) but also adipokines produced by adipose tissue and the microbiome. Moreover, an altered gut microbiota composition and intestinal immunity are related to liver disease and are important in the progression from NAFLD to NASH. In human and animal studies of NAFLD and NASH, probiotics, prebiotics, and biogenics reduced serum levels of liver aminotransferases, inflammatory cytokines, and chemokines, and ameliorated insulin resistance and hepatic steatosis."}
{"text": "Non-alcoholic fatty liver disease (NAFLD) has become the leading cause of chronic liver disease, exposing patients to the risk of liver fibrosis, cirrhosis, and hepatocellular carcinoma (HCC). Angiogenesis is a complex process leading to the development of new vessels from pre-existing vessels. Angiogenesis is triggered by hypoxia and inflammation and is driven by the action of proangiogenic cytokines, mainly vascular endothelial growth factor (VEGF). In this review, we focus on liver angiogenesis associated with NAFLD and analyze the evidence of liver angiogenesis in animal models of NAFLD and in NAFLD patients. We also report the data explaining the role of angiogenesis in the progression of NAFLD and discuss the potential of targeting angiogenesis, notably VEGF, to treat NAFLD. Non-alcoholic fatty liver disease (NAFLD) has become the most common cause of chronic liver disease worldwide. Angiogenesis is a complex process leading to the development of new vessels from pre-existing vessels. 
Angiogenesis occurs under physiological conditions during normal wound healing and also in pathological contexts, such as tumorigenesis, so that antiangiogenic molecules (including monoclonal antibodies) are used in the treatment of different cancers, including HCC, according to recent guidelines. Angiogenesis also takes place during chronic liver diseases. Indeed, liver fibrosis progression is accompanied by angiogenesis, regardless of the etiology of the liver disease [5]. In the context of chronic liver disease, angiogenesis leads to quantitative changes of liver vessels with the emergence of new vessels, but it also consists of qualitative changes of vessels (both pre-existing and new vessels), resulting in a process known as vascular remodeling. Such qualitative vascular changes include dedifferentiation of liver sinusoidal endothelial cells (LSECs), also called capillarization, defined by the loss of their fenestrae and the acquisition of a basement membrane. This review focuses on angiogenesis associated with NAFLD. First, we report the manifestations of angiogenesis in animal models and in patients with NAFLD. Second, we address the role of angiogenesis in the progression of NAFLD. Finally, we discuss the potential of targeting angiogenesis to treat patients with NAFLD. Liver angiogenesis can be assessed either directly, typically by showing an increase in the number of hepatic vessels, or indirectly, by measuring the expression of angiogenesis markers, such as markers of endothelial cells or proangiogenic cytokines. Because indirect methods are the most easily accessible, they are commonly used, but one should keep in mind their limitations. Indeed, the expression of endothelial cell markers does not necessarily correlate with the number of endothelial cells, because some endothelial markers, such as CD105, are upregulated during the activation of endothelial cells associated with angiogenesis. Many rodent models have been used to study NAFLD, each of them mimicking one or several features of human NAFLD, i.e., steatosis, NASH, fibrosis, and HCC. Specific imaging techniques have been used to analyze the global vasculature of the mouse liver in NAFLD. Using scanning electron microscopy of vascular corrosion casts of the liver, Coulon et al. showed that NAFLD was associated with a global alteration of the hepatic vascular architecture that consisted not only in an increased number of vessels but also in a clearly different phenotype of vessels, which displayed an enlarged diameter and a disrupted organization. Compared to studies in animal models, data related to liver angiogenesis in patients with NAFLD are more limited. We showed that the liver of patients with NAFLD displayed an increased expression of the endothelial marker vWF, especially in patients with advanced fibrosis. NAFLD patients also display increased serum levels of angiogenic markers such as VEGF, soluble VEGFR-1 (sVEGFR1), and sVEGFR2. Evidence of liver angiogenesis in animal models of NAFLD and in NAFLD patients suggests a role of liver angiogenesis in NAFLD pathogenesis. During NAFLD, angiogenesis drives inflammation and fibrosis, as reviewed elsewhere. The molecular pathways of angiogenesis are intermingled with those of NAFLD, and proangiogenic cytokines have an impact on NAFLD. Indeed, VEGF is also involved in lipogenesis, so that anti-VEGFR2 treatment can induce changes in the expression of lipogenesis genes, as shown in MCD mice. 
Increasing evidence indicates that macrophages play a critical role in NAFLD development and progression [33]. One of the mechanisms that promote angiogenesis is the secretion of extracellular vesicles. Extracellular vesicles are submicron membrane-bound structures secreted by different cell types. They contain a wide variety of molecules and exert important functions in cell-to-cell communication. Cirrhosis is associated with the risk of developing HCC in all chronic liver diseases, including NAFLD. However, non-cirrhotic patients with NASH have a higher risk of HCC compared to patients with other types of liver disease. Ultimately, the best way to demonstrate the role of angiogenesis in the pathogenesis of NAFLD is to assess the effect of antiangiogenic treatments in NAFLD. A few antiangiogenic molecules have been tested in animal models of NAFLD. Coulon et al. showed that treatment with anti-VEGFR2 decreased steatosis and inflammation in mice fed the MCD diet. Placental growth factor (PlGF) is a proangiogenic cytokine associated selectively with pathological angiogenesis, and a previous study has shown that inhibition of PlGF reduced angiogenesis, inflammation, and fibrosis in a non-NAFLD animal model of liver disease. Besides anti-VEGFR2, another antiangiogenic agent has been assessed in NASH, i.e., the peptibody L1-10, which inhibits the interaction of Ang-2 with its receptor Tie2. The inhibition of the formation of extracellular vesicles (by genetic inhibition of caspase 3) has been shown to limit the production of hepatocyte-derived proangiogenic extracellular vesicles and to protect mice fed the MCD diet from angiogenesis and fibrosis, independent of steatosis and inflammation. The treatment of CDAA rats with an angiotensin II type 1 receptor blocker has been shown to decrease neovascularization and liver fibrosis. One should note that the antifibrotic effects of the antiangiogenic treatments are not specific to NAFLD and have also been reported in other models of chronic liver diseases [52,53,54]. In conclusion, many studies have provided convincing evidence of early and intensive liver angiogenesis in NAFLD pathogenesis, both in animal models and in patients. Angiogenesis promotes the development of NAFLD, the progression of fibrosis, and the emergence of HCC. Antiangiogenic treatment can reduce NASH and prevent HCC formation in different animal models. Antiangiogenic molecules approved to treat advanced hepatocellular carcinoma, notably bevacizumab, could hold promise for the treatment of NAFLD."}
{"text": "Obesity has become a worldwide health concern among the pediatric population. The prevalence of non-alcoholic fatty liver disease (NAFLD) is growing rapidly, alongside the high prevalence of obesity. NAFLD refers to a multifactorial disorder that ranges from simple steatosis to non-alcoholic steatohepatitis (NASH), with or without fibrosis. NAFLD is regarded as a systemic disorder that influences glucose, lipid, and energy metabolism with hepatic manifestations. A sedentary lifestyle and poor choice of food remain the major contributors to the disease. Prompt and timely diagnosis of NAFLD among overweight children is crucial to prevent the progression of the condition. Yet, there has been no approved pharmacological treatment for NAFLD in adults or children. As indicated by clinical evidence, lifestyle modification plays a vital role as a primary form of therapy for managing and treating NAFLD. 
Emphasis is placed on the significance of caloric restriction, particularly of macronutrients, in altering disease outcomes. A growing number of studies are now focusing on establishing a link between vitamins and NAFLD. Different types of vitamin supplements have been shown to be effective in treating NAFLD. In this review, we elaborate on the potential role of vitamin E with a high content of tocotrienol as a therapeutic alternative for treating NAFLD in obese children. The primary cause of chronic liver disease among children can be attributed to non-alcoholic fatty liver disease. According to the National Health and Morbidity Survey (NHMS) 2015, the prevalence of obesity among children aged 10\u201314 years in Malaysia was 14.4%. To date, there is no FDA-approved drug for managing NAFLD among children; safer treatment choices include probiotics and antioxidant agents. The prevalence of NAFLD is closely related to the increased prevalence of obesity in adults and children. The prevalence of NAFLD increases with age, with a greater percentage of NAFLD among adolescents (15\u201319 years old) at 17.3% as compared to toddlers (2\u20134 years old) at 0.7%. Variants in patatin-like phospholipase domain-containing 3 (PNPLA3) and transmembrane 6 superfamily 2 (TM6SF2) affect the severity of liver damage, as reported in adult patients with NAFLD; patients with NAFLD and metabolic syndrome carrying such variants were shown to have significant increases in their ALT and gamma-glutamyl transferase (GGT) levels. The capacity to store fat in different adipose tissue compartments differed significantly between children with and without NAFLD. Dietary sugars drive de novo lipogenesis in the liver; for instance, the conversion of fructose to fat in the liver further drives the accumulation of liver fat. In one trial, vitamin E plus spironolactone (25 mg once daily) for 52 weeks showed a significant decrease in liver fat score and homeostasis model assessment of insulin resistance (HOMA-IR) in women with NAFLD. Vitamin E exists as tocopherols and tocotrienols, each with four isoforms: \u03b1 (alpha), \u03b2 (beta), \u03b4 (delta), and \u03b3 (gamma). Unlike tocopherol, tocotrienol is an unsaturated form that features an isoprenoid side chain, making it easier to absorb and better able to penetrate tissues with saturated fatty layers, including the liver and the brain. Tocotrienol has been shown to have hypolipidemic properties in cells, mediated through reduction of HMG-CoA reductase (3-hydroxy-3-methyl-glutaryl-coenzyme A reductase), a key enzyme of cholesterol biosynthesis [99]. In vitro research revealed that \u03b1-tocotrienol considerably decreased total cholesterol (TC) compared to untreated SH-SY5Y cells and cells treated with \u03b1-tocopherol. Regarding adipogenesis, d-\u03b4-tocotrienol inhibited the differentiation of murine 3T3-F442A preadipocytes through the downregulation of peroxisome proliferator-activated receptor \u03b3 (PPAR\u03b3), a key regulator of adipocyte differentiation. Additional anti-inflammatory actions of tocotrienol have been reported, including in LPS-stimulated cells, with similar findings in human studies. 
In vivo analysis of TRF (tocotrienol-rich fraction) supplementation with royal jelly in calorie-restricted diet-fed rats indicated significant weight reduction and reduced expression of inflammatory cytokines (i.e., TNF-\u03b1 and MCP-1) as compared to TRF or royal jelly intervention alone. Changes in the expression of genes involved in lipid handling, such as carnitine palmitoyltransferase 1A (CPT1A) and cytochrome P450 family 7 subfamily A member 1 (CYP7A1) mRNA, and of pro-inflammatory markers, including Mcp1, Nlrp3, Tnf\u03b1, and Il-1\u03b2, have also been observed in these models. In a cohort (n = 200) of subjects with or without NAFLD, Ezaizi et al. reported a significant association between high-sensitivity C-reactive protein (hs-CRP) and NAFLD in Asian Indians. Current management relies on lifestyle interventions, which are supposed to reduce cardiovascular-related and hepatic morbidity in NASH patients. Nevertheless, patients often find that participating in physical activity and changing dietary habits are tough to achieve and maintain. Therefore, the characteristics of tocotrienol, namely its ready availability and exceptional acceptability to people, have made it a practical treatment choice for NAFLD patients. NAFLD clinical studies have shown a moderate improvement in histology and liver biochemistry induced by tocotrienol treatment. However, further monotherapy clinical trials and pharmacological assessments are still required to explain the fundamental molecular mechanisms of treatment and to characterize possible adverse effects of tocotrienol, so as to identify the best daily intake for pediatric NAFLD. In conclusion, tocotrienol is recommended as a functional dietary component to be used in combination with existing diabetes and obesity regulation strategies for the treatment of pediatric NAFLD. NM and ZA contributed to the conception of the idea for the manuscript. FA-B and AI performed the literature search and drafted the manuscript. RR, NM, ZA, and KM critically revised the work. All authors contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
{"text": "Non-alcoholic fatty liver disease (NAFLD) and its more severe form non-alcoholic steatohepatitis (NASH) are a major public health concern with high and increasing global prevalence, and a significant disease burden owing to its progression to more severe forms of liver disease and the associated risk of cardiovascular disease. Treatment options, however, remain scarce, and a better understanding of the pathological and physiological processes involved could enable the development of new therapeutic strategies. One process implicated in the pathology of NAFLD and NASH is cellular oxygen sensing, coordinated largely by the hypoxia-inducible factor (HIF) family of transcription factors. Activation of HIFs has been demonstrated in patients and mouse models of NAFLD and NASH, and studies of activation and inhibition of HIFs using pharmacological and genetic tools point toward important roles for these transcription factors in modulating central aspects of the disease. HIFs appear to act in several cell types in the liver to worsen steatosis, inflammation, and fibrosis, but may nevertheless improve insulin sensitivity. Moreover, in liver and other tissues, HIF activation alters mitochondrial respiratory function and metabolism, having an impact on energetic and redox homeostasis. 
This article aims to provide an overview of the current understanding of the roles of HIFs in NAFLD, highlighting areas where further research is needed. Non-alcoholic fatty liver disease (NAFLD) is a progressive, widespread form of chronic liver disease with a large global burden. Worldwide, around 25% of the population have NAFLD, and its prevalence is increasing. Canonical HIF target genes include Vegfa, encoding vascular endothelial growth factor, and genes encoding many glycolytic enzymes. Liver hypoxia and HIF activation have been reported in mice fed a high-fat diet for 8 weeks, though it remains unclear how this local hypoxia develops. Regulation of cellular metabolism is a major canonical function of HIFs. In order to maintain energy charge in hypoxia, HIFs increase the expression of genes encoding glycolytic enzymes such as lactate dehydrogenase, while restraining oxidative metabolism. Studies of Vhl-deficient mice showed that HIF2\u03b1, but not HIF1\u03b1, is responsible for the suppression of FAO in these mice: Vhl-deficient mice had lower expression of peroxisome proliferator-activated receptor \u03b1 (PPAR\u03b1)-target genes, such as carnitine-palmitoyl transferase 1 (Cpt1) and acyl CoA oxidase (Acox), lowering fatty acid-supported oxidative phosphorylation. Hif1a deletion, however, led to lower Lipin1-mediated PPAR\u03b1/PGC1\u03b1 pathway activation, which worsened steatosis relative to wild-type mice. Increased lipogenesis is an important feature of NAFLD in human patients. In mice exposed to chronic intermittent hypoxia (CIH), expression of lipogenic genes such as Fas and Scd1 increased relative to HFD-fed mice not exposed to CIH, and this was normalised by HIF2\u03b1 silencing. Conversely, in some in vivo and in vitro models of hepatosteatosis, HIF1\u03b1 activation via Vhl disruption has been associated with decreased expression of lipogenic genes such as fatty acid synthase (Fas). These combined results suggest that HIF2\u03b1 activation (via genetic manipulation or hypoxia) can cause steatosis via inhibition of FAO and upregulation of lipid uptake, that liver hypoxia and HIF2\u03b1 activation occur in NAFLD, and that HIF2\u03b1 upregulates lipogenesis in diet-induced steatosis, which worsens lipid accumulation and can be prevented by treatment with HIF2\u03b1 antagonists. However, whether HIF2\u03b1 also impairs FAO in NAFLD remains unclear. HIF2\u03b1 also appears to be involved in hepatic insulin signalling. Phd3 deletion, which stabilises HIF2\u03b1, led to improved insulin tolerance and glucose handling. Interestingly, unlike other models of liver-specific HIF2\u03b1 activation, Phd3 deletion was not associated with worsened steatosis. The authors observed that deletion of Phd1-3, which increased HIF2\u03b1 stabilisation still further, did worsen steatosis, suggesting that lower-level HIF2\u03b1 activation may be predominantly beneficial via improved insulin sensitivity, while the higher levels of stabilisation that occur with Phd1-3 and Vhl deletion have a detrimental effect due to inhibition of FAO, leading to worsened steatosis. Overall, significant evidence points toward a steatosis-promoting role for chronic HIF2\u03b1 activation in liver, likely occurring via inhibition of FAO and upregulation of lipogenesis, though studies investigating the effect of Hif2a deletion on FAO in NAFLD are needed to confirm this. Meanwhile, low levels of HIF2\u03b1 activation in metabolic diseases appear to have a beneficial effect on insulin sensitivity and glucose handling. Whether HIF1\u03b1 activation is protective or harmful in the context of metabolic disease and hepatic steatosis remains less clear. 
There are conflicting results, which may reflect opposing roles in different cell types and tissues, although in hepatocytes specifically, HIF1\u03b1 activation in obesity appears to improve insulin sensitivity and may be required to maintain FAO and prevent increased lipogenesis. HIF activation has also been implicated in fibrosis, including CCl4-induced fibrosis, acting both in hepatocytes and in hepatic stellate cells (HSCs). HIF-regulated expression of fibrogenic mediators in hepatocytes has been demonstrated in several relevant in vitro and in vivo models. Isolated mouse hepatocytes exposed to hypoxia show increased expression of plasminogen activator-inhibitor 1 (PAI-1), and this is partially prevented by Hif1a deletion and completely prevented by Hif1b deletion, suggesting both HIF1\u03b1 and HIF2\u03b1 may be involved. Hif1a deletion protects against collagen deposition and suppresses collagen crosslinking in the media of isolated hepatocytes exposed to hypoxia, consistent with hypoxia-induced lysyl oxidase (Lox) expression requiring Hif1a in vitro; hypoxia also increases Acta2 (\u03b1SMA) mRNA. Collectively, these studies highlight that one role of HIF activation in liver fibrosis is the direct regulation of fibrogenic genes in hepatocytes and that this likely occurs in NAFLD. However, hepatocytes are not considered major sources of ECM deposition in vivo, and so it remains unclear how central this mechanism is to the pathology of NAFLD. HIF signalling also intersects with NF-\u03baB signalling, which is thought to be an important driver of fibrosis and inflammation in NAFLD. Palmitic acid treatment induced HIF1\u03b1 in macrophages in vitro, and silencing of Hif1a suppressed the activation of NF-\u03baB. HIFs may further modulate inflammation via control of hepatocyte production of the cytokine histidine rich glycoprotein (HRGP). HIF-signalling may also be involved in the link between OSA and NAFLD progression generally, and regarding inflammation in particular. Severity of nocturnal hypoxia in OSA correlates with NAFLD/NASH severity, including liver inflammation, independent of other risk factors in patients. This suggests HIF signalling in patients with NAFLD/NASH and OSA may induce or worsen inflammation, though more studies are needed to confirm this. The evidence currently available highlights potential mechanisms by which HIF signalling may be involved in several key aspects of NAFLD, namely steatosis, inflammation, and fibrosis. Further work is required to confirm many of these mechanisms, to provide a more detailed understanding, and to determine whether targeting HIF signalling is a viable treatment strategy to improve these aspects of the pathology. It also remains uncertain what drives liver hypoxia and HIF activation in NAFLD in the first place. Useful next steps would include in vitro studies using ROS scavengers to investigate whether this prevents HIF activation in in vitro models of NAFLD. Investigation of SIRT4 in this context could also be valuable, as reduced levels of this sirtuin have been demonstrated in patients with NAFLD. 
Further investigations into how closely linked HIF activation and OSA are in patients with NAFLD would be useful, and studies in animal models of NAFLD exposed to CIH\u2014a way of mimicking OSA in rodents, which do not develop OSA spontaneously\u2014could provide insight into whether and how these pathophysiological mechanisms differ. The ultimate goal of understanding the involvement of HIF signalling in NAFLD would be to attempt to treat the disease by targeting this pathway. Current evidence supports the use of animal studies to investigate this, and both HIF1\u03b1 and HIF2\u03b1 antagonists have been developed, largely with a view to treating cancers. Finally, while this review has focussed on the role of HIF signalling in the liver, some studies point toward roles of HIFs in other tissues and organs, such as adipose tissue, that are likely to impact on NAFLD and outcomes in NAFLD. In conclusion, considerable evidence points toward HIF activation occurring in NAFLD and NASH, and having widespread, predominantly harmful effects. Both HIF1\u03b1 and HIF2\u03b1 activation appear to worsen inflammation, though the mechanisms involved in this require further study. Further, evidence from studies of fibrosis shows important HIF-mediated mechanisms, including control of profibrotic gene expression in hepatocytes and HSCs, regulation of HSC activation, and HIF-mediated pathological angiogenesis, though only control of profibrotic gene expression has been demonstrated to occur in animal models of NAFLD and NASH. Evidence also highlights a role for HIFs, in particular HIF2\u03b1, in driving steatosis. Studies of HIF activation under normoxic conditions suggest that HIF2\u03b1 can inhibit FAO, while studies that interfere with HIF2\u03b1 activation in NAFLD via oxygen therapy or antagonism suggest that HIF2\u03b1 drives lipogenesis. These mechanisms could explain the protective effect that Hif2\u03b1 deletion has against steatosis in NAFLD. While some beneficial effects of HIF activation have been noted, such as a potential role in improving insulin sensitivity, on balance, HIF activation appears to be harmful in NAFLD, and may therefore be a useful therapeutic target. Further research is required to fully elucidate the mechanisms by which HIF activation contributes to NAFLD and NASH, in particular the effect on FAO, the signalling pathways involved in regulating the expression of lipogenic, fibrogenic, and pro-inflammatory genes, and the link between HIF signalling and OSA in NAFLD and NASH. All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication. LH was supported by a 4-year PhD studentship Program funded by the Wellcome Trust (Grant Number: 220033/Z/19/Z). AM was supported by the Research Councils UK (Grant Number: EP/E500552/1). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. 
Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
{"text": "Non-alcoholic fatty liver disease (NAFLD) is an increasingly common condition associated with type 2 diabetes (T2DM) and cardiovascular disease (CVD). Since systemic metabolic dysfunction underlies NAFLD, the current nomenclature has been revised, and the term metabolic-associated fatty liver disease (MAFLD) has been proposed. The new definition emphasizes the bidirectional relationships and increases awareness in looking for fatty liver disease among patients with T2DM and CVD or its risk factors, as well as looking for these diseases among patients with NAFLD. The most recommended treatment method for NAFLD is lifestyle change, including dietary fructose limitation, although other treatment methods for NAFLD have recently emerged and are being studied. Given the focus on targeting the liver\u2013gut axis, bacteria may also be a future aim of NAFLD treatment, given the microbiome signatures discriminating healthy individuals from those with NAFLD. In this review article, we will provide an overview of the associations of fructose consumption, gut microbiota, diabetes, and CVD in patients with NAFLD. In 1986, the term nonalcoholic fatty liver disease (NAFLD) was proposed by Schaffner and Thaler. The definition of NAFLD combines the presence of steatosis in more than 5% of hepatocytes with metabolic risk factors, especially obesity and T2DM, and the exclusion of excessive alcohol consumption, defined as \u226530 g per day for men and \u226520 g per day for women, or of other chronic liver diseases. The latter definition of MAFLD is based on the presence of hepatic steatosis and at least one other condition, such as overweight/obesity, T2DM, or metabolic abnormalities, with no additional exclusion criteria [11]. Diabetes mellitus is a heterogeneous group of disorders that results in an increase in blood glucose concentration, yet the pathophysiological processes underlying type 1 and type 2 of the disease differ. Insulin resistance, which is tightly linked to T2DM, is not always present in type 1 of the disease, thereby explaining the different prevalence of NAFLD in those two subpopulations of patients. NAFLD has a high prevalence in the general population, varying from 13.5% in Africa to 31.8% in the Middle East [15]. Consumption of fructose has increased over the last century, mainly due to the use of high-fructose corn syrup. While there is the perception that NAFLD is a benign liver condition, it is the second most common cause of end-stage liver disease. Since MAFLD is a new term and has been seldom used in published studies, for this review article we have summarized the current state of knowledge regarding NAFLD and its association with fructose consumption, microbiota, diabetes, and CVD. Fructose\u2019s use in foods was limited until the late 1960s due to its high price. Fructose is a major dietary monosaccharide that occurs naturally in ripe fruits, honey, and, in small amounts, in some vegetables such as carrot, onion, paprika, and sweet potato. 
It is also present in industrially manufactured foods because it is a main ingredient of the most widely used sweeteners, such as the disaccharide sucrose and high-fructose glucose syrups (mixtures of fructose with glucose) [27]. Fructose, in contrast to glucose, is almost totally cleared from the circulation by the liver, which takes it up via glucose transporter type 5 (GLUT5). A large amount of acetyl-CoA is produced following fructose uptake because fructose metabolism bypasses the rate-limiting step of glycolysis that controls acetyl-CoA production. However, it is not only lipogenesis; other hepatotoxic effects are exerted by fructose, namely an increase in oxidative stress. Numerous animal and human studies have revealed a close relationship between the consumption of fructose and the development of NAFLD. In a systematic review and meta-analysis of observational studies involving 4639 individuals, SSB consumption led to a 53% increased risk of developing NAFLD in comparison to participants who did not ingest SSB. The pioneering studies using germ-free mice and gut microbiota transfer, concerning the association of the gut microbiome with metabolic diseases, revealed a contribution of gut microbiota to weight gain and metabolic alterations. Fecal microbiota transfer from patients with NASH to germ-free mice causes hepatic steatosis and inflammation in these animals. Human studies have been based on comparisons of the gut microbiota between patients with NAFLD, NASH, or cirrhosis and individuals with a healthy liver, performed to demonstrate gut microbiota signatures in these pathological conditions. The gut\u2013liver axis is the association between the gut microbiota and the liver. The interaction is conducted through the portal vein, which transports products from the gut to the liver and, in return, bile and antibodies from the liver to the intestine. Abnormalities of gut microbiota composition in stools of patients with NAFLD have been highlighted in a meta-analysis showing increased abundance of Escherichia, Prevotella, and Streptococcus and decreased abundance of Coprococcus, Faecalibacterium, and Ruminococcus. NAFLD also results in increased fecal ester volatile organic compounds (VOC) and the presence of higher concentrations of fecal propionate and isobutyric acid and of serum 2-hydroxybutyrate and L-lactic acid [66]. The aforementioned associations between NAFLD and gut microbiota have resulted in studies investigating the effects of microbiome alteration with probiotics, prebiotics, synbiotics, or fecal microbiota transplantation (FMT) on the clinical course of NAFLD, with promising results [70,71,72]. According to the definitions formulated by the WHO and the Food and Agriculture Organization of the United Nations (FAO), probiotics are live strains of strictly selected microorganisms which, when administered in adequate amounts, confer a health benefit on the host. In this review article, we focus on meta-analyses of RCTs showing that probiotic/synbiotic therapy results in improved liver enzyme activity and/or reduced steatosis/fibrosis in patients with NAFLD [69,70,71]. The role of FMT in patients with NAFLD has limited RCT evidence. FMT was first proven to be a good treatment method for antibiotic-resistant Clostridium difficile infection in 2013. 
According to epidemiological data, the overall prevalence of NAFLD among patients with T2DM is 55.5%, and that of NASH is 37.3%; moreover, 17% of T2DM patients who underwent liver biopsy have advanced fibrosis. NAFLD and insulin resistance are interconnected, and therefore the development of prediabetes and diabetes is their most direct consequence at the extrahepatic level [89]. In another cohort study of 10,141 participants, future diabetes mellitus risk could be modified over time by changes in NAFLD status: resolution of NAFLD could diminish the risk of diabetes onset, while, on the other hand, the development of NAFLD raised the risk of developing diabetes. It is important to distinguish between simple steatosis, which does not progress to advanced fibrosis, and non-alcoholic steatohepatitis (NASH), which can lead to hepatocellular carcinoma. Indeed, NASH is characterized by fat accumulation, inflammation and necrosis, ballooning of the cells, and different stages of liver fibrosis, up to cirrhosis. The only method to differentiate between steatosis and NASH is liver biopsy. Masarone et al. performed this invasive procedure in 215 patients with elevated transaminases and metabolic syndrome or T2DM. The prevalence of NAFLD was 94.82% in patients with metabolic syndrome, and NAFLD was present in all the patients with T2DM. NASH was found in 58.52% of participants with metabolic syndrome and in 96.82% of T2DM patients. According to the authors, one can therefore assume that patients with T2DM have NASH. As insulin resistance is of crucial importance in the pathophysiology of both T2DM and NASH, NASH may be one of the early complications of T2DM. The strong link between T2DM and NAFLD was underlined in 2020 by an international panel of experts who proposed the new term MAFLD instead of NAFLD. Currently, there is no pharmaceutical treatment for NAFLD, independent of the presence or absence of T2DM, that has been approved by international guidelines. The best documented direct beneficial effect remains assigned to pioglitazone [102,103]. Current guidelines state that pharmacotherapy is reserved for patients with NASH and for patients at high risk of disease progression. Among the antidiabetic drugs recommended, only pioglitazone is used for the treatment of NASH with insulin resistance [124]. Two relatively new antidiabetic drug classes, namely SGLT-2i (sodium-glucose cotransporter type-2 inhibitors) [116,117] and GLP-1 RAs (glucagon-like peptide-1 receptor agonists), are also of interest. An upcoming drug of huge interest is a dual GIP and GLP-1 RA agent (tirzepatide), which is under investigation; it is being administered once a week in the therapy of T2DM patients with NASH and fibrosis. Both NAFLD and CVD are highly prevalent and associated with metabolic disturbances; hence, they frequently coexist. NAFLD is linked with different manifestations of CVD, including subclinical atherosclerosis, overt atherosclerosis, and cardiovascular events and deaths [130,131]. One meta-analysis from the year 2021 deserves special attention because it was performed among people with histologically confirmed NAFLD who did not present with CVD at baseline and were prospectively followed up for a median of 13.6 years. 
There are also studies indicating that CVD risk is increased in patients with NAFLD who are non-obese. One retrospective, \u201creal world\u201d cohort study aimed to describe the CVD burden and mortality in patients with NAFLD during a 14-year follow-up observation after hospital discharge, showing that in patients with non-cirrhotic NAFLD the condition was associated with increased overall mortality. Due to the recent change in the nomenclature of fatty liver disease (NAFLD to MAFLD), the prevalence of fatty liver disease and the associated CVD risk required re-evaluation, taking into account each of these definitions separately. On the other hand, another study concerning atherosclerotic lesions in T2DM patients found no significant difference in carotid IMT between T2DM patients with and without NAFLD. While an association between NAFLD and T2DM microvascular complications seems plausible, the number of studies on this topic is limited and inconclusive [153,154]. It is well established that patients with DM, compared to people without carbohydrate disorders, have a higher risk of heart failure and CVD. The term \u201cdiabetic cardiomyopathy\u201d, which deteriorates the patient\u2019s prognosis, describes a form of heart disease occurring in diabetic patients that causes significant structural as well as functional changes in the myocardium. There is a common pathophysiological mechanism of diabetic cardiomyopathy and NAFLD, namely insulin resistance. MAFLD is a new clinical definition for fatty liver disease, which shifts NAFLD from a disease of exclusion to one of inclusion, where the pathogenic processes originate from underlying metabolic dysfunction. Because MAFLD is not widely used terminology in the scientific literature, most published data focus on NAFLD. The latter, as an epidemic, is tightly linked to T2DM, and the two are known to frequently coexist and to synergistically increase CVD risk. Despite the high prevalence of NAFLD and many epidemiological studies showing correlations between NAFLD and CVD, it is still difficult to unequivocally identify a causal relationship between the two entities. It also remains unclear whether the diagnosis of NAFLD can be used as a tool to improve cardiovascular risk assessment and to modify treatment."}
{"text": "Non alcoholic steatohepatitis (NASH) is the inflammatory reaction of the liver to excessive accumulation of lipids in the hepatocytes. NASH can progress to cirrhosis and hepatocellular carcinoma (HCC). Fatty liver is the hepatic manifestation of metabolic syndrome. A subclinical inflammatory state is present in patients with metabolic alterations like insulin resistance, type-2 diabetes, obesity, hyperlipidemia, and hypertension. Platelets participate in immune cell recruitment and cytokine-induced liver damage. It is hypothesized that lipid toxicity causes accumulation of platelets in the liver, with platelet adhesion and activation, which primes the immunoinflammatory reaction and the activation of stellate cells. Recent data suggest that antiplatelet drugs may interrupt this cascade and prevent/improve NASH. They may also improve some metabolic alterations. The pathophysiology of inflammatory liver disease and the implication of platelets are discussed in detail. 
The term non-alcoholic fatty liver disease (NAFLD) was coined by Ludwig and colleagues. NAFLD is histologically characterized by macrovesicular steatosis and is further categorized into non-alcoholic fatty liver (NAFL) and non-alcoholic steatohepatitis (NASH), a more severe and evolutive disease that includes inflammation and ballooning. The definition given by the EASL Guidelines for the management of non-alcoholic fatty liver disease is the following: the characteristic of NAFLD is excessive hepatic fat accumulation, which is associated with insulin resistance. NAFLD is also defined by the presence of steatosis in more than 5% of hepatocytes on histological analysis, or by a density fat fraction exceeding 5.6% as assessed by proton magnetic resonance spectroscopy or quantitative fat/water selective magnetic resonance imaging. The term NAFLD includes two distinct conditions with different prognoses: non-alcoholic fatty liver (NAFL) and non-alcoholic steatohepatitis (NASH). While NAFL is a milder condition, NASH covers a wide spectrum of disease severity, including fibrosis, cirrhosis, and hepatocellular carcinoma (HCC). Now the term metabolic associated fatty liver disease (MAFLD) has been adopted instead of NAFLD/NASH, chosen by a group of experts. Another condition, MAFLD-related cirrhosis, is characterized by cirrhosis with past or present evidence of metabolic risk factors that meet the criteria to diagnose MAFLD, with at least one of the following: 1) documentation of MAFLD on a previous liver biopsy; 2) historical documentation of steatosis by hepatic imaging. Risk factors for the development of NAFLD, and for its evolution to NASH, are the same as those of metabolic syndrome, a disorder consisting by definition of obesity, arterial hypertension, impaired glucose metabolism, and atherogenic dyslipidaemia, a clinical condition with high prevalence in the adult population worldwide, particularly in industrialized countries. In some countries, NAFLD represents the primary cause of cirrhosis. In the United States, the prevalence of NAFLD, as assessed using ultrasound associated with transaminase increases or scores like the fatty liver index/NAFLD score, reaches 19 to 46 percent of the adult population, with most biopsy-based studies reporting a prevalence of NASH of 3\u20135 percent. Estimates of the prevalence of NAFLD in Asia-Pacific regions range from 5 to 30 percent, depending upon the population studied. While NASH is considered a condition that promotes fibrosis progression, longitudinal studies have demonstrated that the liver-related prognosis of patients with NAFLD is also mostly related to the extent of liver fibrosis. Hepatic steatosis is characterized by a high accumulation of lipids (including free fatty acids, cholesterol, and ceramides), and this aberrant accumulation results in liver toxicity. An increased influx of fatty acids, together with the de novo lipogenesis process, contributes to the accumulation of hepatic and lipoprotein fat in NAFLD. Steatosis develops when fatty acid uptake and de novo lipogenesis exceed the oxidation rate of FFA and the export of VLDL; impaired lipid metabolism is also associated with the progression of NAFLD to NASH. Changes in hepatic and serum lipidomic signatures have been proposed as an index of the development and progression of fatty liver disease. Cardiovascular (CV) disease is the main cause of morbidity and mortality in patients with NAFLD. 
Trying to dissect the impact of NAFLD per se on CV events and death might be questionable. However, the bidirectional relationship between NAFLD and hypertension seems to be independent of other components of the metabolic syndrome (MetS). Insulin resistance is considered the main determinant of hepatic steatosis and steatohepatitis. NAFLD patients are often obese and/or affected by type 2 diabetes mellitus, two conditions associated with peripheral insulin resistance. Nevertheless, insulin resistance has also been observed in non-obese NASH patients and in those who have normal glycemic levels, thus suggesting a strong association between insulin resistance and lipid accumulation. Visceral fat accumulation is considered an independent risk factor in NASH patients, as it has been suggested that a higher visceral fat level in these patients leads to greater liver fibrosis and inflammation: this could be linked to proinflammatory cytokine activity, like that of interleukin-6 (IL-6), or to the activation of other pathways. Several studies have investigated the possible genetic polymorphisms present in patients with NAFLD/NASH, and the results obtained have suggested a certain role for IL-6, adiponutrin, apolipoprotein C3, and the peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PPARGC1A). It has been suggested that the phenotype of the manifestations of NAFLD and the progression of the disease are the result of complex interactions between the environment and the genetic pool of the subject. Some studies highlight a strong heritability of lipid content in the liver. There are many genes associated with insulin signaling and lipid metabolism which are involved in the development of NAFLD, and it is not our aim to give them a full discussion here. However, it is important to list at least the genetic polymorphisms of greatest interest in this pathology. Notably, the patatin-like phospholipase domain-containing protein 3 (PNPLA3) I148M variant is the commonest inherited determinant of NAFLD, as it is associated with progression of NAFLD, NASH, and NAFLD-related HCC. A well-known genetic polymorphism studied in NAFLD subjects is the Membrane bound O-acyltransferase domain-containing 7 (MBOAT7) downregulation: MBOAT7 is a gene implicated in the remodeling of phosphatidylinositol (and other phospholipids) via the incorporation of arachidonic acid and other unsaturated fatty acids into lyso-phospholipids. The common genetic variant leads to a downregulation of MBOAT7 activity and consequently to the accumulation of lyso-phosphatidyl-inositol in hepatocytes; this in the end leads to a higher synthesis of triglycerides in the liver and to NAFLD. The Transmembrane 6 superfamily member 2 (TM6SF2) gene E167K variant promotes a reduction in VLDL secretion from the liver, inducing higher triglyceride levels in the liver and a lower capacity of LDL secretion from hepatocytes, while patients with the TM6SF2 E167K polymorphism show a reduced cardiovascular risk. Glucokinase regulator (GCKR) controls the glucose inflow in hepatocytes, thus regulating de novo lipogenesis in the liver; GCKR P446L is a missense variant that causes a loss of protein function, in the end resulting in a constitutively active glucose inflow into hepatocytes. The protein phosphatase 1 regulatory subunit 3B (PPP1R3B) gene encodes a protein involved in glycogen synthesis. 
The PPP1R3B rs4841132 variant has been suggested to protect against hepatic fat accumulation and liver fibrosis in NAFLD subjects: this variant increases lipid oxidation and downregulates some lipid metabolism and inflammation pathways. The interferon lambda 4 (IFNL4) rs368234815 variant is associated with higher inflammation and fibrosis in the liver. Mer T kinase (MERTK) variants could alter hepatic inflammation and fibrosis via the modulation of phagocyte and hepatic stellate cell activity, and a specific variant (MERTK rs4374383) exerts a protective role, reducing MERTK expression in the liver. The HSD17B13 gene encodes an enzyme that is concentrated on lipid droplets in hepatocytes: loss-of-function variants in HSD17B13 result in greater protection against liver inflammation, cirrhosis, and HCC. Another important pathogenetic factor is represented by the accumulation of iron, which can contribute to the development of NASH by promoting oxidative stress, producing free oxygen species that lead to liver damage, fibrosis and, in the end, NASH. Nevertheless, in studies of hemochromatosis gene (HFE) mutations, no significant role in the development of insulin resistance-associated liver siderosis was seen, apart from compound heterozygosity. Unexplained hepatic iron overload is frequently associated with the insulin-resistance syndrome irrespective of liver damage. This insulin-associated iron overload is characterized by a mild to moderate iron excess with hyperferritinemia and normal to mildly increased transferrin saturation. The production of endogenous alcohol and acetaldehyde is another suggested mechanism: colonic bacteria may contribute via bile salt deconjugation and the inactivation of hepatic lipotropic molecules (such as choline). Other factors and mechanisms have been studied in relation to the pathogenesis of NAFLD; dietary cholesterol intake, for example, has been proposed as an independent factor in the development of NASH. Platelets are involved in different models of liver damage. Apart from the well-defined interaction between CV risk factors and NAFLD/NASH, a pro-thrombotic condition may derive from altered endothelial and vascular function and from platelet activation and interaction with blood and liver cells. Damage may also be amplified via intralobular hypoxia and local ischemia. Lipid species cause inflammation and activation of both infiltrating and resident immune cells. How are platelets involved in this process? Platelets are involved in pathological processes such as chronic inflammation and atherothrombosis, and possibly in fibrogenesis. Platelets contain granules that are released in response to activatory stimuli (the platelet release reaction). Alpha and delta (dense) granules may release into the microenvironment the proaggregatory factors ADP, serotonin and thrombin (the amplificatory process), along with appreciable amounts of inflammatory cytokines, chemokines and growth factors such as platelet-derived growth factor (PDGF), endothelial growth factor (EGF), insulin-like growth factor 1 (IGF-1), transforming growth factor beta (TGFβ), tumor necrosis factor alpha (TNFα), interleukin-6 (IL-6), chemokine ligand 4 (CXCL4), vascular endothelial growth factor A (VEGF-A), hepatocyte growth factor (HGF), and fibroblast growth factor (FGF). Platelets release factors that change gene expression in endothelial cells, leukocytes, stromal cells and fibroblasts, thus directly participating in inflammation.
Markers of in vivo platelet activation, mean platelet volume (MPV) values, and the platelet/lymphocyte ratio have been associated with hyperleptinemia and hypoadiponectinemia, in obesity and in NAFLD. Platelet hyperreactivity in these conditions has been linked to impaired nitric oxide production and an altered downstream cGMP/cGMP-dependent protein kinase (PKG) signalling system. Similarly, the inhibitory activity of prostacyclin (PGI2) towards platelet activation and the engagement of the cAMP/cAMP-dependent protein kinase (PKA) pathway are impaired. Phospholipase A2 signalling, which catalyses arachidonic acid release and thromboxane A2 (TXA2) generation, is also involved: activation of the aldose reductase pathway is implicated in oxidative stress-induced TXA2 biosynthesis, amplified by exposure to collagen, indicating that thromboembolic events are promoted when the vascular endothelium is damaged. Thrombin receptors, whose expression is upregulated in NAFLD, signal via proteinase-activated receptors 1-4 (PAR1-4), leading to platelet hyperreactivity. While experimental models confirm thrombin generation in NAFLD, clinical evidence is lacking. Patients and mice with NAFLD have increased blood levels of molecules present in platelet granules. Thrombospondin (TSP-1), for instance, is present in platelets but is also synthesized by other cells. One mechanism of interaction between platelets and leukocytes is through CD40L. CD40L belongs to the TNF superfamily and is increased on the platelet surface in NAFLD, signalling to leukocytes expressing CD40. CXCL4, a protein secreted by platelets, binds glycosaminoglycans and the chemokine receptor CXCR3. The transfer of microRNAs via platelet-derived microparticles (PMP) to hepatocytes has also been demonstrated: PMP carrying miR-25-3p promoted hepatocyte proliferation, modifying gene expression through mediators with both pro- and anti-fibrotic effects. The profibrotic effects of activated platelets are thus due, at least in part, to these released mediators, and the real role of platelets in increasing liver fibrosis is underlined by the documented positive effect of aspirin. Regarding the immune response, both innate and adaptive, platelets play a very important role in its stimulation. Activated platelets attract immune cells and modulate inflammation through the expression of specific receptors and the release of chemokines and cytokines in the liver. According to Malehmir et al., platelets are involved only in the pathophysiology of NASH, while they do not play any role in steatosis. In patients with NASH and in mice with choline-deficient high-fat-diet (CDHFD)-induced NASH there is increased hepatic infiltration of CD3+CD8+ T cells, CD11b+MHCII+ myeloid cells and Ly6G+ granulocytes, and antiplatelet treatment reduces this infiltration. The demonstration that the improvement of NASH with the combined aspirin-clopidogrel treatment is not COX-dependent comes from the evidence that, in experimental NASH, sulindac, a pure COX inhibitor, does not modify obesity, the liver/body-weight ratio, hepatic triglycerides, glucose tolerance, or liver damage. Considering the role of platelet receptors in NASH, it has been shown that the GPIIb subunit of the platelet fibrinogen receptor GPIIb/IIIa is not involved, confirming that platelet aggregation is not responsible for NASH. Development of diet-induced NASH was likewise not affected by deletion of P-selectin (Selp-/-) or of von Willebrand factor. Recently, the contribution of platelets to liver inflammation was confirmed by immunohistochemical staining of liver biopsies showing accumulation of platelets and neutrophil extracellular traps (NET) in the liver, with a correlation with the NAFLD activity score.
Circulating platelets from patients with NAFLD were shown to have a significant increase in inflammatory transcripts, whereas leukocytes did not. Few studies have addressed the effect of antiplatelet therapy in NAFLD/NASH, as previously anticipated. The most striking evidence of the benefit of platelet inhibition is provided by Malehmir et al. They showed that aspirin-clopidogrel treatment acted via intercepting signal transduction from Akt to c-Raf, and that the proportions of CD4+/CD8+ and NKT cells and CD11b+F4/80hi Kupffer cells, Kupffer cell activation, and inflammatory myeloid cell infiltration were reduced. Thus, aspirin-clopidogrel prevented NASH and reduced the NASH-related increase in platelet interaction with the liver endothelium, T cells and innate immune cells. Finally, antiplatelet treatment seems to be effective only in the liver, affecting the interactions of GPIbα+ platelets with Kupffer cells in mouse and human NASH. Scientific evidence supports the hypothesis that platelets are implicated in the pathophysiology of NAFLD/NASH mostly by exerting proinflammatory and profibrotic activities, rather than their thrombogenic activities. The recently discovered interaction of platelets with liver cells and the immune system introduces new models of inflammation and fibrogenesis in the setting of chronic liver diseases, anticipating the potential efficacy of antiplatelet agents to prevent the progression of NAFLD towards NASH and, eventually, liver cancer. Further research is required to identify the detailed mechanisms and potential specific targets of pharmacological intervention. Clinical trials in selected patients, who may benefit from antiplatelet intervention, are warranted."} +{"text": "Non-alcoholic fatty liver disease (NAFLD), which affects about a quarter of the global population, poses a substantial health and economic burden in all countries, yet there is no approved pharmacotherapy to treat this entity, nor well-established strategies for its diagnosis. Its prevalence has been rapidly driven by increased physical inactivity, in addition to excessive calorie intake compared to energy expenditure, affecting both adults and children. The increase in the number of cases, together with the higher morbidity and mortality that this disease entails with respect to the general population, makes NAFLD a serious public health problem. Closely related to the development of this disease is leptin, a hormone derived from adipocytes that is involved in energy homeostasis and lipid metabolism. Numerous studies have verified the relationship between persistent hyperleptinemia and the development of steatosis, fibrinogenesis and liver carcinogenesis. Therefore, further studies of the role of leptin in the NAFLD spectrum could represent an advance in the management of this set of diseases. Leptin is characterized by having pleiotropic effects due to the great variety of leptin receptors (known as Ob-R or LEPR), thus being able to affect many biological processes at different levels. The six existing spliced Ob-R forms are called Ob-Ra, Ob-Rb, Ob-Rc, Ob-Rd, Ob-Re and Ob-Rf, and belong to the class I cytokine receptor superfamily, but they differ in their intracellular domains. This adipokine is mostly recognized for playing a key role in the central control of both energy metabolism and obesity. Today, several problems are associated with NAFLD.
This entity is the leading cause of liver disease worldwide and its prevalence is increasing, affecting obese patients as well as 25% of lean patients (BMI 20.0-24.9 kg/m²). NAFLD is a clinicopathologic entity comprising a broad spectrum of liver diseases, ranging from simple steatosis to NASH, a more aggressive form of NAFLD associated with varying degrees of hepatic fibrosis, cirrhosis, and HCC. Its prevalence can also be affected by other factors, such as age, gender and race. NAFLD is also considered the hepatic component of the metabolic syndrome, whose prevalence is increasing worldwide at the same time as obesity and type 2 diabetes mellitus (T2DM). The pathogenesis of NAFLD entails a complex interplay between environmental factors, obesity, changes in the microbiota and predisposing genetic variants that result in altered lipid homeostasis and hepatocyte triglyceride accumulation. Leptin acts by binding to its receptors. Specifically, the Ob-Rb isoform is the main leptin receptor, as it provokes signaling cascades. Leptin and other inflammatory adipokines, such as IL-6 or TNF-α (tumor necrosis factor alpha), promote insulin resistance, which has been extensively described in the pathophysiology of NAFLD during the last few decades. Moreover, some pathologies can cause NAFLD. This is the case of congenital or acquired lipodystrophy, which is characterized by the total or partial absence of subcutaneous adipose tissue and promotes ectopic accumulation of fat in other locations, including the liver, leading to severe insulin resistance and the development of NAFLD. NAFLD comprises a set of liver diseases, some of them irreversible. NAFLD development is divided into three main steps: simple steatosis, NASH, and liver cirrhosis; eventually, NAFLD can also result in HCC. Simple steatosis is caused by factors such as a high-fat and/or high-sugar diet, obesity, T2DM, and other metabolic diseases, while NASH can develop through inflammation and hepatocyte apoptosis. If liver fibrosis is provoked at this step, cirrhosis (and possibly HCC) will also develop. Hepatic steatosis has different degrees of severity related to liver damage in NAFLD, from simple steatosis to NASH. NASH is the most important disease in the NAFLD spectrum, since its prevalence is estimated at approximately 1.5-6.5% in the general population, a percentage that increases considerably in obese individuals. Hepatic steatosis can be caused by both aberrant lipid and aberrant glucose metabolism. One of leptin's functions is to limit the storage of triglycerides in adipocytes and in non-adipose tissues, including the liver, thereby preventing lipotoxicity. Under normoleptinemia conditions, leptin exerts an anti-steatotic effect and improves insulin sensitivity by suppressing hepatic glucose production and lipogenesis. Leptin has also been suggested to have a synergistic effect when used together with insulin, probably by inhibiting the production of very low-density lipoproteins (VLDL). By contrast, high leptin levels have also been associated with hepatic steatosis and NAFLD pathogenesis, since a high percentage of NAFLD patients have been observed to suffer from obesity, which is closely related to hyperleptinemia. In NAFLD, most patients have simple steatosis, but those with NASH can advance to the next step of the disease, which is fibrosis.
The mechanisms of progression from simple steatosis to NASH are not entirely clear, but some factors are known to be involved in the process. Advanced fibrosis implies an increased risk of developing other NAFLD-related complications, such as cirrhosis and HCC; for that reason, an early diagnosis of patients with advanced fibrosis is crucial. Activated hepatic stellate cells (HSC) also contribute to increased inflammation and liver fibrosis by releasing TGF-β1, angiopoietin-1, VEGF, and collagen-I. In addition, HSC appear to produce leptin, and have also been proposed to express Ob-Rb, which establishes a vicious cycle by stimulating proliferation and preventing apoptosis of HSC, thus affecting hepatic inflammation and fibrosis. Kupffer cells (KC) can also contribute to NASH progression, as can the CD57+ T cells found during NASH progression. According to several studies with small sample sizes, progression from NASH to liver cirrhosis can occur in up to 25% of patients. This high disease burden has led to an increase in the number of NASH-related transplants, and NASH may become the leading cause of liver transplantation worldwide in the coming decades, displacing the hepatitis C virus. There are reports on leptin in other types of liver cirrhosis, in which this hormone has been found at both high and low levels. Obesity and T2DM are cancer promoters and, when they coexist with NAFLD, their aggressive potential can be underestimated. HCC is the neoplasm most closely related to obesity in men. In this regard, HCC incidence increased by 3% per year in the last decade, unlike other malignancies also associated with obesity, such as breast or colon cancer, whose incidences remained stable or decreased. In part, this fact may be explained by the increase in the prevalence of NASH. In HCC there are some established risk factors, including chronic hepatitis B, chronic hepatitis C, alcohol consumption, and NAFLD, all of them potentially linked to leptin. However, Elinav et al. (2006) suggested a beneficial role of leptin in HCC murine models, since this hormone decreased tumor size and improved survival. NAFLD is a worldwide health problem due to its increasing prevalence, so research on its diagnosis, follow-up, and subsequent treatment has become essential. Moreover, NAFLD requires a multidisciplinary approach given its high risk of cardiovascular morbidity and mortality. In this sense, there is an urgent need for non-invasive diagnostic methods to replace liver biopsy, so that early diagnosis and treatment monitoring become possible in a large part of the population. Leptin, due to its direct relationship with body fat levels and insulin resistance, has been shown to be an independent predictor of the presence or development of NAFLD. This adipokine has been shown to have antisteatotic effects, although it has also been associated with hepatic steatosis and may promote the more advanced stages of NAFLD, including NASH and liver fibrosis. The role of leptin in both NAFLD-related cirrhosis and HCC has never been studied, and its functions in other types of liver cirrhosis remain controversial.
However, there is much evidence establishing the protumoral role of this hormone in HCC derived from other liver diseases. Treatment with leptin has proven to be effective in patients with congenital leptin deficiency; however, its use in the rest of the affected subjects remains controversial, which highlights the importance of continuing this line of research on the development of leptin analogues that conserve the antisteatotic effect and lack proinflammatory and profibrogenic action, as well as on leptin sensitizers, or on their synergistic effect when associated with different drugs. While further observational studies and large clinical trials with long-term follow-up are needed to fully evaluate the efficiency of the use of this adipokine, leptin could be used as an interesting biomarker in the diagnosis and follow-up of NAFLD, including the combination of leptin level measurement together with metabolic analyses, lipid profile, and glucose levels."} +{"text": "Autosomal-dominant polycystic kidney disease (ADPKD) is characterized by uncontrolled renal cyst formation, and few treatment options are available. There are many parallels between ADPKD and clear-cell renal cell carcinoma (ccRCC); however, few studies have addressed the mechanisms linking them. In this study, we aimed to investigate their convergences and divergences based on bioinformatics and explore the potential of compounds commonly used in cancer research to be repurposed for ADPKD. We analysed gene expression datasets of ADPKD and ccRCC to identify the common and disease-specific differentially expressed genes (DEGs). We then mapped them to the Connectivity Map database to identify small molecular compounds with therapeutic potential. A total of 117 significant DEGs were identified, and enrichment analyses revealed that they are mainly enriched in arachidonic acid metabolism, the p53 signalling pathway and metabolic pathways. In addition, 127 ccRCC-specific up-regulated genes were identified as related to the survival of patients with cancer. We focused on the compound NS398, as it targeted DEGs, and found that it inhibited the proliferation of Pkd1−/− and 786-0 cells. Furthermore, its administration curbed cystogenesis in Pkd2 zebrafish and early-onset Pkd1-deficient mouse models. In conclusion, NS398 is a potential therapeutic agent for ADPKD. ADPKD is mainly caused by mutations in PKD1 or PKD2. When PC1 or PC2 is missing, the levels of intracellular calcium ions decrease, whereas those of cyclic AMP increase. This affects downstream signalling pathways, cell proliferation and the secretion of cyst fluid, ultimately leading to the formation and growth of cysts. Previous studies have found that ADPKD and tumours share many mechanisms and signalling pathways; thus, ADPKD has been called 'neoplasia in disguise'. We have previously developed the Renal Gene Expression Database (RGED), a bioinformatics database comprising comprehensive gene expression datasets of studies on renal diseases published in the NCBI Gene Expression Omnibus (GEO) database. To clarify the similarities and dissimilarities between ADPKD and ccRCC and to screen and identify candidates for ADPKD therapy, we designed an integrated bioinformatics approach based on the aforementioned rationale and conducted corresponding experiments for validation.
Thus, our results will provide insights into the relationship between ADPKD and ccRCC, as well as a new therapeutic strategy for ADPKD. 2.1 We collected the gene expression profiles of renal cystic tissues of patients with ADPKD and tumour tissues of patients with ccRCC from RGED (http://rged.wall-eva.net) and the high-throughput gene expression datasets from GEO (http://www.ncbi.nlm.nih.gov/geo/). The two datasets retrieved were GSE7869 and GSE53757, both constructed based on the GPL570 platform (Affymetrix Human Genome U133 Plus 2.0 Array). GSE7869 included cystic tissues of different sizes from 5 Pkd1 polycystic kidneys and non-cancerous renal cortical tissues from 3 nephrectomized kidneys, whereas GSE53757 included 72 ccRCC tumour tissues of all disease stages and matched kidneys. 2.2 The R packages simpleaffy, affyPLM, affy, gcrma and limma were used for background correction, standardization and expression value calculation of the original datasets GSE7869 and GSE53757. Fold-change and adjusted p-values were used to screen DEGs. The annotate package was utilized to annotate DEGs. Volcano maps of DEGs were constructed using ggplot2. Subsequently, we identified intersecting DEGs between the datasets using the Venn package in R. 2.3 Function enrichment and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses were performed using the Database for Annotation, Visualization and Integrated Discovery 6.8 (DAVID, https://david.ncifcrf.gov) to investigate the function of the DEGs. Gene ontology (GO) annotation includes three categories: biological process (BP), cellular component (CC), and molecular function (MF). Significant GO terms and KEGG pathways were defined as p < 0.05. The interaction between proteins was identified using the String database. 2.4 Kaplan-Meier plots and log-rank test results were retrieved using the cgdsr, survival and survminer packages in R. The high-expression group was defined as those with an expression value above the 75% quantile, whereas the low-expression group was defined as those with an expression value below the 25% quantile; p < 0.01 was set as the cut-off value. 2.5 The CMap provides gene expression profiles from cultured human cells treated with bioactive small molecules and can be used to discover functional connections among diseases, genetic perturbation and drug action. Connections were considered significant at a P value < 0.05. 2.6 ADPKD cyst-lining epithelial mouse cells (Pkd1−/−) were a kind gift from Yiqiang Cai and were cultured as described by Shibazaki et al. For the MTT assay, after incubation with MTT for 4 h, DMSO was added to dissolve the crystallized products and the absorbance at 490 nm was measured. The measurement was repeated in six duplicate wells. 2.7 Cell cycle staining was performed using a propidium iodide (PI) cell cycle kit according to the manufacturer's instructions (Multi Sciences Biotech). Briefly, cells were harvested, washed twice with phosphate-buffered saline and stained with 50 µg/ml PI. Cells were then counted in a flow cytofluorometer based on red emissions at 630 nm, and data were analysed using a Beckman Coulter CyAn ADP analyzer. 2.8 EdU assays were conducted following the manufacturer's instructions (Beyotime).
After adding 2× EdU working solution to each well, the 24-well plates were placed in the incubator (33°C for Pkd1−/− cells or 37°C for 786-0 cells) for 4 h. After fixing with 4% formaldehyde, the click reaction cocktail (0.2 ml) was added to each well, and the DNA was stained with Hoechst. The samples were imaged using an Olympus BX-51 fluorescence microscope. 2.9 A column tissue and cell protein extraction kit (Minute) was used to extract the proteins of cultured and processed cells and of mouse kidney tissues. Protein was quantified using a BCA kit (Thermo Fisher Scientific) following the manufacturer's instructions. The protein concentration of each sample was balanced with double-distilled H2O (ddH2O), and 5× protein loading buffer (Yamei) was added to each protein sample. The solution was boiled in a 95°C dry heater for 15 min. An equal amount (20 μl) of protein was loaded onto an SDS-PAGE gel and electrophoresed, and the separated proteins were transferred onto a polyvinylidene fluoride membrane. After incubation with the appropriate primary and secondary antibodies, development was performed using an enhanced chemiluminescence reagent (Thermo Fisher Scientific). Densitometric analysis for the determination of relative protein expression was performed using the Image Lab system (Bio-Rad Laboratory), with GAPDH as a loading control. 2.10 Wild-type zebrafish (Nüsslein-Volhard Laboratory) were maintained, mated, and staged as described previously. A Pkd2 morpholino (Pkd2 MO), 5′-AGGACGAACGCGACTGGAGCTCATC-3′, was constructed using Gene Tools (Philomath) and injected at the 1-cell stage. Pkd2 morphants were divided into two groups 24 h post-fertilization: one group was treated with NS398 (10 μM), whereas the other was treated with DMSO as the vehicle-treated control group. The embryos were anaesthetized and immobilized in methylcellulose 72 h post-fertilization to visualize pronephric cyst formation and to measure the rate of cyst formation in all embryos. The experiment was independently conducted three times. The early-onset PKD mouse model was generated as described previously: tamoxifen was administered to Pkd1fl/fl:tamoxifen-Cre mice at postnatal day 10 (P10) to induce Pkd1 deficiency. NS398 was administered every other day via intraperitoneal injection from P13 at 20 μg/g body weight. Mice were killed by isoflurane using a rodent gas anaesthesia machine (Yuyan Machine) at P30; blood was collected by retro-orbital puncture and the kidneys were harvested. The serum blood urea nitrogen (BUN) concentration was measured with a urea assay kit (BioAssay Systems). The kidneys were weighed and collected for morphological analysis and Western blot analysis. The study protocol was approved by the Second Military Medical University's Animal Care and Use Committee. 2.11 Kidney sections were fixed in 4% paraformaldehyde buffered solution and embedded in paraffin. The kidneys were then sliced into 2-μm-thick sections and stained with haematoxylin and eosin (HE). Aperio XT and Leica Aperio scanners (Leica Biosystems) were used to scan all specimens. The size and shape of the kidney cysts were quantitatively measured from the sagittal section of the kidney. The cyst index (CI) was determined as previously described by Yang et al. 2.12 Data are presented as the mean ± standard error of at least triplicates or as representative of three separate experiments. Significance was determined using a two-sided Student's t test; p < 0.05 was considered statistically significant. 3.1 DEGs were screened using the thresholds |log2FC| ≥ 2 and p < 0.001.
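To make the screening step concrete, the thresholding and Venn-style intersection just described can be sketched as follows. This is a minimal illustration in Python rather than the authors' R pipeline; it assumes each dataset has already been reduced (e.g., by limma) to a per-gene table, and the column names and toy values are invented for the example.

```python
import pandas as pd

def screen_degs(tbl: pd.DataFrame, lfc: float = 2.0, p: float = 0.001):
    """Apply the |log2FC| >= 2 and p < 0.001 thresholds used above;
    return (up-regulated, down-regulated) gene sets."""
    sig = tbl[tbl["adj_p"] < p]
    return (set(sig.index[sig["logFC"] >= lfc]),
            set(sig.index[sig["logFC"] <= -lfc]))

# Toy stand-ins for summaries of GSE7869 (ADPKD) and GSE53757 (ccRCC);
# real tables would hold one row per probe/gene.
adpkd_tbl = pd.DataFrame({"logFC": [2.5, -3.1, 0.4], "adj_p": [1e-5, 1e-4, 0.2]},
                         index=["PLAUR", "KL", "ACTB"])
ccrcc_tbl = pd.DataFrame({"logFC": [3.0, -2.2, 2.1], "adj_p": [1e-6, 1e-5, 1e-4]},
                         index=["PLAUR", "KL", "JAK3"])

adpkd_up, adpkd_down = screen_degs(adpkd_tbl)
ccrcc_up, ccrcc_down = screen_degs(ccrcc_tbl)
common_up = adpkd_up & ccrcc_up          # analogous to the 22 genes reported below
common_down = adpkd_down & ccrcc_down    # analogous to the 95 genes reported below
ccrcc_specific_up = ccrcc_up - adpkd_up  # pool for the survival analysis
print(common_up, common_down, ccrcc_specific_up)
```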
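Likewise, the quantile-based survival comparison from section 2.4 (high = above the 75% quantile, low = below the 25% quantile, log-rank cut-off p < 0.01) can be sketched outside R. The following uses the Python lifelines package; the clinical table and its column names ("months", "event", one expression column per gene) are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_by_expression_quantile(df: pd.DataFrame, gene: str) -> float:
    """Split patients at the 75%/25% expression quantiles of `gene`,
    plot both Kaplan-Meier curves, and return the log-rank p-value."""
    hi = df[df[gene] > df[gene].quantile(0.75)]
    lo = df[df[gene] < df[gene].quantile(0.25)]
    kmf = KaplanMeierFitter()
    kmf.fit(hi["months"], event_observed=hi["event"], label=f"{gene} high")
    ax = kmf.plot_survival_function()
    kmf.fit(lo["months"], event_observed=lo["event"], label=f"{gene} low")
    kmf.plot_survival_function(ax=ax)
    res = logrank_test(hi["months"], lo["months"],
                       event_observed_A=hi["event"],
                       event_observed_B=lo["event"])
    return res.p_value  # compare against the p < 0.01 cut-off used above

# e.g. km_by_expression_quantile(clinical_df, "PLAUR") for one of the top genes
```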
From the Venn diagrams, 22 up-regulated and 95 down-regulated DEGs were found to be common to both datasets. Because the main difference between ADPKD and ccRCC is the malignant and metastatic nature of ccRCC, which determines the prognosis of patients, we hypothesized that ccRCC-specific DEGs are closely related to the prognosis of patients with ccRCC. Of the 880 ccRCC-specific genes, 127 were identified as significantly related to the survival of patients with kidney cancer. The top 6 genes ranked by hazard ratio were PPP1R18, PLAUR, TMEM44, JAK3, PTTG and ENTPD1, and their Kaplan-Meier survival curves are displayed in the corresponding figure. Consistent with the results of the MTT assays, the percentage of EdU-positive cells was lower in Pkd1−/− cells treated with 50 μM NS398 than in cells treated with DMSO. The EdU proliferation assay showed a similar result in renal clear-cell carcinoma cells: the number of EdU-positive 786-0 cells was remarkably reduced after treatment with 100 μM NS398. In addition, NS398 reduced the levels of phospho- and total Akt and ERK. Representative Pkd2 morphants administered DMSO or NS398 are shown in the corresponding figure; the rate of cyst formation in Pkd2 morphants significantly decreased after treatment with 10 μM NS398. In summary, we applied an integrated bioinformatics approach based on the high-throughput data of ADPKD and ccRCC and verified the results experimentally: NS398 inhibited the proliferation of ADPKD cyst-lining epithelial (Pkd1−/−) and renal clear-cell carcinoma (786-0) cells, and it significantly inhibited cyst formation in zebrafish and mouse animal models of PKD. Thus, our results indicate the value of NS398 in the treatment of ADPKD and/or tumours. Basic and clinical studies are warranted to validate our results. The authors confirm that there are no conflicts of interest. Sixiu Chen: data curation; formal analysis; investigation; methodology; software; validation; visualization; writing-original draft (lead). Linxi Huang: data curation; formal analysis; investigation; methodology; software; validation; visualization (lead); writing-original draft. Shoulian Zhou: data curation; formal analysis; investigation; methodology; validation (lead); writing-original draft. Qingzhou Zhang: data curation; formal analysis; software (lead). Mengna Ruan: formal analysis; investigation; methodology; validation. Lili Fu: data curation; formal analysis; visualization. Bo Yang: data curation; formal analysis; methodology. Dechao Xu: data curation; formal analysis; methodology; resources (supporting); writing-original draft (supporting). Changlin Mei: conceptualization; formal analysis (supporting); funding acquisition; project administration; resources; supervision; writing-review & editing. Zhiguo Mao: conceptualization; formal analysis (supporting); funding acquisition; project administration; resources; supervision; writing-review & editing. Supporting information: Fig S1; Supinfo S1."} +{"text": "The cyclic tensile behavior of steel-reinforced high strain-hardening ultrahigh-performance concrete (HSHUHPC) was investigated in this paper. In the experimental program, 12 HSHUHPC specimens, each with a single concentrically embedded steel rebar, were tested under cyclic uniaxial tension, accompanied by acoustic emission (AE) source locating technology, and 4 identical specimens under monotonic uniaxial tension were tested as references.
The experimental variables mainly include the loading pattern, the diameter of the embedded steel rebar, and the level of target strain at each cycle. The tensile responses of the steel-reinforced HSHUHPC specimens were evaluated using multiple performance measures, including the failure pattern, load\u2013strain response, residual strain, stiffness degradation, and the tension-stiffening behavior. The test results showed that the enhanced bond strength due to the inclusion of steel fibers transformed the failure pattern of the steel-reinforced HSHUHPC into a single, localized macro-crack in conjunction with a sprinkling of narrow and closely spaced micro-cracks, which intensified the strain concentration in the embedded steel rebar. Besides, it was observed that the larger the diameter of the embedded steel rebar, the smaller the maximum accumulative tensile strain under cyclic tension, which indicated that the larger the diameter of the embedded steel rebar, the greater the contribution to the tensile stiffness of steel-reinforced HSHUHPC specimens in the elastic\u2013plastic stage. In addition, it was found that a larger embedded steel rebar appeared to reduce the tension-stiffening effect (peak tensile strength) of the HSHUHPC. Moreover, the residual strain and the stiffness of the steel-reinforced HSHUHPC were reduced by increasing the number of cycles and finally tended toward stability. Nevertheless, different target strain rates in each cycle resulted in different eventual cumulative tensile strain rates; hence the rules about failure pattern, residual strain, and loading stiffness were divergent. Finally, the relationship between the accumulative tensile strain and the loading stiffness degradation ratio under cyclic tension was proposed and the tension-stiffening effect was analyzed. Ultrahigh-performance concrete (UHPC), as one of the current advanced high-performance cementitious composites, exhibits eminent mechanical properties: comparatively high compressive strength \u2265120 MPa) and tensile strength (\u226510 MPa), strain-hardening characteristics under tensile load for a given volume of discontinuous internal fibers, and excellent durability due to an optimized dense matrix as well 20 MPa an. BesidesThe intensified properties of UHPC have urged a wide range of researchers and engineers to apply it in various types of innovative civil engineering, where bending prevails . NeverthRecently, the rehabilitation of a concrete bridge deck or orthotropic steel deck was widely conducted by employing a thin reinforced UHPC overlay, instead of a relatively thicker reinforced concrete layer, because the demand to upgrade the load-bearing capacity of bridge structures is rising ,11. In tThe objective of the present study was to investigate the behavior of steel-reinforced HSHUHPC members under cyclic axial tension at the serviceability-limit state. Twelve steel-reinforced HSHUHPC dog-bone-shaped specimens under cyclic uniaxial tension and four identical specimens under monotonic uniaxial tension were fabricated and tested, supplemented with crack width detection and AE source locating technology as well. AE technology provides support in detecting the internal damages of reinforced UHPC from the microcosmic point of view. AE source locating is a method to obtain AE source distribution in three-dimensional space. The experimental variables included the loading pattern, diameter of embedded steel rebar, and the level of target strain at each cycle. 
All of the specimens were loaded for 10 cycles. The evolution of mechanical properties, damage mode, crack spacing, distribution of damage points, residual strain, tension-stiffening response, and stiffness degradation mechanism of steel-reinforced HSHUHPC members under cyclic tension were investigated and analyzed.A total of 12 steel-reinforced HSHUHPC dog-bone-shaped specimens under cyclic uniaxial tension and four specimens under monotonic uniaxial tension as reference specimens, with identical dimensions, were fabricated and tested. The detailed dimensions of all specimens are shown in The HSHUHPC materials used in this research consist of UHPC premixed powder, steel fibers, and water. For steel-reinforced HSHUHPC specimens, HRB400 deformed steel rebars with different diameters and a length of 500 mm were used for longitudinal reinforcement. Three bare steel rebars were tested to obtain the tensile stress\u2013strain characteristics. The average tensile stress\u2013strain relationship of the steel rebars, which were tested in a universal testing machine, was obtained as shown in A direct tensile test was carried out through a universal testing machine (WDW-300 servo-controlled testing system) running in a displacement control manner. The configuration, dimensions, and test setup of the steel-reinforced HSHUHPC dog-bone specimens are presented in i + 1 cycle; the total strain amplitude of the i + 1 cycle is the superposition of the cumulative residual strain generated on the i cycle and the target strain value. Moreover, the loading rate remained at a speed of 0.3 mm/min throughout the whole loading process, whether for the loading part or the unloading part. All the specimens were loaded for 10 cycles.Four loading scenarios were performed in this study. Before each test, a preloading of 0.5 kN was applied and was then unloaded to zero in order to stabilize the testing system and obtain the complete stress\u2013strain curve originating from zero point. For the monotonic loading scheme, the specimens were subjected to a displacement control load with a speed of 0.3 mm/min until the specified strain. For cyclic loading schemes, three different loading histories were used to investigate the damage evolutions of steel-reinforced HSHUHPC under cyclic tension with three levels of target strains corresponding to the different service states of reinforced UHPC structures. In the case of a given target strain, the loading part of each cycle was performed when the preloading force reached 0.5 kN, while after attaining the target strain, the unloading part was started and finished until 0.5 kN. What calls for special attention is that for the The test results implied that despite significantly different reinforcement ratios and loading patterns, the level of target strain in the case of cyclic tension led to a distinct difference in the failure pattern of steel-reinforced HSHUHPC members caused by various eventual accumulative ultimate strains. When the level of target strain was 1000 \u03bc\u03b5, after 10 cycles, all the specimens exhibited multiple micro-cracks invisible to the naked eye. It is worth noting that the boundary of the crack width is 0.05 mm between the invisible micro-crack and the visible macro-crack. 
This also demonstrates that the HSHUHPC possesses good control ability in terms of crack width.When the levels of target strain for the tested specimens are 2000 \u03bc\u03b5 and 2500 \u03bc\u03b5, after 10 cycles, the maximum tensile strain generally has exceeded the yield strain of the reinforcement and ultimate tensile strain of HSHUHPC. The damage pattern is transformed into a single macro-crack along with a series of narrow and closely spaced micro-cracks. The main reason is that the tensile strain-hardening characteristics of the HSHUHPC provide the steel-reinforced HSHUHPC members with an effective crack width control ability for the initially opened multiple micro-cracks prior to the yielding of the embedded steel rebar. Once the embedded steel rebar begins to yield, the gradually increased demand for deformation on the specimen progressively results in the occurrence and propagation of a localized macro-crack at the weakest cross section, possibly where the least effective number of fibers are distributed in the direction parallel to the tension loading; however, the local strain in the other segment is still below the ultimate strain. Eventually, there is a pronounced post-yield localized macro-crack. It is worth noting that the occurrence of the visible macro-crack was between the yielding strain of the steel rebar and the ultimate tensile strain of HSHUHPC, that is, around 2700\u20132840 \u03bc\u03b5. This may be due to the fact that as the steel rebar yields, the steel fibers at the crack are motivated to debond and pull out gradually and the synergistic effect between the steel rebar and HSHUHPC weakens. Besides, the steel fibers at the crack cannot balance the large deformation demands of the steel rebar after yielding, despite the presence of the bridging effect of the steel fibers. As shown in As shown in When the level of the target strain was 2000 \u03bc\u03b5 and 2500 \u03bc\u03b5, the residual strain of steel-reinforced HSHUHPC specimens gradually remained stable once the steel rebar yielded as the number of cycles increased. Upon the yielding of the steel rebar, the crack pattern of the steel-reinforced HSHUHPC transformed the uniformly distributed micro-cracks into a localized single macro-crack. Meanwhile, once the crack width progressed beyond a certain threshold, the role of the bridge effect of fibers in the cracks was enhanced and gradually undermined due to the debonding and pull-out of the fibers.i-th cycle to the initial loading stiffness, and in the same way, the normalized unloading stiffness of each cycle refers to the ratio of the unloading stiffness at the i-th cycle to the initial unloading stiffness. These reflect the stiffness degradation degree of the members. Regardless of the level of the target strain, the normalized loading stiffness and unloading stiffness were considerably reduced during the 2nd or 3rd cycle and remained stable with some fluctuations as the number of cycles increased. Compared with For all the steel-reinforced HSHUHPC specimens, it could be observed that the size of the embedded rebars had a positive effect on the tensile stiffness of the specimens in the elastic stage. Besides, in the case of the elastic\u2013plastic stage, increasing the diameter of the steel rebar upgrades the tensile stiffness effectively. AE analysis technology could effectively monitor and detect the internal damages of UHPC under cyclic tensile loading on the microcosmic level. 
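To connect the normalized-stiffness definition given above with the strain-stiffness relationship the paper proposes further on, the sketch below shows how such a model could be fitted. The functional form of the proposed relationship is not given in this excerpt, so an exponential decay with fitted constants is assumed purely for illustration, and the data points are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def normalized(stiffness_per_cycle: np.ndarray) -> np.ndarray:
    """Normalized (un)loading stiffness: the i-th cycle's stiffness
    divided by the first cycle's, as defined above."""
    return stiffness_per_cycle / stiffness_per_cycle[0]

def degradation_model(eps_res, a, b):
    """Assumed (illustrative) degradation law: ratio = 1 - a*(1 - exp(-b*eps))."""
    return 1.0 - a * (1.0 - np.exp(-b * eps_res))

# Invented example data: cumulative residual strain (micro-strain) vs
# loading-stiffness degradation ratio over successive cycles.
eps = np.array([0.0, 150.0, 400.0, 800.0, 1200.0, 1500.0])
ratio = np.array([1.00, 0.85, 0.72, 0.63, 0.60, 0.59])

(a, b), _ = curve_fit(degradation_model, eps, ratio, p0=(0.4, 0.005))
pred = degradation_model(eps, a, b)
r2 = 1.0 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)
print(f"a={a:.3f}, b={b:.5f}, R^2={r2:.3f}")  # cf. the R2 values reported below
```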
As mentioned above, the diameter of the embedded steel rebar has little influence on the damage pattern; thus, a typical specimen for each level of target strain is discussed herein. Additionally, in order to simplify the analysis, 8 cycles were taken for in-depth analysis, because the experimental phenomena showed that after the 4th cycle the loading and unloading curves tended to be stable for all the specimens. The total tensile bearing capacity of steel-reinforced HSHUHPC consists of the contributions of the steel rebar and the HSHUHPC. By removing the pure steel rebar response from the total tensile load–strain envelope curve, the contribution of the HSHUHPC in the steel-reinforced HSHUHPC member, usually called the tension-stiffening effect, is obtained. In addition, traditional reinforced concrete design methods based on the mechanics of slender beams hypothesize that the steel yields when the concrete crushes. It is observed experimentally that an early strain-hardening effect in the steel occurs due to the high bond strength between HSHUHPC and steel rebar and to the tension-stiffening effect. Therefore, traditional methods based on the mechanics of slender beams might typically underestimate the strength of reinforced HSHUHPC components subjected to bending, even when accounting for the tensile capacity of the HSHUHPC material. Further experimental research, including the effect of reinforcement ratio and layout of reinforcement, and the evaluation of the flexural capacity of UHPC slender beams considering the tension-stiffening effect, needs to be done in combination with numerical simulation and theoretical calculation. As mentioned above, it can be supposed that hardly any slip was generated between the HSHUHPC and the steel rebar prior to the yielding of the steel rebar. Hence, the steel-reinforced HSHUHPC can be regarded as an isotropic material that undergoes linear elastic deformation, with the steel rebar and HSHUHPC deforming in coordination, until the ultimate tensile load is reached. Following the research of Guo Jun-Yuan et al., a relationship model between the accumulative residual strain and the loading stiffness degradation ratio was fitted, with coefficients of determination (R²) of 0.902, 0.869, 0.776, and 0.913 for the different diameters of the embedded steel rebar. The presence of some large scatter is due to the great difference in stiffness degradation rate between the elastic stage and the strain-hardening stage. The relationship model needs to be studied further using more tests. Twelve steel-reinforced HSHUHPC specimens under cyclic axial tension, accompanied by AE source locating technology, were tested in this paper. The effects of the loading pattern, the diameter of the embedded rebar, and the level of target strain in each cycle on the tensile behavior of steel-reinforced HSHUHPC were studied. The following conclusions were drawn: (1) For a target strain of 1000 με in each cycle, the typical failure of steel-reinforced HSHUHPC exhibited uniformly distributed multiple micro-cracks due to the excellent crack width control ability of HSHUHPC, while for target strains of 2000 and 2500 με in each cycle, the typical failure pattern exhibited a single, localized macro-crack in conjunction with a sprinkling of narrow and closely spaced micro-cracks due to the inclusion of steel fibers, which intensified the strain concentration in the embedded steel rebar.
AE source locating technology provides strong evidence for the damage distribution of the whole loading process.(2)In terms of load\u2013strain response and envelope curves, the larger the diameter of embedded steel rebar, the smaller the maximum accumulative tensile strain under cyclic tension, which indicated that the larger the diameter of embedded steel rebar, the greater the contribution to the stiffness of the R-HSHUHPC specimens in the elastic\u2013plastic stage.(3)For the target strain levels of 2000 \u03bc\u03b5 and 2500 \u03bc\u03b5 in each cycle, once the embedded steel rebar yielded and the crack width progressed beyond a certain threshold, fibers began to pull out, and the ability of the fibers to bridge the cracks began to decrease. Hence the residual strain and stiffness degradation rate in each cycle remained stable.(4)According to the tension-stiffening response of steel-reinforced HSHUHPC members, the larger the embedded steel rebar, the lower the peak tensile strength of the HSHUHPC after deducting the steel rebar. This is most likely due to the fact that the larger diameter of the steel rebar results in a worse discontinuous distribution of the steel fiber, although the contact area between steel rebar and UHPC increases, and due to the presence of autogenous shrinkage, the bond interaction at the interface between steel rebar and UHPC is weaker.(5)The relationship model between the loading stiffness degradation ratio and the cumulative residual strain for steel-reinforced HSHUHPC members was proposed. The relationships can be employed to evaluate the stiffness of steel-reinforced HSHUHPC under cyclic tension to some extent.Further studies are needed to include different fiber volumes, steel rebar diameter-to-cover ratios, and reinforcement ratios to better understand how the mutual contribution of steel fibers and rebars, splitting cracks, and so on influence the damage progression and deformation capacity of the reinforced UHPC. Additional tests are necessary to evaluate the flexural capacity of reinforced UHPC members considering the tension-stiffening effect."} +{"text": "The declaration of the Mediterranean Diet as Intangible Cultural Heritage of Humanity in order to preserve a cultural and gastronomical legacy included the protection of lifestyles, knowledge, sociability, and environmental relationships. However, the patrimonialization, popularization, and globalization of a certain conception of this diet have turned it into a de-territorialized global phenomenon. As a consequence of this process, it has been necessary to notably increase the production of its ingredients to satisfy its growing demand, which, in turn, has generated \u201csecondary effects\u201d in some Mediterranean environments of Southeastern Spain. If, on the one hand, their wealth has increased and population has been established, on the other hand, the continuity of certain cultural landscapes linked to local knowledge and particular lifestyles has been broken, replacing them with agro-industrial landscapes exclusively at the service of production. This, at the same time, has caused social and environmental inequalities It has become commonplace to bring up a certain saying attributed to Josep Pla\u2014\u201cthe cuisine of a country is its landscape set in a pot\u201d\u2014to refer to the relationship between diet and landscape. 
As if, by necessity, the landscape of a particular space limited and conditioned its dominant diet, establishing a relationship of dependence between environmental conditions and the possibilities of nourishment. In a way, this long-held idea underlies, as well, the Resolution approved by the European Parliament on 12 MaIn this paper, I present some concrete examples of the environmental, social, or cultural impacts that the rise of this diet is having, or at least a certain way of conceiving it, in some specific areas of the Spanish Mediterranean. Due to the mobility restrictions inherent to the current pandemic, these pages cannot include recent information produced by field work and are based on the analysis of the existing literature on these places, completed with a follow-up of the documentation generated by the administrations public and collected in the specialized magazines. Thus, in the first place, I tell how the conception of the Mediterranean Diet promoted by UNESCO and other organizations has been gradually replaced by a biomedical consideration that, instead of emphasizing landscapes and lifestyles, highlights the value of products regardless of their origin and how they were produced. Below are the specific effects that this conception is having on certain cultural landscapes that are being replaced by new agro-industrial landscapes that, in turn, are changing the ways of life that the UNESCO declaration said must be protected. With this, it is exposed how the declaration of the Mediterranean Diet as a World Heritage Site and its promotion, in addition to the positive effects on the health of those who consume it, may also have had negative effects in some environmental and social contexts.Sweetness and Power. He states that, generally speaking, human groups hardly ever eat every foodstuff found in their surroundings, and, while some are cherished, others are rejected. Likewise, Jes\u00fas Contreras, Antoni Riera, and F. Xavier Medina [https://www.alimentosdespana.es/es/campanas/ultimas-campanas/alimentos-de-espana/el-pais-mas-rico-del-mundo/default.aspx (accessed on 29 January 2021)).The intimate connection between landscape and diet was already questioned in 1985, by Sidney Mintz , when anr Medina have poir Medina (p. 18).r Medina (p. 46),r Medina or consuHowever, if the relationship between landscape, product, and diet is no longer a direct one, but one that comes mediated by other elements , it is worth questioning if the foodstuff that include in their nomenclature the name of a place must have been produced fully in said location or if it is enough for this place to be present in some phase of its manufacturing. For instance, when making wine coming from any of the Designation of Origin (DO) areas, must the grapes have been harvested in said DO, or can they have been harvested far away and processed in the DO? Or, considered from across the pond, can we consider a \u201cMediterranean diet\u201d one that employs products with similar characteristics and features to those of the heterogeneous Mediterranean landscapes when these products have been grown in their totality in California or Mexico? Ultimately, does a Mediterranean Diet require a Mediterranean landscape? 
Although, apparently, the answer to these questions seems simple, in practice it is conditioned by the existence of different conceptions about what the Mediterranean Diet is.As part of the defense prepared by the permanent representatives of Spain, Greece, Italy, and Morocco to the UNESCO to promote the inscription of the Mediterranean Diet in the Representative List of the Intangible Cultural Heritage of Humanity, it is explicitly stated that \u201cThe Mediterranean Diet is the set of skills, knowledge, rituals, symbols and traditions, ranging from the landscape to the table, which in the Mediterranean basin concerns the crops, harvesting, picking, fishing, animal husbandry, conservation, processing, cooking, and particularly sharing and consuming the cuisine\u201d (p. 6), https://ich.unesco.org/es/RL/la-dieta-mediterranea-00884 (accessed on 22 January 2021)) will be able to confirm how, even though this list includes knowledge, skills, traditions, agriculture-related symbols, ways of managing food, commensality, fellowship, communal living, intercultural dialogue, craftsmanship, holidays, markets, etc., there is no reference to Mediterranean landscapes. Likewise, any similar mention is missing from the inventory that supports the Intangilbe Cultural Heritage candidacy presented by Spain (https://ich.unesco.org/doc/src/19700.pdf (accessed on 22 January 2021)). Landscape also fails to play a fundamental part in the candidacy of the set of institutions who joined the proposal, whose consent is included in https://ich.unesco.org/doc/src/16813.pdf (accessed on 22 January 2021). In fact, half of the associations or institutions that supported this candidacy present it both generically and vacuously, as an indispensable addendum that needs to be named in order to fulfil bureaucratic requirements. Furthermore, references to the landscape appears in a vague text\u2014the same one in almost every case, merely changing the name of the signing institution\u2014that states the inclusion of the Mediterranean Diet in the aforementioned list must be contemplated due to the negative effects that \u201cglobalization and socio-cultural changes\u201d are having on \u201cthe health and welfare of people, the stability of the rural population, and the protection of the environment, landscape and culture of Mediterranean communities\u201d . At the same time, a significant number of the signers do not even explicitly mention the relationship between landscape and diet, or they present a variation on the stated formula.However, in spite of these considerations, those engaging the information offered by the UNESCO in regards to the 2013 inscription of the Mediterranean Diet in the Representative List of the Intangible Cultural Heritage of Humanity (https://ich.unesco.org/doc/src/16813.pdf (accessed on 22 January 2021). Annexe 4).Something similar takes place with the 41 proves of consent by the associations or institutions for their knowledge, practices, or activities to be disseminated in order to support the presentation of the candidacy of the Mediterranean Diet to UNESCO, linked to the city and province of Soria. 
While most of them mention landscape in passing, only the document signed by the Sorian Association for the Defence and Study of Nature\u2014Asociaci\u00f3n Soriana para la Defensa y Estudio de la Naturaleza ASDEN (Ecologistas en acci\u00f3n)\u2014points out the importance of this inclusion, \u201cin order to better the health of citizens, as well as their quality of life, collaborating with the conservation of peculiar ecosystems with centennial traditions, and the conservation of peculiar landscapes\u201d ). That is, apparently, the Sorian support for the Mediterranean Diet would assume that one of the defining elements of a local diet\u2014every association endorsing the project showed the connection between their local or provincial activities and this local diet\u2014is a product that requires importation from other places, since the province\u2019s environmental conditions preclude its manufacture. In this case, therefore, the knowledge, traditions, symbols, food management, etc., presented by the UNESCO declaration are necessarily extra-local and, in consequence, point to a disconnection between diet and landscape.In spite of its continental features and its lack of proximity to the Mediterranean Sea, Soria\u2019s shows of consent proved decisive in the process of making the Mediterranean Diet Intangible Heritage, since Soria was the first Spanish town to support the candidacy through a long process where diverse interest came into conjunction . During This separation becomes possible because in the particular case of the Mediterranean Diet, this crucial role in the patrimonialization process has been played by agents linked in one way or the other to the medical profession and food science. Within the 53 backers of the inscription of the Mediterranean Diet presented by the permanent representatives of Spain, Greece, Italy, and Morocco, we find, among others, the Spanish Academy of Nutrition and Food Science, the University of Barcelona\u2019s Research Group for Communitarian Nutrition (Grupo de Recerca en Nutrici\u00f3 Comunit\u00e0ria), the Spanish Federation of Food and Wine Guilds, the Mediterranean Diet Foundation, the Mediterranean Agronomical Institute of Zaragoza, the Department of \u201cFood Systems, Culture and Society\u201d of the Open University of Catalonia, the NGO Nutrition without Borders, Foundations for Nutritional Research, the UNITWIN-UNESCO Chair of Research, Planning and Development of Local Sys-tems of health of the University of Las Palmas de Gran Canaria, the UNESCO Chair of Visual Health and Development of the UNESCO chairs network of the Polytechnic University of Catalonia, the Foundation for the Promotion of health, etc.In consequence, its relevance is supported by the positive effects following this diet has, and not on its connection with social or cultural processes in any particular region. However, as an \u201cideal model of a healthy diet\u201d (p. 19),If there ever was such a thing as the customs, styles, and values that classical social anthropology in the 20th century identified as prototypically Mediterranean in regards to food consumption, the actual identification of the Mediterranean Diet and its bio-medical justification has superseded them for good. 
This process involves a "substitution of traditional and socially regulated reasons that recommended eating this or that, in a particular state, for other nutritional and scientifically legitimated reasons, in such a way that the hierarchy of goals of the act of eating—pleasure, socialization, health—varies" (p. 201). In this setting, it is not surprising, for instance, that the hundreds of cruise ships that would, before the emergence of this devastating pandemic, cross the waters of the Mediterranean, carrying thousands of tourists from port to port, could offer in their many restaurants "Mediterranean gastronomy" or the "Mediterranean Diet" (https://cruceroland.com/recetas-sanas-faciles-gastronomia-mediterranea/ (accessed on 20 January 2021); Olimerca, p. 21), while both the ecosystems and coast-adjacent residents see themselves as being highly affected by the environmental—among other kinds of—pollution these ships generate. In fact, even though it is difficult to establish a linear causal relationship, the emphasis on the products without a focus on the environmental contexts where they are grown sometimes generates very negative effects on the latter. Furthermore, the changes generated in the landscape by the importance the Mediterranean Diet gives to horticulture are of a drastic nature. According to the World Vegetable Map 2018 report, developed by RaboResearch Food & Agribusiness, there is a "growing importance of production in greenhouses and vertical farms". The price paid in exchange has been an increase in social inequality up to hard-to-conceive levels and a depletion of aquifers that the greenhouse owners hope to reverse by expanding saltwater desalination and reducing underground water consumption. On top of the substitution of heterogeneous rural landscapes associated with local production by others, homogenous and agro-industrial in nature, the emphasis on the product itself can also affect local ethno-ecologies and put "bio-cultural memory" at risk. On other occasions, these risks to the upkeep of local know-how are related to the attempts to achieve heritage status for a particular product through a certification of quality, as Cantero and Ruiz Ballesteros have noted. Productive homogeneity allows for the easy identification of a product and, in consequence, the achievement of heritage status for itself or for any other aspect tied to it—an area, a landscape, a custom, a dish, etc. This status, however, may generate discomfort and resistance. For instance, some wineries are abandoning the Rioja Qualified Denomination of Origin "in order to offer the consumer the opportunity to discover the diversity of our land". On the other hand, this move toward the "ecological" or "sustainable" being identified with the "local" or "traditional" may have quite diverse consequences: some defend this trend as the result of an exercise in awareness of the limitations of the planet and the effects of climate change; others, however, set their eyes on economic factors and see it as an alternative to the industrial Common Agricultural Policy (CAP) that, they believe, has generated negative consequences in the rural sphere. There can also be found some who understand agro-ecology as part of the defense of food sovereignty or as linked to the love of the land.
For others, finally, it is a simple fad, largely related to "the association of its consumption to a particular status and social class", which turns its products into "referents of exclusivity, sybaritism, and elitism" (p. 67). The conversion of the multiple particular diets associated with the different Mediterranean landscapes in which they were produced and consumed into a single global diet is having very negative effects on some of the Spanish landscapes associated with the products that medical rationality considers best for health. That is, by focusing the diet on the products, regardless of how and where they are produced, some of the landscapes, far from being protected as the UNESCO declaration foresaw when the diet was incorporated into the Representative List of the Intangible Cultural Heritage of Humanity, are suffering irreversible transformations that distance them from the complex cultural landscapes stereotypically characterized as "Mediterranean", becoming new and uniform agro-industrial landscapes. These new landscapes, subordinated to market logics and severed from the historical processes that originated them, produce the "Mediterranean", but they are no longer "Mediterranean". The transformation of cultural landscapes into a commodity destined exclusively to produce merchandise—a process of de-historization that empties the meanings linked to local identities and connections—allows us to evoke the image of the Mediterranean within a transnational political economy of feelings. Naomi Klein, in her renowned work No Logo, suggested that we were "too busy analyzing the pictures being projected on the wall to notice that the wall itself had been sold" (p. 161). In accordance with this market logic, food, landscapes, customs, etc., have been selected as referents of heritage in a process that, equally, disregards everything that does not fit within these parameters and that makes it difficult to consume the image they evoke. Mediterranean landscapes and diet meet in the global space of the tourist who, while consuming both of them at the same time, buys, above all, their image. In any case, this de-territorialized and de-historized image is continuously reformulated due to tensions between local and global dynamics. On the one hand, Mediterranean gastronomy, on which the diet is based, renovates itself constantly through the transformations that take place within local and regional cuisines. On the other hand, landscapes must continuously adjust to settled processes of socialization and to multi-levelled regulations that incorporate local and regional demands linked to the processes that reconfigure them."}
{"text": "Pseudorhaetus sinicus is a stag beetle common to China and Vietnam, but whose distribution is limited within China. Little is known about the molecular biological characteristics of this species, so we characterized its complete mitochondrial genome (GenBank accession number MZ504793.1). The mitogenome consists of a circular DNA molecule of 18,126 bp, with 67.693% AT content. It contains 13 protein-coding genes (PCGs), 22 tRNA genes, and two rRNA genes. The PCGs have typical ATN (Met) start codons and TAN stop codons. Phylogenetic analysis suggests that P. sinicus is closely related to Prosopocoilus confucius. This newly described mitochondrial genome provides a valuable resource for the phylogenetic analysis of Lucanidae beetles.
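The AT content quoted in this abstract is a simple base-composition statistic, and the strand skews mentioned later in the text follow Perna and Kocher's formulas. A minimal sketch of the computation, assuming the assembled mitogenome is available as a plain DNA string:

```python
# Minimal sketch of base composition and Perna & Kocher's skew statistics,
# assuming `seq` is the mitogenome as an uppercase DNA string.
def composition_stats(seq: str) -> dict:
    a, t, g, c = (seq.count(b) for b in "ATGC")
    total = a + t + g + c
    return {
        "AT_content_pct": (a + t) / total * 100,  # percent A+T
        "AT_skew": (a - t) / (a + t),             # (A - T) / (A + T)
        "GC_skew": (g - c) / (g + c),             # (G - C) / (G + C)
    }

# Toy example; the real input would be the 18,126-bp mitogenome sequence.
print(composition_stats("ATGCATTAAGGC"))
```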
Pseudorhaetus sinicus, which belongs to order Coleoptera, family Lucanidae, is distributed mainly in Vietnam and China. It is commonly found in Fujian, Zhejiang, Jiangxi, and Guizhou Provinces, China. The partial mitochondrial genome (KP987575.1) of P. sinicus from Daming Mountain in Guangxi Province, China, has previously been sequenced. A specimen of adult P. sinicus was collected from Fujian Tianbaoyan National Nature Reserve, Yong'an City, Sanming City, China, on 24 September 2020, and deposited by Yu Cao (Email: yucaosuccess@126.com) in the animal specimen room of Guiyang University (specimen accession number: GYU-20200924-001). Total genomic DNA was isolated using an Aidlab Genomic DNA Extraction Kit. Universal primers were designed (Supplementary Table 1) to match generally conserved regions and amplify short fragments from 12S and 16S rRNA, cox1, cox2, nad1, nad2, and nad5. PCR products were cloned into a pMD18-T vector and then sequenced, or sequenced directly by the dideoxy nucleotide procedure using an ABI 3730 automatic sequencer. Thirty-six short sequences were obtained, ranging in length from 253 bp to 1081 bp. The complete mitogenome of P. sinicus (GenBank accession number: MZ504793.1) was assembled manually. It is a circular DNA molecule that is 18,126 bp long; base composition and strand asymmetry were calculated using Perna and Kocher's formulas (Perna and Kocher). The P. sinicus mitogenome contains 13 protein-coding genes (PCGs), 22 tRNA genes, and two rRNA genes, which were annotated using the MITOS web server (http://mitos.bioinf.uni-leipzig.de/). Three PCGs had ATA start codons, one (nad2) had an ATC start codon, five had ATG start codons, and four had ATT start codons. All 13 PCGs had typical TAN stop codons: four genes had TAA stop codons, four had TAG stop codons, and five had incomplete stop codons that are completed by the addition of A nucleotides at the 3′ ends of the encoded mRNAs. The 22 tRNA-encoding genes were interspersed throughout the coding region and ranged from 61 bp (tRNA-Cys) to 71 bp (tRNA-Lys) in length. The genes encoding the large rRNA and the small rRNA were 1269 bp and 760 bp long, respectively. To validate the phylogenetic position of P. sinicus, its mitochondrial PCGs and those of 16 other species in class Insecta, including Tenebrio obscurus (Bai et al.) and Z. atratus (Bai et al.), were used to construct a maximum-likelihood phylogenetic tree with 1000 replicates using MEGA X software (Kumar et al.). The tree suggests that P. sinicus is closely related to Prosopocoilus confucius (Lin et al.). This study thus determined the complete mitochondrial genome of P. sinicus, and also provides essential genetic and molecular data for further phylogenetic and evolutionary analyses of family Lucanidae."}
{"text": "It is known that an RNA's structure determines its biological function, yet current RNA structure probing methods only capture partial structure information. The ability to measure intact RNA structures will facilitate investigations of the functions and regulation mechanisms of small RNAs and identify short fragments of functional sites. Here, we present icSHAPE-MaP, an approach combining in vivo selective 2′-hydroxyl acylation and mutational profiling to probe intact RNA structures. We further showcase the RNA structural landscape of substrates bound by human Dicer based on the combination of RNA immunoprecipitation pull-down and icSHAPE-MaP small RNA structural profiling. We discover distinct structural categories of Dicer substrates in correlation to both their binding affinity and cleavage efficiency.
By tertiary structural modeling constrained by icSHAPE-MaP RNA structural data, we further identify spatial distance measuring as an influential parameter for Dicer cleavage-site selection. Sequencing methods such as icSHAPE were developed to probe RNA structures transcriptome-wide in cells. To probe intact RNA structures, the authors develop icSHAPE-MaP and apply it to Dicer-bound substrates, showing that distance measuring is important for Dicer cleavage of pre-miRNAs. Dimethyl sulfate (DMS), 1-methyl-7-nitroisatoic anhydride (1M7), 2-methylnicotinic acid imidazolide-azide (NAI-N3), and kethoxal are widely used reagents for RNA structure probing in vivo. DMS modifies the N1 position of adenine and the N3 position of cytosine within single-stranded regions in vivo, whereas NAI-N3 acylates the free 2′-hydroxyl groups of all four single-stranded bases, allowing for in vivo structure probing of the transcriptome by selective 2′-hydroxyl acylation followed by primer extension (icSHAPE). icSHAPE has been used to uncover structural variations of RNAs associated with different biological processes, such as translation, RNA-protein interactions, and N6-methyladenosine modification in living cells. Genome-wide RNA structure studies marry chemical probing with next-generation sequencing. DMS-MaPseq and SHAPE-MaP measure the rate of mutations generated during reverse transcription. However, DMS-MaPseq provides partial nucleotide coverage (only adenosine "A" and cytidine "C" nucleotides can be probed), and current SHAPE-MaP reagents have only moderate cell-membrane-penetration abilities, limiting their usage in vivo. Structure probing methods, including DMS-seq and icSHAPE, measure reverse transcription truncations arising at chemically induced nucleotide modifications to determine the probability that a nucleotide is in a single-stranded conformation. A limitation, however, is that structural information at the 3′ terminus of a probing target will be missing, due to the loss of mapping of short sequencing reads. Dicer belongs to the RNase III family and cleaves double-stranded RNAs (dsRNAs) and pre-miRNAs into mature small interfering RNAs (siRNAs) or miRNAs, respectively, and is central to miRNA biogenesis. Studies have proposed that Dicer measures a certain number of nucleotides from either the 3′ overhang of dsRNA substrates (the 3′ counting rule) or from the phosphate group of the 5′ end for select pre-miRNAs and dsRNAs (the 5′ counting rule). In addition, our in vivo studies of short hairpin RNAs (shRNAs) and pre-miRNAs revealed that Dicer uses a single-stranded region to precisely anchor the cleavage site 2-nt downstream (the loop counting rule). However, questions remain in regard to when and to what extent these mechanisms operate in pre-miRNA processing. In addition, Dicer also binds to a variety of substrates without apparent classical miRNA or siRNA processing activities, suggesting that it has other roles in RNA metabolism. Whether and how Dicer differentiates between cleavable and non-cleavable substrates is unknown. In this work, we present an approach to probe the structures of intact RNAs in living cells. Briefly, we harness the advantages of icSHAPE reagents and of mutational profiling in reverse transcription to develop a structure probing method, icSHAPE-MaP. To demonstrate its capabilities, we use icSHAPE-MaP to determine the complete structural information for cellular sRNAs.
In addition, we combined icSHAPE-MaP with RNA immunoprecipitation (RIP) to determine the structural landscape for substrates of the RNA endonuclease Dicer. By combining structural information obtained by icSHAPE-MaP with tertiary structural modeling, we discover that spatial distance measuring is an influential parameter in Dicer-mediated pre-miRNA processing. icSHAPE-MaP uses the icSHAPE reagent NAI-N3 to modify RNA, and subsequently maps the mis-incorporation events generated by the reverse transcriptase SuperScript II at nucleotides carrying NAI-N3-induced modifications. We performed both in vivo and in vitro icSHAPE-MaP structure probing in HEK293T cells (see "Methods"). For in vivo probing, NAI-N3, which preferentially reacts with the free 2′-hydroxyl groups of unstructured and flexible nucleotides, was added directly to cells, and the sRNA fraction was then purified. For in vitro probing, the sRNA fraction was first purified, refolded, and then treated with NAI-N3 in a tube. The remaining library construction steps were essentially the same for both: sRNAs were ligated with two adapters at the 5′ and 3′ ends and reverse transcribed with SuperScript II reverse transcriptase. In principle, both reverse transcription mutations (RT-mut) and reverse transcription stops (RT-stop) indicate NAI-N3 modification and hence the structural flexibility of nucleotides. However, SuperScript II usually adds a random number of non-template nucleotides at the 3′ end of cDNA, which confounds accurate RT-stop identification. In icSHAPE-MaP, we therefore used only RT-mut for RNA structural probing. To remove RT-stop fragments, we added adapters carrying the PCR primer sites at both the 5′ and 3′ ends prior to reverse transcription, so that only full-length sequences were amplified for subsequent analysis. To map RT-induced mutation sites, we performed deep sequencing and computational analyses, generating an icSHAPE-MaP reactivity/structure score for each nucleotide (see "Methods"). The score negatively correlates with the likelihood of the nucleotide being paired, providing a measure of its secondary structure. The icSHAPE-MaP experiments were reproducible between independent biological replicates, as the mutation rates of each transcript were highly correlated between two replicates. We sequenced ~200 M reads and obtained structure scores for 186 transcripts with in vivo samples and 250 transcripts with in vitro samples. The background DMSO libraries showed elevated mutation rates at endogenous modifications such as 1-methyladenosine (m1A), underscoring the importance of analyzing the background DMSO libraries to determine NAI-N3-independent signals. Our analysis also revealed that the increase in mutational rates was more significant at A and U residues, consistent with previous observations that single-stranded regions are enriched for A/U compared to G/C. Assessing reproducibility between replicates for regions at different sequencing depths, we found that a cutoff of 2000× sequencing coverage yields very high-quality scores, and that 1000× or even 500× coverage is a reasonable cutoff when considering the trade-off between cost and reproducibility. We then combined icSHAPE-MaP with RIP to profile the RNA structure landscape of Dicer substrates: enriched Dicer-bound RNA was purified and its structures were profiled by icSHAPE-MaP. The enriched substrates were also compared with Dicer-bound RNAs reported previously; despite the different enrichment strategies, the two lists of RNAs agreed well with each other, with more than 50% common pre-miRNA coverage.
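To make the scoring step concrete, here is a minimal sketch assuming per-base mutation and coverage counts from the NAI-N3 and DMSO libraries; the published pipeline delegates these steps to ShapeMapper2, so the values and the simple normalization below are illustrative only:

```python
import numpy as np

# Toy per-base counts for one transcript; in the real pipeline these
# come from reads aligned and parsed by ShapeMapper2.
mut_nai = np.array([30, 2, 45, 1])        # mutation events, NAI-N3 library
cov_nai = np.array([1000, 950, 1200, 800])
mut_dmso = np.array([5, 1, 6, 1])         # mutation events, DMSO background
cov_dmso = np.array([1100, 900, 1150, 850])

# Reactivity as the background-subtracted mutation rate.
score = mut_nai / cov_nai - mut_dmso / cov_dmso

# Stand-in normalization: scale by the mean of the top decile
# (ShapeMapper applies its own normalization downstream).
top = np.sort(score)[-max(1, len(score) // 10):]
print(score / top.mean())
```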
Most of the AUCs were well above 0.5. In general, most (78/100) pre-miRNAs had at least one, and almost half (47/100) of pre-miRNAs had at least five, structurally different positions between the constrained, experimentally inferred models and the theoretical models. Remarkably, the structurally different positions were often found around the terminal loop region (~±5 nt around coordinate "0"). This is consistent with our observations of the more extended terminal loops of pre-miRNAs in our experimentally inferred models compared to the theoretical miRBase models. This analysis highlights the necessity of using experimental information to constrain computational modeling, which will otherwise generate more base-pairings as a result of energy minimization. These results show that the use of icSHAPE-MaP scores can more precisely model RNA secondary structures, providing a structural basis for RNA processing and functional studies. To identify the structural determinants of Dicer substrates, we developed a computational method to de novo cluster the icSHAPE-MaP structure profiles (see "Methods"). We first aligned the profiles based on the central loop in structures predicted with constraints of icSHAPE-MaP scores, and then performed principal component analysis (PCA) to reduce them into a two-dimensional space. The substrate structures obtained from the Dicer RIP experiments formed three distinct groups by K-means clustering based on the top two principal components. Interestingly, substrates from Cluster I were significantly more enriched than those from Clusters II and III (p = 2.15e−27 and p = 1.05e−20, respectively), and substrates from Cluster II were slightly more enriched than those of Cluster III, but without statistical significance. Cluster I of Dicer substrates showed a relatively large terminal loop, with a median size of about 9 nt, flanked by a near-perfect double-stranded stem, which resembles the characteristic hairpin structure of pre-miRNAs. Together, these results suggest that perfect hairpin-like structures, e.g., pre-miRNAs, are generally the preferential binding substrates for Dicer. To directly measure the cleavage activities of Dicer on its substrates, we expressed wild-type (WT) or catalytic-dead Dicer in 293T Dicer-deficient cells, isolated the sRNA fraction of 40–200 nt, and performed RNA-Seq (see "Methods"). As a proxy for the cleavage activities of Dicer, we calculated a "cleavage score" as the log2 fold change in target abundance in catalytic-dead Dicer vs. WT Dicer cells. Cleavage scores mirrored the binding results: likewise, Cluster II had a higher cleavage score than Cluster III. These data suggest a direct correlation between Dicer binding affinity and cleavage activity. We also detected Dicer substrates within mRNA introns and exons that display both high binding and cleavage activities, including an intron of signal recognition particle 68 (SRP68) and an intron of ER lipid raft associated 1 (ERLIN1) that belongs to a repeat family with sequence homology to pre-tRNA-Tyr. These findings suggest additional Dicer functions beyond miRNA biogenesis.
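As a sketch of the "cleavage score" defined above (the count table and library sizes are hypothetical; the authors' exact normalization may differ):

```python
import numpy as np

# Hypothetical read counts per substrate from the sRNA-Seq libraries.
counts = {
    "pre-miRNA-hairpin": {"dead": 5200, "wt": 310},
    "SRP68-intron":      {"dead": 880,  "wt": 95},
    "random-substrate":  {"dead": 40,   "wt": 38},
}
totals = {"dead": 1.2e7, "wt": 1.1e7}  # total mapped reads per library

for name, c in counts.items():
    # Reads-per-million normalization, then log2 fold change of
    # catalytic-dead vs. WT; uncleaved substrates accumulate in
    # catalytic-dead cells, so a higher score indicates more cleavage.
    rpm_dead = c["dead"] / totals["dead"] * 1e6
    rpm_wt = c["wt"] / totals["wt"] * 1e6
    print(name, round(float(np.log2(rpm_dead / rpm_wt)), 2))
```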
To understand how Dicer selects its cleavage sites on pre-miRNAs, we examined two parameters: (1) the presence of a single-stranded region positioning the cleavage site for miRNAs 2-nt downstream of a bulge or loop (the loop counting rule); and (2) the physical distance from the C5′ atom of the 1st nucleotide of the miRNA on the 3p arm to the O3′ atom of the 3′ end of the pre-miRNA (D3p_miRNA). For each pre-miRNA hairpin, we searched for single-stranded regions on both the 5p and 3p arms, and found that 45% of the 1st nucleotides of 3p miRNAs were located 2-nt downstream of a bulge or loop, in agreement with the loop counting rule. The distances of both D3p_miRNA and D5p_miRNA were in proximity to 59 Å for most pre-miRNAs, irrespective of the lengths of the cleaved products. Our work presents a biotechnology, "icSHAPE-MaP", to accurately probe RNA secondary structures in vivo while obtaining the complete information for their intact form; icSHAPE-MaP leverages mutational profiling of reverse transcription to detect NAI-N3 modifications, including near the ends of longer RNAs. The examples demonstrated in the present study showcase the application of icSHAPE-MaP to unveil the genome-wide structural landscape of Dicer substrate sRNAs. In the future, icSHAPE-MaP can be applied to reveal the structural features of binding by other RBPs. In the analysis of the RNA structure of Dicer substrates, we observed that most pre-miRNAs bear a large terminal loop with a nearly perfect stem, structurally and statistically different from other Dicer substrates. Our RIP-icSHAPE-MaP data suggest this structural feature of pre-miRNAs to be a general property of Dicer processing. In terms of Dicer cleavage-site selection on pre-miRNAs, our data suggest that distance counting in three-dimensional space is an important parameter. We hypothesize that Dicer cleaves pre-miRNAs at the phosphodiester bond of the nucleotide in closest proximity to the physical distance D3p_miRNA (59 ± 1 Å), upon which mature miRNA lengths are determined. Together with the "loop counting rule", these mechanisms coordinately regulate Dicer cleavage-site selection on pre-miRNAs. Furthermore, we observed that pre-miRNAs with long 3p arms tend to display the "3′ counting rule", with a lesser percentage featuring the "3′ loop counting rule". The 293T cell line was purchased from ATCC (cat. #CRL-3216). The Dicer-deficient 293T cell line (NoDice 2-20) was a gift from Dr. Bryan R. Cullen at Duke University. Cells were maintained in DMEM/high glucose with 10% fetal bovine serum in a humidified incubator at 37 °C with 5% CO2. All transfection assays were done using polyethylenimine (PEI) (Sigma-Aldrich). To analyze Dicer substrates at a large scale, NoDice 2-20 cells were transfected with a plasmid expressing human Dicer with two mutations in its RNase III domains. For one 15-cm plate, 9 × 10⁶ cells were seeded on the first day and transfected with 20 µg plasmids 24 h later with 60 µl (1 µg/µl) PEI. In detail, the plasmids and the PEI were each first incubated separately with 1 mL Opti-MEM I Reduced Serum Medium (Gibco); the two mixtures were then combined and kept at room temperature for 15 min before being added to the cells. Forty-eight hours later, cells were lysed in the lysis buffer supplemented with the proteinase inhibitor cocktail (Roche) and the RNase inhibitor RiboLock. The lysate was centrifuged at 15,000g for 10 min at 4 °C to remove insoluble cell debris. The supernatant was incubated with anti-FLAG M2 magnetic beads (Sigma) for 3 h at room temperature. After incubation, the beads were washed once with the high salt wash buffer and twice with the low salt wash buffer.
After the last wash, the beads were incubated with the modification buffer (150 mM NaCl, 50 mM NAI-N3) on a Thermomixer at 37 °C for 12 min at 1,000 rpm (NAI-N3 group). For the DMSO group, NAI-N3 was replaced by DMSO in the modification buffer. RNA was isolated with TRIzol, following the manufacturer's instructions. For in vivo small RNA (<200 nt) structure probing, HEK293T cells were treated with NAI-N3: the cells were resuspended in 100 mM NAI-N3 and incubated at 37 °C on a Thermomixer at 1,000 rpm for 5 min. The reaction was stopped by centrifugation at 2500g for 1 min at 4 °C, and the supernatant was subsequently removed. The cells were resuspended in 250 µl PBS, and then 750 µl TRIzol LS Reagent was added for RNA extraction. The extracted RNA was size-selected on a 6% Urea-PAGE gel for 25–200 nt. For the DMSO control group, the preparation of small RNA was the same as for the in vivo modified group, except that the HEK293T cells were treated with DMSO instead. For the in vitro group, the RNA extracted from the DMSO group was first heated in metal-free water at 95 °C for 2 min and then chilled on ice. The 3.3× SHAPE folding buffer was added to the RNA, followed by incubation at 37 °C for 10 min. Then 1 M NAI-N3 was added to a final concentration of 100 mM, followed by incubation at 37 °C for 10 min. Finally, the reaction was stopped by adding 2× Binding buffer of an RNA concentration kit (Zymo), and the RNA was purified, following the manufacturer's instructions. The details of library construction are described in the following section, "Construction of icSHAPE-MaP library". The RIP pulled-down and "input" RNA was size-selected on a 6% Urea-PAGE gel for 25–200 nt. Gel purification was performed by crushing the gel and incubating it in the gel crush buffer with rotation at 4 °C overnight. The eluate was collected by centrifuging through 0.45 µm Spin-X columns (Thermo Fisher), concentrated, and purified by an RNA concentration kit (Zymo). The 8 µL of purified RNA was ligated with a 3′ linker by incubation with the 3′ ligation mix. For reverse transcription, 1 µL RiboLock and 1 µL SuperScript II (Thermo Fisher) were added, and the reaction mix was incubated at 25 °C for 3 min and then at 42 °C for 3 h. cDNA products were purified by running on a 6% Urea-PAGE gel. The following steps, including circularization, PCR amplification, and PCR product purification, were done as previously described. Briefly, circularization was conducted as follows: 16 µL cDNA products were mixed with 2 µL 10× CircLigase buffer, 1 µL CircLigase II, and 1 µL 50 mM MnCl2, and incubated at 60 °C for 2 h. The circularization reaction was purified with a DNA concentration kit (Zymo). PCR was set up with 20 µL of the eluted circularized cDNA and the PCR reaction mix. For pre-miRNA expression experiments, PCR primers were designed to amplify the genomic regions flanking pre-miRNAs (~200 nt upstream and downstream), and the resulting plasmids were transfected into NoDice 2-20 cells. Forty-eight hours later, sRNA (<~200 nt) was extracted using the mirVana kit (Thermo Fisher). One microgram of sRNA was then incubated with the end-repairing mix (1 µL Fast AP (Thermo Fisher), 1 µL RiboLock, 4 µL nuclease-free water) at 37 °C for 1 h. Mouse mir124-1 was cloned downstream of the CMV promoter between Hind III and BamH I to serve as a control for transfection and expression experiments.
Pre-miR-217-WT was cloned downstream of the EF-1α promoter between Not I and Kpn I. All other variants of pre-miR-217 were made using the QuikChange Lightning Multi Site-Directed Mutagenesis Kit (Agilent). All oligo sequences for cloning can be found in the Supplementary Data. Cloning of luciferase reporters for miRNA binding sites was done as previously described, with the exception that luciferase activities were measured 48 h after cotransfection of 100 ng pre-miRNA plasmids and 100 ng of the corresponding reporter plasmids. Briefly, total RNA was extracted with the mirVana miRNA Isolation Kit (Thermo Fisher). The 3′ linker ligation was set up in 20 µL at 16 °C overnight, using 5 µg total RNA and the ligation mix (100 mM DTT, 1× RNA ligation buffer, 1 µL T4 RNA Ligase 2, truncated KQ (NEB), 0.4 U/µL RiboLock (Thermo Fisher)). The ligation product was size-selected on a 15% Urea-PAGE gel (Thermo Fisher) for ~17–35-nt-long sRNAs. Gel purification was performed by crushing the gel and incubating it in the RNA elution buffer with vigorous shaking at 37 °C overnight. The eluate was collected by centrifuging through 0.45 µm Spin-X columns (Thermo Fisher), followed by ethanol precipitation. The 5′ adapter ligation was set up in 20 µL at 16 °C overnight, using the 3′ ligation products and the ligation mix. For in vitro processing assays, a plasmid expressing Flag-Dicer was used to transfect HEK293T cells. To reduce the viscosity, the cell lysate was subjected to sonication on ice. The supernatant was incubated with 40 μL of anti-Flag M2 affinity gel (Sigma) with constant rotation for 2 h at 4 °C. Immunoprecipitation of Flag-Dicer and in vitro processing of pre-miRNAs were conducted as previously described. The reactions were performed in a total volume of 30 μL with 2 mM MgCl2, 1 mM DTT, 5′/3′-end-labeled pre-miRNA of 1 × 10⁴ to 1 × 10⁵ c.p.m., and 15 μL of the immunopurified Dicer in the reaction buffer. The reaction mixture was incubated at 37 °C for 60 min, followed by the addition of 2× loading buffer (Thermo Fisher) and separation on an 18% polyacrylamide gel with 8 M urea. Immunoblotting was used to verify the enrichment of FLAG-tagged Dicer in RIP. The experiment was performed with primary antibodies against GAPDH, DICER, and the FLAG tag, and with secondary antibodies: goat anti-mouse IgG (H + L)-HRP conjugate and goat anti-rabbit IgG (H + L)-HRP conjugate. 1% of the sample volume of the input or pull-down samples was used for immunoblotting. For northern blotting, 10–30 μg RNA was run on an 18% (w/v) polyacrylamide/8 M urea gel for electrophoresis, transferred onto an Amersham Hybond-XL membrane, and blotted with ³²P-labeled DNA probes. RNA structure analysis was also performed by selective 2′-hydroxyl acylation analyzed by primer extension (SHAPE): 1 pmol of RNA was modified with 1 µL NAI-N3 (5%) or treated with 1 µL DMSO (unmodified control) in the presence of 3× SHAPE buffer for 15 min at 37 °C. RNA was extracted with phenol:chloroform and resuspended in 5 µL RNase-free water (4 µL for sequencing lanes). A DNA primer complementary to the Universal miRNA Cloning Linker (NEB) ligated to the 3′ end of the RNA was 5′-labeled with T4 polynucleotide kinase (NEB) and γ-³²P-ATP. 1 µL of the 5′-labeled primer (~250 nM) was annealed to the resuspended RNA samples at 95 °C and slowly cooled down to ~40 °C.
Finally, primer extension was carried out in the presence of SuperScript III (Thermo Fisher), first-strand buffer, DTT, RiboLock, and 0.5 µL of 2 mM dNTPs (plus 1 µL of 10 mM ddNTPs for sequencing lanes) at 55 °C for 15 min in a 10 µL final reaction volume. The reaction was terminated with 1 µL of 4 M NaOH and incubated at 95 °C for 5 min, followed by ethanol precipitation on dry ice. The DNA pellet was resuspended in 10 µL of 2× Gel Loading Buffer II (Thermo Fisher) and separated on a 15% Urea-PAGE gel. All (−) lanes were from DMSO-treated samples. For RT-qPCR, cells were harvested and total RNA was isolated using TRIzol (Thermo Fisher). Isolation of small RNAs from total RNA samples was done with the mirVana miRNA Isolation Kit (Thermo Fisher). Complementary DNA (cDNA) synthesis was performed after DNase treatment using SuperScript IV with random oligos. qPCR analysis was performed with iTaq Universal SYBR Green Supermix on a Bio-Rad CFX384 cycler, as described in the manufacturers' instructions. For knockdown experiments, HEK293T cells were transfected with DICER1 or control siRNA using Lipofectamine 3000 (Thermo Fisher); 72 h after transfection, RNA extraction and RT-qPCR were conducted as described above. For RIP-seq, the sequencing data were processed by removing adapters with Cutadapt (v1.16), filtering high-quality reads with Trimmomatic (v0.33), and removing duplicates with an in-house Perl script (available upon request). Clean reads were mapped to the reference sequences with STAR using default settings, and read counts were calculated using an in-house Perl script (available upon request). The enrichment score and cleavage score calculations are described in the section "Calculation of the RIP enrichment scores and cleavage scores for Dicer"; transcripts with enrichment scores > 0 were defined as RIP-enriched. sRNA-Seq data analysis followed our published protocol, except that only reads perfectly matching pre-miRNAs were used. Pre-process: the icSHAPE-MaP sequencing data were likewise processed by removing adapters with Cutadapt (v1.16), filtering high-quality reads with Trimmomatic (v0.33), and removing duplicates with an in-house Perl script (available upon request). Mapping: human sRNA sequences less than ~200 nt in length were collected, including miRNAs (from miRBase v22), snoRNAs (from Gencode v26), snRNAs (from Gencode v26), tRNAs (from GtRNAdb v2.0), vault RNAs (from RefSeq v109), Y RNAs (from RefSeq v109), and 5S rRNAs. The processed reads were mapped to these references with STAR (v2.7.1a) with parameters --outFilterMismatchNmax 3 --outFilterMultimapNmax 10 --alignEndsType Local --scoreGap -1000 --outSAMmultNmax 1. To find other sRNA fragments with limited annotation on the human genome, the unmapped reads were mapped to the human genome (version GRCh38.p12) for a reiteration of the data analysis described above. ShapeMapper2 (v2.1.4) was used to calculate the final scores as follows: mutations on each read were parsed with shapemapper_mutation_parser.
The script counts 8 mutation types: mismatch, insertion, deletion, multi-mismatch, multi-insertion, multi-deletion, complex insertion, and complex deletion. The number of mutations at each nucleotide was then counted with shapemapper_mutation_counter, icSHAPE-MaP reactivity scores were calculated with make_reactivity_profiles.py, and raw scores were normalized with normalize_profiles.py. To calculate the final icSHAPE-MaP scores, replicate samples were combined (with samtools merge). The calculation for each base can be briefly summarized as the background-subtracted mutation rate, $S_i = m_i^{\mathrm{NAI\text{-}N3}} - m_i^{\mathrm{DMSO}}$, where the icSHAPE-MaP score $S_i$ for base $i$ is derived from the mutation rate $m_i$ (mutation events divided by read coverage at base $i$) in the NAI-N3 and DMSO libraries, respectively. The total read counts from the two replicates were balanced by down-sampling. All bases were sorted by coverage, and bases with coverage greater than 500, 1000, 2000, 3000, 4000, or 5000 were selected to calculate the replicate correlation of the mutation rate with a sliding window. Finally, the data under each cutoff were used to generate a cumulative distribution curve. The Fold program in the RNAstructure package (v5.6) was used to predict the secondary structures of RNAs, with the icSHAPE-MaP scores used as constraints (parameters: -si -0.6 -sm 1.8 -SHAPE icSHAPE-Map.shape -mfe). RNA secondary structures were visualized with the VARNAv3-93 command line; colors of bases were applied with the parameters "-basesStyle1 on" and "-applyBasesStyle1 on". For clustering, the secondary structures of RNAs were predicted with icSHAPE-MaP scores as constraints and aligned to the center of their central loop, and the icSHAPE-MaP scores of the flanking 30 nt were used for PCA in the Sklearn (v0.20.3) package. The raw 60-dimensional space was reduced to a two-dimensional space, and K-means clustering (K = 3) was carried out on the two-dimensional vectors in the Sklearn package. For enrichment scoring, reads were pre-processed as described above and mapped to the reference sequences with STAR. With the number of reads mapped to RNA $i$ denoted $n_i$ and the library totals denoted $N$, the RIP enrichment score was calculated as $E_i = \log_2\left(\frac{n_i^{\mathrm{RIP}}/N^{\mathrm{RIP}}}{n_i^{\mathrm{input}}/N^{\mathrm{input}}}\right)$. De novo tertiary structure models of pre-miRNAs were produced by Rosetta from secondary structures with constraints from the icSHAPE-MaP data. First, RNA helices were built using a custom Python script adopted from Rosetta (available upon request); these helices were kept fixed throughout the following modeling steps. Second, de novo modeling was done through an established algorithm, Fragment Assembly of RNA with Full Atom Refinement (FARFAR); a Python script (FARFAR_setup.py) was used to call the Rosetta-built pipeline for FARFAR. 500 tertiary structures were produced for each pre-miRNA per modeling run, and the top 50 structures with the lowest Rosetta energy were used for the distance measurement of the Dicer cleavage site (see below). The Dicer cleavage sites on pre-miRNAs were inferred from miRBase, based upon the annotated 3′ end of the 5p or the 5′ end of the 3p miRNA sequences. The Dicer cleavage distance was calculated as the physical distance between the fifth carbon atom (C5′) of the first nucleotide and the third oxygen atom (O3′) of the last nucleotide of each miRNA; the median distance from the top 50 predicted tertiary structures was reported for each miRNA. To calculate the arm lengths, the physical distance was measured between C5′ of the first nucleotide and O3′ of the last nucleotide next to the terminal loop on the 5p arm (D5p_arm), or between C5′ of the first nucleotide next to the terminal loop on the 3p arm and O3′ of the last nucleotide of each pre-miRNA (D3p_arm).
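To illustrate the distance measurement just described, here is a small sketch; the coordinates are hypothetical, whereas the real analysis reads atom positions from the Rosetta-derived tertiary models:

```python
import math

# Hypothetical 3D coordinates (in Å) from a modeled pre-miRNA:
# the C5' atom of the first nucleotide of the 3p miRNA, and the
# O3' atom of the 3'-terminal nucleotide of the pre-miRNA.
c5_first_3p = (12.4, -3.1, 45.0)
o3_last = (60.2, 18.7, 21.3)

def distance(a, b):
    # Euclidean distance between two atoms.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Dicer is hypothesized to cleave at the bond closest to ~59 Å.
print(f"D3p_miRNA = {distance(c5_first_3p, o3_last):.1f} Å")
```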
Further information on experimental design is available in the Supplementary Information: Peer Review File; Supplementary Data 1–7; Description of additional supplementary files; Reporting summary; Source Data files."}
{"text": "Immunotherapy with nivolumab has become the standard treatment for patients with metastatic renal cell carcinoma (mRCC) after progression to single-agent tyrosine kinase inhibitors. However, the optimal duration of immunotherapy in this setting has not yet been established. We retrospectively reviewed all patients treated with nivolumab at our institution from January 2014 to December 2021 and identified those who discontinued treatment for reasons other than disease progression (PD). We then associated progression-free survival (PFS) and overall survival following treatment cessation with baseline clinical data. Fourteen patients were found to have discontinued treatment. Four patients (28.6%) ceased treatment due to G3/G4 toxicities, whereas the remaining ten (71.4%) opted to discontinue treatment in agreement with their referring clinicians. The median duration of the initial treatment with nivolumab was 21.7 months (7.5-37.3); during treatment, two patients (14.3%) achieved stable disease as the best response, and the remaining twelve (85.7%) a partial response. At a median follow-up time of 24.2 months after treatment discontinuation, 7 patients (50%) were still progression-free. The median PFS from the date of discontinuation was 19.8 months (15.2-not reached); a radiological objective response according to RECIST and a treatment duration of more than 12 months were associated with a longer PFS. Three patients were re-treated with nivolumab after disease progression, all of whom achieved subsequent radiological stability. In our experience, the majority of patients who discontinued treatment in the absence of PD were still progression-free more than 18 months after discontinuation. Patients whose initial treatment duration was less than 12 months or who did not achieve a radiological objective response had a greater risk of progression. Immunotherapy rechallenge is safe and seems capable of achieving disease control. Renal cell carcinoma (RCC) is the most common type of kidney cancer in adults and accounts for 3-5% of new cancer diagnoses each year. In recent years, immunotherapy in the form of immune checkpoint inhibitors (ICIs) has revolutionised the treatment of metastatic RCC. Nivolumab, an ICI that targets the programmed cell-death protein 1 (PD1), has become the standard treatment for patients with mRCC following progression to single-agent tyrosine kinase inhibitors (TKI). However, the maximum duration of treatment differed across trials. In the 2015 Checkmate 025 trial (nivolumab vs. everolimus for pre-treated mRCC), the first trial that paved the way for nivolumab in the management of RCC, treatment continued until disease progression or the development of treatment-limiting toxicities.
In more recent trials, by contrast, treatment duration was capped at a fixed maximum. The reason for limiting the maximum duration of immunotherapy treatment is the growing body of evidence indicating that clinical control of the disease is often long-lasting and may be maintained even after therapy is discontinued. In fact, due to their unique mechanism of action, ICIs are capable of achieving long-term disease control in many solid malignancies, even after treatment discontinuation or interruption. Data from retrospective analyses indicated that treatment interruption after a certain number of cycles could be safe for selected patients. A patient-tailored "stop and go" approach could therefore be an alternative option for selected patients, in order to reduce overtreatment, limit the occurrence of treatment-related toxicities, and lessen the possible financial toxicity of those therapies, without compromising the treatment's oncological results. This paper presents a retrospective analysis of patients treated with nivolumab at our institution who opted to discontinue treatment in the absence of disease progression. We retrospectively reviewed all patients treated with nivolumab at our institution from January 2014 to December 2021 and identified those who discontinued treatment for reasons other than disease progression. Clinical data were extracted from electronic patient records. Inclusion criteria included a histological diagnosis of RCC, previous treatment with nivolumab interrupted in the absence of PD, and the availability of all necessary data. From the electronic patient charts, we collected baseline clinical data, the reason for treatment discontinuation, the treatment's oncological outcome, and data about subsequent treatments administered after disease progression. Adverse events were graded in accordance with the Common Terminology Criteria for Adverse Events (CTCAE) v5.0; radiological response was defined using Response Evaluation Criteria in Solid Tumors (RECIST) v1.1 criteria. Treatment duration was defined as the time between the first and last dose of nivolumab. Progression-free survival (PFS) was calculated using the Kaplan-Meier method from the date of treatment interruption to the date of disease progression or death (whichever occurred first); progression-free survival was censored at the last patient follow-up visit without progression. Overall survival was calculated from the date of drug interruption to the date of death from any cause. For patients re-treated with nivolumab after disease progression, PFS for the second course of immunotherapy was calculated from the beginning of the second course until the occurrence of new disease progression. Key metrics were summarised by means of descriptive statistics. Patient PFS and OS were compared using the log-rank test and Cox's proportional hazards method (when applicable). We performed univariate and multivariate analyses to determine the association between baseline characteristics and PFS from the time of treatment discontinuation; covariates associated with the oncological outcome at a p value of less than 0.1 in the univariate analyses were included in the multivariate analysis. Results were classified as statistically significant if their p-values were < 0.05. All statistical analyses were performed with "R" v4.0.5 and the "survival" package v2.44-1.1.
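As an illustration of the survival methodology just described (the study itself used R with the survival package; the figures below are invented for demonstration), a minimal Python sketch using the lifelines package:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient data: months from treatment discontinuation to
# progression/death (or censoring), and a flag for initial treatment >12 months.
df = pd.DataFrame({
    "months": [19.8, 15.2, 30.1, 8.4, 24.2, 12.9, 27.5, 6.7],
    "event":  [1, 1, 0, 1, 0, 1, 0, 1],   # 1 = progressed or died, 0 = censored
    "gt12mo": [1, 1, 1, 0, 1, 0, 1, 0],
})

# Kaplan-Meier estimate of PFS from the date of discontinuation.
kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["event"])
print("median PFS (months):", kmf.median_survival_time_)

# Log-rank comparison of the two treatment-duration groups.
a, b = df[df["gt12mo"] == 1], df[df["gt12mo"] == 0]
res = logrank_test(a["months"], b["months"], a["event"], b["event"])
print("log-rank p =", res.p_value)
```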
At the time of their first visit to our institution, all patients gave their written consent for the use of their clinical data for scientific purposes. The study was conducted in accordance with the Declaration of Helsinki, and data collection was approved by the local Ethical Committee. Fourteen patients were found to have discontinued treatment for reasons other than disease progression. The median age was 77.7 years (range: 42.3-82.1 years). Eleven patients had been diagnosed with clear cell RCC (78.6%), one with papillary RCC, one with chromophobe RCC, and one with RCC not otherwise specified. Twelve patients were treated with nivolumab in the second-line setting, while two patients were treated in the third line. All but one patient had undergone nephrectomy prior to treatment. All patients were in good clinical condition at the start of nivolumab treatment (ECOG PS of 0 or 1); 5 patients were classified as belonging to the good-risk class according to IMDC criteria, while the remaining 9 patients were classified in the intermediate-risk class; none of the patients were considered to be at poor risk. Patient clinical characteristics are summarised in Table 1. The median duration of initial treatment with nivolumab was 21.7 months (7.5-37.3). During treatment, two patients (14.3%) achieved stable disease as the best radiological response, while the remaining twelve patients (85.7%) achieved a partial response. Twelve patients (85.7%) developed immune-related adverse events of any grade during therapy, requiring at least a brief interruption of nivolumab or treatment with systemic corticosteroids; four patients reported the onset of grade 3/4 toxicities. Data on treatment outcomes are reported in Table 2. Ten patients (71.4%) opted to discontinue treatment in agreement with their referring clinicians; however, for 5 of these patients (50%), the previous occurrence of low-grade (G1-G2) adverse events was an important factor in their decision. The other four patients (28.6%) discontinued treatment after developing G3/G4 toxicities. At a median follow-up time of 24.2 months after treatment discontinuation, 7 patients (50%) were still progression-free. For 5 of the 7 patients who progressed, radiological progression was defined by the enlargement of known pre-existing lesions, and for the other 2, by the emergence of metastases at new sites. The median PFS from the date of discontinuation until disease progression was 19.8 months (15.2-not reached); the median overall survival was not reached, with just one patient having died by the time of data cut-off. Data on the post-interruption outcomes are reported in Table 3. At univariate analysis, stable disease as the best radiological response and a treatment duration of less than 12 months were associated with a worse PFS (Table 3). After disease progression, two patients were considered ineligible for other oncological treatments due to their poor clinical condition and were, therefore, only treated with best supportive care. One patient, whose CT scan revealed oligoprogressive disease, was successfully treated twice in succession with stereotactic ablative radiotherapy and has not yet begun additional systemic therapy. Systemic therapy was initiated for the other four patients: due to the previous occurrence of immune-related colitis, one patient started third-line treatment with cabozantinib; the other three patients were re-treated with nivolumab.
For two of these three patients, the cause of the initial discontinuation had been the emergence of an irAE (grade 3 hypertransaminasemia in both cases). At the time of data cut-off, the patients re-treated with nivolumab had been treated for 4, 5, and 12 months and were all progression-free; to date, no immune-related adverse event of any grade has been reported for any of them. Data on the treatments administered after disease progression, and their outcomes, are reported in the accompanying table. Immunotherapy has drastically improved the prognosis and natural history of patients with advanced renal cell carcinoma. The role of immunotherapy has been enhanced with the publication of recent trials, and combination treatment with an immune checkpoint inhibitor is currently considered to be the standard of care in the first-line setting. Despite the fact that in the first and older trials treatment with immune checkpoint inhibitors was continued until disease progression or the development of severe toxicities, ICIs have been shown to achieve long-term disease control even in the event of interruption. Long-term follow-up analyses of clinical trials using ICIs in melanoma and NSCLC demonstrated that many patients maintain the therapeutic benefit long after the end of treatment. In the case of RCC, 27 patients with a response to nivolumab discontinued treatment in the Checkmate 025 trial and never received additional subsequent systemic therapy; 13 of these patients were still alive and free from disease progression at the last follow-up. Many clinical trials are currently set for a maximum 2-year period of ICI treatment for all patients enrolled. In metastatic melanoma, a retrospective analysis of patients treated with anti-PD1 agents (pembrolizumab or nivolumab) for a median initial treatment duration of 12 months showed that the risk of relapse after treatment discontinuation was low, particularly in patients who achieved a complete radiological response during treatment. Conversely, in non-small cell lung cancer (NSCLC), a randomised trial revealed that a fixed duration of one year seems to be inferior, in terms of PFS and OS, to continuous treatment with nivolumab in the whole population. Several authors have investigated the optimal duration and management of ICI treatment for RCC. In a recent phase II trial, 5 out of 12 patients (42%) who opted to discontinue nivolumab after achieving a radiological response within the first 6 months of treatment were progression-free one year after the discontinuation of treatment. Ornstein et al. conducted a phase II trial to evaluate the outcomes of intermittent treatment with nivolumab in a similar setting; of five patients who opted to discontinue nivolumab after obtaining a radiological reduction of 10% in tumor size, only one had to restart treatment at a median follow-up of 48 weeks. These small trials demonstrate that, for some patients, treatment interruption could be a viable option, but additional and larger studies are needed to increase the level of evidence and refine patient selection. However, following the decision to discontinue treatment, another important unanswered question concerns the efficacy of immunotherapy rechallenge.
Retrospective analyses in patients with other solid malignancies revealed an interesting response rate and a clinical benefit in patients re-treated with immunotherapy after disease progression (with the same ICI after a therapeutic pause, or with a different ICI in the event of PD during treatment). In a retrospective, multicentric analysis in renal cell carcinoma, Ravi et al. found a response rate of 23%, with a low incidence of severe adverse events, in a cohort of 69 patients who underwent anti-PD1/anti-PDL1 rechallenge treatment. The rechallenge strategy must be evaluated differently depending on whether the decision to discontinue therapy was due to the occurrence of toxicities or to the patients' or physicians' preferences, as opposed to progression of disease during treatment. Unfortunately, many studies, such as the abovementioned ones, did not distinguish between patients whose disease was under control and those whose disease was progressing when they discontinued treatment. These clinical situations are clearly distinct, and the results of re-treatment in one setting may not be applicable in another. In fact, recent trials specifically designed for patients after progression or a lack of response to treatment with a single ICI are evaluating an intensification strategy using combination treatment (TKI plus anti-PD1, or anti-PD1 plus another ICI) rather than a single ICI. The final important question concerns the rechallenge's toxicity profile. Many retrospective analyses demonstrated that, for patients who previously discontinued immunotherapy due to the occurrence of irAEs, these irAEs do not typically recur after the commencement of the immunotherapy rechallenge. Moreover, irAEs are usually milder and more manageable during rechallenge. The majority of patients in our population who opted to discontinue treatment were safe and progression-free more than one year after the start of the therapeutic break. As reported by other authors, the risk of progression was lower in patients who had been treated for more than 12 months and in patients who had previously achieved an objective radiological response. Re-treatment appeared to be safe for the patients who had progressed; it is interesting to note that, despite the limitations of a short follow-up, no treatment-related adverse events were reported, in spite of the fact that two of the patients had initially discontinued treatment due to grade 3 toxicities (hypertransaminasemia). Accordingly, we decided not to re-treat the patient who had previously reported grade 3 colitis. Our analysis has several limitations. Due to the retrospective design, there was a selection bias in the population, which consisted of patients with a very good clinical condition and good prognostic characteristics at baseline. The small sample size limited the possibility of finding prognostic and predictive indicators for a prolonged drug-holiday period; this could explain why many well-established prognostic factors, such as the IMDC class and performance status, did not seem to be associated with PFS. Finally, radiological evaluation was performed as per the clinician's decision, and radiological images were not re-examined. In our experience, the discontinuation of nivolumab treatment in a cohort of highly selected patients seems to be safe and capable of sustaining long-term clinical control of the disease.
A treatment duration of more than one year and the achievement of a radiological objective response were prognostic of a longer progression-free survival from the date of treatment discontinuation. Rechallenge with nivolumab after the occurrence of progression seemed to be safe for selected patients, including those who had previously reported certain toxicities. More studies are urgently needed to determine the optimal duration and management of treatment with ICIs, especially given the ever-increasing importance of immunotherapy. An improvement in the selection of patients who can safely discontinue treatment with ICIs could result in a dramatic improvement in treatment customisation and individualisation. The data analyzed in this study are subject to the following licenses/restrictions: available upon request. Requests to access these datasets should be directed to marco.maruzzo@iov.veneto.it. The studies involving human participants were reviewed and approved by Comitato Etico IOV IRCCS. The patients/participants provided their written informed consent to participate in this study. MM, DB, and VZ: study design. EL, NC, and AM: data collection. MD, DB, and FP: data analysis. MD, DB, and MM: data interpretation and review. UB, DB, and MM: supervision. All authors: final review and approval of the final paper. This research received "Ricerca Corrente" funding from the Italian Ministry of Health to cover publication costs. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
{"text": "Immune checkpoint inhibitors (ICI) have become the cornerstone of treatment of certain malignancies. However, they can result in systemic toxicities including hepatitis. Societal guidelines recommend initial management with high-dose steroids, then a slow taper as hepatitis resolves. However, there is significant variation in steroid response, with some patients experiencing a relapse of hepatitis as steroid doses are tapered ("steroid relapse"). We aimed to identify clinical features that predict relapse, and to explore variations in steroid management, in patients with ICI hepatotoxicity. Patients receiving ICI in early-phase clinical trials at Princess Margaret Cancer Centre, or treated at the Toronto Centre for Liver Disease for ICI hepatotoxicity, were included. Patients with CTCAE Grade (G)3 ICI hepatotoxicity (ALT >5 x ULN) were identified and their clinical records reviewed for management and outcomes. Patients with an alternate cause for ALT elevation, who did not receive corticosteroids, or with HCC or viral hepatitis, were excluded. Between August 2012 and December 2021, 36 patients with G3 ICI hepatotoxicity were identified. Most had metastatic melanoma. Thirteen received anti-CTLA-4/PD-1; 18 anti-PD-1 or anti-PD-L1; and 5 anti-CTLA-4 monotherapy. All patients initially received corticosteroids. Thirteen patients (36%) were steroid relapsers. Consistent steroid response was seen in 18 (50%).
Age, sex, liver metastases, prior ICI exposure, peak ALT, and starting dose of steroids did not predict relapse, although relapsers were more likely to have been treated with combination anti-CTLA-4/PD-1 (7 (54%) relapsers vs 3 (16%) responders, p = 0.02). Relapse occurred after a median of 14.5 days (range 8-111), and after taper to a median of 54% (5-100) of the initial steroid dose. In responders, ALT normalisation occurred after a median of 14 days (range 3-56). In the 27 patients for whom sufficient data were available, societal guidelines on ALT thresholds to initiate steroid taper were followed in 13. However, initiation of steroid taper was delayed in responders compared to relapsers (after a median of 7 days (2-15) in responders vs 4 days (range 2-9) in relapsers, p = 0.04). Overall, 5 relapsers responded to re-escalation of steroids. Eight required additional treatment with MMF, and 4 required 3rd-line therapy with tacrolimus. Ultimately, hepatitis resolved in all patients. In patients with ICI hepatotoxicity, combination ICI therapy confers a higher risk of steroid relapse than monotherapy. There is significant heterogeneity in the management of steroid dosing in patients with ICI hepatotoxicity. Delayed initiation of steroid taper may be associated with a reduced risk of relapse and warrants prospective evaluation as part of a standardised management algorithm. None. None Declared."}
{"text": "This study aimed to investigate the possible factors affecting dentists' behavior relating to performing oral cancer examinations as part of routine clinical examination. A total of 95 direct clinical observation sessions—utilizing an instrument consisting of 19 evidence-based observational criteria for oral cancer examinations—were observed by four calibrated dentists. Thirty-two final-year students, 32 interns, and 31 faculty members of Jazan Dental School were examined between April 9 and May 4, 2017. A descriptive analysis was conducted to investigate the frequencies/percentages of the observation criteria performed by all examiners. ANOVA and Tukey tests were carried out to investigate the differences between the examiner groups. A total of 32 patients participated in the study, whereby each patient was examined by three different examiners, one from each group, as well as by the attending observer/s. Fewer than 50% of the examiners performed the clinical steps necessary for an oral cancer examination—for example, taking into account past medical history, as well as extra- and intra-oral examinations. More than 90% of the examiners examined hard tissue, whereas fewer than 30% of them educated their patients about possible risk factors. A significant difference between examiner groups was found in favor of faculty members. A gap between knowledge and actual practice of oral cancer examinations was evident: the majority of participants failed to perform the necessary steps for an oral cancer examination. Previous experience and confidence in performing oral cancer examinations are possible explanations for dentists' behavior toward oral cancer examination. Global incidences of oral cancer are still rising, with South Asian countries having the highest incidence rates. Most oral cancer cases are detected at a late stage, when the tumor has already metastasized to another location in the body. The present study was performed according to the ethical standards of the institutional research committee, as well as the 1964 Helsinki Declaration. It received ethical approval from Jazan University (registry no. [CDREC-06]), dated 21 December 2016.
All participants consented prior to their participation, including with regard to the publication of the findings. The reporting of the present study followed the STROBE guidelines for reporting cross-sectional studies. The present study utilized a descriptive cross-sectional study design, in which direct clinical observation was carried out among final-year dental students, interns, and faculty members between April 9, 2017, and May 4, 2017. The main targets of this study were the dental interns, who are the first line of treatment in JDS clinics and who oversee the completion of the primary dental charts of all new patients. The total number of interns was 40 at the time of planning this study. A personal invitation was sent to all the interns, and an explanation of the study process was delivered to them in the form of two discussion sessions by the principal investigator (PI). To minimize effects on the behavior of clinicians under observation in dental examination sessions, the study and its process were explained to the participants without mentioning that the study would focus on the dental examinations and, in particular, on oral cancer screening performance and the required related steps; in this way we aimed to reduce the Hawthorne effect. An instrument for the observation of oral cancer screening practice, in the form of a checklist, was developed based on the currently available evidence relating to the recommended practice of oral cancer screening. The observation instrument included the most appropriate clinical steps to be taken during comprehensive clinical examinations of dental patients [36]; 19 observational items were required. Items 1–10 related to reporting the patient's general health, family history of cancer, habits, and diet. Items 11–16 related to the performance by the examiner of different steps of the head, neck, and oral examination. Item 17 was for obtaining a plain radiograph, and items 18 and 19 were for conditions in which further evaluation was necessary to diagnose potential positive oral cancer cases. Each item was given a weight of one, two, or three, based on its degree of significance and relevance as supported by current evidence [36–53] (a minimal scoring sketch is given below). Prior to the present study's dental examination sessions, each participating patient was first examined by the observer/s, who then observed the clinicians' dental examination sessions; the observers had been calibrated beforehand (agreement significant at p < .000). All 95 dental examination sessions were observed by the main observer and mentored by the PI. In addition, 18 of these sessions were observed by all the observers, while others were overseen by two or three observers. A qualitative approach was utilized to perform post-exam investigations in the form of two focus group discussions (FGDs) with the participants of the current observation study. The FGDs aimed to explore the possible factors explaining the observed findings. The 95 participants in the cross-sectional study were invited to participate in the follow-up discussions by direct contact. Descriptive analyses were run to investigate the participants' demographics, as well as the frequency of the individual performance items in the observational checklist by the main observer, together with ANOVA and Tukey's HSD comparisons between each occupation category.
Furthermore, paired comparisons were run between the main observer's total scores and those of the other observers. The total number of participating examiners was 95, with an almost equal distribution of gender and occupation. The total number of participating patients was 32, with ages ranging between 19 and 70 years old and with 62.5% males (see Table). A strong correlation (r = .807) was found between the main observer and the second observer, and no significant differences were found between the main observer and the second observer (p = .571) or the fourth observer (p = .062); only the third observer differed (p = .018). In the focus group discussions, participants explained that they did not routinely cover certain history items (p = .007 and p = .031, respectively), as they assumed these had no clear relationship with their patients' oral health. Participants linked the cultural and religious unacceptability of alcohol use to the observed low score in item 8. For items 6, 7, and 19, female students and interns had higher scores than their counterparts. The participants mentioned that female students/interns are more vigilant to the oral changes associated with tobacco, as they are used to examining mainly female patients, who are usually non-smokers. Moreover, female participants mentioned that the tobacco advice they had given to the patients was based on their personal beliefs, as they did not receive formal training in tobacco counseling. In relation to item 17 (obtaining a radiograph), participants noted that radiographs were obtained only in relation to the chief complaint. Dental interns revealed that two factors could be related to their generally low score in comparison to students and faculty members. The first was that they rely on the other dentists to whom the patient will be referred for the next oral care/treatment, while the second was that they cannot perform a full oral screening on each patient, as they have a busy clinical schedule with a large number of patients. The present study investigated possible explanations for dentists' behavior by means of direct observations of routine dental clinical examination sessions. The interns in this study were recruited from the same group that was evaluated in two recent studies that assessed students, interns, and faculty members for their knowledge, attitude, self-efficacy, and opinions regarding oral cancer practice [54]. The knowledge of oral cancer among dentists has been investigated thoroughly in previous studies and postulated to be related to dentists' practice of oral cancer examinations [29, 55]. Dentists have actual control over their practice of oral cancer examinations when they demonstrate their ability to perform the following essential sub-behaviors: extra- and intra-oral screening skills, obtaining radiographs, taking a biopsy, writing referral reports, specialist consultations, communicating with or counseling patients, specifying an oral cancer provisional treatment plan, and referring suspicious cases to specialized centers (Table). Direct clinical observation methods strengthen this study by capturing the clinical steps that dentists may or may not have performed, which leads to a better understanding of the behavior's potential causes. Due to the cross-sectional nature of this study, the findings need to be tested through an experimental study design, in order to measure the effect of experience, skills, and self-efficacy on dentists' behavior. Furthermore, having a main observer present, who observed all dental clinical examination sessions, added to the reliability of the comparison between different examiners.
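As a hedged illustration of the weighted 19-item checklist scoring and the inter-observer comparison described in the methods above, the following minimal Python sketch shows one way such scores could be computed; the item weights and session scores below are hypothetical placeholders, not the study's data.

```python
# Minimal sketch (assumed, not the authors' code) of weighted checklist
# scoring and inter-observer correlation for a 19-item instrument.
from scipy.stats import pearsonr

# Hypothetical weights (1-3 per item, as described in the methods).
WEIGHTS = [2, 1, 1, 2, 2, 3, 3, 2, 1, 1, 3, 3, 3, 3, 3, 3, 2, 3, 3]

def checklist_score(performed):
    """Weighted total: 'performed' holds 1 if the examiner did the item, else 0."""
    assert len(performed) == len(WEIGHTS)
    return sum(w * p for w, p in zip(WEIGHTS, performed))

# Total scores for the same sessions rated by two observers (hypothetical).
main_observer = [14, 22, 9, 17, 25, 11]
second_observer = [15, 21, 10, 16, 24, 12]
r, p = pearsonr(main_observer, second_observer)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```

With real session data, a correlation of the order the study reports (r = .807) would indicate strong agreement between observers.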
Similarly, having four observers with whom to compare findings added to the internal validity of this study. Moreover, the observational instrument that was developed for the study had not been tested in previous independent work. However, the findings of this study indicated that the developed instrument had the capacity to investigate oral cancer examinations as part of routine dental clinical examinations. All observers' scores showed strong statistical correlations, with no statistical difference being found between three of them. To conclude, the present study has shed light on the gap that exists between knowledge and the actual practice of oral cancer examinations by dentists. The practice of oral cancer examinations is a complex behavior that is influenced by multiple factors: oral cancer knowledge, perceptions, experience, self-efficacy, actual control, and other external factors such as the clinic time afforded per patient. Furthermore, experience and confidence are essential determinants of performing oral cancer examinations. Therefore, dental schools and decision-makers should be aware of the influence of these determinants on oral cancer examinations, and these determinants should be stressed in future interventions that intend to improve the practice of oral cancer examinations as part of routine dental clinical examination sessions."}
{"text": "People from the general population often tend to believe that psychiatric patients may be incurable, dangerous, and unpredictable. Stigma represents a critical issue which should be defeated. In spite of the interest of research, little is known about the relationship between personality traits and the level of stigma toward people with mental illness. To evaluate whether certain personality traits can influence the level of stigma towards mental illness in a population of university students, a web survey was spread on social networks between March and June 2020 through Google Forms. Eligibility criteria for inclusion were: 1) being 18 years of age or older; 2) attending a degree course in an Italian university; 3) providing informed consent. Socio-demographic characteristics of the participants were collected. Stigma was measured using the Attribution Questionnaire (AQ-27), personality traits were evaluated through the Big Five Inventory (BFI), and the Mental Health Knowledge Schedule (MAKS-i) investigated knowledge about mental illness. Statistical analyses were performed using SPSS 24.0. We computed a multiple linear regression to identify potential predictors of stigma, adjusted on the basis of knowledge of mental illness. Results showed that age and faculty class were not related to stigma. Agreeableness (A) and Openness to experience (O) were associated with less stigmatizing attitudes. Conversely, Neuroticism (N) and Conscientiousness (C) seemed to predict higher levels of stigma. Our results suggest an interesting relationship between personality traits and stigmatizing attitudes, which deserves to be studied further. They also confirm the importance of implementing appropriate strategies against the stigma of mental illness. No significant relationships."}
{"text": "The alterations of gut microbiota have been associated with multiple diseases. However, the relationship between gut microbiota and adverse outcomes of hyperlipidemic stroke patients remains unclear.
Here we determined the gut microbial signature to predict the poor outcome of acute ischemic stroke (AIS) with hyperlipidemia (POAH). Fecal samples from hyperlipidemic stroke patients were collected and further analyzed by 16s rRNA gene sequencing. The diversity, community composition, and differential gut microbiota were evaluated. The adverse outcomes were determined by modified Rankin Scale (mRS) scores at 3 months after admission. The diagnostic performance of microbial characteristics in predicting adverse outcomes was assessed by receiver operating characteristic (ROC) curves. Our results showed that the composition and structure of the gut microbiota differed between POAH patients and good outcome of AIS with hyperlipidemia (GOAH) patients. The characteristic gut microbiota of POAH patients was that the relative abundance of Enterococcaceae and Enterococcus was increased, while the relative abundance of Lachnospiraceae, Faecalibacterium, Rothia, and Butyricicoccus was decreased. Moreover, the characteristic gut microbiota were correlated with many clinical parameters, such as the National Institutes of Health Stroke Scale (NIHSS) score, mean arterial pressure, and history of cerebrovascular disease. Moreover, ROC models based on the characteristic microbiota, or on the combination of characteristic microbiota with independent risk factors, could distinguish POAH patients from GOAH patients (area under the curve 0.694 and 0.971, respectively). These findings revealed the microbial characteristics of POAH and highlighted the predictive capability of the characteristic microbiota in POAH patients. Acute ischemic stroke (AIS) is a leading cause of death and chronic disability worldwide. Stroke survivors frequently have various complications, such as cognitive impairment and physical disability, which have a great impact on quality of life. Recent studies have emphasized that characteristic gut microbiota (GM) are associated with AIS. It was reported that stroke patients showed significant dysbiosis of bacteria enriched in short-chain fatty acids (SCFAs). It has also been reported, for example, that Alcaligenaceae and Acinetobacter could remarkably distinguish autism spectrum disorders from a healthy group, illustrating the diagnostic potential of characteristic microbiota. In this study, hyperlipidemia was defined as triglycerides (TG) > 2.28 mmol/L, or total cholesterol (TC) > 6.2 mmol/L, or high-density lipoprotein (HDL) < 0.91 mmol/L, or low-density lipoprotein (LDL) > 3.4 mmol/L. Exclusion criteria were: application of antibiotics or probiotics within three months, restriction of diet, concurrent pregnancy, schizophrenia, bipolar disorder, or other serious life-threatening illnesses. The modified Rankin Scale (mRS) was applied to assess the post-stroke functional outcome of each patient in a 90-day follow-up after the stroke onset. The included AIS patients with hyperlipidemia were divided into the good functional outcome group (mRS score < 3) and the poor functional outcome group (mRS score ≥ 3). Basic information was collected from all hyperlipidemic stroke patients at enrollment, including sex, age, years of education, history of smoking and drinking, presence of hypertension and diabetes, and history of cerebrovascular disease. Hypertension was defined as blood pressure ≥ 140/90 mmHg. Diabetes was defined as fasting blood glucose ≥ 7.0 mmol/L or 2 h blood glucose ≥ 11.1 mmol/L in an oral glucose tolerance test. Blood samples were extracted on an empty stomach after overnight fasting and centrifuged at 1300 × g for 10 minutes.
The biochemical indicators analyzed included TG, TC, HDL, LDL, creatinine, vitamin B12, folic acid (FOA), uric acid (UA), homocysteine (Hcy), C-reactive protein (CRP), hypersensitive C-reactive protein (hs-CRP), fasting blood glucose (FPG), glycosylated hemoglobin, thyrotropin, free triiodothyronine (FT3), free tetraiodothyronine (FT4), mean arterial pressure (MAP), D-dimer, alanine transaminase (ALT), aspartate transaminase (AST), and troponin. Moreover, computed tomography (CT) and magnetic resonance imaging (MRI) were used to identify new lesions in each patient. Stroke severity was evaluated based on the National Institutes of Health Stroke Scale (NIHSS) by professional physicians within 24 hours of admission. Sleep condition was also quantified through the Pittsburgh Sleep Quality Index (PSQI) during hospitalization. Fresh stool samples (200 mg) were obtained, fed into labeled 2 ml sterile centrifuge tubes, and quickly stored in a -80°C freezer. The bacterial DNA was isolated with the E.Z.N.A.® soil DNA kit according to the manufacturer's manual, and its concentration and purity were detected with a NanoDrop2000 UV-vis spectrophotometer. The hypervariable regions of the 16s rRNA gene were amplified using PCR with primers 338F: ACTCCTACGGGAGGCAGCAG and 806R: GGACTACHVGGGTWTCTAAT. Next, PCR products were recovered on a 2% agarose gel and paired-end sequenced (2 × 300) on an Illumina MiSeq platform. Alpha diversity was analyzed through the Shannon and ACE indices. Principal coordinates analysis (PCoA) on the Bray-Curtis dissimilarity index was used for beta diversity analysis (a minimal computational sketch of these measures follows below). The intestinal typing analysis was performed at the genus level by clustering samples with similar dominant microbiota structures into a class. Moreover, we identified significant differences in relative abundance at the levels of phylum, class, order, family, genus, and species by Wilcoxon rank sum tests based on the obtained community abundance data. Linear discriminant analysis (LDA) effect size (LEfSe) was applied to find significantly enriched taxa and their influence between the two groups using the nonparametric Kruskal-Wallis (KW) sum rank test, with a threshold of LDA score > 2. Statistical analysis was carried out with SPSS V.22.0. Chi-square tests and multivariate logistic analysis were used to analyze categorical variables; odds ratios (OR) and 95% confidence intervals (95% CI) were calculated. The values of continuous variables were represented as median with quartiles or mean with standard deviation (SD) depending on whether they were normally distributed, and compared by rank sum test or t-test, respectively. A P value < 0.05 was considered significant. According to the follow-up mRS results, 231 hyperlipidemic stroke patients were divided into two groups: 58 POAH patients and 173 good outcome of AIS with hyperlipidemia (GOAH) patients. As shown in the table, multivariate analysis indicated that the NIHSS score, history of cerebrovascular disease (P < 0.001), and decreased MAP were the independent risk factors of POAH. Alpha diversity was evaluated by the ACE index. Faecalibacterium (P < 0.01) and Butyricicoccus (P < 0.05) were negatively correlated with the mRS score, while Enterococcus was positively correlated with the mRS score (P < 0.05) (Spearman correlation heatmap).
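As a hedged, self-contained illustration of the analyses named above (Shannon alpha diversity, the Bray-Curtis dissimilarity underlying the PCoA, and the ROC discrimination of POAH versus GOAH), the following Python sketch uses hypothetical taxon counts and labels; it is not the study's actual pipeline, and the abundance matrix, class labels, and random seed are placeholders.

```python
import numpy as np
from scipy.spatial.distance import braycurtis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def shannon(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over non-zero taxa."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical taxon count vectors for two stool samples.
sample_a = [120, 30, 5, 0, 45]
sample_b = [10, 80, 60, 20, 0]
print("Shannon:", shannon(sample_a))                   # alpha diversity
print("Bray-Curtis:", braycurtis(sample_a, sample_b))  # beta diversity

# ROC sketch: relative abundances of 5 marker genera (hypothetical values)
# for 231 patients, labelled 1 = POAH, 0 = GOAH (~58 POAH cases, as reported).
rng = np.random.default_rng(0)
X = rng.random((231, 5))
y = (rng.random(231) < 58 / 231).astype(int)

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("AUC:", round(auc, 3))  # the paper reports 0.694 for microbiota alone
```

On real abundance data, adding the clinical risk factors as extra columns of X is one plausible way a combined model of the kind the paper describes could be fitted.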
Moreover, the predictive model combining the five genera with the three independent risk factors could also distinguish POAH from GOAH. We screened out the five genera as biomarkers according to their LDA values. This study revealed that the GM feature of POAH was that the abundance of Enterococcus increased while the abundance of SCFA-producing bacteria decreased, which was closely related to independent risk factors such as cerebrovascular history, NIHSS score, and MAP. Moreover, the characteristic microbiota, and the microbiota combined with the three independent risk factors, could establish a distinction for predicting POAH. These results indicated that GM might provide novel microbial biomarkers for predicting POAH. The abundance of Enterococcus in POAH was enriched and positively related to the mRS score, indicating that the abundance of Enterococcus might be related to the risk of POAH. It was reported that Enterococcus is an opportunistic pathogen in the gastrointestinal tract, and a raised level of Enterococcus is relevant to many neurological and metabolic diseases, such as Parkinson's disease, Alzheimer's disease, diabetes, and post-stroke affective disorder, and has been associated with inflammatory mediators such as IL-6. Consistent findings have been reported in stroke patients (Hu X et al.), suggesting a link between Enterococcus and poor outcome. The decreased taxa included Lachnospiraceae, Faecalibacterium, Rothia, and Butyricicoccus. Moreover, Lachnospiraceae, Faecalibacterium, and Butyricicoccus were associated with a lower mRS score. Lachnospiraceae, a primary producer of butyrate, has been related to the functional prognosis of diseases. The datasets presented in this study can be found below: NCBI, PRJNA894329. The protocol of the study was reviewed and approved by The Ethics Committee of the Second Affiliated Hospital of Wenzhou Medical University (LCKY2020-207). The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. JL, JS, and SC designed the experiments. JC, BC, JM, JZ, QG, HX, YK, and SY performed the experiments and conducted the statistical analyses. All authors contributed to the article and approved the submitted version. This work was supported by the Clinical Medical Research Project of the Zhejiang Medical Association (2022ZYC-D10). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
{"text": "Understanding the factors that play a role in the initiation of alcohol use and the subsequent transition to later alcohol abuse in adolescence is of paramount importance in the context of developing better-targeted types of secondary ("pro-active") prevention interventions. Peer and family influences, together with temperament traits, have been suggested to be of cardinal importance regarding the initiation of alcohol use. In addition to these factors, neurobiological and genetic factors play a major role in the risk of developing alcohol abuse upon initiation.
The presentation will highlight the different psychological, neurobiological, and social factors underlying the risk of the transition to abuse and dependence in adolescence. In addition, examples of targeted prevention interventions will be highlighted. No significant relationships."}
{"text": "Therefore, this work aimed to investigate the overall nutritional quality of these new products. Regulated information [Regulation (EU) 1169/2011], the presence/absence of nutrition or health claims and organic declarations, the gluten-free indication, and the number of ingredients were collected from the food labels of 269 commercial meat analogues currently sold on the Italian market. Nutritional information on reference animal meat products was used to compare the nutrition profiles. As an indicator of nutritional quality, the Nutri-Score of meat analogues and counterparts was also determined. Plant-based steaks showed significantly higher protein, lower energy, fat, and salt contents, and better Nutri-Scores than the other analogues. All the meat analogues showed a higher fibre content than meat products, while plant-based burgers and meatballs had lower protein contents than their meat counterparts. Ready-sliced meat analogues showed a lower salt content than cured meats. Overall, all these plant-based products showed a longer list of ingredients than animal meat products. Results from this survey highlighted that plant-based steaks, cutlets, and cured meats have some favourable nutritional aspects compared to animal-based products. However, they cannot be considered a "tout court" substitute for animal meat products. The high consumption of red meat, and mostly of processed meat, is currently debated for many reasons related to human, planet, and animal health. First, diets high in red meat and processed meat have been considered risk factors for a high number of deaths and years lived with disability (DALYs) in the last Global Burden of Disease study. For all these reasons, there is currently an urgent call to promote plant-based diets limiting the consumption of meat and animal foods. Plant-based diets foresee the inclusion of crops and their traditional derivatives, but also of a large plethora of plant-based foods developed to mimic animal foods. These products include plant-based drinks, cheese substitutes, and even meat analogues, with the latter reaching a USD 4.6 billion global market in 2020 and expected to reach USD 6 billion within 4 years. Meat analogues—also known as meat substitutes—are plant-derived food products usually processed to resemble meat flavour, texture, and appearance. They can be derived from various vegetable sources. Soy and textured vegetable proteins are the most common, but pulses, cereals, mushrooms, and seeds are also used [6]. Meat analogues can be produced with two different approaches, both aiming at obtaining a fibrous structure, which is essential for the texture. The interest in these innovative products and, consequently, their purchase in supermarkets is constantly growing. Moreover, these products are perceived by consumers as healthier than meat. This work is part of the Food Labelling of Italian Products (FLIP) study, which systematically investigates the overall quality of the pre-packed foods of the most important food groups and related categories sold on the Italian market. The selection of samples was performed as previously described by Angelino et al.
The inclusion criteria taken into consideration for the selection were the item's availability in at least one online shop and all the data being retrievable on the pack or in the retailer's online shop. Exclusion criteria were: incomplete images of all the sides of the pack, unclear images of the nutrition declaration or list of ingredients, and products marked as "product currently unavailable" on all the selected online stores during the data collection period. Data extraction was conducted as described by Angelino et al. For each analogue category, reference animal meat products were considered (white meat and red meat, Caprinae meat, and other animals), and the related energy and nutrient contents were retrieved as described for meat analogues. The following additional information was collected: fibre content (g/100 g), number of ingredients, presence/absence of a gluten-free indication, and the plant they were made of. Collected data were organised in a dataset where commercial products were sub-grouped depending on: (1) category, (2) type, (3) presence/absence of an organic declaration, (4) presence/absence of nutrition claims, (5) presence/absence of health claims, (6) presence/absence of indications related to gluten. All the plant-based products investigated were divided into two major categories: plant-based meat analogues (PBMA) and plant-based ready-sliced meat analogues (PBSMA). Among the PBMA, we identified different types according to the products they intend to resemble, i.e., steaks, burgers, meatballs, and cutlets. As for the PBMA, we considered as control products: steaks (both white meat and red meat), burgers, meatballs, and cutlets. For all the products, the Nutri-Score was also determined by using the Excel sheet provided by Santé publique France. Sodium was derived from the declared salt content. When fibre was not declared, this value was estimated by subtracting the energy provided by each macronutrient from the total energy, divided by 2 kcal/g. The percentage of fruit, vegetables, pulses, nuts, and rapeseed, walnut, and olive oil was retrieved from the list of ingredients. As salt is added to steak during cooking, different amounts of salt (from 0.05 to 0.5 g) were considered to determine the score of this control product. In comparing the analogue products and the meat counterparts, 0.5 g of salt was considered. For meat products retrieved from the Food Composition Database for Epidemiological Studies, the Nutri-Score was calculated considering the nutrition values reported in the database, even though this type of front-of-pack labelling is not applied to non-pre-packed products. The normality of data distribution was rejected through the Kolmogorov-Smirnov test. Therefore, data related to energy, macronutrients, fibre, and salt were expressed as medians and interquartile ranges. In order to investigate differences among categories, the Kruskal-Wallis non-parametric one-way ANOVA for independent samples with multiple pairwise comparisons was used. Differences between products with or without an organic declaration, nutrition and health claim declarations, and indications related to gluten were assessed using the Mann-Whitney non-parametric test for two independent samples. In addition, the nutritional values of meat replacer products were compared to those of the control counterparts using the Mann-Whitney non-parametric test for two independent samples.
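The Nutri-Score procedure described above can be made concrete with a small sketch. The following Python function is an illustrative reimplementation, not the official Excel tool the survey used: the per-100 g thresholds follow the published 2017 general-foods algorithm as far as the author of this sketch knows them, and the example product values are hypothetical.

```python
# Simplified 2017 Nutri-Score point system (general foods, per 100 g).
def points(value, thresholds):
    """Points = number of thresholds the value exceeds."""
    return sum(value > t for t in thresholds)

def nutriscore(kj, sugars, satfat, sodium_mg, fibre, protein, fruit_veg_pct):
    # Negative (N) points: energy (kJ), sugars (g), saturates (g), sodium (mg).
    n = (points(kj, [335, 670, 1005, 1340, 1675, 2010, 2345, 2680, 3015, 3350])
         + points(sugars, [4.5, 9, 13.5, 18, 22.5, 27, 31, 36, 40, 45])
         + points(satfat, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
         + points(sodium_mg, [90, 180, 270, 360, 450, 540, 630, 720, 810, 900]))
    fv = points(fruit_veg_pct, [40, 60, 80, 80, 80])   # yields 0, 1, 2, or 5
    fib = points(fibre, [0.9, 1.9, 2.8, 3.7, 4.7])
    pro = points(protein, [1.6, 3.2, 4.8, 6.4, 8.0])
    # Protein points are not counted when N >= 11 unless fruit/veg scores 5.
    p = fv + fib + (pro if (n < 11 or fv == 5) else 0)
    score = n - p
    for grade, cutoff in zip("ABCD", (-1, 2, 10, 18)):
        if score <= cutoff:
            return score, grade
    return score, "E"

# Hypothetical plant-based burger per 100 g: score -4 -> grade A.
print(nutriscore(kj=820, sugars=1.5, satfat=1.8, sodium_mg=400,
                 fibre=5.0, protein=14.0, fruit_veg_pct=50))
```

Grade cut-offs here (A up to -1, B up to 2, C up to 10, D up to 18, E above) are those for general foods; beverages and added fats use different tables.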
Results were graphically shown using Origin software. The statistical analysis was carried out using the Statistical Package for Social Sciences software and performed at the p < 0.05 significance level. Two hundred and ninety meat analogues were collected. After removing products not respecting the inclusion criteria, a total of 269 products were considered in the final analysis: 229 PBMA and 40 PBSMA. Overall, 59.3% of the PBMA products had pulses as the main ingredient, while 22.7 and 18.0% were mainly based on vegetables and (pseudo)cereals, respectively. Among the PBSMA, 45.0% were based on cereals, 25.0% on pulses, 7.5% on oils, and the remaining 22.5% on other ingredients (water emulsions of different ingredients) (data not shown). As for controls, 25 out of the 225 items retrieved from the online stores were removed because they did not respect the inclusion criteria. Besides these 200 commercial control products, 69 controls were obtained from the Food Composition Database for Epidemiological Studies in Italy. The median energy content of all plant-based meat analogues was 198 (155-230) kcal/100 g, with steaks being the analogues with the lowest content (p < 0.05). The median protein content was 14.0 (9.8-17.0) g/100 g, and the steaks had a significantly higher protein content than the other analogues. Overall, analogue steaks were also significantly lower in total fat, saturates, total carbohydrates, sugars, fibre, and salt than the other categories. The other categories did not differ from each other in total fat, saturates, sugars, fibre, and protein contents (p > 0.05). Conversely, they differed from each other in salt, of which cutlets had the highest content (p < 0.05). In addition, cutlets also differed from burgers and meatballs in total carbohydrates, having a significantly higher content. The median energy content of all the PBSMA products was 212 (198-247) kcal/100 g, with protein and fats as the main nutrients contributing to the energy content. Products bearing at least a nutrition claim were significantly lower in energy, total fat, saturates, and salt than products without a nutrition claim. The products that reported a protein claim, in addition to being, as expected, higher in protein, were significantly lower in total fat. Intriguingly, products with and without a claim on fibre did not significantly differ in their fibre content. The nutrition declaration of organic products was not different from that of conventional counterparts. All the meat analogues showed a higher total carbohydrate, sugar, and fibre content than controls, while plant-based cured meats showed a significantly lower energy content than controls (p < 0.05). The salt content differed only between PBSMA and cured meats (p < 0.05). The number of ingredients of the plant-based products was higher in absolute value than the median number [9 (6-10)] found in the commercial meat products. Instead, the plant-based meatballs showed a similar number of ingredients [13 (11-15)] to controls [13 (12-15)], and the plant-based cutlets had a similar number of ingredients [14 (12-17)] to meat cutlets [13 (11-15)]. The PBSMA showed a number of ingredients of around 13 (8-15). It was not possible to estimate the median number of ingredients for the cured meat controls, since this information is not provided in the database. As for the Nutri-Score, most of the meat counterparts had a D score. The plant-based cured meats had 32.5% of products with an A and B score and 65% of products with a C and D score.
Only 2.5% of plant-based cured meats had an E score, while 69.2% of cured meats had that score. The remaining 30.8% of cured meats had a C or D score. To the best of our knowledge, this is the first survey to evaluate the nutritional quality of several different types of PBMA and PBSMA sold on the Italian market. The critical evaluation of such products falls into a field in great expansion—the conception and development of meat analogue products—given the consistent promotion of a transition towards plant-based and sustainable dietary models. Indeed, a previous survey of plant-based burgers (N = 50) found them to be lower in protein and higher in carbohydrate than meat burgers. The comparison with worldwide data is not easy. Despite the consumer attraction for these products, there are not many published surveys on the presence of PBMA and PBSMA in large-scale distribution. A very recent cross-sectional study considered the nutrition facts of 207 meat analogues and 226 meat products available from 14 retailers in the UK. "Safefood" identified on the Irish market 354 analogue products, including other categories not considered in the present survey, i.e., fish substitutes and pastry-based meat substitutes. Plant proteins cannot be considered tout court animal-derived protein substitutes due to: (i) a different PDCAAS, (ii) the presence of trypsin inhibitors—usually inactivated by heat—which affects protein digestibility, and (iii) deficiency in some essential amino acids. Conversely, processing of soy protein has been reported to provide: (i) increased availability of essential amino acids compared with unprocessed or minimally processed soy protein; (ii) a protein digestibility-corrected amino acid score (PDCAAS) close to 1.00, the reference value for animal-based proteins; and (iii) improvements in the colour, flavour, and texture parameters of the products. Cereal proteins are likewise limited in some essential amino acids. Moreover, unlike meat, which is a protein-rich food, cereal-based products have a higher content of carbohydrates and sugars in comparison with their meat counterparts. As for cereal-based analogues, pulses also contribute carbohydrates and sugars to the final product. Therefore, these substitutes, which aim to replace meat, also add carbohydrates and sugars to the diet. On the contrary, meat products are not contributors to the intake of sugars, already widely introduced in our diet. In addition to this, carbohydrates are also added to the products because of their binding capacity. They are added as starches and flours to improve the products' texture, and as gums to improve stability. As for carbohydrates, other ingredients are added to the formulations to contribute to the final product. Salt is added as a seasoning but also for toughening the product. Even if in small quantities, the presence of salt involves a change in the structure of the proteins which allows obtaining the desired structure of the final product. Besides comparing the nutrition facts, PBMA, PBSMA, and meats were also compared by the Nutri-Score. This is one of the most common front-of-package labels adopted in several countries. There is still an intense debate on whether it is a good way to explore the nutritional quality of food items. Our study has several strengths and limitations worth noting. Among the strengths, to the best of our knowledge, this is the first survey investigating the nutritional quality and the Nutri-Score of PBMA and PBSMA sold in Italy, retrieving the large majority of products currently on the market.
Moreover, by focusing on the information reported on the food labelling, we were able to consider not only the nutrition declaration but also other aspects, such as the presence of nutrition and health claims and the presence of gluten-free or organic declarations. However, this point can also be considered a limitation, since we cannot consider data on nutrients that are not mandatory based on the current European legislation. Moreover, we cannot exclude the presence on the market of other items sold in other minor retailers or other channels. The first finding of the present survey on the nutritional profile of PBMA and PBSMA sold on the Italian market relates to the huge number of products present nowadays on the market. Moreover, wide variability in the nutritional values among products was observed. This variability was related to the heterogeneity of these products in terms of the types and ingredients used. As for animal meats, plant-based steaks proved nutritionally different from the other categories, showing a higher protein amount and lower energy, other macronutrient, and salt contents. The Nutri-Score values were more favourable in the meat analogues compared to the animal counterparts. Concerning nutritional and health claims and organic ones, as previously reported in other FLIP study papers, a nutritionally significant advantage of PBMA and PBSMA boasting such claims was not observed compared to their counterparts. Thus, further studies focusing on evaluating the biological value of protein from plant-based sources and the bioaccessibility of micronutrients are needed. In addition, there is a notable need to improve the formulation of meat analogues in terms of the number of added ingredients, processing, etc. Lastly, there is also a need for adequate nutritional education programs in order to increase consumers' knowledge and awareness about the differences between animal- and vegetal-based products. All these steps before and during the shopping time are the most important decisional moments for the customers, as they drive their intention-to-buy and, finally, the consumption of products, which will directly impact their health. The similarity of the nutritional values of plant-based alternatives and animal meats as retrieved from the nutrition facts, and more so for steaks, should not induce the consumer to consider PBMA and PBSMA as animal meat equivalents or alternatives. In the present manuscript, as in a previous FLIP manuscript focusing on plant-based beverages vs. milk, it has been widely discussed that the difference in terms of the biological quality of proteins and the micronutrient amounts and bioavailability does not allow PBMA and PBSMA to be considered a tout court alternative to animal meat. The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author. Margherita Dall'Asta, Department of Animal Science, Food and Nutrition, Università Cattolica del Sacro Cuore, Piacenza, Italy; Marika Dello Russo, Institute of Food Sciences, National Research Council, Avellino, Italy; Daniele Nucci, Veneto Institute of Oncology IOV-IRCCS, Padova, Italy; Stefania Moccia, Institute of Food Sciences, National Research Council, Avellino, Italy; Gaetana Paolella, Department of Chemistry and Biology A.
Zambelli, University of Salerno, Fisciano, Italy; Veronica Pignone, Freelance Nutritionist, San Giorgio del Sannio, Italy; Alice Rosi, Department of Food and Drug, University of Parma, Parma, Italy; Emilia Ruggiero, Department of Epidemiology and Prevention, IRCCS Neuromed, Pozzilli, Italy; Carmela Spagnuolo, Institute of Food Sciences, National Research Council, Avellino, Italy; Giorgia Vici, School of Biosciences and Veterinary Medicine, University of Camerino, Camerino, Italy. SC was involved in the protocol design, data analyses, and the interpretation of results, and drafted the manuscript. DA was involved in the protocol design and in the interpretation of the results and contributed to the drafting of the manuscript. TT was involved in the protocol design and critically reviewed the manuscript. NP participated in the conceptualisation of the study and in the interpretation of the results, and critically reviewed the manuscript. DM was involved in the interpretation of the results, conceived and designed the protocol of the study, critically reviewed the manuscript, and had primary responsibility for the final content. All authors contributed to the article and approved the submitted version. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
{"text": "In the last decade, various foods have been reformulated with plant protein ingredients to enhance plant-based food intake in our diet. Pulses are at the forefront as protein-rich sources to aid in providing sufficient daily protein intake and may be used as binders to reduce meat protein in product formulations. Pulses are seen as clean-label ingredients that bring benefits to meat products beyond protein content. Pulse flours may need pre-treatments because their endogenous bioactive components may not always be beneficial to meat products. Infrared (IR) treatment is a highly energy-efficient and environmentally friendly method of heating foods, creating diversity in plant-based ingredient functionality. This review discusses using IR-heating technology to modify the properties of pulses and their usefulness in comminuted meat products, with a major emphasis on lentils. IR heating enhances liquid-binding and emulsifying properties, inactivates oxidative enzymes, reduces antinutritional factors, and protects the antioxidative properties of pulses. Meat products benefit from IR-treated pulse ingredients, showing improvements in product yields, oxidative stability, and nutrient availability while maintaining the desired texture. IR-treated lentil-based ingredients, in particular, also enhance the raw color stability of beef burgers. Therefore, developing pulse-enriched meat products will be a viable approach toward the sustainable production of meat products. It is necessary for people to have access to sufficient food that is safe, nutritious, and affordable and that is produced using environmentally responsible and sustainable production systems.
Several dietary surveys have also shown the widespread prevalence of suboptimal intake of protein and micronutrients in certain segments of the world population [2,3,4]. Proteins in the diet and their sources have evolved over time. In the beginning, slaughtered animals were used as entire carcasses or whole meat cuts, and this eventually led to their use in processed meat products to achieve full utilization of the meat and byproducts such as organs and other tissues. Prolongation of shelf life, as well as the minimization of wastage through the ability to use various animal carcass parts in processed meat, provided a significant advantage in the food basket of many civilizations and cultures. Such products are still popular because of their perfect fit with safe, tasteful, and convenient ready-to-eat protein-rich food in today's fast-paced lifestyle. Owing to the dominance of existing fast-food practices, the global market demand for processed meats grew at a compound annual growth rate (CAGR) of 6.5% (2017-2021) and is expected to rise at 6.3% (2022-2032), reaching a total of USD 605.14 billion in 2032 (a small compound-growth sketch follows this passage). As the world's population grows, so does the demand for animal protein, of which a considerable portion is satisfied with processed meat products. Some evidence suggests that the intake of red meat and extensively processed meat is associated with an increased risk of chronic illnesses such as cardiovascular diseases, type 2 diabetes, and some forms of cancer [8]. The reformulation of conventional meat products incorporating plant-based ingredients offers the meat industry several opportunities that benefit both the processor and the consumer. These include developing healthier products with more natural, functional ingredients; reducing waste generation while maximizing total carcass utilization; diversification of product categories with new and improved hedonics; and lowering production costs to increase consumer affordability. The hundreds of different types of comminuted meat products available worldwide vary based on the method of manufacturing and the composition of the ingredients used in them. Sausage making was already described in an 1899 book by Duff (see also Vatanparast et al. on Canadians' protein intake). Utilizing pulses and pulse flours in meat product formulations presents some challenges that need to be addressed at the ingredient and/or product manufacturing stage. Pulses can bring in minor compounds that are nutritionally undesirable, incompatible techno-functional properties, off-flavors, and prooxidants that hinder their application in meat products. Among the various thermal treatments of pulses to improve their usability as ingredients for various food products, infrared (IR) heating has been under scientific investigation over the last decade to overcome these limitations, and the findings appear promising [17,18]. The aim of the present review is to bring together the literature on the addition of pulse-based ingredients to comminuted meat products and, in particular, those of IR-treated pulse ingredients in meat processing. The studies described in this review were from author collections and were also found through database searches of Web of Science using search terms such as "pulse, lentil, pea, bean" and "meat, beef, chicken, pork, sausage, burger, patty, meatball" and "infrared, micronization, heat-treated" over the last 20 years.
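To make the quoted market-growth figures concrete, the following small compound-growth calculation is an illustrative sketch: the 2032 target and the 6.3% CAGR are taken from the text above, while the implied 2022 base value is back-computed here rather than taken from the cited market report.

```python
def project_cagr(value, rate, years):
    """Project a value forward at a constant compound annual growth rate."""
    return value * (1 + rate) ** years

# Expected 6.3% CAGR over 2022-2032 reaching USD 605.14 billion in 2032;
# back-computing gives the implied 2022 starting market size.
target_2032 = 605.14
base_2022 = target_2032 / (1.063 ** 10)
print(f"implied 2022 market: USD {base_2022:.2f} billion")  # ~328.49
print(f"check 2032 value:    USD {project_cagr(base_2022, 0.063, 10):.2f} billion")
```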
Taking into consideration all the pertinent research conducted on the application of IR-treated pulses in meat products, more than 80% of it has focused on lentil-derived products. Therefore, this review emphasizes the contribution of IR-treated lentils to the techno-functional and sensory aspects of comminuted meat products. A common definition of processed meat is "meat that has been transformed through salting, curing, fermentation, smoking, or other processes to enhance flavor or improve preservation". Whole muscle products are made from large pieces or whole intact cuts of meat that are often molded and primarily seasoned, heat-processed, and smoked, while comminuted meat products are made from small pieces of meat that have been ground, minced, or chopped and often include a combination of meat and non-meat components. One of the benefits of manufacturing comminuted meat products is that it allows for the utilization of lower-value meat, which can then be turned into even more valuable products. Meat product manufacturers use meat trimmings from inexpensive parts of the carcass, such as the head and cheeks, as well as mechanically separated meat (MSM), in the formulation of comminuted meat products. MSM is a cost-effective method of utilizing the last traces of meat from the carcass. Therefore, the manufacturing of comminuted meat products enables optimal usage of the animal carcass, minimizing waste generation in the meat processing industry. Comminuted meat products can be further subdivided into formed ground or emulsion-based meat products, based on the degree of comminution that the meat has gone through. Emulsion-type meat products, such as frankfurters, wieners, and bologna sausage, are made with a viscous mass of finely chopped ground meat that is composed of muscle and fatty tissues. These meat mixtures (batters) are relatively complex systems and have many properties of an emulsion. The overall quality of meat products can be described as a set of properties that together identify what consumers appreciate about a product at purchase and/or consumption. Therefore, the quality traits of meat products are generally those associated with human sensory perception: appearance, including color, shape, and size; taste/flavor; texture; juiciness; and odor. Other than those traits, the quality of meat products is also expressed as freshness or wholesomeness, which relates to the perception that they are safe to eat in terms of being free of pathogens, parasites, toxins, or allergens. The selection of suitable non-meat ingredients for a particular meat application necessitates an understanding of the required functionalities. For example, the quality of emulsion-based meat products is determined by the stability of the emulsion, the nutritional composition, and the palatability aspects as perceived by consumers. For burgers or meatballs, the meat particle size is often coarser, with less protein extraction desired during mixing, but the need for adequate water- and fat-holding is still important to maintain palatability. Thus, high-quality meat products require the application of suitable processing technologies along with meat and non-meat ingredients that offer the required functionalities. The application of non-meat ingredients provides opportunities to enhance the overall quality of processed meat products because of the desirable functionalities they provide to the final product.
Generally, the food standards of each country regulate the levels of non-meat ingredients in the meat products that are produced or marketed in their jurisdictions. Pulses are dry edible legume seeds consumed in many regions of the world as part of a staple diet, with a long history dating back approximately 11,000 years. In general, pulses are rich in protein, slowly digestible carbohydrates, dietary fiber, and a variety of micronutrients such as selenium, iron, vitamins E and A, folate, niacin, and thiamine (Table 2). From a nutritional standpoint, pulses are rich in macronutrients and micronutrients. Thus, pulses in various forms have been utilized in the reformulation of meat products as binders or to enhance their nutritional and healthy features or sensory qualities. The protein, starch, and fiber contents of pulses make them great binders, as these biopolymers form complex gel networks with meat proteins. Pulses undergo several processes before they are used in food formulations; a summary of pulse processing into various products is shown in the accompanying figure. The most explicit use of pulses in meat products is as an extender or binder/filler. Extenders and binders serve many purposes in meat products, mostly through their effects on formulation costs, improving nutritional value, benefitting processing parameters such as the viscosity and adhesiveness of meat batters, and enhancing product quality by retaining more liquid (oil and water) and improving texture, flavor, mouthfeel, and appearance [12,13,14]. The water and oil retention properties of the starch, protein, and fiber constituents of pulse flours lend themselves to use as a binder in food systems. In products such as patties and burgers, pulse flours trap water, fat, and other substances and form complex gel networks with meat proteins when heated [42]. It has been shown that pulse flours enable high liquid retention in meat systems, thereby lowering cooking losses, improving the texture, and increasing the product yield. One particular advantage that pulses have as an ingredient in meat products is that their native starches will gelatinize at normal meat thermal processing temperatures. Lean and fat components are often used in meat product formulations, where protein and fat from skeletal muscles make up the majority of the composition next to water. The lean meat component of the formulation is responsible for fat emulsification, product structure, water binding, and end product color. Studies have shown that adding pea fiber to meat products increases the cook yield and water retention and reduces shrinkage [50].
Due to their redox characteristics, phenolic compounds are capable of functioning as hydrogen donors, reducing agents, and singlet oxygen quenchers, thereby delaying oxidative degradation. Although bioactive compounds act in beneficial roles, these compounds in raw pulses can also have negative nutritive properties at certain concentration levels [60,61]. Pulses, as with all seeds, contain enzymes such as superoxide dismutase and glutathione reductase, which provide protection against the oxidation of meat. Consumers often choose foods based on their sensory appeal, including appearance, taste, flavor, and texture. Several studies have shown that pulses are less acceptable because of their natural sensorial properties, especially to those consumers who are not familiar with pulses. In a recent review, Chigwedere and colleagues identified such sensory barriers to pulse consumption. Color is another attribute that may be affected when incorporating pulse flours and other pulse-derived ingredients into meat products. The use of some pulse flours in meat products has had a considerable impact on their lightness, redness, and yellowness scores, as Dzudie et al. observed. In pulse-meat combined product systems, the thermo-incompatibility between pulse and meat proteins is a technical challenge when incorporating pulse proteins with meat. Over the past decade, research has focused on infrared (IR) heating as one of the thermal treatments for pulses to increase their suitability as ingredients for different foods. IR treatment is a rapid heating technique that employs IR electromagnetic radiation. IR heating is considered an advanced thermal process with reported benefits such as environmental friendliness and homogeneity of heating with low energy consumption. Thus, IR heating could serve as a viable treatment that can be used in pulse processing as a pre-treatment to enhance functional properties. IR heat treatment has been referred to as IR micronizing or micronization, which is a dry thermal process patented by the Micronizing Company UK. The name of the company comes from the unit of measurement of infrared wavelengths (the micron). IR ovens have a modular design that allows them to be easily fitted into most production lines; they occupy less floor space than convection ovens and require less maintenance. IR radiation applied to biological materials is absorbed, reflected, or scattered in the matrix. To maximize the effectiveness of the heating process, the material to be exposed to IR radiation should have a high absorptivity. According to the wavelength range of the incident radiation, energy absorption mechanisms are classified as follows: (1) changes in the electronic state between 0.2 and 0.7 µm (ultraviolet and visible rays), (2) changes in the vibrational state between 2.5 and 1000 µm (FIR), and (3) changes in the rotational state greater than 1000 µm (microwaves). When pulses are heated in an IR oven, the emitted radiation transmits through the air and is absorbed in the outer surface cell layers of the seeds, resulting in an excited vibrational state of the constituent molecules, which generates heat within the seed. Tempering, or adjusting seed moisture content (MC), is a common pre-treatment prior to IR heating. In this process step, water or an aqueous solution is applied to raise the MC to a predetermined level. The moisture level, time, and solution composition influence the quality of the end product and allow the food processor to tailor the pulse ingredient to a specific application.
Tempering is an important factor that determines the level of starch gelatinization, protein gelation, and cooking quality in IR heating . AccordiSome studies have shown that a higher seed MC prior to IR heating results in a higher degree of starch gelatinization . An incrThe physicochemical and functional properties of pulse ingredients determine their potential applications in different food products. Nitrogen/protein solubility, WHC, oil absorption capacity (OAC), gelation, and emulsifying capacity (EC) are a few of the functional properties of pulses and pulse-derived ingredients that may be affected by IR heating. Protein solubility is a critical factor that influences its functionality in different food products and can be measured as nitrogen solubility index (NSI) or protein dispersibility index (PDI). IR heating decreased the nitrogen solubility of lentil flour by 33\u201364% over a pH range of 2\u20139, which is suggested to be due to heat-induced denaturation of proteins that exposes the hydrophobic regions leading to their aggregation in aqueous solutions . TempereStarch granule gelatinization is the loss of molecular structure when exposed to heating under excess moisture conditions with accompanying viscosity increases and possible gel formation. During gelatinization, water is absorbed into starch granules, resulting in hydration, swelling, and ultimately the loss of birefringence and crystallinity . Due to WHC refers to the ability to absorb, retain, and physically entrap water. It is a critical functional attribute for food applications. Several studies have revealed that tempering followed by IR heating can increase the WHC of flour derived from pulses. According to Bai et al. , tempereOHC is equally as important as WHC when it comes to the utilization of pulse ingredients in foods. Meat products may contain a high percentage of fat that needs to be retained within the product during processing and storage. The type of pulse, seed MC, and IR heating temperature affect the OHC of pulse ingredients. Flours with a higher concentration of hydrophobic amino acids are thought to have a higher oil-holding capacity. Pathiratne indicatep < 0.01) and negatively correlated with protein solubility . Since IR heating and tempering can both improve starch gelatinization while decreasing protein solubility, they may have opposing effects on the ability of pulse flours to emulsify fat.Emulsifiers provide a cohesive film over dispersed oil droplets in an aqueous medium, preventing structural changes such as coalescence, creaming, flocculation, or sedimentation. This property is important for ingredients added to meat products to stabilize emulsions and prevent finely chopped fat particles or liquid oil (dispersed phase) from coming together. Bai et al. found thIR heating has been reported to impart minimal negative effects on the nutritional properties of pulses and may contribute to enhanced bioavailability of nutritional components in pulses, primarily through lowering enzyme inhibition and increasing the digestibility ,17,18,94Increased seed temperatures (>100 \u00b0C) as a result of IR treatment may have an effect on enzyme activity by inducing conformational changes. Therefore, IR heating has been investigated as a method of inactivating oxidative enzymes, which are associated with the production of off-flavor and off-color in pulses. Shariati-Ievari and colleagues examinedResults from Shariati-Ievari et al. 
and Pathiratne support these observations. Polyunsaturated fatty acids such as omega-3 fatty acids are essential nutrients in the diet and are highly prone to oxidation. In pulses and meat products, VOCs are formed as oxidation products of unsaturated fatty acids. Lipoxygenases in pulses catalyze the oxidation of linoleic acid to generate VOCs, and Shariati-Ievari et al. investigated the formation of these compounds.

Ingredients derived from pulses retain the nutritional and functional benefits of the carbohydrates, protein, micronutrients, and phytochemicals of the seed and can be employed in various meat product formulations. Raw/native pulse ingredients have been successfully incorporated as emulsifiers and binders in processed meat products [117,118]. Color is an important consideration when purchasing uncooked burgers or fresh sausage and can be assessed using instrumental methods and visual observation. According to Fernández-Lopez et al., the red color of fresh meat products is a key quality cue for consumers. One of the most unique functionality benefits of incorporating lentil flour from IR-heated lentils was the improvement of the color stability of fresh beef products during storage, as evaluated by Der. One study set out to define the optimum IR heating conditions of green lentils for maintaining meat product redness. The pulse type, market class, variety, and quantity of IR-treated pulse ingredients used in meat products may also influence color stability, according to Der. More recently, focus shifted to investigating which part of the lentil seed could be attributed to color stability in meat products, as investigated by Li. Myoglobin (Mb) is the main meat pigment that determines the red color of meat. Therefore, maintaining the brown-colored metmyoglobin (metMb), the oxidized form of Mb, at low levels is vital for meat products. There was consensus that the delay in the oxidation of Mb and the stability of the red color in products formulated with heat-treated lentils was related to the inactivation of oxidative enzymes (such as lipoxygenase) and the concomitant increase in antioxidant activity [17,18].

Frozen meat products also benefit from formulating with pulse ingredients, as confirmed by an assessment of lentil ingredients in frozen beef burgers. Similar to the results observed for refrigerated burgers, IR-treated lentil flour exhibited stronger antioxidant effects in beef burgers than toasted wheat crumb (6%), due to the endogenous antioxidants, phenolic compounds, and enzymes; overall, the work by Li supports this interpretation. Oxidation of unsaturated lipids is a major contributor to the loss of quality in both fresh and processed meat, and it continues during refrigerated or frozen storage of the products. The quality of oxidized meat degrades in different ways, including changes in color, flavor, texture, and nutritional value. There is clear evidence that pulse ingredients that have been subjected to IR radiation minimize lipid oxidation in both fresh and frozen meat [17,18]. In lentils, IR heating markedly reduced lipoxygenase activity: reported values fell to 1.3 × 10⁵ units/mL in one study, and from 40.5 × 10⁵ to 0.4 × 10⁵ units per g protein at heating temperatures of 115 °C and 130 °C, respectively, with no significant difference in lipoxygenase activity when the lentil was heated above 130 °C.

Optimum reductions in the lipid oxidation of beef patties with the addition of IR-heated lentil flour appear to be related to the use of higher-temperature IR heating conditions. IR treatment at 130 °C and 150 °C was more effective than 115 °C in delaying oxidation, and this was attributed to the lower (or lack of) residual activity of the oxidative enzymes. The antioxidant properties of the different types of lentil ingredients were found to be at their highest in whole lentil flour and hull fiber derived from heat-treated seeds, which is in agreement with the effects previously described for color retention.

The susceptibility of meat products to lipid oxidation is influenced by a variety of variables, including product type, meat species, fat composition, and other additives. Even though few research studies have been conducted on various meat products, the available evidence indicates that meat products benefit from the application of IR pre-treatment to pulses when they are utilized as an ingredient in the formulation. Burgers are formed meat products made of ground meat mixed with other ingredients and additives, and they may contain varying amounts of fat. In low-fat (<10% fat) beef burgers, IR-treated lentil ingredients have increased the stability of the unsaturated lipids [17,18]. Thus far, most studies on the addition of IR-heated lentil flour have focused on low-fat meat products; recently, Kim and Shand studied higher-fat formulations. The oxidation of myoglobin and of unsaturated lipids in muscle-based foods are closely related, and some studies show inter-dependencies.

A study of emulsion-type meat products, in which lentil flour was assessed as a replacement for phosphates (sodium tripolyphosphate, STPP) in uncured bologna-type chicken sausage, confirmed that lentil flour (6%) was effective in delivering liquid-binding properties similar to synthetic phosphates. Alkaline phosphates are commonly used to improve the water retention and textural properties of cooked meat products by increasing the pH and ionic strength [128,129].

Texture is a multi-parameter sensory property that is linked to the product's structure, mechanical, and surface properties. The type/market class and quantity of the pulse used, as well as the type of product, will determine the overall effect on texture. The evidence available to date, however, shows that the type of pulse or the level of addition of IR-heated pulse flours has little impact on the texture properties beyond that of the untreated flour control. For example, the application of IR-treated lentil flour at 6% to 12% in burgers produced textures comparable to those obtained with a commercial binder (a proprietary product containing 43% starch and 25% protein), despite the fact that the lentil flour was IR treated. The use of pulse ingredients influences the sensorial quality of meat products [133,134].

The appearance/color of meat products is another sensory property that consumers critically consider. Pulses, especially those with colored seed coats, contribute to the product color and may be difficult to mask in meat products. In chicken bologna, the addition of IR-treated lentil flour and seed coat affected the color attributes. This effect was also reflected by the redness (a*) values; a lowering of the red color was observed when lentil components were added to bologna, and a higher redness was observed in the control sample. Pulse ingredients can bring a grainy appearance when seed coat component-rich material is used, as observed by Xu with pork products. Today's food industry is reengineering processed food formulations that can deliver sufficient dietary proteins, consisting of both animal and plant protein sources.
Blending meat and plant-based ingredients is an ideal approach to reduce meat content and incorporate plant proteins and other nutrients, as well as bioactive phytochemicals, into products without compromising the overall nutritional and sensory quality. In general, pulses, including lentils can be blended with meat and bring comparable nutritional profile to that of animal muscles. Lentils, in particular, can be processed to yield several ingredients differing in nutrient/chemical composition and techno-functional characteristics, as illustrated in The enhancement of the antioxidant properties of lentil ingredients was one of the most distinctive functional benefits of IR heating of lentils. This particular improvement resulted in extending the stability of the color and delaying lipid oxidation in both raw and cooked meat products, in either refrigerated or frozen conditions. An increase in oxidative stability in combination with the red color retention (less brown color) of meat products reduces the rejection of meat products at the grocery aisle due to low consumer appeal. The incorporation of lentil flour into meat products increased the liquid-binding capacity and thus the product cook yield as sufficiently as its untreated counterpart. The product cook yield is a significant quality consideration of the cooked, ready-to-eat product offerings at home and at food services. The effect of IR treatment on the textural properties and nutritional composition was minimal, having no detrimental effects compared with the commercial binders used in meat products. IR heat-treated lentil flour is a natural ingredient possessing antioxidant properties that can be used in developing clean-label products. The limitations of lentil flour, especially whole flour, could be the contribution to foreign flavor and, at high addition levels, the masking of the reddish color of the products. Although IR pre-treatment did not show a significant control to these limitations, these effects depend on the level of addition to the product, necessitating studies to optimize the level of addition in meat products without compromising product flavor and taste.Overall, adding IR-treated lentil ingredients promotes meat product yields, oxidative stability, nutritional availability, and sustainability measures associated with animal foods. In the development of lentil-enriched meat products, the use of IR heat treatment widens the applications beyond the primary functions of binders. Introducing this ingredient into plant\u2013meat blended products could bring viability and improve the sustainability of protein-rich food products.The macro-components of pulses provide the necessary binding ability of meat particles in comminuted meat products, stabilization of heat-treated meat-fat emulsions, and absorbent function of expressed liquid from meat particles during heat treatment. Heat pre-treatments, particularly those that use IR heat, have been effective in modifying macromolecules and minor components of pulses, as evident from the research carried out with lentils. The changes brought about by IR pre-treatments have been beneficial to meat products. The prolongation of a fresh red color, lowering of lipid oxidation products, and flavor changes were the most prominent benefits of IR pre-treatment, while the texture-forming properties were generally not affected significantly from those observed with the addition of raw, unheated pulse ingredients. 
IR-treated pulse ingredients, especially lentil-based, may also be used to replace commercial binders such as toasted wheat crumbs, isolated soy protein, and corn starch, as well as synthetic antioxidant compounds, in the transition to clean-label products. Thus, in the development of pulse-enriched meat products, the use of IR heat treatment further widens the applications beyond the primary functions of binders and will be a viable approach toward improving the sustainability of protein-rich food products. The effects of IR heat treatment on the properties of pulses and how they perform in meat products, however, were mostly limited to lentils and chickpeas, especially with whole and dehulled flours in a limited number of meat products. Studies carried out so far show the effects of IR treatment vary depending on the chemical composition of the seed components, seed coats and cotyledons, and thus how they affect the components of meat in product systems. Future research needs to focus on assessing the effects of IR heating on different pulses and their derived ingredients and their impact on a wide range of meat products. The efforts to date have been successful through the cooperation of scientists across a number of disciplines. Expanding on the multi-functional benefits of pulses requires an intimate knowledge of the ingredient functionalities and requirements of the specific meat system."} +{"text": "Have neonatal intensive care resources at hospitals where infants born extremely preterm are delivered changed over the past decade?In this cohort study including 357\u2009181 infants born at 22 to 29 weeks\u2019 gestation between 2009 and 2020, births at neonatal intensive care units (NICUs) with lower levels of care or lower birth volumes increased, while births at NICUs with higher levels of care or higher birth volumes decreased.The findings of this study suggest increasing deregionalization of extremely preterm birth. This cohort study examines the distribution of extremely preterm births between 2009 and 2020 stratified by neonatal intensive care resources at the delivery hospital in the US. In an ideal regionalized system, all infants born very preterm would be delivered at a large tertiary hospital capable of providing all necessary care.To examine whether the distribution of extremely preterm births changed between 2009 and 2020 based on neonatal intensive care resources at the delivery hospital.This retrospective cohort study was conducted at 822 Vermont Oxford Network (VON) centers in the US between 2009 and 2020. Participants included infants born at 22 to 29 weeks\u2019 gestation, delivered at or transferred to centers participating in the VON. Data were analyzed from February to December 2022.Hospital of birth at 22 to 29 weeks\u2019 gestation.Birthplace neonatal intensive care unit (NICU) level was classified as A, restriction on assisted ventilation or no surgery; B, major surgery; or C, cardiac surgery requiring bypass. Level B centers were further divided into low-volume (<50 inborn infants at 22 to 29 weeks\u2019 gestation per year) and high-volume (\u226550 inborn infants at 22 to 29 weeks\u2019 gestation per year) centers. High-volume level B and level C centers were combined, resulting in 3 distinct NICU categories: level A, low-volume B, and high-volume B and C NICUs. 
The main outcome was the change in the percentage of births at hospitals with level A, low-volume B, and high-volume B or C NICUs overall and by US Census region. A total of 357 181 infants were included in the analysis. Across regions, the Pacific (20 239 births [38.3%]) had the lowest while the South Atlantic (48 348 births [62.7%]) had the highest percentage of births at a hospital with a high-volume B– or C-level NICU. Births at hospitals with A-level NICUs increased by 5.6%, and births at low-volume B–level NICUs increased by 3.6%, while births at hospitals with high-volume B– or C-level NICUs decreased by 9.2%. By 2020, less than half of the births for infants at 22 to 29 weeks' gestation occurred at hospitals with high-volume B– or C-level NICUs. Most US Census regions followed the nationwide trends; for example, births at hospitals with high-volume B– or C-level NICUs decreased by 10.9% (95% CI, −14.0% to −7.8%) in the East North Central region and by 21.1% in the West South Central region. This retrospective cohort study identified concerning deregionalization trends in birthplace hospital level of care for infants born at 22 to 29 weeks' gestation. These findings should serve to encourage policy makers to identify and enforce strategies to ensure that infants at the highest risk of adverse outcomes are born at the hospitals where they have the best chances to attain optimal outcomes. We examined regionalization trends in the birthplace NICU level, incorporating NICU volume, from 2009 to 2020 overall and by US region among newborns born at 22 to 29 weeks' gestation.

For this cohort study, the use of VON's deidentified research repository was determined to be exempt from review and informed consent by the University of Vermont's committee for human research because it was not human participants research. This study is reported following the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.

For VON members, NICU level is collected through the annual membership survey and classified as level A, restriction on assisted ventilation or no surgery; B, major surgery; or C, cardiac surgery requiring bypass. Given the strong association between preterm infant admission volume and mortality, level B centers were further divided into low-volume (<50 inborn infants at 22 to 29 weeks' gestation per year) and high-volume (≥50 per year) centers. Since many studies have also found no outcome differences between high-volume B–level vs C-level centers, we combined high-volume B–level and C-level centers, resulting in 3 distinct NICU categories. Member NICUs contribute data from medical records using standardized VON forms. Maternal race and ethnicity were obtained by personal interview with the mother, review of the birth certificate, or medical record, in that order. Race and ethnicity were classified as American Indian, Asian, Black, Hispanic, White, and other. Race and ethnicity were included in the analysis as descriptive characteristics.

In January 2015, VON started collecting the names of the non-VON birth center for outborn infants transferring to a VON center. For these non-VON centers, we obtained the center's level of neonatal care from publicly available resources. Level I centers were coded as having a well-baby nursery; level II, having an equivalent of VON's classification of level A NICU; level III, having a level B NICU; and level IV, having a level C NICU. Centers with a well-baby nursery and those with level A NICUs were combined into 1 group, labeled as level A NICUs (a minimal sketch of this three-category coding follows below).
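Expressed as code, the three-category scheme reduces to a simple mapping. The sketch below is illustrative only; the function and field names (nicu_level, annual_inborn_22_29) are hypothetical and not part of the VON data dictionary.

```python
# Hypothetical helper mirroring the study's 3-category NICU classification.
def nicu_category(nicu_level: str, annual_inborn_22_29: int) -> str:
    """Map a center's VON level (A/B/C) and inborn volume to the analysis category."""
    if nicu_level == "A":                       # restricted ventilation / no surgery
        return "level A"
    if nicu_level == "B" and annual_inborn_22_29 < 50:
        return "low-volume B"                   # <50 inborn infants at 22-29 weeks/year
    return "high-volume B or C"                 # high-volume B and all C centers

print(nicu_category("B", 34))    # -> low-volume B
print(nicu_category("C", 120))   # -> high-volume B or C
```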
For outborn infants born before 2015, we did not know the NICU level of the non-VON center, so we only included infants transferred to VON centers within 3 days of life, because we were more confident in assuming that these infants were being transferred from hospitals with level A NICUs or well-baby nurseries. NICU census regions and divisions were classified according to the US Census Bureau classifications as West (Mountain and Pacific), Midwest (East North Central and West North Central), South (South Atlantic, East South Central, and West South Central), and Northeast (New England and Mid-Atlantic). We estimated the probability of birth in each NICU level, along with 95% CIs. We also calculated the absolute change in the percentage of births, along with the 95% CI, by NICU level between 2009 and 2020 (a simplified illustration of this calculation is sketched below).

We conducted sensitivity analyses to determine the robustness of our findings. The first analysis was restricted to hospitals that were in the sample for the whole period. The second analysis recoded the missing NICU level into low-volume B instead of A for outborn infants transferred to VON centers within 3 days of life. A third analysis used a complete case analysis without recoding the missing level for outborn infants transferred within 3 days of life. P values were 2-sided, and statistical significance was set at P = .05. Statistical analyses were conducted using R software version 4.0.5. Data were analyzed from February to December 2022.

From 2009 to 2020, 322 407 neonates were inborn at 822 VON centers and 39 652 were outborn and transferred to 714 VON centers. Among the inborn infants, 305 866 infants (94.9%) did not die in the delivery room and were admitted to the NICU. Among the outborn infants, 17 464 infants transferred from a birth center with a known NICU level. Among the remaining 22 188 outborn infants, 20 087 (90.5%) were transferred within 3 days of life and were classified as transferred from a hospital with a well-baby nursery or level A NICU, and 2100 (9.5%) were transferred after 3 days of life and were classified as missing birth NICU level.
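Although the study derived its estimates from regression models in R, the headline quantity (the absolute change in the percentage of births, with a 95% CI) can be illustrated with a plain two-proportion Wald interval. The sketch below is a simplified stand-in, not the authors' code, and the counts are invented.

```python
# Wald 95% CI for the change in the share of births in one NICU category
# between two years; k = births in the category, n = all births that year.
import math

def abs_change_ci(k1: int, n1: int, k2: int, n2: int, z: float = 1.96):
    p1, p2 = k1 / n1, k2 / n2
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return 100 * diff, 100 * (diff - z * se), 100 * (diff + z * se)

# Hypothetical counts for births at high-volume B- or C-level NICUs.
change, lo, hi = abs_change_ci(k1=17000, n1=30000, k2=14000, n2=29000)
print(f"absolute change: {change:+.1f}% (95% CI, {lo:+.1f}% to {hi:+.1f}%)")
```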
Among infants born at 22 to 29 weeks\u2019 gestation, most regions followed the nationwide trends among infants born at 32 weeks\u2019 gestation or less.13 This association was even stronger among infants weighing less than 1000 g .13 Another study showed that among newborns younger than 28 weeks\u2019 gestation in a tertiary hospital, birth in a nontertiary hospital was associated with increased odds of mortality , while transfer from a nontertiary to a tertiary hospital in the first 48 hours was associated with increased odds of severe brain injury.27 In addition to the large number of studies on the benefits associated with being born at a hospital with a high-level NICU, an evolving body of evidence emphasizes the importance of NICU volume.25 Collectively, these studies provide strong evidence that the best outcomes are associated with delivery at high-level and high-volume NICUs.27Previous research has shown the importance of birth at a hospital with a high-level NICU for infants who are very preterm or have VLBW.23 For example, a study by Wehby et al23 found that being delivered at hospitals with low-volume NICUs (<50 infants with VLBW per year), compared with high-volume NICUs (>100 infants with VLBW per year), was associated with nearly 2-fold increased mortality odds under the classic risk-adjusted model, compared with more than 5-fold increased mortality odds using the instrumental variable approach.23However, previous studies have underestimated the benefits associated with regionalization and with NICU level and NICU volume. Given that patients who are more severely ill are more likely to deliver at hospitals with high-level and high-volume NICUs, studies conducted using instrumental variables methods that control for this unobserved selection bias show even stronger associations with improved neonatal outcomes when infants with high risk of adverse outcomes are delivered at high-level and high-volume NICUs, compared with low-level and low-volume NICUs.6 Almost all (88%) of the VLBW deliveries at these new NICUs were shifted from hospitals with a high-volume B\u2013 or C-level NICU.6 Thus, this increase in midlevel NICUs was essentially all deregionalization, not improved NICU care access.6We speculate that the increasing share of infants delivered in hospitals with level A or low-volume B\u2013level NICUs is accounted for by a decrease in the share of infants delivered in hospitals with high-volume B\u2013 or C-level NICUs. A study conducted in California examined how delivery location shifted in response to midlevel NICU openings .7 In California, for example, approximately 80% of births that occurred in smaller, lower-level NICUs were located within 25 miles of a large tertiary NICU.12 This deregionalization in perinatal care has occurred due to the wide availability of neonatal intensive care technologies and neonatologists, economic factors and financial incentives derived from installing new NICUs,28 and different state policies that influence regulation of regionalized systems.29 The effects of these different factors have also varied by US region, contributing to the regionalization trends we observed.The proliferation of midlevel NICUs is mainly happening in urban and suburban areas that are already being served by tertiary centers.Between 2009 and 2020, births at high-volume B\u2013 or C-level NICUs decreased between 3.7% and 21.2% and births at level A NICUs increased between 4.2% and 14.0%. 
Births at low-volume B\u2013level NICUs increased across 6 regions, and the increase ranged between 2.3% and 8.1%. Only 2 regions stood out in their regionalization trends: the East South Central and Pacific regions. The East South Central region had a lower regionalization rate at the start of the study and shifted toward the national mean by the end, while the Pacific region had a minor change in the delivery site during the study period because this region had already experienced a decline in the level of NICU care regionalization.30 in the Pacific region occur in California. California had already shown deregionalization trends starting in the 1990s.31 This trend continued and between 1990 and 2001: there were 48 new midlevel units established or upgraded,6 and between 2005 and 2011, the overall percentage of infants with VLBW born at hospitals providing the highest degree of care decreased from 42.5% to 26.5%.19Most births at 22 to 29 weeks\u2019 gestation (75%)32 This resulted in more than 90% of all very preterm deliveries occurring in a hospital with a high-volume NICU, which was associated with decreasing the neonatal mortality rate from 8.1 deaths to 2.7 deaths per 1000 live births.32 On a local level, in the greater Cincinnati, Ohio, region, the implementation of perinatal outreach programs stressing the importance of transfer of mothers with high risk to subspecialty perinatal centers decreased the percentage of infants with VLBW delivered at hospitals without tertiary perinatal care from 25% to 11.8%.33Given the negative outcomes associated with birth at nontertiary or low-volume hospitals, the fact that more than 50% of these infants with high risk were born in hospitals with level A and low-volume B\u2013level NICUs is unacceptable. Previous studies have shown that the delivery of these infants with high risk can be shifted to tertiary centers if there is a systemwide effort. In 1990, Portugal closed all the small maternity units and small NICUs and implemented a system to facilitate maternal transport of high-risk deliveries.34This study has several limitations. One limitation is the missing data on the birth hospital level for a large proportion of outborn infants. We analyzed our data by region, which might mask individual state-level differences. To classify the NICU level for outborn infants, we used publicly available resources, which might have incomplete and inaccurate web descriptions that claim a higher level of care.In this cohort study of neonates born at 22 to 29 weeks\u2019 gestation, we identified concerning deregionalization trends. Given the strong evidence showing risk of worse outcomes when the birth of newborns with high risk does not occur at large tertiary centers, our findings should serve to encourage policy makers to identify and enforce strategies that ensure that infants with the highest risk of adverse outcomes are born at the hospitals where they have the best chance to attain optimal outcomes."} +{"text": "The aim of the study was to develop a computerized method for distinguishing COVID-19-affected cases from cases of pneumonia. This task continues to be a real challenge in the practice of diagnosing COVID-19 disease. In the study, a new approach was proposed, using a comprehensive set of diagnostic information (CSDI) including, among other things, medical history, demographic data, signs and symptoms of the disease, and laboratory results. 
These data have the advantage of being much more reliable compared with data based on a single source of information, such as radiological imaging. On this basis, a comprehensive process of building predictive models was carried out, including such steps as data preprocessing, feature selection, training, and evaluation of classification models. During the study, 9 different methods for feature selection were used, while the grid search method and 12 popular classification algorithms were employed to build classification models. The most effective model achieved a classification accuracy (ACC) of 85%, a sensitivity (TPR) equal to 83%, and a specificity (TNR) of 88%. The model was built using the random forest method with 15 features selected using the recursive feature elimination selection method. The results provide an opportunity to build a computer system to assist the physician in the diagnosis of the COVID-19 disease. In 2019, the coronavirus disease, known as COVID-19, began to propagate, swiftly evolving into a global predicament of considerable magnitude. The COVID-19 outbreak precipitated an unforeseen and distressing scenario, ushering the entire world into the throes of a pandemic and resulting in the tragic loss of thousands of lives. According to the World Health Organization (WHO), COVID-19 has extended its reach to 220 countries and territories, with recorded infections and fatalities exceeding 771 million and around 7 million, respectively, as of 26 September 2023 . This deThe infection caused by severe acute respiratory syndrome coronavirus (SARS-CoV-2) leads to a wide range of clinical manifestations, from completely asymptomatic to severe acute respiratory distress syndrome, which complicates the diagnostic process ,8. The mGenerally, the diagnosing of COVID-19 can be achieved using three different methodologies: real-time reverse transcriptase-polymerase chain reaction (RT-PCR), chest CT imaging scan, and numerical laboratory tests [RT-PCR tests are fairly quick, sensitive, and reliable. The sample is collected from a person\u2019s throat or nose; some chemicals are added to remove any proteins, fats, and other molecules, leaving behind only the existing ribonucleic acid (RNA). The separated RNA is a mixture of a person\u2019s RNA and the coronavirus RNA, if present. The RT-PCR test suffers from the risk of false negative and false positive results. Consequently, the spread of COVID-19 infection has increased because RT-PCR tests cannot immediately distinguish the infected people. Although several studies have observed that the sensitivity of chest CT in the diagnosing of COVID-19 is higher than that of RT-PCR, CTs and X-rays are not accurate tools for diagnosing COVID-19 ,11. TherRecently, machine learning is an adjunct tool for clinicians. Machine learning can automatically support medical diagnosis as a helping tool for identifying and detecting the novel coronavirus ,14,15. HThe Ghaderzadeh M. et al. review study presents a comprehensive assessment of the current status of all models utilized in the identification and diagnosis of COVID-19 using radiology techniques, particularly those relying on deep learning for processing . The resIn the article by Ali R.H. et al., a model for predicting the COVID-19 virus was introduced leveraging feature selection techniques [In the article by Shanbehzadeh M. et al., the dataset comprises 501 case records categorized into two classes: COVID-19 and non-COVID-19. It encompasses 32 columns, each representing diagnostic features . 
VariousThe study described by Huang C. et al. described a diagnostic strategy for COVID-19 known as feature correlated naive Bayes (FCNB) . FCNB enIn the work of Ahamad, Md.M. et al., a model was developed using supervised machine learning algorithms with the aim of identifying the key features that could serve as predictive markers for diagnosing COVID-19 . The feaThe study of Alotaibi A. et al. focused on the early prediction of disease severity using patient history and laboratory data, employing three distinct types of artificial neural networks (ANNs), support vector machines, and random forest regression with various learning methods . FeatureAs presented in article Chauhan H. et al., an artificial neural network served as the diagnostic model for assessing coronavirus positivity . The resIn the article by Srinivasulu A. et al., a function for predicting diseases caused by the COVID-19 virus was introduced . This reThe aim of the study described by Ghaderzadeh M. et al. was to develop an efficient computer-aided detection (CAD) system for the COVID-19 virus using a neural search architecture network (NASNet) . A localIn the study presented by Pourhomayoun M. et al., a dataset of over 2,670,000 laboratory-confirmed COVID-19 cases from 146 countries was used, including 307,382 labeled samples . To addrNew studies are constantly appearing in the literature, presenting various methods of diagnosing COVID-19 and distinguishing it from other diseases that cause changes in the lungs, such as pneumonia. These methods use different parameters from the study of patients. The most popular trend in the diagnosis of COVID-19 is the analysis of X-ray images ,28,29,30Many methods have been provided for COVID-19 detection based on machine learning techniques. Despite the efficiency of these methods, they suffer from many limitations, such as low diagnosis accuracy, high complexity, and long prediction time. The work in this paper is concentrated on providing a new COVID-19 diagnosis system based on NLTs, which have proven to be the most effective methodology for COVID-19 diagnosis. The development of a disease prediction model based on clinical variables and standard clinical laboratory tests is proposed. The clinical variables include demographics, signs and symptoms, laboratory test results, and COVID-19 diagnosis. The variables include age, sex, serum levels of neutrophils, serum levels of leukocytes, serum levels of lymphocytes, reported symptoms , body temperature, and underlying risk factors .The new contribution and value of the conducted study is that the authors have attempted to distinguish the cases of COVID-19 infection from the cases of pneumonia. This is a far more difficult task than distinguishing COVID-19 virus infection from completely healthy cases. The latter task is often described in the literature, while from the point of view of complexity, it is much easier than the one presented. Furthermore, the proposed method of diagnosis is based on a comprehensive set of diagnostic information using multiple sources of information. This approach is more reliable compared with the proposed solutions based on a single source of information such as radiological imaging.The COVID-19 dataset is a real dataset that is used to detect COVID-19 patients. These raw hospital data contain the results of clinical laboratory tests and clinical variables collected from different cases who were admitted to the hospital . 
The clinical variables include demographics, symptoms, laboratory test results, and COVID-19 diagnosis. The variables include blood test results (various blood properties), reported symptoms , body temperature, anamnesis data, and underlying risk factors . The total number of cases in this real dataset is 128. According to this dataset, it is considered two class categories, called COVID cases and non-COVID cases, as shown in The distribution of the cases used in the collected dataset is represented according to age and gender, as shown in The important features of basic information, symptoms , diagnostic results, prior disease, symptom history that are directly or indirectly related to the COVID-19 disease were extracted. The patients did not all develop the same symptoms; symptoms of every patient were found individually. In detail, some keywords for each feature were selected, and then those keywords were matched with text data and the features were extracted individually. However, much of the data was in the form of unstructured text information, which can be difficult to process. The data contained patient symptoms in a text format. Categorical data were converted to dummy variables, because non-numerical data are not allowed in the adopted machine learning algorithm. A data transforming process was applied through transforming the data into numeric forms (\u201c0\u201d and \u201c1\u201d). The true value (\u201c1\u201d) means the existence of the feature on a patient, while the false value (\u201c0\u201d) means the absence of the feature. Lastly, a final dataset, which contained the main features, was generated. In this dataset, most of the values were binary, and several fields were numerical values. Brief descriptions of some features that affect COVID-19 patients are shown in Before the essential steps of machine learning began, the data were preprocessed . This stHandling categorical data. The original data contained a number of categorical features with text labels. In contrast, most machine learning algorithms require numeric feature values to work properly. Therefore, the categorical data were encoded with dummy variables. After this operation, the value of each categorical feature was \u201c1\u201d if there was a symptom associated with the feature, or \u201c0\u201d otherwise.Data imputation. The initial dataset contained missing values (about 10%). Such a situation can cause errors during data analysis. To avoid this, various methods of data imputation are used. One of the most popular methods, referred to as replace with the summary, was applied. Missing categorical values were replaced with the value occurring most often (mode), while continuous values were replaced with the average value.Data resampling. Initially, the dataset consisted of 128 cases. Its structure was such that there were 77 cases (60%) diagnosed with the COVID-19 disease and 51 cases (40%) with diagnosed pneumonia (non-COVID-19). The dataset was unbalanced, which could risk biasing the prediction of the majority class by classifiers built on its basis. To avoid this danger and ensure a balanced number of positive and negative observations, a resampling method known as a synthetic minority oversampling technique (SMOTE) was used. The SMOTE technique is based on the fact that the size of the minority class is increased by generating new synthetic cases by combining all K-nearest neighbors of the minority class using similarities in feature space (Euclidean distance). 
After applying the SMOTE technique, the dataset counted 154 cases, with both classes (COVID-19 and non-COVID-19) having 77 cases each.

Dataset splitting (train/test). The full dataset (154 observations) was randomly divided into a training part (70%) and a testing part (30%). During the division, the same proportion (50%) between observations belonging to both classes was maintained as in the full set. Details of the size of each subset are provided in the accompanying table.

Data rescaling. Ninety-nine feature descriptors were obtained as a result of assessing the patients' diagnostic parameters. Because the feature descriptors were measured on different scales, the features were rescaled using classical standardization: z_ij = (x_ij − x̄_j) / s_j, where x_ij is the value of feature j for observation i; z_ij is the standardized value of feature j for observation i; x̄_j is the mean of feature j; and s_j is the standard deviation of feature j. After standardization, all features were on an interval scale with the standard normal distribution N(0, 1). Scaling was performed once for the training data; the test data were then transformed in the same manner, using the mean and variance vectors obtained during the standardization of the training data for each feature.

Data cleaning. A four-step data cleaning procedure was carried out: (1) removal of features taking constant values; (2) removal of features assuming almost constant values (variance less than 0.01); (3) removal of features assuming duplicate values; and (4) removal of mutually correlated features. For the last step, the Pearson correlation coefficient (r) was used, which detects linear dependencies between features and assumes normality of their distribution; features for which |r| > 0.33, indicating a moderate or strong correlation, were removed.

Feature selection reduces the full set of features to a subset that includes the features that are important for the criterion used. As a result of data cleaning, the full set of 99 features was reduced to 23 features. Such a significant reduction was mainly due to the presence of a large number of features with moderate and strong cross-correlation. Therefore, at the selection stage, an additional reduction was abandoned, and the goal was to build a ranking list for each of the selection methods used. The ranking lists contained the features in an order that reflects their importance in the process of identifying the observations related to one or the other diagnosis. As a result of this processing step, nine training sets and nine test sets were obtained. (A minimal code sketch of these preprocessing steps follows below.)

Nine feature selection methods were used; their acronyms are given in brackets. Filter methods: the univariate Fisher's method (FISHER), the method of analysis of variance (ANOVA), and the multivariate Relief method (RELIEF). Wrapper methods: sequential forward selection (SFS), sequential backward selection (SBS), and recursive feature elimination along with a LogisticRegression estimator (RFE). Embedded methods: the SelectFromModel method with logistic regression (LR) evaluation, with AdaBoost (ADA) evaluation, and with LightGBM (LGBM) evaluation.

The goal set in the research was to solve a binary classification task, which belongs to supervised learning problems. To give a general formulation of learning with a teacher: let X be the general set of objects and Y the set of possible labels of objects, connected by some unknown dependency y: X → Y. The task of supervised learning is to construct an algorithm C that approximates the dependency y on a known training sample T = {(x_1, y_1), …, (x_N, y_N)} according to some measure of quality Q. In a classification problem, each object is described by some discrete variable that the algorithm tries to predict, that is, the class of the object. The simplest classification task is binary classification, in which there are two classes of objects.
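The preprocessing chain described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors' code: the toy DataFrame and its columns are invented, SMOTE is taken from the third-party imbalanced-learn package (the paper does not name its implementation), and a single imputation strategy stands in for the paper's mode/mean pair.

```python
# Minimal sketch of the preprocessing pipeline (hypothetical toy data).
import numpy as np
import pandas as pd
from imblearn.over_sampling import SMOTE          # assumes imbalanced-learn is installed
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "age":         [52, 61, np.nan, 45, 70, 58],
    "fever":       ["yes", "no", "yes", np.nan, "yes", "no"],
    "neutrophils": [4.1, 7.8, 5.2, np.nan, 6.0, 4.9],
    "label":       [1, 1, 1, 1, 0, 0],            # 1 = COVID-19, 0 = pneumonia
})

# 1) Encode categorical symptoms as 0/1 dummy variables.
df = pd.get_dummies(df, columns=["fever"], drop_first=True, dtype=float)

# 2) Impute missing values (simplified to one strategy for this sketch).
features = df.drop(columns="label")
X = pd.DataFrame(SimpleImputer(strategy="mean").fit_transform(features),
                 columns=features.columns)
y = df["label"]

# 3) Balance the classes with SMOTE (synthetic minority oversampling).
X_bal, y_bal = SMOTE(k_neighbors=1, random_state=0).fit_resample(X, y)

# 4) Stratified 70/30 train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_bal, y_bal, test_size=0.3, stratify=y_bal, random_state=0)

# 5) Standardize: fit on the training data only, reuse the fitted scaler on test data.
scaler = StandardScaler().fit(X_tr)
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)

# 6) Flag one feature from every pair with |Pearson r| > 0.33.
corr = pd.DataFrame(X_tr_s, columns=X.columns).corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > 0.33).any()]
print("correlated features to drop:", to_drop)
```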
During the study, the grid search method was used in the training process, which made it possible to determine the optimal values of the hyperparameters of each model. The GridSearchCV method, available in the model_selection module of the scikit-learn library, was used for this purpose. The models were evaluated based on 10-fold cross-validation. Each model that was considered optimal in terms of a given number of features was saved as a file on disk. Then, the best model was selected from the available set; such a model is referred to as optimal below.

Twelve different supervised learning methods were used to build the classifiers; their acronyms are given in brackets. Methods to build a single model: linear discriminant analysis (LDA), logistic regression (LR), support vector machines with the C regularization parameter (SVM), support vector machines with the nu parameter to control the number of support vectors (NuSVM), K-nearest neighbors (KNN), decision trees (DT), and multilayer perceptron (MLP). Ensemble methods: random forest (RF), gradient boosting (GRADBoost), AdaBoost model combination (ADABoost), eXtreme gradient boosting (XGBoost), and light gradient boosting machine (LGBM).

The basis for calculating the model quality metrics was the confusion matrix, whose elements count the true positive, true negative, false positive, and false negative predictions. The evaluation of the models using the test set resulted in the calculation of overall classification accuracy (ACC), sensitivity (TPR), specificity (TNR), precision (PREC), F1 index (F1), and area under the ROC curve (AUC). The number of features in the ranking lists built by the separate methods was as follows: filter and wrapper methods, 23; embedded methods: logistic regression, 8; AdaBoost, 6; LGBM, 23.

In the process of training the classification models, nine training sets (one for each selection method) were used, arranged in accordance with the ranking of features. It should be emphasized that in this scenario of the study, the full set of features returned by a given selection method was used. While the models were being trained, the number of features in the training set changed according to the ranking list: for the filter and wrapper methods, this number varied from 2 to 23; for logistic regression, from 2 to 8; for the AdaBoost method, from 2 to 6; and for the LGBM method, from 2 to 23.

The criterion for selecting the best model was the same as for selecting the optimal model, that is, the highest validation accuracy with the minimum number of features in the training set. For the RF model, the validation accuracy was 0.92, and 21 features were used. For all filter and wrapper methods, the model selection procedure previously characterized for the Fisher method was used. In the case of embedded methods, only those classifiers that were used in the selection process were constructed and configured. When several learning algorithms were used, many different models were evaluated; a minimal sketch of this training and evaluation procedure follows below.
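As an illustration of the procedure just described, the self-contained sketch below selects 15 features with RFE (using a LogisticRegression estimator), tunes a random forest with GridSearchCV under 10-fold cross-validation, and derives the reported metrics from the test-set confusion matrix. The synthetic data and the grid values are placeholders, not the study's; only the use of GridSearchCV from sklearn.model_selection is taken directly from the text.

```python
# Sketch of the RFE-RF pipeline with 10-fold grid search and confusion-matrix
# metrics; data are synthetic stand-ins for the 154-case, 23-feature dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=154, n_features=23, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

# Recursive feature elimination with a LogisticRegression estimator (RFE).
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=15).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = rfe.transform(X_tr), rfe.transform(X_te)

# Hyperparameter tuning with GridSearchCV and 10-fold cross-validation.
grid = {"n_estimators": [100, 300], "max_depth": [None, 5]}   # illustrative grid
search = GridSearchCV(RandomForestClassifier(random_state=0), grid,
                      cv=10, scoring="accuracy").fit(X_tr_sel, y_tr)
model = search.best_estimator_

# Test-set metrics derived from the confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te_sel)).ravel()
acc  = (tp + tn) / (tp + tn + fp + fn)    # ACC, overall accuracy
tpr  = tp / (tp + fn)                     # TPR, sensitivity
tnr  = tn / (tn + fp)                     # TNR, specificity
prec = tp / (tp + fp)                     # PREC, precision
f1   = 2 * prec * tpr / (prec + tpr)      # F1 index
auc  = roc_auc_score(y_te, model.predict_proba(X_te_sel)[:, 1])  # AUC
print(f"ACC={acc:.2f}  TPR={tpr:.2f}  TNR={tnr:.2f}  F1={f1:.2f}  AUC={auc:.2f}")
```

The selector-plus-classifier combination sketched here corresponds to the study's best-performing configuration, RFE-RF-15.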
From these, the most effective model was selected, which could then be used to forecast future new data. It should be added here that the parameters of the procedures used for data preprocessing, such as feature scaling or dimensional reduction, were determined solely on the basis of the training set. Then, they were used to transform test data and new data. The following models had the highest classification accuracy for the test sample: F-RF-21 (ACC = 85%), R-RF-23 (ACC = 85%), SBS-RF-22 (ACC = 85%), and RFE-RF-15 (ACC = 85%). The accuracy of testing other models was not much worse, and fluctuated between 81% and 83%. The results of the models\u2019 validation are listed in The classification accuracy of all the models for the test set exceeded the level of 80%. It should also be noted that almost all models achieved a higher TNR than TPR. The high quality of the four models mentioned above is also confirmed by the ROC curves in The RFE-RF-15 model was created using 15 features. In Among the advantages of the conducted study is that, unlike many studies presented in the literature, the obtained results are not on distinguishing the cases of COVID-19 disease infection from healthy cases, which seems like an easy task, but on distinguishing the cases of COVID-19 infection from cases of pneumonia. This task is definitely more difficult and remains a real challenge in the practice of COVID-19 disease diagnosis. The information obtained through medical history, demographic data, signs and symptoms of the disease, as well as laboratory test results was used. These data form a comprehensive set of diagnostic information that is far more reliable than any other data, such as those based only on radiological imaging. In contrast, some drawbacks of the conducted study include the relatively small number of observations used (128) and the unbalanced original dataset, which necessitated the use of the resampling technique.Comparing the obtained results with studies published in the literature, significant differences related to the material used in the research can be seen. One of the best results presented in the literature is the combination of chest x-ray and CT images proposed in . The utiStudies published present a classification method that leverages radiomics features extracted from CT chest images to distinguish the patients with COVID-19 from those with other forms of pneumonia . For theThe study by Zhao B. et al. tested the usefulness of vibrational spectroscopy combined with machine learning for early screening of COVID-19 patients . The autAnother avenue in COVID-19 diagnostics involves laboratory tests, which have garnered substantial validation in numerous studies for their significant diagnostic efficacy. Routine blood tests present a cost-effective and rapid means of COVID-19 detection. These precise algorithms can prove highly effective, especially during the peak of a pandemic when hospitals are inundated with patients. The research presented by Chadag K. et al. shows that the use of machine learning methods allows achieving satisfactory diagnostic results . In thisA novel diagnostic model for COVID-19 has been developed that significantly improves the overall prediction accuracy . The datWhen analyzing the studies presented in the literature, it can be seen that the use of clinical and laboratory data has great diagnostic potential. 
The number of publications proves the intensive search for the algorithms that give better and better results and the importance of this diagnostic method. The results presented in this article do not differ in effectiveness from others presented in the literature. This proves the potential of the method used. This method can become an alternative to expensive imaging tests .The main result of the study corresponds to the feature vectors obtained on the basis of numerical clinical laboratory tests, which, together with the proposed classification methods, can be an element of a computer system that supports the doctor in the diagnostic process. This system will allow the classification of analyzed cases into patients who need to detect cases of COVID-19. Nine of the constructed models achieved classification accuracy at a level exceeding 80%. On the basis of the results obtained, two models were adopted as a proposal for the final solution of the problem of automatic data classification, supporting the doctor in the diagnosis of the infectious disease COVID-19. They are RFE-RF-15 and E-ADA-4, since these models had higher accuracy than other models and worked with fewer features. The first is a classifier built using the random forest. This model uses 15 features selected using the recursive feature elimination along with logistic regression estimator (RFE) method. For the data belonging to the test set, this resulted in accuracy of 0.85 (85%), sensitivity of 0.83, and specificity of 0.88. The second model was built using the SelectFromModel method with AdaBoost classifier, which resulted in accuracy of 0.83 (83%), sensitivity of 0.83, and specificity of 0.83. This model uses four features.nu parameter to control the number of support vectors (NuSVM), K-nearest neighbors (KNN), decision trees (DT), and multilayer perceptron (MLP). In addition, five ensemble methods were used: random forests (RF), gradient boosting (GRADBoost), AdaBoost model combination (ADABoost), eXtreme gradient boosting (XGBoost), and light gradient boosting machine (LGBM).The study used three filter methods (univariate-Fisher\u2019s method (FISHER), the method of analysis of variance (ANOVA), and multivariate-relief method (RELIEF)), three wrapper methods , sequential backward selection (SBS), and recursive feature elimination along with LogisticRegression estimator (RFE)), and three embedded methods (SelectFromModel method with logistic regression (LR) evaluation, SelectFromModel method with AdaBoost (ADA) evaluation, and SelectFromModel method with LightGBM (LGBM) evaluation). Each method of feature selection selected a different number of features. Twenty-three features were selected by the filter and wrapper methods; with the embedded methods the results were as follows: logistic regression\u20148 features, AdaBoost\u20146 features, and LGBM\u201423 features. The features revealed by the filter and wrapper methods were used by 12 supervised machine learning methods. These were: linear discriminant analysis (LDA), logistic regression (LR), support vector machines (SVM), support vector machines with the The plan for further work in the scope presented in the dissertation includes attempts to create a complete system for diagnosing infectious COVID-19 diseases based on clinical and laboratory data. First of all, the developed algorithm needs to be tested on a larger number of patients. This is one of the most problematic issues when looking for new solutions in medical diagnostics. 
Difficulties in accessing relevant data-protected patient databases and finding results that meet strict criteria usually force investigators to conduct studies on fewer samples. When building systems that aim to create solutions commonly used in healthcare settings, extensive testing must be performed to ensure that the system works correctly and fulfills its diagnostic roles and procedures. Another important aspect of the process of creating a tool for accurately diagnosing COVID-19 infectious diseases is the inclusion in the predictive system of a greater number of classes to which the test samples are assigned.The authors intend to continue the research presented in this paper. Within this framework, collecting a larger and more balanced dataset is planned. In addition, the CSDI collection will be expanded to include textural features of patients\u2019 radiological images, which are likely to improve the predictive accuracy of the diagnostic models. To support data collection in a database that integrates diverse diagnostic information, a dedicated computer application will be built. The obtained results can be used to solve the problems of selecting informative features and classifying laboratory and clinical data in medicine and biology."} +{"text": "Artificial intelligence (AI) has the potential to transform medical research by improving disease diagnosis, clinical decision-making, and outcome prediction. Despite the rapid adoption of AI and machine learning (ML) in other domains and industry, deployment in medical research and clinical practice poses several challenges due to the inherent characteristics and barriers of the healthcare sector. Therefore, researchers aiming to perform AI-intensive studies require a fundamental understanding of the key concepts, biases, and clinical safety concerns associated with the use of AI. Through the analysis of large, multimodal datasets, AI has the potential to revolutionize orthopaedic research, with new insights regarding the optimal diagnosis and management of patients affected musculoskeletal injury and disease. The article is the first in a series introducing fundamental concepts and best practices to guide healthcare professionals and researcher interested in performing AI-intensive orthopaedic research studies. The vast potential of AI in orthopaedics is illustrated through examples involving disease- or injury-specific outcome prediction, medical image analysis, clinical decision support systems and digital twin technology. Furthermore, it is essential to address the role of human involvement in training unbiased, generalizable AI models, their explainability in high-risk clinical settings and the implementation of expert oversight and clinical safety measures for failure. In conclusion, the opportunities and challenges of AI in medicine are presented to ensure the safe and ethical deployment of AI models for orthopaedic research and clinical application.Level of evidence IV However, increasingly sophisticated applications in engineering, business, and industrial sectors have shown the rapid technological advancement and maturity of AI, with a growing interest for implementation in medical research, and the healthcare sector [Artificial intelligence (AI) is set to transform the landscape of medical research with innovative approaches to improve disease detection, clinical decision-making, and outcome prediction. The majority of medical research conducted throughout 20e sector , 37. Acce sector , 31. 
In recent years, the growing availability of healthcare data and the increasing maturity of AI as a technological tool initiated a gradual transformation of the medical research landscape. Patient registries containing granular information about the demographics and therapeutic interventions of numerous patient populations present new avenues for research in the age of big data. Electronic medical records permit the storage and traceability of data collected over the entire duration of medical treatment for patients with different medical conditions, including patient history, physical examination results, diagnostic images, interventions and outcomes over time. Artificial intelligence has the potential to revolutionize medical research by enabling rapid and accurate analysis of vast amounts of data, containing demographic, genetic, clinical, surgical, and rehabilitation-specific information or a combination of these from thousands of patients, in pursuit of patterns associated with specific diseases or conditions. Furthermore, many AI systems possess the ability to detect patterns, trends and connections that may not be easily recognized by humans, potentially leading to new clinical insights and breakthroughs in disease prevention, diagnostics and treatment. Analysis of large datasets, often with multimodal data content, would be tedious and inefficient with the statistical methods currently employed in medical research. Applications of AI can be useful in a broad range of research scenarios with far-reaching potential for clinical utility. The aim of this section is to provide the reader with a broad overview of areas with vast potential in orthopaedics using existing examples from AI-intensive medical research. The continuous growth in the availability of high-quality medical data presents new avenues in the analysis of information derived from the results of clinical trials and national patient registries. While the clinical implementation of AI-driven predictive algorithms is still in its nascency, their potential is demonstrated by several use cases in the current literature. One notable example is a clinical calculator for ACL revision risk prediction, developed with ML models applied to data from the Norwegian Knee Ligament Registry. Image analysis is perhaps the most well-known application of AI in medicine. The ability of ML algorithms to perform classification and pattern recognition tasks when trained on radiographic images has led to the proposal of numerous useful applications across fields such as histopathology, dermatology, cardiology, ophthalmology, and radiology. Promising applications of AI and imaging technologies in these fields include the detection and grading of prostate cancer based on digitalized prostate biopsy slides, among other automated diagnostic tasks. In orthopaedics, AI-based image analysis applications have primarily made an impact on diagnostics, surgical planning, and implant design in traumatology, arthroplasty, and spine surgery. While similar approaches are currently underutilized in sports medicine, momentum is increasing in imaging applications for soft-tissue injury detection. A recent study demonstrated excellent diagnostic performance of an ACL tear detection ML algorithm trained on approximately 20,000 magnetic resonance images (MRI), with similar success after external validation on patient groups from two different countries.
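As a point of reference for this kind of image-analysis work, the sketch below shows how a binary MRI-slice classifier can be structured in PyTorch; it is not the cited study's model, and the architecture, input size, and dummy batch are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TearDetector(nn.Module):
    """Toy CNN emitting a single logit for 'tear present' on one MRI slice."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),              # -> (batch, 32, 1, 1)
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):                         # x: (batch, 1, H, W) grayscale
        return self.classifier(self.features(x).flatten(1))

model = TearDetector()
logits = model(torch.randn(4, 1, 224, 224))       # dummy batch of 4 slices
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])       # dummy tear / no-tear labels
loss = nn.BCEWithLogitsLoss()(logits.squeeze(1), labels)
loss.backward()                                   # one illustrative training step
```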
The broad categories and types of data and ML models have led to advances in the implementation of multimodal AI\u00a0systems. Currently, evaluation of the efficiency and efficacy of medical interventions relies on time-consuming clinical trials, registry studies, and small-scale clinical investigations. While the results of clinical trials are considered the gold standard of evidence synthesis, the clinical benefit of certain medical interventions may vary among individuals in a population. The digital twin is a concept adopted from engineering, and consists of a virtual representation of a real-world physical entity, such as an object, a system, or a patient. The European AI Act, established in 2022, proposes a risk-based approach to the regulation of AI systems, and characterizes medical applications as high-risk. In the context of AI, provenance refers to the origins and history of a particular dataset or model. Provenance comprises information about how the data was collected, who collected it, where it was collected, and any transformations or modifications that were applied to it. Provenance is important in AI because it can help ensure that the data and models being used are reliable and trustworthy, and it can also help identify potential biases or errors in the data. Provenance in AI-based medical research is essential to build the trust required for clinical implementation of decision support systems and prediction tools, and to enable the design of replicable and transparent studies using AI. A hypothetical clinical decision support system designed to help clinicians optimize the treatment of patients with anterior cruciate ligament (ACL) injury can serve as an example to illustrate the role of provenance. Research studies testing the validity of such a system will need to disclose the origin of the data the AI model was trained on, including the characteristics of the population, the types of variables collected, the timeframe of data collection, and potential sources of bias, to name a few. Furthermore, the decision support system will require a detailed description of the data processing pipeline, model selection process, statistical analysis, methods applied to train, test, and validate models, as well as the parameters used to fine-tune the decision support system. Another important step is to disclose the metrics used for the assessment of model performance. While seemingly a tedious task, ensuring provenance is necessary to meet the high standards required for the safe and reliable implementation of AI-driven medical research.
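A provenance disclosure of the kind described above can be kept machine readable; the following minimal sketch uses invented field names and values rather than any formal standard:

```python
# Illustrative provenance record for a hypothetical ACL decision support model.
provenance = {
    "dataset": {
        "origin": "hypothetical multicenter ACL-injury registry",
        "collection_period": "2012-2022",
        "population": "primary ACL reconstructions, ages 15-55",
        "known_biases": ["single geographic region", "few elite athletes"],
    },
    "preprocessing": ["removal of incomplete records", "z-score normalization"],
    "model": {
        "type": "gradient-boosted trees",
        "selection": "nested 5-fold cross-validation",
        "tuning": {"learning_rate": 0.05, "n_estimators": 400},
    },
    "evaluation": {
        "metrics": ["AUROC", "calibration slope"],
        "external_validation": "independent cohort from a different country",
    },
}
```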
One of the major concerns with the ability of AI systems to predict events is that the steps taken by certain models to reach predictions are often inaccessible. This characteristic, termed black-box decision making, results in an inability for human observers to explain model output in terms of the original input variables. This feature is particularly problematic for medical applications, as current decision-making systems are based on empirical rules, which allow human interpreters to trace the logic behind the reasoning that leads to a certain outcome. This currently accepted and transparent approach enables humans to learn from systems and, perhaps as importantly, to detect and rectify errors and biases in the system, which may otherwise lead to false conclusions and even dangerous consequences. While methods have been proposed to improve the explainability of ML models, their implementation may not be feasible with all data types. Consequently, future AI-intensive medical research should focus on enhanced human interpretability, with the conversion of insight provided by the model into tangible knowledge that mirrors that of medical experts, with potential avenues for error detection. White-box ML models, aptly named to show the contrast with black-box models, provide a broken-down explanation of the steps taken to reach a conclusion, with insights into how the input data was used throughout the decision process. As previously discussed, training AI systems on high-quality datasets is a major requirement for clinical adaptation. However, even models trained with the most attention to detail and with carefully curated data may not be universally applicable to every clinical setting. What happens when AI\u00a0systems encounter unexpected changes in clinical context? Some examples of this phenomenon may be obvious, such as the erroneous prediction of ACL rerupture risk in female downhill skiers by a system that was trained predominantly on male football players. However, a more subtle example may be the poor reproducibility of ACL rerupture risk prediction in patients from one country based on a model trained on registry data from another, with different demographics, injury profiles, and surgical techniques. The inability of AI\u00a0systems to adapt to new situations, termed distributional shift, is a central problem for the universal application of models across different settings, and may be influenced by countless forms of selection bias that are difficult for researchers to foresee. Recent developments towards standardizing the reports of AI-intensive research include the CONSORT-AI and STARD-AI guidelines. While the deployment of AI systems opens exciting possibilities in medical research, mitigation of the potential risks of false predictions will be an essential task in the ensuing years. Navigation between models that produce truthful versus misleading outputs may present unique challenges. An important question is the role of human involvement in the training phase of models used in AI systems. While medical research is heavily rooted in evidence-based thinking and expert consensus, it is also prone to human error and bias. Consequently, excessive human supervision in AI-driven research may force AI systems to make errors akin to those made by human reasoning. However, it is also clear that black-box models preclude the explainability required for the implementation of AI systems in high-risk clinical settings. This presents an important dilemma with practical and philosophical implications. One approach to solving complex research questions is to entrust models built on ground truths founded on human clinical knowledge and existing evidence. The advantage of such supervised learning is that truths are derived using representations comprehensible to humans, which in turn allows human assessment for correctness. Alternatively, certain models are capable of a more intuitive approach, with ground truths based on knowledge derived implicitly by the model, without human supervision of the learning process. In turn, an unsupervised learning approach can provide the benefit of superior pattern recognition and complex, intuitive reasoning at the cost of human interpretability and assessment of the clinical relevance of the underlying logic.
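As a concrete illustration of the supervised, human-interpretable end of this spectrum, the sketch below trains a shallow white-box decision tree on a public dataset and prints its decision rules; it is a toy example, not a clinical model:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Public benchmark data stand in for clinical variables.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Every prediction can be traced through explicit, human-readable thresholds.
print(export_text(tree, feature_names=list(X.columns)))
```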
Future research will be required to reconcile supervised and unsupervised approaches in medical AI system development, and to ensure explainability and truthfulness. The boundaries of the safe and ethical use of AI in orthopaedic research remain to be determined. In the long term, over-reliance on AI-driven algorithmic diagnosis, risk prediction, and prognostics may erode the critical thinking skills considered so essential for clinical medical practice today. As in a broad range of other industries and scientific domains, careful planning will likely be required to strike the appropriate balance between human- and AI-driven innovation in orthopaedics and sports medicine. While AI will likely exceed human performance in areas such as data analysis, pattern recognition, and decision-making, the goal of clinicians and researchers will be to identify and execute innovative AI-driven applications in medicine and thereby enhance the quality of patient care. The aim of subsequent parts of this learning series is to supply readers with the competence to design and implement AI-driven research projects through proficiency in the following topics: a fundamental technical introduction to AI and ML for orthopaedic researchers, with a focus on the potential approaches to be used in medical research; familiarity with the current state of AI in medical research and understanding of the potential benefit conferred by AI in orthopaedics; approaching hypotheses and research questions in orthopaedic research using AI methods and requirements for interdisciplinary collaboration; data management for AI-driven orthopaedic research projects; understanding and interpreting the output of ML models and AI\u00a0systems; end-product verification, safety in clinical use, and regulatory concerns; and a comprehensive checklist with regard to the previous principles to guide implementation of AI-driven research in orthopaedics."} +{"text": "The application of artificial intelligence (AI) technologies in screening and diagnosing retinal diseases may play an important role in telemedicine and has potential to shape modern healthcare ecosystems, including within ophthalmology. In this article, we examine the latest publications relevant to AI in retinal disease and discuss the currently available algorithms. We summarize four key requirements underlining the successful application of AI algorithms in real-world practice: processing massive data; practicability of an AI model in ophthalmology; policy compliance and the regulatory environment; and balancing profit and cost when developing and maintaining AI models. The Vision Academy recognizes the advantages and disadvantages of AI-based technologies and gives insightful recommendations for future directions. The initial concept of artificial intelligence (AI) was first coined as far back as 1956. In recent years, many studies have applied AI algorithms to the most modern retinal imaging modalities, including optical coherence tomography, to detect or quantify an array of retinal features of interest. Such AI-enabled technologies have potential to be implemented into clinical practice in several ways. The use of AI technologies to screen or classify retinal diseases may play a role in telemedicine. They may also assist healthcare providers with greater speed, repeatability, reproducibility, and consistency than human graders.
Uniting clinicians with AI systems has been proven to be synergistic, achieving better performance than either alone. Academic institutions and technology companies (e.g. Google) increasingly engage in AI research and boost their investment and involvement in this field. The key issues for deploying AI technologies in telemedicine or healthcare systems may have a profound and lasting influence on near-future practice in ophthalmology. In this article, we summarize four key requirements surrounding the application and execution of AI-enabled technology for diagnosis and screening in retinal diseases in real-world practice. Informing and operationalizing an AI healthcare system includes processing large data sets, practicability in ophthalmology, policy compliance and the regulatory environment, and balancing profit and cost in adopting AI-enabled technologies. Data processing is crucial before developing an AI model. It includes data standardization, data sharing systems, and the maintenance of data privacy in the infrastructure of AI systems. The high dependency of modern ophthalmology on imaging makes it an attractive field for the development of AI models. However, the diversity of proprietary devices, image acquisition, and data storage processes poses a barrier to research teams. The need for data standardization has become pivotal, not only for expanding the scale of AI models but for providing more effective ways to achieve clinical benefits. In 2021, the American Academy of Ophthalmology suggested that manufacturers of ophthalmic devices should standardize the format of digital images, the integration of medical data, and picture archiving to comply with the Digital Imaging and Communications in Medicine standards, developed by the American Academy of Ophthalmology in collaboration with manufacturers. As of 2021, nearly 94 ophthalmic data sets containing more than 500 000 images were openly accessible. The shared data should be deidentified and anonymized to comply with privacy and cybersecurity frameworks. Data privacy poses challenges in technological, legal, and ethical fields. It can be difficult to precisely define data privacy because traditional deidentification is vulnerable to linkage attacks from third parties. Data privacy may also be addressed by collating all relevant data into trusted research environments or by data decentralization. In federated analysis, the algorithmic code is sent to each data site for individual analysis; then the results are brought back to the central site for aggregation and further analysis.
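The federated pattern described above can be schematically sketched as follows; the per-site data, model choice, and unweighted parameter averaging are illustrative assumptions rather than a production federated-learning protocol:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def local_fit(seed: int):
    # Stand-in for one site's private dataset; the raw data never leaves the site.
    X, y = make_classification(n_samples=300, n_features=10, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return model.coef_, model.intercept_          # only parameters are shared

# "Central site": aggregate the returned coefficients across three sites.
results = [local_fit(seed) for seed in (0, 1, 2)]
global_coef = np.mean([coef for coef, _ in results], axis=0)
global_intercept = np.mean([b for _, b in results], axis=0)
print(global_coef.shape, global_intercept)
```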
In 2021, the American Academy of Ophthalmology Committee on Artificial Intelligence raised three ethical concerns: transparency, meaning the adequate explanation or interpretation of the AI model; responsibility, which addresses moral or legal concerns; and scalability of implementation of AI models, which depends on equality of data distribution and potential systemic bias in AI models. As the field of ophthalmic AI evolves, the quality of reporting of results from different AI systems may be discrepant and incomprehensive. It is necessary to have consensus in determining adequate description, translation, and appraisal of ophthalmic AI research to ensure robust algorithms and generalizability into real-world settings. In 2020, CONSORT-AI and SPIRIT-AI were announced to provide reporting guidance in AI-related clinical trials. Subsequently, other reporting guidelines further emphasized the transparency of AI technologies in healthcare. These have presented new standards for evaluating the results from AI-related clinical trials. Healthcare equity has been a major concern due to the imbalanced distribution of resources between urban and rural areas. The deployment of AI screening algorithms, accompanied by a well-established cybersecurity infrastructure, including cloud-based systems or even home-based devices, can facilitate the adoption of AI technologies and reduce the medical resources gap between urban and rural areas. On the contrary, the quality of data-driven technologies may be affected by the inequality of data distribution, which reduces the generalizability of the AI models to specific populations due to the scarcity of related data. Overemphasizing an AI system without carefully considering the condition of health data poverty has potential to cause harm. Regulatory considerations for AI medical devices or software should include data security and sourcing, the design and development of algorithms, and evidence generation from AI-enabled technologies. Governmental regulatory bodies should provide clear guidance regarding the evidence requirements for AI medical devices and should streamline the processes of training, continuous education, and relicensing. For data privacy and confidentiality, in 2016 and 2017 the EU introduced the General Data Protection Regulation 2016/679 (GDPR). For adopting and marketing AI-based instruments and algorithms, different regulatory groups are responsible for ensuring the security and safety of the products. The U.S. Food and Drug Administration (FDA) announced the Digital Health Innovation Action Plan to streamline the premarket review and to outline its approach to AI-based frameworks, known as \u201csoftware as a medical device.\u201d In contrast to the United States, which adopts more market-oriented regulations, the EU takes a more customer-oriented approach to building the framework for AI-enabled technologies, with the Conformit\u00e9 Europ\u00e9enne playing a critical role in the licensing of AI products. For the adoption of AI technologies, achieving a practical balance between profit and cost is another important issue. Premarketing costs include significant effort and workforce in data collection, research development, and validation. Postmarketing costs include upgrading software, sustaining hardware over the long term, training operators, and incorporating new patient data. The cost of adopting AI-enabled technologies should be balanced between the manufacturing price and reimbursements, as it is borne by and affects healthcare providers. In the example of Singapore's national DR screening programs, cost savings of approximately U.S. $21.9 million were achieved for a group of 170 000 patients with diabetes who underwent AI-assisted screening. Current AI-enabled systems with regulatory compliance are outlined in the accompanying table. The Vision Academy recognizes the advantages of AI technology and recommends that its use be of additive and synergistic value to current standards of care.
In terms of applying such technologies in diagnosing and screening retinal diseases, we summarize the following directions and emphasize several viewpoints important for the future. The integration of metadata, including multimodal images and structured clinical information from multiple data sets with different ethnic groups, and the establishment of a data processing and sharing system will empower data-driven AI technologies in ophthalmic practice. Ongoing research will be needed to build up data storage and sharing systems within a cybersecurity framework for broader use. Because retinal images possess biometric information that could be reidentified by AI technologies, care should be taken when collecting and processing these images. Some novel learning tasks can obscure biometric information or even provide unsupervised models for small-scale data sets. The question of how to universalize data formats will be one of the key factors for extending the scalability and generalizability of AI-enabled technologies. The complexity and inexplicability of AI are encompassed in the term \u201cblack box phenomenon.\u201d Black box algorithms have potential to cause misuse of AI in healthcare ecosystems. The role of AI-enabled technologies in the real world is not to replace ophthalmologists but to assist them, hybridizing AI models and human experience to make more efficient and accurate decisions. Such time-saving abilities could streamline medical procedures and give clinicians more time to communicate with their patients. Improper implementation of AI could be harmful to doctor\u2013patient relationships and could affect patients\u2019 trust if AI algorithms were used only for improving workflow but not patient care. A key hurdle in deploying AI-enabled technologies in clinical practice is the fear of making an incorrect decision and harming patients. Legal liability should be well defined as the implementation of AI becomes more widespread. Such liability should extend only to the precise claim of screening for targeted diseases: unlike retinal specialists, the developers of an AI model should only be liable for the designed algorithm for screening specific diseases, while healthcare providers should still take full responsibility for being aware of the capacity of AI models. The legal boundaries between developers and healthcare providers are still unresolved, and legislative and governance systems need to be further established to refine liability rules and the regulatory environment. Policies and specific authorities should be set up not only for the verification of AI models but also for data security and legal liability. Cross-sector and cross-disciplinary collaborations will be important to ensure the integrity of AI healthcare ecosystems. Continuing education, promotion of practical application, and user-friendly, understandable interfaces for healthcare providers are equally important to streamline the workflow and broaden the applicability of AI systems. Communication and collaboration between cross-functional teams, including ophthalmologists, optometrists, computer scientists, statisticians, data scientists, patient organizations, and engineers, can have a positive impact on vision health and preservation through AI-enabled technologies. The establishment of AI-enabled technologies may have potential to improve the efficiency of existing healthcare pathways, provide better patient-centered services, minimize the impact of labor shortages, and bridge the gap between urban and rural areas.
However, no advancement in clinical practice is flawless, so it is necessary for healthcare providers and legislators to be aware of the limitations of AI-enabled devices.Editorial assistance was provided by Elle Lindsay, PhD, Macha Aldighieri, PhD, and Rachel Fairbanks, BA (Hons), of Complete HealthVizion, Ltd, an IPG Health Company, funded by Bayer Consumer Care AG, Pharmaceuticals Division, Basel, Switzerland.The Vision Academy is a group of over 100 international ophthalmology experts who provide guidance for best clinical practice through their collective expertise in areas of controversy or with insufficient conclusive evidence. The Vision Academy is funded and facilitated by Bayer. The opinions and guidance of the Vision Academy outputs are those of its members and do not necessarily reflect the opinions of Bayer.Financial arrangements of the authors with companies whose products may be related to the present report are listed in the \u201cConflicts of interest\u201d section, as declared by the authors.Yu-Bai Chou is a consultant for Alcon and Bayer. Paolo Lanzetta is a consultant for Aerie, AbbVie, Apellis, Bausch & Lomb, Bayer, Biogen, Boehringer Ingelheim, Genentech, Novartis, Ocular Therapeutix, Outlook Therapeutics, and Roche. Tariq Aslam is a consultant for, and has received grants, speaker fees, and honoraria from, Allergan, Bayer Pharmaceuticals, Canon, NIHR, Roche, and Topcon. He is also a board member for the Vision Academy, Macular Society, and Fight for Sight charity. Jane Barratt has received honoraria from Bayer. Carla Danese is a consultant for Bayer. Bora Eldem is a consultant for Allergan, Bayer, Novartis, and Roche. Nicole Eter is an advisor for AbbVie, Alcon, Apellis, Bayer, Biogen, Janssen, Novartis, and Roche and has received speaker fees from AbbVie, Apellis, Bayer, Novartis, and Roche and research grants from Bayer and Novartis. Richard Gale is a consultant for AbbVie, Allergan, Apellis, Bayer, Biogen, Boehringer Ingelheim, Notal, Novartis, Roche, and Santen and has received research grants from Bayer, Novartis, and Roche. Jean-Fran\u00e7ois Korobelnik is a consultant for Allergan/AbbVie, Apellis, Bayer, Carl Zeiss Meditec, Janssen, Nano Retina, Roche, and Th\u00e9a and is a member of the Data and Safety Monitoring Boards for Alexion and Novo Nordisk. Igor Kozak is a consultant for Alcon, Bayer, and Novartis. Anat Loewenstein is a consultant for Allergan, Annexon, Bayer Healthcare, Beyeonics, Biogen, ForSight Labs, IQVIA, Iveric Bio, Johnson & Johnson, MJH Events, Nano Retina, Notal Vision, Novartis, Ocuphire Pharma, OcuTerra, OphtiMedRx, Roche, Ripple Therapeutics, Syneos, WebMD, and Xbrane. Paisan Ruamviboonsuk is a consultant for, and has received research funds from, Bayer and Roche. Taiji Sakamoto is a consultant for Bayer Yakuhin, Boehringer Ingelheim, Chugai, Nidek, Nikon, Novartis, Santen, and Senju. Daniel S.W. Ting has received research grants from the National Medical Research Council Singapore, Duke-NUS Medical School Singapore, and Agency for Science, Technology and Research Singapore. Peter van Wijngaarden is the co-founder of Enlighten Imaging, an early-stage medical technology start-up company devoted to hyperspectral retinal imaging and image analysis, including the development of AI systems, and has received research grant support from Bayer and Roche and honoraria from Bayer, Mylan, Novartis, and Roche. Sebastian M. Waldstein is a consultant for Apellis, Bayer, Boehringer Ingelheim, Novartis, Roche, and Santen. 
David Wong is a consultant for AbbVie, Alcon, Apellis, Bayer, Bausch Health, Biogen, Boehringer Ingelheim, Novartis, Ripple Therapeutics, Roche, Topcon, and Zeiss, has received financial support (to institution) from Bayer, Novartis, and Roche, and is an equity owner at ArcticDx. Lihteh Wu is a consultant for Bayer, Lumibird Medical, Novartis, and Roche. Miguel A. Zapata is a consultant for Novartis and Roche, has received grants and speaker fees from DORC, Novartis, and Roche, honoraria from Alcon, Bayer, DORC, Novartis, and Roche, has served on advisory boards for Novartis and Roche, has received equipment from Allergan, and has stock or stock options in UpRetina. Javier Zarranz-Ventura has received grants from AbbVie, Allergan, Bayer, Novartis, and Roche, has served on scientific advisory boards for AbbVie, Allergan, Bayer, Novartis, and Roche, and has been a speaker for AbbVie, Alcon, Alimera Sciences, Allergan, Bausch & Lomb, Bayer, Brill Pharma, DORC, Esteve, Novartis, Roche, Topcon Healthcare, and Zeiss. Aditya U. Kale, Xiaorong Li, and Xiaoxin Li have no conflicts of interest to report."} +{"text": "Artificial intelligence (AI) has opened new medical avenues and revolutionized diagnostic and therapeutic practices, allowing healthcare providers to overcome significant challenges associated with cost, disease management, accessibility, and treatment optimization. Prominent AI technologies such as machine learning (ML) and deep learning (DL) have immensely influenced diagnostics, patient monitoring, novel pharmaceutical discoveries, drug development, and telemedicine. Significant innovations and improvements in disease identification and early intervention have been made using AI-generated algorithms for clinical decision support systems and disease prediction models. AI has remarkably impacted clinical drug trials by amplifying research into drug efficacy, adverse events, and candidate molecular design. AI\u2019s precision and analysis regarding patients\u2019 genetic, environmental, and lifestyle factors have led to individualized treatment strategies. During the COVID-19 pandemic, AI-assisted telemedicine set a precedent for remote healthcare delivery and patient follow-up. Moreover, AI-generated applications and wearable devices have allowed ambulatory monitoring of vital signs. However, apart from being immensely transformative, AI\u2019s contribution to healthcare is subject to ethical and regulatory concerns. AI-backed data protection and algorithm transparency should be strictly adherent to ethical principles. Vigorous governance frameworks should be in place before incorporating AI in mental health interventions through AI-operated chatbots, medical education enhancements, and virtual reality-based training. The role of AI in medical decision-making has certain limitations, necessitating the importance of hands-on experience. Therefore, reaching an optimal balance between AI\u2019s capabilities and ethical considerations to ensure impartial and neutral performance in healthcare applications is crucial. This narrative review focuses on AI\u2019s impact on healthcare and the importance of ethical and balanced incorporation to make use of its full potential. In the modern world, the doctor-patient encounter has rapidly risen due to the introduction of artificial intelligence (AI). The cost of healthcare and the uneasy accessibility form the primary issues when it comes to the availability of healthcare in rural and urban areas. 
Besides, the number of treatment providers and their inability to cope with rising technology are also becoming a major problem. Therefore, to cope with the rising spread of disease and to provide more budget-friendly and timely cures, AI is the next best step. The tiresome tasks of drug development and medicine manufacture, which previously required heavy investments and human intervention, have now been reduced to a click. Everything from cure identification to the execution of its use has become an AI-monitored task. Hence, effectiveness and efficiency are considerably improved when machines and new technology come into play. With accumulated laboratory technological data combined with analysis of patient history, AI proved to be nothing short of a blessing during the COVID-19 pandemic. A complete review of the post-COVID-19 syndromes helped to improve vaccine development. Thus, AI is a multifaceted approach to not just curative but also preventive measures. While there is no doubt that AI has revolutionized healthcare to its core and is continuing to do so, ethical and privacy concerns form a major part of the demerits associated with it. The first problem is the unavailability of credible data. The second problem is algorithm dysfunction. Safe storage of data and elimination of breaches of information form the basis of patient privacy. Ethically, genomic replication or even human clone formation is resented by many cultures and religions worldwide. Selection process: We defined the criteria for selecting articles, including publication date, relevance to the topic, study design, and geographic scope, and used multiple databases and keywords to search for articles related to the topic and to ensure we captured all relevant studies. Subsequently, the authors reviewed titles, abstracts, and full texts to identify potential articles meeting inclusion criteria and excluded those that did not meet the criteria. We also checked the reference lists of selected articles to find additional relevant sources that may have been missed initially. Some authors evaluated the quality of selected articles based on factors such as study design, methodology, sample size, and author credibility. Finally, we extracted the key information from each selected article, including findings, methodologies, and conclusions, and organized this information systematically. Artificial intelligence-driven diagnostics and imaging: AI employs algorithms such as machine learning (ML) and deep learning (DL) to diagnose diseases in a time-efficient manner. This helps diagnose diseases early and reduces complications. Time is crucial for the early identification and containment of epidemics. AI-based deep Q learning auxiliary diagnosis (DQAD) was used to diagnose epidemics, reducing the time to diagnosis compared with current methods. A highly efficient shock advice algorithm (SAA) is essential for an automated external defibrillator to diagnose and function correctly. A new SAA with DL and ML algorithms had better sensitivities and specificities than the existing SAA. Researchers have also used magnetic resonance imaging (MRI) histogram peaks to design AI algorithms that accurately detect tumor volumes, with improved specificity, sensitivity, and interoperator repeatability.
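Sensitivity and specificity figures like those quoted in this section are derived from a confusion matrix; a minimal sketch with made-up labels:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # 1 = disease present (illustrative data)
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]   # model output
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)        # true-positive rate
specificity = tn / (tn + fp)        # true-negative rate
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```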
ML models combining radiomics and clinical indicators have successfully predicted molecular subgroups of medulloblastomas and differentiated skull base chondromas from chondrosarcomas. Diagnosing stroke early is essential, as a delay in endovascular thrombectomy (EVT) reduces its effectiveness. MethinksLVO (large vessel obstruction), an ML software, uses non-contrast computed tomography (NCCT) to identify the LVO location and facilitate the early transfer of patients to EVT facilities, thus improving outcomes. The role of AI in diagnosing certain types of cancers, such as pancreatic cancer, which is often diagnosed in late stages, could be life-changing for many patients. Many challenges occur in the diagnosis, such as late symptom onset, inconsistent symptoms, and a lack of molecular markers. AI tools such as K-nearest neighbor (KNN), artificial neural network (ANN), and support vector machine (SVM) models are being studied to identify subtle abnormal imaging findings that aid in the early detection of pancreatic cancer. Biopsy remains the gold standard method for diagnosing skin cancers. Using DL algorithms on fluorescence spectroscopy, basal cell carcinoma (BCC) was diagnosed in real time. DSL-1 was the device employed, which used the fluorescence spectra data from BCC lesions and normal skin as a base to identify the lesions. Tumor borders that are difficult to identify become challenging during surgery; for gliomas, delineating the boundaries in real time depends on frozen sections, at the cost of time. A fluorescent light CNN (FLCNN) was studied using second near-infrared window (NIR-II) images as an alternative or adjunct to the surgeon\u2019s optical judgment. The study showed that the FLCNN captured the details of the image better than the surgeon. AI\u2019s role in diagnosing neurological diseases is being explored further. Currently, the diagnosis of obstructive sleep apnea (OSA) is made by polysomnography (PSG), which employs the apnea-hypopnea index (AHI). An AI-based CNN algorithm uses a single-lead ECG signal to diagnose OSA with reasonable accuracy, sensitivity, and specificity. This method is easy and efficient but still needs PSG for confirmation if the estimated risk is high. The use of AI in diagnosing hematologic disease is on the rise. Cellavision, a peripheral blood smear analyzer approved by the Food and Drug Administration (FDA), utilizes an ML-ANN algorithm to perform tests such as complete blood count, cell differential, identification, and location. AI-driven diagnostics and imaging are proliferating and finding applications in every possible field of medicine. Further exploration into these algorithms can be assimilated to provide better patient care, considering ethics and social regulations. Personalized treatment and precision medicine: Artificial Intelligence in Genomics and Its Role in Personalized Treatment Plans: Personalized medicine is a method that considers an individual\u2019s unique genetic makeup, environment, and lifestyle for treating and preventing diseases. ML is a method that employs mathematical algorithms to construct models using datasets; these models can enhance their performance through experience. Functional genomics explores the intricate interplay between genetic attributes and environmental conditions. This endeavor has been significantly aided by the utilization of advanced deep architectures.
Significant strides have been made in applying functional genomics to enhance precision medicine for prevalent non-cancerous conditions, such as kidney disease. For instance, investigators used in silico nano dissection, an ML algorithm, to study mRNA expression in glomeruli from IgA nephropathy patients. ML models have the potential to analyze multimodal data obtained from electronic medical records and other curated sources to identify patients who could benefit from early treatment or participation in randomized controlled trials of innovative interventions. Drug Discovery and Development With Artificial Intelligence Assistance: AI is already deeply embedded in almost every facet of modern society and is undergoing rapid innovation. Because of this, it is rapidly gaining importance in the pharmaceutical industry, particularly in the realm of drug research and development. Drug development is time-consuming and labor-intensive, predicated on iterative trial-and-error experimentation and high-throughput screening processes. On the other hand, using algorithms such as ML and natural language processing (NLP) will significantly speed up the entire process, while also requiring substantial data inputs. Errors may be reduced, and the quality of analytical findings improved, by streamlining the process. Using the DL method makes it easier to ascertain the effectiveness of a substance. Using AI techniques, we can also determine if a medicine is toxic. AI makes use of a number of methods, including reasoning, knowledge representation, problem resolution, and ML models. This analysis will focus on the SVM classification model. SVM is a supervised machine learning module that generates analytical outputs by classifying two distinct sets of troublesome subgroups using a hyperplane between their extreme edges. Therefore, with the use of a decision boundary or hyperplane, the two sets can be organized into usable data. An ANN is a collection of nodes that accept various inputs and convert them to an output, either individually or in a multi-link configuration, using algorithms to solve problems. ANNs are also used in the domain of ML known as DL. These can be described as a collection of high-tech computers that mimic the way in which the human brain sends and receives electrical signals by using perceptrons that are analogous to biological neurons. Anticipating global infectious disease hazards using AI-based surveillance models may also be very helpful. Artificial intelligence-enabled patient monitoring and disease management: Public health surveillance faces data sourcing and analytics challenges in identifying reliable signals of health anomalies and disease outbreaks from data sources. AI helps tackle these challenges by enabling and enhancing public health surveillance through methods and techniques such as DL, reinforcement learning, knowledge graphs, Bayesian networks, and multiagent systems. AI enhances surveillance of various open, novel, or unexplored data, unstructured and semi-structured data, complex spatiotemporal data, and the evolution of epidemics, which is difficult with traditional surveillance methods. Remote patient monitoring (RPM) or telemonitoring uses digitally transmitted health-related data to improve patient care through education, early detection of disease decompensation, intervention, and an enhanced patient-physician relationship.
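The hyperplane idea in the SVM description above can be illustrated in a few lines; the two-dimensional synthetic data and linear kernel are illustrative choices:

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated synthetic classes in 2-D.
X, y = make_blobs(n_samples=100, centers=2, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

# The decision boundary is w.x + b = 0; the "extreme edges" of each class
# that pin down the margin are the support vectors.
print("hyperplane weights:", clf.coef_, "intercept:", clf.intercept_)
print("support vectors per class:", clf.n_support_)
```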
An ML approach predicted cardiac arrest within 24 hours more accurately than the traditional modified early warning scores for critically ill patients in the emergency department. Vipin et al. proposed an edge AI-enabled Internet of Things (IoT) healthcare monitoring system for the real-time scheduling of patients and provision of resources on a priority basis depending on the patient\u2019s condition. This tool collects and transmits data and triggers appropriate action on other integrated devices, simplifying the process of monitoring and assistance provision, which is particularly useful for the elderly or disabled, and in pandemic situations. The FDA issued an Emergency Use Authorization for wearable and mobile ECG technologies to record QT intervals in patients taking hydroxychloroquine or azithromycin. However, AI-ECG technologies need further evaluation for external validity in diverse populations before being applied in routine clinical practice. Artificial intelligence for remote and telemedicine: Telemedicine refers to delivering healthcare services using information and communications technologies, considering distance and accessibility as critical factors. Initially envisioned for tech-savvy patients, the COVID-19 pandemic propelled telemedicine into mainstream healthcare globally. A typical deep neural network utilized for processing and analyzing visual images is the CNN. Similarly, age-related macular degeneration (AMD) is a significant cause of vision loss among the elderly population globally. However, the growing number of AMD patients and frequent follow-ups create a pressing need for a robust automatic mechanism. DL and telemedicine offer potential solutions for AMD management, including initial screening, subsequent monitoring, and treatment prediction. DL algorithms can assess visual acuity (VA) and optical coherence tomography (OCT) findings, aiding in determining AMD treatment strategies. In the mental health domain, a smartphone-sensing wearable device was proposed to assess behavior and identify depressive and manic states in patients with bipolar disorder. Telemedicine, especially in dermatology, has rapidly embraced AI due to increasing demand, the necessity for high-quality images, and the availability of advanced technology. In neurosurgery, telemedicine demonstrates its potential with applications such as telehealth stroke triage and ML-based prognostics for neuro-trauma. In diabetes management, several projects have emerged utilizing new information and communication technologies and Web 2.0 technologies for automatic data transmission and remote interpretation of patient information. Artificial intelligence-driven behavioral health interventions: Expanding access to digital mental health therapies and the increasing need for psychological services, along with the development of AI, have led to the rise of digital mental health interventions (DMHIs) in recent years. Patient evaluation, symptom evaluation, behavior modification, and information dissemination are just a few of the areas where AI-powered chatbots are currently being employed in DMHIs. Chatbots may be as basic as rule-based algorithms such as ELIZA or as complex as AI models that use ML and language processing methods. Support, screening, training, intervention, surveillance, and the avoidance of recurrence are just a few of the chatbot-based services offered inside DMHIs.
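To illustrate the \u201cbasic\u201d end of that spectrum, the sketch below implements a deliberately simple ELIZA-style rule-based exchange; the rules and wording are invented for illustration and do not correspond to any deployed DMHI:

```python
import re

# (pattern, response template) pairs; {0} is the matched keyword.
RULES = [
    (r"\b(sad|down|depressed)\b", "I'm sorry you feel {0}. How long has this been going on?"),
    (r"\b(anxious|anxiety)\b",    "What situations tend to make you feel that way?"),
    (r"\b(sleep|insomnia)\b",     "Tell me more about how you have been sleeping."),
]

def reply(message: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, message.lower())
        if match:
            return template.format(match.group(0))
    return "Can you tell me more about that?"     # default open-ended prompt

print(reply("I've been feeling sad lately"))
```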
Using chatbots for mental health diagnosis and triage might reduce the burden on healthcare workers and expedite the treatment of individuals who need it the most. Overall, 51% of mental health professionals surveyed thought using chatbots for diagnostic purposes was a challenge. AI-assisted diagnostics, on the other hand, might aid in the early identification of those at risk, allowing for more effective intervention and problem avoidance. For instance, certain chatbots leveraging NLP may mimic a therapeutic conversational approach for the purpose of deploying and educating users about different therapies. Some of these chatbots use tracks to assist users in zeroing in on a specific issue, such as stress management. The chatbot Anna may be found in these tracks. After selecting a route that includes Anna, users are welcomed by the chatbot with a personalized video message and an explanation of the chatbot\u2019s capabilities. Users may also be polled by Anna with specific questions aimed at collecting information for refining the personalized playlist. To increase user involvement and the likelihood that they will complete the activities as intended, Anna also provides particular assignments inside these tracks, helping users make the most of each step. Anna tracks the elements that contribute most to successful outcomes for each activity, evaluates responses based on those elements, and then prompts participants to fill in any gaps in their initial remarks. Whenever a user\u2019s first response to a task requiring gratitude does not demonstrate an adequate amount of gratitude, Anna will ask for clarification. Anna also guides users on how to make better use of the platform and listens to their feedback to allay concerns that conversations with chatbots are not engaging enough. Artificial intelligence in medical education and training: With the increased integration of AI in healthcare, electronic health records (EHRs) can be used for novel techniques such as data processing and enhanced decision-making, prompting accurate data input into EHRs by physicians to maximize the benefits of AI. A scoping review by Nagi et al. on current applications of different AI methods, such as ML, robotics training, and virtual reality (VR), in several domains of medical education has shown enhanced practical skills of medical students following the implementation of AI-based assistance. Through VR, students can train their decision-making skills in surgical and medical procedures in a controlled and safe manner. Creutzfeldt et al. trained 12 Swedish medical students in cardiopulmonary resuscitation by avatars to better understand their reactions and experiences using a multiplayer virtual world. The positive aspects of learning cardiopulmonary resuscitation were confirmed after a data-driven approach in qualitative methodology. Further clinical performance should be analyzed to rule out erroneous self-belief bias. Participants noted that insufficient psychomotor skills and a lack of stress made the experience unrealistic, which could affect their real-world performance. A meta-analysis by Zhang et al. showed that AI can also be applied in medical education at different stages, such as teaching implementation, evaluation, and feedback. The quality of teaching can be assessed with feedback and evaluation from AI. However, it is challenging to verify the effectiveness of AI implementation.
Although ChatGPT has revolutionizing potential in medical education, it cannot completely replace hands-on clinical experience and mentorship. Moreover, algorithms use large databases, which may contain biases, resulting in biased system output that could lead to a loss of fairness in treating minority or underrepresented groups. Therefore, AI system outputs should be monitored to ensure the absence of bias and to eliminate any biases present. The use of AI is not without its challenges, such as black box problems and data privacy concerns. Artificial intelligence ethical considerations: Healthcare is a high-risk field in which mistakes can lead to severe consequences for the patient. Patients meet with physicians at a time when their overall health is already compromised, leaving them vulnerable. The main legal concern is the limited transparency of the algorithms used in AI, referred to as the black box issue, where the algorithms responsible for the outputs are not disclosed. The FDA has already approved autonomous AI diagnostic systems based on ML. These ML-healthcare applications (ML-HCAs) create algorithms from large datasets and make predictions without requiring explicit programming, to avoid biases and errors. Privacy protection poses challenges to ML. ML necessitates large datasets for effective training, leading to high accuracy in results. However, when significant amounts of data are used to train ML applications from multiple sources, the data\u2019s origin can ultimately be traced back to the patients, negating attempts at privacy. In Europe, patients own and have usage rights to all their data, whereas healthcare providers in North America may own the physical evidence related to patient data. Accountability for AI errors remains a gray area, with no clear individual or entity held responsible. Consequently, AI severely limits the ability to assign blame and/or ownership of the decision-making process. In the study conducted by Buolamwini and Gebru, selection bias was evident in automated facial recognition and the associated datasets, resulting in reduced accuracy in recognizing darker-skinned individuals, particularly women. The datasets used by ML systems are derived from specific populations. Consequently, when these AI systems are applied to underserved or underrepresented patient groups, they are more likely to produce inaccurate results. The contentious debate surrounding the legality of AI systems has prompted governing bodies such as the European Parliament to pass the GDPR, aimed at protecting personal data. Challenges and future directions: AI is a rapidly evolving technical discipline that plays a crucial role in medicine. However, the implementation of AI faces several challenges, including technical, ethical, safety, and financial concerns. AI technology offers significant opportunities for diagnostic and treatment purposes in medicine, but it requires overcoming challenges, particularly those related to safety. The integration of AI into cancer research shows promising outcomes and is currently addressing challenges where medical experts struggle to control and cure cancer. AI provides tools and platforms that enhance our understanding of and approach to tackling this life-threatening disease. AI-based systems aid pathologists in diagnosing cancer more accurately and consistently, thereby reducing error rates.
Despite the considerable advantages and notable progress in contemporary methodologies, the rule-based systems commonly employed in medical AI exhibit limitations. These encompass substantial expenses associated with development, constraints arising from the intricate representation of multifaceted connections, and the need for extensive medical expertise. While AI\u2019s involvement in direct patient care is currently limited, its expanding role in complex clinical decision-making processes is on the horizon. Establishing fundamental guidelines for AI\u2019s scope, transparent communication with patients for informed consent, and comprehensive evaluation of AI\u2019s implementation are crucial steps toward setting a universal standard. This emphasizes the importance of algorithmic transparency, privacy protection, stakeholder interests, and cybersecurity to mitigate potential vulnerabilities. Proactive leadership from professional organizations can play a vital role in building public trust in the safety and efficacy of medical AI, thereby driving further advancements in this promising domain. AI has the potential to collaborate with other digital innovations such as telemedicine, enabling virtual consultations and the implementation of the Internet of Medical Things to enhance referral procedures. AI\u2019s capabilities extend to precise risk stratification of cancer stages and the selection of suitable treatment paths. Shared factors are evident across multiple tiers during data analysis, enabling AI models to identify causal links between variables. By amalgamating this knowledge, AI research can achieve higher precision in examining cancer-related medical incidents. In the future, comprehensive databases will emerge, generating new data sets encompassing every aspect of human health. These repositories will empower highly intricate models capable of personalizing therapy choices, precise dosage calculations, surveillance strategies, timetables, and other pertinent factors. Prominent AI technologies such as ML and DL have immensely influenced diagnostics, patient monitoring, novel pharmaceutical discoveries, drug development, and telemedicine. Significant innovations and improvements in disease identification and early intervention have been made using AI-generated algorithms for clinical decision support systems and disease prediction models. AI has changed the face of healthcare through unique advancements in patient monitoring, diagnosis, and treatment planning. AI has made strides in optimally implementing clinical drug trials, telemedicine, and personalized therapeutic regimes. However, ethical issues regarding data protection and the transparency of AI-driven algorithms remain a cause for concern. Although AI holds promise in mental health interventions, medical education, and virtual training, its role must be aligned with human expertise. The main challenge is to optimize AI\u2019s transformative impact while keeping ethical and regulatory principles in line."} +{"text": "Artificial intelligence (AI) has been available in rudimentary forms for many decades. Early AI programs were successful in niche areas such as chess or handwriting recognition. However, AI methods had little practical impact on the practice of medicine until recently. Beginning around 2012, AI has emerged as an increasingly important tool in healthcare, and AI-based devices are now approved for clinical use.
These devices are capable of processing image data, making diagnoses, and predicting biomarkers for solid tumors, among other applications. Despite this progress, the development of AI in medicine is still in its early stages, and there have been exponential technical advancements since 2022, with some AI programs now demonstrating human-level understanding of image and text data. In the past, technical advances have led to new medical applications with a delay of a few years. Therefore, we might now be at the beginning of a new era in which AI will become even more important in clinical practice. It is essential that this transformation is humane and evidence based, and physicians must take a leading role in ensuring this, particularly in hematology and oncology. Currently, the majority of AI applications in cancer medicine are related to digital image processing in fields such as dermatology, endoscopy, radiology and pathology. Breakthroughs in large language models (LLMs) have moved this field of AI into the center of public attention. LLMs are AI models which can understand and synthesize text with human-level performance. LLMs can understand, summarize and write scientific articles, and understand and write computer code and medical texts; they can converse and make jokes. Google, Microsoft-backed OpenAI and other large technology companies are pushing LLMs towards medical applications, such as information retrieval, summarization and chatbot functionalities, among many other potential applications (Singhal et al.). The rapid development of AI requires physicians to stay up to date and comprehend the medical implications of these technologies. Being digitally literate and capable of critically evaluating clinical evidence in the AI era is a fundamental skill which physicians must build and cultivate. All physicians must become aware of AI and comprehend its fundamental principles. In addition, some physicians might choose to delve deeper into AI and actively use it as a tool for research. By enabling the processing of large amounts of unstructured information, AI could fundamentally alter and transform nearly all aspects of contemporary medicine, including preclinical research, drug discovery, clinical trials, and even clinical routine activities, including communication. We are living in exciting times, and the world of hematology and oncology as we know it will change in the future (Topol)."} +{"text": "Artificial intelligence (AI) has great potential to increase accuracy and efficiency in many aspects of neuroradiology. It provides substantial opportunities for insights into brain pathophysiology, developing models to determine treatment decisions, and improving current prognostication as well as diagnostic algorithms. Concurrently, the autonomous use of AI models introduces ethical challenges regarding the scope of informed consent, risks associated with data privacy and protection, potential database biases, as well as responsibility and liability that might potentially arise. In this manuscript, we will first provide a brief overview of AI methods used in neuroradiology and segue into key methodological and ethical challenges. Specifically, we discuss the ethical principles affected by AI approaches to human neuroscience and provisions that might be imposed in this domain to ensure that the benefits of AI frameworks remain in alignment with ethics in research and healthcare in the future.
Artificial intelligence (AI) leverages software to digitally simulate the problem-solving and decision-making competencies of human intelligence, minimize subjective interference, and potentially outperform human vision in determining the solution to specific problems. Over the past few decades, neuroscience and AI have become, to some degree, intertwined, with the development of machine learning (ML) models that use brain circuits as the template for the invention of intelligent artifacts. Neuroscience inspired, and then ironically validated, the architecture of various AI algorithms. Traditionally, clinical neuroradiologists identify abnormalities of the spine, head and neck, and spinal cord through pattern abnormalities on MRI and CT. Radiomics and AI leverage imperceptible variations in images; some have termed this \u201cseeing the unseen.\u201d Research on applying AI to neuroradiology, and to all imaging, has grown rapidly over the past decade. The number of scholarly publications related to the development and application of AI to the brain has increased astonishingly in recent years. The immense increase in publications on the development of AI models shows that AI is rapidly gaining importance in neuroradiologic research and clinical care. The growth in research is related to the combination of more powerful computing resources as well as more advanced measurement techniques \u20138, combi For neuroradiology, a deep learning (DL) model receives image series at the input layer; the extracted features are then analyzed in the hidden layers using various mathematical functions. The output layer encodes the desired outcomes or labeled states. The goal of training a DL model is to optimize the network weights so that when a new series of sample images is fed to the trained model as input, the probabilities measured at the output are heavily skewed toward the correct class Figure\u00a0. In the short term, these automatic frameworks will likely serve as decision-support tools that augment the accuracy and efficiency of neuroradiologists. However, the progress in developing these models has not corresponded with progress in implementation in the clinic. This ma In this paper, we contribute to the ethical framework of AI in neuroradiology. Below we present several specific ethical risks of AI, as well as air some principles that might guide its development in a more sustainable and transparent way. In our review, we highlight the ethical challenges raised by the input provided to AI models and by the output of the established frameworks in neuroradiology. In each The value of any recorded medical observation - whether images, physical examination results, or laboratory findings - primarily lies in how it contributes to that patient's care. However, when the data are collected (and anonymized), they can also be used to generate useful information for potential research. This information also may eventually have commercial potential. Databases can help us understand disease processes or may generate new diagnostic or treatment algorithms. These databases can be used to assess therapies in silico. Because of the value of medical data, especially images, when conglomerated, those who participate in the health care system have an interest in advocating for their use for the most beneficial purposes.
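To make the input/hidden/output description of a DL model above concrete, here is a minimal sketch in PyTorch of a tiny image classifier and one training step. The architecture, shapes, and hyperparameters are illustrative assumptions, not those of any model cited in the review.

```python
# A minimal PyTorch sketch of the structure described above: an input layer of
# images, hidden layers extracting features, and an output layer whose weights
# are optimized so probability mass concentrates on the correct class.
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(  # "hidden layers"
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 16 * 16, n_classes)  # "output layer"

    def forward(self, x):
        h = self.features(x)            # extract image features
        return self.head(h.flatten(1))  # class logits

model = TinyImageClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()         # softmax + negative log-likelihood

# One illustrative step on a fake batch of 4 single-channel 64x64 "images".
images, labels = torch.randn(4, 1, 64, 64), torch.tensor([0, 1, 0, 1])
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()  # nudges weights toward higher correct-class probability
```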
Hence the controversy regarding who has the right to control and benefit from collected images: patients or provider organizations. There has been a paradigm shift over the last decade with respect to the ethos of data ownership . While s Most AI scientists believe in the ethical responsibility of all patients to share their data to improve patient care, now and in the future. Thus, these researchers believe the data should be widely available and treated as a form of public good to be used for the benefit of future patients , 22. Ind The significance of the diligent and ethical use of human data has been highlighted recently to promote a culture of data sharing for the benefit of the greater population, while also protecting the privacy of individuals . Therefo Other issues relate specifically to neuroradiology, such as the presence of personal identifiers, which need to be removed both from the Digital Imaging and Communications in Medicine (DICOM) metadata and from the images themselves prior to data sharing . For exa Regarding the increasing usage of digital data in neuroscience, it is important to manage data resources credibly and reliably from the beginning. Data governance is the set of fundamental principles and policies that define how an organization manages its data assets . Data go Centralized and distributed networks are two organized methods that have been developed for the tactical governance of data sharing . While i The quality and amount of the annotated images for training AI models are variable, based on the target task. Although using poor-quality images may lead to poor predictions and assessments, it is known that AI algorithms can be trained on large, heterogeneous quantities of relatively poor-quality images , 33. How Most currently implemented AI algorithms for medical image classification tasks are based on a supervised learning approach. This means that before an AI algorithm can be trained and tested, the ground truth needs to be defined and attached to the image. The term ground truth typically refers to information acquired from direct observation . For direct-observation reference standards, images are annotated by medical experts, such as neuroradiologists. Manual labeling is often used in the labeling and annotation of brain imaging data for AI applications. When a relatively small number of images are needed for AI development, medical expert labeling and segmentation may be feasible. However, this approach is time-consuming and costly for large populations, particularly for advanced modalities with numerous images per patient, such as CT, PET, or MRI. Semiautomated and automated labeling algorithms were in use in laboratories before the widespread adoption of AI and have improved markedly through it. Radiology reports are not created for the development of AI algorithms, and the extracted information may contain noise. Although more reports have recently become structured or protocol-based, most are still narrative. These narrative reports have previously been assessed through natural language processing programs. Neural networks can still be relatively robust when trained with noisy labels . However, AI could play a role in reducing these errors and may help in the more accurate and time-efficient annotation of CT and MRI scans. However, this assumes a sufficiently sized dataset to adequately train the AI models.
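Returning to the DICOM de-identification requirement mentioned above, a minimal sketch using the pydicom library shows the basic idea. This is only an illustration: a real pipeline would follow the DICOM PS3.15 confidentiality profiles and would also address identifiers burned into the pixel data.

```python
# A minimal sketch of removing common DICOM patient identifiers with pydicom
# before data sharing. NOT a complete de-identification procedure; the file
# path and attribute list are illustrative assumptions.
import pydicom

ds = pydicom.dcmread("scan.dcm")  # placeholder path

# Blank a few directly identifying attributes (not an exhaustive list).
for tag in ("PatientName", "PatientID", "PatientBirthDate", "ReferringPhysicianName"):
    if tag in ds:
        setattr(ds, tag, "")

ds.remove_private_tags()          # drop vendor-specific private metadata
ds.save_as("scan_deid.dcm")
```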
Furthermore, recent efforts to automate the annotation process, particularly for neuroimaging data, have shown that AI systems can significantly improve annotation performance at large data scales. For example, a study in brain MRI tumor detection indicated that applying semi-supervised learning to mined image annotations can significantly improve tumor detection performance, achieving an F1 score of 0.95 . There is a trend toward interactive collaboration between AI systems and clinical neuroradiologists. The ability of AI systems to self-validate and learn from mistakes enables them to recognize their errors and correct their own data sets. On the other hand, external validation by neuroradiologists, in which they manually signal to the AI system that it has made an error and allow it to update its algorithm to avoid future errors, is helpful for the improvement of these models. Such interactive, collaborative annotation has been used effectively for labeling open-source data sets to improve annotation quality while saving radiologists time , 39. Neuroradiology is overall the third most common imaging subspecialty , and thi A recent review paper explored the application of AI in neuroradiology and found that the majority of these applications provide a supportive role to neuroradiologists rather than replacing their skills; for example, assessing quantitative information such as the volume of a hemorrhagic stroke and automatically identifying and highlighting areas of interest such as large vessel occlusions on CT angiograms . AI can While these capabilities are very promising, AI systems are not without limitations. For an AI system's algorithm to work accurately, it requires large datasets and accurate expert labeling, which may not always be possible. Therefore, as Langlotz suggested , maybe t Despite the widespread application of AI, it is important that radiologists remain engaged with AI scientists to both understand the capabilities of existing methods and direct future research in an intelligent way that is supported by sufficient clinical need to drive widespread adoption . An AI-n The subject of explicability is crucial, as AI models are still considered black boxes owing to the lack of clarity about how data are transformed across the successive layers of convolutional neural networks (CNNs), despite their well-defined mathematical and logical operations . Therefo Practical progress in the final stages of the imaging workflow, such as reporting and communication, lags behind the early stages of AI application . This in The emergence of AI in neuroradiology research has highlighted the necessity of code sharing, because it is a key component of transparent and reproducible experiments in AI research . However Although the low rates of code sharing and code documentation may be discouraging, code sharing has proven to be an uphill journey over time . Nonethe In addition to code sharing, data availability is another key component of the reproducibility of AI research studies, because DL models may have variable performance on different neuroimaging datasets . However Medical imaging, including MRI, is one of the most common means of brain tumor detection. MRI sequences such as T2-weighted and post-contrast T1-weighted imaging are preferred because they provide more precise images and better visualization of soft tissue; they can therefore be used for brain tumor segmentation .
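For readers less familiar with the F1 score reported above: it is the harmonic mean of precision and recall. The small helper below makes the arithmetic concrete; the counts are invented purely for illustration and are not data from the cited brain-MRI study.

```python
# F1 = harmonic mean of precision and recall, computed from confusion counts.
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)   # fraction of detections that are correct
    recall = tp / (tp + fn)      # fraction of true tumors that are detected
    return 2 * precision * recall / (precision + recall)

# e.g. 95 true positives, 5 false positives, 5 false negatives -> F1 = 0.95
print(f1_score(tp=95, fp=5, fn=5))
```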
CNNs, a Bias is an important ethical theme in research and clinical care. It is quite easy for potential bias to be embedded within algorithms that grow from \u201cselected\u201d data used to train them , 54. Mos A recently published review by Das et al. investig Some studies suggested using AI itself to mitigate existing bias by reducing the human error and biases present within healthcare research and databases . These s In this section, we explore a frequent question regarding the application of AI in neuroradiology: who is responsible for errors that may occur in the process of developing and deploying AI technology? AI algorithms may be susceptible to differences in imaging protocols and variations in patient numbers and characteristics. Thus, there are, by their very nature, specific scenarios in which these algorithms are less reliable. Therefore, transparent communication about selection criteria and code sharing is required to validate the developed model on external datasets and to ensure the generalizability of algorithms across different centers or settings . Despite the success and progress of AI methods, they are ultimately implemented by humans; hence, some consideration of user confidence and trust is impor Several studies recommend that, since healthcare professionals are legally and professionally responsible for making decisions in the patient's health interests, they should be considered responsible for the errors of AI in the healthcare setting, particularly with regard to errors in diagnostic and treatment decisions , 66. How Those who use AI technology in neuroradiology should be cognizant that ultimately a person or persons are responsible for the proper implementation of AI. Part of this responsibility lies in the appropriate implementation and use of guidelines to minimize both medical errors and liability. Guidelines should be implemented to reduce risks and provide reasonable assurance, including well-documented development methods, research protocols, appropriately large datasets, performance testing, annotation and labeling, user training, and limitations . Especia When we do research, we have a moral obligation to provide the highest quality product we can. That is why IRBs look not only at the risks and benefits but also at the research protocol, to ensure that the results will be worthwhile. One of the aspects of this IRB assessment is the size and type of population studied. Similarly, in AI research, the database should include a large enough number of subjects to avoid overfitting. In addition, an external validation set is ethically required to ensure generalizability. The numbers needed in the development set have been a moving target but are appropriately moving towards substantive requirements. Dataset size is a major driver of bias and is particularly associated with the size of the training data; AI models should be trained on large, heterogeneous, annotated data sets . Small t Previous studies have also shown that using a large dataset decreases model bias and yields optimal performance. Because each MRI technique has different characteristics, integrating various modalities and techniques yields more accurate results than any single modality.
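One simple, concrete way to surface the kind of dataset-driven bias discussed above is to compare model performance across patient subgroups. A minimal sketch follows, with synthetic labels and groups; real audits would use larger samples and richer fairness metrics.

```python
# A minimal bias audit: per-subgroup accuracy of a classifier. Labels,
# predictions, and group assignments below are synthetic placeholders.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    correct, total = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["A", "A", "B", "B", "A", "B", "B", "A"]
# A large accuracy gap between groups suggests the training data may be biased.
print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.5}
```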
Using multimodal and heterogeneous datasets can also mitigate the overfitting that results from training a model on a single specific dataset, helping the model generalize to an external validation set , 73. For diseases or neurologic disorders that are less common, finding large enough datasets may be difficult and may require pooled resources. When larger datasets cannot be assembled because of rare diseases and under-represented populations, transfer learning and data augmentation can be used to avoid overfitting due to small or limited data sets . The stu Several ethical themes of the applications of AI in neuroradiology were presented in this review, including the privacy and quality of the data used for training AI systems, as well as the availability and liability of the developed AI models. In this scoping review, we addressed the ethics of AI within neuroradiology and reviewed overarching ethical concerns about privacy, profit, liability, and bias, each of which is interdependent and mutually reinforcing. Liability, for instance, is a noted concern when considering who is responsible for protecting patient privacy within data-sharing partnerships and for AI errors in patient diagnoses. We note that liability is related to specific laws and legislation, which by definition vary from one jurisdiction to another. These broad ethical themes of privacy and security, liability, and bias have also been reported in other reviews on the application of AI in healthcare and radiology in general, and neuroradiology in particular. For example, in a review by Murphy et al. , the aut In another study, the authors discussed ethical principles including accountability, validity, the risk of neuro-discrimination, and neuro-privacy that are affected by AI approaches to human neuroscience . These l Another article with a focus on patient data and ownership covered key ethical challenges with recommendations towards a sustainable AI framework that can ensure the application of AI for radiology is molded into a benevolent rather than malevolent technology . The oth In conclusion, the ethical challenges surrounding the application of AI in neuroradiology are complex, and the value of AI in neuroradiology increases with interdisciplinary consideration of the societal and scientific ethics in which AI is being developed, to promote more reliable outcomes and allow everyone equal access to the benefits of these promising technologies. Issues of privacy, profit, bias, and liability have dominated the ethical discourse to date with regard to AI and health, and there will undoubtedly be more that arise. AI is being developed and implemented worldwide, and thus a greater concentration of ethical research into AI is required for all applications. Amid the tremendous potential that AI carries, it is important to ensure its development and implementation are ethical for everyone."} +{"text": "AI-based prediction models demonstrate performance equal to or surpassing that of experienced physicians in various research settings. However, only a few have made it into clinical practice. Further, there is no standardized protocol for integrating AI-based physician support systems into the daily clinical routine to improve healthcare delivery. Generally, AI/physician collaboration strategies have not been extensively investigated.
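A minimal sketch of the two small-data remedies discussed above, transfer learning and data augmentation, using PyTorch/torchvision. The pretrained backbone, the transforms, and the two-class head are illustrative assumptions, not the setup of any study cited here.

```python
# Transfer learning + data augmentation for a small imaging dataset (sketch).
import torch.nn as nn
from torchvision import models, transforms

# Data augmentation: random flips/rotations multiply the effective dataset size.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),
])

# Transfer learning: freeze an ImageNet-pretrained backbone, retrain only the
# final classification layer on the small target dataset.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False                       # keep pretrained features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # new trainable two-class head
```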
A recent study compared four potential strategies for AI model deployment and physician collaboration to investigate the performance of an AI model trained to identify signs of acute respiratory distress syndrome (ARDS) on chest X-ray images. Here we discuss strategies and challenges with AI/physician collaboration when AI-based decision support systems are implemented in the clinical routine. Indeed, AI seems to be everywhere, from the realistic appearing conversations of ChatGPT to the image recognition software allowing many of us to unlock our cellphones with a glance. Within healthcare, both excitement and concern about AI are nothing new.In a recent New York Times editorial, an author dubbed artificial intelligence (AI) a \u201cPandora\u2019s box\u201d that humans were lifting the lid of2, ophthalmology3, and radiology4. Thus far, the prevailing thought within healthcare has been that AI will be most useful within clinical decision support systems (CDSS). CDSS aim to aid clinicians with the complex decision-making process to improve healthcare quality. Where exactly AI fits best within the decision-making process remains debated5.Over the last years, several AI-based models have demonstrated their usefulness in various medical disciplines, including dermatology5 systematically determined how AI could be integrated into a specific clinical scenario. They took an AI model6 trained to identify patterns of acute respiratory distress syndrome (ARDS) on chest X-ray images and evaluated its strengths and weakness compared to physicians. In doing so, they tested four different physician\u2013AI collaboration strategies. One strategy involved the AI model reviewing a chest X-ray first and then deferring to a physician in cases of uncertainty. This strategy achieved higher diagnostic accuracy compared to the other three, which included physicians reviewing the X-ray first and deferring to the AI model in cases of uncertainty, the AI model examining the X-ray alone, or the physician examining the X-rays alone.In a recent case study, Farzaneh et al.5. This could mean that those caring for ARDS patients can use AI models to help triage these patients when the X-ray findings are clear, while physicians focus on interpreting the more complicated images.Ultimately, these findings imply that the AI model had higher and more consistent accuracy on less complicated chest X-rays, while physicians had higher accuracy on difficult chest X-raysDespite the promises of AI-based CDSS, obstacles are blocking its implementation. Four key challenges to the widespread use of AI-based CDSS include trust, bias, scalability, and deployment.8. This study by Farzaneh et al. 5 is an example of the type of research that must continue to be done to evaluate and build trust in AI/physician collaboration workflows. Additionally, AI models will need to be explainable, describing why and which parameters impacted a model\u2019s decision. For example, visualizing alerts or displaying regions of concern in an X-ray can help to overcome the distrust in the \u201cblack-box\u201d nature of most AI-based systems9. On the other hand, excessive trust and reliance on a CDSS can interfere with developing clinical skills10.Both patients and clinicians must trust AI models used for decision support if they are to be widely adopted12. 
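The best-performing strategy described above (the AI model reads first and defers to a physician when uncertain) reduces to a simple triage rule. A minimal sketch follows; the 0.9 confidence threshold and the probability function are illustrative assumptions, not parameters from Farzaneh et al.

```python
# Uncertainty-based deferral: AI decides the clear cases, humans get the rest.
def triage(xray, model_probability, threshold=0.9):
    p = model_probability(xray)      # model's probability that ARDS is present
    confidence = max(p, 1.0 - p)
    if confidence >= threshold:
        return "ARDS" if p >= 0.5 else "no ARDS"  # AI handles confident cases
    return "defer to physician"                   # uncertain cases go to a human

print(triage("xray_001", model_probability=lambda x: 0.97))  # -> "ARDS"
print(triage("xray_002", model_probability=lambda x: 0.55))  # -> "defer to physician"
```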
For example, AI models trained on a dataset with unequal representation of a particular minority group might generate less accurate predictions for that minority patient population, leading to worse patient care. Various strategies for detecting and mitigating bias13 have been developed to tackle this issue, but further approaches are pivotal for generalizability and fairness.Datasets used to train AI models can contain bias, amplifying existing inequity in the healthcare system. This type of bias specifically harms disadvantaged populations5 describe a specific case study that helps optimize AI-physician collaboration, it is unrealistic to expect every implementation of AI support to occur only after a published trial. Instead, AI-CDSS will likely scale based on inferences from studies of similar clinical challenges. The specific point at which AI will be implemented in a workflow will ultimately differ among healthcare settings.While Farzaneh et al.14. Other challenges include insufficient IT infrastructure in under-resourced clinical settings where building on AI is challenging. As technical complexity increases, increased computer literacy and proficiency will be required. Lacking these skills can be hindering for clinical decision support systems adoption15.Looking into the future, generalizable CDSS tools will be implemented in healthcare settings in which they were not actually developed. The details surrounding how such generalizable tools will be developed and implemented in local workflows remain up to question16.Challenges to AI-based CDSS on the deployment level are centered around regulatory concerns and long-term effects. AI-based CDSS will require new rules around where responsibility lies for potential mistakes. To take just one example, not using physician decision support systems could be considered malpractice by an individual physician, or a healthcare institution that has adopted the AI-CDSS tool17. For example, for medical students AI tools could provide a three-dimensional virtual reality experience that can change the way of Anatomy teaching and learning18. Further, as medical training during the COVID-19 pandemic became challenging, surgeon training in lung cancer surgery through the metaverse was implemented in a smart operating room19. Other recent publications demonstrated how AI could be used to identify a surgeon\u2019s skill22 and thus improve continuous learning. Integrating AI in medical (student) education may not only enrich the teaching and learning experience, but may also help to teach opportunities and challenges of AI, and thus results in more awareness, trust, and better use of AI-based systems.AI as well as augmented and virtual reality offer unique opportunities for medical training and educationIn many cases, CDSS demonstrated a better outcome if AI algorithms collaborated with physicians. However, integrating CDSS into the daily clinical routine using real-world data requires rigorous clinical validation in a real-world environment before implementation in clinical practice. Key challenges include trust, bias, scalability, and deployment. Further, regulatory and privacy issues need to be addressed."} +{"text": "The use of artificial intelligence (AI) in medicine is expected to increase significantly in the upcoming years. Advancements in AI technology have the potential to revolutionize health care, from aiding in the diagnosis of certain diseases to helping with treatment decisions. 
Current literature suggests the integration of the subject of AI in medicine as part of the medical curriculum to prepare medical students for the opportunities and challenges related to the use of the technology within the clinical context.We aimed to explore the relevant knowledge and understanding of the subject of AI in medicine and specify curricula teaching content within medical education.For this research, we conducted 12 guideline-based expert interviews. Experts were defined as individuals who have been engaged in full-time academic research, development, or teaching in the field of AI in medicine for at least 5 years. As part of the data analysis, we recorded, transcribed, and analyzed the interviews using qualitative content analysis. We used the software QCAmap and inductive category formation to analyze the data.The qualitative content analysis led to the formation of three main categories with a total of 9 associated subcategories. The experts interviewed cited knowledge and an understanding of the fundamentals of AI, statistics, ethics, and privacy and regulation as necessary basic knowledge that should be part of medical education. The analysis also showed that medical students need to be able to interpret as well as critically reflect on the results provided by AI, taking into account the associated risks and data basis. To enable the application of AI in medicine, medical education should promote the acquisition of practical skills, including the need for basic technological skills, as well as the development of confidence in the technology and one\u2019s related competencies.The analyzed expert interviews\u2019 results suggest that medical curricula should include the topic of AI in medicine to develop the knowledge, understanding, and confidence needed to use AI in the clinical context. The results further imply an imminent need for standardization of the definition of AI as the foundation to identify, define, and teach respective content on AI within medical curricula. Artificial intelligence (AI) has been of broad scientific interest in medicine for over a decade. This is reflected in the publication of more than 18,000 scientific publications mentioning AI-related terms in that time. AI is expected to revolutionize health care systems around the world. Apart from the economic benefits, AI is expected to make health care more efficient for both patients and health care professionals .With increased public and scientific interest, research into the potential challenges of AI is becoming more commonplace. Recent developments in the use and handling of algorithms in AI applications have raised highly relevant ethical concerns that need to be addressed, in addition to crucial questions regarding patient safety and data . These iResearchers propose that addressing potential challenges regarding the use of AI in medicine requires adequate knowledge of the technology ,6. FurthTo prepare future generations of physicians for the use of AI within the rapidly changing health care system, education needs to adapt to the new challenges. As the development of new curricula modules and teaching content is a time-intensive and complicated process due to traditional structures and accreditation procedures, significant research is needed to define relevant competencies and teaching content regarding AI in medicine.AI has been a topic of interest in computer science since the 1950s . HoweverA distinction can be made between so-called strong AI and weak AI. 
\u201cStrong AI\u201d defines an AI whose intellectual abilities are comparable to those of humans . HoweverThe application areas of \u201csymbolic AI\u201d in medicine mainly include rule-based expert systems, where the rules to be followed by the AI have been previously defined by experts. Clinical decision support systems can be used in patient care, for example, to support doctors in diagnosis and treatment . The subThe present publication is based on the definition of \u201cweak AI\u201d with its subdomains and all results should be interpreted against this background.The study was conducted to explore essential knowledge and understanding regarding AI in medicine, relevant to define curricula teaching content within medical education. The results should provide the foundation for the improvement of the education of medical students and the medical curriculum.The following section of this study aims to provide a detailed description of the study design, data collection, and data analysis techniques used in this research. The methods used in this study were chosen to ensure the validity and reliability of the results and to ensure that ethical standards were met.The study, conducted from September to November 2022, aimed to identify relevant knowledge and understanding of AI-related teaching content in medical education using semistructured expert interviews. From the total of 68 initially identified and contacted experts in the field of AI in medicine and health care , we were able to include 12 in this study. Most experts were based in Germany (n=10), with 2 experts being included from Austria. For the qualitative data collection, we defined experts as individuals who have been engaged in full-time academic research, development, or teaching in the field of AI in medicine for at least 5 years.Experts were recruited by email and personal recommendation by the participants. Of the total of 12 included experts, half were primarily working in the field of research and practical development of AI-based applications in the field of medicine . The remaining 6 experts were primarily associated with teaching and research in the field of medical informatics, AI, and digital medicine as part of the medical curriculum . As the experts were primarily recruited by email, an email address that was not publicly accessible through a web-based search was an exclusion criterion.Additional exclusion criteria were no or less than 5 years of experience in the field of AI in medicine, a lack of consent to the transcription or voice recording as well as a missing current or recent involvement in projects related to the research, development, or teaching of AI in medicine.The Research Committee for Scientific Ethical Questions (RCSEQ) of the UMIT TIROL \u2013 Private University for Health Sciences and Health Technology, Hall in Tirol, Austria, granted ethical approval for the study.Web-based interviews were conducted, using the Cisco Webex Meeting application. The meetings were recorded using an analogous voice recorder. We obtained consent from the participants before conducting the interviews, including their agreement to be recorded and their data to be used for research purposes. As part of the interview, a semistructured guideline was used. 
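As a concrete aside to the description of rule-based (\u201csymbolic AI\u201d) expert systems above: their behavior follows if-then rules specified in advance by experts. The toy sketch below illustrates only the mechanism; the rules are fabricated for demonstration and have no clinical validity.

```python
# A toy rule-based "expert system": expert-authored if-then rules, applied in
# order. Thresholds and advice strings are invented for illustration only.
RULES = [
    (lambda p: p["temp_c"] >= 38.0 and p["crp"] > 100, "suspect bacterial infection"),
    (lambda p: p["temp_c"] >= 38.0, "fever of unclear origin"),
]

def advise(patient):
    for condition, advice in RULES:  # first matching rule wins
        if condition(patient):
            return advice
    return "no rule fired"

print(advise({"temp_c": 38.5, "crp": 150}))  # -> "suspect bacterial infection"
```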
The guideline included questions about the experts\u2019 education and experience in AI, the anticipated impact of AI in medicine, as well as key competencies required for the use of AI in medicine, and possible teaching content (please see the supplementary information for the interview guideline). On average, the interviews lasted 35 minutes. The recorded interviews were transcribed manually with the help of the transcription software f4transkript; a transcription service provider transcribed some of the interviews. Transcription followed the established rules of Dresing and Pehl . To anal As a result of the qualitative content analysis, we defined 3 main categories with a total of 9 subcategories. Each of the subcategories is illustrated by quotes from the participants to convey the procedure and the original meaning. An overview of the 3 main categories with all associated subcategories is shown in Based on the results of the qualitative content analysis, the first main category was defined. Given the interdisciplinary data collection, the \u201cknowledge\u201d main category summarizes the knowledge that medical students should acquire regarding the topic of AI in medicine as part of their education. The first subcategory \u201cbasic understanding of AI\u201d highlights the need for basic knowledge and definitions, without an in-depth understanding: But that's not, in my opinion, about people really understanding the technology down to the smallest detail and being able to implement and train on things themselves. I don't think they need that.Interview 7 The second subcategory \u201cstatistics\u201d relates to the solid statistical knowledge needed to understand AI, which was mentioned by half of the experts. The basis is statistics. (...). So that's the basis, because these learning AI methods are all based on statistics.Interview 5 This subcategory also accounts for the importance of understanding probabilities and their application within medicine. Especially with AI-based applications, statistical knowledge will play a key role in the interpretation of results, which is further addressed in the second main category. Half of the interviewed experts mentioned the need for ethical competencies related to the use of AI in medicine, which is captured in the third subcategory \u201cethics.\u201d And then just ethical competencies and I think that has a high requirement (...)Interview 10 The use of AI-based applications in medicine requires adequate ethical competencies to address the new challenges arising from interaction with patients and the use of their data. This refers not only to the well-known \u201cblack-box\u201d phenomenon of deep learning or to potential bias from unrepresentative training data but also to topics such as the medical self-image and the physician\u2013patient relationship. Although ethics has a long tradition within medical curricula, it also needs to adapt to new technological developments in medicine to address the associated challenges and discussions. The last subcategory of the first main category, \u201cdata protection and regulation,\u201d summarizes the need for an understanding of data protection laws and regulations concerning the use of AI in the clinical context, mentioned by 4 of the interviewed experts.(...)
where we have to have a good idea of how we can use it, but also what the legal limitations of the whole thing are.Interview 10 The need for an understanding of data protection laws does not only apply to the use of AI in medicine but is of increasing significance due to the accelerated digitalization of medicine. An understanding of the regulation of AI use in medicine can help to prevent uncertainties and potential disapproval by users. The second main category, \u201cinterpretation,\u201d captures the importance of interpreting and evaluating the results provided by AI-based applications in medicine. This main category summarizes the statements related to the evaluation of results and highlights the knowledge and competencies needed to address all associated challenges. The first subcategory \u201ccritical reflection\u201d addresses the need for adequate knowledge and understanding to critically question the results yielded by AI-based applications.(...) also of the possibilities to critically question these things.Interview 4 The ability to critically reflect on and question the results shows the importance of adequate teaching of content relating to AI in medicine. As with any traditional technology or application, AI-based applications are not free of mistakes, which in the clinical context can have significant consequences. As users need to be aware of potential consequences and risks associated with the results provided by AI, the second subcategory \u201cassociated risks\u201d reflects the answers of 5 of the interviewed experts:(...) also what are the, yes, risks? What can go wrong? Well, the AI also makes mistakes, of course.Interview 2 One of the most frequently mentioned risks was related to false-positive results provided by AI. Without any critical questioning of the results, this can lead to unnecessary treatments for patients. Although this might be of minor significance when it merely prompts an additional physical examination, it could also lead to additional exposure to radiation or to punctures. Although false-positive results can lead to more immediate negative consequences, false-negative results can likewise be of major significance if a disease goes unrecognized and untreated. Both false-negative and false-positive results highlight the need to be aware of the risks associated with the results of AI-based applications in medicine. Furthermore, critical reflection on the results is connected not only to the potential associated risks but also to an understanding of the data that were used to train AI applications. The third subcategory of the second main category, \u201cdata basis,\u201d represents the statements of 4 of the experts and describes the need for a good understanding of, and reflection on, the data used in the development process of the AI-based application. And, of course, you also have to think about the data that might be fed into it now, do they make sense? Are they representative?Interview 2 Both are important requirements for interpreting the results and are closely associated not only with the other subcategories of this main category but also with the subcategories of the first main category. Without a basic understanding of statistics and of how AI-based applications work, it is hard to understand the need for representative data samples. Potential bias makes ethical competencies necessary to interpret and critically question the results based on the data basis.
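To make the discussion of false-positive results above concrete, consider the positive predictive value: even an accurate application produces mostly false alarms when a disease is rare. A small worked example with illustrative numbers follows.

```python
# PPV from sensitivity, specificity, and prevalence (Bayes' rule on rates).
def positive_predictive_value(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence              # true-positive rate in population
    fp = (1 - specificity) * (1 - prevalence)  # false-positive rate in population
    return tp / (tp + fp)

# 90% sensitive, 95% specific, 1% prevalence -> only ~15% of positives are real,
# which is why critical reflection on AI outputs matters.
print(round(positive_predictive_value(0.90, 0.95, 0.01), 2))  # 0.15
```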
The \u201cdata basis\u201d subcategory refers not only to the need to understand whether the data basis is representative of the current patient but also to the pressing need to understand that current AI applications have very narrow use cases. To prevent false diagnoses and their consequences, it is necessary to critically reflect on the unreliable results that can arise when an application is used outside its specific use case. Analysis of the interviews yielded a third main category named \u201capplication.\u201d This category comprises 2 subcategories and summarizes the requirements for applying AI-based applications in clinical practice. The first subcategory \u201cpractical skills\u201d addresses the practical skills required to use AI-based applications of any kind. In clinical practice, the most important thing is actually the practical application.Interview 1 This subcategory further includes the basic technological understanding and skills needed to operate any software application. Based on the feedback from half of the interviewed experts, this includes, for example, competency in using hardware such as desktop computers, keyboards, and mice, as well as operating software used in the clinical context. Moreover, this subcategory summarizes the knowledge and understanding needed to apply AI software within the clinical workflow. Users need to understand whether it makes sense to use the applications and how they can be used to improve the workflow in clinical practice. The second subcategory \u201ctrust\u201d represents a base layer needed to use any technology. This subcategory relies on adequate knowledge (first main category) and teaching within the medical curricula. The absence of such teaching in the medical curriculum could lead not only to a lack of trust and potentially to disapproval of the application but also to blind trust, which can have significant consequences for the interpretation of results. Creating trust, but not blind trust.Interview 12 Creating trust, not only in AI-based applications themselves but also in one\u2019s own competencies when applying them in the clinical context, is one of the challenges that can be addressed as part of medical education. The results indicate the significance of integrating teaching content regarding AI into the medical curriculum. All experts interviewed agreed on the importance of teaching AI content in the medical curriculum, which echoes the current state of the literature ,8,19. Al This agreement is reflected in the definition of the 3 main categories . Most experts recommended that medical students should only receive basic knowledge of current AI models and terminology, as they will not be required to develop or train AI-based applications themselves, which is also in line with the recommendations of current publications ,20. Howe The practical challenges and barriers of implementing new teaching content, such as the need for the renewal of accreditation or sufficient knowledge among the teaching staff, further reinforce the experts\u2019 recommendation to facilitate only a basic level of AI knowledge acquisition as part of medical education . The exp For many of the interviewed experts, the ability to interpret results provided by AI applications with respect to the data basis and the associated risks is highly important when it comes to preferred teaching outcomes.
The results from this study confirm the imminent need for an early and conscientious implementation of curricula teaching content on AI, as suggested by earlier studies ,26,27. FThe experts\u2019 statements reveal a disagreement and lack of standardization in the definition of AI. Recent publications on the integration and teaching of AI within medical education commonly lack a specific and dedicated definition of AI ,8,19. GiThe need for standardization in the definition of AI as a foundation for related teaching content is further emphasized by the potential ethical challenges and issues that may arise from the use of different types of AI in a clinical context. For example, in the context of bias, clinical decision support systems can be subject to bias arising from the unintended transfer of existing bias on the part of the developers ,32. FocuAlthough the integration and teaching of AI as part of medical education have been of increased scientific interest in recent years, further highlighting the need for early and adequate education of medical students, the available research is still limited ,8,19,34.The results of this study highlight the need for comparability, as the experts\u2019 statements not only confirm the results of current literature but further specify and highlight the importance of awareness of associated risks, critical questioning of the results, as well as the significance of basic technical and technology skills ,25,36. FThere are several limitations of this study. Using qualitative research methods, the level of generalization is limited due to a small sample size. Although we sought an interdisciplinary approach to the data collection, the results of the study still represent the subjective opinions of the participants. Furthermore, the results are likely to be subject to a selection bias, as no randomization was used and participants were recruited through recommendation. As only a limited number of standardized questions within the data collection were used, interviewer\u2019s bias is also possible. Additionally, as the data collection was conducted through a web-based service provider, technical difficulties may have affected the quality of the collected data.This study aimed to explore and define relevant knowledge and understanding concerning the subject of AI in medicine as part of the medical curriculum. The results of the study, based on qualitative content analysis of expert interviews, indicate that knowledge and understanding of the fundamentals of AI, statistics, ethics, and privacy and regulation should be part of medical education. Furthermore, medical students need to be able to interpret and critically reflect on the results provided by AI, considering the associated risks and data basis. The development of trust in AI as well as the acquisition of related practical skills, including the need for basic technological skills, should be an indispensable part of medical education.As AI in medicine is likely to become increasingly significant in the future, medical users will need adequate knowledge and understanding to use it effectively. Due to the new opportunities and challenges associated with the use of AI-based applications in medicine, medical education needs to adapt to those changes, to provide future generations of physicians with the necessary knowledge and competencies. The research aims to emphasize the importance of integrating teaching content related to AI into the medical curriculum. 
The results provide implications for the creation of new teaching content based on interdisciplinary data collection. Furthermore, the results further imply a need for standardization in the definition of AI as a foundation for associated teaching content and the integration of AI into medical education. Subsequent research should explore the practical implications of this study and how the results can be transferred into the medical curriculum. Furthermore, research and the development of tools are needed to assess the current knowledge and competencies of medical students regarding the use of AI in medicine. This will not only have practical implications for the creation of new teaching content but will rather allow an assessment of the success of new teaching content in the future."} +{"text": "Factors of Perceived Trustworthiness from Mayer, Davis, and Schoorman\u2019s Model of Trust and made propositions to define AI\u2019s role in trustful scholarly communication.In this article, we reflect on the pros and cons of artificial intelligence (AI)-augmented scientific writing for more comprehensible research towards society to gain trust for science-led policy. For this purpose, we integrated our thoughts into the Letter to the editorIn ancient Greek drama, the deuteragonist is the second actor in the play. While interacting closely with the protagonist to support and assist, the deuteragonist can also assume a role of conflict or rivalry; ultimately helping to develop the main character and shape the overarching narrative of the play .Since artificial intelligence (AI) came to the stage in 1956 ,\u00a0it has Balancing the benefits and risks of AI in scholarly communicationThe concept of AI assisting researchers in scientific writing is appealing. If executed correctly, AI could support more comprehensible research and increase acceptance of science-led policy by the general public. However, concerns have been raised about issues such as plagiarism and obscurity in research, particularly with unregulated use of AI-driven scientific writing assistance (SWA) and chatbots ,18. In tThe prosSWA has the potential to make research more accessible and understandable by facilitating core findings for wider target groups, including non-scientific stakeholders, policymakers, and the public (benevolence+) . In addiAI can generate various visual representations of research, ranging from simple semiotic visualizations to more complex schematic views. These offer effective and easily comprehensible data communication for other researchers and a broader audience in a non-textual form (ability+).Humans are prone to errors, especially when performing repetitive and monotonous tasks like formatting citations and proofreading . AI autoAI can quickly analyze and summarize vast amounts of scientific literature or content data, enabling researchers to uncover patterns, correlations, and trends not easily identifiable through manual analysis alone (ability+). This can also contribute to improve the overall quality of the research process; for example, in identifying relevant research questions and interpreting data, resulting in a more targeted design of original research and literature reviews (benevolence+) ,27.The cons and possible measures-). 
Since AI chatbots are based on large language models (LLMs), their answers are always a function or a derivative of their repertoire and carry a risk of bias in scientific writing, based on possible biases in the training datasets themselves. This may include racial, sexual, or religious bias, making the results less inclusive -30 . This should be addressed by fostering the establishment of balanced training datasets, bias detection and correction algorithms, as well as close human supervision and feedback mechanisms. Automation of tasks, e.g., the aforementioned SWA, may stoke fear of job loss . Mitigating such concerns must be enforced by public dialogue, addressing integrative scenarios, including supervisory and collaborative role definitions, as well as adequate education and reskilling programs for people in an increasingly AI-automated world. The crystalline nature of LLMs cuts chatbots off from current events. More importantly, since they are neither living nor experiencing culture, LLM chatbots stand at an insurmountable epistemic distance from\u00a0the user\u2019s world of thought, which ultimately makes current chatbots incapable of generating a priori ideas or conclusions for society (integrity-). This could be closed by the aforementioned human-AI collaboration, as well as by pairing LLMs with augmented reality, evolutionary algorithms, generative models, fine-tuning, and zero-/few-shot learning to generate inclusive novel ideas and writings -36. Overreliance on AI for writing tasks can lead to a decline in critical thinking and writing skills among researchers, as well as an impoverishment of creativity and inclusivity. This could potentially jeopardize the integrity of scientific research, lowering people\u2019s trust in it, and open the door for predatory journals to misuse AI to mass-produce low-quality, inaccurate, and misleading content, based on monetary interests . We suggest the encouragement of education and training in young researchers on critical thinking and writing skills, highlighting that AI is a tool of assistance, not a replacement for the human intellect. Lack of explainability and the phenomenon of \u201challucination\u201d reduce confidence in AI-generated text. We advise fact-checking and the use of verification systems to confirm the accuracy of AI-generated text before publication or use, predominantly by human reviewers, but also by other AI-based systems, as well as the development of AI algorithms specifically designed to detect and correct hallucinations and other errors in AI-generated text. Implementation and effective utilization of AI in scientific writing requires technical expertise and resources. Not all researchers may have access to it or be comfortable with using it, potentially creating a split in the scientific community ,35. Additionally, access to AI tools may be limited due to factors such as cost or institutional constraints. This could be counteracted by faculties providing comprehensive training to researchers on the use of AI tools, ensuring that they are comfortable and proficient in their application. Limitations of AI for Scientific Writing The inherent risk of introducing bias, the phenomenon of \"hallucination,\" a lack of originality, limited accessibility, and insufficient transparency and explainability hinder AI from becoming a protagonist in scientific writing at its current technological stage and without clear guidelines. For now, we advocate the general disclosure of AI usage in the methods section of research articles as an initial step.
The declaration should include the type of AI used and clarify the role the AI had in the creation of the manuscript. In addition to that, the corresponding scripts and prompts, as well as chatbot parameters, should accompany the article as supplementals and data repositories. To guarantee full transparency, the following measures should be developed and established: (a) standardized reporting frameworks customized to the AI model in use. This ensures that essential AI-related details are reported consistently and thoroughly, providing additional clarity to the reader. (b) Introduce a visual cue on the published article, in analogy to open access labeling, by creating an international AI usage badge Figure ; a visuaConclusionsThe boundary that separates AI as a mere tool and a potential autonomous contributor - a protagonist, akin to a coauthor - gradually becomes less distinct as its technology and usage evolve. However, the future role of AI as a protagonist in research has to be postponed. For now, AI will remain in the second line, making us protagonists better authors through its help and our consecutive reflection on the rules of good scientific writing for a more comprehensible and trustful scholarly communication."} +{"text": "The field of regenerative medicine is constantly advancing and aims to repair, regenerate, or substitute impaired or unhealthy tissues and organs using cutting-edge approaches such as stem cell-based therapies, gene therapy, and tissue engineering. Nevertheless, incorporating artificial intelligence (AI) technologies has opened new doors for research in this field. AI refers to the ability of machines to perform tasks that typically require human intelligence in ways such as learning the patterns in the data and applying that to the new data without being explicitly programmed. AI has the potential to improve and accelerate various aspects of regenerative medicine research and development, particularly, although not exclusively, when complex patterns are involved. This review paper provides an overview of AI in the context of regenerative medicine, discusses its potential applications with a focus on personalized medicine, and highlights the challenges and opportunities in this field. Artificial intelligence (AI) refers to the development of computer systems that can perform tasks that would typically require human intelligence. This includes learning, reasoning, perception, and problem-solving. AI systems are designed to mimic human cognition and to work autonomously, learning from data and prior experiences to improve their performance over time ,2. The cDeep learning is a subset of machine learning that uses artificial neural networks to learn from data. These neural networks are designed to mimic the structure and function of the human brain, allowing them to identify more complex patterns and make decisions based on the data they have been trained with ,11. DeepRegenerative medicine is a rapidly evolving field that seeks to restore or replace damaged or diseased tissues and organs through advanced technologies such as stem cell-based therapies, gene therapy, and tissue engineering ,22. WithThis paper offers a distinct contribution by synthesizing and analyzing the available literature on AIs applications in regenerative medicine, providing an overview, identifying gaps in the existing literature, and proposing novel research directions. 
By adopting a holistic perspective, we not only consider empirical studies but also include theoretical perspectives and expert opinions. This approach broadens the scope of our analysis and allows for a more comprehensive understanding of the topic. By incorporating diverse sources of evidence, our manuscript offers a unique perspective that is not limited to a single methodological approach. Thus, our study presents a novel synthesis of the literature, shedding light on the potential of AI to revolutionize regenerative medicine. AI has become a crucial tool for performing computational simulations and in silico studies in medical applications, offering several advantages, such as lower costs and faster results compared with other medical investigation approaches, such as clinical and laboratory methods ,26,27. AI has also been applied to drug discovery tasks such as de novo drug design and drug screening . In conclusion, AI has tremendous potential to revolutionize and accelerate the development of therapies in regenerative medicine. From enhancing drug discovery to optimizing tissue engineering and cellular therapies, AI can provide insights by analyzing vast molecular and genomic datasets that would be impossible for humans to perceive. While AI shows promise to advance regenerative medicine research and development, there are significant technical challenges that must be addressed before these technologies can be widely adopted. One of the significant limitations is the lack of large, high-quality datasets needed to train sophisticated machine-learning models. Regenerative medicine involves complex biological interactions that are difficult to fully capture in data. Additionally, developing accurate computational models that can simulate and predict cell behavior over time poses immense technical challenges due to our still-limited understanding of cellular and molecular pathways. Validating AI systems and gaining regulatory approval also requires extensive clinical testing, which takes considerable time and resources. Addressing concerns around data privacy, security, and bias and ensuring fair and equitable access to new tools is equally important. Moreover, obtaining clinician buy-in for technologies promising more effective personalized care will require overcoming adoption hurdles. Substantial ongoing research is still needed to overcome these limitations and translate AI\u2019s theoretical potential in regenerative medicine into real-world solutions that tangibly improve patient outcomes. Researchers, policymakers, healthcare providers, and AI developers must work together to develop appropriate safeguards, oversight mechanisms, and guidelines for using AI in regenerative medicine. As AI technologies continue to improve and more high-quality data become available, the opportunity for refining and customizing AI algorithms specifically for regenerative medicine purposes will increase. Moving forward, further innovations in areas such as machine learning, natural language processing, computer vision, and robotics have the potential to uncover new insights that could revolutionize how regenerative therapies are developed and delivered. Combining AI with other emerging technologies such as nanotechnology, genome editing, and 3D bioprinting may lead to unprecedented advances in creating personalized regenerative solutions. With the appropriate ethical framework and governance structures in place, the future of AI-driven regenerative medicine seems promising.
However, progress will depend on maintaining a human-centric approach that utilizes AI capabilities to serve the best interests of patients and society. Through multidisciplinary collaborations and responsible development and use of these technologies, we may one day realize the full potential of AI to usher in a new era of customized and effective regenerative therapies.

- AI can help accelerate drug discovery by analyzing large datasets to identify promising drug candidates and optimize drug properties.
- AI-enabled disease modeling can provide insights into disease mechanisms and aid in the identification of new therapeutic targets.
- AI can improve predictive modeling to identify patients who may benefit from regenerative therapies and optimize treatment plans.
- AI can enable the development of personalized medicine approaches based on a patient's genetic and health data.
- AI can optimize materials and fabrication methods for tissue engineering applications.
- AI can assist in identifying the most suitable cell types for cell therapies and in optimizing cell delivery and monitoring.
- AI can enhance the efficiency and accuracy of clinical trial design.
- AI can be used to monitor patients in real time to detect changes and risks early.
- AI can provide personalized patient education materials tailored to individual needs and preferences.
- AI can improve regulatory compliance through enhanced data analysis, traceability, and transparency.
- AI also has roles in related fields such as immunotherapy, genetic engineering, nanotechnology, and microfluidics, which can further advance regenerative medicine.

In summary, AI has the potential to significantly enhance various aspects of regenerative medicine research and development by analyzing large and complex datasets, identifying patterns and trends, and making accurate predictions to optimize processes and therapies. However, challenges related to ethics, data quality, and regulation must be addressed to ensure the safe and effective use of AI in regenerative medicine."} +{"text": "Introduction: Intercellular adhesion molecule 1 (ICAM-1) is a critical molecule responsible for interactions between cells. Previous studies have suggested that ICAM-1 triggers cell-to-cell transmission of HIV-1 or HTLV-1, that SARS-CoV-2 shares several features with these viruses via interactions between cells, and that SARS-CoV-2 cell-to-cell transmission is associated with COVID-19 severity. Taken together, these arguments suggest that ICAM-1 may be related to SARS-CoV-2 cell-to-cell transmission in COVID-19 patients. Indeed, time-dependent changes in the ICAM-1 expression level have been detected in COVID-19 patients. However, the signaling pathways consisting of ICAM-1 and the other molecules interacting with it have not been identified in COVID-19. For example, the current COVID-19 Disease Map has no entry for those pathways. Therefore, discovering unknown ICAM1-associated pathways will be indispensable for clarifying the mechanism of COVID-19.

Materials and methods: This study builds ICAM1-associated pathways by gene network inference from single-cell omics data and multiple knowledge bases. First, single-cell omics data analysis extracts coexpressed genes with significant differences in expression levels, with spurious correlations removed. Second, knowledge bases validate the models.
Finally, mapping the models onto existing pathways identifies new ICAM1-associated pathways.

Results: Comparison of the obtained pathways between different cell types and time points reproduces the known pathways and indicates the following two unknown pathways: (1) an upstream pathway that includes proteins in the non-canonical NF-κB pathway and (2) a downstream pathway that contains integrins and cytoskeleton or motor proteins for cell transformation.

Discussion: In this way, data-driven and knowledge-based approaches are integrated into gene network inference for ICAM1-associated pathway construction. The results can contribute to repairing and completing the COVID-19 Disease Map, thereby improving our understanding of the mechanism of COVID-19.

Elucidating the underlying mechanisms of coronavirus disease 2019 (COVID-19) remains a global issue. Uncovering the mechanism requires fully understanding the COVID-19-specific interactome, a complex network of interactions among different components. Previous studies on COVID-19 have attempted to provide insights into interactions within cells. In contrast, interactions between cells have not been closely examined.

This study focuses on intercellular adhesion molecule 1 (ICAM-1), encoded by ICAM1. ICAM-1 is a transmembrane glycoprotein expressed on leukocytes, vascular endothelial cells, and respiratory epithelial cells. Its differential expression is critical for proinflammatory immune responses and viral infection. Additionally, ICAM-1 enables interactions between cells by controlling leukocyte migration, homing, and adhesion, with signaling both from outside to inside the cell (outside-in) and from inside to outside the cell (inside-out). Previous studies have related ICAM-1 to the cell-to-cell transmission of viruses such as HIV-1 and HTLV-1, and similar intercellular interactions have been described for SARS-CoV-2. There are also in vitro experimental results on the expression level of ICAM-1 in SARS-CoV-2-infected cells, and one study shows time-dependent ICAM-1 expression level changes in COVID-19 patients. Consequently, it is significant to uncover the ICAM1-associated pathways to better understand the interactions between cells in the context of COVID-19.

Nevertheless, the interactions arising from ICAM-1 are not explicitly recognized as indispensable in the case of COVID-19. In particular, there is little insight into the signaling pathways surrounding ICAM-1, that is, the upstream and downstream signaling cascades that occur upon the functional activation of ICAM-1 and the specific signaling molecules interacting with ICAM-1. To build the ICAM1-associated pathways, our framework infers gene networks in a stepwise manner, and undirected graphical models are edited as dependency graphs with validated relationships.

Steps 1 and 2 are dedicated to gene network inference purely from data, and Step 3 validates the data-driven gene network with domain knowledge. In this study, we call the rectification of data-driven objects with knowledge a data-driven and knowledge-based (DD-KB) approach. Step 1 involves obtaining the COVID-19-specific differentially expressed genes (DEGs) and a network of differentially coexpressed genes (DCGs) via single-cell omics data analysis. Here, DEGs are the genes whose expression levels differ significantly between COVID-19-positive patients and COVID-19-negative controls, and DCGs are coexpressed DEGs. Step 2 removes spurious edges from the correlation networks, thereby building ICAM1-associated pathways.
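As a concrete illustration of Step 1, the following sketch chains the three subroutines (dimensionality reduction, clustering, and the Wilcoxon rank-sum test) the way they are commonly composed in Scanpy, the toolkit this study reports using for preprocessing. The input file name and the `condition` column with `COVID19`/`control` labels are assumed placeholders, not the study's actual data schema, and Leiden clustering stands in as a representative choice where the excerpt does not name a specific algorithm.

```python
# Sketch of Step 1: dimensionality reduction, clustering, and a Wilcoxon
# rank-sum test for DEG candidates, composed with standard Scanpy calls.
import scanpy as sc

adata = sc.read_h5ad("covid19_cells.h5ad")  # hypothetical input path

# Dimensionality reduction: PCA, then a k-nearest-neighbor graph, then UMAP.
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.umap(adata)

# Clustering on the neighborhood graph (Leiden as a representative method).
sc.tl.leiden(adata, key_added="cluster")

# Wilcoxon rank-sum test between COVID-19-positive cells and controls;
# significant genes are DEG candidates in the sense used above.
sc.tl.rank_genes_groups(adata, groupby="condition", groups=["COVID19"],
                        reference="control", method="wilcoxon")
degs = sc.get.rank_genes_groups_df(adata, group="COVID19")
print(degs.query("pvals_adj < 0.05").head())
```

Coexpression among the resulting DEGs (the DCGs of the text) can then be scored with pairwise correlations before Step 2 prunes the spurious edges.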
Through the framework, single-cell omics data and multiple knowledge bases are integrated, which allows the inference of gene networks containing components absent from the current COVID-19 Disease Map. Steps 4 and 5 are pathway construction steps. In Step 4, a functional annotation tool is used to convert genes into the encoded proteins. Pathway mapping and unification (Step 5) refine the results into the final outputs, the ICAM1-associated pathways. In this subsection, explanations for each step of the framework are provided.

Single-cell omics data analysis (Step 1) extracts the ICAM1-associated DCGs from the omics data. This step adopts a combination of standard methods defined as three subroutines, i.e., dimensionality reduction, clustering, and a Wilcoxon rank-sum test, applied for each gene pair and each cell pair. For dimensionality reduction, the distances of a k-nearest neighborhood graph are computed and thresholded at the k closest neighbors defined for data points of the manifold in Euclidean space; following PCA, UMAP embeds the data for clustering.

Spurious correlation removal (Step 2) prunes the edges of the weighted gene coexpression network (WGCN), in which each edge is weighted by the correlation between the two genes, by testing whether each pairwise correlation persists conditional on other variables. The zero-order partial correlation is the Pearson correlation $r_{ij}$ between genes $i$ and $j$, and the first- and second-order partial correlations, conditioned on genes $k$ and $l$, are

$$r_{ij \mid k} = \frac{r_{ij} - r_{ik}\,r_{jk}}{\sqrt{\left(1 - r_{ik}^{2}\right)\left(1 - r_{jk}^{2}\right)}}, \qquad r_{ij \mid kl} = \frac{r_{ij \mid k} - r_{il \mid k}\,r_{jl \mid k}}{\sqrt{\left(1 - r_{il \mid k}^{2}\right)\left(1 - r_{jl \mid k}^{2}\right)}}.$$

Through Steps 1 and 2, gene networks surrounding the ICAM1 gene of interest are inferred without guaranteeing the validity of each edge. Namely, possible errors within the data, such as noise, could result in nodes or edges with no biological meaning. Hence, the models require corroboration and validation with heuristics based on domain knowledge. To corroborate and validate each relationship of the gene networks (Step 3), we query multiple knowledge bases, including Pathway Commons Web Service 12.

The nodes of the DD-KB gene networks are genes, while the nodes of the KEGG pathways are primarily proteins. Therefore, this gap needs to be filled before overlaying the DD-KB gene networks onto the KEGG pathways. The DAVID functional annotation tool 6.8 allows us to convert the gene nodes into the encoded proteins (Step 4).

In order to examine what types of pathways are activated, we conduct pathway enrichment analysis by mapping the protein node lists and edge lists of the DD-KB gene networks onto the KEGG pathways (Step 5). In particular, the DD-KB gene networks and the KEGG pathways in the KEGG Markup Language (KGML) format are unified using Cytoscape 3.9.0, resulting in the ICAM1-associated pathways, visualized in the yFiles Hierarchical Layout.

We applied the aforementioned framework to two COVID-19 datasets, comparing the ICAM1-associated pathways between different locations where ICAM1 is expressed (case study 1) and between different time points starting from hospitalization (case study 2).

Inputting the search term [(COVID-19 OR SARS-CoV-2) AND gse(entry type)] AND “Homo sapiens” AND h5ad to the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO) provided the dataset for case study 1. Likewise, GSE180578, another transcriptome omics dataset, was fetched for case study 2.

Not limited to ICAM1, our framework is reusable in other contexts for mixing quantitative data and domain knowledge into building models that capture pathways. To demonstrate the generality of the framework, in case study 2 we also applied the framework to other genes related to the interaction between cells, including ACTB and C15orf48. Here, in this main manuscript, we focus only on the results of the ICAM1-associated pathways, considering the purpose of this study; the results for the other genes are provided separately.

Before proceeding to the further steps, the single-cell omics data underwent quality control. Quality control included filtering, scaling, and normalization by Scanpy version 1.8.2. The framework was applied to the two datasets.
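A minimal sketch, assuming Scanpy's standard QC utilities, of what this filtering, normalization, and scaling step could look like. The 5,000-genes-per-cell ceiling and the three-cells-per-gene floor echo the filtering thresholds reported in the results that follow, while the 20% mitochondrial cutoff and the input path are assumptions of this sketch, not values stated by the study.

```python
# Rough sketch of single-cell quality control with Scanpy:
# gene/cell filtering, mitochondrial screening, normalization, scaling.
import scanpy as sc

adata = sc.read_h5ad("covid19_cells.h5ad")  # hypothetical input path

# Flag mitochondrially encoded genes (human "MT-" prefix) and compute
# per-cell QC metrics, including the mitochondrial read percentage.
adata.var["mt"] = adata.var_names.str.startswith("MT-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)

# Drop genes detected in fewer than three cells.
sc.pp.filter_genes(adata, min_cells=3)

# Drop cells with more than 5,000 expressed genes (likely doublets)
# and cells with a high mitochondrial fraction (assumed 20% cutoff).
sc.pp.filter_cells(adata, max_genes=5000)
adata = adata[adata.obs["pct_counts_mt"] < 20.0].copy()

# Exclude the mitochondrial genes themselves, then normalize and scale.
adata = adata[:, ~adata.var["mt"]].copy()
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.scale(adata)
```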
The machine configuration was as follows: Python 3.7, a Tesla V100-SXM2-16GB GPU, and 51.01 GB of RAM.

The data initially contained 77,650 cells × 24,714 genes. Removing the cells with a high proportion of mitochondrial RNA resulted in 15,220 cells. After filtering, the whole dataset contained 68,734 cells × 24,001 genes. Among this dataset, we used the data of four cell types: 21,819 genes in 15,481 SARS-CoV-2-infected cells (cells with SARS-CoV-2 transcripts detected). The doublet discrimination provided 14,723 cells. The filtering processes excluded 8,916 cells with more than 5,000 expressed genes, 700 genes detected in fewer than three cells, mitochondrially encoded genes, and cells with a low percentage of detected genes. One of the resulting gene clusters contained ICAM1, and this cluster was made up of 178 DCGs of ICAM1. The results of cellwise clustering, the heatmap of DCGs, and the rank-sum test are shown in the corresponding figures. The p-values of ICAM1 expression variation and the computation times (sec.) for each cell type are as follows: p = 0.250, time = 51.6 (infected AT1 and AT2); p = 2.50E-4, time = 26.1 (migratory DC); p = 2.57E-12, time = 46.6 (TRAM2); p = 7.55E-2, time = 109.0 (MoAM2); and p = 0.241, time = 178.9 (summation).

The genewise analysis extracted the 18 gene clusters with differential expression patterns specific to COVID-19. Excluding the duplicated genes yielded 1,434 DEGs in 9,050 single cells. The results of genewise clustering, the heatmap of DEGs, and the rank-sum test are shown in the corresponding figures.

Removal of spurious correlations yielded undirected graphical models. The final dependency graphs are shown in the corresponding figures. The nodes in the dependency graphs were annotated with protein names, which helped us map the nodes onto the KEGG pathways in the next step.

Pathway mapping discovered which subpathways within existing signaling pathways reflect the activity of a group of genes that vary and are coexpressed in a disease-specific manner in the observed gene expression data. The ICAM1-associated pathways for each cell type are shown in the corresponding figures. In the ICAM1-associated pathway for migratory DCs, in contrast to that for infected AT1 and AT2, we find NFKB2 (gene encoding NF-κB subunit 2), RELA (gene encoding the NF-κB p65 subunit), JUN, and CHUK. These molecules lie on the non-canonical NF-κB pathway, which is an upstream pathway of ICAM1, together with MAP3K14; these molecules are known as triggers of the non-canonical NF-κB pathway. SOD2, for example, is a gene that is not found in the other pathways and could not be found without taking the summation; SOD2 is known as a gene whose expression variation has been confirmed to accompany that of ICAM1 in COVID-19.

In case study 2, the p-values of ICAM1 expression variation and the computation times (sec.) for each time point were as follows: p = 1.50E-05, time = 77.9 (day 1); p = 3.09E-2, time = 55.4 (day 5); and p = 3.04E-06, time = 56.3 (day 10). This manuscript does not explain the other detailed results of the single-cell omics data analysis because the procedures were the same as in case study 1. These other results can be found at 10.6084/m9.figshare.23590755. As for the remaining steps, we explain the results of spurious correlation removal and pathway construction.

As in case study 1, the original dataset underwent single-cell omics data analysis, and the ICAM1-associated DCGs extracted from the omics data are shown in the corresponding figures, as are the pathways resulting from combining the KEGG pathways with the partial correlation networks of ICAM1.
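To make the spurious correlation removal reported above concrete, here is a small numpy sketch that prunes coexpression edges using the first-order partial correlation defined in the methods. The synthetic data, the 0.2 threshold, and the single-conditioner pruning rule are illustrative assumptions rather than the study's exact procedure.

```python
# Minimal numpy sketch of spurious-edge removal with low-order
# partial correlations: an edge is kept only if no single conditioning
# gene explains the pairwise correlation away.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_genes = 500, 6
expr = rng.normal(size=(n_cells, n_genes))
expr[:, 1] += expr[:, 0]   # gene 1 depends on gene 0
expr[:, 2] += expr[:, 1]   # gene 2 depends on gene 1, so the 0-2 link is indirect

r = np.corrcoef(expr, rowvar=False)   # zero-order (Pearson) correlations

def partial_corr(r, i, j, k):
    """First-order partial correlation of genes i and j given gene k."""
    num = r[i, j] - r[i, k] * r[j, k]
    den = np.sqrt((1.0 - r[i, k] ** 2) * (1.0 - r[j, k] ** 2))
    return num / den

threshold = 0.2   # illustrative cutoff, not the study's parameter
edges = []
for i in range(n_genes):
    for j in range(i + 1, n_genes):
        if abs(r[i, j]) < threshold:
            continue   # no zero-order edge to begin with
        # Keep the edge only if it survives conditioning on every other gene.
        if all(abs(partial_corr(r, i, j, k)) >= threshold
               for k in range(n_genes) if k not in (i, j)):
            edges.append((i, j))

print("retained edges:", edges)   # the indirect 0-2 edge should be pruned
```

Conditioning on gene 1 collapses the 0-2 correlation to roughly zero, so that spurious edge is removed while the direct 0-1 and 1-2 edges survive, which is the behavior the text attributes to Step 2.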
The common characteristics and the unique attributes of the found pathways for each time point are as follows. One common feature is the presence of molecules responsible for the immune response across the three time points. The immunoreactive molecules include the chemokine ligands (CXCL1, 2, 3) induced by interleukin-1 (IL1B) or TNF-α-induced protein 6 (TNFAIP6). These molecules lie along the green-colored molecules, such as the NF-κB p105 subunit (NFKB1) and the NLRP3 inflammasome, which are related to proinflammatory effects and activation of the NF-κB pathway or the MAPK pathway. Other common molecules include C15orf48, chemokine ligands acting as macrophage inflammatory proteins (CCL3 and CCL3L1), transmembrane proteins (TMEM176A/B), nuclear factor erythroid (NFE2), peptidyl arginine deiminase 4 (PADI4), the RAS oncogene family member RAB20, and the proto-oncogene ETS2. These are also related to inflammation, immune response, or membrane fusion.

Unique attributes at individual time points include PHLDA2 and the chemokine ligand CXCL5; the chemokine ligand CCL4L2; and FOLR3, haptoglobin (HP), and the chemokine ligand CXCL16. These are likewise related to inflammation and the immune response.

The obtained pathways do not contain the molecules in the NF-κB pathway or the MAPK pathway. The absence of these molecules may be due to insufficient gene coverage in the original omics data.

Comparing the obtained ICAM1-associated pathways in case study 1 with the COVID-19 Disease Map reveals existing and unknown nodes. For example, MAP2K3, MAPK14, JUN, FOS, ITGA2, ITGB1, RSAD2, OAS, and STAT2 have already been mapped onto the COVID-19 Disease Map, while RELB, ITGAL, CDC42, ACTB, CD40, DCTN1, BCL3, and CD83 in the obtained pathways are still absent from the current COVID-19 Disease Map.

Likewise, we can identify the difference between the obtained ICAM1-associated pathways and the current COVID-19 Disease Map from the results of case study 2. For instance, IL1B, NFKB1, NLRP3, PYCARD, and CASP1 are listed in the COVID-19 Disease Map, while there are molecules absent from it, including CCL3, 3L1, 4L2, CXCL1, 2, 3, 5, 16, TNFAIP6, C15orf48, TMEM176A/B, NFE2, PADI4, RAB20, ETS2, PHLDA2, FOLR3, and HP.

The results from both case studies indicated that the NF-κB pathway would likely be activated, which reflects that our framework can reproduce the already-known fact that the NF-κB pathway is activated in COVID-19, as seen in KEGG's COVID-19 pathway (hsa05171). As a new insight, because they are missing from the current COVID-19 Disease Map, the results imply that COVID-19 involves the following two up/downstream pathways:

• An upstream pathway with proteins on the non-canonical NF-κB pathway
• A downstream pathway with integrins and cytoskeletal elements associated with actin and the motor protein dynein for cell transformation

The non-canonical NF-κB pathway is reasonable because it is relevant to the proinflammatory response in viral infections such as COVID-19. It is also credible that TNFRSF11A is found only in the pathway of DCs, since TNFRSF11A is known to function in DCs. The involvement of actin (ACTB) and the cell division control protein 42 homolog (CDC42) is also plausible, as these molecules are essential for VS formation in HTLV-1 cells. Although further observation would be needed to confirm the downstream pathway, these findings are consistent with the known biology of intercellular viral transmission.

The need to identify unknown pathways has accelerated work related to gene network inference in COVID-19.
As a summary of contributions, this study discovered novel ICAM1-associated pathways currently absent from the COVID-19 Disease Map: while previous analysis and curation work found the canonical NF-κB pathway, this study found the non-canonical NF-κB pathway and a downstream pathway that may lead to MTOC formation, subject to observation.

In addition to the scientific findings, our framework, which integrates single-cell omics data analysis and model validation using multiple knowledge bases, is also original and versatile. In particular, the single-cell omics data analysis in Step 1 and the model validation by multiple knowledge bases in Step 3 make it possible to construct pathways in different cases.

The existence of undirected edges within the final pathways would be a limitation of our framework. These edges without direction arise from correlation networks, which find direct and indirect relationships but do not distinguish between causality and correlation.

Consequently, future work will include tasks such as inferring causal networks based on data and knowledge via Bayesian networks or other observational causal discovery techniques.

Overall, the ICAM1-associated pathways constructed from the data and knowledge in this study will expedite the repair and completion of the COVID-19 Disease Map for a deeper understanding of SARS-CoV-2 pathogenesis."} +{"text": "Capillary leak syndrome (CLS) is a rare, potentially life-threatening systemic disease with a mortality rate of more than 30%. Its major clinical manifestation and diagnostic basis are systemic hyperedema. However, we lack knowledge about the presence of severe myocardial edema in patients with CLS. If myocardial edema cannot be detected, it becomes a dangerous hidden condition that threatens the safety of patients' lives. With the routine application of point-of-care critical ultrasound (POC-CUS) in clinical practice, we found that 2 of 37 (5.41%) CLS patients had severe myocardial edema as the main manifestation. It is also necessary to distinguish severe myocardial edema from myocardial noncompaction in newborn infants. This paper will help readers develop a deeper understanding and correct management of CLS and, thus, help to improve the prognosis of patients. This article also suggests the necessity of routinely implementing POC-CUS in the neonatal intensive care unit.

The first infant's birth weight was 2260 g, and the Apgar scores were 1, 7 and 10 at 1, 5 and 10 min after birth. He was admitted to the hospital due to dyspnea 140 min after resuscitation from asphyxia. At birth, there was only a heartbeat, with no spontaneous breathing and a loss of muscle tone. The patient was transferred to our neonatal intensive care unit (NICU) after resuscitation with endotracheal intubation, positive pressure ventilation and chest compressions. Physical examination on admission showed that the premature infant had a poor appearance, a poor mental response, hypotonia and the disappearance of primitive reflexes. Arterial blood gas analysis showed pH 7.177, BE −16.6 mmol/L and a blood lactic acid value of 16.0 mmol/L. Severe disseminated intravascular coagulation (DIC) occurred rapidly after admission and was treated with heparin and low-molecular-weight dextran. However, during the recovery stage of DIC, the patient gradually developed systemic, highly nonpitting edema, a significantly reduced urine volume and hypotension, and blood biochemical tests showed that the patient's plasma albumin was significantly decreased.
Based on these findings, we made a diagnosis of capillary leak syndrome (CLS). Point-of-care critical ultrasound (POC-CUS) examination showed that the infant had severe biventricular edema in addition to obvious lung edema. The ventricular walls of the infant were significantly thickened, and the size of the heart chambers was significantly reduced, with a reduction in ejection fraction and cardiac output. The thickness of the interventricular septum was 0.45 cm, and the thickness of the left ventricular wall was 0.54 cm.

The second infant had mild asphyxia at birth (the Apgar score was 6 at 1 min after birth). The infant was admitted to our NICU 2 h after birth due to hyporesponsiveness. Physical examination on admission showed a full-term appearance and a slightly poor response. His vital signs were stable, with normal blood pressure. The muscle tone was slightly low, and the primitive reflexes were slightly diminished. Routine POC-CUS examination showed marked thickening of the biventricular wall and narrowing of the cardiac chambers, especially in the right heart, with a small amount of pericardial effusion (Figure).

Capillary leak syndrome (CLS) is a rare, potentially life-threatening systemic disease caused by increased vascular permeability due to cytokine damage to the endothelium, which causes intravascular fluid and proteins to shift into the interstitial space, with subsequent hypovolemic hypotension and shock [2,3].

The incidence of CLS has been reported to account for 1.62% of hospitalized critically ill neonates in the early stage. The mortality rate of CLS has remained above 30% or even as high as 50% [5].

It should be noted that severe myocardial edema could be misdiagnosed as ventricular noncompaction. Ventricular noncompaction mainly involves the left ventricle on echocardiography (92%), and nearly half of the patients have a family history of ventricular noncompaction. Echocardiography typically requires the following criteria: (1) the presence of multiple echocardiographic trabeculations; (2) multiple deep intertrabecular recesses communicating with the ventricular cavity, as demonstrated by color Doppler imaging, with the recesses demonstrated in the apical or middle portion of the ventricle; and (3) a two-layered structure of the endocardium with a noncompacted-to-compacted ratio >1.4 [7,8,9,10].

In conclusion, POC-CUS plays an increasingly important role in clinical practice, as it not only contributes to the accurate diagnosis of diseases but also helps to change the traditional understanding of diseases or add new understanding of them [13,14]."}